This site was developed to support the Government 2.0 Taskforce, which operated from June to December 2009. The Government responded to the Government 2.0 Taskforce's report on 3 May 2010. As such, comments are now closed but you are encouraged to continue the conversation at

What data should we be releasing?

2009 August 17
by Nicholas Gruen

Andrew Leigh, freakonomist, econometrician and indefatigable crusader for the power of data, has sent us a short and sweet submission (rtf) by email. I was going to ask him to work it up into a guest post, but I can just as easily quote it here.

Government 2.0 Taskforce Emailed Submission

Author: Andrew Leigh

Submission Text:

From the standpoint of researchers, one of the things that the Taskforce should strongly support is more data. A few examples are below.

• State and territory governments should release geocoded crime statistics of all crime reports. See for example this website created by the New York Times:

• FaHCSIA should release all data (including prices and quality ratings) from its annual Child Care Census.

• The Taskforce should encourage projects such as the digitisation of MP interest registers by

I’d like to open this thread up as a kind of repository to which anyone can add, to build an inventory of data sets that should be made public. Andrew has mentioned data held by state government agencies. Of course our only clear jurisdiction is in the Federal arena, but I think we should be prepared both to talk about and to make suggestions/recommendations regarding data held by other agencies – it’s up to them whether they want to accept them. In that spirit I’ll reiterate something I’ve argued previously, namely that we should publish (pdf) data on individual companies’ workers’ compensation premiums where this provides reasonable information about their past safety record.

So please feel free to use this thread as a record of all the data you, the community, think does, should or might exist that we should be trying to get freed, to enable us to lead slightly more informed, and so slightly better, lives.

67 Responses
  1. 2009 August 18
    Kevin Cox permalink

    It is likely that the government has little idea of what data it actually has, let alone what it can release. It would be interesting to find out what sort of data inventory different departments and areas have.

    If a “protocol” is recommended, then government can gradually build up an inventory of what data it has; this can then form the basis of what it can release. Defining what data you have is difficult and can only be done incrementally. However, let us assume that somewhere we have a partial inventory; other information can then be attached to it, such as who can add to it, change it, release it, and the characteristics of those who are allowed to view it.

    • 2009 August 20

      Kevin’s point about discovering this data is a good one:

      It is likely that the government has little idea of what data it actually has, let alone what it can release. It would be interesting to find out what sort of data inventory different departments and areas have.

      If a “protocol” is recommended, then government can gradually build up an inventory of what data it has; this can then form the basis of what it can release.

      Whether it’s in the remit of this taskforce or not, I’d be interested in looking at the wider context of people discovering data. As a creator of data-mashing sites, I am frequently interested first in whether a data set concerning some issue or other exists at all, and second in its quality and provenance. Whether it comes from a government or not, as long as I have reasonable assurance of its fitness for my purposes, is only an incidental concern. So for me as a citizen and as a data consumer, the idea of a public-facing inventory of government datasets is not so interesting. (I’m sure it might have value for the government internally, but that is not so interesting for me.) What would be very interesting is a protocol that required government to advertise its data on public, community-maintained indexes of datasets such as the Freebase Database Database, or even the local Australian National Data Service.

      Not only does that reflect my needs as an information-seeking consumer, but I think it places the government data sets alongside community- and corporate-provided ones and allows us to choose datasets, and identify them and their shortcomings, on their merits. Are government datasets in more obscure formats? Is their quality high? It makes sense to open these questions up to public scrutiny by making sure that public data is in the places that consumers go to find data.

      The beauty of systems like the Freebase one is that their records are not limited to published data. The data set merely has to exist to be catalogued. So: why not advertise that datasets exist even before they are accessible, and let the public debate around their usefulness and (privacy? commercial confidentiality?) risks influence the decisions around releasing them?
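      To illustrate the idea of cataloguing before releasing, here is a minimal sketch of a public index in which records exist independently of whether the data is yet accessible. All titles, holders and field names below are illustrative assumptions, not a real catalogue or schema:

```python
# Hypothetical sketch: a public index can catalogue data sets that are
# known to exist but are not yet released, so debate can start early.
# Entries and field names are invented for illustration.
catalogue = [
    {"title": "AusTender contract notices", "holder": "Finance", "released": True},
    {"title": "Child Care Census", "holder": "FaHCSIA", "released": False},
    {"title": "Geocoded crime reports", "holder": "State police", "released": False},
]

# Consumers can discover what exists regardless of release status...
known = [d["title"] for d in catalogue]

# ...and public debate can focus on what is catalogued but unreleased.
to_debate = [d["title"] for d in catalogue if not d["released"]]
print(to_debate)
```

      The point of the sketch is only that the record and the release are separate events: the index entry can exist first, and the decision to release can follow the debate.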

      • 2009 August 20

        Didn’t see this comment before posting my delicious idea below. Freebase looks awesome, & I can see that it would have several advantages over delicious as a repository of info about government datasets.

        Still think that #gov2au should be on delicious though!

  2. 2009 August 18
    Gordon Grace permalink

    Andrew Leigh:

    FaHCSIA should release all data (including prices and quality ratings) from its annual Child Care Census.

    …and then let DEEWR mash it up with

  3. 2009 August 18

    Andrew Leigh is a legend :)

    Is it too cute of me to suggest we revise the use of the term “data” to the broader “content”?

    I find that words can be powerful in the online space and often a simple reworking of what you call something makes it more real for people. Data is a pretty hard word and it does not completely describe the holdings that we are all collectively trying to free.

    A way to describe what I mean is this: say we have all this great data about some ecological thingo (rivers, etc) and we also have a collection of photos, videos, reports, etc.

    The thematic organisation of all of that – the content – is where we can really kick goals. Not all of it can be understood as data, and most of the new economy prefers the term ‘content’.

    I am sure Andrew will accept my slight adjustment to his submission.

    On topic, I do have a suggestion of ‘content’ but I am not going to tell you till the competition begins :)

  4. 2009 August 18

    I think that it’s not just about releasing data, per se, but also about ensuring that such things are “archived” and made available. Obviously this is easier to do from here on in, but ultimately it means getting as much publicly releasable historical data available as possible.

    This will allow people to find wide-ranging trends, and allow us to see changes in our country and society from an empirical point of view. (Not that this is the only way we should view these things, but it’s helpful.)

    Obviously this has lots of implications for Government in terms of storage and management.

  5. 2009 August 18
    Gordon Grace permalink

    an inventory of data sets that should be made public

    There are several data sets which are ‘made public’ as websites but, with some additional work, may be able to be more effectively exposed and repurposed.

    Some of these examples presently exist as web-based front-ends to databases (with no direct read-only access to the data via an API or open-format download), so they may not strictly meet your definition:

    > Legislation
    > Government Structures, People, etc.
    > Publications
    > Tenders
    > Government Job Vacancies

    I apologise to those who feel I am repeating myself – this post looks like a much better home for my original comment.

    • 2009 August 18
      Alexander Sadleir permalink

      I’d like to reiterate my request for the AusTender data. After reflecting on it for a couple of days and reading over the Australian Government Procurement Statement, I think it would be better suited as a packaged data store, periodically released/updated.
      This would allow for the best analysis and visualisation without placing a burden on the live website or the people tasked with maintaining it. And certainly, I can’t see why anybody would object to releasing data that’s already public in a format that’s easier to use.

  6. 2009 August 18
    Henare Degan permalink

    I’d like to open this thread up as a kind of repository to which anyone can add to an inventory of data sets that should be made public.

    [My emphasis]

    It seems that the need to capture the Taskforce’s work is getting more and more pressing if we’re using blog post comment threads to store ideas! :) See also Rob’s recent post.

    How about, as a start, we use what’s already out there and get a GovDex wiki set up for the Taskforce?



  7. 2009 August 18

    It’s obviously a dream, but I would say that government ought to be mandated to release any and all data it gathers so long as there is not an identifiable national security or personal privacy implication to doing so.

    Of course, that’s a huge ask. But why not set that as the ultimate goal?

    There are any number of already-identified issues with doing this, not least of which are agencies figuring out what their holdings actually are, what license needs to be applied, and what form (can I suggest open, remixable, non-proprietary?) the data is provided in.

    And, to back Henare and others (including me, I think): the blog is rapidly becoming an unwieldy place to manage idea generation (I think that’s actually a good thing – we have an engaged community), so let’s get something set up on Govdex or elsewhere where the building of ideas can be managed more easily, and use the blog for less complex things. A very Government 2.0 issue in itself, that we use the right tools for the right purposes.

    • 2009 August 18

      should we look at BaseCamp and CampFire??

    • 2009 August 21
      xtfer permalink

      I agree, but please not GovDex… how about a forum that is naturally open, inclusive and easy to use?

  8. 2009 August 18

    The question illustrates the level of change needed, and the size of the challenge that the Task Force faces. By asking “What Data should we be releasing?”, we accept that the natural state of data is that it is stored by government, and available for release based on certain criteria.

    In fact, the true position should be the complete opposite – data should be automatically released, unless there is a security or privacy reason that it should be withheld. Stephen Collins makes the same point.

    In New Zealand the legal position is clear (the Official Information Act has been in place since 1982), and in my view the principle is also clear – the data has been created using taxpayers’ money and should be made available to them. We’ve had a policy on government-held information since the 90s. The name is good – “government-held” – it recognizes the fact that government has collected information but it doesn’t own it as such – it’s our data and the government should let us have it. Colin Jackson writes more about this at

    I think similar considerations apply in Australia.

    • 2009 August 18

      You are completely right Laurence.

      The TF need not do too much work on this issue in policy terms. Just tell the government that the time has come to release all ‘content’ unless it has national security/trade issues.

      The govt just needs to accept that things will then be found that are not working; but, god forbid, govt can fix what has been identified. I think that is the job of govt, and FixMyStreet is exhibit A.

      I will back them up on that directly to the government (as they say, I have my ways).

      What about a competition for identifying an agency that does not want to part with some content, and then getting the govt to tell them to share (only joking)?

      Then we can get on with the real work of making all this wonderful content, be it data, video, images, etc meaningful, compelling/engaging (an often overlooked core of web 2.0), and useful.

      • 2009 August 18
        Kevin Cox permalink

        Much of the data (content) collected by the government is about people. Access to this data must be restricted to the people concerned and to those they authorise, so the issue is not just what data should be released but to whom it should be released. Commercial companies want to restrict access to their tax file information to their stakeholders. Grant applications that are submitted, surveys that we fill out, our sales data – these are all things that we wish to restrict.

        While in principle data should be made available, to whom it is released and the purpose of the release are also critical components. We cannot have an effective government 2.0 unless we address the issue of who is allowed to access what, and make that part of the process whenever any data is stored.

      • 2009 August 18

        Agreed Kevin but there is a lot of ‘content’ that is generic and we need to get that out and working for both govt and the people pronto.

        The personal info stuff has been discussed since the Romans and the Greeks and, more recently, with the single sign-on (go DHS:) and other authorisation and identity projects.

        I think we need to split this so-called data thing up a bit and not let stuff that can come out now (like the AusTender data being discussed) be bogged down by other more serious matters.

  9. 2009 August 18
    Kevin Cox permalink

    Jimi, I agree we need to discuss principles and strategies, not particular instances, with the caveat that it is best to illustrate principles with examples. Some principles might be:

    There should be an inventory of data held by the government that is easily accessible.
    How data is stored should be specified in the inventory.
    How data can be accessed should be specified in the inventory.
    Who collected the data, and its veracity, should be stored.
    Who is permitted to access it should be stored with the data (with the default being anyone).
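    Purely to illustrate the principles above (and very much not to specify HOW an inventory must be implemented – a point made later in this comment), here is one possible shape an inventory record might take. Every field name and value is an illustrative assumption:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: one possible shape for an inventory record
# carrying the metadata the principles call for. Field names are
# assumptions, not a proposed standard.
@dataclass
class InventoryRecord:
    name: str            # what the data set is
    storage: str         # how the data is stored
    access_method: str   # how the data can be accessed
    collector: str       # who collected the data
    veracity: str        # a note on its reliability
    # Default is open access, per the last principle.
    permitted: list = field(default_factory=lambda: ["anyone"])

record = InventoryRecord(
    name="Child Care Census",
    storage="CSV extract, updated annually",
    access_method="bulk download",
    collector="FaHCSIA",
    veracity="administrative collection; quality ratings self-reported",
)
print(record.permitted)  # defaults to open access
```

    The design point is that openness is the default: a record has to state explicitly who may access it, or anyone may.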

    You might like to think of some other “principles” although I am not sure if that is the right word.

    Like most good systems development what we can do is to put in place a “minimal” system that does everything we want it to do for a small subset of the problem and get that working and see what issues arise and whether it is achieving what is desired. The important thing is to make any system that gets constructed fit with the principles.

    The nice thing about this approach is that we have lots of systems “that are almost there”. It would be good to examine some of those and see how they can be modified to fit with the principles. That is, the systems that will be constructed will be many and varied but that does not matter provided they fit within the framework of the principles.

    The thing NOT to do is to specify exactly HOW things must be done. We can suggest ways things can be achieved, but we should not specify the implementation details. To give an example: we should say – there will be an inventory of data. We should not say – there will be an inventory of data according to XYZ standard and it must only contain PQR.

  10. 2009 August 18

    I’d like to remind us not to forget video records.

    I am particularly keen to see the sittings of parliament and the senate made publicly accessible, searchable and re-publishable. There is a US example of such a site, called Metavid, and it helps tremendously with the transparency of politics.

    Then there are also video records of government agencies. I do understand that these have to be handled more carefully – if they even exist. But I am sure that certain events that are held for the public or certain meetings can be recorded and shared.

    • 2009 August 18

      Three cheers for Silvia for banging the video drum.

      Surely the NBN is a game changer that heralds much of what web 2.0 may bring to government and the people.

      Metavid is one of the most innovative sites across all governments, and I can see no reason that it cannot be done here.

      The only question perhaps is who pays …

  11. 2009 August 18
    Sean M permalink

    OK, at the risk of this becoming a CLM.

    Let’s have a brief look at what we mean by data. It’s all well and good to say it’s about people and basic privacy needs to be preserved, but missing is the point that Media Watch has made many times: some members of our community wilfully (or ignorantly) misuse data and statistics. Today’s tweets about the Pear Analytics finding that 40.33% of tweets are babble prove it yet again.

    I’m not trying to be “risk averse” or suggest data shouldn’t be available, I think it should, I think as much data in the open sunlight as possible is a good thing, but here we walk into a dilemma.

    As was mentioned above, people don’t really want data; they want content. Arguably that is data with a whole lot more stuff. I’m not sure the government can ever be properly responsible and accountable for the extra stuff that turns data into content. Conversely, governments may, possibly rightly, want to make efforts to ensure that the content or the story drawn from the data is actually supported by the data. Again, media reports of crime statistics can highlight some of the pitfalls.

    Taking a step back, Australia has one of the world’s most targeted welfare systems. Check with the OECD if in doubt.
    In being targeted it is also complex.

    Access to certain entitlements is governed by what you and your partner earn, what investments you have, how many children you have, their ages, where they go to school, how much you pay in rent, and several other factors. The sort of data this produces is messy, very messy. Even though service delivery mostly gets these things right, it would be incredibly naive to believe they can tell you how much went to whom and when.

    As I’m sure many people in policy agencies would tell you, getting all the management information on the programs their policy work led to is not an easy task; given that it is critical to inform future policy, that gets a bit tricky.

    So in accessing data (privacy protected, of course): which data, at what point in the policy cycle, and with what context? I’m sure the researchers accessing this sort of data are capable of walking through the existing minefields. I’m not so sure that everyone else is.

    So apologies for the rambling. What I want to get across is that there are key data accuracy and integrity issues that also have to be addressed, plus recognition that unless you can make some sense of the data, you don’t really have access to any content at all.

    So to rephrase the title: “What data should we be releasing, and how do we ensure it’s meaningful and useful to the public to whom it is released?”

    • 2009 August 18

      Sean, you seem to fundamentally misunderstand the drive for open data. In particular when you ask your question at the end, I don’t believe that question is yours or the government’s to ask.

      The community will determine the meaningfulness and usefulness of data for themselves. It is simply the role of government to provide that data in an open, consumable and adequately deidentified form. From there, anyone who can make use of the data will do so, and prove its meaningfulness and usefulness in implementation.

      Declaring that people will misuse data is nothing more than an argument without legs. With proper licensing and an up-to-date repository (like in the US) from which a definitive form of the dataset is available, it is always provable what the data itself actually is, regardless of any use it’s put to.
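      The claim that “it is always provable what the data itself actually is” can be made concrete with a published checksum. A minimal sketch, assuming the repository publishes a digest alongside each dataset (the sample data below is invented for illustration):

```python
import hashlib

# Sketch: a repository publishes a dataset together with its SHA-256
# digest; anyone can later prove a copy is the definitive form.
dataset = b"suburb,offence,count\nParkes,burglary,12\n"  # invented sample data
published_digest = hashlib.sha256(dataset).hexdigest()   # published alongside it

def is_definitive(copy: bytes) -> bool:
    """True only if this copy is byte-for-byte the published dataset."""
    return hashlib.sha256(copy).hexdigest() == published_digest

print(is_definitive(dataset))                        # an untouched copy verifies
print(is_definitive(dataset + b"Parkes,arson,1\n"))  # any alteration is detectable
```

      So whatever story someone spins from a dataset, anyone can check whether the underlying bytes match the definitive published form.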

      The drive to open government data is widespread and international in its scope. Laurence Millar (above, from NZ), Sir Tim Berners-Lee, Tom Watson (UK MP), Vivek Kundra (US CIO) and others have argued long and loud for the change which must come. The sooner the better.

      Arguments from the public sector about complexity and misuse are spurious and do the community and its capability a great disservice.

  12. 2009 August 18
    Sean M permalink


    Let me try to put it another way. In order for data to be in an “open, consumable and adequately deidentified form”, certain “processing” will occur. I am not attempting to say “hang on, it’s going to be misused”, and I apologise if my rambling has led you to form that conclusion.

    What I thought I was trying to say is this: “data” is not information, nor is it knowledge, and it is but a tiny part of understanding. The step of organising data into information is not avoidable. If it’s in even the simplest table, it’s trying to be information. I understand that on some levels this is a semantic point.

    My concern is as follows: I spend a great deal of my working life trying to help users of information NOT to jump to conclusions that aren’t there. I would hope that in opening all of this data up, I would have less to worry about.

    I raise the complexity not because I think it’s something to hide behind because it’s too hard, but rather because unless you can understand that complexity and the underlying metadata, what you are looking at will be meaningless. If someone were to code my comment into machine language (if that is even possible) it would only be readable to a select few.

    Thanks for the lecture, but I get the program; I try to advocate for it all the time. A concern I can’t resolve is that much of what passes for “data” in the APS is so poor in quality, so limited in scope, that it’s not consumable.

    I want the juggernaut to flow through the Public Sector; I want the clarity and positive impact that I think and its ilk can bring. What I was trying to add to the discussion is that “open data” can be just as pointless as much of what passes for “public consultation” on planning matters if we don’t get it right.

    To paraphrase you again: what do we do when it’s not clear what the “definitive form of the dataset” actually is? Unfortunately, despite what CIOs might suggest, often it is not clear.

    • 2009 August 18

      Sean, if I misunderstood your position, I certainly don’t now. I think we share an end goal, if not a common belief in the value of the data as it exists now.

      That said, there’s no question that much government data should be of much higher quality than it is. There have been crowdsourced efforts that have been singularly successful in cleaning up inadequate and badly formed datasets. Why not make that a goal of releasing data too?

  13. 2009 August 18

    Stephen has got to the nub of the issue: whose government it is, why it exists and for whose benefit.

    We ought not have government for government’s sake. The business of government is the people’s business. There is a subtle but important difference between “governing” and “ruling”. Government must be open to fend off the latter.

    Equally importantly, it is becoming increasingly important for the governed to know who is governing them: anonymous, unelected bureaucrats with the immunity of a High Court Judge are answerable to… whom? And when? If a sin cannot be pinned on the sinner, no redemption or penance is possible…

    Regrettably it is easy for any bureaucracy to become paternal rather than service-orientated.

    At last, the excuse “It will cost too much to print, store, file, find, send (information about your government)” is a non-argument. At least that is the promise of Web 2.0. Personally I await the semantic web with more enthusiasm.

    Open government is a state of mind. Elected leaders have to mandate it. Then it is at least required to happen, and it is harder to justify failure to implement government policy. If the policy is vague, the implementation will be equally amorphous. I foresee great resistance within the public service to any system which encourages scrutiny not within their control.

    Without open government being accepted by ALL stake holders, any attempt will be just that – an attempt: and possibly worse, a pretense.

    The US appears to be doing a fine job. We need to at least emulate and if possible, exceed the systems and philosophy they are putting into practice.

    And while we are at it, perhaps the philosophy of openness should be applied to Australians’ access to the Web itself. Hmmmm?


  14. 2009 August 18
    Sean M permalink

    Colin, I agree with you and Stephen that it’s the people’s/public’s government/data/information.

    What I don’t think has been recognised is that simply some of that data is not there, it should be, but it isn’t.

    In order for there to be something for the public to look at, petabytes or googolbytes of code tables need to be interrogated, collated and presented. Without robust standards and agreed methodologies, turning all those existing code tables into data can’t properly happen.

    If there is a place for openness to start, it’s in deciding the standards and methods of extraction to apply in the first place.

  15. 2009 August 19
    Nicholas Gruen permalink

    Thanks for your comments on this thread. I’ve been flat out all day and this is the first time I’ve had a chance to respond. I agree that a blog is not the best repository for these ideas – but it’s a start.

    It’s easy for commenters to say that we should release everything unless there are specific reasons not to. I agree as I expect most, if not all Taskforce members do. But as one Taskforce member emailed me the other day “We cannot assume that the importance of freeing up government information is widely understood and there is an obligation as well as a practical necessity to argue the case. Unless our audience of public servants, politicians, the media and the general populace is also convinced of the importance of the issue, it will be painfully difficult to implement.”

    So my point in seeking examples is to give us as rich a menu as possible where we can identify specific examples of benefits that would accrue from releasing PSI. Specific examples make the case much better than principles. Specific examples show how concrete the benefits of better access to PSI can be.

    • 2009 August 19
      Kevin Cox permalink

      Allowing individuals to access their own personal data electronically is ALL that is needed to enable third-party Identity Providers to give government departments privacy-friendly and inexpensive identification systems. The reason is that if an individual can assert – and, through an independent Identity Provider, prove the assertion – that they have a Birth Record, a Marriage Record, a Driver’s License, a Tax Record, a Social Security record, etc., then we know that there is in society a person with those records. If the person can then, through their social network, prove that they are the same person – by asking people like their parents, their teacher, their employer, their doctor or their accountant to confirm that the person who has proved they have a record in the government files is the same person they know – then the individual has their own electronic record that they can use to represent themselves.

      The individual can have multiple methods of access to their personal electronic record. These can be emails, phones, postal addresses, Skype IDs, or logons from different organisations like LinkedIn, Google, Microsoft, the banks, the local club – anywhere a person has identified themselves.

      Once such a record exists and the person is the only one who controls it, it becomes very difficult for that record to be stolen, because the record does not consist of personal identifying data but of proven relationships between government entities, people, and other organisations. An individual can use multiple Identity Providers to record the relationships and may choose not to link all relationships. That is, a person already has many electronic presences. For a person’s electronic identity to be stolen, the identity thief has to compromise ALL these relationships, not just a single one, and this cannot be done through the electronic identity itself but requires the other parties to all the relationships to agree to the change. Typically identity theft is only partial: someone steals or misuses your credit card number. If this happens, then other parts of your electronic identity may become aware of the theft and take corrective action.

      Equally, it becomes difficult for an individual to establish multiple identities, because to do this the individual has to establish multiple relationships and maintain a completely separate set of non-intersecting social relationships. This is very difficult to do and will almost certainly be detected.

      This is happening today without the active involvement of government but it becomes simpler and easier with government involvement. Banks and other significant organisations in society are beginning to embrace the concept. Citizens through linked in, facebook, twitter understand it and the general idea of an electronic presence is widely accepted in the community.

      Once individuals have a personal electronic presence, government interaction with individuals can operate more efficiently. Secret electronic voting is trivial to implement, meaning that any election will become inexpensive and rapid. The Health ID system becomes just another identifier and does not become, as many fear, a universal identifier. It also becomes inexpensive to implement, because a person may not want another card but can identify themselves with one of the cards or devices they already have.

      Collection of taxes can be made simpler. Tax systems themselves can be changed because it is easier to implement electronic taxation collection. The tax system can be relieved of the burden of being the way the government implements many policies – like the recent fiscal stimulus. The passport office can be relieved of the adverse publicity it gets most days from women who decide to change their name. Implementation of social security payments like child support payments becomes less problematic, as it is more difficult for identity fraud to occur. The immigration department will be able to know if short-term visitors are still in the country and know how to contact them.

      I could continue with cost savings and additional functionality, but to get a better list of benefits ask any government department that deals with the public how it could make its operations more effective if it were able to reliably and privately interact electronically with its clients. Any government department that has to deal with the public will become more efficient in its operations if one of the ways it contacts and interacts with citizens is through reliable, inexpensive, private electronic presences.

      We even have an electronic standard, SAML (Security Assertion Markup Language), that specifies how “credentials” can be moved electronically.
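      To give a feel for the kind of statement SAML defines, here is a heavily simplified sketch of an identity provider asserting that a subject holds a credential. Real SAML 2.0 assertions also carry IDs, timestamps, conditions and an XML signature; the issuer URL and attribute values below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Heavily simplified sketch of a SAML-style assertion: an identity
# provider stating that a subject holds a particular record. Real
# assertions carry much more (signatures, conditions, timestamps).
NS = "urn:oasis:names:tc:SAML:2.0:assertion"
ET.register_namespace("saml", NS)

assertion = ET.Element(f"{{{NS}}}Assertion")
ET.SubElement(assertion, f"{{{NS}}}Issuer").text = "https://idp.example.gov.au"  # hypothetical issuer
subject = ET.SubElement(assertion, f"{{{NS}}}Subject")
ET.SubElement(subject, f"{{{NS}}}NameID").text = "citizen-pseudonym-42"
stmt = ET.SubElement(assertion, f"{{{NS}}}AttributeStatement")
attr = ET.SubElement(stmt, f"{{{NS}}}Attribute", Name="holds-record")
ET.SubElement(attr, f"{{{NS}}}AttributeValue").text = "Birth Record"

print(ET.tostring(assertion, encoding="unicode"))
```

      The useful property is that the assertion conveys a proven relationship (“this subject holds a Birth Record”) without carrying the personal identifying data itself.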

      Some might ask: why can’t the government do this themselves and be the “Identity Provider”? This was the “idea” behind the Australia Card, and both the government and citizens realised it was not a good idea, so it was abandoned.

      For governments to become involved, all they need to do is open up their records of individuals to the individuals themselves. Multiple Identity Providers will do (and are doing) the rest.

    • 2009 August 19

      Nicholas, I can tell that the argument you quote came from one of the public sector members of the Taskforce. I’ll be astounded if the case is otherwise. It’s couched in very public sector terms.

      I think this is one of those cases where, as others have argued, we should be looking elsewhere for existing solutions. The US has made it very clear, through Vivek Kundra and elsewhere, about what and why, and Laurence Millar did the same as NZ CTO. I’ve heard the same from Jason Ryan of the SSC in NZ. And, the same again with POIT and the longstanding arguments of people like Tom Watson in the UK.

      Why reinvent the wheel for the Taskforce’s purposes? We have established reasons and frameworks. In this case, Australia is not so different that we need to establish a special set of reasons. Use those that exist.

      • 2009 August 19
        Nicholas Gruen permalink


        It was not a public sector member of the Taskforce.

        I’m not sure why you’re disagreeing with me – both the member and I, and I think the overwhelming majority if not all of the TF members, share the goal of turning what is already, loosely speaking, government policy – the OECD principles – into reality.

        I hope we’d all agree we’re a long way from that reality.

        There’s no desire to reinvent the wheel. There’s an understanding that to a substantial extent, this is a matter of culture. I hasten to add that it’s not only culture, we’re keen to get policy done where it needs to be done. But all the policy in the world will have a much more muted effect if those people who often slip into regarding themselves as the owners of the data (even if policy tells them they’re not) don’t see the compelling logic of the PSI agenda.

        I was involved in the Review of the National Innovation System, and something pretty big that it achieved, which is largely unsung, is to change the way the R&D tax concession is calculated. It will make a large difference to its effectiveness and may end up costing the government money. Crucial to getting that through was convincing Treasury to back it. I want to be able to get the heavy hitters on board, and I need the ammo to do that (I think I’ve just mixed a metaphor!)

        I have appreciated virtually all your comments on this blog, but I get the impression you think this is pretty straightforward, and I’m not sure that’s right. I do think that the bare outlines of the policy are pretty straightforward – and they’re already articulated in our presentation of the OECD principles in our Issues Paper, though we’d add at least one additional principle – which is timeliness.

        But that’s just the beginning – crucial as it is.

    • 2009 August 19
      Henare Degan permalink

      Hi Nicholas,

      Great to see you back here responding to the comments, thanks! Are there any plans to solve the problem of where to store these ideas and the more developed discussions, possibly using some of the suggestions a few people have posted?

      Good to hear more on your reasons for this question – it rings very true that when promoting change, the benefits need to be clearly elucidated to stakeholders and when the change has less tangible or indirect benefits, it’s even harder.

      Hopefully gathering these specific examples of the types of data to open will allow us all to present specific benefits to the various groups we need to convince, and work with down the track, to open the data.

      • 2009 August 19
        Nicholas Gruen permalink

        Yes, thx Henare, that’s my reasoning. And compliments on the hat and sunnies.

    • 2009 August 19

      I cannot agree more with Nicholas.

      There should be a time pretty soon where the talking needs to stop and the action needs to begin.

      I say just open the wallet and let’s get on with some cracking and achievable exemplars so we can widen the engagement and improve awareness.

      The problem with all the talk is that we are discussing something that can cope with as many opinions as a game of footy – just ask anyone who has ever coached how many opinions that can be.

      My ideas are already forming, as I hope they would after nearly 20 years riding the digital beast.

      I bet my thinking is very different to many people on these forums.

      The TF only needs to know that we have a healthy community of vastly experienced digital folks standing ready to start building amazing web 2.0 (and, again I stress, web 1.0 and 3.0) stuff that will illuminate the possibilities more than a million words on this site.

      This is all not to denigrate the great discussion going on here and, yes, we need to get smart about how we manage the conversation going forward, away from this old-school thread/reply solution.

      But, first, let us get some projects underway. Between you and me, Nicholas, the digital industry has been smashed by the GFC and the almost complete stop in federal projects since about 3 months before the last election.

      I could go on about the lack of government support for the digital industry, but this is the wrong thread and I think I have better forums to bang on about that (not that the TF would be wrong to investigate industry support).

    • 2009 August 20
      Sean M permalink

      Ok as just a possible example of data to make available.

      We often hear of GP shortages, so let’s publish the data by region (SLA/electorate/whatever) of GP consults per 100 or 1,000 people (whichever gives the best granularity while remaining reasonable on privacy), and specialist consults by type and relevant larger region.

      This could give us a map of where GPs and specialists are. It would help new specialists know where to locate, give allied health an idea of locations in which to provide services, and give state and local governments better ideas for where services could go. It’s not far from what has been suggested already, and might make the macro policy less politicised while making local services issues that local communities can more readily work on.

      Another sort of example might be to provide datasets of federal road funding per km of road in each LGA (ideally with state and local government data added as well). It may allow councils to readily work out that their costs are higher than somewhere else. Obviously all sorts of third parties could do analysis for local governments and community groups.

      In sketching this out, I think the real promise is that open datasets will give everyone across the country more equal access to information – a local government in eastern Perth may well set a benchmark for road maintenance that changes local governments in Ipswich.

      • 2009 August 20
        Gordon Grace permalink

        We often hear of GP shortages, so lets publish the data by region (SLA/Electorate/whatever) of GP consults per 100 or 1000 people (Which gives the best granularity for the more reasonable privacy) and Specialist consults by type and relevant larger region.

        Maybe the Doctor Connect site could get you started…?

  16. 2009 August 19

    My wishlists of data sets / improved data access (this is a mix of specific measures, data sets and fuzzy goals, so shh, no nitpicking):

    1) Encourage RDFa / microformats for all public ‘contact us’ pages, to describe who a department is in a machine-processable way. Benefits: higher Google rankings when finding out how to speak to government through existing channels, and a starting point for a semantic description of what a government agency is / does / etc.
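    As a small sketch of why this helps machines: the hCard microformat really does use class names like "fn", "tel" and "email", and a few lines of standard-library Python can then pull structured contact details out of an ordinary HTML page. The department name, phone number and email below are made up.

```python
from html.parser import HTMLParser

# Hypothetical 'contact us' page marked up with hCard microformat classes.
PAGE = """
<div class="vcard">
  <span class="fn">Department of Example Affairs</span>
  <span class="tel">+61 2 0000 0000</span>
  <a class="email" href="mailto:contact@example.gov.au">contact@example.gov.au</a>
</div>
"""

class HCardParser(HTMLParser):
    """Crude extractor: collects the text of elements carrying hCard classes."""
    FIELDS = {"fn", "tel", "email"}

    def __init__(self):
        super().__init__()
        self.current = None  # hCard field we are inside, if any
        self.card = {}

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        hits = self.FIELDS.intersection(classes)
        if hits:
            self.current = hits.pop()

    def handle_data(self, data):
        if self.current and data.strip():
            self.card[self.current] = data.strip()
            self.current = None

parser = HCardParser()
parser.feed(PAGE)
# parser.card now maps hCard fields to the page's contact details.
```

    The same page remains perfectly readable to humans; the markup just makes the “who and how to contact” question answerable by software as well.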

    2) Look at making some of the existing Geodata offerings more public. Currently, it costs an arm and a leg to get your hands on data from PSMA. When Google acquired a right to use this data and put it into google maps and the relevant API, it was a game changing proposition for developers, both individuals and companies.

    Look at making the very basic geo-info – like addresses to geocodes, suburb outlines/shapes, LGA outlines/shapes, etc – free or lower cost. The complicated stuff? Add it later; retain current pricing models / etc.

    The next layer: descriptions of public facilities and other bits and pieces. Forests, the location of public toilets, train stations, bus stations (probably more per state for that one), rivers, water catchments, populations of given areas etc.

    This provides the general public with good-quality metadata about our physical world as known by the government, and also provides the ability to integrate other data sources back into government data (fictional example: look on Flickr for geotagged photos in the right location, to add photos to the site).

    3) Look at improving access to the assorted state-level title systems by producing address to lot / plan / etc. number mappings. Though each state is slightly different, the common threads basically boil down to “identify a property and describe it, plus any encumbrances and other bits and pieces”.
    Look at methods to provide raw data dumps (CSV!) of “this address = this title, more data at this URL (you will probably have to pay to access it)”, and rely on the existing infrastructure to provide those titles.

    I work for a national valuation company, and we spend significant amounts of time and money doing title searches, which are a unifying component of our business and several others.
    Though there are a few commercial companies in this space, the data is neither free nor open.

    4) Aggressively market your existing tools for (1) government agencies to release existing data in an open way with appropriate licensing and (2) people to apply for access to data sets.

    Provide a very simple set of steps, describing “the path to open data”.
    – First, raw data as CSV under open licences. See
    – Second, XML or other well modelled services (SOAP/REST web services) on a per object basis, as well as a few basic operations – ’search’. is a good example of this.
    – Third, remodelling the underlying data structures as needed (add the notion of ‘links to other copies of this resource’) and producing RDF dumps – see for the dream, and for a specific example of how it might look (notice: it has identifiers from IMDB, Wikipedia, MusicBrainz and more attached to it)

    When I say simple, I mean simple.
    “Click here to read the (popular database here) to CSV tutorial”, “Click here to read about what kind of data I should/n’t publish”, “What licence do I choose?”, “Who do I call for help!”, “Download the tutorial webservice / rdf publishing tools; so you can build your own (in C#, Java, PHP, whatever)”
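    The “first, raw data as CSV” step above really is that small. As a sketch, assuming a made-up agency export of postcode populations: a few standard-library lines read the CSV, and the same records can then be re-served per object (step two) as JSON without touching the source data.

```python
import csv
import io
import json

# Hypothetical agency export: step one of the 'path to open data' is plain CSV.
RAW = """postcode,suburb,population
2600,Barton,1433
2601,Acton,1988
"""

# Parse each CSV row into a dict keyed by the header row.
rows = list(csv.DictReader(io.StringIO(RAW)))

# Step two layered on top: the same records, re-encoded per object as JSON,
# as a simple web service might serve them.
as_json = json.dumps(rows, indent=2)
```

    The design point is that the two steps are independent: publishing the CSV today does not commit anyone to a particular web-service or RDF design tomorrow.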

  17. 2009 August 19
    Richard Best permalink

    Very interesting discussion here. Without wishing to put my head above the parapet, may I suggest that it’s not always as simple as advocating a position of “release everything and let the public decide whether the data is useful”. If a government agency is releasing a copyright dataset, for example, arguably it is incumbent on it to specify the terms of re-use, without which people may not know the respects in which the dataset can be re-used and confusion or, worse, infringement of copyright, might result. One might argue that a simple answer is that the agencies not enforce their intellectual property rights. I doubt that would be a satisfactory answer.

    Drawing analogies with the US can be dangerous, given that there is largely no copyright in US federal works. Further, the US courts take a different approach to copyright in datasets and databases compared with that in Australia, the UK and New Zealand.

    In Australia, the UK and New Zealand, there is a significant difference between mere release of copyright works and release on terms positively allowing re-use. Under New Zealand’s Official Information Act, for example, mere release of a copyright work in response to a request for it does not entitle the recipient to re-use that work in a way that would infringe copyright in it. The last time I checked (admittedly 5 years ago), the same position applied in the UK: see (see the quoted passage). Similarly, the mere posting of a copyright dataset on a website is unlikely, without more (whether express or implicit), to allow re-use on broad terms.

    Nothing I’ve said above is meant to imply that there shouldn’t be a presumption favouring release of copyright datasets on terms allowing liberal re-use. There are cogent reasons for arguing that the opposite is true (i.e., that there should be such a presumption) except where there are sound legal or policy justifications to the contrary (which may either prevent release or restrict the scope of permitted re-use). My only point is that agencies, when releasing copyright works, should be clear on the scope of permitted re-use. Creative Commons licences provide an obvious vehicle for that, and Queensland’s Government Information Licensing Framework is leading the way internationally on their use in a government context.


    • 2009 August 19

      Richard, I think those of us suggesting open and complete (as possible and appropriate) release are well aware of the copyright and licensing implications. That’s why we are advocating for permissive licensing alongside release of data.

      The GILF project ( has taken a long, hard look at this issue and has much value to add, I very much agree.

      I’m fully aware of the fundamental differences in copyright between the US and Australia. As much as anything, issues of Crown Copyright and the usage rights it does or doesn’t grant, and any necessary reform, are a part of the Government 2.0 agenda.

      However, if NZ and the UK can resolve many of those issues, as they have in recent times, we can hardly say we’re different – the legal framework we operate under has ancestral commonality and the issues are similar.

      I think the more we go about pointing out the size of the several chasms that Government 2.0 needs to cross, the less bridge building we’ll undertake.

  18. 2009 August 19
    Richard Best permalink

    All great stuff Stephen. Just to be clear, though, I certainly wasn’t intending to impede any bridge building. In my view the kind of open climate you’re advocating is entirely doable within the existing legislative environment. We just need to build the bridge with the right materials, including appropriate licensing where required. As you obviously know, in Australia the legal solutions are already there. If there’s a chasm to be crossed, it’s a cultural one, in terms of changing mindsets. To a large extent, I think that’s a matter of educating agencies on the benefits of open release and licensing.

    By the way, I’m looking at this as a New Zealand-based observer, very interested in what you’re all doing in Australia.

    I guess the main thrust of my earlier comment was to point out something which not everyone appreciates. You obviously do, but many people don’t.

    Onwards and upwards. I don’t see any impassable chasms.

    • 2009 August 19

      So, Richard, violent agreement (as I am with Nicholas) and differences on minor points. That’s a good thing.

      And, by whatever power, if we all ever get activated into actually making this happen, we’ll be unstoppable!

      So much potential benefit, so little time.

  19. 2009 August 19

    Nicholas, no reply link on your reply to me… Hmm.

    Delighted it wasn’t an APS Taskforce member, yes. More surprised it wasn’t, yes.

    I don’t think I disagree with you. I just get the feeling that, as the Taskforce encounters the hurdles we all knew it would – complexity, culture, justification, challenge – perhaps we’re not looking to see whether they’re problems with existing solutions, whether in whole or in part.

    By no means do I think that getting PSI freed up, available in a useful form and licensed helpfully isn’t a challenge. I’m well aware that it’s an enormous challenge. And, I’d guess, insurmountable in some places in the short-medium term.

    So, absolutely, we need robust policy formation as well as ideas on practice and implementation. In the meantime, I’m an advocate of doing something. As an example, at GOVIS in NZ in May, attendees were issued a challenge to return to their workplaces and figure out what basic, low risk data they were sitting on that could be licensed under an appropriate CC model and released within a month. There were releases put forward as options by some attendees the next day.

    Why not try something like that? Get a working model in place?

    My guess is that the you, the Taskforce and I agree violently on almost everything, except perhaps on implementation detail on some things. I’m a huge supporter of what you’re doing and the promise it holds, but I’m all for taking a few positive steps to prove concepts as we go along. I think you could encourage that and help the change along and build policy through low risk, short experiments.

    What do you think?

    • 2009 August 19

      Yes, Stephen is again on the money

      OK, he has changed my thinking again. Here is another suggestion that tries to paraphrase what SC said:

      TF sends out quality invitation to all agencies on the same lines as GOVIS, as mentioned by SC

      TF publishes the set of data/content, available, formats, etc that agencies have offered

      TF issues competition on what we would do with the content.

      TF funds a whole bunch of small but cool projects. Call them beta, call them exemplars, whatever gets the minister to sign off.

      TF puts the finished projects on a showcase website that is launched by the aforementioned Minister at a glittering event where I get to wear a suit (as my project is, of course, the star of the show)

      Oh, yes, the web 2.0 bit. The showcase includes a public vote for the best of the breed (which of course I will win :)

      Meanwhile, the good folks who understand such things can go on with discussing the other data that is not so easily dislodged for sharing and we can stand by waiting for the fruits of their hard work.

      Just a thought :)

  20. 2009 August 19

    Picking up from Gordon Grace’s comments, I think it is important to consider not only what data is made available but in what form it is made available. Government websites such as the ABS and the Reserve Bank already publish significant amounts of data, but there is no API and the historical data is almost all in Microsoft Excel format. To really tap into the value of government-published data it should be more readily accessible, ideally as linked data.

    • 2009 August 20

      In reply to Sean

      … I think it is important to consider not only what data is made available but in what form it is made available. Government websites such as the ABS and the Reserve Bank already publish significant amounts of data, but there is no API and the historical data is almost all in Microsoft Excel format. To really tap into the value of Government-published data it should be more readily accessible, ideally as linked data.

      I concur that the formats and protocols are important, but I’d like to pry these two interlinked issues apart for a moment, in this discussion of “Linked Data” and “Excel spreadsheets”.
      Formats: “Excel” denotes a set of data formats – messy, proprietary ones for sure, but not that bad. Transforming Excel spreadsheets into other formats is only ever a button-press away; specifically, Excel sheets are very close to easily consumed (CSV) or almost-easily-consumed (ODF) formats. Really, really easily – the former can be almost transparently imported into existing open-source databases. It’s not a big policy shift to remove even that small hurdle by requiring agency staff to press “Save As CSV” when exporting their public data. And for lots of data I would argue that CSV is an appropriate format for delivering large numerical datasets.
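      The “almost transparently imported into existing open-source databases” claim can be sketched in a few lines, assuming an invented CSV export (column names and figures are made up): Python's standard library loads it straight into SQLite and it is immediately queryable.

```python
import csv
import io
import sqlite3

# Hypothetical agency CSV export; the series and values are invented.
CSV_EXPORT = """year,series,value
2007,cpi,2.3
2008,cpi,4.4
2009,cpi,1.8
"""

# An in-memory SQLite database stands in for any open-source database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE series (year INTEGER, series TEXT, value REAL)")

reader = csv.reader(io.StringIO(CSV_EXPORT))
next(reader)  # skip the header row
conn.executemany("INSERT INTO series VALUES (?, ?, ?)", reader)

# The data is now queryable like any other table.
(count,) = conn.execute("SELECT COUNT(*) FROM series").fetchone()
(latest,) = conn.execute("SELECT value FROM series WHERE year = 2009").fetchone()
```

      That is the whole gap between “Excel attachment on a web page” and “data a consumer can actually query”.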

      The shift to Linked Data, though, is a separate one – I’d argue it’s more of a protocol for discovering and annotating resources. It’s very powerful and, I think, very useful, but implementing it is a separate and harder task than producing CSV, and I’m not sure they are best considered together – we can accomplish either separately. I’d love to have dereferenceable URIs for all my resources so I can discover and interlink them using RDF, but ultimately if I want to easily make use of a big numerical dataset, I want to get at it as basic CSV, which has many well-tested and easy tools to handle it. (Similarly, I like working in KML for geographic data.) Linked Data would be a powerful supplement to the discovery process, and would facilitate re-use of many kinds of data, but for my use cases I’d regard it as a supplemental thing. Linked Data really shines with less tabular, more network-oriented data, like information about personal relationships, as with Daniel O’Connor’s Michael Jackson example.

      So yes, Sean, I agree with both your points, but I want to discuss how we encode the data and how we enhance our data’s interoperation and discovery as separate issues.

      • 2009 August 20

        You are right Dan, linking and CSV are really separate issues.

        While you are correct that it’s straightforward for the producer to convert Excel to CSV, it is not always as easy for a consumer who is not working in Windows with Excel installed (as when I work from home on Mac or Linux). So, I’d be very happy if the ABS, RBA etc. began providing CSV formats. Still, it is more of a “nice to have” compared to data that is not available at all.

  21. 2009 August 19

    As a spatial professional who spent some time with Queensland Health, I can underscore the value of more accessible data.

    With ~69,000 employees, Queensland Health is one of the nation’s bigger employers, so gains in productivity through optimised placement of human resources can be significant, particularly with current demands for medical professionals.

    Geocoded crime data is an example of unanticipated but extremely useful data for Queensland Health, who could exploit it to improve service delivery and reduce associated costs – for example, to inform placement of Government Medical Officers.

    Crimes against the person occur at all times of day and all days of the week, and crime patterns vary across the state. Unfortunately, QH could only access crime data down to police region (7 regions within Queensland). Therefore the value of analysing this data to inform recruitment and staff management within the criminal forensic medical unit was much lower than it might have been, had we had access to geocoded crime statistics.

    For an outstanding example of release of (US) public sector data, see:

    Within this site, I think the release of restaurant inspection data is a huge motivating force for restaurants to pull their socks up, and a reward for those already performing well in the hygiene stakes – eg:

    I also think that this information being available is a pretty handy service to the travelling public, however I do recognise that this is the jurisdiction of local governments, who here in Australia have a mixed record on enforcement and acting in the public interest.

    NB. Locally, the Queensland Government is improving their information delivery tools with sites such as:

    In summary I strongly support the publication of public sector datasets to the public domain, in a way that is meaningful for the consumer yet respects the privacy of the individual.

  22. 2009 August 20
    asa letourneau permalink

    Why not first publish a web 2.0 list of all digital public information content held by government agencies as data sets, and allow the general public to vote on what they would like to see made available, so that these ‘data sets’ can be prioritised for open access online via web 2.0 widgets?

    If you don’t know what is available in the first place, how can you make an informed decision about what you’d like to see? This will save everyone a lot of time and effort and provide a real incentive to contribute.

    (asa letourneau & sebastian gurciullo)

    • 2009 August 20
      Gordon Grace permalink

      first publish a web2.0 list of all digital public information content held by government agencies as data sets

      I suspect this would be a fairly significant chunk of work that, were we all to wait for it to be ready before suggesting data sets, may result in a lost opportunity. In the meantime, some pointers to the sorts of things the federal government may already be capable of providing online (at least in some form) could be found at

      > Government Sites By Portfolio
      > Government Sites by Topic
      > A-Z List of Government Sites

      These lists may provoke a response (at least in the readers of this blog) of:

      “Hey – that’s a website already providing government information / services, I wonder if they could provide the data behind that site in a better / different / open / interoperable fashion?”

  23. 2009 August 20

    I agree with Sean Carmody. ABS data is available via Creative Commons. The Australian people should be able to access “their” data; this should in theory provide more accurate details for better decision-making and policy writing. It is all about life cycle, collaboration and trust.

    Open data standards will restore trust between the Australian people and the Government about what is actually collected.

    • 2009 August 20
      asa letourneau permalink

      Agree with Adian that it is definitely about trust. It wouldn’t take many unsatisfying experiences, I would think, for people to disengage from government-supplied ‘data’ if they feel in any way that they are not being given the opportunity to create meaningful collaborations and connections beyond the vision of government.

      • 2009 August 21
        Kevin Cox permalink

        It is inevitable that there will be “unsatisfying experiences”. Any system should have built-in mechanisms to report such difficulties and ways of acting on them.

  24. 2009 August 20

    I have a couple of different responses to this topic.

    First, I work for a software company that works with data providers and public intelligence, so I’m really keen to have more people answer the question – what data do we want to see? What should the government provide?

    From a personal perspective, I would like transparency on our emissions and resource usage, and the ability to more easily understand the information available – so let’s add that to the request list.

    From a privacy perspective, the Australian Bureau of Statistics has conquered keeping data private and releases population census data to the world via CDATA Online. This data is confidentialised on the fly to protect individual privacy, and it is used a lot by researchers and agencies.

    Next week, the ABS will be launching its new Table Builder paid service. It allows access to the whole census database, and users can extract queries (not individual records, but aggregates) up to 10 million records in size.

    At my company, we’re hoping other organisations can leverage the work the ABS has done in providing data to the public. As statisticians, the ABS are very concerned about protecting privacy and ensuring that it’s really difficult to misinterpret the data / analysis being conducted. It’s fair to say that they are conservative. Yet they are the first statistical agency in the world to provide query access to the whole census database online via the web while ensuring that privacy is protected. It can be done!
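    To give a feel for what confidentialising aggregates can mean, here is a toy sketch of one simple idea, small-cell suppression: counts below a threshold are withheld rather than published. The unit records are invented, and this is not the ABS's actual method (which is more sophisticated) – it just illustrates why query results can be safe to release even when unit records are not.

```python
from collections import Counter

# Publish a cell only if at least this many people contribute to it.
THRESHOLD = 3

# Made-up unit records: (region, occupation). Never released directly.
records = [
    ("A", "nurse"), ("A", "nurse"), ("A", "nurse"), ("A", "teacher"),
    ("B", "nurse"), ("B", "teacher"), ("B", "teacher"), ("B", "teacher"),
]

def confidentialised_counts(rows):
    """Cross-tabulate, then suppress (None) any cell below the threshold."""
    counts = Counter(rows)
    return {cell: (n if n >= THRESHOLD else None)
            for cell, n in counts.items()}

table = confidentialised_counts(records)
# Large cells survive; a cell describing only one or two people is withheld.
```

    The trade-off is visible even in the toy: privacy protection costs some detail, which is exactly why granularity decisions (SLA vs. electorate vs. region, as discussed earlier in this thread) matter.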


    • 2009 August 20
      Kevin Cox permalink

      Jo – good question and congratulations to the ABS.

      The difficulty with the question is that for me to answer “What data do I want?” I have to ask “What data do you have?”. In all systems, people do not know what they want until you give them something. As soon as you do, they will tell you what they really want :)

      This is not incompetence or stupidity; it is just the way we are constructed. We learn by doing and through experience, and it is the same with all aspects of Government 2.0.

      On the question of the ABS charging for access to data: that should only happen if the money collected is clearly and obviously spent on improving access, and then only in rare cases. Access in general should be free. Help in interpretation and help in accessing can be chargeable, but not the access itself.

      The difficulty with charging for access is that many worthwhile accesses will not be made because the clients do not have the resources to pay. You also find that collecting the money costs a high proportion of the money collected. From the point of view of the whole community, that is money wasted – unless the charging results in better uses of the data. As well-designed access is essentially cost-free once it is built, charging for it does not result in better uses of the data.

  25. 2009 August 20

    I suspect this would be a fairly significant chunk of work that, were we all to wait for it to be ready before suggesting data sets, may result in a lost opportunity.

    It would be a less significant chunk of work if the gov2au taskforce crowdsourced the job of compiling an inventory of existing public datasets.

    Why not start by setting up a gov2au delicious account & calling for people to send links to the locations of public data with a specific tag like #ozgovdata? Along the lines of this UK example:

    • 2009 August 20

      Hi Miriam,

      I’m a big fan of crowdsourcing it!

      For what it’s worth, though, if we were discussing a third-party crowdsourcing option for government data I would personally recommend Freebase, as I mentioned above, over delicious (I do love delicious, mind). My reasoning:

      – it’s a bit more direct – it’s obvious to people what they are contributing to, and they see their input immediately reflected on the site
      – it doesn’t artificially segment our government’s data from the data of its peers – would you not love to see the official dataset of parliamentary proceedings compared on equal footing with community-led and far superior options such as OpenAustralia?
      – the resulting list of datasets (as opposed to the datasets themselves), if it stays in Freebase, will not be under the exclusive control of the government; it will still be community curated, and will remain open. I would regard that as a virtue. By contrast, the delicious links list would necessarily be compressed down to a “final” list as with the UK example, since one can’t be assured that people will not tag random nonsense with the selected delicious tag
      – it also allows you to crowdsource metadata about data sets (without requiring it) – so you can annotate a data set with information about whether it is currently open, who is responsible, and so on, as well as just the web page. Freebase doesn’t support tags, mind, but it is pretty easy to kick off metadata-rich pages. Here’s one for the NSW State Heritage Register, for example, annotated with its “Crown copyright” licence and the topics it covers.

      There are of course other options – a plain old wiki page would get part of the way there, or indeed tagging a tweet with #gov2au #dataset – and there are certainly other tools besides. And of course, all of our proposals so far have the benefit of not needing the blessing of the taskforce to kick off.

      If we were going to look at tag-based crowdsourcing, by the way, what tags would we use?
      So, for example, delicious tags gov2au+dataset, or twitter hashtags #gov2au and #dataset? Or would people perhaps prefer a single tag such as “gov2audataset”?

    • 2009 August 20
      Gordon Grace permalink

      Or #datagovau?

  26. 2009 August 20


    A specific benefit of releasing government data publicly is that the rest of the government would also become aware of that data and have the opportunity to reuse it in appropriate ways.

    Currently there is significant difficulty in many parts of government in finding out about all the data that has been captured and stored that may be useful to the programs of other departments (and sometimes also within the same department).

    This can result in similar research being conducted time and time again by different agencies and at different levels who simply don’t know that the data has already been collected elsewhere.

    If different departments and governments could be more aware of the data collected and available for reuse this would also open opportunities to combine data sets for greater benefit – such as across states or for certain demographic groups.

    Without breaching privacy, this would result in both lower data collection costs across governments and in improved policy and service outcomes.

    That’s without considering ANY of the benefits to parties outside the government itself.

  27. 2009 August 20
    Bec permalink

    Just including my comment from another post!

    The other day I was disappointed to find out that had no RSS feed that we could use to pull tenders listed under our agency into our own website as it was not desired to simply link to alone.

    • 2009 August 20
      Bec permalink

      BTW, it’s a real pain that the only way you can get a list of tenders by agency is to use the search… which isn’t a friendly URL!
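
      The feed Bec is asking for is cheap to produce once the tender data is available in any structured form. A minimal sketch of generating an RSS 2.0 feed from tender records; the records, field names, and URLs are hypothetical, not drawn from any real tender system’s schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical tender records, e.g. as pulled from an agency's database.
tenders = [
    {"title": "Provision of ICT support services",
     "url": "http://example.gov.au/tender/123"},
    {"title": "Office fit-out works",
     "url": "http://example.gov.au/tender/124"},
]

def tenders_to_rss(records, channel_title):
    """Build a minimal RSS 2.0 document from tender records."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for rec in records:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = rec["title"]
        ET.SubElement(item, "link").text = rec["url"]
    return ET.tostring(rss, encoding="unicode")

feed = tenders_to_rss(tenders, "Department of Example tenders")
print(feed)
```

      Serving one such feed per agency would also solve the unfriendly-URL problem: the feed address itself becomes the stable per-agency link.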

  28. 2009 August 21
    Kevin Cox permalink

    Here is a great example of what can be done when organisations share data. This is the sort of thing that becomes possible for government organisations across Australia particularly the local area service organisations.

    These people are in a very competitive business not known for their cooperation.

    • 2009 August 21
      asa letourneau permalink

      And let’s not forget about the FUN factor. Imagine individual artists, arts organisations, cultural sectors (anyone with an imagination!) having access to a universe of data that allows them to PLAY in new ways. Great things will truly come from data sharing.

  29. 2009 August 21

    There should be a uniform national mandatory release of food hygiene inspection information. NSW does this well, WA poorly, Victoria not at all. Death to secrecy and government refusals to release the information we as ratepayers and taxpayers have already paid for.

  30. 2009 August 27
    Dee permalink

    What about spatial datasets? How do we handle this kind of data? If we think about it, spatial data (such as maps, organisational locations, models, photographs of locations, etc.) can be highly sensitive from a national security perspective. How do we control and handle it, and keep it from falling into the wrong hands?

    • 2009 August 27

      Dee, I’m sure there are potential national security implications for some classes of datasets, but I don’t see this sort of data set as being especially fraught. I don’t think anyone is talking about having a live, constantly updated dataset of Australian troop deployments. Rather, government has a lot of useful information on its hands that is not of any conceivable military use – say, maps of health infringements (as Fitzroyalty mentions), or toilet locations. And in any case, even popping the odd well-known military installation on a map is not usually going to add much to the public knowledge. By all means, let’s keep security in mind, but it seems a comparatively minor issue for Australia, and one we should be making the most of.

    • 2009 August 27

      Dee, I think you’re imagining spatial data as more sensitive than most of it is. The federal government already makes available a significant amount of spatial data for public use in one form or another. I’m certain national security considerations have been included in the thinking before their release.

      Nobody in their right mind countenances the release of military movement information, or sensitive shipping and flight information. But we all know where the non-secret military installations are in Australia, right down to places like Pine Gap. If anyone wanted a relatively detailed look at them, they could just use Google Maps.

      There’s any amount of data that rightly shouldn’t be released, and open data advocates rarely suggest those sorts of things – personal and sensitive information – be released. The risk, if all we consider is the negative rather than the opportunity cost, is that we fail to release the data that ought to be available.

  31. 2009 August 28

    One of the constraints in releasing crime information is that victims of crime do not like to be identified. We’ve had a number of complaints from people who have become distressed at the level of detail we already provide (i.e. street-level data).

  32. 2009 August 28

    Btw, the Bureau provides free access to crime data for research, and by going onto our website it is possible to identify (a) trends in any kind of crime over any period of up to 10 years, (b) how individual LGAs rank in terms of the major categories of crime, and (c) for most LGAs, a report containing maps showing the spatial distribution of crime at street level. It is also possible to obtain raw data on crimes reported by crime type, time period and location (down to LGA). This information is free.

Comments are closed.