FRED’s new (data-format) friend, JSON

August 27th, 2013 | General | Susan McGregor | 2 Comments

It’s probably clear by now that we here at Data Docs have a lot of love for AL/FRED, the vast web- and API-accessible economic data repository run by the St. Louis Fed.

It is fair to say, though, that we never loved all parts of AL/FRED equally; while the thoroughness, timeliness, and accuracy of its data have always been outstanding, the XML format in which that data was delivered was better suited to an earlier, less JavaScript-driven internet. It was a relatively small price to pay, but accessing AL/FRED API data in a JavaScript application meant always having to write descriptive but tiresome (and verbose) getElementsByTagName() commands in order to use the data. Always, that is, until a week ago Monday.

Data Docs is proud to be an early user of the AL/FRED API’s JSON format, which can be accessed by simply adding file_type=json to any API call.

Introducing JSON

So let’s say your original API call looks something like the one below (for a complete guide to AL/FRED, see our previous blog post):

http://api.stlouisfed.org/fred/series/observations?series_id=PAYEMS
&observation_start=2013-01-01&units=chg&api_key=xxxxxxxxxxxxxxxxxx


Without the new file_type parameter, the above query will return the monthly non-farm payrolls figures from January 2013 to the present in XML format, with the first element describing the data you’ve asked for and each of the following elements containing one month’s reading, like so:

<?xml version="1.0" encoding="utf-8" ?>
<observations realtime_start="2013-08-27" realtime_end="2013-08-27" 
observation_start="2013-01-01" observation_end="9999-12-31" units="chg"
output_type="1" file_type="xml"
order_by="observation_date" sort_order="asc" count="7" offset="0" limit="100000">

<observation realtime_start="2013-08-27" realtime_end="2013-08-27" 
date="2013-01-01" value="148"/>
.
.
.
<observation realtime_start="2013-08-27" realtime_end="2013-08-27"
date="2013-07-01" value="162"/>

</observations>



Just by adding the new file_type=json parameter and value anywhere in the original query, however, we get essentially the same data structure in a handy JSON format. So the query becomes:

http://api.stlouisfed.org/fred/series/observations?series_id=PAYEMS
&observation_start=2013-01-01&units=chg&file_type=json&api_key=xxxxxxxxxxxxxxxxxx



And the result becomes:

{
    "realtime_start": "2013-08-27",
    "realtime_end": "2013-08-27",
    "observation_start": "2013-01-01",
    "observation_end": "9999-12-31",
    "units": "chg",
    "output_type": 1,
    "file_type": "json",
    "order_by": "observation_date",
    "sort_order": "asc",
    "count": 7,
    "offset": 0,
    "limit": 100000,
    "observations": [
        {
            "realtime_start": "2013-08-27",
            "realtime_end": "2013-08-27",
            "date": "2013-01-01",
            "value": "148"
        },
.
.
.
        {
            "realtime_start": "2013-08-27",
            "realtime_end": "2013-08-27",
            "date": "2013-07-01",
            "value": "162"
        }
    ]
}


Curly Braces instead of Carets – So What?

If you’re not a JavaScript developer, the difference between the two result sets above probably doesn’t seem that thrilling. Thanks to good formatting (hat tip to the essential JSON validation-and-reformat site, jsonlint.com), you might have found the second result set a bit easier to read.

The big deal is that JSON (which not-coincidentally stands for JavaScript Object Notation) is infinitely easier for JavaScript to read, too, meaning programmers have to write less code in order to use data in this format.

So, for example, while using the observations from the XML-formatted results would mean writing out data.getElementsByTagName(“observations”), the same information can be accessed from the JSON format simply by writing data.observations. And as anyone who’s ever texted “Trying 2 reach u” knows, saving even a handful of characters can really add up.
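Here’s what that looks like in practice. This sketch uses a hard-coded, trimmed-down copy of the JSON response shown above (rather than a live API call), so the only “parsing” step you need is JavaScript’s built-in JSON.parse():

```javascript
// A trimmed sample of the JSON response shown above, hard-coded
// here so the example runs without a network call or API key.
var responseText = '{"units": "chg", "count": 2, "observations": [' +
    '{"date": "2013-01-01", "value": "148"},' +
    '{"date": "2013-07-01", "value": "162"}]}';

// One built-in call turns the raw response text into a regular object...
var data = JSON.parse(responseText);

// ...and plain dot notation replaces getElementsByTagName().
var observations = data.observations;
console.log(observations.length);    // how many monthly readings came back
console.log(observations[0].date);   // "2013-01-01"
console.log(observations[0].value);  // "148" (note: FRED returns values as strings)
```

Note that FRED delivers the observation values as strings, so you’ll still want parseFloat() before doing any arithmetic with them.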

2 Responses and Counting...

  • osiso 08.27.2013

    im new to coding, i want to make a simple blog that captures this data and automatically plots it out (non farm payrolls or anything else from FRED) via highcharts.

the problem with highcharts for me is that the data.csv file format is cumbersome to format, especially if the data is updated monthly. is there an easy way to automatically download the data from FRED via this API and plot it on a chart (like highcharts)?

    osiso

  • While part of the Data Docs project is to provide access to charting methods for the data sets that we work with, we haven’t quite gotten there yet. So in the meantime, here are a few approaches that can smooth the process of creating regularly-updated charts from FRED data (or any other data repository that has an API):

    If your solution doesn’t need to be completely automatic, I would look into using Python to download the data to your computer and do the reformatting you need. Python is a flexible and beginner-friendly programming language, and there are libraries available for downloading data from the web and automatically parsing JSON. If you’re new to programming, using Python on this project could be a great place to start.

    If you need a fully automatic solution, you can still use Python but will need to have it running on a web server (which can be arranged with your web hosting provider). The program will be the same, but because it is running on the web already, you won’t have to manually upload it to your server to see the new results on your blog. To have the script run automatically on a regular basis (for example, every month) you’ll need to set up what’s known as a ‘cron job’. In both cases, you may need to arrange special permissions and programs with your web host.

If all of that sounds a bit much, you can always use a tool like Open Refine to open up your JSON file from FRED and convert it to CSV. Open Refine automatically records everything you do to a file, generating a macro (a code-like set of commands) that can then be applied to a new version of the file in seconds. If learning a whole language seems over-the-top, try Open Refine (which is great for all kinds of data-related tasks).

    Want some how-to help? Check out the videos from my 7-week data course at Columbia School of Journalism, which can give you a head start on Python, Refine and more.
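For the Highcharts side of the question specifically, the reshaping step is small enough to sketch here. The observations array below is hard-coded from the sample in the post (a real page would pull it from the API response), and the chart configuration at the end is only an outline of a Highcharts call, not a complete working chart setup:

```javascript
// Reshape FRED's JSON observations into the [timestamp, value] pairs
// that Highcharts expects in a series' data option.
var observations = [
    { date: "2013-01-01", value: "148" },
    { date: "2013-07-01", value: "162" }
];

var seriesData = observations.map(function (obs) {
    // FRED reports values as strings; a chart wants numbers, and a
    // Highcharts datetime axis wants milliseconds since the epoch.
    return [Date.parse(obs.date), parseFloat(obs.value)];
});

// seriesData can then be dropped into a chart configuration, e.g.:
// new Highcharts.Chart({ chart: { renderTo: "container" },
//                        series: [{ data: seriesData }] });
```

Because the conversion happens in the browser each time the page loads, the chart stays current with FRED automatically — no monthly CSV wrangling required.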
