By Robin Newberger
Hyper-local data, whether aggregated by municipal agencies or independent researchers, is giving policymakers and ordinary citizens the information to answer countless questions affecting the lives of Chicago residents. For example, parcel-level data assembled from a variety of sources is helping researchers address whether and how infrastructure investments affect home prices in different neighborhoods. A newly constructed catalogue of neighborhood-level cultural assets is helping planners and community advocates measure cultural participation and the demographics of those audiences. Analyses of geocoded data points, such as the locations of food establishments cited for code violations, are allowing health inspectors to predict where the next case of food poisoning is likely to surface.
The process of collecting and distributing data is undergoing a profound transformation, and neighborhood-level data – or more specifically, what data is available and who is using it – was the topic of the latest Civic Research Forum, held in September 2016 at the Federal Reserve Bank of Chicago. The Civic Research Forum, organized by the Chicago Fed and World Business Chicago, joined for this meeting by the Local Initiatives Support Corporation (LISC), brings together researchers in the Chicago area to discuss areas of common interest and encourage greater collaboration. The purpose of the most recent meeting was to share some of the progress made toward building new datasets, to identify new questions that can now be answered with these data, and to describe the impediments that limit the flow of information between data aggregators and users.
New technology platforms like the city of Chicago’s Open Data Portal are largely responsible for spurring this data revolution. The city has made it possible for members of the public to download information ranging from every reported crime in Chicago since 2001, to water quality at Chicago beaches, to the salaries of all municipal employees. In addition, the city has launched a mapping application called OpenGrid, which allows users to display multiple data sets from different sources in the same geographic area. Further, much of the code the city has written to manage the data is open source, allowing interested parties to build applications, such as maps or analytical tools, that run on top of this data, expanding its analytical potential for both the city and the public.
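The portal is built on the Socrata platform, so its datasets can also be queried programmatically through the Socrata Open Data API (SODA). The sketch below shows how a query URL for the portal might be assembled in Python; the dataset identifier (`ijzp-q8t2`, commonly listed for the reported-crimes dataset) and the field names in the example filter are assumptions that should be verified against the portal’s own listings before use.

```python
# Sketch: building a SODA API query against the City of Chicago Open Data
# Portal (data.cityofchicago.org). Dataset id and field names are assumed,
# not confirmed by this article -- check the portal before relying on them.
from urllib.parse import urlencode

PORTAL = "https://data.cityofchicago.org/resource"

def soda_query_url(dataset_id, where=None, limit=100):
    """Build a SODA query URL for one of the portal's datasets."""
    params = {"$limit": limit}
    if where:
        params["$where"] = where  # a SoQL filter, e.g. "year = 2016"
    return f"{PORTAL}/{dataset_id}.json?{urlencode(params)}"

# Example: the first 50 reported burglaries from 2016
# (field names "year" and "primary_type" are hypothetical here).
url = soda_query_url("ijzp-q8t2",
                     where="year = 2016 AND primary_type = 'BURGLARY'",
                     limit=50)
# The URL can then be fetched with any HTTP client, e.g.:
#   import json, urllib.request
#   rows = json.load(urllib.request.urlopen(url))
```

Because the same SODA conventions apply across Socrata-hosted portals, the same helper works for other datasets by swapping in a different dataset identifier.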
Organizations like the Smart Chicago Collaborative, founded by the City of Chicago, the Chicago Community Trust, and the MacArthur Foundation, are also adding to the tools that support data analysis at the neighborhood level. Its projects include the Chicago Health Atlas, which constructs health indicators by neighborhood and zip code, and the Array of Things Civic Engagement Project; the Array of Things is a network of sensor boxes, operated by the University of Chicago and Argonne National Laboratory, that collects real-time data on the city’s environment and infrastructure. The Smart Chicago Collaborative also founded and runs the Civic User Testing Group (CUTGroup), made up of 1,700 residents of Chicago and Cook County who test new apps and websites for civic developers. In addition, the Collaborative helped launch Chi HackNight, a weekly gathering, open to the public, that encourages people to create data applications for civic purposes.
Mapping the data ecosystem within the city represents the next big hurdle in developing this data infrastructure. Broadly, the ecosystem consists of data producers and consumers, but within those categories are nonprofits that both collect and consume data; informal networks of people who get together to talk about or work on data; and data intermediaries like LISC that bridge the gap between producers and consumers. To understand what is missing in this ecosystem, the Smart Chicago Collaborative conducted a city-wide survey of data needs in 2014 and held the School of Data Days conference to discuss strategies for addressing these challenges, including identifying gaps in data availability (e.g., Chicago Public Schools data) and building better “on-ramps” to make data more accessible to nonprofit organizations, which often lack the time and human capital to be as data-driven as they would like.
Getting data into the hands of the people most affected by it has become another major objective of this work. Even when data is publicly available, getting it to the individuals within a community who can use the information remains a challenge. When a data analyst or “producer” engages with members of a community, it is usually to have the community interpret or legitimize the analyst’s findings, or perhaps to grant the researcher access to more data. Yet a core tenet of these new technology platforms is collaboration between data aggregators and data users. In principle, if people know what is happening on their block or in their neighborhood, they can implement an intervention. As things stand, neighborhood residents are generally not connecting the data to their daily experiences, or using the information for social good, because of limits on accessing or fully understanding it. More work is therefore needed to improve the accessibility of data for the mainstream public, so that it reaches tech-savvy groups as well as neighborhood residents who want to improve their own backyards.
Participants at the forum offered some suggestions to address this education component. One idea was for the city to sponsor “civic tech fellows” to serve as translation points between data generators and data users. These fellows could be drawn from upperclassmen at local universities interested in both community organizing and computer science, and could function like AmeriCorps volunteers. Members of the Civic Analytics Network, a network of urban chief data officers from around the country, are already discussing the idea of developing a training consortium for this type of work. Another suggestion was to deploy local artists as translators, given their creative skills in telling the stories that elucidate the links between data and people. Artists might be able to visually articulate the issues that shape neighborhoods and help people imagine how those places could look in the future. As the discussion made clear, it is often not until data gets to a very local level that people can relate to it and test it against their own experiences. The availability of hyper-local data ushers in a new opportunity to mobilize city residents around issues that affect neighborhood services, quality of life, and even social justice, but many of these connections have yet to be made.