Posts Tagged ‘visualization’

The November 14, 2012, Technology Salon NYC (TSNYC) focused on ways that ICTs can support work with children who migrate. An earlier post covers the discussion of the Population Council’s upcoming ‘Adolescent Girls on the Move’ report. This post focuses on the strategic use of data visualization for immigration advocacy, based on opening points from Brian Root and Enrique Piracés of Human Rights Watch (HRW).

Visualizing the US Detention Network and the transfers between detention centers.

The project

The HRW initiative used data to track and visualize the movement of people through the US immigration detention system after noticing that U.S. Immigration and Customs Enforcement (ICE) was moving people very freely without notifying their families or attorneys. HRW was aware of the problem but not of its pervasiveness. The team obtained some large data sets from the US government via Freedom of Information Act (FOIA) requests. They used the data to track individuals’ routes through the immigration detention system, eventually mapping the whole system out at both the aggregate level and the level of the individual. The patterns in the data informed HRW’s advocacy at the state and federal levels. In the process, HRW learned some key lessons about advocacy and the importance of targeting data visualizations to specific advocacy purposes.

Data advocacy and storytelling

The data set HRW obtained included over 5.4 million records covering 2.3 million people, with 10-12 variables each. The team was able to connect these records to individuals, which helped tell a meaningful story to a broad audience. By mapping all the US facilities involved and using geo-location to measure the distance any individual had been transferred, the team could see how many times an individual from a given country and age range was moved from one facility to another, and patterns began to emerge. For example, people on the East Coast were often transferred to Texas, where the ratio of immigration lawyers to detainees is low.
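To give a sense of the geo-location step, here is a minimal sketch (not HRW’s actual code) of measuring the great-circle distance of a single transfer from facility coordinates using the haversine formula. The coordinates below are hypothetical stand-ins for an East Coast facility and a South Texas one.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3959 * asin(sqrt(a))  # 3959 = mean Earth radius in miles

# Hypothetical transfer: northern New Jersey to South Texas.
print(round(haversine_miles(40.7, -74.2, 26.2, -98.2)), "miles")
```

Summing this distance over every leg of an individual’s route gives the total distance that person was moved through the system.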

Even though the team had data and good stories to tell with it, the two alone were not enough to create change. Human rights are often not a high priority for decision makers, but budgets are, so the team attached a cost to each transfer vector, allowing HRW to tell decision makers how much was being spent on each of these unnecessary transfers.
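As an illustration of how a cost can be attached to each transfer vector (HRW’s actual cost model is not described in this post), here is a sketch that sums a hypothetical flat per-mile cost over a handful of invented transfer legs:

```python
# Hypothetical transfer legs: (pseudo_id, miles moved). Both the records
# and the per-mile rate are invented for illustration.
PER_MILE_COST = 1.50  # assumed dollars per detainee-mile

transfers = [("A1", 1700), ("A1", 240), ("B2", 950)]

total = sum(miles * PER_MILE_COST for _, miles in transfers)
print(f"Estimated transfer spend: ${total:,.2f}")
```

Aggregating such costs by facility or state turns an abstract rights concern into a concrete budget figure for decision makers.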

They were also able to produce aggregated data at the local level. They created a state dashboard so that people could understand the data at the state level, since the detention facilities are state-run. The data highlighted local-level inefficiencies, and the local press was then able to tell locally relevant stories, generating public opinion around the issue. This is a good example of why moving from data to storytelling matters for strengthening advocacy work.

HRW conveyed information and advocated both privately and publicly for change in the system. Their work resulted in the issuing of a new directive in January 2012.

FOIA and the data set

Obtaining data via FOI requests can be quite difficult if an organization is a known human rights advocate; for others it can be much easier. The process involves a great deal of letter writing and sometimes legal support.

Because FOIA data comes from the source, validation is not a major issue. Publishing methodologies openly helps with validation because others can observe how the data are being used. In the case of HRW, data interpretations were shared with the US Government for discussion and refutation. The organization’s strength is its credibility, so HRW makes every effort to be conservative with data interpretation before publishing or making any kind of statement.

One important issue is knowing what data to ask for and what is possible or available. Phrasing the FOI request to obtain the right data can be a challenge. In addition, sometimes agencies do not know how to generate the requested information from their data systems. Google searches for additional data sets that others have obtained can help. Sites such as CREW (Citizens for Responsibility and Ethics in Washington), which has 20,000 documents open on Scribd, and the Government Attic project, which collects and lists FOI requests, are attempting to consolidate existing FOI information.

The type of information available in the US could help identify which immigration facilities are dealing with the under-18 population and help speculate on the flow of child migrants. Gender and nationality variables could also tell stories about migration in the US. In addition, the data can be used to understand probability: If you are a Mexican male in San Jose, California, what is the likelihood of being detained? Of being deported?
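The probability questions above can be answered with simple conditional counts over the records. A minimal sketch, using entirely hypothetical records and outcomes (the real data set’s variables and values are not reproduced here):

```python
# Hypothetical detention records: (nationality, sex, city, outcome).
records = [
    ("Mexico", "M", "San Jose", "deported"),
    ("Mexico", "M", "San Jose", "released"),
    ("Mexico", "M", "San Jose", "deported"),
    ("Guatemala", "F", "San Jose", "released"),
]

def p_outcome(records, outcome, **given):
    """Estimate P(outcome | given attributes) from record counts."""
    keys = ("nationality", "sex", "city")
    subset = [r for r in records
              if all(r[keys.index(k)] == v for k, v in given.items())]
    if not subset:
        return None  # no matching records to estimate from
    return sum(r[3] == outcome for r in subset) / len(subset)

print(p_outcome(records, "deported", nationality="Mexico", sex="M"))
```

With millions of real records, the same conditional-count logic yields the kind of likelihood estimates described above.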

The US Government collects and shares this type of data; however, many other countries do not. Currently only 80 countries have FOI laws. Obtaining these large data sets is a question both of whether government ministries are collecting statistics and of whether there are legal mechanisms for obtaining data and information.

Data parsing

Several steps and tools helped HRW parse the data. To determine whether the data were stable, they were divided by column and reviewed using shell scripts. The data were then moved into a MySQL database, though other databases could serve equally well. A set of programs and scripts was built to analyze the data, and detention facilities were geo-located using GeoNames, with the highest-quality results used to bring geo-location down to the block level and map all the facilities. TileMill and Quantum GIS (QGIS) were then used to make maps, and Protovis (since succeeded by D3) was used to create the data visualizations.
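The column-by-column stability check described above can be sketched in a few lines. This is an illustrative reconstruction, not HRW’s actual scripts: it profiles how many fields each row has and what values each column contains, which quickly flags broken rows before anything is loaded into a database.

```python
import csv
from collections import Counter

def column_profile(lines, delimiter=","):
    """Profile a delimited data set: tally rows per field-count and the
    values seen in each column -- a quick stability check before a
    database load."""
    field_counts = Counter()
    columns = {}
    for row in csv.reader(lines, delimiter=delimiter):
        field_counts[len(row)] += 1
        for i, value in enumerate(row):
            columns.setdefault(i, Counter())[value] += 1
    return field_counts, columns

# A stable file has a single field-count; mixed counts flag broken rows.
rows = ["123,MX,2009-01-04", "456,GT,2009-02-11", "789,HN"]
field_counts, columns = column_profile(rows)
print(field_counts)  # two 3-field rows and one broken 2-field row
```

The per-column value tallies also reveal whether a field holds what its name promises (dates that are actually dates, a closed set of facility codes, and so on).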

Once the data were in place, common variables were identified across the different fields and used to group and link information and records to individuals; many individuals had been through the system multiple times. The team then looked at different ways the information could be linked. They were able to measure time, distance and the “bounce factor”, i.e., how many times an individual was transferred from one place to another.
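Once records are linked to individuals, computing the “bounce factor” might look like this sketch; the pseudo-IDs, dates and facilities below are hypothetical:

```python
from collections import defaultdict

# Hypothetical linked transfer log: (pseudo_id, date, facility).
events = [
    ("P1", "2009-01-04", "Elizabeth, NJ"),
    ("P1", "2009-02-11", "Port Isabel, TX"),
    ("P1", "2009-03-02", "Pearsall, TX"),
    ("P2", "2009-05-20", "Tacoma, WA"),
]

# Group events into one chronological history per individual.
histories = defaultdict(list)
for pid, date, facility in events:
    histories[pid].append((date, facility))

def bounce_factor(history):
    """Count facility-to-facility transfers for one individual."""
    stops = [facility for _, facility in sorted(history)]
    return sum(a != b for a, b in zip(stops, stops[1:]))

print({pid: bounce_factor(h) for pid, h in histories.items()})
```

The same per-individual histories also support the time and distance measures: elapsed days between stops, and summed great-circle distance across the route.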

Highlighting problematic cases: One man’s history of transfers.

Key lessons:

Remember the goal. Visualization tools are very exciting, and it is easy to be seduced by cool visualizations, but it is critical to keep the goal of the project in mind. In the HRW case the goal was to change policy, so the team needed visualizations that would specifically lead to policy change. In discussions with the advocacy team, they determined that the visualizations needed to 1) demonstrate the complexity of the system, 2) help people understand the distances involved, and 3) show the vast numbers of people being moved.

Privacy. It is possible to link individual records with other information to tell a broader story, but one needs to be very careful that this type of information does not identify individuals and put them at risk. For this reason, not all information needs to be shared publicly for advocacy purposes; it can be visualized in private conversations with decision makers.

Data and the future

Open data, open source, data visualization, and big data are shaping the world we live in. More and more information is being released, whether through open data initiatives, FOI requests or leaks like Wikileaks. Organizations need to learn how to use this information in more and better ways.

Many thanks to the Women’s Refugee Commission and the International Rescue Committee for hosting the Salon.

The next Technology Salon NYC will be coming up soon. Stay tuned for more information, and if you’d like to receive notifications about future salons, sign up for the mailing list!


Last week some 40 people from more than 20 different organizations with national and global humanitarian and relief missions attended a Google Partnership Exploration Workshop in Washington, DC, to share information in an interactive setting and explore how the organizations and Google’s geo and data visualization technologies could further each other’s missions.

Lucky me – I got to go on behalf of Plan. Much of the meeting centered on how Google’s tools could help in disasters and emergencies, and on what non-profits would like to be able to do with those tools.

The meeting opened with a representative of FEMA talking about FEMA’s generally slow, government-centered response, and how that needed to turn into a quick, user-generated information network providing real information in real time so that FEMA could offer a real, people-centered response. I loved hearing someone from government say things like “we need to look at the public as a resource, not a liability.” The conclusion was that in a disaster or emergency you just need enough information to help you make a better decision. The public is one of the best sources for that information, but government has tended to ignore it because it’s not “official.” Consider: a 911 caller is not a certified caller with a background check and training on how to report, yet that is the basis of our 911 emergency system. Why can’t it be the same in a disaster?

We also heard about the World Bank’s ECAPRA project for disaster preparedness in Central America. The project looks at probabilistic risk assessment, using geo-information to predict and assess where damage is likely. The main points from the World Bank colleagues were that for SDI (Spatial Data Infrastructure) we need policies, requirements, and mandates, yes, but top-down measures alone are not sufficient. We also need software that enables a bottom-up approach, and incentives aligned to drive an open source agenda. And not just open source code: mass collaboration, which would change everything. The challenge, then, is how we help civil society and governments share and deliver data that enables decision making, and how we support data collection from both the top down and the bottom up. The World Bank is working with developers on collaborative mobile data-collection applications that let people gather information easily. In this system, different institutions still own the data, but others can update and add to it. The Bank hopes to embed this within Central American national disaster planning systems and to train and support the national systems in using these tools. Each country is developing these open source applications itself and is free to choose the tools and elements that best fit its local situation.

Google then stepped in to share some of its visualization offerings: Google Fusion Tables, the Visualization API, the Chart API and Motion Charts (Gapminder). With these applications, different sources can share data, or share some data and keep the rest private, and you can compare data from different sources. For example, a chart currently residing in Google Fusion Tables pulls GDP data from the CIA World Factbook, the World Bank and the IMF, and lets you compare data across countries from different sources. You can then use that data to create your own data visualizations, including maps, tables, charts, and the fabulous Gapminder motion charts (first made popular at TED by Hans Rosling), all of which can easily be embedded in your own webpage. If you have public data that deserves to be treated separately, you can become a Google trusted source (click on the “information for publishers” link to see how to get your data made public). For a quick tutorial on how to make your own Gapminder chart, check out this link. *Note: Gapminder is not owned by Google; it is an independent foundation. Google bought the software (Trendalyzer) to improve the technology further.

Next up was the American Red Cross, which shared some of the challenges it faces and how it uses geo-spatial and information mapping to overcome them. The Red Cross has a full mobile data-gathering system and works through volunteers during disasters to collect information. It also has over 30 years of disaster data it can use to analyze trends. The ARC wants to do more with mapping and visualization so that it can see what is happening right away, using maps, charts and trend analysis. What does the ARC want to see from Google? A disaster dashboard, e.g., using Google Wave; inventory tracking and mapping capability; and data mining and research capabilities such as Fusion Tables offers. They want people to be able to go to the ARC site and see not what the Red Cross is, but what the Red Cross does, and to use the site for up-to-date information that helps people manage during disasters and emergencies.

Wow, and this was all before lunch!

—————

Related posts:
I saw the future of geovisualization… after lunch
Is this map better than that map?
Ushahidi in Haiti:  what’s needed now

Read Full Post »
