Archive for the ‘technology salon’ Category

At the October 17, 2012 Technology Salon NYC, we focused on ways that ICTs can be used for qualitative monitoring and evaluation (M&E) efforts that aim to listen better to those who are participating in development programs. Our lead discussants were: John Hecklinger, Global Giving; Ian Thorpe, UN DOCO and the World We Want 2015 Campaign; and Emily Jacobi, Digital Democracy. This salon was the final in a series of three on using new technologies in M&E work.

Global Giving shared experiences from their story-telling project which has collected tens of thousands of short narratives from community members about when an individual or organization tried to change something in their community. The collected stories are analyzed using Sensemaker to find patterns in the data with the aim of improving NGO work. (For more on Global Giving’s process see this document.)

The United Nations’ Beyond 2015 Campaign aims to spur a global conversation on the post-MDG development agenda. The campaign is conducting outreach to people and organizations to encourage them to participate in the discussion; offering a web platform (www.worldwewant2015.org) where the global conversation is taking place; and working to get offline voices into the conversation. A challenge will be synthesizing and making sense of all of the information coming in via all sorts of media channels and being accountable now and in future to those who participate in the process.

Digital Democracy works on digital literacy and human rights, and makes an effort to integrate qualitative monitoring and evaluation into their program work stream. They use photography, film and other media that transcend the language and literacy barriers. Using these kinds of media helps participants express opinions on issues that need addressing and builds trust. Photos have helped in program development as well as in defining quantitative and qualitative indicators.

A rich conversation took place around the following aspects:

1) Perception may trump hard data

One discussant raised the question “Do opinions matter more than hard data on services?” noting that perceptions about aid and development may be more important than numbers of items delivered, money spent, and timelines met. Even if an organization is meeting all of its targets, what may matter more is what people think about the organization and its work. Does the assistance they get respond to their needs? Rather than asking “Is the school open?” or “Did you get health care?” it may be more important to ask “How do you feel about health?” Agencies may be delivering projects that are not what people want or that do not respond to their needs, cultures, and so on. It is important to encourage people to talk amongst themselves about their priorities, to invite viewpoints from people of different backgrounds, and to figure out how to pull out information that can help inform programs and approaches.

2) It is a complex process

Salon participants noted that people are clearly willing to share stories and unstructured feedback. However, the process of collecting and sorting through stories is unwieldy and far from perfect. More work needs to be done to simplify story-collection processes and make them more tech-enabled. In addition, more needs to be done to determine exactly how to feed the information gleaned back in a structured and organized way that helps with decision-making. One idea was the creation of a “Yelp” for NGOs. Tagging photos and stories, or asking program participants to tag them, can help make sense of the data. Subtitling videos can also be of great use in beginning to make sense of the information they hold. Dotsub, for example, is a video subtitling platform that uses a Wikipedia-style model, enabling crowdsourced video translations into any language.
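To make the tagging point concrete, here is a minimal sketch in Python (the story records and tag names are invented for the example) showing how even simple tag counts and tag co-occurrence counts can begin to surface patterns in a pile of tagged stories; dedicated platforms such as Sensemaker do far more sophisticated analysis than this.

```python
from collections import Counter
from itertools import combinations

# Entirely hypothetical story records: free text plus tags added by program
# participants or reviewers during collection.
stories = [
    {"text": "The clinic reopened after the community petitioned the council.",
     "tags": ["health", "local-government"]},
    {"text": "Girls dropped out when school fees rose.",
     "tags": ["education", "gender"]},
    {"text": "A women's group lobbied for a new water point.",
     "tags": ["water", "gender", "local-government"]},
]

# First pass: how often does each tag appear? (What are people talking about?)
tag_counts = Counter(tag for story in stories for tag in story["tags"])

# Second pass: which issues are raised together in the same story?
pair_counts = Counter(
    pair
    for story in stories
    for pair in combinations(sorted(story["tags"]), 2)
)

print(tag_counts.most_common())
print(pair_counts.most_common())
```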

3) Stories and tags are not enough

We know that collecting and tagging stories to pull out qualitative feedback is possible. But so what? The important next step is looking at the effective use of these stories and data. Some ideas on how to better use the data include adding SMS feedback, deep dives with NGOs, and face-to-face meetings. It’s important to move from collecting the stories to thinking about what questions should be asked, how the information can help NGOs improve their performance, how this qualitative data translates into change or different practice at the local and global levels, how the information could be used by local organizers for community mobilization or action, and how all this is informing program design, frameworks and indicators.

4) Outreach is important

Building an online platform does not guarantee that anyone will visit it or participate. Local partners are an important element in reaching out and collecting data about what people think and feel. Outreach needs to be done with many partners from all parts of a community or society in order to source different viewpoints. In addition, it is important to ask the right questions and establish trust or people will not want to share their views. Any quality participation process, whether online or offline, needs good facilitation and encouragement; it needs to be a two-way process, a conversation.

5) Be aware of bias

Understanding where the process may be biased is important. Everything from asking leading questions and defining the metadata in a certain way to creating processes that include only certain parts of the community or population, selecting certain partners, or asking questions designed to confirm what an organization thinks it needs to know can create biased answers. Language is important here for several reasons: it affects who is included or excluded and who is talking with whom. Using development jargon will not resonate with people, and the way development agencies frame questions may lead people to particular answers.

6) Be aware of exclusion

Related to bias is the issue of exclusion. In large-scale consultations or online situations, it’s difficult to know who is talking and participating. Yet the more log-in information solicited, the less likely people are to participate in discussions. However, without asking, it’s hard to know who is responding, especially when anonymity is allowed. In addition, results depend on who is willing to participate. Participants agreed that there is no silver bullet for finding folks to participate and ensuring they represent a diversity of opinion. One suggestion was that libraries and telecenters could play a role in engaging more remote or isolated communities in these kinds of dialogues.

7) Raising expectations

Asking people for feedback raises expectations that their input will be heard and that they will see some type of concrete result. In these feedback processes, what happens if the decisions made by NGOs or heads of state don’t reflect what people said or contributed? How can we ensure that we are actually listening to what people tell us? Often we ask for people’s perceptions and then tell them why they are wrong. Follow-up is also critical. A campaign from several years ago was mentioned in which 93,000 people signed onto a pledge; once that target was achieved, the campaign ended and there was no further engagement with the 93,000 people. Soliciting input and feedback needs to be an ongoing relationship with continual dialogue and response. The process itself needs to be transparent and accountable to those who participate in it.

8) Don’t forget safety and protection

The issue of safety and protection for those who offer their opinions and feedback or raise issues and complaints was brought up. Participants noted that safety is very context specific, and participatory risk assessments conducted together with community members and partners can help mitigate risk and ensure that people are informed about it. Avoiding a paternalistic stance is recommended, as sometimes human rights advocates know very well what their risk is and are willing to take it. NGOs should, however, be sure that those with whom they are working fully understand the risks and implications, especially when new media tools are involved that they may not have used before. Digital literacy is key.

9) Weave qualitative M&E into the whole process

Weaving consistent spaces for input and feedback into programs is important. As one discussant noted, “the very media tools we are training partners on are part of our monitoring and evaluation process.” The initial consultation process itself can form part of the baseline. In addition to M&E, creating trust and a safe space to openly and honestly discuss failure and what did not go so well can help programs improve. Qualitative information can also help provide a better understanding of the real and hard dynamics of the local context, for example the challenges faced during a complex emergency or protracted conflict. Qualitative monitoring can help people who are not on the ground have a greater appreciation for the circumstances, political framework, and socio-economic dynamics.

10) Cheaper tools are needed

Some felt that the tools being shared (Sensemaker in particular) were too expensive and sophisticated for their needs, and too costly for smaller NGOs. Simpler tools would be useful in order to more easily digest the information and create visuals and other analyses that can be fed back to those who need to use the information to make changes. Other tools exist that might be helpful, such as Trimble’s Municipal Reporter, Open Data Kit, KoBo, iFormBuilder, EpiSurveyor/Magpi and PoiMapper. One idea is to look at some of the tools being developed and used in the crisis mapping and response space to see whether cost is dropping and capacity increasing as the field advances. (Note: several tools for parsing Twitter and other social media platforms were presented at the 2012 International Conference on Crisis Mapping, some of which could be examined and learned from.)

What next?

A final question at the Salon was around how the broader evaluation community can connect with the tools and people who are testing and experimenting with these new ways of conducting monitoring and evaluation. How can we create better momentum in the community to embrace these practices and help build this field?

Although this was the final Salon of our series on monitoring and evaluation, we’ll continue to work on what was learned and ways to take these ideas forward and keep the community talking and growing.

A huge thank you to our lead discussants and participants in this series of Salons, especially to the Community Systems Foundation and the Rockefeller Foundation’s monitoring and evaluation team for joining in the coordination with us. A special thanks to Rockefeller for all of the thoughtful discussion throughout the process and for hosting the Salons.

The next Technology Salon NYC will be November 14, 2012, hosted by the Women’s Refugee Commission and the International Rescue Committee. We’ll be shifting gears a little, and our topic will be around ways that new technologies can support children and youth who migrate, are forcibly displaced or are trafficked.

If you’d like to receive notifications about future salons, sign up for the mailing list!

Previous Salons in the ICTs and M&E Series:

12 lessons learned with ICTs for monitoring and accountability

11 points on strengthening local capacity to use new ICTs for monitoring and evaluation


New technologies are changing the nature of monitoring and evaluation, as discussed in our previous Salon on the use of ICTs in M&E. However, the use of new technologies in M&E efforts can seem daunting or irrelevant to those working in low resource settings, especially if there is little experience or low existing capacity with these new tools and approaches.

What is the role of donors and other intermediaries in strengthening local capacity in communities and development partners to use new technologies to enhance monitoring and evaluation efforts?

On August 30, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC to host the second in a series of 3 Salons on the use of ICTs in monitoring and evaluating development outcomes and to discuss just this question. Our lead discussants were: Revati Prasad from Internews, Tom O’Connell from UNICEF and Jake Watson from the International Rescue Committee. (Thanks Jake for stepping in at the last minute!)

We started off with the comment that “Many of us are faced with the ‘I’ word – in other words, having to demonstrate impact on the ground. But how can we do that if we are 4 levels removed from where change is happening?” How can organizations and donors or those sitting in offices in Washington DC or New York City support grantees and local offices to feed back more quickly and more accurately? From this question, the conversation flowed into a number of directions and suggestions.

1) Determine what works locally

Donors shouldn’t be coming in to say “here’s what works.” Instead, they should be creating local environments for innovation. Rather than pushing things down to people, we need to start thinking from the eyes of the community and incorporate that into how we think and what we do. One participant confirmed that idea with a concrete example. “We went in with ideas – wouldn’t SMS be great… but it became clear that SMS was not the right tool, it was voice. So we worked to establish a hotline. This has connected [the population] with services, it also connects with a database that came from [their] own needs and it tracks what they want to track.” As discussed in the last Salon, however, incentive and motivation are critical. “Early on, even though indicators were set by the community, there was no direct incentive to report.” Once the call center connected reporting to access to services, people were more motivated to report.

2) Produce local, not national-level information

If you want to leverage technology for local decision-making, you need local level information, not broad national level information. You also need to recognize that the data will be messy. As one participant said, we need to stop fixating on whether data is imperfect and instead ask: is the information good enough to enable us to reach that child who wasn’t reached before? We need to stop thinking of knowledge as discrete chunks that endure for 3-4 years. We are actually processing information all the time. We can help managers to think of information as something to filter and use constantly, and we can help them with tools to filter information, create simpler dashboards, see bottlenecks, and combine different channels of information to make decisions.

3) Remember why you are using ICTs in M&E

We should be doing M&E in order to achieve better results and leveraging technologies to achieve better impact for communities. Often, however, we end up doing it for the donor. “Donors get really excited about this multicolored thing with 50,000 graphs, but the guy on the ground doesn’t use a bit of it. We need to let go,” commented one participant. “I don’t need to know what the district manager knows. I need to know that he or she has a system in place that works for him or her. My job is to support local staff to have that system working. We need to focus on helping people do their jobs.”

4) Excel might be your ‘killer app’

Worldwide, the range of capacities is huge. Sometimes ICT sounds very sexy, but the greatest success might be teaching people how to use Excel, how to use databases to track human rights violations and domestic violence, or how to set up a front end and a data-entry system in a local language.
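As a rough illustration of what “a database with a simple front end” can mean in practice, here is a minimal sketch in Python using the built-in SQLite library; the table, fields, and sample record are invented for the example, and any real system tracking sensitive cases would need careful attention to confidentiality and data protection, as other points in these Salon summaries stress.

```python
import sqlite3

# A minimal sketch of the kind of simple local database that can sit behind a
# basic data-entry front end. The table and column names are invented for the
# example; a real case-tracking system would need careful attention to
# confidentiality, access control and data protection.
conn = sqlite3.connect("case_tracking.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS cases (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        reported_on TEXT NOT NULL,   -- ISO date, e.g. '2012-10-17'
        district TEXT,
        case_type TEXT,              -- e.g. 'domestic violence'
        status TEXT DEFAULT 'open',
        notes TEXT
    )
    """
)

# A simple form (or even a spreadsheet import) would feed rows like this one:
conn.execute(
    "INSERT INTO cases (reported_on, district, case_type, notes) VALUES (?, ?, ?, ?)",
    ("2012-10-17", "District A", "domestic violence", "Referred to a local support group."),
)
conn.commit()

# Staff can then pull basic summaries for monitoring without specialized tools:
for row in conn.execute(
    "SELECT district, case_type, COUNT(*) FROM cases GROUP BY district, case_type"
):
    print(row)

conn.close()
```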

5) Technology capacity doesn’t equal M&E capacity

One participant noted that her organization is working with a technology hub that has very good tech skills but lacks capacity in development and M&E. Their work over the past year has been less about using technology and more about working with the hub to develop these other capacities: how to conduct focus groups, surveys and network analysis, and how to develop toolkits and guides. There’s often excitement on the ground – ‘We can get data in 48 hours! Wow! Let’s go!’ However, creating good M&E surveys to be used via technology tools is difficult. One participant expressed that finding local expertise in this area is not easy, especially considering staff turnover. “We don’t always have M&E experts on the ground.” In addition, “there is an art to polls and survey trees, especially when trying to take them from English into other languages. How do you write a primer for staff to create meaningful questions?”

6) Find the best level for ICTs to support the process

ICTs are not always the best tool at the community or district level, given issues of access, literacy, capacity, connection, electricity, etc., but participants mentioned working in blended ways, e.g., doing traditional data collection and then using ICTs to analyze the data, compile it, produce localized reports, and work with the community to interpret the information for better decision-making. Others use hand-drawn maps, examine issues from the community angle and then incorporate that into digital literacy and expression work, using new technology tools to tell and document the communities’ stories.

7) Discover the shadow systems and edge of network

One participant noted that people will comply and will move data through the system as requested from on high, but they simultaneously develop their own ways of tracking information that are actually useful to them. By discovering these ‘shadow systems’, you can see what is really useful. The ‘edge of network’ is where the people with whom headquarters doesn’t have direct contact live and work. We rely on much of their information to build M&E systems, yet we don’t consult and work with them often enough. Understanding this ‘edge of network’ is critical to designing and developing good M&E systems and supporting local level M&E for better information and decision-making.

8) The devil is in the details

There are many M&E tools to choose from, and each has its pros and cons. Participants mentioned KoBo, RapidSMS, Nokia Data Gathering, FrontlineSMS and EpiSurveyor. While there is a benefit to getting cleaner data and getting it in real-time, there will always be post-processing tasks. The data can, however, be thrown on a dashboard for better decision-making. Challenges exist, however. For example, in Haiti, as one participant commented, there is a 10% electrification rate, so solar is required. “It’s difficult to get a local number with Clickatell [an SMS gateway]; you can only get an international number, and getting a local number is very complicated. If you go that route, you need a project coordinator. And if you are using SMS, how do you top up the beneficiaries so that they can reply? The few pennies it costs for people to reply are a deterrent. Yet working with telecom providers is very time-consuming and expensive in any country. Training local staff is an issue – trying to train everyone on the ICT package that you are giving them. You can’t take anything for granted. People usually don’t have experience with these systems.” Literacy is another stumbling block, so some organizations are looking at Interactive Voice Response (IVR) and trying to build a way for it to be rapidly deployed.

9) Who is the M&E for?

Results are one thing, but as one participant noted, “part of results measuring means engaging communities in saying whether the results are good for them.” Another participant commented that Ushahidi maps are great and donors love them. But in CAR, for example, there is 1% internet penetration and maybe 9% of the people text. “If you are creating a crisis map about the incidence of violence, your humanitarian actors may access it, it may improve service delivery, but it is in no way useful for people on the ground. There is reliance on technology, but how to make it useful for local communities is still the big question… It’s hard to talk about citizen engagement and citizen awareness if you are not reaching citizens because they don’t have access to technology.” And “what about the opportunity cost for the poor?” asked one participant. “Time is restricted. CSOs push things down to the people least able to use the time for participation. There is a cost to participation, yet we assume participation is a global good. The poorest are really scraping for time and resources. ‘Who is the data for?’ is still a huge question. Often it’s ‘here’s what we’re going to do for you’ rather than meeting with people first, asking what’s wrong, then listening and asking what they would like to do about it, and listening some more.”

10) Reaching the ‘unreachable’

Reaching and engaging the poorest is still difficult, and the truly unreached will require very different approaches. “We’re really very much spoke-to-hub,” said one participant. “This is not enough. How can we innovate and resolve this?” Another emphasized the need to find out who’s not part of the conversation, who is left out or not present when these community discussions take place. “You might find out that adolescent girls with mobility issues are not there. You can ask those with whom you are consulting if they know of someone who is not at the meeting. You need to figure out how to reach the invisible members of the community.” However, as noted, “we also have to protect them. Sometimes identifying people can expose them. There is no clear answer.”

11) Innovation or building on what’s already there?

So will INGOs and donors continue to try to adapt old survey ideas to new technology tools? And will this approach survive much longer? “Aren’t we mostly looking for information that we can act on? Are we going to keep sending teams out all the time or will we begin to work with information we can access differently? Can we release ourselves from that dependence on survey teams?” Some felt that ‘data exhaust’ might be one way of getting information differently; for example, an approach like Google Flu Trends. But others noted the difficulty of getting information from non-online populations, who are the majority. In addition, with these new ICT-based methods, there is still a question about representativeness and coverage. Integrated approaches, where ICTs are married with traditional methods, seem to be the key. This raises the question: “Is innovation really better than building up what’s already there?” as one participant commented. “We need to ask – does it add value? Is it better than what is already there? If it does add perceived value locally, then how do we ensure that it comes to some kind of result? We need to keep our eye on the results we want to achieve. We need to be more results-oriented and do reality checks. We need to constantly ask ourselves: Are we listening to folks?”

In conclusion

There is much to think about in this emerging area of ICTs and monitoring and evaluation. Join us for the third Salon in the series on October 17 where we’ll continue discussions. If you are not yet on the Technology Salon mailing list, you can sign up here. A summary of the first Salon in the series is here. (A summary of the October 17th Salon is here.)

Salons are run under the Chatham House Rule, thus no attribution has been made.


New technologies are opening up all kinds of possibilities for improving monitoring and evaluation. From on-going feedback and crowd-sourced input to more structured digital data collection, to access to large data sets and improved data visualization, the field is changing quickly.

On August 7, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC for the first in a series of 3 Salons on the use of ICTs in monitoring and evaluating development outcomes. Our lead discussants were: Erica Kochi from UNICEF Innovations; Steven Davenport from Development Gateway and John Toner from CSF.

This particular Salon focused on the use of ICTs for social monitoring (a.k.a. ‘beneficiary feedback loops’) and accountability. Below is a summary of the key points that emerged at the Salon.

1) Monitoring and evaluation is changing

M&E is not only about formal data collection and indicators anymore. As one discussant commented, “It’s free form, it contains sentiment.” New ICT tools can help donors and governments plan better. SMS and other social monitoring tools provide an additional element to more formal information sources and can help capture the pulse of the population. Combinations of official data sets with SMS data provide new ways of looking at cross-sections of information. Visualizations and trend analysis can offer combinations of information for decision making. Social monitoring, however, can be a scary thing for large institutions. It can seem too uncontrolled or potentially conflictive. One way to ease into it is through “bounded” crowd-sourcing (e.g., working with a defined and more ‘trusted’ subset of the public) until there is comfort with these kinds of feedback mechanisms.

2) People need to be motivated to participate in social monitoring efforts

Building a platform or establishing an SMS response tool is not enough. One key to a successful social monitoring effort is working with existing networks, groups and organizations and doing well-planned and executed outreach, for example, in the newspaper, on the radio and on television. Social monitoring can and should go beyond producing information for a particular project or program. It should create an ongoing dialogue between and among people and institutions, expanding on traditional monitoring efforts and becoming a catalyst for organizations or government to better communicate and engage with the community. SMS feedback loops need to be thought of in terms of a dialogue or a series of questions rather than a one-question survey. “People get really engaged when they are involved in back and forth conversation.” Offering prizes or other kinds of external motivation can spike participation rates but also can create expectations that affect or skew programs in the long run. Sustainable approaches need to be identified early on. Rewards can also lead to false reports and re-registering, and need to be carefully managed.

3) Responsiveness to citizen/participant feedback is critical

One way to help motivate individuals to participate in social monitoring is for governments or institutions to show that citizen/participant feedback elicits a response (e.g., better delivery of public services). “Incentives are good,” said one discussant, “But at the core, if you get interactive with users, you will start to see the responses. Then you’ll have a targeted group that you can turn to.” Responsiveness can be an issue, however, if there is limited government or institutional interest, resourcing or capacity, so it’s important to work on both sides of the equation so that demand does not outstrip response capacity. Monitoring the responsiveness to citizen/participant feedback is also important. “Was there a response promised? Did it happen? Has it been verified? What was the quality of it?”

4) Privacy and protection are always a concern

Salon participants brought up concerns about privacy and protection, especially for more sensitive issues that can put those who provide feedback at risk. There are a number of good practices in the IT world for keeping data itself private, for example presenting it in aggregate form, only releasing certain data, and setting up controls over who can access different levels of data. However with crowd-sourcing or incident mapping there can be serious concerns for those who report or provide feedback. Program managers need to have a very good handle on the potential risks involved or they can cause unintended harm to participants. Consulting with participants to better understand the context is a good idea.

5) Inclusion needs to be purposeful

Getting a representative response via SMS-based feedback or other social monitoring tools is not always easy. Mandatory ratios of male and female respondents, age groups or other characteristics can help ensure better representation. Different districts can be sampled in an effort to ensure the overall response is representative. “If not,” commented one presenter, “you’ll just get data from urban males.” Barriers to participation also need consideration, such as language; however, working in multiple languages becomes very complicated very quickly. One participant noted that it is important to monitor whether people from different groups or geographic areas understand survey questions in the same way, and to be able to fine-tune the system as it goes along. A key concern is reaching and including the most vulnerable with these new technologies. “Donors want new technology as a default, but I cannot reach the most excluded with technology right now,” commented a participant.
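As a small illustration of what a running representation check might look like, here is a minimal Python sketch; the respondent records, fields, and target shares are all invented for the example, and a real effort would set its quotas from census or program data rather than the made-up numbers below.

```python
from collections import Counter

# Hypothetical respondent records from an SMS poll. In practice these fields
# would come from registration questions or operator data.
responses = [
    {"district": "North", "sex": "F", "age_group": "18-24"},
    {"district": "North", "sex": "M", "age_group": "25-40"},
    {"district": "South", "sex": "M", "age_group": "18-24"},
    {"district": "South", "sex": "M", "age_group": "25-40"},
]

# Illustrative targets: the minimum share of responses each group should contribute.
targets = {
    ("sex", "F"): 0.50,
    ("age_group", "18-24"): 0.30,
    ("district", "South"): 0.40,
}

total = len(responses)
counts = Counter()
for r in responses:
    for field in ("district", "sex", "age_group"):
        counts[(field, r[field])] += 1

# Flag under-represented groups so outreach can be adjusted while the poll runs,
# rather than discovering afterwards that only urban males responded.
for (field, value), target_share in targets.items():
    actual_share = counts[(field, value)] / total if total else 0.0
    if actual_share < target_share:
        print(f"Under-represented: {field}={value} "
              f"({actual_share:.0%} vs. target {target_share:.0%})")
```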

6) Information should be useful to and used by the community

In addition to ensuring inclusion of individuals and groups, communities need to be involved in the entire process. “We need to be sure we are not just extracting information,” mentioned one participant. Organizations should be asking: What information does the community want? How can they get it themselves or from us? How can we help communities to collect the information they need on their own or provide them with local, sustainable support to do so?

7) Be sure to use the right tools for the job

Character limits can be an issue with SMS. Decision tree models, where one question prompts another question that takes the user down a variety of paths, are one way around the character limit. SMS is not good for in-depth surveys, however; it is good for breadth, not depth. It’s important to use SMS and other digital tools for what they are good for. Paper can often be a better tool, and there is no shame in using it. Discussants emphasized that one shouldn’t underestimate the challenges in working with telecom operators and setting up short codes; building the SMS network infrastructure takes months. Social media is on the rise, so how do you channel that into the M&E conversation?
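To illustrate the decision-tree idea in the abstract, here is a minimal sketch in Python; the questions, reply codes and branching are invented for the example, and the point is simply that each node stays within one short message while the reply determines the path. Real SMS platforms (FrontlineSMS, RapidSMS and the like) provide their own ways of modelling flows of this kind.

```python
from typing import Optional

# A minimal sketch of a decision-tree SMS survey: each node's question fits in a
# single message, and the respondent's reply selects the next node. The question
# text, reply codes and branching are invented for the example.
SURVEY = {
    "start": {
        "text": "Did you visit the health clinic this month? Reply 1=yes, 2=no.",
        "next": {"1": "wait_time", "2": "why_not"},
    },
    "wait_time": {
        "text": "How long did you wait? Reply 1=under 1hr, 2=1-3hrs, 3=over 3hrs.",
        "next": {},  # end of this branch
    },
    "why_not": {
        "text": "Why not? Reply 1=too far, 2=too costly, 3=other.",
        "next": {},  # end of this branch
    },
}


def next_question(current_node: str, reply: str) -> Optional[str]:
    """Return the next question to send, or None if this branch is finished."""
    next_node = SURVEY[current_node]["next"].get(reply.strip())
    return SURVEY[next_node]["text"] if next_node else None


# Example: a respondent at the "start" node texts back "1".
print(next_question("start", "1"))
```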

8) Broader evaluative questions need to be established for these initiatives

The purpose of including ICT in different initiatives needs to be clear. Goals and evaluative questions need to be established. Teams need to work together because no one person is likely to have the programmatic, ICT and evaluation skills needed for a successfully implemented and well-documented project. Programs that include ICTs need better documentation and evaluation overall, including cost-benefit analyses and comparative analyses with other potential tools that could be used for these and similar processes.

9) Technology is not automatically cheaper and easier

These processes remain very iterative; they are not ‘automated’ processes. Initial surveys can only show patterns. What is more interesting is back-and-forth dialogue with participants. As one discussant noted, staff still spend a lot of time combing through data and responses to find patterns and nuances within the details. There is still a cost to these projects. In one instance, the majority of the project budget went into a communication campaign and into work with existing physical networks to get people to participate. Compared to traditional ways of doing things (face-to-face, for example) the cost of outreach is not so expensive, but integrating SMS and other technologies does not automatically mean that money will be saved. The cost of SMS is also large in these kinds of projects because in order to ensure participation, representation, and inclusion, SMS usually needs to be free for participants. Even with bulk rates, if the program is at massive scale, it’s quite expensive. This is a real consideration when assuming that governments or local organizations will take over these projects at some point.

10) Solutions at huge scale are not feasible for most organizations 

Some participants commented that the UN and the Red Cross and similarly sized organizations are the only ones who can work at the level of scale discussed at the Salon. Not many agencies have the weight to influence governments or mobile service providers, and these negotiations are difficult even for large-scale organizations. It’s important to look at solutions that respond to what development organizations and local NGOs can actually do. “And what about localized tools that can be used at district level or village level? For example, localized tools for participatory budgeting?” asked a participant. “There are ways to link high tech and SMS with low tech, radio outreach, working with journalists, working with other tools,” commented others. “We need to talk more about these ways of reaching everyone. We need to think more about the role of intermediaries in building capacity for beneficiaries and development partners to do this better.”

11) New technology is not M&E magic

Even if you include new technology, successful initiatives require a team of people and need to be managed. There is no magic to doing translations or understanding the data – people are needed to put all this together, to understand it, to make it work. In addition, the tools covered at the Salon only collect one piece of the necessary information. “We have to be careful how we say things,” commented a discussant. “We call it M&E, but it’s really ‘M’. We get confused with ourselves sometimes. What we are talking about today is monitoring results. Evaluation is how to take all that information and make an informed decision. It involves specialists and more information on top of this…” Another participant emphasized that SMS feedback can get at the symptoms but doesn’t seem to get at the root causes. Data needs to be triangulated, efforts need to be made to address root causes, and end users need to be involved.

12) Donors need to support adaptive design

Participants emphasized that those developing these programs, tools and systems need to be given space to try and to iterate, to use a process of adaptive design. Donors shouldn’t lock implementers into unsuitable design processes. A focused ‘ICT and Evaluation Fail Faire’ was suggested as a space for improving sharing and learning around ICTs and M&E. There is also learning to be shared from people involved in ICT projects that have scaled up. “We need to know what evidence is needed to scale up. There is excitement and investment, but not enough evidence,” it was concluded.

Our next Salon

Our next Salon in the series will take place on August 30th. It will focus on the role of intermediaries in building capacity for communities and development partners to use new technologies for monitoring and evaluation. We’ll be looking to discover good practices for advancing the use of ICTs in M&E in sustainable ways. Sign up for the Technology Salon mailing list here. [Update: A summary of the August 30 Salon is here.]

Salons are run under the Chatham House Rule, thus no attribution has been made.


The Technology Salon* hosted at IREX on Thursday, June 6, focused on what the International Aid Transparency Initiative (IATI) would mean for international development, especially for US-based NGOs and government contractors.

Tony Pipa, Deputy Assistant Administrator, Policy, Planning and Learning at USAID, started the Salon off by noting that IATI is an inter-agency US government commitment, not only a USAID commitment. USAID is the lead agency for developing the IATI implementation plan, building on existing agreements on transparency and enhancing the US Government’s commitments to transparency, openness and accountability. A key element of these efforts is the Foreign Assistance Dashboard, which places the data into the public realm in a user-friendly way, making it easier to understand visually and also more accessible and easier to find. The goal is not only transparency, but greater accountability. The US Government hopes to streamline reporting so that a single effort can meet a range of international and national reporting standards. The goal for USAID is making aid more useful for development.

Steve Davenport from AidData followed, giving some background on IATI itself. IATI was initially sponsored largely by DFID, but has since grown as a partnership. Over 75% of development assistance is now represented by signatories to IATI. Eight donors are now publishing and twenty-three developing countries have signed on (involving partner countries at the local level as well). Different groups are conducting pilots to see how to implement as IATI gains more traction. For this reason, it would be a good move for US INGOs and contractors to get in front of the transparency and accountability curve rather than get hit by the wave. Better transparency allows organizations to better show their results. The IATI standard can lead to better coordination among the different actors, making it easier to broaden our collective impact. This is especially important now given that aid budgets are being reduced. IATI can be thought of as a group of people, a set of commitments, and an XML standard for moving data from point A to point B. Application developers are beginning to pick this up and develop tools that allow for new ways of visualizing the data, making it actionable and improving accessibility, which can lead to better accountability.
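As a rough illustration of what “an XML standard for moving data from point A to point B” means in practice, here is a minimal Python sketch that assembles a simplified activity record loosely modelled on the IATI format; it is not schema-valid, omits required elements and codelists, and uses invented identifiers, so anyone actually publishing should work from the official standard and the tools built around it.

```python
import xml.etree.ElementTree as ET

# A simplified, illustrative record loosely modelled on the IATI activity
# format. It is NOT a schema-valid IATI file: the real standard defines
# required elements, attributes and codelists that are omitted here, and the
# identifier, organisation and dates below are invented for the example.
activities = ET.Element("iati-activities")
activity = ET.SubElement(activities, "iati-activity")

ET.SubElement(activity, "iati-identifier").text = "XX-EXAMPLE-12345"
ET.SubElement(activity, "reporting-org").text = "Example NGO"
ET.SubElement(activity, "title").text = "Community health outreach (illustrative)"
ET.SubElement(activity, "description").text = "An invented activity used only to show the shape of the data."
ET.SubElement(activity, "activity-date", {"type": "start", "iso-date": "2012-01-01"})
budget = ET.SubElement(activity, "budget")
ET.SubElement(budget, "value", {"currency": "USD"}).text = "100000"

# A publisher would host the resulting XML at a stable URL and register it so
# that aggregators and visualization tools can pick it up.
print(ET.tostring(activities, encoding="unicode"))
```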

Larry Nowels (a consultant to the Hewlett Foundation and the ONE Campaign) spoke about Hewlett’s experience with IATI. Hewlett has made a large investment in transparency and accountability, supporting US and European organizations as well as startups in Africa and Asia over the past 10 years. Transparency is a key building block, so that governments and their citizens know what is being spent, where and on what, and how to make better decisions about resources and reducing waste. It also allows citizens to hold their governments accountable. Hewlett was one of the original signatories and the second publisher to the IATI standard. A key question remains: What’s in it for an organization that publishes according to the standard? For some teams, IATI makes all the sense in the world, but for others it seems to be a waste of resources. The Obama Administration’s initiatives (the Open Government Directive, the Open Government Partnership, the Foreign Assistance Dashboard) all show a strong commitment to transparency. The tough part is implementing the IATI standard, and the details of an ideal approach are still being worked out.

Larry considers a central repository ideal, but there are issues with quality control and the Foreign Assistance Dashboard does not add data that was not already publicly available. In addition, many US Government agencies have not been added to the Dashboard yet, and getting them on board will be difficult if they are less dedicated than USAID or State. It’s critical to institutionalize IATI and related initiatives and internalize them, given that we cannot assume Obama’s will be a multi-term presidency. In the past 3 years, a number of bills around the theme of accountability and transparency have been introduced by both parties. The Poe-Berman Bill (HR 3159) provides a law to entrench the use of tools like the Dashboard. The Administration, especially the State Department, however, has not engaged Congress enough on these issues, and this has led to some roadblocks. White House pressure could help strengthen support for this initiative; however, there may be pushback by Republicans who generally oppose the US subscribing to international standards.

Discussion**

What is the overlap between the Open Government Partnership (OGP) and IATI?

What is the practical, on-the-ground use or application of IATI data? What does it look like when it is working how it should? What would it ideally look like 5 years from now?

  • There is enormous need for data sharing in a crisis – it is essential for coordinating and understanding the unfolding situations in real-time in order to save lives. There is much more scrutiny as well as a need for rapid coordination and response during a humanitarian crisis, so it requires a higher level of transparency than development work. One way that has been suggested for getting more organizations on board is to start sharing more information during crises and draw the lessons over to development.
  • A project in Mexico City has run investigative campaigns on spending. This has led to the prosecution and resignations of political figures and even some threats against staff, which demonstrates how unsettling this open information can be to the powers that be. It is not about transparency for transparency’s sake. It’s about having a tool that can be used to inform, interpret situations and hold governments and donors accountable. It opens the system up for sharing information.
  • Currently this type of information isn’t available to country governments for coordination. Countries need to plan their fiscal year budgets, but they rely heavily on donors, and the two often run on different fiscal calendars. If donor information were more readily available, countries could plan better.
  • On a 5-year horizon, we would ideally see aid tracking down to the beneficiary level. Tools like IATI can help collect data in more automated ways. Open data can help us track both where funding is allocated and also what is actually being implemented. Additional work is needed on this side; for example, training journalists to understand how to use this data, how to access it – handing them a data file isn’t a very useful thing in and of itself.

That’s great, but great for whom? What does it mean? Does this lead to better aid? Better spending? And what if it creates unrealistic timelines, where development becomes more like a for-profit company that must demonstrate impact within a fiscal quarter? We all know that development initiatives and impact take much longer than three months. Will IATI mean that we will stop doing things that take longer? Things that cannot be checked off on a checkbox? Will we actually lower the quality of our programs by doing this?

  • IATI, like any form of transparency, is only one element of a whole stream of things. The new USAID monitoring and evaluation system is a breakthrough for actually learning from evaluations and data. It’s a longer-term investment than Congress is used to, so it’s a matter of convincing Congress that it is worth the investment. There is a better chance of USAID admitting failure in the future if the systems are in place to demonstrate these failures in hard data and learn from them. It’s about discovering why we failed – if we spend money and it doesn’t work, we can at least identify weaknesses and build on what we learn. Showing failures also demonstrates credibility and a willingness to move forward positively.
  • We can err on the side of openness and transparency and engage Congress and the public, making a distinction between performance management and the long-term impact of development projects. There is no way of holding back on publishing information until it is in a format that will be readily understandable to Congress and the public. This is a reality that we are going to have to live with; we have to put the data out and build on it. This can help to start important conversations. IATI is important for closing the loop, not just on public resources but also private resources (which is why Hewlett’s commitment is important). As private development resources increase, USAID becomes less dominant in the development landscape. Making sure data from many sources comes in a common format will make it easy to compare and to bring this data together to help understand what is going on. The way to visualize and think of it now is different because we are still in early days. IATI will begin to change the approach for how you evaluate impact.
  • IATI data itself does not tell the whole story, so it’s important to look at additional sources of information beyond it. IATI is only one part of the monitoring and evaluation effort, only one part of the transparency and accountability effort.

How do you overcome conflicts of interest? If development outcomes or data that is opened are not in the interest of the country government, how do we know the data can be trusted, or how does it feed back to the public in each country?

  • China’s investment in Africa, for example, may make it more difficult to understand aid flows in some ways. It will take a while to enforce the standards, particularly if it is done quickly, but we can draw the BRICs into the conversation and we are working with them on these topics.

The hard part is the implementation. So what are the timelines? How soon do we think we will see the US publish data to IATI?

  • At this time, the US Government hasn’t created an implementation timeline, so the first order of business is to get IATI institutionalized, and not to rush on this. It’s a larger issue than just USAID, so it must be done carefully and tactfully so it stays in place over the long term. USAID is working on getting obligation and spending data as well as project-level data onto the Dashboard. USAID is trying to balance this with consistency and quality control. How do you produce quality data when you are publishing regularly? These issues must be addressed while the systems are being developed. Once USAID puts data on the Dashboard, it will begin being converted to IATI data.

IATI is still a donor-led initiative. NGO involvement opens this data up to use by communities. Training individuals to use this information is not necessarily sufficient. Are there plans to build institutions or civil society organizations to help make the data useful for communities and the general public?

  • The data can assist with the development of watchdog organizations who provide a platform for citizens to act together for accountability. Examples of organizations that are currently receiving funding to do this are Sodnet and Twaweza. There has also been support to think tanks throughout Africa to build the capacity of objective, independent policy analysts who write critiques of government initiatives.
  • There is a definite need to mainstream IATI and bring everyone together into one single conversation instead of setting up parallel structures.

So how do you build these institutions, watchdogs, etc? Will USAID really put out RFPs that offer funding to train people to criticize them?

  • This is where Hewlett and other organizations come in. They can run these trainings and build capacities. The Knight News Challenge is doing a lot of work around data-driven journalism, for example.

This is going to put a lot of pressure on people to be more efficient and might drive down resources in these spheres. There is a limited amount of incentive for organizations to involve themselves. Is there a way to incentivize it?

  • It will also drive some internal efficiencies, creating greater internal coherence within development organizations. It’s very hard to pinpoint impact within organizations because there isn’t an easy way to draw comparisons between projects, implementation strategies, etc. People always worry: What if we find something that makes us look bad? So IATI is just one part of a bigger effort to push for commitment to transparency across the board. Committing to IATI can lead to a mindset which focuses organizations on efficiency, transparency and accountability.
  • Filling out the Dashboard will be helpful in many respects, and it will make information more accessible to the general public, as well as congressional staffers, etc. It can serve multiple constituencies while making data more usable and transparent. USAID is going to be as aggressive as possible to get information on the dashboard into IATI format. There has not been a conversation about requiring implementing partners to meet IATI standards, but USAID itself is committed.

***

Thanks to IREX for hosting the Salon, to our fantastic lead discussants and participants for a stimulating discussion, to Wayan Vota for inviting me to coordinate the Salon, and to Anna Shaw for sharing her Salon notes, which were the basis for this blog post.

Sign up here if you’d like to be on the invitation list for future Salons.

*The Technology Salon is sponsored by Inveneo in collaboration with IREX, Plan International USA, Jhpiego and ARM.

**The Salon runs under the Chatham House Rule, so no attribution has been made for the discussion portion of the Salon.


The Technology Salon (TSNYC) on the International Aid Transparency Initiative (IATI), held April 13th, offered an overview of IATI as a coming-together point for aid transparency. It also stimulated discussion on opportunities and challenges for organizations and institutions when publishing information within the IATI standard and shared some available tools to support publishing NGO data.

IATI Background

Simon Parrish from Aid Info explained that IATI aims to provide information that meets the needs of a number of diverse groups, is timely, is ‘compilable’ and comparable, improves efficiency and reduces duplication. Simon explained that IATI arose from the 2005 Paris Declaration on Aid Effectiveness and was launched as part of the Accra Agenda for Action in 2008 due to a strong call from civil society to donors, multilaterals and northern NGOs for greater transparency.
Organizations felt they were already working hard to be transparent; however governments, journalists, tax payers and others looking for information were not able to find what they needed. Rather than each organization creating its own improved transparency and accountability system, the idea was to use an open data approach, and this is where IATI came in. Since Accra, transparency and accountability have gained global traction and IATI has been a key part of this movement for the aid sector.
Donor agencies, the World Bank, the EU, the US Government and others have already signed on to IATI and have started to publish basic information. INGOs are also starting to come on board and schedule their dates for publication to the IATI standard. It is hoped that over time the quality and amount of information published will improve and expand. For example, ‘traceability’ needs to be improved so that aid can be followed down the supply chain. Information from international and local NGOs is critical in this because the closer to the ground the information is, the better it can be used for accountability purposes.

Opportunities and Questions around IATI

To complement Simon’s overview, I shared ideas on some of the opportunities that IATI can offer, and some common questions that may arise within INGOs who are considering publishing their information to IATI.
For example, IATI can help catalyze:
  • transparency and accountability as core values
  • better coordination and program planning (internally and externally)
  • reduced reporting burden (if donors agree to use IATI as a common tool)
  • improved aid effectiveness
  • collective learning in the aid sector
  • improved legitimacy for aid agencies
  • an opportunity to educate the donor public on how aid/development really works
  • ‘moral ground’ for IATI compliant aid organizations to pressure governments and private sector to be more transparent
  • space for communities and ‘beneficiaries’ to hold aid agencies more accountable for their work
  • space for engaging communities and the public in identifying what information about aid is useful to them
  • concrete ways for communities to contest, validate and discuss aid information, intentions, budgets, actions, results.
Concerns and questions that may arise within NGOs / CSOs around IATI include:
  • Is IATI the right way to achieve the goal of transparency and accountability?
  • Is the cost in time, money, systems, and potential risk of exposure worth the individual and collective gain?
  • Is IATI the flavor of the month, to be replaced in 2-4 years?
  • What is the burden for staff? Will it increase overhead? Will it take funds and efforts away from programs on the ground?
  • What is the position of the US Government/USAID? Will implementing agencies have to report in yet another format (financial, narrative)?
  • Much internal project documentation at NGOs/INGOs has not been written with the idea of it being published. There may be confidential information or poorly written internal documents. How will aid agencies manage this?
  • What if other agencies ‘steal’ ideas, approaches or donors?
  • What security/risks might be caused for sexual or political minority groups or vulnerable groups if activities are openly published?
  • Isn’t IATI too focused on ‘upward’ accountability to donors and tax payers? How will it improve accountability to local program participants and ‘beneficiaries’? How can we improve and mandate feedback loops for participants in the same way we are doing for donors?
  • Does IATI offer ‘supplied data’ rather than offer a response to data demands from different sectors?
ICT Tools to support NGOs with IATI
Ruth Del Campo discussed some of the different tools that are available to support INGOs and smaller organizations with IATI reporting, including Open Aid Register (OAR), which she created to help smaller organizations comply with IATI. The Foundation Center has also created a tool to help foundations enter their information into the IATI standard. Aid Stream is being used by many UK organizations to convert their data to the IATI standard. Geo-visualization tools include CartoDB and AidView.

IATI awareness in the US

Although tools exist and awareness around IATI is growing elsewhere, Ruth noted that in the US many organizations do not know what IATI is, and this is a problem. Another issue Ruth brought up is that most existing charity raters do not rate program effectiveness or program transparency. Instead, charities are judged based on overhead rates, growth, financial statements, and whether they are publishing certain information on their websites. These measures do not reveal an organization’s program impact or overall transparency, and they do not trace funds far enough along the chain. Linking charity rating systems with IATI standards could encourage greater transparency and accountability and help the public make decisions based on program accountability in addition to financial accountability. (For background on INGO overhead, see Saundra Schimmelpfennig’s “Lies, White Lies, and Accounting Practices”.)
Because many INGOs are not familiar with IATI, a greater dissemination effort is needed for IATI to be of optimal use. If only 20% of the aid picture is available, it will not be very helpful for coordination and decision making. Many INGOs feel that they are already transparent because they are publishing their annual reports as a .pdf file on their websites and they have an overhead rate within a certain percentage, but this is not enough. Much more needs to be done to gain awareness and buy-in from US INGOs, government, charity rating systems, donors, media and the public on transparency and IATI.

Discussion…

Following the 3 discussants, TSNYC participants jumped in for a good debate around key points:

Carrot or stick approach?

NGOs place great importance on their Charity Navigator rankings and Better Business Bureau reviews, and many donors select charities based on these rankings, so it will be important to link these with IATI. The Publish What You Fund index, which tracks the transparency of different organizations, has been helpful in getting countries and institutions on board. The Foundation Center lists transparency indicators on its GlassPockets site as well. The Brookings and CGD QuODA report was mentioned as a key reason that the US Government signed onto IATI at Busan last November, since the US was ranked very low on transparency and saw that it could bring its ranking up by signing on.
Consensus at the Technology Salon was that it is not likely that the US Government or USAID will make IATI compliance mandatory for their grantees and implementing partners as DFID has done. Rather, the existing dashboard for collecting information would be used to report into IATI, so the dashboard needs to be improved and regularly updated by US agencies. One concern was whether in this scenario, the information published by USAID would be useful for developing country governments or would only be of use to USAID Missions. On the bright side, it was felt that movement within the US Government over the past few years towards greater openness and transparency has been massive. TSNYC participants noted that there seems to be a fundamental mindset change in the current administration around transparency, but it’s still difficult to make change happen quickly.
.
Some members of the US Congress have latched onto the idea and are pushing for greater transparency, which could raise IATI's profile. Transparency and accountability are of interest to both major US parties: liberals tend to be interested in openness and information sharing, while conservatives tend to focus on value for money, stamping out corruption, and reducing inefficient aid spending and waste. IATI can support both agendas and be a win for everyone.
.
Making IATI mandatory could, some cautioned, backfire. For example, there are foundations and corporations that, for a variety of reasons, do not openly share information about their giving; if pressured, they may shut down sharing altogether.
.
Showing what positive things can be done with IATI and how it can benefit CSO information management and coordination internally as well as externally was thought to be a better approach than positioning IATI as "we are being audited by everyone now." IATI should be emphasized as an opportunity to join data together to know what everyone is doing, visualize the data using new technologies, and use it to make better program decisions and improve coordination as well as accountability. Examples of vibrant and informative uses of IATI data include Mapping for Results, InterAction's Haiti Aid Map, and the Foundation Center's comparison of foundation giving and World Bank funding.
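To give a sense of the "joining the data together" step behind maps like these, here is a small, hedged sketch that reads a local IATI activities file (a hypothetical activities.xml) and totals disbursement transactions per recipient country, the kind of summary a map or chart could then be built on. Real IATI data needs more care than this, including currencies, multi-country percentage splits, and version differences in the standard.

```python
# Rough sketch: total disbursements per recipient country from a local
# IATI activities file (hypothetical path "activities.xml"). Simplified --
# ignores currencies and percentage splits across multiple countries.
import xml.etree.ElementTree as ET
from collections import defaultdict

totals = defaultdict(float)
root = ET.parse("activities.xml").getroot()

for activity in root.findall("iati-activity"):
    countries = [c.get("code") for c in activity.findall("recipient-country")]
    for tx in activity.findall("transaction"):
        ttype = tx.find("transaction-type")
        value = tx.find("value")
        # code "3" is Disbursement in the IATI transaction type codelist
        if ttype is not None and ttype.get("code") == "3" and value is not None:
            for code in countries:
                totals[code] += float(value.text or 0)

for code, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{code}: {amount:,.0f}")
```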
.
Transparency as a ‘norm’
.
Many organizations are investing in transparency for reasons that go far beyond IATI compliance. Three kinds of organizations were identified at the Salon session: those who comply because it is mandatory; those who comply because it's inevitable; and those who comply because they believe in the inherent value of transparency as a core principle. Even within organizations, some teams, such as Democracy and Governance, may be much more interested in IATI than, say, Education, Health, or Arts teams, simply because of the themes they work on and their competing priorities. The hope is that in five years' time it will no longer be a question of mandatory or inevitable compliance; rather, transparency will be the norm and it will start to feel strange to work in a space that is not transparent. Leadership is important to get an organization on board.
.
Challenges and opportunities in IATI compliance
.
Challenges to IATI compliance were discussed in depth at the Salon, including questions around the amount of resources needed to report to IATI. It was noted that the biggest challenges are organization, coordination, and change of attitudes internally. Some of the core obstacles that Salon participants noted include:
.
Time and resources
Some pushback might be seen around IATI because investment in compliance may not provide an immediate return to individual organizations. TSNYC participants felt that, rather than a constraint, IATI provides an opportunity for organizations to better manage their own information for internal sharing and use. IATI can help improve program planning, reduce time spent gathering program information from colleagues and across countries, and support better internal coordination among offices and partners. It was noted that when governments started publishing open data, the people who used it most were government employees, for their own work. IATI can be seen as an investment in better internal coordination and information management. Once the information is available in an open format, it can be used for a number of data visualizations that show an organization's reach and impact, or help a number of organizations share their joint work and impact, as in the case of coalitions and thematic or sectoral networks.
.
Project document quality
Concerns may be raised in some organizations regarding the state of project documents that were not originally written with publication in mind. Organizations will have to decide if they want to work retroactively, invest in quality control, and/or change processes over time so that documentation is ready for publication.
.
Losing the competitive edge
TSNYC participants worried that without mandatory compliance from USAID, some INGOs, and contractors especially, would not be motivated to publish information for fear of losing their competitive edge. It is feared that getting contractors to report at any level of detail will be difficult. This, the group discussed, makes peer pressure and public pressure important, and mechanisms to encourage broader transparency will need to be found. One idea was to create a '5 star' system of IATI compliance so that organizations with fuller compliance get a higher star rating (something that Aid Info is already working on). Another hope is that IATI reporting could replace some other mandatory reporting mechanisms, which may be another entry point.
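To make the '5 star' idea concrete, here is a purely hypothetical sketch of how such a rating might be computed from the share of selected IATI fields an organization actually publishes. The field list and equal weighting are invented for illustration and are not Aid Info's methodology.

```python
# Hypothetical "5 star" IATI compliance score: rate an organization by the
# share of selected IATI fields it publishes. The field list and equal
# weighting are illustrative only.
RATED_FIELDS = [
    "iati-identifier", "title", "description", "participating-org",
    "activity-status", "activity-date", "sector", "recipient-country",
    "budget", "transaction",
]

def star_rating(published_fields):
    """Return a 0-5 star score based on coverage of the rated fields."""
    covered = sum(1 for f in RATED_FIELDS if f in published_fields)
    return round(5 * covered / len(RATED_FIELDS))

org_a = {"iati-identifier", "title", "description", "transaction", "budget",
         "recipient-country", "sector", "activity-date"}
print(star_rating(org_a))  # 8 of 10 rated fields published -> 4 stars
```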
.
Accountability to whom?  
It was recognized that IATI was initiated as a top-down approach to accountability. The question remains how to make IATI information more useful for 'beneficiaries' and program participants to track aid flows, and to contest and validate the information. What complaints mechanisms exist for communities where aid has not been effectively implemented? One point was that IATI is designed to do exactly that: once it is more fully populated with information, the more exciting part, playing with the data and seeing what communities have to say about it, will start to happen.
.
Simon noted that there is a huge emerging civic hacker and ICT for social change movement, and that access to aid information can be hugely liberating for people. At some aid transparency workshops the focus has been on what national NGOs and governments are doing. Young people are often angry that they did not know about this, and they find the idea that the information is available to them very exciting. Much of the conversation at these meetings has been about ways to reach communities and about who can be involved as intermediaries.
.
IATI is still top down and the information that people need is bottom up. However the conversation is starting to happen. Infomediaries need to be multiple and varied so that there is not only one source of IATI data interpretation, but rather a variety of interpretations of the data. Social accountability processes like community score cards and social audits can be brought into the equation to extend the value of IATI information and bring in community opinion on aid projects and their effectiveness. Platforms like Huduma are examples of making open data more accessible and useful to communities.
.
* * * * *
.
A huge thanks to our discussants Ruth Del Campo and Simon Parrish and to all those who participated in this 3rd Technology Salon NYC!
.
Contact me if you’d like to get on the list for future TSNYC invitations.
.
The Technology Salon™ is an intimate, informal, in-person discussion between information and communication technology experts and international development professionals, with a focus on both:
  • technology’s impact on donor-sponsored technical assistance delivery, and
  • private enterprise driven economic development, facilitated by technology.

Our meetings are lively conversations, not boring presentations – PowerPoint is banned and attendance is capped at 15 people – and frank participation with ideas, opinions, and predictions is actively encouraged through our key attributes. The Technology Salon is sponsored by Inveneo and a consortium of sponsors as a way to increase the discussion and dissemination of information and communication technology’s role in expanding solutions to long-standing international development challenges.


At the global level, a very small percentage of development funding goes to urban spaces, yet hard-hitting issues impact many of the urban poor: lack of tenure, lack of legality of land, informal settlements, lack of birth registration and civil registration in general, waste disposal, clean water, politicizing of local authorities and more. Can new technologies be a solution for some of these issues?

Tuesday’s Technology Salon NYC offered a space to discuss some of the key challenges and good practice related to working with children, youth, and urban communities and explored the potential role of ICTs in addressing issues related to urban poverty.

According to UNICEF, who co-organized and hosted the Salon at their offices, half of the world’s people – including over one billion children – live in cities and towns. By 2030, it is projected that the majority of the world’s children will grow up in urban areas, yet infrastructure and services are not keeping pace with this urban population growth. (See UNICEF’s 2012 State of the World’s Children or Plan’s 2010 Because I am a Girl Report: Digital and Urban Frontiers).

We welcomed 3 experienced and engaging discussants to the Salon, who commented on the intersection of children, youth, urban environments and new technologies:

  • Doris Gonzalez, Senior Program Manager, Corporate Citizenship & Corporate Affairs at IBM Corporation, who manages IBM's grade 9-14 education model Pathways in Technology Early College High School (P-TECH), a program that maps work-based skills to the curriculum and provides mentors to students and teachers
  • Ron Shiffman, a city planner with close to 50 years of experience providing architectural, planning, community economic development, and sustainable and organizational development assistance to community-based groups in low- and moderate-income neighborhoods; and the co-founder of the Pratt Institute Center for Community and Environmental Development [PICCED]
  • Sheridan Bartlett, a researcher affiliated with the Children’s Environments Research Group at CUNY and the International Institute for Environment and Development in London, co-editor of the journal Environment and Urbanization, supporter of Slum Dwellers International, and researcher on the link between violence and living conditions as they affect young children.

Doris shared IBM's experiences with technology education programs for children and youth in New York City, including the highly successful P-TECH program, which prepares youth for jobs that require 21st century skills. Some of the key aims of the P-TECH program are making technology accessible, engaging, and relevant to children and youth, and connecting what kids are learning to the real world. Through the program, IBM and partners hope to turn out skilled employees who are on entry-level career tracks. They look at which jobs hire candidates with AAS degrees, what skills those jobs require, and how to map those skills back into the curriculum. Helping children and youth acquire collaboration, communication, and problem solving skills is key to the approach, as are broad partnerships with various stakeholders including government, private sector, communities and youth themselves. The program has been lauded by President Obama and is in the process of being replicated in Chicago.

Ron highlighted that working on urban poverty is not new. His involvement began in 1963, during the Kennedy administration, when there was a thrust to address urban poverty. Many, including Ron, began to think about their role in abolishing (not just alleviating or reducing) poverty. 'We thought as architects, as planners about how we could address the issues of urban poverty.' Broader urban poverty initiatives grew from the work done forming the first Youth in Action community-based organizations (CBOs) in Bedford-Stuyvesant, Harlem and the Lower East Side. Originally these programs were empowerment programs, not service delivery programs; however, because they confronted power, they faced many challenges and eventually morphed into service delivery programs. Since then, this youth-in-action, community-based model has served other community-based initiatives all over the world. Ron emphasized the importance of differentiating among the roles of CBOs, technical assistance providers, and intermediaries, and the need to learn how to better support CBOs who are on the ground rather than supplanting their roles with NGOs and other intermediaries.

Sherry picked up where Ron left off, noting that she’d collaborated with a network of federations of the urban poor in 33 countries, looking at ways they’ve used technologies and how technology presents new challenges for communities.

A large percentage of the urban poor live illegally in non-formal settlements where they can’t vote and don’t have legal representation, she said. They want partnership with the local government and to make themselves visible. The first thing they do is to count themselves, document the land they live on, map every lane and garbage dump, every school (if schools exist), their incomes, livelihoods, and expenditures. This body of information can help them engage with local authorities. It’s an organizing tool that gives them a collective identity that can lead to a collective voice.

This process seems to be a natural fit for technology, in that it can allow for management and storing of information, she said. However, technology use can be incredibly complicated. Traditionally the process has been managed manually. The information comes in on paper, it is fed immediately back to community members, contested and corrected right there. The process is very participatory and very accurate, and includes everybody. Things that have been mapped or measured are validated. Boundaries are argued and joint agreement about the community reality and the priorities is reached right there.
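As a hedged illustration of the kind of record such an enumeration produces, and of the "fed back, contested and corrected" step described above, here is a small sketch with invented field names; in practice, federations of the urban poor design their own survey forms and validation processes.

```python
# Illustrative only: a household enumeration record with a log of
# corrections agreed during community validation. Field names are invented.
from dataclasses import dataclass, field

@dataclass
class HouseholdRecord:
    household_id: str
    lane: str
    members: int
    monthly_income: float
    has_water_point: bool
    corrections: list = field(default_factory=list)

    def correct(self, field_name, new_value, noted_by):
        """Apply and log a change agreed in the community validation meeting."""
        old_value = getattr(self, field_name)
        setattr(self, field_name, new_value)
        self.corrections.append((field_name, old_value, new_value, noted_by))

record = HouseholdRecord("HH-0042", "Lane 3", members=6,
                         monthly_income=85.0, has_water_point=False)
# Neighbours contest the member count during the validation meeting:
record.correct("members", 7, noted_by="block committee")
print(record.members, record.corrections)
```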

Sherry noted that when technology is used to do this, participation becomes more restricted to a smaller, more technically savvy group rather than the entire community. It takes longer to get the information back to the community. Many urban poor are technology literate, but there are complexities. Sometimes youth want to go with technology and older leaders are more comfortable with the more manual, inclusive processes.

The complexity of urban environments was addressed by all three discussants. According to Sherry, many people still hold a development image of the 'perfect village': contained, people sitting around a tree. But urban communities are very complex: Who owns what? There are landlords. Who represents whom? How do you create the space and the links with local authorities? Ron agreed, saying that cities and settlements tend to be far more pluralistic. Understanding their nature and the differences among them, and how to weave together and work together, is critical. Doris noted that one organization cannot work on this alone, but that multiple partners need to be involved.

Participants at the Technology Salon. (Photo: UNICEF)

Key points brought up during the ensuing discussion included:

Process, product and participation:

  • It’s important to work directly with community based organizations rather than surrogates (in the form of external NGOs). It’s critical to work on building the capacity of people and organizations on the ground, recognizing them as the core actors.
  • The process by which people engage is just as important as the end product. If new technologies are involved, participation needs special attention to ensure that no one is marginalized, that the data is interpreted by local people, and that they play a role in gathering it and in learning, sharing and discussing during the process.
  • Digital data collection and fly-over mapping should not replace participatory processes of data collection and local interpretation. Bringing in more efficient processes via new technologies is possible, but it often means losing some of the richness and interpretation of the data. If you're not including everyone, you risk marginalizing people from the process.
  • Every community, village or barrio has a different personality. No single approach, model or technology fits all.
  • It’s important to let people create solutions to their own challenges; set the right policies so that what is produced can be scaled (open/open source); and make sure things are hyper-local – yet help move the ideas and build networks of south-south collaboration so that people can connect.
  • The technologies only make sense when they are used within a participatory framework or context. People get excited about technology and the idea of really iterating and trying things out very quickly. But this only makes sense if you have a bigger plan. If you go in and start playing around, without context and a long-term plan, you lose the community's trust. You can't drop in tech without context, but you also can't come in and create such a huge infrastructure that it's impossible to implement. You need balance.
Children and youth:
  • We shouldn’t forget protection and privacy issues, especially with children and with mapping. These need to be carefully built in and we need to be sure that information and maps are not misused by authorities.
  • We can’t forget the young people and adolescents that we are working with – what do we want them to gain out of this? Critical thinking, problem solving skills? This is what will serve them. Educational systems need to address the pedagogy – if children are even in schools – where do you get that critical thinking? How do you create space for innovation? What is the role of tech in helping support this innovation? What is the role of mapping? Just accounting? Just quantifying? Or are you helping youth know their communities better? Helping them understand safety? What are we mapping for? Community self-knowledge or outside advocacy?
Technology:
  • We need to always ask: Who owns and manages technology? Technology is never neutral. It can both empower and disempower people and communities, and certain groups of people within communities.
  • The technology is a tool. It’s not the technology that teaches, for example, it is teachers who teach. They use the tech to supplement what’s going on in the classroom.
  • Rather than bring in ‘really cool’ things from the outside, we should know what tech already exists, what children, youth and communities are already using and build on that.
  • The challenge always comes down to the cost. Even if an idea works in the US, will it work in other places? Can other places afford the technology that we are talking about? There are some very good projects, but they are impossible to replicate. We need to find feasible and sustainable ways that technology can help reduce costs while it improves the situation for the urban poor.
Engaging local and national governments and the private sector:
  • Bottom up is important, but it is not enough. The role of local authorities is critical in these processes but national authorities tend to cut local budgets meaning local authorities cannot respond to local needs. It’s important to work at every level – national policies should enable local authorities and mandate local authorities to work with local communities.
  • Local authorities often need to be pushed to accept some of these new ideas and pulled forward. Often communities can be more technologically savvy than local authorities, which can turn the power dynamic upside down and be seen as threatening, or in some cases as an opportunity for engagement.
  • Communities may need to learn to engage with local governments. Adversarial or advocacy techniques may be useful sometimes but they are not always the right way to go about engaging with the authorities.
  • It’s useful for CBOs to work with both horizontal and vertical networks, and NGOs can play a role in helping this to happen, as long as the NGOs are not replacing or supplanting the CBOs.
  • There needs to be support from the local city government, and an interest, a need, an expressed dedication to wanting to be involved or these kinds of initiatives will fail or fizzle out.
  • There is a tendency to seek quick solutions, quick fixes, when we all know that creating change takes a long time and requires a long-term perspective and investment. The city of Medellin, for example, has done a good job of investing in connecting settlements to the city through infrastructure and access to technology. A long-term vision with private, public and community engagement is critical.
  • The quality of investment in poor areas needs to be as high as that in wealthier areas. Many interventions are low quality or limited when they are done in poor areas.
  • Multiple partners and collaboration among them is necessary for these initiatives to move forward and to be successful. You need to bring everyone to the table and to have an existing funding structure and commitment from local and national governments and ministries, as well as local communities and CBOs, NGOs and the private sector.
  • A role for the UN and INGOs can be to help ensure that the right channels are being opened for these projects, that the right partnerships are established, that systems and technologies are kept open and not locked into particular proprietary solutions.
Learning/sharing challenges, approaches and good practice:
  • There is much to learn from how marginalized youth in communities have been engaged without technologies. Once they have the information, no matter how it was gathered — in the sand, by SMS, on the wall — then how can marginalized young people access and address local authorities with it? How can we help enable them to feel more empowered? What can we learn from past efforts that we can apply?
  • There is a lack of exposure of those working in ICT to the urban space and vice versa. This reflects a need to break down the issues and opportunities and to think more deeply about the potential of technology as a part of the solution to urban poverty issues.
  • We need to distinguish between wonderful but very costly projects with a high cost per participant, and programs that can realistically be done in developing countries. Consider that 75 million youth are now unemployed. The more we learn about what others are doing, and the more information we have on how to do it, the better.
  • ICTs are a relatively new element in the urban space. It would be helpful to have a follow-up report that focuses on how ICTs have been used to address specific issues with children, youth and communities in urban spaces and what specific challenges are posed when using ICTs in this space. What projects have been done or could be done? What are the challenges in implementing projects with refugee populations, undocumented populations, migrants, and other groups? We need to understand this better. We need a document or guide that explores these issues and suggests practical ways to move forward.
  • Social media and new technologies can be used to spread information on successful case studies, to share our learning and challenges and good practice so that we can apply the best approaches.
A huge thanks to ICT Works, UNICEF, our discussants and participants for making this 2nd Technology Salon NYC a success!

Save the date for our 3rd TSNYC, on April 13, 2-4pm at New York Law School. The topic will be the International Aid Transparency Initiative (IATI); how it can contribute to  better aid coordination and effectiveness; challenges and opportunities for CSOs in signing onto IATI; and ways that technology and open data are supporting the process. 


Civil society has been working for years on participation, transparency, accountability and governance issues. Plenty of newer initiatives (small and large) look at new technologies as a core tool in this work. But are these groups talking and learning from each other? What good practices exist for using new technologies to improve transparency, accountability and governance? What are some considerations and frameworks for thinking about the role of new technologies in this area of work? What needs consideration under this broad theme of good governance?

Tuesday's Technology Salon* in New York City focused on those issues, kicked off by our two discussants, Hapee de Groot from Hivos and Katrin Verclas from MobileActive. Discussion ensued around the nuances of how, with whom, when, why, and in conjunction with what new technologies play a role in transparency, accountability and good governance.

Some of the key points brought up during the Salon**:

What is "good governance?" The overall term covers a number of core aspects, so the discussion is a big and complicated one. Aid transparency is only one small part of the overall topic of good governance.

The World Bank definition includes aspects of:

  • Participation of citizens in political processes, freedom of expression and  association, free media
  • Political stability and absence of violence
  • Government effectiveness in the delivery of services
  • Regulatory quality, rule of law
  • Control of corruption

There’s a need to look at governments and aid, but also to look at the private sector. Some commented that aid transparency is in vogue because donors can drive it but it’s perhaps not as important as some of the other aspects and it’s currently being overemphasized. There are plenty of projects using ICTs and mobiles in other areas of governance work.

More data doesn’t equal more accountability. Data does not equal participation. Can mobile phones and other ICTs or social media reduce corruption? Can they drive new forms of participation? Can they hold power accountable in some ways? Yes, but there is no conclusive evidence that the use of new technology to deliver data down from governments to people or up from people to governments improves governance or accountability. The field of tech and governance suffers from ‘pilotitis’ just like the field of ICT4D. Some participants felt that of course open data doesn’t automatically equal accountability and it was never the idea to stop there. But at the same time, you can’t have accountability without open data and transparency. Opening the data is just the first step in a long road of reaching accountability and better governance.

Efficient vs transformational. Transactional efficiency within a system is one thing. Transformation is another. You can enhance an existing process from, say, writing on paper to calling on a landline to texting in information, thereby improving accuracy and speed. But there is something more which is the transformational side. What's most interesting perhaps are those ways that ICTs can completely alter processes and systems. Again, there are a lot of promising examples but there is not much evidence of their impact at this point. One participant noted that current evidence seems to point toward the integration of mobiles (and other ICTs) into existing processes as having a greater impact and quicker uptake within large, bureaucratic systems than disruptive use of new technologies. But the question remains: Are the systems good systems, or should/could ICTs transform them into something totally different and better, or can ICTs help do away with poorly working systems entirely, replacing them with something completely new?

Is open data just a big show? Some alluded to opaque transparency, where a government or another entity throws up a bunch of data and says "we are being open" but there is no realistic way to make sense of the data. Some felt that governments are signing onto open data pacts and partnerships as a fake show of transparency. These governments may say, "The database is available. Go ahead and look at it." But it costs a lot of money and high-level skills to actually use the data. In addition, there is a need for regulatory frameworks and legislation around openness. Brazil was given as an example of a country that has joined the Open Government Partnership but as yet has no regulatory framework or freedom of information act, even though the country has a beautiful open government website. "Checks and balances are not inherent in the mobile phone. They need to be established in the legislation and then can be enhanced by mobile or other technology." Open Data Hackathons can help turn data into information. The question of "what does open data actually mean?" also came up, and the "cake test" was recommended as one way of defining "open".

Is open data an extractive process? Some at the Salon cautioned that the buzz around Open Data could be a bit false in some ways, and may be hyped up by private companies who want to make money off of nice data visualizations that they can sell to big donors or governments. How much data actually gets back to the people who provide it so that they can use it for their own purposes? The sense was that there's nothing wrong with private companies helping make sense of data per se, but one could ask what the community who provided the data actually gets out of this process. Is it an extractive data mining process? And how much are communities benefiting from the process? How much are they involved? Mikel Maron wrote a great post yesterday on the link between open data and community empowerment – I highly recommend reading it for more on this.

Whose data? A related issue that wasn’t fully discussed at the Salon is: who does the information that is being “opened” actually belong to (in the case of household surveys, for example)? The government? The International NGO or multilateral agency who funds a project or research? The community? And what if a community doesn’t want its data to be open to the world – is anyone asking? What kind of consent is being granted? What are the privacy issues? And what if the government doesn’t want anyone to know the number of X people living in X place who fit X description? Whose decision is it to open data? What are the competing politics?

For example, what if an organization working on an issue like HIV, cholera, violence or human trafficking wants to crowdsource information and publicly display it to work towards better transparency and improved service delivery, but the host country government denies the existence of the issue or situation? In one case I heard about recently, the NGO wanted to work with government on better tracking and reporting so that treatment and resources could be allocated and services provided, but when the government found out about the project, it wanted control over the information and approval rights. In another case, the government went so far as to pressure the mobile service provider that was partnering with the organization, and the provider dropped out of the project. These are good reminders that information is power and that openness can be a big issue even in cases not initially identified as politically charged.

Privacy and security risks. The ubiquity of data can pose huge privacy and security concerns for activists, civil society and emerging democracies, and some at the Salon felt this aspect is not being effectively addressed. Can there really be anonymous mobile data? Does the push for more data jeopardize the political ambitions of certain groups (civil society organizations that may be disliked by certain governments)? This can also be an issue for external donors supporting organizations in places like Syria or Iraq. Being open about which local organizations are receiving funding for democracy or governance work can cause problems (e.g., they get shut down, or people can be arrested or killed).

Can new ICTs weaken helpful traditional structures or systems? Is new tech removing some middlemen who were an important part of culture or societal structure? Does it weaken some traditional structures that may actually be useful? The example of the US was given, where a huge surge of people now engage directly with their congressperson via Twitter rather than via aggregation channels or other representatives. Can this actually paralyze political systems and make them less functional? Some countered, saying that Twitter is somewhat of a fad, that over time this massive number of interactions will settle down, and that not everyone gets involved on every issue all the time. Things will sort themselves out. Some asked whether politicians would become afraid (someone – help!! there is a study on this issue that I can't seem to locate) to make some of the secret deals that helped move agendas forward, for fear of being caught. In other words, could openness and transparency actually paralyze them? Is it possible that transparency is not always a good thing in terms of government effectiveness? The example of paying Afghan police directly by mobile phone was given. This initiative apparently ended up failing because it cut decision makers who benefited from bribes out of the loop. Decoupling payments from power is potentially transformational, but how can these projects actually be implemented when they disrupt so much?

Does new technology create parallel structures? Are parallel structures good or bad? In an effort to bypass inefficient and/or unaccountable systems, in one case private business owners started their own crime reporting and 911 system to respond to incidents, accompany victims to report to the police, and follow up on cases. Questions were raised about whether this privatization of government roles was taking justice into one's own hands, forcing the government to be accountable, allowing it to shirk responsibilities, or providing a way for government to see an innovation and eventually take on a new and more effective system that had been tried and tested with private funds. This same issue can be seen with parallel emergency reporting systems and other similar uses of ICTs. It may be too early in the game to know what the eventual outcomes of these efforts will be and what the long-term impact will be on governance. Or it may be that parallel systems work in some contexts and not in others.

***

The Salon could have gone for much longer but alas, we had to end. Dave Algoso covers some of the other ideas from the Salon in his post Technology for Transparency, Accountability and Governance, including how to approach and define the topic (top down vs bottom up? efficiency vs transformation?) and the importance of measuring impact.

Thanks to UNICEF and Chris Fabian for hosting the Salon. Thanks to Martin Tisne from the Transparency and Accountability Initiative for sparking the idea to choose this topic for the first Technology Salon in NYC, and thanks to Wayan Vota for inviting me to coordinate the series.

Contact me if you’d like to be on the invitation list for future Salons.

*The Technology Salon is sponsored by the UN Foundation's Technology Partnership with the Vodafone Foundation as a way to increase the discussion and dissemination of information and communication technology's role in expanding solutions to long-standing international development challenges. Technology Salons currently run in Washington DC (coordinated by @wayan_vota) and San Francisco, with New York City as the latest addition, coordinated by yours truly.

**The Salon is run under the Chatham House Rule, so no attribution has been made in the above post.

