
New technologies are opening up all kinds of possibilities for improving monitoring and evaluation. From ongoing feedback and crowd-sourced input to more structured digital data collection, access to large data sets, and improved data visualization, the field is changing quickly.

On August 7, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC for the first in a series of three Salons on the use of ICTs in monitoring and evaluating development outcomes. Our lead discussants were Erica Kochi from UNICEF Innovations, Steven Davenport from Development Gateway, and John Toner from CSF.

This particular Salon focused on the use of ICTs for social monitoring (a.k.a. ‘beneficiary feedback loops’) and accountability. Below is a summary of the key points that emerged at the Salon.

1) Monitoring and evaluation is changing

M&E is not only about formal data collection and indicators anymore. As one discussant commented, “It’s free form, it contains sentiment.” New ICT tools can help donors and governments plan better. SMS and other social monitoring tools add an element to more formal information sources and can help capture the pulse of the population. Combining official data sets with SMS data provides new ways of looking at cross-sections of information, and visualizations and trend analysis can offer combinations of information for decision making. Social monitoring, however, can be a scary thing for large institutions. It can seem too uncontrolled or potentially conflictive. One way to ease into it is through “bounded” crowd-sourcing (e.g., working with a defined and more ‘trusted’ subset of the public) until there is comfort with these kinds of feedback mechanisms.
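The point about combining official data sets with SMS data can be illustrated with a trivial join keyed on district. Everything here, the districts, the indicator, and the report counts, is invented for illustration:

```python
# Join an official indicator data set with SMS feedback counts by district
# to produce a simple cross-section. All figures are hypothetical.
official = {"North": {"clinics": 12}, "South": {"clinics": 4}}
sms_reports = ["South", "South", "North", "South"]  # districts reporting stock-outs

def cross_section(official, sms_reports):
    """Attach a count of SMS stock-out reports to each district's record."""
    merged = {}
    for district, record in official.items():
        merged[district] = dict(record, sms_stockout_reports=sms_reports.count(district))
    return merged

view = cross_section(official, sms_reports)
```

Even this toy join surfaces the kind of cross-section the discussants described: the district with the fewest clinics is also generating the most complaints.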

2) People need to be motivated to participate in social monitoring efforts

Building a platform or establishing an SMS response tool is not enough. One key to a successful social monitoring effort is working with existing networks, groups and organizations and doing well-planned, well-executed outreach, for example in the newspaper, on the radio and on television. Social monitoring can and should go beyond producing information for a particular project or program. It should create an ongoing dialogue between and among people and institutions, expanding on traditional monitoring efforts and becoming a catalyst for organizations or government to better communicate and engage with the community. SMS feedback loops need to be thought of as a dialogue or a series of questions rather than a one-question survey. “People get really engaged when they are involved in back-and-forth conversation.” Offering prizes or other kinds of external motivation can boost participation rates, but it can also create expectations that affect or skew programs in the long run, so sustainable approaches need to be identified early on. Rewards can also lead to false reports and re-registering, and need to be carefully managed.

3) Responsiveness to citizen/participant feedback is critical

One way to help motivate individuals to participate in social monitoring is for governments or institutions to show that citizen/participant feedback elicits a response (e.g., better delivery of public services). “Incentives are good,” said one discussant, “but at the core, if you get interactive with users, you will start to see the responses. Then you’ll have a targeted group that you can turn to.” Responsiveness can be an issue, however, if there is limited government or institutional interest, resourcing or capacity, so it’s important to work on both sides of the equation so that demand does not outstrip response capacity. Monitoring the responsiveness to citizen/participant feedback is also important: “Was there a response promised? Did it happen? Has it been verified? What was the quality of it?”

4) Privacy and protection are always a concern

Salon participants brought up concerns about privacy and protection, especially for more sensitive issues that can put those who provide feedback at risk. There are a number of good practices in the IT world for keeping data itself private, for example presenting it in aggregate form, only releasing certain data, and setting up controls over who can access different levels of data. However, with crowd-sourcing or incident mapping there can be serious risks for those who report or provide feedback. Program managers need a very good handle on the potential risks involved or they can cause unintended harm to participants. Consulting with participants to better understand the context is a good idea.
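One common pattern behind "presenting data in aggregate form" is small-cell suppression: only publish counts for groups large enough that no individual reporter can be singled out. A minimal sketch, with made-up feedback records:

```python
from collections import Counter

# Hypothetical feedback records: (district, issue) pairs reported via SMS.
reports = [
    ("North", "water"), ("North", "water"), ("North", "clinic"),
    ("South", "water"), ("South", "water"), ("South", "water"),
    ("East", "security"),  # a single, potentially identifying report
]

def aggregate(reports, min_cell_size=3):
    """Publish counts per (district, issue), suppressing any cell below a
    minimum size so that individual reporters cannot be singled out."""
    counts = Counter(reports)
    return {cell: n for cell, n in counts.items() if n >= min_cell_size}

public_view = aggregate(reports)
```

The lone security report from the East never appears in the published view, which is exactly the property a program manager handling sensitive feedback would want.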

5) Inclusion needs to be purposeful

Getting a representative response via SMS-based feedback or other social monitoring tools is not always easy. Mandatory ratios for gender, age groups or other characteristics can help ensure better representation, and different districts can be sampled in an effort to ensure the overall response is representative. “If not,” commented one presenter, “you’ll just get data from urban males.” Barriers to participation also need consideration, such as language; however, working in multiple languages becomes very complicated very quickly. One participant noted that it is important to monitor whether people from different groups or geographic areas understand survey questions in the same way, and to be able to fine-tune the system as it goes along. A key concern is reaching and including the most vulnerable with these new technologies. “Donors want new technology as a default, but I cannot reach the most excluded with technology right now,” commented a participant.
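The "mandatory ratios" idea can be sketched as a simple quota check that compares respondent shares against targets and flags where outreach is falling short. The attribute names, targets, and sample below are all illustrative assumptions:

```python
# Target shares for the respondent pool (hypothetical quotas).
targets = {"female": 0.5, "rural": 0.4, "under_25": 0.3}

def quota_gaps(respondents, targets, tolerance=0.05):
    """Return the attributes whose observed share falls short of its target
    by more than `tolerance`, signalling where extra outreach is needed."""
    n = len(respondents)
    gaps = {}
    for attr, target in targets.items():
        share = sum(1 for r in respondents if r.get(attr)) / n
        if target - share > tolerance:
            gaps[attr] = round(target - share, 2)
    return gaps

# A skewed sample: mostly older urban males, as the presenter warned.
sample = (
    [{"female": True, "rural": False, "under_25": True}] * 20
    + [{"female": False, "rural": False, "under_25": False}] * 80
)
```

Run periodically as responses arrive, a check like this supports the fine-tuning the participant described: the flagged gaps tell the team which groups to target next.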

6) Information should be useful to and used by the community

In addition to ensuring inclusion of individuals and groups, communities need to be involved in the entire process. “We need to be sure we are not just extracting information,” mentioned one participant. Organizations should be asking: What information does the community want? How can they get it themselves or from us? How can we help communities to collect the information they need on their own or provide them with local, sustainable support to do so?

7) Be sure to use the right tools for the job

Character limits can be an issue with SMS. Decision-tree models, where one question prompts another that takes the user down a variety of paths, are one way around the character limit. SMS is not good for in-depth surveys, however; it is good for breadth, not depth. It’s important to use SMS and other digital tools for what they are good at. Paper can often be a better tool, and there is no shame in using it. Discussants emphasized that one shouldn’t underestimate the challenges of working with telecom operators and setting up short codes; building the SMS network infrastructure takes months. Social media is on the rise, so how do you channel that into the M&E conversation?
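A decision-tree SMS flow of the kind described can be modeled as a small lookup table: each node holds one question that fits within the 160-character SMS limit, and the user's reply selects the next node. The questions and branch labels below are invented for illustration:

```python
# Each node maps to (question_text, {reply: next_node}).
# Every question stays under the 160-character SMS limit.
TREE = {
    "start": ("Did the clinic have medicine today? Reply YES or NO.",
              {"YES": "wait_time", "NO": "which_medicine"}),
    "which_medicine": ("Which medicine was missing? Reply with its name.", {}),
    "wait_time": ("How long did you wait? Reply SHORT or LONG.", {}),
}

def next_question(node, reply):
    """Given the current node and the user's reply, return the next
    question to send, or None when this branch of the tree ends."""
    _, branches = TREE[node]
    nxt = branches.get(reply.strip().upper())
    return TREE[nxt][0] if nxt else None
```

This is the "breadth not depth" trade-off in miniature: each exchange is short, but the branching lets the dialogue follow up where it matters.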

8) Broader evaluative questions need to be established for these initiatives

The purpose of including ICT in different initiatives needs to be clear. Goals and evaluative questions need to be established. Teams need to work together because no one person is likely to have the programmatic, ICT and evaluation skills needed for a successfully implemented and well-documented project. Programs that include ICTs need better documentation and evaluation overall, including cost-benefit analyses and comparative analyses with other potential tools that could be used for these and similar processes.

9) Technology is not automatically cheaper and easier

These processes remain very iterative; they are not ‘automated’ processes. Initial surveys can only show patterns; the back-and-forth dialogue with participants is what is more interesting. As one discussant noted, staff still spend a lot of time combing through data and responses to find patterns and nuances within the details. There is still a cost to these projects. In one instance, the majority of the project budget went into a communication campaign and into work with existing physical networks to get people to participate. Compared to traditional ways of doing things (face-to-face, for example), the cost of outreach is not so high, but integrating SMS and other technologies does not automatically mean that money will be saved. The cost of SMS itself is also large in these kinds of projects, because ensuring participation, representation and inclusion usually means SMS must be free for participants. Even with bulk rates, a program at massive scale is quite expensive. This is a real consideration when assuming that governments or local organizations will take over these projects at some point.
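The scale problem with free-to-participant SMS is back-of-envelope arithmetic. The rate below is an entirely assumed figure, not a quoted price; real bulk rates vary widely by country and operator:

```python
# Rough cost of a free-to-participant SMS dialogue at scale.
# The bulk rate is an illustrative assumption, not a real quote.
def campaign_cost(participants, messages_each, bulk_rate_usd=0.01):
    """Total cost when the program absorbs every message (outbound
    questions plus toll-free inbound replies) at an assumed bulk rate."""
    return participants * messages_each * bulk_rate_usd

# 500,000 participants in a six-message back-and-forth dialogue
# means 3,000,000 messages the program has to pay for.
cost = campaign_cost(500_000, 6)
```

Even at a cent per message, a single dialogue round at that scale runs to tens of thousands of dollars, which is exactly the handover question raised at the Salon: can a government or local organization absorb that recurring cost?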

10) Solutions at huge scale are not feasible for most organizations 

Some participants commented that the UN, the Red Cross and similarly sized organizations are the only ones who can work at the level of scale discussed at the Salon. Not many agencies have the weight to influence governments or mobile service providers, and these negotiations are difficult even for large-scale organizations. It’s important to look at solutions that react and respond to what development organizations and local NGOs can do. “And what about localized tools that can be used at district level or village level? For example, localized tools for participatory budgeting?” asked a participant. “There are ways to link high tech and SMS with low tech, radio outreach, working with journalists, working with other tools,” commented others. “We need to talk more about these ways of reaching everyone. We need to think more about the role of intermediaries in building capacity for beneficiaries and development partners to do this better.”

11) New technology is not M&E magic

Even if you include new technology, successful initiatives require a team of people and need to be managed. There is no magic to doing translations or understanding the data – people are needed to put all this together, to understand it, to make it work. In addition, the tools covered at the Salon only collect one piece of the necessary information. “We have to be careful how we say things,” commented a discussant. “We call it M&E, but it’s really ‘M’. We get confused with ourselves sometimes. What we are talking about today is monitoring results. Evaluation is how to take all that information and make an informed decision. It involves specialists and more information on top of this…” Another participant emphasized that SMS feedback can get at the symptoms but doesn’t seem to get at the root causes. Data needs to be triangulated, efforts made to address root causes, and end users involved.

12) Donors need to support adaptive design

Participants emphasized that those developing these programs, tools and systems need to be given space to try and to iterate, to use a process of adaptive design. Donors shouldn’t lock implementers into unsuitable design processes. A focused ‘ICT and Evaluation Fail Faire’ was suggested as a space for improving sharing and learning around ICTs and M&E. There is also learning to be shared from people involved in ICT projects that have scaled up. “We need to know what evidence is needed to scale up. There is excitement and investment, but not enough evidence,” it was concluded.

Our next Salon

Our next Salon in the series will take place on August 30th. It will focus on the role of intermediaries in building capacity for communities and development partners to use new technologies for monitoring and evaluation. We’ll be looking to discover good practices for advancing the use of ICTs in M&E in sustainable ways. Sign up for the Technology Salon mailing list here. [Update: A summary of the August 30 Salon is here.]

Salons are run under the Chatham House Rule, thus no attribution has been made.


The Technology Salon (TSNYC) on the International Aid Transparency Initiative (IATI), held April 13th, offered an overview of IATI as a coming-together point for aid transparency. It also stimulated discussion on opportunities and challenges for organizations and institutions when publishing information within the IATI standard and shared some available tools to support publishing NGO data.
IATI Background
Simon Parrish from Aid Info explained that IATI aims to provide information that meets the needs of a number of diverse groups, is timely, is ‘compilable’ and comparable, improves efficiency and reduces duplication. Simon explained that IATI arose from the 2005 Paris Declaration on Aid Effectiveness and was launched as part of the Accra Agenda for Action in 2008 due to a strong call from civil society to donors, multilaterals and northern NGOs for greater transparency.
Organizations felt they were already working hard to be transparent; however, governments, journalists, taxpayers and others looking for information were not able to find what they needed. Rather than each organization creating its own improved transparency and accountability system, the idea was to use an open data approach, and this is where IATI came in. Since Accra, transparency and accountability have gained global traction, and IATI has been a key part of this movement for the aid sector.
Donor agencies, the World Bank, the EU, the US Government and others have already signed on to IATI and have started to publish basic information. INGOs are also starting to come on board and schedule their dates for publication to the IATI standard. It is hoped that over time the quality and amount of information published will improve and expand. For example, ‘traceability’ needs to be improved so that aid can be followed down the supply chain. Information from international and local NGOs is critical in this because the closer to the ground the information is, the better it can be used for accountability purposes.
Opportunities and Questions around IATI
To complement Simon’s overview, I shared ideas on some of the opportunities that IATI can offer, and some common questions that may arise within INGOs who are considering publishing their information to IATI.
For example, IATI can help catalyze:
  • transparency and accountability as core values
  • better coordination and program planning (internally and externally)
  • reduced reporting burden (if donors agree to use IATI as a common tool)
  • improved aid effectiveness
  • collective learning in the aid sector
  • improved legitimacy for aid agencies
  • an opportunity to educate the donor public on how aid/development really works
  • ‘moral ground’ for IATI compliant aid organizations to pressure governments and private sector to be more transparent
  • space for communities and ‘beneficiaries’ to hold aid agencies more accountable for their work
  • space for engaging communities and the public in identifying what information about aid is useful to them
  • concrete ways for communities to contest, validate and discuss aid information, intentions, budgets, actions, results.
Concerns and questions that may arise within NGOs / CSOs around IATI include:
  • Is IATI the right way to achieve the goal of transparency and accountability?
  • Is the cost in time, money, systems, and potential risk of exposure worth the individual and collective gain?
  • Is IATI the flavor of the month, to be replaced in 2-4 years?
  • What is the burden for staff? Will it increase overhead? Will it take funds and efforts away from programs on the ground?
  • What is the position of the US Government/USAID? Will implementing agencies have to report in yet another format (financial, narrative)?
  • Much internal project documentation at NGOs/INGOs has not been written with the idea of it being published. There may be confidential information or poorly written internal documents. How will aid agencies manage this?
  • What if other agencies ‘steal’ ideas, approaches or donors?
  • What security/risks might be caused for sexual or political minority groups or vulnerable groups if activities are openly published?
  • Isn’t IATI too focused on ‘upward’ accountability to donors and taxpayers? How will it improve accountability to local program participants and ‘beneficiaries’? How can we improve and mandate feedback loops for participants in the same way we are doing for donors?
  • Does IATI offer ‘supplied data’ rather than offer a response to data demands from different sectors?
ICT Tools to support NGOs with IATI
Ruth Del Campo discussed some of the tools available to support INGOs and smaller organizations with IATI reporting, including Open Aid Register (OAR), which she created to help smaller organizations comply with IATI. The Foundation Center has also created a tool to support foundations in entering their information into the IATI Standard. Aid Stream is being used by many UK organizations to convert their data to the IATI Standard. Geo-visualization tools include CartoDB and AidView.
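For a sense of what "converting data to the IATI Standard" involves: IATI activity files are XML documents with a prescribed structure. The sketch below builds a heavily simplified activity record; the organization reference, activity identifier, and type code are placeholders, and a real, valid file would carry many more required elements than shown here.

```python
import xml.etree.ElementTree as ET

def activity_xml(org_ref, activity_id, title):
    """Build a minimal, simplified IATI-style activity record.
    Element names follow the IATI activity standard; the values
    here are purely illustrative placeholders."""
    root = ET.Element("iati-activities", version="2.03")
    act = ET.SubElement(root, "iati-activity")
    ET.SubElement(act, "iati-identifier").text = f"{org_ref}-{activity_id}"
    title_el = ET.SubElement(act, "title")
    ET.SubElement(title_el, "narrative").text = title
    ET.SubElement(act, "reporting-org", ref=org_ref, type="21")
    return ET.tostring(root, encoding="unicode")

xml = activity_xml("XM-EX-12345", "001", "Community water project")
```

Tools like OAR and Aid Stream essentially automate this transformation at scale, mapping an organization's internal project records onto the standard's elements so the output validates against the IATI schema.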
IATI awareness in the US
Although tools exist and awareness around IATI is growing elsewhere, Ruth noted that in the US many organizations do not know what IATI is, and this is a problem. Another issue Ruth brought up is that most existing charity raters do not rate program effectiveness or program transparency. Instead, charities are judged on overhead rates, growth, financial statements, and whether they publish certain information on their websites. These measures do not reveal an organization’s program impact or overall transparency, and they do not trace funds far enough along the chain. Linking charity rating systems with IATI standards could encourage greater transparency and accountability and help the public make decisions based on program accountability in addition to financial accountability. (For background on INGO overhead, see Saundra Schimmelpfennig’s “Lies, White Lies, and Accounting Practices.”)
Because many INGOs are not familiar with IATI, a greater dissemination effort is needed for IATI to be of optimal use. If only 20% of the aid picture is available, it will not be very helpful for coordination and decision making. Many INGOs feel that they are already transparent because they are publishing their annual reports as a .pdf file on their websites and they have an overhead rate within a certain percentage, but this is not enough. Much more needs to be done to gain awareness and buy-in from US INGOs, government, charity rating systems, donors, media and the public on transparency and IATI.
Following the three discussants, TSNYC participants jumped in for a good debate around key points:
Carrot or stick approach?
NGOs place great importance on their Charity Navigator rankings and Better Business Bureau reviews, and many donors select charities based on these rankings, so it will be important to link these with IATI. The Publish What You Fund index, which tracks the transparency of different organizations, has been helpful in getting countries and institutions on board. The Foundation Center lists transparency indicators on their site GlassPockets as well. The Brookings and CGD QuODA report was mentioned as a key reason that the US Government signed onto IATI at Busan last November, since the US was ranked very low on transparency and saw that they could bring their ranking up by signing on.
Consensus at the Technology Salon was that it is not likely that the US Government or USAID will make IATI compliance mandatory for their grantees and implementing partners as DFID has done. Rather, the existing dashboard for collecting information would be used to report into IATI, so the dashboard needs to be improved and regularly updated by US agencies. One concern was whether in this scenario, the information published by USAID would be useful for developing country governments or would only be of use to USAID Missions. On the bright side, it was felt that movement within the US Government over the past few years towards greater openness and transparency has been massive. TSNYC participants noted that there seems to be a fundamental mindset change in the current administration around transparency, but it’s still difficult to make change happen quickly.
Some members of the US Congress have latched onto the idea and are pushing for greater transparency, and this could raise IATI’s profile. Transparency and accountability are of interest to both major US parties: liberals tend to be interested in being more open and sharing information, while conservatives tend to focus on value for money, stamping out corruption, and reducing inefficient aid spending and waste. IATI can support both and be a win for everyone.
Making IATI mandatory could, some cautioned, backfire. For example there are foundations and corporations that for a variety of reasons do not openly share information about their giving. If pressured, the tendency may be to shut down totally.
Showing what positive things can be done with IATI and how it can benefit CSO information management and coordination internally as well as externally was thought to be a better approach than positioning IATI as “we are being audited by everyone now.” IATI should be emphasized as an opportunity to join data together to know what everyone is doing, visualize the data using new technologies, and use it to make better program decisions and improve coordination as well as accountability. Some examples of vibrant and informative uses of IATI data include Mapping for Results, Interaction’s Haiti Aid Map and the Foundation Center’s comparison of Foundation giving and World Bank funding.
Transparency as a ‘norm’
Many organizations are investing in transparency for reasons that go far beyond IATI compliance. Three kinds of organizations were identified at the Salon session: those who comply because it is mandatory; those who comply because it’s inevitable; and those who comply because they believe in the inherent value of transparency as a core principle. Even within organizations, some teams, such as Democracy and Governance, may be much more interested in IATI than, say, Education, Health or Arts teams, simply because of the themes they work on and their competing priorities. The hope is that in five years’ time it is no longer a question of mandatory or inevitable compliance; rather, transparency becomes the norm and it starts to feel strange to work in a space that is not transparent. Leadership is important to get an organization on board.
Challenges and opportunities in IATI compliance
Challenges to IATI compliance were discussed in depth at the Salon, including questions around the amount of resources needed to report to IATI. It was noted that the biggest challenges are organization, coordination, and change of attitudes internally. Some of the core obstacles that Salon participants noted include:
Time and resources
Some pushback might be seen around IATI because investment in IATI compliance may not be seen as providing an immediate return to individual organizations. TSNYC participants felt that rather than a constraint, IATI provided an opportunity for organizations to better manage their own information for internal sharing and use. IATI can help improve program planning, reduce time spent gathering program information from colleagues and across countries, and support better internal coordination among offices and partners. It was noted that when governments started publishing open data, the people who most used it were government employees for their own work. IATI can be seen as an investment in better internal coordination and information management. Once the information is available in an open format it can be used for a number of data visualizations that can show an organization’s reach and impact, or help a number of organizations share their joint work and impact, such as in the case of coalitions and thematic or sectoral networks.
Project document quality
Concerns may be raised in some organizations regarding the state of project documents that were not originally written with publication in mind. Organizations will have to decide if they want to work retroactively, invest in quality control, and/or change processes over time so that documentation is ready for publication.
Losing the competitive edge
TSNYC participants worried that without USAID mandatory compliance, some INGOs, and contractors especially, would not be motivated to publish information for fear of losing their competitive edge. It is feared that getting contractors to report to any level of detail will be difficult. This, the group discussed, makes peer pressure and public pressure important, and mechanisms to encourage broader transparency will need to be found. One idea was to create a ‘5 star system’ of IATI compliance so that organizations with full compliance get a higher star rating (something that Aid Info is already working on). Another angle is the hope that IATI reporting could replace some other mandatory reporting mechanisms, and this may be another entry point.
Accountability to whom?  
It was recognized that IATI was initiated as a top-down approach to accountability. The question remains how to make IATI information more useful for ‘beneficiaries’ and program participants to track aid flows, and to contest and validate the information. What complaints mechanisms exist for communities where aid has not been effectively implemented? One point made was that IATI is designed to do exactly that: once it is more fully populated with information, the more exciting part, which involves playing with the data and seeing what communities have to say about it, will start to happen.
Simon noted that there is a huge emerging civic hacker and ICT for social change movement. Access to aid information can be hugely liberating for people. At some aid transparency workshops the focus has been on what national NGOs and governments are doing. Young people are often angry that they don’t know about this. They often find the idea that the information is available to them very exciting. Much of the conversation at these meetings has been about ways to reach communities and about who can be involved as intermediaries.
IATI is still top down and the information that people need is bottom up. However the conversation is starting to happen. Infomediaries need to be multiple and varied so that there is not only one source of IATI data interpretation, but rather a variety of interpretations of the data. Social accountability processes like community score cards and social audits can be brought into the equation to extend the value of IATI information and bring in community opinion on aid projects and their effectiveness. Platforms like Huduma are examples of making open data more accessible and useful to communities.
* * * * *
A huge thanks to our discussants Ruth Del Campo and Simon Parrish and to all those who participated in this 3rd Technology Salon NYC!
Contact me if you’d like to get on the list for future TSNYC invitations.
The Technology Salon™ is an intimate, informal, in-person discussion between information and communication technology experts and international development professionals, with a focus on both:
  • technology’s impact on donor-sponsored technical assistance delivery, and
  • private enterprise driven economic development, facilitated by technology.

Our meetings are lively conversations, not boring presentations – PowerPoint is banned and attendance is capped at 15 people – and frank participation with ideas, opinions, and predictions is actively encouraged through our key attributes. The Technology Salon is sponsored by Inveneo and a consortium of sponsors as a way to increase the discussion and dissemination of information and communication technology’s role in expanding solutions to long-standing international development challenges.


Civil society has been working for years on participation, transparency, accountability and governance issues. Plenty of newer initiatives (small and large) look at new technologies as a core tool in this work. But are these groups talking and learning from each other? What good practices exist for using new technologies to improve transparency, accountability and governance? What are some considerations and frameworks for thinking about the role of new technologies in this area of work? What needs consideration under this broad theme of good governance?

Tuesday’s Technology Salon* in New York City focused on those issues, kicked off by our two discussants, Hapee de Groot from Hivos and Katrin Verclas from Mobile Active. Discussion ensued around the nuances of how, with whom, when, why, and in conjunction with what new technologies play a role in transparency, accountability and good governance.

Some of the key points brought up during the Salon**:

What is “good governance?”  The overall term could be divided into a number of core aspects, and so the discussion is a big one and it’s complicated. Aid transparency is only one small part of the overall topic of good governance.

The World Bank definition includes aspects of:

  • Participation of citizens in political processes, freedom of expression and association, free media
  • Political stability and absence of violence
  • Government effectiveness in the delivery of services
  • Regulatory quality, rule of law
  • Control of corruption

There’s a need to look at governments and aid, but also to look at the private sector. Some commented that aid transparency is in vogue because donors can drive it but it’s perhaps not as important as some of the other aspects and it’s currently being overemphasized. There are plenty of projects using ICTs and mobiles in other areas of governance work.

More data doesn’t equal more accountability. Data does not equal participation. Can mobile phones and other ICTs or social media reduce corruption? Can they drive new forms of participation? Can they hold power accountable in some ways? Yes, but there is no conclusive evidence that the use of new technology to deliver data down from governments to people or up from people to governments improves governance or accountability. The field of tech and governance suffers from ‘pilotitis’ just like the field of ICT4D. Some participants felt that of course open data doesn’t automatically equal accountability and it was never the idea to stop there. But at the same time, you can’t have accountability without open data and transparency. Opening the data is just the first step in a long road of reaching accountability and better governance.

Efficient vs transformational. Transactional efficiency within a system is one thing; transformation is another. You can enhance an existing process from, say, writing on paper to calling on a landline to texting in information, thereby improving accuracy and speed. But there is something more: the transformational side. What is most interesting, perhaps, are the ways that ICTs can completely alter processes and systems. Again, there are a lot of promising examples, but not much evidence of their impact at this point. One participant noted that current evidence seems to point toward the integration of mobiles (and other ICTs) into existing processes as having a greater impact and quicker uptake within large, bureaucratic systems than disruptive uses of new technologies. But the question remains: are these systems good systems, or should/could ICTs transform them into something totally different and better, or even do away with poorly working systems entirely, replacing them with something completely new?

Is open data just a big show? Some alluded to opaque transparency, where a government or another entity throws up a bunch of data and says “we are being open,” but there is no realistic way to make sense of the data. Some felt that governments are signing onto open data pacts and partnerships as a fake show of transparency. These governments may say, “The database is available. Go ahead and look at it.” But it costs a lot of money and high-level skills to actually use the data. In addition, there is a need for regulatory frameworks and legislation around openness. Brazil was given as an example of a country that has joined the Open Government Partnership but as yet has no regulatory framework or freedom of information act, even though the country has a beautiful open government website. “Checks and balances are not inherent in the mobile phone. They need to be established in legislation and then can be enhanced by mobile or other technology.” Open Data Hackathons can help turn data into information. The question of “what does open data actually mean?” also came up, and the “cake test” was recommended as one way of defining “open.”

Is open data an extractive process? Some at the Salon cautioned that the buzz around open data could be a bit false in some ways, hyped up by private companies who want to make money selling nice data visualizations to big donors or governments. How much data actually gets back to the people who provide it, so that they can use it for their own purposes? The sense was that there's nothing wrong with private companies helping make sense of data per se, but one could ask what the community that provided the data actually gets out of the process. Is it an extractive data-mining exercise? How much are communities benefiting from the process, and how much are they involved? Mikel Maron wrote a great post yesterday on the link between open data and community empowerment; I highly recommend reading it for more on this.

Whose data? A related issue that wasn't fully discussed at the Salon: who does the information being "opened" actually belong to (in the case of household surveys, for example)? The government? The international NGO or multilateral agency that funds a project or research? The community? And what if a community doesn't want its data open to the world? Is anyone asking? What kind of consent is being granted? What are the privacy issues? And what if the government doesn't want anyone to know the number of X people living in X place who fit X description? Whose decision is it to open data? What are the competing politics?

For example, what if an organization is working on an issue like HIV, cholera, violence or human trafficking, and wants to crowd-source information and publicly display it to work toward better transparency and improved service delivery, but the host country government denies the existence of the issue? In one case I heard recently, an NGO wanted to work with government on better tracking and reporting so that treatment and resources could be allocated and services provided, but when the government found out about the project, it wanted control over the information and approval rights. In another case, the government went so far as to pressure the mobile service provider that was partnering with the organization, and the provider dropped out of the project. These are good reminders that information is power, and openness can be a big issue even in cases not initially identified as politically charged.

Privacy and security risks. The ubiquity of data can pose huge privacy and security concerns for activists, civil society and emerging democracies, and some at the Salon felt this aspect is not being effectively addressed. Can there really be anonymous mobile data? Does the push for more data jeopardize the political ambitions of certain groups, such as civil society organizations that may be disliked by certain governments? This can also be an issue for external donors supporting organizations in places like Syria or Iraq. Being open about which local organizations are receiving funding for democracy or governance work can cause problems (e.g., organizations get shut down, or people are arrested or killed).

Can new ICTs weaken helpful traditional structures or systems? Is new tech removing middlemen who were an important part of culture or societal structure? Does it weaken some traditional structures that may actually be useful? The example of the US was given, where a huge surge of people now engage directly with their congressperson via Twitter rather than through aggregation channels or other representatives. Can this actually paralyze political systems and make them less functional? Some countered that Twitter is somewhat of a fad; over time this massive number of interactions will settle down, and in any case, not everyone gets involved in every issue all the time. Things will sort themselves out. Others asked whether politicians might become afraid to make the secret deals that helped move agendas forward because they would be caught (someone, help! there is a study on this issue that I can't seem to locate), so that openness and transparency actually paralyze them. In other words, is it possible that transparency is not always a good thing for government effectiveness? The example of paying Afghan police directly by mobile phone was given. This initiative apparently failed because it cut the decision makers who benefited from bribes out of the loop. Decoupling payments from power is potentially transformational, but how do you actually implement such projects when they disrupt so much?

Does new technology create parallel structures? And are parallel structures good or bad? In one case, in an effort to bypass inefficient and/or unaccountable systems, private business owners started their own crime-reporting and 911 system to respond to incidents, accompany victims to report to the police, and follow up. Questions were raised about whether this privatization of government roles was taking justice into one's own hands, forcing the government to be accountable, allowing it to shirk responsibilities, or providing a way for government to see an innovation and eventually adopt a new, more effective system that had been tried and tested with private funds. The same issue arises with parallel emergency reporting systems and other similar uses of ICTs. It may be too early in the game to know what the eventual outcomes of these efforts will be and what the long-term impact will be on governance. Or it may be that parallel systems work in some contexts and not in others.
The Salon could have gone for much longer but alas, we had to end. Dave Algoso covers some of the other ideas from the Salon in his post Technology for Transparency, Accountability and Governance, including how to approach and define the topic (top down vs bottom up? efficiency vs transformation?) and the importance of measuring impact.

Thanks to UNICEF and Chris Fabian for hosting the Salon. Thanks to Martin Tisne from the Transparency and Accountability Initiative for sparking the idea to choose this topic for the first Technology Salon in NYC, and thanks to Wayan Vota for inviting me to coordinate the series.

Contact me if you’d like to be on the invitation list for future Salons.

*The Technology Salon is sponsored by the UN Foundation's Technology Partnership with the Vodafone Foundation as a way to increase the discussion and dissemination of information and communication technology's role in expanding solutions to long-standing international development challenges. Technology Salons currently run in Washington DC (coordinated by @wayan_vota) and San Francisco, with New York City as the latest addition, coordinated by yours truly.

**The Salon runs under the Chatham House Rule, so no attribution has been made in the above post.
