
Over the past few months, I’ve been working with CARE to develop a Responsible Data Maturity Model. This “RDMM” joins a growing set of tools (created by a wide variety of organizations) aimed at supporting organizations to move towards more responsible data management.

Responsible Data is a concept developed by the Responsible Data Forum. It outlines the collective duty to prioritize and respond to the ethical, legal, social and privacy-related challenges that come from using data. Responsible Data encompasses a variety of issues which are sometimes thought about separately, like data privacy and data protection, or ethical challenges. For any of these to be truly addressed, they need to be considered together.

CARE’s model identifies five levels of Responsible Data maturity:

  • Unaware: when an organization has not thought about Responsible Data much at all.
  • Ad-Hoc: when some staff or teams are raising the issue or doing something on their own, but there is no institutionalization of Responsible Data.
  • Developing: when there is some awareness, but the organization is only beginning to put policy, guidelines, procedures and governance in place.
  • Mastering: when the organization has its own house in order and is supporting its partners to do the same.
  • Leading: when the organization is looked to as a Responsible Data leader amongst its peers, setting an example of good practice, and influencing the wider field. Ideally an organization would be close to ‘mastering’ before placing itself in the ‘leading’ stage.

The main audience for the RDMM is the point person who is tasked with moving an organization or team forward to improve data practices and data ethics. The model can be adapted and used in ways that are appropriate for other team members who do not have Responsible Data as their main, day-to-day focus.

There are multiple other uses for the RDMM, however, for example:

  • As a diagnostic or baseline and planning tool for organizations to see where they are now, where they would like to be in 3 or 5 years, and where they need to put more support/resources.
  • As an audit framework for Responsible Data.
  • As a retroactive, after-action assessment tool or case study tool for looking at a particular program, seeing which Responsible Data elements were in place and contributed to good data practices, and then developing a case study to highlight good practices and gaps.
  • As a tool for evaluation if looking at a baseline/end-line for organizational approaches to Responsible Data.
  • In workshops as a participatory self-assessment tool to 1) help people see that moving towards a more responsible data approach is incremental and 2) to identify what a possible ideal state might look like. The tool can be adapted to what an organization sees as its ideal future state.
  • To help management understand and budget for a more responsible data approach.
  • With an adapted context, “persona,” or work stream approach that helps identify what Responsible Data maturity might look like for a particular project or program or for a particular role within a team or organization. For example, for headquarters versus for a country office, for the board versus for frontline implementers. It could also help organizations identify what parts of Responsible Data different positions or teams should be concerned with and accountable for.
  • As an investment roadmap for headquarters, leadership or donors to get a sense of what is the necessary investment to reach Responsible Data maturity.
  • As an iterative pathway to action, and a way to establish indicators or markers to mainstream Responsible Data throughout an organization.
  • In any other way you might think of! The RDMM is published with a Creative Commons License that allows you to modify and adapt it to suit your needs.

Over the past few months, we’ve tested the model with teams at headquarters, country offices, in mixed teams of people from different offices in one organization, and with groups from different organizations. We asked them to go through the different areas of the model and self-assess at which level they place themselves currently and which level they would like to achieve within a set time frame, for example 3 or 5 years. Then we worked with them to develop action points that would allow them to arrive at the desired level.

Teams found the exercise useful because:

  • It allowed them to break Responsible Data into discrete pieces that could be assigned to different parts of an organization or different members of a team.
  • It helped to lay out indicators or “markers” related to Responsible Data that could be integrated throughout an organization.
  • It allowed both teams and management to see that Responsible Data is a marathon, not a sprint, and will require that multiple work streams are addressed over time with the involvement of different skill sets and different parts of the organization (strategy, operations and IT, legal, programs, M&E, innovations, HR, fundraising and partnerships, etc.).
  • It helped teams with limited resources to see how to make incremental steps forward without feeling pressured to make Responsible Data their only focus.

We hope others will find the RDMM useful as well! It’s published under a Creative Commons license, so feel free to use it and adapt it in ways that will suit your needs.

We’re in the process of translating it into French and Spanish. We’d love to know if you use it, how, and if it is helpful to you! Please get in touch with me for more information.

Download the Responsible Data Maturity Model as a Word file.

Download the Responsible Data Maturity Model as a PDF.


At the October 17, 2012 Technology Salon NYC, we focused on ways that ICTs can be used for qualitative monitoring and evaluation (M&E) efforts that aim to listen better to those who are participating in development programs. Our lead discussants were: John Hecklinger, Global Giving; Ian Thorpe, UN DOCO and the World We Want 2015 Campaign; and Emily Jacobi, Digital Democracy. This Salon was the final in a series of three on using new technologies in M&E work.

Global Giving shared experiences from their story-telling project which has collected tens of thousands of short narratives from community members about when an individual or organization tried to change something in their community. The collected stories are analyzed using Sensemaker to find patterns in the data with the aim of improving NGO work. (For more on Global Giving’s process see this document.)

The United Nations’ Beyond 2015 Campaign aims to spur a global conversation on the post-MDG development agenda. The campaign is conducting outreach to people and organizations to encourage them to participate in the discussion; offering a web platform (www.worldwewant2015.org) where the global conversation is taking place; and working to get offline voices into the conversation. A challenge will be synthesizing and making sense of all of the information coming in via all sorts of media channels and being accountable now and in future to those who participate in the process.

Digital Democracy works on digital literacy and human rights, and makes an effort to integrate qualitative monitoring and evaluation into their program work stream. They use photography, film and other media that transcend language and literacy barriers. Using these kinds of media helps participants express opinions on issues that need addressing and builds trust. Photos have helped in program development as well as in defining quantitative and qualitative indicators.

A rich conversation took place around the following aspects:

1) Perception may trump hard data

One discussant raised the question “Do opinions matter more than hard data on services?” noting that perceptions about aid and development may be more important than numbers of items delivered, money spent, and timelines met. Even if an organization is meeting all of its targets, what may matter more is what people think about the organization and its work. Does the assistance they get respond to their needs? Rather than asking “Is the school open?” or “Did you get health care?” it may be more important to ask “How do you feel about health?” Agencies may be delivering projects that are not what people want or that do not respond to their needs, cultures, and so on. It is important to encourage people to talk amongst themselves about their priorities, to draw out viewpoints from people of different backgrounds, and to see how to pull out information that can inform programs and approaches.

2) It is a complex process

Salon participants noted that people are clearly willing to share stories and unstructured feedback. However, the process of collecting and sorting through stories is unwieldy and far from perfect. More work needs to be done to simplify story-collection processes and make them more tech-enabled. In addition, more needs to be done to determine exactly how to feed the information gleaned back in a structured and organized way that helps with decision-making. One idea was the creation of a “Yelp” for NGOs. Tagging, and/or asking program participants to tag photos and stories, can help make sense of the data. Subtitling videos can also be of great use in making sense of the information they hold. Dotsub, for example, is a video subtitling platform that uses a Wikipedia-style subtitling model, enabling crowdsourced video translations into any language.
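
To make the tagging idea concrete, here is an illustrative sketch only (the stories, tags and counting approach are hypothetical, not how Sensemaker or any platform mentioned here actually works) of how tagged stories can be aggregated to surface patterns:

```python
from collections import Counter

# Hypothetical tagged stories; in practice these would come from a
# story-collection platform's export.
stories = [
    {"text": "The clinic reopened this year", "tags": ["health", "services"]},
    {"text": "A new well was dug near the school", "tags": ["water", "services"]},
    {"text": "Nurses received new training", "tags": ["health"]},
]

def tag_counts(stories):
    """Count how often each tag appears across all stories."""
    return Counter(tag for story in stories for tag in story["tags"])

counts = tag_counts(stories)
print(counts.most_common(2))  # the two most frequent tags
```

Even a simple frequency count like this can point reviewers toward the themes that come up most often, before any deeper qualitative reading.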

3) Stories and tags are not enough

We know that collecting and tagging stories to pull out qualitative feedback is possible. But so what? The important next step is looking at the effective use of these stories and data. Some ideas on how to better use the data include adding SMS feedback, deep dives with NGOs, and face-to-face meetings. It’s important to move from collecting the stories to thinking about what questions should be asked, how the information can help NGOs improve their performance, how this qualitative data translates into change or different practice at the local and global levels, how the information could be used by local organizers for community mobilization or action, and how all this is informing program design, frameworks and indicators.

4) Outreach is important

Building an online platform does not guarantee that anyone will visit it or participate. Local partners are essential for reaching out and collecting data about what people think and feel. Outreach needs to be done with many partners from all parts of a community or society in order to source different viewpoints. In addition, it is important to ask the right questions and establish trust, or people will not want to share their views. Any quality participation process, whether online or offline, needs good facilitation and encouragement; it needs to be a two-way process, a conversation.

5) Be aware of bias

Understanding where the process may be biased is important. Everything from asking leading questions to defining the metadata in a certain way, creating processes that include only certain parts of the community or population, selecting certain partners, or framing questions around what an organization thinks it needs to know can create biased answers. Language is important here for several reasons: it will affect who is included or excluded and who is talking with whom. Using development jargon will not resonate with people, and the way development agencies frame questions may lead people to particular answers.

6)  Be aware of exclusion

Related to bias is the issue of exclusion. In large-scale consultations or online situations, it’s difficult to know who is talking and participating. Yet the more log-in information is solicited, the less likely people are to participate in discussions. However, without asking, it’s hard to know who is responding, especially when anonymity is allowed. In addition, results also depend on who is willing and wants to participate. Participants agreed that there is no silver bullet for finding people to participate and ensuring they represent a diversity of opinion. One suggestion was that libraries and telecenters could play a role in engaging more remote or isolated communities in these kinds of dialogues.

7) Raising expectations

Asking people for feedback raises expectations that their input will be heard and that they will see some type of concrete result. In these feedback processes, what happens if the decisions made by NGOs or heads of state don’t reflect what people said or contributed? How can we ensure that we are actually listening to what people tell us? Oftentimes we ask for people’s perceptions and then tell them why they are wrong. Follow-up is also critical. A campaign from several years ago was mentioned in which 93,000 people signed onto a pledge; once that target was achieved, the campaign ended and there was no further engagement with the 93,000 people. Soliciting input and feedback needs to be an ongoing relationship with continual dialogue and response. The process itself needs to be transparent and accountable to those who participate in it.

8) Don’t forget safety and protection

The issue of safety and protection for those who offer their opinions and feedback or raise issues and complaints was brought up. Participants noted that safety is very context-specific, and that participatory risk assessments conducted together with community members and partners can help mitigate risk and ensure that people are informed about it. Avoiding a paternalistic stance is recommended, as sometimes human rights advocates know very well what their risk is and are willing to take it. NGOs should, however, be sure that those with whom they are working fully understand the risks and implications, especially when new media tools are involved that they may not have used before. Digital literacy is key.

9) Weave qualitative M&E into the whole process

Weaving consistent spaces for input and feedback into programs is important. As one discussant noted, “the very media tools we are training partners on are part of our monitoring and evaluation process.”  The initial consultation process itself can form part of the baseline. In addition to M&E, creating trust and a safe space to openly and honestly discuss failure and what did not go so well can help programs improve.  Qualitative information can also help provide a better understanding of the real and hard dynamics of the local context, for example the challenges faced during a complex emergency or protracted conflict. Qualitative monitoring can help people who are not on the ground have a greater appreciation for the circumstances, political framework, and the socio-economic dynamics.

10) Cheaper tools are needed

Some felt that the tools being shared (Sensemaker in particular) were too expensive and sophisticated for their needs, and too costly for smaller NGOs. Simpler tools would be useful in order to more easily digest the information and create visuals and other analyses that can be fed back to those who need to use the information to make changes. Other tools exist that might be helpful, such as Trimble’s Municipal Reporter, Open Data Kit, KoBo, iFormBuilder, EpiSurveyor/Magpi and PoiMapper. One idea is to look at some of the tools being developed and used in the crisis mapping and response space to see if cost is dropping and capacity increasing as the field advances. (Note: several tools for parsing Twitter and other social media platforms were presented at the 2012 International Conference on Crisis Mapping, some of which could be examined and learned from.)

What next?

A final question at the Salon was around how the broader evaluation community can connect with the tools and people who are testing and experimenting with these new ways of conducting monitoring and evaluation. How can we create better momentum in the community to embrace these practices and help build this field?

Although this was the final Salon of our series on monitoring and evaluation, we’ll continue to work on what was learned and ways to take these ideas forward and keep the community talking and growing.

A huge thank you to our lead discussants and participants in this series of Salons, especially to the Community Systems Foundation and the Rockefeller Foundation’s monitoring and evaluation team for joining in the coordination with us. A special thanks to Rockefeller for all of the thoughtful discussion throughout the process and for hosting the Salons.

The next Technology Salon NYC will be November 14, 2012, hosted by the Women’s Refugee Commission and the International Rescue Committee. We’ll be shifting gears a little, and our topic will be around ways that new technologies can support children and youth who migrate, are forcibly displaced or are trafficked.

If you’d like to receive notifications about future salons, sign up for the mailing list!

Previous Salons in the ICTs and M&E Series:

12 lessons learned with ICTs for monitoring and accountability

11 points on strengthening local capacity to use new ICTs for monitoring and evaluation


New technologies are opening up all kinds of possibilities for improving monitoring and evaluation. From on-going feedback and crowd-sourced input to more structured digital data collection, to access to large data sets and improved data visualization, the field is changing quickly.

On August 7, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC for the first in a series of three Salons on the use of ICTs in monitoring and evaluating development outcomes. Our lead discussants were: Erica Kochi from UNICEF Innovations; Steven Davenport from Development Gateway; and John Toner from CSF.

This particular Salon focused on the use of ICTs for social monitoring (a.k.a. ‘beneficiary feedback loops’) and accountability. Below is a summary of the key points that emerged at the Salon.

1) Monitoring and evaluation is changing

M&E is no longer only about formal data collection and indicators. As one discussant commented, “It’s free form, it contains sentiment.” New ICT tools can help donors and governments plan better. SMS and other social monitoring tools provide an additional element to more formal information sources and can help capture the pulse of the population. Combining official data sets with SMS data provides new ways of looking at cross-sections of information. Visualizations and trend analysis can offer combinations of information for decision-making. Social monitoring, however, can be a scary thing for large institutions. It can seem too uncontrolled or potentially conflictive. One way to ease into it is through “bounded” crowd-sourcing (e.g., working with a defined and more ‘trusted’ subset of the public) until there is comfort with these kinds of feedback mechanisms.

2) People need to be motivated to participate in social monitoring efforts

Building a platform or establishing an SMS response tool is not enough. One key to a successful social monitoring effort is working with existing networks, groups and organizations and doing well-planned and executed outreach, for example, in the newspaper, on the radio and on television. Social monitoring can and should go beyond producing information for a particular project or program. It should create an ongoing dialogue between and among people and institutions, expanding on traditional monitoring efforts and becoming a catalyst for organizations or government to better communicate and engage with the community. SMS feedback loops need to be thought of in terms of a dialogue or a series of questions rather than a one-question survey. “People get really engaged when they are involved in back and forth conversation.” Offering prizes or other kinds of external motivation can spike participation rates but also can create expectations that affect or skew programs in the long run. Sustainable approaches need to be identified early on. Rewards can also lead to false reports and re-registering, and need to be carefully managed.

3) Responsiveness to citizen/participant feedback is critical

One way to help motivate individuals to participate in social monitoring is for governments or institutions to show that citizen/participant feedback elicits a response (e.g., better delivery of public services). “Incentives are good,” said one discussant, “but at the core, if you get interactive with users, you will start to see the responses. Then you’ll have a targeted group that you can turn to.” Responsiveness can be an issue, however, if there is limited government or institutional interest, resourcing or capacity, so it’s important to work on both sides of the equation so that demand does not outstrip response capacity. Monitoring the responsiveness to citizen/participant feedback is also important: “Was there a response promised? Did it happen? Has it been verified? What was the quality of it?”

4) Privacy and protection are always a concern

Salon participants brought up concerns about privacy and protection, especially for more sensitive issues that can put those who provide feedback at risk. There are a number of good practices in the IT world for keeping data itself private, for example presenting it in aggregate form, only releasing certain data, and setting up controls over who can access different levels of data. However with crowd-sourcing or incident mapping there can be serious concerns for those who report or provide feedback. Program managers need to have a very good handle on the potential risks involved or they can cause unintended harm to participants. Consulting with participants to better understand the context is a good idea.

5) Inclusion needs to be purposeful

Getting a representative response via SMS-based feedback or other social monitoring tools is not always easy. Mandatory ratios of male and female respondents, age groups or other characteristics can help ensure better representation. Different districts can be sampled in an effort to ensure the overall response is representative. “If not,” commented one presenter, “you’ll just get data from urban males.” Barriers to participation also need consideration, such as language; however, working in multiple languages becomes very complicated very quickly. One participant noted that it is important to monitor whether people from different groups or geographic areas understand survey questions in the same way, and to be able to fine-tune the system as it goes along. A key concern is reaching and including the most vulnerable with these new technologies. “Donors want new technology as a default, but I cannot reach the most excluded with technology right now,” commented a participant.

6) Information should be useful to and used by the community

In addition to ensuring inclusion of individuals and groups, communities need to be involved in the entire process. “We need to be sure we are not just extracting information,” mentioned one participant. Organizations should be asking: What information does the community want? How can they get it themselves or from us? How can we help communities to collect the information they need on their own or provide them with local, sustainable support to do so?

7) Be sure to use the right tools for the job

Character limits can be an issue with SMS. Decision-tree models, where one question prompts another question that takes the user down a variety of paths, are one way around the character limit. SMS is not good for in-depth surveys, however; it is good for breadth, not depth. It’s important to use SMS and other digital tools for what they are good at. Paper can often be a better tool, and there is no shame in using it. Discussants emphasized that one shouldn’t underestimate the challenges of working with telco operators and setting up short codes; building the SMS network infrastructure takes months. Social media is on the rise, so how do you channel that into the M&E conversation?
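
The decision-tree idea can be sketched in a few lines. This is a hypothetical illustration (the question ids, wording and routing are invented, not drawn from any tool discussed at the Salon): each reply routes the respondent to the next question, and every message stays within a single 160-character SMS segment.

```python
# Hypothetical decision-tree SMS survey: each answer routes the
# respondent to the next question; None ends the survey.
SURVEY = {
    "q1": {"text": "Did you visit the health clinic this month? Reply YES or NO.",
           "next": {"YES": "q2", "NO": "q3"}},
    "q2": {"text": "Were you attended within one hour? Reply YES or NO.",
           "next": {"YES": None, "NO": None}},
    "q3": {"text": "What stopped you? Reply COST, DISTANCE or OTHER.",
           "next": {"COST": None, "DISTANCE": None, "OTHER": None}},
}

def next_question(current_id, reply):
    """Return the id of the next question, or None when the survey ends."""
    return SURVEY[current_id]["next"].get(reply.strip().upper())

# Keep every question within one SMS segment (160 GSM-7 characters).
assert all(len(q["text"]) <= 160 for q in SURVEY.values())
```

A real deployment would also need to handle unrecognized replies (for example by re-sending the question) and store each respondent’s current position in the tree.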

8) Broader evaluative questions need to be established for these initiatives

The purpose of including ICT in different initiatives needs to be clear. Goals and evaluative questions need to be established. Teams need to work together because no one person is likely to have the programmatic, ICT and evaluation skills needed for a successfully implemented and well-documented project. Programs that include ICTs need better documentation and evaluation overall, including cost-benefit analyses and comparative analyses with other potential tools that could be used for these and similar processes.

9) Technology is not automatically cheaper and easier

These processes remain very iterative; they are not ‘automated’ processes. Initial surveys can only show patterns. What is more interesting is back-and-forth dialogue with participants. As one discussant noted, staff still spend a lot of time combing through data and responses to find patterns and nuances within the details. There is still a cost to these projects. In one instance, the bulk of the project budget went into launching a communication campaign and working with existing physical networks to get people to participate. Compared to traditional ways of doing things (face-to-face, for example) the cost of outreach is not so expensive, but integrating SMS and other technologies does not automatically mean that money will be saved. The cost of SMS itself is also large in these kinds of projects because, in order to ensure participation, representation, and inclusion, SMS usually needs to be free for participants. Even with bulk rates, if the program is at massive scale, it’s quite expensive. This is a real consideration if governments or local organizations are expected to take over these projects at some point.

10) Solutions at huge scale are not feasible for most organizations 

Some participants commented that the UN, the Red Cross and similarly sized organizations are the only ones who can work at the level of scale discussed at the Salon. Not many agencies have the weight to influence governments or mobile service providers, and these negotiations are difficult even for large-scale organizations. It’s important to look at solutions that react and respond to what development organizations and local NGOs can do. “And what about localized tools that can be used at district level or village level? For example, localized tools for participatory budgeting?” asked a participant. “There are ways to link high tech and SMS with low tech, radio outreach, working with journalists, working with other tools,” commented others. “We need to talk more about these ways of reaching everyone. We need to think more about the role of intermediaries in building capacity for beneficiaries and development partners to do this better.”

11) New technology is not M&E magic

Even if you include new technology, successful initiatives require a team of people and need to be managed. There is no magic to doing translations or understanding the data – people are needed to put all this together, to understand it, to make it work. In addition, the tools covered at the Salon only collect one piece of the necessary information. “We have to be careful how we say things,” commented a discussant. “We call it M&E, but it’s really ‘M’. We get confused with ourselves sometimes. What we are talking about today is monitoring results. Evaluation is how to take all that information then, and make an informed decision. It involves specialists and more information on top of this…” Another participant emphasized that SMS feedback can get at the symptoms but doesn’t seem to get at the root causes. Data needs to be triangulated, efforts need to be made to address root causes, and end users need to be involved.

12) Donors need to support adaptive design

Participants emphasized that those developing these programs, tools and systems need to be given space to try and to iterate, to use a process of adaptive design. Donors shouldn’t lock implementers into unsuitable design processes. A focused ‘ICT and Evaluation Fail Faire’ was suggested as a space for improving sharing and learning around ICTs and M&E. There is also learning to be shared from people involved in ICT projects that have scaled up. “We need to know what evidence is needed to scale up. There is excitement and investment, but not enough evidence,” it was concluded.

Our next Salon

Our next Salon in the series will take place on August 30th. It will focus on the role of intermediaries in building capacity for communities and development partners to use new technologies for monitoring and evaluation. We’ll be looking to discover good practices for advancing the use of ICTs in M&E in sustainable ways. Sign up for the Technology Salon mailing list here. [Update: A summary of the August 30 Salon is here.]

Salons are run under the Chatham House Rule, thus no attribution has been made.


I’ve been told that mention of the term ‘governance’ makes people want to immediately roll over and fall asleep, and that I’m a big weirdo for being interested in it. But I promise you governance is *so* not boring! (I’m also fairly sure that whatever my teacher droned on about as I slept through my ‘Government’ class senior year of high school was not ‘governance’.)

If you get excited about the concepts of ‘open’ or ‘transparent’ or ‘accountable’ or ‘sustainable’ or ‘human rights’ or ‘politics’ then you need to also get pumped about ‘governance’ because it includes elements of all of the above.

I am just back from a week-long workshop where several of us from different Plan offices worked to define the basic elements of a global program strategy on Youth, Citizenship and Governance (to be completed over the next several months). We drew on our different practical, strategic and thematic experiences; internal and external evaluations and reviews of good practice; videos and documents from other organizations; and input from children and youth in several countries – all with the support of a fantastic facilitator.

At the workshop, we got a copy of A Governance Learning Guide, which I’m finding very useful and am summarizing below.

Why is governance important?

Our focus is on children and youth, but many of the reasons that governance is important for them extend to governance overall.

From Plan UK’s Governance Learning Guide, chapter 1.

So what exactly do we mean by the term “governance”? 

In our case, we link governance work with our child-centered community approach (a rights-based approach) and in this particular strategy, we will be focusing on the processes by which the state exercises power, and the relationships between the state and citizens. We have separate yet related strands of work around child and youth participation in our internal governance structures (here’s one example), effectiveness of our institutional governance overall (see this discussion on International CSO governance, for example), and the participation of children and youth in high level decision-making fora.

Our concept of governance for the youth, citizenship and governance strategy is based on the following governance concepts*:

Accountability and responsiveness.  This includes formal government accountability as well as citizen-led accountability. Opportunities for children and youth to participate in formal accountability processes are often limited due to their age — they cannot participate in elections, for example. Citizen-led accountability can open new opportunities for children, youth and other more marginalized groups to hold those in power more accountable.

‘People no longer rely on governments alone to improve governance. All over the world we are seeing experiments in ‘participatory governance’. People and organisations are grasping the opportunities offered by decentralisation and other reform processes to demand more of a say in the public policy and budget processes that affect them. These ways of holding the state to account are often called ‘social accountability’. Examples include participatory budgeting, monitoring electoral processes, using online and mobile technology, and citizen evaluation of public services. These forms of citizen engagement and social accountability are particularly promising for young people, who often face challenges in getting their voices heard in formal policy and governance processes.’ (from the call for submissions for the Participatory Learning and Action Journal (PLA) special issue on Young Citizens: youth and participatory governance in Africa, published in December, 2011)

Accountability is also linked with openness and sharing of information such as local government budgets and plans (this is also referred to as ‘transparency’). Responsiveness, in our case, refers to ‘the extent to which service providers and decision makers listen, meet and respond to the needs and concerns of young people.’ Responsiveness includes the willingness of those in power to engage seriously with young people and a government’s commitment to ‘be responsive’ to the issues raised by citizens, including children and young people. Responsiveness entails also the administrative and financial capacity to respond concretely to a population’s needs, rights and input.

Voice and participation.  This refers to the capacity of young people to speak, be heard and connect to others. Voice is one of the most important means for young people to participate. Within the concept of ‘voice’ we also consider voice strategies for raising and amplifying voices, capacity to use voice in a variety of ways to bring about change, space to exercise the raising of voices, and voice as a means to participate and exercise citizenship rights. (We consider that every child has citizenship rights, not only those who hold citizenship in a particular country). It’s also important to qualify the use of the term participation. In the case of young people’s participation in governance, we are not referring to the participatory methods that we commonly use in program planning or evaluation (we are also not discounting these at all – these are critical for good development processes!). In governance work, we are rather taking it further to refer to the meaningful inclusion of children and young people in decision-making processes.  

Power and politics. These are key in governance work. It is essential to be aware of and understand politics and power dynamics so that children and young people (and other oft-excluded groups) are not overlooked, manipulated, intimidated or disempowered.

Image captured from Plan UK’s Governance Learning Guide, chapter 2 page 14.

A key question here is what children and young people are participating in, and what for. Another important question is where children and young people are participating. Is it in special events or spaces designated just for them, or are they participating in adult spaces? How does the place and space where children and young people are participating affect their ability to influence decisions?

It’s important to note the four types of power that are typically considered in power analyses (from VeneKlasen, 2007): power over (domination or control), power within (self-worth), power to (individual ability to act, agency) and power with (collective action, working together). These need to be analyzed and understood, including the social, cultural and historical factors that create and sustain different power dynamics in different situations and spaces.

Capacity. We refer here to the capacity of both decision-makers and young people. Decision makers need to have the ability to perform their duties and ensure services are delivered. This, in our case, includes the abilities of decision makers to interact, engage and listen to children and young people and to take them seriously and to be responsive (see above) to their views, needs and rights. Young people also need to have the capacity to hold decision makers to account and to express their concerns and their views, including the views of other children and young people who may be excluded and marginalized from the decision making process or from participating fully. Information literacy and the capacity to access, interpret and analyze information is a critical skill for children and young people.

Interactions between children and young people and decision makers. These spaces encompass critical aspects of participation, power and politics. An example of a space for interaction would be where children and young people, local government and school leaders come together to discuss budget plans and available resources for school infrastructure. These spaces are shaped by a number of factors, including social, economic, cultural ones. They are also not free of personal agendas, desires, intentions and prejudices. It’s critical to remember this in governance work — ‘tools’ and ‘mechanisms’ are not enough. (ICT4Governance and Tech for Transparency friends, I’m looking at you! Though I think most of us see this point as ‘beating a dead horse’ by now.)

From Chapter 2 of the Governance Learning Guide by Plan UK

*Summarized from Chapter 1 and Chapter 2 of Plan UK’s extremely useful and easily downloadable A Governance Learning Guide. The guide also has a number of practical use cases on different governance initiatives as well as an extensive section on additional resources.

Here’s a follow-up post (since governance is so clearly *not* boring and I’m sure there is high demand for more!) called 15 thoughts on good governance programming with youth.


Read Full Post »

Almost a year ago, I met Ernst Suur (@ernstsuur) for the first time. We bonded in frustration over the irony of not being able to find any on-line courses to study ICT4D. We each did some research and didn’t come up with much, so we agreed I should write a blog post (See: Where’s the ICT4D distance learning?) to see if we could crowdsource anything to help us out. We got some great comments with some good resources for the few courses that do exist or the ones that are in design.

We also discovered TechChange, a newish organization looking to develop some on-line ICT4D courses. We all chatted a couple of times and decided to co-host a ICT4D chat on Twitter to see if we could come up with some additional ideas on what kinds of courses people were interested in. (See the chat summary here). I also had a chance to meet with Nick, Mark and Jordan in their DC office to discuss ideas.

So I’m really excited to see that now TechChange has 3 new on-line courses happening this year:

1) Tech Tools and Skills for Emergency Management from September 5-23.

‘This course will explore how new communication and mapping technologies are being used to respond to disasters, create early warning mechanisms, improve coordination efforts and much more. It will also consider some of the key challenges related to access, implementation, scale, and verification that working with new platforms present. The course is designed to assist professionals in developing concrete strategies and technological skills to work amid this rapidly evolving landscape.  Participants can expect a dynamic and interactive learning environment with a variety of real world examples from organizations working in the field including those involved in the humanitarian response to the Haitian earthquake’

Course topics include: Crisis mapping, human rights violations and elections monitoring, citizen journalism and crowd sourcing, and information overload and decision-making in real-time.

ICT tools covered include: Ushahidi, Quantum GIS, FrontlineSMS, Open Street Map, Managing News.

2) Global Innovations for Digital Organizing: New Media Tactics for Democratic Change from September 26-October 4.

‘New platforms of communication are revolutionizing social dynamics by democratizing access to and production of media. From Barack Obama’s youth mobilization efforts to the ongoing uprisings across the Middle East and North Africa, this course will examine how new channels of communication are being utilized and to what extent these efforts and techniques are successful or unsuccessful in a given context.  It will also provide participants with strategies for maximizing the impact of new media and train them in the effective use of a range of security and privacy tools.’

Course topics include: the new media landscape, offline organization and change through online mobilization, data and metrics, censorship, privacy and security.

3) Mobiles for International Development: New Platforms for Health, Finance and Education from October 16-November 4.

‘The mobile phone is rapidly bringing communication to the most remote areas of the world. NGOs, governments and companies alike are beginning to realize the potential of this ubiquitous tool to address social challenges. This course will explore successful applications that facilitate economic transactions, support public health campaigns and connect learners to educational content. It will also critically engage with issues of equity, privacy and access.’

Course topics include: mobile money systems, mHealth and mobile diagnostics, data management for monitoring and evaluation, many-to-many communications integrating mobiles and radio, and mobile learning.

ICT tools covered include: mPesa, RapidSMS/Souktel, Sana Mobile, Medic Mobile, TxtEagle and FreedomFone.

Modalities:

Each course costs $350 (or $250 early bird price) and runs for 3 weeks. The courses require a time commitment of at least 6 hours per week in order to earn the certificate. There are also plenty of opportunities for those who want to spend more time to engage with additional materials, and students can access content for up to 6 months after the course is over. The entire course will be delivered on-line, ‘involving a variety of innovative online teaching approaches, including presentations, discussions, case studies, group exercises, simulations and will make extensive use of multimedia.’

I’ll be attending the 3rd course gratis in exchange for helping TechChange continue to shape the content and curriculum and providing feedback on the features and content. (Thank you, social media. Thank you, barter system!)

Register for any of the 3 courses here.

Read Full Post »

the Ghana team: row 1: Steven, Joyce, Yaw, Samuel; row 2: Bismark, Maakusi, James, Chris, Dan

I was in a workshop in the Upper West Region of Ghana this past week. The goal was two-fold: 1) to train a small group of staff, ICT teachers and local partners on social media and new technologies for communications; and 2) to help them prepare for a project that will support 60 students to use arts and citizen media in youth-led advocacy around issues that youth identify.

I was planning to talk about how social media is different from traditional media, focusing on how it offers an opportunity to democratize information, and how we can support youth to use social media to reduce stereotypes about them and to bring their voices and priorities into global discussions. But all those theories about social media being the great equalizer and the Internet allowing everyone’s voices to flourish, yada yada, don’t mean a lot unless barriers like language, electricity, gender, and financial resources are lowered and people can actually access the Internet regularly.

Mobile internet access is extremely good in this part of Ghana, but when we did a quick exercise to see what the experience levels of the group were, only half had used email or the Internet before.  So I started there, rather than with my fluffy theories about democratization, voice, networks and many-to-many communications.

We got really good feedback from the participants on the workshop.  Here’s how we did it:

What is Internet?

I asked the ICT teachers to explain what the Internet is, and to then try to put it into words that the youth or someone in a community who hadn’t used a computer before would be able to understand.  We discussed ways in which radios, mobile phones, televisions are the same or different from the Internet.

How can you access Internet here?

We listed common ways to access Internet in the area: through a computer at an internet café or at home or work, through a mobile phone (“smart phone”), or via a mobile phone or flash-type modem connected to a computer (such as the ones that we were using at the workshop).  We went through how to connect a modem to a computer to access internet via the mobile network.

Exploring Internet and using search functions

Riffing off Google search

We jumped into Internet training by Googling the community’s name to see what popped up, then we followed the paths to where they led us. We found an article where the secondary school headmaster (who was participating in the workshop) had been interviewed about the needs of the school.

Everyone found it hilarious, as they didn’t know the headmaster was featured in an online article. This led to a good discussion on consent and permission, and on the fact that information published online doesn’t just go global and stay ‘out there’: more and more people are able to access that same information locally too through the Internet, so you need to think carefully about what you say.

The article about the school had a comments stream. The first comment was directly related to the article, and said that the school deserved to get some help.  But the comments quickly turned to politics, including accusations that a local politician was stealing tractors.  Again this generated a big discussion, and again the local-global point hit home.  The internet is not ‘over there’ but potentially ‘right here’.  People really need to be aware of this when publishing something online or when being interviewed, photographed or filmed by someone who will publish something.

Other times when we’ve done this exercise, we haven’t found any information online about the community. In those cases, the lack of an online presence was a good catalyst to discuss why, and to motivate the community to get the skills and training to put up their own information. That is actually one of the goals of the project we are working on.

We used a projector, but small groups would also have been fine if no projector was available and there were a few computers. We generally use what we can pull together through our local offices, the small amount of equipment purchased with the project funds, and what the local school and partners have, and organize it however makes the most sense so that people can practice. Four to five people per computer is fine for the workshop because people tend to teach each other and take turns. There will be some people who have more experience and can show others how to do things, so that the facilitator can step out of the picture as soon as possible and just be available for questions or troubleshooting.

Social networks and privacy

When we Googled the name of the community, we also found a Facebook page for alums from the secondary school.  That was a nice segue into social networks.  I showed my Facebook page and a few others were familiar with Facebook. One colleague talked about how she had just signed up and was finding old school friends there who she hadn’t seen in years. People had a few questions such as ‘Is it free?  How do you do it? Can you make it yourself?  Who exactly can see it?’  So we had to enter the thorny world of privacy, hoping no one would be scared off from using Internet because of privacy issues.

One of the ICT teachers, for example, was concerned that someone could find his personal emails by Googling.  I used to feel confident when I said ‘no they can’t’ but now it seems you can never be certain who can see what (thank you Facebook).  I tried to explain privacy settings and that it’s important to understand how they work, suggesting they could try different things with low sensitivity information until they felt comfortable, and test by Googling their own name to see if anything came up.

Online truth and safety

Another question that surfaced was ‘Is the internet true?’ This provoked a great discussion about how information comes from all sides, and how anyone can put information online and anyone else can discuss it. It’s a mix of truth and opinion, and you can’t believe everything you read; it’s not regulated, so you need to find a few sources and make some judgment calls.

A participant brought up that children and youth could use Internet to find ‘bad’ things, that adults can prey on children and youth using the Internet.  We discussed that teachers and parents really need to have some understanding of how Internet works. Children and youth need to know how to protect themselves on the Internet; for example, not posting personal information or information that can identify their exact location.  We discussed online predators and how children and youth can stay secure, and how teachers and communities should learn more about Internet to support children and youth to stay safe.

We discussed the Internet as a place of both opportunities and risks, going back to our earlier discussions on Child Protection in this project and expanding on them.  I also shared an idea I’d seen on ICT Works about how to set up the computers in a way that the teachers/instructor can see all the screens and know what kids are doing on them – this is more effective than putting filters and controls on the machines.

Speaking of controls: virus protection and flash drives

The negative impact of viruses on productivity in African countries has been covered by the media, and I can enthusiastically confirm it. I’ve wasted many hours because someone came in with a flash drive that infected all the computers we were using at a workshop. Our general rule is no flash drives allowed during the workshop period. I have no illusions, however, that the computers will remain flash-drive free forever. One good way to reduce the risk of these autorun viruses is to disable autorun on the computers. This takes about 2 minutes; after that, you just access flash drives manually by opening My Computer from the start menu. A second trick is to create your own autorun.inf file that the virus cannot replace, stopping it from propagating on your machine. Avast is free software that seems to catch most autorun viruses. Trend Micro doesn’t seem to do very well in West Africa.

Hands on, hands on, hands on

I cannot stress enough the importance of hands-on time. We try to make sure that there is a lot of free time at this kind of workshop for people to play around online. This usually means keeping the workshop space open for a couple of hours after the official workshop day has ended and opening up early in the morning. People will skip lunch, come early, and stay late for an opportunity to get on-line. Those with more experience can use that time to help others. People often use this time to help each other open personal email accounts and share their favorite sites.

No getting too technical

People don’t want to listen to a bunch of theory or mechanical explanations on how things work. They don’t need to see the inside of a CPU, for example. They need to know how to make things work for them.  And the only way they will figure it out is practice, trial and error, playing around.  If a few people in the workshop are really curious to know the mechanics of something, they will start asking (if the facilitator is approachable and non-threatening), but most people for starters just want to know how to use the tools.

No showing off

I’ll always remember my Kenyan colleague Mativo saying that in this kind of work, a facilitator’s main role is demystifying ICTs.  So that means being patient and never making anyone feel stupid for asking a question, or showing any frustration with them.  If someone makes a mistake or goes down a path and doesn’t know how to get back and the facilitator has to step in to do some ‘magic’ fixing, it’s good to talk people through some of the ‘fix’ steps in a clear way as they are being done.

My friend DK over at Media Snackers said that he noticed something when working with youth vs adults on Internet training: youth will click on everything to see what happens, while adults will ask what happens and ask for permission to click. [update: Media Snackers calls this the ‘button theory‘]. Paying close attention to each individual’s learning style and tendencies when facilitating, including those related to experience, rural or urban background, age, gender, literacy, other abilities and personality, and adjusting methodologies accordingly, helps everyone learn better.

Have fun!

Lightening up the environment and making it hands on lowers people’s inhibitions and helps them have the confidence to learn by doing.

**Check back soon for a second post about photography, filming, uploading and setting up a YouTube account….

Related posts on Wait… What?

Child protection, the media and youth media programs

On girls and ICTs

Revisiting the topic of girls and ICTs

Putting Cumbana on the map: with ethics


Read Full Post »

Last week some 40 people from more than 20 different organizations with national and global humanitarian and relief missions attended a Google Partnership Exploration Workshop in Washington, DC, to share information in an interactive setting and explore how the organizations and Google geo & data visualization technologies can further each other’s missions.

Lucky me – I got to go on behalf of Plan.  Much of the meeting centered on how Google’s tools could help in disasters and emergencies, and what non-profits would like to be able to do with those tools, and how Google could help.

The meeting opened with a representative of FEMA talking about the agency’s generally slow and government-centered response, and how that needed to turn into a quick, user-generated information network that could provide real information in real time so that FEMA could offer a real, people-centered response. I loved hearing someone from government say things like “we need to look at the public as a resource, not a liability.” The conclusion was that in a disaster or emergency you just need enough information to help you make a better decision. The public is one of the best sources for that information, but government has tended to ignore it because it’s not “official.” Consider: a 911 caller is not a certified caller with a background check and training on how to report, yet callers like that are the basis of our 911 emergency system. Why can’t it be the same in a disaster?

We also heard about the World Bank’s ECAPRA project for disaster preparedness in Central America. This project looks at probabilistic risk assessment, using geo-information to predict and assess where damage is likely. The main point from the WB colleagues was that for SDI (Spatial Data Infrastructure) we need policies, requirements and mandates, yes, but these are not sufficient – top down is not enough. We also need software that enables a bottom-up approach, and aligned incentives that can drive an open source agenda. But not just open source software – we’re talking mass collaboration, and that would change everything. The challenge, then, is how we help civil society and governments to share and deliver data that enables decision making. How do we support data collection from the top down and from the bottom up? The WB is working with developers on collaborative data collection mobile applications that allow people to easily collect information. In this system, different institutes still own the data but others can update and add to it. The WB hopes to embed this within Central American national disaster planning systems, and to train and support the national systems to use these tools. Each country is developing these open source applications itself, will be free to use the elements that best fit its local situation, and can choose the tools that work best for it.

Google stepped in then to share some Google visualizations — Google Fusion Tables, the Visualization API, the Chart API and Motion Charts (Gapminder). With these applications, different sources can share data, or share some data and keep other data private. You can compare data from different sources. For example, there is a chart currently residing in Google Fusion Tables that pulls GDP data from the CIA Fact Book, the World Bank and the IMF, and allows you to compare data across countries from different sources. You can then use that data to create your own data visualizations, including maps, tables, charts, and the fabulous Gapminder/motion visualization charts (first made popular at TED by Hans Rosling). These can all be easily transferred to your own webpage. If you have public data that deserves to be treated separately you can become a Google trusted source. (Click on the “information for publishers” link to see how to get your data made public.) For a quick tutorial on how to make your own cool Gapminder chart, check out this link. *Note: Gapminder is not owned by Google. Gapminder is a foundation of its own, totally independent from Google. Google bought the software [Trendalyzer] to improve the technology further.
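To give a sense of how lightweight these tools were to use, here is a minimal sketch of assembling a chart URL against the classic Image Charts endpoint (which Google has since retired). The `chart_url` helper, parameter choices and GDP figures are my own hypothetical illustration, not something shown at the workshop.

```python
from urllib.parse import urlencode

def chart_url(title, labels, values, size="400x200"):
    """Build a classic Google Image Charts URL for a simple bar chart.

    The returned URL could be dropped straight into an <img> tag on a
    web page; Google's servers rendered the chart image on the fly.
    """
    params = {
        "cht": "bvs",    # chart type: vertical stacked bars
        "chs": size,     # chart size, width x height in pixels
        "chtt": title,   # chart title
        "chd": "t:" + ",".join(str(v) for v in values),  # data series
        "chl": "|".join(labels),  # one label per bar
    }
    return "https://chart.googleapis.com/chart?" + urlencode(params)

# Hypothetical GDP-per-capita figures, for illustration only
url = chart_url("GDP per capita (USD)",
                ["Ghana", "Kenya", "Peru"],
                [1300, 1600, 5400])
```

The appeal for non-profits was exactly this: one URL, no charting library, no server-side code of your own.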

Next up was the American Red Cross, who shared some of the challenges they face and how they use geo-spatial and information mapping to overcome them. The Red Cross has a whole mobile data gathering system set up and works via volunteers during disasters to collect information. They also have over 30 years of disaster data that they can use to analyze trends. The ARC wants to do more with mapping and visualizations so that they can see what is happening right away, using maps and charts and analyzing trends. What does the ARC want to see from Google? A disaster dashboard, e.g. using Google Wave? Inventory tracking and mapping capability. Data mining and research capabilities such as those in Fusion Tables. They want people to be able to go to the ARC site and see not what the Red Cross is, but what the Red Cross does, and to use the site for up-to-date information that will help people manage during disasters and emergencies.

Wow, and this was all before lunch!

—————

Related posts:
I saw the future of geovisualization… after lunch
Is this map better than that map?
Ushahidi in Haiti:  what’s needed now

Read Full Post »