
(Joint post from Linda Raftree, MERL Tech and Megan Colnar, Open Society Foundations)

The American Evaluation Association Conference happens once a year and offers hundreds of sessions. It can take a while to sort through them all, and it’s easy to feel a bit lost in the crowds of people and content.

So, Megan Colnar (Open Society Foundations) and I thought we’d share some of the sessions that caught our eye.

I’m on the look-out for innovative tech applications, responsible and gender-sensitive data collection practices, and virtual or online/social media-focused evaluation techniques and methods. Megan plans to tune into sessions on policy change, complexity-aware techniques, and better MEL practices for funders. 

We both can’t wait to learn about evaluation in the post-truth and fake news era. Full disclosure: our own sessions are also featured below.

Hope we see you there!

Wednesday, November 8th

3.15-4.15

4.30-6.00

We also think a lot of the ignite talks during this session in the Thurgood Salon South look interesting, like:

6.15-7.15

7.00-8.30

Tour of a few poster sessions before dinner. Highlights might include:

  • M&E for Journalism (51)
  • Measuring Advocacy (3)
  • Survey measures of corruption (53)
  • Theory of change in practice (186)
  • Using social networks as a decision-making tool (225)


Thursday, Nov 9th

8.00-9.00 – early risers are rewarded with some interesting options

9.15-10.15

10.30-11.15

12.15-1.15

1.15-2.00

2.15-3.00

3.15-4.15

4.30-5.15


Friday, Nov 10th

8.00-9.30 – early risers rewarded again!

11.00-11.45

1.45-3.15

3.30-4.15

4.30-5.15

5.30-6.15 – if you can hold out for one more on a Friday evening

6.30-7.15


Saturday, Nov 11th – you’re on your own! Let us know what treasures you discover.



Earlier this month I attended the African Evaluators’ Conference (AfrEA) in Cameroon as part of the Technology and Evaluation stream organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation.

A first post about ICTs and M&E at the AfrEA Conference went into some of the deliberations around using or not using ICTs and how we can learn and share more as institutions and evaluators. I’ve written previously about barriers and challenges with using ICTs in M&E of international development programs (see the list of posts at the bottom of this one). Many of these same conversations came up at AfrEA, so I won’t repeat them here. What I did want to capture and share were a few interesting design and implementation thoughts from the various ICT and M&E sessions. Here goes:

1) Asking questions via ICT may lead to more honest answers. Some populations are still not familiar with smart phones and tablets, and this makes some people shy and quiet, yet it makes others more curious and animated to participate. Some people worry that mobiles, laptops and tablets create distance between the enumerator and the person participating in a survey. On the other hand, I’m hearing more and more examples of cases where using ICTs for surveying actually allows for a greater sense of personal privacy and more honest answers. I first heard about this several years ago in relation to children and youth in the US and Canada seeking psychological or reproductive health counseling. They seemed to feel more comfortable asking questions about sensitive issues via online chats (as opposed to asking a counselor or doctor face-to-face) because they felt anonymous. The same is true for telephone inquiries.

In the case of evaluations, someone suggested that rather than a mobile or tablet creating distance, a device can actually create an opportunity for privacy. For example, if a sensitive question comes up in a survey, an enumerator can hand the person being interviewed the mobile phone and look away when they provide their answer and hit enter, in the same way that waiters in some countries will swipe your ATM card and politely look away while you enter your PIN. Key is building people’s trust in these methods so they can be sure they are secure.

At a Salon on Feb 28, I heard about mobile polling being used to ask men in the Democratic Republic of Congo about sexual assault against men. There was a higher recorded affirmative rate when the question was answered via a mobile survey than when the question had been asked in other settings or through other means. This of course makes sense, considering that often when a reporter or surveyor comes around asking whether men have been victims of rape, no one wants to say publicly. It’s impossible to know in a situation of violence if a perpetrator might be standing around in the crowd watching someone getting interviewed, and clearly shame and stigma also prevent people from answering openly.

Another example at the AfrEA Tech Salon was a comparison study done by an organization in a slum area of Accra. Five enumerators who spoke local languages conducted Water, Sanitation and Hygiene (WASH) surveys by mobile phone using Open Data Kit (an open source survey application), and the responses were compared with the same survey done on paper. When people were asked in person by enumerators if they defecated outdoors, affirmative answers were very low. When people were asked the same question via a voice-based mobile phone survey, 26% of respondents reported open defecation.
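For anyone who wants to check whether a mode-of-survey difference like this is statistically meaningful, a simple two-proportion comparison works as a first pass. The sketch below uses illustrative numbers (the study’s actual sample sizes weren’t shared), and the function is my own, not part of any survey tool:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return the z statistic and two-sided p-value (normal approximation)
    for the difference between two sample proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. 26% affirmative via mobile survey vs a hypothetical 5% in person,
# assuming 200 respondents per mode (illustrative numbers only)
z, p = two_proportion_z(52, 200, 10, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With samples of this size the gap would be far too large to attribute to chance, which is consistent with the interpretation that survey mode itself changes what people report.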

2) Risk of collecting GPS coordinates. We had a short discussion on the plusses and minuses of using GPS and collecting geolocation data in monitoring and evaluation. One issue that came up was safety for enumerators who carry GPS devices. Some people highlighted that GPS devices can put staff/enumerators at risk of abuse from organized crime bands, military groups, or government authorities, especially in areas with high levels of conflict and violence. This makes me think that if geographic information is needed in these cases, it might be good to use a mobile phone application that collects GPS rather than a fancy smart phone or an actual GPS unit (for example, one could try out PoiMapper, which works on feature phones).

In addition, evaluators emphasized that we need to think through whether GPS data is really necessary at household level. It is tempting to always collect all the information that we possibly can, but we can never truly assure anyone that their information will not be de-anonymized somehow in the near or distant future, and in extremely high-risk areas the consequences could be severe. Many organizations do not have high-level security for their data, so it may be better to collect community or district level data than household locations. Some evaluators said they use ‘tricks’ to anonymize the geographical data, like pinpointing locations a few miles off, but others felt this was not nearly enough to guarantee anonymity.
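To make the trade-off concrete, here is a minimal Python sketch of the two tactics mentioned: random displacement (“jitter”), which many evaluators consider insufficient on its own, and aggregation to a coarser grid, roughly equivalent to keeping only community- or district-level location. The function names and parameters are mine, for illustration only:

```python
import math
import random

def jitter(lat, lon, max_km=3.0):
    """Displace a point randomly by up to max_km in each direction.
    NOTE: jitter alone is often not enough to guarantee anonymity."""
    km_per_deg = 111.0  # rough km per degree of latitude
    dlat = random.uniform(-max_km, max_km) / km_per_deg
    dlon = random.uniform(-max_km, max_km) / km_per_deg
    return lat + dlat, lon + dlon

def aggregate(lat, lon, cell_deg=0.1):
    """Snap a point to the centre of a ~11 km grid cell, i.e. keep only
    an area-level location instead of the household point."""
    def snap(x):
        return math.floor(x / cell_deg) * cell_deg + cell_deg / 2
    return snap(lat), snap(lon)
```

Aggregation is the safer default here: a jittered point can sometimes be reversed or narrowed down, while a grid cell never stored the household location in the first place.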

3) Devices can create unforeseen operational challenges at the micro-level. When doing a mobile survey by phone and asking people to press a number to select a particular answer to a question, one organization working in rural Ghana to collect feedback about government performance found that some phones were set to lock when a call was answered. People were pressing buttons to respond to phone surveys (press 1 for….), but their answers did not register because phones were locked, or answers registered incorrectly because the person was entering their PIN to unlock the phone. Others noted that when planning for training of enumerators or community members who will use their own devices for data collection, we cannot forget the fact that every model of phone is slightly different. This adds quite a lot of time to the training as each different model of phone needs to be explained to trainees. (There are a huge number of other challenges related to devices, but these were two that I had not thought of before.)

4) Motivation in the case of poor capacity to respond. An organization interested in tracking violence in a highly volatile area wanted to take reports of violence, but did not have a way to ensure a response from an INGO, humanitarian organization or government authority if/when violence was reported. This is a known issue: it is difficult to encourage reporting if responsiveness is low. To keep people engaged, the organization thanks people immediately for reporting and then sends peace messages and encouragement 2-3 times per week. Participants in the program have appreciated these ongoing messages, and participation has remained steady even though immediate help has not been provided as a result of reporting.

5) Mirroring physical processes with tech. One way to help digital tools gain more acceptance and to make them more user-friendly is to design them to mirror paper processes or other physical processes that people are already familiar with. For example, one organization shared their design process for a mobile application for village savings and loan (VSL) groups. Because security is a big concern among VSL members, the groups typically keep cash in a box with 3 padlocks. Three elected members must be present and agree to open and remove money from the box in order to conduct any transaction. To mimic this, the VSL mobile application requires 3 PINs to access mobile money or make transactions, and what’s more, the app sends everyone in the VSL Group an SMS notification if the 3 people with the PINs carry out a transaction, meaning the mobile app is even more secure than the original physical lock-box, because everyone knows what is happening all the time with the money.
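The three-PIN rule is easy to sketch in code. The class and method names below are hypothetical, invented for illustration; this is not the actual VSL application, just a minimal model of the logic described:

```python
import hashlib

def _digest(pin):
    # store PIN digests rather than the PINs themselves
    return hashlib.sha256(pin.encode()).hexdigest()

class SavingsGroup:
    def __init__(self, member_numbers, officer_pins):
        self.member_numbers = member_numbers  # every member's phone number
        self.pin_digests = {_digest(p) for p in officer_pins}  # 3 elected officers
        self.outbox = []  # stands in for an SMS gateway

    def withdraw(self, amount, entered_pins):
        # Mirror the physical lock-box: all three distinct officer PINs
        # must be entered for the transaction to go through.
        if {_digest(p) for p in entered_pins} != self.pin_digests:
            return False
        # Notify the whole group, as the app described above does.
        for number in self.member_numbers:
            self.outbox.append((number, f"Withdrawal of {amount} approved"))
        return True
```

The group-wide notification is what makes the digital version stronger than the padlocked box: the approval rule is the same, but now every member learns of every transaction.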

****

As I mentioned in part 1 of this post, some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward.

Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that are missing!

Previous posts on ICTs and M&E on this blog:


I attended the African Evaluators’ Conference (AfrEA) in Cameroon last week as part of the Technology and Evaluation strand organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation. The strand was a fantastic opportunity for learning, sharing and understanding more about the context, possibilities and realities of using ICTs in monitoring and evaluation (M&E). We heard from a variety of evaluators, development practitioners, researchers, tool-developers, donors, and private sector and government folks. Judging by the well-attended sessions, there is a huge amount of interest in ICTs and M&E.

Rather than repeat what I’ve written in other posts (see links at the bottom), I’ll focus here on some of the more relevant, interesting, and/or new information from the AfrEA discussions. This first post will go into institutional issues and the ‘field’ of ICTs and M&E. A second post will talk about design and operational tips I learned or was reminded of at AfrEA.

1) We tend to get stuck on data collection – Like other areas (I’m looking at you, Open Data), conversations tend to revolve around collecting data. We need to get beyond that and think more about why we are collecting data and what we are going to do with it (and do we really need all this data?). The evaluation field also needs to explore all the other ways it could be using ICTs for M&E, going beyond mobile phones and surveys. Collecting data is clearly a necessary part of M&E, but those data still need to be analyzed. As a participant from a data visualization firm said, there are so many ways you can use ICTs – they help you make sense of things, you can tag sentiment, you can visualize data and make data-based decisions. Others mentioned that ICTs can help us to share data with various stakeholders, improve sampling in RCTs (Randomized Control Trials), conduct quality checks on massive data sets, and manage staff who are working on data collection. Using big data, we can do analyses we never could have imagined before. We can open and share our data, and stop collecting the same data from the same people multiple times. We can use ICTs to share back what we’ve learned with evaluation stakeholders, governments, the public, and donors. The range of uses of ICTs is huge, yet the discussion tends to get stuck on mobile surveys and data collection, and we need to start thinking beyond that.

2) ICTs are changing how programs are implemented and how M&E is done — When a program already uses ICTs, data collection can be built in through the digital device itself (e.g., tracking user behavior, cookies, and via tests and quizzes), as one evaluator working on tech and education programs noted. As more programs integrate digital tools, it may become easier to collect monitoring and evaluation data with less effort. Along those lines, an evaluator looking at a large-scale mobile-based agricultural information system asked about approaches to conducting M&E that do not rely on enumerators and traditional M&E approaches. In his program, because the farmers who signed up for the mobile information service do not live in the same geographical community, traditional M&E approaches do not seem plausible and ICT-based approaches look like a logical answer. There is little documentation within the international development evaluation community, however, on how an evaluator might design an evaluation in this type of a situation. (I am guessing there may be some insights from market research and possibly from the transparency and accountability sectors, and among people working on “feedback loops”).

3) Moving beyond one-off efforts — Some people noted that mobile data gathering is still done mostly at the project level. Efforts tend to be short-term and one-off. The data collected is not well-integrated into management information systems or national level processes. (Here we may reference the infamous map of mHealth pilots in Uganda, and note the possibility of ICT-enabled M&E in other sectors going this same route). Numerous small pilots may be problematic if the goal is to institutionalize mobile data gathering into M&E at the wider level and do a better job of supporting and strengthening large-scale systems.

4) Sometimes ICTs are not the answer, even if you want them to be – One presenter (who considered himself a tech enthusiast) went into careful detail about his organization’s process of deciding not to use tablets for a complex evaluation across 4 countries with multiple indicators. In the end, the evaluation itself was too complex, and the team was not able to find the right tool for the job. The organization looked at simple, mid-range and highly complex applications and tools and after testing them all, opted out. Each possible tool presented a set of challenges that meant the tool was not a vast improvement over paper-based data collection, and the up-front costs and training were too expensive and lengthy to make the switch to digital tools worthwhile. In addition, the team felt that face-to-face dynamics in the community and having access to notes and written observations in the margins of a paper survey would enable them to conduct a better evaluation. Some tablets are beginning to enable more interactivity and better design for surveys, but not yet in a way that made them a viable option for this evaluation. I liked how the organization went through a very thorough and in-depth process to make this decision.

Other colleagues also commented that the tech tools are still not quite ‘there’ yet for M&E. Even top-of-the-line business solutions are generally found to be somewhat clunky. Million-dollar models are not relevant for the environments that development evaluators work in; in addition to their high cost, they often have too many features or require too much training. There are some excellent mid-range tools designed for the environment, but many lack vital features such as availability in multiple languages. Simple tools that are more easily accessible and understandable without a lot of training are not sophisticated enough to conduct a large-scale data collection exercise. One person I talked with suggested that the private sector will eventually develop appropriate tools, and the not-for-profit sector will then adopt them. She felt that those of us who are interested in ICTs in M&E are slightly ahead of the curve and need to wait a few years until the tools are more widespread and common. Many people attending the Tech and M&E sessions at AfrEA made the point that use of ICTs in M&E would get easier and cheaper as the field develops, tools become more advanced, appropriate, user-friendly and widely tested, and networks, platforms and infrastructure improve in less-connected rural areas.

5) Need for documentation, evaluation and training on use of ICTs in M&E – Some evaluators felt that ICTs are only suitable for routine data collection as part of an ongoing program, but not good for large-scale evaluations. Others pointed out that the notions of ‘ICT for M&E’ and ‘mobile data collection/mobile surveys’ are often used interchangeably, and evaluation practitioners need to look at the multiple ways that ICTs can be used in the wider field of M&E. ICTs are not just useful for moving from paper surveys to mobile data gathering. An evaluator working on a number of RCTs mentioned that his group relies on ICTs for improving samples, reducing bias, and automatically checking data quality.

There was general agreement that M&E practitioners need resources, opportunities for more discussion, and capacity strengthening on the multiple ways that ICTs may be able to support M&E. One evaluator noted that civil society organizations have a tendency to rush into things, hit a brick wall, and then cross their arms and say, “well, this doesn’t work” (in this case, ICTs for M&E). With training and capacity, and as more experience and documentation is gained, he considered that ICTs could have a huge role in making M&E more efficient and effective.

One evaluator, however, questioned whether having better, cheaper, higher quality data is actually leading to better decisions and outcomes. Another evaluator asked for more evidence of what works, when, with whom and under what circumstances so that evaluators could make better decisions around use of ICTs in M&E. Some felt that a decision tree or list of considerations or key questions to think through when integrating ICTs into M&E would be helpful for practitioners. In general, it was agreed that ICTs can help overcome some of our old challenges, but that they inevitably bring new challenges. Rather than shy away from using ICTs, we should try to understand these new challenges and find ways to overcome/work around them. Though the mHealth field has done quite a bit of useful research, and documentation on digital data collection is growing, use of ICTs is still relatively unexplored in the wider evaluation space.

6) There is no simple answer. One of my takeaways from all the sessions was that many M&E specialists are carefully considering options, and thinking quite a lot about which ICTs for what, whom, when and where rather than deciding from the start that ICTs are ‘good and beneficial’ or ‘bad and not worth considering.’ This is really encouraging, and to be expected of a thoughtful group like this. I hope to participate in more discussions of this nature that dig into the nuances of introducing ICTs into M&E.

Some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward. The “field” of ICTs in M&E is quite broad, however, and there are many ways to slice the cake. Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that you think are missing!

(Part 2 of this post)

Previous posts on ICTs and M&E:


I spent last week at the International Open Government Data Conference (IOGDC), put on by Data.gov, the World Bank Open Data Initiative and the Open Development Technology Alliance. For a full overview, watch some of the presentations and read the liveblog.

A point made by several presenters and panelists is that the field has advanced quite a bit in terms of getting data open, and that what really matters now is what people are doing with the data to improve and/or change things.

One of the keynoters, David Eaves, for example, commented: “the conferences we organize have got to talk less and less about how to get data open and have to start talking more about how do we use data to drive public policy objectives. I’m hoping the next International Open Government Data Conference will have an increasing number of presentations by citizens, non-profits and other outsiders [who] are using open data to drive their agenda, and how public servants are using open data strategically to drive to a[n] outcome.”

There were some great anecdotal examples throughout the conference of how open data are having impact, especially in 4 key areas:

  1. economic growth/entrepreneurship
  2. transparency, accountability and governance
  3. improved resource allocation and provision of services
  4. connecting data dots and telling stories the public needs to know

There was also quite a bit of recognition that we need more evidence to back up the anecdotal success stories told around open data, and that it’s difficult to trace all the impact that open data are having because of the multiple and unintended impacts.

On the other hand, the question was raised: “is open data part of the public’s right to information?” Because if we conceive of open data as a right, the framework and conceptualization change, as do, perhaps, the measures of success.

Several panelists mentioned some of the big challenges around engaging citizens to use open data for social change, especially in areas with fewer resources and low access to the Internet.

If one of the key next steps is engaging citizens in using open data, we all need to think more about how to overcome barriers like language, literacy, who owns and accesses devices, low education levels, low capacity to interpret data, information literacy, power and culture, apathy, lack of incentive and motivation for citizen engagement, and poor capacity of duty bearers/governments to respond to citizen demand. (For more on some of these common challenges and approaches to addressing them, see 15 thoughts on good governance programming with youth.)

On the last day of IOGDC we had the opportunity to suggest our own topics during Open Space. I suggested the topic “Taking Open Data Offline” because it seems that often when we imagine all the fantastic possibilities of open data, we forget how many people live in remote, rural areas (or urban areas with poor infrastructure) where there is no broadband and where many of the above-mentioned barriers are very high. (See a Storified summary of our conversation here: #IOGDCoffline.)

The solutions most often mentioned for getting data into the hands of ordinary citizens are Internet and mobile apps. Sometimes when I’m around open data folks, I do a double take because the common understanding of ‘infomediary’ is ‘the developer making the mobile app’. This seems to ignore that, as Jim Hendler noted during the IOGDC pre-conference, some 75% of the world’s population is still offline.

We need to expand the notion of ‘infomediary’ in these discussions to think about the range of people, media, organizations and institutions who can help close the gap between big data and the average person, both in terms of getting open data out to the public in digestible ways and in terms of connecting local knowledge, information needs, feedback and opinions of citizens back to big data. There will need to be a wide range of infomediaries using a number of different communication tools and channels in order to really make open data accessible and useful.

Though things are changing, the majority of folks in the world don’t yet have smart phones. In a sense, the ‘most marginalized’ could be defined as ‘those who don’t have mobile phones.’ And even people who do have phones may not choose to spend their scarce resources to access open data. Data that is available online may be in English or one of only a few major languages. Most people in most of the world don’t purchase data packages, they buy pre-paid air time. The majority don’t have a constant connection to the cloud but rather rely on intermittent Internet access, if at all.

In addition, in areas where education levels are low or data interpretation skills are not strong, people may not have the skills to make use of open data found online. So other communications tools, channels and methods need to be considered for making open data accessible to the broader public via different kinds of intermediaries and infomediaries, multi-direction information sharing channels, feedback loops and combinations of online/offline communication. People may even need support formulating the questions they want answered by open data, considering that open data can be a very abstract concept for those who are not familiar with the Internet and the use of data for critical analysis.

Some great ideas on how to use SMS in open data and open government and accountability work exist, such as Huduma, UReport, I Paid A Bribe, and more. Others are doing really smart thinking about how to transform open data into engaging media for a general audience through beautiful graphics that allow for deep analysis and comparison and that tell compelling stories that allow for a personal connection.

We need to think more, however, about how we can adapt these ideas to offline settings, how to learn from approaches and methods that have been around since the pre-Internet days, and how to successfully blend online-offline tools and contexts for a more inclusive reach and, one hopes, a wider and broader impact.

‘Popular Education’ and ‘Communication for Development (C4D)’ are two fields the open data movement could learn from in terms of including more remote or ‘marginalized’ populations in local, national and global conversations that may be generated through opening up data.

I remember being on a 10-hour ride to Accra one time from the Upper West Region of Ghana. The driver was listening to talk radio in a local language. At one point, someone was reading out a list. I couldn’t understand what was being said, but I could tell the list contained names of communities and districts. The list-reading went on for quite a long time. At one point, the driver cheered and pumped his fist. I asked what he was happy about and he explained that they were reading the list of where the government would be constructing schools and assigning teachers in the next year. His community was going to get a secondary school, and his children would not have to travel far now to continue their education. Radio is still one of the best tools for sharing information. Radio can be combined with activities like ‘Listening Clubs’ where groups gather to listen and discuss. Integrating SMS or call-in options makes it possible for radio stations to interact more dynamically with listeners. Tools like FrontlineSMS Radio allow tracking, measuring and visualizing listener feedback.
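As a rough illustration of what tracking listener feedback involves, here is a hypothetical sketch of tallying incoming SMS by keyword. It is written in the spirit of tools like FrontlineSMS Radio but is not that tool’s actual API; the function and its parameters are invented:

```python
from collections import Counter

def tally_feedback(messages, keywords=("yes", "no")):
    """Count incoming SMS (sender, text) pairs by their first recognised
    keyword; anything else is set aside for manual review by the radio team."""
    counts, review = Counter(), []
    for sender, text in messages:
        word = text.strip().split()[0].lower() if text.strip() else ""
        if word in keywords:
            counts[word] += 1
        else:
            review.append((sender, text))
    return counts, review
```

A radio show could announce “text YES or NO to 1234” and read the running tally on air, while the messages that don’t match a keyword often turn out to be the richest qualitative feedback.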

I lived in El Salvador during the 1990s. Long and complicated Peace Accords were signed in 1992 after 12 years of civil war. A huge effort was made to ‘popularize’ the contents of the Peace Accords so that the whole country would know what the agreements were and so that people could hold the different entities accountable for implementing them. The contents of the Peace Accords were shared via comics, radio, public service announcements, and a number of other media that were adapted to different audiences. Local NGOs worked hard to educate the affected populations on the rights they could legally claim stemming from the Accords and to provide support in doing so.

When the Civil War ended in Guatemala a few years later, the same thing happened. Guatemala, however, had the additional complication of 22 indigenous languages. Grassroots ‘popular education’ approaches in multiple languages were used by various groups across the country in an effort to ensure that no one was left out, to help develop ‘critical conscience’ and critical thinking around the implementation of the Peace Accords, and to involve the public in the work around the Truth Commission, which investigated allegations of human rights violations during the war and opened them to the public as part of the reconciliation process. Latin America (thanks to Paulo Freire and others) has a long history of ‘popular education’ approaches and methods that can be tapped into and linked with open data. Open data can be every bit as complicated as the legalistic contents of the Peace Accords, and it is likely that data that is eventually opened will link with issues (lack of political participation, land ownership patterns, corruption, political favoritism, poor accountability and widespread marginalization) that were the cause of conflict in past decades.

The Mural of the People / O Mural do Povo from Verdade in Mozambique. “The marvelous Mozambican public will attribute each year the ‘Made in Frelimo Oscar of Incompetence’…  The 2012 candidates are…”

In Mozambique, 75% of the population live on less than $1.25 a day. Newspapers costing between 45 and 75 cents are considered a luxury. The 20,000 issues of the free and widely circulated Verdade Newspaper, which comes out once a week, reach an estimated 400,000 people in Maputo and a few other cities as they are read and passed around to be re-read. Verdade reaches an additional audience via Facebook, Twitter and YouTube, and a ‘Mural of the People’ where the public can participate and contribute their thoughts and opinions old-school style — on a public chalkboard. The above February 7, 2012, mural, for example, encouraged the population to vote in the ‘Incompetence Oscars – Made in Frelimo [the current government party]’. Candidates for an Incompetence Oscar included the “Minister of Unfinished Public Works.” (More about Verdade here.) This combination of online and offline tools helps spread news and generate opinion and conversation on government performance and accountability.

Social accountability tools like community scorecards, participatory budget advocacy, social audits, participatory video, participatory theater and community mapping have all been used successfully in accountability and governance work, and in some cases would be more appropriate tools than Internet and mobile apps to generate citizen engagement around open data. Combining new ICTs with these well-established approaches can help take open data offline and bring community knowledge and opinions online, so that open data is not strictly a top-down thing, and so that community knowledge and processes can be aggregated, added to or connected back to open data sets and more widely shared via the Internet (keeping in mind a community’s right also to not have their data shared).

A smart combination of information and communication tools – whether Internet, mobile apps, posters, print media, murals, song, drama, face-to-face, radio, video, comics, community bulletin boards, open community fora or others – together with a bottom-up, consultative, ‘popular education’ approach could really help open data reach a wider group of citizens. It could equip them not only with information but with a variety of channels through which to participate in defining the right questions to ask, and with a wider skill set to use open data to question power and push for more accountability and positive social change.

Related posts on Wait… What?:

ICTs, social media, local government and youth-led social audits

Digital mapping and governance: the stories behind the maps

What does ‘open knowledge’ have to do with ‘open development’?

15 thoughts on good governance programming with youth

Governance is *so* not boring

Young Citizens, Youth and Participatory Governance in Africa

A practitioners’ discussion on social accountability and youth participatory governance

Can ICTs support accountability and transparency in education?

Orgasmatron moments

Listening and feedback mechanisms

Read Full Post »

Starting yesterday (I’m late on writing about this one!), people from across the African continent are meeting at the Conference on Child Protection Systems Strengthening in Sub-Saharan Africa in Dakar, Senegal, to discuss how child protection systems work effectively in the African context. Follow the conference at the #cps12 hashtag or via @CPSystemsS, and check out a day one summary on Storify.

“Dozens of African countries are engaged in strengthening child protection systems and mobilizing new sectors for child protection. For example, education and health sectors are engaged in violence prevention work, and social protection is becoming an essential part of efforts to reduce child labour and child marriage. Child justice initiatives are being embedded in broader national justice and security reforms and the health sector is supporting birth registration. Mobile technologies are being used for the reporting of violence, family reunification and rapid assessments. Donors are also increasingly supporting child protection systems….

“’Just the way a health system deals with many diseases, a child protection system addresses a broad range of violations of children’s rights. Children cannot be protected effectively unless social workers, police officers, justice servants, teachers and health workers and communities work together to prevent and respond to abuse and violence…. Investment in national child protection systems leads to better outcomes for children because of children’s improved access to protection services, new investments in frontline workers to identify and respond to children in need; and improved partnerships to mobilize and use resources for children, families and communities,’ according to Joachim Theis, Regional Child Protection Advisor, UNICEF, West and Central Africa.”

Key conference themes include:

  • Mapping and assessment of national child protection systems
  • Strategies and approaches to strengthening national child protection systems
  • Community based child protection mechanisms
  • Children as actors and partners in child protection systems
  • Social welfare workforce strengthening
  • Services delivery in Sub-Saharan Africa
  • Aligning traditional child protection agendas with child protection systems
  • Strengthening monitoring and evaluation for child protection
  • Mobilizing resources for child protection systems

Resources and a discussion forum, including relevant background papers, are available at wiki.childprotectionforum.org.

A few webinars are recorded here on the topics of systems strengthening, budgeting for national systems, and core competencies for better national child protection systems.

There’s also a good paper on community-based child protection systems that I summarized a while back.

Coming up soon, a few of us from different organizations will be looking more closely at the role new ICTs can play in child protection systems. Some examples of ICTs and child protection systems are here.

(Thanks to Joachim Theis at UNICEF West Africa for sending over the info on this one.)

Read Full Post »