I used to write blog posts two or three times a week, but things have been a little quiet here for the past couple of years. That’s partly because I’ve been ‘doing actual work’ (as we like to say), trying to implement the theoretical ‘good practices’ that I like soapboxing about. I’ve also been doing some writing in other places, in ways that I hope might be more rigorously critiqued and thus have a wider influence than simply posting here would.
One of those bits of work that’s recently been released publicly is a first version of a monitoring and evaluation framework for SIMLab. We started discussing this at the first M&E Tech conference in 2014, and Laura Walker McDonald (SIMLab CEO) outlines why in a blog post:
Evaluating the use of ICTs—which are used for a variety of projects, including legal services, coordinating responses to infectious diseases, media reporting in repressive environments, transferring money among the unbanked, and voting—can hardly be reduced to a checklist. At SIMLab, our past nine years with FrontlineSMS has taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk, even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove to be unsustainable.
For these and many other reasons, it’s critical that we know which tools do and don’t work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which approaches should be consigned to history. Even for widely-used platforms, adoption doesn’t automatically mean evidence of impact….
FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of the way that the platform has impacted the project or organization it was implemented in. Evaluations rely on anecdotal data, or focus on the impact of the intervention, without isolating how the technology has affected it. Many do not consider whether the rollout of the software was well-designed, training effectively delivered, or the project sustainably planned.
As an organization that provides technology strategy and support to other organizations, both large and small, SIMLab needs to better understand the quality of that support and how it translates into improvements, as well as how the introduction or improvement of information and communication technology contributes to impact at a broader scale.
This is a difficult proposition, given that isolating a single factor like technology is extremely tough, if not impossible. The Framework thus aims to get at the breadth of considerations that go into successful tech-enabled project design and implementation. It does not aim to attribute impact to a particular technology, but to better understand that technology’s contribution to the wider impact at various levels. We know this is incredibly complex, but thought it was worth a try.
As Laura notes in another blog post:
One of our toughest challenges while writing the thing was to try to recognize the breadth of success factors that we see as contributing to success in a tech-enabled social change project, without accidentally trying to write a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators – the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted them in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and consider the criteria for technology-enabled social change projects.
Here are the adapted criteria, which you can read more about in the Framework. They were designed for internal use, but we hope they might be useful to evaluators of technology-enabled programming, commissioners of evaluations of these programs, and those who want to do in-house examination of their own technology-enabled efforts. We welcome your thoughts and feedback: the Framework is published in draft form in the hope that others working on similar challenges can help make it better, and can pick up and use any part of it that is helpful to them. The document includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample log frames and evaluator terms of reference.
Happy reading, and we really look forward to any feedback and suggestions!
*****
The Criteria
Criterion 1: Relevance
The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.
Consider: Are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design – particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?
Criterion 2: Effectiveness
A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.
Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms may be designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc.) matters in terms of cost and effectiveness. Was the project monitored, and were early snags and breakdowns identified and fixed? Was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.
Criterion 3: Efficiency
Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach possible (including both the tech itself, and what it takes to sustain and use it) in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and at the degree to which a new tool or platform can support a reduction in cost and time, along with an increase in the quality of data and/or services and in reach/scale.
Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?
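To make that kind of cost comparison a little more concrete, here is a minimal sketch of the back-of-the-envelope calculation an evaluator might run when weighing a new channel against the one it replaced. Every figure and name in it is a hypothetical placeholder rather than SIMLab data; a real comparison would draw on the project’s own budget lines for communication, digitization, transportation, processing, support and upgrades.

```python
# A rough, illustrative cost-per-output comparison of the kind the efficiency
# criterion calls for. All numbers below are made-up placeholders.

def cost_per_output(fixed_costs, variable_cost_per_output, outputs):
    """Total cost of ownership divided by the number of outputs produced."""
    return (fixed_costs + variable_cost_per_output * outputs) / outputs

reports = 10_000  # hypothetical: incident reports processed over a year

# Hypothetical SMS-based channel: hosting, training and user support as fixed
# costs; per-message fees and data entry as the variable cost per report.
sms_cost = cost_per_output(fixed_costs=12_000, variable_cost_per_output=0.25, outputs=reports)

# Hypothetical paper-based alternative: lower fixed costs, but printing,
# transport and manual processing push up the cost of each report.
paper_cost = cost_per_output(fixed_costs=3_000, variable_cost_per_output=1.40, outputs=reports)

print(f"SMS channel:   ${sms_cost:.2f} per report")
print(f"Paper channel: ${paper_cost:.2f} per report")
```

Even then, the cheaper option is not automatically the better one: as noted above, efficiency has to be read alongside effectiveness, data quality and reach before a tool or channel is judged.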
Criterion 4: Impact
Impact relates to the consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP’s complex emergencies evaluation criteria include ‘coverage’ as well as impact: ‘the need to reach major population groups wherever they are.’ They note: ‘in determining why certain groups were covered or not, a central question is: “What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?”’ This is very relevant for us.
For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of the technology itself, and evaluating its inclusive function or contribution. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.
Consider: To what extent does the choice of communications channels or tools enable wider and/or higher quality participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were planned in case security holes were found or suspected? Were there any breaches? How were they addressed?
Criterion 5: Sustainability
Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.
Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept been shown to be realistic?
Criterion 6: Coherence
The OECD-DAC does not have a sixth criterion. However, we’ve riffed on ALNAP’s additional criterion of Coherence, which relates to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader market, whether local, national or international. This includes compliance with national and international regulation and law.
Consider: Has the project considered interoperability of platforms (for example, ensured that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project is in compliance with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? For example, in an emergency situation, is it linking its information system with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government policy or wider development policy shifts?
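Because the questions about APIs and standard data formats can feel abstract, here is a minimal sketch of what ‘data export is possible’ can look like in practice. The field names and records are invented purely for illustration and are not drawn from any particular SIMLab platform.

```python
# A minimal sketch of the data portability the coherence criterion asks about:
# exporting a platform's records in a standard format (CSV here) so that other
# tools in the ecosystem can consume them. Fields and records are hypothetical.
import csv

records = [
    {"id": 1, "channel": "sms",   "submitted_at": "2016-03-01T10:15:00Z", "status": "resolved"},
    {"id": 2, "channel": "voice", "submitted_at": "2016-03-02T14:40:00Z", "status": "open"},
]

with open("export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "channel", "submitted_at", "status"])
    writer.writeheader()
    writer.writerows(records)
```

A documented export like this, or an equivalent API endpoint, is one concrete signal that a tool is not locking its data away from the wider ecosystem it is supposed to fit into.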