Performance Measurement And Management – Future Critical Tools For Social And Impact Investors



16th November 2019

Performance measurement and management practices – monitoring, evaluation and impact assessment – play an invaluable role in helping organisations understand how to become more effective. Lessons from past evaluation studies have shaped our current approach to development.

The field of performance management continues to evolve in response to key pressures, reflecting and influencing trends in development thinking.

While the tools for performance management and the approaches used in evaluative research are sometimes hotly debated, we have learned that performance measurement is best used flexibly.

The method of assessment should be based on what is most useful for answering the questions at hand. Measurement is effective when it makes a real difference by delivering credible evidence on what works and why, to feed into policies and programme decisions.

Based on our experience of conducting numerous impact assessments, Next Generation proposes the following lessons to support best practice in the performance measurement of social investment and development initiatives, spanning monitoring, evaluation and impact assessment:

Lessons learned - 1: Use the data

Performance management can help organisations understand if they are working effectively and efficiently towards their goals, as well as identify which results they are achieving, and how.

What is required

Creating an evidence-based organisation is as much about politics and culture as it is about regulations or institutional set-up.

Evidence-based strategies and policymaking require commitment and buy-in from top levels to ensure a focus on understanding what works and why and being accountable for results.

Best practice suggestions

Policymakers, investment managers and programme staff must be willing to build systems to become a knowledge-based organisation.

Regular interaction between evaluation units and management helps create a productive dialogue and maintain interest in evaluation findings.

Lessons learned - 2: Learning is the reason for conducting measurement

Organisations that have a strong culture of learning tend to have greater impact, be more adaptable and less prone to failure. Investors and development agencies that adopt an institutional attitude that encourages critical thinking and a willingness to adapt and improve continuously will be more effective in achieving their goals.

What is required

Development is a complex business and many areas are still riddled with knowledge gaps.

In many cases, it is not entirely clear which approach will work best, why or how.

Careful experimentation is required to figure out what approaches are working and why, and what contextual or systemic factors are critical to achieving desired outcomes.

Evaluation and impact assessment are key links in this learning chain.

Best practice suggestions

Managers must make it acceptable to take calculated risks. They should encourage staff to reflect critically on their experiences to identify lessons they can apply in future.

The conclusions of evaluations are not always positive – sometimes they will reveal ineffective programmes or unintended harm caused by development activities.

Leaders need to understand that embracing an evaluative approach entails risk, both to them as individuals responsible for aid expenditures and to the credibility of the development cooperation programme.

Key steps to strengthen learning:

  • Integrate evaluation findings with policymaking and strategic planning at all levels of the organisation.
  • Use evaluation and impact assessment outcomes as important instruments for knowledge management.
  • Set a positive “tone from the top” where senior managers show clear support for learning and accept both positive and negative findings from evaluations and impact assessments.
  • Create incentives and systems to ensure that learning (including lessons from other evaluations) becomes part of normal daily business.

Lessons learned - 3: Be purposeful in measurement practices

The function of evaluation and impact assessment is to provide credible, independent evidence about the relevance, effectiveness, efficiency, impact and sustainability of development activities.

This information is used to support learning, test theories about how development results can be achieved, strengthen programme design and management, and inform decision-makers.

What is required

The quality of, interest in, and use of evaluation and impact assessment findings will be strengthened if decision-makers, management, staff and partners understand the role performance management plays in developmental operations.

Without this understanding, stakeholders risk viewing evaluation and assessment negatively as a burden that gets in the way of their work, rather than as a valuable support function.

Best practice suggestions

It is helpful to set out the role, function and value of performance management in a formal policy, either as a specific evaluation or performance management policy or as part of the overall development strategic framework.

The evaluation policy should be linked to other relevant policies and strategies, providing an appropriate institutional framework.

Performance management policies should describe what types of programmes and policies will be evaluated/assessed, how often and by whom, and create incentives for coordinating evaluation work with other development actors.
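To make this concrete, a coverage policy can be captured in a structured form so that the rules (what gets evaluated, how often, by whom) are explicit and auditable. The sketch below is purely illustrative: the programme categories, frequencies, thresholds and evaluator assignments are assumptions for the example, not prescriptions from any particular framework.

```python
from dataclasses import dataclass

@dataclass
class EvaluationRule:
    """One coverage rule from a performance management policy (hypothetical)."""
    programme_type: str        # which programmes the rule applies to
    frequency_years: int       # how often an evaluation/assessment is due
    evaluator: str             # who carries it out
    joint_with_partners: bool  # whether joint work with other actors is expected

# Illustrative policy: every value here is an assumption for the example.
POLICY = [
    EvaluationRule("large programme (> USD 5m)", 2, "independent external evaluator", True),
    EvaluationRule("pilot / innovative programme", 1, "internal evaluation unit", False),
    EvaluationRule("standard programme", 3, "internal evaluation unit", True),
]

for rule in POLICY:
    print(f"{rule.programme_type}: every {rule.frequency_years} year(s), "
          f"by {rule.evaluator}, joint={rule.joint_with_partners}")
```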

Lessons learned - 4: Match resources with expectations

This includes creating dedicated performance management capacity to produce and use evaluations, building evaluation competence in operational and management units, and providing funding for commissioning studies, data collection, knowledge management and the dissemination of lessons. Human resources are just as necessary as funding.

What is required

A lack of proper resourcing can have damaging effects on performance management units, as well as on the broader culture of learning and the effectiveness of the organisation overall.

Performance management can help save money by increasing efficiency and impact. The funding of performance measurement activities should therefore be understood in terms of benefits as well as costs.

Best practice suggestions

Project, programme or organisational evaluations should be funded by allocating a portion of programme budgets to evaluation/assessments.

There has been considerable debate about what percentage of funds is required to ensure adequate monitoring, evaluation and assessments.

Most organisations pre-allocate a fixed percentage of programme budgets for this purpose.

Funds are also needed for strategic evaluation and analysis of organisational effectiveness.
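As a minimal illustration of such pre-allocation, the arithmetic is simple; the 3% rate used below is a placeholder assumption, since, as noted above, the appropriate percentage remains debated.

```python
def evaluation_budget(programme_budget: float, allocation_rate: float = 0.03) -> float:
    """Return the pre-allocated evaluation/assessment line for a programme.

    The 3% default is a placeholder only; the right rate is debated
    and varies by organisation and programme type.
    """
    return programme_budget * allocation_rate

# A USD 2m programme with a 3% pre-allocation reserves USD 60,000
# for monitoring, evaluation and assessment.
print(evaluation_budget(2_000_000))  # 60000.0
```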

Lessons learned - 5: Strengthen resources with processes and systems

To be evaluated/assessed effectively, interventions must be able to demonstrate their intended results in measurable terms.

Programme design, monitoring, performance and knowledge management systems as well as evaluation and assessment competencies, skills and resources are prerequisites for high-quality, efficient evaluation.

What is required

Many interventions are ill-conceived and their goals not well-articulated; risks are not properly identified and assessed; intervention theories are too general; appropriate indicators are not clearly defined; baseline data is missing; there is no representative sampling; or it is not clear whether the activities were implemented as intended.

The result of such weaknesses in programme design and implementation is that evaluation/assessment studies cannot report sufficiently on results at the level of outcomes or impact.

Best practice suggestions

Performance management is not a standalone function. It relies on management systems to provide details on the intervention being evaluated, information on the context in which it is implemented and data on key indicators.

It is important to institute a systematic process to review programmes at the design phase to make sure they have well-defined objectives and a clear programme logic, and to check that they can readily be monitored, evaluated and assessed.

These criteria should be taken into account during the programme approval process.
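One way to make such a design-phase review systematic is a simple readiness checklist that mirrors the weaknesses listed under "What is required" above. The sketch below is illustrative; the criteria and field names are assumptions chosen to match the gaps described in this section.

```python
from dataclasses import dataclass, fields

@dataclass
class DesignReview:
    """Design-phase readiness check; each flag mirrors a weakness named above."""
    objectives_well_defined: bool
    programme_logic_clear: bool
    risks_identified_and_assessed: bool
    indicators_clearly_defined: bool
    baseline_data_available: bool
    sampling_plan_representative: bool

    def gaps(self) -> list[str]:
        """Return the criteria that are not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

review = DesignReview(True, True, False, True, False, True)
if review.gaps():
    print("Not ready for approval; address:", ", ".join(review.gaps()))
```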

Lessons learned - 6: Be realistic about the expected results and ask the right questions

While performance measurement is a useful input, performance management alone cannot solve all the problems of learning and accountability faced by a social investor or a development agency.

What is required

Social investors are increasingly pursuing more complex, longer-term objectives in partnership with other actors. They are under pressure to demonstrate the results of development assistance.

Many are trying hard to report progress on high-level results and to link outcomes to specific activities they finance or have supported. There is widespread insistence on seeing tangible results and on making every cent count.

Sometimes this creates unrealistic expectations.

Best practice suggestions

Managers of development programmes should work closely with performance management units to identify strategic questions of interest and set out a research agenda to address them.

Performance management units should prioritise and choose themes, sectors and policy issues of high importance, focusing on evaluation/assessment topics with the most learning potential.

One way to focus performance management efforts is by identifying where the greatest risks to the investor lie, and focusing assessment efforts in high-risk areas, for instance large, high-cost programmes, or innovative programmes about which there is little information.
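As a sketch of this risk-based approach, programmes can be ranked by a simple composite score and the highest-risk ones assessed first. The weights and inputs below are illustrative assumptions, not a validated risk model.

```python
def risk_score(cost_usd: float, evidence_base: float, is_innovative: bool) -> float:
    """Rough composite risk score (illustrative weights).

    cost_usd:      programme budget; larger programmes carry more exposure
    evidence_base: 0.0 (little prior evidence) to 1.0 (well-understood approach)
    is_innovative: novel programmes about which little information exists
    """
    score = cost_usd / 1_000_000          # scale cost to millions
    score += (1.0 - evidence_base) * 5.0  # unknowns raise assessment priority
    if is_innovative:
        score += 3.0
    return score

portfolio = {
    "vocational training fund": risk_score(8_000_000, 0.7, False),
    "mobile health pilot": risk_score(1_500_000, 0.2, True),
}
# Assess the highest-risk programmes first.
for name, score in sorted(portfolio.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.1f}")
```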

Another approach is to start from the basis of what information is needed for programme or policy reforms and to derive measurement themes and topics from those needs.

Lessons learned - 7: Choose the right tools

Performance management and measurement activities should use suitable research methods and statistical analysis to answer the evaluation questions and assess the development activity in terms of relevance, effectiveness, efficiency, impact and sustainability.

What is required

Funders should have an evaluation guide or manual that outlines the range of evaluation tools and methods available, offering a menu of options.

There may be trade-offs between different objectives for an evaluation.

Potential users should be consulted to ensure that the methods chosen will produce the information that will meet their needs.

Best practice suggestions

There is no single best method for evaluating or assessing all development interventions.

Evaluations and assessments are most likely to be useful when the methodology fits the questions, the context and the programme.

The important thing is flexibility in selecting an approach that is best suited to each evaluation/assessment.

Such “best fit” methods are most likely to produce useful, relevant and credible evidence. An overly rigid approach can stifle critical thinking.

Lessons learned - 8: Impact is collective

Performance management should involve all relevant stakeholders (governments, other funders, intermediaries and beneficiaries) as suited to the evaluation topic at hand.

Evaluations and impact assessments can be carried out jointly, involving other partners in the management or steering group.

Other ways of collaborating include sharing data, reports or context analyses, commissioning syntheses of evaluation findings, meta-evaluations and peer reviews, and developing common definitions and methodologies.

What is required

Joint performance management activities can be useful entry points for working with partners because they provide opportunities to discuss and build consensus on strategic objectives, as well as to critically analyse the approach to development and assess shared results.

Collaboration in planning and carrying out evaluations/assessments can create a basis for mutual accountability.

Collecting evidence from intended beneficiaries and local partners can also help ensure that the evaluation/impact analysis accurately reflects realities on the ground.

Best practice suggestions

Documents initiating a development cooperation partnership (e.g. joint assistance strategies or memoranda of agreement) should include plans for joint performance management.

Another useful way to create incentives for conducting evaluations/assessments jointly is to make this a requirement in the performance management policy.

Lessons learned - 9: Act on the findings

Performance management findings should influence the decisions and actions of investors, development agencies and government development policymakers. Procedures should be put in place to ensure that appropriate and timely actions are taken by those responsible for strategic planning, programme design and oversight. This usually involves a formal performance management system.

What is required

Investors and development organisations need to put adequate attention and resources into communicating the outcomes of their work.

The effort of performance management and measurement will be wasted if the final report goes unnoticed and unused.

Performance management systems ensure that all stakeholders are aware of relevant findings and take action to correct or improve future programmes.

The opportunity to identify lessons and act on evidence must be embedded in the overall design and management of development activities.

Institutional mechanisms for following up on findings are also required to ensure an adequate and timely response from management.
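In practice, such a mechanism often takes the form of a management response log: each finding is recorded with an agreed action, an owner and a deadline, and the log is reviewed until items are closed. The sketch below is a minimal, hypothetical illustration of such a tracker.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ManagementResponse:
    """One evaluation finding and the follow-up action agreed by management."""
    finding: str
    agreed_action: str
    owner: str
    due: date
    status: str = "open"  # open -> in progress -> closed

log = [
    ManagementResponse(
        finding="Baseline data missing in two regions",
        agreed_action="Commission a baseline survey before the next funding round",
        owner="Programme director",
        due=date(2020, 3, 31),
    ),
]

# Periodic review: surface overdue items so follow-up is not lost.
overdue = [r for r in log if r.status != "closed" and r.due < date.today()]
for r in overdue:
    print(f"OVERDUE: {r.finding} (owner: {r.owner})")
```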

Best practice suggestions

It can be helpful to create institutional links between communication and performance management units to facilitate information sharing.

Communication units should draw inputs from evaluation/impact assessment reports when producing annual reports or other products used to raise public awareness.

Other useful strategies to improve the communication of performance management outcomes and findings include:

  • Producing a variety of summaries that focus on different parts of the evaluation/assessments that are of interest to specific audiences.
  • Grouping several evaluations around a theme or country and holding a workshop to discuss lessons and draw broader conclusions.
  • Disseminating evaluation/impact reports to staff and management through email and intranet systems, formal and informal workshops and briefings, or brown-bag lunches and after-work events to discuss findings.
  • Using web-based and social media technology to discuss the findings of evaluation/impact reports with internal and external stakeholder audiences.
  • Holding media conferences and public debates to launch evaluations/impact reports, and using social media, video clips, podcasts, websites and email to build networks of interested people and share reports in a timely way.
  • Systematically sharing findings with the intended beneficiaries, the media and civil society. This requires building networks of contacts and planning ahead to reach relevant partner audiences effectively, as well as translating the evaluation into local languages.

Lessons learned - 10: Evaluate the performance management system

A high-quality performance management and measurement system is one that is credible, valid, accurate, understandable and useful. These characteristics are mutually reinforcing.

What is required

Performance management systems that use appropriate rigour are more likely to be trusted and used.

Robustness in performance management processes also helps increase the external validity of findings.

This allows broader conclusions to be drawn from individual studies, evaluations and impact assessments, contributing to collective knowledge about how development works and how effective investment is.

Simply put, a low-quality performance management process or system will not be useful. Poor evidence, flawed analysis or the wrong advice can even do more harm than good.

Best practice suggestions

The DAC Quality Standards for Development Evaluation (OECD, 2010) describe the basic characteristics of a quality process and report.

Evaluation and impact reports should provide a transparent description of the methodological approach, assumptions made, data gaps and any weaknesses in the report and process to allow the reader to make an informed judgement of its quality.

Systematic quality checks are often put in place to review terms of references, draft reports and final reports against the DAC standards.

Ideas for ensuring quality in performance management systems and processes include the following (an illustrative sketch of a simple report check appears after the list):

  • independent assessment by an expert
  • systematic reviews of the overall quality of evidence
  • performance reviews
  • mechanisms to monitor not only the dissemination of reports but also how evaluation and impact assessment findings are used
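To make the report-level check concrete, the sketch below reviews a draft against the transparency criteria named earlier in this lesson (methodology, assumptions, data gaps, weaknesses). It is a minimal illustration under those assumptions, not an implementation of the full DAC standards.

```python
# Illustrative quality check; the criteria are the transparency items
# named above, not the complete DAC Quality Standards.
QUALITY_CRITERIA = [
    "methodology described transparently",
    "assumptions stated",
    "data gaps acknowledged",
    "weaknesses of process and report disclosed",
]

def review_report(report_checks: dict[str, bool]) -> list[str]:
    """Return the criteria a report fails, so reviewers can request revisions."""
    return [c for c in QUALITY_CRITERIA if not report_checks.get(c, False)]

draft = {
    "methodology described transparently": True,
    "assumptions stated": True,
    "data gaps acknowledged": False,
    "weaknesses of process and report disclosed": False,
}
print("Revisions needed:", review_report(draft))
```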