Illogical Framework: The Importance of Monitoring and Evaluation in International Development Studies

A Comment from Jessica R. Pomerantz

___________________________________________________________________________________________________________________________

Jessica R. Pomerantz is a second-year Fellow at the Cornell Institute for Public Affairs, pursuing a Master of Public Administration. She was formerly a project analyst with the Institute for Public Policy at the University of New Mexico, where she completed her undergraduate degree in economics and political science. Prior to her studies at Cornell, Jessica worked at the office of the Permanent Observer to the United Nations of the International Institute for Democracy and Electoral Assistance.

___________________________________________________________________________________________________________________________

In my brief experience with monitoring and evaluation, I have become convinced that it is critically important both as a component of international development projects and as a field of academic study. Throughout my academic career at Cornell University, I have at times argued that monitoring and evaluation has actually impeded development efforts, but here I wish to amend my opinion. Bad monitoring and evaluation can sabotage development projects and our meaningful interpretation of development impacts; failures can appear to be successes and vice versa. As a student and practitioner of monitoring and evaluation, I have drawn the conclusions listed below and submit them for your consideration.

Monitoring and evaluation is a key element of the international development industry, applicable to many areas of public administration, both domestic and international.

International development failures could be discovered and averted or corrected given proper monitoring and evaluation activities.

Anecdotal evidence from development activities in Afghanistan provides one example of the international community's lack of attention to monitoring and evaluation amid an ongoing development catastrophe.

Higher education ought to be filling the monitoring and evaluation knowledge gap, but to date it is failing to do so.

The Logical Framework Approach to International Development

A discussion of monitoring and evaluation should begin with a synopsis of the strengths and weaknesses of the logical framework—the international development industry's early attempt to standardize project planning and design. In the late 1960s, a consulting firm developed the logical framework approach to planning and its associated planning matrix, the LogFrame, at the request of the United States Agency for International Development (USAID).

Today, the logical framework has become a ubiquitous, often obligatory planning mechanism used by an overwhelming majority of international development agents—government and non-governmental organizations alike. The LogFrame is a snapshot-like summary of a project in diagram form and describes a program in terms of input, output, outcome, and impact. These elements then provide the basis for monitoring and evaluation.

(A sample LogFrame matrix and a definition of the Logical Framework Approach (LFA) are available for reference in the appendix.)
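To make that structure concrete, the sketch below models a LogFrame-style results chain as data. This is a minimal illustration only: the field names, the well-construction project, and every entry in it are my own assumptions rather than any agency's actual template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogFrameRow:
    """One level of a LogFrame-style results chain (illustrative field names only)."""
    level: str                                             # "Input", "Output", "Outcome", or "Impact"
    narrative: str                                          # what the project intends at this level
    indicators: List[str] = field(default_factory=list)     # how progress would be measured
    verification: List[str] = field(default_factory=list)   # where the evidence would come from
    assumptions: List[str] = field(default_factory=list)    # risks outside the project's control

# A hypothetical well-construction project, condensed to four rows.
logframe = [
    LogFrameRow("Input", "Funds, drilling equipment, trained staff",
                ["Budget disbursed on schedule"], ["Financial reports"]),
    LogFrameRow("Output", "50 wells constructed in target villages",
                ["Number of functioning wells"], ["Site inspection records"],
                ["Security situation permits field visits"]),
    LogFrameRow("Outcome", "Households draw water from safe sources",
                ["Share of households using project wells"], ["Household survey"],
                ["Water quality remains acceptable"]),
    LogFrameRow("Impact", "Reduced incidence of waterborne disease",
                ["Reported diarrheal cases per 1,000 people"], ["Ministry of Health statistics"]),
]

for row in logframe:
    print(f"{row.level:<8} {row.narrative} | indicators: {'; '.join(row.indicators)}")
```

Monitoring then amounts to tracking the indicators at each level over time; evaluation asks whether movement at one level plausibly produced movement at the next.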

Agencies that use the Logical Framework Approach and the LogFrame matrix, or some permutation of one or both, include the World Bank, the Asian Development Bank, the International Fund for Agricultural Development, the Inter-American Development Bank, the European Commission, the United Nations Food and Agriculture Organization, the UK Department for International Development, and AusAID.1 Despite this endorsement, there are a number of drawbacks and issues inherent in the use of the LogFrame as a planning tool and the logical framework approach as a planning methodology, which the organizations using and mandating them have largely ignored.

Critics contend that the methodology is too rigid, overly simplistic, ethnocentric, and devoid of organizational context. Donors often mandate the approach ex post facto, after development workers have already designed or implemented a project, disconnecting the process from reality. The approach favors quantitative data, which are often scarce or unreliable, over qualitative data, and easily ignores beneficiary experiences. It also restricts adaptation and often favors a single community perspective or outcome.

Monitoring and evaluation literature and trends over the past 30 years reveal that the LogFrame and the LFA are frequently in conflict with other industry paradigms, and development academics and professionals have repeatedly lambasted them on various grounds. Robert Chambers of the UK-based Institute of Development Studies, a proponent of Participatory Rural Appraisal and an outspoken critic of the LFA, has claimed it is exclusionary, elitist, and ignorant of context. Rick Davies, a fixture of the monitoring and evaluation field and manager of the Monitoring and Evaluation NEWS website, has expressed similar sentiments, arguing that the LFA is not structured to capture the complexity of social change, or the theories of change needed to explain success or failure in international development planning.

David Korten, formerly of USAID and the Institute of Development Research, among other institutions, has criticized the LFA on similar grounds, contending that it runs contrary to people-centered development goals. Norman Uphoff of Cornell University bemoans reductionism in the social sciences and has argued against the linear nature of development planning, favoring instead an acknowledgement of the chaotic nature of development that is absent from the LFA.

Other arguments contend that because the United States constructed the LFA, it is often inaccessible to other cultures, and many of its underlying premises do not translate properly (even linguistically), limiting its usefulness in the global South. Because its designers based the approach on Western concepts of linearity and social process, it often becomes little more than a burden when partner organizations employ it. Despite this history of criticism of the methodology behind the logical framework approach and of the LogFrame as a project-planning tool, both remain mainstays of the international development industry.

This continuity has been possible predominantly because the LFA has remained unrivaled in its structure and summary functions. It provides an overview of project assumptions and desired outcomes at a glance, which satisfies the need for (the appearance of) efficiency and corporatized planning. LFA alternatives do exist, and the current trend in project planning and measurement is leaning towards Results-Based Management among national and international development organizations such as USAID, which abandoned the LogFrame in 1996.

But NGOs and other international development organizations are still heavily LFA-oriented, and the LogFrame is typically a mandatory component of the contractual obligation between an aid agency and a partnering organization. Alternatives have not ousted the LogFrame and the LFA simply because they are not comparable rivals. While the approach may be simplistic, the demands of context and perception that these frameworks disregard can remain ignored because development workers understand them poorly and cannot easily represent them in a brief document.

Efforts to standardize monitoring and evaluation that resulted in the LFA and the LogFrame produced a range of critiques. Subsequent efforts to generate new paradigms of monitoring and evaluation techniques and new models for use in the field have led to alternatives, such as participatory evaluation and its offshoots. These alternatives have generated their own body of literature and critiques, and they have yet to replace the LFA, which is still taught and widely mandated.

Remember that the LFA is just one component of the monitoring and evaluation process. I have not discussed the vast number of alternative and complementary techniques or the methods of pre- and post-project monitoring and evaluation that also demand further study.

What is the purpose of monitoring and evaluation? The role of monitoring and evaluation in project management and institutional performance depends on the reasons for monitoring and evaluation. Is the institution trying to modify procedures for an optimal result, to determine best practices, or to replicate practices in other villages, cities, regions, or countries of interest? Is the organization evaluating field processes in order to inform decisions at the headquarters-based management level? Are results meant to be published in reports to be read by donors and funding agencies? Is monitoring and evaluation merely a policing technique to maintain control over operations, budget, or both?

The premises underlying the purpose of monitoring and evaluation vary from organization to organization and from project to project. I raise these issues only to impress upon you that this is an enormous area that development organizations and academic institutions have barely addressed. Gains in the field lead to gains in international development, social programs, and social science. Our ignorance leads only to ineffective projects.

Monitoring and Evaluation of Rural Development in Afghanistan

I was privileged to observe one example of a large-scale national development program operating in a conflict/post-conflict setting for six months in 2010 while working in the Monitoring, Evaluation, and Reporting Unit at the National Area-Based Development Programme (NABDP), a joint initiative of the Ministry of Rural Rehabilitation and Development and the United Nations Development Programme (UNDP) in Afghanistan.

Monitoring and evaluation had become more of a hindrance to the program than an asset for a number of reasons, but the main one was the timing and number of reports required. UNDP, the donor countries, and the ministry all required reports from the program: biweekly, monthly, quarterly, and annually. The reporting requirements only increased in the short time I was there.

Additionally, UNDP mandated a results-based management approach to reporting but failed to provide training on how to do this. The result was a great deal of criticism of the reports NABDP did provide, followed by an extensive back-and-forth as NABDP struggled to meet the obscure and undeclared requirements of UNDP's quarterly reports. It was similar with the ministry and the donors. The program was staffed entirely by Afghan nationals, with the exception of a handful of international advisors. None of the staff were native English speakers (I was the exception), yet the reporting language was English.

Maladapted planning tools lead to failures in monitoring and evaluation, especially in a conflict/post-conflict situation in which staff members cannot easily leave the office and visit field sites regularly because of the cost of travel and the risk incurred. The donor mandate (or industry precedent) of annual reporting often leads to meaningless performance benchmarks because the reporting window is too narrow for a meaningful impact assessment. For example, annual reports from NGOs operating in Afghanistan, such as the Afghanistan Civil Society Forum Organization or ActionAid International Afghanistan, reveal that monitoring and evaluation often takes the form of counting the number of trainings staff attend in order to measure capacity building. A more apt measurement of organizational or individual capacity would track changes against baseline levels of financial, managerial, or technical capability over a more feasible review period of five or ten years. As a result, monitoring and evaluation describes the inputs and the outputs but never the outcomes or the impacts.
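As a toy illustration of that distinction, the snippet below contrasts the output an annual report typically counts with the baseline-to-review change argued for above. The scores, the 1-to-5 rubric, and the review dates are invented for illustration.

```python
# Invented scores on an assumed 1-5 capacity rubric, recorded at baseline and at a later review.
baseline_2010 = {"financial": 2, "managerial": 3, "technical": 2}
review_2015 = {"financial": 4, "managerial": 3, "technical": 4}

trainings_attended = 27  # the output an annual report typically counts

# The more meaningful figure: change in capacity relative to the baseline.
capacity_change = {area: review_2015[area] - baseline_2010[area] for area in baseline_2010}

print(f"Trainings attended (output): {trainings_attended}")
print(f"Change in capacity scores since baseline (outcome): {capacity_change}")
```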

It is not easy to pinpoint the source of weakness in monitoring and evaluation, and by extension of development strategy, in Afghanistan. Failure in the basic monitoring and evaluation approach is an easy target because the literature critical of the LogFrame is already vast, but there are other contributing factors worth considering. Development agencies such as USAID over-report meaningless metrics such as spending or GDP, and other institutions often follow suit.

The Afghanistan National Development Strategy (ANDS) identifies 100 benchmarks of development for measurement, review, and appraisal. The international development partners and NGOs responsible for implementing ANDS lack the capacity to accomplish these objectives. Assessments commonly measure the outputs of their projects rather than the outcomes. Although donors are often interested in human development indicators, the impact is not accurately measurable within the reporting time span—at least, not in a quantifiable format. Development workers more typically use case studies of individuals or small cooperatives, interviews, or photographs as evaluation deliverables. NGOs cite a lack of capacity due to staff, time, and budgetary constraints, along with a lack of baseline indicators.

Critics attack international development agencies' evaluation methods as devoid of meaningful results. NGOs continue to mimic the inefficiency of larger institutions even though they presumably have more organizational flexibility. Critics cite dependence on funding as the main reason: resource dependence steers organizations, and the LogFrame is the best-known tool for aid disbursement. A fear of corruption, especially in Afghanistan, leads agencies to demand the illusion of accountability that the LFA provides.

This same approach also reduces very complex concepts, such as empowerment or improved access to livelihoods, to measurable factors, even when the outcomes cannot necessarily be quantified. Monitoring and evaluation can easily strangle rather than facilitate project planning and the advancement of the development industry. This corporatization of the public sector places undue pressure on organizations to standardize operations when greater flexibility and creativity would produce better results. It is the responsibility of higher education to lead the way.

The Underdeveloped Field of Development Monitoring and Evaluation

Monitoring and evaluation is a bit of an anomaly as a field of interest: in some respects there has been a rush to professionalize it, while in others it suffers from a lack of professional attention. Monitoring and evaluation can be the largest cost component of a development project budget, and yet there is little standardization of training or credentials.

There is no shortage of agencies offering training in various forms. For example, I subscribe to a monitoring and evaluation listserv, and in the month of January 2011 alone I received offers to sign up for an introduction to social auditing, monitoring and evaluation for results-based project management, participatory impact monitoring, Most Significant Change (MSC) training, knowledge management, and outcome mapping.

The costs, topics, scope, and duration of these trainings vary widely. I could study monitoring and evaluation for a few hundred dollars over a single weekend or pay thousands of dollars to attend a month-long training. I could receive a certificate from an unknown institute or a well-known university. But would there be a marked difference in the outcome if I instead spent a few hours reading guidance documents posted online by large development agencies? There are as many approaches to monitoring and evaluation of international development projects as there are non-governmental organizations in the field. How can a practitioner or an organization determine which to use?

While agencies have rushed to capitalize on the need for monitoring and evaluation training, institutions of higher education have been disappointingly slow to react.

When I last checked, only two universities offered monitoring and evaluation studies as a formal PhD program, and only one of those was in the United States. Other universities offered evaluation degrees, but usually tied them to education or public health studies only. Some have begun to offer certificate and weekend programs, but the underlying message is that the subject does not warrant their full attention over the course of a master's degree or a PhD program. This is a huge mistake.

Consider Cornell University. We have departments of Applied Economics and Management, Policy Analysis and Management, the Cornell Institute for Public Affairs, the Cornell International Institute for Food, Agriculture and Development (CIIFAD), and the College of Agriculture and Life Sciences. Which of these areas of study is exempt from the need for monitoring and evaluation training?

And yet, apart from the occasional course in qualitative or quantitative research methods, or statistics courses offered haphazardly across departments, Cornell offers literally nothing in monitoring and evaluation studies. Not all institutions of higher learning are as slow to make gains in the social science of international development.

The Abdul Latif Jameel Poverty Action Lab at MIT is using randomized evaluations to study the impact of development projects. Randomized trials may not be the best way to disburse development assistance, but development experts consider them one of the most rigorous ways to measure effectiveness, and that is the point of higher education: to pioneer new industry methods through experimentation and research. In failing to address this need, Cornell is missing a huge opportunity to strengthen its position as a leader in higher education and in international development.
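At its core, a randomized evaluation estimates impact by comparing average outcomes between randomly assigned treatment and comparison groups. The sketch below uses entirely fabricated numbers to show that basic difference-in-means calculation; it is a simplification for illustration, not a description of how J-PAL actually runs its studies.

```python
import random
import statistics

random.seed(7)  # reproducible illustration

# Hypothetical outcome data (entirely fabricated): household income after a livelihoods
# program, for 100 randomly assigned treatment households and 100 control households.
control = [random.gauss(100.0, 15.0) for _ in range(100)]
treatment = [random.gauss(100.0, 15.0) + 8.0 for _ in range(100)]  # assume a small true effect of +8

# Because assignment was random, the difference in mean outcomes estimates the program's impact.
effect = statistics.mean(treatment) - statistics.mean(control)

# Rough standard error for a difference in means (unequal-variance formula).
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"Estimated impact: {effect:.1f} income units (95% interval roughly ±{1.96 * se:.1f})")
```

The rigor comes from the random assignment itself, which makes the comparison group a credible stand-in for what would have happened without the program.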

If you are unable to evaluate your own projects and programs, then you are not a practitioner or a professional; you are an amateur, and your efforts are just as likely to do harm as good. Without evaluation, you are engaged in guessing, not social science. The development industry is rife with guessing, and guessing is a dangerous business.

The world is on the verge of failing to meet the Millennium Development Goals thanks to guessing. From what I have seen of the international development industry, I have to say that my own country, the United States, often appears either heavily guilty of guessing with respect to its development activities or guilty of veiling its geopolitical interests under the guise of development.

In its 2006 report, When Will We Ever Learn? Improving Lives Through Impact Evaluation, the Center for Global Development called for greater efforts in evaluating social programs. As Peter J. Matlon pointed out in his September 2009 presentation for the CIIFAD lecture series, USAID reports a success rate of 84% at the close of its projects, but what does success mean if the agency declares it immediately upon project completion?

An evaluation performed at the conclusion of a development project can report only the number of wells constructed, for example. If the agency never reports the long-term impact, and that impact is that the wells poison the local population with arsenic, then the lack of transparency caused by improper evaluation places the program model under suspicion and casts doubt on the entirety of the agency's work.2 This is exactly the kind of situation we need to avoid by enthusiastically promoting the study and dissemination of monitoring and evaluation best practices.

There are a few initial steps the institutions I named need to take in order to make a meaningful contribution to the field of monitoring and evaluation.

Cornell University ought to introduce a class—not another qualitative research methods course or a statistics class, but a class called Monitoring and Evaluation in International Development. Representatives from CIIFAD have been receptive to the idea of creating a workshop and bringing speakers on the subject to Cornell for the CIIFAD lecture series, and this would be a good start toward getting the Cornell community talking and thinking about monitoring and evaluation.

UNDP and nearly all of the monolithic development organizations have published or sponsored the publication of manuals on monitoring and evaluation. The UNDP handbook I have is an indecipherable 232 pages. I would like to see smaller, more focused pamphlets or monographs that national staff (and foreign consultants) could turn to when they need instruction or clarification.

Something has to be done about training. I am loath to say there ought to be a certification for trainers or some formal professionalization mechanism, because I think the certification process is easily exploited, especially financially, and can stifle creativity in the field. A good start would be a directory of monitoring and evaluation trainings, online and international in scope, that includes feedback from participants.

I started learning about monitoring and evaluation when I enrolled in the International Planning and Development Workshop led by CIPA Professor David Lewis. Our task was to create an agricultural research and extension program for the Catholic University of Sudan. I headed the monitoring and evaluation unit and designed a module from scratch for the agricultural students based on a participatory approach. I decided the Logical Framework Approach would be too technocratic and beyond the reach of this new school. Cornell flew Father Solomon Ewot, the dean of the Catholic University of Sudan, to Ithaca from Wau for our presentation. The entire class looked on nervously as a few of our chosen representatives presented our semester's worth of hard work. Then, after it was over, Father Ewot turned to me and said, “Where’s the LogFrame?”

___________________________________________________________________________________________________________________________

Appendix

The chart below depicts a basic LogFrame matrix presented by the Swiss Cooperation Office Afghanistan at the 8th Livelihood Platform on March 2, 2009 in Kabul. The Swiss Cooperation Office is an implementing partner of the Afghanistan National Development Strategy. The purpose of the Livelihood Platform is to hold meetings among development agencies in Afghanistan to share knowledge on baseline data, studies, and manuals related to monitoring and evaluation.

This summary of the Logical Framework Approach is quoted from the 2004 World Bank publication, Monitoring & Evaluation: Some Tools, Methods, and Approaches.

The logical framework (LogFrame) helps to clarify objectives of any project, program, or policy. It aids in the identification of the expected causal links—the “program logic”—in the following results chain: inputs, processes, outputs (including coverage or “reach” across beneficiary groups), outcomes, and impact. It leads to the identification of performance indicators at each stage in this chain, as well as risks which might impede the attainment of the objectives.

The LogFrame is also a vehicle for engaging partners in clarifying objectives and designing activities. During implementation the LogFrame serves as a useful tool to review progress and take corrective action.

[Figure: sample LogFrame matrix presented by the Swiss Cooperation Office Afghanistan at the 8th Livelihood Platform, Kabul, March 2, 2009]

Endnotes

1. Finlayson, p. 7.

2. This is a reference to the often-cited 1972 UNICEF well-digging project in Bangladesh, as documented in the infographic “Vision Statement: When Failure Looks Like Success” by Andrew Zolli and Anne Marie Healy, published in Harvard Business Review’s Idea Watch on 1 April 2011.

References

Annual Narrative Report. 2009. Afghanistan Civil Society Forum Organization (ACSFo). Accessed December 6, 2010. www.acsf.af.

Aune, Jens B. November 2000. “Logical Framework Approach and PRA: Mutually Exclusive or Complementary Tools for Project Planning?” Development in Practice, Volume 10, Number 5, pp. 687-690.

CDA Collaborative Learning Projects. May 2009. The Listening Project: Field Visit Report — Afghanistan. Accessed November 6, 2010. http://www.cdainc.com/cdawww/pdf/casestudy/lp_afghanistan_report_revised_20100806_Pdf.pdf.

Center for Global Development. May 2006. “When Will We Ever Learn? Improving Lives Through Impact Evaluation.” Accessed May 1, 2011. http://www.dochas.ie/pages/resources/documents/WillWeEverLearn.pdf.

Chambers, Robert. 1997. “Whose Reality Counts? Putting the First Last.” IT Publications: London, UK.

Cordesman, Anthony. May 2007. “The Uncertain ‘Metrics’ of Afghanistan (and Iraq).” Center for Strategic and International Studies, Arleigh A. Burke Chair in Strategy. Accessed December 6, 2010. http://www.comw.org/warreport/fulltext/070521cordesman.pdf.

Cousins, J. Bradley and Lorna M. Earl. Spring/Summer 1999. “When the Boat Gets Missed: Response to M.F. Smith.” American Journal of Evaluation, Volume 20, Issue 2, pp. 309-318.

Çuhadar-Gürkaynak, E., B. Dayton, and T. Paffenholz. 2009. Evaluation in Conflict Resolution and Peacebuilding, in Handbook of Conflict Analysis and Resolution, eds. D. J. D. Sandole, S. Byrne, I. Sandole-Staroste, and J. Senehi, Routledge, Oxon and New York, pp. 286-299.

Dale, Reidar. February 2003. “The logical framework: an easy escape, a straitjacket, or a useful planning tool?” Oxfam GB: Development in Practice, Volume 13, Number 1.

Davies, Rick. January 2004. “Scale, Complexity, and the Representation of Theories of Change.” Evaluation, Volume 10, Number 1, pp. 101-121.

De Coning, C. and Romita, P. 2009. “Monitoring and Evaluation of Peace Operations.” International Peace Institute: New York, U.S.

Dietz, Ton and Sjoerd Zanen. 2009. “Assessing interventions and change among presumed beneficiaries of ‘development’: a toppled perspective on impact evaluation.” in The Netherlands Yearbook on International Cooperation 2008, ed. Paul Hoebink. Van Gorcum: Amsterdam, The Netherlands, pp. 145-163.

Earle, Lucy. April 2003. “Lost in the Matrix: The Logframe and the Local Picture.” Paper for INTRAC’s 5th Evaluation Conference: Measurement, Management and Accountability? Amsterdam, The Netherlands.

Finlayson, Peter. September 2004. “Strengthening Management Systems to Improve the Impact and Performance of Development Projects: The Application of Best Practice Methods in Asia and China.” Melbourne University Private Working Paper Series, Working Paper No. 18: Victoria, Australia.

Fujita, Nobuko, ed. June 18, 2010. “Beyond Logframe: Using Systems Concepts in Evaluation.” Monitoring and Evaluation NEWS. Accessed November 13, 2010. http://mande.co.uk/2010/uncategorized/beyond-logframe-using-systems-concepts-in-evaluation/.

Grace, Jo and Adam Pain. July 2004. “Rethinking Rural Livelihoods in Afghanistan.” Afghanistan Research and Evaluation Unit, Synthesis Paper Series: Kabul, Afghanistan.

Korten, David C. September-October 1980. “Community Organization and Rural Development: A Learning Process Approach.” Public Administration Review, Volume 40, Number 5, pp. 480-511.

Lang, Raymond. 2000. The Role of NGOs in the Process of Empowerment and Social Transformation of People with Disabilities, in Thomas M, Thomas MJ (eds.) Selected Readings in Community-Based Rehabilitation Series 1: Bangalore, India.

Monitoring & Evaluation: Some Tools, Methods & Approaches. 2004. The World Bank: Washington, DC, US.

OECD-DAC. 2007. “Encouraging Effective Evaluation of Conflict Prevention and Peacebuilding Activities: Toward DAC Guidance.” A Joint Project of the DAC Network on Conflict, Peace and Development Co-operation and DAC Network on Development Evaluation: Paris, France.

Pain, Adam. December 2002. “Understanding and Monitoring Livelihoods under Conditions of Chronic Conflict: Lessons from Afghanistan.” Overseas Development Institute, The Livelihoods of Chronic Conflict Working Paper Series, Working Paper 187: London, UK.

PRRP 2006: Reflection and Learnings. 2006. ActionAid International Afghanistan. Accessed December 6, 2010. http://www3.actionaid.org/afghanistan/images/PRRP%202006.pdf.

Sherman, Jake. February 2009. “The Afghan National Development Strategy: The Right Plan at the Wrong Time?” Centre for Security Sector Management (CSSM), Journal of Security Sector Management: London, UK.

Smits, Pernelle A. and Francois Champagne. December 2008. “An Assessment of the Theoretical Underpinnings of Practical Participatory Evaluation.” American Journal of Evaluation, Volume 29, Number 4, pp. 427-442.

Uphoff, Norman. 1996. “Why NGOs Are Not a Third Sector: A Sectoral Analysis with Some Thoughts on Accountability, Sustainability, and Evaluation.” in Edwards M, Hulme D (eds.) Beyond the Magic Bullet: NGO Performance and Accountability in the Post-Cold War World. Kumarian Press: London, UK.

Van der Velden, Fons. October 2003. “Capacity Assessment of Non-Governmental Development Organisations: Beyond the logical framework approach.” Context, international cooperation: Contextuals No. 1.

Wallace, Tina, with Lisa Bornstein and Jennifer Chapman. 2007. “The Aid Chain, Coercion and Commitment in Development NGOs.” Intermediate Technology Publications: London, UK.
