Munich Personal RePEc Archive

Speaking the Same Language within the International Organizations; A Proposal for an Enhanced Evaluation Approach to Measure and Compare Success of International Organizations

Farmanesh, Amir and Ortiz Bobea, Ariel and Sarwar, Jisha and Hasegawa, Tamiko (2006): Speaking the Same Language within the International Organizations; A Proposal for an Enhanced Evaluation Approach to Measure and Compare Success of International Organizations.


Abstract

It is currently difficult for Member States to assess and compare the success or performance of UN organizations despite recent movements towards results-based approaches. Efforts to implement logical frameworks have been independent and uncoordinated, and left to the discretion of agencies. This has led to different and deficient implementations of the same theoretical approach, making it almost impossible to draw any conclusions. The lack of a common approach is perceptible across agencies in the diversity of evaluation standards and terminology used to describe the same concepts, in the unevenness and diversity of staff training, and in the way intentions and results are presented. The myriad of organizations with some form of evaluation role may be seen as an additional symptom of the lack of coordination within the UN system.

The establishment of a useful and reliable evaluation process in the UN system requires three main elements: (1) a common and enhanced evaluation framework, (2) the human and organizational capacity to ensure the accurate implementation of the framework, and (3) the commitment of Member States and agencies to implement the approach.

This report mainly discusses the common evaluation framework and methodological issues, although it also provides significant insight into how to build the human and organizational capacity of the UN to carry out this approach.

Assessing the success of an organization entails the determination of three elements: mandate or mission relevance, effectiveness, and efficiency. The report provides insight into all three components of success, but its primary focus is on effectiveness. Measuring effectiveness entails establishing precise targets to be reached by agencies and collecting actual results in order to assess whether the intended targets are being met. In other words, assessing effectiveness amounts to comparing intentions (expressed as targets) with actual achievements (collected through monitoring). The UN Secretariat itself does not provide targets to be met by the organization. Additionally, it over-emphasizes outputs (output implementation rates) and disregards the “big picture” provided by outcomes.

Under the proposed approach, subprograms meeting most of their targets are the most effective. Programs (agencies) with a large share of effective subprograms (programs) may be considered effective themselves. To simplify and give an intuitive sense of effectiveness, each subprogram could be assigned a category or color following a “traffic light” methodology (green for satisfactory, amber for average, red for below expectations) according to the share of targets satisfactorily met. The same could be done for programs according to their share of satisfactory subprograms. Program and subprogram performance data for every agency could be centralized (by a coordinating body) in a comprehensive webpage that would facilitate comparison between similar functions or themes across the UN system (see p. 27 of the report for a detailed illustration).
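As a rough illustration of how such a traffic-light scoring could be computed, the following minimal Python sketch classifies subprograms by the share of targets met and programs by their share of “green” subprograms. The 80% and 60% cut-offs are hypothetical assumptions for illustration only; the report does not specify thresholds.

```python
# Minimal sketch of the proposed "traffic light" scoring.
# The 0.8 / 0.6 cut-offs are illustrative assumptions, not values from the report.

GREEN, AMBER, RED = "green", "amber", "red"

def classify(share_met: float, green_cutoff: float = 0.8, amber_cutoff: float = 0.6) -> str:
    """Map a share (0-1) of satisfactorily met targets to a traffic-light category."""
    if share_met >= green_cutoff:
        return GREEN   # satisfactory
    if share_met >= amber_cutoff:
        return AMBER   # average
    return RED         # below expectations

def score_subprogram(targets_met: int, targets_total: int) -> str:
    """A subprogram is rated on the share of its targets satisfactorily met."""
    return classify(targets_met / targets_total)

def score_program(subprogram_ratings: list[str]) -> str:
    """A program (or agency) is rated on its share of satisfactory (green) subprograms."""
    share_green = sum(r == GREEN for r in subprogram_ratings) / len(subprogram_ratings)
    return classify(share_green)

if __name__ == "__main__":
    subprograms = [score_subprogram(9, 10), score_subprogram(6, 10), score_subprogram(3, 10)]
    print(subprograms)                 # ['green', 'amber', 'red']
    print(score_program(subprograms))  # 'red' under these illustrative cut-offs (1/3 green)
```

A coordinating body could apply the same classification uniformly across agencies before publishing the results in the centralized comparison webpage described above.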

The report also suggests complementing this objective approach with a perception survey. Despite the significant limitations of this type of subjective approach, it is still widely used and gives an idea of which organizations are best regarded by their peers. Contrasting actual performance data with perception indicators could be revealing and could shed light on areas where the objective methodology may fall short.

One of the most important recommendations concerns the organizational capacity needed to ensure the accurate implementation of the evaluation approach. This capacity should be embodied in a central coordinating body (perhaps under the CEB) that would (1) ensure common evaluation training and support for UN staff and uniformity of standards (terminology, methods, etc.), (2) centralize performance data gathered from agencies in a common database and present results in a user-friendly manner in which programs and agencies could be compared, and (3) verify the validity of the data submitted by the agencies (performance auditing).
