Introduction to compiled indicators
Compiled Indicators
Measuring human-centered projects is multi-faceted
In previous sections, we explored the importance of using diverse data types when measuring how human-centered interventions function in the world. But looking at datasets in isolation prevents us from seeing the complex whole. To bring these datasets together, we need a tool that puts them in concert. That tool is the compiled indicator, which is the generalist’s version of a statistical tool, the composite indicator.
Using a compiled indicator also sets you up to work with data and evaluation scientists in the future, if you’d like to pursue advanced computation. Since compiled indicators follow the logic and front-end methodology of a composite indicator, our citations reference that tool throughout this guide.
What is a compiled indicator?
Compiled indicators are based on the idea of composite indicators, which are “…constructed to measure complex or multidimensional phenomena by combining individual indicators into one single measure by simple averaging or more advanced statistical methods.”1
Compiled indicators are very similar, but stop before the advanced statistical methods step. Put plainly, a compiled indicator is a single metric or score created by combining individual indicators into one comprehensive measure. There are several different types of indicators in statistical use, including environmental, leading, and sentiment indicators, but for our general application in measuring human-centered projects, we’ll focus on compiled indicators. National statistical offices, data scientists, and social scientists use combined measures to understand a wide variety of issues, including public trust, quality of life, and urban resilience. They are commonly used to “…summarize complex, multi-dimensional realities with a view to supporting decision-makers”2 and “aim to measure complex, multidimensional phenomena, which cannot be measured directly…”3
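To make the “simple averaging” approach concrete, here is a minimal sketch in Python. The indicator names and values are purely illustrative (not drawn from any real program), and it assumes a common min-max normalization step so that indicators on different scales can be averaged fairly:

```python
# Hypothetical sketch: building a compiled indicator by normalizing three
# illustrative indicators to a common 0-1 scale, then taking a simple
# (unweighted) average. All names and numbers are made up for illustration.

def min_max_normalize(values):
    """Rescale raw values to the 0-1 range so indicators are comparable."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Raw yearly observations for three illustrative indicators.
raw_indicators = {
    "customer_satisfaction": [3.2, 3.8, 4.1],    # survey average, 1-5 scale
    "task_completion_rate": [0.61, 0.72, 0.80],  # proportion of tasks completed
    "avg_wait_days": [14, 9, 6],                 # lower is better
}

normalized = {name: min_max_normalize(vals) for name, vals in raw_indicators.items()}
# For "lower is better" indicators, invert so higher always means better.
normalized["avg_wait_days"] = [1 - v for v in normalized["avg_wait_days"]]

# Compiled indicator: simple average across all indicators for each year.
num_years = len(raw_indicators["customer_satisfaction"])
compiled = [
    sum(series[year] for series in normalized.values()) / len(normalized)
    for year in range(num_years)
]
print(compiled)  # one compiled score per year, each between 0 and 1
```

The equal-weight average shown here is the simplest aggregation choice; as the guide notes later, the choice of datasets and aggregation method is a judgment call that should be defensible and documented.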
Compiled indicators can work at different scales: they can indicate the direction of big initiatives, like urban resilience or public trust, but also smaller-yet-vital impacts, such as those relating to life experiences like having a child or attending higher education, natural disaster responses, acquisition programs, and other important public services and programs.
In the federal government, the State Department and the U.S. Agency for International Development (USAID) use a variety of combined measures to understand program and project design;4 the General Services Administration uses the EDX Index to understand digital experience;5 and the Veterans Health Administration uses a combination of data to understand whole-veteran health in their Patient-Aligned Care Teams.6
Building a compiled indicator is both art and science
The OECD Handbook advises that “…the construction of your composite indicator owes more to the craftsmanship of the modeller (sic) than to universally accepted scientific rules for encoding….”7 They also state that “…the justification for a composite indicator lies in its fitness to the intended purpose and the acceptance of (your) peers….”
So your north star in creating a compiled indicator needs to be the appropriateness of the datasets you select and the aggregation method you choose, not their perfection. At the start, you’ll almost certainly have more of one type of data than another. To build a robust compiled indicator, you must assemble the other data types and bring them into concert. For example, if you have access to a wealth of quantitative data about your agency’s programs and services, you must put in the effort to find qualitative data that reflects customer and employee perspectives about those programs and services. In addition, you must find relevant historical data, which might be locked in old documents and reports, old dashboards, or the institutional knowledge of long-time customers and employees.
The construction of your compiled indicator is not a physical, empirical truth, such as you might find in physical sciences. It is a craft, and expresses the perspective of the creator. This might seem unsatisfactory or risky, but if you construct a compiled indicator that is defensible, replicable, and verifiable, it can be used to responsibly reflect your work.
Conclusion
Your effort is worth it! If you’re able to build a compiled indicator that is defensible, replicable, and verifiable, triangulating around the truth of impact, you will be able to make more intelligent, efficient, and effective choices for your intervention. You’ll avoid the pitfalls of focusing only on a single data type, and you’ll be able to use your compiled indicator both to prove the effectiveness of your approach(es) and to fill gaps, plan ahead, and future-proof your work.
In the end, measuring complex spaces will never be concrete and neat. That’s because the situations the public sector needs to measure are frequently the messy reality of human experience. However, messiness does not necessarily equate to sloppiness or lazy thinking. Building on your good work of identifying data to use, a compiled indicator can help you bring it all together. In the Measurement Operations Guide, you’ll learn how to build a compiled indicator.
Footnotes
1. Guidelines on Producing Leading, Composite and Sentiment Indicators. United Nations Economic Commission for Europe. Geneva. 2019. Chapter 2, Section 3, Paragraph 2.14. Page 1. ↩
2. Skeith, M. and Gallagher, J. Composite Indicators: An Introduction to Their Development and Use. US Agency for International Development, Policy and Planning. 1 Aug 2019. ↩
3. Guidelines on Producing Leading, Composite and Sentiment Indicators. United Nations Economic Commission for Europe. Geneva. 2019. Chapter 2, Section 3, Paragraph 2.14. Page 67. ↩
4. Program Design and Performance Toolkit. US Department of State. pp. 41–44. ↩
5. Meyers, A. and Monroe, A. Determining the True Value of a Website: A Case Study. 16 April 2024. ↩
6. US Department of Veterans Affairs. Veterans Health Administration. https://www.patientcare.va.gov/primarycare/pact/Team-Based.asp ↩
7. OECD/European Union/EC-JRC (2008), Handbook on Constructing Composite Indicators: Methodology and User Guide, OECD Publishing, Paris, 22 Aug 2008. p. 16. ↩