DSA2016: Politics in Development
- Irene Guijt (Oxfam GB)
- Martin Walsh (Oxfam GB)
- Deborah Hardoon (Oxfam)
- Katherine Trebeck (Oxfam GB)
The panel will debate how the choices made in measuring critical aspects of development - such as wellbeing, effectiveness and inequality - hide or highlight, reveal or make invisible, critical groups of people, issues and values that underpin society.
The panel members will illustrate that what is used to measure critical aspects of development - such as wellbeing, effectiveness and inequality - is not a neutral, value-free technical issue. It hides or highlights, reveals or makes invisible, critical groups of people, issues and values that underpin society. We will illustrate this with our work on the politics of measuring inequality, work on the politics of evidence/results (linked to a bestselling book I co-authored), OGB's work on reviewing programme effectiveness, and work on new economic paradigms and the values shaping societal and political choices.
The panel will stimulate discussion among the audience around:
1. Other examples of non-neutrality of measurement choices
2. How this has influenced work - what is seen, valued, funded and implemented
3. What issues and groups of people have been marginalised or kept invisible as a result.
4. What development researchers can do to avoid mistakes that arise from overlooking the value-laden nature of measurement.
This panel is closed to new paper proposals.
Prioritising short term results at the cost of long term structural change
This paper will look at tensions around measuring influencing work when it is framed in terms of short-term results metrics - and what options exist for those who 'think and work politically'.
In a sector that invests $140 billion per year to reduce poverty and injustice, knowing whether our bets are based on plausible theories of change is not just useful; it is essential. Hence the proliferating studies, trials, meta-studies and the like - all seeking definitive answers about what works and what does not. Important insights are emerging on micro-credit, cash transfers, education and, of course, deworming (though controversially - see the 'worm wars').
That's all fine for programmes on the ground, but what do we really know about what works when it comes to the secret plans and clever tricks of influencing strategies, advocacy efforts and campaigns for local, national or global change? Not enough by a long stretch. And yet entire organisations assume that long-term influencing initiatives are the way to go when it comes to structural poverty reduction and transforming entrenched injustices. After decades of service delivery, many international NGOs in particular are approaching structural change by investing more in influencing strategies.
However, are these organisations prepared to shift how they measure what matters? Deep problems occur when the results framing is dominated by those touting supposedly value-free methods, approaches and systems that prioritise metrics of short-term results. This paper will look at the sources of these tensions and at options for those investing in "influencing authorities and the powerful, and less on delivering the services for which duty-bearers are responsible", such as Oxfam with its worldwide influencing ambition.
Negotiating effectiveness: the case of a transnational advocacy evaluation
In the development world, advocacy plays an increasingly important role, yet evaluating its effectiveness is complicated. This paper illustrates the negotiated nature of evaluating advocacy effectiveness, questioning the supposedly objective nature of evaluation.
In the development world, advocacy plays an increasingly important role, yet evaluating its effectiveness is complicated. Reflecting on a major advocacy evaluation commissioned by the Netherlands Ministry of Foreign Affairs (2012-2015), in which the authors served as evaluators, we argue that evaluation is inherently a political space in which negotiation thrives, as all stakeholders manoeuvre internal and external pressures around results and their assessment while operating within political and resource constraints. In assessing advocacy effectiveness, cause-and-effect relations are unclear, and change is mostly discovered in interaction with advocates and targets, based on experiences and interpretations. This makes evaluating advocacy effectiveness a dynamic process characterised by social interactions, political agendas and interests, rather than an objective and rational process in which the evaluator is merely instrumental in assessing results. It is therefore necessary to examine how negotiations shape the meaning of effectiveness. In this paper we question the idea of evaluation as an objective and rational process and aim to answer two questions: what is being negotiated in the evaluation process, and what does this mean for the evaluation process and its quality? We answer these questions by illustrating the negotiations around identifying, measuring and presenting outcomes in our advocacy evaluation, and discuss the implications for the evaluation's results.
The disempowering discourses of impact evaluation: who is excluded and how?
Impact evaluation has become increasingly important in international development practice, promoted as a means of supplying both accountability and learning. But are its own impacts so benign? This paper examines the dark side of impact evaluation. Who is excluded and how? What are the alternatives?
Impact evaluation has become increasingly important in international development practice. The proponents of ever more 'rigorous' modes of evaluation vigorously promote it as a means of supplying both accountability and programme learning, and as an essential component of evidence-informed development policy-making. While the methodologies of impact evaluation have been and are the subject of considerable debate, this paper examines impact evaluation as a 'technology of power' with an undeclared capacity to disempower. Focusing on the evolving practices of Oxfam and other international NGOs, it asks a series of critical questions: Who is excluded and how? In what ways are different categories of supposed beneficiaries (e.g. programme participants, CSOs and NGOs, other stakeholders in the outcomes of international development) excluded by contemporary processes of impact evaluation? How and why does this happen? What part is played by methodological hubris and epistemological misconceptions? Are these intentional exclusionary manoeuvres or accidents of arrogance? And what are the alternatives? Can we identify ways in which those excluded from current styles of impact evaluation can be empowered and their voices heard? A better kind of impact evaluation? Or is inclusive impact evaluation an oxymoron? Where do we go from here?
Prefiguration and participatory measures of progress
This paper will explore the importance and nature of prefiguration in terms of redressing power imbalances through grassroots participation in the construction of measures of progress.
'Prefiguration' (see Boggs, 1977; cf. Williams & Srnicek, 2015) is often heard in New Economy debates: activities and interactions that reflect the sort of society, economy, or politics sought by protagonists. Systems thinking (Mersmann, 2014) calls for support for 'pioneers' undertaking innovations according to new visions, hence demonstrating feasibility and desirability (to attract more actors and spread support).
This paper explores the importance and nature of prefiguration in redressing power imbalances through grassroots participation in the construction of measures of progress.
It reflects on modes of democracy in which the 'thin' form of representative democracy (Barber, 1984) is often captured by entities wielding disproportionate influence due to economic strength (Fuentes-Nieva & Galasso, 2014) and contrasts this to potentially deeper conversations enabled by collectively co-creating a view of what 'progress' or 'development' entails via participatory and deliberative processes (Dryzek and Niemeyer, 2012).
Drawing on New Economy thinking, in which grassroots participation and the power to shape agendas are key elements, the paper examines the scope for participatory measures to model mechanisms by which those currently ill-served by policy-making processes can have their views elevated and their perspectives responded to. It seeks to assess the extent to which this role meets the mantra of 'pioneering' called for in the systems change literature.
In modelling a dimension of system change, prefigurative perspectives on participatory measures go beyond instrumental rationales for consultation previously emphasised (BRAINPOol 2014) and complement normative calls for widespread participation in construction of new measures of progress.
Inequality matters, but how should we measure it?
It is broadly accepted that economic inequality matters, but it is not clear which dimensions of inequality matter most. From wage differentials to the wealth gap, differences between regions or within households, absolute or relative inequality, differences between the tails of the distribution or compared to the average - what should we be most worried about? Importantly, our choice of measures matters for other reasons. Depending on the precise measure of inequality used and the geography and timeframe considered, the gap between the rich and the poor, and the direction of change, can look very different. The choice of measure profoundly influences perceptions of economic inequality. In this way, institutions selecting which indicators to use can shape how publics understand the nature and extent of inequality; as such, measures are susceptible to the political interests of the institutions from which they emanate.
This paper will explore some of the most frequently cited measures of economic inequality, including the indicator proposed for the SDGs, the World Bank's Shared Prosperity measure, and Oxfam's statistics on extreme wealth, looking both at their technical qualities and at the particular perspective on inequality they present and the interests underpinning them. We will also examine the potential of new measures that are not yet well understood, identifying which measure(s) of inequality are most relevant to the negative outcomes we are worried about and most useful for informing policy.
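As an illustration of how different measures can tell different stories about the same distribution, the sketch below (with hypothetical income data, not drawn from the paper) computes two widely cited statistics - the Gini coefficient and the Palma ratio - for a single distribution:

```python
def gini(incomes):
    """Gini coefficient (0 = perfect equality) via the sorted-rank formula."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    ranked = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * ranked / (n * total) - (n + 1) / n

def palma(incomes):
    """Palma ratio: income share of the top 10% over the bottom 40%."""
    xs = sorted(incomes)
    n = len(xs)
    return sum(xs[(9 * n) // 10:]) / sum(xs[:(4 * n) // 10])

incomes = [10, 12, 15, 20, 25, 30, 40, 60, 90, 300]  # hypothetical incomes
g = gini(incomes)   # ~0.57
p = palma(incomes)  # ~5.3: the top 10% earn over five times the bottom 40%
```

The Gini summarises the whole distribution into one number, while the Palma focuses on the tails - which is precisely why the two can rank the same pair of countries differently.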
Measuring Multidimensional Poverty: Dashboards, Union Identification, and the Multidimensional Poverty Index (MPI)
The union and intersection approaches to the identification of poverty do not avoid normative choices, as is often claimed. Rather, these choices are made out of the public eye at the stage of indicator selection and weighting. We argue that the Alkire and Foster (2011) method makes these value judgements explicit.
We analyse three approaches to measuring multidimensional poverty, using a consistent set of data for ten indicators in 101 developing countries. First, we implement a simple dashboard of deprivations in the ten indicators. While most dashboards stop there, we next describe the simultaneous deprivations experienced by people, which conveys information on their joint distribution yet fails to identify multidimensional poverty. We then implement a 'union' approach to measurement, identifying people as multidimensionally poor if they experience one or more of the ten deprivations. The resulting union headcount ratio of poverty is very high and may reflect errors of inclusion. We then implement an intermediary identification approach following Alkire and Foster (2011): the global Multidimensional Poverty Index (MPI). Exploring the censoring process of the intermediary identification, we observe that a union (or intersection) identification approach does not avoid normative choices, as is often claimed; rather, these are made at the stage of indicator selection, and the identification process can be highly sensitive to them. These approaches often imply equal weights - which is itself a value judgement made out of the public eye. The global MPI clearly states its value judgements and performs robustness tests for them. The paper thus discusses strengths and challenges of the different measurement approaches to multidimensional poverty.
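The counting logic behind the union, intersection and Alkire-Foster dual-cutoff approaches can be sketched as follows; the people, weights and cutoffs are hypothetical, not the global MPI's actual indicators:

```python
def af_mpi(deprivations, weights, k):
    """Alkire-Foster dual cutoff: return (H, A, MPI) for poverty cutoff k.

    k is the minimum weighted deprivation score at which a person is
    identified as multidimensionally poor; k equal to the smallest single
    weight reproduces the union approach, k = 1 the intersection.
    """
    n = len(deprivations)
    scores = [sum(w * d for w, d in zip(weights, row)) for row in deprivations]
    poor = [s for s in scores if s >= k]         # identification step
    H = len(poor) / n                            # headcount ratio
    A = sum(poor) / len(poor) if poor else 0.0   # intensity among the poor
    return H, A, H * A                           # adjusted headcount: MPI = H x A

# Four hypothetical people, three equally weighted indicators (1 = deprived)
D = [[1, 1, 0],
     [1, 0, 0],
     [0, 0, 0],
     [1, 1, 1]]
w = [1 / 3, 1 / 3, 1 / 3]

H_union, _, _ = af_mpi(D, w, k=1 / 3)    # union: any one deprivation counts
H_af, A_af, mpi = af_mpi(D, w, k=2 / 3)  # dual cutoff: at least two of three
```

Raising k from 1/3 to 2/3 drops the headcount from three people in four to one in two, which is the censoring sensitivity the abstract describes; the equal weights in `w` are themselves a value judgement.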
Measuring the Hidden Contours of The Global Knowledge Economy with a Digital Index
Our Digital Knowledge Economy Index combines traditional data sources with bespoke data on capacities and skills (measured via content creation and participation on digital platforms) to provide a revealing view of where developing countries fit into the world's digital knowledge economy.
Taking advantage of the 'information revolution' is a priority in national development strategies. Eager to tap into the economic and social opportunities afforded by increased access to digital information, many governments of developing countries have envisioned policies that guide their transformation into so-called 'knowledge economies'. However, the concept itself is rarely clearly defined, operationalised, or effectively measured. The few indices that have measured the state of knowledge economies around the world employ significantly different sets of variables, adding to the vagueness of the concept. These indices reflect their designers' conceptualisations of the knowledge economy and rely on the accuracy and cross-sectional as well as longitudinal representativeness of their data sources, which may be called into question in low-income contexts. We thus propose the construction of a Digital Knowledge Economy Index to account for previously unmeasured capacities and skills that are quantifiable by measuring content creation and participation directly through digital platforms, such as the code-sharing platform GitHub, Wikipedia and domain registrations. With this approach, the 'traditional' data sources - national statistics and expert surveys - can be complemented by data that is collected online using bespoke methods and that reflects the underlying digital content creation, capacities and skills of the population. We believe that an index combining traditional and novel data sources may provide a more revealing view of the status of the world's digital knowledge economy and highlight where each data source on its own may be biased in describing (in)equalities in the age of data.
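A minimal sketch of the general approach, under stated assumptions: min-max normalise heterogeneous indicators (traditional and platform-derived) and average them into a composite country score. The indicator names, values and equal weights are all hypothetical, not the proposed index's actual specification:

```python
def minmax(values):
    """Rescale a list to [0, 1]; assumes the values are not all equal."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(countries, indicators):
    """Equal-weight average of min-max normalised indicators per country."""
    normalised = [minmax(vals) for vals in indicators.values()]
    return {country: sum(col) / len(normalised)
            for country, col in zip(countries, zip(*normalised))}

countries = ["A", "B", "C"]
indicators = {  # hypothetical names and values
    "tertiary_enrolment_pct": [20, 55, 80],      # 'traditional' statistic
    "github_users_per_100k": [5, 40, 120],       # digital platform data
    "wikipedia_edits_per_100k": [30, 200, 150],  # digital platform data
}
scores = composite_index(countries, indicators)
```

Comparing the composite scores against any single column makes the abstract's point concrete: each source on its own would rank the countries differently, and the normalisation and weighting choices are where a designer's conceptualisation enters.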