Why should Social Return on Investment be avoided at all costs?

Not-for-profit (NFP) organisations are often both innovative in the way they deliver services and responsive to the needs of clients. As a result they are increasingly providing social services that were previously the domain of the public sector.

This trend, coupled with the rise of social impact investing and the competition of ideas it creates, means NFPs are being asked by funders — both government and philanthropic — to demonstrate the economic value they generate.

Instead of embracing established impact and economic evaluation tools, some NFPs have latched on to an increasingly popular concept known as social return on investment (SROI).

SROI is sometimes advanced as a legitimate addition to the portfolio of economic evaluation methods. It is not. It has been suggested in some quarters that SROI should be used in the public sector to assist in spending decisions. This would be a mistake.

WHAT IS SOCIAL RETURN ON INVESTMENT?

One practitioner describes SROI as “a form of stakeholder-driven evaluation blended with cost-benefit analysis tailored to social purposes”. An SROI analysis rests on an NFP’s stakeholders providing intuitive opinions on the impact of a program or intervention the organisation has run for its beneficiaries. A financial value is then assigned to the changes stakeholders have identified, using a combination of stakeholder perceptions and reference values.

The liberal misappropriation of the language, but not the rigour, of mainstream economic evaluation lends SROI credibility among casual observers that it does not deserve.

HOW DOES IT WORK?

An SROI and a CBA share a common goal: both seek to monetise the benefits of an intervention and compare them with the costs of implementing it. However, the method by which benefits are attributed and monetised, and costs are assigned, in an SROI bears no comparison to that of a rigorous cost-benefit analysis.

SROI practitioners engage stakeholders in the cost and benefit assignment process, arguing that this method ‘increases the depth of analysis’ as it ‘engages more broadly with those experiencing any change than traditional cost-benefit analysis’.

None of these estimates are subject to empirical test or verification. What is deemed to be important is what stakeholders themselves feel is the impact and value of the program.

In some instances SROI practitioners may attempt to moderate some of the estimated values against reference databases, but there is little evidence that their estimates are any more objectively accurate than the stakeholders’. Nor do they necessarily limit the benefits counted to tangible cost savings from reduced expenditure on public services.

From psychological and economic research, we know that this sort of introspection is subject to biases, especially optimism bias, wish-fulfilment and the failure of counterfactual reasoning.

WHAT’S WRONG WITH THIS?

The purpose of economic evaluation is to enable decision makers to make the best use of funds by allocating resources — which may be time, effort or money — to the alternatives that generate the optimal economic result. Deciding how to allocate resources involves examining a range of criteria including value for money, risk of failure, as well as meeting ethical and equity expectations. SROI cannot provide guidance for any of this.

The optimal result will reflect a mix of attributes: that the outcome meets a societal standard of value for money; that it is a secure investment; that its benefits reliably exceed costs; and that it meets ethical and equity expectations.

Sometimes the best idea might be found in another service area: some early childhood interventions, for example, have been shown to reduce interactions with the criminal justice system in adulthood.

An SROI methodology has no capacity to factor this in, because it assumes that stakeholders possess the scope of vision to recognise a better use of resources in another policy area, a vision few program participants can reasonably be expected to have.

In psychology and medicine, meta-analysis was developed to apply consistent methods for estimating the average effects of interventions rather than relying on a clinician’s self-interested best guess about what works. 

Pseudo-economic analyses are not a substitute for, or a means of recovering from, inadequate experimental methods. Regardless of the economic methodology advanced, if the assessment of a policy intervention did not involve randomly allocating clients to either a treatment or a control group — a randomised controlled trial — then no economic analysis of the outcomes will be valid.

Finally not every outcome needs to be monetised. Provided appropriate experimental methods have been used, it should be completely satisfactory to show that participants are content or happier than their control group peers. Appealing to a ‘better’ economic outcome risks missing the point completely.

WHAT SHOULD WE BE DOING INSTEAD?

Economists and social policy analysts have made great strides in deriving plausible monetary values for social outcomes over the last decade. Most economists reviewing spending decisions are content to calculate measures like net present value (NPV), the benefit-cost ratio and an estimate of the probability that the NPV falls within a specified range. Ethical and equity considerations are the province of politicians, representing the public.

Calculating the costs and benefits of social outcomes has until recently been a protracted and challenging task, since a CBA entails monetising variables like the value of a life or the value of the injuries suffered by a victim of crime.

The emergence of repositories of effect sizes — which reliably and validly measure the outcomes of a wide range of interventions — and the development of libraries of shadow prices mean that techniques involving little more than wild guesses are redundant dead ends.

A properly constructed CBA might use the results of a meta-analysis alongside marginal costs, shadow prices and discount rates as inputs. Economists construct these elements subject to transparent rules. Important findings in any scientific discipline depend on reproducible results.
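To make these inputs concrete, here is a minimal sketch of how a meta-analytic effect size, a shadow price, a marginal cost and a discount rate might be combined into an NPV, a benefit-cost ratio and a probability that the NPV is positive. Every figure and variable name is a hypothetical placeholder, not a value drawn from this article or from any real study.

```python
import random

# Illustrative sketch only: the effect size, shadow price, costs and discount
# rate below are hypothetical placeholders, not figures from any real study.

def npv(cashflows, rate):
    """Discount a list of annual net cashflows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

effect_size = 0.08        # assumed meta-analytic reduction in reoffending (8 percentage points)
shadow_price = 15_000     # assumed cost to the state and victims per offence avoided
participants = 100
programme_cost = 60_000   # assumed marginal cost of delivery, incurred in year 0
discount_rate = 0.05
years = 5                 # benefits assumed to accrue evenly over five years

annual_benefit = effect_size * participants * shadow_price / years
net_present_value = npv([-programme_cost] + [annual_benefit] * years, discount_rate)
benefit_cost_ratio = npv([0] + [annual_benefit] * years, discount_rate) / programme_cost

# A simple Monte Carlo pass estimates the probability that the NPV is positive,
# by sampling the effect size from an assumed distribution.
draws = []
for _ in range(10_000):
    sampled_effect = random.gauss(effect_size, 0.03)
    sampled_benefit = sampled_effect * participants * shadow_price / years
    draws.append(npv([-programme_cost] + [sampled_benefit] * years, discount_rate))
prob_positive = sum(d > 0 for d in draws) / len(draws)

print(f"NPV: {net_present_value:,.0f}  BCR: {benefit_cost_ratio:.2f}  P(NPV > 0): {prob_positive:.2f}")
```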

In comparison, an SROI uses stakeholder perceptions as a proxy for a base rate of change. This means that SROI lacks the ability to objectively determine the impact of a program or intervention; without this, it is at best wishful thinking.

Economists have also made great strides in recent times in accurately capturing the benefits associated with a social intervention. In economic evaluation, the perspective adopted significantly affects which costs are deemed relevant. For example, a ‘government or payer perspective’ is interested in the costs a social intervention can avoid, measured by reduced interactions with the social services, health, welfare and criminal justice systems and reduced costs to third parties — for example, victims of crime.

The cavalier assignment of monetary values to ‘personal experiences’ in an SROI analysis may help an NFP understand how its services affect stakeholders, but even assuming those values are accurate, they say little about the costs the program may have saved the state and, by extension, taxpayers.

WHY SHOULD PUBLIC SERVANTS AVOID IT?

Comparisons of SROI measures achieved by different programs and organisations are meaningless; making them risks appearing innumerate and ill-informed.

Since the purpose of CBA is to provide cross-program, even cross-sectoral, evaluation of resource utilisation, SROI provides funders with no valuable information that could not be derived more reliably in other ways. With its cheap mimicry of robust economic concepts and language, SROI is a free-rider on the credibility of tried and tested methods of economic evaluation.

NFPs — and their private funders — should be free to use whatever methods they wish to assess the impact of their programs. However, for public servants and NGOs receiving public funding, using zombie economics to allocate public funds is irredeemably irresponsible.

How can cost benefit analysis be used to prioritise social policy spending?

Before economists came along and formalised the concept, people had long tried to weigh up the costs and benefits of pursuing different choices. The American polymath Benjamin Franklin was one of the first to document this in a letter to a friend:

“When difficult cases occur, they are difficult chiefly because while we have them under consideration, all the reasons pro and con are not present to the mind at the same time. To get over this, my way is to divide half a sheet of paper by a line into two columns; writing over the one ‘Pro’, and the other ‘Con’.”

Franklin nailed the underpinnings of a modern cost-benefit analysis (CBA), where the benefits of a decision are weighed against the costs and whichever side weighs the most wins. All of us make decisions like this every day. Whether we are choosing between taking the car or public transport to work, or cooking dinner versus eating out, we weigh the benefits we may gain in time and enjoyment against the additional expense.

What is a cost-benefit analysis?

In theory, undertaking a CBA seems as straightforward as Franklin suggests: you sum up the costs and benefits in separate columns and see which side comes out ahead. In reality it can be more complex. What if the benefits and costs are not immediately obvious? Or do not occur at the same time? How do you take into account an intangible benefit that might occur five years down the road?

These difficulties are overcome by expressing benefits and costs in monetary terms and discounting them to reflect the time value of money. This allows costs and benefits, which generally occur in different time periods, to be expressed in terms of their net present value. By monetising benefits, a CBA allows decision makers to assess whether a policy intervention is a sound investment, as well as providing the ability to compare it with competing policy options - developing a new car ferry might look like a good investment, but a new bridge might be an even better one.
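As a rough illustration of that comparison, the sketch below discounts two streams of hypothetical costs and benefits to a net present value; the ferry and bridge figures are invented for the example, not taken from any real appraisal.

```python
# Purely illustrative figures: neither project's costs nor benefits come from the article.

def npv(cashflows, rate=0.05):
    """Net present value of annual net cashflows, year 0 first, at a 5% discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Upfront construction cost in year 0, then ten years of annual benefits
ferry = [-20_000_000] + [3_000_000] * 10
bridge = [-50_000_000] + [8_000_000] * 10

print(f"Ferry NPV:  {npv(ferry):>13,.0f}")   # positive, so worthwhile on these numbers
print(f"Bridge NPV: {npv(bridge):>13,.0f}")  # larger still, so the better use of the funds
```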

In public policy, CBAs have been used to assess investment decisions relating to major capital projects like building a new motorway or an extension to an airport. Developments in economic theory and practice over the past decade have meant that CBA has become not only an accessible tool but a preferred method of allocating scarce investment resources across worthy causes in social policy too.

How do you put a number on that?

In the context of social provision, a CBA is based on a rigorous evaluation of a programme’s actual impact on an outcome of interest - for example, reoffending among released prisoners. The results of a high-quality evaluation (like a randomised controlled trial) can be compared to a meta-analysis or systematic review of the literature.

The impact that the program has on reoffending, for example, can be translated into the economic benefit it generates for the state, for the released prisoner and for society more widely by examining the relationship between lower reoffending rates and other outcomes that have a financial impact. In this case that might mean benefits to the state from police, courts and corrections costs that are no longer incurred, benefits to society from reduced victimisation costs as crime falls, and benefits to the individual from higher lifetime earnings as they become employable.
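A minimal sketch of that translation is below. Every figure is a made-up placeholder standing in for the shadow prices and effect estimates a real CBA would take from published sources.

```python
# Hypothetical worked example: all values are placeholders, not estimates
# from this article or from any real evaluation.

reoffences_avoided = 10             # assumed impact of the programme on its cohort

state_cost_per_offence = 40_000     # police, courts and corrections costs no longer incurred
victim_cost_per_offence = 12_000    # victimisation costs avoided by society
earnings_gain_per_person = 5_000    # assumed annual earnings gain for participants now employable
people_now_employable = 4

benefits = {
    "state": reoffences_avoided * state_cost_per_offence,
    "society": reoffences_avoided * victim_cost_per_offence,
    "individual": people_now_employable * earnings_gain_per_person,
}

for perspective, value in benefits.items():
    print(f"{perspective:>10}: {value:>9,}")
print(f"{'total':>10}: {sum(benefits.values()):>9,}")
```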

Why is it better than other methods?

Other economic techniques like cost-effectiveness analysis or cost-utility analysis - popular methods used in healthcare and pharmaceutical decision making - compare two or more interventions based on a common unit of measurement. The monetisation of benefits in CBA, by contrast, allows decision makers to compare the relative costs and benefits across a suite of interventions in different policy areas. This allows governments to determine, for example, what proportion of resources should be spent on early intervention and prevention versus treatment-based interventions.
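The distinction can be sketched in a few lines: a cost-effectiveness ratio keeps the outcome in its natural units, while a CBA monetises it so that programmes with different outcomes can sit in the same ranking. The numbers and the shadow price here are invented purely for illustration.

```python
# Illustrative only: the cost, outcome count and shadow price are invented.

cost = 500_000
outcome_units = 250        # e.g. cases of reoffending prevented
value_per_unit = 30_000    # an assumed shadow price; cost-effectiveness analysis does not need one

cost_effectiveness = cost / outcome_units             # dollars per case prevented
net_benefit = outcome_units * value_per_unit - cost   # only possible once the outcome is monetised

print(f"Cost per case prevented: {cost_effectiveness:,.0f}")
print(f"Net benefit (CBA view):  {net_benefit:,.0f}")
# The first figure only ranks programmes sharing the same outcome unit;
# the second can be compared across policy areas.
```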

In econospeak the difference between effectiveness and efficiency is akin to the difference between doing the right things and doing things right. To improve efficiency, you must first be doing something effective. By using money as a metric it is possible to compare the benefits across a range of activities. It also allows the benefits from interventions that flow across sectors to be included, bolstering the case for investment.

In the U.S., the Nurse Family Partnership - which provides maternal and early childhood support to low-income mothers - has been shown to deliver positive health benefits for the mother as well as long-term educational, employment and criminal justice benefits for her child.

How can it be used to prioritise social policy spending?

In Australia we are facing increased demand for services, particularly in the health and justice space. Faced with a situation like this, it would be prudent to seek a greater efficiency dividend from the resources we currently expend. However, when it comes to making spending decisions we are largely flying blind. While those decisions are no doubt made with good intentions, they seldom rely on a rigorous assessment of their relative effectiveness or efficiency at delivering tangible results.

It need not be this way. The Washington State Institute for Public Policy has been supporting the Washington State Legislature by providing impartial advice on the impact of policy spending decisions for over 30 years. Legislators have used the results of the Institute’s rigorous modelling to justify what could be considered radical changes to justice policy which have saved money and improved outcomes.
