If you do an internet search for ‘data-driven disruption’ you can find articles about almost every industry being disrupted by digitalisation and new applications of data. Banking, transportation, healthcare, retail, and real estate have all seen the emergence of new business models that fundamentally change how customers use their services. While there are instances of data-driven efforts in the nonprofit sector, they are not as widespread as they could be. The Bridgespan Group estimated in 2015 that only 6% of nonprofits use data to drive improvements in their work.
At the same time, the Sustainable Development Goals (SDGs) have set a very ambitious global change agenda and we won’t be able to meet their targets by doing business as usual. To achieve the SDGs requires new ideas across the board: new solutions, new sources of funding, new ways of delivering services and new approaches to collaborating within and across social, public and private sectors.
The private sector already very successfully uses data analytics and machine learning not only to realise efficiency gains but also – even more importantly – to create completely new services and business models. For example, applying machine learning to wind forecasting is expected to reduce uncertainty in wind energy production by more than 45% and will allow utilities to integrate wind more easily with traditional forms of power supply. And entirely new utility start-ups such as Drift use machine learning technologies to provide customers with cheaper wholesale energy prices by more accurately predicting consumption.
In the nonprofit sector, early applications of data analytics and machine learning have mostly focused on improving fundraising and marketing. As a next step, the broader adoption of data analysis techniques and tools has the potential to help nonprofits increase their programmatic impact as well as identify completely new ways of achieving their mission.
- Gain improved intelligence on operating context and needs through expanded use of descriptive analytics techniques. On the program side, teams largely rely on descriptive analytics – statistical techniques that provide insight into the past and answer the question “What has happened?” – applied to survey data, sometimes complemented by samples from larger raw datasets, e.g. Facebook posts or tweets. In many settings this is the best information available, but it has obvious drawbacks: given the expense and time required to conduct surveys, we frequently operate on information that is years old. Surveys are also typically designed to confirm or refute specific hypotheses, which makes it challenging to reuse existing survey data to answer new sets of questions. The more we can analyse raw data directly, such as today’s internet searches, the closer we get to a real-time picture of the situation on the ground. Applying data analytics and machine learning to large raw datasets is also likely to yield new and unexpected insights, because these techniques and tools unearth patterns and suggest potential explanations rather than merely answering a predefined set of questions.
- Identify those most at risk or most affected by a problem more accurately by using predictive analytics. For example, a County Department of Human Services in Pennsylvania recently implemented a predictive risk model designed to improve screening decision-making in the county’s child welfare system. The model integrates and analyses hundreds of data elements. The resulting score predicts the long-term likelihood of home removal and provides a recommendation on whether a follow-up investigation is warranted. The model has been shown to be effective in preventing the screening-out of at-risk children. It has also lowered the number of investigations with potential disruptive effects on low-risk families. One could imagine similar models being applied to screening cases of domestic violence or abuse of domestic migrant workers.
- Achieve the best possible outcomes for individuals through the application of prescriptive analytics. In healthcare, some hospitals are now generating predictions of a patient’s readmission risk at the time of diagnosis. Patients with a higher likelihood of returning to the hospital within a month receive additional care and support, such as home visits. This has reduced readmission rates and freed up resources that can be used to treat additional patients. There are many possible use cases for prescriptive analytics in the development sector, particularly in health, where we have much existing data on what works in light of specific risk factors. Tools that incorporate these models could assist community health workers in triaging cases and prioritising their workload. They could also be applied to people suffering from addictions or people with learning challenges to prescribe individualised treatment and support plans.
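To make the three tiers above concrete, here is a minimal toy sketch in Python of how they fit together: a descriptive summary of a caseload, a predictive risk score, and a prescriptive recommendation derived from that score. All field names, weights, and thresholds are illustrative assumptions invented for this sketch; a real model would be fitted on historical outcome data and carefully validated.

```python
import math
from collections import Counter

# Hypothetical case records; the fields are illustrative assumptions,
# not drawn from any real screening system.
cases = [
    {"id": 1, "prior_referrals": 4, "caregiver_age": 19, "region": "north"},
    {"id": 2, "prior_referrals": 0, "caregiver_age": 34, "region": "south"},
    {"id": 3, "prior_referrals": 2, "caregiver_age": 27, "region": "north"},
]

def summarise(cases):
    """Descriptive: 'What has happened?' -- summarise the caseload."""
    return {
        "n_cases": len(cases),
        "by_region": dict(Counter(c["region"] for c in cases)),
        "mean_referrals": sum(c["prior_referrals"] for c in cases) / len(cases),
    }

# Predictive: a toy logistic model producing a risk score between 0 and 1.
# These coefficients are made up; in practice they would be learned from data.
WEIGHTS = {"prior_referrals": 0.8, "caregiver_age": -0.05}
BIAS = -1.0

def risk_score(case):
    """Return the predicted long-term risk for one case (0..1)."""
    z = BIAS + sum(w * case[field] for field, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def recommend(case, threshold=0.5):
    """Prescriptive: turn the score into a recommended next action."""
    return "investigate" if risk_score(case) >= threshold else "monitor"

print(summarise(cases))
print([(c["id"], recommend(c)) for c in cases])
```

The point of the sketch is the division of labour: the descriptive layer tells us what the caseload looks like, the predictive layer attaches a score to each case, and the prescriptive layer maps scores to actions – with the threshold remaining a human policy choice rather than a property of the model.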
As these approaches become more mature and widespread in their application, their impact will go far beyond making workflows more efficient. They have the potential to fundamentally disrupt how we work and what we define as our core competencies. Today, it may seem challenging to move towards a future where recommending whom to support, and how, could be largely automated. I also don’t want to minimise the challenges in this scenario: the availability of the required data and the privacy issues involved.
However, I want to encourage us to actively embrace and shape this future, as its potential for positive impact is immense. We need to work together to ensure that the automation involved in these techniques and tools provides valuable insights that support humans in making thoughtful and effective decisions, frees up our valuable and constrained resources, and focuses them on those parts of our work that truly make a difference in people’s lives.