Essential Metrics for Alternative Emergency Response Programs


Alternative emergency response programs have the potential to connect people in crisis to teams of unarmed responders who provide compassionate care and needed services, reduce overreliance on law enforcement, and decrease strain on other first responders. As jurisdictions establish these programs, leaders must use data in real time for performance management and continuous improvement, which can increase programs’ potential to deliver on the promise of alternative response. For example, an alternative emergency response program might aim to connect individuals experiencing a mental health crisis to ongoing counseling. By tracking both the number of people referred to mental health services and whether they actually access treatment, program leaders can better understand whether the program is connecting people to the supports they need to avoid future mental health crises. At the same time, members of the public and policymakers can gauge whether the program is delivering on its promise.

The Harvard Kennedy School Government Performance Lab (GPL) offers data-driven performance management tools that help alternative emergency response program leaders and their partners identify the most important data to measure, develop a deeper understanding of program performance, and take informed action to improve outcomes.

Measuring and reviewing data is crucial to driving change throughout an alternative emergency response program — not just at one moment in time, such as the conclusion of a pilot or the launch of a program expansion. Program managers and their partners can use data for:

  1. Reactive troubleshooting: Real-time, rapid identification of performance problems allows programs to make immediate course corrections.
  2. Tracking impact and managing performance: Reviewing data on a frequent basis over time can facilitate operational improvements that accelerate progress toward program goals. Many of the metrics in this tool measure common intended impacts of alternative response, such as connecting people to services and diverting calls from traditional law enforcement. Tracking and improving metrics like these over time can increase the impact of alternative response programs. Program managers should also build in ways to rigorously evaluate programs (see box below).
  3. Expansion and sustainability: Identifying program impact, gaps in who is served, and the resources needed to better meet service recipients’ needs can help determine the case for continuing or expanding an alternative response program.

To determine which metrics to prioritize measuring, alternative emergency response program leaders should compare their program’s goals to the metrics provided below and establish which of them align with the program’s intended impact.


Evaluation of Alternative Emergency Response Programs: To date, only a small number of alternative emergency response programs have been rigorously evaluated. To evaluate the impact of an alternative response program, outcomes must be compared to what would have happened in its absence. When there are not enough resources to dispatch alternative response to all eligible calls, comparing outcomes between eligible calls that receive alternative response and those that do not may allow jurisdictions to estimate a program’s effect. Jurisdictions can also estimate the effect of alternative response by comparing outcomes between the area where the program was launched and a similar, nearby geographic area without alternative response services, or by comparing outcomes between times of day when the alternative response team is active and times when it is not. If you are interested in evaluation of alternative emergency response programs, please contact us at govlab@hks.harvard.edu.
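As a rough illustration of the first approach described in the box above, the sketch below computes a simple difference in mean outcomes between eligible calls that received an alternative response and eligible calls that did not. All field names and sample records are hypothetical, and a real evaluation would also need to account for how calls were selected for alternative response.

```python
# Hypothetical sketch: estimate a program effect as the difference in mean
# outcomes between eligible calls served by the alternative response team and
# eligible calls that were not. Field names and sample data are invented.

def effect_estimate(calls):
    """Difference in mean outcome (e.g., 1 = connected to services, 0 = not)
    between served and unserved eligible calls."""
    served = [c["outcome"] for c in calls if c["eligible"] and c["alt_response"]]
    unserved = [c["outcome"] for c in calls if c["eligible"] and not c["alt_response"]]
    if not served or not unserved:
        raise ValueError("Need both served and unserved eligible calls to compare.")
    return sum(served) / len(served) - sum(unserved) / len(unserved)

calls = [
    {"eligible": True,  "alt_response": True,  "outcome": 1},
    {"eligible": True,  "alt_response": True,  "outcome": 1},
    {"eligible": True,  "alt_response": False, "outcome": 0},
    {"eligible": True,  "alt_response": False, "outcome": 1},
    {"eligible": False, "alt_response": False, "outcome": 0},  # ineligible; excluded
]
print(effect_estimate(calls))  # 1.0 - 0.5 = 0.5
```

This naive comparison is only credible to the extent that served and unserved eligible calls are otherwise similar; the geographic and time-of-day comparisons in the box above are alternatives when they are not.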


Metrics Bank

The tables below include 29 common, actionable metrics used by alternative emergency response teams to assess and improve their programs. The GPL selected these metrics based on its experience conducting projects with more than 30 jurisdictions in the Alternative 911 Emergency Response Implementation Cohort and by reviewing publicly available data from 17 alternative emergency response programs. This list is not exhaustive and will be updated as jurisdictions test additional metrics.

Metrics can help alternative response program leaders answer core questions about their program operations and the people they serve. Each metric below is organized by the core question it addresses:

  • Are programs successfully triaging and dispatching calls to the right responders? Where are there opportunities to respond to additional calls?
  • How is the traditional emergency response system impacted by alternative response teams?
  • Do alternative response staff receive the training and support they need to run the program successfully?
  • Are there disparities in who is being served or how well their needs are being met?
  • Are the people served by alternative response teams successfully connecting to services?
  • How are community members, some of whom are not directly served by alternative response, impacted by the program?

Each metric also includes information on three characteristics, listed below.

  • Common Data Source — Each metric includes a common data source that could be used to measure it. Metrics can typically be collected via existing data sources, such as dispatch and case management systems. Given that data is often stored in systems managed by different agencies (e.g., 911 dispatch system, law enforcement data, alternative response case management system), it is important to establish data-sharing processes prior to the launch of alternative emergency response programs. Data agreements and memoranda of understanding can facilitate the smooth transfer of data between agencies.
  • Priority — Metrics should be prioritized differently depending on program maturity. Each metric is categorized as either essential or supplemental.
    • Essential: We recommend all programs measure these 19 metrics, particularly in jurisdictions that are new to data-driven approaches. Essential metrics are often feasible to track with existing data, though they may require some data manipulation.
    • Supplemental: We recommend that more mature programs build on their essential metrics by considering these 10 supplemental metrics, which can further strengthen program improvement efforts. Tracking them is more likely to require setting up new data collection tools, such as surveys.
  • Review Frequency — Metrics should be measured continuously and reviewed at regular intervals. If there are unusual trends or special circumstances, however, programs may want to review data more frequently. For example, we recommend quarterly review of staff demographics, but programs may choose to review this data more often during hiring season. Each metric is categorized by a recommended review frequency:
    • Weekly
    • Monthly
    • Quarterly
    • Annually


