Findings

The results of our nationwide analysis of traffic stops and searches.

Police pull over more than 50,000 drivers on a typical day, more than 20 million motorists every year. Yet the most common police interaction — the traffic stop — has not been tracked, at least not in any systematic way.

The Stanford Open Policing Project — a unique partnership between the Stanford Computational Journalism Lab and the Stanford Computational Policy Lab — is changing that. Starting in 2015, the Open Policing Project began requesting such data from state after state. To date, the project has collected and standardized over 200 million records of traffic stop and search data from across the country.

Creating this resource has not been easy. Some states don’t collect demographic information about the drivers police pull over. States that do collect the information don’t always release it. And even when they do, the way agencies record and process the data varies widely across the country, which makes standardization difficult.

Data from 21 state patrol agencies and 29 municipal police departments, comprising nearly 100 million traffic stops, are sufficiently detailed to facilitate rigorous statistical analysis. The result? The project has found significant racial disparities in policing. These disparities can occur for many reasons: differences in driving behavior, to name one. But, in some cases, we find evidence that bias also plays a role.

On this site, you can explore our results. You’ll find tutorials that walk you through the steps to understand the data yourself, and information on a new statistical test of discrimination developed as part of this project. See our technical paper for more details.

We encourage you to dig into the data. Toward that end, we’re releasing the records we’ve collected and our analysis code. We’ll be regularly updating the repository, and we’re collecting more information every day. The raw data used to render the charts on this page are available for download below.

Stop rates

We start by analyzing the rates at which police stop motorists in locations across the country, relative to the population in those areas. The data show that officers generally stop black drivers at higher rates than white drivers, and stop Hispanic drivers at similar or lower rates than white drivers. These broad patterns persist after controlling for the drivers’ age and gender.
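
For readers who want to try this themselves, here is a minimal sketch of the calculation in Python. It assumes a stop-level table with a driver-race column and census population counts for the same area; the column names and numbers are hypothetical, not the schema of our released files.

```python
import pandas as pd

# Hypothetical inputs: one row per stop, plus census population counts
# by race for the same jurisdiction (made-up numbers).
stops = pd.DataFrame({
    "driver_race": ["white", "black", "white", "hispanic", "black", "white"],
})
population = pd.Series(
    {"white": 60_000, "black": 15_000, "hispanic": 20_000}, name="population"
)

# Stops per resident, by race: the count of stops of each group divided
# by the number of residents in that group.
stop_counts = stops["driver_race"].value_counts()
stop_rate = (stop_counts / population).rename("stops_per_resident")
print(stop_rate.sort_values(ascending=False))
```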

Examining stop rates is a natural starting point, but they can be hard to interpret. For example, driving behavior and time spent on the road likely differ by race or ethnicity. The racial composition of the local population also may not be representative of those who drive through an area, especially when dealing with stops on highways.

After the stop

In nearly every jurisdiction we find that stopped black and Hispanic drivers are searched more often than white drivers. But if minorities also happen to carry contraband at higher rates, these higher search rates may stem from appropriate police work. Disentangling discrimination from effective policing is challenging and requires more subtle statistical analysis, which we turn to below.
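
Computing the search rates behind this comparison is a simple group-by on the stop records. A minimal sketch in Python, again with hypothetical column names and toy data:

```python
import pandas as pd

# Hypothetical stop-level table: one row per stop, with a flag recording
# whether the officer conducted a search.
stops = pd.DataFrame({
    "driver_race":      ["white", "black", "hispanic", "white", "black", "white"],
    "search_conducted": [False,   True,    True,       False,   False,   True],
})

# Search rate: the share of stopped drivers of each race who were searched.
search_rate = stops.groupby("driver_race")["search_conducted"].mean()
print(search_rate)
```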

Going beyond disparities: testing for discrimination in search decisions

The outcome test

In the 1950s, the Nobel prize-winning economist Gary Becker proposed an elegant method to test for bias in search decisions: the outcome test.

Becker proposed looking at search outcomes. If officers don’t discriminate, he argued, they should find contraband — like illegal drugs or weapons — on searched minorities at the same rate as on searched white drivers. If searches of minorities turn up contraband at lower rates than searches of white drivers, the outcome test suggests officers are applying a double standard, searching minorities on the basis of less evidence.
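
In code, the outcome test boils down to comparing hit rates among searched drivers. A minimal sketch, with hypothetical column names and toy data:

```python
import pandas as pd

# Hypothetical stop-level table with search and contraband flags.
stops = pd.DataFrame({
    "driver_race":      ["white", "black", "hispanic", "black", "white", "hispanic"],
    "search_conducted": [True,    True,    True,       True,    False,   True],
    "contraband_found": [True,    True,    False,      True,    False,   False],
})

# Hit rate: among searched drivers of each race, the share of searches that
# turned up contraband. Under the outcome test, a lower hit rate for a group
# suggests officers searched that group on the basis of less evidence.
searched = stops[stops["search_conducted"]]
hit_rate = searched.groupby("driver_race")["contraband_found"].mean()
print(hit_rate)
```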

In our data, the success rate of searches (or the hit rate) is generally lower for Hispanic drivers than for white drivers, so the outcome test indicates Hispanic drivers face discrimination. For black drivers, search hit rates are typically in line with those of white drivers, so the outcome test alone does not point to discrimination against black drivers.

Hit rates can be misleading

Becker’s outcome test is a compelling measure of discrimination. But it’s also an imperfect barometer of bias. The test can fail to detect discrimination when it’s there and can indicate discrimination when it’s not there, as we and other researchers have observed.

For example, suppose the drivers police stop come in just a handful of types. Among white drivers there are exactly two: some have a 5% chance of carrying contraband, and the rest have a 75% chance. Among black drivers there are also two: some have a 5% chance of carrying contraband, and the rest have a 50% chance.

In this hypothetical world, consider a fair police officer who only searches drivers with at least a 10% chance of carrying something illegal — regardless of race. In that situation, the white hit rate would be 75% and the black hit rate would be 50%. The officer used the same standard to search each driver, and so did not discriminate, even though the hit rates differ.
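
To see the arithmetic, here is a small simulation of that hypothetical world. The 80/20 mix of low-risk and high-risk drivers is an extra assumption made only so the simulation runs; the example above doesn’t specify one, and any mix yields the same hit rates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # drivers per group (arbitrary)

# Two driver types per group, as in the example above. The 80/20 mix is
# an assumption needed only to run the simulation; it doesn't affect hit rates.
white_risk = rng.choice([0.05, 0.75], size=n, p=[0.8, 0.2])
black_risk = rng.choice([0.05, 0.50], size=n, p=[0.8, 0.2])

def search_outcomes(risk, threshold=0.10):
    """A race-blind officer searches every driver whose contraband risk is
    at least `threshold`; return the resulting (search rate, hit rate)."""
    searched = risk >= threshold
    carrying = rng.random(risk.size) < risk
    return searched.mean(), carrying[searched].mean()

print("white: search rate %.2f, hit rate %.2f" % search_outcomes(white_risk))
print("black: search rate %.2f, hit rate %.2f" % search_outcomes(black_risk))
# Same 10% standard for everyone, yet hit rates land near 75% for white
# drivers and 50% for black drivers.
```

The gap in hit rates reflects the difference in the two groups’ risk distributions, not a difference in the standard applied. That is the flaw the threshold test is designed to address.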

The threshold test

To address the shortcomings of the outcome test, we built on Becker’s ideas to develop a more robust statistical measure of discrimination: the threshold test. The threshold test combines information on both search rates and hit rates, and lets us directly infer the standard of evidence officers require before carrying out a search. In the example above, the threshold test tells us that the same 10% standard is applied to all drivers, indicating no bias. (See our technical paper for more information on how the test works.)
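
The full threshold test is a Bayesian model fit jointly across many locations, which is beyond a short snippet, but the forward relationship it inverts can be sketched. Assuming, purely for illustration, that a group’s contraband risk follows a beta distribution, a single threshold implies both a search rate and a hit rate:

```python
from scipy.stats import beta

def implied_rates(a, b, threshold):
    """If a group's contraband risk follows a Beta(a, b) distribution and
    officers search anyone whose risk exceeds `threshold`, return the
    implied (search rate, hit rate)."""
    search_rate = beta.sf(threshold, a, b)  # P(risk > threshold)
    # Hit rate is E[risk | risk > threshold], using the identity
    # E[risk * 1{risk > t}] = a / (a + b) * P(Beta(a + 1, b) > t).
    hit_rate = (a / (a + b)) * beta.sf(threshold, a + 1, b) / search_rate
    return search_rate, hit_rate

# The same 10% threshold applied to two different risk distributions
# produces different search rates and different hit rates.
print(implied_rates(1, 15, threshold=0.10))
print(implied_rates(1, 25, threshold=0.10))
```

Roughly speaking, the real test runs this logic in reverse: it estimates the risk distributions and the race-specific thresholds together from the observed search and hit rates, so that differences in risk distributions are not mistaken for differences in standards.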

When we apply the threshold test to our traffic stop data, we find that police require less suspicion to search black and Hispanic drivers than white drivers. This double standard is evidence of discrimination.

As with all statistical tests of bias, our threshold test has limits. For example, if officers suspect more serious criminal activity when searching black and Hispanic drivers compared to white drivers, then lower search thresholds for these groups may be the result of non-discriminatory factors. Our results are just one step in understanding complex police interactions.

The effects of legalizing recreational marijuana use

Several states have recently legalized the use of recreational marijuana. We have detailed data in two of these states: Colorado and Washington.

After marijuana use was legalized, Colorado and Washington saw dramatic drops in search rates. That’s because many searches are drug-related. Take away marijuana as a crime and searches go down. (In both states we exclude searches after an arrest and other searches that are conducted as a procedural matter, regardless of any suspicion of drug possession.)
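
Here is a minimal sketch of that before-and-after comparison, assuming a stop-level table with a date, a race column, a search flag, and a search-basis field used to drop procedural searches. The column names, categories, and legalization date are placeholders, not the schema of our released files.

```python
import pandas as pd

# Placeholder date: substitute the state's actual legalization date.
LEGALIZATION_DATE = pd.Timestamp("2012-12-31")

# Hypothetical stop-level table (toy rows).
stops = pd.DataFrame({
    "date": pd.to_datetime(["2012-01-15", "2012-06-02", "2013-03-10", "2014-07-22"]),
    "driver_race": ["white", "black", "white", "black"],
    "search_conducted": [True, True, False, True],
    "search_basis": ["probable cause", "incident to arrest", None, "probable cause"],
})

# Keep only discretionary searches: drop searches conducted as a matter of
# procedure (e.g., after an arrest), which say little about officer suspicion.
stops["discretionary_search"] = (
    stops["search_conducted"] & (stops["search_basis"] != "incident to arrest")
)

# Compare discretionary search rates by race before and after legalization.
stops["period"] = (stops["date"] >= LEGALIZATION_DATE).map(
    {True: "after", False: "before"}
)
rates = stops.groupby(["period", "driver_race"])["discretionary_search"].mean()
print(rates)
```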

In Washington and Colorado, far fewer people — both white and minority drivers — are searched overall. However, the racial disparities in searches remain and there is a persistent gap in the threshold for searching white and minority drivers.

In the 12 states where we have data and recreational marijuana remains illegal, search rates have stayed high, as the charts below show.

[Charts: searches per 100 stops in Arizona, California, Florida, Massachusetts, Montana, North Carolina, Ohio, Rhode Island, South Carolina, Texas, Vermont, and Wisconsin]

Raw data

The data used to render these charts are contained in the following files: