When “Election Performance” Became “Election Efficiency”: How MIT’s Data Lab Quietly Stopped Measuring What Matters
By Mitt Castor
In 2013, the Pew Charitable Trusts launched the Elections Performance Index to provide “nonpartisan, objective measures of election administration in the United States.” The project convened academics and election officials from 14 states to identify metrics that could evaluate American elections across a comprehensive framework: registration, voting, and counting—with attention to both voter convenience and process security.
When MIT’s Election Data and Science Lab assumed management of the index in 2017, something curious happened. The framework remained, but the mission quietly shifted. What began as a broad assessment of election quality morphed into a narrower gauge of administrative efficiency. By 2026, the index had become a tool for certifying that American elections are “well-run” and “continue to get better every year”—even as the metrics themselves measure almost nothing related to election integrity.
The reformulated index currently employs 19 indicators organized into four categories: electoral environment, election infrastructure, mail voting, and in-person voting. States are rewarded for high voter turnout, low mail ballot rejection rates, minimal wait times, online registration systems, and membership in ERIC (the Electronic Registration Information Center). These are defensible measures of administrative smoothness. They are not, however, measures of election security.
The index contains no indicators for voter ID requirements, citizenship verification, voter roll accuracy, or detection of duplicate voting across jurisdictions. It does not attempt to measure ballot harvesting, chain-of-custody procedures, or the quality of post-election audits beyond whether they exist. In the EPI’s own methodology documentation, the Lab acknowledges that indicators were selected based on “data availability and consistency across states”—meaning that phenomena difficult to measure, such as fraud, are structurally excluded from consideration.
This creates an index that can declare election administration “strong” while remaining blind to the integrity concerns that have animated much of the national debate. A state could theoretically have rampant non-citizen voting and still score perfectly on every EPI metric, provided ballots were processed quickly and rejection rates stayed low. The index measures how efficiently votes were counted, not whether the votes counted were cast by eligible voters.
The consequences of this methodological lacuna are visible in the index’s 2024 rankings. Minnesota—a state whose biggest city has become synonymous with large-scale public fraud—ranks first in the nation with an 89% score. This is the same Minnesota where federal prosecutors secured convictions in the Feeding Our Future scandal, described by Attorney General Merrick Garland as “the country’s largest pandemic relief fraud scheme.” In that case, a nonprofit exploiting federal child nutrition programs stole over $240 million by submitting false meal counts and fraudulent attendance rosters—precisely the kind of systematic document fraud that would be trivially easy to apply to mail-in ballots.
Minnesota’s Somali community, which figured prominently in Feeding Our Future, has also been implicated in a wider pattern of Medicaid and daycare fraud that federal prosecutors now estimate at $9 billion across multiple programs. In December 2025, FBI Director Kash Patel called the daycare fraud investigations “the tip of a very large iceberg,” while federal agents raided more than 20 Minneapolis facilities. Minneapolis City Council member Jamal Osman’s wife operated a Feeding Our Future meal site that received over $400,000 in funding; Osman himself was the subject of Project Veritas allegations in 2020 regarding ballot harvesting in Rep. Ilhan Omar’s district, though Minnesota law permitted the practice at the time.
None of this appears on MIT’s Elections Performance Index, because systematic fraud and lax enforcement are invisible to metrics that measure only whether ballots move through the system efficiently.
The political valence of MIT’s methodology becomes clearer when examined alongside the Lab’s public advocacy. In April 2026, Lab Director Charles Stewart III told USA Today that complying with President Trump’s executive order on mail-in ballot verification would be “a logistical nightmare” representing “magical thinking.” Stewart has been equally vocal in his opposition to the SAVE Act, which would require proof of citizenship for voter registration—a requirement already in place in 176 countries, including Mexico, India, and every nation in South America. When asked about such proposals, Stewart frames them as burdensome obstacles to voter access rather than reasonable safeguards against fraud.
This rhetorical posture is consistent with the EPI’s design. The index rewards states that make voting maximally frictionless and penalizes those that impose verification burdens, even when those burdens serve legitimate security purposes. States with aggressive signature-matching or identity verification see higher mail ballot rejection rates, which lowers their EPI score—despite the fact that catching fraudulent ballots is precisely what those procedures are designed to do.
The Heritage Foundation’s Election Integrity Scorecard offers a useful contrast. Heritage rates states on voter ID laws, citizenship verification, ballot harvesting restrictions, and chain-of-custody requirements. Its rankings produce nearly opposite results to MIT’s: Nevada and Colorado, which score in the top tier of the EPI, rank poorly on Heritage’s integrity-focused criteria because they have weak voter ID laws and expansive universal mail voting. The two indexes are measuring different things entirely—and calling them by different names reveals the game. MIT measures “election performance.” Heritage measures election integrity.
The distinction matters because language shapes perception. By branding its work as “election science” and publishing under the MIT imprimatur, the Election Lab lends the credibility of a world-class STEM institution to what is fundamentally a political choice about which values to prioritize. Voter convenience and administrative efficiency are legitimate values. So are ballot security and voter verification. An index that measures only the former while claiming to assess “election administration” as a whole is not science. It is partisan advocacy wearing a lab coat.
The problem is not that MIT’s researchers have political opinions. The problem is that they have constructed a measurement system that structurally excludes the concerns of half the country, then pronounced American elections “strong” based on metrics that do not address those concerns. This is not objectivity. It is the politics of selective measurement.
When Pew handed the index to MIT in 2017, the project’s stated purpose was to “drive a conversation about best practices in how to keep elections safe, secure, and accurate.” Nine years later, “safe” and “secure” have been quietly redefined to mean “convenient and efficient.” The index that once promised to measure election quality now certifies election administration as excellent while carefully avoiding any metric that might suggest otherwise.
Minnesota’s #1 ranking is not an anomaly. It is the index working exactly as designed.