SIX MARKETS.
SIX METHODOLOGIES.
Forcing six markets' data into one national score would just launder the inconsistency.
NO NATIONAL STANDARD
There is no national standard for restaurant health inspections. The verdict labels mean the same thing everywhere; the math behind them changes per market.
THE VERDICT LABELS
Every scored restaurant lands in one of three buckets: EAT, YOUR CALL, or BEAT.
WHAT WE ALWAYS DO
Whatever the market: weight inspections by recency, penalize severe violations hardest, and judge the full track record instead of the latest snapshot.
HOW EACH CITY WORKS
Chicago inspectors hand out real failures, plus a middle ground called "Pass with Conditions" — violations serious enough to warrant follow-up, but not enough to close the door.
What we do: separate real passes from conditional ones. Weight every inspection by recency (the decay curve favors the last 18 months). Penalize critical violations hard. A single recent failure matters. Twenty clean passes over a decade build confidence.
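A minimal sketch of what that weighting might look like, in Python. The 18-month half-life, the penalty values, and the record fields (`date`, `result`, `critical_violations`) are assumptions for illustration, not the production numbers.

```python
from datetime import date

# Illustrative outcome penalties: a conditional pass sits between a clean pass
# and an outright fail. Values are assumptions, not production numbers.
OUTCOME_PENALTY = {"pass": 0.0, "pass_with_conditions": 0.5, "fail": 1.0}
CRITICAL_PENALTY = 0.25   # extra hit per critical violation (assumed)
HALF_LIFE_DAYS = 548      # ~18 months: an inspection's weight halves every 18 months

def recency_weight(inspected_on: date, today: date) -> float:
    """Exponential decay so the last ~18 months dominate."""
    return 0.5 ** ((today - inspected_on).days / HALF_LIFE_DAYS)

def chicago_risk(inspections: list[dict], today: date | None = None) -> float:
    """Recency-weighted average penalty; higher means riskier."""
    today = today or date.today()
    num = den = 0.0
    for insp in inspections:
        w = recency_weight(insp["date"], today)
        penalty = OUTCOME_PENALTY[insp["result"]]
        penalty += CRITICAL_PENALTY * insp.get("critical_violations", 0)
        num += w * penalty
        den += w
    return num / den if den else 0.0

# A recent failure carries a weight near 1.0; a decade-old clean pass
# contributes almost nothing, so the failure dominates the average.
history = [
    {"date": date(2025, 11, 2), "result": "fail", "critical_violations": 2},
    {"date": date(2016, 5, 14), "result": "pass", "critical_violations": 0},
]
print(round(chicago_risk(history, today=date(2026, 1, 15)), 3))
```

The half-life shape is the design choice that matters: old inspections never vanish, they just stop being able to outvote what happened last quarter.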
NYC puts letter grades (A, B, C) in windows. The catch: the city also logs "Pass with Conditions" inspections — violations cited, restaurant stayed open. A failure in spirit, not on paper. And the grade in the window is just the latest snapshot.
What we do: look at the full track record, not just the current grade. Translate "Pass with Conditions" into what it actually is: violations cited. Weight by recency and severity.
NYC's thresholds are calibrated to its own score distribution — applying Chicago's numbers directly would give you a fake comparison.
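To make that concrete, here is one way per-market calibration could work, sketched under assumptions: verdict cut points are read off percentiles of that market's own risk-score distribution rather than copied from another city. The percentile choices and the sample data are invented.

```python
import statistics

def calibrate_thresholds(risk_scores, eat_pct=35, beat_pct=85):
    """Pick EAT / YOUR CALL / BEAT cut points from one market's own
    risk-score distribution. Percentile choices are assumptions."""
    qs = statistics.quantiles(risk_scores, n=100)  # 99 interpolated cut points
    return qs[eat_pct - 1], qs[beat_pct - 1]

def verdict(score, cuts):
    eat_cut, beat_cut = cuts
    if score <= eat_cut:
        return "EAT"
    if score >= beat_cut:
        return "BEAT"
    return "YOUR CALL"

# Hypothetical score samples for two markets. Because each market gets its
# own cuts, the same raw risk score of 0.40 lands in different buckets.
nyc_scores = [0.05, 0.10, 0.15, 0.22, 0.30, 0.41, 0.55, 0.70, 0.88, 0.95]
chi_scores = [0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.72, 0.80, 0.88, 0.95]
print(verdict(0.40, calibrate_thresholds(nyc_scores)))  # YOUR CALL
print(verdict(0.40, calibrate_thresholds(chi_scores)))  # EAT
```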
Dallas gives every inspection a numeric score out of 100. Nearly half of all Dallas inspections fail — a city actually trying to hold restaurants accountable. Which also means a single bad day can look devastating if the latest score is all you see.
What we do: weight the full history with recent scores counting for more, using a recency-weighted deduction formula. One bad day doesn't define a restaurant. One good day doesn't redeem one with a pattern of problems.
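A sketch of one plausible recency-weighted deduction over numeric scores, assuming the same 18-month decay as the Chicago sketch; the half-life and the sample history are assumptions.

```python
from datetime import date

HALF_LIFE_DAYS = 548  # ~18 months, same assumed decay as the Chicago sketch

def weighted_deduction(scored_inspections, today=None):
    """Recency-weighted average of (100 - score) deductions.

    A recent 60 drags the result down hard; a 60 from five years ago
    barely registers next to a string of recent mid-90s scores.
    """
    today = today or date.today()
    num = den = 0.0
    for insp in scored_inspections:
        w = 0.5 ** ((today - insp["date"]).days / HALF_LIFE_DAYS)
        num += w * (100 - insp["score"])
        den += w
    return num / den if den else 0.0

history = [
    {"date": date(2025, 10, 1), "score": 94},
    {"date": date(2025, 3, 12), "score": 96},
    {"date": date(2020, 6, 5), "score": 61},   # one bad day, years ago
]
# Effective score stays above 90 despite the old 61.
print(round(100 - weighted_deduction(history, today=date(2026, 1, 15)), 1))
```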
Roughly 34% of scored Dallas restaurants land in EAT, 46% in YOUR CALL, 20% in BEAT.
SF also uses numeric scores out of 100, but the bar for "passing" is generous and the public data goes back years. A score from 2019 tells you almost nothing about today's kitchen.
What we do: same formula structure as Dallas — recency-weighted deduction — so the verdict reflects current state, not ancient history.
Same thresholds as Chicago and Dallas. Applying one bar across numeric-score cities keeps the verdicts comparable.
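To show the comparability idea, a toy bucketing that applies one shared bar to any numeric-score market; the cut values here are invented, not the real thresholds.

```python
# One shared bar for the numeric-score markets (cut values are invented).
EAT_AT_OR_ABOVE = 93.0
BEAT_BELOW = 80.0

def bucket(effective_score: float) -> str:
    if effective_score >= EAT_AT_OR_ABOVE:
        return "EAT"
    if effective_score < BEAT_BELOW:
        return "BEAT"
    return "YOUR CALL"

# The same effective score means the same verdict whether the kitchen
# is in Dallas or San Francisco.
print(bucket(95.5), bucket(86.0), bucket(72.3))
```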
LA County has the most absurd grading system we cover: 96% of restaurants get an A. Read that again. A restaurant with recent critical violations and a recovered C in its history still gets an A in the window. The grade you see is not the signal you think it is.
What we do: ignore the current grade as the primary signal. Look at grade history over the last 18 months, critical violations (4+ point deductions) in the last 12 months, and whether a C was actually recovered or the restaurant is still limping. We call this our "D+" algorithm — it's not based on a numeric score at all.
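A rough sketch of how a rule-based verdict along those lines could be assembled. The specific rules, window lengths, and field names are guesses for illustration; this is not the actual "D+" algorithm.

```python
from datetime import date

def la_verdict(inspections, today=None):
    """Rule-based verdict from grade history instead of the posted grade.

    `inspections` is a list of dicts with a date, a letter grade, and a count
    of critical (4+ point) violations; fields and rules are assumed.
    """
    today = today or date.today()
    last_18mo = [i for i in inspections if (today - i["date"]).days <= 548]
    last_12mo = [i for i in inspections if (today - i["date"]).days <= 365]

    recent_criticals = sum(i.get("critical_violations", 0) for i in last_12mo)
    recent_grades = [i["grade"] for i in sorted(last_18mo, key=lambda i: i["date"])]
    had_c = "C" in recent_grades
    recovered = had_c and recent_grades[-1] == "A" and recent_criticals == 0

    if recent_criticals >= 2 or (had_c and not recovered):
        return "BEAT"
    if recent_criticals == 1 or "B" in recent_grades or had_c:
        return "YOUR CALL"
    return "EAT"

history = [
    {"date": date(2025, 9, 3), "grade": "A", "critical_violations": 1},
    {"date": date(2025, 2, 17), "grade": "C", "critical_violations": 3},
]
print(la_verdict(history, today=date(2026, 1, 15)))  # an A in the window, but not an EAT here
```

The point of the rule structure is that the posted A never gets to veto the recent history: the grade is one input among several, and the critical-violation count can override it outright.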
Roughly 69% EAT, 29% YOUR CALL, under 2% BEAT — still top-heavy, but it reflects what the data actually shows once you stop giving credit for the county's generosity.
Florida ditched the old critical/noncritical split in 2013 for a three-tier system: High Priority violations (direct food-safety risks like temperature abuse and contamination), Intermediate (sanitation, equipment, handwashing facilities), and Basic (floors, walls, lighting). The state's Division of Hotels & Restaurants publishes 5 years of inspection history statewide — every county, every restaurant — for free.
What we do: weight High Priority violations hardest (they're the actual food-poisoning risks), Intermediate moderately, Basic minimally. Apply the same recency decay as our other markets so a 2021 inspection counts less than a 2026 one. Treat "Warning Issued" as a real middle ground — violations were cited and the inspector flagged the place, but the doors stayed open. That's not a clean pass.
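Sketching that tier weighting with assumed coefficients and the same decay shape used elsewhere; the weights, the warning penalty, and the record format are illustrative only.

```python
from datetime import date

# Assumed per-violation weights: High Priority dominates, Basic barely moves the needle.
TIER_WEIGHT = {"high": 3.0, "intermediate": 1.0, "basic": 0.2}
WARNING_PENALTY = 1.5       # "Warning Issued" treated as a real middle ground (assumed)
HALF_LIFE_DAYS = 548        # same ~18-month decay as the other markets (assumed)

def florida_risk(inspections, today=None):
    """Recency-weighted violation load across Florida's three tiers."""
    today = today or date.today()
    num = den = 0.0
    for insp in inspections:
        w = 0.5 ** ((today - insp["date"]).days / HALF_LIFE_DAYS)
        load = sum(TIER_WEIGHT[t] * insp["violations"].get(t, 0) for t in TIER_WEIGHT)
        if insp.get("disposition") == "warning_issued":
            load += WARNING_PENALTY
        num += w * load
        den += w
    return num / den if den else 0.0

history = [
    {"date": date(2025, 8, 20), "violations": {"high": 2, "basic": 5},
     "disposition": "warning_issued"},
    {"date": date(2021, 4, 2), "violations": {"intermediate": 1}},
]
print(round(florida_risk(history, today=date(2026, 1, 15)), 2))  # the 2021 visit barely registers
```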
Florida's data covers all 67 counties. We don't break it down further yet — county-level filtering shows up as chips on the Florida page. Thresholds are calibrated to Florida's score distribution since the inspection regime differs from the open-data city APIs we pull elsewhere.
WHAT WE DON'T DO
Inspection data reflects conditions at the time of each visit. A restaurant that failed may have fixed everything the next day. A restaurant that passed may have slipped last week. The pattern tells the story — not any one snapshot.