What This Project Is
On January 20, 2026, the White House published "365 Wins in 365 Days" -- a list of 365 claims about the administration's first year. This project takes each claim and subjects it to the same standard a forensic accountant applies to financial statements: verify every number, trace every causal chain, follow every dollar, and check every attribution.
This is not opinion journalism. We do not begin with a thesis and find evidence to support it. We begin with the claim as stated, decompose it into checkable assertions, and follow the evidence wherever it leads. Some claims hold up. Many do not. A few are worse than misleading -- they are precisely the opposite of what the evidence shows. We label each accordingly.
The standard is forensic, not adversarial. We are detectives, not prosecutors. When the evidence is ambiguous, we say so. When we cannot verify a claim, we say that too.
Evidence Hierarchy
Every factual assertion in this project is assigned one of three evidence levels. These are not opinions about how confident we feel -- they reflect the quality and independence of the underlying evidence.
Established Fact
Supported by at least two independent primary sources, or by multiple journalism sources that independently verified the same claim. "Independent" means the sources collected their data separately -- not that two articles cited the same government report. This is the highest level of confidence we assign.
Example: "Chicago recorded 416 murders in 2025" is an established fact -- confirmed independently by CPD preliminary data, the University of Chicago Crime Lab, CBS News, and WBEZ, each with access to the underlying records.
Strong Inference
Supported by a single authoritative primary source, or by strong circumstantial evidence from multiple sources. The conclusion is well-supported but not independently corroborated at the level required for an established fact. Most analytical conclusions -- where we assess causation, attribution, or context -- are labeled at this level.
Example: "The crime decline preceded the federal task force" is a strong inference -- based on CPD's own monthly data showing homicides already down 29% before Operation Midway Blitz launched, but requiring interpretation of temporal correlation.
Informed Speculation
An assessment based on available evidence that goes beyond what the sources directly state. Used sparingly, always labeled, and always explained. We include informed speculation only when the pattern of evidence strongly suggests a conclusion that no source has explicitly drawn.
Example: Projecting the long-term fiscal impact of a policy when the CBO has not yet scored it, based on analogous historical programs.
Source Hierarchy
Not all sources are created equal. A Congressional Research Service report carries more evidentiary weight than a think tank blog post, which carries more than a social media claim. We classify every source into one of four tiers:
| Tier | Source Type | Examples |
|---|---|---|
| 1: Primary | Original government data, legal documents, official records | Congress.gov, Federal Register, PACER, SEC filings, BLS/BEA/Census, CBO, GAO |
| 2: Institutional | Expert analysis by established institutions | Federal Reserve reports, CRS, Inspector General reports, peer-reviewed research, UN/WHO/World Bank/IMF |
| 3: Quality Journalism | Investigative reporting with editorial standards | Reuters, AP, ProPublica, NYT/WSJ investigations, trade publications, FOIA documents |
| 4: Commentary | Analysis with acknowledged perspective | Think tanks (bias noted), domain experts, historical comparisons, editorial analysis |
Tier 4 sources are not excluded -- they often provide valuable analytical frameworks -- but they cannot anchor an established fact on their own. We always note when a source has an institutional perspective (e.g., "the Cato Institute, which advocates for immigration liberalization" or "the Center for Immigration Studies, which advocates for restriction").
Citation vs. Corroboration
This distinction is fundamental to our methodology; conflating the two is the single most common error in public discourse about facts.
Citation
Three news articles all reporting that "BLS data shows 654,000 jobs created" is one source cited three times. The articles add no independent information -- they are all reading the same BLS release. If the BLS number is later revised, all three are simultaneously wrong.
Corroboration
BLS employment data showing job growth and ADP's independent payroll survey showing similar growth is two independent sources. BLS surveys establishments; ADP processes actual payrolls. They use different methodologies on different data. If they agree, confidence increases.
When in doubt about whether two sources are truly independent, we downgrade the evidence level to strong inference with a note explaining the limitation. A single authoritative primary source honestly labeled is stronger evidence than a falsely corroborated established fact.
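The decision rule above can be sketched in code. This is a minimal illustration under stated assumptions, not the project's actual tooling; the `derives_from` field and all source names here are hypothetical.

```python
# Sketch: distinguishing corroboration from citation.
# A source that merely reads another source collapses into that root.

def independent_roots(sources):
    """Collect distinct data roots: derived sources count as their root."""
    return {s.get("derives_from") or s["name"] for s in sources}

def evidence_level(sources):
    """Two or more independent roots -> established fact;
    a single root -> at most a strong inference."""
    return "established fact" if len(independent_roots(sources)) >= 2 else "strong inference"

# Three articles all reading the same BLS release: one source, not three.
cited = [
    {"name": "Article A", "derives_from": "BLS release"},
    {"name": "Article B", "derives_from": "BLS release"},
    {"name": "Article C", "derives_from": "BLS release"},
]

# BLS establishment data plus ADP's separately collected payroll data.
corroborated = [
    {"name": "BLS CES", "derives_from": None},
    {"name": "ADP payrolls", "derives_from": None},
]

print(evidence_level(cited))         # strong inference
print(evidence_level(corroborated))  # established fact
```

In practice the hard part is the input, not the rule: deciding whether two sources truly collected their data separately is a judgment call, which is why ambiguous cases are downgraded with a note.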
How Institutions Produce Their Numbers
Many claims in this project cite government statistics. Understanding how those statistics are produced is essential to evaluating whether a claim uses them honestly.
Bureau of Labor Statistics (BLS)
Employment data comes from two separate surveys: the Current Employment Statistics (CES) survey of ~670,000 worksites (the "establishment survey," producing the headline jobs number), and the Current Population Survey (CPS) of ~60,000 households (producing the unemployment rate and demographic breakdowns). These surveys can and do disagree because they measure different things differently. CES counts jobs; CPS counts people. CES is revised annually via benchmark; CPS is not. Claims that cite one survey while ignoring the other are often misleading.
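A toy example, with invented figures, of why the two surveys can diverge by construction:

```python
# Invented micro-data: CES counts payroll jobs, CPS counts employed people.
people = [
    {"payroll_jobs": 2, "employed": True},   # multiple jobholder: 2 in CES, 1 in CPS
    {"payroll_jobs": 1, "employed": True},
    {"payroll_jobs": 1, "employed": True},
    {"payroll_jobs": 0, "employed": False},  # unemployed: in neither count
]

ces_jobs = sum(p["payroll_jobs"] for p in people)  # 4 payroll jobs
cps_employed = sum(p["employed"] for p in people)  # 3 employed people

print(ces_jobs, cps_employed)  # 4 3
```

Self-employment pushes the gap the other way: a self-employed person is counted as employed by CPS but generates no CES payroll record. Either way, the two headline numbers are measuring different things.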
Bureau of Economic Analysis (BEA)
GDP estimates go through three releases: advance (one month after quarter-end), second (two months), and third (three months). Each revision incorporates more complete data. The advance estimate is the most cited and the least reliable. Annual revisions can change the picture substantially.
Census Bureau
Population estimates use administrative records, vital statistics, and modeling. International migration estimates are particularly uncertain -- the Census Bureau itself notes wide confidence intervals. The January population control adjustment in the CPS can mechanically shift employment counts by hundreds of thousands without any actual change in the labor market.
Customs and Border Protection (CBP)
"Encounters" are not the same as unique individuals -- one person can generate multiple encounters across attempts. "Apprehensions" and "inadmissibles" measure enforcement activity, not migration volume. A decline in encounters can reflect less migration, more effective deterrence, or simply shifted routes.
The Provenance Chain
Every assertion we make must trace back to a specific passage in a specific source. If we cannot point to it, we do not say it.
For each analysis, we maintain a provenance file that maps every factual claim to:
- The source document (with URL, title, publication date, and access date)
- The specific passage within that document
- The evidence level assigned and why
- The corroboration type (for established facts)
- An archived copy of the source, preserved against link rot
This chain serves two purposes. First, it disciplines our own work -- the act of tracing every assertion to a source catches errors before publication. Second, it enables verification -- any reader can follow the chain from our conclusion to the underlying evidence and judge for themselves.
Self-Audit Process
After completing each analysis, we conduct a systematic self-audit before publication. This is distinct from provenance validation (which checks structural completeness). The self-audit asks: are our own claims precise, fair, and defensible?
Self-Audit Checklist
- Every factual claim: can we point to a specific source passage?
- Every number: does it match what the source actually says, not what we remember?
- Every "established fact": does it truly have two independent sources?
- Every data source status claim: verified directly, today?
- Every third-party reference: URL included and reachable?
- Steel-man test: have we presented the strongest version of the claim?
- Temporal check: what was true at claim date vs. what's true now?
- Are we conflating citation with corroboration anywhere?
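The number-matching item on that checklist lends itself to a mechanical first pass. The helper below is a hypothetical sketch, not the project's tooling; it catches numbers that differ, not numbers quoted out of context.

```python
# Sketch: flag any number in a claim that does not appear in its source passage.
import re

def numbers_in(text):
    # Strip thousands separators, then extract integers and decimals.
    return set(re.findall(r"\d+(?:\.\d+)?", text.replace(",", "")))

def numbers_match(claim, source_passage):
    """True if every number quoted in the claim also appears in the passage."""
    return numbers_in(claim) <= numbers_in(source_passage)

print(numbers_match("654,000 jobs created", "BLS reported 654,000 jobs"))  # True
print(numbers_match("700,000 jobs created", "BLS reported 654,000 jobs"))  # False
```

A check like this disciplines the "does it match what the source actually says, not what we remember" step; the steel-man and temporal checks still require human judgment.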
When we catch our own errors -- and we do -- we correct them openly. A corrections section will be maintained as the project progresses.
Steel-Manning
Before critiquing any claim, we present the strongest possible version of it. If a claim contains a kernel of truth, we identify it. If the underlying policy had genuine effects, we document them. If reasonable people could interpret the evidence differently, we acknowledge that.
This is not false balance. Many claims, after steel-manning, still collapse under the evidence. But presenting the strongest version first means our critique engages with substance, not straw men. If a reader agrees with the claim, they should feel their position was fairly represented before being challenged.
In practice, this means every analysis includes a section acknowledging what the claim gets right before addressing what it gets wrong. The evidence section presents facts before interpretation. The verdict follows from the evidence, not the other way around.
Analytical Lenses
We apply several recurring analytical frameworks wherever relevant. These are not opinions -- they are structured ways of reading claims that help surface what the rhetoric obscures.
Cui Bono
Who benefits? When a policy is framed as helping one group, who actually gains -- financially, politically, structurally?
Follow the Money
Where do the dollars go? Contracts, grants, tariff revenue, enforcement budgets -- the financial trail often tells a different story than the press release.
Stated vs. Revealed Preferences
What does the administration say it values vs. what its budgets, appointments, and enforcement patterns show it actually prioritizes?
Announcement vs. Outcome
Was the action announced, attempted, or achieved? An executive order signed is not a policy implemented. A policy implemented is not an outcome delivered.
The Denominator Problem
Absolute numbers without context mislead. "650,000 arrests" sounds different when you know the denominator. Rates, percentages, and per-capita figures reveal what raw counts conceal.
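The arithmetic is simple but worth making explicit. With invented figures:

```python
# Invented figures: a large raw count can hide a low rate, and vice versa.
def per_100k(incidents, population):
    """Convert a raw count into a rate per 100,000 people."""
    return incidents / population * 100_000

national = per_100k(650_000, 330_000_000)  # big count, huge denominator: ~197 per 100k
city = per_100k(50_000, 5_000_000)         # smaller count, small denominator: 1,000 per 100k

print(f"national: {national:.0f} per 100k, city: {city:.0f} per 100k")
```

The jurisdiction with one-thirteenth the raw count has roughly five times the rate. A claim that reports only the raw count has chosen its denominator by omission.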
The Attribution Problem
Did this administration cause the outcome, inherit it, or simply preside over it? Pre-existing trends, prior legislation, and economic cycles often explain more than current policy.
The Padding Lens
Is this claim a genuinely distinct achievement, or the same action counted multiple times with different framing? We track overlapping claim clusters and flag padding explicitly.
This methodology is not a shield against criticism -- it is an invitation to it. If our evidence is wrong, show us better evidence. If our reasoning is flawed, identify the flaw. If our verdicts are unfair, make the case. The entire provenance chain is available for exactly this purpose.
The standard we hold these 365 claims to is the same standard we hold ourselves to: show your work, cite your sources, and let the evidence speak.