What This Takes
Each analysis in this project follows a consistent process: decompose the claim into checkable assertions, locate primary sources, verify every number against the original data, trace causal chains, check attribution, write the analysis, build the provenance chain, run the self-audit, and archive every source. Even with the most cutting-edge tools at my side, this takes time and attention, and there are 365 claims in this project alone.
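For readers curious what that looks like in practice, here is a minimal sketch of the per-claim checklist, with stage names mirroring the process above. It is illustrative only, not the project's actual tooling, and every identifier in it is hypothetical.

```python
from dataclasses import dataclass, field

# The stages every claim passes through, in order.
# Names mirror the process described above; the structure is illustrative.
STAGES = [
    "decompose_into_assertions",
    "locate_primary_sources",
    "verify_numbers",
    "trace_causal_chains",
    "check_attribution",
    "write_analysis",
    "build_provenance_chain",
    "run_self_audit",
    "archive_sources",
]

@dataclass
class ClaimAnalysis:
    claim_id: int                      # 1 through 365
    text: str                          # the claim as originally stated
    completed: set[str] = field(default_factory=set)

    def complete(self, stage: str) -> None:
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.completed.add(stage)

    @property
    def done(self) -> bool:
        # An analysis ships only when every stage has been finished.
        return self.completed == set(STAGES)
```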
This is currently one person's nights and weekends, on top of a day job. The project exists because the work felt necessary, but sustaining it requires support.
Why It Compounds
The individual analyses are useful on their own. But the real value of this project is cumulative.
Every analysis adds to a growing knowledge base: entities identified, relationships mapped, sources archived, cross-references built. By the time we reach item 300, the system knows which companies appear in multiple claims, which policy actions are counted more than once, which economic trends predate the administration, and which sources contradict each other. Analysis #300 is sharper than analysis #1 because it draws on everything that came before it.
This is the principle of compounding knowledge — each new piece of research makes the existing body more valuable, and the existing body makes each new piece of research faster and more precise. The entity registry, the source archive, the cross-reference graph, the provenance chains — these are not just documentation. They are infrastructure that any future investigation can build on.
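As a rough illustration of the compounding effect, the sketch below shows a toy version of the cross-reference step: a registry that records which claims mention which entities, so that a single lookup later surfaces every prior appearance. The class, method, and entity names are hypothetical, not the project's actual code.

```python
from collections import defaultdict

class EntityRegistry:
    """Toy cross-reference index: entity name -> claims that mention it."""

    def __init__(self) -> None:
        self._claims_by_entity: dict[str, set[int]] = defaultdict(set)

    def record(self, claim_id: int, entities: list[str]) -> None:
        # Called once per analysis; each new analysis enriches the index.
        for name in entities:
            self._claims_by_entity[name].add(claim_id)

    def appearances(self, entity: str) -> set[int]:
        # Every earlier claim that mentioned this entity: later analyses
        # start from everything already mapped instead of from scratch.
        return self._claims_by_entity.get(entity, set())

registry = EntityRegistry()
registry.record(12, ["Example Agency", "Example Corp"])
registry.record(287, ["Example Corp"])
print(registry.appearances("Example Corp"))   # {12, 287}
```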
The methodology is domain-agnostic. The same forensic approach — decompose claims, verify against primary sources, trace money and structure, label evidence tiers, build provenance chains — can be applied to any institutional communication: corporate earnings calls, campaign platforms, regulatory impact statements, international agreements. What we're building here is a template for a kind of journalism.
What This Makes Possible
The 365 analyses are the foundation, not the ceiling. The entity registry, source archive, and cross-reference graph we've built open the door to projects that would be impossible without this groundwork:
- Longitudinal tracking — revisiting claims as outcomes materialize. The administration claimed 365 wins on day one; what do the numbers show at month 18? Year two? The provenance chain lets us measure drift between promise and reality over time.
- Entity deep-dives — our knowledge graph tracks which companies, agencies, and individuals appear across multiple claims. Following a single entity through the full list reveals patterns invisible at the individual-claim level.
- Comparative analysis — applying the same methodology to prior administrations' claim sets, creating the first apples-to-apples comparison of how different White Houses characterize their records.
- The methodology itself — formalizing the forensic analysis framework into something other journalists, researchers, and civic organizations can pick up and apply to their own domains.
Each of these builds directly on the infrastructure that already exists. The hardest part — building the system, establishing the evidence standards, populating the knowledge base — is already underway.
What Support Enables
Continued analysis
Completing the remaining items and maintaining the quality standard across all 365.
Source archival and infrastructure
Every cited page is archived against link rot. The knowledge graph, vector search, and embedding pipeline that make compounding analysis possible all have real hosting and compute costs.
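To make the archival step concrete, a bare-bones version of archiving a cited page could look like the sketch below: fetch the page, hash its contents, and file a snapshot with a timestamp. This is an assumption about approach, not the project's pipeline; a production version would also push the URL to a public web archive.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

import requests  # third-party HTTP client

ARCHIVE_DIR = Path("archive")  # hypothetical local layout

def archive_page(url: str) -> Path:
    """Fetch a cited page and store an immutable local snapshot."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()

    digest = hashlib.sha256(resp.content).hexdigest()
    snapshot = ARCHIVE_DIR / digest[:12]
    snapshot.mkdir(parents=True, exist_ok=True)

    (snapshot / "page.html").write_bytes(resp.content)
    (snapshot / "meta.json").write_text(json.dumps({
        "url": url,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,  # ties the citation to the exact bytes retrieved
    }, indent=2))
    return snapshot
```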
AI inference and tooling
Research-grade AI assistance for source analysis, entity extraction, and cross-reference discovery. Software tools and data access that accelerate the work.
Ongoing maintenance
Claims don't stop being relevant after analysis. Outcomes change, data gets revised, new evidence surfaces. Keeping analyses current takes sustained effort.
For a detailed breakdown of how resources are allocated at different levels — from current operations through local AI infrastructure and scaling the methodology — see the full advancement plan.
How to Support
This project is part of a broader body of work building public-interest technology. A contribution on Ko-fi supports all of it: the forensic journalism, the tools, and the infrastructure that makes this kind of work possible.
If this project is useful to you — as a reader, a journalist, a researcher, or just someone who believes public claims deserve public scrutiny — you can support it directly.
Support on Ko-fi
Collaboration
This project is open to collaboration with journalists, newsrooms, researchers, and journalism schools who are interested in applying this methodology — or adapting it for their own domains. If you're working on something where forensic, primary-source analysis of institutional claims would be valuable, reach out.
I'm particularly interested in hearing from:
- Newsrooms that want to apply this approach to other institutional claim sets
- Journalism and political science programs interested in the methodology as a teaching tool
- Researchers working on misinformation, institutional communication, or computational fact-checking
- Data journalists who see applications for the entity graph and provenance system
The standard this project sets — cite everything, trace every chain, steel-man before critique, show your work — is expensive to maintain. But it's the only standard worth maintaining. If you agree, your support makes it possible.