Learn · Interpret
Whatever needs explaining — concepts, models, decisions.
There's a gap between a result and understanding that result. Between information and comprehension. Between what a model decided and why it decided it. explained.fyi lives in that gap — for human learning and for AI interpretability alike.
The same gap exists in two places. A concept you don't understand yet. A decision a model just made. Both need explaining. One engine does both.
For learners
You've read about backpropagation. You could describe it. But you couldn't derive it from scratch, predict its failure modes, or explain why the learning rate matters so much. That's not understanding — that's familiarity.
Concept → interactive explanation → real comprehension → you could teach it
For AI systems
The model denied the loan. The SHAP values show feature importance. But the applicant asked why, the regulator asked why, and a list of feature weights isn't an answer anyone can understand or act on.
Model output → interactive explanation → genuine interpretability → defensible decision
Learn
Every explanation on explained.fyi is interactive by design — not as decoration, but because the interaction is where the understanding happens. Adjust the model, watch what changes, build the intuition that reading alone cannot build.
"The test is simple: after engaging, could you teach it? Could you predict how it behaves under new conditions? That's the bar. Everything else is just familiarity."
teaches: sensitivity
How does the output change when you adjust this input? You build intuition for relationships between variables — the thing that makes experts seem able to predict the future.
Learning rate → convergence · R₀ → epidemic peak · Tax rate → revenue
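A minimal sketch of the first relationship in that list, in plain Python rather than the site's interactive widget: gradient descent on f(x) = x², where the learning rate alone decides whether the process crawls, converges, or diverges.

```python
# Sketch: sensitivity of gradient descent to the learning rate,
# minimising f(x) = x^2 (gradient 2x). An illustrative stand-in,
# not explained.fyi's actual widget.

def gradient_descent(lr, steps=20, x=5.0):
    for _ in range(steps):
        x -= lr * 2 * x   # each step multiplies x by (1 - 2*lr)
    return x

for lr in (0.01, 0.1, 0.5, 1.1):
    print(f"lr={lr}: x after 20 steps = {gradient_descent(lr):.4f}")
# lr=0.01 crawls, lr=0.1 converges, lr=0.5 lands exactly on the minimum,
# lr=1.1 diverges: the update overshoots further with every step.
```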
teaches: emergence
What happens when you let this run? You learn how simple rules produce complex, unexpected outcomes that no static description could have prepared you for.
Epidemic spread · Game of Life · Neural net training · Market dynamics
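To make "simple rules, complex outcomes" concrete, here is a minimal sketch (ours, not the site's widget) of Conway's Game of Life: two rules per cell, yet a glider pattern travels diagonally across the grid, a behaviour neither rule mentions.

```python
# Sketch of emergence: Conway's Game of Life in a few lines.
# A cell is alive next generation iff it has 3 live neighbours,
# or it is alive now and has 2. Nothing more.
from collections import Counter

def step(live):
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for gen in range(5):
    print(f"gen {gen}: {sorted(glider)}")
    glider = step(glider)   # after 4 generations the shape recurs, shifted
```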
teaches: process
What happens in what order, and why? Each step revealed when you're ready — paced to your comprehension, not to the author's desire to be thorough.
TCP handshake · Sorting algorithms · Neural net forward pass
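As a flavour of what a process explanation steps through, a hedged sketch in plain Python: insertion sort with its state printed after every insertion, the static cousin of a paced interactive walkthrough.

```python
# Sketch of a process-type explanation: insertion sort, with the
# array's state revealed step by step rather than all at once.

def insertion_sort_trace(items):
    a = list(items)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:  # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # drop key into the gap
        print(f"step {i}: {a}")
    return a

insertion_sort_trace([5, 2, 4, 1, 3])
# step 1: [2, 5, 4, 1, 3] ... step 4: [1, 2, 3, 4, 5]
```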
teaches: architecture
What is connected to what? You learn composition by exploring — clicking into each component to reveal its role rather than being shown everything at once.
The transformer · OSI model · A cell · Federal Reserve system
Try it now
Here is a live SIR epidemic model. Adjust R₀ and the recovery rate. Watch how a small change in transmissibility produces a dramatically different outbreak. This is what explained.fyi explanations feel like: you're not reading about the model, you're running it.
After two minutes with this slider, you'll understand intuitively why R₀ = 1 is the critical threshold. No textbook paragraph produces that understanding as efficiently.
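For readers without the live demo in front of them, a minimal discrete-time SIR sketch (our approximation, not the embedded widget's code) shows the same threshold: infections grow only while βS/N − γ > 0, that is, while R₀ · S/N > 1, which is why R₀ = 1 is the critical value.

```python
# Minimal discrete-time SIR model: S + I + R = N, beta = R0 * gamma.
# A sketch of the dynamics behind the live demo, not its actual code.

def sir_peak(r0, gamma=0.1, n=1_000_000, i0=10, days=730):
    beta = r0 * gamma
    s, i, peak = n - i0, i0, i0
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        peak = max(peak, i)
    return peak

for r0 in (0.9, 1.0, 1.5, 3.0):
    print(f"R0 = {r0}: peak infected ≈ {sir_peak(r0):,.0f}")
# Below R0 = 1 the outbreak never grows; just above it, it takes off.
```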
Domains
Each subdomain has its own curators, its own library of explanations. The standard is consistent: did you understand this when you were done?
math.explained.fyi
Proofs made visible. Abstract structures given shape and motion.
cs.explained.fyi
Algorithms made observable. Systems made navigable.
science.explained.fyi
Physics, chemistry, biology — mechanisms made tangible.
economics.explained.fyi
Markets, incentives, systems — explained without ideology.
history.explained.fyi
Events, causes, consequences — navigable timelines and maps.
philosophy.explained.fyi
Arguments made navigable. Thought experiments made interactive.
Interpret
A model produced a result. SHAP values show feature importance. But your customer, your regulator, your auditor asked why — and a list of numbers isn't an answer anyone can understand or act on. explained.fyi turns model outputs into interactive explanations that produce genuine comprehension.
Results are one thing. Being able to interpret and explain them is another. The EU AI Act doesn't require feature weights. It requires meaningful explanations. Nobody has defined what that looks like. Until now.
Your application was declined. Not because of a list of factors — but because of a specific combination of signals the model learned to treat as risk. An interactive explanation shows which factors drove the decision, what "normal" looks like, and what would change the outcome.
Output: counterfactual explorer · factor breakdown · plain-language narrative
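A hedged sketch of the counterfactual-explorer idea: brute-force search for the smallest single-feature change that flips a denial. The toy model, thresholds, and feature names below are hypothetical stand-ins for illustration, not explained.fyi's pipeline.

```python
# Sketch: find the smallest single-feature change that flips a decision.
# toy_model and its coefficients are hypothetical placeholders.

def toy_model(income, debt_ratio):
    """Stand-in credit model: approve when the score is positive."""
    return 0.00002 * income - 2.0 * debt_ratio + 0.4 > 0

def counterfactuals(income, debt_ratio):
    found = []
    for extra in range(0, 100_001, 5_000):          # raise income
        if toy_model(income + extra, debt_ratio):
            found.append(f"raise income by ${extra:,}")
            break
    for less in [x / 100 for x in range(0, 51)]:    # lower debt ratio
        if toy_model(income, debt_ratio - less):
            found.append(f"lower debt ratio by {less:.2f}")
            break
    return found

print(counterfactuals(income=40_000, debt_ratio=0.65))
# ['raise income by $10,000', 'lower debt ratio by 0.06']
```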
The model recommends treatment X with 84% confidence. The clinician needs to understand why: which signals drove it, what the training data distribution looks like, where the model's confidence thins. The FDA requires it, and the explanation must be meaningful.
Output: signal walkthrough · confidence visualisation · edge case explorer
Your data science team needs to understand what the model actually learned, where it's fragile, which feature interactions are driving outcomes in unexpected ways. Not SHAP values — a navigable, interactive map of model behaviour.
Output: feature interaction diagram · distribution explorer · failure mode map
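One way such an interaction map can be computed, sketched under assumptions (the toy model and data below are placeholders): probe whether two features act additively or jointly, in the spirit of Friedman's H-statistic.

```python
# Sketch: detect a feature interaction without SHAP by testing for
# additivity. For an additive model f(x) = g(x1) + h(x2) + rest,
# the quantity below is zero; an interaction makes it positive.
import random

def interaction_strength(model, data, f1, f2, trials=200):
    rng = random.Random(0)
    total = 0.0
    for _ in range(trials):
        a, b = rng.choice(data), rng.choice(data)
        x_both = {**a, f1: b[f1], f2: b[f2]}  # swap both features in
        x_f1 = {**a, f1: b[f1]}               # swap f1 only
        x_f2 = {**a, f2: b[f2]}               # swap f2 only
        total += abs(model(x_both) - model(x_f1) - model(x_f2) + model(a))
    return total / trials

def toy_model(x):  # deliberate age x income interaction term
    return 0.1 * x["age"] + 0.2 * x["income"] + 0.5 * x["age"] * x["income"]

data = [{"age": random.random(), "income": random.random()} for _ in range(100)]
print(f"age x income: {interaction_strength(toy_model, data, 'age', 'income'):.3f}")
```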
Embed interactive explanations directly in your product. Every model decision surfaces with an explanation your users can actually engage with — adjusted for their level of technical sophistication, built from your model's actual outputs.
Output: embeddable widget · REST API · audience-aware plain language
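What an integration might look like, sketched with a placeholder endpoint. Everything below (URL, payload fields, parameter values) is our assumption for illustration, not explained.fyi's published API.

```python
# Hypothetical sketch of requesting an explanation for one decision.
# Endpoint, fields, and audience values are illustrative assumptions,
# not a documented API.
import json
import urllib.request

payload = {
    "model_output": {"decision": "declined", "score": 0.31},
    "features": {"income": 40_000, "debt_ratio": 0.65},
    "audience": "applicant",        # plain language, no jargon
    "format": "embeddable_widget",
}
request = urllib.request.Request(
    "https://api.example.com/v1/explanations",   # placeholder URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would return the embed snippet and a
# plain-language narrative in a real integration.
```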
The EU AI Act, FDA, and financial regulators all require meaningful explanations. None of them define what that means.
The EU AI Act mandates "meaningful information about the logic involved" for high-risk AI decisions. The FDA requires explainability for AI/ML-based Software as a Medical Device. Financial regulators require adverse action notices that actually explain credit decisions. Right now, organisations are satisfying these requirements with bullet-pointed feature lists that nobody reads or understands. explained.fyi is building the standard for what a genuinely meaningful AI explanation looks like.
One engine
The education product and the AI explainability product are the same philosophical claim expressed in two directions. Take something opaque. Produce an interactive explanation that builds genuine comprehension. The input changes. The quality bar is identical.
Learn inputs
The understanding engine
AI generates · humans judge
Interpret inputs
Output in both cases: an interactive explanation that produces genuine comprehension
01
Something opaque arrives
A concept nobody has made interactive yet. Or a model output with no meaningful explanation attached. The common thread: information exists, but understanding doesn't. Something needs explaining.
learn or interpret
02
AI generates the explanation structure
Narrative arc, interaction types per section, simulation parameters, diagram architecture, counterfactual scenarios, plain-language framing. The mechanical 80% — built in seconds. For interpret: the model output is analysed, feature contributions mapped, edge cases identified.
AI
03
Humans apply judgment
Domain experts verify accuracy. Editorial curators ensure interactions are load-bearing, not decorative. The question asked of every interaction: if you removed this, would someone understand less? If the answer is no, it goes. The 20% that's judgment — the difference between impressive and genuinely useful.
human
04
Published — rated on one metric
For learn: canonical, citable, linkable from Laminar knowledge graphs. For interpret: embedded in your product or compliance documentation, available via API. Rated on one question only: did you understand this when you were done?
learn or interpret
Who it's for
For learners
Any concept, any field. Interactive explanations designed to transfer real comprehension — not just familiarity. Walk away knowing you could teach it.
For AI teams
Regulatory compliance, customer-facing explanations, internal auditing. Interactive explanations of model outputs — embeddable, API-accessible, audience-aware.
For curators
Domain expertise plus editorial judgment. You know the difference between an interaction that teaches and one that decorates. Produce the canonical explanations for your field.
The gap between a result and understanding that result — between information and comprehension — is where we live. Both sides of it.
The explained.fyi thesis
3Blue1Brown: visual, not interactive
SHAP / LIME: technical, not meaningful
Wikipedia: reference, not comprehension
Khan Academy: curriculum, not depth
explained.fyi: the gap they all leave
explained.fyi is in private beta — for learners, for AI teams who need their model decisions explained, and for domain experts who want to become curators.
Tell us whether you're a learner, an AI team, or want to curate.