Learn · Interpret

The understanding
engine

Whatever needs explaining — concepts, models, decisions.

There's a gap between a result and understanding that result. Between information and comprehension. Between what a model decided and why it decided it. explained.fyi lives in that gap — for human learning and for AI interpretability alike.

For learners
explained.fyi / learn
Interactive, visual explanations of any concept. Adjust sliders, run simulations, explore diagrams. Walk away actually understanding.
Browse domains →

For AI systems
explained.fyi / interpret
Interactive explainability for model decisions. Loan denials, recommendations, risk scores — made genuinely understandable, not just documented.
See how it works →

The same gap exists in two places. A concept you don't understand yet. A decision a model just made. Both need explaining. One engine does both.

For learners

A concept you haven't grasped yet

You've read about backpropagation. You could describe it. But you couldn't derive it from scratch, predict its failure modes, or explain why the learning rate matters so much. That's not understanding — that's familiarity.

Concept → interactive explanation → real comprehension → you could teach it

one engine

For AI systems

A decision a model just made

The model denied the loan. The SHAP values show feature importance. But the applicant asked why, the regulator asked why, and a list of feature weights isn't an answer anyone can understand or act on.

Model output → interactive explanation → genuine interpretability → defensible decision

Learn

Understanding any concept.
Interactively.

Every explanation on explained.fyi is interactive by design — not as decoration, but because the interaction is where the understanding happens. Adjust the model, watch what changes, build the intuition that reading alone cannot build.

"The test is simple: after engaging, could you teach it? Could you predict how it behaves under new conditions? That's the bar. Everything else is just familiarity."

Sliders

teaches: sensitivity

How does the output change when you adjust this input? You build intuition for relationships between variables — the thing that makes experts seem able to predict the future.

Learning rate → convergence · R₀ → epidemic peak · Tax rate → revenue

Simulations

teaches: emergence

What happens when you let this run? You learn how simple rules produce complex, unexpected outcomes that no static description could have prepared you for.

Epidemic spread · Game of Life · Neural net training · Market dynamics

Walkthroughs

teaches: process

What happens in what order, and why? Each step revealed when you're ready — paced to your comprehension, not to the author's desire to be thorough.

TCP handshake · Sorting algorithms · Neural net forward pass

Explorable diagrams

teaches: architecture

What is connected to what? You learn composition by exploring — clicking into each component to reveal its role rather than being shown everything at once.

The transformer · OSI model · A cell · Federal Reserve system

Try it now

This is a live SIR epidemic model. Adjust R₀ and the recovery rate. Watch how a small change in transmissibility produces a dramatically different outbreak. This is what explained.fyi explanations feel like — you're not reading about the model, you're running it.

After two minutes with this slider, you'll understand intuitively why R₀ = 1 is the critical threshold. No textbook paragraph produces that understanding as efficiently.

[Live demo: How an epidemic spreads — SIR model. Sliders: R₀ (spread) = 2.5 · Recovery = 0.07. Readouts: peak infected · total infected · days to peak. Curves: susceptible · infected · recovered.]
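The live demo above can be sketched in a few lines of code. This is an illustrative Euler-stepped SIR model using the demo's default parameters (R₀ = 2.5, recovery rate 0.07 per day), not the site's actual simulation; it shows why crossing R₀ = 1 changes everything.

```python
# Illustrative SIR model matching the live demo's defaults (R0 = 2.5,
# recovery rate gamma = 0.07 per day). Simple Euler stepping over
# population fractions; a sketch, not the site's simulation code.
def sir(r0, gamma=0.07, days=365, dt=1.0):
    beta = r0 * gamma                 # transmission rate implied by R0
    s, i, r = 0.999, 0.001, 0.0       # susceptible, infected, recovered
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, r                    # peak infected, total ever infected

# Crossing R0 = 1 is the critical threshold the demo makes tangible:
print(sir(2.5))  # large outbreak: sizeable peak, most people infected
print(sir(0.9))  # below threshold: the outbreak fizzles out
```

With R₀ = 2.5 the infected fraction peaks at roughly a quarter of the population; drop R₀ below 1 and the outbreak never takes off.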

Domains

Every field. One standard.

Each subdomain has its own curators, its own library of explanations. The standard is consistent: did you understand this when you were done?

Interpret

AI decisions.
Actually explained.

A model produced a result. SHAP values show feature importance. But your customer, your regulator, your auditor asked why — and a list of numbers isn't an answer anyone can understand or act on. explained.fyi turns model outputs into interactive explanations that produce genuine comprehension.

Results are one thing. Being able to interpret and explain them is another. The EU AI Act doesn't require feature weights. It requires meaningful explanations. Nobody has defined what that looks like. Until now.

# Pipe any model output to the API
POST https://api.explained.fyi/interpret

{
  "model_output": 0.87,
  "feature_values": {...},
  "context": "loan_application",
  "audience": "applicant"
}

# Returns: embeddable interactive explanation
# with sliders, counterfactuals, plain language
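A call to the endpoint above might look like the following sketch. The URL and payload fields come from the sample shown; the `feature_values` contents are hypothetical stand-ins for the elided "{...}", and no live request is made since the API is in private beta.

```python
# Sketch of calling the interpret endpoint shown above. No request is
# actually sent (the API is in private beta); the feature_values
# contents are hypothetical stand-ins for the "{...}" on the page.
import json
from urllib.request import Request

payload = {
    "model_output": 0.87,
    "feature_values": {"debt_ratio": 0.55, "income": 48_000},  # hypothetical
    "context": "loan_application",
    "audience": "applicant",
}

req = Request(
    "https://api.explained.fyi/interpret",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would return the embeddable interactive
# explanation; that call is omitted here because access is gated.
print(req.get_method(), req.full_url)
```

Since the response schema is not published on this page, treat its shape as an assumption.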
🏦

Credit & lending decisions

Your application was declined — not because of a list of factors, but because of a specific combination of signals the model learned to treat as risk. An interactive explanation shows which factors drove the decision, what "normal" looks like, and what would change the outcome.

Output: counterfactual explorer · factor breakdown · plain-language narrative
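As a sketch of what a counterfactual explorer does under the hood, here is a toy scoring model and a single-feature search for changes that flip a denial. Every name, weight, and threshold is hypothetical, invented purely for illustration; no real lending model works this way.

```python
# Toy counterfactual explorer. The scoring model, feature names,
# weights, and threshold are all hypothetical, invented to illustrate
# the search over "what would change the outcome".
def risk(app):
    """Toy risk score in [0, 1]; higher means riskier."""
    return (0.5 * app["debt_ratio"]
            + 0.3 * min(app["late_payments"] / 5, 1.0)
            + 0.2 * (1.0 - min(app["income"] / 100_000, 1.0)))

def counterfactuals(app, threshold=0.45):
    """Single-feature changes that would flip a denial to an approval."""
    candidate_changes = {
        "debt_ratio": [round(app["debt_ratio"] - d, 2) for d in (0.05, 0.10, 0.20)],
        "late_payments": [0],
        "income": [app["income"] + boost for boost in (10_000, 25_000)],
    }
    flips = []
    for feature, values in candidate_changes.items():
        for value in values:
            changed = {**app, feature: value}
            if risk(changed) < threshold:
                flips.append((feature, app[feature], value))
                break  # keep the smallest change that flips this feature
    return flips

applicant = {"debt_ratio": 0.55, "late_payments": 3, "income": 48_000}
print(round(risk(applicant), 3))  # above the 0.45 threshold: denied
print(counterfactuals(applicant))
```

A production explorer would search many features jointly and respect plausibility constraints; the single-feature loop is only the core idea.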

⚕️

Clinical decision support

The model recommends treatment X with 84% confidence. The clinician needs to understand why — which signals drove it, what the training data distribution looks like, where the model's confidence thins. The FDA requires explainability, and the explanation must be meaningful.

Output: signal walkthrough · confidence visualisation · edge case explorer

🔍

Internal model auditing

Your data science team needs to understand what the model actually learned, where it's fragile, which feature interactions are driving outcomes in unexpected ways. Not SHAP values — a navigable, interactive map of model behaviour.

Output: feature interaction diagram · distribution explorer · failure mode map

Real-time explanation API

Embed interactive explanations directly in your product. Every model decision surfaces with an explanation your users can actually engage with — adjusted for their level of technical sophistication, built from your model's actual outputs.

Output: embeddable widget · REST API · audience-aware plain language

Regulatory

The EU AI Act, FDA, and financial regulators all require meaningful explanations. None of them define what that means.

The EU AI Act mandates "meaningful information about the logic involved" for high-risk AI decisions. The FDA requires explainability for AI/ML-based Software as a Medical Device. Financial regulators require adverse action notices that actually explain credit decisions. Right now, organisations are satisfying these requirements with bullet-pointed feature lists that nobody reads or understands. explained.fyi is building the standard for what a genuinely meaningful AI explanation looks like.

One engine

Two products.
Same philosophy.

The education product and the AI explainability product are the same philosophical claim expressed in two directions. Take something opaque. Produce an interactive explanation that builds genuine comprehension. The input changes. The quality bar is identical.

Learn inputs

A concept you don't understand yet
A mathematical structure
A historical event
A scientific mechanism
A philosophical argument

The understanding engine

AI generates · humans judge

Interpret inputs

A model decision or score
Feature importance values
A recommendation output
An anomaly detection flag
A risk assessment result

Output in both cases: an interactive explanation that produces genuine comprehension

input

01

Something opaque arrives

A concept nobody has made interactive yet. Or a model output with no meaningful explanation attached. The common thread: information exists, but understanding doesn't. Something needs explaining.

learn or interpret
generate

02

AI generates the explanation structure

Narrative arc, interaction types per section, simulation parameters, diagram architecture, counterfactual scenarios, plain-language framing. The mechanical 80% — built in seconds. For interpret: the model output is analysed, feature contributions mapped, edge cases identified.

AI
curate

03

Humans apply judgment

Domain experts verify accuracy. Editorial curators ensure interactions are load-bearing, not decorative. The question asked of every interaction: if you removed this, would someone understand less? If the answer is no, it goes. The 20% that's judgment — the difference between impressive and genuinely useful.

human
ship

04

Published — rated on one metric

For learn: canonical, citable, linkable from Laminar knowledge graphs. For interpret: embedded in your product or compliance documentation, available via API. Rated on one question only: did you understand this when you were done?

learn or interpret

Who it's for

For learners

I want to understand something

Any concept, any field. Interactive explanations designed to transfer real comprehension — not just familiarity. Walk away knowing you could teach it.

Browse any domain — maths to philosophy
Every explanation interactive by design
Connects to Laminar to close knowledge gaps
Start exploring

For AI teams

We need our model decisions explained

Regulatory compliance, customer-facing explanations, internal auditing. Interactive explanations of model outputs — embeddable, API-accessible, audience-aware.

EU AI Act · FDA · adverse action compliance
Embeddable widgets for your product
Counterfactual and factor explorers
Talk to us

For curators

I want to make understanding possible

Domain expertise plus editorial judgment. You know the difference between an interaction that teaches and one that decorates. Produce the canonical explanations for your field.

AI generates the draft — you provide judgment
Attribution on everything you curate
Your explanations become citable references
Apply to curate

The gap between a result and understanding that result — between information and comprehension — is where we live. Both sides of it.

The explained.fyi thesis

3Blue1Brown · visual, not interactive
SHAP / LIME · technical, not meaningful
Wikipedia · reference, not comprehension
Khan Academy · curriculum, not depth
explained.fyi · the gap they all leave

Early access

Explain everything.
Finally.

explained.fyi is in private beta — for learners, for AI teams who need their model decisions explained, and for domain experts who want to become curators.

Tell us whether you're a learner, an AI team, or want to curate.