AI-augmented code explainers are remarkable when they work. They turn a 5-minute “what does this regex do” question into a 5-second answer with structure and useful caveats. They turn an opaque inherited SQL query into a clause-by-clause walkthrough. For mainstream patterns in mainstream languages, they are genuinely good.
They are also fallible in specific, predictable ways. The failure modes matter because a confidently wrong explanation is worse than no explanation: the reader believes false things and acts on them.
This post is a cross-cutting reflection on when AI code explanation is reliable, when it is not, and how to use it productively without surrendering judgment to it. The pillar post on reading code in plain English covers the broader strategy; the cluster posts on regex, SQL, bash, and Excel cover language-specific cases.
Where AI explanation reliably helps
For a snippet that is idiomatic, in a mainstream language, with intent that can be inferred from the code itself, AI explanation is reliable. Specifically:
Idiomatic patterns. A regex matching an email address, a SQL query joining users to orders, a bash one-liner finding files modified in the last week, an Excel formula doing a VLOOKUP with IFERROR fallback. Each of these is a pattern the model has seen thousands of times in training. The explanation describes what each piece does and notes the common pitfalls (greedy quantifier, missing JOIN predicate, word-splitting risk, missing fourth VLOOKUP argument).
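The email case above is worth pinning down, because the common pitfall is about anchoring rather than the pattern itself. A minimal Python sketch (the email pattern is deliberately simplified for illustration, not a production validator):

```python
import re

# A deliberately simple email pattern, of the kind an explainer handles well.
pattern = r"[\w.+-]+@[\w-]+\.[\w.]+"

# Without anchors, re.search finds a match *inside* a larger string --
# one of the common pitfalls a good explanation should flag.
assert re.search(pattern, "alice@example.com")
assert re.search(pattern, "DROP TABLE; alice@example.com; --")  # also "valid"!

# Anchoring (here via fullmatch) rejects the surrounding junk:
assert re.fullmatch(pattern, "alice@example.com")
assert not re.fullmatch(pattern, "DROP TABLE; alice@example.com; --")
```

The point is not this particular pattern; it is that "matches an email address" and "matches a string that is an email address" are different claims, and a good explanation distinguishes them.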
Token-level decomposition. “What does (?P<x>...) mean in this regex” or “what does 2>&1 do at the end of this bash command.” The explainer answers these consistently because the tokens have specific, well-documented meanings. There is no judgment involved.
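The named-group token from the example can be verified in a few lines; a Python sketch (the date pattern is illustrative):

```python
import re

# (?P<name>...) is a named capture group: it captures like (...) but the
# match is also retrievable by name instead of only by position.
m = re.fullmatch(r"(?P<year>\d{4})-(?P<month>\d{2})", "2024-07")

assert m is not None
assert m.group("year") == "2024"   # by name
assert m.group(1) == "2024"        # the positional index still works
assert m.groupdict() == {"year": "2024", "month": "07"}
```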
Surfacing patterns the reader missed. A novice writing their first SQL query may not know NULLs do not equal NULLs. The explainer flags this in the notes. A novice writing their first regex may not know \d matches Unicode digits in some engines. The explainer flags this. The teaching effect is real.
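Both flagged behaviors are easy to confirm yourself; a sketch using Python's bundled sqlite3 and re (SQLite follows standard SQL three-valued logic here, and Python 3's `\d` is Unicode-aware by default):

```python
import re
import sqlite3

# SQL: NULL = NULL is not true -- it is NULL (unknown), so a NULL-equality
# predicate finds nothing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (NULL)")
assert conn.execute("SELECT x = NULL FROM t").fetchone()[0] is None
assert conn.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone()[0] == 0
# IS NULL is the predicate that actually finds the row:
assert conn.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0] == 1

# Regex: in Python 3, \d on a str pattern matches any Unicode digit,
# not just ASCII 0-9.
assert re.fullmatch(r"\d", "٣")          # ARABIC-INDIC DIGIT THREE
assert not re.fullmatch(r"(?a)\d", "٣")  # the ASCII flag restricts it
```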
Translating between dialects. The explainer can describe what a pattern means in PostgreSQL versus MySQL, in PCRE versus JavaScript regex, in bash versus zsh. When dialect matters, the explainer often calls it out.
The high-confidence use case: paste an idiomatic snippet, read the structured explanation, treat it as a starting point for your own reading. The explainer is faster than reading from scratch and catches things you would miss.
Where AI explanation predictably fails
Three categories of failure account for almost all the bad explanations:
Vendor-specific extensions
A PostgreSQL query using RETURNING, JSON operators (->>, #>, @>), generated columns, or LATERAL joins. A bash script using arrays (${arr[@]}), process substitution (<(...)), or [[ ]] conditionals. An Excel formula using LET, LAMBDA, or MAP (Excel 365 only). A JavaScript regex with lookbehind (ES2018 and later).
The explainer often parses these as standard syntax and produces a description that is close but wrong. A statement like “INSERT INTO users (…) VALUES (…) RETURNING id” gets explained as a plain INSERT, with the RETURNING clause glossed over or misread as part of the column list; the fact that the statement hands rows back never makes it into the explanation.
The fix: when the explainer asks for a dialect, provide one. When the snippet uses constructs that look unusual, look them up in the canonical reference for the engine you are targeting. The explainer can be wrong; the documentation is the authority.
Novel idioms
A regex pattern that someone wrote for a specific purpose may not correspond to any common idiom. The explainer describes each token correctly: “this character class matches lowercase letters,” “this quantifier matches one or more times.” But the overall purpose of the pattern is something specific the explainer cannot guess.
Example: a pattern like ^[a-z]{3}_\d{4}_v\d+$. The explainer says it matches “three lowercase letters, underscore, four digits, underscore, lowercase v, one or more digits.” Correct at the token level. The actual purpose was to validate a specific identifier format used by an internal system, and the pattern omits the rule that the four digits must be a valid year. The explainer cannot know that.
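The gap is easy to demonstrate; a Python sketch of the example pattern (the year-range check is the hypothetical domain rule from the text, with assumed bounds):

```python
import re

pattern = re.compile(r"^[a-z]{3}_\d{4}_v\d+$")

# The token-level explanation is correct: both of these match.
assert pattern.match("abc_2024_v3")
assert pattern.match("abc_0000_v3")   # but 0000 is not a plausible year

# The domain rule (the digits must be a valid year) lives outside the
# syntax; enforcing it takes a check the explainer cannot infer.
def valid_identifier(s: str) -> bool:
    m = re.fullmatch(r"[a-z]{3}_(\d{4})_v\d+", s)
    return bool(m) and 1900 <= int(m.group(1)) <= 2100  # assumed range

assert valid_identifier("abc_2024_v3")
assert not valid_identifier("abc_0000_v3")
```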
The fix: the explainer is for syntax, not intent. Read the syntax explanation, then ask yourself what intent the syntax serves. If there is an intent gap (your domain has rules the syntax does not enforce), the explainer cannot help; you have to think.
Hallucination
The most dangerous failure mode: the explainer produces a confident description of behavior the snippet does not actually have. This happens occasionally even on idiomatic input. It happens more often on unusual input, on input with subtle bugs, and on input the model has not seen (genuinely novel patterns).
Hallucination tends to invent plausible behavior. A regex with a typo gets explained as if the typo were intentional. A SQL query with a typo in a column name gets explained as if the column existed. A bash command with a syntax error gets explained as if the error were a feature.
The fix: cross-check. For regex, run the pattern against test input and verify it matches what the explanation claims. For SQL, run the query against a sample database and verify the result shape matches the explanation. For bash, dry-run with echo or set -x. For Excel, type the formula into a real spreadsheet with real data.
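For the regex case, the cross-check is a few lines of Python: write down what the explanation claims, then test the pattern against inputs where the claim should hold (the “claimed behavior” below is deliberately wrong, to show the check catching it):

```python
import re

# Claimed behavior from an explanation: "matches one or more digits".
# The actual pattern uses * rather than + -- the kind of subtle
# discrepancy a cross-check catches.
pattern = re.compile(r"\d*")

cases = {
    "123": True,   # claim and pattern agree
    "": False,     # the claim says no match on empty input...
}
for text, claimed in cases.items():
    actual = bool(pattern.fullmatch(text))
    if actual != claimed:
        print(f"mismatch on {text!r}: claim says {claimed}, pattern says {actual}")
# -> mismatch on '': claim says False, pattern says True
```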
Three usage patterns that work
Pattern 1: Use the explainer to learn
You read an idiom you have not seen before. You paste it into the explainer. You read the structured explanation. You internalize the idiom. Next time you see it, you do not need the explainer.
This works because the explainer’s value is highest on first contact and declines as you build internal vocabulary. The cost of pasting and reading is small. The compounding benefit (faster reading of similar code in the future) is real.
Pattern 2: Use the explainer to translate
You read a snippet in a language you do not work in often. The explainer translates it to plain English. You apply your judgment to the plain English. You make a decision (use it, modify it, reject it).
This works for cross-team code review (the SRE reviewing a frontend regex, the data analyst reading the deploy script, the developer reading an Excel model). The explainer reduces the cognitive cost of switching contexts. You are still the one making the call.
Pattern 3: Use the explainer to surface concerns
You wrote the code. You know what it should do. You paste it into the explainer to see what it actually does. The notes section flags concerns you may have missed: a greedy quantifier, a missing GROUP BY column, a word-splitting risk.
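The greedy-quantifier flag is the classic case; a minimal Python sketch of what the concern actually looks like:

```python
import re

text = "<b>bold</b> and <i>italic</i>"

# Greedy: .+ consumes as much as possible, so one match spans both tags.
assert re.findall(r"<.+>", text) == ["<b>bold</b> and <i>italic</i>"]

# Lazy: .+? stops at the first closing >, which is usually what was meant.
assert re.findall(r"<.+?>", text) == ["<b>", "</b>", "<i>", "</i>"]
```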
The explainer is most useful here as a second pair of eyes. It is not always right, but it is right often enough to catch things human reading misses. Use the flagged concerns as starting points for your own analysis.
Three usage patterns that fail
Anti-pattern 1: Trust without verification on production code
Pasting a security-critical regex into the explainer, reading “yes, this matches valid email addresses,” and shipping. The explainer is not a security analyzer. It does not test against attack patterns. It does not consider ReDoS risk. For security-critical code, the explainer is one input among many; testing is the authority.
Anti-pattern 2: Use as a substitute for the canonical reference
Pasting a complex SQL query, reading the explanation, and assuming the explanation is exhaustive. The explainer summarizes; it is not a full reference. Vendor-specific behaviors, edge cases, and performance characteristics need the actual database documentation. The explainer points you toward the documentation; it does not replace it.
Anti-pattern 3: Skip the reading
Treating the explainer as the primary source rather than a tool. The 4-step strategy from the pillar post (intent, decompose, verify, edge cases) still applies. The explainer accelerates step 2 (decompose) and surfaces some of step 3 (verify). Steps 1 and 4 are still on you.
How AI explanation is changing reading
The explainer has not replaced careful reading; it has lowered the cost of careful reading. A snippet that used to take 10 minutes to decompose by hand takes 2 minutes with the explainer plus 3 minutes of your own thinking. The total is shorter, but the thinking is still required.
The longer-term effect on the field, optimistically: more people read more code more carefully because the cost is lower. More bugs caught at read time, fewer in production. More cross-team code review because the language barrier is lower. More junior engineers ramping up faster because the unknown vocabulary is searchable.
Pessimistically: more people skip the careful reading because the explainer feels sufficient. More bugs ship because the explainer’s “everything looks fine” is treated as ground truth. More cargo-culted code because the explainer does not flag intent gaps that matter.
Both outcomes are happening. The split depends on whether the team treats the explainer as a teacher or a substitute.
Closing
AI code explanation is a useful tool with predictable failure modes. It is reliable on idiomatic input and can be wrong (sometimes confidently) on dialect-specific or novel input. The right way to use it is as an accelerator on the steps you would already take, not as a replacement for them.
The four cluster posts in this series cover the language-specific cases:
- Why your regex matches more than you think
- Common SQL bugs the explainer catches
- Reading curl-as-bash safely before you paste
- Excel formulas that look right but aren’t
Each post pairs a language with the patterns the explainer can spot reliably. The pillar on reading code in plain English covers the strategy that holds across all of them. The explainers themselves are at glunty.com: free, with a generous per-IP daily limit, and designed to flag the patterns this series describes.