glunty

Alt Text Generator (AI)

Generate alt text for any image. WCAG-aligned. 5 free per day.

AI-generated alt text is a starting point, not a guarantee. Review every output before publishing, especially for images that contain text, charts, or culturally specific content. Accessibility decisions affect real users; do not skip the review.

What this tool does

Reads an image and produces alt text suitable for the alt attribute of an HTML <img> element. Output follows WCAG 2.2 guidance: a single sentence (sometimes two) that conveys the image's information or purpose, no "image of" prefix (screen readers already announce that), no speculation about emotions or intent. If the image is decorative (provides no information), the output is the literal string DECORATIVE, which signals you should use alt="". Optional context biases the description toward what is relevant to your specific use.
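The DECORATIVE sentinel means the final markup decision is yours. A minimal sketch of that mapping (the function name is illustrative, not part of the tool's API):

```javascript
// Map the generator's raw output to the value that belongs in the
// alt attribute. "DECORATIVE" is a sentinel meaning "use empty alt";
// never publish the word itself.
function altFromOutput(output) {
  const text = output.trim();
  return text === "DECORATIVE" ? "" : text;
}
```

A DECORATIVE result therefore ends up as `<img src="divider.png" alt="">` in the page, which tells screen readers to skip the image entirely.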

How to use it

Click the file input and pick an image. Add optional context (article topic, product category, page purpose). Press Generate alt text. The free tier allows 5 generations per IP per day. Review the output before publishing; accessibility decisions warrant a human pass.

Common use cases

  • Bulk-generating starter alt text for an article with many images.
  • Filling in missing alt on legacy content during an accessibility audit.
  • Catching the case where a chart or screenshot needs detailed alt (the model flags these cases).
  • Quickly drafting alt for a social media post and tweaking it.
  • Confirming that decorative images can use empty alt (model returns DECORATIVE for purely visual flourishes).

Common pitfalls

  • Charts and infographics. A short alt cannot describe a complex chart. The tool's output covers the gist; for important charts, also include a long description nearby (linked via aria-describedby or simply adjacent text). The alt is not the only mechanism.
  • Decorative images. If you set alt for a decorative image, screen readers announce content that adds noise. Use alt="" when the tool returns DECORATIVE.
  • Cultural and named entities. The model recognizes general categories better than specific people, places, or works of art. If the image is of a specific named subject, edit the output to include the name.
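The chart pitfall above pairs a short alt with a linked long description. A sketch of that pattern as a markup-building helper (the ids, filenames, and function name are illustrative; real code should also escape attribute values):

```javascript
// Pair a short alt with a longer on-page description, linked via
// aria-describedby. The values here are placeholders; production
// code should HTML-escape them before interpolating.
function chartMarkup({ src, shortAlt, descId, longDesc }) {
  return (
    `<img src="${src}" alt="${shortAlt}" aria-describedby="${descId}">\n` +
    `<p id="${descId}">${longDesc}</p>`
  );
}
```

Screen readers announce the short alt first, then can read the linked description on request, so the alt stays terse without losing the chart's detail.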

Frequently asked questions

Where does my image go?
The image is sent to glunty, which forwards it to Anthropic Claude (vision model) for processing. glunty does not log or store your image; Anthropic processes it for the duration of the request under their data-usage policy (no training on API inputs by default). Do not upload images that contain sensitive content (PII visible on signage, confidential documents, private people without consent). If your image carries EXIF location data, strip the metadata before uploading.
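If you are unsure whether a JPEG carries EXIF at all, you can check the raw bytes before uploading: JPEGs store EXIF (including GPS data) in an APP1 segment whose payload begins with "Exif". A sketch, assuming the file is already in a Uint8Array (detection only; to actually strip the data, re-encode the image, e.g. through a canvas):

```javascript
// Detect an EXIF APP1 segment in a JPEG byte stream.
// JPEG segments are laid out as 0xFF <marker> <2-byte big-endian length>.
function hasExif(bytes) {
  if (bytes[0] !== 0xff || bytes[1] !== 0xd8) return false; // not a JPEG
  let i = 2;
  while (i + 4 < bytes.length && bytes[i] === 0xff) {
    const marker = bytes[i + 1];
    const len = (bytes[i + 2] << 8) | bytes[i + 3];
    if (marker === 0xe1) {
      // APP1: EXIF payload starts with the ASCII signature "Exif".
      const sig = String.fromCharCode(...bytes.slice(i + 4, i + 8));
      return sig === "Exif";
    }
    if (marker === 0xda) break; // start of scan: no more header segments
    i += 2 + len;
  }
  return false;
}
```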
Why does it return DECORATIVE instead of generating descriptive text?
Some images carry no information; they are visual flourishes (a divider, a spacer pattern, a stock-photo accent). Adding alt text to those creates noise for screen-reader users without benefit. The tool returns the literal string DECORATIVE for these so you know to use alt="" (empty alt) in your HTML. Do not use alt="DECORATIVE" in the markup; it is a sentinel for you, not the alt value.
How does this differ from automated alt text in WordPress or my CMS?
Most CMS auto-alt features run a generic vision model with no context. This tool accepts your context (the article topic, the page purpose) and the model uses it to bias the description toward what is actually relevant on your page. A photo of a bicycle on a cycling-safety article gets different alt text than the same photo on an exercise-equipment store.
What length of alt text is right?
WCAG suggests "as short as possible while conveying the information or purpose." For a typical photo, that is one sentence; for a complex chart or infographic, that is a sentence plus a pointer to a longer description elsewhere on the page. The tool output sits in this single-sentence range; if your image is complex, manually add a longer description nearby and link to it via aria-describedby.
Why 5 per day instead of 10?
Vision-model calls cost more per request than text-only calls because the model processes the image content. The cap balances keeping the free tier genuinely free against paying for the API use. If you have many images to process, calling the Anthropic API directly is the right path; the per-call cost is reasonable but adds up over hundreds of images.
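For bulk processing, the request body for Anthropic's Messages API with an attached base64 image looks roughly like this (the model name, prompt text, and token limit are assumptions; check the current Anthropic documentation before relying on them):

```javascript
// Build a Messages API request body with a base64-encoded JPEG attached.
// Model name and prompt are illustrative, not the tool's actual values.
function buildAltTextRequest(base64Jpeg, context) {
  return {
    model: "claude-sonnet-4-5", // assumption: any current vision-capable model
    max_tokens: 300,
    messages: [
      {
        role: "user",
        content: [
          {
            type: "image",
            source: { type: "base64", media_type: "image/jpeg", data: base64Jpeg },
          },
          {
            type: "text",
            text:
              "Write one sentence of alt text for this image." +
              (context ? ` Page context: ${context}` : ""),
          },
        ],
      },
    ],
  };
}
```

The body is then POSTed to the Messages endpoint with your API key; batching a folder of images is a loop over this builder plus one request per file.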
Will it describe people, named places, or branded products?
General categories (a person, a building, a logo) are described accurately. Specific named entities (a particular celebrity, a particular landmark, a particular product) are usually NOT in the output unless they are clearly identifiable from context. The model errs toward generic descriptions to avoid false claims. If the named subject is what makes the image meaningful, edit the output to include the name.

Embed this tool

Free for any use; attribution appreciated. Paste this on your site:

The embed runs the same tool that lives at this URL. No tracking; no ads inside the embed. Resize height as needed for your layout.

Cite this tool

For academic, journalistic, or technical references. Pick a format:

Citations use 2026 as the publication year. Access date is left as a fillable placeholder where the citation style expects one.
