glunty

Improve My LLM Prompt

Rewrite an LLM prompt to be clearer and more specific. 10 free per day.

What this tool does

Rewrites a vague or under-specified prompt into one that is more likely to produce useful output from an LLM. Returns three sections: an improved prompt ready to copy, a list of the specific edits and why each one helps, and a short list of additional context or constraints worth adding before you send. Useful for both first-time prompt writers and experienced users who want a sanity check before paying for a long generation.

How to use it

Paste your draft. Press Improve. Read the rewritten version. If the structural changes help, copy the improved prompt and use it. If the "What you might still want to add" section flags context the model could not have without you, fill that in next.

Common use cases

  • Going from a one-line idea to a structured prompt before a long content generation.
  • Sanity-checking a prompt for an automated workflow that will run thousands of times.
  • Writing a prompt for a model you do not normally use (different models have different strengths).
  • Teaching a teammate how to write better prompts by showing the before-and-after.
  • Catching ambiguity (e.g., "blog post about AI" leaves audience, length, format, tone all undefined).
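To make the ambiguity point concrete, here is a small sketch of what "filling in the undefined dimensions" looks like. The dimension names, example values, and template below are invented for illustration; they are not the tool's actual rewrite logic.

```python
# Sketch: the dimensions a one-line prompt like "blog post about AI"
# typically leaves undefined, and how stating them tightens the prompt.
# The dimension list and template are illustrative only.

VAGUE = "Write a blog post about AI."

# Each undefined dimension, with an example value a user might supply.
dimensions = {
    "audience": "non-technical small-business owners",
    "length": "roughly 800 words",
    "format": "short intro, three subheaded sections, one-paragraph close",
    "tone": "practical and plain-spoken, no hype",
}

def improve(draft: str, dims: dict[str, str]) -> str:
    """Append an explicit constraint for every dimension the draft left open."""
    constraints = "\n".join(f"- {k}: {v}" for k, v in dims.items())
    return f"{draft.rstrip('.')}. Constraints:\n{constraints}"

improved = improve(VAGUE, dimensions)
print(improved)
```

Even this crude template closes most of the gaps a model would otherwise fill with guesses.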

Common pitfalls

  • The improved prompt is still a starting point. A prompt is only as good as the context you can give it. The tool catches structural and clarity issues; it cannot supply the domain knowledge or the specific requirements you have not stated.
  • Specificity has a ceiling. Very long, very prescriptive prompts sometimes produce worse output than concise ones because the model gets pulled in too many directions. The tool aims for "as short as possible while still adding structure that helps."
  • Test the rewritten prompt. A prompt that looks better is not guaranteed to produce better output. Run both, compare. The tool's heuristics are general; your task is specific.
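The "run both, compare" advice can be wrapped in a tiny harness. In the sketch below, `generate` is a placeholder for whatever model call you actually use (it is not part of this tool); here it returns a canned string so the harness is runnable on its own.

```python
# Minimal A/B harness for comparing an original prompt against its rewrite.
# `generate` is a stand-in: swap in your real LLM call before using this.

def generate(prompt: str) -> str:
    # Placeholder for a real model call (API or local). Echoes a canned
    # response so the harness itself runs without credentials.
    return f"[model output for a {len(prompt)}-char prompt]"

def compare(original: str, improved: str) -> dict[str, str]:
    """Run both prompts and return the outputs side by side for review."""
    return {
        "original": generate(original),
        "improved": generate(improved),
    }

results = compare(
    "Write a blog post about AI.",
    "Write an 800-word blog post about AI for small-business owners.",
)
for label, output in results.items():
    print(f"--- {label} ---\n{output}\n")
```

Reading the two outputs side by side, on your actual task, is the only comparison that matters.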

Frequently asked questions

Where does my prompt go?
The draft is sent to glunty, which forwards it to Anthropic Claude. glunty does not log or store the input; Anthropic processes it only for the duration of the request under their data-usage policy (no training on API inputs by default). If your draft contains sensitive details (real names, internal product specs, account numbers), strip them out and replace them with placeholders before submitting.
Will it work for prompts targeting other models (GPT, Gemini, Llama)?
Yes. The improvements (specificity, structure, role-setting, output-format constraints) apply across most modern LLMs, so the same rewritten draft will typically produce a noticeably better result on GPT-4, Claude, Gemini, or Llama. Model-specific quirks (e.g., system-prompt placement conventions, function-calling format) are smaller, second-order concerns.
How is this different from a prompt-engineering course or template library?
A template library gives you reusable scaffolds. A course teaches general principles. This tool gives you specific feedback on your specific draft right now. The three are complementary: read a course for principles, use templates as scaffolds, use this tool to sharpen a draft that does not fit any template.
Will the improved prompt always produce better output?
Usually but not always. The tool catches structural and clarity issues. If your prompt is already structurally fine and just needs a different angle on the task, the rewrite may not help. The right test is to run both prompts (original and improved) and compare outputs on your specific use case.
Can it shorten an over-long prompt?
Yes. Very long, very prescriptive prompts often perform worse than concise ones because the model gets pulled in too many directions. The tool will collapse redundant instructions, remove polite filler, and tighten language while preserving essential constraints.
Why 10 per day?
Prompt drafts are typically short, so each call is cheap, but allowing unlimited use would invite automation abuse. Ten is enough for typical iterative use. Heavy users (running prompt experiments) should hit the Claude API directly.
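"Hit the Claude API directly" means calling Anthropic's Messages endpoint yourself. A minimal stdlib-only sketch follows; the endpoint, headers, and body shape follow Anthropic's public REST documentation, but the model name and key handling are placeholders to adjust for your setup.

```python
# Build a direct request to Anthropic's Messages API using only the stdlib.
# Endpoint, headers, and body shape per Anthropic's public REST docs;
# the model name and key handling are placeholders for your own setup.
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str,
                  model: str = "claude-3-5-sonnet-latest",
                  max_tokens: int = 1024) -> urllib.request.Request:
    """Build (but do not send) a Messages API request for one user prompt."""
    body = json.dumps({
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )

req = build_request("Rewrite this prompt to be clearer: Write a blog post about AI.")
# To actually send it (requires ANTHROPIC_API_KEY and network access):
#   reply = json.load(urllib.request.urlopen(req))
#   print(reply["content"][0]["text"])
```

With your own key you pay per token but face no daily cap, which is the right trade for batch prompt experiments.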

Embed this tool

Free for any use; attribution appreciated. Paste this on your site:

The embed runs the same tool that lives at this URL. No tracking; no ads inside the embed. Resize height as needed for your layout.

Cite this tool

For academic, journalistic, or technical references. Pick a format:

Citations use 2026 as the publication year. Access date is left as a fillable placeholder where the citation style expects one.
