
Claude Token Counter

Estimate token usage for your CLAUDE.md, system prompt, or any text. See how your content fits in Claude's context window — runs entirely in your browser, no data sent anywhere.


What is a Claude token?

Claude uses a byte-pair encoding (BPE) tokenizer similar to those used by GPT models. Roughly speaking, one token corresponds to about 4 characters of English prose or about 3 characters of code; a word like "tokenization" counts as 2–3 tokens. The Claude API bills by token, for both input (prompt) and output (response).
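The chars-per-token heuristic above can be sketched as a small Python function. The name `estimate_tokens` is illustrative, not part of any API, and the result is only a rough estimate, not an exact count from Claude's tokenizer:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Estimate token count from character count.

    Uses the rough ~4 chars/token heuristic for English prose;
    pass chars_per_token=3.0 for code.
    """
    if not text:
        return 0
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("tokenization"))  # 12 chars -> 3 tokens
```

For an exact count you would need the model's real tokenizer; a heuristic like this is only useful for ballpark budgeting.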

What is a context window?

The context window is the maximum number of tokens Claude can process in a single request, covering both your prompt and the model's response. Claude Haiku 4.5, Sonnet 4.6, and Opus 4.6 all support a 200,000-token context window (roughly 150,000 words of English). If your prompt plus the expected output exceeds this limit, you will need to shorten your input.
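The budgeting rule described above can be sketched as a simple check. The function name and the 200,000-token constant mirror the figures in this page; the idea is that prompt tokens plus the output you reserve must fit inside one window:

```python
CONTEXT_WINDOW = 200_000  # token limit shared by the models listed above

def fits_in_context(prompt_tokens: int, max_output_tokens: int,
                    window: int = CONTEXT_WINDOW) -> bool:
    """True if the prompt plus the reserved output fits in the window."""
    return prompt_tokens + max_output_tokens <= window

print(fits_in_context(150_000, 8_192))   # True: 158,192 <= 200,000
print(fits_in_context(198_000, 8_192))   # False: 206,192 > 200,000
```

If the check fails, either trim the prompt or reserve fewer output tokens.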

Tips for reducing token count

  • Remove redundant comments and examples from CLAUDE.md
  • Use bullet points instead of prose paragraphs
  • Keep code snippets short and focused — show patterns, not full implementations
  • Avoid repeating the same instruction in multiple ways
  • Browse lean community rules for inspiration