
Prompt Engineering for Claude: Write Better AI Prompts


Why Prompt Engineering Matters

The same task worded two different ways can produce results that are wildly different in quality, format, and accuracy. Prompt engineering is the practice of deliberately crafting inputs to get the outputs you need — reliably, not just occasionally.

With Claude specifically, prompt engineering matters more than with simpler AI tools because Claude is capable of nuanced, context-sensitive responses. A vague prompt leaves Claude guessing at your intent. A precise prompt gives Claude everything it needs to produce exactly what you want.

The STAR Framework for Claude Prompts

The most reliable structure for Claude prompts is STAR: Situation, Task, Action, Result. Be explicit about all four:

prompt
Situation: I'm building a Next.js 15 App Router app with Prisma and PostgreSQL.
The User model has fields: id, email, name, createdAt, updatedAt.

Task: I need an API route that paginates the user list.

Action: Create the GET /api/users route handler with cursor-based pagination.

Result: Return { users, nextCursor, total } where nextCursor is the ID of the last
item for the next page, or null if on the last page. Support ?cursor and ?limit params.
Default limit is 20. Max limit is 100.

Compare this to the weak version: "Write a paginated user list API." The STAR version gives Claude enough context to write the right implementation the first time.
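
The contract the Result clause pins down is worth seeing in code. Here is a minimal sketch of cursor-based pagination over an in-memory array — hypothetical data standing in for the Prisma query, but the same `{ users, nextCursor, total }` shape and limit rules the prompt specifies:

```typescript
type User = { id: number; email: string };
type Page = { users: User[]; nextCursor: number | null; total: number };

// Cursor-based pagination over an id-sorted list: return up to `limit` items
// after `cursor`, plus the id to resume from (null on the last page).
function paginate(all: User[], cursor: number | null, limit = 20): Page {
  const capped = Math.min(Math.max(limit, 1), 100); // default 20, max 100
  // If no cursor, start at the beginning; otherwise resume after the cursor id.
  const start = cursor === null ? 0 : all.findIndex((u) => u.id === cursor) + 1;
  const users = all.slice(start, start + capped);
  const last = users[users.length - 1];
  const nextCursor =
    last !== undefined && start + capped < all.length ? last.id : null;
  return { users, nextCursor, total: all.length };
}
```

The real route handler would run the equivalent `take`/`cursor` query through Prisma, but the paging contract is identical.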

Specificity Beats Brevity

The most common prompt engineering mistake is being too brief in the name of efficiency. Claude can handle long prompts — the extra tokens you spend on specificity pay off in reduced back-and-forth.

Weak prompt

"Fix the bug in auth.ts"

Strong prompt

"In auth.ts at line 47, the JWT verification fails when the token is expired. It throws an error instead of returning a 401 response. Fix it to catch the TokenExpiredError specifically and return NextResponse.json({ error: 'Token expired' }, { status: 401 }). Do not modify the general error handler."
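
The narrowed error handling that prompt asks for looks roughly like this sketch. `TokenExpiredError` here is a local stand-in for the jsonwebtoken class, and the response shape mimics `NextResponse.json` — both are assumptions for illustration:

```typescript
class TokenExpiredError extends Error {} // stand-in for jsonwebtoken's class

type ApiResponse = { status: number; body: Record<string, string> };

// Catch only the expiry case and map it to a 401; rethrow everything else
// so the general error handler stays untouched, per the prompt's constraint.
function verifyToken(verify: () => void): ApiResponse {
  try {
    verify();
    return { status: 200, body: {} };
  } catch (err) {
    if (err instanceof TokenExpiredError) {
      return { status: 401, body: { error: "Token expired" } };
    }
    throw err; // anything else still reaches the general handler
  }
}
```

Notice how every detail in the code traces back to a sentence in the prompt — that is what specificity buys you.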

Role Prompting

Assigning Claude a specific expert role shifts its perspective and improves output quality for domain-specific tasks. The role activates relevant knowledge and sets implicit quality standards.

prompt
# Security review
"You are a senior application security engineer specializing in web applications.
Review this authentication code for OWASP Top 10 vulnerabilities. Be direct about
severity. Cite the specific OWASP category for each issue."

# Performance analysis
"You are a database performance specialist. Analyze this Prisma query for N+1
problems, missing indexes, and inefficient JOINs. Suggest specific optimizations
with estimated impact."

# Code review
"You are a TypeScript expert who enforces strict type safety. Review this file for
any use of 'any', implicit returns, unsafe type assertions, and missing null checks.
For each issue, show the correct typed version."

Few-Shot Examples

Showing Claude the format you want is more reliable than describing it. Include one or two examples of the exact output format you expect, and Claude will follow the pattern.

prompt
Write a changelog entry for each of these commits. Format like the examples:

Examples:
- "fix: null check on user.profile in dashboard" → "Fixed crash when user has no profile in dashboard"
- "feat: add export to CSV button" → "Added CSV export for data tables"
- "refactor: extract auth middleware" → "Extracted authentication logic into reusable middleware"

Now format these:
- "fix: prevent duplicate email signup"
- "feat: add dark mode toggle"
- "chore: upgrade prisma to v6"
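
Few-shot prompts like this are easy to template so the same format is reused across tasks. A minimal sketch — the helper name and example pairs are illustrative, not part of any API:

```typescript
// Assemble a few-shot prompt from (input, output) example pairs followed by
// the new inputs to format. All names here are hypothetical.
function buildFewShotPrompt(
  instruction: string,
  examples: [string, string][],
  inputs: string[],
): string {
  const shown = examples
    .map(([inp, out]) => `- "${inp}" → "${out}"`)
    .join("\n");
  const queued = inputs.map((i) => `- "${i}"`).join("\n");
  return `${instruction}\n\nExamples:\n${shown}\n\nNow format these:\n${queued}`;
}
```

Keeping the pairs in data rather than prose means you can grow the example set as you find edge cases, without rewriting the prompt.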

Chain-of-Thought Prompting

For complex reasoning tasks — debugging, architecture decisions, security analysis — asking Claude to think step-by-step before answering produces significantly better results. The reasoning process itself catches errors.

prompt
# Weak: direct answer request
"Should I use a monorepo or separate repos for my three microservices?"

# Strong: ask for reasoning first
"I have three microservices: auth-service, api-gateway, and notification-service.
They share some TypeScript types but have different deployment cadences.
Think through the trade-offs of monorepo vs. separate repos for this specific case,
then give me a recommendation with your reasoning."

Constraints and Guardrails

Claude respects explicit constraints much more reliably than implicit ones. Tell Claude what not to do just as clearly as what to do.

prompt
Refactor the UserService class to use the repository pattern.

Constraints:
- Do not modify any test files — they must pass without changes
- Do not add new npm packages
- Do not change the public API (method signatures must remain identical)
- Do not use classes for the repository — use plain functions
- The refactored code must compile with TypeScript strict mode
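
Under those constraints, the resulting repository might look like this sketch — the names are hypothetical; the point is plain functions closing over an injected store rather than a class:

```typescript
type User = { id: string; email: string };

// Repository as plain functions closing over a store: no classes, no new
// packages, and callers keep the same method signatures.
function createUserRepository(store: Map<string, User>) {
  return {
    findById: (id: string): User | undefined => store.get(id),
    save: (user: User): User => {
      store.set(user.id, user);
      return user;
    },
    remove: (id: string): boolean => store.delete(id),
  };
}
```

Because each constraint is explicit, Claude can check its own output against the list before responding — and so can you.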

CLAUDE.md as Permanent Prompt Engineering

The best prompts should be written once and reused forever. When you discover a prompt pattern that consistently gets good results, move it into your CLAUDE.md file so it applies to every session automatically.

CLAUDE.md
## How to Interact With Me
- When writing code, always show the TypeScript types first, then the implementation
- When fixing bugs, explain the root cause before showing the fix
- When suggesting refactors, show a before/after comparison
- Never use placeholder comments like "// TODO: implement this" — implement it

## Output Format for Code Reviews
When reviewing code, use this structure:
**Critical** (must fix): [issues that will cause bugs or security problems]
**Major** (should fix): [issues that hurt maintainability or performance]
**Minor** (nice to fix): [style issues, naming, etc.]
**Suggestions** (optional): [ideas for improvement]

Before/After: Prompt Transformations

Three real examples of prompts transformed from weak to strong:

Example 1: Test writing

Before

"Write tests for the auth module"

After

"Write vitest unit tests for lib/auth.ts. Test the validateToken, generateToken, and refreshToken functions. Cover: valid tokens, expired tokens, malformed tokens, and missing required fields. Do not mock the JWT library — test with real tokens using the test secret key in .env.test."

Example 2: Debugging

Before

"The build is failing"

After

"The production build fails with: Error: Cannot find module '@/lib/utils'. This only happens on Vercel, not locally. The file exists at lib/utils.ts. The tsconfig has "@/*": ["./"] as a path alias. Read tsconfig.json and next.config.ts and diagnose why the alias resolves locally but not in the Vercel build environment."

Example 3: Architecture

Before

"How should I structure my API?"

After

"I'm building a REST API with Node.js and Fastify. It serves a mobile app (React Native) and a web dashboard. It has 3 resource types: Users, Projects, and Tasks. Each resource has CRUD plus some domain operations (e.g., archive a project, assign a task). Recommend a route file structure, show a sample route file for the Task resource, and explain how you would handle shared validation logic."

Treat your best prompts like code. Version control them as reusable slash commands in .claude/commands/. A library of well-crafted prompts is a competitive advantage for your team.
