Voice Commands for AI Coding

Dictate to Claude, Cursor, and ChatGPT

Dictate instructions to AI coding assistants.

Works on macOS, Windows, and Linux.

The short answer: open any AI coding assistant (Claude, Cursor, ChatGPT, Gemini), click into the prompt field, press ⌃+⌥+R (Mac) or Ctrl+Alt+R (Windows/Linux), speak your coding requirements for 40-60 seconds, and AICHE transcribes and inserts the formatted prompt.

Typing 150-200 word prompts with full technical context takes 8-12 minutes and forces you to sit motionless while formulating multi-part coding requirements.

  1. Open your AI assistant's web interface or CLI (Claude Code, Cursor chat panel, ChatGPT, Gemini).
  2. Click into the prompt or message field.
  3. Press your AICHE hotkey to start recording.
  4. Speak your complete coding requirement with context (example: "build a React component that fetches user data from an API, handles loading and error states, displays results in a sortable table, uses TypeScript, and makes it reusable").
  5. Press the hotkey again. AICHE transcribes, applies Message Ready formatting, and inserts the text.
  6. Press Enter to send the prompt to your AI assistant.

Heads-up: AICHE transcribes your instructions for the AI assistant to read. It doesn't convert spoken words like "function" or "curly brace" into actual code syntax. You describe what you want, the AI writes the code.

The pro-tip: if you think in a language other than English, enable Translate to English in AICHE settings. Speak naturally in Russian, Spanish, or Mandarin, and the AI receives properly formatted English prompts automatically.

Why Voice Prompts Get Better AI Output

When you type a coding prompt, you instinctively minimize effort. "Fix the auth bug" is easier to type than explaining the full context. But AI assistants produce dramatically better code when they understand the complete picture - framework, architecture, constraints, and what you've already tried.

Speaking removes the effort barrier. A 45-second spoken prompt naturally includes 120-180 words of context that you'd never bother typing. You mention the framework, describe the component structure, explain the bug symptoms, and specify what the fix should look like - all because speaking is easier than typing.

The result is fewer back-and-forth cycles. A detailed first prompt often produces usable code on the first response, while a terse typed prompt requires 3-4 follow-up clarifications before the AI understands what you actually need.

Which AI Tools Work Best

AICHE's global hotkey works with any tool that has a text input field. Here's how it integrates with common AI coding tools:

Claude (Web and Claude Code)

Click into Claude's message field, press your hotkey, describe the architecture, the bug, or the feature you need. Claude's strength is understanding complex multi-step requirements, so detailed voice prompts play to its advantage. Speak for 30-60 seconds with full context and Claude produces comprehensive solutions.

Cursor

Cursor's inline chat and command palette both accept voice input through AICHE. Open the chat panel (Ctrl+L), press your hotkey, and describe the code change you want. Cursor sees your codebase, and your voice prompt gives it the intent - the combination produces targeted edits across multiple files.

ChatGPT

Click into the message field, press your hotkey, speak. ChatGPT handles well-structured prompts best, so voice dictation's natural sentence structure works in its favor. For code generation, include the language, framework, and any constraints in your spoken prompt.

GitHub Copilot Chat

In VS Code's Copilot Chat panel, use your hotkey to dictate coding questions and change requests. Copilot Chat has file context from your editor, so focus your voice prompt on intent and requirements rather than describing the code it can already see.

Effective Voice Prompting Patterns

The Context-Problem-Ask Pattern

Start with context (what you're building and what framework), describe the problem (what's broken or what's missing), then state the ask (what you want the AI to do). This mirrors natural conversation and produces the clearest prompts.

Example: "I'm working on a Next.js app with a PostgreSQL backend. The user profile page loads slowly because it makes 6 separate API calls on mount. Can you refactor the data fetching into a single server-side function that runs all queries in parallel and returns a combined response?"

That takes 20 seconds to speak but contains framework, database, specific problem, root cause, and desired solution - all context that produces a much better AI response.
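A prompt at that level of detail lets the AI skip straight to the structural fix. Here's a minimal sketch of the kind of refactor it might produce; the query helpers, types, and field names are made-up placeholders, not part of any real app:

```typescript
// Placeholder types for the combined profile-page payload.
type Profile = { id: string; name: string };
type Post = { title: string };
type Follower = { id: string };

// Hypothetical data-access helpers standing in for the six real queries.
async function fetchProfile(userId: string): Promise<Profile> {
  return { id: userId, name: "Ada" };
}
async function fetchPosts(userId: string): Promise<Post[]> {
  return [{ title: "Hello" }];
}
async function fetchFollowers(userId: string): Promise<Follower[]> {
  return [{ id: "u2" }];
}

// Single server-side entry point: every query starts immediately, and the
// function resolves when the slowest one finishes, instead of waiting for
// each call in sequence.
async function getProfilePage(userId: string) {
  const [profile, posts, followers] = await Promise.all([
    fetchProfile(userId),
    fetchPosts(userId),
    fetchFollowers(userId),
  ]);
  return { profile, posts, followers };
}
```

The page then makes one request for this combined response instead of six separate calls on mount.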

The Iterative Refinement Pattern

Send an initial voice prompt, review the AI's response, then dictate a follow-up that refines the output. "That looks good but change the error handling to use a Result type instead of try-catch, and add a retry with exponential backoff for the network calls."

This is where voice shines - refinement prompts are quick reactions that are painful to type but effortless to speak. You look at the code, see what needs changing, and speak the correction in 10 seconds.
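That spoken refinement is concrete enough to sketch. One way an AI might implement it, assuming a generic async operation (the attempt count and delay values here are illustrative, not prescribed):

```typescript
// A discriminated-union Result type: success and failure are explicit
// values rather than thrown exceptions.
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Retry an async operation, doubling the wait after each failure
// (exponential backoff), and return a Result instead of throwing.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 4,
  baseDelayMs = 100,
): Promise<Result<T>> {
  let lastError = new Error("no attempts made");
  for (let i = 0; i < attempts; i++) {
    try {
      return { ok: true, value: await op() };
    } catch (err) {
      lastError = err instanceof Error ? err : new Error(String(err));
      await sleep(baseDelayMs * 2 ** i); // 100ms, 200ms, 400ms, ...
    }
  }
  return { ok: false, error: lastError };
}
```

Callers then branch on `result.ok` instead of wrapping every network call in try-catch, which makes failure handling visible at each call site.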

The Architecture Discussion Pattern

Use voice to have a conversational back-and-forth with the AI about system design. "Walk me through the tradeoffs between using a message queue versus direct API calls for the notification system. We need to handle about 5,000 notifications per minute with at-most-once delivery."

Architecture discussions are inherently verbal - they're the kind of conversation you'd have with a senior engineer at a whiteboard. Voice input makes AI interactions feel natural for this type of thinking.

Tips for AI Voice Prompting

Don't self-edit while speaking. Let your thoughts flow out in natural order. Message Ready cleans up grammar and filler afterward. Your job is to communicate the full context, not to craft perfect sentences.

Mention file names and function names. AI tools work better when you reference specific code. "Look at the AuthCallback component in auth-callback.tsx" is better than "look at the auth code."

State constraints explicitly. "Use TypeScript, no external dependencies, compatible with Node 18" - constraints you'd leave out of a typed prompt because they're tedious to type are easy to include when speaking.

Result: detailed coding prompts that took 10 minutes to type with full context now take 50 seconds to speak, and you can pace while thinking through architecture instead of hunching over a keyboard.

Do this now: open Claude or ChatGPT, press your hotkey, and dictate one feature you've been putting off because typing the full requirement with error handling and edge cases felt overwhelming.

#ai-coding #voice-commands #development