AICHE + Manus Integration

Voice commands for autonomous AI agents

Speak detailed task specifications to Manus. Autonomous agents need precise instructions.

Download AICHE
Works on:
macOS, Windows, Linux

The short answer: open Manus, click into the task description field, press ⌃+⌥+R (Mac) or Ctrl+Alt+R (Windows/Linux), speak your complete task specification for 60-90 seconds, and AICHE inserts the formatted instructions for Manus to execute autonomously.

Autonomous Agents Need Precise Instructions

Manus is an autonomous AI agent. You give it a task, and it works through multiple steps on its own: browsing the web, reading documents, writing code, creating files, and making decisions at each step without asking you for guidance.

This is fundamentally different from a chatbot. With ChatGPT or Claude, you have a conversation. If the AI misunderstands something, you correct it in the next message. With Manus, there is no back-and-forth during execution. The agent reads your instructions once and then runs with them. If your instructions are vague, it fills in the gaps with assumptions. Those assumptions may not match what you wanted, and you only find out when the task is complete.

This makes the initial instruction the most important input you will write. A vague prompt wastes the agent's entire execution. A detailed prompt produces results you can actually use. Voice lets you dictate the detailed version in 60 seconds instead of spending 10 minutes typing it.

How to Use It

  1. Open Manus in your browser.
  2. Start a new task or open an existing workflow.
  3. Click into the task description or instruction field.
  4. Press your AICHE hotkey (⌃+⌥+R on Mac, Ctrl+Alt+R on Windows/Linux) to start recording.
  5. Speak your complete task specification (example: "research the top five project management tools used by remote engineering teams with 10 to 50 people. For each tool, find the current pricing for a team of 25, the key features for asynchronous communication, any integrations with GitHub and Linear, and real user reviews from the past 6 months. Compile the results into a comparison table with a recommendation based on value for a team that prioritizes async communication and GitHub integration over real-time features").
  6. Press the hotkey again. AICHE transcribes, applies Content Organization, and inserts the text.
  7. Review the instructions, then start the agent.

Writing Constraints and Boundaries

Autonomous agents can take unexpected paths. They might visit sites you did not expect, make interpretations you did not intend, or produce output in a format that does not match your needs. Explicit constraints in your instructions prevent these detours.

Dictate the boundaries alongside the task. Say: "only use official pricing pages and G2 reviews, do not include tools that are primarily for enterprise teams above 200 people, format the output as a Markdown table with columns for tool name, monthly cost per seat, async features, GitHub integration quality, and your assessment. Keep the recommendation section under 200 words."

Those constraints take 20 seconds to speak. Typing them takes two minutes. Without them, Manus might pull from outdated blog posts, include irrelevant enterprise tools, and produce a 2,000-word essay instead of a concise table. The investment of 20 seconds of spoken constraints saves you from throwing away the result and starting over.

Multi-Step Task Specifications

Manus handles complex tasks that involve multiple phases: research first, then synthesis, then output formatting. When you speak a multi-step task, structure it as a sequence.

Dictate: "step one, find all Python HTTP client libraries with more than 5,000 GitHub stars. Step two, for each library, check the latest release date, the open issue count, and whether it supports async. Step three, test the three most popular ones by writing a simple GET request example. Step four, compare their error handling approaches. Step five, write a recommendation memo for a team that needs async support and good error messages."

Speaking in numbered steps takes 30 seconds. Manus interprets the sequence and executes each phase in order. Without the explicit structure, it might try to do everything at once and produce disorganized results.
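To make the dictated example concrete, here is a sketch of what step three's "simple GET request example" might look like if Manus wrote it in Python. Everything here is illustrative, not output from Manus: the harness uses the standard library's `urllib` as a stand-in (the actual task would test third-party clients like requests or httpx), and it spins up a throwaway local server so the request works without network access.

```python
import http.server
import threading
import urllib.request

# Throwaway local server on a random free port, so the GET test
# runs offline instead of hitting a real site.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# The "simple GET request": fetch the URL, record status and body.
with urllib.request.urlopen(url, timeout=5) as resp:
    status = resp.status
    body_bytes = resp.read()

print(status)
server.shutdown()
```

The point of dictating step three explicitly is that it pins the agent to producing runnable snippets like this one for each library, rather than a prose summary of what a GET request is.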

Reviewing and Iterating on Agent Output

After Manus completes a task, review the results. If something needs adjustment, dictate a follow-up instruction that references what was already done. "The comparison table is good, but the pricing data for the third tool looks outdated. Check their pricing page directly and update the table. Also add a column for whether each tool has a free tier."

This refinement takes 10 seconds to speak. It is specific enough that Manus can make the targeted adjustment without redoing the entire task.

Heads-up: review your instructions before starting the agent. Because Manus runs autonomously, corrections mid-execution are not possible. Spend 10 seconds re-reading your dictated prompt to catch any missing details.

Pro tip: include the output format in your instructions. Saying "produce a Markdown table" or "write a one-page memo" or "create a numbered list of findings" prevents Manus from guessing what format you want.

Result: detailed agent task specifications that took 12 minutes to type now take 60 seconds to speak. Manus executes with fewer wrong turns because your spoken instructions included constraints, structure, and output format that typed shortcuts leave out.

Do this now: open Manus, press your hotkey, and dictate one research task you have been putting off. Describe the research scope, the specific data points you need, any sources to prioritize or avoid, and the format you want the results in. Let the agent run.

#productivity #workflow #ai-agents