AICHE + Logseq Integration
Voice input for outlining
Speak your outlines into Logseq. Build your knowledge base 4x faster.
The short answer: open Logseq, click into any bullet on your Journal page or any other page, press ⌃+⌥+R (Mac) or Ctrl+Alt+R (Windows/Linux), speak your thought for 30-90 seconds, and AICHE inserts clean text into that block. Your notes stay in local Markdown files you control, and AICHE processes audio with zero data retention.
The Local-First Capture Problem
Logseq users chose the tool deliberately. They want an outliner with bidirectional links, graph visualization, and block-level references, but they also want local-first storage, open source code, and complete ownership of their data. No proprietary cloud database. No vendor lock-in. Just Markdown files on disk.
This principled choice comes with a practical limitation: input speed. Quick capture on mobile is far more limited than in cloud-first tools, and there's no built-in one-click web clipper. Most knowledge enters Logseq through the keyboard, one block at a time. And keyboard input has a hard ceiling around 40 words per minute for most people.
During a meeting, you're choosing between listening and typing. During a reading session, you're choosing between engaging with the material and documenting your reactions. During morning planning, you're choosing between thinking freely and translating those thoughts into typed blocks. Every time the keyboard demands your attention, it steals that attention from the actual cognitive work.
The gap between how fast you can think and how fast you can type creates a filter. Only the thoughts that feel "worth typing" survive. The rest vanish. Over weeks and months, your Logseq graph ends up thinner than it should be, not because you lacked insights, but because the input friction made you unconsciously selective.
How Voice Works With Logseq
- Open Logseq on your desktop. Navigate to today's Journal page, or open any page where you want to add content.
- Click into an existing bullet, or click below the last block to create a new one.
- Press ⌃+⌥+R (Mac) or Ctrl+Alt+R (Windows/Linux) to start AICHE recording.
- Speak your content naturally. One thought per block works best. Say everything relevant to that point, including context, reasoning, and connections.
- Press the hotkey again. AICHE transcribes your speech, removes filler words with Message Ready, and inserts the text into the block.
- Press Enter to create a new block. Press Tab if you want a child block for sub-points.
- Repeat. Press your hotkey, speak, stop, move to the next block.
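The steps above write ordinary Markdown into your graph folder. After a couple of captures, a Journal file might look roughly like this (contents illustrative, not actual AICHE output):

```markdown
- Morning planning: top priority today is finishing the API migration spec; second is prepping the onboarding review.
	- Child block created with Tab, holding a supporting detail.
- Quick capture from the 10am call: Chen owns the updated timeline for the March release.
```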
Workflows for Logseq Users
Journal as Daily Capture
Logseq's Journal page is the natural starting point. It opens by default, it's automatically dated, and it's designed for unstructured input. This makes it the lowest-friction target for voice capture.
Each morning, open Logseq (it lands on today's Journal), click the first empty block, and press your hotkey. Speak for 2-3 minutes about what's on your plate: projects, priorities, problems, random ideas. Don't organize. Just talk. AICHE captures everything and inserts it as a single block. Then break that block into individual blocks, one per thought, and you have your day's starting material.
Throughout the day, come back to the Journal whenever a thought hits. Press your hotkey, speak for 10-20 seconds, done. Each entry takes less effort than unlocking your phone, which means more thoughts actually get captured.
At the end of the week, your Journal pages contain 50-80 blocks of raw material. During weekly review, scan them for patterns, add [[page links]], and move important items to dedicated pages. The Journal becomes a pipeline, and voice keeps the pipeline full.
Literature Notes and Reading Reactions
When you're reading something that sparks ideas, keep Logseq open beside your reading material. Each time you want to capture a reaction, switch to Logseq, press your hotkey, and speak.
"The author's point about compound interest applying to knowledge is interesting but oversimplified. Knowledge compounds only when you actively connect new information to existing understanding. Passive reading doesn't compound. This relates to the retrieval practice research. The connection has to be effortful to stick."
That took 15 seconds. It captures not just the author's point but your reaction and a cross-reference to another concept. Typing that would take over a minute and would likely interrupt your reading focus enough that you'd lose the next paragraph's context.
After the reading session, go through your blocks and add [[page references]] and tags. The spoken content becomes linked nodes in your graph.
Meeting Notes Block by Block
During meetings, keep Logseq open. Each time something important happens, press your hotkey and speak a brief capture: "Decision made to delay the API redesign until after the March release. Reason is resource conflict with the onboarding project. Chen owns the updated timeline." Press Enter, wait for the next notable moment.
This produces 8-15 blocks per hour-long meeting, each with specific detail. Compare that to the alternative: frantic continuous typing that splits your attention, or giving up and writing sparse notes after the meeting from memory. Voice captures the key moments with minimal attention cost.
Block-Level Knowledge Capture
Logseq's block references let you embed any block inside other pages using (()). This makes every block potentially reusable. The more complete and self-contained each block is, the more useful it becomes as a reference target.
Voice encourages completeness. When you speak a thought, you naturally include the context around it. You don't abbreviate the way you do when typing. The result is blocks that stand on their own when referenced elsewhere, without needing the surrounding context to make sense.
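Under the hood, referencing a block is plain text too. Logseq assigns the source block an id:: property, and any other page embeds it with double parentheses. A sketch (the UUID and block text here are illustrative):

```markdown
- Knowledge compounds only when you actively connect new information to existing understanding; passive reading doesn't compound.
  id:: 65b2c3d4-89ab-4cde-9f01-23456789abcd

- On any other page, embed that block with ((65b2c3d4-89ab-4cde-9f01-23456789abcd))
```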
The Privacy Alignment
Logseq users care about data ownership. Your notes live in local Markdown and Org-mode files, not in someone's cloud database. AICHE aligns with this principle. Audio sent to AICHE's transcription service is processed and discarded. There is no audio storage, no training on your data, no retention beyond the moment of transcription.
Your workflow stays local-first. Logseq stores everything on your machine. AICHE handles the voice-to-text conversion with zero retention, then the text lives in your local files like any other block you typed. No new cloud dependency gets introduced into your data chain.
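Concretely, a Logseq graph is just a folder you can back up, sync, or put under version control yourself. The layout looks roughly like this (names illustrative):

```text
my-graph/
├── journals/
│   └── 2025_01_15.md      # daily Journal pages, one Markdown file per day
├── pages/
│   └── API Migration.md   # dedicated pages, including voice-dictated blocks
└── logseq/
    └── config.edn         # graph settings
```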
Tips for Voice Input in Logseq
One thought per recording, one recording per block. Logseq's power is in atomic blocks that can be referenced, linked, and reorganized. Keep each voice capture focused on a single idea, and it maps cleanly to one block.
Speak the content, type the metadata. Don't try to dictate Logseq syntax like [[page links]], #tags, TODO markers, or property fields. Speak the content naturally, then add syntax during editing. This keeps your speaking flow unbroken.
Use Content Organization for longer captures. If you're speaking for more than 60 seconds, for instance summarizing a chapter or recapping a meeting, enable Content Organization in AICHE settings. It structures the transcription into logical sections that are easier to split into individual blocks.
Enable Clean Language for shared graphs. If you publish your Logseq graph or share it with collaborators, Clean Language ensures your dictated blocks read professionally. Spoken quirks and casual phrasing get polished without changing the meaning.
Heads-up: Logseq's block references (( )) and page links [[ ]] are not generated by AICHE. All linking is manual. Speak your thoughts in plain language, then add graph structure during your editing pass. Trying to speak bracket syntax interrupts your flow and doesn't produce the correct characters.
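A typical editing pass, then, turns a plainly dictated block into a linked one. Something like this (block text illustrative):

```markdown
- As dictated: Decision to migrate the REST API to GraphQL, timeline is Q2, Chen leading the design.
- After editing: Decision to migrate the [[REST API]] to [[GraphQL]], timeline is Q2, [[Chen]] leading the design. #decision
```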
Pro-tip: Use Logseq's query system to surface blocks by content. The more detail you dictate into each block, the more reliably queries find them later. A block that says "discussed API migration" is less findable than one that says "Decision to migrate REST API to GraphQL, motivated by frontend team's need for flexible queries, timeline is Q2, Chen leading the technical design."
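Logseq's simple-query syntax can then pull those detailed blocks together. For instance, a query block like this one (terms illustrative) lists every block that links to [[GraphQL]] and mentions "migration":

```markdown
{{query (and [[GraphQL]] "migration")}}
```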
Result: A daily Journal habit that produces 8-10 sparse typed blocks expands to 20-30 detailed blocks with voice input. Your graph grows faster, links multiply, and weekly reviews surface connections you would have missed. Meeting notes capture specific decisions and owners instead of vague summaries. Literature notes preserve your genuine reactions instead of diluted versions.
Do this now: Open Logseq's Journal, click into an empty block, press your hotkey, and dictate everything you accomplished today. One block per item, 30 seconds of speaking total. Then add [[links]] to connect those items to your existing pages.
Works With
AICHE with Notion
Dictate into Notion pages and databases. Capture thoughts at speaking speed without switching apps.
AICHE with Tana
Tana with voice. Dictate entries and knowledge cards naturally without typing.
AICHE with Airtable
Airtable with voice. Dictate database entries and records naturally without keyboard input.
AICHE with Capacities
Capacities with voice. Dictate entries and knowledge cards naturally without typing.
AICHE with Coda
Coda documents with voice. Dictate collaborative notes and document content naturally without typing.
AICHE with Heptabase
Heptabase with voice. Dictate entries and knowledge cards naturally without typing.