kleamerkuri

May 7, 2026 · 19 min read

Snaply: Local AI Dictation That Actually Works (And It’s Free)

If you watch a lot of developer or productivity content on YouTube (like me), you’ve probably seen Wispr Flow pop up without even realizing it. Someone’s mid-sentence on camera, they switch to their screen to type a prompt, and you see this little overlay appear as they dictate. It’s subtle, but once you notice it, you start seeing it everywhere.

That was my first real exposure to AI dictation as a workflow. Not as a product review, just someone using it casually in the background.

It looked nice. Real nice.

And then I saw the price tag: $15/month, or $144/year on the annual plan.

I started looking for a free alternative.

Why Mac’s Built-In Dictation (and My Own Attempt at One) Fell Short

The obvious first stop was the dictation I already had access to. macOS has built-in voice dictation. There are browser-based options. I even spent some time testing a dictation feature I was building into my NoteApply Chrome extension.

None of them was good enough for actual daily use, each for different reasons.

The problem with built-in Mac dictation is that it transcribes what you say literally. Filler words stay in. Sentences that trail off or restart mid-thought get captured exactly as spoken. You end up spending as much time cleaning the output as you would typing, which defeats the whole point.

The Ollama-based approach in the Chrome extension was more interesting but had its own limitations, like the fact that it’s scoped to the extension. For dictation to work as a daily driver, it needs to be near-instant and apply across the various apps and platforms you’ll use.

So, that’s what eventually led me to Snaply, a tool I’ve rarely heard anyone mention and only came across by chance.

Related: I Tried the ‘Code for Free with Local AI’ Setup. Here’s What Actually Happened.

I didn’t go in expecting much since free tools in this space usually mean capped usage, a watermark, or a feature set that’s barely worth the install.

Snaply is none of those things. It’s a Mac app with three core features:

  1. AI dictation
  2. writing assistant
  3. automatic meeting notes

All of it runs locally on your device. All of it is free for individual use, with no usage limits.

I downloaded it to try the dictation; that’s still my main use. But it’s already changed how I write messages and docs, prompt an AI agent during a project, give feedback on generated output, and flag something specific in a code review.

Anywhere you’d normally type something that takes a minute to explain, dictation turns into a real shortcut.

Here’s what it does and how to get started 👇

What Is Snaply, and How Does It Actually Compare to Wispr Flow?

Snaply is a Mac-only app that provides AI dictation, a writing assistant, and meeting notes, all running on your device with nothing going to the cloud.

The developer built it after running into the same two walls that come up with tools like Wispr Flow: the monthly cost and the privacy question.

His answer was to run everything on-device using local AI models, which solves both problems at once.

No cloud processing means no servers receiving your voice data.

No server costs means the individual plan can be free.

How Much Does Wispr Flow Actually Cost, and Is There a Free Version?

Since Wispr Flow comes up most in this space, I’ll put it in the spotlight. Wispr Flow’s free Basic plan is capped at 2,000 words per week. That’s roughly 15 to 20 minutes of natural speech.

Most daily users hit that cap within a couple of work sessions. It’s functional enough to evaluate whether AI dictation fits your workflow, but it’s not really a sustainable free plan for regular use.
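As a quick sanity check on that estimate, here's a back-of-envelope sketch. The speaking rates are my assumption (roughly 100 to 130 words per minute for conversational speech), not a figure from Wispr's docs:

```python
# Back-of-envelope: how long does Wispr Flow's 2,000-word weekly cap last?
# Speaking rates below are assumptions: ~100 wpm (slow, deliberate)
# to ~130 wpm (typical conversational pace).
WEEKLY_CAP_WORDS = 2_000

def minutes_of_speech(words: int, wpm: int) -> float:
    """Minutes needed to speak `words` at `wpm` words per minute."""
    return words / wpm

slow = minutes_of_speech(WEEKLY_CAP_WORDS, 100)     # 20 minutes
typical = minutes_of_speech(WEEKLY_CAP_WORDS, 130)  # ~15.4 minutes
print(f"Cap lasts roughly {typical:.0f}-{slow:.0f} minutes of speech per week")
```

Which lines up with the 15-to-20-minute range above, and with why a couple of real work sessions burn through it.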

Pro runs $15/month billed monthly, or $12/month if you pay annually ($144/year). There’s a 14-day Pro trial available with no credit card required.

In comparison, Snaply charges nothing for individuals, has no word cap, and works offline.

The tradeoff is that Wispr Flow runs on the cloud and has a more polished cross-platform story—Mac, Windows, iOS, Android. Snaply is Mac-only, local-first, and free.

Depending on what you need, one of those is clearly the better fit.

Hey! I’ve heard of Wispr Flow plenty but haven’t used it personally. Same with Otter.ai, which comes up often in the meeting notes space. What I can speak to is Snaply, and that’s what this post is about.

Voice Dictation: How It Works and Why It’s Different From What You’ve Tried

This is still the feature I use most, and the one that’ll probably sell you on the app too. So I want to give it proper space, including answering the under-the-hood questions I had when I first downloaded it.

The basic flow is this: you press a shortcut, speak naturally, release, and your words appear in whatever app is in focus.

Any app. Notion, Slack, Gmail, your code editor, a web form—doesn’t matter. If you can type there, you can speak there. There’s no switching windows, no pasting from a separate interface.

The first time I really felt the difference was during a Teams chat. I needed to explain a small update on a technical detail, the kind of thing that would’ve taken a solid paragraph or two (with bullets) to type out carefully.

Instead, I dictated it in about fifteen seconds, and it landed clean. No forever typing indicator amidst a slew of impatient messages.

That was the moment I started thinking about how else I could use it.

Now, what keeps me using it is the accuracy. I expected to get the words roughly right and spend time cleaning up. The output, however, comes back with proper punctuation, capitalization, and sentence structure without me doing anything extra.

You talk normally, as you would to a person, with your natural expression and pitch.

What’s Actually Running on Your Mac When You Use This?

This is the question I had, and I’m guessing you do too: if this is running locally, what got installed? Do I need Ollama? Did it download something somewhere I didn’t notice?

Snaply downloads the models it needs during install; you just don’t see it happen. There’s no manual model setup, no terminal commands, no Ollama required. The app handles it in the background the same way an app might quietly pull down an asset bundle on first launch.

By the time you open Snaply for the first time, it’s already… ready.

For the dictation layer, Snaply uses Whisper, an open-source speech recognition model originally released by OpenAI. On Apple Silicon machines (M1 and newer), Whisper runs on the Neural Engine, the dedicated chip Apple built for exactly this kind of AI processing.

That’s why transcription feels near-instant even though nothing is going to a server. The chip handles it separately from whatever else you’re running 💁‍♀️

If you’re on an older Intel Mac, Whisper still runs, but on the CPU rather than a dedicated AI chip, which means it’s slower, especially with longer audio. For short real-time dictation like messages, notes, or prompts, you’ll likely still get usable results. For long meeting transcripts, it may feel sluggish.

Apple Silicon is what helps this truly shine.

For the Writing Assistant, the local model is Gemma 4, also downloaded silently during install. Same story: it’s just there when you open the app.

Explore: This Is The Reason Why You Sound More Like AI

The Specifics That Are Actually Worth Knowing

Punctuation works automatically. You don’t say “comma” or “period” out loud; Snaply infers punctuation from your natural speech patterns.

Question marks, capitalization, and even line breaks for things like email formatting are all handled without you thinking about them.

Twenty-five languages are supported on-device, including Spanish, French, German, Portuguese, and Italian, all recognized locally without switching models or extra configuration.

And because the model lives on your machine, there’s no internet dependency. No connection, no problem!

Tip 💡
When you first try it, don’t over-articulate. Speak at your normal pace. The model is trained on natural speech, and slowing down or emphasizing every word tends to hurt accuracy more than help it. I learned this the first day 😅

The AI Writing Assistant: What It Is and How I Started Using It

I downloaded Snaply for the dictation and almost didn’t try anything else. I noticed the writing assistant option sitting there in the overlay, figured I’d give it a quick try, and ended up using it the same day.

The way it works:

  1. Select any text in any app
  2. Hit Control + Space or click the inline toolbar (if enabled), and an overlay appears with a list of transformation modes
  3. Pick one, and Snaply rewrites the selected text in place

No copying, no switching to another tab, no coming back to paste. You can even add a custom instruction in a mini chat right then and there, inline.

There are three modes built in by default:

  • Polish fixes grammar, spelling, spacing, and capitalization. This is the one I reach for most. I’ll dictate something rough, immediately select it, hit Polish, and it comes back reading like I actually wrote it carefully.
  • Tone adjustment rewrites the selected text as more formal, casual, or professional, depending on what you need.
  • Translate converts selected text into another language, with 100+ languages supported.

I tested a custom prompt with something minor, like asking it to title case a specific highlighted selection in the App Store description for the new app I’m listing. It handled it cleanly with no fuss, and the result was exactly what I asked for.

Related: Who Are You Actually Writing For in the AI Era?

Building Your Own Transformation Modes

This is where the Writing Assistant goes from useful to really interesting.

Under Settings → Writing Assistant → Assistant Modes, you can write your own transformation prompts and save them as named modes. Something like “Summarize into three bullets,” “Rewrite this as a LinkedIn post,” or “Tighten this for the blog.”

Once you’ve created a mode, it shows up in the overlay alongside the built-in ones.

The shortcut system makes this actually practical since the first nine active modes automatically get assigned to Control + Space + 1 through Control + Space + 9. So a custom transformation fires with a quick key combo, no menu navigation required.

I have a few modes I want to set up, and honestly, the shortcut system is what makes me want to actually do it rather than leave it as a “someday” thing. Remove user friction, even a tiny bit, and you make them a customer 🥲

Note 👇
The local Gemma 4 model handles Polish and basic Tone adjustments well. For more complex custom prompts, you can connect your own OpenAI or Claude API key under Settings. You only pay your own per-token rate. Snaply doesn’t add a markup.

Automatic Meeting Notes: What I Know, What I Haven’t Tested Yet

I want to cover this feature because the way it works is different enough from what most people have seen, even though I haven’t run it through a real meeting myself yet.

When you have a calendar meeting coming up, Snaply can listen and automatically generate a transcript, a structured summary, and a list of action items after the call ends. What makes this different from tools like Otter.ai isn’t the output but how it captures the audio in the first place.

Otter.ai and similar tools work by joining your Zoom or Google Meet session as a bot participant. It appears in the participant list, and everyone in the meeting can see it.

I’ve run into this during interviews and once in a user study where two bots ended up in the same session simultaneously. It’s definitely a strange experience that changes the dynamic of the call, whether you intend it to or not, and in professional contexts, clients especially tend to notice.

Snaply captures audio directly from your Mac’s system output. Nothing joins the call. No one on the other end sees anything different. The transcript and summary stay on your machine when the call ends, and you can search through them, replay sections, or review them alongside the full recording later.

There’s also speaker identification: you name participants once, and Snaply recognizes them in future meetings.

I’ll follow up once I’ve actually used this in a real meeting. But the architectural difference alone makes it the more interesting option to me.

Tip: Meeting notes work with Zoom, Google Meet, and Teams out of the box. If your setup is different from those three, check the docs before depending on it for a call that matters.

Why Is This Free? How the Local AI Model Actually Pays for Itself

Local AI is cheaper to distribute than cloud AI because there’s no recurring server cost.

Once Snaply downloads the model to your Mac during install, every computation runs on your own hardware, be it your Neural Engine, CPU, or RAM. The developer isn’t paying for server inference every time you speak, which is why there’s no usage cap and no subscription required for individuals.

Your dictation history, meeting transcripts, and recordings are stored locally in the app’s data folder on your Mac’s disk. You can search old dictations, replay meeting audio, and review past notes.

You own all of it, and it only exists as long as you keep it. Delete the app, delete the data.

One caveat worth flagging is that if you connect your own OpenAI or Claude API key for the Writing Assistant, the text you’re transforming gets sent to that provider. Snaply is upfront about this in their docs. Keep it in mind if you’re working with anything sensitive.

Related: Your Phone Can Run Real AI Now, Here’s What That Actually Means

Four Settings to Check Before You Start Relying on It

These aren’t settings you have to configure before the app works; it runs fine out of the box. But after using it for a bit, I hit a few things I wished I’d looked at earlier.

Here’s where each one lives:

Dictation Shortcut — Check This First

Go to Settings → Dictation → Trigger Key.

Before you touch anything, press the current trigger key and see if dictation activates. On my setup, it was automatically bound to the right-side Command key on install. I didn’t configure that; it just mapped itself. If yours already works, you might not need to change a thing.

If you want to remap it, tap the trigger key field and press whatever key or combo you want. Caps Lock is a popular remap since it’s easy to reach and rarely mapped to anything else. If you’ve already reassigned Caps Lock, Fn or Right Option are both good alternatives.

Tip 👀
Snaply is really good about including mini interactive walkthroughs for its various functions, so I definitely recommend spending the extra couple of seconds going through them. They’re quick and show you directly how to perform each action.

Custom Vocabulary — Worth It If You Work With Technical Terms

Go to Settings → Dictation → Custom Vocabulary.

Add technical terms, product names, or jargon that Whisper might misinterpret phonetically, things like useState, GraphQL, Tailwind, or your company’s internal product names. You add them as plain text entries, one at a time.

Whisper is accurate overall, but specialized vocabulary can trip it up if it hasn’t encountered those exact strings before.

For instance, I’m working on a cool new experimental side project that uses Code – OSS. Every time I dictated that name, it came out as “code OSS” (if I spelled out “o-s-s”), “goddos” (worst case), or “code OS” (best case: no spelling, just great enunciation).

A few entries here go a long way for anyone working in code or with less common project names.

Text Snippets — Handy for Boilerplate You Type Constantly

Go to Settings → Snippets.

You can define voice-activated text expansions here. Set a trigger phrase and the full block of text that should expand when you say it, things like your standard email sign-off, a template greeting, a boilerplate you find yourself retyping constantly.

Say the phrase, and the snippet drops in at your cursor.

It’s a small thing individually, but it adds up across a full day of typing, rehashing the same thing over and over again.

Clipboard Behavior — Only Matters for Specific Workflows

Go to Settings → Dictation → Output Mode.

By default, dictated text pastes directly at your cursor. If you’d rather control when and where it lands, switch this to clipboard-only; for example, when you’re dictating into a context where you want to review before anything gets inserted.

You then paste manually when you’re ready.

Most people will never need to change this, but it’s there if your workflow calls for it.

How I Actually Use It Day-to-Day

I’m going to skip the “here are three workflow templates” format because, honestly, it always comes out sounding like a blog post about a blog post (I swear this made sense in my head when I said it).

Instead, here’s how dictation and the writing assistant have actually changed what I do.

Dictating Prompts While I’m Working

This is the use case I didn’t expect. When I’m working on a project and need to prompt an AI agent, I used to type it out. Which meant I’d often write a shorter, less detailed prompt to save time, or I’d end up with finger cramps.

Now I just talk through what I want. I describe the feature, explain what’s wrong with an existing implementation, and give context about what I’ve already tried, all in one flow.

The prompt comes out more like how I’d explain it to another developer. That’s exactly what a good prompt should be, and those are the details that tend to get lost when we’re writing things down.

I actually didn’t think I’d like dictating my prompts because I find it a little distracting to hear myself talk out loud. So far, though, with more practice and a little adjustment, I’m getting used to it, and I keep reaching for dictation far more often than typing.

Same thing when I’m giving feedback on a generated output. Pointing out what was missed, explaining what I actually wanted, and flagging something specific is all streamlined. Speaking it out is faster and tends to produce better feedback than typing it, because I’m not self-editing as I go.

However, dictating things like prompts can be somewhat of an issue if you tend to ramble. You need to train yourself to speak succinctly and stay on point to avoid creating too much irrelevant noise in your prompt.

Tip 💯
Structure your dictation based on your activity. If you’re brain dumping, then ramble away. But if you’re aiming to provide a structured prompt for an AI or feedback to a colleague, then narrow down what you’re trying to say.

Drafting Rough, Polishing After

Instead of staring at a blank message window trying to write something clean on the first pass, I start talking. Rough, unpolished, stream of consciousness to get the content out of my head and onto the screen.

Then I select it all, hit Control + Space, choose Polish (default, or my version that polishes in certain ways for different cases), and let Snaply clean it up.

This is how I’ve been handling messages I’d normally sit on and overthink.

When you remove the pressure of crafting while you’re talking, both parts get easier. The output after Polish is cleaner than what I’d usually type in one shot anyway.

Fixing Something Without Leaving the App

You’re mid-Slack thread or mid-email, and what you’ve written is a little rough. You can see it, you can definitely tell, but you don’t want to open another tab to fix it.

Select the text, hit Control + Space, pick Tone or Polish. Done. You never left the app.

I reach for this very often throughout a normal day, not for big rewrites but to remove small friction before something goes out.

Where the Free Tier Actually Has a Ceiling

The free tier is unlimited for individuals with no word caps, no time limits, and no feature gates on the dictation side. But the local writing model has limits, and I’d rather tell you where they show up than have you hit them unexpectedly.

For Polish and basic Tone adjustments, Gemma 4 does well. That’s where I’ve been using it most, and I’ve been happy with the output.

However, when you push it toward complex creative tasks, something like “rewrite this in the voice of a product manager explaining a difficult tradeoff to stakeholders,” the local model will give you something that isn’t necessarily bad but doesn’t match the quality of GPT or Claude.

The gap is real in anything that requires judgment or nuance, rather than just cleanup.

That’s what the API key option exists for. If you’re already paying for OpenAI or Claude, go to Settings → Writing Assistant → AI Model, connect your key, and the Writing Assistant will route custom prompts through that model.

You pay your own per-token rate, and Snaply doesn’t charge anything on top.
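To get a feel for what “your own per-token rate” means in practice, here’s a rough sketch. The rates below are placeholder numbers I made up for illustration, not actual OpenAI or Anthropic pricing; check your provider’s pricing page for real figures:

```python
# Rough cost estimate for routing one Writing Assistant transform
# through your own API key. Rates are HYPOTHETICAL placeholders,
# not real provider pricing.
HYPOTHETICAL_RATE_PER_1M_INPUT = 3.00    # USD per 1M input tokens (assumption)
HYPOTHETICAL_RATE_PER_1M_OUTPUT = 15.00  # USD per 1M output tokens (assumption)

def transform_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one rewrite, given token counts."""
    return (input_tokens / 1_000_000 * HYPOTHETICAL_RATE_PER_1M_INPUT
            + output_tokens / 1_000_000 * HYPOTHETICAL_RATE_PER_1M_OUTPUT)

# A typical paragraph rewrite: roughly 400 tokens in, 400 tokens out.
print(f"~${transform_cost(400, 400):.4f} per rewrite")
```

Even at rates several times these placeholders, a single rewrite costs a fraction of a cent, which is why bring-your-own-key works out so much cheaper than a flat subscription for occasional use.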

Tip: For Polish and Tone, the local model is fine. For the custom modes where you’re asking for something more nuanced (and it’s not proprietary or needs to stay private), that’s when I’d connect the key.

It’s a Wrap

I’m not usually someone who downloads a new productivity tool and sticks with it because I find most of them require too much habit change for the payoff they deliver.

Snaply’s been different. It has truly surprised me!

The dictation ended up working its way into things I wasn’t expecting, from messages to prompting, feedback, and anything where I’d normally type something that takes more than a few seconds to say.

The writing assistant I’m still figuring out, but the Polish shortcut alone has already saved me a lot of second-guessing.

It’s free, it’s local, and the setup is only five minutes.

If you’re on a Mac, it’s worth the download to try the dictation for a day and see if it changes anything for you because it already has for me!

Download Snaply and try it out yourself if you haven’t already. If you have, I’d love to hear your thoughts about it.

Otherwise, I’ll see you on the next one.

Bye, friends.
