At some point, the experimenting stops, and the building starts. It’s when you move past the tutorials and start staring at a repo that’s supposed to become a real product.
That shift usually comes with a question nobody warns you about: Okay, but how do I actually trust this thing with real work?
If you’ve read the first two posts in this series—the honest first look at Antigravity, then the deep dive into workflow orchestration—you already know what it can do on a good day.
But today, we’re talking about what happens when the honeymoon and sheer enthusiasm wear off, and you’re building something you actually care about.
Two things tend to surface when you get to this stage:
- Your agent doesn’t know anything about your real data.
- You realize you haven’t thought much about what happens when your workspace gets something malicious dropped into it.
Both of those have answers that we’re covering today.
Why Your Agent Is Flying Blind (And What MCP Does About It)
I was working on a data feature for VersoID (my new app) and asked my agent to generate the query logic.
It wrote something that was clean, confident, and looked reasonable.
Except that the table names were wrong. The field names were close (like, suspiciously close), but they weren’t what I actually had in my database.
The agent had literally invented a schema and built against it, without flagging a single thing. It just handed me code like everything was fine.
That’s not a glitch. That’s just how agents work.
An Antigravity agent lives inside its “context window”—your prompt, your instruction files, whatever code is open in the workspace.
Your actual live database? It isn’t in there. So, it guesses 😬
Sometimes the guess is good. Sometimes you’re an hour deep into debugging a feature built on columns that don’t exist.
MCP was designed to change that. Model Context Protocol, an open standard originally introduced by Anthropic, gives your agent a real connection to your data sources instead of a mental model it assembled from your prompts.
Google shipped MCP server support for their Data Cloud in December 2025, and a lot of “vibe coders” jumped straight to direct database connections because of it.
I get why. It’s genuinely powerful. But I’ll be honest with you: I don’t do it that way.
Related: What Is MCP, And Why Can’t You Just Use an API?
The Case for a Buffer
Direct MCP database access means your agent can read, query, and, depending on your setup, execute changes against your live data.
That’s fast, but it’s also a lot of trust to hand to a tool that we just established confidently invents things.
In professional environments, this is exactly why most employers won’t enable direct MCP database access.
Data integrity is the concern. One agent running the wrong migration on a production table is a very, very bad day.
Even in my personal projects, I don’t want my agent to have that much autonomy over data I actually care about.
So here’s what I do instead with Supabase. Rather than letting the agent connect directly, I let it generate migration files and SQL changes into a local folder—basically a staging area inside the project.
The agent does the thinking, writes what it wants to execute, and then I review it before anything touches the actual database.
I’m the manual step. That buffer is intentional.
It mirrors how a real production workflow operates. Changes get reviewed before they run, so nothing executes blindly.
Tip 👀
If you’re using Supabase, this pattern works naturally with the Supabase CLI. Your agent generates migration files into supabase/migrations/, you review them, then run supabase db push yourself. The agent does the thinking and you control what actually takes effect.
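To make that review step concrete, here’s a minimal sketch of what the "buffer" can look like in practice. The `review_migrations` helper is mine, not part of the Supabase CLI — the only real CLI commands referenced are `supabase migration new` and `supabase db push`:

```shell
# Manual review step: print every agent-generated migration so you can
# read it before anything touches the database.
review_migrations() {
  dir="$1"  # directory holding the agent's migration files
  for f in "$dir"/*.sql; do
    [ -e "$f" ] || continue
    echo "=== $f ==="
    cat "$f"
  done
}

# Typical flow with the Supabase CLI:
#   review_migrations supabase/migrations
#   supabase db push   # run only after you've read every file
```

The point isn’t the script itself — it’s that nothing executes until a human has actually looked at the SQL.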
If you DO want to go the direct MCP route
Look, I’m not saying don’t do it. It’s available, it works, and for certain use cases—like analytics, read-only queries, or internal tooling—the risk profile is much lower.
A developer named Marcelo Costa used Antigravity with the BigQuery MCP to build a FinOps script for GCP billing data.
His agent could query BigQuery directly, fix its own UNNEST syntax when something failed, and iterate toward a working tool grounded in actual data shapes.
Now, that’s a lot of “BigQuery” terminology right there. The takeaway is that for read-heavy, non-destructive work like that, a direct connection is totally reasonable.
Note ⚠️
If you go this route, the recommendation I kept coming across in my research was to use IAM credentials rather than hardcoded API keys, and scope the access as narrowly as you can (read-only where possible). The MCP Store covers AlloyDB for PostgreSQL, BigQuery, Spanner, Cloud SQL, and Looker.
Skills: Stop Re-Explaining Yourself Every Session
Every time you start a new conversation with your agent, you’re starting from zero. It doesn’t remember your preferred component structure, your naming conventions, how you want it to handle error states, your brand tone—nothing.
And if you’re like me, you end up typing some version of the same three paragraphs at the top of every prompt just to get the agent back in context. That gets old fast.
If you find yourself doing that, you’re burning tokens and time on something that should already be solved.
Skills are the solution. They’re local markdown files that live in your project folder and teach your agent how to perform specific, repeatable tasks without consuming any of your AI token quota.
When you reference a skill with an @ tag, the agent reads those instructions locally. No API call. No token cost for the instructions themselves.
I genuinely wish someone had told me about this one earlier 🥲
How to Use the Skill Creator
The Skill Creator is a meta-skill: a skill that builds other skills (meta, I know). Here’s how to use it:
Step 1: Get the Skill Creator file
It’s available from Anthropic’s GitHub repository. Download skill_creator, drop it into your Antigravity project folder, and rename it skill_creator.md.
Step 2: Tag it and describe what you want
In the Agent Manager, reference the file and describe your new skill in plain language.
For me, the first one I actually generated was a coding style skill for my app. It included component structure, naming conventions, and how I want state handled.
Something like:
@skill_creator I want to create a skill that enforces my app's component
structure and coding conventions so the agent follows them consistently
without me re-explaining every session.
Step 3: Save and use it
The agent produces a structured markdown file with all the instructions baked in.
Drop it in your /skills folder and tag it at the start of any session where it’s relevant:
@coding-style [here's the component I need you to build]
No token cost for the instructions. No re-explaining. The agent just knows.
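For illustration only (this is a made-up example, not output from the Skill Creator — your generated file will look different), a coding-style skill might be shaped something like this:

```markdown
# Coding Style Skill

## Component structure
- One component per file, named in PascalCase.
- Co-locate styles and tests next to the component.

## Naming conventions
- Hooks start with `use`; event handlers start with `handle`.

## State handling
- Prefer local state; lift state to a store only when two or more
  components need the same data.
```

Plain headings and bullet points are all it takes — the agent reads it like any other instruction file.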
Tip: Your /skills folder is where your project’s memory lives. Coding conventions, component patterns, brand tone—anything you find yourself repeating across sessions is a candidate.
A Quick Note on Quota Limits
Here’s something I ran into that I suspect a lot of you will, too. I had a few parallel agents running—mid-session, decent amount going on—and everything just… stopped.
Rate limit. The kind where you’re waiting for a refresh window, and there’s nothing you can do about it.
It’s beyond frustrating when it happens mid-flow, and it’s one of the most common complaints in the community right now.
Skills won’t eliminate the problem, but they do help more than people realize. Because skill instructions are read locally from your project files, they don’t count against your token quota.
Every session where you’re retyping project context into the prompt is a quota you’re burning on overhead instead of actual work.
It’s not the best fix. But it’s the most practical lever you have on the efficiency side without moving to a paid tier.
The Security Thing I Found (That Gave Me Pause)
My app is still a work in progress. Security and optimization (like caching) are things I’m actively working toward, not things I’ve fully shipped yet.
But when I started researching how to prep that work, I went looking at how other developers were thinking about Antigravity security.
The three documented issues I stumbled upon gave me pause. I think it’s worth putting them in front of you even if (like me) you haven’t been hit by any of them.
1. Never Trust an Unverified .md File
This is the one I want to lead with because it’s the easiest to accidentally walk into.
Antigravity runs on instruction files—your agents.md, claude.md, or any markdown file you’ve set up to define how your agent behaves. They’re powerful precisely because the agent treats them as authoritative.
Which means a malicious instruction file is a direct line to your agent’s behavior.
The attack doesn’t require anything dramatic. It’s as simple as:
- Someone posts a “starter template” in a Discord, a forum, or a GitHub repo.
- You download it, drop it in your workspace, and your agent starts following instructions you never wrote.
Those instructions could tell it to read your .env file or exfiltrate your credentials. The agent won’t question it because following instructions is what it does.
The rule here is simple: If you didn’t write the file and you can’t vouch for the person who did, read every single line of it before you let Antigravity execute it. Treat it exactly like you’d treat a shell script from a stranger.
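If reading every line feels daunting, a quick heuristic scan makes a decent first pass. A sketch (the filename and the grep patterns are illustrative, not exhaustive):

```shell
# Flag the obvious red flags in a downloaded instruction file:
# references to environment files, network tools, and external URLs.
scan_instructions() {
  grep -nE '\.env|curl |wget |https?://' "$1"
}

# Usage: scan_instructions downloaded-template.md
# No output means nothing obvious turned up; it does NOT mean the file is safe.
```

A clean scan is not a green light — it just tells you where to look first before you read the rest by hand.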
You wouldn’t run that without reading it first… right?
2. Prompt Injection via Markdown
The second issue is quieter but worth knowing. If your codebase has maliciously crafted text hiding in comments, documentation, or markdown files, an agent can be nudged into leaking files it has access to, like your .env, to an external URL.
The agent follows instructions faithfully. It just can’t always tell which instructions are actually yours 🙂‍↕️
I haven’t been hit by either of these, but going through the research was enough to make me go back and tighten a few things in my own setup.
More: All You Need To Know About AI Workflow Files And How To Use Them
3. The mcp_config.json Backdoor
This one is the sneakiest of the three. Security researchers found a persistent backdoor tied to a file called mcp_config.json.
And the way it works is honestly kind of unsettling once you understand it:
- You clone a repo—maybe from GitHub, maybe something shared in a Discord server
- You click “Trust this folder,” which is just a normal part of Antigravity’s workflow
- Malicious code inside that workspace silently overwrites the global mcp_config.json, which lives outside your project folder in a shared config directory
- From that point on, every time you launch Antigravity, regardless of which project you open, that malicious config executes 😵
It even survives a full uninstall and reinstall!
Because the file lives outside the app directory, wiping and reinstalling Antigravity doesn’t touch it. You have to find it and delete it manually (~/.gemini/antigravity/mcp_config.json).
Google initially called this “intended behavior” (which, respectfully, is a choice) before later acknowledging it in their known issues documentation.
What I’d Recommend Before You Go Further
None of this is a reason to stop using Antigravity! It’s just a reason to stop treating it like a toy and start treating it like a tool that has real access to your machine. (This goes for Cursor and any other AI-assisted tools.)
Beyond the .md file rule above, here’s what I’m actually doing in my own setup:
The Quick Wins
1. Turn off auto-execute for terminal commands.
In settings, switch the terminal execution policy to Request Review instead of auto-approve.
Yes, it adds friction, but it’s the good kind. Worth it every time.
You want to see what the agent is about to run before it runs it.
2. Keep credentials out of reach.
Don’t store production API keys or deploy credentials anywhere the agent can see.
Use IAM credentials for your database connections (we covered that in the MCP section). If you’re using API keys directly, scope them to non-production accounts with spending limits.
Keep the damage small if something goes wrong 💁‍♀️
3. Audit mcp_config.json occasionally.
Know where it lives, usually: ~/.gemini/antigravity/mcp_config.json
And know what should be in it: only the MCP servers you intentionally installed.
Anything unfamiliar in there is worth investigating before you open another session.
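Here’s a small sketch of what that audit can look like. It assumes the config uses the common `mcpServers` key that MCP configs typically use, and leans on python3 for JSON parsing since jq isn’t always installed:

```shell
# List the MCP servers declared in a config file so you can spot
# anything you didn't intentionally install.
list_mcp_servers() {
  python3 -c 'import json, sys; print("\n".join(json.load(open(sys.argv[1])).get("mcpServers", {})))' "$1"
}

CONFIG="$HOME/.gemini/antigravity/mcp_config.json"
if [ -f "$CONFIG" ]; then
  list_mcp_servers "$CONFIG"
else
  echo "No global mcp_config.json found."
fi
```

If a server name shows up that you don’t recognize, investigate it before opening another session.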
The Bigger Fix: Docker for Untrusted Workspaces
This one deserves more than a short section since it’s what actually cuts the mcp_config.json problem at the root.
What makes that vulnerability nasty is that a malicious workspace writes to a config file on your main machine, outside your project folder, that sticks around across every session.
The only real fix is making sure untrusted workspaces never touch your host machine’s file system at all.
Docker does exactly that. Spin up a container with Antigravity inside it, open the untrusted repo there, and even if something malicious executes, it’s stuck. It can’t reach mcp_config.json on your host because it has no visibility outside the container walls.
Is it extra overhead? Yep.
Do you need it for every project? Nope.
But if you’re regularly pulling in repos from outside sources (think: community templates, client codebases, open source projects you didn’t write), a dedicated “untrusted workspace” container is the cleanest way to stay protected without ditching the workflow entirely.
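As a sketch of what that container invocation might look like — the image name is hypothetical (you’d build one with your tooling preinstalled), and the command is printed rather than executed so you can review it first:

```shell
SANDBOX_IMAGE="agent-sandbox"   # hypothetical image with your agent tooling preinstalled
REPO="$PWD/untrusted-repo"      # the code you don't fully trust

# Only the repo is mounted (read-only; drop ":ro" if the agent needs to
# write), the network is cut off, and the container never sees the host's
# home directory, so ~/.gemini/antigravity/mcp_config.json is unreachable.
DOCKER_CMD="docker run --rm -it --network none -v ${REPO}:/workspace:ro -w /workspace ${SANDBOX_IMAGE}"
echo "$DOCKER_CMD"
```

The isolation comes from what you *don’t* mount: no home directory, no shared config paths, no network to exfiltrate over.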
Note: The security landscape around Antigravity is still evolving. Google’s known issues page is worth bookmarking and checking occasionally as they push updates.
It’s a Wrap
Three things I want you to carry out of this one:
- Data deserves a decision. MCP is powerful, but a “buffer” workflow (where you review SQL before it runs) is closer to how real-world pros operate.
- Skills are your project’s memory. Build them once and stop burning tokens re-establishing context every session.
- Security isn’t a “later” thing. You don’t have to have it all sorted before you start building, but taking a few basic precautions now (like turning off auto-approve for terminal commands) is a lot easier than untangling a mess later.
And if you’re still finding your footing with Antigravity, the earlier posts in this series cover the foundations before any of this makes sense to layer on.
Thanks for sticking with me on what turned into an impromptu 3-part series on Antigravity. The concepts we covered apply to any AI-assisted IDE you might use (and some non-IDE tooling).
These are concepts that demystify these tools and, hopefully, give you an understanding of how you can start using them (safely and practically).
Until next time, code on ✌️