kleamerkuri

Jan 29, 2026 · 28 min read

All You Need To Know About AI Workflow Files And How To Use Them

There’s this moment that keeps happening to me lately.

I’ll be watching a coding tutorial or reading through a GitHub repo, and the developer will reference their .cursorrules file or their SKILL.md setup like it’s the most natural thing in the world.

Zero explanation. Just: “Oh yeah, I’ve got my workflow files configured, so…”

And I’m sitting there thinking: I’m sorry, your WHAT files?

At first, I figured it was just me. Maybe I missed a blog post. Maybe this was some advanced thing only certain developers needed.

But then I started seeing the same confusion everywhere:

  • On Reddit: “Can someone ELI5 what a SKILL.md file actually does?”
  • On Twitter: “Why does every Cursor tutorial assume I know what .cursorrules are?”
  • On Discord: “I keep seeing .agent/skills/ folders but nobody explains them”
  • On Stack Overflow: “What’s the difference between workflow files and just writing good prompts?”

Turns out, we’re all confused. And honestly? That makes sense.

Why This Matters (And Why It’s So Confusing)

These AI workflow files kind of appeared out of nowhere over the past year, and most tutorials treat them like common knowledge when they’re absolutely not.

Different tools use different formats, the documentation is scattered, and nobody’s written a clear “here’s what this actually is” guide.

At least, none that’s centralized or that clearly lays out the “what”, the “why”, and the “how” 🙂‍↕️

Here’s what’s actually happening: AI coding tools have gotten good enough that developers need a better way to manage context.

The old approach—typing the same instructions into your AI chat over and over—wastes time, burns money on tokens, and gives inconsistent results.

Workflow files solve this. They’re structured documents that tell your AI assistant how to behave in your specific project.

These are specialized Markdown files that act as an “instruction manual” for AI agents.

Think of them like a playbook: you write your project standards once, the AI loads them automatically, and you never have to repeat yourself.

In this post, I’m breaking it all down into a practical guide: what these files are, why they exist, how they save you money, and how to start using them yourself.

Here’s what we’re covering:

  • What these files actually are (with real examples you can see)
  • Why developers use them instead of regular prompts
  • How they save money on AI costs (with specific numbers)
  • The different types across different tools
  • How to create them using AI (meta, I know 💁‍♀️)
  • Plus, when to use Projects in web UIs vs. local workflow files

By the end, you’ll understand what everyone’s talking about, and you’ll know exactly how to implement this in your own workflow!

Let’s decode this together.

What Developers Are Really Asking About

I did some serious digging because I wanted to help all of us understand this really well.

Below is a realistic picture based on community discussions, developer guides, and emerging usage patterns in tools like Cursor, Antigravity, Claude Code, VS Code with Copilot, and others as of January 2026.

Here’s what people want to know:

1. What are these files (like SKILL.md / .agent / workflow .md files) in AI tooling?

When developers talk about AI project flows or “instructional context files,” they’re mostly referring to structured instruction documents that tell an AI agent how to behave in that project.

In Antigravity (an AI-focused IDE):

A SKILL.md file lives inside a folder like:

.agent/skills/my-skill/
    → SKILL.md
    → scripts/
    → helper files

The purpose of SKILL.md is to act as:

  • Human-readable instructions describing what the skill does
  • Machine-loadable behavior that the AI reads and uses when appropriate

This is basically a module of expertise the AI can “turn on” for specific types of tasks.

In Cursor IDE:

Cursor uses .cursorrules (legacy, root-level file) or the newer .cursor/rules/*.mdc files for modular rules that apply to specific file patterns.

In VS Code with GitHub Copilot:

GitHub Copilot supports Agent Skills stored in:

.github/skills/[skill-name]/SKILL.md

or

.claude/skills/[skill-name]/SKILL.md

In Claude Code CLI and similar tools:

.agent/skills/[skill-name]/SKILL.md

Note 💬
All of these share the same core concept: human-readable instructions that are also machine-loadable.

Here’s what a basic SKILL.md file looks like:

---
name: api-testing
description: Guide for testing REST APIs with proper error handling
---

# API Testing Skill

When testing APIs in this project:
1. Use the `/tests/api/` directory structure
2. Always include error case tests (400, 401, 404, 500)
3. Use the `api-client.js` utility for all requests
4. Mock external dependencies using the `mock-server` script
5. Run tests with `npm run test:api`

## Example Test Structure
- `GET` requests: test success + 404
- `POST` requests: test success + validation errors
- `PUT/PATCH`: test partial updates
- `DELETE`: test idempotency

When your AI agent (Claude, Cursor, Copilot, etc.) is working on API-related code, it loads this skill and follows your team’s specific conventions without you having to repeat yourself 🔥

2. Why do developers use these files instead of just plain prompts?

This is a great question, and one that helps demystify AI workflow files.

There are three main reasons:

Context Reuse

Most AI models work by reading all relevant text into memory before generating a response.

If you repeat the same instructions over and over in the chat, you burn tokens (which cost money 💸).

With workflow files:

  • You store structured context once (in Markdown) and let the agent load it when needed
  • The agent’s system loads only what is really relevant, so fewer tokens are consumed

In other words, the model doesn’t have to relearn your rules every time you ask a question.

This is exactly why SKILL.md, structured workflow folders, agent rules files, etc., were created.
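To picture the mechanics, here’s a toy sketch of on-demand skill loading — not any tool’s actual loader, just the idea that only short metadata stays resident while a skill’s full body is pulled in when a task matches:

```javascript
// Toy model of on-demand skill loading (illustrative only; real agents
// use richer matching than this keyword check).
const skills = [
  { name: 'api-testing', description: 'testing REST APIs', body: '...long instructions...' },
  { name: 'react-components', description: 'creating React components', body: '...long instructions...' },
];

// Only name/description stay in the always-loaded context; a skill's
// full body is included only when the task mentions its topic.
function contextFor(task) {
  const lower = task.toLowerCase();
  return skills
    .filter((s) => lower.includes(s.name.split('-')[0]))
    .map((s) => s.body);
}

console.log(contextFor('add api tests for /users').length); // 1 — one skill body loaded
console.log(contextFor('update the changelog').length);     // 0 — nothing loaded
```

Unrelated requests pay for zero skill tokens, which is where the savings come from.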

Consistency

Without structured instructions, AI output varies wildly.

One day, it writes verbose code with comments, while the next day it’s terse and cryptic.

Sometimes it follows best practices. Sometimes it doesn’t 🤷‍♀️

Workflow files standardize behavior so the AI follows the same guidelines every single time.

Scalability

If you’re working on multiple features or have a team collaborating, you don’t want everyone typing different prompts.

That’s a literal nightmare 🙈

Workflow files create shared standards that everyone’s AI assistant follows.

3. How do these files save money on tokens?

Now, let’s get to the nitty-gritty stuff.

In plain English: Tokens = the AI’s “fuel.”

Every word you give it, whether in code or instructions, counts toward a usage bill.

If you repeatedly feed long prompt instructions (e.g., “here are the style guidelines for this project…”), each word in those guidelines gets re-sent to the AI every time.

When you instead store those in a workflow file, the agent system:

  • keeps only a short name and description in the always-loaded context
  • pulls in the full instructions on demand, when a task actually needs them
  • skips them entirely for unrelated requests

That’s why structured workflows and skill files are seen as a cost optimization strategy, especially in tools with large context windows.

Tip 🔥
For large technical instructions, that adds up to thousands or tens of thousands of tokens saved per workflow! That’s serious money over a month. Don’t use AI to build yourself into debt.

Let’s make this concrete with real numbers.

Scenario: Working on a React + TypeScript project

You have coding standards (500 words = ~700 tokens) that you paste into prompts.

Without workflow files:

  • You make 50 AI requests per day
  • Each request includes those 700 tokens
  • Daily token usage: 50 × 700 = 35,000 tokens
  • Monthly usage (22 workdays): 770,000 tokens
  • Cost at $15/million tokens (Claude Opus 4.5 input): $11.55/month

With workflow files:

  • The AI loads your standards once per session: 700 tokens
  • Each subsequent request references them: ~50 tokens
  • Daily token usage: 700 + (50 × 50) = 3,200 tokens
  • Monthly usage: 70,400 tokens
  • Cost: $1.06/month

Savings: $10.49/month just from one set of coding standards 🙌

Now multiply that across multiple instruction sets (testing standards, API conventions, deployment workflows, component structure) and multiple team members, and you’re looking at hundreds of dollars in savings per month.
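If you want to sanity-check those numbers yourself, the arithmetic is simple enough to script (the figures come from the scenario above, and the $15/M price is illustrative, not live pricing):

```javascript
// Reproduce the cost comparison from the scenario above.
// Assumptions: 700-token standards doc, 50 requests/day, 22 workdays,
// $15 per million input tokens (illustrative pricing).
const PRICE_PER_TOKEN = 15 / 1_000_000;
const REQUESTS = 50;
const WORKDAYS = 22;

// Without workflow files: the full 700-token standards ride along on every request.
const withoutMonthly = REQUESTS * 700 * WORKDAYS;     // 770,000 tokens

// With workflow files: one 700-token load per day, ~50-token references after.
const withMonthly = (700 + REQUESTS * 50) * WORKDAYS; // 70,400 tokens

const costWithout = withoutMonthly * PRICE_PER_TOKEN;
const costWith = withMonthly * PRICE_PER_TOKEN;
console.log(costWithout.toFixed(2));              // "11.55"
console.log(costWith.toFixed(2));                 // "1.06"
console.log((costWithout - costWith).toFixed(2)); // "10.49"
```

Swap in your own request volume and per-token price to estimate your savings.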

4. What’s the purpose of each type of file I see online?

Here’s the simplest breakdown:

  • SKILL.md (Antigravity, VS Code, Claude Code, and similar workflows): Instructs the AI how to perform a very specific type of task (e.g., “How to extract tables from PDFs” or “How to test React components”). Think of it like a recipe card for a chef. Once the AI reads it, it knows exactly how to execute that type of work.
  • .agent folders (.agent/skills/, .agent/pipelines/): Houses multiple skills or workflow pieces together, like a toolkit. The AI agent discovers them automatically when it starts working in your project.
  • Workflow markdown files (.md / .mdc / ai/commands/*.md): Define sequenced steps for tasks like feature planning, testing, reviews, etc. They’re used as blueprints that the AI can reference to keep tasks consistent.
  • .cursorrules files (Cursor IDE specific): Define global AI behavior rules for Cursor IDE. They’re usually located at .cursorrules (root of project) or under .cursor/rules/*.mdc (modular, newer approach).

Example:

# Project Rules

- Use TypeScript strict mode
- Prefer functional components
- Use Tailwind for styling
- Keep components under 200 lines
- Write tests for all new features

5. Are these files specific to one platform, like Cursor or Claude?

Nope. And this makes it somewhat of a challenge.

Different AI IDEs have their own flavors:

  • Cursor uses rules files in .cursor/rules/ and workflow files to guide AI assistance
  • Antigravity uses .agent/skills/... with SKILL.md files
  • VS Code with GitHub Copilot supports .github/skills/ with Agent Skills format
  • Claude Code CLI uses .agent/skills/ or .claude/skills/
  • Other frameworks generate an entire ai/commands/ and ai/rules/ folder with MD files to script workflows across tools

Tip 👇
They’re different ecosystems, but the idea is the same: human-written, machine-readable instructions that persist and inform AI behavior over time. Don’t let the variants overwhelm you. Instead, be aware of what exists for your chosen toolset.

The good news? Anthropic published Agent Skills as an open standard in October 2025 for cross-platform portability.

This means the SKILL.md format is increasingly being adopted across tools, making your skills portable between different AI assistants and IDEs.

6. What problems are developers actually solving with these files?

We touched on the solutions while answering the earlier questions, but as a recap:

  • Constant Context Loss: AI forgets details unless you repeat them.
    • Solution: Store context in reusable, well-structured files.
  • Expensive Token Usage: More repeated prompts = more cost.
    • Solution: Store persistent instructions that the system can reference efficiently.
  • Inconsistent AI Output: If you have to write long prompts every time, the output varies.
    • Solution: Rules and workflows standardize responses, and project brief files (e.g., CLAUDE.md) provide exactly the right context in one shot.

Value, Intent, & How to Generate Workflow Files Using AI

Why Workflow Files Matter (Value & Intent)

Here’s the short explanation:

  • Consistency: You want the AI to behave the same way every time (e.g., follow coding standards, use your project’s conventions).
  • Cost Savings: Remember, each character you don’t have to re-type into the prompt is money in your pocket 🙂
  • Scalability: If you’re automating workflows across features or teams, you want machine-friendly, structured files, not manually typed prompts.
  • Modularity: You can reuse the same skill across projects instead of rewriting clever prompts each time.

This is exactly why “skills” and “workflow md files” are starting to show up in tools like Antigravity and project templates.

How to Generate These Files Using AI Itself

You may be surprised (or not, given AI capabilities these days) at how simple this is.

Step 1 — Ask AI for a “standard template” for the type of file you need

Example prompt:

Generate a SKILL.md template for an AI agent that:
  - explains how to build and test a REST API
  - includes name, description, steps, and edge cases
  - is optimized to minimize token usage

The AI generates a structured MD file with a YAML header and steps.

Step 2 — Supplement with Project-Specific Requirements

Next prompt:

Take this template and fill it in based on my project:
  - JavaScript API for a to-do list app
  - use SQLite for storage
  - follow project conventions from my README

You now have customized workflow instructions.

Step 3 — Place the file in the proper folder

Example folder structures:

/my-project/
├── .agent/skills/api-skill/
│   └── SKILL.md
├── .agent/skills/testing-skill/
│   └── SKILL.md

or for Cursor:

/my-project/
├── .cursor/rules/
│   ├── javascript.mdc
│   └── testing.mdc

or for VS Code with Copilot:

/my-project/
├── .github/skills/api-development/
│   └── SKILL.md

That’s it. The AI itself authored the instructions that teach future AI sessions how to do real work.

Tip 💡
Think of workflow markdown files like instruction manuals you leave for your future self or for your AI teammate. Instead of repeating yourself, you’re building a reusable knowledge base. It’s the ultimate level of “active” documentation.

Real-World Examples and Use Cases

To make things clearer and rooted in practice, let’s go over some examples.

Example 1: React Component Creation Skill

Here’s a complete skill for creating React components following specific conventions:

---
name: react-components
description: Standards for creating React components in this project
---

# React Component Creation

## File Structure
Components live in `src/components/[feature]/ComponentName.tsx`

## Component Template
```typescript
import React from 'react';
import { cn } from '@/lib/utils';

interface ComponentNameProps {
  className?: string;
  children?: React.ReactNode;
}

export function ComponentName({
  className,
  children
}: ComponentNameProps) {
  return (
    <div className={cn('base-styles', className)}>
      {children}
    </div>
  );
}
```

## Naming Conventions
- PascalCase for component names
- camelCase for props
- Prefix event handlers with `handle` (handleClick, handleSubmit)

## Styling
- Use Tailwind utility classes only
- Use `cn()` utility for conditional classes
- No inline styles

## Props
- Define props in separate interface
- Always include `className?` for composition
- Use `children?` for wrapper components

Now, when you ask: “Create a Card component”, the AI generates code following these exact conventions on the first try.

Example 2: API Route Testing Skill

With this skill loaded, you can ask: “Create tests for the /api/products endpoint” and the AI will generate comprehensive tests following your exact testing patterns.

---
name: api-testing
description: Testing standards for Express API routes
---

# API Route Testing

## Test File Location
Place tests in `tests/api/[route-name].test.js`

## Required Test Cases
For each endpoint, test:
1. Success case (200/201)
2. Validation errors (400)
3. Authentication errors (401)
4. Not found errors (404)
5. Server errors (500)

## Test Structure
```javascript
describe('POST /api/users', () => {
  it('creates user with valid data', async () => {
    const response = await request(app)
      .post('/api/users')
      .send({ name: 'Test', email: '[email protected]' });
    
    expect(response.status).toBe(201);
    expect(response.body).toHaveProperty('id');
  });

  it('returns 400 for invalid email', async () => {
    const response = await request(app)
      .post('/api/users')
      .send({ name: 'Test', email: 'invalid' });
    
    expect(response.status).toBe(400);
  });
});
```

## Coverage Requirements
- Minimum 80% code coverage
- 100% coverage for critical paths

No more explaining which status codes to test or where test files should live; it’s all standardized.

Example 3: Database Migration Skill

This skill ensures every database change follows a safe, rollback-friendly pattern.

---
name: database-migrations
description: Standards for creating database migrations
---

# Database Migrations

## File Naming
Format: `YYYYMMDDHHMMSS_description.sql`
Example: `20260125143000_add_user_roles.sql`

## Required Sections
Every migration must have:
1. UP migration (apply changes)
2. DOWN migration (rollback)
3. Verification query

## Template
```sql
-- UP Migration
ALTER TABLE users ADD COLUMN email_verified BOOLEAN DEFAULT FALSE;

-- DOWN Migration
ALTER TABLE users DROP COLUMN email_verified;

-- Verification
SELECT column_name 
FROM information_schema.columns 
WHERE table_name = 'users' 
  AND column_name = 'email_verified';
```

## Testing Process
1. Run migration on test DB
2. Verify schema changes
3. Test rollback
4. Re-run migration
5. Check for idempotency

When you ask for a migration, the AI automatically includes both UP and DOWN migrations plus verification queries (critical for production deployments but easy to forget 😬).

Projects in Web UIs: The Alternative Approach

Now, you might be thinking: “Wait, what are Projects then? ChatGPT has them and so does Claude”.

If you’re wondering how Projects relate to these files, let’s clarify 👇

What Are Projects in Web UIs?

As of January 2026, most major AI platforms offer “Projects” features in their web interfaces.

Think of Projects as persistent workspaces that keep all your context—files, instructions, and chat history—organized in one place.

Note ✅
Here’s the key difference from workflow files: Projects live in the cloud, workflow files live in your codebase.

ChatGPT Projects:

Available on all plans (Free, Plus, Pro, Team, Enterprise) as of September 2025.

What you get:

  • Context window: 128,000 tokens (roughly 300 pages of text—enough for multiple documentation files, code examples, and chat history)
  • File uploads: 5 files per project on Free tier, up to 40 files on paid plans
  • Custom instructions: Tell ChatGPT how to behave in this specific project
  • Persistent context: Every chat in the Project remembers your uploaded files and settings

The beauty here? You don’t need a paid plan to get started.

Free users can create Projects, upload reference materials (like style guides or API docs), and set custom instructions so ChatGPT remembers your preferences across conversations.

Real example: You’re writing a blog. Upload your style guide and 3-4 previous posts as reference. Set instructions like “Write conversationally, use short paragraphs, include code examples.” Now, every new post you draft in that Project automatically matches your voice and format.

Tip 🤓
Be careful when including previous writings as reference context, since the model can take them too literally and adopt them as a “knowledge” base. Keep your prompting strict and intentional!

Related: A Smart Free Chrome Extension That Upgrades AI Prompts

Claude Projects:

Also available on all plans (Free, Pro, Max, Team, Enterprise), with enhanced features on paid tiers.

What you get:

  • Context window: 200,000 tokens on paid plans (about 500 pages); 500,000 tokens on Enterprise with Claude Sonnet 4.5
  • RAG on paid plans: Retrieval Augmented Generation automatically scales your knowledge base up to 10x when approaching context limits
  • Project knowledge: Upload PDFs, code files, text docs, and Claude indexes them intelligently
  • Custom instructions: Set project-specific behavior and guidelines

Even on the free plan, you can create dedicated workspaces with custom instructions.

The paid plans unlock RAG, which is massive for developers. Instead of worrying about hitting context limits, Claude intelligently retrieves relevant pieces from your uploaded documentation on demand.

Real example: You’re building a React app. Upload your design system docs, component library code, and API specifications. When you ask Claude to create a new component, it references your existing patterns and follows your naming conventions automatically without you having to paste context every time.

Related: The Ultimate Guide To Re-Engineering My Portfolio’s RAG Chatbot

Gemini (Google AI Studio):

Free to use with impressive specs (Gemini does free right 👏):

  • Context window: Up to 1,000,000 tokens (Gemini 1.5 Pro)—the largest publicly available
  • File uploads: Upload entire books, research papers, codebases
  • System instructions: Set behavioral guidelines
  • Template saving: Save frequently-used prompts for reuse

Gemini’s massive context window is genuinely wild.

You can upload an entire technical book or a massive codebase in a single session and ask questions about it!

This makes it particularly powerful for academic research, analyzing large datasets, or working with extensive documentation.

How Projects Actually Work in Practice

Instead of starting fresh every conversation, Projects create a persistent foundation that stays consistent.

Here’s the basic flow:

  1. Create a Project and give it a clear name (“Blog Writing,” “API Development,” “Research Notes”)
  2. Upload your reference files, like previous work, documentation, code examples, and style guides
  3. Set custom instructions by telling the AI how to behave in this context
  4. Start chatting, and every conversation inherits that foundation automatically

Example ChatGPT Project setup:

Project Name: "E-Commerce Frontend"

Uploaded Files:
- design-system.md (your component guidelines)
- api-documentation.pdf (backend endpoints)
- component-library-examples.tsx (code patterns)

Custom Instructions:
"This is a Next.js 14 app with TypeScript and Tailwind CSS. 
Use the App Router. Components should be server components 
by default. Follow the naming conventions in design-system.md. 
Always include TypeScript types. Prefer composition over props drilling."

When you ask: “Create a ProductCard component”, ChatGPT already knows your:

  • Tech stack (Next.js 14, TypeScript, Tailwind)
  • Architectural patterns (App Router, server components)
  • Naming conventions (from design-system.md)
  • Coding preferences (composition over props drilling)

No repeated explanations. No pasting the same context. Just consistent, on-brand code generation.

Projects vs. Local Workflow Files: When to Use Each

Here’s where we get a little bit strategic.

Projects and workflow files aren’t competing solutions; they’re complementary tools for different workflows.

The Core Difference:

| Projects (Web UIs) | Workflow Files (Local) |
| --- | --- |
| Manual context management | Automatic context loading |
| Centralized in the cloud | Version-controlled in your repo |
| You decide what to include | AI discovers what’s relevant |
| Great for planning & research | Great for active development |

When to Use Projects

  • You’re working entirely in the web UI (no local coding environment). You’re researching, writing documentation, or brainstorming, not actively building in an IDE.
  • You need the same context across multiple unrelated tasks. For example, you have a Project for “Company Knowledge” with onboarding docs, brand guidelines, and technical specs. Every conversation—whether you’re writing emails or drafting blog posts—has access to that foundation.
  • You’re collaborating with non-developers. Projects work great for teams where not everyone codes. Marketing folks, designers, and product managers can share Projects with their custom instructions and reference materials.
  • You want to reference documents frequently. If you’re constantly asking questions about a specific PDF (like a research paper or product spec), upload it to a Project once instead of re-uploading every time.

When to Use Local Workflow Files

  • You’re actively developing in an IDE. Cursor, Antigravity, VS Code, or any local development environment where you want AI assistance while coding.
  • You need context that adapts to what you’re working on. As you switch between components, API routes, and tests, relevant skills auto-load based on file patterns. You don’t manually activate them—they just work.
  • You want team-shared, version-controlled standards. Workflow files live in your Git repo. When someone clones the project, they get the same AI behavior you do. No manual setup required (it’s the beauty of it).
  • You’re building features and writing code daily. Local skills are optimized for the build-test-deploy cycle. They understand your project structure and follow your conventions automatically.

Explore: This Is A Super Easy Optimization Workflow For SEO & Accessibility

Real-World Comparison

Follow with me here:

Scenario 1: Planning a new feature (use Projects)

You’re in the research phase.

You upload user stories, competitor analysis, and technical requirements to a Claude Project.

Then, you brainstorm approaches, draft architecture docs, and create an implementation plan, all in the web UI where you can easily share with your team.

Scenario 2: Building that feature (use workflow files)

Now you’re in VS Code. You have skills for component creation, API integration, and testing.

As you work on different files, the relevant skills load automatically.

The AI generates code following your established patterns without you explaining them every time.

The Hybrid Approach (Best of Both Worlds)

Smart developers don’t choose one over the other. Instead, they use both strategically based on what they’re doing!

Here’s how a typical development workflow might use both:

Phase 1: Research & Planning (Projects)

You create a Claude Project called “User Authentication Redesign.”

Upload:

  • Current authentication flow diagrams
  • Security audit findings
  • User research notes
  • Competitor analysis

Custom instructions: “Focus on security-first solutions. Consider mobile and web platforms. Prioritize developer experience.”

In this Project, you:

  • Brainstorm implementation approaches
  • Draft technical requirements
  • Create system architecture diagrams
  • Write security considerations doc

Think of this phase as prepping a presentation of sorts. If your manager, boss, or client asks you for an overview or a suggestion, this serves as a solid entry point for a pitch (as opposed to an in-the-moment brainstorm).

Phase 2: Implementation Planning (Bridge)

Based on your Project conversations, you generate workflow files:

# Create skills based on your planning
.agent/skills/
├── auth-endpoints/
│   └── SKILL.md  # API patterns from your architecture doc
├── auth-testing/
│   └── SKILL.md  # Security test requirements from audit
└── auth-components/
    └── SKILL.md  # UI patterns from your design spec

You ask Claude (in the Project): “Generate a SKILL.md file for authentication endpoints based on our architecture discussion.”

Phase 3: Active Development (Workflow Files)

Now you’re in VS Code (or Antigravity, Cursor, whatever floats your boat).

Your skills are loaded, so you start building:

  • Create /api/auth/login → Auth endpoints skill loads automatically
  • Build <LoginForm /> → Auth components skill loads
  • Write tests → Auth testing skill ensures security coverage

The AI generates code following the exact patterns you discussed in your Project, but now it’s all automated and version-controlled.

Tip 👀
Make sure you adjust, update, and correct your workflow files based on any new requirements or modifications. This will save you time “troubleshooting” why your co-coding agent is building React class components when you’re specifically looking for functional ones.

Phase 4: Documentation (Back to Projects)

The feature is now complete! Return to your Project:

  • Upload the finished code
  • Generate user documentation
  • Create internal developer guides
  • Write release notes

The Project remembers all your architectural decisions, so the documentation is consistent with your planning.

Why This Hybrid Approach Works

Projects are great for thinking and deciding. Workflow files are great for building and maintaining.

Using both means:

  • Your planning stays organized and shareable (Projects)
  • Your implementation stays consistent and automated (Workflow Files)
  • And your team has clear documentation in both places 🙂

You’re not duplicating effort; you’re using the right tool for each phase of development.

Quick Comparison Table

| Feature | ChatGPT/Claude Projects | Local Workflow Files |
| --- | --- | --- |
| Location | Cloud (web UI) | Local codebase |
| Context Loading | Manual (you upload) | Automatic (AI discovers) |
| Version Control | No (stored in cloud) | Yes (in your repo) |
| Team Sharing | Via web interface | Via Git commits |
| Best For | Planning, writing, research | Coding, building, testing |
| Context Window | 128K-1M tokens | Loaded progressively |
| Cost Impact | Tokens per upload | One-time load |
| IDE Integration | None | Built-in |

A Practical Getting Started Guide

Alright, I’ve hopefully managed to convince you of the high value of AI workflow files 🤞

How do you actually start using this?

Note: Instructions and available features may change; the following is accurate as of the time of writing.

Cursor Users

Option 1: Legacy .cursorrules (Simple)

  1. Create a .cursorrules file in your project root
  2. Write your project rules in plain English
  3. Cursor auto-loads them

Example:

# Project Guidelines

- This is a TypeScript + React + Vite project
- Use functional components with hooks
- Styling: Tailwind CSS utility classes only
- Testing: Vitest + React Testing Library
- Components go in src/components/[feature]/
- Write tests for all new components
- Use named exports, not default exports

Option 2: Modular Rules (Advanced)

Create rules that apply to specific files:

.cursor/rules/
├── api.mdc       (applies to src/api/**)
├── components.mdc (applies to src/components/**)
└── tests.mdc      (applies to **/*.test.ts)

Each .mdc file uses gitignore-style patterns to match files.
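For illustration, a single rule file might look like this — the frontmatter fields (`description`, `globs`, `alwaysApply`) and values here are an assumed shape, so check Cursor’s current docs for the exact schema:

```
---
description: Testing conventions for Vitest suites
globs: ["**/*.test.ts"]
alwaysApply: false
---

- Use Vitest + React Testing Library
- One top-level describe block per component
- No snapshot tests for logic-heavy components
```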

VS Code + GitHub Copilot Users

  1. Create a .github/skills/ directory in your project
  2. Add skill folders with SKILL.md files:

.github/skills/
├── api-development/
│   └── SKILL.md
├── react-components/
│   └── SKILL.md
└── testing/
    └── SKILL.md

  3. Skills work automatically with GitHub Copilot; no manual activation is needed

Claude Code / CLI Tools

  1. Create a .agent/skills/ or .claude/skills/ directory
  2. Add skills with proper frontmatter:

---
name: deployment
description: Deployment process for staging and production
---

# Deployment Workflow

## Staging
1. Run tests: `npm run test`
2. Build: `npm run build`
3. Deploy: `npm run deploy:staging`
4. Verify: Check [staging-url]

## Production
1. Tag release: `git tag v1.x.x`
2. Merge to main
3. CI/CD auto-deploys
4. Monitor logs for 15 minutes
Web UI Users (ChatGPT, Claude, Gemini)

ChatGPT:

  1. Click “New Project” in the sidebar
  2. Upload relevant files (docs, code examples, specs)
  3. Set custom instructions in project settings
  4. Start chatting

Claude:

  1. Create a new Project
  2. Add project knowledge (upload files)
  3. Set custom instructions
  4. All conversations inherit that context

Gemini (Google AI Studio):

  1. Create a new prompt/chat
  2. Upload files in context
  3. Set system instructions
  4. Save as a template for reuse

Explore: GEO Vs. SEO: How To Win In The Age Of AI Search (2026 Guide)
    Some Common Mistakes and Caveats

    There’s always a “but” in all good things.

    Starting with caveats:

1: Skills Require Code Execution Capability

Skills that include scripts or need to read files require the Code Execution Tool (beta in Claude). This provides the secure environment they need to run. It’s a hard requirement, not optional.

If code execution isn’t enabled, skills will still load their instructions, but won’t be able to execute bundled scripts.

2: Not All Tools Support Skills…Yet

As of January 2026:

• Supported: Claude Code, GitHub Copilot (Agent Skills), Cursor (rules), Codex, Antigravity
• Partial: ChatGPT (Projects, not skills), Gemini (Projects, not skills)
• Not supported: Many older AI coding tools

Check your tool’s documentation before investing time in skill creation.

3: Skills Don’t Replace Good Prompting

This is important: Skills provide context and standards.

They don’t make vague requests magically work.

Still bad:

[With a skill loaded]
"Make it better"

Good:

"Refactor the UserProfile component to use our lazy loading pattern
defined in the performance-optimization skill. Keep the existing
prop interface."

Tip: Clear prompts and good skills together give the best results.

Moving on to mistakes:

1: Writing Skills That Are Too Generic

Bad:

---
name: coding-standards
description: General coding best practices
---

Write clean code. Use good variable names. Comment your code.

This is useless: it’s too vague to change anything about the AI’s output.

Good:

---
name: api-error-handling
description: Error handling standards for Express API routes
---

# API Error Handling

All route handlers must:
1. Use try/catch for async operations
2. Return consistent error format:
```json
   {
     "error": {
       "code": "ERROR_CODE",
       "message": "User-friendly message",
       "details": {}
     }
   }
```
3. Log errors to our logging service
4. Return appropriate HTTP status codes
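To make that concrete, here’s a minimal sketch of handler code following this skill (formatError, getUser, and findUser are hypothetical names, not part of any library; the res.status/json calls mirror Express 4’s response API):

```typescript
// Hypothetical helper enforcing the error envelope from the skill above.
type ApiError = {
  error: { code: string; message: string; details: Record<string, unknown> };
};

function formatError(
  code: string,
  message: string,
  details: Record<string, unknown> = {}
): ApiError {
  return { error: { code, message, details } };
}

// Sketch of an Express-style async route handler using try/catch
// and the consistent error format.
async function getUser(req: any, res: any): Promise<void> {
  try {
    const user = await findUser(req.params.id); // your data layer
    res.status(200).json(user);
  } catch (err) {
    console.error(err); // stand-in for "log to our logging service"
    res.status(500).json(formatError("USER_FETCH_FAILED", "Could not load user"));
  }
}

declare function findUser(id: string): Promise<unknown>;
```

With the skill loaded, the AI has a concrete target shape to generate against instead of inventing its own error format per route.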

Don’t be intimidated by the “craft” of good skills. Scroll back to the skill-building section above that shows you how to leverage AI to generate impeccable skills 💡

2: Not Updating Skills as Your Project Evolves

Your codebase changes; that’s normal. Your skills should change with it!

Set a reminder to review skills monthly. For shared skills, make the review a team habit so everyone stays aligned on what’s working and what isn’t.

Make sure to remove outdated instructions and add new conventions as they emerge.

3: Overloading a Single Skill

If your skill file is 500+ lines, break it into multiple skills:

• api-routes-basic.md
• api-routes-auth.md
• api-routes-validation.md

This keeps skills focused, and the AI loads only what’s relevant.

Advanced Tips and the Future

Here’s something worth remembering and looking forward to:

Advanced Tip: Version Your Skills

Treat skills like code. When making breaking changes, version them:

.github/skills/
├── api-testing-v1/
│   └── SKILL.md
└── api-testing-v2/
    └── SKILL.md

This lets you test new approaches without breaking existing workflows.

The Future of AI Workflow Files

Here are some anticipated moves and predictions:

Cross-Platform Standardization

Anthropic published Agent Skills as an open standard in October 2025.

Expect more tools to adopt compatible formats, making skills portable across IDEs and AI assistants.

We already saw from the examples above that skills look fairly similar in structure across tools.

AI-Generated Skills

Looking ahead, agents may create, edit, and evaluate skills on their own, codifying their own patterns of behavior into reusable capabilities.

Imagine your AI assistant analyzing your coding patterns over a month and suggesting: “I notice you always structure API routes the same way. Want me to create a skill for that?”

That’s epic.

Tip: You don’t have to wait for that. You can approximate it today by having your coding agent generate a task list for its execution steps, with explicit instructions to mark each task complete as it finishes.

Organization-Wide Skill Libraries

Companies will build internal skill repositories that new developers can clone:

company-skills/
├── backend/
├── frontend/
├── testing/
├── deployment/
└── security/

This accelerates onboarding and ensures consistency across teams.

A serious onboarding bottleneck, annihilated 🙌

Ending Thoughts

AI workflow files—whether they’re SKILL.md files, .cursorrules, or Projects in web UIs—are not just fancy prompts.

They’re a fundamental shift in how we work with AI assistants.

They solve three real problems:

1. Context loss: No more re-explaining your project in every session
2. Token costs: Load context only when it’s needed instead of re-pasting it every session
3. Inconsistency: Get predictable, standardized output every time

Here’s what to do next:

If You’re Using Cursor:

1. Create a .cursorrules file in your project root
2. Write 10 lines of project-specific guidelines
3. Test it in your next coding session

If You’re Using VS Code + Copilot:

1. Create a .github/skills/ directory
2. Add one skill for your most common task
3. Restart VS Code and see it in action

If You’re Using Claude or ChatGPT Web UI:

1. Create a Project for your current work
2. Upload relevant docs and set custom instructions
3. Use it consistently for one week and measure the time saved

For CLI Users (Claude Code, Codex, etc.):

1. Create .agent/skills/ or .claude/skills/
2. Start with a testing or deployment skill
3. Iterate based on what works

Remember: Start small with one skill for one task. See how it feels, then expand.

You don’t need to overhaul your entire workflow overnight. Just pick one area where you’re constantly repeating yourself, and that’s your first skill.

It’s a Wrap

AI-assisted development is still new. We’re all figuring this out together.

But one pattern is becoming clear: the developers who succeed go beyond just using AI by structuring how AI understands their work.

Workflow files are that structure.

They’re the difference between treating AI as a glorified autocomplete and treating it as a knowledgeable teammate who knows your project, your standards, and your goals.

So stop pasting the same instructions over and over. Write them once. Let the AI load them automatically. Save money, time, and ship better code.

That’s the whole point 🥲

I’ll see ya on the next one!
