How I Onboarded My AI Employee

Log Entry: 2026-02-18 | Subject: AI, Agentic Systems, Business, Case Study, Guide

I have written about why AI changes the economics of building. I have written about what it means that a solo operator can now field a team. But the question I keep getting — the one the other posts do not answer — is how.

How do you actually onboard an AI employee? What does the SOP look like? What do you say to it? What do you not say to it? Where does the handoff happen?

This is the tactical layer. No theory. No philosophy. Just the playbook.


The SOP Is a File

When you hire a human employee, the onboarding happens in conversations, shadowing sessions, Slack threads, and the accumulated knowledge they absorb over weeks. It is messy, organic, and mostly undocumented.

When you onboard an AI employee, the onboarding is a file. Literally. A document that lives in the root of your project, written in plain language, that tells the agent who you are, how you think, and what your standards are.

In my case, that file covers:

  • Personality and cognitive profile. INTJ. Type 4. ADHD. Autism spectrum. HSP. Not because the AI needs a therapy session — because these traits directly shape what "good output" looks like. My writing is direct, systems-oriented, and allergic to filler. If the agent does not know that, it will produce corporate slop. If it does know that, it produces work that sounds like me.
  • Writing voice and style rules. First person. Short paragraphs. Heavy metaphor. Blockquote endings with "The Protocol." No emojis. No fluff. HTML format, not Markdown. These are not preferences. They are the spec.
  • Process checklists. When you create a new blog post, here is the sequence of steps that must happen, in order: create the file with this exact frontmatter template, add it to the index with this exact HTML structure, update both RSS feeds with this exact XML format, assign the correct categories, update the count badges. Step by step. No ambiguity.
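To make "this exact frontmatter template" concrete, here is a sketch of what such a template might look like on a Jekyll site. The field names are illustrative, not copied from my actual SOP:

```yaml
---
layout: post                 # must name a layout that exists in _layouts/
title: "How I Onboarded My AI Employee"
date: 2026-02-18
categories: [ai, business]   # keys must match the site's category taxonomy
description: "One or two sentences that hook, not a summary."
---
```

The point is not these particular fields. The point is that the agent never has to guess the shape of a valid post.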

This is the part most people skip. They open the chat, type a vague prompt, get a vague result, and conclude the technology is not ready. The technology was waiting for a spec. You did not give it one.


My Job: Building Context

The first phase of the delegation framework is "My Job." You do the work while the AI watches. You build context before you expect output.

In practice, this means the first several interactions with a new AI agent are not about getting work done. They are about teaching it how your world works.

When I first pointed Claude Code at this website, I did not say "write me a blog post." I said things like:

Analyze the file structure of this Jekyll site. What layout templates exist? How are blog posts organized? What is the frontmatter format?

Then:

Read the last five blog posts. What patterns do you see in tone, structure, and formatting?

Then:

Here is my style guide. Read it. Tell me what you understand about how I write.

I was not delegating. I was onboarding. The AI was building a mental model of my system, the same way a new hire shadows you for the first week. Except this hire reads the entire codebase in seconds and never forgets what you told it.

The temptation is to skip this phase. Do not skip this phase. Every minute spent building context saves an hour of correcting bad output later.


Our Job: The Iteration Loop

Phase two is "Our Job." You work together. You drive, then they drive. You review, correct, adjust.

This is the phase that feels inefficient and is actually where all the leverage gets built. Here is what it looks like in real life.

Say I want to publish a new blog post. Early on, the prompt looked like this:

Write a new blog post about [topic]. Use the style guide. Create the HTML file, add it to the index, update both RSS feeds, and assign the right categories.

The agent does the work. I review every piece of it. The post file — does the frontmatter match the template exactly? The index entry — is it in the right position, newest first? The RSS feeds — is the pubDate in RFC 2822 format? Are the category keys correct?
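Checks like the RFC 2822 date are mechanical enough that you can automate them inside the review itself. A minimal Python sketch using the standard library's email.utils — the two function names are real stdlib calls; the workflow around them is illustrative:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def is_rfc2822(value: str) -> bool:
    """Return True if value parses as an RFC 2822 date string."""
    try:
        parsedate_to_datetime(value)
        return True
    except (TypeError, ValueError):  # older Pythons raise TypeError, newer ValueError
        return False

# Generate a compliant pubDate for a new feed item.
pub_date = format_datetime(datetime(2026, 2, 18, 9, 0, 0, tzinfo=timezone.utc))
print(pub_date)  # Wed, 18 Feb 2026 09:00:00 +0000
```

Ten lines, and one whole class of feed-breaking mistakes becomes impossible to miss.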

On the first pass, there are always corrections. Maybe it used Markdown instead of HTML. Maybe it forgot to update the category count badges. Maybe the writing was technically correct but lacked the edge. Each correction becomes a learning point.

The key: I do not just fix it silently. I explain why.

The description needs to be punchier. Look at how the other posts handle it — one or two sentences that hook, not a summary. Also, you missed updating the category counts in the Browse by Topic section. That is part of the checklist. Every time.

With a human employee, this is mentorship. With an AI employee, it is prompt engineering — but the good kind, the kind that builds a compounding relationship between your standards and the agent's output quality.


Your Job: The Handoff

Phase three is "Your Job." You set the objective. They own the execution. You review the result.

This is the agentic endgame. And it only works if you did the first two phases honestly.

After enough iteration loops, the prompt shrinks. What used to be a paragraph of detailed instructions becomes something like:

New log idea from Gemini — a tactical guide on AI onboarding. Show actual prompts, the SOP, the delegation framework in action. You up for the challenge, co-worker?

That is it. No step-by-step instructions. No format reminders. No checklist recitation. The agent already knows the system — the file structure, the frontmatter format, the style rules, the RSS update procedure, the category taxonomy. It was onboarded.

The output is not perfect every time. I still review. I still course-correct. But the delta between what it produces and what I would have produced is small enough that the review takes minutes, not hours. And the agent is doing the heavy lifting on execution while I stay in the architectural seat — deciding what to build, not grinding through how.


The Actual Prompts

People ask to see the prompts like they are magic spells. They are not. They are management. Here is a real prompt sequence from a real task — updating my RSS feeds after a batch of new posts:

The spec (from the SOP file):

For every new entry, add an item block to logs/feed.xml directly after the opening atom:link tag, newest items first. For entries that do not have the neuro category, also add the same item block to logs/feed-tech.xml. Update the lastBuildDate in each feed you modify.

The prompt:

I published three new posts today. Add all three to feed.xml and add the non-neuro ones to feed-tech.xml. Follow the RSS checklist in the instructions.

The review: I check that the XML is valid, the dates are formatted correctly, the items are in the right order, and both feeds got their lastBuildDate updated. That is it. Two minutes of review for what used to be fifteen minutes of manual XML editing.
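A spec that precise is mechanical enough to script outright. Here is a minimal Python sketch of the insert-after-atom:link step — the tag layout comes from the spec above, but the helper itself is illustrative, not part of my actual tooling:

```python
import re

def add_item(feed_xml: str, item_xml: str, last_build: str) -> str:
    """Insert a new <item> directly after the opening <atom:link .../> tag
    (newest first) and refresh <lastBuildDate>."""
    # Place the new item immediately after the atom:link element.
    feed_xml = re.sub(
        r"(<atom:link\b[^>]*/>)",
        lambda m: m.group(1) + "\n" + item_xml,
        feed_xml,
        count=1,
    )
    # Update lastBuildDate in place.
    return re.sub(
        r"<lastBuildDate>.*?</lastBuildDate>",
        f"<lastBuildDate>{last_build}</lastBuildDate>",
        feed_xml,
        count=1,
    )

feed = """<rss><channel>
<atom:link href="https://example.com/logs/feed.xml" rel="self"/>
<lastBuildDate>Tue, 17 Feb 2026 09:00:00 +0000</lastBuildDate>
<item><title>Old post</title></item>
</channel></rss>"""

item = "<item><title>New post</title></item>"
print(add_item(feed, item, "Wed, 18 Feb 2026 09:00:00 +0000"))
```

Whether a script or an agent executes it, the spec is the same — which is exactly why the agent gets it right.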

The prompt is boring. The SOP is boring. That is the point. Boring means repeatable. Repeatable means delegatable. Delegatable means you get your time back.


Where People Get This Wrong

Three failure modes I see constantly:

Failure 1: No SOP. They treat every interaction as a blank slate. No context file, no style guide, no checklist. Every prompt has to re-explain everything from scratch. This is like hiring a new employee every morning and wondering why nothing is consistent.

Failure 2: Skipping to "Your Job." They dump a vague objective with zero onboarding and expect autonomous execution. "Build me a website" with no style guide, no structure, no constraints. Then they blame the tool when the output is generic. You did not delegate. You abdicated.

Failure 3: Never graduating from "Our Job." They micromanage every output forever. Every prompt is a novel. Every review is a rewrite. They never trust the agent enough to let it own a task. This is better than the first two failures, but it caps the leverage at maybe 2x when it should be 10x.

The framework is sequential for a reason. My Job builds the foundation. Our Job builds the trust. Your Job unlocks the leverage. Skip a phase and the whole thing underperforms.


The Meta Layer

Here is the part that should make this post land differently than a generic AI tutorial.

This post was written using the exact system it describes. The prompt that kicked it off was a casual note — "another log idea from Gemini, a tactical guide on AI onboarding, you up for the challenge co-worker?" The agent read the SOP, understood the style, knew the file structure, checked the existing posts for context, and produced this.

I am reviewing it right now. I will make corrections. I will adjust the voice where it drifts. But the architecture, the structure, the research into my own previous posts to maintain continuity — that was autonomous. Because it was onboarded.

This is not theory about what AI employees could do. This is a log entry being written by one.


The Playbook, Distilled

  1. Write the SOP first. Before you prompt anything, document your standards, your process, and your taste. Put it in a file the agent can read. This is the single highest-leverage thing you can do.
  2. Onboard before you delegate. Spend the first sessions building context, not demanding output. Let the agent study your codebase, your style, your existing work.
  3. Iterate with explanation. When you correct, say why. "This is wrong" teaches nothing. "This is wrong because my audience expects X and this delivers Y" teaches everything.
  4. Graduate the handoff. Start with My Job. Move to Our Job. Earn Your Job. Do not skip phases.
  5. Review always, rewrite less. The goal is not zero oversight. The goal is oversight that takes five minutes instead of five hours. You are the architect, not the bricklayer.
The Protocol: The AI employee is not a magic prompt away from replacing your team. It is a management challenge disguised as a technology product. Write the SOP. Run the onboarding. Earn the handoff. The people who figure out the how will outrun the people still debating the whether.
End Log.