Claude Session Handoffs: How to Keep Context Across Conversations

TLDR: As of 2026, AI conversations have a hard length limit and a limited shelf life. When context fills up or you start a new session, everything the AI learned about your project may be gone. ⭐ A simple two-file system (a permanent reference doc and a living session log) plus a handoff prompt takes a few seconds at the end of a session and saves minutes of re-explanation (and potentially wasted context) at the start of the next one. ⭐ Here’s exactly how I do it, and why IMHO it beats compacting.

(If your context is getting full [70%+], then at the very least tell Claude: “Hey, we’re getting pretty full on context, please draft a handoff prompt so we can continue fresh.”)

If you’ve spent any real time using Claude, ChatGPT, Copilot, or any other AI assistant for ongoing projects, you’ve likely hit some walls. You had a great session yesterday. You figured out the right approach, tested a bunch of things, made decisions, documented what worked and what didn’t. Then today you open a new conversation and the AI has absolutely no idea what you’re talking about. 😜

This isn’t a bug. It’s how these systems work. Every conversation starts fresh. Even with memory features (which are genuinely useful for preferences and personal context), the deep project-specific knowledge from yesterday’s three-hour debugging session? Gone. Companies are obviously working on this, and pre-built solutions like Projects on Claude.ai can be helpful.

But a few simple behavioral changes by the user can be very helpful regardless of what else is going on. I’ve been iterating on this for a while now, and it’s become one of the most valuable parts of my AI workflow. It’s not complicated, other people do this too, and it doesn’t require any special tools. And it works whether you’re using Claude’s web interface, Claude Code, or honestly any AI assistant.

The Problem Is Real (and Getting Worse)

Here’s what’s a little ironic: as AI models get more capable and we trust them with bigger, more complex tasks, the context problem gets worse, not better. A quick “help me write this email” conversation doesn’t need continuity. ⚠ But a multi-session infrastructure project where you’re configuring services, debugging issues, making architectural decisions, and building on previous work? Losing context between sessions is brutal. (and can be downright dangerous from a security or maintainability standpoint)

I recently spent eight sessions getting a self-hosted Open WebUI + SearXNG stack working on a Proxmox server. Each session built directly on the last. Configuration values discovered in session 3 were still relevant in session 8. A workaround from session 5 informed a decision in session 7. Without some way to carry that knowledge forward, every session would have started with “okay, so here’s everything about my setup” followed by minutes of re-orientation before any actual work happened.

Claude does have a memory feature, and I use it heavily. I wrote about custom instructions and how they change the experience. But memory is best for broad patterns and preferences, things like “I prefer concise answers” or “I use Proxmox for virtualization.” It’s not designed to hold the detailed state of an active project: which config values live where, what you tested, what failed and why, what decisions are still open.

The Two-File System

After trying a few different approaches, I settled on something simple that just works for this kind of situation: two files per project.

CLAUDE.md is the permanent reference. Think of it as the project’s technical spec sheet. It holds architecture details, IP addresses, file paths, service configurations, command syntax that’s easy to forget, and anything else that describes what the system is. This file rarely changes. When it does, it’s because the system itself changed.

HANDOVER.md is the living session log. Every session gets a dated entry with what was done, what was tested, what worked, what didn’t, key learnings, and a current “pending / next steps” section. This file changes every session.

The separation matters. When Claude reads CLAUDE.md, it gets the lay of the land instantly. When it reads HANDOVER.md, it gets the narrative: where you’ve been, what you’ve tried, and where you’re headed. Together, they give a fresh session essentially the same understanding you had at the end of the last one.
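As a concrete sketch, here’s the kind of skeleton I mean for the reference file (the section names are just examples, not a required format):

```markdown
# CLAUDE.md — Project Reference

## Architecture
<!-- hosts, containers, how the services connect -->

## Addresses and paths
<!-- IPs, ports, config file locations -->

## Service configuration
<!-- where each setting actually lives -->

## Command syntax that's easy to forget
<!-- exact commands, flags, quirks -->
```

The point is that everything describing what the system *is* lives here, so a fresh session can absorb it in one read.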

I won’t pretend these two files are the only way to do this. There are people building dedicated tools for it, and some of the Claude Code community has developed elaborate handoff protocols. For my workflow, which tends toward lots of projects that wrap up quickly (we’re talking days, not weeks), two markdown files have been the right balance of effort and payoff.

The Handoff Prompt

The files alone aren’t enough. You also need a short prompt that tells the next session exactly what to read and where to pick up. At the end of a session, I ask Claude to write me a handoff prompt. It looks something like this:

Read ~/projects/my-project/HANDOVER.md and ~/projects/my-project/CLAUDE.md for full context.

Current state: Web search is working. 5/5 tests pass. All config is in the SQLite DB, not .env files.

Open decisions:

  1. Should we remove the unused Playwright installation to reclaim 3 GB of disk? Or save it for next phases?
  2. Should we eval new models or perfect the calling and prompting of the ones we have?

Pick any next task:

  • Publish the blog post draft
  • Test PDF upload and RAG query
  • Security audit

That’s maybe 15 lines. And Claude or Codex can write it for you if you ask for a full handoff prompt from this session to the next. It gives the new session a few-sentence summary of where things stand, calls out decisions that need human input, and offers a menu of what to work on next. The new session reads the docs, has full context, and starts working immediately.

For projects in Claude Code, this works even more naturally because Claude Code reads CLAUDE.md automatically from the project root. But even in the web interface, pasting a handoff prompt at the start of a conversation gets you 90% of the way there.

What Goes in the Session Log

I’ve found the most useful session entries include five things:

What changed. Not just “fixed the search” but the specific configuration changes, commands run, and values set. Future sessions need to know what’s actually in place, not just that something was done.

What was tested and how. If you ran five test queries and all passed in 13-19 seconds, write that down. It’s the baseline for knowing if something regresses later.

What didn’t work and why. This is probably the most valuable part. Without it, a future session will cheerfully suggest the exact approach you already tried and abandoned. “Tried Playwright for full-page loading. Added 30-60 seconds of latency per query. Bypassed in favor of SearXNG snippets.” That one sentence saves an entire debugging detour.

Key learnings. Things you discovered that aren’t obvious from the docs. “The database config always overrides .env values after first startup. Must modify SQLite directly for config changes.” This is the kind of thing that takes an hour to figure out and ten seconds to write down.

Open decisions with options. Not just “decide about the API key” but “the API key is admin-scoped. Options: rotate it, scope it to a non-admin user, or accept it for a private instance.” Spelling out the options means you (or the AI) can pick up that thread without reconstructing the context around it.
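Put together, a session entry built from those five pieces might look like this (the session number and date are placeholders, and the details are borrowed from the examples above, not a prescribed format):

```markdown
## Session 7 — 2026-01-15

**What changed:** Moved web search config out of .env and into the SQLite DB.
**What was tested and how:** 5/5 search queries pass, 13-19 seconds each.
**What didn't work and why:** Playwright full-page loading added 30-60 seconds
of latency per query. Bypassed in favor of SearXNG snippets.
**Key learnings:** DB config overrides .env values after first startup;
modify SQLite directly for config changes.
**Open decisions:** API key is admin-scoped. Options: rotate it, scope it
to a non-admin user, or accept it for a private instance.
**Pending / next steps:** Test PDF upload and RAG query; security audit.
```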

The Time Math

I want to be honest about the tradeoff here. Writing a good handoff can take a little time. Sometimes Claude writes most of it and I just review. Sometimes I ask for it when context is getting high; you can usually tell when responses start getting a little less sharp, or when you’ve been going for a while on a complex project. Ideally, wrap up the session before you pass 50-75% of the context window.

The payoff is that the next session starts productive almost immediately instead of spending the first few minutes on “okay so here’s the setup, and last time we…” That’s a good trade in my experience, especially if you’re doing this across multiple projects.

The other less obvious benefit: the session log becomes genuine project documentation. I’ve gone back to HANDOVER.md entries weeks later to remember why I made a particular decision. That’s useful even without the AI angle. Don’t forget lasting documentation! But that’s another blog post.

What This Doesn’t Solve

I should be clear about the limitations. This approach works well for ongoing projects with technical complexity. It’s overkill for quick one-off questions. It requires discipline to actually do the handoff at the end of a session. And it doesn’t replace good custom instructions, memory, or simply taking the time to make sure you’re making good choices and writing good code.

It also doesn’t solve the fundamental token limit issue within a single session. If you’re deep into a complex session and context gets full, you still need to wrap up and start fresh. The handoff just makes that transition painless instead of painful.

Getting Started

If you want to try this, start simple:

  1. Create a CLAUDE.md (or whatever you want to call it) for your project with the permanent technical details.
  2. Create a HANDOVER.md with a single session entry covering where things stand right now.
  3. At the end of your next session, ask the AI to update the handover doc and write you a handoff prompt.
  4. Start the next session by pasting that prompt.
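If you want to automate steps 1-3, a small script can stamp out the skeleton for you. This is a minimal sketch in Python; the file names and section headings are just the ones used in this post, so adjust them to taste:

```python
"""Scaffold the two-file handoff system for a project."""
from datetime import date
from pathlib import Path

CLAUDE_TEMPLATE = """\
# CLAUDE.md — Project Reference

## Architecture

## Addresses and paths

## Command syntax that's easy to forget
"""


def scaffold(project_dir: str) -> Path:
    """Create CLAUDE.md if missing and append a dated entry to HANDOVER.md."""
    root = Path(project_dir).expanduser()
    root.mkdir(parents=True, exist_ok=True)

    claude = root / "CLAUDE.md"
    if not claude.exists():
        claude.write_text(CLAUDE_TEMPLATE)

    handover = root / "HANDOVER.md"
    if not handover.exists():
        handover.write_text("# HANDOVER.md — Session Log\n")

    # Each session gets a fresh dated entry with the five sections
    # described above; fill them in at the end of the session.
    entry = (
        f"\n## Session — {date.today().isoformat()}\n"
        "- What changed:\n"
        "- What was tested and how:\n"
        "- What didn't work and why:\n"
        "- Key learnings:\n"
        "- Pending / next steps:\n"
    )
    with handover.open("a") as f:
        f.write(entry)
    return root
```

Run `scaffold("~/projects/my-project")` once to set up, and again at the start of each session to stamp a new entry skeleton to fill in.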

Hopefully you’ll feel the difference immediately. And you’ll probably start tweaking the format to fit your workflow, which is exactly what you should do. 💪👍


This post was drafted with the assistance of Claude and this is an actual workflow that I utilize.
