Context is Everything

Field Note #4

What you’ll learn:

  • The overnight contract review that changed how the business uses AI-powered Slack

  • Why every legal AI review you have read is probably missing the most important variables

  • The AI governance question I have not solved yet


The Visibility Challenge

When I set up #legal in Slack with Wordsmith, the technical part was easy. Changing my coworkers' habits took a little longer.

I wanted everything in the public channel for reasons that compound on each other.

  • Visibility: everyone sees what is being asked and can learn from the answers

  • Efficiency: Wordsmith auto-answers common questions without me needing to jump in

  • Reusability: public answers build the knowledge base and reduce repeat questions

  • Manageability: I cannot track requests scattered across DMs, threads, and emails

  • No repeats: I do not want to answer the same question or fact pattern twice. That is what AI is for.

If it is in the #legal channel, it is searchable and reusable.

Some people were already using it. The goal was getting everyone there consistently. That is a different problem than adoption from scratch. More like a nudge than a campaign. But it still required something more compelling than a reminder to use the channel.

What provided that nudge was a series of automated responses that met people's needs immediately. I set up the Wordsmith-powered channel before I had fully customized it. I wanted to get it in front of the business and start testing. Turns out that was the right call. Real examples of their pain points being solved. Legal no longer being the block or the bottleneck.

And then came Thursday.

I woke up to find the agent had reviewed an agreement overnight and flagged it as one-sided and well outside industry standards. The team had already seen the response. They were already thinking through the implications before I was even awake.

When I reviewed the agreement myself, I went back and forth. The contract was bad, clearly, but I was weighing the legal risk against the business opportunity. I did not want to kill a deal by pushing back too hard on terms that might be negotiable. I was second-guessing myself, wondering if I was being too conservative.

The business pushed back before I did. They had read what Wordsmith flagged and decided the risk was real. They were not willing to move forward without changes.

That is when I understood something about what a neutral tool actually does. It was not just that Wordsmith surfaced the analysis. It was that the analysis landed differently because it did not come from me. When legal says a contract is too risky, there is always a negotiation about how much of that is caution and how much is genuine exposure. When the system flags it independently, that conversation shifts. The information becomes easier to act on because it is not filtered through the dynamic of me trying to convince someone.

The channel filled up after that. People had watched the agent work without me: it flagged something real, in plain language, at a time when I was unavailable. That is not something I could have demonstrated in a training session.

Three weeks in, the questions are sharper and more specific. The team is suggesting workflows I have not built yet. That is the actual value of the public channel. Not just answers. Signals.

  • How Wordsmith in Slack actually works

    Wordsmith connects your legal knowledge base to Slack via an AI agent that responds to questions in any channel you configure.

    Setup: You build a repository in Wordsmith — contracts, policies, playbooks, FAQs, previous answers — and connect the agent to your Slack workspace. It watches the channels you point it at and responds to questions in thread, citing its sources.

    The part people don't talk about enough: Wordsmith can scrape your entire Slack channel history and surface previous answers for you to review and accept into the knowledge base. You are not starting from zero. You are curating what already happened.

    Where the knowledge lives: I have connected it directly to Google Drive and Notion. The agent draws from both. That is what makes it usable from day one rather than six months from now.

    What I am building toward: Routing DocuSign notification emails into the repository automatically, so the agent can answer contract status questions in Slack without anyone having to look it up. Should work better with our internal DocuSign. Third-party is messier. Still figuring that out.
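To make the curation loop above concrete, here is a toy sketch of the idea: scraped Q&A pairs from channel history sit in a review queue, a human accepts the good ones, and the agent answers new questions by retrieving the best-matching accepted entry. This is purely illustrative and not Wordsmith's implementation; the `KnowledgeBase` class and the crude keyword-overlap scoring are my own assumptions.

```python
# Illustrative sketch only -- not Wordsmith's actual implementation.
# Models the human-in-the-loop curation flow: scrape prior answers,
# accept the good ones, then retrieve by keyword overlap.

def _tokens(text: str) -> set[str]:
    """Lowercase word set for crude keyword-overlap matching."""
    return set(text.lower().split())

class KnowledgeBase:
    def __init__(self):
        self.candidates = []   # scraped (question, answer) pairs awaiting review
        self.accepted = []     # pairs a human has approved

    def scrape(self, history):
        """Pull prior Q&A pairs out of channel history for review."""
        self.candidates.extend(history)

    def accept(self, question):
        """Human-in-the-loop step: promote a reviewed candidate."""
        for pair in self.candidates:
            if pair[0] == question:
                self.accepted.append(pair)

    def answer(self, question):
        """Return the accepted answer with the most keyword overlap,
        or None, which here means: escalate to a human."""
        q = _tokens(question)
        best, score = None, 0
        for prev_q, prev_a in self.accepted:
            overlap = len(q & _tokens(prev_q))
            if overlap > score:
                best, score = prev_a, overlap
        return best

kb = KnowledgeBase()
kb.scrape([
    ("Can we sign an NDA with a 5 year term?", "Yes, up to 5 years is within policy."),
    ("Who approves vendor contracts over $50k?", "The GC must approve anything over $50k."),
])
kb.accept("Who approves vendor contracts over $50k?")
print(kb.answer("Who needs to approve a $75k vendor contract?"))
```

The point of the sketch is the shape of the workflow, not the matching logic: a real system would use semantic retrieval, but the accept-before-answer step is what keeps the knowledge base curated rather than scraped wholesale.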

Context Is Why Every Legal AI Review Is Missing the Point

Last week, I had Claude help me build a Legal AI Review Report website.

I built it because I kept hitting the same wall with legal AI reviews. They were written by someone, but you could not tell anything meaningful about who that someone was.

What is their tech stack? How big is their team? What problems were they actually trying to solve? What tools do they already have?

That last one matters more than most people admit. I already have Google Drive, Notion, and Slack. Wordsmith connects directly to all three. That means I am not starting a knowledge base from scratch. I am pointing the agent at infrastructure that already exists. For me, that changes the evaluation entirely.

For someone deep in a Microsoft stack, that calculus looks completely different. Copilot is already in their tenant. SharePoint is already where their knowledge lives. A review I write about Wordsmith may not be useful to them.

This is the problem with most legal AI coverage right now. The reviewer exists but is invisible. You are reading a conclusion without the context that produced it.

When I posted about testing tools recently, vendors jumped in immediately. Most did not ask what I needed first. They pitched contract review features. They offered Word plugin demos. They assumed the problem before checking. The good SaaS salespeople find your pain point before they start talking.

For me, a useful review has to address these points or it tells me nothing:

  • What is your tech stack

  • What tools do you already have and what do they connect to

  • What problem were you actually trying to solve

  • How big is your team and what is your budget

  • Do you prefer working in a web app or a plugin and why

What matters to you may not matter to me. How would I know that from reading legal AI tech reviews?

Where I Am Still Working It Out

One thing I have not resolved is AI governance.

We have Gemini available across the company. Half the company is on Claude. Engineering, marketing, even our CEO is building. Things are moving fast, which is exciting, and also raises a question I do not have an answer to yet.

Do we let everyone experiment and build independently? Or do we run a period of exploration and then consolidate into shared systems?

The reason I think about this is because most of us are working off the same knowledge bases and data and have access to the same AI tools. So how do we do this efficiently? If we are all working with the same data, we need compatible architecture at some point. Individual experiments that are not designed to connect will create the same fragmentation problem that AI is supposed to solve.

I think the governance decision is also a context decision. You cannot build the right architecture without a clear view of what problems you are actually solving and who is solving them. Which means you have to do the thinking before you start building. Most people start building and do the thinking later, if at all.

Takeaways

Before you evaluate a tool, define the problem.

Before you read a review, understand the reviewer.

Before you build a system, map the context it needs.

Everything else follows from that.


Weekend Outro

I was too old for One Direction, but Harry Styles has grown on me.


If you have questions or want to follow up on anything, drop a comment below or reply.

Want the Field Notes to hit your inbox each week? Subscribe here.

Want to dive deeper? Become a member of The Field Guide.


Next

Finance Got Lucky. Legal Didn't.