I should have asked AI to argue against me
Field Note #1 — Week 3 back as GC at Worksome
Three weeks back into my role as General Counsel and VP of Compliance at Worksome. I'm building an AI-first legal function, and I'm sharing what I'm learning as I go. This includes the wins, the mistakes, and the stuff nobody's talking about.
Here are three things on my mind this week:
1. Flip your prompting perspective
A few of us were working through a genuinely complex legal question this week — the kind that feels like a law school exam. We had ChatGPT, Gemini, Wordsmith, and Claude all open. Everyone prompting, all working from the same set of facts.
Here's what I noticed: I was framing everything from my legal risk perspective. And AI was giving me exactly that answer back.
When I flipped it and came at the same question from the business's perspective, asking AI to prove the business's case, an exception to the rule surfaced that hadn't come up in my prompting or analysis at all.
AI delivers what you're looking for. That's the thing. So if you only ask it to confirm your risk assessment, that's all you'll get.
Try coming at it opposite to what you think the outcome should be. Stress test it. Poke holes in that position. And then verify by going to direct sources — which is what we did.
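The flip is simple enough to make mechanical. Here's a minimal sketch of the idea: the same facts and question, framed twice with opposing instructions, so you can run both past whichever model you're using. The function name and prompt wording are illustrative, not a tool we actually use.

```python
def build_flipped_prompts(facts: str, question: str) -> dict[str, str]:
    """Frame the same facts and question from two opposing perspectives,
    so the model is pushed to argue both sides rather than confirm one."""
    return {
        "risk": (
            "You are counsel assessing legal risk.\n"
            f"Facts: {facts}\nQuestion: {question}\n"
            "Argue why this position is risky. State the governing rule."
        ),
        "business": (
            "You are advocating for the business.\n"
            f"Facts: {facts}\nQuestion: {question}\n"
            "Argue the business's case as strongly as you can, and surface "
            "any exceptions to the rule that support it."
        ),
    }
```

Send both prompts, then compare the answers against each other and against direct sources. The point isn't that either answer is right; it's that the disagreement between them shows you where to dig.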
2. Sometimes you don't want generative AI
At Worksome, I'm building out worker classification. We already had a deterministic tool built pre-AI, and I've been working through whether to replace it with gen AI.
The answer? No.
Gen AI needs verification. For classification, you need defensibility, control, and explainability. We're sticking with weighted questions: we control the logic, every outcome is explainable, and that makes for a much more defensible position.
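To make the contrast concrete, here's a toy sketch of what a weighted-question classifier looks like. The questions, weights, and threshold are invented for illustration — they are not Worksome's actual criteria — but the shape is the point: the same inputs always produce the same, explainable output.

```python
# Each entry: key -> (question text, weight toward "contractor").
# Questions, weights, and threshold are illustrative only.
QUESTIONS = {
    "sets_own_hours": ("Does the worker set their own hours?", 3),
    "uses_own_equipment": ("Does the worker supply their own equipment?", 2),
    "multiple_clients": ("Does the worker serve multiple clients?", 3),
    "can_subcontract": ("Can the worker subcontract the work?", 2),
}

CONTRACTOR_THRESHOLD = 6  # illustrative cut-off

def classify(answers: dict[str, bool]) -> tuple[str, int, list[str]]:
    """Score yes-answers by weight and return the label, the score, and
    the factors that drove it — every outcome is fully explainable."""
    factors = [key for key in QUESTIONS if answers.get(key)]
    score = sum(QUESTIONS[key][1] for key in factors)
    label = "contractor" if score >= CONTRACTOR_THRESHOLD else "employee"
    return label, score, factors
```

Run it twice on the same answers and you get the same result, with the exact factors listed — which is what you want when a regulator asks why the tool decided what it did. A gen AI model can't give you that guarantee.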
Here's the bigger point for legal leaders: understand how different types of AI work and when they should or shouldn't be applied.
Not everything needs gen AI. The question that should drive your decision is simple — is 80–90% accuracy good enough for what you're doing, or do you need 100%?
3. Querying our own codebase
One thing that's been really cool coming back is seeing how much AI has changed how our engineers work. We've now set it up so we can query our codebase and get information directly from our platform.
I can ask questions about our code and get answers — which is powerful on its own. But what I'm trying to do now is connect that to Wordsmith through MCP, so that when I'm making legal recommendations or risk assessments, I can pull actual product data to back them up.
We're not just doing this for legal — we're rolling it out across the company so people can make genuinely data-backed decisions. I'll share how this goes as it unfolds.