Related Reading: Legal AI and the Future of Startup Law
TL;DR: No matter how much it might horrify conservative lawyers, founders are going to use AI for some legal tasks, and it’s actually an upgrade from what they’ve been doing previously for low-stakes issues (just Googling and winging it). But there are a few things to keep in mind to gain efficiency while minimizing penny-wise and pound-foolish risks.
The rise of AI is unquestionably the most profound change in the legal industry of this decade, including for startup lawyers. Every single serious law firm (including Optimal) has implemented AI in their practices in a material way, and is testing other tools in beta/alpha for future use once they mature enough for safe integration.
Adjacent to the issue of lawyers using AI is the riskier proposition of clients (including founders) using it on their own, completely bypassing legal professionals. Cynics might expect all lawyers to handwave away clients leaning on AI, either because they feel threatened by the competition or (again, cynically) because they want to hoard as many billable hours as possible. But the cynicism is overdone. Legal is a big industry with a variety of personalities, and there are a lot of lawyers who are very much not luddites.
Young first-time founders have always struggled with soberly understanding why legal works the way it does, because the legal context of permanent contracts, high-stakes negotiations, and irreversible mistakes is so different from the iterative tech and biz environment in which founder psychology thrives. The subtext of “move fast and break things” is that it’s acceptable because when things break they can, in fact, be fixed; especially in software. The often misunderstood subtext in legal is that, actually, they can’t be fixed; so maybe you really should slow down a bit.
The short story of my perspective is that expecting founders to never use AI for legal is spectacularly naive. Is it risky? Yes. But risk tolerance is literally what makes founders founders. If anyone in business is going to lean on AI somewhere for legal, it’s entrepreneurs.
It’s naive because the reality is that, even pre-AI, many founders were already doing plenty of legal-ish things away from lawyers. Lawyers – even at lean boutiques, forget BigLaw – are not cheap. Talent has to get paid.
Even well-funded young startups have to pick their battles as to when to lean on professional legal advice, and when to simply wing it, because approaching every decision with the same level of risk intolerance of a Fortune 500 company is simply not feasible. There isn’t a budget for it.
So the sober way to assess the prospects of founders using AI for legal isn’t to judge it against a non-existent parallel universe in which every decision gets air-tight elite counsel, but against founders doing DIY legal based on personal instinct or a simple Google search alone; because that’s what’s already been happening. Viewed that way, “GPT, Esq.” is a massive upgrade.
Is using GPT Pro or Gemini Deep Think better than simply using Google? Obviously. It’s also better, speaking candidly, than the advice you’re likely to get from all the so-called “low cost” lawyers out there with minimal real-world experience in startups and venture capital.
I say this often: law is a lot like healthcare. There are specialties and niche subspecialties. Serious “Startup Law” (often called ECVC for “Emerging Companies and Venture Capital”) is a niche subspecialty of corporate law. There are hundreds of “corporate lawyers” but far fewer corporate lawyers truly specialized in ECVC, with all the knowledge of market norms and contextual nuances that entails.
Asking a non-ECVC lawyer – like a small business lawyer, or a generalist who dabbles in real estate, estate planning, and god knows what else – for advice on your VC-backed startup is a lot like asking a dermatologist (and a low-end one) for advice on a neurological issue. You are almost certainly going to get better answers from GPT Pro.
Founders are going to use AI for some legal issues, and that’s probably a good thing. However, there are a few things I’d suggest for doing it intelligently: moderating some of the risk, having a clear understanding of AI’s most likely failure modes, and knowing when it will be high-ROI to loop in a lawyer even at the earliest stages.
First, use Clerky or Stripe Atlas for formation, and don’t assume that investor-preferred “standards” are non-negotiable.
These are extremely well-tested automation tools that are far cheaper than anything a law firm will produce, relying on well-respected templates produced by specialist lawyers. AI tools like GPT and Gemini are pulling from the entirety of the internet when they generate an output, but 99.999% of the data they’ve been trained on is irrelevant at best and simply wrong for your context at worst. For a couple hundred dollars you can avoid almost all the worst mistakes founders make in a DIY formation by simply leaning on Clerky or Stripe Atlas.
Even better, use them with an ECVC lawyer. Most of our own clients are incorporated on one of these tools, and we are simply in the loop to ensure no contextual nuances “bust” the standardized terms they use.
For early-stage fundraising docs, like Post-Money SAFEs, don’t assume they aren’t negotiable. Many VCs want you to think that, but it’s not true.
AI, just like prior legal automation tools, fails on unique context.
Context is one of the main reasons to keep a specialized human in the loop on legal, and one with a long-term relationship with the core team. Aside from check-the-box “compliance” sorts of issues, most of the legal issues startups face do not have a single correct answer.
What industry are you in?
Who are the founders?
Who were their prior, or who are their present, employers?
Where do they live?
Where is the company headquartered?
What’s the relationship of the founders to each other?
What are their growth and exit goals?
Who are their investors or likely investors, and how are their expectations in tension with the founders’?
What’s the distribution of leverage among the various relevant parties?
What’s the broader economic / market environment they are navigating?
To an AI tool, you are simply a user among millions. But to a lawyer with whom you have a long-term relationship, you are a specific company with very specific contextual needs, and that heavily plays into legal answers.
This is actually why, in the long run, healthcare will be far more shaped by AI than law will be. There is a far higher level of subjectivity to desired outcomes in law than in medicine. Your goal in healthcare is to eliminate the disease, or to identify whether or not (a binary) you have a condition. Goals in high-stakes legal are far less straightforward, and therefore far less addressable by algorithms alone. Personalities, relationship dynamics, and business contexts vary a lot more than biology.
Founders can reduce some of the risk of AI by incorporating more context into their prompts. Of course, this relies on founders actually knowing what context is relevant, and they’re often going to be wrong. Any generalist DIY tool has a failure mode that follows from the user, in this case founders, simply not knowing what they don’t know. The AI doesn’t know what you don’t know either.
AI is very helpful for pre-gaming discussions with, and messages to, your lawyers.
A lot of the back-and-forth between clients and lawyers, which costs money, isn’t about devising a contextual strategy or even answering a legal question; it’s simply about educating the client on certain concepts they need to learn before an answer can even be derived. AI can be extremely useful for getting that educational process out of the way.
So you might prompt the AI with something like “I’m going to ask my corporate lawyer about [X], but what are some concepts I should understand ahead of time to make our discussion as efficient as possible? What questions should I ask them?”
You can even do this with contract review. I’ve seen clients e-mail us a document and say “I ran this through GPT and here were some suggestions, just mentioning to you in case helpful. And by the way, I really don’t care about [X, Y, and Z.]”
The theme here is that AI can be fantastic for the objective parts of a legal matter, like making sure you understand specific concepts, and that can really cut down on communication and resolution time with lawyers, whom you can then lean on for the more subjective or contextual parts of the project.
AI can turn a 30-minute call with your lawyer into a 10-minute one, with zero loss in output. That saves money, and many lawyers (myself included) love it when calls finish early.
AI (alone) is the most dangerous for permanent high-stakes relationships, particularly with key employees, commercial partners, and investors.
Related to the above point that context heavily influences a lot of legal issues, anything involving high-stakes negotiation – of a key new hire, a new investment or other commercial relationship – is going to be extremely dangerous to lean on AI (alone) for. The AI will not have an appropriate understanding of the negotiation context, including the leverage your counterparty has versus yours, and what the range of feasible outcomes is.
See Negotiation is Relationship Building for a deeper dive on all the subtle power and psychological games that experienced players (like VCs) – who virtually always have more experience than first-time founders – can play to sway a negotiation. Whatever output an AI tool might generate is going to be too complex for founders to actually understand and utilize on their own. It’s also unlikely to encompass the true range of options, because it was trained only on publicly available data, and it’s not as if a public article has been written on every single negotiation tactic or legal nuance.
Once again, AI can be helpful for educating founders on relevant concepts without their lawyers’ timer being turned on, just like AI can be helpful to educate a medical patient before going into a consult with a physician. But it’s not going to make you as knowledgeable as an elite professional. You don’t have the time.
For the love of all things holy, do not use AI (alone) to negotiate your equity round term sheet.
Don’t assume using lawyers will always be unacceptably expensive.
Using a specialized boutique firm instead of BigLaw typically cuts legal bills (and hourly rates) in half with zero drop in quality – and sometimes improves quality, because you’re working with more senior people. So don’t attach yourself to an unnecessarily expensive firm (when leaner, high-quality options are available) whose bills push you to overuse AI.
ECVC lawyers also typically have precedent and templates you can lean on that do not require a ton of their time to generate. Before assuming it’s going to cost hours of time for a lawyer to prep a document for you, ask if they have a template to start with. That template, if it exists, will unquestionably be more useful and less risky than anything an AI tool will generate.
If not already obvious (I hope it is), pay up for the “Pro” models of ChatGPT or Gemini.
Among people benchmarking the models for legal tasks, the most advanced (and expensive) models have clearly been shown to be the least hallucinatory. Pay the $200 per month. This (legal) is not a game. If you are bypassing lawyers to lean on AI, which is already risky, do not be pound-foolish over what is still peanuts (a couple hundred dollars) in the grand scheme of things.
Will the above completely eliminate the risks of leaning on AI for legal as a founder? Of course not. But it will certainly reduce them.
For a separate discussion of how AI is not likely to change Startup Law, though there will be plenty of shysters who pretend otherwise, see Legal AI and the Future of Startup Law. The notion of “AI First” law firms will work in very discrete, compartmentalized areas, like high-volume low-stakes contracts for larger enterprises, but it will crash and burn in the kinds of high-stakes long-term representation that serious startup lawyers do.
A good metaphor is that “AI First X-ray review” (a narrow productized service) has serious legs. But an “AI First hospital” is preposterous, at least with the technology emerging in the next decade. The margins required by investors simply will not be there without VCs playing background games to cut down quality via de-skilling (eliminating seasoned senior professionals with deep contextual knowledge), while hiding it from naive clients. That’s what happened with Atrium a few years ago.
Elite boutiques have already brought dramatic efficiency (lower rates via lower overhead) to high-end law, while maintaining flexibility and Partner-level oversight. The same Partner who would be $1300/hr in BigLaw will be $650 at a boutique, without making less money. That is a big drop. Those lean boutique firms, along with BigLaw (higher rates for higher scale and ultra high-stakes), are themselves rapidly incorporating AI into their practices right now.
You’re going to somehow build “AI First” direct competitors that are cheaper, are not malpractice nightmares, and have the kind of far-larger profit margins (~3x those of professional services) required for VC returns? I hope you have Nobel-prize-winning, bleeding-edge tech. Good luck and God bless.