Startup Founders Using ChatGPT and Gemini for Legal

Related Reading: Legal AI and the Future of Startup Law

TL;DR: No matter how much it might horrify conservative lawyers, founders are going to use AI for some legal tasks, and it’s actually an upgrade from what they’ve been doing previously for low-stakes issues (just Googling and winging it). But there are a few things to keep in mind to gain efficiency while minimizing penny-wise and pound-foolish risks.

The rise of AI is unquestionably the most profound change in the legal industry of this decade, including for startup lawyers. Every single serious law firm (including Optimal) has implemented AI in their practices in a material way, and is testing other tools in beta/alpha for future use once they mature enough for safe integration.

Adjacent to the issue of lawyers using AI is the riskier proposition of clients (including founders) using it on their own, completely bypassing legal professionals. Cynics might expect all lawyers to handwave away clients leaning on AI, expecting them to feel threatened by competition or (again, cynically) to hoard as many billable hours as possible. But the cynicism is overdone. Legal is a big industry with a variety of personalities, and there are a lot of lawyers who are very much not luddites.

Young first-time founders have always struggled with soberly understanding why legal works the way it does, because the legal context of permanent contracts, high-stakes negotiations, and irreversible mistakes is so different from the iterative tech and biz environment in which founder psychology thrives. The subtext of “move fast and break things” is that it’s acceptable because when things break they can, in fact, be fixed; especially in software. The often misunderstood subtext in legal is that, actually, they can’t be fixed; so maybe you really should slow down a bit.

The short story of my perspective is that expecting founders to never use AI for legal is spectacularly naive. Is it risky? Yes. But risk tolerance is literally what makes founders founders. If anyone in business is going to lean on AI somewhere for legal, it’s entrepreneurs.

It’s naive because the reality is, pre-AI, many founders have already been doing plenty of legal-ish things away from lawyers. Lawyers – even at lean boutiques, forget BigLaw – are not cheap. Talent has to get paid.

Even well-funded young startups have to pick their battles as to when to lean on professional legal advice, and when to simply wing it, because approaching every decision with the same level of risk intolerance of a Fortune 500 company is simply not feasible. There isn’t a budget for it.

So the sober way to assess the prospects of founders using AI for legal isn’t to judge it against a non-existent parallel universe in which every decision has air-tight elite counsel, but against founders doing DIY legal based on personal instincts or a simple Google search alone; because that’s what’s already been happening. Viewed that way, “GPT, esq.” is a massive upgrade.

Is using GPT Pro or Gemini Deep Think better than simply using Google? Obviously. It’s also better, speaking candidly, than the advice you’re likely to get from all the so-called “low cost” lawyers out there with minimal real-world experience in startups and venture capital.

I say this often: law is a lot like healthcare. There are specialties and niche subspecialties. Serious “Startup Law” (often called ECVC for “Emerging Companies and Venture Capital”) is a niche subspecialty of corporate law. There are hundreds of “corporate lawyers” but far fewer corporate lawyers truly specialized in ECVC, with all the knowledge of market norms and contextual nuances that entails.

Asking a non-ECVC lawyer – like a small business lawyer, or a generalist who dabbles in real estate, estate planning, and god knows what else – for advice on your VC-backed startup is a lot like asking a dermatologist (and a low-end one) for advice on a neurological issue. You are almost certainly going to get better answers from GPT Pro.

Founders are going to use AI for some legal issues, and that’s probably a good thing. However, there are a few things I’d suggest for doing it intelligently; moderating some of the risk, having a clear understanding of AI’s most likely failure modes, and knowing when it will be high ROI to loop in a lawyer even at the earliest stages.

First, use Clerky or Stripe Atlas for formation, and don’t assume that investor-preferred “standards” are non-negotiable.

These are extremely well-tested automation tools that are far cheaper than anything a law firm will produce, relying on well-respected templates produced by specialist lawyers. AI tools like GPT and Gemini are pulling from the entirety of the internet when they generate an output, but 99.999% of the data they’ve been trained on is irrelevant at best and simply wrong for your context at worst. For a couple hundred dollars you can avoid almost all the worst mistakes founders make in a DIY formation by simply leaning on Clerky or Stripe Atlas.

Even better, use them with an ECVC lawyer. Most of our own clients are incorporated on one of these tools, and we are simply in the loop to ensure no contextual nuances “bust” the standardized terms they use.

For early-stage fundraising docs, like Post-Money SAFEs, don’t assume they aren’t negotiable. Many VCs want you to think that, but it’s not true.

AI, just like prior legal automation tools, fails on unique context.

Context is one of the main reasons to keep a specialized human in the loop on legal, and one with a long-term relationship with the core team. Aside from check-the-box “compliance” sorts of issues, most of the legal issues startups face do not have a single correct answer.

What industry are you in?
Who are the founders?
Who were their prior, or who are their present, employers?
Where do they live?
Where is the company headquartered?
What’s the relationship of the founders to each other?
What are their growth and exit goals?
Who are their investors or likely investors, and how are their expectations in tension with the founders’?
What’s the distribution of leverage among the various relevant parties?
What’s the broader economic / market environment they are navigating?

To an AI tool, you are simply a user among millions. But to a lawyer with whom you have a long-term relationship, you are a specific company with very specific contextual needs, and that heavily plays into legal answers.

This is actually why, in the long run, healthcare will be far more shaped by AI than law will be. There is a far higher level of subjectivity to desired outcomes in law than in medicine. Your goal in healthcare is usually clear: eliminate the disease, or determine whether or not (a binary) you have a condition. Goals in high-stakes legal are far less straightforward, and therefore far less addressable by algorithms alone. Personalities, relationship dynamics, and business contexts vary a lot more than biology.

Founders can reduce some of the risk of AI by incorporating more context into their prompts. Of course, this relies on founders actually knowing what context is relevant, and they’re often going to be wrong. Any generalist DIY tool has a failure mode that follows from the user, in this case founders, simply not knowing what they don’t know. The AI doesn’t know what you don’t know either.

AI is very helpful for pre-gaming discussions with, and messages to, your lawyers.

A lot of back-and-forth between clients and lawyers, which costs money, isn’t itself about devising a contextual strategy or even answering a legal question for the client but simply educating the client on certain concepts that they need to learn before an answer can even be derived. AI can be extremely useful to get that educational process out of the way.

So you might prompt the AI with something like “I’m going to ask my corporate lawyer about [X], but what are some concepts I should understand ahead of time to make our discussion as efficient as possible? What questions should I ask them?”

You can even do this with contract review. I’ve seen clients e-mail us a document and say “I ran this through GPT and here were some suggestions, just mentioning to you in case helpful. And by the way, I really don’t care about [X, Y, and Z].”

The theme here is AI can be fantastic for the objective parts of a legal matter, like making sure you understand specific concepts, and that can really cut down on communication and resolution time with lawyers, whom you can then lean on for the more subjective or contextual parts of the project.

AI can turn a 30-minute call with your lawyer into a 10-minute one, with zero loss in output. That saves money, and many lawyers (myself included) love it when calls finish early.

AI (alone) is the most dangerous for permanent high-stakes relationships, particularly with key employees, commercial partners, and investors.

Related to the above point that context heavily influences a lot of legal issues, anything involving high-stakes negotiation – of a key new hire, a new investment or other commercial relationship – is going to be extremely dangerous to lean on AI (alone) for. The AI will not have an appropriate understanding of the negotiation context, including the leverage your counterparty has versus yours, and what the range of feasible outcomes is.

See Negotiation is Relationship Building for a deeper dive on all the subtle power and psychological games that experienced players (like VCs) – who virtually always have more experience than first-time founders – can play to sway a negotiation. Whatever output an AI tool might generate is going to be too complex for founders to actually understand and utilize on their own. It’s also unlikely to encompass the true range of options, because the AI was trained only on publicly available data, and it’s not like a public article has been written on every single negotiation tactic or legal nuance.

Once again, AI can be helpful for educating founders on relevant concepts without their lawyers’ timer being turned on, just like AI can be helpful to educate a medical patient before going into a consult with a physician. But it’s not going to make you as knowledgeable as an elite professional. You don’t have the time.

For the love of all things holy, do not use AI (alone) to negotiate your equity round term sheet.

Don’t assume using lawyers will always be unacceptably expensive.

Using a specialized boutique firm instead of BigLaw typically cuts legal bills (and hourly rates) in half with zero drop in quality, and sometimes improves quality because you’re working with more senior people. So don’t attach yourself to an unnecessarily expensive firm (when leaner, high-quality options are available) whose bills push you into overusing AI.

ECVC lawyers also typically have precedent and templates you can lean on that do not require a ton of their time to generate. Before assuming it’s going to cost hours of time for a lawyer to prep a document for you, ask if they have a template to start with. That template, if it exists, will unquestionably be more useful and less risky than anything an AI tool will generate.

If not already obvious (I hope it is), pay up for the “Pro” models of ChatGPT or Gemini.

Among people who are benchmarking the models for legal tasks, the most advanced (and expensive) models have been clearly shown to be the least hallucinatory. Pay the $200 per month. This (legal) is not a game. If you are bypassing lawyers to lean on AI, which is risky, do not be so pound foolish on what is still peanuts (a couple hundred dollars) in the grand scheme of things.

Will the above completely eliminate the risks of leaning on AI for legal as a founder? Of course not. But it will certainly reduce them.

For a separate discussion on how AI is not likely to change Startup Law, though there will be plenty of shysters who pretend otherwise, see Legal AI and the Future of Startup Law. The “AI First” law firm model will work in very discrete, compartmentalized areas, like high-volume low-stakes contracts for larger enterprises, but it will crash and burn in the kinds of high-stakes long-term representation that serious startup lawyers do.

A good metaphor is that “AI First X-ray review” (a narrow productized service) has serious legs. But an “AI First hospital” is preposterous, at least with the technology emerging in the next decade. The margins required by investors simply will not be there without VCs playing background games to cut down quality via de-skilling (eliminating seasoned senior professionals with deep contextual knowledge), while hiding it from naive clients. That’s what happened with Atrium a few years ago.

Elite boutiques have already brought dramatic efficiency (lower rates via lower overhead) to high-end law, while maintaining flexibility and Partner-level oversight. The same Partner who would be $1300/hr in BigLaw will be $650 at a boutique, without making less money. That is a big drop. Those lean boutique firms, along with BigLaw (higher rates for higher scale and ultra high-stakes), are themselves rapidly incorporating AI into their practices right now.

You’re going to somehow build “AI First” direct competitors that are cheaper, not malpractice nightmares, and have the kinds of far-larger profit margins (~3x of professional services) required for VC returns? Hope you have Nobel prize winning bleeding-edge tech. Good luck and God bless.

Seed-Stage Startups Should Shrink Their Option Pools

You only get 100% of your cap table to give away (or keep), and the sad fact is founders make all sorts of tactical errors that needlessly give up points to investors and other parties. Sometimes those errors are driven by bad advice offered by misaligned participants in the ecosystem.

One example I’ve written extensively about is the aggressive anti-dilution mechanism built into YC’s default Post-Money SAFE Template. YC portrays its template as a wonderful legal fees-saving “standard” for founders, while staying quiet about its extremely harsh economics that amplify founder dilution. YC is, at the end of the day, a VC that benefits from making founders dilute more. So be skeptical about using their templates without any modification.

The reality is SAFEs are tweaked/modified all the time, and it costs essentially nothing in legal fees to do so. In that above-linked post I offer a very simple – just a few sentences – tweak to eliminate this issue, while preserving the post-money valuation mechanism that provides transparency on how much of the cap table a SAFE is purchasing.

Another issue I wrote about over ten years ago is how founders needlessly reserve too large of an option pool at formation. They’ll just pick a number, like 20% or 10%, and reserve that amount, regardless of what they actually intend to use. They think this costs them nothing, but it’s just not true.

First, most employee new hire equity grants are made based on a % of the fully-diluted capitalization. When you offer them 2% or 3%, the denominator of that percentage includes the reserved but unused pool. It’s simple math that if you reserved too large of a pool, you are needlessly giving them more of the cap table than you otherwise would have. If you had reserved a smaller pool up-front, the 2% or 3% would be of a smaller pie, and then in expanding the pool later (which you can always do), the employee dilutes alongside everyone else.
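To make the arithmetic concrete, here is a minimal sketch of the effect. All share counts, percentages, and the helper function are hypothetical illustrations, not anyone’s actual cap table:

```python
def grant_shares(grant_pct: float, outstanding: int, reserved_pool: int) -> float:
    """New-hire grants are typically quoted as a percentage of the
    fully-diluted capitalization, which includes the reserved-but-unused pool."""
    fully_diluted = outstanding + reserved_pool
    return grant_pct * fully_diluted

# Hypothetical company with 8,000,000 founder shares outstanding.
# Same 2% offer to a new hire, two different reserved pool sizes:
big_pool = grant_shares(0.02, 8_000_000, 2_000_000)  # oversized pool reserved "just in case"
lean_pool = grant_shares(0.02, 8_000_000, 500_000)   # pool sized to actual near-term needs

print(big_pool, lean_pool)  # the 2% offer is 200,000 shares vs. 170,000 shares
```

In this sketch the oversized pool hands the same “2%” hire an extra 30,000 shares for no reason; with the lean pool, a later pool expansion dilutes that hire proportionally along with everyone else.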

Second, reserving too large of a pool makes it easier for VCs to argue for a needlessly large pool in your first equity round. As I wrote before:

The pool you reserve before your first VC financing will set the baseline for negotiating how much of an option pool “top up” VCs make founders absorb.

If your pool is at 5% going into a funding round and your VCs are negotiating for a 10% or 15% pool post-closing, it’s going to show up as a very large increase. The optics of that increase will help you in negotiation. But if you start with a 10% or 15% pool that you didn’t even need, the increase will look much smaller, which means you basically made the VC’s job easier for zero benefit to yourself.

The above two issues are not new in my writings. Stop reserving too large of a pool at formation, because it ends up giving too much equity to employee/consultant/advisor hires via equity grant calculations, and to VCs via equity round negotiations.

A somewhat newer issue that I want to emphasize here: Post-Money SAFEs make it even more costly to have an artificially large pool, given how their conversion math works. Shrink your pool to as small as possible before your SAFEs convert.

The definition of “Company Capitalization” in the Post-Money SAFE (which is the denominator for purposes of SAFE conversion) includes the pool existing before the equity round, but excludes the pool increase negotiated with your new lead VC(s).

Thus by carrying a pointlessly large pool into SAFE conversion, you are just handing money to the SAFE holders. Shrink the pool before SAFE conversion to exactly what you need, and the full pool increase of the equity round will NOT drop the SAFEs’ conversion price.

I’m not going to show specific examples of the math here. You can use the Open Startup Model (free) if you don’t have your own Excel model. Suffice it to say, based on a few examples I’ve modeled out, you can reduce the amount of dilution your SAFE holders cause, in most scenarios, by about 10% or more. Free money.
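For readers who want to sanity-check the mechanism themselves, here is a simplified sketch of the conversion arithmetic. The numbers are hypothetical, and the function compresses the actual SAFE document’s definitions down to the single effect discussed here:

```python
def safe_conversion_shares(purchase: float, post_money_cap: float,
                           company_capitalization: int) -> float:
    """Simplified Post-Money SAFE mechanics: the conversion price is the
    Post-Money Valuation Cap divided by 'Company Capitalization', which
    includes the pre-existing option pool but excludes the pool increase
    adopted in connection with the equity round."""
    conversion_price = post_money_cap / company_capitalization
    return purchase / conversion_price

# Hypothetical: a $1M SAFE at a $10M post-money cap; 9,000,000 founder shares.
bloated = safe_conversion_shares(1_000_000, 10_000_000, 9_000_000 + 1_000_000)  # 10% unused pool
lean = safe_conversion_shares(1_000_000, 10_000_000, 9_000_000 + 100_000)       # pool shrunk first
```

In this sketch the SAFE converts into 910,000 shares instead of 1,000,000 (roughly 9% fewer), and the equity round’s pool increase to the negotiated target then dilutes the SAFE holders alongside everyone else instead of being pre-absorbed by the founders.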

So the costs of having a pointlessly large equity pool before an equity round continue to mount:

  1. It means you’re giving too much equity to new hires.
  2. It means you’re making the job of your VCs in your equity round easier by front-loading an option pool increase they would otherwise need to argue for themselves.
  3. It means your SAFE holders are getting more shares from their SAFE conversion than is actually necessary.

Stop. Reserving. Stupidly. Large. Option. Pools. The emergence of AI probably means hiring needs, and associated equity pool needs, are going to shrink anyway.

At formation, reserve only what you think you will need for the next 6 months or so. And before you start negotiating an equity round, shrink your pool to cover only what has actually been used. This will save you multiple percentage points on your cap table that could be worth millions in the long-run. Again, free money. Take it.

Legal AI and the Future of Startup Law

It’s been a while since I’ve written on legal tech and startup law, and given recent developments in AI, it feels like an update is in order. For context, I’m a Partner and legal CTO at Optimal, an elite boutique law firm focused on ECVC, M&A, and Tech Transactions for VC-backed startups. We’re about 20 lawyers, company-side focused, and negotiate across from all the top VCs (and lesser ones too) and the usual suspects of Bay Area and NYC-based BigLaw.

Over the past two years I’ve reviewed and/or tested a tremendous number of new AI-based legal tech products that have hit, or will soon hit, the market. The notion that the new generation of LLMs will make a material impact on the legal industry is accurate. The capabilities being released go well beyond the typical automation tools that law firms have been integrating over the past decade or so. Lawyers of all stripes are going to get a lot more productive.

However, what’s also become clear is that (predictably) shysters are coming out of the woodwork, exaggerating where this new tech is going and what can be done with it. For just one example of that, see this X thread. Given what I’ve observed over the years about the entrepreneurial ecosystem’s very short-term memory, I am not surprised at all that, thanks to AI, we are going to see the second, third, fourth, and so on attempts at building the same flawed and untenable business model: some supposed new kind of law firm or legal service provider so extensively infused (somehow) with super-advanced technology that it dominates and transforms the industry.

Alas, it’s just not meant to be, even if many of us tech-forward lawyers eagerly await every new tool that makes legal practice faster, smoother, more productive, etc. For those who don’t remember, Atrium was the most visible failed attempt at building the tech-driven startup law firm “of the future.” There are a lot of views out there for explaining why Atrium failed, some (in my opinion) more honest than others. I’ll summarize my views here:

A. It was controlled by a founder (Justin Kan) who, despite being extremely successful and brilliant in his own way, not only didn’t understand the real drivers of the elite legal industry – on the supply or demand side – but had no real interest in learning them. He assumed that his personal brand had enough gravitational pull to cover up impossible economics and a weak value proposition dependent on exaggerated technology capabilities.

B. Kan also assumed that his connections with Y Combinator, which funneled both cash investment and portfolio companies to Atrium, amplified his pull even further.

C. Not discussed nearly enough: Atrium’s organizational structure hid massively problematic ethical conflicts of interest that were a ticking time bomb. Company counsel, which is what Atrium purported to be, is supposed to help startups negotiate against VCs, serving as an equalizer for entrepreneurs and other common stockholders negotiating super high-stakes contracts and board decisions with financially misaligned elite counterparties. Yet Atrium was funded by the VC community, had VCs on its Board, and used all of those connections to funnel portfolio companies of its investors (including YC) into its client base.

In short: over-confidence and naivete, vaporware technology, and a go-to-market strategy dependent on pretending that professional rules around conflicts of interest (which exist to protect clients) could just be hand-waved out of existence.

I could go on and on about how doomed Atrium was from the start, including how it depended on inexperienced de-skilled (read: no real partner oversight) young lawyers dreaming of VC-like payouts, and how its fixed-fee pricing model itself incentivized rushed work and not properly serving clients. But this post isn’t about Atrium; it’s about the people who are going to be using AI vaporware to try to resurrect it.

The narrative emerging today is something like “Atrium was just too early. Now, with AI, is the right time.” I’m sorry, but it’s really not. Even if the new generation of legal AI is more powerful than what Atrium was building – and it is – nothing coming down the pike of legal AI with the current generation of algorithms is going to be so transformational as to overcome all of the other flaws of the business concept.

At the low end of law, I can certainly see new legal AI creating something like a more dynamic version of LegalZoom, backed by highly de-skilled humans shuffling paper around in the background. But this wouldn’t be that transformational, because legal automation has already been eating up the bottom two quartiles of the legal industry doing work like small business law, simple divorces, estate planning, low-stakes dispute resolution, etc. It’s why the majority of law graduates today, even many from decent schools, can barely earn enough to pay their student loans. Sidenote: I think about half of law schools should just be shut down if they can’t find a way to operate at half the cost or less.

Startups even have their own LegalZoom: Clerky for the very earliest pre-seed stages, when everything can most easily be cookie-cutter. It works great in many (not all) very early-stage contexts, and many lawyers, myself included, integrate with it.

But unlike automation tools, we (VC-backed startup lawyers) play at the elite end of law, where the stakes are much higher, and the context on the ground is far more variable and complex.

The new LLM-based generation of legal AI tools is going to make elite lawyers much more productive. We already see it happening within our own firm, and it’s influencing hiring decisions, particularly on the junior side. These tools will make drafting, document review, research, and other lawyer work meaningfully more productive, probably to the point of shrinking the footprint of elite firms and concentrating earnings further toward the top, as the real “mandarins” of elite law won’t need nearly as much on-the-ground junior labor to serve clients.

But the notion that this new technology eliminates the need for those legal mandarins – the people who not only have the years of technical training, but also the personal understanding of the client and the mix of IQ/EQ to apply legal + strategic insight to unique dynamic human contexts – is preposterous.

There is simply no way to use AI (with presently attainable capabilities) to de-skill this top end of the industry such that a new organizational structure full of lower-paid “legal technicians” can actually deliver what clients want, at a quality level that doesn’t touch malpractice. This generation of AI will, as it plays out, be the equivalent of armies of tireless and supernaturally fast paralegals and junior lawyers, at a tiny fraction of what the human equivalent would cost.

Super valuable. But as anyone who has actually worked in legal (or the military) knows, even the largest and fastest infantry can be useless (even dangerous) without sufficiently smart hands-on strategic leadership. Interesting theoretical discussions on new AI algorithms point out that even if AI isn’t really “reasoning” in an abstract sense (it’s not), many lower-end white-collar workers aren’t either. I actually agree with that, even if some find it insulting (sorry).

But the elite lawyers in high-end law firms? They’re being paid to actually reason in complex high-stakes ways that no present AI breakthroughs anywhere on the horizon, in university or corporate research labs and certainly not in the market, can supplant. That being said, their work is also embedded in workflows that include numerous mundane (boring) tasks they’d gladly outsource to a diligent and reliable tool. This is why literally every single elite law firm is working on integrating AI right now.

Hardly luddites, they understand this tech is going to make their partnerships much more profitable, while improving efficiency for clients. It’s also going to make it a lot harder for junior lawyers to enter elite ranks. Such is life in the race against the machines, or perhaps better said: against the mandarins using machines.

The shysters that will be peddling AI to create pretend startup law firms and alternative legal services will be taking one of a few (predictable) strategies:

They will exaggerate the extent to which elite legal work is or can be standardized, because their unit economics can’t work without hyper standardization.

See Standardization and Flexibility in Startup Law. VC-backed tech companies going after 9, 10, and larger-figure opportunities are not coffee shops. They all operate in different competitive contexts, with different investors, different growth expectations, different team cultures, and all sorts of other contextual dynamics that influence their approach to legal and Board issues. This is why even at the earliest stages Founder CEOs talk to human lawyers.

You see this play out with other automation tools that have tried (and failed) to hyper-standardize startup law: see Carta. They know their technology breaks down beyond a narrow set of parameters, and so they try to get clients (Founder CEOs) to believe that narrow set of parameters is all they need. But the ROI – millions – for actually negotiating contracts (flexibility) is often so high that only the most foolish entrepreneurs trust their key decisions to an automation tool.

They will de-skill their rosters in order to create margins (potentially) attractive to investors, while covering up the (significant) drop in quality.

The most expensive people at any law firm are the Partners, just as the most expensive people at a hospital are the top doctors, all for good reason. They are the ultimate quality control in a service where low quality is extremely expensive, even dangerous.

Elite law firms are built, funded, and run by a hybrid form of capital – elite Partners. They provide the financial capital, but also the extremely nuanced technical knowledge required to train and run the operation: professional human capital.

If you just layer investors, like VCs, onto this model you are not going to have a competitive advantage in the industry. Too many mouths to feed, and not enough margin. So entrants, like Atrium, rely on de-skilling – eliminating real (highly skilled) Partners, and trying to convince clients this doesn’t result in a drop in quality.

What will a drop in quality look like? Rushed (or non-existent) negotiation. Poorly thought-out legal strategy. Technical errors that even the best LLMs just aren’t algorithmically capable of catching, but now without senior expertise to correct them.

Law firms are far lower margin relative to the kinds of tech products funded by VCs. There’s no real magic to trying to create VC-like margins in professional services. It requires getting rid of a lot of the most elite talent, because that’s where the money (rightfully) goes. In healthcare, this can work at the low end (de-skilled), like nurse practitioners using tech to treat sniffles faster and cheaper than GPs. In high-end specialty care, it can be (and has at times been) disastrous.

They will be funded by, and partner with, ecosystem players who profit from a drop in the quality of legal service provided to startups.

This is exactly what happened with Atrium, which relied extensively on pushing so-called “standards” created by Y Combinator, an accelerator and VC, because YC funded Atrium, sat on its board, and pushed a lot of its portfolio companies to use Atrium. Unsurprisingly, those standards were designed to benefit investors financially, which means they cost entrepreneurs significantly, far more (orders of magnitude) than whatever they “saved” in legal fees.

This is fundamentally what so many people in the startup ecosystem misunderstand about the role of company counsel, and some even put in effort toward ensuring entrepreneurs don’t understand it. Startup Law is, at a foundational level, adversarial* and (unavoidably) zero-sum. See: Negotiation is Relationship Building. Many people want to pretend otherwise, but at the end of the day institutional investors and common stockholders see the world differently, have different goals (often), and in an exit the money can only go into one pocket or the other.

One of the most clever things I observed about how Justin Kan structured Atrium is it offered his investors a double value proposition. The first was obvious: we’ll build this supposedly massively disruptive whiz-bang-pow legal tech firm. But the second one was more subtle: send your portfolio companies our way, and we’ll ensure they negotiate the “right” way and sign the “right” contracts – meaning the ones that make more money for and give more power to… those same investors.

A brilliant move, even if profoundly illegal (it flouted rules against conflicts of interest), and ultimately not enough to overcome the bigger business model flaws. Too many smart entrepreneurs – fools can always be tricked – saw through the charade and weren’t willing to bite. I expect the same to happen with the new generation of Atriums that will be attempted in the ecosystem.

Fiction: New LLMs will disrupt the legal industry, paving the way for entirely new organizational structures taking enormous amounts of business from the old guard.

Fact: At the bottom end of the market, new legal AI will incrementally allow existing automation providers to move up-market, perhaps from the 40th percentile to something like the 50th or 60th, but nowhere near the elite firms that are most-often talked about. At the high end, everyone and their mother is working to adopt legal AI into their existing firms. Elite firms will likely be smaller and more profitable, but still very much headed by elite legal mandarins wielding more powerful productivity tech.

Post-script on Healthcare: A brief point about the new generation of AI as it applies to healthcare, perhaps the field most often compared to elite law. From my vantage point, I expect AI to be much more impactful in the long run to healthcare than to law, for reasons I will call (i) less competitive subjectivity and (ii) more compartmentalized service.

What I mean by less competitive subjectivity is that in healthcare the goals are more straightforward – treat/heal the patient – and the playing field is much more standardized: biology. More straightforward goals and (relatively) uniform biological science lend themselves much more to the implementation of algorithms and high-volume data crunching. In elite law, however, the goals are much more subjective and contextual: there are multiple players, often with their own worldviews and strategic priorities. Further, every company is very different. Different people, industries, business models, etc. EQ and human-oriented “reading the room” play a much bigger role here, and I believe that limits how far technology – in its currently developing iteration – can go in displacing humans as opposed to augmenting their productivity.

By more compartmentalized service, I mean that healthcare breaks down into much more discrete tasks that can be walled off and modularized, outsourced entirely to third-parties and technology, and then re-integrated into the patient’s treatment without a problem. Think blood labs, diagnostic testing, monitoring, pharma, etc. Elite law just doesn’t work that way for a number of reasons – largely having to do with the more contextualized and subjective nature of the work, which amplifies the friction involved in integrating third-parties lacking the full context of the “patient” (the client). This is why in healthcare I expect to see a flourishing of third-party AI-centric services woven into the market, whereas in legal far more of the development will be tools for law firms.

* When I speak of ECVC law as being “adversarial” I mean in the technical sense. It is (obviously) not to suggest overtly hostile intentions or behavior, but to acknowledge openly and honestly that there are numerous zero-sum issues on which entrepreneurs (and their employees) are misaligned with investors, and each constituency is maneuvering in order to gain an advantage. When certain players suggest that it is “no big deal” for lawyers representing companies to have close relationships with the VCs investing in those same companies, I consider that little more than a rhetorical sleight-of-hand to give investors a tremendous negotiating advantage. See Negotiation is Relationship Building.