Oh, good. Another attempt at using AI to litigate. This one is (thank GOD) hypothetical. But let's see how it does.
To put it differently:
Does Daniel know what he's talking about, or is he a tech dude in desperate need of a sense of humility and an understanding of his own limitations?
Thread.
Here's the medium post where Daniel does his little legal hypothetical.
medium.com/technology-p...
Let's run through it.
First issue: just because you know how useful a tool is for what you do doesn't mean you have a clue whether it will help others. To know that, you need to know what they do.
Moving on, Daniel writes in the second paragraph:
"And so, here is a quickly drafted hypothetical legal case."
But what *kind* of case, litigated where, for what reasons, with what desired outcomes, and what legal issues in play?
Those are all critical questions.
Now we have a hypothetical case, in an unknown court, and we have a "Plaintiff's Legal Brief."
But a brief for *what*? Briefs aren't freestanding things. They're purpose-specific documents, with set goals, that need to meet defined standards. Important and absent details.
OK. Not only are the first two paragraphs utterly inadequate for any litigation purpose, they would also be nearly useless to me for client intake purposes. To be blunt, I generally get more information from potential clients in the initial email, without the need for LLM filtering.
And the information I get from clients in the first email is usually just enough for me to know what questions to start with when I get on the phone with them.
There's nothing useful in this part of the LLM output; time would be better spent putting it in an email instead of a prompt.
The questions presented are...well, let's start here:
Issue 0: Can litigation be brought against the Tribal Council *at all*?
Issues 1 and 2 are...at best poorly phrased, and contain absolutely nothing that would not have been gathered from the prompt given to the AI. They're words in sentences.
And the same goes for the rest of this dreck.
Millions of years ago, plants and critters died to produce the fossil fuels that went to make the energy that was used to produce this LLM output; Daniel owes them all apologies.
This is in the vague shape of a legal document. But it's at best useless.
And now we merrily dive into the pool of what Daniel thinks the LLM can do to help, legally, in this situation.
Much of this, of course, comprises things that are already being done with other forms of machine learning, and which do not necessarily fall into the realm of things LLMs are good at.
Daniel's "hypothetical lynchpin," meanwhile, suffers from its own flaws.
Most notably, it assumes that these sources of evidence are capable of being determinative in resolving the dispute at hand. That is a catastrophically unsound assumption.
The problem, Daniel, is that in order to identify the lynchpin evidence in any legal case, one must know the underlying law. In fact, one must know the underlying law well enough to be able to argue over which rule of law should apply in the case.
That's rarely clear at the start.
LLMs may pick a rule of law to apply to the case. They might even pick one that both exists and could theoretically apply to the case. But that's jumping from step one to step cucumber.
Lawyering happens in many places. Identifying the most favorable law for the client is a crucial one.
To put it another way, Daniel:
You know little enough about law that your carefully constructed hypothetical detonated on its own terms.
Maybe you should stick to your core competencies, which appear to consist of calling women who know more than you "Karens" and...do you have any others?
While I agree with you, I'd point out that you could have saved yourself several tweets and just posted this. I think we can all agree on the proper conclusion to draw about anyone who proudly self-describes as a "polymath".