Your hypothetical case conflated Pine Creek, Montana, with the Pine Creek First Nation of Manitoba, and your lack of knowledge on the subject you were basing your hypothetical on means this wholly hallucinated association flew under your radar. Do you see where this would be a problem in litigation?
To underline this a bit, the LLM you used is not in the business of giving accurate results, it's in the business of giving results that pass casual inspection. That is not the standard which is needed in legal cases.
This is also a very quick hypothetical that I wrote up just to show a point, not to argue a fucking legal case.
The validity of the factual information in the hypothetical is not relevant to the case being argued: whether GPT-augmented searches can act as a FIRST STEP in a stream of steps which include fact validation. I wasn't going to put that much time into validating the facts of a fucking hypothetical.
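The "first step plus validation" workflow being argued for here could be sketched roughly as follows. This is a minimal illustration, not anyone's actual system; the `Lead` class and `confirm` function are invented names, and the sample lead texts are placeholders.

```python
from dataclasses import dataclass


@dataclass
class Lead:
    """A candidate fact or citation surfaced by an LLM search pass."""
    text: str
    source: str = "llm"      # provenance of the lead
    verified: bool = False   # flips only after human review


def confirm(lead: Lead, reviewer: str) -> Lead:
    """Mark a lead as human-validated; only verified leads may be cited."""
    lead.verified = True
    lead.source = f"verified by {reviewer}"
    return lead


leads = [Lead("Treaty clause on water rights, p. 14"),
         Lead("Pine Creek precedent (1952)")]

# Nothing unverified ever reaches a draft:
citable = [l for l in leads if l.verified]
assert citable == []

# A human signs off on one lead; only that lead becomes citable.
confirm(leads[0], "staff attorney")
citable = [l for l in leads if l.verified]
assert len(citable) == 1
```

The point of the sketch is only that the pipeline's safety rests entirely on the validation gate, not on the LLM step.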
You know what else? There's no Elias! Wow.... the hypothetical person doesn't fucking exist.
Unfortunately, as a FIRST STEP, it so far has not worked. Because it isn't seeing the solution that a real estate attorney would see right off the bat. There is a solution to your hypothetical. And ChatGPT has taken your hypothetical person way off the path.
Okay so what's the solution? How would you have found that linchpin document?
I think your AI must have garbled what you meant to type, which surely was: "I am now becoming aware of just how many crucial things I don't know about this subject, which led me to the erroneous conclusion that this is an area where a current LLM could help instead of being a huge liability!"
You in fact demonstrated the point, just not the one you meant to, because of the nature of legal cases. There is an opposing counsel that is extremely motivated to find any possible flaw with anything that you say, do, or most importantly file.
Because you used ChatGPT as a "first step" and it drafted a brief-shaped document, you've now introduced an unbounded set of legal landmines for both the litigant and their attorneys.
Wait let me get this straight. Because my HYPOTHETICAL BRIEF involving A PERSON WHO IS NOT REAL is not factually accurate, I have put my fake person at risk?
Right, but (1) this is not a legal case and (2) even if it was, it is actually not relevant to the case. That errors can appear in GPT output when a GPT is used alone and without expertise does not in any way show that they cannot be used to search in foreign languages.
But people in the legal profession have told you, repeatedly, that this saves no work whatsoever. That correcting the errors the LLM introduces actually creates work, that it is no substitute for legal counsel, and so forth. You are hammering a square peg into a round hole.
And it failed even the most basic of tests. It doesn't understand that two things named "Pine Creek" are not related to each other.
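The Pine Creek conflation is an instance of a well-known failure mode: matching entities by name alone. A toy illustration (the place entries and function names here are invented for the example) shows why bare string matching conflates the two, and why disambiguation needs at least a jurisdiction check:

```python
# Two distinct real-world entities that share a name fragment.
places = [
    {"name": "Pine Creek", "jurisdiction": "Montana, USA"},
    {"name": "Pine Creek First Nation", "jurisdiction": "Manitoba, Canada"},
]


def naive_match(query, entries):
    # Substring match treats the two Pine Creeks as the same entity.
    return [e for e in entries if query in e["name"]]


def disambiguated_match(query, jurisdiction, entries):
    # Requires the jurisdiction to agree as well.
    return [e for e in entries
            if query in e["name"] and jurisdiction in e["jurisdiction"]]


print(len(naive_match("Pine Creek", places)))                     # 2: conflated
print(len(disambiguated_match("Pine Creek", "Montana", places)))  # 1: correct
```

An LLM that generates fluent text has no such jurisdiction check built in, which is the failure the comment above describes.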
Of course it didn't. That's not its job. That's not what it is good at. It is good at translating and compressing/decompressing information.
How is it that you are missing the point so spectacularly here? If an LLM can confidently lie to you and pass basic geographic errors to you without notice, it has no hope of helping someone in a court of law.
Did you write a hypothetical or did you type “ChatGPT draft a legal brief in a hypothetical case where LLMs are useful and include a section describing how LLMs provided a hypothetical lynchpin to the case”?
i do have sympathy. you got an idea and did the hard work of writing it up and want to share your piece. one of the best things copy editors do for nonfiction writers is to get their facts and fill them in on what they need to know. medium lacks that. (i was a professional nonfiction book editor.)
Cool. It's a hypothetical. Even better if it's not exactly real. That's not the point. But thanks for deflecting again.
Brian isn't deflecting. He's pointing out that ChatGPT is producing information that you cannot rely upon. And if you can't rely upon it, then you can't use it. If a lawyer were to file false things, they would be looking at sanctions or disbarment. If you're pro se, you could still face sanctions.
And ironically, I'd imagine he could have written a more compelling and consistent hypothetical if he hadn't passed those duties off to a LLM.
Tell you what. You find this case and give the correct facts of this case, or you pay me $100. Deal? Wait. You can't find the facts of a fictional case?
Law students argue fictional cases all the time. There are whole competitions built around it. It's called Moot Court. The fictional case isn't the problem. The problem is that you have no idea how the law works, and assume that because you are good at LLMs that you can fake it. News alert: you can't.
This is incoherent as an argument for why lawyers should use LLMs. You don't seem to understand how legal disputes arise, what parties are required to provide to the court and each other, or what legal or factual research is meant to accomplish. I don't think you're qualified to make this argument.
In this scenario, is your idea that that LLM did archival work in a library and found some mouldering untranslated treaty document, then scanned and translated it? Or is the idea that this treaty (between whom?) was part of discovery but without an LLM no lawyer would…check what the treaty said?
Lmao this is so dumb, but your replies in this thread are even worse
I don't think you did your argument much good by kicking it off with a pseudo-legal document whose deficiencies set off any lawyer's alarm bells. You could have just posted that an LLM might help with bulk translation from a seldom-spoken language.
Is any of this real? The precedent, the LLM performance, the translation?
The case is a work of fiction, of course. The utility of GPTs is quite real. They are quite good at translating and summarizing. Perfect? NO. That's why the process outlined includes a validation step where the potential results are given to a professional to translate and validate.
As a lawyer, I gotta tell ya, it's really hard for me to gauge the utility of a tool if I can't determine its accuracy.
I agree. They are making tools that do allow for this, and they're not far off. Surely this can be used instead of an army of first-year associates looking for precedent? Don't get me wrong, this person's "take" is shit -- they don't have context in the field. But AI is going to be a tool lawyers will use.
I've got an AI PhD and 10 years building products in big 'Legal Tech'. I'm leaving the company because they won't stop demanding LLMs injected everywhere; they aren't helpful today and user studies/demos say they're unwanted other than in a moonshot 'Everything Works!' sense - which isn't coming.
My experience, and Michael Cohen's, is that the LLMs will obligingly hallucinate the very authorities upon which Goldman expects to rely.
The hypothetical cases are where AI/LLMs *actually* work.
You already use this technology or similar technology if you use ediscovery tools. Certainly if you're going with "AI" in general and you use ediscovery tools.
The ediscovery tools that you use already incorporate NLP. If you're using a relatively up to date ediscovery tool, you're already enjoying the benefits of this technology, especially if the ediscovery process involves material in multiple languages.
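One concrete NLP step in multilingual ediscovery is query expansion: translating a search term before keyword matching, so documents in other languages aren't missed. The glossary and documents below are invented for illustration; real tools use trained translation models rather than a hand-built table.

```python
# Toy glossary mapping an English query term to translated variants.
# (Invented for illustration; production systems use NMT models.)
glossary = {"treaty": ["treaty", "traité", "tratado"]}

documents = [
    "The 1871 treaty set the boundary.",
    "Le traité de 1871 fixait la frontière.",
    "Notes on unrelated correspondence.",
]


def search(term, docs):
    """Keyword search that also matches translated variants of the term."""
    variants = glossary.get(term, [term])
    return [d for d in docs if any(v in d.lower() for v in variants)]


hits = search("treaty", documents)
assert len(hits) == 2  # catches both the English and the French document
```

A plain keyword search for "treaty" would have missed the French document entirely; that gap is what the NLP layer in these tools closes.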
Natural language processing does indeed fall under the category of AI, which is a very broad category which includes pretty much all machine learning, including NMT and LLMs, as well as any work on AGI.
To marketing folks, or people trying to boost a stock, maybe. Referring to such a broad swath of algorithm-derived language processing technologies as "intelligence" is wishcasting.
Culling through a defined universe is entirely different than letting the AI loose. And at this point, and for the foreseeable future, it is not reliable when let loose. And if it is not reliable, it is near to useless.
What do you mean by "letting the AI loose?"
Or what Michael Cohen, in his typical wisdom, did.
What you described in your Medium article