Your hypothetical case conflated Pine Creek, Montana with the Pine Creek First Nation of Manitoba, and your lack of knowledge of the subject you were basing your hypothetical on meant this wholly hallucinated association flew under your radar. Do you see why this would be a problem in litigation?
To underline this a bit, the LLM you used is not in the business of giving accurate results, it's in the business of giving results that pass casual inspection.
That is not the standard which is needed in legal cases.
The validity of the factual information in the hypothetical is not relevant to the case being argued: whether GPT augmented searches can act as a FIRST STEP in a stream of steps which include fact validation. I wasn't going to put that much time into validating the facts of a fucking hypothetical.
Unfortunately, as a FIRST STEP, it so far has not worked. Because it isn't seeing the solution that a real estate attorney would see right off the bat. There is a solution to your hypothetical. And ChatGPT has taken your hypothetical person way off the path.
I think your AI must have garbled what you meant to type, which surely was:
"I am now becoming aware of just how many crucial things I don't know about this subject, which led me to the erroneous conclusion that this is an area where a current LLM could help instead of being a huge liability!"
You in fact demonstrated the point, just not the one you meant to, because of the nature of legal cases. There is an opposing counsel that is extremely motivated to find any possible flaw with anything that you say, do, or most importantly file.
Because you used ChatGPT as a "first step" and it drafted a brief-shaped document, you've now introduced an unbounded set of legal landmines for both the litigant and their attorneys.
Wait let me get this straight. Because my HYPOTHETICAL BRIEF involving A PERSON WHO IS NOT REAL is not factually accurate, I have put my fake person at risk?
Right but (1) this is not a legal case and (2) even if it was it is actually not relevant to the case.
That it can be shown that using a GPT alone, and without expertise, can produce errors in GPT output does not in any way show that GPTs cannot be used to search in foreign languages.
But people in the legal profession have told you, repeatedly, that this saves no work whatsoever. That correcting the errors the LLM introduces actually creates work, that it is no substitute for legal counsel, and so forth. You are hammering a square peg into a round hole.
How is it that you are missing the point so spectacularly here?
If an LLM can confidently lie to you, passing basic geographic errors to you without notice, it has no hope of helping someone in a court of law.
Did you write a hypothetical or did you type “ChatGPT draft a legal brief in a hypothetical case where LLMs are useful and include a section describing how LLMs provided a hypothetical lynchpin to the case”?
i do have sympathy. you got an idea and did the hard work of writing it up and want to share your piece. one of the best things copy editors do for nonfiction writers is to get their facts and fill them in on what they need to know. medium lacks that. (i was a professional nonfiction book editor.)
Brian isn't deflecting. He's pointing out that ChatGPT is producing information that you cannot rely upon. And if you can't rely upon it, then you can't use it. If a lawyer were to file false things, they would be looking at sanctions or disbarment. If you're pro se, you could still face sanctions.
Tell you what. You find this case and give the correct facts of this case, or you pay me $100. Deal? Wait. You can't find the facts of a fictional case?
Law students argue fictional cases all the time. There are whole competitions built around it. It's called Moot Court. The fictional case isn't the problem. The problem is that you have no idea how the law works, and assume that because you are good at LLMs you can fake it. News alert: you can't.
This is incoherent as an argument for why lawyers should use LLMs. You don't seem to understand how legal disputes arise, what parties are required to provide to the court and each other, or what legal or factual research is meant to accomplish. I don't think you're qualified to make this argument.
In this scenario, is your idea that the LLM did archival work in a library and found some mouldering untranslated treaty document, then scanned and translated it? Or is the idea that this treaty (between whom?) was part of discovery, but without an LLM no lawyer would…check what the treaty said?
I don't think you did your argument much good by kicking it off with a pseudo-legal document whose deficiencies set off any lawyer's alarm bells. You could have just posted that an LLM might help with bulk translation from a seldom-spoken language.
The case is a work of fiction, of course. The utility of GPTs is quite real. They are quite good at translating and summarizing. Perfect? NO. That's why the process outlined includes a validation step, where the potential results are given to a professional translator to validate.
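The two-step process this comment describes (LLM produces a draft, a qualified human validates it before anything is relied on) could be sketched as below. This is a minimal illustration of the workflow being argued about, not a real API: all of the function and class names here (`machine_translate`, `human_validate`, `usable_in_filing`, `DraftTranslation`) are hypothetical placeholders.

```python
# Hypothetical sketch of the "LLM first step, human validation second" workflow.
# Nothing here calls a real LLM; the point is the gating structure.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DraftTranslation:
    source_text: str
    machine_text: str             # LLM/NMT output: a lead, not a citable fact
    validated: bool = False
    final_text: Optional[str] = None


def machine_translate(source_text: str) -> DraftTranslation:
    """Stand-in for an LLM/NMT call. Its output is treated as unverified."""
    return DraftTranslation(source_text=source_text,
                            machine_text=f"[draft] {source_text}")


def human_validate(draft: DraftTranslation, corrected_text: str) -> DraftTranslation:
    """A qualified translator reviews and corrects the machine draft."""
    draft.final_text = corrected_text
    draft.validated = True
    return draft


def usable_in_filing(draft: DraftTranslation) -> bool:
    # The gate the thread is arguing about: machine output alone never passes.
    return draft.validated and draft.final_text is not None


draft = machine_translate("texte du traité")
assert not usable_in_filing(draft)           # the first step alone is not enough
draft = human_validate(draft, "treaty text")
assert usable_in_filing(draft)
```

Whether the first step actually saves work (the other commenters' objection) is exactly what this structure doesn't settle: the human validation step still has to catch every error the draft introduces.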
I agree. They are making tools that do allow for this, and they're not far off. Surely this can be used instead of an army of first-year associates looking for precedent?
Don't get me wrong, this person's "take" is shit -- they don't have context in the field. But AI is going to be a tool lawyers will use.
I've got an AI PhD and 10 years building products in big 'Legal Tech'. I'm leaving the company because they won't stop demanding LLMs injected everywhere; they aren't helpful today and user studies/demos say they're unwanted other than in a moonshot 'Everything Works!' sense - which isn't coming.
You already use this technology or similar technology if you use ediscovery tools. Certainly if you're going with "AI" in general and you use ediscovery tools.
The ediscovery tools that you use already incorporate NLP. If you're using a relatively up to date ediscovery tool, you're already enjoying the benefits of this technology, especially if the ediscovery process involves material in multiple languages.
Natural language processing does indeed fall under the category of AI, which is a very broad category which includes pretty much all machine learning, including NMT and LLMs, as well as any work on AGI.
To marketing folks, or people trying to boost a stock, maybe. Referring to such a broad swath of algorithm-derived language processing technologies as "intelligence" is wishcasting.
Culling through a defined universe is entirely different from letting the AI loose. And at this point, and for the foreseeable future, it is not reliable when let loose. And if it is not reliable, it is near to useless.