Avatar
This is incoherent as an argument for why lawyers should use LLMs. You don't seem to understand how legal disputes arise, what parties are required to provide to the court and each other, or what legal or factual research is meant to accomplish. I don't think you're qualified to make this argument.
Avatar
In this scenario, is your idea that the LLM did archival work in a library and found some mouldering untranslated treaty document, then scanned and translated it? Or is the idea that this treaty (between whom?) was part of discovery, but without an LLM no lawyer would…check what the treaty said?
Avatar
Lmao this is so dumb, but your replies in this thread are even worse
Avatar
I don't think you did your argument much good by kicking it off with a pseudo-legal document whose deficiencies set off any lawyer's alarm bells. You could have just posted that an LLM might help with bulk translation from a seldom-spoken language.
Avatar
Is any of this real? The precedent, the LLM performance, the translation?
Avatar
The case is a work of fiction, of course. The utility of GPTs is quite real. They are quite good at translating and summarizing. Perfect? No. That's why the process outlined includes a validation step where the draft results are given to a professional translator to check and validate.
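A minimal sketch of that translate-then-validate loop, assuming the OpenAI Python SDK; the model name, prompt, and human sign-off helper are illustrative, not anyone's actual pipeline:

```python
# Sketch of a translate-then-validate workflow: the LLM produces a draft
# translation, and a qualified human signs off before anything is relied on.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_translation(source_text: str, source_lang: str) -> str:
    """Ask the model for a draft English translation."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a translator. Translate the user's text "
                        f"from {source_lang} into English. Preserve legal "
                        "terms of art and flag anything ambiguous."},
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content

def validate(draft: str) -> bool:
    """Placeholder for the human step: a professional translator reviews
    the draft before it is used. Nothing proceeds without this."""
    print(draft)
    return input("Approve this translation? [y/N] ").strip().lower() == "y"
```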
Avatar
As a lawyer, I gotta tell ya, it’s really hard for me to gauge the utility of a tool if I can’t determine its accuracy.
Avatar
I agree. They're building tools that do allow for this, and they're not far off. Surely this can be used instead of an army of first-year associates looking for precedent? Don't get me wrong, this person's "take" is shit -- they don't have context in the field. But AI is going to be a tool lawyers will use.
I've got an AI PhD and 10 years building products in big 'Legal Tech'. I'm leaving the company because they won't stop demanding LLMs injected everywhere; they aren't helpful today, and user studies and demos say they're unwanted other than in a moonshot 'Everything Works!' sense, which isn't coming.
Avatar
My experience, and Michael Cohen's, is that the LLMs will obligingly hallucinate the very authorities upon which Goldman expects to rely.
Avatar
The hypothetical cases are where AI/LLMs *actually* work.
Avatar
You already use this technology, or something very like it, if you use ediscovery tools. Certainly so if you mean "AI" in general and you use ediscovery tools.
The ediscovery tools you use already incorporate NLP. If you're using a reasonably up-to-date ediscovery tool, you're already enjoying the benefits of this technology, especially if the ediscovery process involves material in multiple languages.
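For illustration only, a sketch of the sort of language-handling step such tools bundle, here using the open-source langdetect package; the document names and the routing logic are made up, and commercial ediscovery suites ship their own equivalents:

```python
# Sketch of an NLP step common in ediscovery pipelines: detect each
# document's language so multilingual material can be routed to
# translation review. Uses langdetect (pip install langdetect) as a
# stand-in for whatever a commercial tool bundles.
from langdetect import detect

documents = {
    "exhibit_14.txt": "Der Vertrag tritt am ersten Januar in Kraft.",
    "exhibit_15.txt": "The agreement terminates upon thirty days' notice.",
}

for name, text in documents.items():
    lang = detect(text)  # ISO 639-1 code, e.g. "de", "en"
    if lang != "en":
        print(f"{name}: detected '{lang}', queue for translation review")
    else:
        print(f"{name}: English, proceed to standard review")
```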
Avatar
Natural language processing does indeed fall under the category of AI, which is a very broad category that includes pretty much all machine learning, including NMT and LLMs, as well as any work on AGI.
Avatar
To marketing folks, or people trying to boost a stock, maybe. Referring to such a broad swath of algorithm-derived language-processing technologies as "intelligence" is wishcasting.
Avatar
Culling through a defined universe is entirely different from letting the AI loose. And at this point, and for the foreseeable future, an AI let loose is not reliable. And if it is not reliable, it is close to useless.
Avatar
What do you mean by "letting the AI loose?"
Avatar
Or what Michael Cohen, in his typical wisdom, did.
Avatar
What you described in your Medium article
Avatar
Yeah, I could see putting in something complete that I prepared and asking "did I miss something?", then seeing what it spits out. Never hurts to have another 'mind' review one's work product.
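A hedged sketch of that "second pair of eyes" pass, again assuming the OpenAI Python SDK; the prompt and model name are illustrative, and anything the model surfaces is a lead for human research, not authority to cite:

```python
# Sketch of a review pass over a finished draft: hand the model the
# complete work product and ask what might be missing. Assumes the
# OpenAI Python SDK; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def review_draft(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system",
             "content": "You are reviewing a finished legal draft. List "
                        "issues, counterarguments, or topics the author "
                        "may have missed. Do not supply case citations; "
                        "every suggestion must be independently verified."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```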
Avatar
If you define bullshit as "characterized not by an intent to deceive but instead by a reckless disregard for the truth" it becomes clear that LLMs are bullshit machines. A reckless disregard for the truth is not a useful way to practice law. link.springer.com/article/10.1...
ChatGPT is bullshit - Ethics and Information Technology (link.springer.com): Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persist...
By the way, the leftists got you labeled on aegis (which is shutting down)...
Avatar
Yeah glad that b.s. labeler is getting shut down.
Avatar
But "leftist." That's the funny thing. I'm queer and a vocal advocate for trans inclusion, gender affirming care, etc.
Avatar
Lower dude, lower. He is not as tall as he says he is.
I think their take is garbage, but how is it transphobic? Looking through their feed, most of their content is flagged. This looks more like abuse by people using a labeler, not that this person is problematic. 🤷‍♂️
Avatar
It's not specific posts. At some point the labeler decided to label the account itself, and now the label appears on all posts. Which is, yes, incredibly stupid, and should be fixed.
Ah, so that's how labeling the account manifests. Yeah, that's absurd.
Avatar
How does one check that?
You have to subscribe to aegis to see them. They don't tell you, and the review/audit process doesn't exist (since it's being shut down). But the point still remains that the service, like all the "labeling" services, is a point of abuse: one where the label is more important than the content.
Avatar
And your last sentence was why I asked. Thanks though.
Avatar
It turns out that this is a very good use case for AI.