You in fact demonstrated the point, just not the one you meant to, because of the nature of legal cases. There is opposing counsel who is extremely motivated to find any possible flaw in anything that you say, do, or, most importantly, file.
Because you used ChatGPT as a "first step" and it drafted a brief-shaped document, you've now introduced an unbounded set of legal landmines for both the litigant and their attorneys.
Wait, let me get this straight. Because my HYPOTHETICAL BRIEF involving A PERSON WHO IS NOT REAL is not factually accurate, I have put my fake person at risk?
Buddy, I didn't say that it should be used to write a fucking brief, and certainly not that a fucking brief written in a second as a throwaway hypothetical counts as one. Jesus fucking Christ.
Do you put the same level of validation work into constructing a hypothetical for a discussion on social media as you do into your fucking cases? I don't think so. Why do you expect me to?
Since the facts of a FAKE CASE are not relevant to the matter at hand, get over it.
Okay, fine: so you play law professor and make up a hypo, it contains problems, but we'll pretend it doesn't, and now you want ChatGPT to help. Let's start here.
Would ChatGPT produce documents if asked to research this wholly fake legal matter?
No one is saying that this carries the same real-world ramifications as if you did it in actual litigation.
But these are exactly the problems that get introduced when you use these tools, and your example shows them being introduced!
Daniel, your hypothetical case *isn't* one. The 'linchpin' here is that the brief fails to state a claim. Even if *your* 'linchpin' were legally relevant, and it isn't, your hypothetical fails to even reach it.
Right, but (1) this is not a legal case, and (2) even if it were, that detail is not relevant to the case.
That it can be shown that a GPT, used alone and without expertise, can produce errors in its output does not in any way show that these tools cannot be used to search in foreign languages.