Absolutely amazing how often ChatGPT and others are just plain wrong
Date: March 26th, 2025 4:07 PM
Author: ,.,...,..,.,.,:,..,.,.,::,......;,..,:.:.,:.::,.
Computing a confidence score for the generated output is not very meaningful, since uninformative responses like "No" or "I don't know" have much higher probabilities in the training text than an actual paragraphs-long answer.
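A toy calculation illustrates the length bias described here; the per-token probability and token counts below are invented purely for illustration.

```python
import math

# Invented numbers: assume every token has the same conditional probability.
per_token_prob = 0.8
short_answer_tokens = 3     # e.g. "I don't know"
long_answer_tokens = 200    # a paragraphs-long substantive answer

p_short = per_token_prob ** short_answer_tokens
p_long = per_token_prob ** long_answer_tokens

# The short, uninformative reply wins on raw sequence probability by roughly
# 19 orders of magnitude, so raw probability is a poor confidence score.
print(math.log10(p_short / p_long))
```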
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48784555) |
 |
Date: March 26th, 2025 7:53 PM
Author: ,.,...,..,.,.,:,..,.,.,::,......;,..,:.:.,:.::,.
Nope. They use sampling heuristics that don't select the single maximum-likelihood sequence. Finding that exact sequence would be computationally intractable anyway, since the search space grows exponentially with output length.
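A minimal sketch of temperature sampling over a made-up next-token distribution (the tokens and probabilities are invented; real models do this over ~100k tokens at every step). The point is that the emitted token is a random draw, not the argmax, so the full generated sequence is generally not the maximum-likelihood one.

```python
import math
import random

# Hypothetical next-token distribution, for illustration only.
next_token_probs = {"Paris": 0.40, "the": 0.25, "France": 0.20, "Lyon": 0.15}

def sample_with_temperature(probs, temperature=1.0):
    """Sample one token: rescale log-probs by temperature, renormalize, draw.

    temperature -> 0 approaches greedy argmax decoding;
    temperature = 1 samples from the model's own distribution.
    """
    logits = {tok: math.log(p) / temperature for tok, p in probs.items()}
    z = sum(math.exp(l) for l in logits.values())
    rescaled = {tok: math.exp(l) / z for tok, l in logits.items()}
    r = random.random()
    cum = 0.0
    for tok, p in rescaled.items():
        cum += p
        if r < cum:
            return tok
    return tok  # floating-point rounding fallback

random.seed(0)
draws = [sample_with_temperature(next_token_probs) for _ in range(1000)]
# The most likely token is chosen well under 100% of the time.
print(draws.count("Paris") / len(draws))  # roughly 0.4, not 1.0
```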
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48785390) |
 |
Date: March 26th, 2025 10:11 PM Author: ,.,.,.,....,.,..,.,.,.
The models sample word fragments (tokens) from a probability distribution. You can easily get log-likelihood scores from the model, but converting that into something resembling knowledge confidence is not straightforward. If the likelihood of a sample is low, it could mean there are many viable phrasings of a particular answer rather than genuine uncertainty. It would probably require an alternative training regime. The hallucination rates are rapidly falling anyway from general LLM improvement. If that stalls, I expect more interest in addressing it.
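A short sketch of why sequence log-likelihood isn't a confidence score. The per-token probabilities below are hypothetical stand-ins for what a model API would return via its log-prob output: one answer has a single dominant phrasing, the other is one of many equally viable paraphrases.

```python
import math

# Hypothetical per-token probabilities, invented for illustration.
answer_a = [0.9, 0.9, 0.9]        # a short answer with one dominant phrasing
answer_b = [0.3, 0.3, 0.3, 0.3]   # one of several equally viable paraphrases

def sequence_logprob(token_probs):
    """Log-likelihood of a sampled sequence: the sum of per-token log-probs."""
    return sum(math.log(p) for p in token_probs)

# answer_b scores far lower, yet the model may be just as "sure" of the
# underlying fact -- its probability mass is simply split across paraphrases.
print(sequence_logprob(answer_a))
print(sequence_logprob(answer_b))
```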
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48785854) |
 |
Date: March 27th, 2025 12:09 AM Author: ,.,.,.,....,.,..,.,.,.
There are lots of possible ways to address it. LLMs don’t use direct knowledge retrieval from a database right now (probably due to inference costs), but plenty would pay for that. LLMs could easily retrieve information from a legal database, put it in the context window and use it for generation. A model is significantly less likely to hallucinate about something in its context window. I find it rather concerning that the hallucination issue is used as a reason to dismiss the potential implications of the technology. We could be quite near strong LLMs that will substantially automate legal work.
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48786099) |
 |
Date: March 27th, 2025 7:53 AM
Author: ,.,...,..,.,.,:,..,.,.,::,......;,..,:.:.,:.::,.
You can do this with an LLM. It's called retrieval-augmented generation, and it doesn't completely stop hallucination.
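A minimal sketch of the retrieval-augmented generation pattern the two posts above describe. The corpus, citations, and keyword-overlap "retrieval" are all hypothetical stand-ins; a real system would use an embedding index and an LLM API for the generation step.

```python
# Hypothetical two-document legal corpus, for illustration only.
legal_corpus = {
    "28 U.S.C. § 1335": "Statutory interpleader requires minimal diversity among claimants",
    "Fed. R. Civ. P. 22": "Rule interpleader permits joinder of adverse claimants",
}

def retrieve(query, corpus, k=1):
    """Crude keyword-overlap scoring standing in for a vector search."""
    def score(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(corpus.items(), key=lambda kv: score(kv[1]), reverse=True)[:k]

def build_prompt(query, corpus):
    """Put retrieved passages into the context window ahead of the question,
    which makes the model much less likely to hallucinate the citation."""
    passages = "\n".join(f"[{cite}] {text}" for cite, text in retrieve(query, corpus))
    return f"Context:\n{passages}\n\nQuestion: {query}\nAnswer citing the context:"

print(build_prompt("requirements for statutory interpleader", legal_corpus))
```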
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48786463) |
Date: March 26th, 2025 2:03 PM Author: Ting Ting Johnson
chatgpt is completely done here. it's pozzed so bad you may actually be in danger of getting AIDS by digitally conversing with it
openAI is completely fucked and is a dead org walking. they're running on name recognition at this point. they can burn however many 100s of billions of monopoly money they can get their hands on, but as long as they have shitlibs running the org and manipulating the models, their product will always suck compared to their competitors
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48784134) |
 |
Date: March 26th, 2025 5:08 PM
Author: ,.,.,,.,.,.,.,.,.,..,.,,,..,.,. ( )
lmao. Explain why you needed to do this conversion and why you decided to use AI for it. This hits on the main themes of your character arc, being poor and retarded, so you should add it to the bort lore.
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48784811) |
Date: March 26th, 2025 2:57 PM Author: michael doodikoff
I didn't have any good discovery samples for an interpleader case I am working on - RPDs, RFAs, and Interrogatories.
I fed it the entire complaint and had it draft discovery from my client to one of the other respondents.
It actually gave me great results for that, some shit I hadn't even thought of. Had to make a few changes but it was 90% of the way there.
That said, the case citations and statutes it cites are regularly completely wrong
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48784313) |
Date: March 26th, 2025 4:06 PM
Author: ,.,.,.,........,....,,,..
It’s usually right. It’s a lot more right than average attorneys anyway. It does suffer from the problem that you can convince it out of positions, but LOL so do attorneys that work for you. It’s insanely good and I don’t see how it doesn’t replace almost everything I currently do one day. In the meantime, it’s like having cheat mode.
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48784547) |
Date: March 26th, 2025 4:08 PM Author: 718-662-5970
i think if you're using it to give you facts - esp. facts that are vulnerable to change or are obscure - you're using it wrong.
it is built to be a paralegal, ironically. you shouldn't trust your paralegal to know shit, only to do shit.
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48784559) |
Date: March 27th, 2025 8:59 AM Author: Oh, you! Travel! (🧐)
(Guy still using GPT4)
Feedback loops will mostly take care of the "Source?" issue; e.g., Deep Research is pretty good at citing things. It's only a matter of time (a couple of years, tops) before it's capable of mostly replacing not only boilerplate consultants but biglawyers and the like
(http://www.autoadmit.com/thread.php?thread_id=5700041&forum_id=2:#48786574) |