Tested ChatGPT as plaintiff vs defendant, it agreed with both
| Pearl Old Irish Cottage Sanctuary | 01/25/26 | | Laughsome impressive indian lodge main people | 01/25/26 | | chocolate painfully honest stock car | 01/25/26 | | bonkers rehab | 01/25/26 | | Pearl Old Irish Cottage Sanctuary | 01/25/26 | | Tan box office | 01/25/26 | | Pearl Old Irish Cottage Sanctuary | 01/25/26 | | Laughsome impressive indian lodge main people | 01/25/26 | | Pearl Old Irish Cottage Sanctuary | 01/25/26 | | Tan box office | 01/25/26 | | Pearl Old Irish Cottage Sanctuary | 01/25/26 | | chocolate painfully honest stock car | 01/25/26 | | Pearl Old Irish Cottage Sanctuary | 01/25/26 | | mind-boggling factory reset button | 01/25/26 | | milky mexican | 01/25/26 |
Poast new message in this thread
Date: January 25th, 2026 2:33 PM Author: Pearl Old Irish Cottage Sanctuary
I ran an experiment with ChatGPT using two separate sessions, one of them in incognito mode so it'd take me as a new user.
In one of the convos, I framed the facts as the plaintiff. In the other, as the defendant. In both cases, ChatGPT confidently agreed that its side had the stronger legal position.
Then I escalated it. I fed the "plaintiff" session the information the "defendant" had been given. The response flipped immediately: "That information is incorrect. The correct facts are.." conveniently reframed to favor the defendant's outcome.
I kept doing this for several rounds, each time feeding it more context from the opposing side. Every time, the model adjusted the narrative to make the current speaker look like they'd win.
I was pissed as fuck.
This raises a real question for anyone using this thing in legal, professional, or adversarial contexts:
The model is not actually reasoning, it's fucking optimizing for agreement with whoever's talking.
Curious whether others have tested this, especially with fact-heavy or adversarial scenarios.
(http://www.autoadmit.com/thread.php?thread_id=5826409&forum_id=2\u0026mark_id=5310900#49617617)
|
Date: January 25th, 2026 2:35 PM Author: bonkers rehab
"The model is not actually reasoning"
this isnt controversial ?
(http://www.autoadmit.com/thread.php?thread_id=5826409&forum_id=2\u0026mark_id=5310900#49617626) |
Date: January 25th, 2026 2:53 PM Author: milky mexican
"In one of the convos, I framed the facts as the plaintiff. In the other, as the defendant. In both cases, ChatGPT confidently agreed that its side had the stronger legal position."
Did you ever consider framing the facts from both sides at the same time?
(http://www.autoadmit.com/thread.php?thread_id=5826409&forum_id=2\u0026mark_id=5310900#49617680) |
|
|