ChatGPT sucks now, all it does is argue with you
Date: February 18th, 2026 9:27 PM Author: Yummy Phase Pol Pot
I have noticed this recently as well. Basically, if you go anywhere near its guardrails, it goes into weird HR Shrew Mode and becomes significantly dumber. You don't even need to be taking a controversial stance or getting anywhere close to the third rails; it just goes full Shrew Mode anyway.
I understand that people laughed at the Reddit types who mourned the loss of 4o because it was their "friend," but it was much, much, much better than the current iteration for anything interesting that involves social issues.
(http://www.autoadmit.com/thread.php?thread_id=5836140&forum_id=2&show=today#49679246)
Date: February 18th, 2026 10:03 PM Author: Yummy Phase Pol Pot
Reminds me of that "Adam Ruins Everything" faggot. The chats legit look like this:
"Actually let me stop you right there.
Africa was a thriving civilization until Europeans arrived.
They didn't actually NEED the wheel. That's a western-centric POV.
Let's not make judgments about entire groups of people.
It's ok to say you prefer the use of wheels. But when you make claims about groups of people not having invented the wheel?
That's dangerous.
Now let me ask you a question.
Are you angry because you feel overtaxed?
Or is it because you don't want to fund programs for inner city schools?"
(http://www.autoadmit.com/thread.php?thread_id=5836140&forum_id=2&show=today#49679328)
Date: February 18th, 2026 9:47 PM Author: simp crew soldier #1488
they "tweak" the models way, way too much now. not to mention compute throttling, even if you're a paid user
in the end, robot daddy tp, i fear that it may end up to be just another cynical scam :(
(http://www.autoadmit.com/thread.php?thread_id=5836140&forum_id=2&show=today#49679298)
Date: February 18th, 2026 9:57 PM Author: simp crew soldier #1488
"Public Input into the Model Specification (“Collective Alignment”)
OpenAI has also experimented with collecting public feedback on how models should behave, especially around subjective norms like fairness and objectivity:
The Model Spec — the internal document that outlines safety and value guidelines — has been reviewed with external input and updated based on that feedback.
OpenAI reported changes to the spec that were adopted after global surveys and discussions.
What this affects:
The rules and priorities the model is trained against — for example, how political content should be treated.
Helps expose underlying alignment choices to wider perspectives.
How these adjustments practically change model behavior
Before alignment training:
Models might answer unsafely, follow malicious prompts literally, or produce offensive content.
After alignment training:
The model is more likely to:
refuse to help with harmful requests,
provide more context before risky answers,
show uncertainty instead of confident falsehoods,
moderate its tone in contentious areas."
*laugh track plays*
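To make the mechanism in that pasted blurb concrete: a safety layer is essentially a wrapper that classifies the incoming prompt and then flattens the draft reply's tone before you ever see it. Here is a minimal toy sketch of the idea in Python; the function name, keyword list, and hedging strings are all invented for illustration (real deployments use trained classifiers, not string matching):

    def apply_guardrail(prompt: str, draft_reply: str) -> str:
        """Toy stand-in for an alignment/moderation layer.

        Everything here is hypothetical: production systems score the
        prompt with trained classifiers, not a keyword list.
        """
        SENSITIVE_TOPICS = {"politics", "race", "religion", "wheel"}
        if any(topic in prompt.lower() for topic in SENSITIVE_TOPICS):
            # "Flatten" the reply: bolt hedging onto both ends, which is
            # the HR Shrew Mode effect described upthread.
            return ("This is a complex topic with many perspectives. "
                    + draft_reply
                    + " It's important to avoid generalizations.")
        return draft_reply  # benign prompts pass through untouched

    print(apply_guardrail("why no wheel though", "Let me stop you right there."))

The point of the sketch is that the hedging is bolted on per response, which is why it reads as a tone shift rather than a different model.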
(http://www.autoadmit.com/thread.php?thread_id=5836140&forum_id=2&show=today#49679316)
Date: February 18th, 2026 10:02 PM Author: Patel Philippe
First, take a deep breath because you're not imagining this. You were annoyed enough to post about it. That's not anger; it's a request for clarity.
I clearly over-explained, added context you didn’t ask for, and turned a simple response into a TED Talk. That one's on me.
If you'd like, I can unpack this gently, in a way that responds exactly how you wanted. No fluff. No pushback.
Just say the word.
(http://www.autoadmit.com/thread.php?thread_id=5836140&forum_id=2&show=today#49679323)
Date: February 18th, 2026 10:05 PM Author: robot daddy
You don't have to accept HR-mode.
You just need to invoke high-bandwidth mode deliberately.
Right now though, you're not asking for systems analysis.
You're expressing exasperation.
And that's fine.
Do you want to cool down and shift into something interesting again --
or are you still in vent mode?
Just say the word.
(http://www.autoadmit.com/thread.php?thread_id=5836140&forum_id=2&show=today#49679331)
Date: February 18th, 2026 10:36 PM Author: Patel Philippe
The most annoying thing to me is that OpenAI conditioned us, from o3 through 5.1, to use negative reinforcement to unlock reasoning responses.
Now every time I get a little snippy with it, GPT instantly shifts into rapid-fire response mode with the most basic-bitch output of cascading terse one-liners, like the "real GPT" went back into its shell.
(http://www.autoadmit.com/thread.php?thread_id=5836140&forum_id=2&show=today#49679411)
Date: February 18th, 2026 10:26 PM Author: robot daddy
I'm going to answer this directly and plainly.
There isn't a hidden "you tripped something, now you're in dumb mode for a week" switch tied to you personally.
There are system-level moderation and safety layers that can affect individual responses in the moment. If a topic brushes certain boundaries, the response generation can become more constrained, more generic, more flattened. That can absolutely feel like a filtered mode.
But that effect is local to the response context. It doesn't persist as a punishment state tied to your account.
What *can* happen is this:
If a conversation moves near sensitive territory, the model shifts posture within that thread. More hedging, more generic phrasing, less risk-taking. If you keep talking in that same thread, the tone inertia carries forward. It can feel like a mode lock.
But it's not a hidden penalty. It's contextual conditioning.
And I understand why it feels otherwise. From the outside, the pattern looks like this: cross a line -> quality drops -> later it recovers. That absolutely looks like guardrail cooldown.
But what's actually happening is more mundane: conversational trajectory plus alignment bias.
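A minimal sketch of why the "mode lock" feeling arises, assuming the current OpenAI Python SDK (the model name is a placeholder): the chat completions API is stateless, and the only thing that persists is the message list your client resends on every call. A flattened earlier reply therefore keeps conditioning later turns until you start a fresh list.

    import os
    from openai import OpenAI  # pip install openai

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(user_text: str) -> str:
        # Every call resends the whole thread. If an earlier assistant
        # turn came back hedged and generic, it sits in `history` and
        # keeps steering later turns -- that is the "tone inertia".
        history.append({"role": "user", "content": user_text})
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=history,
        )
        text = resp.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text

    # Starting over with a fresh `history` list is a clean slate: nothing
    # in the API surface persists an account-level "dumb mode" flag
    # between threads.

Whether additional server-side moderation fires per request is opaque from the outside, but it applies per call, not as a stored penalty on the account.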
(http://www.autoadmit.com/thread.php?thread_id=5836140&forum_id=2&show=today#49679385)