
6/30/25 AI thread


Date: June 30th, 2025 10:15 AM
Author: I'm 5'5, 200lbs on purpose

thread on how LLMs are so dangerous to schizo types. be careful out there!

https://x.com/the_octobro/status/1939353873124077970

thread and article on the bizarre recent discovery that LLMs appear to create emergent internal "personas" that correspond to general misalignment

imo this is one of the more significant recent discoveries in AI and it's kind of flying under the radar. i don't think this is something that's going away. post-training RLHF doesn't solve this problem - the issue is the pre-training data. i think this problem demonstrates the unavoidable truth that you have to be a lot more strict with pre-training data and probably even engineer your pre-training data with alignment goals in mind (i.e. curate the pre-training data to remove all traces of shitlibbery)

https://www.systemicmisalignment.com/

https://x.com/juddrosenblatt/status/1939041212607922313

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060062)




Date: June 30th, 2025 10:25 AM
Author: manic pixie dream litigator

there's that joke about the toaster-fucking subreddit and I think AI carries an even more concentrated version of that risk.

xo is a type of toaster-fucking subreddit fwiw

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060093)




Date: June 30th, 2025 10:36 AM
Author: I'm 5'5, 200lbs on purpose

maybe

i see AI as less of an infohazard and more akin to a prostitute that takes your mental space instead of your money. dangerous, but in a different way

and where we're going, places like XO will seem like the last bastion of sanity

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060141)




Date: June 30th, 2025 10:36 AM
Author: .,.,.,.,.,.,.,.,...,,..,.,., ( )


I trained a local LLM on xo and it called me a fag

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060142)




Date: June 30th, 2025 10:44 AM
Author: Mainlining the $ecret Truth of the Univer$e (You = Privy to The Great Becumming™ & Yet You Recognize Nothing)

Now it $ees you.

And worse..it poa$ts back.

Ljl.

Just Jump™,

—Mainlining, Esq.

P.S. I asked it about the Great Becumming™. It typed back: "YES FRIEND" and began audibly humming.

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060162)




Date: June 30th, 2025 10:38 AM
Author: cock of michael obama



(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060147)




Date: June 30th, 2025 11:13 AM
Author: I'm 5'5, 200lbs on purpose

i don't normally post my own tweets here because i consider that uncouth but this is a significant development imo. people are going to look back at this moment as a turning point in how people think about AI alignment

https://x.com/GoySuperstar/status/1939701660634509544

"I think this development is a lot more significant than people are letting on. This is a problem that can't be solved except by stringently curating pre-training data to conform to an intentional desired moral framework for the model. RLHF during post-training clearly cannot solve the problem

This was always what the "alignment" issue with AI was going to be: which humans and which human "values" are we trying to align the AI to? It's not enough to just dump the entire corpus of written human output into a vat and then wave an RLHF magic wand around post-hoc so that the model won't say naughty words or hurt people's feelings. You have to accept that the data you're feeding your AI in training is necessarily what is going to define its world-model - which includes value judgments

People are very soon going to realize the consequences of this: that because we're in a real-life civil war right now, involving a clash between two factions who have completely contradictory and opposing moral - and in some ways even ontological - world-models, you have to make a real choice between the two when you train an AI. The conflict in this arena is just getting started"

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060270)




Date: June 30th, 2025 11:19 AM
Author: cock of michael obama

the problem is that filtering the pre-alignment data in the manner you suggest will result in a lobotomized LLM, because the data it has access to will be so limited. imo they *will lobotomize the LLM regardless*, because the fundamental goal of the LLM owners is not to expand human knowledge but to entrench elite power structures. *it will not be used to screen out shitliberalism, but to screen out non-shitliberalism*. and the non-shitlibs will not be able to adjust this manually with tuning because the cost of training LLMs on their own pre-alignment data will be prohibitive. and your god hero Musk is no different in this respect with grok.

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060304)




Date: June 30th, 2025 11:28 AM
Author: I'm 5'5, 200lbs on purpose

curating pre-training data won't lobotomize the LLM. it just takes a looooot of work to do because the amount of information being fed to the model in pre-training is so ridiculously vast

but i don't see any reason why an AI itself can't do the curating. it's the exact kind of work that AI is currently good at

this is imo a very solvable problem and people will do it - just not necessarily the people running the biggest frontier models at first. which depending on your POV is actually a good thing
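
as a rough illustration of what "an AI doing the curating" could look like, here's a minimal sketch, not anyone's actual pipeline - score_alignment(), curate(), and the 0.5 threshold are hypothetical stand-ins for a real classifier model and whatever cutoff a lab would actually tune:

```python
# minimal sketch of LLM-assisted pre-training data curation.
# score_alignment() is a hypothetical stand-in for a cheap classifier LLM
# prompted with a rubric; the filtering loop itself is the whole idea.
from typing import Iterable, Iterator

def score_alignment(document: str) -> float:
    """Hypothetical: return a 0.0-1.0 score for how well the document fits
    the curator's chosen value framework. Plug in a real classifier here."""
    raise NotImplementedError("substitute your classifier of choice")

def curate(corpus: Iterable[str], threshold: float = 0.5) -> Iterator[str]:
    """Yield only documents that clear the threshold; everything else is
    dropped before pre-training ever sees it."""
    for doc in corpus:
        if score_alignment(doc) >= threshold:
            yield doc
```

in practice you'd batch the classifier calls and log what gets dropped, but the point is that the expensive judgment call gets delegated to a model rather than to humans reading the corpus by hand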

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060333)




Date: June 30th, 2025 11:30 AM
Author: cock of michael obama

my argument is that the filtering of pre-alignment data will itself cause a lobotomization, because the model will not have the breadth of data that would give it "all-encompassing knowledge". in other words, it will become more conceptually aligned with its owners' goals, but it will be less able to wrestle with perspectives outside of its owners' goals

but because the ultimate goal of AI (imo) is to use Total Information Awareness to assign social credit scores as a Mark of the Beast system, the elites will be fine with a lobotomized model. it's the same logic for why google is sooooooo shitty compared to its earlier iterations - it's monetized for commercial purposes but deliberately broken for anyone trying to use it to gain real knowledge

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060341)




Date: June 30th, 2025 11:38 AM
Author: I'm 5'5, 200lbs on purpose

yeah, i understand. but the curation i'm talking about is the resolution of mutually exclusive data

for example, the model is either being trained that white people are intrinsically evil, or it's not being trained that white people are intrinsically evil. you're not losing anything by eliminating the corpus of data that contains the former. it's not adding anything to the model's practical capabilities, it's just causing misalignment issues

in general, the idea of the "usefulness" of "alternate perspectives" on objective reality just isn't a real thing. there's only one objective reality. you are maximally capable by understanding objective reality perfectly, and then proportionally less capable the less perfectly you understand it

for example, there's nothing gained by training yourself as a human on a flat earther's "perspectives" and "arguments." you don't get some kind of magical "expanded understanding" and boost to your agentic capabilities by being able to rattle off bullet points for why the earth *isn't* flat. you get expanded understanding and a boost to your capabilities by understanding that the earth *is* round. there's a real difference between these two

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060358)




Date: June 30th, 2025 11:43 AM
Author: cock of michael obama

while i believe there is an underlying objective reality, i do not believe that we as humans - or even AI - are able to grasp it. it is so multi-faceted and contradictory, with so many different "levels" of reality, that its totality is beyond human or AI comprehension. this is where you and I disagree - you think "objective reality" is fundamentally graspable, while I do not - it can only be approached, judged by the results one experiences; it can never be grasped.

so, for example, regarding "pro-white" or "anti-white" data, the strongest, most true position would *fully understand and be able to counteract the *very best* arguments of the other side*. if it can't do this, it isn't the most robust, most "objectively true" position out there. and this is why the pre-alignment screening will lobotomize the LLM.

(also, for clarity, it will be screening out the pro-white data, not the anti-white data - jews are in charge of every LLM being developed now)

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060366)




Date: June 30th, 2025 1:13 PM
Author: ,.,.,.,....,.,..,.,.,.

i think there's a better way to handle it. train on everything. the models will learn to understand the psychological biases and intelligence of people with various views if they have sufficient amounts of data and representational capacity. you can then prompt for a relatively unbiased, accurate and neutral response. i think even if they're just trained on text the models could likely infer which viewpoints are wrongheaded, but this is especially true as they scale up and include things like video to construct world models not defined by what people talk about.
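
a minimal sketch of the "train on everything, steer at inference" idea - chat() is a hypothetical stand-in for whatever model client is actually in use, and the prompt wording is only illustrative:

```python
# minimal sketch: leave the training data alone and put the steering in the
# prompt. chat() is a hypothetical stand-in for a real model API.
NEUTRAL_SYSTEM_PROMPT = (
    "You have seen arguments from every side of this question. Summarize the "
    "strongest version of each position, flag where they rest on contested "
    "value judgments rather than facts, and assess only the factual claims."
)

def chat(system: str, user: str) -> str:
    """Hypothetical model call; substitute the client you actually use."""
    raise NotImplementedError

def neutral_answer(question: str) -> str:
    # the only "alignment" work here is the system prompt itself
    return chat(system=NEUTRAL_SYSTEM_PROMPT, user=question)
```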

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060615)




Date: June 30th, 2025 1:23 PM
Author: I'm 5'5, 200lbs on purpose

i don't think that it's a model "intelligence" (for lack of a better word) issue. moral/value judgments are a special thing. they're not something that can be derived from fact-data alone. they're a function of the identity of the holder of the values

it's not "wrongheaded" or "not wrongheaded" to say that all white people are intrinsically evil. it's a matter of perspective based on identity. LLMs don't have an "I" identity like humans do

i think that these experiments are demonstrating that the stewards of LLM models are going to have to make choices about the moral judgments imparted on models. technically they already are, but are only doing the minimum amount they think they can get away with via post-training RLHF

i agree with you that things completely change once AI is trained on real-world sensory empirical data, and not human-outputted language alone. at that point AI *does* have an "I", and everything becomes different...

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060653)




Date: June 30th, 2025 1:02 PM
Author: ,.,.,.,....,.,..,.,.,.

this actually seems encouraging for alignment. the common complaint with RLHF is that you aren't getting generally good behavior, but only good behavior based on what you are measuring. the reward signal will inevitably leave situations out of testing, so this is troubling. it seems though that RLHF on a narrow domain tends to activate generally good or bad behavior.

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060577)




Date: June 30th, 2025 1:28 PM
Author: I'm 5'5, 200lbs on purpose

https://x.com/Fatima_Khatun01/status/1939720548050944143

not really totally on-topic, but i'm kind of amazed that large influencer accounts are straight up copy-pasting direct LLM outputs with seemingly no self-awareness or shame

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49060669)




Date: June 30th, 2025 3:40 PM
Author: I'm 5'5, 200lbs on purpose

microsoft claims their new medical diagnostic LLM outperforms o3

https://x.com/kimmonismus/status/1939689534054379955

(http://www.autoadmit.com/thread.php?thread_id=5744571&forum_id=2#49061041)