
AI progress has plateaued at GPT-4 level

Date: November 14th, 2024 11:10 AM
Author: cock of michael obama

https://www.theintrinsicperspective.com/p/ai-progress-has-plateaued-at-gpt

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336411)




Date: November 14th, 2024 11:11 AM
Author: Mig

Good this shit is too good

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336414)




Date: November 14th, 2024 11:12 AM
Author: .,.,,.,..,,..,..,..,....,,...,.


I hoap so but for how long? There's trillion$ invested in AI destroying humanity

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336420)




Date: November 14th, 2024 11:15 AM
Author: but at what cost

No it hasn't. It just went deep black like quantum computing because it's too dangerous for normies to possess

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336429)




Date: November 14th, 2024 11:15 AM
Author: cowstack



(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336432)




Date: November 14th, 2024 11:25 AM
Author: Mig



(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336475)




Date: November 14th, 2024 11:33 AM
Author: .,.,,.,..,,..,..,..,....,,...,.


tcr

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336521)




Date: November 14th, 2024 11:16 AM
Author: cowstack

Does the author not realize that GPT is a tiny subset of "AI"?

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336435)




Date: November 14th, 2024 11:16 AM
Author: So we looked at the data

This is talking about pre-training. There are a ton of things that can be done post-training; they are not even close to exhausting the possibilities of the current paradigm.
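
For reference, a minimal sketch of the pre-training vs. post-training split being described, assuming a toy PyTorch model (a GRU stands in for the transformer and random tokens stand in for data; nothing here is any lab's actual recipe). Pre-training supervises every position with next-token prediction; the simplest post-training step, supervised fine-tuning, reuses the same loss but masks the prompt so only response tokens are scored.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    VOCAB, DIM = 1000, 64

    class TinyLM(nn.Module):
        # Toy stand-in for a language model: embed -> GRU -> vocab logits.
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            self.rnn = nn.GRU(DIM, DIM, batch_first=True)
            self.head = nn.Linear(DIM, VOCAB)

        def forward(self, ids):
            h, _ = self.rnn(self.embed(ids))
            return self.head(h)

    def next_token_loss(logits, ids, mask=None):
        # Position t predicts token t+1; an optional mask restricts the loss
        # to chosen positions (post-training scores only the response).
        logits, targets = logits[:, :-1], ids[:, 1:]
        loss = F.cross_entropy(logits.reshape(-1, VOCAB),
                               targets.reshape(-1), reduction="none")
        if mask is not None:
            m = mask[:, 1:].reshape(-1).float()
            return (loss * m).sum() / m.sum()
        return loss.mean()

    model = TinyLM()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # Pre-training step: raw text, every position supervised.
    raw = torch.randint(0, VOCAB, (4, 32))
    opt.zero_grad()
    next_token_loss(model(raw), raw).backward()
    opt.step()

    # Post-training (SFT) step: prompt + response, loss on the response only.
    seq = torch.randint(0, VOCAB, (4, 32))
    resp_mask = torch.zeros(4, 32, dtype=torch.bool)
    resp_mask[:, 16:] = True  # second half of each sequence is the "response"
    opt.zero_grad()
    next_token_loss(model(seq), seq, resp_mask).backward()
    opt.step()

The masking is the whole trick: RLHF, DPO, rejection sampling and the rest all live in this second phase, with the same weights but a different objective and no new pre-training corpus.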

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336436)




Date: November 14th, 2024 11:33 AM
Author: magadood

Scaffolding is promising, but o1 wasn’t a universal improvement in model capabilities. Language abilities are essentially the same. I wouldn’t bet on it as a solution.

There are a few possibilities: transformers are inherently limited (the lack of recurrence is an obvious problem), standard backpropagation is insufficient, or the pre-training objective is not ideal. All of them are reasonable. They could replace all of these hand-coded components with learnable/evolvable functions, so I wouldn’t bet on AI progress stalling.
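
To make the "lack of recurrence" point concrete, here is a rough sketch of one often-proposed fix: re-applying a single weight-shared transformer block for a variable number of steps (the Universal Transformer idea), so depth becomes a runtime knob instead of a fixed stack. Sizes and names are invented for illustration; this is not a description of o1, whose internals are not public.

    import torch
    import torch.nn as nn

    class SharedBlock(nn.Module):
        # One pre-norm transformer block whose weights get reused every step.
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                    nn.Linear(4 * dim, dim))
            self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

        def forward(self, x):
            q = self.n1(x)
            a, _ = self.attn(q, q, q, need_weights=False)
            x = x + a
            return x + self.ff(self.n2(x))

    class RecurrentEncoder(nn.Module):
        # Iterating one block adds recurrence: "thinking longer" means more
        # passes through the same parameters, not a deeper fixed stack.
        def __init__(self, dim=64):
            super().__init__()
            self.block = SharedBlock(dim)

        def forward(self, x, steps=4):
            for _ in range(steps):
                x = self.block(x)
            return x

    enc = RecurrentEncoder()
    x = torch.randn(2, 16, 64)
    shallow, deep = enc(x, steps=2), enc(x, steps=12)  # variable compute

This is one reading of replacing hand-coded components with learnable functions: the halting rule itself (how many steps to run) can also be learned, as in adaptive computation time.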

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336516)




Date: November 14th, 2024 11:21 AM
Author: magadood

This has appeared likely for a while, since all the models have been clustering around GPT-4 level. I highly doubt synthetic data is the solution: they are trying to feed the data hunger of transformers by generating more of it, when they should be attacking the poor generalization of transformers directly, since that weakness is why the models need so much data in the first place. Using RL to generate token chains is a very expensive way to go and only seems to improve some abilities.
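
For reference, the synthetic-data recipe being doubted here is roughly the loop below: sample completions from the current model, keep the ones a verifier accepts, fine-tune on the survivors, repeat. Every name in it is a placeholder rather than a real API; the sketch just makes the circularity visible.

    def synthetic_data_round(model, prompts, verifier, n_samples=8):
        # One round of self-training on model-generated data.
        keep = []
        for prompt in prompts:
            for _ in range(n_samples):
                completion = model.generate(prompt)   # assumed sampling hook
                if verifier(prompt, completion):      # e.g. unit tests, checker
                    keep.append((prompt, completion))
        # The objection above, in code: the training signal is bounded by what
        # the current model can already produce and the verifier can check.
        model.finetune(keep)                          # assumed training hook
        return model, len(keep)

Nothing in the loop changes the architecture, which is why a critic of transformer generalization would call it feeding the symptom rather than treating the cause.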

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336461)




Date: November 14th, 2024 11:23 AM
Author: As far as they will go but even farther (🧐)

Altman and people within OpenAI (even those who left recently) have been fighting this narrative hard, although of course we have no fucking idea whether they're just blowing hot air or not.

What does seem clear, though, is that there is a ton more work left to get to ASI, which will be the true turning point, when the world may descend into total chaos communism. It turns out it's pretty hard to turn LLMs into meaningful, independently acting agents, and scaling doesn't magically create superhuman intelligence like all the hockey-stick charts predicted (shocker).

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48336468)




Date: November 14th, 2024 1:21 PM
Author: alpha dood

Scaling likely does work, but they aren’t scaling the right thing. The idea that searching over Turing machines (with a simplicity bias) on enormous quantities of data produces increasingly general intelligence is likely correct. Even if you buy into the system 1/system 2 model of cognition, it’s kind of fanciful to imagine that a search process using a million GPUs can’t find a Turing machine that implements slow reasoning. Maybe not this year, maybe not next year, but I don’t see how it could be 20 years away.
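
A toy version of that picture, for intuition only: enumerate programs in a tiny invented expression language simplest-first and return the first one consistent with the data. Real systems implement the "search" as gradient descent over weights rather than literal enumeration; the example only illustrates the simplicity bias.

    from itertools import product

    def programs(depth):
        # Yield expression strings of exactly the given nesting depth.
        if depth == 0:
            yield "x"
            yield "1"
            return
        for op in ("+", "*"):
            for da, db in product(range(depth), repeat=2):
                if max(da, db) != depth - 1:
                    continue  # one side must reach depth - 1
                for a in programs(da):
                    for b in programs(db):
                        yield "(" + a + op + b + ")"

    def fit(examples, max_depth=4):
        # Simplicity bias: try shallower (simpler) programs first.
        for depth in range(max_depth + 1):
            for prog in programs(depth):
                if all(eval(prog, {"x": x}) == y for x, y in examples):
                    return prog
        return None

    # Recovers a program equivalent to 2*x + 1 from three examples.
    print(fit([(0, 1), (1, 3), (2, 5)]))

The "slow reasoning" claim is then just that, with enough compute, the same bias-plus-search process should be able to land on programs that loop and deliberate, not only ones that pattern-match.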

(http://www.autoadmit.com/thread.php?thread_id=5634079&forum_id=2&mark_id=5310919#48337224)