There is 0% possibility of “AGI” ever occurring
Date: October 18th, 2025 8:11 AM Author: big principal's office chad
AI’s greatest accomplishment will be making millions of otherwise sentient / marginally intelligent people retarded.
It’s just another fucking computer program and that’s all it will ever be.
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2...id#49357405) |
Date: October 18th, 2025 12:30 PM Author: frozen stimulating water buffalo faggot firefighter
The only thing that would/could stop the eventual development of artificially intelligent minds is the breakdown and collapse of civilization
Which could totally happen
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2...id#49357795) |
Date: October 18th, 2025 12:36 PM Author: motley embarrassed to the bone legend main people
that's AGI though
make users insanely retarded
everyone uses AI to try not to be retarded
agents do basic tasks that humans can no longer do
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2...id#49357811) |
Date: April 20th, 2026 1:40 AM Author: kikesicle
it's a bit like the breathless overselling that went on during the mid-20th century regarding technology, space travel, et cetera.
at the time, it felt like things were moving so rapidly that SURELY our computers and rockets would soon morph into robotic butlers and flying cars a la The Jetsons!
people deadass believed that the Moon 'landing' was an obvious augury of interplanetary space travel and 'colonization' in the near term. even in the 80s, they were still making sci-fi movies set in 'THE YEAR 1999' where people had flying cars and lived on Mars, like this was 100% plausible futurecasting.
the AI thing will fizzle, just like all tech hype has. we'll get all the low-hanging-fruit benefits of AI, while 'AGI' talk quietly fades away.
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2...id#49828620) |
Date: April 20th, 2026 2:42 AM
Author: .,.,...,..,.,.,:,,:,...,:::,...,:,.,.:..:.
This is generally a reasonable criticism of technological hype, but the arguments for near-term AGI are more complicated than “we made a lot of progress quickly, therefore we are near the end.” The success in modeling multiple different modalities using a highly generic architecture with minimal implementation differences tells you something meaningful about intelligence.

Gradient descent with transformers works in multiple different domains because it’s a tractable approximation of Solomonoff induction. Small generalizing circuits make up more of the weight space and are easier for gradient descent to stumble on, so it’s basically just a dumb circuit search process that tends to find programs from data that are likely to generalize. This becomes more true the more data you train on, with more epochs, heavier regularization, and more parameters. As compute budgets increase and training techniques become more efficient, this process converges closer and closer to optimal prediction of whatever modality you are training on.

Even if we run out of ideas for how to improve the training techniques, the parametric circuit search can be increasingly applied to the learning algorithms themselves. There’s really no plausible obstacle that could stop this from happening relatively soon. Manifesting intelligence is no longer contingent on brilliant insights or ideas but on FLOPS and engineering.
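The simplicity-bias claim above can be illustrated with a toy sketch. Everything here (the polynomial model, the constants, the underlying rule y = 2x) is an illustrative assumption, not anything from the thread: the same dumb gradient-descent loop, run twice over an over-parameterized model, settles on a much lower-norm (simpler) fit once a regularization term is added to the update, while still tracking the underlying rule.

```python
import random

random.seed(0)

DEGREE = 9  # over-parameterized: 10 weights for 10 noisy points

def features(x):
    # Polynomial feature map: [1, x, x^2, ..., x^DEGREE]
    return [x ** k for k in range(DEGREE + 1)]

def predict(w, x):
    return sum(wk * fk for wk, fk in zip(w, features(x)))

def train(xs, ys, weight_decay, steps=5000, lr=0.05):
    """Plain full-batch gradient descent on mean squared error,
    optionally with an L2 weight-decay term in the update."""
    feats = [features(x) for x in xs]
    n = len(xs)
    w = [0.0] * (DEGREE + 1)
    for _ in range(steps):
        grad = [0.0] * (DEGREE + 1)
        for f, y in zip(feats, ys):
            err = sum(wk * fk for wk, fk in zip(w, f)) - y
            for k, fk in enumerate(f):
                grad[k] += 2 * err * fk / n
        # weight_decay pulls every weight toward zero each step,
        # biasing the search toward low-norm (simpler) solutions.
        w = [wk - lr * (g + weight_decay * wk) for wk, g in zip(w, grad)]
    return w

def weight_norm(w):
    return sum(wk * wk for wk in w) ** 0.5

# Training data: a simple underlying rule (y = 2x) plus a little noise.
xs = [i / 9 for i in range(10)]
ys = [2 * x + random.gauss(0, 0.05) for x in xs]

w_plain = train(xs, ys, weight_decay=0.0)
w_reg = train(xs, ys, weight_decay=0.05)

print("norm without decay:", weight_norm(w_plain))
print("norm with decay:   ", weight_norm(w_reg))
```

With identical data and an identical number of steps, the only difference between the two runs is the decay term in the update, which is the same kind of pressure toward small generalizing circuits the post is describing.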
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2...id#49828660) |