True artificial intelligence is decades, if not centuries, away
Date: December 4th, 2018 10:50 PM Author: fuchsia toaster theatre
have you read this short story?
https://en.wikipedia.org/wiki/Understand_(story)
what about something like this
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37351721)
Date: December 4th, 2018 10:50 PM Author: Confused boyish crotch
This is the conclusion I've come to as well
Most AI these days is glorified curve fitting
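(A minimal sketch of the "curve fitting" point, in Python: a small neural network trained by gradient descent is literally adjusting parameters to minimize error against sample data. The noisy sine-wave data, network size, and step count below are invented for illustration and are not from this thread.)

# Fit a noisy sine curve with a one-hidden-layer network and plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)    # the "curve" being fitted

W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.01

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # network output
    err = pred - y                      # residuals against the data
    # Gradients of the mean squared error, then one descent step.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h**2)
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("mean squared error after training:", float((err ** 2).mean()))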
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37351723)
Date: December 5th, 2018 1:04 PM Author: French Temple Nibblets
Exactly. Our track record for long term predictions is atrocious.
It is as dumb to predict 10 years as it is 50 years or 100 years. There is hardly any basis other than some generic "Moore's law" that's not even valid anymore.
"We don't know". But this is apparently not an acceptable answer.
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37354442)
Date: December 4th, 2018 11:28 PM Author: plum becky pit
I don't know that you're wrong, but I really want to know what those things are and whether they do more than give us a semantic understanding of symbols that AI would lack but wouldn't necessarily need.
Davidson's "A Nice Derangement of Epitaphs" might be relevant. AI might ultimately lack true spontaneity. But we might be able to engineer spontaneity so well that we can't tell the difference.
You might be right that this makes AI asymptotic against true generality, though.
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37352006)
Date: December 4th, 2018 11:19 PM Author: marvelous awkward reading party location
More than twice as many philosophers support physicalism about the mind as support non-physicalism.
https://philpapers.org/surveys/results.pl
You, stroking your neckbeard: "I'm a non-physicalist because I'm not an idiot"
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37351926)
Date: December 5th, 2018 11:29 AM Author: Swashbuckling Parlor
The work has already been done for us by mother nature through the most complex thing in the universe--the human brain. True AI systems will most likely occur via some combination of whole-brain mapping and the application of global learning systems.
Development of high-resolution cellular brain-mapping technology is maybe a decade away. Developers already recognize that the era of hand-tailored scripting alone is drawing to a close.
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37353868)
Date: December 5th, 2018 11:49 AM Author: Dun Useless Ratface
When AGI happens, it will be an economic disaster at first. However, as people have increasing leisure time and less to contribute meaningfully to society, they'll start to get introspective and more religious.
Eventually, the world economy will look like the Antebellum South, with robots instead of slavery; a mega-rich capitalist ownership caste; and religious impulse driving charity to the proles.
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37353954)
Date: December 5th, 2018 1:24 PM Author: Light Sick Area Scourge Upon The Earth
Too many people in this thread are relying on the human brain and that fuzzy term "consciousness" as the benchmarks for AGI or lesser AI. The brain is the product of many years of evolution (a messy process that rewards traits that lead to replication), not efficient engineering focused on modern tasks. The future of AI is not replicants, cyborgs, or recreations that mimic the processes of the human brain. It's a compilation of software that becomes the best at any narrow but very important task (driving, flying, investment advisor, accountant, legal researcher, medical diagnostic, surgeon, custodian, and eventually any other imaginable task assigned to it). These robots/apps won't need to act like humans to completely supplant humans in every imaginable activity or occupation.
The best example I've seen of AI bypassing the brain's more circuitous route was in Google Deepmind's video game play. In its first attempt at various classic games the AI looked like a retard (much worse than a human on his first try). But the AI "brute-forced" its way to becoming the best gamer ever, often employing techniques and strategies not seen before. Even if the human brain has the "intuition" to initially figure out the controls and learn and adapt, the AI brain could play the game 10 billion times in its "head" to figure out the optimal strategy.
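(A minimal sketch of that setup, not DeepMind's actual system - theirs was a deep Q-network reading raw Atari frames. The point illustrated is the one described above: the agent is told only which actions exist and what the score is, and improves purely by playing an enormous number of times. The tiny "corridor game" environment below is invented for illustration.)

# Tabular Q-learning on a toy game: walk right along a corridor to reach the goal.
import random

N_STATES, GOAL = 10, 9            # states 0..9; reaching state 9 ends the game
ACTIONS = [-1, +1]                # "go left" / "go right" -- all the agent is told
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.95, 0.1

def greedy(s):
    # Pick the higher-valued action, breaking ties at random.
    if Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(3000):       # many, many self-played games
    s = 0
    while s != GOAL:
        a = random.randrange(2) if random.random() < eps else greedy(s)
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0        # the only feedback: the score
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Expected: all 1s, i.e. "always go right" -- learned from score alone.
print("learned policy:", [greedy(s) for s in range(N_STATES - 1)])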
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37354601)
Date: December 5th, 2018 5:29 PM Author: Light Sick Area Scourge Upon The Earth
I only brought up the video game example to show that humans and AI think/compute differently. Just because AI will not be able to mimic the human brain for the foreseeable future does not mean AI won't be able to best humans in virtually every activity that we value.
It was not an easy task to create AI that becomes a video game champion when the AI does not have knowledge of the code in the game and isn't exploiting it. It had no unfair advantage. It was given the task of maximizing score and only "told" what each function in the game does (i.e. this makes the player go right, this makes the player shoot a gun). Like I said, the AI actually sucked ass when it initially played the games (much worse than your typical human) but eventually bested all humans.
Our brains are computers. They are weird and oddly "engineered" computers that have some "mystical" components to them that current AI is incapable of, but in other ways they are actually pretty shitty relative to current AI. Far too often people use the brain and our thought processes as the benchmark for determining whether AI is inferior. I think that's pointless.
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37356182)
Date: December 5th, 2018 5:24 PM Author: gay domesticated office
Lolled in my office at this absurd attempt to move the goalposts FORWARD.
**picks up wooden doorstop**
"As you can clearly see, this Intelligent Inclined Plane Machine is the *real genius* here; not humans! It has simply chosen to focus on becoming excellent at the narrow task of keeping doors open, while the wasteful human brain-- [(cough, cough, chortle)]-- runs in circles spending energy on things like consciousness, leisure, meaning, and all those other inefficiencies you call 'general intelligence.'"
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37356154)
Date: December 5th, 2018 1:26 PM Author: Motley smoky menage
what an idiotic take. you don't need AIs that are completely human in order for AI to replace jobs.
also, AI hasn't been about scripting and pathfinding for a long time; the entire field is about statistical learning these days. of course, you are right that full human ability and behavior may be centuries away, but generally you have no idea what you're talking about.
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37354622)
Date: December 6th, 2018 3:50 AM Author: Sinister vibrant generalized bond
As someone who's done research in the field of ML here's my humble take.
AI is a marketing term to build hype / sell products for the foreseeable future.
"Machine Learning" is just applied statistics.
Any talk of AGI is insanely premature - advances in ML applications give us, e.g., better facial recognition software, Alexa-type devices that work better, and more feasible self-driving cars thanks to higher image classification accuracy.
But these continued improvements in algorithm accuracy - the "deep learning" hype - just improve devices and tech like those described above; they don't fundamentally bring us closer to AGI.
It's not like we have the structure of AGI mapped out and it's like oh damn - if only we could squeak out a 10% improvement in classification accuracy.... THEN this AGI would be working. No - the entire structure / mapping of AGI is insanely complex and requires large breakthroughs in several fields.
All the hype lately has been due to advances in a few specific variations of neural networks that are particularly well suited to high-dimensional ML problems with complex feature interactions (outside those specific applications they actually perform very poorly relative to "traditional" models - say, random forests or boosting methods / ensembles). These advances got tons of hype because they're useful for the image classification problems that self-driving cars, facial recognition, etc. all rely on.
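(A minimal sketch of how that kind of head-to-head usually looks in practice, assuming scikit-learn. The synthetic tabular dataset stands in for a generic "non-image" problem; the scores on this toy data don't prove the claim by themselves - the point is only that both model families drop into the same evaluation harness and get compared on accuracy.)

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A synthetic tabular dataset standing in for a "non-image" problem.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

models = {
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "small neural net": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()     # 5-fold cross-validated accuracy
    print(f"{name}: {score:.3f}")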
But the difference with those image problems is that we knew very well how to structure them; once the accuracy of the underlying ML algorithms improved enough, applications like self-driving cars became feasible to put into practice.
By contrast, there's no such structure for AGI that I'm aware of - nothing where we're just waiting around for some improvement that will allow us to solve AGI.
The problem of AGI is far more fundamental - it's not a matter of algorithm performance, it's that we don't even know what to do.
If someone more on the AI side (I'm on the statistics / ML side) wants to tell me I'm mistaken I'm more than willing to listen......
edit: In terms of automation / job replacement - you don't need anything remotely close to AGI / AI, those terms are pure marketing terms in the corporate world. To automate jobs - say a chatbot that helps you with customer service - you simply need sufficient accuracy of the algorithms. All these products that can replace people's jobs are remarkably simple in structure and are just using applied stats / NLP bundled together in some software product being sold as "AI". Automation of jobs will continue to happen even when true AI stays far beyond the horizon - because to automate many jobs you just need a few algorithms and a bit of engineering. This is counter-intuitively actually a sign that we're far away from true AI - because companies are building highly specialized domain-specific products to automate processes for companies. Any guess on when true AI becomes possible is worthless IMO.
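(A minimal sketch of the kind of "remarkably simple in structure" product described above, assuming scikit-learn: a bag-of-words text classifier that routes customer-service messages to canned intents. The toy messages and intent labels are made up for illustration; a real product would differ mainly in data volume and plumbing, not in kind.)

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny, made-up training set of support tickets and their intents.
train_messages = [
    "I want a refund for my last order",
    "my package never arrived",
    "how do I reset my password",
    "I was charged twice this month",
    "the app keeps logging me out",
    "where is my shipment",
]
train_intents = ["billing", "shipping", "account", "billing", "account", "shipping"]

# "AI" here is just TF-IDF features plus logistic regression.
bot = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
bot.fit(train_messages, train_intents)

# Route a new ticket to the right canned response or human queue.
print(bot.predict(["I was charged twice and want a refund"]))   # expected: ['billing']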
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37359074)
Date: December 6th, 2018 11:23 AM Author: Swashbuckling Parlor
Thank you. I think those of us who are essentially epiphenomenalists concerning consciousness have a different set of beliefs for human-level AI and consciousness, though.
Some people, including myself, think that there is no free will. We also believe that what we call human consciousness is really a processing hallucination that is generated epiphenomenally once neurons hit a certain level of complexity in broadly adaptive skill sets.
Those of us in this camp believe that human-level or "true" AI will inevitably occur. At some point in the near future we will hit on nonbiological complex neural development (probably through a combination of whole-brain emulation and planned broad learning, TBH). When we do, something we would consider "consciousness" will appear to manifest in that AI. And that will be an epiphenomenal hallucination on the part of the poor little AI, in much the same way as we suffer our own hallucination of consciousness, and of agency in others.
In the end, who knows, though. We could all be wrong. And this viewpoint seems to defy common sense. But it is basically where the neuroscience appears to take us at present.
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37360199)
Date: December 6th, 2018 11:35 AM Author: Swashbuckling Parlor
Very hard for me to do! I'm not that deft at explaining such things directly, nor are my own thoughts especially clear on the subject. It is a difficult subject for me. I think some experience with meditation is helpful in getting the necessary perspective on subjective mental processes, though.
This gentleman has some interesting work on the subject of consciousness as hallucination. https://aeon.co/essays/the-hard-problem-of-consciousness-is-a-distraction-from-the-real-one
Here is an interesting video on point, nice and simple:
https://www.youtube.com/watch?v=lyu7v7nWzfo
(http://www.autoadmit.com/thread.php?thread_id=4149157&forum_id=2&mark_id=5310443#37360288)