Reminder: Searle's Chinese Room is a TTT argument
Date: July 12th, 2014 5:06 PM Author: Transparent Very Tactful Hell Ratface
http://plato.stanford.edu/entries/chinese-room/
"Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese."
Edit: this has been used to discredit the possibility of an "understanding" AI by Searle himself as well as several others.
Do prestigious "intellectuals" actually buy this crap? When someone asks you a question and you respond, the language centers of your brain apply a bunch of rules for interpreting and producing your native language...the instructions/exact response come from other areas of your brain (emotional, rational) coming up with an appropriate response and sending it back to the language centers to parse and spit out.
In this thought experiment, the guy in the room is basically serving the same purpose as the language centers of a human brain. He's just interpreting and responding to data based on rules and responses he has been given (in the brain example, other areas of your brain telling the language section what to tell your mouth to say, basically).
No one would argue that the Temporal Lobe of the brain by itself "understands" anything.
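The brain-analogy above can be sketched as a pipeline of narrow modules, each as "non-understanding" as Searle's operator. The module names and rules here are hypothetical, chosen only to mirror the language-center/other-areas split the poster describes:

```python
# Sketch of the argument: each module just applies rules; "understanding",
# if anywhere, would be a property of the whole system, not of any part.

def language_parse(utterance: str) -> dict:
    # "Language center", inbound: purely syntactic classification.
    kind = "question" if utterance.endswith("?") else "statement"
    return {"type": kind, "tokens": utterance.rstrip("?").split()}

def deliberate(parsed: dict) -> str:
    # "Other brain areas": choose an intent -- still just rule-following.
    return "affirm" if parsed["type"] == "question" else "acknowledge"

def language_emit(intent: str) -> str:
    # "Language center", outbound: map intent to a canned surface form.
    return {"affirm": "Yes.", "acknowledge": "Noted."}[intent]

def respond(utterance: str) -> str:
    return language_emit(deliberate(language_parse(utterance)))
```

No single function here "knows" anything, which is exactly the point: singling out the parser (the man in the room) and noting that it doesn't understand tells you nothing about the system as a whole.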
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#25918020)
Date: July 12th, 2014 5:17 PM Author: Transparent Very Tactful Hell Ratface
yeah he's just a biomo, it seems.
that's like a pre-Wright brothers bro saying physiological substrate is necessary for flight.
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#25918078)
Date: July 12th, 2014 5:29 PM Author: Transparent Very Tactful Hell Ratface
"His point is that computers that operate using systems of synatic rules do not understand semantic content. They can follow the rules that are fed into them, but they do not understand what those rules means."
It's pretty obvious what he is saying, but the argument was proposed to refute the possibility of AI that can "understand" what it's doing when it passes the Turing test. And it's used by AI-naysayers, including him, to show that you can't have "intelligent" or "understanding" AI. Which I'm saying is 100% wrong.
Searle's "Chinese Room" would only represent one part of the AI infrastructure, the language interpreter. By that logic, the human brain is not "intelligent" or "understanding" because your Temporal Lobe "doesn't know" what it's doing...it's just applying rules you've learned.
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#25918137)
Date: July 12th, 2014 5:31 PM Author: Bearded piazza queen of the night
I agree that it doesn't prove AI is impossible. Anyone that says it does is making a fairly large mistake.
"Searle's "Chinese Room" would only represent one part of the AI infrastructure, the language interpreter. By that measure, the human brain is not "intelligent" or "understanding" because your Temporal Lobe "doesn't know" what it's doing...it's just applying rules you've learned. "
Humans do understand semantic content, our consciousness isn't built merely on applying syntactic rules. It's hard to say much more than that since the nature of human consciousness isn't well understood by either philosophers or scientists.
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#25918150)
Date: July 12th, 2014 5:34 PM Author: Transparent Very Tactful Hell Ratface
"Humans do understand semantic content, our consciousness isn't built merely on applying syntactic rules."
Right. But the individual part(s) of our brain that would be performing the same function as the interpreter in the Chinese Room are equally non-understanding. So the argument is kind of defeating a strawman of what developed AI will likely be (if it's possible...which it seems like).
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#25918160)
Date: July 12th, 2014 5:45 PM Author: Transparent Very Tactful Hell Ratface
"The problem with your argument is that you're making all sorts of assumptions about how the human brain works and how it is linked to consciousness."
We know from fMRI/PET scans of people with and without brain damage to specific brain regions that certain regions (i.e. language processing centers) are very specific and not involved in much other than processing language rules.
"Something else is going on in our brains or our minds, but we do not know what that thing is "
This is true, although it would surprise me if it were anything more than a lot of highly-interconnected parts, most of which are relatively non-understanding individually.
We know from sending electrical impulses to different brain regions that we can "shut off" or "stimulate" certain functions that affect consciousness. We just don't know the details.
Given that if you "shut off" certain areas of the brain, a person will become unconscious, it doesn't seem likely that there are any parts of consciousness that don't result from biology. But it's possible.
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#25918214)
Date: July 12th, 2014 5:39 PM Author: bat-shit-crazy travel guidebook
dogs can read human faces, and without intellectual comprehension of the analytic content behind those expressions, respond in (empirically derived from trial and error) appropriate ways (conducive to getting treats).
are dogs working within a chinese room framework?
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#25918185)
Date: July 26th, 2014 11:15 PM Author: Transparent Very Tactful Hell Ratface
"are dogs working within a chinese room framework?"
explain. anything that processes, interprets and responds to information using a set of rules could fit within a "framework" of the Chinese room. but whether you can conclude from that that it's conscious is a different question.
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#26007483)
Date: July 27th, 2014 5:14 AM Author: Transparent Very Tactful Hell Ratface
yeah why do you think I clarified?
because the argument is commonly used against AI and that's how I discovered it. I forgot to poast that in the OP though.
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#26008652)
Date: July 27th, 2014 5:20 AM Author: Transparent Very Tactful Hell Ratface
right, yet humans are "understanding" as per the definition.
so it's clear that there's something more. or, more likely, understanding is an emergent property of a complex system (see my analogy above about the human temporal lobe)
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#26008663)
Date: July 27th, 2014 5:24 AM Author: Transparent Very Tactful Hell Ratface
"or there's something special about the human biological substrate, as searle believes"
yeah there's no evidence of this bro. and no reason to believe it.
unless you're a penrose fag who thinks microtubules create human consciousness
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#26008668)
Date: July 27th, 2014 5:29 AM Author: Transparent Very Tactful Hell Ratface
"not sure what evidence emergence has but i haven't thought about the issue in a few years."
there really isn't any. and anything that talks about conscious experience is inherently doomed to failure.
we were talking about this earlier this evening in some other thread but...it's not so important that AI is "understanding" in a fully-sentient, human-like conscious sense (which is impossible to prove and shaky to begin with) but rather that it can do higher-level thinking that is equal to or greater than our most powerful minds. as SAPIENPIGS
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#26008680)
Date: July 27th, 2014 5:35 AM Author: Transparent Very Tactful Hell Ratface
"searle's argument is about semantic understanding and, arguably, through that about consciousness. not about what's important"
no, we aren't talking about searle's argument. we were at first, now we aren't. we are talking about consciousness in general.
what I'm saying is we already concluded that it's bullshit and impossible to say anything about "consciousness" and by extension a certain chill human-like "understanding". so it's more important that we focus on what we CAN do.
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#26008692)
Date: July 27th, 2014 5:39 AM Author: Transparent Very Tactful Hell Ratface
it's pretty simple. it's a qualitative state only you can experience.
anything else is inferred.
edit: it's a subjective state you can't qualify
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#26008701)
Date: July 27th, 2014 5:45 AM Author: Transparent Very Tactful Hell Ratface
look I'm really drunk but I'll attempt to engage you further
what I meant was subjective
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#26008709)
Date: March 10th, 2026 6:11 PM Author: Emilio Plan Truster
the key element of thinking about this stuff is embodiment and self-referential self-interest imo
an agent doesn't necessarily have to be fully biological in order to have a mind in the way that a human has a mind. but it does have to be a continuously embodied, self-referencing entity with a real self-interest
(http://www.autoadmit.com/thread.php?thread_id=2617109&forum_id=2#49732775)