New ChatGPT seamlessly combines multimodal capabilities
Date: May 13th, 2024 2:17 PM Author: Cracking At-the-ready Garrison Telephone
https://openai.com/index/hello-gpt-4o/
pretty neat. Can’t wait until they stick this in robots.
(http://www.autoadmit.com/thread.php?thread_id=5528093&forum_id=2#47658083)
Date: May 13th, 2024 2:33 PM Author: Sepia Cumskin Electric Furnace
"our new flagship model that can reason-"
let me stop you right there big guy
(http://www.autoadmit.com/thread.php?thread_id=5528093&forum_id=2#47658128)
Date: May 13th, 2024 3:03 PM Author: Sepia Cumskin Electric Furnace
LLMs cannot reason or plan dawg, those are their major limitations
you can get an LLM to contradict itself like 5 times in one response. they're not "reasoning" at all
you can feed an LLM a series of prompts that will enable it to spit out outputs that look like a progressive series of steps of reasoning, but you're actually the one who is reasoning in that case, not the LLM
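to make that concrete, here's a toy sketch of that kind of prompt chain. the `llm()` function is a hypothetical stub standing in for any chat-completion call (not a real API); the point is that the step-by-step structure lives in the human-written scaffold, not in the model:

```python
# Hypothetical stub standing in for any chat-completion call.
# What the real model would say is irrelevant to the point here.
def llm(prompt):
    return f"<model output for: {prompt!r}>"

question = "Is this contract clause enforceable?"

# The *human* authored this decomposition into reasoning steps.
# The model just fills in text at each step it is handed.
steps = [
    "Restate the question in plain terms: ",
    "List the tests that apply: ",
    "Apply each test to the facts: ",
    "State a conclusion: ",
]

transcript = []
context = question
for step in steps:
    out = llm(step + context)
    transcript.append(out)
    context = out  # chain each output into the next prompt

# transcript now *reads* like progressive reasoning, but the
# decomposition into steps came from the prompter, not the model.
```

run it and the transcript looks like a chain of reasoning, but every pivot from one step to the next was scripted in advance by the person writing the prompts.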
(http://www.autoadmit.com/thread.php?thread_id=5528093&forum_id=2#47658221)
Date: May 13th, 2024 3:26 PM Author: Sepia Cumskin Electric Furnace
reasoning and planning are the same thing in substance. you can't reason without creating an abstract model of the matter at hand in your head (same thing as planning)
what stockfish is doing is just calculation. it looks like reasoning to us because we are humans, and when we play chess we find moves via reasoning, so we project that onto the engine when we observe it playing chess. but all it's doing is calculating
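the "just calculating" point in miniature: an engine enumerates moves and replies and keeps whichever line scores best. this is a toy negamax over a hand-made two-ply tree, nothing like Stockfish's actual search (which adds alpha-beta pruning, move ordering, a neural eval, etc.), but it is the same species of brute arithmetic:

```python
# Toy negamax search over a hand-built game tree. Each leaf carries a
# static evaluation from the perspective of the side to move at that node.
def negamax(node, depth):
    """Best achievable score for the side to move at this node."""
    children = node.get("children")
    if depth == 0 or not children:
        return node["score"]  # leaf: just read off the evaluation
    # Pure enumeration: try every move, negate the opponent's best reply.
    return max(-negamax(child, depth - 1) for child in children)

# A tiny position: two candidate moves, each with two opponent replies.
tree = {
    "children": [
        {"children": [{"score": -3}, {"score": 1}]},   # move A
        {"children": [{"score": -5}, {"score": -2}]},  # move B
    ]
}
```

`negamax(tree, 2)` picks move A (the opponent's best reply costs us least there). no model of the opponent's intentions, no concepts, just max and minus signs applied a few billion times a second.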
i think a more interesting and pertinent version of the point you're trying to make is: are humans below ~125 IQ ever actually "reasoning" outside of extremely simple and mostly unconscious and intuitive/automatic responses? and i think that the answer is no, and that this question and answer have a lot more profound and serious implications than anything to do with LLMs
(http://www.autoadmit.com/thread.php?thread_id=5528093&forum_id=2#47658314)