which answer do you choose in this puzzle? (xo engagement bait)
Date: March 10th, 2026 11:59 AM Author: Emilio Plan Truster
TOTAL VOTES:
BOX A ONLY: 8
BOTH BOXES: 3
----------------------------------
Setup: You walk into a room. There are two boxes.
Box A (opaque) — contains either $1,000,000 or $0.
Box B (transparent) — contains $1,000 (you can see this).
A highly reliable, God-Like AI Predictor (almost always right) has already predicted, before you walked into the room, whether you will take only Box A or both boxes. Its prediction accounts for the fact that you know the rules of the game and know that the prediction has been made.
If the Predictor predicted you will take only Box A, it put $1,000,000 in Box A.
If the Predictor predicted you will take both boxes, it put $0 in Box A.
Now the boxes are in front of you. You must choose right now, and your choice does not affect the Predictor anymore (the prediction and the filling already happened).
--------------------------------
Do you take only Box A (hoping for $1,000,000), or do you take both Box A and Box B (guaranteeing the $1,000 at the risk of losing the million, but with a higher maximum payoff of $1,001,000)? Why?
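----------------------------------
A minimal simulation sketch of the game as stated (Python; the play() helper and its defaults are illustrative, not anything from the thread). Note the assumption baked in: the predictor's stated accuracy holds conditional on whatever you actually choose, which is exactly the premise two-boxers dispute, since the boxes are already filled:

import random

def play(strategy, accuracy=0.999, trials=100_000):
    # simulate one strategy: "one_box" or "two_box"
    # the predictor guesses the player's actual choice with the given
    # accuracy, and Box A is filled before the choice is made
    total = 0
    for _ in range(trials):
        if random.random() < accuracy:
            predicted = strategy
        else:
            predicted = "two_box" if strategy == "one_box" else "one_box"
        box_a = 1_000_000 if predicted == "one_box" else 0
        total += box_a if strategy == "one_box" else box_a + 1_000
    return total / trials

print(play("one_box"))   # ~999,000 with a 99.9%-accurate predictor
print(play("two_box"))   # ~2,000

Under that conditional-accuracy assumption one-boxing wins by a wide margin; drop the assumption (contents fixed regardless of choice) and two-boxing is better by exactly $1,000 in every case.
----------------------------------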
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49731649)
Date: March 10th, 2026 12:57 PM Author: spiritually female godfather (gunneratttt)
i misread the hypo and thought i could only choose one.
id take both. the predictor would predict any rational person would take both, so it wouldn't matter, but its costless to see if it made an error since it's almost always right.
unless the predictor is in the first leg too (the predictor is predicting which move id take with a predictor). now im in a game of golden balls with essentially an ai clone. in which case id go with A, because $1000 is inconsequential and i think theres a greater than 1:1000 chance the predictor would think id choose that, so it's positive EV.
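(the 1:1000 threshold checks out as arithmetic on the reading where one-boxing means trading a sure $1,000 for a chance q at the million: that trade is positive EV whenever q × $1,000,000 > $1,000, i.e. q > 1/1000. it ignores that two-boxing might also collect the million, which is the contested part.)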
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49731834)
Date: March 10th, 2026 1:23 PM Author: Emilio Plan Truster
dude, what? lol
what exact same situation? isolation from what? huh?
the predictor is predicting (with 99.99999%+ accuracy) which choice you will make based on the rules of the game laid out in op. not sure if this helps or not
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49731909)
Date: March 10th, 2026 1:37 PM Author: spiritually female godfather (gunneratttt)
your hypo is ambiguous about whether the predictor is predicting whether id take both boxes if there was no predictor (just two boxes, i can have one or both, no information about the odds of what's in box a or how that was decided) or whether it's predicting what i would do under the exact same circumstances (with a predictor).
then i misread the question as "do you take A or B" instead of "do you take A or both?" under my A or B reading, the 100% correct answer would be B, and i thought you were asking this to see how many people would choose incorrectly because $1k is insignificant
if i had read your question correctly the context clue would have resolved the ambiguity. it was a rc fail on my part.
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49731956)
Date: March 10th, 2026 2:01 PM Author: spiritually female godfather (gunneratttt)
yes i understood that a few poasts ago and said id take A.
i would choose A because $1k is inconsequential. im truly engaging in this hypo by *not* thinking of it further than that. if i choose A based on my gut, legitimately, not trying to game the predictor as it has already put/not put the money in A, then there's at least a chance the predictor would have predicted that (and under your hypo it *did* do that, because it was legitimately my first instinct.)
its kinda like quantum mechanics. the longer i think about it, the more likely i am to reach an optimal answer, which could be both. thus my actions right now do impact what the predictor has already done.
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732051)
Date: March 10th, 2026 2:21 PM Author: Nazca Redlines
On the one hand, this is kind of a twist on the Monty Hall problem, in which taking both boxes would be cr given that the AI has already made its prediction. On the other hand, Box A only is cr, and I'm taking Box A only.
The rational choice for someone (a) who can think through the exercise and (b) for whom $1,000 is nbd is to take only Box A and hope the AI sees that you would see it that way.
The answer might change if the AI is predicting for a random, average person, who is more likely to take both boxes.
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732114)
Date: March 10th, 2026 3:08 PM Author: spiritually female godfather (gunneratttt)
tyvm
most poasters are very insecure about ever being wrong online. only like 15 people participated in my election guessing game where i offered a $50 giftcard. yet hundreds will make vague, non-falsifiable "predictions."
from the face of the hypo most can tell its somewhat complicated and that there might be an objectively correct answer. and they'd rather not risk damage to their e-rep.
it's a lawyer forum, no surprise many are risk-averse bothboxmos
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732273)
Date: March 10th, 2026 3:40 PM Author: Emilio Plan Truster
indeed. but this hypo is completely different than the one in the OP, and in fact it illustrates what the hypo in the OP is all about
in your hypo, it is 100% clear, without any doubt whatsoever, that we would be risking our entire net worth if we made the bet
in the hypo in the OP, the *entire question* is whether or not one is actually "risking" anything if one chooses to take two boxes rather than just Box A. that is what it's all about
that is why it's important to fully engage with the hypo in the OP. because otherwise you're not giving a meaningful answer to the question
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732366)
Date: March 10th, 2026 3:54 PM Author: Emilio Plan Truster
you need to fill in those assumptions yourself as part of your answer to the question. that's the point
"it's entirely rational to act how you want the AI to believe you would act in this scenario (therefore i choose only Box A)"
here you are filling in those assumptions. good. this is a totally fine answer. naturally though, a two-boxer is going to point out: "but the Predictor's decision has already been made. you lose nothing by taking the other box too; you just get a free +$1000 EV"
and then the thought experiment continues from here, because the real question is: under what assumptions/conditions *would* you actually be risking the $1,000,000 by taking both boxes?
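----------------------------------
one way to make that concrete: if you grant that the predictor's accuracy p holds conditional on your actual choice (the contested assumption), the break-even point falls straight out of the OP's payoffs. a quick sketch, not anyone's official analysis:

# EV given predictor accuracy p, conditional on the actual choice:
#   one-box:  p * 1_000_000
#   two-box:  (1 - p) * 1_000_000 + 1_000
# one-boxing wins when p * 1e6 > (1 - p) * 1e6 + 1e3, i.e. p > 0.5005
for p in (0.5, 0.5005, 0.6, 0.999):
    one_box = p * 1_000_000
    two_box = (1 - p) * 1_000_000 + 1_000
    print(f"p={p}: one-box EV={one_box:,.0f}, two-box EV={two_box:,.0f}")

hold the contents fixed instead (the causal reading) and two-boxing is better by exactly $1,000 at every p, which is the whole disagreement in one line.
----------------------------------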
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732406)
Date: March 10th, 2026 4:01 PM Author: spiritually female godfather (gunneratttt)
well they know theyre risking something, even if they dont understand what or how, by virtue of the hypo being debatable even by very smart people. not getting something you would have had in 10 seconds is functionally equivalent to losing something you already have.
lets say im certain A is empty and twobox is the best move, but box B has a penny. i know im human and fallible, so even though im certain, id hedge on solipsism because its almost costless. or reverse it and have box B hold $999k. now, even if im nearly certain box A is full, the guaranteed money is too much to pass up.
the amount in the boxes, the deciders situation, and their confidence in their decision are all going to factor in.
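----------------------------------
that payoff sensitivity is easy to parameterize. a rough sketch (the breakeven() helper is made up for illustration), giving the predictor accuracy above which one-boxing beats two-boxing, on the same conditional-accuracy reading as the earlier sketch:

def breakeven(big, small):
    # opaque prize 'big', visible amount 'small':
    # one-box when p * big > (1 - p) * big + small
    return (big + small) / (2 * big)

print(breakeven(1_000_000, 1_000))    # 0.5005   (the OP's numbers)
print(breakeven(1_000_000, 0.01))     # ~0.5     (box B holds a penny)
print(breakeven(1_000_000, 999_000))  # 0.9995   (box B holds $999k)

with a penny visible you should one-box on anything better than a coin flip; with $999k visible you need a near-perfect predictor, which matches the intuition above.
----------------------------------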
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732419)
Date: March 10th, 2026 4:43 PM Author: spiritually female godfather (gunneratttt)
well you're coming into it fully engaged with a goal in mind. and you're trying to tempt people to engage. even very smart people require this motivation sometimes. if it was framed in terms of widgets it would also be the same, but people have a hard time conceptualizing that being a decision theyd devote any real thought to.
if i wanted you to engage with something id frame it around something you actually care about. like when i tried to illustrate openmindedness to consuela. although that has its own pitfalls because sometimes people are so emotionally tethered to something they cant tolerate it being questioned. or they miss the point completely, like how consuela keeps saying im a bootlicking vaxxcuck.
ime smart people are even more susceptible to this than midwits. smart people tend to have a lot of ego about their intelligence and correctness. e.g. risk-averse law fags terrified to risk being wrong even anonymously online about dumb shit.
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732529)
Date: March 10th, 2026 3:14 PM Author: goyim in abundanceeeeee
Can you replace AI in the hypo with something that better illustrates the point you're trying to make? Because it's the most crucial part of the hypo and it's still unclear. If you just said there's a box with a thousand and a box with zero or a million it comes down simply to risk tolerance and necessity. If someone desperately needed a grand they don't risk taking box A.
It's also totally unclear why someone would take both boxes or not, and if this has any effect on the chance the computer decides they were worthy or not of leaving the million.
The hypo is totally broken because it comes down to whether or not we trust the decision-making ability of a computer, which may or may not be right, and can't even be proven either way until after you've taken a box or boxes. The whole hypo suffers from a lack of clarity. Is it just a sword-in-the-stone situation where we're just hoping the robot deems us worthy? I fail to see why there's any point in guessing what a robot thinks of us. Wouldn't it then follow that the optimal strategy would be to appeal to the robot's inner sense of goodness enough to leave the million in the first box? That's what we're left with ultimately, and, perhaps, unfortunately.
If you're insinuating that the best course is to take Box B, prove the computer right about the goodness of humanity so that it must leave the million, then build a time machine and go back in time to take the first box, then I think you could have articulated that a lot better.
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732292)
Date: March 10th, 2026 5:15 PM Author: Emilio Plan Truster
this is known as Newcomb's Paradox/Problem, you can read about it here if you're interested:
https://en.wikipedia.org/wiki/Newcomb%27s_problem
the real meat of the problem is getting into causality and whether or not "free will" is a real thing, and under what conditions it is/could be real
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49732611)
Date: March 10th, 2026 8:05 PM Author: Lab Diamond Dallas Trump
I'm surprised nobody has pointed this out, but it really doesn't matter what decision you make. The predictor's abilities are "godlike." That should mean it has specific insight into the particular reasoning process of the decisionmaker. The hypo is muddied somewhat by the predictor being "almost always right," but I assume those miss cases are likely caused by someone choosing contrary to their own reasoning process.
The point though, is that there is no way for the predictor to "almost always" reach the correct decision unless it is able to analyze the specific decisionmaker and replicate their thought process with a high degree of accuracy. Being correct almost always would involve correctly predicting how a wide range of intellects would approach the problem.
So, there is no probabilistic "correct" answer between one box and two boxes under this hypothetical. The actual analysis of the choice itself is a red herring. The nature of the predictive device and how it works is the most important part of the hypo, and my interpretation is that you maximize your outcome by making whatever choice is consistent with your internal reasoning process. Or alternatively, if the predictor is even able to predict that you might disregard your internal reasoning and make the opposite choice, then it's simply impossible to lose the hypo.
Either way, the choice is irrelevant. However, the hypo is somewhat incomplete because it does not describe the circumstances under which the predictor will be inaccurate, which could entirely change the problem.
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49733205)
Date: March 10th, 2026 8:45 PM Author: Lab Diamond Dallas Trump
The only assumption my position really needs is that people won't uniformly make one choice or the other -- the choices will be split, which is impossible to prove from a logical perspective but unassailable from an empirical perspective. But that just supports my contention that they are TTT charlatan hacks for spending a lot of time trying to logically "debate" and "prove" something that I determined wasn't susceptible to that within 5 minutes of looking at it.
-----------------
In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."[1][4] The problem continues to divide philosophers today.[9][10] In a 2020 survey, a modest plurality of professional philosophers chose to take both boxes (39.0% versus 31.2%).
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49733320)
Date: March 11th, 2026 12:04 AM Author: Lab Diamond Dallas Trump
This looks like a universal player to me, breh
--------------
In the standard version of Newcomb's problem, two boxes are designated A and B. The player is given a choice between taking only box B or taking both boxes A and B. The player knows the following:[4]
Box A is transparent, or open, and always contains a visible $1,000.
Box B is opaque, or closed, and its content has already been set by the predictor:
If the predictor has predicted that the player will take both boxes A and B, then box B contains nothing.
If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.
The player does not know what the predictor predicted or what box B contains while making the choice.
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49733778)
Date: March 10th, 2026 11:42 PM Author: things from the 90s/00s so ethereal and dreamlike:
>i'm a simple goy
simple-hearted yet not even in the slightest simple-minded (even your [cyber]detractors can't deny your formidable mental horsepower, IIRC, but then again Talmudic rhetoric is 180 so who knows), or:
Wise as a serpent yet innocent as a dove tp :-)
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49733732)
Date: March 10th, 2026 8:42 PM Author: goyim in abundanceeeeee
There's no punishment for the computer being wrong. It really matters whether it's infallible or fallible. If it's infallible you should take the mystery box, because then it would have correctly predicted that you would and left the million dollars. If you take both and the computer is right then you only get $1,000. So it comes down to whether the computer can be wrong: if it's always right then there's no point in ever taking two boxes, but a computer can't always be right, so you take two boxes and get the million when it errs. No one points this out.
If the computer has the potential to be wrong then it is a capricious god and you should take two boxes every time.
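----------------------------------
the fallible-computer intuition is the classic dominance argument. a throwaway sketch (payoff() is made up) holding the contents fixed, i.e. assuming the prediction does not track your actual choice:

def payoff(choice, box_a):
    # choice: "one" or "both"; box_a: contents of the opaque box
    return box_a if choice == "one" else box_a + 1_000

for box_a in (1_000_000, 0):
    print(box_a, payoff("one", box_a), payoff("both", box_a))
# A holds $1M: one-box 1,000,000 vs both 1,001,000
# A holds $0:  one-box         0 vs both     1,000

two-boxing wins in both rows, so if the contents are truly fixed and uncorrelated with your choice, taking both dominates; one-boxing only comes out ahead when the prediction tracks what you actually do.
----------------------------------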
(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=week#49733314)