  The most prestigious law school admissions discussion board in the world.

which answer do you choose in this puzzle? (xo engagement bait)

Reply Favorite

Date: March 10th, 2026 11:59 AM
Author: Emilio Plan Truster

TOTAL VOTES:

BOX A ONLY: 8

BOTH BOXES: 3

----------------------------------

Setup: You walk into a room. There are two boxes.

Box A (opaque) — contains either $1,000,000 or $0.

Box B (transparent) — contains $1,000 (you can see this).

A highly reliable, God-Like AI Predictor (almost always right) has already predicted, before you walked into the room, whether you will take only Box A or both boxes. Its prediction includes the assumption that you will be aware of the rules of the game and of it making this prediction.

If the Predictor predicted you will take only Box A, they put $1,000,000 in Box A.

If the Predictor predicted you will take both boxes, they put $0 in Box A.

Now the boxes are in front of you. You must choose right now, and your choice does not affect the Predictor anymore (the prediction and the filling already happened).

--------------------------------

Do you take only Box A (hoping for $1,000,000), or do you take both Box A and Box B (guaranteeing the $1,000 but possibly losing the million, with a higher maximum payoff of $1,001,000)? Why?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731649)



Reply Favorite

Date: March 10th, 2026 11:59 AM
Author: Emilio Plan Truster

i'm a two-boxer myself

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731651)



Reply Favorite

Date: March 10th, 2026 12:19 PM
Author: Emilio Plan Truster



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731723)



Reply Favorite

Date: March 10th, 2026 12:25 PM
Author: China

but i did eat breakfast today

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731753)



Reply Favorite

Date: March 10th, 2026 12:27 PM
Author: richard clock

take box a, the 1K in box b is immaterial compared to 1M so not worth risking it

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731757)



Reply Favorite

Date: March 10th, 2026 12:29 PM
Author: @grok, is this true?

A only, but my intuition is that there is some non-intuitive reason this is incorrect.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731759)



Reply Favorite

Date: March 10th, 2026 12:31 PM
Author: spiritually female godfather (gunneratttt)

if the predictor is predicting based on me taking two boxes that may contain money, isn't stealing, have the option of taking both, etc. then ill take box b. no rational person would leave both boxes. i certainly wouldn't if its costless to take both, so i know the predictor would predict id take both.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731765)



Reply Favorite

Date: March 10th, 2026 12:53 PM
Author: Emilio Plan Truster

i can't even parse this post

what is your choice

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731826)



Reply Favorite

Date: March 10th, 2026 12:57 PM
Author: spiritually female godfather (gunneratttt)

i misread the hypo and thought i could only choose one.

id take both. the predictor would predict any rational person would take both, so it wouldn't matter, but its costless to see if it made an error since it's almost always right.

unless the predictor is in the first leg too (the predictor is predicting which move id take with a predictor). now im in a game of golden balls with essentially an ai clone. in which case id go with A, because $1000 is inconsequential and i think theres a greater than 1:1000 chance the predictor would think id choose that, so it's positive EV.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731834)
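The "greater than 1:1000 chance" claim above can be checked with a quick sketch. This is one assumed reading of the poster's math (not spelled out in his poast): under an evidential view the prediction tracks your actual choice, so one-boxing pays $1,000,000 when the predictor foresaw it, and the alternative is walking away with roughly the sure $1,000 from Box B. The function name is mine.

```python
# Assumed reading of the 1:1000 claim: forgoing Box B's sure $1,000
# pays off whenever q * $1,000,000 exceeds $1,000, i.e. q > 1/1000,
# where q is the chance the predictor foresaw you taking only Box A.

BIG, SMALL = 1_000_000, 1_000

def one_box_beats_sure_thing(q):
    # q = probability the predictor predicted you'd take only Box A.
    return q * BIG > SMALL

print(one_box_beats_sure_thing(0.0009))  # just under 1:1000 -> False
print(one_box_beats_sure_thing(0.0011))  # just over 1:1000 -> True
```

The breakeven at exactly q = 0.001 is why his stated threshold comes out to 1:1000.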



Reply Favorite

Date: March 10th, 2026 1:08 PM
Author: Emilio Plan Truster

still don't understand wtf you are trying to say with "the predictor in the first leg" but this is a vote for taking both

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731859)



Reply Favorite

Date: March 10th, 2026 1:11 PM
Author: spiritually female godfather (gunneratttt)

if the predictor is predicting based on what id choose in that exact same situation, then ill take box A

if the predictor is predicting what id do in isolation, then ill take both

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731865)



Reply Favorite

Date: March 10th, 2026 1:23 PM
Author: Emilio Plan Truster

dude, what? lol

what exact same situation? isolation from what? huh?

the predictor is predicting (with 99.99999%+ accuracy) which choice you will make based on the rules of the game laid out in op. not sure if this helps or not

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731909)



Reply Favorite

Date: March 10th, 2026 1:37 PM
Author: spiritually female godfather (gunneratttt)

your hypo is ambiguous about whether the predictor is predicting whether id take both boxes if there was no predictor (just two boxes, i can have one or both, no information about the odds of what's in box a or how that was decided) or whether it's predicting what i would do under the exact same circumstances (with a predictor).

then i misread the question as "do you take A or B" instead of "do you take A or both?" under my A or B reading, the 100% correct answer would be B, and i thought you were asking this to see how many people would choose incorrectly because $1k is insignificant

if i had read your question correctly the context clue would have resolved the ambiguity. it was a rc fail on my part.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731956)



Reply Favorite

Date: March 10th, 2026 1:41 PM
Author: Emilio Plan Truster

there is no hypo without the predictor and its prediction, and your awareness of the predictor and him making a prediction. it's woven in to the hypo

although the way you are thinking about this is on the right track imo

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731975)



Reply Favorite

Date: March 10th, 2026 2:01 PM
Author: spiritually female godfather (gunneratttt)

yes i understood that a few poasts ago and said id take A.

i would choose A because $1k is inconsequential. im truly engaging in this hypo by *not* thinking of it further than that. if i choose A based on my gut, legitimately, not trying to game the predictor as it has already put/not put the money in A, then there's at least a chance the predictor would have predicted that (and under your hypo it *did* do that, because it was legitimately my first instinct.)

its kinda like quantum mechanics. the longer i think about it, the more likely i am to reach an optimal answer, which could be both. thus my actions right now do impact what the predictor has already done.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732051)



Reply Favorite

Date: March 10th, 2026 12:39 PM
Author: Fucking Fuckface

Box A only. I don't really care about $1k. Any semi-competent AI would know this about me. And if I'm wrong, oh well. That's like 4 visits to a decent restaurant at today's prices

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731784)



Reply Favorite

Date: March 10th, 2026 8:21 PM
Author: you\'re the puppet



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49733264)



Reply Favorite

Date: March 10th, 2026 12:42 PM
Author: peeface

i say A, because i assume this AI will use these posts in its training data and i want it to believe that is the choice i would make.



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731790)



Reply Favorite

Date: March 10th, 2026 1:07 PM
Author: Emilio Plan Truster



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731856)



Reply Favorite

Date: March 10th, 2026 12:43 PM
Author: Waingro

dont take either. just open up the opaque one to see if theres money in it and take the money if there is.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731793)



Reply Favorite

Date: March 10th, 2026 12:58 PM
Author: Wes Scantlin

Height of Predictor?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731835)



Reply Favorite

Date: March 10th, 2026 1:17 PM
Author: things from the 90s/00s so ethereal and dreamlike:



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731885)



Reply Favorite

Date: March 10th, 2026 1:11 PM
Author: jonathan penis

does the ai assume i know the rules of the game before it makes its selection? seems obvious i would only choose Box A in that scenario.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731864)



Reply Favorite

Date: March 10th, 2026 1:20 PM
Author: Emilio Plan Truster

yes

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731901)



Reply Favorite

Date: March 10th, 2026 1:40 PM
Author: lex

This is a key detail. Please revise the hypo to account for this critical information. Thanks

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731971)



Reply Favorite

Date: March 10th, 2026 1:47 PM
Author: Emilio Plan Truster

i mean it's actually implicit in the original wording. but i added an additional sentence to clarify this. not actually sure it makes things clearer though...let me know

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731998)



Reply Favorite

Date: March 10th, 2026 1:23 PM
Author: goyim in abundanceeeeee

If I take Box A and there's no money in it, then does this necessarily mean that the AI predictor was inherently wrong at guessing what I would do, and if so, can I sue for damages?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49731911)



Reply Favorite

Date: March 10th, 2026 1:49 PM
Author: tancredi marchiolo

there is no situation where id take box b even if a was empty

not worth my time

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732003)



Reply Favorite

Date: March 10th, 2026 1:53 PM
Author: Emilio Plan Truster

then increase the amount of money in Box B to whatever amount of money would be worth your time, in order to make the hypo work

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732019)



Reply Favorite

Date: March 10th, 2026 1:54 PM
Author: goyim in abundanceeeeee

This is the plot of Deal or No Deal dummy

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732023)



Reply Favorite

Date: March 10th, 2026 1:55 PM
Author: tancredi marchiolo

500k

in that situation i only take b

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732026)



Reply Favorite

Date: March 10th, 2026 1:57 PM
Author: goyim in abundanceeeeee

They had an entire daytime TV show about this that ran for 15 years. And you could even make a demand for an amount of money that would get you to walk away from still gambling on the possibility of winning more money and the producers would either agree or shoot you down based on your immediate odds of winning more money.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732036)



Reply Favorite

Date: March 10th, 2026 1:57 PM
Author: Emilio Plan Truster

you are misreading the hypo

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732038)



Reply Favorite

Date: March 10th, 2026 2:08 PM
Author: tancredi marchiolo

i didnt even read it

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732074)



Reply Favorite

Date: March 10th, 2026 2:08 PM
Author: things from the 90s/00s so ethereal and dreamlike:

180

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732075)



Reply Favorite

Date: March 10th, 2026 2:09 PM
Author: richard clock



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732077)



Reply Favorite

Date: March 10th, 2026 2:12 PM
Author: fatty nigger



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732087)



Reply Favorite

Date: March 10th, 2026 2:12 PM
Author: spiritually female godfather (gunneratttt)



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732089)



Reply Favorite

Date: March 10th, 2026 2:26 PM
Author: Nude Karlstack (🧐)



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732130)



Reply Favorite

Date: March 10th, 2026 3:53 PM
Author: @grok, is this true?



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732401)



Reply Favorite

Date: March 10th, 2026 2:12 PM
Author: fatty nigger

what's in Box C?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732090)



Reply Favorite

Date: March 10th, 2026 2:21 PM
Author: Nazca Redlines

On the one hand, this is kind of a twist on the Monty Hall problem, in which taking both boxes would be cr given that the AI has already made its prediction. On the other hand, Box A only is cr, and I'm taking Box A only.

The rational choice for someone (a) who can think through the exercise and (b) for whom $1,000 is nbd is to take only Box A and hope the AI sees that you would see it that way.

The answer might change if the AI is predicting for a random, average person, who is more likely to take both boxes.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732114)



Reply Favorite

Date: March 10th, 2026 2:32 PM
Author: Emilio Plan Truster

the "$1000 is nbd" or "i don't want to risk it" are sort of cop outs from fully honestly answering the hypo

i thought about increasing the amount from $1000 but this is a famous philosophical problem so i didn't want to tweak it

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732144)



Reply Favorite

Date: March 10th, 2026 2:39 PM
Author: spiritually female godfather (gunneratttt)

after you feel this thread has run its course please poast the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732178)



Reply Favorite

Date: March 10th, 2026 2:42 PM
Author: Emilio Plan Truster

the OP is the exact problem. i didn't change anything about it because i wanted to compare what XO people say against what other people say

nm realized you probably want to know the name and background of it, i'll poast it later

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732188)



Reply Favorite

Date: March 10th, 2026 2:42 PM
Author: richard clock

what do other people say

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732190)



Reply Favorite

Date: March 10th, 2026 2:46 PM
Author: spiritually female godfather (gunneratttt)

found it.

woah i was right about quantum retrocausality

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732203)



Reply Favorite

Date: March 10th, 2026 2:53 PM
Author: Emilio Plan Truster

yeah you're the only person who has really fully engaged with the hypo so far. it is not at all a straightforward problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732234)



Reply Favorite

Date: March 10th, 2026 3:08 PM
Author: spiritually female godfather (gunneratttt)

tyvm

most poasters are very insecure about ever being wrong online. only like 15 people participated in my election guessing game where i offered a $50 giftcard. yet hundreds will make vague, non-falsifiable "predictions."

from the face of the hypo most can tell its somewhat complicated and that there might be an objectively correct answer. and they'd rather not risk damage to their e-rep.

it's a lawyer forum, no surprise many are risk averse bothboxmos

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732273)



Reply Favorite

Date: March 10th, 2026 3:12 PM
Author: Emilio Plan Truster

wow. "lawyers?" your response?

(this is completely cr btw lmao all lawyers are the fucking same in this way)

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732287)



Reply Favorite

Date: March 10th, 2026 3:23 PM
Author: spiritually female godfather (gunneratttt)

not true just most. gunslingers like epah and cslg tend to do well because they're up against risk averse fags.

better call saul is the best legal show ever because it does a great job portraying this reality.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732312)



Reply Favorite

Date: March 10th, 2026 2:47 PM
Author: Fucking Fuckface

The value you assign to Box B is absolutely critical for the hypo, particularly in light of the claimed capability of the AI. It's not "a cop out" to take your hypo at face value

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732212)



Reply Favorite

Date: March 10th, 2026 2:52 PM
Author: Emilio Plan Truster

the reason why $1,000 is chosen for the hypo is to make the math clearer and cleaner if you actually perform a probabilistic EV calculation

you can look up the problem to see what i mean

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732231)
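The EV calculation the poster alludes to can be sketched directly, under the standard textbook assumption (not stated in the OP) that the predictor is correct with the same probability p whichever choice you make. The function names are mine.

```python
# Expected-value comparison for the OP's payoffs ($1,000,000 in Box A,
# $1,000 in Box B), assuming the predictor is right with probability p
# regardless of which choice you make.

def ev_one_box(p, big=1_000_000):
    # Take only Box A: you get $1M iff the predictor foresaw one-boxing.
    return p * big

def ev_two_box(p, big=1_000_000, small=1_000):
    # Take both: you get $1M only if the predictor wrongly foresaw
    # one-boxing, plus the guaranteed $1,000 from Box B.
    return (1 - p) * big + small

for p in (0.5, 0.5005, 0.9, 0.9999999):
    print(p, ev_one_box(p), ev_two_box(p))

# One-boxing pulls ahead once p > (big + small) / (2 * big) = 0.5005,
# which is the sense in which the small $1,000 keeps the math clean:
# the threshold sits barely above a coin flip.
```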



Reply Favorite

Date: March 10th, 2026 2:57 PM
Author: spiritually female godfather (gunneratttt)

cr. if box B had a penny everyone would choose A even if they were 99.999999% certain the predictor had it empty, because there is close to no value in B.

the og hypo is from 1969, goy superstar should have made it $10m and $10k so that it's the same hypo adjusted for inflation.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732242)



Reply Favorite

Date: March 10th, 2026 3:01 PM
Author: Emilio Plan Truster

lol, not a bad idea actually

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732254)



Reply Favorite

Date: March 10th, 2026 3:17 PM
Author: jonathan penis

mfcr. a more interesting twist (although in fairness one which could only be posed to people who, for the sake of argument, may or may not have eaten breakfast this morning) would be what percentage of Box A would have to be in Box B before the answer becomes non-obvious.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732300)
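The twist above has a clean closed form under the same symmetric-accuracy assumption used in EV treatments of the problem (an assumption, not something the poster states): if Box B holds a fraction r of Box A's prize, setting p·B = (1−p)·B + r·B gives the predictor accuracy at which the two choices tie. The function name is mine.

```python
# Breakeven predictor accuracy as a function of r = (Box B) / (Box A),
# from the indifference condition p*B = (1-p)*B + r*B  =>  p = (1+r)/2.

def breakeven_accuracy(r):
    # r is the transparent box's prize as a fraction of the opaque
    # box's prize, between 0 and 1.
    return (1 + r) / 2

for r in (0.001, 0.1, 0.5, 0.9):
    print(r, breakeven_accuracy(r))

# At r = 0.001 (the OP's $1k/$1M) almost any better-than-chance
# predictor favors one-boxing; at r = 0.9 it takes 95% accuracy,
# which is where the answer stops being obvious.
```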



Reply Favorite

Date: March 10th, 2026 2:58 PM
Author: Nazca Redlines

How are "$1000 is nbd" or "i don't want to risk it" cop outs?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732245)



Reply Favorite

Date: March 10th, 2026 3:03 PM
Author: Emilio Plan Truster

because we are Smart People who use Reason to decide on things, friend (are we not?!)

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732265)



Reply Favorite

Date: March 10th, 2026 3:04 PM
Author: richard clock

they arent cop outs. in fact, gunnerattt, who he claims is the only one to "fully engage" with the problem, also came to the conclusion that "$1000 is nbd".

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732267)



Reply Favorite

Date: March 10th, 2026 3:17 PM
Author: Emilio Plan Truster

if you read his reasoning above, he doesn't stop there. he should have kept going for it to be fully coherent though. but i'm pretty sure he did in his head

this problem is more about exploring and explaining one's rationale (including assumptions) for one's choice than it is about the "correctness" of the choice itself

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732299)



Reply Favorite

Date: March 10th, 2026 3:32 PM
Author: spiritually female godfather (gunneratttt)

no, the consequences and costs factor into decision making too. sometimes its rational to not do a mathematically rational thing.

let's say i offered to bet your entire net worth on something that was 51% in your favor paying even money. even though you have positive ev, you shouldn't do this, because the real cost of you losing everything isnt worth you having 2x. but if i offered you the same deal for $10, you would do it infinite times.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732331)
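The distinction the poster is drawing (positive EV but irrational to stake everything) is the standard bet-sizing point, and Kelly sizing makes it concrete. This is a sketch under the post's own numbers; the function names are mine.

```python
# A 51% even-money bet has positive expected profit, but staking your
# entire bankroll on it courts ruin. The Kelly criterion gives the
# bankroll fraction that maximizes long-run growth instead.

def edge_ev(p, stake):
    # Expected profit of an even-money bet won with probability p.
    return p * stake - (1 - p) * stake

def kelly_fraction(p, odds=1.0):
    # Optimal bankroll fraction for a bet paying `odds`-to-1:
    # f* = (p * (odds + 1) - 1) / odds; for even money, f* = 2p - 1.
    return (p * (odds + 1) - 1) / odds

p = 0.51
print(edge_ev(p, stake=100))   # ~ +2.0 profit per 100 staked
print(kelly_fraction(p))       # ~ 0.02: stake only ~2% of bankroll
```

So the $10 version of the bet is worth taking indefinitely, while the all-in version is not, exactly as the post argues.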



Reply Favorite

Date: March 10th, 2026 3:40 PM
Author: Emilio Plan Truster

indeed. but this hypo is completely different than the one in the OP, and in fact it illustrates what the hypo in the OP is all about

in your hypo, it is 100% clear, without any doubt whatsoever, that we would be risking our entire net worth if we made the bet

in the hypo in the OP, the *entire question* is whether or not one is actually "risking" anything if one chooses to take two boxes rather than just Box A. that is what it's all about

that is why it's important to fully engage with the hypo in the OP. because otherwise you're not giving a meaningful answer to the question

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732366)



Reply Favorite

Date: March 10th, 2026 3:45 PM
Author: goyim in abundanceeeeee

It's a terrible, nonsensical hypo and you're a lackwit with an upjumped sense of self importance. Most people did engage meaningfully with the hypo but you just didn't like any of the rational answers.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2\u0026show=posted#49732376)



Reply Favorite

Date: March 10th, 2026 3:46 PM
Author: richard clock

everyone here inherently understands that. it's impossible to know from your hypo how this Godlike AI makes his predictions and how he is able to predict behavior (or even whether something more bizarre is going on allowing the AI to predict correctly). it's entirely rational to act how you want the AI to believe you would act in this scenario. in the absence of perfect information, trying to outsmart the AI presents a non-zero risk, which is not worth the juice for a $1K windfall imo. thus "1K is NBD" is tcr.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732378)

Date: March 10th, 2026 3:47 PM
Author: goyim in abundanceeeeee

I said this eleven fucking times already and he's never responded to it once so what is even the point. He's getting our goat by us even responding to this.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732382)

Date: March 10th, 2026 3:54 PM
Author: Emilio Plan Truster

you need to fill in those assumptions yourself as part of your answer to the question. that's the point

"it's entirely rational to act how you want the AI to believe you would act in this scenario (therefore i choose only Box A)"

here you are filling in those assumptions. good. this is a totally fine answer. naturally though, a two-boxer is going to point out: "but the Predictor's decision has already been made. you lose nothing by taking the other box too; you just get a free +$1000 EV"

and then the thought experiment continues from here, because the real question is: under what assumptions/conditions *would* you actually be Risking the $1,000,000 by taking both boxes?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732406)

Date: March 10th, 2026 4:01 PM
Author: spiritually female godfather (gunneratttt)

well they know theyre risking something, even if they dont understand what or how, by virtue of the topic being debatable even by very smart people. not getting something you would have had in 10 seconds is functionally equivalent to losing something you already have.

lets say im certain A is empty and twobox is the best move, but box B has a penny. i know im human and fallible, so even though im certain, id hedge on solipsism because its almost costless. or reverse it and have box B have $999k. now, even if im nearly certain box A is full, the guaranteed money is too much to pass up.

the amount in the boxes, the decider's situation, and their confidence in their decision are all going to factor in.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732419)

Date: March 10th, 2026 4:12 PM
Author: Emilio Plan Truster

these hypos do not pertain to the choice that the respondent is faced with in the OP hypo, a choice that is entirely dependent on the assumptions/beliefs that the respondent is making about causality

it's a question pertaining to causality. if someone is caught up in thinking about the amounts of money involved, they are not thinking about the question hard enough, or are possibly just not intelligent enough to fully grasp the hypo. but i am pretty sure that everyone on this board is plenty smart enough to understand the depth of the hypo. it just takes some thinking, that's all

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732454)

Date: March 10th, 2026 4:16 PM
Author: spiritually female godfather (gunneratttt)

maybe, but the reason theres a significant money value attached at all, and the disparity between the two, is to get people to engage. a person is more apt to think more critically about consequential decisions, even hypothetical ones. if you posed the exact same question with $1 and a dime most people would say "who gives a fuck?" even though its the exact same thought experiment.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732461)

Date: March 10th, 2026 4:24 PM
Author: Emilio Plan Truster

yeah i really like your suggestion above to attach a multiplier to adjust for inflation

still, a Smart Person should pretty easily recognize what the hypo is all about. it's just that answering it becomes very complex the more you get into it. seeing the dollar numbers and then immediately stopping there is - at the risk of Judging others - not what Smart People do imho

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732479)

Date: March 10th, 2026 4:43 PM
Author: spiritually female godfather (gunneratttt)

well you're coming into it fully engaged with a goal in mind. and you're trying to tempt people to engage. even very smart people require this motivation sometimes. if it was framed in terms of widgets it would also be the same, but people have a hard time conceptualizing that being a decision theyd devote any real thought to.

if i wanted you to engage with something id frame it around something you actually care about. like when i tried to illustrate openmindedness to consuela. although that has its own pitfalls because sometimes people are so emotionally tethered to something they cant tolerate it being questioned. or they miss the point completely, like how consuela keeps saying im a bootlicking vaxxcuck.

ime smart people are even more susceptible to this than midwits. smart people tend to have a lot of ego about their intelligence and correctness. e.g. risk averse law fags terrified to risk being wrong even anonymously online about dumb shit.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732529)

Date: March 10th, 2026 5:14 PM
Author: Emilio Plan Truster

to be fair, this problem has been around for 50+ years, with millions and millions of people having no issue recognizing the real extent of the problem, understanding it, and engaging with it

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732607)

Date: March 10th, 2026 5:23 PM
Author: spiritually female godfather (gunneratttt)

its not about someone's capacity to engage, its their motivation. i mean, even this hypo used dollars instead of abstract widgets back then. if you poasted some legal question i could answer it, but i wouldn't unless i was motivated in some way.

i had a lol school prof who wrote articles about how judges often make illogical decisions, and to prove this he sent out a hypo like this with a correct answer that anyone could figure out, but most peoples gut reaction would be wrong. of course, he passed it out at some bar event where most people took two seconds and checked a box and moved on. this was published as if they had sat down and thought about it with the same intensity they would a matter before them!

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732630)

Date: March 10th, 2026 4:16 PM
Author: goyim in abundanceeeeee

Even the people cited on Wikipedia think it's a shitty hypo, highly dependent on interpretation, and leads to answers like what if we built a time machine. It's so vague that you can come at it from a million different angles. There is no exact solution. It seems like most people fall into either camp of being a positive expected value fag or a moral fag. Either way this is some stuff people argue about in college science departments when their thesis isn't good enough. Pointless nonsense.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732463)

Date: March 10th, 2026 2:23 PM
Author: cypher

*AI scrolling through your Early Life section on Wikipedia to make prediction*

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732118)

Date: March 10th, 2026 2:38 PM
Author: spiritually female godfather (gunneratttt)

maybe in the future AI will do this to entice jews and incinerate them. like when the police get people with warrants to come in by saying there's unclaimed property at the station with their name on it.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732168)

Date: March 10th, 2026 3:14 PM
Author: goyim in abundanceeeeee

Can you replace AI in the hypo with something that better illustrates the point you're trying to make? Because it's the most crucial part of the hypo and it's still unclear. If you just said there's a box with a thousand and a box with zero or a million it comes down simply to risk tolerance and necessity. If someone desperately needed a grand they don't risk taking box A.

It's also totally unclear why someone would take both boxes or not, and if this has any effect on the chance the computer decides they were worthy or not of leaving the million.

The hypo is totally broken because it comes down to whether or not we trust the decision making ability of a computer, which may or may not be right or wrong, and can't even be proven to be so until after you've taken a box or boxes. The whole hypo suffers from a lack of clarity. Is it just a sword in a stone situation where we're just hoping the robot deems us worthy? I fail to see why there's any point in guessing what a robot thinks of us. Wouldn't it then follow that the optimal strategy would be to appeal to the robot's inner sense of goodness enough to leave the million in the first box? That's what we're left with ultimately, and, perhaps, unfortunately.

If you're insinuating that the best course is to take Box B, prove the computer right about the goodness of humanity so that it must leave the million, then build a time machine and go back in time to take the first box, then I think you could have articulated that a lot better.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732292)

Date: March 10th, 2026 3:54 PM
Author: @grok, is this true?

Look up Newcomb's Problem for variations, which normally say the predictor is "near perfect"

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732405)

Date: March 10th, 2026 4:07 PM
Author: goyim in abundanceeeeee

David Wolpert and Gregory Benford point out that paradoxes arise when not all relevant details of a problem are specified, and there is more than one "intuitively obvious" way to fill in those missing details. They suggest that, in Newcomb's paradox, the debate over which strategy is "obviously correct" stems from the fact that interpreting the problem details differently can lead to two distinct noncooperative games. Each strategy is optimal for one interpretation of the game but not the other. They then derive the optimal strategies for both of the games, which turn out to be independent of the predictor's infallibility, questions of causality, determinism, and free will.

Right so it's a shitty hypo and it depends on interpretation to arrive at the optimal strategy.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732436)

Date: March 10th, 2026 3:58 PM
Author: state your IQ before I engage you further

Put both boxes up my ass

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732412)

Date: March 10th, 2026 4:02 PM
Author: spiritually female godfather (gunneratttt)



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732421)

Date: March 10th, 2026 5:15 PM
Author: Emilio Plan Truster

this is known as Newcomb's Paradox/Problem, you can read about it here if you're interested:

https://en.wikipedia.org/wiki/Newcomb%27s_problem

the real meat of the problem is getting into causality and whether or not "free will" is a real thing, and under what conditions it is/could be real

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732611)

Date: March 10th, 2026 6:02 PM
Author: things from the 90s/00s so ethereal and dreamlike:



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49732738)

Date: March 10th, 2026 8:17 PM
Author: Lab Diamond Dallas Trump

I went back and read this after making my answer below and I am surprised that so many "philosophers" are caught up on debating the merits of the two choices rather than the nature of the predictor's capabilities. They are TTT midwit charlatan hacks. If the predictor is almost always right, the correct choice will depend on the specific decisionmaker. E.g. if choosing one box was uniformly the "right" decision, we know the predictor would frequently be wrong because many people would lack the reasoning abilities to make the correct decision. If the predictor is right with a high degree of accuracy, we know that it is predicting a mix of one-box and two-box decisions tailored to the decisionmaker.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733254)

Date: March 10th, 2026 6:55 PM
Author: Nude Karlstack (🧐)

both baby 1000 bucks can buy like 20 sandwiches

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733003)

Date: March 10th, 2026 6:56 PM
Author: things from the 90s/00s so ethereal and dreamlike:



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733015)

Date: March 10th, 2026 8:02 PM
Author: Emilio Plan Truster



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733197)

Date: March 10th, 2026 8:17 PM
Author: things from the 90s/00s so ethereal and dreamlike:



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733250)

Date: March 10th, 2026 7:43 PM
Author: John Frum

I'm a one boxer but for the fixed value box out of spite for the hypothetical

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733133)

Date: March 10th, 2026 7:54 PM
Author: UN peacekeeper

box a only of course. like the movie tenet, it's completely logical but just seems strange to us in a world lacking closed time-like loops

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733165)

Date: March 10th, 2026 8:05 PM
Author: Lab Diamond Dallas Trump

I'm surprised nobody has pointed this out, but it really doesn't matter what decision you make. The predictor's abilities are "godlike." That should mean it has specific insight into the particular reasoning process of the decisionmaker. The hypo is muddied some by it being "almost always right," but I assume those situations are likely caused by someone choosing contrary to their own reasoning process.

The point though, is that there is no way for the predictor to "almost always" reach the correct decision unless it is able to analyze the specific decisionmaker and replicate their thought process with a high degree of accuracy. Being correct almost always would involve correctly predicting how a wide range of intellects would approach the problem.

So, there is no probabilistic "correct" answer between one box and two boxes under this hypothetical. The actual analysis of the choice itself is a red herring. The nature of the predictive device and how it works is the most important part of the hypo, and my interpretation is that you maximize your outcome by making whatever choice is consistent with your internal reasoning process. Or alternatively, if the predictor is even able to predict that you might disregard your internal reasoning and make the opposite choice, then it's simply impossible to lose the hypo.

Either way, the choice is irrelevant. However, the hypo is somewhat incomplete because it does not describe the circumstances under which the predictor will be inaccurate, which could entirely change the problem.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733205)

Date: March 10th, 2026 8:18 PM
Author: goyim in abundanceeeeee

That's exactly what those cited on Wikipedia said about the problem. Basically, it depends. It's a very weak problem.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733259)

Date: March 10th, 2026 8:34 PM
Author: Lab Diamond Dallas Trump

Yeah, see my other poast above. I went back and read the wikipedia page after writing that because I didn't want it to inform my answer. However, I think even the theories on wikipedia that are closest to my logic are overcomplicating the problem too much. It doesn't really matter WHY the predictor is right or what mechanism it uses. All that matters is that it IS right, which tells us it is making a mix of one-box and two-box predictions, which then tells us that there is no one "correct" choice that can be analyzed in probabilistic terms.

OP adds in the term "godlike" to the classic problem, which was the source of my assumption about how the box might work, but it doesn't appear to be part of the classic problem, which is more wide open. What is interesting though is that it seems like even many of the lines of reasoning that correctly focus on the nature of the predictor are still getting caught up trying to prove whether one box or two boxes is "correct."

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733294)

Date: March 10th, 2026 8:38 PM
Author: Emilio Plan Truster

yup, the "correctness" of one's choice depends on the assumptions that one makes about the predictor

that's the whole point of the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733305)

Date: March 10th, 2026 8:43 PM
Author: goyim in abundanceeeeee

Then it's an incredibly stupid problem, far dumber than I initially thought.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733315)

Date: March 10th, 2026 8:45 PM
Author: Lab Diamond Dallas Trump

The only assumption my position really needs is that people won't uniformly make one choice or the other -- the choices will be split, which is impossible to prove from a logical perspective but unassailable from an empirical perspective. But that just supports my contention that they are TTT charlatan hacks for spending a lot of time trying to logically "debate" and "prove" something that I determined wasn't susceptible to that within 5 minutes of looking at it.

-----------------

In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."[1][4] The problem continues to divide philosophers today.[9][10] In a 2020 survey, a modest plurality of professional philosophers chose to take both boxes (39.0% versus 31.2%).

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733320)

Date: March 10th, 2026 8:49 PM
Author: Emilio Plan Truster

not really sure what you are talking about. depending on the assumptions made about the predictor, you absolutely can logically prove that one choice or another is correct

the reason why people are split down the middle about their answer to this problem is that people make different instinctive assumptions about the predictor. it's interesting to explore why this is. that is the significance of the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733333)

Date: March 10th, 2026 8:59 PM
Author: goyim in abundanceeeeee

Right. The problem itself is incomplete and therefore useless. If the computer is 100% infallible it completely changes the problem. If the computer is right most of the time but has the potential to be wrong even once, that also changes the strategy. If it's right only some of the time, there again the solution changes.

The reason this problem is so popular is it's intentionally vague. You can claim it's interesting to see how much people assume about a predictor whose accuracy is unknowable, or you could take a much more scientific approach and actually decide on the parameters vs make people guess what they are. If you think you can draw meaningful inferences based on how people make their inferences, based on assumptions about an unknowable variable, then I would say you haven't learned very much at all.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733353)

Date: March 10th, 2026 11:17 PM
Author: Lab Diamond Dallas Trump

The fact that the predictor is right almost every time is incompatible with the assumptions you would need to logically prove one of the choices is correct. Do you understand that regardless of the "correct" (optimal) choice, people will make the "wrong" choice at a greater rate than what is stated to be the predictor's error rate? If you say the predictor is right 65% of the time, it becomes a much more interesting problem. But for it to be right almost every time, it has to be splitting its answers between the two options, which means there is no objectively correct choice. And it doesn't really matter that *philosophers* are split based on their differing assumptions because the hypo doesn't assume that the predictor's sample size is limited to philosophers. The hypo is directed at a universal "player," so one must account for the fact that the predictor will be right even when the player is too retarded to perform a basic expected value calculation or derive any of the various competing assumptions, which would be the vast majority of people. That is a huge constraint on how the problem can be solved.
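The constraint described above can be checked with a toy simulation: if choices are split, a predictor that ignores the individual is capped at the majority share, so near-perfect accuracy forces per-chooser modeling. The split fraction `q` and the per-person read accuracy are made-up parameters, not figures from the thread:

```python
import random

random.seed(0)
q = 0.55       # hypothetical fraction of one-boxers in the player population
READ = 0.99    # hypothetical per-person accuracy of a tailored predictor

population = ["one" if random.random() < q else "two" for _ in range(100_000)]

# individual-blind predictor: always predicts the majority choice
fixed_acc = sum(c == "one" for c in population) / len(population)

# tailored predictor: models each chooser, right with probability READ
tailored_acc = sum(random.random() < READ for _ in population) / len(population)

print(fixed_acc)     # ≈ q: capped by the population split
print(tailored_acc)  # ≈ READ: "almost always right" requires this
```

Whatever the true split, an "almost always right" predictor must be doing something like the second strategy.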

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733681)

Date: March 10th, 2026 11:32 PM
Author: Emilio Plan Truster

the hypo is not directed at the universal player. it is directed at you and you only

that is definitely an important assumption that must be made as part of the problem

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733706)

Date: March 11th, 2026 12:04 AM
Author: Lab Diamond Dallas Trump

This looks like a universal player to me, breh

--------------

In the standard version of Newcomb's problem, two boxes are designated A and B. The player is given a choice between taking only box B or taking both boxes A and B. The player knows the following:[4]

Box A is transparent, or open, and always contains a visible $1,000.

Box B is opaque, or closed, and its content has already been set by the predictor:

If the predictor has predicted that the player will take both boxes A and B, then box B contains nothing.

If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.

The player does not know what the predictor predicted or what box B contains while making the choice.
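Treating the predictor's accuracy in the quoted setup as a single parameter p, the break-even accuracy is strikingly low. A quick sketch; the EV formulas below are the standard evidential-decision-theory reading (conditioning the contents of box B on your choice), which the causal reading upthread disputes:

```python
def ev_one_box(p, big=1_000_000):
    # predictor right with probability p → it foresaw one-boxing, box B holds the million
    return p * big

def ev_two_box(p, big=1_000_000, small=1_000):
    # predictor right with probability p → it foresaw two-boxing, box B is empty
    return (1 - p) * big + small

# solving p*big = (1-p)*big + small gives p = (big + small) / (2*big)
break_even = (1_000_000 + 1_000) / (2 * 1_000_000)
print(break_even)                                      # 0.5005
print(ev_one_box(break_even), ev_two_box(break_even))  # equal: 500500.0 each
```

On this reading any accuracy above 50.05% favors one-boxing; the two-boxer's reply is that the conditioning itself is illegitimate because the boxes are already filled.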

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733778)

Date: March 10th, 2026 8:26 PM
Author: Metal Up Your Ass

but where will i get a million dollars?

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733276)

Date: March 10th, 2026 8:29 PM
Author: things from the 90s/00s so ethereal and dreamlike:



(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733279)

Date: March 10th, 2026 8:33 PM
Author: jonathan penis

lol at op citing some lineage of scholarship when he really just came across this on a veritasium yt slop video that was posted yesterday.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733291)

Date: March 10th, 2026 8:35 PM
Author: Emilio Plan Truster

yeah that's what reminded me of it. i thought their video was pretty good too. i first encountered it years ago though. always been a two-boxer, always will be

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733297)

Date: March 10th, 2026 10:43 PM
Author: things from the 90s/00s so ethereal and dreamlike:

always been a two-boxer, always will be tp.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733592)

Date: March 10th, 2026 11:16 PM
Author: Emilio Plan Truster

retrocausality is impossible

simple as. i'm a simple goy who doesn't believe in nonsensical "jewish physics" like retrocausality

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733679)

Date: March 10th, 2026 11:23 PM
Author: goyim in abundanceeeeee

This is very blackpilled. The Jews have already decided for me. I have no free will. I can't affect the game. I may as well play optimally given that the Jews have already taken everything away from me and I can't change it. So I might as well be greedy and steal. Good luck with that.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733692)

Date: March 10th, 2026 11:38 PM
Author: things from the 90s/00s so ethereal and dreamlike:

It's impossible to read any of your poasts with a straight face due to your current username, lmao at this post-Christian "world" we're currently "living" in (not complaining about the username, but not sure if it makes me want to laugh or cry or both... tp)

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733720)

Date: March 10th, 2026 11:59 PM
Author: Emilio Plan Truster

It’s the opposite of this actually, lol

Also in order to one box you have to believe in either retrocausality or a non-causal decision theory, both of which are incredibly Jewish fwiw

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733761)

Date: March 10th, 2026 11:42 PM
Author: things from the 90s/00s so ethereal and dreamlike:

>i'm a simple goy

simple-hearted yet not even in the slightest simple-minded (even your [cyber]detractors can't deny your formidable mental horsepower, IIRC, but then again Talmudic rhetoric is 180 so who knows), or:

Wise as a serpent yet innocent as a dove tp :-)

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733732)

Date: March 10th, 2026 8:42 PM
Author: goyim in abundanceeeeee

There's no punishment for the computer being wrong. It really matters whether it's infallible or fallible. If it's infallible you should take the mystery box because then it would have correctly predicted that you would and then left the million dollars. If you take both and the computer is right then you only get 1000. So it comes down to whether the computer can be wrong. If it's always right then there's no point in ever taking two boxes. The computer can't always be right, you take two boxes, and you get the million. No one points this out.

If the computer has the potential to be wrong then it is a capricious god and you should take two boxes every time.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733314)

Date: March 10th, 2026 11:46 PM
Author: things from the 90s/00s so ethereal and dreamlike:

>If the computer has the potential to be wrong

This intriguing thematic premise must have been a classic SciFi movie or TV episode or anime and/or Japanese video game or Autoadmit performance art or something at some point, right? Urgently in need of thought provoking cultural curation, Thank.

(http://www.autoadmit.com/thread.php?thread_id=5843901&forum_id=2&show=posted#49733740)