The most prestigious law school admissions discussion board in the world.

reports are of 60% yield for TSMC 2nm node

Date: June 20th, 2025 1:57 PM
Author: Concupiscible Canary Station

we should see 2nm devices in the latter half of 2025. the latest NVIDIA GPUs are on 5nm, for reference.

middle eastern shitholes lobbing bombs at their useless buildings while Taiwan manufactures the Future.

get ready for faster Screens.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035154)




Date: June 20th, 2025 2:03 PM
Author: blue passionate boistinker hominid

wow

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035179)




Date: June 20th, 2025 2:04 PM
Author: Startling brunch associate

180

Should I wait then? I'm about to buy a Threadripper 7985WX

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035184)




Date: June 20th, 2025 2:05 PM
Author: Startling brunch associate

Missed the part about this being about GPUs. Still buying a 5090.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035186)




Date: June 20th, 2025 2:10 PM
Author: galvanic angry state

are you gonna pay $3000 or whatever the ebay cost is, or murder someone IRL to get one at msrp?

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035209)




Date: June 20th, 2025 2:14 PM
Author: Soggy Sooty University

You can buy 4x 3060 ti's for $1000 and have more VRAM than a 5090. Newer software can pool it and give you 48gb. The speed bump you get from the 5090 is more than offset by the 32gb cap.
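
a minimal sketch of what that kind of pooling looks like in practice, assuming a HuggingFace transformers + accelerate setup; the model name and per-card memory caps are placeholders, not a specific recommendation:

```python
# Sketch: shard one model across several smaller GPUs so their VRAM pools.
# Assumes transformers + accelerate are installed and 4 cards are visible.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-20b-model"  # placeholder: anything too big for a single card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                          # accelerate spreads layers across all GPUs
    max_memory={i: "11GiB" for i in range(4)},  # per-card cap; adjust to your GPUs
    torch_dtype="auto",
)

prompt = "Summarize why VRAM capacity matters for local inference."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

note that with a naive device_map="auto" split the layers run sequentially across cards, so you pool capacity but don't multiply speed.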

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035221)




Date: June 20th, 2025 3:03 PM
Author: Chrome spectacular cuckoldry resort

Yeah just run games off 4 gpus at once, good shit you chink retard

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035356)




Date: June 21st, 2025 1:00 AM
Author: Soggy Sooty University

What can't run on a 3060 ti?

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49036663)




Date: June 20th, 2025 2:07 PM
Author: Concupiscible Canary Station

might want to wait but for a workstation CPU it isn't going to make a huge difference like with mobile/embedded imo

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035192)




Date: June 20th, 2025 2:04 PM
Author: Soggy Sooty University

Apple bought TSMC's entire production capacity for 3nm silicon for the first two years after it debuted. Nvidia still can't get 3nm.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035185)




Date: June 20th, 2025 2:07 PM
Author: Concupiscible Canary Station



(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035197)




Date: June 20th, 2025 2:06 PM
Author: dashing alcoholic casino

kikes are annoying as fuck. but israel contributes bigly to global semiconductor research and development. taiwan isn't doing this on their own

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035188)




Date: June 20th, 2025 2:08 PM
Author: Soggy Sooty University

(it is 1992)

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035199)




Date: June 20th, 2025 2:49 PM
Author: Fuchsia volcanic crater roommate



(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035322)




Date: June 21st, 2025 12:45 AM
Author: confused aphrodisiac striped hyena kitchen



(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49036652)




Date: June 20th, 2025 2:29 PM
Author: Beta Box Office

there are several other improvements coming in the near future - higher memory bandwidth, more memory on the chips, better cooling. the time to train frontier-scale language models is likely to go from months to weeks by the late 2020s. we won't have to deal with the current slow pace of AI development where we only see meaningful model improvements every 2-3 months.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035265)




Date: June 20th, 2025 2:30 PM
Author: blue passionate boistinker hominid

it definitely hasn't been hardware that has slowed down frontier model improvements

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035266)




Date: June 20th, 2025 2:37 PM
Author: Beta Box Office

i don't really buy the story that progress has slowed much. GPT-4.5 sucked, but the last 6 months have seen fairly significant model improvements. even comparing the first version of Gemini 2.5 Pro from late March to the current version shows pretty large improvements on most benchmarks.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035281)




Date: June 20th, 2025 2:44 PM
Author: blue passionate boistinker hominid

the benchmarks are made up and meaningless

the models haven't become better at anything except some kinds of coding (only because they've been fine-tuned to do that specifically)

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035306)




Date: June 20th, 2025 2:55 PM
Author: Beta Box Office

Well, if you completely dismiss the benchmarks then there is really nowhere to go with the argument. It turns into a feelings-based argument about model performance. People are notoriously bad judges of this - look how many people still insist, a couple of weeks after a model is released, that it has become dumber.

This argument does look strained, though, because 1) model developers know that training specifically on benchmarks and releasing weak models that don't actually perform would eliminate their credibility, and 2) the benchmarks are increasingly broad and model performance across different domains has very similar rank orderings. When people release new benchmarks that the models haven't seen, this remains true. Performance would be much more uneven if they were benchmark-fitting.
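
one way to sanity-check the "similar rank orderings" point is a plain rank correlation across benchmarks; a toy sketch (the scores are made-up placeholders, not real benchmark numbers):

```python
# Toy check of whether two benchmarks rank a set of models the same way.
# Scores (for five hypothetical models) are placeholders, not real benchmark numbers.
from scipy.stats import spearmanr

bench_coding = [72, 65, 58, 80, 45]  # hypothetical benchmark 1
bench_math   = [70, 60, 55, 85, 40]  # hypothetical benchmark 2

rho, _ = spearmanr(bench_coding, bench_math)
print(f"Spearman rho = {rho:.2f} (1.0 = identical rank ordering across benchmarks)")
```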

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035341)




Date: June 20th, 2025 4:53 PM
Author: Soggy Sooty University

Look at AI image generation. No improvements in ages and the models aren't even that big. They still have the same crippling limitations, e.g. skin either looks plastic or everything in the background gets distorted. That's why the most realistic images have to simulate shallow focus and blur out everything else. No one seems to know how to fix basic shit like that.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035616)




Date: June 20th, 2025 3:14 PM
Author: galvanic angry state

the research/tool calling loops in recent models are much more capable than a year ago. this only became possible once the base models got good enough to not rapidly compound errors across iterations--there's a reason nobody was doing it two+ years ago.
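
a bare-bones sketch of the loop structure being described; call_model() and the two tools are stand-ins for whatever model API and tools would actually be wired up:

```python
# Bare-bones sketch of a research/tool-calling loop. Everything is illustrative:
# call_model() stands in for whatever LLM API is used, and the tools are toys.
# The structure is the point: model proposes an action, the harness executes it,
# the result is appended to the transcript, and the loop repeats.
import json

def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"  # placeholder tool

def word_count(text: str) -> str:
    return str(len(text.split()))              # placeholder tool

TOOLS = {"web_search": web_search, "word_count": word_count}

def call_model(transcript: list[dict]) -> dict:
    """Stand-in for an LLM call returning either a tool invocation or a final answer."""
    raise NotImplementedError("wire up a real model API here")

def agent_loop(task: str, max_steps: int = 10) -> str:
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(transcript)  # e.g. {"tool": "web_search", "args": {"query": "..."}}
        if "final_answer" in action:
            return action["final_answer"]
        result = TOOLS[action["tool"]](**action["args"])
        transcript.append({"role": "tool", "content": json.dumps({"tool": action["tool"], "result": result})})
    return "step budget exhausted"
```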

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035380)




Date: June 20th, 2025 4:51 PM
Author: brindle flirting office legend



(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035612)




Date: June 20th, 2025 4:52 PM
Author: blue passionate boistinker hominid

https://x.com/ylecun/status/1935108028891861393

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035614)




Date: June 20th, 2025 4:58 PM
Author: galvanic angry state

ai couldn't code for 5 minutes two years ago. the whole point is that the models have finally crossed a threshold where their error rate is small enough that mistakes don't immediately compound to the point of failure, at which point relatively short-acting agents become viable. and as they keep improving, the viable window of work duration will go up
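
the compounding point can be made concrete with a back-of-the-envelope calculation (illustrative numbers, not measurements): if each step succeeds with probability p, an n-step chain succeeds at roughly p^n, so small per-step reliability gains stretch the viable task length a lot:

```python
# If each agent step succeeds independently with probability p, an n-step task
# completes with probability roughly p**n. Illustrative numbers only.
import math

def max_steps(p: float, target: float = 0.5) -> float:
    """Chain length before overall success probability drops below `target`."""
    return math.log(target) / math.log(p)

for p in (0.90, 0.95, 0.99, 0.999):
    print(f"per-step success {p:.3f} -> ~{max_steps(p):.0f} steps at 50% overall success")
```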

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035634)




Date: June 20th, 2025 4:59 PM
Author: Soggy Sooty University

Nothing mission-critical will ever get turned over to AI in the private sector. Governments might dabble with it, but the oil and gas industry won't.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035638)




Date: June 20th, 2025 5:01 PM
Author: Fuchsia volcanic crater roommate

(not a Boeing exec)

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035645)




Date: June 20th, 2025 5:00 PM
Author: blue passionate boistinker hominid

i agree with your first post above but i'm much more skeptical of this one. i don't think they will get much better at this with more/better hardware and compute

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035641)




Date: June 20th, 2025 5:09 PM
Author: Beta Box Office

better hardware and more compute have been the primary way models have improved over the last decade. it's not just about being able to train on more data, with more parameters, with more aggressive regularization, etc. it also lets researchers try out more ideas, and at larger scale. algorithmic improvements are extremely dependent on being able to rapidly try out ideas. agent-based coders or approaches like AlphaEvolve make compute even more important for algorithmic improvement.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035668)




Date: June 20th, 2025 5:19 PM
Author: galvanic angry state

it'll certainly level off at some point. but it's crossed the threshold where it's clear that, with a human still in the loop, current models could lead to pretty significant job loss after a fair amount of effort tailoring agentic workflows. it's just up in the air whether we progress to total chaos ai-ization from here or whether we sort of stagnate at a point that still requires significant numbers of humans in the loop, even if far fewer than before.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035690)




Date: June 20th, 2025 5:43 PM
Author: blue passionate boistinker hominid

oh yeah even if AI never gets any better than it is now (and it definitely will), it will still swallow up tons of jobs

there's just going to be a lag time to adopt and implement it

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035746)




Date: June 20th, 2025 2:32 PM
Author: fear-inspiring business firm

seems like only yesterday that 9nm was the frontier and 6nm was the unattainable dream of the distant future

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035272)




Date: June 20th, 2025 4:57 PM
Author: Soggy Sooty University

If backside power delivery happens it's going to reset the clock. Right now everyone is doing shithybrid designs.

(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2...id.#49035630)