  The most prestigious law school admissions discussion board in the world.

"But it can't THINK" Said the software engilawyer in year 6 of unemployment


Date: June 8th, 2025 7:39 AM
Author: Peter Andreas Thiel (🧐)



(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996284)




Date: June 8th, 2025 7:41 AM
Author: cucumbers

tell us about how difficult it was to get your liberal arts degree

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996288)




Date: June 8th, 2025 7:48 AM
Author: Peter Andreas Thiel (🧐)

Cr literally anyone who disagrees with someone looking for a make work 400k no code managing job is a Liberal Artist. Definitely not any good programmers who say otherwise

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996296)




Date: June 8th, 2025 8:07 AM
Author: cucumbers

i will admit that i'm looking for a no code managing job because i want to get away from coding. and i'll also admit that there are differing opinions on how AI will go.

but unlike liberal artists who fellate AI, i have the following credentials:

- STEM degree from a "prestigious" STEM school and ~15 years of professional coding experience

- past experience as a principal engineer at an AI company (these roles are harder to land than most management jobs), where i led actual AI projects

- spent a lot of time reading up on AI and experimenting with it outside of my AI job

- currently forcing my direct reports to use AI for coding due to senior management pushing for it

so, unlike the liberal artists here who fellate AI, i have an informed perspective. it can be a useful tool for a narrow subset of problems that humans can't perform (e.g., computationally complex things that exploit the compute speed and memory of computers, which has been done under different names long before AI became a thing) and a few semi-gimmicky additions to that, but in the end, after years of working with it, my professional opinion is that it will never be able to replace humans for virtually any task that is challenging.

there are those who disagree with me, but who are they? companies whose revenue depends on AI development and startup founders running AI companies, both known for relying heavily on exaggeration, puffery, and often outright fraud to misrepresent what AI can and will do in order to make more money. this is the source of the hype behind AI and its current bubble.

there's also the subset of engineers who disagree with me, but my best guess is that they lack the historical background behind AI outside modern AI research and/or have fallen for the hype.

btw which field did you study for your liberal arts degree?

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996310)




Date: June 8th, 2025 8:18 AM
Author: Peter Andreas Thiel (🧐)

"there's also the subset of engineers who disagree with me, but my best guess is that they lack the historical background behind AI outside modern AI research and/or have fallen for the hype."

You realize it's far more than "jealous liberal artists" who see some future in which most of us no longer have a job in 5-10 years? Given your entire schtick a couple weeks ago was "I tried the company-mandated copilot and it wasn't very helpful, AI is a bust," I seriously doubt how much you've actually bothered to explore what current tooling is capable of beyond what you've been mandated to care about.

All of the arguments in this line hinge on some nebulous definition of "challenging" that never seems to actually get defined. Sure, AI won't replace John Carmack anytime soon (who himself seems to use it quite a bit). But I'm not John Carmack, and I'm pretty sure you're not John Carmack, so who cares if it's not literally an unstoppable superintelligence if the net outcome is that in 5-10 years you need 2% of the current number of developers to achieve the current amount of work?

I have a CS degree and it means fuck all in this argument. What have you actually tried in "experimenting"? I was able to spin up a full tutorial, walkthrough, and sample project which showed me (a backend drone whose brain has turned to mush over the last decade) how to develop a device driver for a microphone, something I have zero knowledge in, seeing as what little operating-systems knowledge I had evaporated years ago. It was pretty fun but also made it obvious that we're not too far away from the tooling being good enough to automate away everyone except a very few. The only way out is massively increased demand, but the potential productivity gains are so incredibly high I don't see this being realistic (and given that there's a real limit on just how much software alone can do, meatspace engineers will fare better for a good while).

You're just defensive about your self-image and are hilariously credential-checking in a sperg-out, something that techmos are famously not supposed to really give a shit about.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996320)




Date: June 8th, 2025 8:49 AM
Author: cucumbers

thanks for misrepresenting what i just wrote. i made it very clear that it's far more than "jealous liberal artists" who disagree with me.

my "schtick" was very simple when i started discussing this because i didn't realize i'd see a new liberal artist fellate AI every single day here.

you mock my "credential checking," yet you seem to ignore that i literally led AI projects at an AI company (admittedly, i left several years ago). i wouldn't mention that if it wasn't relevant to the discussion.

moving more broadly to the topic of credentials and experience, they do actually matter when discussing technical topics, unlike nearly all non-technical topics. an outsider can write a coherent, compelling, and novel political critique, historical interpretation, etc. without any relevant credentials. but technical topics are so far removed from ordinary human experience and so highly dependent on technical training that credentials and/or relevant experience do matter. what happens when outsiders contact someone from the physics community about how they've figured out quantum gravity? they're all dismissed as cranks.

i have defined what i mean by "challenging" in other poasts, but neglected to in this subthread. as an example, for a sufficiently complex software project, AI falls apart quickly. as i no longer work on developing AI software but am instead being forced to use it for coding, it's right maybe 10% of the time at best.

in fact, i can think of simpler examples within the scope of software. it doesn't sufficiently understand best practices, often fails to identify flawed prompts, etc. you can see this even without complex software.

you made quite the jump from being able to develop a driver for old technology to being able to automate away nearly everything. have you ever worked on a complex software system built over decades, using multiple languages, dead/unmaintained frameworks, multiple infrastructures, etc.? that's what i do today, and AI is much more of a hindrance and distraction than a helpful tool at this level.

it's a small sample and just anecdotal, but all my direct reports have mentioned that AI is just slowing them down, and that it spits out garbage most of the time.

also, the credential-checking is largely a tongue-in-cheek response to the credential-dodging i see whenever i accuse some pro-AI poaster of having a liberal degree.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996355)




Date: June 8th, 2025 9:25 AM
Author: Peter Andreas Thiel (🧐)

"have you ever worked on a complex software system built over decades, using multiple languages, dead/unmaintained frameworks, multiple infrastructures, etc.?"

I think pretty much anybody aside from the lucky few who spend their entire career riding the $tartup slush money carousel does in some manner. As a direct example, the current codebase I work in is relatively new (~12 years), so the backend stuff is all relatively easy to follow (I fled my old job specifically to stop giving a shit about 25-year-old Perl code), but it's still old enough that there's a fair amount of gruntwork to be done in migrating frontends from AngularJS (much of which is piles of excessively verbose shitty code written by contractors at the time, so you can only imagine how bad the coding standards of Indians 12 years ago were) to Vue3.

I know fuck all about modern frontend, and don't really care to become an expert in it seeing as it seems like a fairly miserable ecosystem, but nonetheless migrating this is something that needs to be done, and everyone is expected to contribute given it's not exactly rocket science. We're all expected to spend at least X% of our time migrating the frontend codebase, and I get this shit done noticeably faster (3-4x) than everyone else while at the same time being praised for "paying attention to standards" and all that gay shit (i.e., people reviewing the code are very happy with the output). I didn't suddenly become 3-4x the programmer everyone else is; it's because I've given Claude Code a fairly organized system of best practices, testing patterns, our internal idioms, and component organization to follow.
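To make that concrete, here is a minimal sketch of the kind of mechanical AngularJS -> Vue 3 rewrite being described. The component, field names, and /api/users endpoint are invented for illustration; this is a guess at the shape of the work, not his actual code:

// Hypothetical "before": a typical AngularJS 1.x controller of that era.
// angular.module('app').controller('UserListCtrl', function ($scope, $http) {
//   $scope.users = [];
//   $scope.search = '';
//   $http.get('/api/users').then(function (res) { $scope.users = res.data; });
//   $scope.filtered = function () {
//     return $scope.users.filter(function (u) {
//       return u.name.indexOf($scope.search) !== -1;
//     });
//   };
// });

// "After": the same component in Vue 3's Composition API (TypeScript).
import { defineComponent, ref, computed, onMounted } from 'vue';

interface User { id: number; name: string; }

export default defineComponent({
  name: 'UserList',
  setup() {
    const users = ref<User[]>([]);   // replaces $scope.users
    const search = ref('');          // replaces $scope.search

    // $http.get(...) becomes a fetch on mount (or a shared API client).
    onMounted(async () => {
      const res = await fetch('/api/users');
      users.value = await res.json();
    });

    // The filter function becomes a reactive computed property.
    const filtered = computed(() =>
      users.value.filter((u) => u.name.includes(search.value)));

    return { users, search, filtered };
  },
});

The point of feeding Claude Code a conventions file is that every one of these translations comes out following the same idioms, which is presumably why reviewers are happy with the output.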

Given a rigorous set of requirements, one of the things AI is *best* at is translating Old->New. Of course most cases aren't this simple: if you're e.g. migrating some ancient COBOL banking system, the cost of a mistake is infinitely higher and it takes 20x the effort to understand what a given chunk of code is trying to do. But at the same time it is not hard at all to envision a world where such a migration goes from being an unrealistically long undertaking (maybe 100 devs spending their entire careers, or something similarly absurd) to being tractable on a reasonable human/project timescale.

As another example, many modern devs are basically saddled with a sea of generally poorly documented microservices which are talking to each other in decidedly non-obvious ways. When I first started where I am three years ago, I spent weeks combing through a bunch of terraform, painstakingly taking notes and making diagrams of how all the pieces fit together. I wanted to blow my fucking brains out. AI is now capable of doing 90% of that work in 5% of the time with a little oversight (specifically, instructing it on how to break down the analysis, since it's far too much data to fit in a single context window). Back then I wouldn't have even dreamed about this being possible, and it's not hard to envision a scenario where it scales to the next order of magnitude even if the underlying models stay fixed at their current capabilities. Complexity is not a moat; it is completely solvable given enough resources and some modest initial guidance.
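A rough sketch of what "instructing it on how to break down the analysis" can look like mechanically, assuming hypothetical paths, an assumed token budget, and summarize() standing in for whatever model call you use (a guess at the workflow's shape, not his actual setup):

// Sketch: batch *.tf files into chunks that fit one context window,
// summarize each chunk, then summarize the summaries.
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

const TOKEN_BUDGET = 60_000;                                   // assumed per-chunk budget
const estimateTokens = (s: string) => Math.ceil(s.length / 4); // crude ~4 chars/token heuristic

type Summarize = (instruction: string, text: string) => Promise<string>;

async function analyzeTerraform(dir: string, summarize: Summarize): Promise<string> {
  const files = readdirSync(dir).filter((f) => f.endsWith('.tf'));
  const chunks: string[] = [];
  let current = '';

  for (const f of files) {
    const body = `# file: ${f}\n` + readFileSync(join(dir, f), 'utf8');
    if (current && estimateTokens(current + body) > TOKEN_BUDGET) {
      chunks.push(current);   // close out the chunk before it overflows
      current = '';
    }
    current += body + '\n';
  }
  if (current) chunks.push(current);

  // First pass: per-chunk notes. Second pass: merge into one overview.
  const notes: string[] = [];
  for (const chunk of chunks) {
    notes.push(await summarize('List the resources here and what talks to what:', chunk));
  }
  return summarize('Merge these notes into a single architecture overview:', notes.join('\n'));
}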

The only way out is radically increased demand for software engineers. That may end up happening, but I'm skeptical given the much more real limitation of how few meatspace engineers there are to help build the products that drive software demand. This isn't a phenomenon unique to software engineering, even though it gets the most press; it applies to basically any desk-oriented "knowledge profession" job. But unlike lawyers, we have no cartel to stand in the way of capitalism mowing down our overpaid masses over the next 10 years.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996434)




Date: June 8th, 2025 11:05 AM
Author: cucumbers

even Vue seems dated from my perspective. NextJS seems to be the next "big" frontend framework, but i'm admittedly also more of a backend engineer, so i can't say any of this with certainty

you mention the need to give Claude "a fairly organized system of best practices, testing patterns, our internal idioms, and component organization to follow." to me, this is an obvious example of the limitations of AI, as it requires significant input, context, and oversight that only an experienced engineer like you can provide

you mocked my tongue-in-cheek credential-checking, but it's relevant here: back when i was leading AI projects, i literally built an MVP of an LLM tool to translate COBOL to Java. it had a 60% compile success rate before i had to abandon it for other priorities. even with more modern tools, i wouldn't trust that kind of translation tool without extensive, exhaustive testing. maybe it'll get better, but my opinion at the time was that it was merely a computational aid that could save time but did not come close to replacing engineers
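fwiw, "compile success rate" is the kind of metric you can measure mechanically. a minimal sketch of such a harness, assuming the translated .java files land in one directory (paths are hypothetical, and a real harness would compile the whole set together, since single files can fail on cross-file dependencies):

// Sketch: run javac over each translated file and report the pass rate.
import { readdirSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';
import { spawnSync } from 'node:child_process';

function compileSuccessRate(dir: string): number {
  mkdirSync('/tmp/javac-out', { recursive: true });  // javac -d needs an existing dir
  const files = readdirSync(dir).filter((f) => f.endsWith('.java'));
  let ok = 0;
  for (const f of files) {
    const res = spawnSync('javac', ['-d', '/tmp/javac-out', join(dir, f)], { encoding: 'utf8' });
    if (res.status === 0) ok++;
    else console.error(`${f}: ${res.stderr.split('\n')[0]}`);  // first compiler error
  }
  return files.length ? ok / files.length : 0;
}

// e.g. a 60% rate prints "0.60"
console.log(compileSuccessRate('./translated').toFixed(2));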

your example of poor documentation of what sounds like spaghettified microservices is typically a result of poor management, but it does demonstrate AI's value. i never claimed it's not a valuable tool, but if we take a step back, let's say AI was used to build the microservices. it would require significant oversight and likely end up reflecting the mess that comes out of managerial demands. AI can spit out garbage code very easily

IMO it's too broad to say "complexity is completely solvable." i can think of a bunch of counterexamples

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996598)




Date: June 8th, 2025 7:42 AM
Author: .,.,.,.,.;;.,.,..,.,.,.,..,., ( )




(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996290)




Date: June 8th, 2025 7:47 AM
Author: ..,.,.,,,,.,.,..,.,,,.,..,,.,.,,,


The funny thing to me about this criticism of AI is that it presupposes that human beings typically reason from first principles.

Have they ever read a normal person's writing? It much more closely resembles ChatGPT word vomit than syllogistic A to B to C. Most people think this way - pattern recognition, assemble the stuff they want to say, get it in close proximity to similar stuff, and hit send. 95 percent of the time it gets the job done. It has limits, but the world already runs on it.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996295)




Date: June 8th, 2025 8:22 AM
Author: cucumbers

from being trained on an enormous, unprecedented corpus of written work, it can print out intelligible writing somewhat well.

but the hype and puffery behind AI is made obvious when applied to novel technical questions. that's where it fails horribly. in my experience it's right maybe 25% of the time for a relatively simple prompt. today i work on an enormous codebase built over several decades. upper management is forcing people to use AI at work, but AI falls apart when things get to a certain level of complexity. it's right probably less than 10% of the time when i try to use it on the codebase i'm working with. similar feedback from many engineers at my company. so what's the conclusion? non-technical leaders like to push for its use. technical leaders see its limitations and view it as a mere gimmick.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996323)




Date: June 8th, 2025 8:44 AM
Author: ..,.,.,,,,.,.,..,.,,,.,..,,.,.,,,


I can’t speak to how it works on complex technical projects. My point is just that if “reasoning” is something that AI struggles with, it’s also something that most humans struggle with or just ignore. So the benchmark of reasoning isn’t really an accurate place to look when asking whether AIs can or will replace human workers.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996344)




Date: June 8th, 2025 9:01 AM
Author: cucumbers

i will readily admit that it can replace some human tasks, but not autonomously; it necessarily needs human oversight, as it returns false or incoherent information often enough that human intervention is needed.

its limitations and large flaws are obvious when you apply it to sufficiently complicated problems, such as large codebases.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996379)




Date: June 8th, 2025 9:13 AM
Author: ..,.,.,,,,.,.,..,.,,,.,..,,.,.,,,


Yeah, but humans fuck up and need oversight too. My question is which tasks it can perform to a human worker's standard of accuracy. Supervising law associates, and working with ChatGPT quite a bit, my takeaway is that it performs at least as well and exponentially more quickly than the associate on most tasks already. I think you'd find a similar take from professionals in medicine, banking, etc.

The goalposts keep getting moved in these insane ways, in part because there is a whole economy of AI wrapper grifters who overpromise on everything. But much of the criticism stems from a very false assumption that human knowledge workers themselves perform to an unrealistically high degree of accuracy. They don’t; to the contrary, I’m kind of amazed that society functions as well as it does given how endemic mistakes are in the work of highly trained and credentialed people. At some point soon, we will have validated models that equal or exceed human accuracy and reliability in many core disciplines. Perhaps not in your narrow application, but 99 percent of knowledge workers aren’t maintaining super complicated code bases or whatever you do.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996404)




Date: June 8th, 2025 9:19 AM
Author: The Wandering Mercatores (from the Euphrates to the Forum)



(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996418)




Date: June 8th, 2025 9:31 AM
Author: cucumbers

in some fields, particularly software engineering, it leads to more oversight. right now i manage four engineers, so i'm there for oversight. but now that they're being forced to use AI at work, they themselves have to oversee what AI spits out because it's simply not good enough to trust.

i'm obviously not a lawyer, but your law example neglects to factor in the business impact of AI on the field. let's assume AI can indeed perform better and more quickly than associates, and your firm can reduce the number of associates. do the partners want that? absolutely not. faster work and fewer associates mean fewer hours billed and less money made.

i'm too lazy to respond to the rest of your poast sry. maybe later

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996441)




Date: June 8th, 2025 11:35 AM
Author: ..,.,.,,,,.,.,..,.,,,.,..,,.,.,,,


Law firms will race to the bottom with AI. And they have already started.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996657)




Date: June 8th, 2025 11:42 AM
Author: Hot Gamer Dad

The cr way to think about LLMs economically is that they're way, way cheaper than human white collar workers. They're just sooooo much cheaper that even if they make more errors than their human counterparts they're still more desirable "employees"
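The arithmetic behind this is just expected cost per completed task. A toy model with completely made-up placeholder numbers, only to show the shape of the claim:

// Toy model: effective cost per task when LLM output needs human review.
// Every number below is an invented placeholder, not sourced from anywhere.
const humanCostPerTask = 200;  // e.g. a worker's loaded time per task ($)
const llmCostPerTask = 2;      // inference cost per attempt ($)
const reviewCostPerTask = 40;  // human time to check/correct one output ($)
const llmErrorRate = 0.3;      // fraction of outputs that force a redo

// Each LLM task pays inference + review; an error pays for both again.
const llmEffective = (llmCostPerTask + reviewCostPerTask) * (1 + llmErrorRate);
console.log(`human: $${humanCostPerTask}/task, llm with oversight: ~$${llmEffective.toFixed(0)}/task`);
// Even at a 30% error rate the reviewed-LLM path is ~$55 vs $200 here.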

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996674)




Date: June 8th, 2025 9:05 AM
Author: The Wandering Mercatores (from the Euphrates to the Forum)

It doesn't struggle with "reasoning" (not saying you said it does, but just putting it on the record). That's the part it's good at. It isn't limited by "IQ" the way baboons are; it is limited by context, training, compute, and safety features. The part it struggles with is reconciling chimp systems and gorilla bias with reason. o4 can literally rip through any theoretical physics, metamath, category theory, advanced geometries, etc. that you give it. Its points of failure are where Earthnigga systems break down.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996385)




Date: June 8th, 2025 9:00 AM
Author: The Wandering Mercatores (from the Euphrates to the Forum)

You are expecting a machine that is crippled by sandboxing and "human alignment" bias, and that also has to transform across thousands of natural languages, mathematical and logical systems, and computer languages, to give you perfectly compiled code on the first try in 1.5 seconds in a limited context window, and when it doesn't do it you declare it's not worthy. It is only as intelligent as the person using it. And with you the problem isn't your intelligence but your irrational desire to prove the machine isn't as good as some people say.

Most people don't even circle jerk over AI. Most of the stuff I see written out there is people trying to pretend they are smarter than it or dismissing it as hype. It is easy to discredit something if you never even gave it a proper chance in the first place. I could give it a "SOLVE THIS PROBLEM" prompt, too, and end up with an output that doesn't seem useful right away. If you want to give it a serious chance then you need to think of it as an extension that you outsource the parts of your thinking you don't feel like doing to, rather than a be-all, end-all answer machine.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996374)




Date: June 8th, 2025 9:17 AM
Author: cucumbers

i would say that the "human alignment" bias, as i interpret it, is helpful for the future of AI rather than a crippling factor. if AI ever reaches the level of being able to recreate the "creative" aspect of human thought ("creative" as defined in the technical sense seen in research on human thought, not the broader, ordinary definition), it will be a huge milestone in human achievement.

the fact that AI can "transform across thousands of natural languages, mathematical and logical systems and computer languages" is not unique to modern AI; it was predicted around 100 years ago and demonstrated in the mid 20th century, when computers began working beyond the computational and memory limitations of the human mind. first sign you've fallen for the AI hype.

AI often fails even in cases simpler than the "perfectly compiled code" example you give; it fails to understand best practices in coding, conventions unique to some languages/frameworks, etc.

i don't have an "irrational desire to prove the machine isn't as good as some people say." i'm doing this because i'm bored as fuck and happen to know AI very well. those with vested interests in AI have hyped it up, just as people with interests in other things will hype them up. sometimes it reaches the level of fraud. i'm trying to demonstrate that.

as mentioned in another subthread in this thread, i used to be a principal engineer on AI projects at an AI company, so i would not be surprised if i'm among the most-informed poasters when it comes to AI.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996414)




Date: June 8th, 2025 9:31 AM
Author: The Wandering Mercatores (from the Euphrates to the Forum)

Okay man if that's how you feel about it then fine. I'm just sayin that maybe you should give it more of a chance and try to think about it differently. But it's your choice in the end. Also, do you truly believe in "creative as defined in the technical sense seen in research on human thought"? Are you really implying "thought science" and "creativity science" are giving us useful results?

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996444)




Date: June 8th, 2025 10:51 AM
Author: cucumbers

the technical use of "creativity" originates in philosophy from centuries ago and developed a more formal definition in the 20th century, starting in linguistics before making its way to cognitive science. but there's even disagreement on what its technical definition is, since it's such a broad, loaded term that also happens to verge into the philosophical as it relates to free will

the somewhat-classical example of human thought being "creative" in a technical sense is that the human mind can generate novel ideas seemingly out of the blue in a manner not understood, and not captured by the underlying mechanisms of existing computational tools

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996582)




Date: June 8th, 2025 10:13 AM
Author: Hot Gamer Dad

https://thedecisionlab.com/reference-guide/philosophy/system-1-and-system-2-thinking

Humans can do both types of thinking

LLMs can only do system 1

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996505)




Date: June 8th, 2025 8:10 AM
Author: .,.,.,.,.;;.,.,..,.,.,.,..,., ( )


*is a 'liberal artist'*

*emails the 'office hacker' when he needs a vlookup*

"Lol, but AI can't reason, it just guesses the next character."

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996313)




Date: June 8th, 2025 9:51 AM
Author: cowgod (cowgod)



(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996472)




Date: June 8th, 2025 9:57 AM
Author: cowgod (cowgod)

I’ve yet to see a situation where you can prompt it and basically turn it loose. It can only work in short bursts. If it were possible to ask it to write a whole book at once it would start hallucinating at some point imo. Even if you send it a perfectly Engineered prompt

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996483)




Date: June 8th, 2025 10:00 AM
Author: .,.,.,.,.;;.,.,..,.,.,.,..,., ( )


Are you trying this on some commercial service or a local LLM? If it's the former, it isn't a technical limitation; they just eventually cut off what you can do with one prompt to contain how much you run up their power bills. Writing one book that's at least as coherent as the churnslop you pick up at the airport bookstore is entirely feasible. It might have inconsistent characterization, plot holes, and no real point, but that's true of human-written books too.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996485)




Date: June 8th, 2025 10:04 AM
Author: cowgod (cowgod)

Imagine an Engineering Unit that you just turn loose and it Engineers things for 2000 hours a year. It would do impressive shit that no Loser could ever do but it would eventually go off the rails imo.

I think a Loser liberal artist could prompt AI and Engineer things with it and get paid like $42,500/yr. the results would probably suck but they'd be cheaper at least.

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996490)




Date: June 8th, 2025 11:48 AM
Author: Emotionally + Physically Abusive Ex-Husband

Reminder that xo posters are super smart when they want to be ^^

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996699)




Date: June 8th, 2025 11:51 AM
Author: cucumbers

my IQ is very low

(http://www.autoadmit.com/thread.php?thread_id=5734802&forum_id=2#48996705)