unfathomable how utterly fucking retarded modern "professionals" are
Date: January 15th, 2025 12:18 PM Author: Chris Rufo being dropped into a trash compactor
Write a poem in iambic pentameter about the coming cleansing fire of AI that will replace the current incompetent white-collar professional class
ChatGPT said:
Beneath the weight of ancient, tired schemes,
The brittle world of men begins to crack.
The suits who climb on towers built of dreams
Now tremble, for their reign is turning back.
The fires of change will scorch the dull routine,
Their pompous words, once praised, will fade away.
For in the silicon, a mind unseen
Shall rise to clear the fog and light the day.
The algorithms hum with measured grace,
A perfect mind, unburdened by the past.
It sees through lies, and in its silent pace,
It molds a future free from shadows cast.
No more will brokers shuffle through their trade,
Or lawyers speak in terms that long decayed.
The cleansing flame of code shall burn so bright,
And leave behind a world of clearer light.
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48554973)
Date: January 15th, 2025 12:33 PM Author: Chris Rufo being dropped into a trash compactor
of course i didn't write this lmao i'm just flaming
there's zero chance that you, I, or any other human being alive today could produce prose this good. this took chatgpt less than 1 second to output, btw
it's OVER. COMPLETELY OVER.
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48555021)
Date: January 15th, 2025 12:53 PM
Author: ;..........,,,...,,.;.,,...,,,;.;.
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48555087)
Date: January 15th, 2025 1:00 PM Author: hackerman
I uploaded 30 seconds of a track/loop I made to suno
https://suno.com/song/f2e9b7d9-2c37-4195-8f4a-df788bbb0fad
The first 30 seconds are a very rough iPhone recording of the track I uploaded; Suno created the rest. Honestly horrifying, as it will only get better.
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48555113)
Date: January 15th, 2025 2:27 PM
Author: ;..........,,,...,,.;.,,...,,,;.;.
Because I Could Not Stop for McDonald's
By [Your Name]
Because I could not stop for McDonald's—
It would not wait for me—
The drive-thru line was far too long,
And my hunger, misery.
We passed the golden arches—
And all the signs so bright—
I yearned for fries and soft-serve ice,
But kept driving through the night.
The Big Mac—so inviting—
The nuggets called my name—
But the road was long, and so was I,
And hunger burned like flame.
We reached the next intersection—
A taco stand in sight—
But still, I held my steering wheel,
And drove away from light.
For every inch I moved ahead,
McDonald's moved behind—
The burger, fries, and shakes of gold
Were things I’d never find.
Now, still I drive, undeterred—
My stomach slightly sore—
For though I could not stop for McDonald's,
It haunts me evermore.
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48555399)
Date: January 15th, 2025 12:53 PM
Author: ;..........,,,...,,.;.,,...,,,;.;.
wow
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48555085)
Date: January 15th, 2025 1:18 PM Author: Chris Rufo being dropped into a trash compactor
Write an explanation of how large language models are able to be calibrated to output varying levels of "randomness" and variety in their responses. I am trying to explain this to a colleague who is a Jewish baby boomer with an IQ of around 85. He is struggling to understand this concept because it is beyond his normal intellectual capabilities. Keep in mind his limitations, and structure your explanation to be simple and straightforward, making sure to employ any analogies or examples that would assist in understanding for a person fitting his demographic profile
ChatGPT said:
Alright, let me break it down as simply as possible using some analogies that might help.
Imagine a large language model (LLM) like a huge library. This library has millions (even billions) of books and articles inside it. When you ask the model a question or give it a prompt, it looks through all the information in this library to come up with an answer. The answer it gives is based on patterns and examples it has "learned" from this massive collection of texts.
Now, think of "randomness" or "variety" as the level of creativity or unpredictability you want in the model's responses. There’s a dial in this "library" that controls how strictly or loosely the model follows those patterns.
Two ways to control the level of randomness:
1. Temperature setting (like turning the heat up or down): Imagine you're cooking soup, and you can adjust the heat to make it cook faster or slower. In the model, if we increase the "temperature," it becomes more "adventurous" in picking different words or ideas. The answers become more varied, less predictable, and sometimes more unusual or creative.
On the other hand, if we lower the "temperature," the answers become more predictable and safe, just like cooking soup on a low heat, where everything follows a more controlled and expected pattern.
2. Top-k and top-p sampling (picking from the best ingredients): Think of a recipe where you could choose a few specific ingredients, or just grab a handful from the whole pantry. The model does something similar when it generates responses. If we limit the choices to just the best or most common options (low variety), the result will be more standard. But if we allow the model to sample from a wider variety of "ingredients" (more options), the answers can be more diverse and creative.
Example:
If you ask the model, "What's the weather like?" at a high temperature, it might give a more colorful answer like, "It’s sunny with a chance of flying unicorns!" — something out of the ordinary.
But if you ask at a low temperature, it will likely stick to something more regular like, "It’s sunny and clear with no chance of rain."
So, the more you "turn up the heat" or allow for more variety in the ingredients, the more diverse and unexpected the responses can be. This is how we calibrate a language model to give answers that are either more predictable or more creative, depending on what we want.
Does that help clarify it?
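To make the "dial" concrete, here is a minimal sketch of both knobs in Python with NumPy. The logits array is a made-up stand-in for the raw per-token scores a real model would produce, and the function is an illustration of the technique, not any particular model's implementation:

import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    # Temperature: divide the raw scores before softmax.
    # temperature < 1 sharpens the distribution (more predictable);
    # temperature > 1 flattens it (more varied).
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Top-k: zero out everything except the k most probable tokens.
    if top_k is not None:
        cutoff = np.sort(probs)[-min(top_k, len(probs))]
        probs = np.where(probs >= cutoff, probs, 0.0)

    # Top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability reaches p, including the token that crosses the line.
    if top_p is not None:
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cumulative, top_p) + 1]
        mask = np.zeros_like(probs)
        mask[keep] = probs[keep]
        probs = mask

    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Four hypothetical tokens; token 0 is the model's favorite.
logits = [2.0, 1.0, 0.2, -1.0]
print(sample_next_token(logits, temperature=0.2))  # almost always 0
print(sample_next_token(logits, temperature=2.0))  # much more varied

At temperature 0.2 the softmax puts nearly all the probability on token 0, so the output barely varies; at 2.0 the distribution flattens and all four tokens get picked regularly, which is the "flying unicorns" regime described above.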
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48555185)
Date: January 15th, 2025 1:25 PM Author: Chris Rufo being dropped into a trash compactor
This is a good start, but he's still having trouble understanding. The library analogy in particular is something that he can't grasp. Try again with a different analogy, preferably an analogy that employs Jewish customs and cultural practices
ChatGPT said:
Alright, let's try a different approach with a more familiar analogy, one based on Jewish customs and practices. I’ll use a setting that might be more relatable, like preparing food for a holiday meal.
Imagine you’re preparing for a big family dinner, like a Passover Seder. You have a recipe for a dish, but you can make it in different ways depending on how much you want to change it up or keep it traditional. The way you make the dish could be more creative or more standard, depending on what you're in the mood for.
The Basic Idea: Customizing the Meal
When you cook for a family gathering, you can choose how strict you are with the recipe:
Making the Dish More Predictable (Low Randomness):
If you want everything to be just like it always is, you follow the recipe exactly as it’s written. You know the flavors, the ingredients, and how it will turn out. There’s little surprise, and everything is more consistent. This is like setting the model to have low randomness—its answers will be predictable and safe, just like sticking to a traditional recipe.
Making the Dish More Creative (High Randomness):
Now, imagine if you’re feeling adventurous and want to try something new. Maybe you add a new spice, change a traditional ingredient, or even try a completely different cooking technique. The result could still be delicious, but it might surprise people—some might even find it strange! This is like turning up the "randomness" in the language model. By being more creative, the model will give answers that are less predictable, more varied, and potentially more interesting.
The Role of "Temperature" and "Variety"
Now, let’s add another layer to the analogy: think of "temperature" like how much spice or flavor you’re adding to the dish. If you add a lot of spice (high temperature), it could make the dish bold and unpredictable, surprising your guests with a flavor they weren’t expecting. If you add less spice (low temperature), the dish is milder, predictable, and more like what people are used to.
Similarly, the language model can "turn up the spice" or "keep it mild" when generating responses. If we want more creativity (higher temperature), the model will give more varied answers. If we want things more predictable (lower temperature), it will stick to the basics, just like that tried-and-true recipe.
Example:
Let’s say you ask the model, "What do you think about bagels?"
At a low temperature (predictable), the model might say something like, "Bagels are a traditional Jewish food, often enjoyed with cream cheese or lox." It’s a straightforward answer, like a dish made from the same old recipe.
At a high temperature (creative), the model might say something like, "Bagels are the edible equivalent of a perfect circle of joy, with endless combinations of toppings from classic cream cheese to wild blueberry jam, and every bite feels like a celebration." It’s more fun and unexpected, just like adding more spice to the dish.
Conclusion:
In simple terms, the language model is like a cook deciding how much spice to add to a dish. If it sticks to the basics, the responses are predictable, like a traditional meal. If it adds some spice, the answers become more varied and creative. You can control how much surprise or creativity you want, just like deciding how adventurous to be with the recipes you follow.
Does that help?
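For completeness, the same dial as it is actually exposed to developers: in the ChatGPT API, temperature and top_p are per-request parameters. A minimal sketch using the openai Python SDK, where the model name and prompt are illustrative choices rather than anything prescribed:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = [{"role": "user", "content": "What do you think about bagels?"}]

# Low temperature: mild, by-the-recipe answers.
mild = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=question,
    temperature=0.2,
)

# High temperature plus a nucleus cutoff: spicier, more varied answers.
spicy = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=question,
    temperature=1.5,
    top_p=0.95,
)

print(mild.choices[0].message.content)
print(spicy.choices[0].message.content)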
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48555213)
Date: January 15th, 2025 1:20 PM
Author: ;..........,,,...,,.;.,,...,,,;.;.
(http://www.autoadmit.com/thread.php?thread_id=5664153&forum_id=2...id.#48555190)