There isn't enough electricity grid capacity to sustain mass AI adoption or more
Date: December 29th, 2024 9:36 PM
Author: .,.,...,..,.,.,:,,:,.,.,:::,....,:,..,:.:.,:.::,
The grid will respond to demand signals. If the demand is great enough, new conventional nuclear generation facilities will be built. But to start with, it is very easy to build a ton of gas combustion turbines (CTs). The AI boom is actually what will save us from over-reliance on wind and solar, and the reliability problems that over-reliance would create.
(http://www.autoadmit.com/thread.php?thread_id=5656162&forum_id=2:#48500142)
Date: December 30th, 2024 1:16 PM
Author: .,.,...,..,.,.,:,,:,.,.,:::,....,:,..,:.:.,:.::,
(http://www.autoadmit.com/thread.php?thread_id=5656162&forum_id=2:#48501571) |
Date: December 30th, 2024 1:17 PM
Author: https://imgur.com/a/o2g8xYK
ljl this is only true because Nvidia GPUs are energy HOGS manufactured on outdated process nodes. The only company that makes worse GPUs is AMD.
Apple silicon doesn't have this problem. Neither does Qualcomm now. Enjoy paying $20k for less VRAM than you can get in a Macbook, TSINAH.
(http://www.autoadmit.com/thread.php?thread_id=5656162&forum_id=2:#48501577) |
Date: December 30th, 2024 1:30 PM Author: pitbulls eating your face in hell forever tp
They will just raise electricity prices to levels unaffordable for households and consumers to force rationing and rolling brown-outs for hours every day like in South Africa.
Regular people will sit shivering in the dark, munching on cold bugs in their micro-studio apartments, while the massive AI data centers glow and hum with energy off in the distance.
(http://www.autoadmit.com/thread.php?thread_id=5656162&forum_id=2:#48501623) |
Date: December 30th, 2024 1:55 PM Author: Diane Rehm talking dirty (🐿️ )
The exact amount of electricity consumed per question I answer depends on various factors, such as the complexity of the query, the underlying hardware (servers), and the overall infrastructure that powers the model. However, we can give a general idea of the electricity consumption involved in running large AI models like mine.
Key Factors Affecting Electricity Usage:
1. Model Size: Larger models, like GPT-4, consume more energy to process queries than smaller ones. The deeper the neural network, the more computation is needed.
2. Server Load: The number of users and queries being processed at any given time affects the overall power consumption. High demand can lead to more servers being activated.
3. Data Centers: AI models like mine run on clusters of high-performance GPUs or specialized hardware (e.g., TPUs) located in data centers. The energy consumption of these data centers includes cooling, power for servers, and other infrastructure.
4. Query Complexity: Simpler queries likely require less computation, while more complex or multi-turn interactions consume more energy.
General Estimation:
It’s difficult to pinpoint the exact energy cost for each query, but research into AI model energy consumption can provide a rough idea.
1. Energy per Inference:
• A single query or inference (answer) from a model like GPT-3 or GPT-4 could range from 0.1 Wh to 1 Wh depending on the size of the model, the hardware used, and the complexity of the query.
• For comparison, 1 Wh is roughly the energy a 10 W LED bulb uses in six minutes, or a 1,000 W microwave uses in about four seconds.
2. Total Data Center Energy:
• Large data centers that host AI models can consume tens of megawatt-hours (MWh) per day, depending on the scale. However, the actual energy used for each query is a fraction of that total.
Conclusion:
While it’s hard to pinpoint the exact electricity consumption per question, each query likely consumes a very small amount of energy, possibly in the range of 0.1 to 1 watt-hour (Wh). Given the rapid advancements in more energy-efficient hardware and optimizations in data centers, the energy footprint per query is continuously decreasing, but it still adds up at scale across millions or billions of interactions.
The energy consumption is largely driven by the infrastructure behind the AI, including both hardware and cooling needs in the data centers, and not just the direct computational work of answering a single question.
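The back-of-the-envelope math scales up like this — a quick Python sketch using the 0.1–1 Wh per-query range from the post (the query volume is a hypothetical round number, not a measured figure):

```python
# Scale the (assumed) 0.1-1 Wh per-query range to a large query volume.

WH_PER_QUERY_LOW = 0.1           # optimistic per-inference estimate (Wh)
WH_PER_QUERY_HIGH = 1.0          # pessimistic per-inference estimate (Wh)
QUERIES_PER_DAY = 1_000_000_000  # hypothetical 1 billion queries/day

def daily_energy_mwh(wh_per_query: float, queries: int) -> float:
    """Total daily energy in megawatt-hours (Wh -> MWh)."""
    return wh_per_query * queries / 1e6

low = daily_energy_mwh(WH_PER_QUERY_LOW, QUERIES_PER_DAY)
high = daily_energy_mwh(WH_PER_QUERY_HIGH, QUERIES_PER_DAY)
print(f"{low:,.0f} to {high:,.0f} MWh/day")  # 100 to 1,000 MWh/day
```

So even a tiny per-query cost lands in the hundreds of MWh per day at consumer scale, which is consistent with the "tens of MWh per day per data center" figure above.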
(http://www.autoadmit.com/thread.php?thread_id=5656162&forum_id=2:#48501725) |
Date: January 1st, 2025 12:13 AM Author: magadood
energy is unlikely to be a major constraint that prevents AGI development. this is a starting point for this analysis:
https://epoch.ai/blog/can-ai-scaling-continue-through-2030
"We find that training runs of 2e29 FLOP will likely be feasible by the end of this decade. In other words, by 2030 it will be very likely possible to train models that exceed GPT-4 in scale to the same degree that GPT-4 exceeds GPT-2 in scale."
we are likely to see large increases in training compute availability in a few years, even with power being limited. the other issue is that the focus on training costs is a result of the poor generalization of transformers and current training techniques. they have to be trained on every bit of data because generalization is not great. the news stories are all focused on finding as much data as possible, or creating synthetic data, to produce more robust models. humans don't train on 10 trillion words. if researchers could find algorithms with human-level data efficiency, compute costs would plummet. considering humans learn to be reasonably competent in a few years (much of that time spent processing irrelevant/redundant information) with brains that are clearly not optimized (even the smartest people carry substantial mutational load that impairs brain function), I would not bet on the theoretical minimum cost being high. it wouldn't surprise me if AGI training costs could be <$100K even at current prices.
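the scale claim in the Epoch quote is easy to sanity-check. using commonly cited public estimates of roughly 1.5e21 FLOP for GPT-2 and 2e25 FLOP for GPT-4 training compute (both are outside assumptions, not figures from the post):

```python
# Sanity check: is 2e29 FLOP to GPT-4 roughly what GPT-4 was to GPT-2?
# Training-compute figures are rough public estimates, not official numbers.

GPT2_FLOP = 1.5e21   # approximate GPT-2 training compute (assumption)
GPT4_FLOP = 2e25     # approximate GPT-4 training compute (assumption)
FUTURE_FLOP = 2e29   # the 2030 figure from the Epoch quote

gpt4_over_gpt2 = GPT4_FLOP / GPT2_FLOP
future_over_gpt4 = FUTURE_FLOP / GPT4_FLOP

print(f"GPT-4 / GPT-2: ~{gpt4_over_gpt2:.0e}x")   # ~1e+04x
print(f"2e29 / GPT-4:  ~{future_over_gpt4:.0e}x")  # ~1e+04x
```

both jumps come out around four orders of magnitude, which is what the quote means by "to the same degree."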
(http://www.autoadmit.com/thread.php?thread_id=5656162&forum_id=2:#48506410) |
Date: January 3rd, 2025 6:13 PM Author: ,.,.,.,....,.,..,.,.,.
DeepSeek was trained for <$6 million and is a frontier-level model. compare that to GPT-4o, which cost more than $100 million. LLMs are still very new; there is no theoretical model for how to train them, and a significant amount of alchemy goes into getting them to work. they just stir lots of shit together until the results look good. there is likely lots of room for training optimization even if hardware improvements stopped (which won't happen soon enough to matter).
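taking the post's two cost claims at face value (neither is a verified figure), the implied ratio:

```python
# Cost ratio implied by the post's figures; both numbers are the
# thread's claims, not verified accounting.
DEEPSEEK_COST = 6_000_000    # "<$6 million" per the post
GPT4O_COST = 100_000_000     # ">$100 million" per the post

ratio = GPT4O_COST / DEEPSEEK_COST
print(f"~{ratio:.0f}x cheaper")  # ~17x cheaper
```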
(http://www.autoadmit.com/thread.php?thread_id=5656162&forum_id=2:#48515739) |