
You can power-limit an RTX 3090 to 250W and it still hauls ass at AI


Date: December 12th, 2025 8:25 PM
Author: Apoplectic vigorous dilemma windowlicker

It idles around 9W for me, though some people can't get theirs to idle below 30W, which sucks, but whatever

(http://www.autoadmit.com/thread.php?thread_id=5809689&forum_id=2&mark_id=5310751#49505831)
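
A minimal sketch of the power-limit claim above, using the NVML Python bindings (pip install nvidia-ml-py). The set call needs root and is the programmatic equivalent of nvidia-smi -pl 250; GPU index 0 is an assumption.

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the 3090 is GPU 0

    # NVML reports power in milliwatts
    print("current draw:", pynvml.nvmlDeviceGetPowerUsage(handle) / 1000, "W")
    print("current limit:", pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000, "W")

    # Cap the board power to 250 W (requires root)
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, 250_000)

    pynvml.nvmlShutdown()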




Date: December 12th, 2025 8:26 PM
Author: Grizzly blathering point



(http://www.autoadmit.com/thread.php?thread_id=5809689&forum_id=2&mark_id=5310751#49505832)




Date: December 12th, 2025 8:27 PM
Author: Sable frozen rehab

What exactly are you even supposed to do with these fraud GPUs to make AI happen?

(http://www.autoadmit.com/thread.php?thread_id=5809689&forum_id=2&mark_id=5310751#49505835)




Date: December 12th, 2025 8:39 PM
Author: Apoplectic vigorous dilemma windowlicker

With one you can do inference. With two you can do video and image generation. With four, IDK; they say you can do training, but I don't care about that.

(http://www.autoadmit.com/thread.php?thread_id=5809689&forum_id=2&mark_id=5310751#49505870)
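
One hedged sketch of how "one card per job" could look in practice, assuming CUDA and Python: pin each process to a GPU with CUDA_VISIBLE_DEVICES before launching it. The two script names are hypothetical stand-ins for whatever actually runs the workloads.

    import os
    import subprocess

    # Hypothetical jobs: GPU 0 serves an LLM, GPU 1 does image/video generation.
    jobs = {
        "0": ["python", "run_llm_server.py"],
        "1": ["python", "run_image_gen.py"],
    }

    procs = [
        subprocess.Popen(cmd, env={**os.environ, "CUDA_VISIBLE_DEVICES": gpu})
        for gpu, cmd in jobs.items()
    ]
    for p in procs:
        p.wait()  # each child only sees the single card it was pinned to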




Date: December 12th, 2025 8:51 PM
Author: Sable frozen rehab

What do you mean "do inference" or "do video"? What program are you using to do these things?

(http://www.autoadmit.com/thread.php?thread_id=5809689&forum_id=2&mark_id=5310751#49505893)




Date: December 12th, 2025 8:58 PM
Author: Apoplectic vigorous dilemma windowlicker

Ollama is the only platform that's optimized for LLMs. You host an Ollama server in an LXC or Docker container, then run Open WebUI or AnythingLLM and point it at your Ollama server. If you want to run Stable Diffusion you can do it in Docker the same way, then access it with ComfyUI in another container.

(http://www.autoadmit.com/thread.php?thread_id=5809689&forum_id=2&mark_id=5310751#49505912)
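
For the Ollama piece, a minimal sketch of talking to a locally hosted server over its HTTP API (default port 11434), using only the Python standard library; the model name llama3 is an assumption and has to be pulled first.

    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",   # assumed model; run `ollama pull llama3` first
            "prompt": "Why power-limit a GPU for inference?",
            "stream": False,     # one JSON reply instead of a token stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

Open WebUI and AnythingLLM talk to this same server; you just give them its base URL instead of calling the API yourself.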




Date: December 12th, 2025 8:59 PM
Author: Sable frozen rehab

Aight bet, I'm not doing any of that shit

(http://www.autoadmit.com/thread.php?thread_id=5809689&forum_id=2&mark_id=5310751#49505917)




Date: December 12th, 2025 10:15 PM
Author: Apoplectic vigorous dilemma windowlicker

Enjoy having all your chats stored in the cloud and used to train models of you, I guess

(http://www.autoadmit.com/thread.php?thread_id=5809689&forum_id=2&mark_id=5310751#49506076)