Leave biglol as 6th year to join an early stage AI start-up?
Date: March 9th, 2025 7:53 PM Author: ,....,..,.,,,,,..
A lot of these AI startups will have their lunch eaten by larger players training more general models. Companies building their products on specialized training or special scaffolding of LLMs will almost certainly not last. Claude and ChatGPT become more flexible every year, and it's significantly easier to make models more adaptable through diversified training than it is to hand-engineer them for a particular use.
(http://www.autoadmit.com/thread.php?thread_id=5691531&forum_id=2#48731527) |
Date: March 9th, 2025 8:02 PM Author: ,....,..,.,,,,,..
Large language models and their multimodal successors. The idea, which is well validated at this point, is that models become more adaptable to a wide range of inputs if they are trained on everything. The training process encourages capabilities like in-context learning, and the resulting model responds better than a highly specialized model trained on a narrow output distribution. You don't want an LLM trained only on legal texts or whatever. You want to take one of the giant models trained on everything and fine-tune it for legal text generation, because it already has a lot of circuits that allow flexible language understanding and generation. Data diversity is essential, and in order to make use of it you need a huge compute budget.
(http://www.autoadmit.com/thread.php?thread_id=5691531&forum_id=2#48731542) |
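The pretrain-then-fine-tune argument above can be sketched with a toy numerical example. This is a minimal stand-in, not an actual LLM pipeline: the "pretrained general model" is a frozen random feature extractor, the "domain task" is a synthetic regression target, and fine-tuning means training only a small head on top of the frozen base instead of re-learning everything from narrow data. All names and dimensions here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "general" pretrained model: a frozen nonlinear
# feature extractor whose weights we pretend were fit on broad data.
D_IN, D_FEAT = 16, 8
W_base = rng.normal(size=(D_IN, D_FEAT))  # frozen "pretrained" weights

# Synthetic domain task (the "legal text generation" stand-in): targets
# that are a function of the base features plus a little noise.
X = rng.normal(size=(200, D_IN))
feats = np.tanh(X @ W_base)               # reuse the base's representations
true_head = rng.normal(size=(D_FEAT, 1))
y = feats @ true_head + 0.01 * rng.normal(size=(200, 1))

# "Fine-tuning": train ONLY a small head on the frozen features,
# rather than training a whole specialized model from scratch.
w_head = np.zeros((D_FEAT, 1))
lr = 0.05

def mse(w):
    return float(np.mean((feats @ w - y) ** 2))

initial_loss = mse(w_head)
for _ in range(500):
    grad = 2.0 * feats.T @ (feats @ w_head - y) / len(y)
    w_head -= lr * grad
final_loss = mse(w_head)
```

The point the sketch illustrates: because the frozen base already provides usable representations, adapting to the narrow task only requires fitting a tiny number of new parameters, which is the same economics that favor fine-tuning a giant general model over training a specialized one.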