Codex 5.3 extra-high is so good it's nuts
Date: February 6th, 2026 12:25 AM
Author: .,.,.;;,;.,;:,:,,:,.,:,::,..;.,:,.:;.:.,;.:.,:.::,
what do you use it for
(http://www.autoadmit.com/thread.php?thread_id=5831622&forum_id=2Reputation#49650249) |
Date: February 6th, 2026 1:37 AM
Author: .,.,...,..,.,.,:,,:,...,:::,...,:,.,.:..:.
The SWE-bench Pro graph of number of tokens vs. accuracy is interesting. At the limit, it converges on 5.2 codex max. Models are becoming more token-efficient, but not necessarily developing new capabilities?
(http://www.autoadmit.com/thread.php?thread_id=5831622&forum_id=2Reputation#49650280) |