# Sophia Trains Faster But Gets Nowhere Farther: What Stanford FLeX Code Research Found

Date: 2026-04-09 - Category: Artificial Intelligence

The Sophia optimizer trains code models 30% faster than AdamW, but new Stanford research shows that the speed advantage comes at no accuracy cost, which is either good news or a red flag depending on what you thought the speedup meant.

---