ScalingOpt
A systematic discussion platform and scientific benchmark for LLM training. ScalingOpt explores the core role of optimizers in the era of large language models, focusing on the interplay between optimizer design, model architecture, and training configuration under the Scaling Law paradigm.