HyperAI

Language-Guided Tuning: Enhancing Numeric Optimization with Textual Feedback

Yuxing Lu, Yucheng Hu, Nan Sun, Xukai Zhao
Abstract

Configuration optimization remains a critical bottleneck in machine learning, requiring coordinated tuning across model architecture, training strategy, feature engineering, and hyperparameters. Traditional approaches treat these dimensions independently and lack interpretability, while recent automated methods struggle with dynamic adaptability and semantic reasoning about optimization decisions. We introduce Language-Guided Tuning (LGT), a novel framework that employs multi-agent Large Language Models to intelligently optimize configurations through natural language reasoning. We apply textual gradients: qualitative feedback signals that complement numerical optimization by providing semantic understanding of training dynamics and configuration interdependencies. LGT coordinates three specialized agents: an Advisor that proposes configuration changes, an Evaluator that assesses progress, and an Optimizer that refines the decision-making process; together they form a self-improving feedback loop. Through comprehensive evaluation on six diverse datasets, LGT demonstrates substantial improvements over traditional optimization methods, achieving performance gains while maintaining high interpretability.
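The three-agent loop the abstract describes can be sketched in skeleton form. This is a minimal illustration, not the authors' implementation: the LLM calls behind the Advisor, Evaluator, and Optimizer are stubbed with simple rules, and the `Config` fields, scoring function, and feedback strings are all hypothetical stand-ins for real training signals and natural-language textual gradients.

```python
from dataclasses import dataclass

@dataclass
class Config:
    # Hypothetical configuration with two tunable dimensions.
    lr: float = 0.1
    depth: int = 2

def advisor(config: Config, feedback: str, guidance: str) -> Config:
    # Advisor agent: proposes a configuration change from textual
    # feedback (stubbed rules in place of an LLM's reasoning).
    new = Config(config.lr, config.depth)
    if "lr too high" in feedback:
        new.lr /= 2
    elif "smaller steps" in guidance:
        new.depth -= 1
    else:
        new.depth += 1
    return new

def evaluator(config: Config) -> tuple[float, str]:
    # Evaluator agent: assesses progress, returning a numeric score
    # plus a textual gradient (here a toy score peaking at lr=0.025, depth=4).
    score = -abs(config.lr - 0.025) - abs(config.depth - 4)
    feedback = "lr too high" if config.lr > 0.025 else "lr ok, consider capacity"
    return score, feedback

def meta_optimizer(scores: list[float]) -> str:
    # Optimizer agent: refines the decision-making process itself; if the
    # last change hurt the score, it steers the Advisor to back off.
    if len(scores) >= 2 and scores[-1] < scores[-2]:
        return "take smaller steps"
    return ""

def lgt_loop(steps: int = 6) -> tuple[Config, float]:
    # Self-improving feedback loop coordinating the three agents.
    config = Config()
    score, feedback = evaluator(config)
    scores, guidance = [score], ""
    best, best_score = config, score
    for _ in range(steps):
        config = advisor(config, feedback, guidance)
        score, feedback = evaluator(config)
        scores.append(score)
        guidance = meta_optimizer(scores)
        if score > best_score:
            best, best_score = config, score
    return best, best_score
```

In this toy run the loop first halves the learning rate in response to "lr too high" feedback, then grows model depth until the Optimizer's guidance reverses an overshoot, converging on the stubbed optimum.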
