
Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization

Zhenpeng Su, Leiyu Pan, Xue Bai, Dening Liu, Guanting Dong, Jiaming Huang, Wenping Hu, Guorui Zhou
Abstract

We present Klear-Reasoner, a model with long reasoning capabilities that demonstrates careful deliberation during problem solving, achieving outstanding performance across multiple benchmarks. Although the community already has many excellent works on reasoning models, reproducing high-performance reasoning models remains difficult because training details are often disclosed incompletely. This report provides an in-depth analysis of the reasoning model, covering the entire post-training workflow from data preparation and long Chain-of-Thought supervised fine-tuning (long CoT SFT) to reinforcement learning (RL), along with detailed ablation studies for each experimental component. For SFT data, our experiments show that a small number of high-quality data sources is more effective than a large number of diverse data sources, and that difficult samples can achieve better results without accuracy filtering. In addition, we investigate two key issues with current clipping mechanisms in RL: clipping suppresses critical exploration signals and ignores suboptimal trajectories. To address these challenges, we propose Gradient-Preserving clipping Policy Optimization (GPPO), which gently backpropagates gradients from clipped tokens. GPPO not only enhances the model's exploration capacity but also improves its efficiency in learning from negative samples. Klear-Reasoner exhibits exceptional reasoning abilities in mathematics and programming, scoring 90.5% on AIME 2024, 83.2% on AIME 2025, 66.0% on LiveCodeBench V5, and 58.1% on LiveCodeBench V6.
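The abstract does not spell out the GPPO update rule, but the behavior it describes (gradients still flowing through clipped tokens, unlike standard PPO clipping, whose gradient is zero in the clipped region) can be realized with a stop-gradient reparameterization of the clip bound. The sketch below illustrates that idea in PyTorch. It is an interpretation under stated assumptions, not the paper's official formulation: the function name `gppo_loss`, the `eps` value, and the exact gradient scaling are illustrative choices.

```python
# Minimal sketch of gradient-preserving clipping in the spirit of GPPO.
# Assumption: each clip bound is rewritten as (bound / detach(ratio)) * ratio,
# so the forward value equals the bound (as in PPO) while backprop still sees
# a gently scaled gradient through the importance ratio of clipped tokens.
import torch

def gppo_loss(logp_new, logp_old, advantages, eps=0.2):
    """Per-token policy loss with gradient-preserving clipping (sketch)."""
    ratio = torch.exp(logp_new - logp_old)          # importance ratio r_t
    lower, upper = 1.0 - eps, 1.0 + eps

    # Standard PPO uses clip(ratio, lower, upper), which kills gradients on
    # clipped tokens. Here the forward value is still the bound, but the
    # gradient d(ratio)/d(theta) survives, scaled by bound / ratio.
    preserved_upper = (upper / ratio.detach()) * ratio
    preserved_lower = (lower / ratio.detach()) * ratio

    clipped = torch.where(ratio > upper, preserved_upper,
              torch.where(ratio < lower, preserved_lower, ratio))

    # Same pessimistic min as PPO; clipped tokens keep a gentle gradient,
    # so exploration signals and negative samples still shape the update.
    loss = -torch.min(ratio * advantages, clipped * advantages)
    return loss.mean()
```

With this construction, tokens whose ratio exceeds the trust region still contribute a bounded learning signal rather than being silently dropped, which matches the abstract's claim that GPPO aids exploration and learning from negative samples.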
