Emergent temporal abstractions in autoregressive models enable hierarchical reinforcement learning

7.55 · arXiv:2512.20605 · 2025-12-23

Authors

Seijin Kobayashi; Yanick Schimpf; Maximilian Schlegel; Angelika Steger; Maciej Wolczyk; Johannes von Oswald; Nino Scherre; Kaitlin Maile; Guillaume Lajoie; Blake A. Richards; Rif A. Saurous; James Manyika; Blaise Agüera y Arcas; Alexander Meulemans; João Sacramento

Scores

Novelty: 8.0
Technical: 7.7
Transferability: 6.7
Momentum: 8.0
Evidence: 7.0
Breakthrough: 7.7

Rationale

The paper introduces a novel approach: integrating hierarchical reinforcement learning with autoregressive models via internal temporal abstractions, a fresh way of leveraging a model's internal representations for exploration. Technical significance is high, as the method targets a notable bottleneck in traditional RL by enabling learning from sparse rewards. The approach shows potential to transfer to domains that require hierarchical decision-making, such as robotics and complex simulations. The work aligns well with current research trends in RL and model-based learning; the empirical results are promising but would benefit from validation across more diverse benchmarks.