Emergent temporal abstractions in autoregressive models enable hierarchical reinforcement learning
Authors
Seijin Kobayashi; Yanick Schimpf; Maximilian Schlegel; Angelika Steger; Maciej Wolczyk; Johannes von Oswald; Nino Scherrer; Kaitlin Maile; Guillaume Lajoie; Blake A. Richards; Rif A. Saurous; James Manyika; Blaise Agüera y Arcas; Alexander Meulemans; João Sacramento
Scores
Rationale
The paper introduces a novel approach that integrates hierarchical reinforcement learning with autoregressive models via internal temporal abstractions, a fresh way of leveraging a model's internal representations for exploration. The technical significance is high: by enabling learning from sparse rewards, the method addresses a notable bottleneck of traditional RL. The approach also shows potential to transfer to other domains that require hierarchical decision-making, such as robotics or complex simulations. The work aligns well with current research trends in RL and model-based learning, and the empirical results, while promising, would benefit from validation across a more diverse set of benchmarks. A minimal sketch of the kind of mechanism being evaluated follows below.
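To make the sparse-reward argument concrete, here is a minimal, self-contained sketch of hierarchical control over temporally extended actions: a high-level policy learns, via SMDP-style Q-learning, which abstraction code to hand to a frozen low-level policy that stands in for a pretrained autoregressive model. The toy environment, the discrete abstraction codes, and every name in this sketch are illustrative assumptions, not the paper's actual method or API.

```python
import random
from collections import defaultdict

# Illustrative sketch only: it shows the generic shape of hierarchical RL over
# temporally extended actions ("options") whose low-level behavior comes from
# a frozen stand-in for a pretrained autoregressive model. The environment,
# the abstraction codes, and all names are assumptions, not the paper's API.

NUM_CODES = 4     # number of discrete temporal abstractions (assumed)
NUM_ACTIONS = 3   # primitive actions: 0 = left, 1 = stay, 2 = right
GOAL, HORIZON, STEP_CAP = 20, 10, 200

def env_step(state, action):
    """Toy sparse-reward chain: the only reward is 1.0 upon reaching GOAL."""
    state = max(0, state + (1 if action == 2 else -1 if action == 0 else 0))
    done = state >= GOAL
    return state, (1.0 if done else 0.0), done

def low_level_act(code, state):
    """Stand-in for a pretrained autoregressive model decoding the next
    primitive action conditioned on an abstraction code; here just a fixed
    deterministic mapping from (code, state) to an action."""
    return hash((code, state)) % NUM_ACTIONS

q = defaultdict(float)          # high-level Q-values over (state, code)
alpha, gamma, eps = 0.1, 0.99, 0.2

for episode in range(500):
    state, done, steps = 0, False, 0
    while not done and steps < STEP_CAP:
        # High-level policy: epsilon-greedy over abstraction codes.
        if random.random() < eps:
            code = random.randrange(NUM_CODES)
        else:
            code = max(range(NUM_CODES), key=lambda c: q[(state, c)])
        # Execute the chosen option for up to HORIZON primitive steps.
        start, option_return, discount = state, 0.0, 1.0
        for _ in range(HORIZON):
            state, reward, done = env_step(state, low_level_act(code, state))
            option_return += discount * reward
            discount *= gamma
            steps += 1
            if done:
                break
        # SMDP-style Q-learning update over the whole extended action.
        best_next = max(q[(state, c)] for c in range(NUM_CODES))
        target = option_return + discount * (0.0 if done else best_next)
        q[(start, code)] += alpha * (target - q[(start, code)])
```

The point of the sketch is the credit-assignment structure rather than the stub policies: the sparse terminal reward is propagated over whole options instead of individual primitive steps, which is what makes exploration under sparse rewards tractable in this setting.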