Source Idea
Emergent temporal abstractions in autoregressive models can improve transfer learning in hierarchical reinforcement learning across different domains.
Files (8)
- README.md
- metadata.json
- requirements.txt
- src/environments.py
- src/evaluate.py
- src/models.py
- src/train.py
- src/transfer_learning.py
README Preview
# Temporal Abstractions in Reinforcement Learning
## Project Description
This project explores the hypothesis that emergent temporal abstractions in autoregressive models can improve transfer learning in hierarchical reinforcement learning across different domains.
## Research Hypothesis
Emergent temporal abstractions in autoregressive models can enhance the adaptability and sample efficiency of hierarchical RL agents on new tasks with minimal retraining.
## Implementation Approach
We will train hierarchical RL agents augmented with temporal abstractions, transfer them to new environments, and measure how quickly and efficiently they adapt. The evaluation compares adaptation speed and sample efficiency against two baselines: models trained from scratch and models without temporal abstractions.
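As a rough sketch, the architecture could pair a high-level policy that emits an option latent with an autoregressive decoder that unrolls that latent into a short sequence of primitive actions. All class, parameter, and dimension names below are illustrative assumptions, not the actual contents of `src/models.py`:
```python
# Hypothetical sketch of a temporally abstract hierarchical agent:
# a high-level policy picks an option latent, and an autoregressive
# GRU decoder unrolls it into `horizon` primitive-action steps.
import torch
import torch.nn as nn


class AutoregressiveOptionDecoder(nn.Module):
    """Decodes a short sequence of primitive actions from a single option latent."""

    def __init__(self, obs_dim, action_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.gru = nn.GRU(obs_dim + action_dim, hidden_dim, batch_first=True)
        self.to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.action_head = nn.Linear(hidden_dim, action_dim)

    def forward(self, obs, option_latent, horizon=8):
        # obs: (batch, obs_dim); previous action is fed back autoregressively.
        batch = obs.size(0)
        h = torch.tanh(self.to_hidden(option_latent)).unsqueeze(0)  # (1, batch, hidden)
        obs_step = obs.unsqueeze(1)                                 # (batch, 1, obs_dim)
        prev_action = torch.zeros(batch, 1, self.action_head.out_features)
        logits_per_step = []
        for _ in range(horizon):
            x = torch.cat([obs_step, prev_action], dim=-1)          # (batch, 1, obs+act)
            out, h = self.gru(x, h)
            logits = self.action_head(out)                          # (batch, 1, action_dim)
            prev_action = torch.softmax(logits, dim=-1)             # soft feedback of last action
            logits_per_step.append(logits)
        return torch.cat(logits_per_step, dim=1)                    # (batch, horizon, action_dim)


class HighLevelPolicy(nn.Module):
    """Maps the current observation to an option latent that the decoder unrolls."""

    def __init__(self, obs_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)


# Example forward pass with dummy dimensions:
high = HighLevelPolicy(obs_dim=4, latent_dim=16)
decoder = AutoregressiveOptionDecoder(obs_dim=4, action_dim=3, latent_dim=16)
obs = torch.randn(2, 4)
action_logits = decoder(obs, high(obs))   # (2, 8, 3): one option = 8 primitive actions
```
Under this assumed design, transfer would mostly retrain the high-level policy while the decoded abstractions are reused across domains, which is how the hypothesized "minimal retraining" would show up in practice.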
## Setup Instructions
1. Clone the repository:
```bash
git clone https://github.com/yourusername/temporal_abstractions_rl.git
cd temporal_abstractions_rl
```
2. Install the required packages:
```bash
pip install -r requirements.txt
```
3. Run the training script:
```bash
python src/train.py
```
## Usage Examples
- To train a model with temporal abstractions:
```bash
python src/train.py --use_temporal_abstractions
```
- To evaluate transfer learning performance (a sketch of a possible adaptation-speed metric follows this list):
```bash
python src/evaluate.py --model_path path/to/model
```
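
One way the adaptation-speed comparison could be implemented is to count fine-tuning episodes until a moving-average return crosses a target threshold. The snippet below is a minimal sketch under that assumption; `run_episode` is a hypothetical callable, not the actual API of `src/evaluate.py`:
```python
# Hypothetical adaptation-speed metric: episodes of fine-tuning needed
# before the moving-average return reaches `threshold`.
from collections import deque


def episodes_to_threshold(run_episode, agent, env, threshold,
                          window=20, max_episodes=1000):
    """run_episode(agent, env) -> float episode return (assumed signature)."""
    returns = deque(maxlen=window)
    for episode in range(1, max_episodes + 1):
        returns.append(run_episode(agent, env))
        if len(returns) == window and sum(returns) / window >= threshold:
            return episode        # smaller value = faster adaptation
    return max_episodes           # threshold never reached within the budget
```
Reporting this count for the temporal-abstraction model, the from-scratch baseline, and the no-abstraction baseline gives a directly comparable adaptation-speed number per target environment.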
## Expected Results
We expect to demonstrate that models using temporal abstractions adapt to new tasks more quickly and more sample-efficiently than the baseline models.
## References
- [Emergent temporal abstractions in autoregressive models enable hierarchical reinforcement learning](http://arxiv.org/abs/2512.20605v1)