Reinforcement Learning on Web Interfaces Using Workflow-Guided Exploration
Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, Percy Liang
- 🏛 Institutions
- Stanford
- 📅 Date
- February 24, 2018
- 📑 Publisher
- ICLR 2018 (Poster)
- 💻 Env
- Web
- 🔑 Keywords
TLDR
This paper introduces workflow-guided exploration, in which demonstrations are converted into high-level workflows that constrain a web agent's exploration during RL. On World of Bits-style tasks, this makes learning under sparse rewards far more sample-efficient than behavioral cloning alone.
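The core idea can be illustrated with a minimal sketch (not the paper's actual implementation; the action schema, field names, and abstraction rule here are simplified assumptions): each concrete action in a demonstration is abstracted into a workflow step that drops page-specific details, and exploration then samples only actions consistent with the current step.

```python
import random

# Hypothetical demonstration: a sequence of concrete browser actions.
demo = [
    {"type": "click", "tag": "input",  "text": "search box"},
    {"type": "type",  "tag": "input",  "text": "query"},
    {"type": "click", "tag": "button", "text": "Go"},
]

def induce_workflow(demonstration):
    """Abstract each concrete action into a high-level workflow step,
    keeping only the action type and element tag (an assumed abstraction)."""
    return [{"type": a["type"], "tag": a["tag"]} for a in demonstration]

def matches(step, action):
    """An action satisfies a workflow step if it agrees on all abstract fields."""
    return all(action.get(k) == v for k, v in step.items())

def constrained_rollout(workflow, candidate_actions, rng=random):
    """Sample one exploration episode: at each step, choose uniformly among
    the candidate actions on the page that satisfy the workflow step."""
    episode = []
    for step in workflow:
        legal = [a for a in candidate_actions if matches(step, a)]
        if not legal:
            return None  # the workflow cannot be followed on this page
        episode.append(rng.choice(legal))
    return episode
```

Because every sampled episode follows the demonstrated step sequence, exploration concentrates on trajectories likely to earn the sparse end-of-task reward; successful episodes can then supervise a neural policy.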
Related papers
- Proposer-Agent-Evaluator (PAE): Autonomous Skill Discovery For Foundation Model Internet Agents · December 17, 2024 · ICML 2025 (Poster)
- Navigating WebAI: Training Agents to Complete Web Tasks with Large Language Models and Reinforcement Learning · May 1, 2024 · SAC 2024
- World of Bits: An Open-Domain Platform for Web-Based Agents · August 31, 2017 · ICML 2017
- ClawGUI: A Unified Framework for Training, Evaluating, and Deploying GUI Agents · April 13, 2026 · arXiv
- DistRL: An Asynchronous Distributed Reinforcement Learning Framework for On-Device Control Agents · October 18, 2024 · ICLR 2025 (Poster)
- WebArena-Infinity: Generating Browser Environments with Verifiable Tasks at Scale · March 2026 · Blog Post