AdaptAgent: Adapting Multimodal Web Agents with Few-Shot Learning from Human Demonstrations
Gaurav Verma, Rachneet Kaur, Nishan Srishankar, Zhen Zeng, Tucker Balch, Manuela Veloso
- 🏛 Institutions
- Georgia Tech, J.P. Morgan AI Research
- 📅 Date
- November 24, 2024
- 📑 Publisher
- ACL 2025
- 💻 Env
- Web
TLDR
AdaptAgent studies how multimodal web agents can adapt to unseen websites and domains from just a few human demonstrations, instead of relying solely on broad pretraining or large-scale fine-tuning. Proprietary agents are adapted via in-context demonstrations, while open-weight agents are adapted via meta-learning over demonstrations; both variants show clear gains on Mind2Web and VisualWebArena.
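To make the in-context adaptation route concrete, below is a minimal, text-only sketch of packing a few human demonstrations into an agent prompt. It is an illustration, not the paper's code: `Demonstration`, `build_few_shot_prompt`, and the action-string format are hypothetical, and AdaptAgent's demonstrations are multimodal (screenshots plus actions) rather than text-only.

```python
from dataclasses import dataclass


@dataclass
class Demonstration:
    """One human demonstration: a task plus the action trace a person took."""
    task: str
    steps: list[str]  # hypothetical action strings, e.g. "CLICK [search box]"


def build_few_shot_prompt(demos: list[Demonstration], new_task: str) -> str:
    """Prepend a handful of human demonstrations to a new task so a
    prompt-driven agent can adapt in context to an unseen website."""
    parts = ["You are a web agent. Act in the style of the demonstrations below."]
    for i, demo in enumerate(demos, 1):
        trace = "\n".join(f"  {step}" for step in demo.steps)
        parts.append(f"Demonstration {i}\nTask: {demo.task}\nActions:\n{trace}")
    parts.append(f"New task: {new_task}\nActions:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    # Usage: a single demonstration is enough to steer the agent's format.
    demos = [
        Demonstration(
            task="Find the cheapest laptop under $500",
            steps=[
                "CLICK [search box]",
                "TYPE 'laptop'",
                "CLICK [sort: price low to high]",
            ],
        )
    ]
    print(build_few_shot_prompt(demos, "Find a wireless mouse under $20"))
```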
Related papers
- AppAgent: Multimodal Agents as Smartphone Users · December 21, 2023 · CHI 2025
- OpeFlo: Automated UX Evaluation via Simulated Human Web Interaction with GUI Grounding · February 25, 2026 · arXiv
- ColorBrowserAgent: Complex Long-Horizon Browser Agent with Adaptive Knowledge Evolution · January 12, 2026 · arXiv
- WebATLAS: An LLM Agent with Experience-Driven Memory and Action Simulation · October 26, 2025 · NeurIPS 2025 Workshop on Language Agents and World Models
- Surfer 2: The Next Generation of Cross-Platform Computer Use Agents · October 22, 2025 · arXiv
- PolySkill: Learning Generalizable Skills Through Polymorphic Abstraction · October 17, 2025 · arXiv