MindSearch: Mimicking Human Minds Elicits Deep AI Searcher
Zehui Chen, Kuikun Liu, Qiuchen Wang, Jiangning Liu, Wenwei Zhang, Kai Chen, Feng Zhao
- 🏛 Institutions
- University of Science and Technology of China, Shanghai AI Laboratory
- 📅 Date
- July 29, 2024
- 📑 Publisher
- ICLR 2025 (Poster)
TLDR
This paper presents MindSearch, a multi-agent framework for web information seeking and integration that mimics human cognitive processes. The system consists of a WebPlanner and WebSearchers: the WebPlanner models multi-step information seeking as a dynamic graph construction process, decomposing a complex query into sub-questions, while a WebSearcher performs hierarchical information retrieval to answer each sub-question. MindSearch delivers responses of significantly higher quality and depth than existing AI search solutions, processing information from over 300 web pages in just 3 minutes.
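The planner/searcher split described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: all class and method names (`WebPlanner`, `WebSearcher`, `add_node`, `run`) are hypothetical, and the searcher is a stub that stands in for the paper's hierarchical retrieval. The planner holds a dependency graph of sub-questions and resolves each node only after its prerequisite nodes have answers, passing those answers along as context.

```python
# Hypothetical sketch of MindSearch's planner/searcher decomposition.
# Names and APIs are illustrative; the real system drives graph
# construction with an LLM and retrieves from live web pages.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Node:
    """One sub-question in the planner's graph."""
    question: str
    deps: list = field(default_factory=list)  # names of prerequisite nodes
    answer: Optional[str] = None


class WebSearcher:
    """Stub for hierarchical retrieval: here it just echoes the query."""

    def search(self, question: str, context: dict) -> str:
        # A real searcher would generate queries, fetch pages, and
        # summarize results, conditioned on upstream answers in `context`.
        return f"answer({question})"


class WebPlanner:
    """Builds a sub-question graph and resolves it in dependency order."""

    def __init__(self, searcher: WebSearcher):
        self.searcher = searcher
        self.graph: dict = {}

    def add_node(self, name: str, question: str, deps=()):
        self.graph[name] = Node(question, list(deps))

    def run(self) -> dict:
        resolved: dict = {}
        # Repeatedly resolve every node whose dependencies are done
        # (assumes the graph is acyclic).
        while len(resolved) < len(self.graph):
            for name, node in self.graph.items():
                if name in resolved or not all(d in resolved for d in node.deps):
                    continue
                context = {d: resolved[d] for d in node.deps}
                node.answer = self.searcher.search(node.question, context)
                resolved[name] = node.answer
        return resolved


planner = WebPlanner(WebSearcher())
planner.add_node("a", "Who founded company X?")
planner.add_node("b", "What did that founder study?", deps=["a"])
results = planner.run()
```

Because each resolved sub-answer is fed into dependent nodes as context, independent branches of the graph could in principle be searched in parallel, which is how the paper attributes its speed over sequential search pipelines.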
Related papers
- WebATLAS: An LLM Agent with Experience-Driven Memory and Action Simulation · October 26, 2025 · NeurIPS 2025 Workshop on Language Agents and World Models
- Building a Stable Planner: An Extended Finite State Machine Based Planning Module for Mobile GUI Agent · May 20, 2025 · arXiv
- LiteWebAgent: The Open-Source Suite for VLM-Based Web-Agent Applications · March 4, 2025 · NAACL 2025 System Demonstrations
- A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis · July 24, 2023 · ICLR 2024 (Oral)
- VLAA-GUI: Knowing When to Stop, Recover, and Search, A Modular Framework for GUI Automation · April 23, 2026 · arXiv
- ClawGUI: A Unified Framework for Training, Evaluating, and Deploying GUI Agents · April 13, 2026 · arXiv