Papers
arxiv:2603.15594

OpenSeeker: Democratizing Frontier Search Agents by Fully Open-Sourcing Training Data

Published on Mar 16
· Submitted by
yuwendu
on Mar 17
#2 Paper of the day
Authors:
Rui Ye et al.

Abstract

Deep search capabilities have become an indispensable competency for frontier Large Language Model (LLM) agents, yet the development of high-performance search agents remains dominated by industrial giants due to a lack of transparent, high-quality training data. This persistent data scarcity has fundamentally hindered the broader research community's progress in this domain. To bridge this gap, we introduce OpenSeeker, the first fully open-source search agent (i.e., model and data) that achieves frontier-level performance through two core technical innovations: (1) fact-grounded, scalable, controllable QA synthesis, which reverse-engineers the web graph via topological expansion and entity obfuscation to generate complex, multi-hop reasoning tasks with controllable coverage and complexity; and (2) denoised trajectory synthesis, which employs a retrospective summarization mechanism to denoise the trajectory, thereby prompting the teacher LLMs to generate high-quality actions. Experimental results demonstrate that OpenSeeker, trained in a single run on only 11.7k synthesized samples, achieves state-of-the-art performance across multiple benchmarks including BrowseComp, BrowseComp-ZH, xbench-DeepSearch, and WideSearch. Notably, trained with simple SFT, OpenSeeker significantly outperforms the second-best fully open-source agent, DeepDive (e.g., 29.5% vs. 15.3% on BrowseComp), and even surpasses industrial competitors such as Tongyi DeepResearch (trained via extensive continual pre-training, SFT, and RL) on BrowseComp-ZH (48.4% vs. 46.7%). We fully open-source the complete training dataset and model weights to democratize frontier search agent research and foster a more transparent, collaborative ecosystem.
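To make the first innovation concrete, here is a minimal, hypothetical sketch of fact-grounded QA synthesis: expand a fact chain over a toy web graph (topological expansion), then hide the seed entity behind an indirect description (entity obfuscation) so the answer requires multi-hop search. All names and the toy graph here are illustrative assumptions, not the paper's actual pipeline.

```python
# Toy "web graph": entity -> list of (relation, neighbor) facts.
GRAPH = {
    "Ada Lovelace": [("collaborated_with", "Charles Babbage")],
    "Charles Babbage": [("designed", "Analytical Engine")],
    "Analytical Engine": [],
}

def expand_chain(seed, hops):
    """Topologically expand from a seed entity to build a multi-hop fact chain."""
    chain, current = [], seed
    for _ in range(hops):
        facts = GRAPH.get(current, [])
        if not facts:
            break
        relation, neighbor = facts[0]
        chain.append((current, relation, neighbor))
        current = neighbor
    return chain

def obfuscate(entity, clue):
    """Entity obfuscation: replace a named entity with an indirect description."""
    return f"the person {clue}"

def synthesize_qa(seed, hops=2):
    """Build one multi-hop QA pair whose answer sits at the end of the chain."""
    chain = expand_chain(seed, hops)
    if not chain:
        return None
    # Hide the seed entity behind a clue so the agent must search to resolve it.
    head = obfuscate(seed, "often called the first computer programmer")
    question = f"What did the collaborator of {head} design?"
    answer = chain[-1][2]
    return question, answer

q, a = synthesize_qa("Ada Lovelace")
```

The key property is controllability: the number of hops and the obfuscation clue jointly set the task's complexity, and the chain's final node is always a verifiable, fact-grounded answer.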

Community

Paper author Paper submitter

🔓 Shattering the Corporate Data Moat: Meet OpenSeeker, the First Fully Open-Source Frontier Search Agent by a Purely Academic Team.

High-performance search agents have long been a "closed-door game." While model weights are often open, the high-quality training data (the real secret sauce) has remained hidden. We introduce OpenSeeker, a purely academic initiative that achieves SOTA search performance while open-sourcing everything: models and 100% of the training data.

🚀 Why OpenSeeker is a Game Changer:

🔥 Efficiency Over Scale: We prove you don't need millions of samples. OpenSeeker achieves frontier performance with just 11.7k synthesized samples and single-run SFT, with no complex RL or continual pre-training.
🎓 Empowering Academia: We provide the missing high-quality data foundation, enabling researchers to build next-gen agents without corporate-scale resources.

⚔️ Beating Industry Giants:
🇨🇳 48.4% on BrowseComp-ZH: Surpassing Alibaba's Tongyi DeepResearch (46.7%)! We beat their massive CPT + SFT + RL pipeline using SFT only.
🌍 SOTA among ~30B SFT models: 29.5% (BrowseComp), 74.0% (xbench-DeepSearch), 59.4% (WideSearch).

🧪 The "Secret Sauce" Behind the Data:
🕸️ Fact-Grounded QA Synthesis: We reverse-engineer the web graph using Entity Obfuscation to generate complex multi-hop queries.
🧹 Denoised Trajectory Synthesis: Our Asymmetric Context Training teaches students to predict expert actions from raw, noisy HTML, mastering robust information extraction.
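A rough sketch of how such asymmetric-context SFT pairs could be built: the teacher acts on a denoised (retrospectively summarized) view of each observation, while the student example keeps the raw, noisy HTML as input. The function names and the trivial `summarize` stand-in are assumptions for illustration, not the released pipeline.

```python
def summarize(raw_html: str) -> str:
    """Stand-in for retrospective summarization (denoising the observation).
    A real pipeline would use an LLM; here we just drop markup-like tokens."""
    return " ".join(t for t in raw_html.split() if not t.startswith("<"))

def build_sft_pairs(trajectory, teacher_policy):
    """Teacher acts on clean summaries; student examples keep the raw HTML."""
    pairs = []
    for raw_observation in trajectory:
        clean = summarize(raw_observation)
        action = teacher_policy(clean)           # high-quality action from denoised context
        pairs.append({"input": raw_observation,  # student sees the noisy context
                      "target": action})
    return pairs

# Toy teacher policy: issue a search for the first content word it sees.
teacher = lambda obs: f"search({obs.split()[0]})" if obs else "stop"
pairs = build_sft_pairs(["<div> quantum computing </div>"], teacher)
```

The asymmetry is the point: because the teacher never sees the noise, its actions stay clean, while the student, trained to reproduce them from raw pages, is forced to learn robust information extraction.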

📦 The "Open" in OpenSeeker: We release the full recipe to democratize deep research: ✅ 11.7k High-Difficulty Data (QA + Trajectories) ✅ OpenSeeker-v1-30B-SFT Model
👨‍💻 GitHub: https://github.com/rui-ye/OpenSeeker
🤗 Data: https://huggingface.co/datasets/OpenSeeker/OpenSeeker-v1-Data
🤗 Models: https://huggingface.co/OpenSeeker/OpenSeeker-v1-30B-SFT

thanks a lot for opensourcing

Models citing this paper 1

Datasets citing this paper 1


Collections including this paper 3