
daVinci-Dev: Agent-native Mid-training for Software Engineering

Overview

daVinci-Dev is a family of large language models trained for agentic software engineering.

This work presents a systematic study of agentic mid-training and introduces agent-native data to reduce the distribution mismatch between static pretraining corpora and the dynamic, feedback-rich environments faced by real code agents.

Our training uses two complementary trajectory types (details in the paper; a schematic example follows the list):

  • Contextually-native trajectories $\mathcal{D}^{\text{ctx}}_{\text{py}}$ (PR-derived): preserve the full information flow by bundling file discovery/context retrieval together with sequential edits. This provides broad coverage and diversity.
  • Environmentally-native trajectories $\mathcal{D}^{\text{env}}_{\text{pass}}$ (executable rollouts): collected from real executable repositories with genuine tool/test outputs, capturing authentic feedback loops.
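For intuition, a contextually-native trajectory can be thought of as one PR flattened into an ordered sequence of context-retrieval and edit steps, later rendered as a single training sample. The sketch below is a hypothetical illustration only; field names such as `repo`, `steps`, and `action` are placeholders and not the released data schema.

```python
# Hypothetical sketch of one contextually-native trajectory: a PR flattened into
# an ordered sequence of context-retrieval and edit steps. Field names are
# illustrative placeholders, not the released format.
trajectory = {
    "repo": "owner/project",
    "pr_title": "Fix off-by-one error in pagination",
    "steps": [
        {"action": "open_file", "path": "src/pager.py"},        # context retrieval
        {"action": "search", "query": "def paginate"},          # context retrieval
        {"action": "edit", "path": "src/pager.py",
         "patch": "- end = start + size\n+ end = start + size - 1"},  # sequential edit
        {"action": "edit", "path": "tests/test_pager.py",
         "patch": "+ def test_last_page(): ..."},               # follow-up edit
    ],
}

# During mid-training, such a record would be serialized into a single token
# sequence so the model sees retrieval and edits in their original order.
```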

Resources (open-source / open-release): see the Model Zoo, Datasets, and Pipeline sections below.

Key Results

SWE-Bench Verified

We reach state-of-the-art performance among open training recipes using agentic scaffolds at comparable model sizes, despite starting from the non-coder Qwen2.5 base family.

| Model | SWE-Bench Verified (Pass@1) | Notes |
|---|---|---|
| daVinci-Dev-72B | 58.5% | Agent-native MT + SFT |
| daVinci-Dev-32B | 56.1% | Agent-native MT + SFT |

Generalization gains: improvements are also observed on standard code benchmarks (e.g., HumanEval/EvalPlus) and scientific reasoning benchmarks (e.g., GPQA/SciBench) as reported in the paper.

Model Zoo

We will open-source model checkpoints on Hugging Face:

| Model | Description | Link |
|---|---|---|
| daVinci-Dev-72B | Final model (agent-native mid-training + env-native SFT) | https://huggingface.co/GAIR/daVinci-Dev-72B |
| daVinci-Dev-32B | Final model (agent-native mid-training + env-native SFT) | https://huggingface.co/GAIR/daVinci-Dev-32B |
| daVinci-Dev-72B-MT | MT checkpoint (after agent-native mid-training, before SFT) | https://huggingface.co/GAIR/daVinci-Dev-72B-MT |
| daVinci-Dev-32B-MT | MT checkpoint (after agent-native mid-training, before SFT) | https://huggingface.co/GAIR/daVinci-Dev-32B-MT |
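
If you want a checkpoint on local disk (e.g., for offline serving), the standard Hugging Face hub download works; the snippet below is a generic example, not a required step.

```python
# Optional: download a checkpoint for offline use via the standard Hugging Face hub API.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="GAIR/daVinci-Dev-32B-MT",  # any model from the table above
)
print(f"Checkpoint downloaded to: {local_dir}")
```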

Datasets

We will open-source our datasets through Hugging Face:

| Dataset | Description | Link |
|---|---|---|
| daVinci-Dev | Agent-native data used in our training recipe (as permitted) | https://huggingface.co/datasets/GAIR/daVinci-Dev |
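
Once released, the data should be loadable with the standard `datasets` API; the split name below is a placeholder until the dataset card is finalized.

```python
# Load the agent-native data with the standard `datasets` API once it is released.
# The split/config names here are placeholders; check the dataset card for the actual layout.
from datasets import load_dataset

ds = load_dataset("GAIR/daVinci-Dev", split="train")
print(ds)            # dataset summary
print(ds[0].keys())  # inspect the fields of one trajectory record
```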

Pipeline

The GitHub repository contains a high-performance pipeline that calls the GitHub API and constructs the structured PR representation used to build $\mathcal{D}^{\text{ctx}}_{\text{py}}$.

| Pipeline | Description | Link |
|---|---|---|
| daVinci-Dev Pipeline | A high-performance pipeline used to build $\mathcal{D}^{\text{ctx}}_{\text{py}}$ | GAIR-NLP/daVinci-Dev |
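
As a rough illustration of the kind of call the pipeline builds on, fetching a PR's metadata and changed files from the GitHub REST API looks like the sketch below. This is a simplified example, not the pipeline code; the real implementation in GAIR-NLP/daVinci-Dev adds pagination, rate-limit handling, license filtering, and the structured PR representation described in the paper.

```python
# Simplified sketch of pulling PR metadata + changed files from the GitHub REST API.
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def fetch_pr(owner: str, repo: str, number: int) -> dict:
    """Return a minimal structured view of one pull request."""
    pr = requests.get(f"{API}/repos/{owner}/{repo}/pulls/{number}",
                      headers=HEADERS, timeout=30).json()
    files = requests.get(f"{API}/repos/{owner}/{repo}/pulls/{number}/files",
                         headers=HEADERS, timeout=30).json()
    return {
        "title": pr["title"],
        "body": pr.get("body") or "",
        "files": [{"path": f["filename"], "patch": f.get("patch", "")} for f in files],
    }
```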

Quick Start

These checkpoints are intended to be used inside the SWE-Agent scaffold. They are also compatible with standard inference frameworks.

Start with HF Transformers

```bash
pip install transformers torch
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "GAIR/daVinci-Dev-72B"  # or any checkpoint in the model zoo

# Load the tokenizer and model (bfloat16, sharded across available GPUs).
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {
        "role": "system",
        "content": (
            "You are a software engineering agent. "
            "When solving tasks, reason about the repo structure, propose minimal edits, "
            "and describe how you would validate with tests."
        ),
    },
    {"role": "user", "content": "Bug: tests fail when X. Please fix it."},
]

# Render the chat template and generate a response.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=2048,
        temperature=0.2,
        do_sample=True,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Start with vLLM

```bash
pip install vllm
python -m vllm.entrypoints.openai.api_server \
  --model GAIR/daVinci-Dev-72B \
  --tensor-parallel-size 8 \
  --max-model-len 131072
```
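
Once the server is up, it exposes an OpenAI-compatible endpoint; a minimal client call (assuming the default port 8000 and a placeholder API key) looks like this:

```python
# Minimal client for the OpenAI-compatible endpoint served by vLLM (default port 8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="GAIR/daVinci-Dev-72B",
    messages=[
        {"role": "system", "content": "You are a software engineering agent."},
        {"role": "user", "content": "Bug: tests fail when X. Please fix it."},
    ],
    temperature=0.2,
    max_tokens=2048,
)
print(response.choices[0].message.content)
```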

Training

This section summarizes the methodology described in the paper.

Data

  • Contextually-native PR trajectories: 68.6B tokens (constructed from GitHub pull requests, preserving the coupling between context retrieval and edits).
  • Environmentally-native executable trajectories: 3.1B raw tokens (4.5B effective tokens), collected by running an agent in real executable environments with tool and test feedback. Trajectories include both test-passing and non-passing rollouts.

Recipe (high level)

  • Start from the Qwen2.5 base model family (32B / 72B).
  • Perform agent-native mid-training on PR-derived trajectories (and optionally mixed with executable trajectories).
  • Perform SFT on the test-passing subset of environmentally-native trajectories.

The -MT checkpoints correspond to the state after mid-training and before SFT. A schematic sketch of the two-stage flow is given below.
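
The pseudocode below is only an illustration of the two-stage recipe and of how the SFT subset could be selected from executable rollouts; names such as `build_sft_subset` and `tests_passed` are placeholders, not our released training code.

```python
# Illustrative sketch of the two-stage recipe: mid-train on the full agent-native mix,
# then SFT only on rollouts whose tests passed. Names are placeholders, not release code.

def build_sft_subset(rollouts):
    """Keep only environmentally-native trajectories whose final test run passed."""
    return [r for r in rollouts if r["tests_passed"]]

# Stage 1: agent-native mid-training on PR-derived (optionally mixed with executable) trajectories.
#   mid_trained = train(base="Qwen2.5-72B", data=ctx_native + env_native)   # -> *-MT checkpoint
# Stage 2: SFT on the test-passing subset of environmentally-native trajectories.
#   final = sft(mid_trained, data=build_sft_subset(env_native))             # -> final checkpoint
```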

Evaluation

We report performance on SWE-Bench Verified using SWE-Agent with the setup described in the paper (including temperature 0, 128k context, and a 100-step budget). Results are reported as Pass@1 (averaged across 4 runs).
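
For clarity on the metric, Pass@1 here is the per-run fraction of resolved instances, averaged over the 4 runs; the toy numbers below only illustrate the computation.

```python
# Pass@1 as reported: per-run resolve rate on SWE-Bench Verified, averaged over 4 runs.
# The boolean flags below are toy values, not actual results.
runs = [
    [True, False, True, True],   # run 1: resolved flag per instance
    [True, True,  False, True],  # run 2
    [True, False, True, False],  # run 3
    [True, True,  True, True],   # run 4
]

per_run = [sum(r) / len(r) for r in runs]
pass_at_1 = sum(per_run) / len(per_run)
print(f"Pass@1 = {pass_at_1:.1%}")
```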

License

This project is a mixed release:

  • Contextually-native PR-derived subset: only PRs from repositories detected as having a permissive license are included. Each repo’s license is provided in ./ctx-native/filtered_repos/part-0000.parquet.
  • Environmentally-native subset: derived from SWE-rebench, licensed under CC-BY-4.0.
  • daVinci-Dev models: released under the Qwen license. Users should verify the licensing status of any generated code before using it in production.
  • daVinci-Dev pipeline: released under the Apache-2.0 license.

Users are responsible for ensuring their downstream usage complies with the licenses of the underlying sources.

Citation

ArXiv link and the official citation block are coming soon (the manuscript is under review at the time of release).
