PEEK VLM Path/Mask Labels for BRIDGE_v2

This dataset contains the path and mask labels generated by the PEEK VLM (Policy-agnostic Extraction of Essential Keypoints) for the BRIDGE_v2 dataset. These labels are an integral part of the research presented in the paper PEEK: Guiding and Minimal Image Representations for Zero-Shot Generalization of Robot Manipulation Policies.

PEEK fine-tunes Vision-Language Models (VLMs) to predict a unified point-based intermediate representation for robot manipulation. This representation consists of:

  1. End-effector paths: specifying what actions to take.
  2. Task-relevant masks: indicating where to focus.

These annotations are overlaid directly onto robot observations, making the representation policy-agnostic and transferable across architectures. This dataset provides the automatically generated labels for BRIDGE_v2, so researchers can use them directly for policy training to improve zero-shot generalization.
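
The snippet below is a minimal sketch of what "overlaying" a path and mask onto an observation can look like, assuming path labels are (x, y) pixel waypoints and mask labels are binary images. The actual on-disk label format and the official overlay utilities are defined in the PEEK repository (see the peek_vlm folder), so treat the shapes and values here purely as illustration.

import numpy as np
import cv2

# Dummy stand-ins for a BRIDGE_v2 observation and its PEEK labels (illustrative only).
obs = np.zeros((256, 256, 3), dtype=np.uint8)          # H x W x 3 RGB frame
path = [(40, 200), (90, 150), (150, 120), (210, 60)]   # end-effector waypoints: "what actions to take"
mask = np.zeros((256, 256), dtype=np.uint8)            # task-relevant region: "where to focus"
cv2.circle(mask, (150, 120), 60, 255, thickness=-1)

# Dim pixels outside the task-relevant mask.
overlay = obs.copy()
overlay[mask == 0] = (0.3 * overlay[mask == 0]).astype(np.uint8)

# Draw the end-effector path as connected segments with waypoint dots.
for (x0, y0), (x1, y1) in zip(path[:-1], path[1:]):
    cv2.line(overlay, (x0, y0), (x1, y1), color=(0, 255, 0), thickness=2)
for (x, y) in path:
    cv2.circle(overlay, (x, y), radius=4, color=(0, 0, 255), thickness=-1)

cv2.imwrite("peek_overlay.png", overlay)

A policy can then be trained or evaluated on such overlaid frames instead of the raw observations, which is what makes the representation independent of the policy architecture.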

Paper

PEEK: Guiding and Minimal Image Representations for Zero-Shot Generalization of Robot Manipulation Policies

Project Page

https://peek-robot.github.io

Code / GitHub Repository

The main PEEK framework and associated code can be found in the GitHub repository: https://github.com/peek-robot/peek

Sample Usage

This dataset provides pre-computed PEEK VLM path and mask labels for the BRIDGE_v2 dataset. The labels are intended to be paired with the existing BRIDGE_v2 data to guide robot manipulation policies during training and inference, as described in the PEEK paper: overlaying them onto observations equips policies with minimal visual cues that improve zero-shot generalization. For detailed instructions on incorporating the labels into policy training, or for examples of VLM data labeling, please refer to the PEEK GitHub repository, particularly the peek_vlm folder.
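
As a minimal sketch, the label files can be fetched with the standard Hugging Face Hub snapshot download. The repository id below is a placeholder for this dataset's id on the Hub, and the loading and overlay utilities themselves come from the peek_vlm folder of the PEEK repository.

from huggingface_hub import snapshot_download

# Placeholder repo id: substitute the id shown on this dataset's Hub page.
local_dir = snapshot_download(
    repo_id="<this-dataset-repo-id>",
    repo_type="dataset",
)
print("PEEK VLM labels downloaded to:", local_dir)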

Citation

If you find this dataset useful for your research, please cite the original paper:

@inproceedings{zhang2025peek,
    title={PEEK: Guiding and Minimal Image Representations for Zero-Shot Generalization of Robot Manipulation Policies}, 
    author={Jesse Zhang and Marius Memmel and Kevin Kim and Dieter Fox and Jesse Thomason and Fabio Ramos and Erdem Bıyık and Abhishek Gupta and Anqi Li},
    booktitle={arXiv:2509.18282},
    year={2025},
}