arxiv:2603.22570

CanViT: Toward Active-Vision Foundation Models

Published on Mar 23 · Submitted by Yohaï-Eliel BERREBY on Mar 25

Abstract

CanViT is the first task- and policy-agnostic Active-Vision Foundation Model: it processes visual scenes efficiently through sequential glimpses, combining a retinotopic Vision Transformer backbone with a canvas-based working memory.

AI-generated summary

Active computer vision promises efficient, biologically plausible perception through sequential, localized glimpses, but lacks scalable general-purpose architectures and pretraining pipelines. As a result, Active-Vision Foundation Models (AVFMs) have remained unexplored. We introduce CanViT, the first task- and policy-agnostic AVFM. CanViT uses scene-relative RoPE to bind a retinotopic Vision Transformer backbone and a spatiotopic scene-wide latent workspace, the canvas. Efficient interaction with this high-capacity working memory is supported by Canvas Attention, a novel asymmetric cross-attention mechanism. We decouple thinking (backbone-level) and memory (canvas-level), eliminating canvas-side self-attention and fully-connected layers to achieve low-latency sequential inference and scalability to large scenes. We propose a label-free active vision pretraining scheme, policy-agnostic passive-to-active dense latent distillation: reconstructing scene-wide DINOv3 embeddings from sequences of low-resolution glimpses with randomized locations, zoom levels, and lengths. We pretrain CanViT-B from a random initialization on 13.2 million ImageNet-21k scenes -- an order of magnitude more than previous active models -- and 1 billion random glimpses, in 166 hours on a single H100. On ADE20K segmentation, a frozen CanViT-B achieves 38.5% mIoU in a single low-resolution glimpse, outperforming the best active model's 27.6% with 19.5x fewer inference FLOPs and no fine-tuning, as well as its FLOP- or input-matched DINOv3 teacher. Given additional glimpses, CanViT-B reaches 45.9% ADE20K mIoU. On ImageNet-1k classification, CanViT-B reaches 81.2% top-1 accuracy with frozen teacher probes. CanViT generalizes to longer rollouts, larger scenes, and new policies. Our work closes the wide gap between passive and active vision on semantic segmentation and demonstrates the potential of AVFMs as a new research axis.
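
To make the architecture concrete, here is a minimal PyTorch sketch of an asymmetric read/write cross-attention between retinotopic glimpse tokens and a spatiotopic canvas, in the spirit of the Canvas Attention described above. The module names, shapes, residual structure, and two-step read/write ordering are our assumptions, and scene-relative RoPE is omitted; this is an illustration, not the authors' implementation.

```python
# Minimal sketch (our assumptions, not the authors' code) of an asymmetric
# read/write cross-attention between glimpse tokens and a scene-wide canvas.
# Scene-relative RoPE is omitted for brevity.
import torch
import torch.nn as nn

class CanvasAttentionSketch(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 12):
        super().__init__()
        # Read path: glimpse tokens query the canvas for scene context.
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Write path: canvas slots query the glimpse tokens to absorb them.
        self.write = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_tokens = nn.LayerNorm(dim)
        self.norm_canvas = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor, canvas: torch.Tensor):
        # tokens: (B, N_glimpse, dim) retinotopic tokens of the current glimpse
        # canvas: (B, N_scene, dim) spatiotopic latent workspace
        c = self.norm_canvas(canvas)
        read_out, _ = self.read(self.norm_tokens(tokens), c, c)
        tokens = tokens + read_out                  # read: scene context -> glimpse
        t = self.norm_tokens(tokens)
        write_out, _ = self.write(c, t, t)
        canvas = canvas + write_out                 # write: glimpse -> canvas
        # No canvas-side self-attention or MLP, so the per-step cost stays
        # linear in canvas size even for large scenes.
        return tokens, canvas
```

Calling this with tokens of shape (1, 196, 768) and a canvas of shape (1, 1024, 768) exercises the asymmetry: the canvas can be far larger than any single glimpse.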

Community

Paper author · Paper submitter · edited 1 day ago

Active vision has great theoretical potential, but has struggled in practice.

CanViT aims to change that, with a novel ViT-based architecture designed from the ground up for active vision at scale, and a distillation paradigm that makes it straightforward to train in a task- and policy-agnostic manner.

Our work introduces the Active-Vision Foundation Model (AVFM) paradigm, built on the idea of decoupling active-vision pretraining from the final viewing policy, and makes that paradigm computationally and empirically viable.
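
As a rough illustration of that decoupling, here is a hedged sketch of the pretraining objective from the abstract: distilling dense DINOv3 scene embeddings into a student that only ever sees randomized low-resolution glimpses. The `student.init_canvas`/`step`/`readout` interface and the glimpse sampler are hypothetical stand-ins, and the randomization ranges are placeholders, not the paper's settings.

```python
# Hedged sketch of policy-agnostic passive-to-active dense latent
# distillation. `student`, `teacher`, and `sample_glimpse` are hypothetical
# stand-ins; the ranges below are placeholders, not the paper's values.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, scene, sample_glimpse):
    with torch.no_grad():
        target = teacher(scene)                  # dense scene-wide embeddings, e.g. (B, H*W, D)

    canvas = student.init_canvas(scene)          # empty spatiotopic workspace
    for _ in range(torch.randint(1, 9, ()).item()):      # randomized rollout length
        glimpse, location, zoom = sample_glimpse(scene)  # random location and zoom
        canvas = student.step(glimpse, location, zoom, canvas)

    pred = student.readout(canvas)               # predict dense embeddings from the canvas
    return F.mse_loss(pred, target)              # label-free reconstruction loss
```

Because the glimpse sequence is sampled at random, nothing in this objective commits the student to any particular viewing policy.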

Easy-to-use code and HuggingFace-compatible checkpoints are available.
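
For example (the repo id below is a placeholder, not the real checkpoint name), custom Hub architectures are typically loaded along these lines:

```python
# Hypothetical loading snippet: "example-org/canvit-b" is a placeholder
# repo id. trust_remote_code=True is the standard transformers mechanism
# for loading custom architectures from the Hub.
from transformers import AutoModel

model = AutoModel.from_pretrained("example-org/canvit-b", trust_remote_code=True)
```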

Extensions to video and embodied settings (robotics, motorized cameras...) are obvious next steps.

We are excited to share this work with the community and to see what people will build upon it!

The paper is also listed on alphaXiv.

Models citing this paper 1

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 1