arxiv:2603.19753

ReLi3D: Relightable Multi-view 3D Reconstruction with Disentangled Illumination

Published on Mar 20 · Submitted by Jan-Niklas Dihlmann on Mar 23

Abstract

AI-generated summary: ReLi3D presents a unified end-to-end pipeline for simultaneous 3D geometry, material, and illumination reconstruction from multi-view images using transformer cross-conditioning and a two-path prediction strategy.

Reconstructing 3D assets from images has long required separate pipelines for geometry reconstruction, material estimation, and illumination recovery, each with distinct limitations and computational overhead. We present ReLi3D, the first unified end-to-end pipeline that simultaneously reconstructs complete 3D geometry, spatially-varying physically-based materials, and environment illumination from sparse multi-view images in under one second. Our key insight is that multi-view constraints can dramatically improve material and illumination disentanglement, a problem that remains fundamentally ill-posed for single-image methods. Key to our approach is the fusion of the multi-view inputs via a transformer cross-conditioning architecture, followed by a novel unified two-path prediction strategy. The first path predicts the object's structure and appearance, while the second path predicts the environment illumination from the image background or object reflections. This, combined with a differentiable Monte Carlo renderer that uses multiple importance sampling, forms a training pipeline well suited to disentangling illumination. In addition, our mixed-domain training protocol, which combines synthetic PBR datasets with real-world RGB captures, yields generalizable results in geometry, material accuracy, and illumination quality. By unifying previously separate reconstruction tasks into a single feed-forward pass, we enable near-instantaneous generation of complete, relightable 3D assets. Project Page: https://reli3d.jdihlmann.com/
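
The renderer mentioned above combines BRDF sampling and light sampling with multiple importance sampling. As a rough, hedged illustration of that standard MIS combination (not the paper's actual renderer), here is a minimal PyTorch sketch; the function names and the dict fields (`f`, `pdf_brdf`, `pdf_light`) are hypothetical placeholders.

```python
# Minimal sketch of a one-sample-per-strategy MIS estimator for direct lighting,
# written with PyTorch tensors so the result stays differentiable with respect
# to material and illumination parameters. Field names are hypothetical.
import torch

EPS = 1e-8

def balance_heuristic(pdf_a: torch.Tensor, pdf_b: torch.Tensor) -> torch.Tensor:
    """MIS weight for a sample drawn from strategy a, competing with strategy b."""
    return pdf_a / (pdf_a + pdf_b).clamp_min(EPS)

def mis_direct_lighting(brdf_sample: dict, light_sample: dict) -> torch.Tensor:
    """Combine one BRDF-sampled and one light-sampled direction per shading point.

    Each sample dict holds:
      f         -- integrand value BRDF(x, wi) * Li(wi) * cos(theta_i)
      pdf_brdf  -- BRDF-sampling pdf evaluated at that direction
      pdf_light -- light-sampling pdf evaluated at that direction
    """
    w_brdf = balance_heuristic(brdf_sample["pdf_brdf"], brdf_sample["pdf_light"])
    w_light = balance_heuristic(light_sample["pdf_light"], light_sample["pdf_brdf"])
    return (w_brdf * brdf_sample["f"] / brdf_sample["pdf_brdf"].clamp_min(EPS)
            + w_light * light_sample["f"] / light_sample["pdf_light"].clamp_min(EPS))
```

The balance heuristic down-weights each sample wherever the competing strategy has higher density, which keeps the estimate (and its gradients) low-variance when either the BRDF or the environment light dominates.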

Community

Paper submitter

Relightable 3D assets with physically based, spatially varying materials are still hard to obtain from images. Most approaches either separate geometry/materials/lighting into different stages, or they struggle with the fundamental ambiguity of single-view inverse rendering.

We introduce ReLi3D (ICLR 2026): a unified feed-forward pipeline that reconstructs complete 3D geometry, spatially varying PBR materials, and coherent HDR environment illumination from sparse posed RGB views in under one second.

Our key insight is that multi-view constraints are essential for material-lighting disentanglement. When multiple views observe the same surface under shared illumination, cross-view consistency sharply reduces ambiguity and stabilizes both material and lighting recovery.

ReLi3D uses a shared cross-conditioning transformer to fuse an arbitrary number of masked views into a unified triplane representation. Two coupled prediction paths then recover (1) object density and svBRDF parameters and (2) environment illumination via a RENI++ latent code. A differentiable physically based renderer with Monte Carlo integration and multiple importance sampling couples both paths during training, together with mixed-domain supervision from synthetic PBR assets, synthetic RGB-only renders, and real UCO3D captures.
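
To make the two-path design above concrete, the following is a minimal PyTorch sketch of a shared cross-view transformer trunk feeding an object path and an illumination path. Every module, dimension, and the size of the illumination latent are illustrative assumptions, not the actual ReLi3D architecture or the real RENI++ code size.

```python
# Minimal sketch of a two-path predictor: a shared cross-view transformer trunk,
# one head for object (triplane / svBRDF) features, one head for an environment
# illumination latent. All sizes and module choices are illustrative guesses.
import torch
import torch.nn as nn

class TwoPathReconstructor(nn.Module):
    def __init__(self, token_dim=768, feat_dim=512, triplane_ch=32,
                 env_latent_dim=64, n_layers=6):
        super().__init__()
        self.view_encoder = nn.Linear(token_dim, feat_dim)  # per-view image tokens
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.cross_view_fusion = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Path 1: object structure and appearance (density + svBRDF triplane features)
        self.triplane_head = nn.Linear(feat_dim, 3 * triplane_ch)
        # Path 2: environment illumination as a compact latent (RENI++-style code)
        self.env_head = nn.Sequential(nn.LayerNorm(feat_dim),
                                      nn.Linear(feat_dim, env_latent_dim))

    def forward(self, view_tokens: torch.Tensor):
        # view_tokens: (batch, n_views * tokens_per_view, token_dim); the sequence
        # length is free, so an arbitrary number of masked views can be fused.
        fused = self.cross_view_fusion(self.view_encoder(view_tokens))
        triplane_feats = self.triplane_head(fused)      # later reshaped into a triplane
        env_latent = self.env_head(fused.mean(dim=1))   # pooled -> illumination code
        return triplane_feats, env_latent

# Example: 4 posed views, 196 tokens each
model = TwoPathReconstructor()
tokens = torch.randn(1, 4 * 196, 768)
planes, env = model(tokens)   # planes: (1, 784, 96), env: (1, 64)
```

The point of the shared trunk is that both heads see the same cross-view-consistent features, so the lighting latent and the svBRDF/geometry features can be supervised jointly through the differentiable renderer during training.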
