---
license: apache-2.0
---

* Paper: [https://arxiv.org/abs/2506.03140](https://arxiv.org/abs/2506.03140)
* Project Page: [https://camclonemaster.github.io/](https://camclonemaster.github.io/)
* Dataset: [https://huggingface.co/datasets/KwaiVGI/CameraClone-Dataset](https://huggingface.co/datasets/KwaiVGI/CameraClone-Dataset)
* Training & Inference Code: [https://github.com/KwaiVGI/CamCloneMaster](https://github.com/KwaiVGI/CamCloneMaster)

# Camera Clone Dataset

## 1. Dataset Introduction

**TL;DR:** The Camera Clone Dataset, introduced in [CamCloneMaster](https://arxiv.org/pdf/2506.03140), is a large-scale synthetic dataset designed for camera clone learning, encompassing diverse scenes, subjects, and camera movements. It consists of triple video sets: a camera motion reference video \\(V_{cam}\\), a content reference video \\(V_{cont}\\), and a target video \\(V\\), which recaptures the scene in \\(V_{cont}\\) with the same camera movement as \\(V_{cam}\\).

<div align="center">
  <video controls autoplay style="width: 70%;" src="https://huggingface.co/datasets/KwaiVGI/CameraClone-Dataset/resolve/main/dataset.mp4"></video>
</div>

The Camera Clone Dataset is rendered with Unreal Engine 5. We collect 40 3D scenes as backgrounds and 66 characters as main subjects, placing the characters into the scenes; each character is paired with one randomly chosen animation, such as running or dancing.

To construct the triple sets, the camera trajectories must satisfy two key requirements: 1) *Simultaneous Multi-View Capture*: multiple cameras must film the same scene concurrently, each following a distinct trajectory. 2) *Paired Trajectories*: the same camera trajectories must be repeated across different locations to produce paired shots. Our implementation addresses both needs: within any single location, 10 synchronized cameras operate simultaneously, each following one of ten unique, pre-defined trajectories to capture diverse views. To create paired trajectories, we group the 3D locations into sets of four and replicate the same ten camera trajectories across all locations within each set. The trajectories themselves are generated automatically from designed rules covering several types, including basic movements, circular arcs, and more complex camera paths.

In total, the Camera Clone Dataset comprises 391K visually authentic videos shot from 39.1K different locations in 40 scenes with 97.75K diverse camera trajectories (39.1K locations × 10 cameras = 391K videos; since each group of four locations shares its ten trajectories, 39.1K / 4 × 10 = 97.75K unique trajectories), and 1,155K triple video sets are constructed from these videos. Each video has a resolution of 576 x 1,008 and 77 frames.

**3D Environment:** We collect 40 high-quality 3D environment assets from [Fab](https://www.fab.com). To minimize the domain gap between rendered data and real-world videos, we primarily select visually realistic 3D scenes, while choosing a few stylized or surreal 3D scenes as a supplement. To ensure data diversity, the selected scenes cover a variety of indoor and outdoor settings, such as city streets, shopping malls, cafes, office rooms, and the countryside.

**Character:** We collect 66 different human 3D models as characters from [Fab](https://www.fab.com) and [Mixamo](https://www.mixamo.com).

**Animation:** We collect 93 different animations from [Fab](https://www.fab.com) and [Mixamo](https://www.mixamo.com), including common actions such as waving, dancing, and cheering. We use these animations to drive the collected characters, creating diverse subject motion through various character-animation combinations.

**Camera Trajectories:** To prevent clipping, trajectories are constrained by a maximum movement distance \\(d_{max}\\), determined by the initial shot position in the scene. The trajectory types include the following (a generation sketch follows the list):
  * **Basic**: Simple pans/tilts (5°-75°), rolls (20°-340°), and translations along the cardinal axes.
  * **Arc**: Orbital paths combining a primary rotation (10°-75°) with smaller, secondary rotations (5°-15°).
  * **Random**: Smooth splines interpolated between 2-4 random keypoints. Half of these splines also incorporate multi-axis rotations.
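
Below is a minimal, illustrative sketch of this kind of rule-based trajectory generation. The angle and keypoint ranges come from the list above and the frame count matches the 77-frame clips; the spline choice, clamping strategy, and function names are assumptions, not the released generation code.

```python
# Illustrative sketch of rule-based trajectory generation (not the released code).
import numpy as np
from scipy.interpolate import CubicSpline

N_FRAMES = 77  # every clip in the dataset has 77 frames

def basic_pan(max_angle_deg=45.0, n_frames=N_FRAMES):
    """Per-frame yaw angles for a simple pan, clamped to the 5°-75° range."""
    angle = float(np.clip(max_angle_deg, 5.0, 75.0))
    return np.linspace(0.0, angle, n_frames)

def random_spline(d_max, n_keypoints=4, n_frames=N_FRAMES, seed=0):
    """Smooth (n_frames, 3) camera positions interpolated through random
    keypoints, with displacement clamped to d_max to prevent clipping."""
    rng = np.random.default_rng(seed)
    keys = rng.uniform(-1.0, 1.0, size=(n_keypoints, 3))
    keys[0] = 0.0                                   # start at the initial shot position
    keys *= d_max / max(np.abs(keys).max(), 1e-8)   # respect the maximum movement distance
    spline = CubicSpline(np.linspace(0.0, 1.0, n_keypoints), keys, axis=0)
    return spline(np.linspace(0.0, 1.0, n_frames))

yaw = basic_pan(30.0)                 # (77,) pan angles in degrees
positions = random_spline(d_max=2.0)  # (77, 3) smooth camera path
```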

## 2. Statistics and Configurations

Dataset Statistics:
| Number of Dynamic Scenes (Locations) | Cameras per Scene | Total Videos | Number of Triple Sets |
|:------------------------------------:|:-----------------:|:------------:|:---------------------:|
| 39,100                               | 10                | 391,000      | 1,154,819             |

Video Configurations:

| Resolution  | Frame Number | FPS                      |
|:-----------:|:------------:|:------------------------:|
| 1344x768   | 77           | 15                       |
| 1008x576   | 77           | 15                       |

Note: You can center-crop the videos to match the aspect ratio expected by your video generation model, such as 16:9, 9:16, 4:3, or 3:4.
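
For example, a minimal center-crop helper (hypothetical; not part of the dataset tooling) could look like this:

```python
# Hypothetical helper: center-crop a decoded frame (H, W, C) to a target aspect ratio.
import numpy as np

def center_crop_to_aspect(frame, target_w, target_h):
    h, w = frame.shape[:2]
    target_ratio = target_w / target_h
    if w / h > target_ratio:                 # frame too wide: trim the sides
        new_w = int(round(h * target_ratio))
        x0 = (w - new_w) // 2
        return frame[:, x0:x0 + new_w]
    else:                                    # frame too tall: trim top and bottom
        new_h = int(round(w / target_ratio))
        y0 = (h - new_h) // 2
        return frame[y0:y0 + new_h, :]

frame = np.zeros((576, 1008, 3), dtype=np.uint8)   # one 1008x576 frame
cropped = center_crop_to_aspect(frame, 16, 9)      # -> (567, 1008, 3)
```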


## 3. File Structure
```
Camera-Clone-Dataset
└── data
    ├── 0316
    │   └── traj_1_01
    │       ├── scene1_01.mp4
    │       ├── scene550_01.mp4
    │       ├── scene935_01.mp4
    │       └── scene1224_01.mp4
    ├── 0317
    ├── 0401
    ├── 0402
    ├── 0404
    ├── 0407
    └── 0410
```
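
Assuming the layout above, the rendered clips can be enumerated with a simple glob (the root path below is illustrative and depends on where you extract the archive):

```python
# List rendered clips, assuming the data/<batch>/<trajectory>/<clip>.mp4 layout above.
from pathlib import Path

root = Path("Camera-Clone-Dataset/data")   # adjust to your extraction path
clips = sorted(root.glob("*/*/*.mp4"))
print(f"{len(clips)} clips found")
for clip in clips[:3]:
    batch, traj, name = clip.parts[-3:]
    print(batch, traj, name)               # e.g. 0316 traj_1_01 scene1_01.mp4
```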

## 4. Use Dataset
```bash
# Install Git LFS and clone the dataset repository
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/datasets/KwaiVGI/CameraClone-Dataset
cd CameraClone-Dataset

# Reassemble the split archive parts and extract the videos
cat CamCloneDataset.part* > CamCloneDataset.tar.gz
tar --zstd -xvf CamCloneDataset.tar.gz
```

The "Triple Sets" information is located in the [CamCloneDataset.csv](https://huggingface.co/datasets/KwaiVGI/CameraClone-Dataset/blob/main/CamCloneDataset.csv) file, which contains the following columns:
* video_path: The path to the target video.
* caption: A description of the target video.
* ref_video_path: The path to the camera reference video.
* content_video_path: The path to the content reference video.
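
A minimal sketch for loading the triple sets with pandas (only the CSV filename and column names come from the dataset; the rest is illustrative):

```python
# Load the triple-set index and iterate over (target, camera ref, content ref) paths.
import pandas as pd

df = pd.read_csv("CamCloneDataset.csv")

for row in df.itertuples(index=False):
    target_video = row.video_path                  # target video V
    camera_ref_video = row.ref_video_path          # camera motion reference V_cam
    content_ref_video = row.content_video_path     # content reference V_cont
    caption = row.caption                          # description of the target video
    # ... decode the three clips and feed them to your training pipeline
    break
```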

## Citation
If you find this dataset useful, please cite our [paper](https://arxiv.org/abs/2506.03140).
```bibtex
@misc{luo2025camclonemaster,
      title={CamCloneMaster: Enabling Reference-based Camera Control for Video Generation}, 
      author={Yawen Luo and Jianhong Bai and Xiaoyu Shi and Menghan Xia and Xintao Wang and Pengfei Wan and Di Zhang and Kun Gai and Tianfan Xue},
      year={2025},
      eprint={2506.03140},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.03140}, 
}
```

## Contact

[Yawen Luo](https://luo0207.github.io/yawenluo/)

luoyw0207@gmail.com