KUOCHENG committed on
Commit 9fef882 · verified · 1 Parent(s): 4c06e08

Update README.md

Files changed (1):
  1. README.md +236 -22
README.md CHANGED
@@ -11,6 +11,7 @@ tags:
 - astronomy
 - super-resolution
 - computer-vision
 configs:
 - config_name: default
   data_files:
@@ -20,35 +21,248 @@ configs:
     path: sampled_data/x2/validation_metadata.jsonl
 ---
- # STAR Dataset
- The **STAR (Super-Resolution for Astronomical Star Fields)** dataset is a large-scale benchmark for developing field-level super-resolution models in astronomy. It contains **54,738 flux-consistent image pairs** derived from Hubble Space Telescope (HST) high-resolution observations and physically faithful low-resolution counterparts. The dataset addresses three key challenges in astronomical super-resolution:
-
- - **Flux Inconsistency**: Ensures consistent flux using a flux-preserving data generation pipeline.
- - **Object-Crop Configuration**: Strategically samples patches across diverse celestial regions.
- - **Data Diversity**: Covers dense star clusters, sparse galactic fields, and regions with varying background noise.
-
- The dataset includes x2 and x4 scaling pairs in `.npy` format, suitable for training and evaluating super-resolution models.
-
- ## Structure
- - **Full Data**:
-   - `data/x2/x2.tar.gz` (33GB): Full x2 dataset.
-   - `data/x4/x4.tar.gz` (29GB): Full x4 dataset.
-   - Unzip to access: `train_hr_patch/`, `train_lr_patch/`, `eval_hr_patch/`, `eval_lr_patch/` (.npy files), and `dataload_filename/` (txt with pairs).
-
- - **Sample Data** (for testing and Croissant, x2 only):
-   - `sampled_data/x2/`: Sample .npy pairs (500 train pairs, 100 eval pairs).
-     - `train_hr_patch/`, `train_lr_patch/` (500 files each)
-     - `eval_hr_patch/`, `eval_lr_patch/` (100 files each)
-   - Croissant metadata: `sampled_data/x2/croissant.json`
-
- ## Custom Loading with Split
- To load with split="train":
  ```python
  from datasets import load_dataset
- data_files = {
-     "train": ["sampled/x2/train_hr_patch/*.npy", "sampled/x2/train_lr_patch/*.npy"],
-     "validation": ["sampled/x2/eval_hr_patch/*.npy", "sampled/x2/eval_lr_patch/*.npy"]
  }
- dataset = load_dataset("KUOCHENG/STAR", data_files=data_files, split="train")

 - astronomy
 - super-resolution
 - computer-vision
+ - hubble-space-telescope
 configs:
 - config_name: default
   data_files:
     path: sampled_data/x2/validation_metadata.jsonl
 ---

+ # STAR Dataset (Super-Resolution for Astronomical Star Fields)
+
+ The **STAR** dataset is a large-scale benchmark for developing field-level super-resolution models in astronomy. It contains **54,738 flux-consistent image pairs** derived from Hubble Space Telescope (HST) high-resolution observations and physically faithful low-resolution counterparts.
+
+ ## 🌟 Key Features
+
+ - **Flux Consistency**: Ensures consistent flux using a flux-preserving data generation pipeline
+ - **Object-Crop Configuration**: Strategically samples patches across diverse celestial regions
+ - **Data Diversity**: Covers dense star clusters, sparse galactic fields, and regions with varying background noise
+ - **Multiple Scales**: Supports both 2x and 4x super-resolution tasks
+
+ ## 📊 Dataset Structure
+
+ ```
+ KUOCHENG/STAR/
+ ├── sampled_data/x2/                # Sample dataset (600 pairs)
+ │   ├── train_hr_patch/             # 500 HR training patches (.npy files)
+ │   ├── train_lr_patch/             # 500 LR training patches (.npy files)
+ │   ├── eval_hr_patch/              # 100 HR validation patches (.npy files)
+ │   ├── eval_lr_patch/              # 100 LR validation patches (.npy files)
+ │   ├── train_metadata.jsonl        # Training pairs metadata
+ │   └── validation_metadata.jsonl   # Validation pairs metadata
+ └── data/
+     ├── x2/x2.tar.gz                # Full x2 dataset (33GB)
+     └── x4/x4.tar.gz                # Full x4 dataset (29GB)
+ ```
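+
+ The `*_metadata.jsonl` files can also be inspected directly. A minimal sketch using pandas (assuming the file has been downloaded locally):
+
+ ```python
+ import pandas as pd
+
+ # Each line of the .jsonl file describes one HR/LR patch pair
+ meta = pd.read_json("sampled_data/x2/train_metadata.jsonl", lines=True)
+ print(meta.head())
+ ```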
+
+ ## 🚀 Quick Start
+
+ ### Loading the Dataset
+
+ ```python
+ from datasets import load_dataset
+ import numpy as np
+
+ # Load the metadata
+ dataset = load_dataset("KUOCHENG/STAR")
+
+ # Access a sample
+ sample = dataset['train'][0]
+ hr_path = sample['hr_path']  # Path to the HR .npy file
+ lr_path = sample['lr_path']  # Path to the LR .npy file
+
+ # Load the actual data
+ hr_data = np.load(hr_path, allow_pickle=True).item()
+ lr_data = np.load(lr_path, allow_pickle=True).item()
+ ```
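+
+ The loaded objects are plain Python dictionaries of NumPy arrays. A quick inspection sketch (the key names are documented in the next section):
+
+ ```python
+ # Print each field of one HR sample and its array shape, if any
+ for key, value in hr_data.items():
+     print(key, getattr(value, 'shape', type(value)))
+ ```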
+
+ ### Understanding the Data Format
+
+ Each `.npy` file contains a dictionary with the following structure:
+
+ #### High-Resolution (HR) Data
+ - **Shape**: `(256, 256)` for all HR patches
+ - **Access Keys**:
+   ```python
+   hr_data['image']     # The actual grayscale astronomical image
+   hr_data['mask']      # Binary mask (True = valid/accessible pixels)
+   hr_data['attn_map']  # Attention map from star finder (detected astronomical sources)
+   hr_data['coord']     # Coordinate information (if available)
+   ```
+
+ #### Low-Resolution (LR) Data
+ - **Shape**: Depends on the super-resolution scale
+   - For x2: `(128, 128)`
+   - For x4: `(64, 64)`
+ - **Access Keys**: Same as HR data
+   ```python
+   lr_data['image']     # Downsampled grayscale image
+   lr_data['mask']      # Downsampled mask
+   lr_data['attn_map']  # Downsampled attention map
+   lr_data['coord']     # Coordinate information
+   ```
+
+ ### Data Fields Explanation
+
+ | Field | Description | Type | Usage |
+ |-------|-------------|------|-------|
+ | `image` | Raw astronomical observation data | `np.ndarray` (float32) | Main input for super-resolution |
+ | `mask` | Valid pixel indicator | `np.ndarray` (bool) | Identifies accessible regions (True = valid) |
+ | `attn_map` | Star finder output | `np.ndarray` (float32) | Highlights detected astronomical sources (stars, galaxies) |
+ | `coord` | Spatial coordinates | `np.ndarray` | Position information for patch alignment |
+
+ ## 💻 Usage Examples
+
+ ### Basic Training Loop
+
  ```python
+ import numpy as np
  from datasets import load_dataset
+ import torch
+ from torch.utils.data import DataLoader
+
+ # Load dataset
+ dataset = load_dataset("KUOCHENG/STAR")
+
+ class STARDataset(torch.utils.data.Dataset):
+     def __init__(self, hf_dataset):
+         self.dataset = hf_dataset
+
+     def __len__(self):
+         return len(self.dataset)
+
+     def __getitem__(self, idx):
+         sample = self.dataset[idx]
+
+         # Load .npy files
+         hr_data = np.load(sample['hr_path'], allow_pickle=True).item()
+         lr_data = np.load(sample['lr_path'], allow_pickle=True).item()
+
+         # Extract images
+         hr_image = hr_data['image'].astype(np.float32)
+         lr_image = lr_data['image'].astype(np.float32)
+
+         # Extract masks for loss computation
+         hr_mask = hr_data['mask'].astype(np.float32)
+         lr_mask = lr_data['mask'].astype(np.float32)
+
+         # Convert to tensors
+         return {
+             'lr_image': torch.from_numpy(lr_image).unsqueeze(0),  # Add channel dim
+             'hr_image': torch.from_numpy(hr_image).unsqueeze(0),
+             'hr_mask': torch.from_numpy(hr_mask).unsqueeze(0),
+             'lr_mask': torch.from_numpy(lr_mask).unsqueeze(0),
+         }
+
+ # Create PyTorch dataset and dataloader
+ train_dataset = STARDataset(dataset['train'])
+ train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
+
+ # Training loop example
+ for batch in train_loader:
+     lr_images = batch['lr_image']  # [B, 1, 128, 128] for x2
+     hr_images = batch['hr_image']  # [B, 1, 256, 256]
+     masks = batch['hr_mask']       # [B, 1, 256, 256]
+
+     # Your training code here
+     # pred = model(lr_images)
+     # loss = criterion(pred * masks, hr_images * masks)  # Apply mask to focus on valid regions
+ ```
+
+ ### Visualization
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+
+ def visualize_sample(hr_path, lr_path):
+     # Load data
+     hr_data = np.load(hr_path, allow_pickle=True).item()
+     lr_data = np.load(lr_path, allow_pickle=True).item()
+
+     fig, axes = plt.subplots(2, 3, figsize=(15, 10))
+
+     # HR visualizations
+     axes[0, 0].imshow(hr_data['image'], cmap='gray')
+     axes[0, 0].set_title('HR Image (256x256)')
+
+     axes[0, 1].imshow(hr_data['mask'], cmap='binary')
+     axes[0, 1].set_title('HR Mask (Valid Regions)')
+
+     axes[0, 2].imshow(hr_data['attn_map'], cmap='hot')
+     axes[0, 2].set_title('HR Attention Map (Detected Sources)')
+
+     # LR visualizations
+     axes[1, 0].imshow(lr_data['image'], cmap='gray')
+     axes[1, 0].set_title(f'LR Image ({lr_data["image"].shape[0]}x{lr_data["image"].shape[1]})')
+
+     axes[1, 1].imshow(lr_data['mask'], cmap='binary')
+     axes[1, 1].set_title('LR Mask')
+
+     axes[1, 2].imshow(lr_data['attn_map'], cmap='hot')
+     axes[1, 2].set_title('LR Attention Map')
+
+     plt.tight_layout()
+     plt.show()
+
+ # Visualize a sample
+ sample = dataset['train'][0]
+ visualize_sample(sample['hr_path'], sample['lr_path'])
+ ```
+
+ ## 📁 File Naming Convention
+
+ - **HR files**: `*_hr_hr_patch_*.npy`
+ - **LR files**: `*_hr_lr_patch_*.npy`
+
+ Files are paired by replacing `_hr_hr_patch_` with `_hr_lr_patch_` in the filename, as in the sketch below.
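+
+ A minimal sketch of matching pairs on disk this way (the directory paths are illustrative):
+
+ ```python
+ from pathlib import Path
+
+ hr_dir = Path('data/x2/train_hr_patch')  # illustrative local paths
+ lr_dir = Path('data/x2/train_lr_patch')
+
+ pairs = []
+ for hr_file in sorted(hr_dir.glob('*.npy')):
+     # Derive the LR filename from its HR counterpart
+     lr_file = lr_dir / hr_file.name.replace('_hr_hr_patch_', '_hr_lr_patch_')
+     if lr_file.exists():
+         pairs.append((hr_file, lr_file))
+
+ print(f'Matched {len(pairs)} HR/LR pairs')
+ ```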
+
+ ## 🔄 Full Dataset Access
+
+ For the complete dataset (54,738 pairs), download the compressed files:
+
+ ```python
+ # Manual download and extraction
+ import tarfile
+
+ # Extract x2 dataset
+ with tarfile.open('data/x2/x2.tar.gz', 'r:gz') as tar:
+     tar.extractall('data/x2/')
+
+ # The extracted structure will be:
+ # data/x2/
+ # ├── train_hr_patch/           # ~45,000 HR patches
+ # ├── train_lr_patch/           # ~45,000 LR patches
+ # ├── eval_hr_patch/            # ~9,000 HR patches
+ # ├── eval_lr_patch/            # ~9,000 LR patches
+ # ├── dataload_filename/
+ # │   ├── train_dataloader.txt  # Training pairs list
+ # │   └── eval_dataloader.txt   # Evaluation pairs list
+ # └── psf_hr/, psf_lr/          # Original unpatched data
+ ```
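+
+ To fetch an archive programmatically first, one option is `hf_hub_download` from the `huggingface_hub` library (a sketch; the returned path points into the local Hub cache):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Download the x2 archive from this dataset repository
+ archive_path = hf_hub_download(
+     repo_id='KUOCHENG/STAR',
+     filename='data/x2/x2.tar.gz',
+     repo_type='dataset',
+ )
+ print(archive_path)  # Local path to the downloaded .tar.gz
+ ```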
+
+ ## 🎯 Model Evaluation Metrics
+
+ When evaluating super-resolution models on STAR, consider the following (a sketch of metrics 1 and 3 follows the list):
+
+ 1. **Masked PSNR/SSIM**: Only compute metrics on valid pixels (where `mask=True`)
+ 2. **Source Detection F1**: Evaluate whether astronomical sources are preserved
+ 3. **Flux Preservation**: Check whether total flux is maintained (important for astronomy; see the paper)
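+
+ A minimal sketch of metrics 1 and 3, assuming `pred` and `target` are float arrays and `mask` is the boolean validity mask; estimating the peak value from the ground-truth dynamic range is one possible convention:
+
+ ```python
+ import numpy as np
+
+ def masked_psnr(pred, target, mask):
+     """PSNR computed only over valid pixels (mask == True)."""
+     valid = mask.astype(bool)
+     mse = np.mean((pred[valid] - target[valid]) ** 2)
+     if mse == 0:
+         return float('inf')
+     peak = target[valid].max() - target[valid].min()  # estimated dynamic range
+     return 10 * np.log10(peak ** 2 / mse)
+
+ def flux_ratio(pred, target, mask):
+     """Ratio of predicted to ground-truth total flux (ideal: 1.0)."""
+     valid = mask.astype(bool)
+     return pred[valid].sum() / target[valid].sum()
+ ```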
+
+ ## 📝 Citation
+
+ If you use the STAR dataset in your research, please cite:
+
+ ```bibtex
+ @article{wu2025star,
+     title={STAR: A Benchmark for Astronomical Star Fields Super-Resolution},
+     author={Wu, Kuo-Cheng and Zhuang, Guohang and Huang, Jinyang and Zhang, Xiang and Ouyang, Wanli and Lu, Yan},
+     journal={arXiv preprint arXiv:2507.16385},
+     year={2025},
+     url={https://arxiv.org/abs/2507.16385}
  }
+ ```
+
+ ## 📄 License
+
+ This dataset is released under the MIT License.
+
+ ## 🤝 Contact
+
+ For questions or issues, please open a discussion on the [dataset repository](https://huggingface.co/datasets/KUOCHENG/STAR/discussions). You can also visit the project's [GitHub repository](https://github.com/GuoCheng12/STAR).