|
|
--- |
|
|
license: cc-by-4.0 |
|
|
task_categories: |
|
|
- feature-extraction |
|
|
tags: |
|
|
- code |
|
|
- remote_sensing |
|
|
- weakly_supervised_semantic_segmentation |
|
|
size_categories: |
|
|
- 100K<n<1M |
|
|
--- |
|
|
|
|
|
# Dataset for Weakly Supervised Semantic Segmentation |
|
|
|
|
|
Based on the ESA WorldCover 2020 v100 dataset: |
|
|
|
|
|
> Zanaga, D., Van De Kerchove, R., De Keersmaecker, W., Souverijns, N., Brockmann, C., Quast, R., Wevers, J., Grosu, A., Paccini, A., Vergnaud, S., Cartus, O., Santoro, M., Fritz, S., Georgieva, I., Lesiv, M., Carter, S., Herold, M., Li, Linlin, Tsendbazar, N.E., Ramoino, F., Arino, O., 2021. ESA WorldCover 10 m 2020 v100. https://doi.org/10.5281/zenodo.5571936 |
|
|
|
|
|
Homepage: https://esa-worldcover.org/en |
|
|
|
|
|
### Dataset structure |
|
|
~500,000 (image, label, class_proportions) triplets, where

- image -> remote sensing composite with bands B4, B3, B2, B8, B11, B12, S1VV, S1VH at 10 m resolution, size 128x128 px

- label -> WorldCover 2020 v100 semantic segmentation map with 11 classes

- class_proportions -> the fraction of pixels belonging to each class (sums to one)
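To illustrate how the class proportions relate to a label map, here is a hedged sketch; the helper function and the 0..10 class indexing are illustrative assumptions, not the dataset's actual code:

```python
import numpy as np

NUM_CLASSES = 11  # WorldCover 2020 v100 has 11 classes (indexed 0..10 here for illustration)

def class_proportions(label_map: np.ndarray, num_classes: int = NUM_CLASSES) -> np.ndarray:
    """Return the fraction of pixels per class; the entries sum to one."""
    counts = np.bincount(label_map.ravel(), minlength=num_classes)
    return counts / label_map.size

# Toy 128x128 label map: top half class 3, bottom half class 0
label = np.zeros((128, 128), dtype=np.int64)
label[:64] = 3
props = class_proportions(label)  # props[0] == props[3] == 0.5
```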
|
|
|
|
|
The dataset is split into three subsets for training, validation, and testing:

- train_split: 70%

- val_split: 10%

- test_split: 20%
|
|
|
|
|
An additional subtable in the LMDB stores the per-band means and standard deviations for each split.
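A common use of these statistics is per-band standardization. The sketch below assumes the means/stds come back as per-band vectors matching the 8-band, 128x128 images described above; the exact layout returned by the reader is an assumption:

```python
import numpy as np

def normalize(image: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Standardize a (bands, H, W) image with per-band mean/std vectors."""
    return (image - mean[:, None, None]) / std[:, None, None]

# Stand-in image with the dataset's shape (8 bands, 128x128 px)
rng = np.random.default_rng(0)
img = rng.normal(5.0, 2.0, size=(8, 128, 128))

# In practice these would come from the split's stored statistics
band_mean = img.mean(axis=(1, 2))
band_std = img.std(axis=(1, 2))
normalized = normalize(img, band_mean, band_std)
```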
|
|
|
|
|
### Using this dataset
|
|
|
|
|
1. Requirements
|
|
- pytorch |
|
|
- lmdb |
|
|
- numpy |
|
|
- safetensors |
|
|
|
|
|
2. Extract the LMDB file |
|
|
- ```tar -xzf S2WC-RSS-like.tar.gz```
|
|
|
|
|
3. Initialize the dataset reader
|
|
```python |
|
|
from WCv1LMDBReader import WCv1LMDBReader, Bands
|
|
|
|
|
# initialize train dataset by setting split='train' and use all available bands |
|
|
train_ds = WCv1LMDBReader('<path_to_lmdb_file>', split='train', output_bands=[Bands.ALL]) |
|
|
|
|
|
# initialize val dataset by setting split='val' and use all available bands |
|
|
val_ds = WCv1LMDBReader('<path_to_lmdb_file>', split='val', output_bands=[Bands.ALL]) |
|
|
|
|
|
# initialize test dataset by setting split='test' and use all available bands
|
|
test_ds = WCv1LMDBReader('<path_to_lmdb_file>', split='test', output_bands=[Bands.ALL]) |
|
|
|
|
|
# load the means and std deviations |
|
|
train_mean, train_std = train_ds.get_mean_std() |
|
|
val_mean, val_std = val_ds.get_mean_std() |
|
|
test_mean, test_std = test_ds.get_mean_std() |
|
|
``` |
|
|
Wrap each dataset in a PyTorch `DataLoader` for use in either PyTorch Lightning or plain PyTorch, e.g.
|
|
```python |
|
|
from torch.utils.data import DataLoader
|
|
|
|
|
train_loader = DataLoader(train_ds, batch_size=64, num_workers=4, shuffle=True)

val_loader = DataLoader(val_ds, batch_size=64, num_workers=4)

test_loader = DataLoader(test_ds, batch_size=64, num_workers=4)
|
|
``` |
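The loaders yield batches of (image, label, class_proportions) triplets. A minimal iteration sketch using a stand-in `TensorDataset` (the shapes are assumed from the description above, not read from the LMDB):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data mimicking the dataset's triplet structure:
# 8-band 128x128 images, 11-class label maps, per-class proportions
images = torch.randn(16, 8, 128, 128)
labels = torch.randint(0, 11, (16, 128, 128))
proportions = torch.rand(16, 11)
stand_in_ds = TensorDataset(images, labels, proportions)

loader = DataLoader(stand_in_ds, batch_size=4, shuffle=True)
for image, label, proportion in loader:
    # forward pass / loss computation would go here
    batch_shape = image.shape
```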