
🧠 J-ORA: A Robot Perception Framework for Japanese Object Identification, Reference Resolution, and Action Prediction

A Multimodal Dataset for Vision-Language Grounding in Human-Robot Interaction (HRI)
📝 Language: Japanese | 🤖 Focus: Embodied AI | 📆 Size: 142 Scenes | 🔬 Granularity: Object-Level


📘 Summary

J-ORA (Japanese Object Reference and Action) is a multimodal benchmark for grounded vision-language learning in robotics. It is designed to support the understanding of Japanese robot instructions in real-world settings through a combination of:

  • Rich object-level visual scenes
  • Human-robot dialogues in Japanese
  • Grounded object attributes and reference links
  • Support for 3 robot perception tasks:
    • Object Identification (OI)
    • Reference Resolution (RR)
    • Next Action Prediction (AP)

🌏 Motivation

Despite the growing capabilities of Vision-Language Models (VLMs), robot perception remains challenging in dynamic real-world environments with occlusions, diverse object types, and ambiguous language.
At the same time, existing benchmarks focus mainly on English and on synthetic prompts.
J-ORA fills this gap with a Japanese-centric, multimodal resource grounded in real images and conversational instructions, supporting the development of intelligent agents that understand and act in non-English human environments.


📦 Dataset Overview

Feature                        Value
Hours of recordings            3 hrs 3 min 44 sec
Unique dialogues               93
Total dialogue scenes          142
Total utterances               2,131
Average turns per dialogue     15
Image-dialogue pairs           142
Unique object classes          160
Object attribute annotations   1,817
Language                       Japanese

Each .png image is paired with a .json file that contains:

  • 🔨 Full Japanese dialogue text
  • 🔍 Object attributes (category, color, shape, size, material, etc.)
  • 📍 Reference links from language to visual regions
  • ⚙️ Spatial positions and interactivity annotations

📂 Structure

Each sample contains:

{
  "image_id": "001",
  "caption": {
    "text": "Japanese instruction dialogue..."
  },
  "i2t_relations": [...],
  "object_attributes": [...],
  "relations": [...]
}

Files:

001.png
001.png.json
002.png
002.png.json
...
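
Annotations can also be read directly from the raw files. The snippet below is a minimal sketch, assuming the file layout above and the per-sample schema shown earlier (field names such as caption and object_attributes come from that example); it uses Pillow for the image, which is an extra dependency, and actual files may expose additional fields.

import json
from PIL import Image  # Pillow assumed here for image loading (not required by the dataset itself)

# Read one image/annotation pair, following the naming convention above
image = Image.open("001.png")
with open("001.png.json", encoding="utf-8") as f:
    annotation = json.load(f)

print(annotation["caption"]["text"])         # Japanese dialogue text
print(len(annotation["object_attributes"]))  # number of annotated objects
print(image.size)                            # image resolution in pixels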

To load from the Hugging Face Hub:

from datasets import load_dataset

# Download and load the J-ORA training split from the Hub
ds = load_dataset("jatuhurrra/J-ORA", split="train")
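
After loading, it is safest to inspect the columns rather than assume them, since the exact column names depend on how the Hub exposes the image/JSON pairs:

print(ds.column_names)            # check which fields the loader exposes
sample = ds[0]                    # first image-dialogue pair as a Python dict
print(list(sample.keys()))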

🧠 Tasks Supported

🟑 Object Identification (OI)

Identify all object mentions in a Japanese instruction and match them to objects in the scene.

🔵 Reference Resolution (RR)

Locate the visual regions of mentioned objects based on textual referring expressions.

🔴 Action Prediction (AP)

Given the mentioned object(s) and their locations, predict the likely action (e.g., pick, move, discard).
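
As a concrete illustration, AP can be scored as classification accuracy over a small action vocabulary. The sketch below uses hypothetical gold and predicted labels; the dataset's actual action inventory and evaluation protocol may differ.

# Hypothetical AP scoring sketch; the labels below are placeholders, not J-ORA's exact schema
def action_accuracy(gold_actions, predicted_actions):
    """Fraction of scenes where the predicted action matches the gold action."""
    assert len(gold_actions) == len(predicted_actions)
    correct = sum(g == p for g, p in zip(gold_actions, predicted_actions))
    return correct / len(gold_actions)

gold = ["pick", "move", "discard", "pick"]
pred = ["pick", "move", "pick", "pick"]
print(action_accuracy(gold, pred))  # 0.75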


🧪 Evaluations

We benchmarked 10+ VLMs, including:

  • 🏆 Proprietary: GPT-4o, Gemini 1.5 Pro, Claude 3.5
  • 💻 General open-source: LLaVA, Qwen2-VL
  • 🇯🇵 Japanese-specific: EvoVLM-JP, Bilingual-gpt-neox, Japanese-stable-VLM

We evaluate zero-shot and fine-tuned variants, with and without object attribute embeddings (a rough sketch of attribute-augmented prompting follows the findings below).
The findings reveal:

  • Performance gaps persist between English and Japanese
  • Reference resolution remains the hardest task
  • Attribute-based fine-tuning improves grounding
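
One simple way to include attribute information is to append it to the textual prompt sent to a VLM. The template below is a hypothetical sketch of that idea only; it is not the exact prompt used in the experiments, nor necessarily how attribute embeddings were injected.

# Hypothetical prompt builder for a "with attributes" condition; the wording is illustrative
def build_prompt(dialogue_text, object_attributes=None):
    prompt = (
        "Dialogue (Japanese):\n" + dialogue_text +
        "\n\nTask: identify the referenced objects and predict the next action."
    )
    if object_attributes:
        attr_lines = [", ".join(f"{k}={v}" for k, v in obj.items())
                      for obj in object_attributes]
        prompt += "\n\nObject attributes:\n" + "\n".join(attr_lines)
    return prompt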

🔍 Use Cases

J-ORA supports research in:

  • Japanese robot instruction following
  • Multilingual grounding and reference resolution
  • Vision-language model benchmarking
  • Embodied AI with language grounding
  • Fine-tuning of Japanese VLMs for HRI tasks

🛠️ Resources

  • Code: Task pipelines, training scripts, and evaluation metrics.

  • Dataset: Full annotations and image data.

    The data introduced in this project extends the J-CRe3 dataset.


📄 Citation

If you use this dataset, please cite both J-ORA and the J-CRe3 dataset:

% J-ORA citation coming soon
@inproceedings{ueda-2024-j-cre3,
  title     = {J-CRe3: A Japanese Conversation Dataset for Real-world Reference Resolution},
  author    = {Nobuhiro Ueda and Hideko Habe and Yoko Matsui and Akishige Yuguchi and Seiya Kawano and Yasutomo Kawanishi and Sadao Kurohashi and Koichiro Yoshino},
  booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
  month     = may,
  year      = {2024},
  url       = {https://aclanthology.org/2024.lrec-main.829},
  pages     = {9489--9502},
  address   = {Turin, Italy},
}

📬 Contact

For questions and collaborations:

  • Jesse Atuhurra – atuhurra.jesse.ag2@naist.ac.jp
  • Koichiro Yoshino – koichiro.yoshino@riken.jp

📜 License

Released under:
CC BY-SA 4.0 (Creative Commons Attribution-ShareAlike 4.0 International License).

