J-ORA: A Robot Perception Framework for Japanese Object Identification, Reference Resolution, and Action Prediction
A Multimodal Dataset for Vision-Language Grounding in Human-Robot Interaction (HRI)
Language: Japanese | Focus: Embodied AI | Size: 142 Scenes | Granularity: Object-Level
Summary
J-ORA (Japanese Object Reference and Action) is a multimodal benchmark for grounded vision-language learning in robotics. It is designed for understanding Japanese robot instructions in real-world settings through a combination of:
- Rich object-level visual scenes
- Human-robot dialogues in Japanese
- Grounded object attributes and reference links
- Support for three robot perception tasks:
  - Object Identification (OI)
  - Reference Resolution (RR)
  - Next Action Prediction (AP)
Motivation
Despite the growing role of Vision-Language Models (VLMs), robot perception remains challenging in dynamic real-world environments with occlusions, many object types, and ambiguous language.
At the same time, existing benchmarks focus mainly on English and synthetic prompts.
J-ORA fills this gap with a Japanese-centric, multimodal resource grounded in real images and conversational instructions, supporting the development of intelligent agents that understand and act in non-English human environments.
Dataset Overview
| Feature | Value |
|---|---|
| Hours of recordings | 3 hrs 3 min 44 sec |
| Unique dialogues | 93 |
| Total dialogue scenes | 142 |
| Total utterances | 2,131 |
| Average turns per dialogue | 15 |
| Image-dialogue pairs | 142 |
| Unique object classes | 160 |
| Object attribute annotations | 1,817 |
| Language | Japanese |
Each .png image is paired with a .json file that contains:
- Full Japanese dialogue text
- Object attributes (`category`, `color`, `shape`, `size`, `material`, etc.)
- Reference links from language to visual regions
- Spatial positions and interactivity annotations
Structure
Each sample contains:
```json
{
  "image_id": "001",
  "caption": {
    "text": "Japanese instruction dialogue..."
  },
  "i2t_relations": [...],
  "object_attributes": [...],
  "relations": [...]
}
```
Files:
```
001.png
001.png.json
002.png
002.png.json
...
```
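If you prefer to work from the raw files, an image and its annotation can be paired by filename. Below is a minimal sketch, assuming the files sit in a local `J-ORA/` directory (the path is illustrative) and follow the JSON schema shown above:

```python
import json
from PIL import Image

scene_id = "001"
# Load a scene image and its matching annotation file (paired by filename).
image = Image.open(f"J-ORA/{scene_id}.png")
with open(f"J-ORA/{scene_id}.png.json", encoding="utf-8") as f:
    annotation = json.load(f)

print(annotation["caption"]["text"])          # Japanese instruction dialogue
print(len(annotation["object_attributes"]))   # number of annotated objects in the scene
```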
To use:
```python
from datasets import load_dataset

ds = load_dataset("jatuhurrra/J-ORA", split="train")
```
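Once loaded, individual rows can be inspected directly. A minimal sketch, assuming the Hub columns mirror the JSON schema above (the exact column names may differ in the released configuration):

```python
# Inspect one scene; field names are assumed to mirror the JSON schema above.
sample = ds[0]
print(sample.keys())                    # available fields for this scene
print(sample["caption"]["text"][:200])  # start of the Japanese dialogue
```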
Tasks Supported
Object Identification (OI)
Identify all object mentions in a Japanese instruction and match them to objects in the scene.
Reference Resolution (RR)
Locate the visual regions of mentioned objects based on textual referring expressions.
Action Prediction (AP)
Given the mentioned object(s) and their locations, predict the likely action (e.g., pick, move, discard).
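As an illustration of how OI outputs might be scored, the sketch below treats predictions and gold annotations as sets of object labels and computes per-scene precision, recall, and F1. The metric choice and the example labels are assumptions for illustration, not the exact evaluation protocol of the benchmark.

```python
def object_id_scores(predicted, gold):
    """Set-level precision/recall/F1 over object labels for a single scene."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# A model that found the cup and the sponge but missed the towel:
print(object_id_scores(["コップ", "スポンジ"], ["コップ", "スポンジ", "タオル"]))
```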
Evaluations
We benchmarked 10+ VLMs, including:
- Proprietary: GPT-4o, Gemini 1.5 Pro, Claude 3.5
- General open-source: LLaVA, Qwen2-VL
- Japanese-specific: EvoVLM-JP, Bilingual-gpt-neox, Japanese-stable-VLM
We tested zero-shot and fine-tuned variants, with and without object attribute embeddings.
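For intuition on the with/without-attribute comparison, one simple way to expose the annotated attributes to a VLM is to serialize them into the text prompt alongside the dialogue. This is a hypothetical prompt-construction sketch, not the exact conditioning used in our experiments; the field names follow the `object_attributes` schema above.

```python
def build_prompt(dialogue, object_attributes=None):
    """Compose a Japanese task prompt, optionally listing annotated object attributes."""
    task = f"対話:\n{dialogue}\n\n対話で言及された物体をすべて特定してください。"
    if object_attributes:
        lines = [
            f"- {obj.get('category', '?')} (色: {obj.get('color', '?')}, 形: {obj.get('shape', '?')})"
            for obj in object_attributes
        ]
        return "シーン内の物体:\n" + "\n".join(lines) + "\n\n" + task
    return task
```

Calling `build_prompt(dialogue)` without attributes gives the attribute-free baseline prompt.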
Findings reveal:
- Performance gaps persist between English and Japanese
- Reference resolution remains the hardest task
- Attribute-based fine-tuning improves grounding
Use Cases
J-ORA supports research in:
- Japanese robot instruction following
- Multilingual grounding and reference resolution
- Vision-language model benchmarking
- Embodied AI with language grounding
- Fine-tuning of Japanese VLMs for HRI tasks
Resources
Code: Task pipelines, training scripts, and evaluation metrics.
Dataset: Full annotations and image data.
The data introduced in this project extends the J-CRe3 dataset.
Citation
If you use this dataset, please cite both J-ORA and the J-CRe3 dataset:
```bibtex
% J-ORA citation coming soon

@inproceedings{ueda-2024-j-cre3,
  title     = {J-CRe3: A Japanese Conversation Dataset for Real-world Reference Resolution},
  author    = {Nobuhiro Ueda and Hideko Habe and Yoko Matsui and Akishige Yuguchi and Seiya Kawano and Yasutomo Kawanishi and Sadao Kurohashi and Koichiro Yoshino},
  booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
  address   = {Turin, Italy},
  month     = may,
  year      = {2024},
  pages     = {9489--9502},
  url       = {https://aclanthology.org/2024.lrec-main.829},
}
```
Contact
For questions and collaborations:
- Jesse Atuhurra: atuhurra.jesse.ag2@naist.ac.jp
- Koichiro Yoshino: koichiro.yoshino@riken.jp
License
Released under CC BY-SA 4.0 (Creative Commons Attribution-ShareAlike 4.0 International License).