| Column | Type | Stats |
|---|---|---|
| id | int64 | 599M – 2.47B |
| url | stringlengths | 58 – 61 |
| repository_url | stringclasses | 1 value |
| events_url | stringlengths | 65 – 68 |
| labels | listlengths | 0 – 4 |
| active_lock_reason | null | |
| updated_at | stringlengths | 20 – 20 |
| assignees | listlengths | 0 – 4 |
| html_url | stringlengths | 46 – 51 |
| author_association | stringclasses | 4 values |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| milestone | dict | |
| comments | listlengths | 0 – 30 |
| title | stringlengths | 1 – 290 |
| reactions | dict | |
| node_id | stringlengths | 18 – 32 |
| pull_request | dict | |
| created_at | stringlengths | 20 – 20 |
| comments_url | stringlengths | 67 – 70 |
| body | stringlengths | 0 – 228k |
| user | dict | |
| labels_url | stringlengths | 72 – 75 |
| timeline_url | stringlengths | 67 – 70 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| number | int64 | 1 – 7.11k |
| performed_via_github_app | null | |
| closed_at | stringlengths | 20 – 20 |
| assignee | dict | |
| is_pull_request | bool | 2 classes |

The records below list one field value per line, in the same column order as the table above.
2,473,367,848
https://api.github.com/repos/huggingface/datasets/issues/7109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7109/events
[]
null
2024-08-19T13:29:12Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7109
MEMBER
null
null
null
[]
ConnectionError for gated datasets and unauthenticated users
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7109/reactions" }
I_kwDODunzps6TbJko
null
2024-08-19T13:27:45Z
https://api.github.com/repos/huggingface/datasets/issues/7109/comments
Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852

We should remove the dead code and properly handle this case: currently we raise a `ConnectionError` instead of a `DatasetNotFoundError` (as before).

See:
- https://github.com/huggingface/dataset-viewer/issues/3025
- https://github.com/huggingface/huggingface_hub/issues/2457
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7109/timeline
open
false
7,109
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
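Issue 7109 above asks that an unauthenticated request for a gated dataset raise `DatasetNotFoundError` (as before) instead of `ConnectionError`. As a minimal sketch of that decision logic, assuming a standalone classifier function — the name `classify_hub_error`, its parameters, and the local exception classes are hypothetical, not the actual `datasets.load` code:

```python
# Hypothetical sketch of the error mapping discussed in issue 7109.
# None of these names come from the real `datasets` codebase.

class DatasetNotFoundError(Exception):
    """The repo is missing, or exists but the caller cannot access it."""

class HubConnectionError(Exception):
    """A genuine connectivity problem when talking to the Hub."""

def classify_hub_error(status_code: int, gated: bool, authenticated: bool):
    """Return the exception class to raise for a failed Hub request."""
    # Gated dataset + unauthenticated user: the dataset exists, but access
    # is denied -- report "not found" (as before), not a connection error.
    if gated and not authenticated:
        return DatasetNotFoundError
    if status_code in (401, 403, 404):
        return DatasetNotFoundError
    # Everything else (timeouts, 5xx, ...) is treated as a connection issue.
    return HubConnectionError
```

The point is that access denial and connectivity failure are distinct conditions and should surface as distinct exception types.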
2,470,665,327
https://api.github.com/repos/huggingface/datasets/issues/7108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7108/events
[]
null
2024-08-19T13:21:12Z
[]
https://github.com/huggingface/datasets/issues/7108
NONE
completed
null
null
[ "I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?", "I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Cr...
website broken: Create a new dataset repository, doesn't create a new repo in Firefox
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7108/reactions" }
I_kwDODunzps6TQ1xv
null
2024-08-16T17:23:00Z
https://api.github.com/repos/huggingface/datasets/issues/7108/comments
### Describe the bug

This issue is also reported here: https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644

This page is broken: https://huggingface.co/new-dataset

I fill in the form with my text and click `Create dataset`.

![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6)

Then the form gets wiped, and no repo is created. No error message is visible in the developer console.

![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3)

# Idea for improvement

For better UX, if the repo cannot be created, show an error message that something went wrong.

# Workaround that works for me

```python
from huggingface_hub import HfApi, HfFolder

repo_id = 'simon-arc-solve-fractal-v3'

api = HfApi()
username = api.whoami()['name']
repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset")
```

### Steps to reproduce the bug

Go to https://huggingface.co/new-dataset

Fill in the form.

Click `Create dataset`.

Now the form is cleared, and the page doesn't navigate anywhere.

### Expected behavior

The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo.

### Environment info

Firefox 128.0.3 (64-bit)

macOS Sonoma 14.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye" }
https://api.github.com/repos/huggingface/datasets/issues/7108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7108/timeline
closed
false
7,108
null
2024-08-19T06:52:48Z
null
false
2,470,444,732
https://api.github.com/repos/huggingface/datasets/issues/7107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7107/events
[]
null
2024-08-18T09:28:43Z
[]
https://github.com/huggingface/datasets/issues/7107
NONE
completed
null
null
[ "There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now", "+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.", "I tried adding a simple test to `test_load.py` with the alp...
load_dataset broken in 2.21.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7107/reactions" }
I_kwDODunzps6TP_68
null
2024-08-16T14:59:51Z
https://api.github.com/repos/huggingface/datasets/issues/7107/comments
### Describe the bug

`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` used to work up to 2.20.0 but doesn't work in 2.21.0.

In 2.20.0:

![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9)

In 2.21.0:

![Screenshot 2024-08-16 at 3 57 24 PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f)

### Steps to reproduce the bug

1. Spin up a new Google Colab.
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. An error is thrown.

### Expected behavior

Repeat steps 1–5 with datasets version 2.20.0 instead; it works.

### Environment info

- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1911631?v=4", "events_url": "https://api.github.com/users/anjor/events{/privacy}", "followers_url": "https://api.github.com/users/anjor/followers", "following_url": "https://api.github.com/users/anjor/following{/other_user}", "gists_url": "https://api.github.com/users/anjor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anjor", "id": 1911631, "login": "anjor", "node_id": "MDQ6VXNlcjE5MTE2MzE=", "organizations_url": "https://api.github.com/users/anjor/orgs", "received_events_url": "https://api.github.com/users/anjor/received_events", "repos_url": "https://api.github.com/users/anjor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anjor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anjor/subscriptions", "type": "User", "url": "https://api.github.com/users/anjor" }
https://api.github.com/repos/huggingface/datasets/issues/7107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7107/timeline
closed
false
7,107
null
2024-08-18T09:27:12Z
null
false
2,469,854,262
https://api.github.com/repos/huggingface/datasets/issues/7106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7106/events
[]
null
2024-08-16T09:31:37Z
[]
https://github.com/huggingface/datasets/pull/7106
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7106). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
Rename LargeList.dtype to LargeList.feature
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7106/reactions" }
PR_kwDODunzps54jntM
{ "diff_url": "https://github.com/huggingface/datasets/pull/7106.diff", "html_url": "https://github.com/huggingface/datasets/pull/7106", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7106.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7106" }
2024-08-16T09:12:04Z
https://api.github.com/repos/huggingface/datasets/issues/7106/comments
Rename `LargeList.dtype` to `LargeList.feature`.

Note that `dtype` is usually used for NumPy data types ("int64", "float32", ...): see `Value.dtype`. However, the `LargeList` attribute (like `Sequence.feature`) expects a `FeatureType` instead.

With this renaming:
- we avoid confusion about the expected type, and
- we align `LargeList` with `Sequence`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7106/timeline
open
false
7,106
null
null
null
true
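An attribute rename like the one in PR 7106 (`LargeList.dtype` → `LargeList.feature`) is often shipped with a deprecated read-only alias so existing code keeps working for a release cycle. The sketch below shows that general pattern on a stand-in class; it is illustrative only and not the actual `datasets` implementation (the PR may simply rename the attribute outright).

```python
import warnings

class LargeList:
    """Stand-in class illustrating a rename with a backward-compatible alias."""

    def __init__(self, feature):
        self.feature = feature  # new, clearer attribute name

    @property
    def dtype(self):
        # Old name kept as a deprecated alias that forwards to the new one.
        warnings.warn(
            "LargeList.dtype is deprecated; use LargeList.feature instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.feature
```

Accessing `.dtype` still returns the feature but emits a `DeprecationWarning`, giving downstream code time to migrate.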
2,468,207,039
https://api.github.com/repos/huggingface/datasets/issues/7105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7105/events
[]
null
2024-08-19T15:08:49Z
[]
https://github.com/huggingface/datasets/pull/7105
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7105). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Nice\r\n\r\n<img width=\"141\" alt=\"Capture d’écran 2024-08-19 à 15 25 00\" src=\"ht...
Use `huggingface_hub` cache
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7105/reactions" }
PR_kwDODunzps54eZ0D
{ "diff_url": "https://github.com/huggingface/datasets/pull/7105.diff", "html_url": "https://github.com/huggingface/datasets/pull/7105", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7105.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7105" }
2024-08-15T14:45:22Z
https://api.github.com/repos/huggingface/datasets/issues/7105/comments
wip

- Use `hf_hub_download()` from `huggingface_hub` for HF files.
- The `datasets` cache_dir is still used for:
  - caching datasets as Arrow files (the files that back `Dataset` objects)
  - extracted archives and uncompressed files
  - files downloaded via HTTP (datasets with scripts)
- I removed code that was written for HTTP files (and also the dummy_data / mock_download_manager machinery that happened to rely on it and has been legacy for a while now).
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7105/timeline
open
false
7,105
null
null
null
true
2,467,788,212
https://api.github.com/repos/huggingface/datasets/issues/7104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7104/events
[]
null
2024-08-15T10:24:13Z
[]
https://github.com/huggingface/datasets/pull/7104
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7104). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
remove more script docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7104/reactions" }
PR_kwDODunzps54dAhE
{ "diff_url": "https://github.com/huggingface/datasets/pull/7104.diff", "html_url": "https://github.com/huggingface/datasets/pull/7104", "merged_at": "2024-08-15T10:18:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/7104.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7104" }
2024-08-15T10:13:26Z
https://api.github.com/repos/huggingface/datasets/issues/7104/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7104/timeline
closed
false
7,104
null
2024-08-15T10:18:25Z
null
true
2,467,664,581
https://api.github.com/repos/huggingface/datasets/issues/7103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7103/events
[]
null
2024-08-16T09:18:29Z
[]
https://github.com/huggingface/datasets/pull/7103
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7103). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix args of feature docstrings
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7103/reactions" }
PR_kwDODunzps54clrp
{ "diff_url": "https://github.com/huggingface/datasets/pull/7103.diff", "html_url": "https://github.com/huggingface/datasets/pull/7103", "merged_at": "2024-08-15T10:33:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/7103.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7103" }
2024-08-15T08:46:08Z
https://api.github.com/repos/huggingface/datasets/issues/7103/comments
Fix the Args section of feature docstrings. Currently, some args do not appear in the docs because they lack a type (between parentheses) and are therefore not parsed properly.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7103/timeline
closed
false
7,103
null
2024-08-15T10:33:30Z
null
true
2,466,893,106
https://api.github.com/repos/huggingface/datasets/issues/7102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7102/events
[]
null
2024-08-15T16:17:31Z
[]
https://github.com/huggingface/datasets/issues/7102
NONE
null
null
null
[ "Hi @lajd , I was skeptical about how we are saving the shards each as their own dataset (arrow file) in the script above, and so I updated the script to try out saving the shards in a few different file formats. From the experiments I ran, I saw binary format show significantly the best performance, with arrow a...
Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7102/reactions" }
I_kwDODunzps6TCc0y
null
2024-08-14T21:44:44Z
https://api.github.com/repos/huggingface/datasets/issues/7102/comments
### Describe the bug

When I load a dataset from a number of Arrow files, as in:

```python
random_dataset = load_dataset(
    "arrow",
    data_files={split: shard_filepaths},
    streaming=True,
    split=split,
)
```

I get fast iteration speeds when iterating over the dataset without shuffling. When I shuffle the dataset, the iteration speed is reduced by ~1000x.

It's very possible the way I'm loading dataset shards is not appropriate; if so, please advise! Thanks for the help.

### Steps to reproduce the bug

Here's the full code to reproduce the issue:

- Generate a random dataset.
- Create shards of data independently using `Dataset.save_to_disk()`; the code below generates 16 shards (Arrow files) of 512 examples each.

```python
import time
from pathlib import Path
from multiprocessing import Pool, cpu_count

import torch
from datasets import Dataset, load_dataset

split = "train"
split_save_dir = "/tmp/random_split"

def generate_random_example():
    return {
        'inputs': torch.randn(128).tolist(),
        'indices': torch.randint(0, 10000, (2, 20000)).tolist(),
        'values': torch.randn(20000).tolist(),
    }

def generate_shard_dataset(examples_per_shard: int = 512):
    dataset_dict = {'inputs': [], 'indices': [], 'values': []}
    for _ in range(examples_per_shard):
        example = generate_random_example()
        dataset_dict['inputs'].append(example['inputs'])
        dataset_dict['indices'].append(example['indices'])
        dataset_dict['values'].append(example['values'])
    return Dataset.from_dict(dataset_dict)

def save_shard(shard_idx, save_dir, examples_per_shard):
    shard_dataset = generate_shard_dataset(examples_per_shard)
    shard_write_path = Path(save_dir) / f"shard_{shard_idx}"
    shard_dataset.save_to_disk(shard_write_path)
    return str(Path(shard_write_path) / "data-00000-of-00001.arrow")

def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512):
    with Pool(cpu_count()) as pool:
        args = [(m, save_dir, examples_per_shard) for m in range(num_shards)]
        shard_filepaths = pool.starmap(save_shard, args)
    return shard_filepaths

shard_filepaths = generate_split_shards(split_save_dir)
```

Load the dataset as an `IterableDataset`:

```python
random_dataset = load_dataset(
    "arrow",
    data_files={split: shard_filepaths},
    streaming=True,
    split=split,
)
random_dataset = random_dataset.with_format("numpy")
```

Observe the iterations/second when iterating over the dataset directly, and when applying shuffling before iterating.

Without shuffling, this gives ~1500 iterations/second:

```python
start_time = time.time()
for count, item in enumerate(random_dataset):
    if count > 0 and count % 100 == 0:
        elapsed_time = time.time() - start_time
        iterations_per_second = count / elapsed_time
        print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```

```
Processed 100 items at an average of 705.74 iterations/second
Processed 200 items at an average of 1169.68 iterations/second
Processed 300 items at an average of 1497.97 iterations/second
Processed 400 items at an average of 1739.62 iterations/second
Processed 500 items at an average of 1931.11 iterations/second
```

With shuffling, this gives ~3 iterations/second:

```python
random_dataset = random_dataset.shuffle(buffer_size=100, seed=42)

start_time = time.time()
for count, item in enumerate(random_dataset):
    if count > 0 and count % 100 == 0:
        elapsed_time = time.time() - start_time
        iterations_per_second = count / elapsed_time
        print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```

```
Processed 100 items at an average of 3.75 iterations/second
Processed 200 items at an average of 3.93 iterations/second
```

### Expected behavior

Iterations per second should be barely affected by shuffling, especially with a small buffer size.

### Environment info

- Datasets version: 2.21.0
- Python 3.10
- Ubuntu 22.04
{ "avatar_url": "https://avatars.githubusercontent.com/u/13192126?v=4", "events_url": "https://api.github.com/users/lajd/events{/privacy}", "followers_url": "https://api.github.com/users/lajd/followers", "following_url": "https://api.github.com/users/lajd/following{/other_user}", "gists_url": "https://api.github.com/users/lajd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lajd", "id": 13192126, "login": "lajd", "node_id": "MDQ6VXNlcjEzMTkyMTI2", "organizations_url": "https://api.github.com/users/lajd/orgs", "received_events_url": "https://api.github.com/users/lajd/received_events", "repos_url": "https://api.github.com/users/lajd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lajd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lajd/subscriptions", "type": "User", "url": "https://api.github.com/users/lajd" }
https://api.github.com/repos/huggingface/datasets/issues/7102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7102/timeline
open
false
7,102
null
null
null
false
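For context on issue 7102 above: `IterableDataset.shuffle(buffer_size=...)` performs buffer-based shuffling, where examples are read in order into a fixed-size buffer and each output item is drawn at random from that buffer. A minimal pure-Python sketch of the general technique (not the `datasets` implementation) shows that the buffer bookkeeping itself is O(1) per item, which suggests the reported slowdown comes from how the shuffled source shards are read rather than from the buffer logic:

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=42):
    """Approximate shuffle over a stream: fill a fixed-size buffer, then for
    each new item swap it into a random slot and yield the evicted item."""
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)  # warm-up phase: fill the buffer first
        else:
            idx = rng.randrange(buffer_size)
            buffer[idx], item = item, buffer[idx]  # swap in, evict old
            yield item
    rng.shuffle(buffer)  # drain the remaining buffered items
    yield from buffer
```

Every input item is yielded exactly once, just in a locally randomized order; a small `buffer_size` only limits how far items can move, not the throughput of this loop.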
2,466,510,783
https://api.github.com/repos/huggingface/datasets/issues/7101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7101/events
[]
null
2024-08-18T10:33:38Z
[]
https://github.com/huggingface/datasets/issues/7101
NONE
null
null
null
[ "Having looked into this further it seems the core of the issue is with two different formats in the same repo.\r\n\r\nWhen the `parquet` config is first, the `WebDataset`s are loaded as `parquet`, if the `WebDataset` configs are first, the `parquet` is loaded as `WebDataset`.\r\n\r\nA workaround in my case would b...
`load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7101/reactions" }
I_kwDODunzps6TA_e_
null
2024-08-14T18:12:25Z
https://api.github.com/repos/huggingface/datasets/issues/7101/comments
Following the [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets:

```yaml
configs:
- config_name: dataception
  data_files:
  - path: dataception.parquet
    split: train
  default: true
- config_name: dataset_5423
  data_files:
  - path: datasets/5423.tar
    split: train
...
- config_name: dataset_721736
  data_files:
  - path: datasets/721736.tar
    split: train
```

The intent was for metadata to be browsable via the Dataset Viewer, in addition to each individual dataset, and to allow datasets to be loaded by specifying the config/name to `load_dataset`.

While testing `load_dataset` I encountered the following error:

```python
>>> dataset = load_dataset("bigdata-pw/Dataception", "dataset_7691")
Downloading readme: 100%|████████████████████████████████████████| 467k/467k [00:00<00:00, 1.99MB/s]
Downloading data: 100%|████████████████████████████████████████| 71.0M/71.0M [00:02<00:00, 26.8MB/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "datasets\load.py", line 2145, in load_dataset
    builder_instance.download_and_prepare(
  File "datasets\builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "datasets\builder.py", line 1100, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "datasets\packaged_modules\parquet\parquet.py", line 58, in _split_generators
    self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f))
  File "pyarrow\parquet\core.py", line 2325, in read_schema
    file = ParquetFile(
  File "pyarrow\parquet\core.py", line 318, in __init__
    self.reader.open(
  File "pyarrow\_parquet.pyx", line 1470, in pyarrow._parquet.ParquetReader.open
  File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```

The correct file is downloaded; however, the incorrect builder type is detected: `parquet`, due to other content of the repository. It would appear that the config needs to be taken into account.

Note that I have removed the additional configs from the repository because of this issue, and there is a limit of 3000 configs anyway, so the Dataset Viewer doesn't work as I intended. I'll add them back in if it assists with testing.
{ "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hlky", "id": 106811348, "login": "hlky", "node_id": "U_kgDOBl3P1A", "organizations_url": "https://api.github.com/users/hlky/orgs", "received_events_url": "https://api.github.com/users/hlky/received_events", "repos_url": "https://api.github.com/users/hlky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "type": "User", "url": "https://api.github.com/users/hlky" }
https://api.github.com/repos/huggingface/datasets/issues/7101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7101/timeline
open
false
7,101
null
null
null
false
2,465,529,414
https://api.github.com/repos/huggingface/datasets/issues/7100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7100/events
[]
null
2024-08-14T11:01:51Z
[]
https://github.com/huggingface/datasets/issues/7100
NONE
null
null
null
[]
IterableDataset: cannot resolve features from list of numpy arrays
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7100/reactions" }
I_kwDODunzps6S9P5G
null
2024-08-14T11:01:51Z
https://api.github.com/repos/huggingface/datasets/issues/7100/comments
### Describe the bug When resolving the features of an `IterableDataset`, a `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error is raised. ``` Traceback (most recent call last): File "test.py", line 6 iter_ds = iter_ds._resolve_features() File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2876, in _resolve_features features = _infer_features_from_batch(self.with_format(None)._head()) File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 63, in _infer_features_from_batch pa_table = pa.Table.from_pydict(batch) File "pyarrow/table.pxi", line 1813, in pyarrow.lib._Tabular.from_pydict File "pyarrow/table.pxi", line 5339, in pyarrow.lib._from_pydict File "pyarrow/array.pxi", line 374, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 344, in pyarrow.lib.array File "pyarrow/array.pxi", line 42, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values ``` ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np # create a list of numpy arrays iter_ds = Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]}).to_iterable_dataset().map(lambda x: {'a': [np.array(x['a'])]}) iter_ds = iter_ds._resolve_features() # errors here ``` ### Expected behavior Features can be successfully resolved. ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
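A possible interim workaround (a sketch, not part of the original report) is to convert multi-dimensional numpy arrays into nested Python lists inside the `map` function, so that `pa.Table.from_pydict` can infer a nested list type instead of failing on the 2-D array. `to_listable` is a hypothetical helper name:

```python
import numpy as np

def to_listable(batch):
    # Convert multi-dimensional numpy arrays to nested Python lists so that
    # pyarrow's type inference sees list-of-list values instead of 2-D arrays.
    return {
        key: [v.tolist() if isinstance(v, np.ndarray) else v for v in values]
        for key, values in batch.items()
    }

# Same shape of data as in the reproduction above.
batch = {"a": [np.array([[1, 2, 3], [1, 2, 3]])]}
converted = to_listable(batch)
print(converted["a"][0])  # [[1, 2, 3], [1, 2, 3]]
```

Applying such a conversion before `_resolve_features` sidesteps the `ArrowInvalid` error, at the cost of losing the numpy dtype.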
{ "avatar_url": "https://avatars.githubusercontent.com/u/18899212?v=4", "events_url": "https://api.github.com/users/VeryLazyBoy/events{/privacy}", "followers_url": "https://api.github.com/users/VeryLazyBoy/followers", "following_url": "https://api.github.com/users/VeryLazyBoy/following{/other_user}", "gists_url": "https://api.github.com/users/VeryLazyBoy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VeryLazyBoy", "id": 18899212, "login": "VeryLazyBoy", "node_id": "MDQ6VXNlcjE4ODk5MjEy", "organizations_url": "https://api.github.com/users/VeryLazyBoy/orgs", "received_events_url": "https://api.github.com/users/VeryLazyBoy/received_events", "repos_url": "https://api.github.com/users/VeryLazyBoy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VeryLazyBoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VeryLazyBoy/subscriptions", "type": "User", "url": "https://api.github.com/users/VeryLazyBoy" }
https://api.github.com/repos/huggingface/datasets/issues/7100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7100/timeline
open
false
7,100
null
null
null
false
2,465,221,827
https://api.github.com/repos/huggingface/datasets/issues/7099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7099/events
[]
null
2024-08-14T08:45:17Z
[]
https://github.com/huggingface/datasets/pull/7099
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7099). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Set dev version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7099/reactions" }
PR_kwDODunzps54U7s4
{ "diff_url": "https://github.com/huggingface/datasets/pull/7099.diff", "html_url": "https://github.com/huggingface/datasets/pull/7099", "merged_at": "2024-08-14T08:39:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/7099.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7099" }
2024-08-14T08:31:17Z
https://api.github.com/repos/huggingface/datasets/issues/7099/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7099/timeline
closed
false
7,099
null
2024-08-14T08:39:25Z
null
true
2,465,016,562
https://api.github.com/repos/huggingface/datasets/issues/7098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7098/events
[]
null
2024-08-14T06:41:07Z
[]
https://github.com/huggingface/datasets/pull/7098
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7098). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
Release: 2.21.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7098/reactions" }
PR_kwDODunzps54UPMS
{ "diff_url": "https://github.com/huggingface/datasets/pull/7098.diff", "html_url": "https://github.com/huggingface/datasets/pull/7098", "merged_at": "2024-08-14T06:41:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/7098.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7098" }
2024-08-14T06:35:13Z
https://api.github.com/repos/huggingface/datasets/issues/7098/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7098/timeline
closed
false
7,098
null
2024-08-14T06:41:06Z
null
true
2,458,455,489
https://api.github.com/repos/huggingface/datasets/issues/7097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7097/events
[]
null
2024-08-09T18:26:37Z
[]
https://github.com/huggingface/datasets/issues/7097
NONE
null
null
null
[]
Some of DownloadConfig's properties are always being overridden in load.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7097/reactions" }
I_kwDODunzps6SiQ3B
null
2024-08-09T18:26:37Z
https://api.github.com/repos/huggingface/datasets/issues/7097/comments
### Describe the bug The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because previously extracted data will just be ignored the next time the dataset is loaded. See this image below: ![image](https://github.com/user-attachments/assets/9e76ebb7-09b1-4c95-adc8-a959b536f93c) ### Steps to reproduce the bug 1. Have a local dataset that contains archived files (zip, tar.gz, etc) 2. Build a dataset loading script to download and extract these files 3. Run the load_dataset function with a DownloadConfig that specifically sets `force_extract` to False 4. The extraction process will start even if the archives were previously extracted ### Expected behavior The extraction process should not run when the archives were previously extracted and `force_extract` is set to False. ### Environment info datasets==2.20.0 python3.9
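A sketch of the behavior the report asks for, using a hypothetical stand-in for `DownloadConfig` (not the real `datasets` class): the library would only force the extraction flags on when the caller has not explicitly chosen a value, rather than overriding them unconditionally:

```python
from dataclasses import dataclass

@dataclass
class DownloadConfig:
    # Minimal stand-in for datasets.DownloadConfig, for illustration only.
    extract_compressed_file: bool = False
    force_extract: bool = False

def with_extraction_defaults(config, user_set_force_extract):
    # Hypothetical fix sketch: only apply the library defaults when the
    # caller did not explicitly set force_extract themselves.
    if not user_set_force_extract:
        config.extract_compressed_file = True
        config.force_extract = True
    return config

cfg = with_extraction_defaults(DownloadConfig(force_extract=False), user_set_force_extract=True)
print(cfg.force_extract)  # False: the user's choice survives
```

With this guard, step 4 of the reproduction above would respect `force_extract=False` and skip re-extraction.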
{ "avatar_url": "https://avatars.githubusercontent.com/u/29772899?v=4", "events_url": "https://api.github.com/users/ductai199x/events{/privacy}", "followers_url": "https://api.github.com/users/ductai199x/followers", "following_url": "https://api.github.com/users/ductai199x/following{/other_user}", "gists_url": "https://api.github.com/users/ductai199x/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ductai199x", "id": 29772899, "login": "ductai199x", "node_id": "MDQ6VXNlcjI5NzcyODk5", "organizations_url": "https://api.github.com/users/ductai199x/orgs", "received_events_url": "https://api.github.com/users/ductai199x/received_events", "repos_url": "https://api.github.com/users/ductai199x/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ductai199x/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ductai199x/subscriptions", "type": "User", "url": "https://api.github.com/users/ductai199x" }
https://api.github.com/repos/huggingface/datasets/issues/7097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7097/timeline
open
false
7,097
null
null
null
false
2,456,929,173
https://api.github.com/repos/huggingface/datasets/issues/7096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7096/events
[]
null
2024-08-15T17:25:26Z
[]
https://github.com/huggingface/datasets/pull/7096
CONTRIBUTOR
null
false
null
[ "Hi @albertvillanova, is this PR looking okay to you? Anything else you'd like to see?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7096). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",...
Automatically create `cache_dir` from `cache_file_name`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7096/reactions" }
PR_kwDODunzps535Xkr
{ "diff_url": "https://github.com/huggingface/datasets/pull/7096.diff", "html_url": "https://github.com/huggingface/datasets/pull/7096", "merged_at": "2024-08-15T10:13:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/7096.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7096" }
2024-08-09T01:34:06Z
https://api.github.com/repos/huggingface/datasets/issues/7096/comments
You get a pretty unhelpful error message when specifying a `cache_file_name` in a directory that doesn't exist, e.g. `cache_file_name="./cache/data.map"` ```python import datasets cache_file_name="./cache/train.map" dataset = datasets.load_dataset("ylecun/mnist") dataset["train"].map(lambda x: x, cache_file_name=cache_file_name) ``` ``` FileNotFoundError: [Errno 2] No such file or directory: '/.../cache/tmp48r61siw' ``` The directory is simple enough to create automatically, and I was expecting that to be the case. cc: @albertvillanova @lhoestq
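The essence of the change can be sketched with the standard library alone (`ensure_cache_dir` is an illustrative helper name, not the function added by this PR): create the parent directory of the cache file before handing the path to `map`:

```python
import os
import tempfile

def ensure_cache_dir(cache_file_name):
    # Create the parent directory of a map cache file if it does not exist,
    # so writing the cache file cannot raise FileNotFoundError.
    parent = os.path.dirname(cache_file_name)
    if parent:
        os.makedirs(parent, exist_ok=True)
    return cache_file_name

base = tempfile.mkdtemp()
path = ensure_cache_dir(os.path.join(base, "cache", "train.map"))
print(os.path.isdir(os.path.dirname(path)))  # True
```

`os.makedirs(..., exist_ok=True)` is idempotent, so calling it on an already-existing cache directory is harmless.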
{ "avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4", "events_url": "https://api.github.com/users/ringohoffman/events{/privacy}", "followers_url": "https://api.github.com/users/ringohoffman/followers", "following_url": "https://api.github.com/users/ringohoffman/following{/other_user}", "gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ringohoffman", "id": 27844407, "login": "ringohoffman", "node_id": "MDQ6VXNlcjI3ODQ0NDA3", "organizations_url": "https://api.github.com/users/ringohoffman/orgs", "received_events_url": "https://api.github.com/users/ringohoffman/received_events", "repos_url": "https://api.github.com/users/ringohoffman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions", "type": "User", "url": "https://api.github.com/users/ringohoffman" }
https://api.github.com/repos/huggingface/datasets/issues/7096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7096/timeline
closed
false
7,096
null
2024-08-15T10:13:22Z
null
true
2,454,418,130
https://api.github.com/repos/huggingface/datasets/issues/7094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7094/events
[]
null
2024-08-07T21:53:06Z
[]
https://github.com/huggingface/datasets/pull/7094
NONE
null
false
null
[]
Add Arabic Docs to Datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7094/reactions" }
PR_kwDODunzps53w2b7
{ "diff_url": "https://github.com/huggingface/datasets/pull/7094.diff", "html_url": "https://github.com/huggingface/datasets/pull/7094", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7094.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7094" }
2024-08-07T21:53:06Z
https://api.github.com/repos/huggingface/datasets/issues/7094/comments
Translate the docs into Arabic. Issue number: #7093 [Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) [English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx) @stevhliu
{ "avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4", "events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}", "followers_url": "https://api.github.com/users/AhmedAlmaghz/followers", "following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}", "gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AhmedAlmaghz", "id": 53489256, "login": "AhmedAlmaghz", "node_id": "MDQ6VXNlcjUzNDg5MjU2", "organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs", "received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events", "repos_url": "https://api.github.com/users/AhmedAlmaghz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions", "type": "User", "url": "https://api.github.com/users/AhmedAlmaghz" }
https://api.github.com/repos/huggingface/datasets/issues/7094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7094/timeline
open
false
7,094
null
null
null
true
2,454,413,074
https://api.github.com/repos/huggingface/datasets/issues/7093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7093/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-08-07T21:48:05Z
[]
https://github.com/huggingface/datasets/issues/7093
NONE
null
null
null
[]
Add Arabic Docs to datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7093/reactions" }
I_kwDODunzps6SS18S
null
2024-08-07T21:48:05Z
https://api.github.com/repos/huggingface/datasets/issues/7093/comments
### Feature request Add Arabic Docs to datasets [Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) ### Motivation @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx ### Your contribution @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
{ "avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4", "events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}", "followers_url": "https://api.github.com/users/AhmedAlmaghz/followers", "following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}", "gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AhmedAlmaghz", "id": 53489256, "login": "AhmedAlmaghz", "node_id": "MDQ6VXNlcjUzNDg5MjU2", "organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs", "received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events", "repos_url": "https://api.github.com/users/AhmedAlmaghz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions", "type": "User", "url": "https://api.github.com/users/AhmedAlmaghz" }
https://api.github.com/repos/huggingface/datasets/issues/7093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7093/timeline
open
false
7,093
null
null
null
false
2,451,393,658
https://api.github.com/repos/huggingface/datasets/issues/7092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7092/events
[]
null
2024-08-08T16:35:01Z
[]
https://github.com/huggingface/datasets/issues/7092
NONE
null
null
null
[ "I’ll take a look", "Possible definitions of done for this issue:\r\n\r\n1. A fix so you can load your dataset specifically\r\n2. A general fix for datasets similar to this in the `datasets` library\r\n\r\nOption 1 is trivial. I think option 2 requires significant changes to the library.\r\n\r\nSince you outlined...
load_dataset with multiple jsonlines files interprets datastructure too early
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7092/reactions" }
I_kwDODunzps6SHUx6
null
2024-08-06T17:42:55Z
https://api.github.com/repos/huggingface/datasets/issues/7092/comments
### Describe the bug Likely related to #6460: using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data. ### Steps to reproduce the bug real world example: data is available in this [PR-branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by months, some months contain all empty data for some columns, just by chance - these are `[]`. Otherwise it's all the same structure. ```python from datasets import load_dataset ds = load_dataset("json", data_dir="./data/annotated/api") ``` you get a long error trace, where in the middle it says something like ```cs TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null ``` toy example: (on request) ### Expected behavior Some suggestions 1. give a better error message to the user 2. consider all files before deciding on a data structure for a given column. 3. if you encounter a new structure, and can't cast that to null, replace the null-hypothesis. (maybe something for pyarrow) as a workaround I have lazily implemented the following (essentially step 2) ```python import os import jsonlines import datasets api_files = os.listdir("./data/annotated/api") api_files = [f"./data/annotated/api/{f}" for f in api_files] api_file_contents = [] for f in api_files: with jsonlines.open(f) as reader: for obj in reader: api_file_contents.append(obj) ds = datasets.Dataset.from_list(api_file_contents) ``` this works fine for my use case, but is potentially slower and less memory efficient for really large datasets (where this is unlikely to happen in the first place). 
### Environment info - `datasets` version: 2.20.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
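The workaround quoted in the report (suggestion 2: read every file before committing to a schema) can also be sketched with the standard library only, avoiding the `jsonlines` dependency; `read_jsonl_all` is an illustrative helper name. Once all records are in one Python list, `datasets.Dataset.from_list` sees the non-empty rows and never fixes a column's type to null:

```python
import json
import os
import tempfile

def read_jsonl_all(paths):
    # Read every .jsonl file into one list of dicts before building the
    # dataset, so a file whose column is entirely [] cannot pin that
    # column's inferred type to null.
    rows = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                if line.strip():
                    rows.append(json.loads(line))
    return rows

# Toy reproduction: one chunk with an all-empty column, one with real structs.
tmp = tempfile.mkdtemp()
a, b = os.path.join(tmp, "a.jsonl"), os.path.join(tmp, "b.jsonl")
with open(a, "w", encoding="utf-8") as f:
    f.write(json.dumps({"inputs": []}) + "\n")
with open(b, "w", encoding="utf-8") as f:
    f.write(json.dumps({"inputs": [{"id": 1, "src": "x"}]}) + "\n")

rows = read_jsonl_all([a, b])
print(len(rows))  # 2
```

This keeps the whole dataset in memory, so it shares the memory-efficiency caveat the reporter already notes.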
{ "avatar_url": "https://avatars.githubusercontent.com/u/23384483?v=4", "events_url": "https://api.github.com/users/Vipitis/events{/privacy}", "followers_url": "https://api.github.com/users/Vipitis/followers", "following_url": "https://api.github.com/users/Vipitis/following{/other_user}", "gists_url": "https://api.github.com/users/Vipitis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Vipitis", "id": 23384483, "login": "Vipitis", "node_id": "MDQ6VXNlcjIzMzg0NDgz", "organizations_url": "https://api.github.com/users/Vipitis/orgs", "received_events_url": "https://api.github.com/users/Vipitis/received_events", "repos_url": "https://api.github.com/users/Vipitis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Vipitis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vipitis/subscriptions", "type": "User", "url": "https://api.github.com/users/Vipitis" }
https://api.github.com/repos/huggingface/datasets/issues/7092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7092/timeline
open
false
7,092
null
null
null
false
2,449,699,490
https://api.github.com/repos/huggingface/datasets/issues/7090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7090/events
[]
null
2024-08-06T00:35:05Z
[]
https://github.com/huggingface/datasets/issues/7090
NONE
null
null
null
[]
The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7090/reactions" }
I_kwDODunzps6SA3Ki
null
2024-08-06T00:35:05Z
https://api.github.com/repos/huggingface/datasets/issues/7090/comments
### Describe the bug Tests should use the same Python path as they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11. Failure: ``` if err_filename is not None: > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFoundError: [Errno 2] No such file or directory: 'python' ``` ### Steps to reproduce the bug regular test run using PyTest ### Expected behavior n/a ### Environment info FreeBSD 14.1
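A minimal sketch of the suggested fix (assuming the test spawns a child interpreter via `subprocess`): resolve the interpreter through `sys.executable`, which always points at the binary running the test suite, instead of hard-coding the literal name `python`:

```python
import subprocess
import sys

# Launch a child interpreter with the same binary the test suite runs under
# (e.g. /usr/local/bin/python3.11 on FreeBSD), rather than the name "python",
# which may not exist on PATH.
result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # ok
```

With `sys.executable`, the subprocess call works regardless of how the interpreter binary is named or installed.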
{ "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yurivict", "id": 271906, "login": "yurivict", "node_id": "MDQ6VXNlcjI3MTkwNg==", "organizations_url": "https://api.github.com/users/yurivict/orgs", "received_events_url": "https://api.github.com/users/yurivict/received_events", "repos_url": "https://api.github.com/users/yurivict/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "type": "User", "url": "https://api.github.com/users/yurivict" }
https://api.github.com/repos/huggingface/datasets/issues/7090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7090/timeline
open
false
7,090
null
null
null
false
2,449,479,500
https://api.github.com/repos/huggingface/datasets/issues/7089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7089/events
[]
null
2024-08-05T21:05:11Z
[]
https://github.com/huggingface/datasets/issues/7089
NONE
null
null
null
[]
Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7089/reactions" }
I_kwDODunzps6SABdM
null
2024-08-05T21:05:11Z
https://api.github.com/repos/huggingface/datasets/issues/7089/comments
### Describe the bug see the subject ### Steps to reproduce the bug regular tests ### Expected behavior n/a ### Environment info version 2.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yurivict", "id": 271906, "login": "yurivict", "node_id": "MDQ6VXNlcjI3MTkwNg==", "organizations_url": "https://api.github.com/users/yurivict/orgs", "received_events_url": "https://api.github.com/users/yurivict/received_events", "repos_url": "https://api.github.com/users/yurivict/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "type": "User", "url": "https://api.github.com/users/yurivict" }
https://api.github.com/repos/huggingface/datasets/issues/7089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7089/timeline
open
false
7,089
null
null
null
false
2,447,383,940
https://api.github.com/repos/huggingface/datasets/issues/7088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7088/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-08-05T00:45:50Z
[]
https://github.com/huggingface/datasets/issues/7088
NONE
null
null
null
[]
Disable warning when using with_format format on tensors
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7088/reactions" }
I_kwDODunzps6R4B2E
null
2024-08-05T00:45:50Z
https://api.github.com/repos/huggingface/datasets/issues/7088/comments
### Feature request If we write this code: ```python """Get data and define datasets.""" from enum import StrEnum from datasets import load_dataset from torch.utils.data import DataLoader from torchvision import transforms class Split(StrEnum): """Describes what type of split to use in the dataloader""" TRAIN = "train" TEST = "test" VAL = "validation" class ImageNetDataLoader(DataLoader): """Create an ImageNetDataloader""" _preprocess_transform = transforms.Compose( [ transforms.Resize(256), transforms.CenterCrop(224), ] ) def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN): dataset = ( load_dataset( "imagenet-1k", split=split, trust_remote_code=True, streaming=True, ) .with_format("torch") .map(self._preprocess) ) super().__init__(dataset=dataset, batch_size=batch_size) def _preprocess(self, data): if data["image"].shape[0] < 3: data["image"] = data["image"].repeat(3, 1, 1) data["image"] = self._preprocess_transform(data["image"].float()) return data if __name__ == "__main__": dataloader = ImageNetDataLoader(batch_size=2) for batch in dataloader: print(batch["image"]) break ``` This will trigger a user warning: ```bash datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` ### Motivation This happens because of the way the formatted tensor is returned in `TorchFormatter._tensorize`. This function handles values of different types; according to some tests, it seems that possible value types are `int`, `numpy.ndarray` and `torch.Tensor`. 
In particular, this warning is triggered when the value type is `torch.Tensor`, because this is not the suggested PyTorch way of doing it: - https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor - https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary. ### Your contribution A solution that I found to be working is to change the current way of doing it: ```python return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` To: ```python if (isinstance(value, torch.Tensor)): tensor = value.clone().detach() if self.torch_tensor_kwargs.get('requires_grad', False): tensor.requires_grad_() return tensor else: return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42048782?v=4", "events_url": "https://api.github.com/users/Haislich/events{/privacy}", "followers_url": "https://api.github.com/users/Haislich/followers", "following_url": "https://api.github.com/users/Haislich/following{/other_user}", "gists_url": "https://api.github.com/users/Haislich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Haislich", "id": 42048782, "login": "Haislich", "node_id": "MDQ6VXNlcjQyMDQ4Nzgy", "organizations_url": "https://api.github.com/users/Haislich/orgs", "received_events_url": "https://api.github.com/users/Haislich/received_events", "repos_url": "https://api.github.com/users/Haislich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Haislich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Haislich/subscriptions", "type": "User", "url": "https://api.github.com/users/Haislich" }
https://api.github.com/repos/huggingface/datasets/issues/7088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7088/timeline
open
false
7,088
null
null
null
false
2,447,158,643
https://api.github.com/repos/huggingface/datasets/issues/7087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7087/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-08-06T06:59:23Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7087
NONE
completed
null
null
[ "Thanks for reporting.\r\n\r\nIt is weird, because the language entry is in the list. See: https://github.com/huggingface/huggingface.js/blob/98e32f0ed4ee057a596f66a1dec738e5db9643d5/packages/languages/src/languages_iso_639_3.ts#L15186-L15189\r\n\r\nI have reported the issue:\r\n- https://github.com/huggingface/hug...
Unable to create dataset card for Lushootseed language
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7087/reactions" }
I_kwDODunzps6R3K1z
null
2024-08-04T14:27:04Z
https://api.github.com/repos/huggingface/datasets/issues/7087/comments
### Feature request

While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options?

### Motivation

I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned with collecting Lushootseed documents.

### Your contribution

I can submit a pull request.
{ "avatar_url": "https://avatars.githubusercontent.com/u/134876525?v=4", "events_url": "https://api.github.com/users/vaishnavsudarshan/events{/privacy}", "followers_url": "https://api.github.com/users/vaishnavsudarshan/followers", "following_url": "https://api.github.com/users/vaishnavsudarshan/following{/other_user}", "gists_url": "https://api.github.com/users/vaishnavsudarshan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vaishnavsudarshan", "id": 134876525, "login": "vaishnavsudarshan", "node_id": "U_kgDOCAoNbQ", "organizations_url": "https://api.github.com/users/vaishnavsudarshan/orgs", "received_events_url": "https://api.github.com/users/vaishnavsudarshan/received_events", "repos_url": "https://api.github.com/users/vaishnavsudarshan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vaishnavsudarshan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vaishnavsudarshan/subscriptions", "type": "User", "url": "https://api.github.com/users/vaishnavsudarshan" }
https://api.github.com/repos/huggingface/datasets/issues/7087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7087/timeline
closed
false
7,087
null
2024-08-06T06:59:22Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,445,516,829
https://api.github.com/repos/huggingface/datasets/issues/7086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7086/events
[]
null
2024-08-02T18:12:23Z
[]
https://github.com/huggingface/datasets/issues/7086
NONE
null
null
null
[]
load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7086/reactions" }
I_kwDODunzps6Rw6Ad
null
2024-08-02T18:12:23Z
https://api.github.com/repos/huggingface/datasets/issues/7086/comments
### Describe the bug

I have been running lm-eval-harness a lot, which has resulted in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this.

### Steps to reproduce the bug

1. Be Me
2. Run `load_dataset("TAUR-Lab/MuSR")`
3. Hit rate limit error
4. Dataset is in .cache/huggingface/datasets
5. ???

### Expected behavior

We should not run into API rate limits if we have cached the dataset.

### Environment info

datasets 2.16.0
python 3.10.4
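One way to force cache-only resolution (a workaround sketch, not a confirmed fix for this report) is the offline environment variables that `datasets` and `huggingface_hub` honor. They must be set before the libraries are imported:

```python
import os

# Workaround sketch: tell `datasets` / `huggingface_hub` to use only the local
# cache. These variables are read at import time, so set them before importing.
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

# With these set, a later load_dataset("TAUR-Lab/MuSR") should resolve from the
# local cache instead of hitting the Hub API (and its rate limits); if the data
# is not cached, it raises instead of retrying the network.
print(os.environ["HF_DATASETS_OFFLINE"])
```

This sidesteps the rate limit but does not explain why a fully cached dataset triggers Hub calls in the first place.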
{ "avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4", "events_url": "https://api.github.com/users/tginart/events{/privacy}", "followers_url": "https://api.github.com/users/tginart/followers", "following_url": "https://api.github.com/users/tginart/following{/other_user}", "gists_url": "https://api.github.com/users/tginart/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tginart", "id": 11379648, "login": "tginart", "node_id": "MDQ6VXNlcjExMzc5NjQ4", "organizations_url": "https://api.github.com/users/tginart/orgs", "received_events_url": "https://api.github.com/users/tginart/received_events", "repos_url": "https://api.github.com/users/tginart/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tginart/subscriptions", "type": "User", "url": "https://api.github.com/users/tginart" }
https://api.github.com/repos/huggingface/datasets/issues/7086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7086/timeline
open
false
7,086
null
null
null
false
2,440,008,618
https://api.github.com/repos/huggingface/datasets/issues/7085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7085/events
[]
null
2024-08-14T16:04:24Z
[]
https://github.com/huggingface/datasets/issues/7085
NONE
null
null
null
[ "@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our t...
[Regression] IterableDataset is broken on 2.20.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7085/reactions" }
I_kwDODunzps6Rb5Oq
null
2024-07-31T13:01:59Z
https://api.github.com/repos/huggingface/datasets/issues/7085/comments
### Describe the bug

In the latest version of datasets there is a major regression: after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times. The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't.

### Steps to reproduce the bug

Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`):

```bash
#!/bin/bash

# List of dataset versions to test
versions=("2.17.0" "2.20.0")

# Loop through each version
for version in "${versions[@]}"; do
    # Install the specific version of the datasets library
    pip3 install -q datasets=="$version" 2>/dev/null

    # Run the Python script
    python3 - <<EOF
from datasets import IterableDataset
from datasets.features.features import Features, Value

def test_gen():
    yield from [{"foo": i} for i in range(10)]

features = Features([("foo", Value("int64"))])
d = IterableDataset.from_generator(test_gen, features=features)
mapped = d.map(lambda row: {"foo": row["foo"] * 2})
column = mapped.select_columns(["foo"])

print("Version $version - Iterate Once:", list(column))
print("Version $version - Iterate Twice:", list(column))
EOF
done
```

The output looks like this:

```
Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Twice: []
```

### Expected behavior

Version 2.20.0 should behave the same as 2.17.0.

### Environment info

`datasets==2.20.0` on any platform.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4", "events_url": "https://api.github.com/users/AjayP13/events{/privacy}", "followers_url": "https://api.github.com/users/AjayP13/followers", "following_url": "https://api.github.com/users/AjayP13/following{/other_user}", "gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AjayP13", "id": 5404177, "login": "AjayP13", "node_id": "MDQ6VXNlcjU0MDQxNzc=", "organizations_url": "https://api.github.com/users/AjayP13/orgs", "received_events_url": "https://api.github.com/users/AjayP13/received_events", "repos_url": "https://api.github.com/users/AjayP13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions", "type": "User", "url": "https://api.github.com/users/AjayP13" }
https://api.github.com/repos/huggingface/datasets/issues/7085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7085/timeline
open
false
7,085
null
null
null
false
2,439,519,534
https://api.github.com/repos/huggingface/datasets/issues/7084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7084/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-07-31T09:05:58Z
[]
https://github.com/huggingface/datasets/issues/7084
NONE
null
null
null
[]
More easily support streaming local files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7084/reactions" }
I_kwDODunzps6RaB0u
null
2024-07-31T09:03:15Z
https://api.github.com/repos/huggingface/datasets/issues/7084/comments
### Feature request

Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.

### Motivation

I have downloaded FineWeb-edu locally and am currently trying to stream the dataset from the local files. I have both the raw parquet files using `huggingface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`. Streaming the files locally does not work well for either file type, for two different reasons.

**Arrow files**

When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})`, resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738), all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue.

**Parquet files**

When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})`, the paths do not get resolved because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other".

### Your contribution

I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally whether the tests work or new tests need to be added. IMO, the easiest option would be to add a `streaming="download_first"` option, but I'm afraid that exceeds my current knowledge of how the datasets library works.

https://github.com/huggingface/datasets/pull/7083
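The symlink situation described above can be illustrated with the standard library alone. The sketch below mimics the hub-cache layout (blob files plus symlinked snapshot paths; all names are made up for illustration) and shows that `pathlib` follows symlinks, so a symlinked data file still reports as a file — the behavior the pattern matching would need:

```python
import os
import tempfile
from pathlib import Path

# Sketch of the hub-cache layout: real bytes live under "blobs", and snapshot
# paths are symlinks to them. File names here are illustrative only; this
# demonstrates stdlib semantics, not the `datasets` code path.
with tempfile.TemporaryDirectory() as root:
    blob = Path(root) / "blobs" / "abc123"
    blob.parent.mkdir()
    blob.write_bytes(b"fake parquet bytes")

    snapshots = Path(root) / "snapshots"
    snapshots.mkdir()
    link = snapshots / "train-00000.parquet"
    os.symlink(blob, link)

    is_symlink = link.is_symlink()
    is_file = link.is_file()  # pathlib follows symlinks, so this is True

print(is_symlink, is_file)
```

A matcher that classifies symlinks as "other" instead of following them would skip `train-00000.parquet` here, which matches the resolution failure reported above.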
{ "avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4", "events_url": "https://api.github.com/users/fschlatt/events{/privacy}", "followers_url": "https://api.github.com/users/fschlatt/followers", "following_url": "https://api.github.com/users/fschlatt/following{/other_user}", "gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fschlatt", "id": 23191892, "login": "fschlatt", "node_id": "MDQ6VXNlcjIzMTkxODky", "organizations_url": "https://api.github.com/users/fschlatt/orgs", "received_events_url": "https://api.github.com/users/fschlatt/received_events", "repos_url": "https://api.github.com/users/fschlatt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions", "type": "User", "url": "https://api.github.com/users/fschlatt" }
https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7084/timeline
open
false
7,084
null
null
null
false
2,439,518,466
https://api.github.com/repos/huggingface/datasets/issues/7083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7083/events
[]
null
2024-08-15T14:08:04Z
[]
https://github.com/huggingface/datasets/pull/7083
NONE
null
false
null
[]
fix streaming from arrow files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7083/reactions" }
PR_kwDODunzps5292hC
{ "diff_url": "https://github.com/huggingface/datasets/pull/7083.diff", "html_url": "https://github.com/huggingface/datasets/pull/7083", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7083.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7083" }
2024-07-31T09:02:42Z
https://api.github.com/repos/huggingface/datasets/issues/7083/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4", "events_url": "https://api.github.com/users/fschlatt/events{/privacy}", "followers_url": "https://api.github.com/users/fschlatt/followers", "following_url": "https://api.github.com/users/fschlatt/following{/other_user}", "gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fschlatt", "id": 23191892, "login": "fschlatt", "node_id": "MDQ6VXNlcjIzMTkxODky", "organizations_url": "https://api.github.com/users/fschlatt/orgs", "received_events_url": "https://api.github.com/users/fschlatt/received_events", "repos_url": "https://api.github.com/users/fschlatt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions", "type": "User", "url": "https://api.github.com/users/fschlatt" }
https://api.github.com/repos/huggingface/datasets/issues/7083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7083/timeline
open
false
7,083
null
null
null
true
2,437,354,975
https://api.github.com/repos/huggingface/datasets/issues/7082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7082/events
[]
null
2024-08-08T08:29:55Z
[]
https://github.com/huggingface/datasets/pull/7082
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Support HTTP authentication in non-streaming mode
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7082/reactions" }
PR_kwDODunzps522dTJ
{ "diff_url": "https://github.com/huggingface/datasets/pull/7082.diff", "html_url": "https://github.com/huggingface/datasets/pull/7082", "merged_at": "2024-08-08T08:24:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/7082.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7082" }
2024-07-30T09:25:49Z
https://api.github.com/repos/huggingface/datasets/issues/7082/comments
Support HTTP authentication in non-streaming mode, by supporting passing HTTP `storage_options` in non-streaming mode.
- Note that currently, HTTP authentication is supported only in streaming mode.

For example, this is necessary if a remote HTTP host requires authentication to download the data.
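A usage sketch of what this enables: the nested keys below follow fsspec's HTTP filesystem, which forwards `client_kwargs` to aiohttp; the URL and token are placeholders, and the `load_dataset` call is shown but not executed since it needs network access:

```python
# Hypothetical example of HTTP authentication via storage_options.
# Key names follow fsspec's HTTPFileSystem (client_kwargs -> aiohttp session);
# the host and token are placeholders.
storage_options = {
    "client_kwargs": {
        "headers": {"Authorization": "Bearer <YOUR_TOKEN>"},
    },
}

# The real call would look like (requires `datasets` and network access):
# from datasets import load_dataset
# ds = load_dataset(
#     "csv",
#     data_files="https://host.example/private/data.csv",
#     storage_options=storage_options,
# )

auth_header = storage_options["client_kwargs"]["headers"]["Authorization"]
print(auth_header.startswith("Bearer "))
```

Before this PR, such `storage_options` only took effect with `streaming=True`; afterwards the same options apply when the files are actually downloaded.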
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7082/timeline
closed
false
7,082
null
2024-08-08T08:24:06Z
null
true
2,437,059,657
https://api.github.com/repos/huggingface/datasets/issues/7081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7081/events
[]
null
2024-07-30T08:30:37Z
[]
https://github.com/huggingface/datasets/pull/7081
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7081). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Set load_from_disk path type as PathLike
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7081/reactions" }
PR_kwDODunzps521cGm
{ "diff_url": "https://github.com/huggingface/datasets/pull/7081.diff", "html_url": "https://github.com/huggingface/datasets/pull/7081", "merged_at": "2024-07-30T08:21:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/7081.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7081" }
2024-07-30T07:00:38Z
https://api.github.com/repos/huggingface/datasets/issues/7081/comments
Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7081/timeline
closed
false
7,081
null
2024-07-30T08:21:50Z
null
true
2,434,275,664
https://api.github.com/repos/huggingface/datasets/issues/7080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7080/events
[]
null
2024-07-29T01:42:43Z
[]
https://github.com/huggingface/datasets/issues/7080
NONE
null
null
null
[]
Generating train split takes a long time
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7080/reactions" }
I_kwDODunzps6RGBlQ
null
2024-07-29T01:42:43Z
https://api.github.com/repos/huggingface/datasets/issues/7080/comments
### Describe the bug

Loading a simple webdataset takes ~45 minutes.

### Steps to reproduce the bug

```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```

### Expected behavior

The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.

### Environment info

- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
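One way to skip the up-front split generation (an assumption about the reporter's goal, not a confirmed fix) is to stream the dataset lazily. The real call is commented out because it needs `datasets` and network access:

```python
# Sketch: streaming=True returns an IterableDataset that reads shards lazily,
# so the "Generating train split" pass over all archives is skipped.
# from datasets import load_dataset
# ds = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M", streaming=True)
# first = next(iter(ds["train"]))

load_kwargs = {"streaming": True}  # the flag that switches to lazy iteration
print(load_kwargs)
```

Streaming trades random access for instant startup, so it is only a workaround where sequential iteration is acceptable.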
{ "avatar_url": "https://avatars.githubusercontent.com/u/35648800?v=4", "events_url": "https://api.github.com/users/alexanderswerdlow/events{/privacy}", "followers_url": "https://api.github.com/users/alexanderswerdlow/followers", "following_url": "https://api.github.com/users/alexanderswerdlow/following{/other_user}", "gists_url": "https://api.github.com/users/alexanderswerdlow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexanderswerdlow", "id": 35648800, "login": "alexanderswerdlow", "node_id": "MDQ6VXNlcjM1NjQ4ODAw", "organizations_url": "https://api.github.com/users/alexanderswerdlow/orgs", "received_events_url": "https://api.github.com/users/alexanderswerdlow/received_events", "repos_url": "https://api.github.com/users/alexanderswerdlow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexanderswerdlow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexanderswerdlow/subscriptions", "type": "User", "url": "https://api.github.com/users/alexanderswerdlow" }
https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7080/timeline
open
false
7,080
null
null
null
false
2,433,363,298
https://api.github.com/repos/huggingface/datasets/issues/7079
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7079/events
[]
null
2024-07-27T20:06:44Z
[]
https://github.com/huggingface/datasets/issues/7079
NONE
completed
null
null
[ "same issue here. @albertvillanova @lhoestq ", "Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https...
HfHubHTTPError: 500 Server Error: Internal Server Error for url:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 4, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/7079/reactions" }
I_kwDODunzps6RCi1i
null
2024-07-27T08:21:03Z
https://api.github.com/repos/huggingface/datasets/issues/7079/comments
### Describe the bug

Newly uploaded datasets, since yesterday, yield an error; old datasets work fine. Seems like the datasets API server returns a 500. I'm getting the same error when I invoke `load_dataset` with my dataset. Long discussion about it here, but I'm not sure anyone from Hugging Face has seen it: https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1

### Steps to reproduce the bug

This API URL: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 responds with:

```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```

### Expected behavior

Return no error with newer datasets. With older datasets I can load the datasets fine.

### Environment info

# Browser

When I access the API in the browser: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3

```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```

### Request headers

```
Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding gzip, deflate, br, zstd
Accept-Language en-US,en;q=0.5
Connection keep-alive
Host huggingface.co
Priority u=1
Sec-Fetch-Dest document
Sec-Fetch-Mode navigate
Sec-Fetch-Site cross-site
Upgrade-Insecure-Requests 1
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0
```

### Response headers

```
X-Firefox-Spdy h2
access-control-allow-origin https://huggingface.co
access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range
content-length 80
content-type application/json; charset=utf-8
cross-origin-opener-policy same-origin
date Fri, 26 Jul 2024 19:09:45 GMT
etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c"
referrer-policy strict-origin-when-cross-origin
vary Origin
via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront)
x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ==
x-amz-cf-pop CPH50-C1
x-cache Error from cloudfront
x-error-message Internal Error - We're working hard to fix this as soon as possible!
x-powered-by huggingface-moon
x-request-id Root=1-66a3f479-026417465ef42f49349fdca1
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye" }
https://api.github.com/repos/huggingface/datasets/issues/7079/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7079/timeline
closed
false
7,079
null
2024-07-27T19:52:30Z
null
false
2,433,270,271
https://api.github.com/repos/huggingface/datasets/issues/7078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7078/events
[]
null
2024-07-27T05:50:57Z
[]
https://github.com/huggingface/datasets/pull/7078
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7078). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix CI test_convert_to_parquet
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7078/reactions" }
PR_kwDODunzps52oq4n
{ "diff_url": "https://github.com/huggingface/datasets/pull/7078.diff", "html_url": "https://github.com/huggingface/datasets/pull/7078", "merged_at": "2024-07-27T05:44:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/7078.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7078" }
2024-07-27T05:32:40Z
https://api.github.com/repos/huggingface/datasets/issues/7078/comments
Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix: - #7074
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7078/timeline
closed
false
7,078
null
2024-07-27T05:44:32Z
null
true
2,432,345,489
https://api.github.com/repos/huggingface/datasets/issues/7077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7077/events
[]
null
2024-07-30T07:52:26Z
[]
https://github.com/huggingface/datasets/issues/7077
NONE
null
null
null
[ "I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug i...
column_names ignored by load_dataset() when loading CSV file
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7077/reactions" }
I_kwDODunzps6Q-qWR
null
2024-07-26T14:18:04Z
https://api.github.com/repos/huggingface/datasets/issues/7077/comments
### Describe the bug load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file. ### Steps to reproduce the bug Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg. ### Expected behavior The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - `huggingface_hub` version: 0.24.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/9130265?v=4", "events_url": "https://api.github.com/users/luismsgomes/events{/privacy}", "followers_url": "https://api.github.com/users/luismsgomes/followers", "following_url": "https://api.github.com/users/luismsgomes/following{/other_user}", "gists_url": "https://api.github.com/users/luismsgomes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/luismsgomes", "id": 9130265, "login": "luismsgomes", "node_id": "MDQ6VXNlcjkxMzAyNjU=", "organizations_url": "https://api.github.com/users/luismsgomes/orgs", "received_events_url": "https://api.github.com/users/luismsgomes/received_events", "repos_url": "https://api.github.com/users/luismsgomes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/luismsgomes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luismsgomes/subscriptions", "type": "User", "url": "https://api.github.com/users/luismsgomes" }
https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7077/timeline
open
false
7,077
null
null
null
false
2,432,275,393
https://api.github.com/repos/huggingface/datasets/issues/7076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7076/events
[]
null
2024-07-27T05:48:17Z
[]
https://github.com/huggingface/datasets/pull/7076
MEMBER
null
true
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7076). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
🧪 Do not mock create_commit
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7076/reactions" }
PR_kwDODunzps52lTDe
{ "diff_url": "https://github.com/huggingface/datasets/pull/7076.diff", "html_url": "https://github.com/huggingface/datasets/pull/7076", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7076.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7076" }
2024-07-26T13:44:42Z
https://api.github.com/repos/huggingface/datasets/issues/7076/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/342922?v=4", "events_url": "https://api.github.com/users/coyotte508/events{/privacy}", "followers_url": "https://api.github.com/users/coyotte508/followers", "following_url": "https://api.github.com/users/coyotte508/following{/other_user}", "gists_url": "https://api.github.com/users/coyotte508/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/coyotte508", "id": 342922, "login": "coyotte508", "node_id": "MDQ6VXNlcjM0MjkyMg==", "organizations_url": "https://api.github.com/users/coyotte508/orgs", "received_events_url": "https://api.github.com/users/coyotte508/received_events", "repos_url": "https://api.github.com/users/coyotte508/repos", "site_admin": false, "starred_url": "https://api.github.com/users/coyotte508/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/coyotte508/subscriptions", "type": "User", "url": "https://api.github.com/users/coyotte508" }
https://api.github.com/repos/huggingface/datasets/issues/7076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7076/timeline
closed
false
7,076
null
2024-07-27T05:48:17Z
null
true
2,432,027,412
https://api.github.com/repos/huggingface/datasets/issues/7075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7075/events
[]
null
2024-07-26T11:46:52Z
[]
https://github.com/huggingface/datasets/pull/7075
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7075). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Update required soxr version from pre-release to release
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7075/reactions" }
PR_kwDODunzps52kciD
{ "diff_url": "https://github.com/huggingface/datasets/pull/7075.diff", "html_url": "https://github.com/huggingface/datasets/pull/7075", "merged_at": "2024-07-26T11:40:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/7075.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7075" }
2024-07-26T11:24:35Z
https://api.github.com/repos/huggingface/datasets/issues/7075/comments
Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7075/timeline
closed
false
7,075
null
2024-07-26T11:40:49Z
null
true
2,431,772,703
https://api.github.com/repos/huggingface/datasets/issues/7074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7074/events
[]
null
2024-07-26T09:23:33Z
[]
https://github.com/huggingface/datasets/pull/7074
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7074). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix CI by temporarily marking test_convert_to_parquet as expected to fail
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7074/reactions" }
PR_kwDODunzps52jkw4
{ "diff_url": "https://github.com/huggingface/datasets/pull/7074.diff", "html_url": "https://github.com/huggingface/datasets/pull/7074", "merged_at": "2024-07-26T09:16:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/7074.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7074" }
2024-07-26T09:03:33Z
https://api.github.com/repos/huggingface/datasets/issues/7074/comments
As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail. Fix #7073. Revert once root cause is fixed.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7074/timeline
closed
false
7,074
null
2024-07-26T09:16:12Z
null
true
2,431,706,568
https://api.github.com/repos/huggingface/datasets/issues/7073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7073/events
[]
null
2024-07-27T05:48:02Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7073
MEMBER
completed
null
null
[ "Any recent change in the API backend rejecting parameter `revision=\"refs/pr/1\"` to `HfApi.preupload_lfs_files`?\r\n```\r\nf\"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}\"\r\n\r\nhttps://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs...
CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7073/reactions" }
I_kwDODunzps6Q8OXI
null
2024-07-26T08:27:41Z
https://api.github.com/repos/huggingface/datasets/issues/7073/comments
See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756 ``` FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64) Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1. Invalid rev id: refs/pr/1 ``` ``` /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet dataset.push_to_hub( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub api.preupload_lfs_files( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files _fetch_upload_modes( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn return fn(*args, **kwargs) /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes hf_raise_for_status(resp) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7073/timeline
closed
false
7,073
null
2024-07-26T09:16:13Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,430,577,916
https://api.github.com/repos/huggingface/datasets/issues/7072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7072/events
[]
null
2024-07-25T20:36:11Z
[]
https://github.com/huggingface/datasets/issues/7072
NONE
not_planned
null
null
[]
nm
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7072/reactions" }
I_kwDODunzps6Q36z8
null
2024-07-25T17:03:24Z
https://api.github.com/repos/huggingface/datasets/issues/7072/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4", "events_url": "https://api.github.com/users/brettdavies/events{/privacy}", "followers_url": "https://api.github.com/users/brettdavies/followers", "following_url": "https://api.github.com/users/brettdavies/following{/other_user}", "gists_url": "https://api.github.com/users/brettdavies/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brettdavies", "id": 26392883, "login": "brettdavies", "node_id": "MDQ6VXNlcjI2MzkyODgz", "organizations_url": "https://api.github.com/users/brettdavies/orgs", "received_events_url": "https://api.github.com/users/brettdavies/received_events", "repos_url": "https://api.github.com/users/brettdavies/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brettdavies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brettdavies/subscriptions", "type": "User", "url": "https://api.github.com/users/brettdavies" }
https://api.github.com/repos/huggingface/datasets/issues/7072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7072/timeline
closed
false
7,072
null
2024-07-25T20:36:11Z
null
false
2,430,313,011
https://api.github.com/repos/huggingface/datasets/issues/7071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7071/events
[]
null
2024-07-25T15:36:59Z
[]
https://github.com/huggingface/datasets/issues/7071
NONE
null
null
null
[]
Filter hangs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7071/reactions" }
I_kwDODunzps6Q26Iz
null
2024-07-25T15:29:05Z
https://api.github.com/repos/huggingface/datasets/issues/7071/comments
### Describe the bug When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I have converted the data to the Parquet format. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('lcolonn/patfig', split='test') ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y') ``` Eventually I ctrl+C and I obtain this stack trace: ``` >>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y') Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper out = func(dataset, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter indices = self.map( ^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single batch = apply_function_on_filtered_inputs( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function num_examples = len(batch[next(iter(batch.keys()))]) ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__ value = self.format(key) ^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format return self.formatter.format_column(self.pa_table.select([key])) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column return self.features.decode_column(column, column_name) if self.features else column ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column [decode_nested_example(self[column_name], value) if value is not None else None for value in column] File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp> [decode_nested_example(self[column_name], value) if value is not None else None for value in column] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example image.load() # to avoid "Too many open files" errors ^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load n, err_code = decoder.decode(b) ^^^^^^^^^^^^^^^^^ KeyboardInterrupt ``` Warning! This can even seem to cause some computers to crash. ### Expected behavior Should return the filtered dataset ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.24.0 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/61711045?v=4", "events_url": "https://api.github.com/users/lucienwalewski/events{/privacy}", "followers_url": "https://api.github.com/users/lucienwalewski/followers", "following_url": "https://api.github.com/users/lucienwalewski/following{/other_user}", "gists_url": "https://api.github.com/users/lucienwalewski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucienwalewski", "id": 61711045, "login": "lucienwalewski", "node_id": "MDQ6VXNlcjYxNzExMDQ1", "organizations_url": "https://api.github.com/users/lucienwalewski/orgs", "received_events_url": "https://api.github.com/users/lucienwalewski/received_events", "repos_url": "https://api.github.com/users/lucienwalewski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucienwalewski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucienwalewski/subscriptions", "type": "User", "url": "https://api.github.com/users/lucienwalewski" }
https://api.github.com/repos/huggingface/datasets/issues/7071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7071/timeline
open
false
7,071
null
null
null
false
2,430,285,235
https://api.github.com/repos/huggingface/datasets/issues/7070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7070/events
[]
null
2024-07-25T15:19:34Z
[]
https://github.com/huggingface/datasets/issues/7070
NONE
null
null
null
[]
how set_transform affects batch size?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7070/reactions" }
I_kwDODunzps6Q2zWz
null
2024-07-25T15:19:34Z
https://api.github.com/repos/huggingface/datasets/issues/7070/comments
### Describe the bug I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this: ``` def prepare_dataset(batch): input_features = processor(batch["audio"], sampling_rate=16000).input_features[0] input_length = len(input_features) labels = processor.tokenizer(batch["text"], padding=False).input_ids batch = { "input_features": [input_features], "input_length": [input_length], "labels": [labels] } return batch train_ds.set_transform(prepare_dataset) val_ds.set_transform(prepare_dataset) ``` After this, I also had to change the DataCollatorCTCWithPadding class like this: ``` @dataclass class DataCollatorCTCWithPadding: processor: Wav2Vec2BertProcessor padding: Union[bool, str] = True def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # Separate input_features and labels input_features = [{"input_features": feature["input_features"][0]} for feature in features] labels = [feature["labels"][0] for feature in features] # Pad input features batch = self.processor.pad( input_features, padding=self.padding, return_tensors="pt", ) # Pad and process labels label_features = self.processor.tokenizer.pad( {"input_ids": labels}, padding=self.padding, return_tensors="pt", ) labels = label_features["input_ids"] attention_mask = label_features["attention_mask"] # Replace padding with -100 to ignore these tokens during loss calculation labels = labels.masked_fill(attention_mask.ne(1), -100) batch["labels"] = labels return batch ``` But now a strange thing is happening: no matter how much I increase the batch size, the amount of GPU VRAM usage does not change, while the total number of steps in the progress bar (logging) changes. Is this normal or have I made a mistake? ### Steps to reproduce the bug I can share my code if needed. ### Expected behavior The set_transform function should be applied to as many examples as the batch size, and the result given to the model as one batch. ### Environment info all updated versions
{ "avatar_url": "https://avatars.githubusercontent.com/u/103993288?v=4", "events_url": "https://api.github.com/users/VafaKnm/events{/privacy}", "followers_url": "https://api.github.com/users/VafaKnm/followers", "following_url": "https://api.github.com/users/VafaKnm/following{/other_user}", "gists_url": "https://api.github.com/users/VafaKnm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VafaKnm", "id": 103993288, "login": "VafaKnm", "node_id": "U_kgDOBjLPyA", "organizations_url": "https://api.github.com/users/VafaKnm/orgs", "received_events_url": "https://api.github.com/users/VafaKnm/received_events", "repos_url": "https://api.github.com/users/VafaKnm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VafaKnm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VafaKnm/subscriptions", "type": "User", "url": "https://api.github.com/users/VafaKnm" }
https://api.github.com/repos/huggingface/datasets/issues/7070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7070/timeline
open
false
7,070
null
null
null
false
2,429,281,339
https://api.github.com/repos/huggingface/datasets/issues/7069
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7069/events
[]
null
2024-07-31T07:10:07Z
[]
https://github.com/huggingface/datasets/pull/7069
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7069). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @Wauplin maybe it's a `huggingface_hub` bug ?\r\n\r\nEDIT: ah actually the issue is ...
Fix push_to_hub by not calling create_branch if PR branch
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7069/reactions" }
PR_kwDODunzps52betB
{ "diff_url": "https://github.com/huggingface/datasets/pull/7069.diff", "html_url": "https://github.com/huggingface/datasets/pull/7069", "merged_at": "2024-07-30T10:51:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/7069.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7069" }
2024-07-25T07:50:04Z
https://api.github.com/repos/huggingface/datasets/issues/7069/comments
Fix push_to_hub by not calling create_branch if PR branch (e.g. `refs/pr/1`). Note that currently create_branch raises a 400 Bad Request error if the user passes a PR branch (e.g. `refs/pr/1`). EDIT: ~~Fix push_to_hub by not calling create_branch if branch exists.~~ Note that currently create_branch raises a 403 Forbidden error even if all these conditions are met: - exist_ok is passed - the branch already exists - the user does not have WRITE permission Fix #7067. Related issue: - https://github.com/huggingface/huggingface_hub/issues/2419
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7069/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7069/timeline
closed
false
7,069
null
2024-07-30T10:51:01Z
null
true
2,426,657,434
https://api.github.com/repos/huggingface/datasets/issues/7068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7068/events
[]
null
2024-07-29T07:02:07Z
[]
https://github.com/huggingface/datasets/pull/7068
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7068). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix prepare_single_hop_path_and_storage_options
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7068/reactions" }
PR_kwDODunzps52SwXS
{ "diff_url": "https://github.com/huggingface/datasets/pull/7068.diff", "html_url": "https://github.com/huggingface/datasets/pull/7068", "merged_at": "2024-07-29T06:56:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/7068.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7068" }
2024-07-24T05:52:34Z
https://api.github.com/repos/huggingface/datasets/issues/7068/comments
Fix `_prepare_single_hop_path_and_storage_options`:
- Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs
- Do not overwrite passed `storage_options` nested values:
  - Before, when passed ```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```, it was overwritten to ```{"https": {"client_kwargs": {"trust_env": True}}}```
  - Now, the result combines both: ```{"https": {"client_kwargs": {"trust_env": True, "raise_for_status": True}}}```
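The nested-combination behavior described above can be illustrated with a recursive dict merge. This is a minimal, hypothetical sketch (the helper name `merge_nested` is not part of `datasets`; it only models the intended result):

```python
def merge_nested(base, override):
    """Recursively merge two dicts: nested keys from both survive,
    and values from `override` win on leaf-level conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_nested(merged[key], value)
        else:
            merged[key] = value
    return merged

# Defaults set internally vs. storage_options passed via DownloadConfig
defaults = {"https": {"client_kwargs": {"trust_env": True}}}
passed = {"https": {"client_kwargs": {"raise_for_status": True}}}
combined = merge_nested(defaults, passed)
# combined keeps both nested client_kwargs entries instead of overwriting one
```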
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7068/timeline
closed
false
7,068
null
2024-07-29T06:56:15Z
null
true
2,425,460,168
https://api.github.com/repos/huggingface/datasets/issues/7067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7067/events
[]
null
2024-07-30T10:51:02Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7067
NONE
completed
null
null
[ "Many users have encountered the same issue, which has caused inconvenience.\r\n\r\nhttps://discuss.huggingface.co/t/convert-to-parquet-fails-for-datasets-with-multiple-configs/86733\r\n", "Thanks for reporting.\r\n\r\nI will make the code more robust.", "I have opened an issue in the huggingface-hub repo:\r\n-...
Convert_to_parquet fails for datasets with multiple configs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7067/reactions" }
I_kwDODunzps6QkZXI
null
2024-07-23T15:09:33Z
https://api.github.com/repos/huggingface/datasets/issues/7067/comments
If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error:

```
Traceback (most recent call last):
  File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module>
    sys.exit(main())
  File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main
    service.run()
  File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run
    dataset.push_to_hub(
  File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub
    api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True)
  File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch
    hf_raise_for_status(response)
  File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
    raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f)

Bad request:
Invalid reference for a branch: refs/pr/1
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/97585031?v=4", "events_url": "https://api.github.com/users/HuangZhen02/events{/privacy}", "followers_url": "https://api.github.com/users/HuangZhen02/followers", "following_url": "https://api.github.com/users/HuangZhen02/following{/other_user}", "gists_url": "https://api.github.com/users/HuangZhen02/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HuangZhen02", "id": 97585031, "login": "HuangZhen02", "node_id": "U_kgDOBdEHhw", "organizations_url": "https://api.github.com/users/HuangZhen02/orgs", "received_events_url": "https://api.github.com/users/HuangZhen02/received_events", "repos_url": "https://api.github.com/users/HuangZhen02/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HuangZhen02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HuangZhen02/subscriptions", "type": "User", "url": "https://api.github.com/users/HuangZhen02" }
https://api.github.com/repos/huggingface/datasets/issues/7067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7067/timeline
closed
false
7,067
null
2024-07-30T10:51:02Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,425,125,160
https://api.github.com/repos/huggingface/datasets/issues/7066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7066/events
[]
null
2024-07-23T12:43:59Z
[]
https://github.com/huggingface/datasets/issues/7066
MEMBER
null
null
null
[]
One subset per file in repo ?
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7066/reactions" }
I_kwDODunzps6QjHko
null
2024-07-23T12:43:59Z
https://api.github.com/repos/huggingface/datasets/issues/7066/comments
Right now we consider all the files of a dataset to be the same data, e.g.

```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```

but in cases like this, each file is actually a different subset of the dataset and should be loaded separately

```
many_subsets_dataset/
├── animals.jsonl
├── trees.jsonl
└── metadata.jsonl
```

It would be nice to detect those subsets automatically using a simple heuristic. For example, we could group files together if their path names are the same except for some digits?
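A minimal sketch of that digit-grouping heuristic, assuming digit runs are replaced by a placeholder (the helper name and the `*` marker are illustrative, not part of `datasets`):

```python
import re
from collections import defaultdict

def group_files_by_pattern(filenames):
    """Group file names that are identical except for runs of digits.

    train0.jsonl / train1.jsonl collapse to one key ("train*.jsonl"),
    while animals.jsonl and trees.jsonl stay in separate groups.
    """
    groups = defaultdict(list)
    for name in filenames:
        key = re.sub(r"\d+", "*", name)  # normalize digit runs
        groups[key].append(name)
    return dict(groups)

single = group_files_by_pattern(["train0.jsonl", "train1.jsonl", "train2.jsonl"])
many = group_files_by_pattern(["animals.jsonl", "trees.jsonl", "metadata.jsonl"])
# `single` yields one group; `many` yields three separate subsets
```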
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7066/timeline
open
false
7,066
null
null
null
false
2,424,734,953
https://api.github.com/repos/huggingface/datasets/issues/7065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7065/events
[]
null
2024-07-23T09:37:56Z
[]
https://github.com/huggingface/datasets/issues/7065
NONE
null
null
null
[]
Cannot get item after loading from disk and then converting to iterable.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7065/reactions" }
I_kwDODunzps6QhoTp
null
2024-07-23T09:37:56Z
https://api.github.com/repos/huggingface/datasets/issues/7065/comments
### Describe the bug The dataset generated from local file works fine. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` But after saving it to disk and then loading it from disk, I cannot get data as expected. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ds.save_to_disk("./train") ds = datasets.load_from_disk("./train") ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` After a long time waiting, an error occurs: ``` Loading dataset from disk: 100%|█████████████████████████████████████████████████████████████████████████| 165/165 [00:00<00:00, 6422.18it/s] Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data data = self._data_queue.get(timeout=timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get if not self._poll(timeout): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll return self._poll(timeout) File 
"/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll r = wait([self], timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait ready = selector.select(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select fd_event_list = self._selector.poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module> cli.main() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main run() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file runpy.run_path(target, run_name="__main__") File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path return _run_module_code(code, init_globals, run_name, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, 
in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code exec(code, run_globals) File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module> for batch in dataloader: File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__ data = self._next_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data idx, data = self._get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data success, data = self._try_get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly ```

It seems that streaming is not supported by `load_from_disk`, so does that mean I cannot convert it to an iterable?

### Steps to reproduce the bug
1. Create a `Dataset` from local files with `from_dict`
2. Save it to disk with `save_to_disk`
3. Load it from disk with `load_from_disk`
4. Convert to iterable with `to_iterable_dataset`
5. Loop the dataset

### Expected behavior
Get items faster than the original dataset generated from dict.

### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.23.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/21305646?v=4", "events_url": "https://api.github.com/users/happyTonakai/events{/privacy}", "followers_url": "https://api.github.com/users/happyTonakai/followers", "following_url": "https://api.github.com/users/happyTonakai/following{/other_user}", "gists_url": "https://api.github.com/users/happyTonakai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/happyTonakai", "id": 21305646, "login": "happyTonakai", "node_id": "MDQ6VXNlcjIxMzA1NjQ2", "organizations_url": "https://api.github.com/users/happyTonakai/orgs", "received_events_url": "https://api.github.com/users/happyTonakai/received_events", "repos_url": "https://api.github.com/users/happyTonakai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/happyTonakai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/happyTonakai/subscriptions", "type": "User", "url": "https://api.github.com/users/happyTonakai" }
https://api.github.com/repos/huggingface/datasets/issues/7065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7065/timeline
open
false
7,065
null
null
null
false
2,424,613,104
https://api.github.com/repos/huggingface/datasets/issues/7064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7064/events
[]
null
2024-07-25T13:51:25Z
[]
https://github.com/huggingface/datasets/pull/7064
CONTRIBUTOR
null
false
null
[ "Looks good to me ! :)\r\n\r\nyou might want to add the `map` num_proc argument as well, for people who want to make it run faster", "Thanks for the feedback @lhoestq! The last commits include:\r\n- Adding the `num_proc` parameter to `batch`\r\n- Adding tests similar to the one done for `IterableDataset.batch()`\...
Add `batch` method to `Dataset` class
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7064/reactions" }
PR_kwDODunzps52Lz2-
{ "diff_url": "https://github.com/huggingface/datasets/pull/7064.diff", "html_url": "https://github.com/huggingface/datasets/pull/7064", "merged_at": "2024-07-25T13:45:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/7064.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7064" }
2024-07-23T08:40:43Z
https://api.github.com/repos/huggingface/datasets/issues/7064/comments
This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation likewise uses the existing `map` method for efficient batching of examples.

Key changes:
- Add `batch` method to the `Dataset` class in `arrow_dataset.py`
- Utilize the `map` method for batching

Closes #7063

Once the approach is approved, I will create the tests and update the documentation.
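As a rough illustration of the transform such a `map`-based batching applies to a batch of examples — a self-contained sketch in plain Python, not the actual `arrow_dataset.py` implementation:

```python
def batch_examples(examples, batch_size, drop_last_batch=False):
    """Turn a column-wise dict of lists into a dict of lists-of-lists,
    one inner list per batch (mirroring what a batched map fn would do)."""
    n = len(next(iter(examples.values())))
    # optionally drop the trailing incomplete batch
    end = n - n % batch_size if drop_last_batch else n
    return {
        col: [values[i : i + batch_size] for i in range(0, end, batch_size)]
        for col, values in examples.items()
    }

batched = batch_examples({"x": [1, 2, 3, 4, 5]}, batch_size=2)
# batched["x"] == [[1, 2], [3, 4], [5]]
```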
{ "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lappemic", "id": 61876623, "login": "lappemic", "node_id": "MDQ6VXNlcjYxODc2NjIz", "organizations_url": "https://api.github.com/users/lappemic/orgs", "received_events_url": "https://api.github.com/users/lappemic/received_events", "repos_url": "https://api.github.com/users/lappemic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "type": "User", "url": "https://api.github.com/users/lappemic" }
https://api.github.com/repos/huggingface/datasets/issues/7064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7064/timeline
closed
false
7,064
null
2024-07-25T13:45:20Z
null
true
2,424,488,648
https://api.github.com/repos/huggingface/datasets/issues/7063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7063/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-07-25T13:45:21Z
[]
https://github.com/huggingface/datasets/issues/7063
CONTRIBUTOR
completed
null
null
[]
Add `batch` method to `Dataset`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7063/reactions" }
I_kwDODunzps6QgsLI
null
2024-07-23T07:36:59Z
https://api.github.com/repos/huggingface/datasets/issues/7063/comments
### Feature request
Add a `batch` method to the `Dataset` class, similar to the one recently implemented for `IterableDataset` in PR #7054.

### Motivation
Batched iteration speeds up data loading significantly (see e.g. #6279).

### Your contribution
I plan to open a PR to implement this.
{ "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lappemic", "id": 61876623, "login": "lappemic", "node_id": "MDQ6VXNlcjYxODc2NjIz", "organizations_url": "https://api.github.com/users/lappemic/orgs", "received_events_url": "https://api.github.com/users/lappemic/received_events", "repos_url": "https://api.github.com/users/lappemic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "type": "User", "url": "https://api.github.com/users/lappemic" }
https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7063/timeline
closed
false
7,063
null
2024-07-25T13:45:21Z
null
false
2,424,467,484
https://api.github.com/repos/huggingface/datasets/issues/7062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7062/events
[]
null
2024-07-23T14:28:27Z
[]
https://github.com/huggingface/datasets/pull/7062
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7062). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Avoid calling http_head for non-HTTP URLs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7062/reactions" }
PR_kwDODunzps52LUPR
{ "diff_url": "https://github.com/huggingface/datasets/pull/7062.diff", "html_url": "https://github.com/huggingface/datasets/pull/7062", "merged_at": "2024-07-23T14:21:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/7062.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7062" }
2024-07-23T07:25:09Z
https://api.github.com/repos/huggingface/datasets/issues/7062/comments
Avoid calling `http_head` for non-HTTP URLs by adding an `else` statement. Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,... I discovered this while working on an unrelated issue.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7062/timeline
closed
false
7,062
null
2024-07-23T14:21:08Z
null
true
2,423,786,881
https://api.github.com/repos/huggingface/datasets/issues/7061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7061/events
[]
null
2024-07-22T21:18:12Z
[]
https://github.com/huggingface/datasets/issues/7061
NONE
null
null
null
[]
Custom Dataset | Still Raise Error while handling errors in _generate_examples
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions" }
I_kwDODunzps6QeA2B
null
2024-07-22T21:18:12Z
https://api.github.com/repos/huggingface/datasets/issues/7061/comments
### Describe the bug

I followed this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading files without raising an exception and exiting the execution.

```
def _generate_examples(self, filepaths):
    errors = []
    id_ = 0
    for filepath in filepaths:
        try:
            with open(filepath, 'r') as f:
                for line in f:
                    json_obj = json.loads(line)
                    yield id_, json_obj
                    id_ += 1
        except Exception as exc:
            logger.error(f"error occur at filepath: {filepath}")
            errors.append(exc)
```

It seems the logger.error message is printed, but an exception is still raised and the run exits.

```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl
Traceback (most recent call last):
  File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
    json_obj = json.loads(line)
  File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)
Generating train split: 0 examples [00:06, ?
examples/s]> RemoteTraceback: """ Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single num_examples, num_bytes = writer.finalize() File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize raise SchemaInferenceError("Please pass `features` or at least one example when writing data") datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data The above exception was the direct cause of the following exception: Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset """ The above exception was the direct cause of the following exception: │ │ │ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. 
│ │ py:1377 in <listcomp> │ │ │ │ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │ │ 1375 │ │ │ │ │ break │ │ 1376 │ │ # we get the result in case there's an error to raise │ │ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │ │ 1378 │ │ │ │ ╭──────────────────────────────── locals ─────────────────────────────────╮ │ │ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │ │ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │ │ ╰─────────────────────────────────────────────────────────────────────────╯ │ │ │ │ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │ │ in get │ │ │ │ 768 │ │ if self._success: │ │ 769 │ │ │ return self._value │ │ 770 │ │ else: │ │ ❱ 771 │ │ │ raise self._value │ │ 772 │ │ │ 773 │ def _set(self, i, obj): │ │ 774 │ │ self._success, self._value = obj │ │ │ │ ╭────────────────────────────── locals ──────────────────────────────╮ │ │ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │ │ │ timeout = None │ │ │ ╰────────────────────────────────────────────────────────────────────╯ │ DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug same as above ### Expected behavior should handle error and continue reading remaining files ### Environment info python 3.9
{ "avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4", "events_url": "https://api.github.com/users/hahmad2008/events{/privacy}", "followers_url": "https://api.github.com/users/hahmad2008/followers", "following_url": "https://api.github.com/users/hahmad2008/following{/other_user}", "gists_url": "https://api.github.com/users/hahmad2008/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hahmad2008", "id": 68266028, "login": "hahmad2008", "node_id": "MDQ6VXNlcjY4MjY2MDI4", "organizations_url": "https://api.github.com/users/hahmad2008/orgs", "received_events_url": "https://api.github.com/users/hahmad2008/received_events", "repos_url": "https://api.github.com/users/hahmad2008/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hahmad2008/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hahmad2008/subscriptions", "type": "User", "url": "https://api.github.com/users/hahmad2008" }
https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7061/timeline
open
false
7,061
null
null
null
false
2,423,188,419
https://api.github.com/repos/huggingface/datasets/issues/7060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7060/events
[]
null
2024-07-23T13:28:44Z
[]
https://github.com/huggingface/datasets/pull/7060
NONE
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7060). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
WebDataset BuilderConfig
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7060/reactions" }
PR_kwDODunzps52G71g
{ "diff_url": "https://github.com/huggingface/datasets/pull/7060.diff", "html_url": "https://github.com/huggingface/datasets/pull/7060", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7060.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7060" }
2024-07-22T15:41:07Z
https://api.github.com/repos/huggingface/datasets/issues/7060/comments
This PR adds `WebDatasetConfig`. Closes #7055
{ "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hlky", "id": 106811348, "login": "hlky", "node_id": "U_kgDOBl3P1A", "organizations_url": "https://api.github.com/users/hlky/orgs", "received_events_url": "https://api.github.com/users/hlky/received_events", "repos_url": "https://api.github.com/users/hlky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "type": "User", "url": "https://api.github.com/users/hlky" }
https://api.github.com/repos/huggingface/datasets/issues/7060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7060/timeline
closed
false
7,060
null
2024-07-23T13:28:44Z
null
true
2,422,827,892
https://api.github.com/repos/huggingface/datasets/issues/7059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7059/events
[]
null
2024-07-22T13:02:53Z
[]
https://github.com/huggingface/datasets/issues/7059
NONE
null
null
null
[]
None values are skipped when reading jsonl in subobjects
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7059/reactions" }
I_kwDODunzps6QaWt0
null
2024-07-22T13:02:42Z
https://api.github.com/repos/huggingface/datasets/issues/7059/comments
### Describe the bug I have been fighting against my machine since this morning only to find out this is some kind of bug. When loading a dataset composed of `metadata.jsonl`, nullable values (Optional[str]) can be ignored by the parser, shifting things around. For example, here are two versions of the same dataset: [not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz) [buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz) ### Steps to reproduce the bug 1. Load the `buggy.tar.gz` dataset 2. Print the baselines: `dts = load_dataset("./data")["train"][0]["baselines"]` 3. Load the `not-buggy.tar.gz` dataset 4. Print the baselines: `dts = load_dataset("./data")["train"][0]["baselines"]` ### Expected behavior Both should have 4 baseline entries: 1. Buggy should have None followed by three lists 2. Non-buggy should have four lists, the first of which should be an empty list. Case 1 does not work while case 2 does, even though None is accepted in positions other than the first. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4", "events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}", "followers_url": "https://api.github.com/users/PonteIneptique/followers", "following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}", "gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PonteIneptique", "id": 1929830, "login": "PonteIneptique", "node_id": "MDQ6VXNlcjE5Mjk4MzA=", "organizations_url": "https://api.github.com/users/PonteIneptique/orgs", "received_events_url": "https://api.github.com/users/PonteIneptique/received_events", "repos_url": "https://api.github.com/users/PonteIneptique/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions", "type": "User", "url": "https://api.github.com/users/PonteIneptique" }
https://api.github.com/repos/huggingface/datasets/issues/7059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7059/timeline
open
false
7,059
null
null
null
false
2,422,560,355
https://api.github.com/repos/huggingface/datasets/issues/7058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7058/events
[]
null
2024-07-22T10:49:20Z
[]
https://github.com/huggingface/datasets/issues/7058
CONTRIBUTOR
null
null
null
[]
New feature type: Document
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7058/reactions" }
I_kwDODunzps6QZVZj
null
2024-07-22T10:49:20Z
https://api.github.com/repos/huggingface/datasets/issues/7058/comments
It would be useful for PDF. https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7058/timeline
open
false
7,058
null
null
null
false
2,422,498,520
https://api.github.com/repos/huggingface/datasets/issues/7057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7057/events
[]
null
2024-07-22T10:34:14Z
[]
https://github.com/huggingface/datasets/pull/7057
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7057). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Update load_hub.mdx
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7057/reactions" }
PR_kwDODunzps52EjGC
{ "diff_url": "https://github.com/huggingface/datasets/pull/7057.diff", "html_url": "https://github.com/huggingface/datasets/pull/7057", "merged_at": "2024-07-22T10:28:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/7057.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7057" }
2024-07-22T10:17:46Z
https://api.github.com/repos/huggingface/datasets/issues/7057/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
https://api.github.com/repos/huggingface/datasets/issues/7057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7057/timeline
closed
false
7,057
null
2024-07-22T10:28:10Z
null
true
2,422,192,257
https://api.github.com/repos/huggingface/datasets/issues/7056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7056/events
[]
null
2024-07-22T15:37:01Z
[]
https://github.com/huggingface/datasets/pull/7056
CONTRIBUTOR
null
false
null
[ "Oh cool !\r\n\r\nThe time it takes to resume depends on the expected maximum distance in this case right ? Do you know its relationship with $B$ ?\r\n\r\nIn your test it already as high as 15k for $B=1024$, which is ok for text datasets but is maybe not ideal for datasets with heavy samples like audio/image/video ...
Make `BufferShuffledExamplesIterable` resumable
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7056/reactions" }
PR_kwDODunzps52DgOu
{ "diff_url": "https://github.com/huggingface/datasets/pull/7056.diff", "html_url": "https://github.com/huggingface/datasets/pull/7056", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7056.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7056" }
2024-07-22T07:50:02Z
https://api.github.com/repos/huggingface/datasets/issues/7056/comments
This PR aims to implement a resumable `BufferShuffledExamplesIterable`. Instead of saving the entire buffer content, which is very memory-intensive, the newly implemented `BufferShuffledExamplesIterable` saves only the minimal state necessary for recovery, e.g., the random generator states and the state of the first example in the buffer dict. The idea is that since the buffer size is limited, even if the entire buffer is discarded, we can rebuild it as long as the state of the oldest example is recorded. For buffer size $B$, the expected distance between when an example is pushed and when it is yielded is $d = \sum_{k=1}^{\infty} k\frac{1}{B} (1 - \frac{1}{B} )^{k-1} =B$. Simulation experiments support these claims: ```py from random import randint BUFFER_SIZE = 1024 dists = [] buffer = [] for i in range(10000000): if i < BUFFER_SIZE: buffer.append(i) else: index = randint(0, BUFFER_SIZE - 1) dists.append(i - buffer[index]) buffer[index] = i print(f"MIN DIST: {min(dists)}\nMAX DIST: {max(dists)}\nAVG DIST: {sum(dists) / len(dists):.2f}\n") ``` which produces the following output: ```py MIN DIST: 1 MAX DIST: 15136 AVG DIST: 1023.95 ``` The overall time for reconstructing the buffer and recovery should not be too long. 
The following code mimics the cases of resuming online tokenization by `datasets` and `StatefulDataLoader` under distributed scenarios, ```py import pickle import time from itertools import chain from typing import Any, Dict, List import torch from datasets import load_dataset from torchdata.stateful_dataloader import StatefulDataLoader from tqdm import tqdm from transformers import AutoTokenizer, DataCollatorForLanguageModeling tokenizer = AutoTokenizer.from_pretrained('fla-hub/gla-1.3B-100B') tokenizer.pad_token = tokenizer.eos_token data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) torch.manual_seed(42) def tokenize(examples: Dict[str, List[Any]]) -> Dict[str, List[List[int]]]: input_ids = tokenizer(examples['text'])['input_ids'] input_ids = list(chain(*input_ids)) total_length = len(input_ids) chunk_size = 2048 total_length = (total_length // chunk_size) * chunk_size # the last chunk smaller than chunk_size will be discarded return {'input_ids': [input_ids[i: i+chunk_size] for i in range(0, total_length, chunk_size)]} batch_size = 16 num_workers = 5 context_length = 2048 rank = 1 world_size = 32 prefetch_factor = 2 steps = 2048 path = 'fla-hub/slimpajama-test' dataset = load_dataset( path=path, split='train', streaming=True, trust_remote_code=True ) dataset = dataset.map(tokenize, batched=True, remove_columns=next(iter(dataset)).keys()) dataset = dataset.shuffle(seed=42) loader = StatefulDataLoader(dataset=dataset, batch_size=batch_size, collate_fn=data_collator, num_workers=num_workers, persistent_workers=False, prefetch_factor=prefetch_factor) start = time.time() for i, batch in tqdm(enumerate(loader)): if i == 0: print(f'{i}\n{batch["input_ids"]}') if i == steps - 1: print(f'{i}\n{batch["input_ids"]}') state_dict = loader.state_dict() if i == steps: print(f'{i}\n{batch["input_ids"]}') break print(f"{time.time() - start:.2f}s elapsed") print(f"{len(pickle.dumps(state_dict)) / 1024**2:.2f}MB states in total") for worker in 
state_dict['_snapshot']['_worker_snapshots'].keys(): print(f"{worker} {len(pickle.dumps(state_dict['_snapshot']['_worker_snapshots'][worker])) / 1024**2:.2f}MB") print(state_dict['_snapshot']['_worker_snapshots']['worker_0']['dataset_state']) loader = StatefulDataLoader(dataset=dataset, batch_size=batch_size, collate_fn=data_collator, num_workers=num_workers, persistent_workers=False, prefetch_factor=prefetch_factor) print("Loading state dict") loader.load_state_dict(state_dict) start = time.time() for batch in loader: print(batch['input_ids']) break print(f"{time.time() - start:.2f}s elapsed") ``` and the outputs are ```py 0 tensor([[ 909, 395, 19082, ..., 13088, 16232, 395], [ 601, 28705, 28770, ..., 28733, 923, 288], [21753, 15071, 13977, ..., 9369, 28723, 415], ..., [21763, 28751, 20300, ..., 28781, 28734, 4775], [ 354, 396, 10214, ..., 298, 429, 28770], [ 333, 6149, 28768, ..., 2773, 340, 351]]) 2047 tensor([[28723, 415, 3889, ..., 272, 3065, 2609], [ 403, 3214, 3629, ..., 403, 21163, 16434], [28723, 13, 28749, ..., 28705, 28750, 28734], ..., [ 2778, 2251, 28723, ..., 354, 684, 429], [ 5659, 298, 1038, ..., 5290, 297, 22153], [ 938, 28723, 1537, ..., 9123, 28733, 12154]]) 2048 tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739], [ 415, 23347, 622, ..., 3937, 2426, 28725], [28745, 4345, 28723, ..., 338, 28725, 583], ..., [ 1670, 28709, 5809, ..., 28734, 28760, 393], [ 340, 1277, 624, ..., 325, 28790, 1329], [ 523, 1144, 3409, ..., 359, 359, 17422]]) 65.97s elapsed 0.00MB states in total worker_0 0.00MB worker_1 0.00MB worker_2 0.00MB worker_3 0.00MB worker_4 0.00MB {'ex_iterable': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 14000}, 'num_examples_since_previous_state': 166, 'previous_state_example_idx': 7394, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 13000}}, 'num_taken': 6560, 'global_example_idx': 7560, 'buffer_state_dict': {'num_taken': 6560, 'global_example_idx': 356, 'index_offset': 0, 'first_state': {'ex_iterable': {'shard_idx': 0, 
'shard_example_idx': 1000}, 'num_examples_since_previous_state': 356, 'previous_state_example_idx': 0, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0}}, 'bit_generator_state': {'state': {'state': 274674114334540486603088602300644985544, 'inc': 332724090758049132448979897138935081983}, 'bit_generator': 'PCG64', 'has_uint32': 0, 'uinteger': 0}}} Loading state dict tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739], [ 415, 23347, 622, ..., 3937, 2426, 28725], [28745, 4345, 28723, ..., 338, 28725, 583], ..., [ 1670, 28709, 5809, ..., 28734, 28760, 393], [ 340, 1277, 624, ..., 325, 28790, 1329], [ 523, 1144, 3409, ..., 359, 359, 17422]]) 24.60s elapsed ``` Not sure if this PR complies with the `datasets` code style. Looking for your help @lhoestq, also very willing to further improve the code if any suggestions are given.
{ "avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4", "events_url": "https://api.github.com/users/yzhangcs/events{/privacy}", "followers_url": "https://api.github.com/users/yzhangcs/followers", "following_url": "https://api.github.com/users/yzhangcs/following{/other_user}", "gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yzhangcs", "id": 18402347, "login": "yzhangcs", "node_id": "MDQ6VXNlcjE4NDAyMzQ3", "organizations_url": "https://api.github.com/users/yzhangcs/orgs", "received_events_url": "https://api.github.com/users/yzhangcs/received_events", "repos_url": "https://api.github.com/users/yzhangcs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions", "type": "User", "url": "https://api.github.com/users/yzhangcs" }
https://api.github.com/repos/huggingface/datasets/issues/7056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7056/timeline
open
false
7,056
null
null
null
true
2,421,708,891
https://api.github.com/repos/huggingface/datasets/issues/7055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7055/events
[]
null
2024-07-24T13:26:30Z
[]
https://github.com/huggingface/datasets/issues/7055
NONE
completed
null
null
[ "Since `datasets` uses is built on Arrow to store the data, it requires each sample to have the same columns.\r\n\r\nThis can be fixed by specifyign in advance the name of all the possible columns in the `dataset_info` in YAML, and missing values will be `None`", "Thanks. This currently doesn't work for WebDatase...
WebDataset with different prefixes are unsupported
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7055/reactions" }
I_kwDODunzps6QWFhb
null
2024-07-22T01:14:19Z
https://api.github.com/repos/huggingface/datasets/issues/7055/comments
### Describe the bug Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k) Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) an error is given. ``` The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types. ``` The purpose of this check is unclear because PyArrow supports different keys. Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset. ``` >>> from datasets import load_dataset >>> path = "shards/*.tar" >>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True) Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 152/152 [00:00<00:00, 56458.93it/s] >>> dataset IterableDataset({ features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'], n_shards: 152 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("bigdata-pw/fashion-150k") ``` ### Expected behavior Dataset loads without error ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.19 - `huggingface_hub` version: 0.23.4 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hlky", "id": 106811348, "login": "hlky", "node_id": "U_kgDOBl3P1A", "organizations_url": "https://api.github.com/users/hlky/orgs", "received_events_url": "https://api.github.com/users/hlky/received_events", "repos_url": "https://api.github.com/users/hlky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "type": "User", "url": "https://api.github.com/users/hlky" }
https://api.github.com/repos/huggingface/datasets/issues/7055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7055/timeline
closed
false
7,055
null
2024-07-23T13:28:46Z
null
false
2,418,548,995
https://api.github.com/repos/huggingface/datasets/issues/7054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7054/events
[]
null
2024-07-23T13:25:13Z
[]
https://github.com/huggingface/datasets/pull/7054
CONTRIBUTOR
null
false
null
[ "Cool ! Thanks for diving into it :)\r\n\r\nYour implementation is great and indeed supports shuffling and batching, you just need to additionally account for state_dict (for dataset [checkpointing+resuming](https://huggingface.co/docs/datasets/main/en/use_with_pytorch#checkpoint-and-resume))\r\n\r\nThat being said...
Add batching to `IterableDataset`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7054/reactions" }
PR_kwDODunzps514T1f
{ "diff_url": "https://github.com/huggingface/datasets/pull/7054.diff", "html_url": "https://github.com/huggingface/datasets/pull/7054", "merged_at": "2024-07-23T10:34:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/7054.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7054" }
2024-07-19T10:11:47Z
https://api.github.com/repos/huggingface/datasets/issues/7054/comments
I've taken a stab at implementing a batched `IterableDataset` as requested in issue #6279. This PR adds a new `BatchedExamplesIterable` class and a `.batch()` method to the `IterableDataset` class. The main changes are: 1. A new `BatchedExamplesIterable` that groups examples into batches. 2. A `.batch()` method for `IterableDataset` to easily create batched versions. 3. Support for shuffling and sharding to work with PyTorch DataLoader and multiple workers. I'm not sure if this is exactly what you had in mind and also have not fully tested it at the moment, so I'd really appreciate your feedback. Does this seem like it's heading in the right direction? I'm happy to make any changes or explore different approaches if needed. Pinging @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lappemic", "id": 61876623, "login": "lappemic", "node_id": "MDQ6VXNlcjYxODc2NjIz", "organizations_url": "https://api.github.com/users/lappemic/orgs", "received_events_url": "https://api.github.com/users/lappemic/received_events", "repos_url": "https://api.github.com/users/lappemic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "type": "User", "url": "https://api.github.com/users/lappemic" }
https://api.github.com/repos/huggingface/datasets/issues/7054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7054/timeline
closed
false
7,054
null
2024-07-23T10:34:28Z
null
true
2,416,423,791
https://api.github.com/repos/huggingface/datasets/issues/7053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7053/events
[]
null
2024-07-18T15:17:42Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7053
NONE
completed
null
null
[ "Hi,\r\n\r\nThis issue was fixed in `datasets` 2.15.0:\r\n- #6105\r\n\r\nYou will need to update your `datasets`:\r\n```\r\npip install -U datasets\r\n```", "Duplicate of:\r\n- #6100" ]
Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7053/reactions" }
I_kwDODunzps6QB7Nv
null
2024-07-18T13:42:35Z
https://api.github.com/repos/huggingface/datasets/issues/7053/comments
### Describe the bug In data_files.py, line 332, we have `fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`. If we run the code on AWS, `fs.protocol` will be a tuple like `('file', 'local')`, so `isinstance(fs.protocol, str) == False`, and `protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` will raise `TypeError: can only concatenate tuple (not "str") to tuple`. ### Steps to reproduce the bug Steps to reproduce: 1. Run on a cloud server like AWS 2. `import datasets.data_files as datafile` 3. `datafile.resolve_pattern('path/to/dataset', '.')` 4. `TypeError: can only concatenate tuple (not "str") to tuple` ### Expected behavior Should return the path of the dataset, with `fs.protocol` at the beginning ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.19 - Huggingface_hub version: 0.23.5 - PyArrow version: 16.1.0 - Pandas version: 1.1.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/48289218?v=4", "events_url": "https://api.github.com/users/MatthewYZhang/events{/privacy}", "followers_url": "https://api.github.com/users/MatthewYZhang/followers", "following_url": "https://api.github.com/users/MatthewYZhang/following{/other_user}", "gists_url": "https://api.github.com/users/MatthewYZhang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MatthewYZhang", "id": 48289218, "login": "MatthewYZhang", "node_id": "MDQ6VXNlcjQ4Mjg5MjE4", "organizations_url": "https://api.github.com/users/MatthewYZhang/orgs", "received_events_url": "https://api.github.com/users/MatthewYZhang/received_events", "repos_url": "https://api.github.com/users/MatthewYZhang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MatthewYZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatthewYZhang/subscriptions", "type": "User", "url": "https://api.github.com/users/MatthewYZhang" }
https://api.github.com/repos/huggingface/datasets/issues/7053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7053/timeline
closed
false
7,053
null
2024-07-18T15:16:18Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,411,682,730
https://api.github.com/repos/huggingface/datasets/issues/7052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7052/events
[]
null
2024-07-29T06:47:55Z
[]
https://github.com/huggingface/datasets/pull/7052
NONE
null
true
null
[]
Adding `Music` feature for symbolic music modality (MIDI, abc)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7052/reactions" }
PR_kwDODunzps51iuop
{ "diff_url": "https://github.com/huggingface/datasets/pull/7052.diff", "html_url": "https://github.com/huggingface/datasets/pull/7052", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7052.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7052" }
2024-07-16T17:26:04Z
https://api.github.com/repos/huggingface/datasets/issues/7052/comments
⚠️ (WIP) ⚠️ ### What this PR does This PR adds a `Music` feature for the symbolic music modality, in particular [MIDI](https://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) and [abc](https://en.wikipedia.org/wiki/ABC_notation) files. ### Motivations These two file formats are widely used in [Music Information Retrieval (MIR)](https://en.wikipedia.org/wiki/Music_information_retrieval) for tasks such as music generation, music transcription or music synthesis. Having a dedicated feature in the datasets library would both encourage researchers to share datasets of this modality and make them more easily usable for end users, who would benefit from the perks of the library. These file formats are supported by [symusic](https://github.com/Yikai-Liao/symusic), a lightweight Python library with C bindings (using nanobind) that efficiently reads, writes and manipulates them. The library is actively developed, and can in the future also support other file formats such as [musicXML](https://en.wikipedia.org/wiki/MusicXML). As such, this PR relies on it. The music data can then easily be tokenized with appropriate tokenizers such as [MidiTok](https://github.com/Natooz/MidiTok) or converted to pianoroll matrices by symusic. **Jul 16th 2024:** * the tests for the `Music` feature are currently failing due to non-supported access to the LazyBatch in `test_dataset_with_music_feature_map` and `test_dataset_with_music_feature_map_resample_music` (see TODOs). I am a beginner with pyArrow, so I'll take any advice to make this work; * additional tests including the `Music` feature with parquet and WebDataset should be implemented. As of right now, I am waiting for your feedback before taking further steps; * a `MusicFolder` should also be implemented to match the usage of the `Image` and `Audio` features; waiting for your feedback on this too. CCing @lhoestq and @albertvillanova
{ "avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4", "events_url": "https://api.github.com/users/Natooz/events{/privacy}", "followers_url": "https://api.github.com/users/Natooz/followers", "following_url": "https://api.github.com/users/Natooz/following{/other_user}", "gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Natooz", "id": 56734983, "login": "Natooz", "node_id": "MDQ6VXNlcjU2NzM0OTgz", "organizations_url": "https://api.github.com/users/Natooz/orgs", "received_events_url": "https://api.github.com/users/Natooz/received_events", "repos_url": "https://api.github.com/users/Natooz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Natooz/subscriptions", "type": "User", "url": "https://api.github.com/users/Natooz" }
https://api.github.com/repos/huggingface/datasets/issues/7052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7052/timeline
closed
false
7,052
null
2024-07-29T06:47:55Z
null
true
2,409,353,929
https://api.github.com/repos/huggingface/datasets/issues/7051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7051/events
[]
null
2024-08-05T20:58:04Z
[]
https://github.com/huggingface/datasets/issues/7051
NONE
completed
null
null
[ "This is not possible right now afaik :/\r\n\r\nMaybe we could have something like this ? wdyt ?\r\n\r\n```python\r\nds = interleave_datasets(\r\n [shuffled_dataset_a, dataset_b],\r\n probabilities=probabilities,\r\n stopping_strategy='all_exhausted',\r\n reshuffle_each_iteration=True,\r\n)", "That wo...
How to set_epoch with interleave_datasets?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7051/reactions" }
I_kwDODunzps6Pm9LJ
null
2024-07-15T18:24:52Z
https://api.github.com/repos/huggingface/datasets/issues/7051/comments
Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples. I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (e.g. by calling set_epoch). Of course, I want to interleave them as IterableDatasets in streaming mode so B doesn't have to get tokenized completely at the start. How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset... Something like ``` dataset_a = load_dataset(...) dataset_b = load_dataset(...) def epoch_shuffled_dataset(ds): # How to make this maintain the number of shards in ds?? for epoch in itertools.count(): ds.set_epoch(epoch) yield from iter(ds) shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a}) interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted') ```
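The epoch-reshuffling part of the question can be illustrated without the `datasets` library at all: a generator that yields the data forever, reshuffling with a per-epoch seed at the start of each pass (a plain-Python sketch of the pattern; names are illustrative):

```python
import itertools
import random

def epoch_shuffled(examples, seed=0):
    """Yield the examples forever, reshuffling at the start of each epoch.

    Mirrors the idea of calling set_epoch() before each pass: the shuffle
    is seeded with (seed + epoch), so each epoch sees a fresh permutation
    while staying reproducible.
    """
    for epoch in itertools.count():
        order = list(examples)
        random.Random(seed + epoch).shuffle(order)
        yield from order

# Pull two "epochs" worth of items from a 5-example dataset.
stream = epoch_shuffled([1, 2, 3, 4, 5], seed=42)
first_epoch = [next(stream) for _ in range(5)]
second_epoch = [next(stream) for _ in range(5)]
```

Each epoch is a full permutation of the dataset; the remaining difficulty raised above (preserving the shard count through `from_generator`) is not addressed by this sketch.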
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf" }
https://api.github.com/repos/huggingface/datasets/issues/7051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7051/timeline
closed
false
7,051
null
2024-08-05T20:58:04Z
null
false
2,409,048,733
https://api.github.com/repos/huggingface/datasets/issues/7050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7050/events
[]
null
2024-07-15T16:06:15Z
[]
https://github.com/huggingface/datasets/pull/7050
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7050). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
add checkpoint and resume title in docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7050/reactions" }
PR_kwDODunzps51Z1Yp
{ "diff_url": "https://github.com/huggingface/datasets/pull/7050.diff", "html_url": "https://github.com/huggingface/datasets/pull/7050", "merged_at": "2024-07-15T15:59:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/7050.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7050" }
2024-07-15T15:38:04Z
https://api.github.com/repos/huggingface/datasets/issues/7050/comments
(minor) just to make it more prominent in the docs page for the soon-to-be-released new torchdata
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7050/timeline
closed
false
7,050
null
2024-07-15T15:59:56Z
null
true
2,408,514,366
https://api.github.com/repos/huggingface/datasets/issues/7049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7049/events
[]
null
2024-07-18T11:33:34Z
[]
https://github.com/huggingface/datasets/issues/7049
NONE
completed
null
null
[ "In addition, when I use `set_format ` and index the ds, the following error occurs:\r\nthe code\r\n```python\r\nds.set_format(type=\"np\", colums=\"pixel_values\")\r\n```\r\nerror\r\n<img width=\"918\" alt=\"image\" src=\"https://github.com/user-attachments/assets/b28bbff2-20ea-4d28-ab62-b4ed2d944996\">\r\n", ">...
Save nparray as list
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7049/reactions" }
I_kwDODunzps6PjwM-
null
2024-07-15T11:36:11Z
https://api.github.com/repos/huggingface/datasets/issues/7049/comments
### Describe the bug When I use the `map` function to convert images into features, datasets saves the nparray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision? ### Steps to reproduce the bug the map function ```python def convert_image_to_features(inst, processor, image_dir): image_file = inst["image_url"] file = image_file.split("/")[-1] image_path = os.path.join(image_dir, file) image = Image.open(image_path) image = image.convert("RGBA") inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"] return inst ``` main function ```python map_fun = partial( convert_image_to_features, processor=processor, image_dir=image_dir ) ds = ds.map(map_fun, batched=False, num_proc=20) print(type(ds[0]["pixel_values"])) ``` ### Expected behavior (type < list>) ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.11.5 - `huggingface_hub` version: 0.23.4 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
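On the precision question: the Arrow backend stores the tensor as a typed nested list, and converting that list back to an array does not lose precision when the dtype matches, since the same 64-bit float values round-trip exactly. A minimal sketch (hypothetical values, no `datasets` dependency):

```python
import numpy as np

# What a mapped Dataset hands back: the tensor as nested Python lists.
pixel_values_as_list = [[0.123456789, 0.5], [0.25, 1.0]]

# Rebuilding the array recovers the exact values: Python floats and
# float64 share the same 64-bit IEEE-754 representation.
arr = np.asarray(pixel_values_as_list, dtype=np.float64)
```

Precision only becomes a concern if the array is rebuilt with a narrower dtype (e.g. `float32`) than the one the processor produced.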
{ "avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4", "events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}", "followers_url": "https://api.github.com/users/Sakurakdx/followers", "following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}", "gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sakurakdx", "id": 48399040, "login": "Sakurakdx", "node_id": "MDQ6VXNlcjQ4Mzk5MDQw", "organizations_url": "https://api.github.com/users/Sakurakdx/orgs", "received_events_url": "https://api.github.com/users/Sakurakdx/received_events", "repos_url": "https://api.github.com/users/Sakurakdx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions", "type": "User", "url": "https://api.github.com/users/Sakurakdx" }
https://api.github.com/repos/huggingface/datasets/issues/7049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7049/timeline
closed
false
7,049
null
2024-07-18T11:33:34Z
null
false
2,408,487,547
https://api.github.com/repos/huggingface/datasets/issues/7048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7048/events
[]
null
2024-07-16T10:11:25Z
[]
https://github.com/huggingface/datasets/issues/7048
NONE
completed
null
null
[ "Could you please check your `numpy` version?", "I got this issue while using numpy version 2.0. \r\n\r\nI solved it by switching back to numpy 1.26.0 :) ", "We recently added support for numpy 2.0, but it is not released yet.", "Ok I see, thanks! I think we can close this issue for now as switching back to v...
ImportError: numpy.core.multiarray when using `filter`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7048/reactions" }
I_kwDODunzps6Pjpp7
null
2024-07-15T11:21:04Z
https://api.github.com/repos/huggingface/datasets/issues/7048/comments
### Describe the bug I can't apply the filter method on my dataset. ### Steps to reproduce the bug The following snippet generates a bug: ```python from datasets import load_dataset ami = load_dataset('kamilakesbi/ami', 'ihm') ami['train'].filter( lambda example: example["file_name"] == 'EN2001a' ) ``` I get the following error: `ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).` ### Expected behavior It should work properly! ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
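The `numpy.core.multiarray` failure above is the typical symptom of binary extensions compiled against NumPy 1.x being imported under NumPy 2.0. A small stdlib-only guard (a sketch; version strings are illustrative) can surface the incompatibility explicitly instead of as a cryptic ImportError:

```python
def numpy_major(version_string):
    """Return the major component of a version string like '1.26.0'."""
    return int(version_string.split(".")[0])

def check_numpy_compat(version_string, max_major=1):
    """Raise early if numpy is newer than the compiled deps support."""
    if numpy_major(version_string) > max_major:
        raise RuntimeError(
            f"numpy {version_string} detected; compiled extensions were built "
            f"against numpy {max_major}.x, so pin 'numpy<{max_major + 1}'."
        )

check_numpy_compat("1.26.0")  # passes silently
```

In practice, the fix reported in the comments (pinning back to NumPy 1.x until all dependencies ship NumPy 2.0-compatible wheels) amounts to the same constraint.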
{ "avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4", "events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}", "followers_url": "https://api.github.com/users/kamilakesbi/followers", "following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}", "gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kamilakesbi", "id": 45195979, "login": "kamilakesbi", "node_id": "MDQ6VXNlcjQ1MTk1OTc5", "organizations_url": "https://api.github.com/users/kamilakesbi/orgs", "received_events_url": "https://api.github.com/users/kamilakesbi/received_events", "repos_url": "https://api.github.com/users/kamilakesbi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions", "type": "User", "url": "https://api.github.com/users/kamilakesbi" }
https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7048/timeline
closed
false
7,048
null
2024-07-16T10:11:25Z
null
false
2,406,495,084
https://api.github.com/repos/huggingface/datasets/issues/7047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7047/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-07-17T12:07:08Z
[]
https://github.com/huggingface/datasets/issues/7047
NONE
null
null
null
[ "To anyone else who finds themselves in this predicament, it's possible to read the parquet file in the same way that datasets writes it, and then manually break it into pieces. Although, you need a couple of magic options (`thrift_*`) to deal with the huge metadata, otherwise pyarrow immediately crashes.\r\n```pyt...
Save Dataset as Sharded Parquet
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7047/reactions" }
I_kwDODunzps6PcDNs
null
2024-07-12T23:47:51Z
https://api.github.com/repos/huggingface/datasets/issues/7047/comments
### Feature request `to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically. ### Motivation This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet. ### Your contribution I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158 to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle.
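The sharding proposed here reduces to a simple boundary computation: split `[0, num_rows)` into contiguous ranges of at most N rows, one parquet file per range. A stdlib-only sketch of that logic (names are illustrative, not the actual `datasets` API):

```python
def shard_ranges(num_rows, max_rows_per_shard):
    """Split [0, num_rows) into contiguous (start, end) shard ranges,
    each holding at most max_rows_per_shard rows."""
    if max_rows_per_shard <= 0:
        raise ValueError("max_rows_per_shard must be positive")
    ranges = []
    start = 0
    while start < num_rows:
        end = min(start + max_rows_per_shard, num_rows)
        ranges.append((start, end))
        start = end
    return ranges

# Each (start, end) pair would become one file, e.g.
# data-00000-of-00003.parquet, data-00001-of-00003.parquet, ...
boundaries = shard_ranges(10, 4)
```

Alternatively, as the issue notes, `pyarrow.dataset.write_dataset` can handle the splitting itself when given a directory path rather than a file handle.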
{ "avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4", "events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}", "followers_url": "https://api.github.com/users/tom-p-reichel/followers", "following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}", "gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tom-p-reichel", "id": 43631024, "login": "tom-p-reichel", "node_id": "MDQ6VXNlcjQzNjMxMDI0", "organizations_url": "https://api.github.com/users/tom-p-reichel/orgs", "received_events_url": "https://api.github.com/users/tom-p-reichel/received_events", "repos_url": "https://api.github.com/users/tom-p-reichel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions", "type": "User", "url": "https://api.github.com/users/tom-p-reichel" }
https://api.github.com/repos/huggingface/datasets/issues/7047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7047/timeline
open
false
7,047
null
null
null
false
2,405,485,582
https://api.github.com/repos/huggingface/datasets/issues/7046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7046/events
[]
null
2024-07-12T13:04:40Z
[]
https://github.com/huggingface/datasets/pull/7046
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Support librosa and numpy 2.0 for Python 3.10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7046/reactions" }
PR_kwDODunzps51N05n
{ "diff_url": "https://github.com/huggingface/datasets/pull/7046.diff", "html_url": "https://github.com/huggingface/datasets/pull/7046", "merged_at": "2024-07-12T12:58:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/7046.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7046" }
2024-07-12T12:42:47Z
https://api.github.com/repos/huggingface/datasets/issues/7046/comments
Support librosa and numpy 2.0 for Python 3.10 by installing soxr 0.4.0b1 pre-release: - https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1 - https://github.com/dofuuz/python-soxr/issues/28
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7046/timeline
closed
false
7,046
null
2024-07-12T12:58:17Z
null
true
2,405,447,858
https://api.github.com/repos/huggingface/datasets/issues/7045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7045/events
[]
null
2024-07-12T12:38:53Z
[]
https://github.com/huggingface/datasets/pull/7045
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7045). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix tensorflow min version depending on Python version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7045/reactions" }
PR_kwDODunzps51Nsie
{ "diff_url": "https://github.com/huggingface/datasets/pull/7045.diff", "html_url": "https://github.com/huggingface/datasets/pull/7045", "merged_at": "2024-07-12T12:33:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/7045.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7045" }
2024-07-12T12:20:23Z
https://api.github.com/repos/huggingface/datasets/issues/7045/comments
Fix tensorflow min version depending on Python version. Related to: - #6991
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7045/timeline
closed
false
7,045
null
2024-07-12T12:33:00Z
null
true
2,405,002,987
https://api.github.com/repos/huggingface/datasets/issues/7044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7044/events
[]
null
2024-07-12T09:06:32Z
[]
https://github.com/huggingface/datasets/pull/7044
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7044). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Mark tests that require librosa
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7044/reactions" }
PR_kwDODunzps51MLbh
{ "diff_url": "https://github.com/huggingface/datasets/pull/7044.diff", "html_url": "https://github.com/huggingface/datasets/pull/7044", "merged_at": "2024-07-12T09:00:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/7044.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7044" }
2024-07-12T08:06:59Z
https://api.github.com/repos/huggingface/datasets/issues/7044/comments
Mark tests that require `librosa`. Note that `librosa` is an optional dependency (installed with `audio` option) and we should be able to test environments without that library installed. This is the case if we want to test Numpy 2.0, which is currently incompatible with `librosa` due to its dependency on `soxr`: - https://github.com/dofuuz/python-soxr/issues/28
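The marking idea can be reduced to "run this test only when the optional dependency is importable". A stdlib-only sketch of that mechanism (the real suite would use a `pytest.mark.skipif`-style marker; `require` here is an illustrative name):

```python
import functools
import importlib.util

def require(module_name):
    """Decorator: run the test only if module_name is importable;
    otherwise replace it with a skip no-op."""
    available = importlib.util.find_spec(module_name) is not None

    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            if not available:
                return f"skipped: requires {module_name}"
            return test_fn(*args, **kwargs)
        return wrapper
    return decorator

@require("json")  # stdlib module: always importable, so the test runs
def test_with_json():
    return "ran"

@require("not_a_real_module_xyz")  # missing: the test is skipped
def test_with_missing_dep():
    return "ran"
```

This keeps environments without the optional `audio` dependencies testable, which is exactly what unblocks testing against NumPy 2.0 here.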
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7044/timeline
closed
false
7,044
null
2024-07-12T09:00:09Z
null
true
2,404,951,714
https://api.github.com/repos/huggingface/datasets/issues/7043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7043/events
[]
null
2024-07-12T08:12:55Z
[]
https://github.com/huggingface/datasets/pull/7043
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7043). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Add decorator as explicit test dependency
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7043/reactions" }
PR_kwDODunzps51MAN0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7043.diff", "html_url": "https://github.com/huggingface/datasets/pull/7043", "merged_at": "2024-07-12T08:07:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/7043.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7043" }
2024-07-12T07:35:23Z
https://api.github.com/repos/huggingface/datasets/issues/7043/comments
Add decorator as explicit test dependency. We have used the `decorator` library in our CI tests since PR: - #4845 However, we did not add it as an explicit test requirement and instead depended on it indirectly through other libraries' dependencies. I discovered this while testing Numpy 2.0 and removing incompatible libraries.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7043/timeline
closed
false
7,043
null
2024-07-12T08:07:10Z
null
true
2,404,605,836
https://api.github.com/repos/huggingface/datasets/issues/7042
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7042/events
[]
null
2024-08-15T10:07:44Z
[]
https://github.com/huggingface/datasets/pull/7042
CONTRIBUTOR
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
Improved the tutorial by adding a link for loading datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7042/reactions" }
PR_kwDODunzps51K8CM
{ "diff_url": "https://github.com/huggingface/datasets/pull/7042.diff", "html_url": "https://github.com/huggingface/datasets/pull/7042", "merged_at": "2024-08-15T10:01:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/7042.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7042" }
2024-07-12T03:49:54Z
https://api.github.com/repos/huggingface/datasets/issues/7042/comments
Improved the tutorial by letting readers know how to load datasets from common file formats and including a link. I left the local files section alone because the methods were already listed with code snippets.
{ "avatar_url": "https://avatars.githubusercontent.com/u/41874659?v=4", "events_url": "https://api.github.com/users/AmboThom/events{/privacy}", "followers_url": "https://api.github.com/users/AmboThom/followers", "following_url": "https://api.github.com/users/AmboThom/following{/other_user}", "gists_url": "https://api.github.com/users/AmboThom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmboThom", "id": 41874659, "login": "AmboThom", "node_id": "MDQ6VXNlcjQxODc0NjU5", "organizations_url": "https://api.github.com/users/AmboThom/orgs", "received_events_url": "https://api.github.com/users/AmboThom/received_events", "repos_url": "https://api.github.com/users/AmboThom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmboThom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmboThom/subscriptions", "type": "User", "url": "https://api.github.com/users/AmboThom" }
https://api.github.com/repos/huggingface/datasets/issues/7042/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7042/timeline
closed
false
7,042
null
2024-08-15T10:01:59Z
null
true
2,404,576,038
https://api.github.com/repos/huggingface/datasets/issues/7041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7041/events
[]
null
2024-07-22T13:55:17Z
[]
https://github.com/huggingface/datasets/issues/7041
NONE
null
null
null
[ "`filter` add an indices mapping on top of the dataset, so `sort` has to gather all the rows that are kept to form a new Arrow table and sort the table. Gathering all the rows can take some time, but is a necessary step. You can try calling `ds = ds.flatten_indices()` before sorting to remove the indices mapping." ...
`sort` after `filter` unreasonably slow
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7041/reactions" }
I_kwDODunzps6PUusm
null
2024-07-12T03:29:27Z
https://api.github.com/repos/huggingface/datasets/issues/7041/comments
### Describe the bug as the title says ... ### Steps to reproduce the bug `sort` seems to be normal. ```python from datasets import Dataset import random nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)] ds = Dataset.from_list(nums) print("start sort") ds = ds.sort("k") print("finish sort") ``` but `sort` after `filter` is extremely slow. ```python from datasets import Dataset import random nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)] ds = Dataset.from_list(nums) ds = ds.filter(lambda x:x > 100, input_columns="k") print("start sort") ds = ds.sort("k") print("finish sort") ``` ### Expected behavior Is this a bug, or is it a misuse of the `sort` function? ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
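The likely mechanism behind the slowdown: `filter` leaves an indices mapping over the original table, and `sort` must first gather every kept row through that indirection before it can build the sorted table. A pure-Python model of the mechanism (illustrative only, not the actual `datasets` internals):

```python
class IndexedView:
    """A table plus an indices mapping, mimicking a filtered Dataset."""

    def __init__(self, data, indices=None):
        self.data = data
        self.indices = list(range(len(data))) if indices is None else indices

    def filter(self, predicate):
        # Cheap: only the index list is rewritten; the data is untouched.
        kept = [i for i in self.indices if predicate(self.data[i])]
        return IndexedView(self.data, kept)

    def flatten_indices(self):
        # Materialize the view: gather the rows once, drop the mapping.
        return IndexedView([self.data[i] for i in self.indices])

    def sort(self):
        # Sorting gathers every kept row through the mapping first;
        # calling flatten_indices() beforehand pays that cost only once.
        gathered = [self.data[i] for i in self.indices]
        return IndexedView(sorted(gathered))

view = IndexedView([5, 1, 9, 3]).filter(lambda x: x > 2)
result = view.flatten_indices().sort()
```

This matches the suggestion in the comments: calling `ds = ds.flatten_indices()` before `sort` removes the indices mapping, so the gather happens once up front instead of inside the sort.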
{ "avatar_url": "https://avatars.githubusercontent.com/u/56711045?v=4", "events_url": "https://api.github.com/users/Tobin-rgb/events{/privacy}", "followers_url": "https://api.github.com/users/Tobin-rgb/followers", "following_url": "https://api.github.com/users/Tobin-rgb/following{/other_user}", "gists_url": "https://api.github.com/users/Tobin-rgb/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tobin-rgb", "id": 56711045, "login": "Tobin-rgb", "node_id": "MDQ6VXNlcjU2NzExMDQ1", "organizations_url": "https://api.github.com/users/Tobin-rgb/orgs", "received_events_url": "https://api.github.com/users/Tobin-rgb/received_events", "repos_url": "https://api.github.com/users/Tobin-rgb/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tobin-rgb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tobin-rgb/subscriptions", "type": "User", "url": "https://api.github.com/users/Tobin-rgb" }
https://api.github.com/repos/huggingface/datasets/issues/7041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7041/timeline
open
false
7,041
null
null
null
false
2,402,918,335
https://api.github.com/repos/huggingface/datasets/issues/7040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7040/events
[]
null
2024-07-11T14:11:56Z
[]
https://github.com/huggingface/datasets/issues/7040
NONE
null
null
null
[ "When you pass `streaming=True`, the cache is ignored. The remote data URL is used instead and the data is streamed from the remote server.", "Thanks for your reply! So is there any solution to get my expected behavior besides clone the whole repo ? Or could I adjust my script to load the downloaded arrow files a...
load `streaming=True` dataset with downloaded cache
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7040/reactions" }
I_kwDODunzps6POZ-_
null
2024-07-11T11:14:13Z
https://api.github.com/repos/huggingface/datasets/issues/7040/comments
### Describe the bug We build a dataset which contains several hdf5 files and write a script using `h5py` to generate the dataset. The hdf5 files are large and the processed dataset cache takes more disk space. So we hope to try a streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into a hdf5 file descriptor. So we use `fsspec` as an interface like below: ```python def _generate_examples(self, filepath, split): for file in filepath: with fsspec.open(file, "rb") as fs: with h5py.File(fs, "r") as fp: # for event_id in sorted(list(fp.keys())): event_ids = list(fp.keys()) ...... ``` ### Steps to reproduce the bug The `fsspec` approach works, but it takes 10+ min to print the first 10 examples, which is even longer than the downloading time. I'm not sure if it just caches the whole hdf5 file and then generates the examples. ### Expected behavior So does the following make sense so far? 1. download the files ```python dataset = datasets.load('path/to/myscripts', split="train", name="event", trust_remote_code=True) ``` 2. load the iterable dataset faster (using the raw file cache at path `.cache/huggingface/datasets/downloads`) ```python dataset = datasets.load('path/to/myscripts', split="train", name="event", trust_remote_code=True, streaming=True) ``` I made some tests, but the code above can't get the expected result. I'm not sure if this is supported. I also found the issue #6327 . It seemed similar to mine, but I couldn't find a solution. ### Environment info - `datasets` = 2.18.0 - `h5py` = 3.10.0 - `fsspec` = 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4", "events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}", "followers_url": "https://api.github.com/users/wanghaoyucn/followers", "following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}", "gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wanghaoyucn", "id": 39429965, "login": "wanghaoyucn", "node_id": "MDQ6VXNlcjM5NDI5OTY1", "organizations_url": "https://api.github.com/users/wanghaoyucn/orgs", "received_events_url": "https://api.github.com/users/wanghaoyucn/received_events", "repos_url": "https://api.github.com/users/wanghaoyucn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions", "type": "User", "url": "https://api.github.com/users/wanghaoyucn" }
https://api.github.com/repos/huggingface/datasets/issues/7040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7040/timeline
open
false
7,040
null
null
null
false
2,402,403,390
https://api.github.com/repos/huggingface/datasets/issues/7039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7039/events
[]
null
2024-07-11T07:27:58Z
[]
https://github.com/huggingface/datasets/pull/7039
MEMBER
null
true
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7039). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "The test before confirms the bug.\r\n\r\nThere are different possible solutions to this...
Fix export to JSON when dataset larger than batch size
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7039/reactions" }
PR_kwDODunzps51DgCY
{ "diff_url": "https://github.com/huggingface/datasets/pull/7039.diff", "html_url": "https://github.com/huggingface/datasets/pull/7039", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7039.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7039" }
2024-07-11T06:52:22Z
https://api.github.com/repos/huggingface/datasets/issues/7039/comments
Fix export to JSON (`files=False`) when dataset larger than batch size. Fix #7037.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7039/timeline
open
false
7,039
null
null
null
true
2,402,081,227
https://api.github.com/repos/huggingface/datasets/issues/7038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7038/events
[]
null
2024-07-11T05:28:39Z
[]
https://github.com/huggingface/datasets/issues/7038
NONE
not_planned
null
null
[ "This is the `datasets` repository, and the issue should be opened in the `transformers` repo instead." ]
Yes, can definitely elaborate:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7038/reactions" }
I_kwDODunzps6PLNnL
null
2024-07-11T02:22:30Z
https://api.github.com/repos/huggingface/datasets/issues/7038/comments
Yes, can definitely elaborate: Say I want to use HF Trainer with an arbitrary PyTorch optimizer (`AdamW` here just as an example). Then I should intuitively extend `Trainer` like: ```python class CustomOptimizerTrainer(Trainer): @staticmethod def get_optimizer_cls_and_kwargs(args: HfTrainingArguments, model=None) -> tuple[type[torch.optim.Optimizer], dict[str, Any]]: optimizer = torch.optim.AdamW optimizer_kwargs = { "lr": 4e-3, "betas": (0.9, 0.999), "weight_decay": 0.05, } return optimizer, optimizer_kwargs ``` However, this won't take effect, because `Trainer.create_optimizer` hardcodes the `Trainer` class name when calling `get_optimizer_cls_and_kwargs`: https://github.com/huggingface/transformers/blob/6c1d0b069de22d7ed8aa83f733c25045eea0585d/src/transformers/trainer.py#L1076 `CustomOptimizerTrainer.get_optimizer_cls_and_kwargs` will never be called. So I could either: - also override the entire `create_optimizer` and rewrite `Trainer.get_optimizer_cls_and_kwargs` to `self.get_optimizer_cls_and_kwargs` (overkill) - or monkey-patch (not ideal): ```python class CustomOptimizerTrainer(Trainer): # def get_optimizer_cls_and_kwargs ... def create_optimizer(self): trainer_get_optimizer_fn = Trainer.get_optimizer_cls_and_kwargs Trainer.get_optimizer_cls_and_kwargs = self.get_optimizer_cls_and_kwargs optimizer = super().create_optimizer() Trainer.get_optimizer_cls_and_kwargs = trainer_get_optimizer_fn return optimizer ``` But I think the best fix is to change `Trainer.get_optimizer_cls_and_kwargs` to `self.get_optimizer_cls_and_kwargs` in the original source of `Trainer.create_optimizer`. I also made `get_optimizer_cls_and_kwargs` an instance method instead of a static method, but that probably doesn't matter as much and can be reverted. It does break the syntax of the existing tests, though. Please let me know if that's clearer and if you agree! Thanks! _Originally posted by @apoorvkh in https://github.com/huggingface/transformers/issues/31875#issuecomment-2221491647_
{ "avatar_url": "https://avatars.githubusercontent.com/u/165458456?v=4", "events_url": "https://api.github.com/users/Khaliq88/events{/privacy}", "followers_url": "https://api.github.com/users/Khaliq88/followers", "following_url": "https://api.github.com/users/Khaliq88/following{/other_user}", "gists_url": "https://api.github.com/users/Khaliq88/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Khaliq88", "id": 165458456, "login": "Khaliq88", "node_id": "U_kgDOCdyyGA", "organizations_url": "https://api.github.com/users/Khaliq88/orgs", "received_events_url": "https://api.github.com/users/Khaliq88/received_events", "repos_url": "https://api.github.com/users/Khaliq88/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Khaliq88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Khaliq88/subscriptions", "type": "User", "url": "https://api.github.com/users/Khaliq88" }
https://api.github.com/repos/huggingface/datasets/issues/7038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7038/timeline
closed
false
7,038
null
2024-07-11T05:28:39Z
null
false
2,400,192,419
https://api.github.com/repos/huggingface/datasets/issues/7037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7037/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2024-07-10T13:07:44Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7037
NONE
null
null
null
[ "Thanks for reporting, @LinglingGreat.\r\n\r\nI confirm this is a bug." ]
A bug of Dataset.to_json() function
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7037/reactions" }
I_kwDODunzps6PEAej
null
2024-07-10T09:11:22Z
https://api.github.com/repos/huggingface/datasets/issues/7037/comments
### Describe the bug When using the Dataset.to_json() function, an unexpected error occurs if the parameter is set to lines=False. The stored data should be in the form of a list, but it actually turns into multiple lists, which causes an error when reading the data again. The reason is that to_json() writes to the file in several segments based on the batch size. This is not a problem when lines=True, but it is incorrect when lines=False, because writing in several segments will produce multiple lists (when len(dataset) > batch_size). ### Steps to reproduce the bug try this code: ```python from datasets import load_dataset import json train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"] output_path = "./harmless-base_hftojs.json" print(len(train_dataset)) train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2) with open(output_path, encoding="utf-8") as f: data = json.loads(f.read()) ``` it raises an error: json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709) Extra square brackets have appeared here: <img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc"> ### Expected behavior The code runs normally. ### Environment info datasets=2.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/26499566?v=4", "events_url": "https://api.github.com/users/LinglingGreat/events{/privacy}", "followers_url": "https://api.github.com/users/LinglingGreat/followers", "following_url": "https://api.github.com/users/LinglingGreat/following{/other_user}", "gists_url": "https://api.github.com/users/LinglingGreat/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LinglingGreat", "id": 26499566, "login": "LinglingGreat", "node_id": "MDQ6VXNlcjI2NDk5NTY2", "organizations_url": "https://api.github.com/users/LinglingGreat/orgs", "received_events_url": "https://api.github.com/users/LinglingGreat/received_events", "repos_url": "https://api.github.com/users/LinglingGreat/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LinglingGreat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LinglingGreat/subscriptions", "type": "User", "url": "https://api.github.com/users/LinglingGreat" }
https://api.github.com/repos/huggingface/datasets/issues/7037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7037/timeline
open
false
7,037
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,400,035,672
https://api.github.com/repos/huggingface/datasets/issues/7036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7036/events
[]
null
2024-07-26T07:58:00Z
[]
https://github.com/huggingface/datasets/pull/7036
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7036). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix doc generation when NamedSplit is used as parameter default value
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7036/reactions" }
PR_kwDODunzps507bZk
{ "diff_url": "https://github.com/huggingface/datasets/pull/7036.diff", "html_url": "https://github.com/huggingface/datasets/pull/7036", "merged_at": "2024-07-26T07:51:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/7036.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7036" }
2024-07-10T07:58:46Z
https://api.github.com/repos/huggingface/datasets/issues/7036/comments
Fix doc generation when `NamedSplit` is used as parameter default value. Fix #7035.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7036/timeline
closed
false
7,036
null
2024-07-26T07:51:52Z
null
true
2,400,021,225
https://api.github.com/repos/huggingface/datasets/issues/7035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7035/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-07-26T07:51:53Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7035
MEMBER
completed
null
null
[]
Docs are not generated when a parameter defaults to a NamedSplit value
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7035/reactions" }
I_kwDODunzps6PDWrp
null
2024-07-10T07:51:24Z
https://api.github.com/repos/huggingface/datasets/issues/7035/comments
While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like: ```python def call_function(split=Split.TRAIN): ... ``` The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'> See: https://github.com/huggingface/datasets/actions/runs/9869660902/job/27254359863?pr=7015 ``` Building the MDX files: 97%|█████████▋| 58/60 [00:00<00:00, 91.94it/s] Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 197, in build_mdx_files content, new_anchors, source_files, errors = resolve_autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 123, in resolve_autodoc doc = autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 499, in autodoc method_doc, check = document_object( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 395, in document_object signature = format_signature(obj) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 126, in format_signature if param.default != inspect._empty: File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 136, in __ne__ return not self.__eq__(other) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 379, in __eq__ raise ValueError(f"Equality not supported between split {self} and {other}") ValueError: Equality not supported between split train and <class 'inspect._empty'> The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/bin/doc-builder", line 8, in <module> sys.exit(main()) File 
"/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main args.func(args) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/build.py", line 102, in build_command build_doc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 367, in build_doc anchors_mapping, source_files_mapping = build_mdx_files( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 230, in build_mdx_files raise type(e)(f"There was an error when converting {file} to the MDX format.\n" + e.args[0]) from e ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Equality not supported between split train and <class 'inspect._empty'> ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7035/timeline
closed
false
7,035
null
2024-07-26T07:51:53Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,397,525,974
https://api.github.com/repos/huggingface/datasets/issues/7034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7034/events
[]
null
2024-08-13T08:22:25Z
[]
https://github.com/huggingface/datasets/pull/7034
CONTRIBUTOR
null
false
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
chore: fix typos in docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7034/reactions" }
PR_kwDODunzps50y-ya
{ "diff_url": "https://github.com/huggingface/datasets/pull/7034.diff", "html_url": "https://github.com/huggingface/datasets/pull/7034", "merged_at": "2024-08-13T08:16:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/7034.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7034" }
2024-07-09T08:35:05Z
https://api.github.com/repos/huggingface/datasets/issues/7034/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/150505746?v=4", "events_url": "https://api.github.com/users/hattizai/events{/privacy}", "followers_url": "https://api.github.com/users/hattizai/followers", "following_url": "https://api.github.com/users/hattizai/following{/other_user}", "gists_url": "https://api.github.com/users/hattizai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hattizai", "id": 150505746, "login": "hattizai", "node_id": "U_kgDOCPiJEg", "organizations_url": "https://api.github.com/users/hattizai/orgs", "received_events_url": "https://api.github.com/users/hattizai/received_events", "repos_url": "https://api.github.com/users/hattizai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hattizai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hattizai/subscriptions", "type": "User", "url": "https://api.github.com/users/hattizai" }
https://api.github.com/repos/huggingface/datasets/issues/7034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7034/timeline
closed
false
7,034
null
2024-08-13T08:16:22Z
null
true
2,397,419,768
https://api.github.com/repos/huggingface/datasets/issues/7033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7033/events
[]
null
2024-07-26T12:56:16Z
[]
https://github.com/huggingface/datasets/issues/7033
CONTRIBUTOR
completed
null
null
[ "Thanks for reporting, @pminervini.\r\n\r\nI agree we should give the option to define the split name.\r\n\r\nIndeed, there is a PR that addresses precisely this issue:\r\n- #7015\r\n\r\nI am reviewing it.", "Booom! thank you guys :)" ]
`from_generator` does not allow to specify the split name
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7033/reactions" }
I_kwDODunzps6O5bj4
null
2024-07-09T07:47:58Z
https://api.github.com/repos/huggingface/datasets/issues/7033/comments
### Describe the bug I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:` It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py ### Steps to reproduce the bug ``` In [1]: from datasets import Dataset In [2]: def gen(): ...: yield {"pokemon": "bulbasaur", "type": "grass"} ...: In [3]: ds = Dataset.from_generator(gen) Generating train split: 1 examples [00:00, 133.89 examples/s] ``` ### Expected behavior It should be possible to specify any split name ### Environment info - `datasets` version: 2.19.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - `huggingface_hub` version: 0.23.3 - PyArrow version: 15.0.0 - Pandas version: 2.0.3 - `fsspec` version: 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4", "events_url": "https://api.github.com/users/pminervini/events{/privacy}", "followers_url": "https://api.github.com/users/pminervini/followers", "following_url": "https://api.github.com/users/pminervini/following{/other_user}", "gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pminervini", "id": 227357, "login": "pminervini", "node_id": "MDQ6VXNlcjIyNzM1Nw==", "organizations_url": "https://api.github.com/users/pminervini/orgs", "received_events_url": "https://api.github.com/users/pminervini/received_events", "repos_url": "https://api.github.com/users/pminervini/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pminervini/subscriptions", "type": "User", "url": "https://api.github.com/users/pminervini" }
https://api.github.com/repos/huggingface/datasets/issues/7033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7033/timeline
closed
false
7,033
null
2024-07-26T09:31:56Z
null
false
2,395,531,699
https://api.github.com/repos/huggingface/datasets/issues/7032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7032/events
[]
null
2024-07-12T15:07:03Z
[]
https://github.com/huggingface/datasets/pull/7032
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7032). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@albertvillanova hm I don't know tbh, it's just that \"mlfoundations/dclm-baseline-1.0\...
Register `.zstd` extension for zstd-compressed files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7032/reactions" }
PR_kwDODunzps50sJTq
{ "diff_url": "https://github.com/huggingface/datasets/pull/7032.diff", "html_url": "https://github.com/huggingface/datasets/pull/7032", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7032.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7032" }
2024-07-08T12:39:50Z
https://api.github.com/repos/huggingface/datasets/issues/7032/comments
For example, https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0 dataset files have `.zstd` extension which is currently ignored (only `.zst` is registered).
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://api.github.com/repos/huggingface/datasets/issues/7032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7032/timeline
closed
false
7,032
null
2024-07-12T15:07:03Z
null
true
2,395,401,692
https://api.github.com/repos/huggingface/datasets/issues/7031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7031/events
[]
null
2024-07-08T11:47:29Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7031
MEMBER
not_planned
null
null
[]
CI quality is broken: use ruff check instead
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7031/reactions" }
I_kwDODunzps6Oxu3c
null
2024-07-08T11:42:24Z
https://api.github.com/repos/huggingface/datasets/issues/7031/comments
CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027 ``` error: `ruff <path>` has been removed. Use `ruff check <path>` instead. ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7031/timeline
closed
false
7,031
null
2024-07-08T11:47:29Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,393,411,631
https://api.github.com/repos/huggingface/datasets/issues/7030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7030/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-07-13T14:35:59Z
[]
https://github.com/huggingface/datasets/issues/7030
NONE
completed
null
null
[ "You can disable progress bars for all of `datasets` with `disable_progress_bars`. [Link](https://huggingface.co/docs/datasets/en/package_reference/utilities#datasets.enable_progress_bars)\r\n\r\nSo you could do something like:\r\n\r\n```python\r\nfrom datasets import load_from_disk, enable_progress_bars, disable_p...
Add option to disable progress bar when reading a dataset ("Loading dataset from disk")
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7030/reactions" }
I_kwDODunzps6OqJAv
null
2024-07-06T05:43:37Z
https://api.github.com/repos/huggingface/datasets/issues/7030/comments
### Feature request Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16. ### Motivation I am reading a lot of datasets, which creates lots of logs. <img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-44b6-937c-932f01b4eb2a"> ### Your contribution Seems like an easy fix to make. I can create a PR if necessary.
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain" }
https://api.github.com/repos/huggingface/datasets/issues/7030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7030/timeline
closed
false
7,030
null
2024-07-13T14:35:59Z
null
false
2,391,366,696
https://api.github.com/repos/huggingface/datasets/issues/7029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7029/events
[]
null
2024-07-17T12:44:03Z
[]
https://github.com/huggingface/datasets/issues/7029
NONE
null
null
null
[ "hi ! can you share the full stack trace ? this should help locate what files is not written in the cache_dir" ]
load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7029/reactions" }
I_kwDODunzps6OiVwo
null
2024-07-04T19:15:16Z
https://api.github.com/repos/huggingface/datasets/issues/7029/comments
### Describe the bug I'm using AWS Lambda to run a Python application. I run the `load_dataset` function with cache_dir="/tmp" and it still throws the OSError(30, 'Read-only file system') error. I even updated all the HF envs to point to the /tmp dir but the issue still persists. I can confirm that I can write to the /tmp directory. ### Steps to reproduce the bug ```python d = load_dataset( path=hugging_face_link, split=split, token=token, cache_dir="/tmp/hugging_face_cache", ) ``` ### Expected behavior Everything written to the file system as part of the load_dataset function should be in the /tmp directory. ### Environment info datasets version: 2.16.1 Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.26 Python version: 3.11.9 huggingface_hub version: 0.19.4 PyArrow version: 16.1.0 Pandas version: 2.2.2 fsspec version: 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/171606538?v=4", "events_url": "https://api.github.com/users/sugam-nexusflow/events{/privacy}", "followers_url": "https://api.github.com/users/sugam-nexusflow/followers", "following_url": "https://api.github.com/users/sugam-nexusflow/following{/other_user}", "gists_url": "https://api.github.com/users/sugam-nexusflow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sugam-nexusflow", "id": 171606538, "login": "sugam-nexusflow", "node_id": "U_kgDOCjqCCg", "organizations_url": "https://api.github.com/users/sugam-nexusflow/orgs", "received_events_url": "https://api.github.com/users/sugam-nexusflow/received_events", "repos_url": "https://api.github.com/users/sugam-nexusflow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sugam-nexusflow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sugam-nexusflow/subscriptions", "type": "User", "url": "https://api.github.com/users/sugam-nexusflow" }
https://api.github.com/repos/huggingface/datasets/issues/7029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7029/timeline
open
false
7,029
null
null
null
false
2,391,077,531
https://api.github.com/repos/huggingface/datasets/issues/7028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7028/events
[]
null
2024-07-04T15:26:35Z
[]
https://github.com/huggingface/datasets/pull/7028
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7028). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix ci
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7028/reactions" }
PR_kwDODunzps50dQ1w
{ "diff_url": "https://github.com/huggingface/datasets/pull/7028.diff", "html_url": "https://github.com/huggingface/datasets/pull/7028", "merged_at": "2024-07-04T15:19:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/7028.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7028" }
2024-07-04T15:11:08Z
https://api.github.com/repos/huggingface/datasets/issues/7028/comments
...after last PR errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7028/timeline
closed
false
7,028
null
2024-07-04T15:19:16Z
null
true
2,391,013,330
https://api.github.com/repos/huggingface/datasets/issues/7027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7027/events
[]
null
2024-07-04T14:40:46Z
[]
https://github.com/huggingface/datasets/pull/7027
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7027). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Missing line from previous pr
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7027/reactions" }
PR_kwDODunzps50dCsE
{ "diff_url": "https://github.com/huggingface/datasets/pull/7027.diff", "html_url": "https://github.com/huggingface/datasets/pull/7027", "merged_at": "2024-07-04T14:34:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/7027.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7027" }
2024-07-04T14:34:29Z
https://api.github.com/repos/huggingface/datasets/issues/7027/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7027/timeline
closed
false
7,027
null
2024-07-04T14:34:36Z
null
true
2,390,983,889
https://api.github.com/repos/huggingface/datasets/issues/7026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7026/events
[]
null
2024-07-04T14:28:36Z
[]
https://github.com/huggingface/datasets/pull/7026
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7026). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix check_library_imports
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7026/reactions" }
PR_kwDODunzps50c8Mf
{ "diff_url": "https://github.com/huggingface/datasets/pull/7026.diff", "html_url": "https://github.com/huggingface/datasets/pull/7026", "merged_at": "2024-07-04T14:20:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/7026.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7026" }
2024-07-04T14:18:38Z
https://api.github.com/repos/huggingface/datasets/issues/7026/comments
move it to after the `trust_remote_code` check. Note that it only affects local datasets that already exist on disk, not datasets loaded from HF directly
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/7026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7026/timeline
closed
false
7,026
null
2024-07-04T14:20:02Z
null
true
2,390,488,546
https://api.github.com/repos/huggingface/datasets/issues/7025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7025/events
[]
null
2024-07-31T06:15:50Z
[]
https://github.com/huggingface/datasets/pull/7025
CONTRIBUTOR
null
false
null
[ "requesting review - @albertvillanova @lhoestq ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7025). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq rebased the PR, It would b...
feat: support non streamable arrow file binary format
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7025/reactions" }
PR_kwDODunzps50bSyD
{ "diff_url": "https://github.com/huggingface/datasets/pull/7025.diff", "html_url": "https://github.com/huggingface/datasets/pull/7025", "merged_at": "2024-07-31T06:09:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/7025.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7025" }
2024-07-04T10:11:12Z
https://api.github.com/repos/huggingface/datasets/issues/7025/comments
Support Arrow files (`.arrow`) that are in non-streamable binary file formats.
{ "avatar_url": "https://avatars.githubusercontent.com/u/15800200?v=4", "events_url": "https://api.github.com/users/kmehant/events{/privacy}", "followers_url": "https://api.github.com/users/kmehant/followers", "following_url": "https://api.github.com/users/kmehant/following{/other_user}", "gists_url": "https://api.github.com/users/kmehant/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kmehant", "id": 15800200, "login": "kmehant", "node_id": "MDQ6VXNlcjE1ODAwMjAw", "organizations_url": "https://api.github.com/users/kmehant/orgs", "received_events_url": "https://api.github.com/users/kmehant/received_events", "repos_url": "https://api.github.com/users/kmehant/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kmehant/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kmehant/subscriptions", "type": "User", "url": "https://api.github.com/users/kmehant" }
https://api.github.com/repos/huggingface/datasets/issues/7025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7025/timeline
closed
false
7,025
null
2024-07-31T06:09:31Z
null
true
2,390,141,626
https://api.github.com/repos/huggingface/datasets/issues/7024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7024/events
[]
null
2024-07-04T07:21:47Z
[]
https://github.com/huggingface/datasets/issues/7024
NONE
null
null
null
[]
Streaming dataset not returning data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7024/reactions" }
I_kwDODunzps6Odqq6
null
2024-07-04T07:21:47Z
https://api.github.com/repos/huggingface/datasets/issues/7024/comments
### Describe the bug I'm deciding to post here because I'm still not sure what the issue is, or if I am using IterableDatasets wrongly. I'm following the guide here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning on the provided dataset. However, I'm doing some data preprocessing steps (filtering out entries), and when I try to swap out the dataset for mine, it fails to train. However, I eventually fixed this by simply setting `streaming=False` in `load_dataset`. Could this be some sort of network / firewall issue I'm facing? ### Steps to reproduce the bug I made a post with a greater description of how I reproduced this problem before I found my workaround: https://discuss.huggingface.co/t/problem-with-custom-iterator-of-streaming-dataset-not-returning-anything/94551 Here is the problematic dataset snippet, which works when streaming=False (and with the buffer keyword removed from shuffle) ``` commitpackft = load_dataset( "chargoddard/commitpack-ft-instruct", split="train", streaming=True ).filter(lambda example: example["language"] == "Python") def form_template(example): """Forms a template for each example following the alpaca format for CommitPack""" example["content"] = ( "### Human: " + example["instruction"] + " " + example["input"] + " ### Assistant: " + example["output"] ) return example dataset = commitpackft.map( form_template, remove_columns=["id", "language", "license", "instruction", "input", "output"], ).shuffle( seed=42, buffer_size=10000 ) # remove everything since it's all inside "content" now validation_data = dataset.take(4000) train_data = dataset.skip(4000) ``` The annoying part about this is that it only fails during training and I don't know when it will fail, except that it always fails during evaluation. 
### Expected behavior The expected behavior is that I should be able to get something from the iterator when called instead of getting nothing / stuck in a loop somewhere. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31 - Python version: 3.11.7 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/91670254?v=4", "events_url": "https://api.github.com/users/johnwee1/events{/privacy}", "followers_url": "https://api.github.com/users/johnwee1/followers", "following_url": "https://api.github.com/users/johnwee1/following{/other_user}", "gists_url": "https://api.github.com/users/johnwee1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/johnwee1", "id": 91670254, "login": "johnwee1", "node_id": "U_kgDOBXbG7g", "organizations_url": "https://api.github.com/users/johnwee1/orgs", "received_events_url": "https://api.github.com/users/johnwee1/received_events", "repos_url": "https://api.github.com/users/johnwee1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/johnwee1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnwee1/subscriptions", "type": "User", "url": "https://api.github.com/users/johnwee1" }
https://api.github.com/repos/huggingface/datasets/issues/7024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7024/timeline
open
false
7,024
null
null
null
false
2,388,090,424
https://api.github.com/repos/huggingface/datasets/issues/7023
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7023/events
[]
null
2024-07-03T09:24:46Z
[]
https://github.com/huggingface/datasets/pull/7023
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7023). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Remove dead code for pyarrow < 15.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7023/reactions" }
PR_kwDODunzps50TDot
{ "diff_url": "https://github.com/huggingface/datasets/pull/7023.diff", "html_url": "https://github.com/huggingface/datasets/pull/7023", "merged_at": "2024-07-03T09:17:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/7023.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7023" }
2024-07-03T09:05:03Z
https://api.github.com/repos/huggingface/datasets/issues/7023/comments
Remove dead code for pyarrow < 15.0.0. Code is dead since the merge of: - #6892 Fix #7022.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7023/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7023/timeline
closed
false
7,023
null
2024-07-03T09:17:35Z
null
true
2,388,064,650
https://api.github.com/repos/huggingface/datasets/issues/7022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7022/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-07-03T09:17:36Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7022
MEMBER
completed
null
null
[]
There is dead code after we require pyarrow >= 15.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7022/reactions" }
I_kwDODunzps6OVvmK
null
2024-07-03T08:52:57Z
https://api.github.com/repos/huggingface/datasets/issues/7022/comments
There are code lines specific for pyarrow versions < 15.0.0. However, we require pyarrow >= 15.0.0 since the merge of PR: - #6892 Those code lines are now dead code and should be removed.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7022/timeline
closed
false
7,022
null
2024-07-03T09:17:36Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,387,948,935
https://api.github.com/repos/huggingface/datasets/issues/7021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7021/events
[]
null
2024-07-03T08:47:49Z
[]
https://github.com/huggingface/datasets/pull/7021
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7021). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Fix casting list array to fixed size list
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7021/reactions" }
PR_kwDODunzps50SlKR
{ "diff_url": "https://github.com/huggingface/datasets/pull/7021.diff", "html_url": "https://github.com/huggingface/datasets/pull/7021", "merged_at": "2024-07-03T08:41:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/7021.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7021" }
2024-07-03T07:58:57Z
https://api.github.com/repos/huggingface/datasets/issues/7021/comments
Fix casting list array to fixed size list. This bug was introduced in [datasets-2.17.0](https://github.com/huggingface/datasets/releases/tag/2.17.0) by PR: https://github.com/huggingface/datasets/pull/6283/files#diff-1cb2b66aa9311d729cfd83013dad56cf5afcda35b39dfd0bfe9c3813a049eab0R1899 - #6283 Fix #7020.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7021/timeline
closed
false
7,021
null
2024-07-03T08:41:55Z
null
true
2,387,940,990
https://api.github.com/repos/huggingface/datasets/issues/7020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7020/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2024-07-03T08:41:56Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7020
MEMBER
completed
null
null
[]
Casting list array to fixed size list raises error
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7020/reactions" }
I_kwDODunzps6OVRZ-
null
2024-07-03T07:54:49Z
https://api.github.com/repos/huggingface/datasets/issues/7020/comments
When trying to cast a list array to fixed size list, an AttributeError is raised: > AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' Steps to reproduce the bug: ```python import pyarrow as pa from datasets.table import array_cast arr = pa.array([[0, 1]]) array_cast(arr, pa.list_(pa.int64(), 2)) ``` Stack trace: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-12-6cb90a1d8216> in <module> 3 4 arr = pa.array([[0, 1]]) ----> 5 array_cast(arr, pa.list_(pa.int64(), 2)) ~/huggingface/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs) 1802 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1803 else: -> 1804 return func(array, *args, **kwargs) 1805 1806 return wrapper ~/huggingface/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str) 1920 else: 1921 array_values = array.values[ -> 1922 array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length 1923 ] 1924 return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size) AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7020/timeline
closed
false
7,020
null
2024-07-03T08:41:56Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,385,793,897
https://api.github.com/repos/huggingface/datasets/issues/7019
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7019/events
[]
null
2024-08-12T14:49:45Z
[]
https://github.com/huggingface/datasets/pull/7019
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7019). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@albertvillanova really happy to see this fix.\r\n\r\nHave you attempted to save a data...
Support pyarrow large_list
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7019/reactions" }
PR_kwDODunzps50LMjW
{ "diff_url": "https://github.com/huggingface/datasets/pull/7019.diff", "html_url": "https://github.com/huggingface/datasets/pull/7019", "merged_at": "2024-08-12T14:43:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/7019.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7019" }
2024-07-02T09:52:52Z
https://api.github.com/repos/huggingface/datasets/issues/7019/comments
Allow Polars round trip by supporting pyarrow large list. Fix #6834, fix #6984. Supersede and close #4800, close #6835, close #6986.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7019/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7019/timeline
closed
false
7,019
null
2024-08-12T14:43:45Z
null
true
2,383,700,286
https://api.github.com/repos/huggingface/datasets/issues/7018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7018/events
[]
null
2024-08-05T09:21:55Z
[]
https://github.com/huggingface/datasets/issues/7018
NONE
null
null
null
[ "In my case the error was:\r\n```\r\nValueError: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.\r\n```\r\nDid you try `load_from_disk`?", "More generally, any reason there is no API consistency between save_to_disk and push_to_hub ? \r\n\r\nWould be nice...
`load_dataset` fails to load dataset saved by `save_to_disk`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7018/reactions" }
I_kwDODunzps6OFGE-
null
2024-07-01T12:19:19Z
https://api.github.com/repos/huggingface/datasets/issues/7018/comments
### Describe the bug This code fails to load the dataset it just saved: ```python from datasets import load_dataset from transformers import AutoTokenizer MODEL = "google-bert/bert-base-cased" tokenizer = AutoTokenizer.from_pretrained(MODEL) dataset = load_dataset("yelp_review_full") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = dataset.map(tokenize_function, batched=True) tokenized_datasets.save_to_disk("dataset") tokenized_datasets = load_dataset("dataset/") # raises ``` It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`. I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON: ```shell $ ls -l dataset/test -rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow -rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json -rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json ``` ### Steps to reproduce the bug Execute the code above. ### Expected behavior The dataset is loaded successfully. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39 - Python version: 3.12.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/2307997?v=4", "events_url": "https://api.github.com/users/sliedes/events{/privacy}", "followers_url": "https://api.github.com/users/sliedes/followers", "following_url": "https://api.github.com/users/sliedes/following{/other_user}", "gists_url": "https://api.github.com/users/sliedes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sliedes", "id": 2307997, "login": "sliedes", "node_id": "MDQ6VXNlcjIzMDc5OTc=", "organizations_url": "https://api.github.com/users/sliedes/orgs", "received_events_url": "https://api.github.com/users/sliedes/received_events", "repos_url": "https://api.github.com/users/sliedes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sliedes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sliedes/subscriptions", "type": "User", "url": "https://api.github.com/users/sliedes" }
https://api.github.com/repos/huggingface/datasets/issues/7018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7018/timeline
open
false
7,018
null
null
null
false
2,383,647,419
https://api.github.com/repos/huggingface/datasets/issues/7017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7017/events
[]
null
2024-07-01T12:12:32Z
[]
https://github.com/huggingface/datasets/pull/7017
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7017). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Support fsspec 2024.6.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7017/reactions" }
PR_kwDODunzps50D3gi
{ "diff_url": "https://github.com/huggingface/datasets/pull/7017.diff", "html_url": "https://github.com/huggingface/datasets/pull/7017", "merged_at": "2024-07-01T12:06:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/7017.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7017" }
2024-07-01T11:57:15Z
https://api.github.com/repos/huggingface/datasets/issues/7017/comments
Support fsspec 2024.6.1.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7017/timeline
closed
false
7,017
null
2024-07-01T12:06:24Z
null
true
2,383,262,608
https://api.github.com/repos/huggingface/datasets/issues/7016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7016/events
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", ...
null
2024-07-20T06:51:58Z
[]
https://github.com/huggingface/datasets/issues/7016
NONE
null
null
null
[ "There is an open issue #2514 about this which also proposes solutions." ]
`drop_duplicates` method
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7016/reactions" }
I_kwDODunzps6ODbOQ
null
2024-07-01T09:01:06Z
https://api.github.com/repos/huggingface/datasets/issues/7016/comments
### Feature request `drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one) ### Motivation Ease of use ### Your contribution I don't think I am good enough to help
{ "avatar_url": "https://avatars.githubusercontent.com/u/26205298?v=4", "events_url": "https://api.github.com/users/MohamedAliRashad/events{/privacy}", "followers_url": "https://api.github.com/users/MohamedAliRashad/followers", "following_url": "https://api.github.com/users/MohamedAliRashad/following{/other_user}", "gists_url": "https://api.github.com/users/MohamedAliRashad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MohamedAliRashad", "id": 26205298, "login": "MohamedAliRashad", "node_id": "MDQ6VXNlcjI2MjA1Mjk4", "organizations_url": "https://api.github.com/users/MohamedAliRashad/orgs", "received_events_url": "https://api.github.com/users/MohamedAliRashad/received_events", "repos_url": "https://api.github.com/users/MohamedAliRashad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MohamedAliRashad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohamedAliRashad/subscriptions", "type": "User", "url": "https://api.github.com/users/MohamedAliRashad" }
https://api.github.com/repos/huggingface/datasets/issues/7016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7016/timeline
open
false
7,016
null
null
null
false
2,383,151,220
https://api.github.com/repos/huggingface/datasets/issues/7015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7015/events
[]
null
2024-07-26T09:37:51Z
[]
https://github.com/huggingface/datasets/pull/7015
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7015). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@albertvillanova thanks for the review, please take a look", "@albertvillanova please...
add split argument to Generator
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7015/reactions" }
PR_kwDODunzps50CJuE
{ "diff_url": "https://github.com/huggingface/datasets/pull/7015.diff", "html_url": "https://github.com/huggingface/datasets/pull/7015", "merged_at": "2024-07-26T09:31:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/7015.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7015" }
2024-07-01T08:09:25Z
https://api.github.com/repos/huggingface/datasets/issues/7015/comments
## Actual When creating a multi-split dataset using generators like ```python datasets.DatasetDict({ "val": datasets.Dataset.from_generator( generator=generator_val, features=features ), "test": datasets.Dataset.from_generator( generator=generator_test, features=features, ) }) ``` It displays (for both test and val) ``` Generating train split ``` ## Expected I would like to be able to improve this behavior by doing ```python datasets.DatasetDict({ "val": datasets.Dataset.from_generator( generator=generator_val, features=features, split="val" ), "test": datasets.Dataset.from_generator( generator=generator_test, features=features, split="test" ) }) ``` It would display ``` Generating val split ``` and ``` Generating test split ``` ## Proposal This PR adds an explicit `split` argument and replaces the implicit "train" split in the following classes/functions: * Generator * from_generator * AbstractDatasetInputStream * GeneratorDatasetInputStream Please share your feedback
{ "avatar_url": "https://avatars.githubusercontent.com/u/156736?v=4", "events_url": "https://api.github.com/users/piercus/events{/privacy}", "followers_url": "https://api.github.com/users/piercus/followers", "following_url": "https://api.github.com/users/piercus/following{/other_user}", "gists_url": "https://api.github.com/users/piercus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/piercus", "id": 156736, "login": "piercus", "node_id": "MDQ6VXNlcjE1NjczNg==", "organizations_url": "https://api.github.com/users/piercus/orgs", "received_events_url": "https://api.github.com/users/piercus/received_events", "repos_url": "https://api.github.com/users/piercus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/piercus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/piercus/subscriptions", "type": "User", "url": "https://api.github.com/users/piercus" }
https://api.github.com/repos/huggingface/datasets/issues/7015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7015/timeline
closed
false
7,015
null
2024-07-26T09:31:56Z
null
true
2,382,985,847
https://api.github.com/repos/huggingface/datasets/issues/7014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7014/events
[]
null
2024-07-01T07:16:36Z
[]
https://github.com/huggingface/datasets/pull/7014
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7014). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "The failing CI tests are unrelated to this PR.\r\n\r\nWe can see that now the integrati...
Skip faiss tests on Windows to avoid running CI for 360 minutes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7014/reactions" }
PR_kwDODunzps50BlwV
{ "diff_url": "https://github.com/huggingface/datasets/pull/7014.diff", "html_url": "https://github.com/huggingface/datasets/pull/7014", "merged_at": "2024-07-01T07:10:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/7014.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7014" }
2024-07-01T06:45:35Z
https://api.github.com/repos/huggingface/datasets/issues/7014/comments
Skip faiss tests on Windows to avoid running CI for 360 minutes. Fix #7013. Revert once the underlying issue is fixed.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7014/timeline
closed
false
7,014
null
2024-07-01T07:10:27Z
null
true
2,382,976,738
https://api.github.com/repos/huggingface/datasets/issues/7013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7013/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-07-01T07:10:28Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7013
MEMBER
completed
null
null
[]
CI is broken for faiss tests on Windows: node down: Not properly terminated
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7013/reactions" }
I_kwDODunzps6OCVbi
null
2024-07-01T06:40:03Z
https://api.github.com/repos/huggingface/datasets/issues/7013/comments
Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached. See: https://github.com/huggingface/datasets/actions/runs/9712659783 ``` test (integration, windows-latest, deps-minimum) The job running on runner GitHub Actions 60 has exceeded the maximum execution time of 360 minutes. test (integration, windows-latest, deps-latest) The job running on runner GitHub Actions 238 has exceeded the maximum execution time of 360 minutes. ``` ``` ____________________________ tests/test_search.py _____________________________ [gw1] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw1' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ____________________________ tests/test_search.py _____________________________ [gw2] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw2' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ``` ``` tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw0] node down: Not properly terminated [gw0] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw0 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw1] node down: Not properly terminated [gw1] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw1 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw2] node down: Not properly terminated [gw2] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw2 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7013/timeline
closed
false
7,013
null
2024-07-01T07:10:28Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,380,934,047
https://api.github.com/repos/huggingface/datasets/issues/7012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7012/events
[]
null
2024-07-11T02:06:16Z
[]
https://github.com/huggingface/datasets/pull/7012
NONE
null
false
null
[]
Raise an error when a nested object is expected to be a mapping that displays the object
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7012/reactions" }
PR_kwDODunzps5z61A3
{ "diff_url": "https://github.com/huggingface/datasets/pull/7012.diff", "html_url": "https://github.com/huggingface/datasets/pull/7012", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7012.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7012" }
2024-06-28T18:10:59Z
https://api.github.com/repos/huggingface/datasets/issues/7012/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/22511797?v=4", "events_url": "https://api.github.com/users/sebbyjp/events{/privacy}", "followers_url": "https://api.github.com/users/sebbyjp/followers", "following_url": "https://api.github.com/users/sebbyjp/following{/other_user}", "gists_url": "https://api.github.com/users/sebbyjp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sebbyjp", "id": 22511797, "login": "sebbyjp", "node_id": "MDQ6VXNlcjIyNTExNzk3", "organizations_url": "https://api.github.com/users/sebbyjp/orgs", "received_events_url": "https://api.github.com/users/sebbyjp/received_events", "repos_url": "https://api.github.com/users/sebbyjp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sebbyjp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebbyjp/subscriptions", "type": "User", "url": "https://api.github.com/users/sebbyjp" }
https://api.github.com/repos/huggingface/datasets/issues/7012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7012/timeline
closed
false
7,012
null
2024-07-11T02:06:16Z
null
true
2,379,785,262
https://api.github.com/repos/huggingface/datasets/issues/7011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7011/events
[]
null
2024-06-28T12:25:25Z
[]
https://github.com/huggingface/datasets/pull/7011
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7011). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Re-enable raising error from huggingface-hub FutureWarning in CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7011/reactions" }
PR_kwDODunzps5z27Fs
{ "diff_url": "https://github.com/huggingface/datasets/pull/7011.diff", "html_url": "https://github.com/huggingface/datasets/pull/7011", "merged_at": "2024-06-28T12:19:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/7011.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7011" }
2024-06-28T07:28:32Z
https://api.github.com/repos/huggingface/datasets/issues/7011/comments
Re-enable raising error from huggingface-hub FutureWarning in tests, now that the fix in transformers - https://github.com/huggingface/transformers/pull/31007 - was released yesterday in transformers-4.42.0: https://github.com/huggingface/transformers/releases/tag/v4.42.0 Fix #7010.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7011/timeline
closed
false
7,011
null
2024-06-28T12:19:28Z
null
true
2,379,777,480
https://api.github.com/repos/huggingface/datasets/issues/7010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7010/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-06-28T12:19:30Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7010
MEMBER
completed
null
null
[]
Re-enable raising error from huggingface-hub FutureWarning in CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7010/reactions" }
I_kwDODunzps6N2IXI
null
2024-06-28T07:23:40Z
https://api.github.com/repos/huggingface/datasets/issues/7010/comments
Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR: - #6876 Note that this can only be done once transformers releases the fix: - https://github.com/huggingface/transformers/pull/31007
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7010/timeline
closed
false
7,010
null
2024-06-28T12:19:29Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,379,619,132
https://api.github.com/repos/huggingface/datasets/issues/7009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7009/events
[]
null
2024-06-28T07:17:26Z
[]
https://github.com/huggingface/datasets/pull/7009
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7009). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
Support ruff 0.5.0 in CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7009/reactions" }
PR_kwDODunzps5z2Xe6
{ "diff_url": "https://github.com/huggingface/datasets/pull/7009.diff", "html_url": "https://github.com/huggingface/datasets/pull/7009", "merged_at": "2024-06-28T07:11:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/7009.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7009" }
2024-06-28T05:37:36Z
https://api.github.com/repos/huggingface/datasets/issues/7009/comments
Support ruff 0.5.0 in CI and revert: - #7007 Fix #7008.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7009/timeline
closed
false
7,009
null
2024-06-28T07:11:17Z
null
true
2,379,591,141
https://api.github.com/repos/huggingface/datasets/issues/7008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7008/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-06-28T07:11:18Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
https://github.com/huggingface/datasets/issues/7008
MEMBER
completed
null
null
[]
Support ruff 0.5.0 in CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7008/reactions" }
I_kwDODunzps6N1a3l
null
2024-06-28T05:11:26Z
https://api.github.com/repos/huggingface/datasets/issues/7008/comments
Support ruff 0.5.0 in CI. Also revert: - #7007
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/7008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7008/timeline
closed
false
7,008
null
2024-06-28T07:11:18Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false