Excessive cache usage (1TB+) when loading with load_dataset method

#2
opened by Suhail

Hello, I am new to Hugging Face Datasets. When attempting to load the CC12M dataset with the load_dataset method, I'm encountering the following issues:

The loading process fills my .cache directory with approximately 1 TB of data.
This excessive disk usage causes my remote server to crash.

Questions:

Is there an alternative way to load this dataset that uses less disk space?
Is it possible to download the data directly to a location I choose instead of filling the cache?

I would greatly appreciate any guidance on how to efficiently load and work with this large dataset.

Pixel Parsing org

@Suhail hello, welcome. So yeah, generally I wouldn't use load_dataset for a large webdataset like this unless you set streaming=True and just want to preview it or scan through it once.
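For example, a minimal streaming sketch (assuming the pixparse/cc12m-wds repo id and a train split; not from the original reply) that peeks at a few samples without writing the shards to the local cache:

from datasets import load_dataset

# Streaming avoids downloading and caching the full ~1TB of tar shards.
ds = load_dataset("pixparse/cc12m-wds", split="train", streaming=True)

# Look at a handful of samples, then stop.
for i, sample in enumerate(ds):
    print(sample.keys())
    if i >= 4:
        break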

If you want to download it for training / to keep permanently, I usually prefer to control where it goes and not stick 1 TB in a cache folder somewhere :)

pip install huggingface_hub hf-transfer
huggingface-cli login

Then, to download fast to a specified folder:
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download --repo-type dataset pixparse/cc12m-wds --local-dir /dest/path

NOTE: if you have an old version of huggingface_hub installed, make sure it's up to date. The --local-dir behaviour was improved this summer, which makes it much better for this sort of use; older versions used to symlink back into the cache.
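If you'd rather do the same download from Python, here's a rough sketch using snapshot_download from huggingface_hub (/dest/path is a placeholder for your target directory):

import os

# Optional: enable the hf-transfer backend; set this before importing huggingface_hub.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

# Download the whole dataset repo into a directory you control, not the cache.
snapshot_download(
    repo_id="pixparse/cc12m-wds",
    repo_type="dataset",
    local_dir="/dest/path",
)

Once the tar shards are on disk, you can point your training pipeline at the local files instead of the cache.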

@rwightman Thank you very much, you saved my day. It worked perfectly.

Suhail changed discussion status to closed
