---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 107402461357
    num_examples: 431867387
  download_size: 63321627068
  dataset_size: 107402461357
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- da
size_categories:
- 100M<n<1B
---

We encountered several out-of-memory (OOM) errors that we could not fully explain, so we reduced the memory footprint of our processing. The hardware we used was a machine with 128 cores and 1 TB of RAM. The data should take up less than 100 GB of disk space.

## Licenses

For the licensing of the data, we refer to the paper. For the Twitter data specifically, we assumed, as an upper bound, that it was extracted with a version of the Twitter API that is MIT-licensed, although we are aware that such licenses usually cover the software/code rather than the data itself. Feel free to leave the Twitter data out (see the filtering example below); other Twitter datasets on the Hugging Face Hub appear to carry CC-BY-style licenses.

## Citation

If you find the work in this repository useful, please don't forget to cite:

```bibtex
@inproceedings{snakmodel,
  title={{S}nak{M}odel: Lessons Learned from Training an Open Danish Large Language Model},
  author={Mike Zhang and Max M{\"u}ller-Eberstein and Elisa Bassignana and Rob van der Goot},
  booktitle={The Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies},
  year={2024},
  url={https://openreview.net/forum?id=YxzfgQGpRQ}
}
```
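
## Loading the data

The metadata above defines a single `default` config with one `train` split and `text`/`source` columns. The sketch below shows one way to stream the data and, as suggested in the license section, leave the Twitter portion out. The repository id is a placeholder and the exact `source` label used for Twitter is an assumption, so inspect a few rows before relying on the filter; streaming also avoids materializing the full ~100 GB locally.

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual dataset repo on the Hub.
ds = load_dataset("your-org/danish-pretraining-data", split="train", streaming=True)

# Optionally drop the Twitter data. The exact label in the `source` column is an
# assumption here; check the real values first (e.g. by printing a few examples).
ds_no_twitter = ds.filter(lambda ex: "twitter" not in ex["source"].lower())

for example in ds_no_twitter.take(3):
    print(example["source"], example["text"][:80])
```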
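
The note on OOM errors above does not spell out the exact processing call, but if the processing is done with the `datasets` library, the usual knobs for keeping the memory footprint low on a many-core machine are smaller (writer) batch sizes and fewer worker processes. The cleaning function and parameter values below are illustrative assumptions, not the settings actually used for this dataset.

```python
from datasets import load_dataset

ds = load_dataset("your-org/danish-pretraining-data", split="train")  # placeholder repo id

def clean(batch):
    # Illustrative per-batch cleanup; not the actual preprocessing behind this dataset.
    batch["text"] = [t.strip() for t in batch["text"]]
    return batch

ds = ds.map(
    clean,
    batched=True,
    batch_size=1_000,         # smaller batches lower peak memory per worker
    writer_batch_size=1_000,  # flush processed rows to the on-disk Arrow cache more often
    num_proc=32,              # use fewer workers than the 128 available cores
)
```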