Datasets Table: a list of datasets used to train large language models at major AI labs, including:
DCLM-Pool, GPT-5, RedPajama-Data-v2, Piper monorepo, The Stack v2, MNBVC (Massive Never-ending BT Vast Chinese corpus), Claude-3.5 dataset, FineWeb, GPT-4 dataset, FineWeb-Edu-score-2, CulturaX, HPLT (High Performance Language Technologies), RefinedWeb, MassiveText ML, Matrix, Cultura-Y, DCLM-Baseline, PaLM 2 dataset, Dolma, Infiniset, MADLAD-400, MassiveText EN, The Stack v1, InternLM, Stability New Pile, FineWeb-Edu 1.3T, LLaMA, RedPajama, SlimPajama, Common Corpus, ROOTS, The Pile v1, StarCoder dataset (The Stack 1.2 subset), GPT-3 dataset, RoBERTa dataset, YouTube-Commons, Cosmopedia v0.1, GPT-2 dataset, GPT-1 dataset
All dataset reports by LifeArchitect.ai (most recent at top)
Date | Title |
Aug/2024 | What's in GPT-5? |
Jul/2024 | Argonne National Laboratory AuroraGPT (page) |
Sep/2023 | Google DeepMind Gemini: A general specialist |
Mar/2022 | What's in my AI? (GPT-1, GPT-2, GPT-3, MT-NLG, Chinchilla...) |