This is a subset of the deduplicated Stack dataset (`bigcode/the-stack-dedup`).
It was generated like so:
```python
from datasets import load_dataset, Dataset

languages = [
    "css", "prolog", "c", "fortran", "solidity", "kotlin", "literate-agda",
    "julia", "java-server-pages", "isabelle", "idris", "lean", "powershell",
    "go", "erlang", "f-sharp", "ada", "pascal", "perl", "r", "protocol-buffer",
    "cmake", "sas", "ruby", "rust", "rmarkdown", "c-sharp", "smalltalk",
    "haskell", "maple", "mathematica", "ocaml", "makefile", "lua",
    "literate-coffeescript", "literate-haskell", "restructuredtext", "racket",
    "standard-ml", "systemverilog", "tex", "awk", "assembly", "alloy", "agda",
    "emacs-lisp", "dart", "cuda", "bluespec", "augeas", "batchfile", "tcsh",
    "stan", "scala", "tcl", "stata", "applescript", "shell", "clojure",
    "scheme", "antlr", "sparql", "sql", "glsl", "elm", "dockerfile", "cpp",
    "coffeescript", "common-lisp", "elixir", "groovy", "html", "java",
    "javascript", "markdown", "php", "python", "typescript", "verilog",
    "visual-basic", "vhdl", "thrift", "matlab", "yacc", "zig", "xslt",
    "json", "yaml",
]

def dset_gen():
    for language in languages:
        # Stream each language subset so the full dataset never needs
        # to be downloaded up front.
        dset = load_dataset(
            "bigcode/the-stack-dedup",
            data_dir=f"data/{language}",
            streaming=True,
            split="train",
        )
        # Take up to 250,000 files per language.
        sample = dset.take(250_000)
        for row in sample:
            yield row

dset = Dataset.from_generator(dset_gen)
```
- num_examples: 11,658,586
- download_size: 28,807,934,580 bytes (~28.8 GB)
- dataset_size: 78,577,965,159 bytes (~78.6 GB)
Each data instance corresponds to one file. The text of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion. In that case, only the first of these repositories, in alphabetical order, is shown for simplicity.
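As a rough sketch, a single data instance can be pictured as a record with the file text under `content` and the metadata alongside it. The field values below are illustrative placeholders, not rows from the real dataset, and only the feature names mentioned above are shown:

```python
# One data instance, sketched as a plain dict (values are made up
# for illustration; the real dataset has additional metadata features).
example = {
    "content": 'print("hello")\n',      # full text of the file
    "repository_name": "alice/hello",   # first matching repo, alphabetically
    "licenses": ["MIT"],                # safe license(s) detected for the file
}

# The file text lives in the "content" feature; the rest is metadata.
print(example["content"])
```

In practice you would iterate over the loaded dataset and read the same keys from each row.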