arxiv:2407.01492

RegMix: Data Mixture as Regression for Language Model Pre-training

Published on Jul 1 · Submitted by SivilTaram on Jul 2
Abstract

The data mixture for large language model pre-training significantly impacts performance, yet how to determine an effective mixture remains unclear. We propose RegMix to automatically identify a high-performing data mixture by formulating it as a regression task. RegMix involves training a set of small models with diverse data mixtures and fitting a regression model to predict their performance given their respective mixtures. With the fitted regression model, we simulate the top-ranked mixture and use it to train a large-scale model with orders of magnitude more compute. To empirically validate RegMix, we train 512 models with 1M parameters for 1B tokens of different mixtures to fit the regression model and find the optimal mixture. Using this mixture we train a 1B parameter model for 25B tokens (i.e. 1000x larger and 25x longer) which we find performs best among 64 candidate 1B parameter models with other mixtures. Further, our method demonstrates superior performance compared to human selection and achieves results that match or surpass DoReMi, while utilizing only 10% of the compute budget. Our experiments also show that (1) Data mixtures significantly impact performance with single-task performance variations of up to 14.6%; (2) Web corpora rather than data perceived as high-quality like Wikipedia have the strongest positive correlation with downstream performance; (3) Domains interact in complex ways often contradicting common sense, thus automatic approaches like RegMix are needed; (4) Data mixture effects transcend scaling laws, and our approach captures the complexity by considering all domains together. Our code is available at https://github.com/sail-sg/regmix.
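As a rough illustration of the pipeline the abstract describes, here is a minimal sketch (not the authors' released code; the domain count, run count, and losses below are placeholders) of fitting a regression model on small-run mixtures and then ranking simulated candidate mixtures:

```python
# Minimal sketch of the RegMix idea: fit a regression model on
# (mixture weights -> validation loss) pairs from many small proxy runs,
# then simulate a large pool of candidate mixtures and keep the one with
# the lowest predicted loss. All numbers here are placeholders.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n_domains = 17        # placeholder number of data domains
n_small_runs = 512    # small proxy runs (1M-parameter models in the paper)

# Hypothetical measurements: each row is a mixture (weights summing to 1)
# plus the validation loss observed after a small proxy run.
mixtures = rng.dirichlet(np.ones(n_domains), size=n_small_runs)
losses = rng.normal(3.0, 0.1, size=n_small_runs)   # stand-in for real losses

reg = lgb.LGBMRegressor(n_estimators=1000, learning_rate=0.05)
reg.fit(mixtures, losses)

# Simulate many unseen candidate mixtures and rank them by predicted loss.
candidates = rng.dirichlet(np.ones(n_domains), size=100_000)
predicted = reg.predict(candidates)
best_mixture = candidates[np.argmin(predicted)]
print("predicted-best mixture:", np.round(best_mixture, 3))
```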

Community

Paper author Paper submitter
edited Jul 2

Very nice.

Given a high enough compute budget, it would be very interesting to do this at a finer granularity, with random segments (i.e., divide each domain into x segments, then on each run of the experiment randomly choose which segments to include).
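A rough sketch of what that sampling could look like (hypothetical domain names and segment counts, not anything from the paper):

```python
# Hypothetical finer-grained variant: sample domain weights as before, but
# also pick a random subset of segments within each domain for every run.
import numpy as np

rng = np.random.default_rng(42)
domains = ["web", "wiki", "code"]   # placeholder domain names
segments_per_domain = 8             # the "x" segments per domain

def sample_run_config():
    weights = rng.dirichlet(np.ones(len(domains)))
    chosen_segments = {
        d: sorted(
            rng.choice(
                segments_per_domain,
                size=rng.integers(1, segments_per_domain + 1),
                replace=False,
            ).tolist()
        )
        for d in domains
    }
    return dict(zip(domains, weights)), chosen_segments

print(sample_run_config())
```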

Hi @SivilTaram, congrats on this work, and great to see a Spaces demo.

I see the data is currently hosted here: https://github.com/sail-sg/regmix/tree/main/data. Would you be up for pushing it to the Hub? See here for how to do that: https://huggingface.co/docs/datasets/loading#csv. You can then call dataset.push_to_hub. Additionally, it can be linked to this paper; see here: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper.
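Roughly, for example (the CSV path and repo id below are placeholders, not the actual files in sail-sg/regmix):

```python
# Sketch of the upload flow described above; path and repo id are hypothetical.
from datasets import load_dataset

dataset = load_dataset("csv", data_files="data/mixtures.csv")  # placeholder path
dataset.push_to_hub("your-username/regmix-mixtures")           # placeholder repo id
```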

Kind regards,

Niels

Paper author Paper submitter

@nielsr Hi Niels! To clarify, the data folder contains the config files of the data mixtures and the target results (<1K rows, just some numbers). I have uploaded the sample data for training (the full dataset will also be uploaded). Do you think it would be meaningful to upload the mixture data points? Thanks!

https://huggingface.co/datasets/sail/regmix-data-sample


That looks really great! The data viewer should be available soon.

Regarding the models, it's really cool to see you leveraging the branching feature! Do note that download stats are based on the main branch, so if there's no config.json or safetensors file there, there won't be any downloads (see here for more info).

Also great to see you wrote some very nice model and dataset cards, thank you for doing that! 🔥

Hello, I noticed that the paper directly uses the regression results from the smaller models as the optimal combination for the larger model. I'm curious why we can assume that the effectiveness of data ratios is consistent between the two.

Paper author

@merlinarer Hi merlinarer, thanks for your interest in our work. Our key assumption is the rank invariance of data mixtures, which posits that the relative ranking of data mixtures, in terms of their impact on model performance, is consistent across different model sizes and numbers of training tokens (e.g., transferring the regression results from the small models to choose the mixture for the larger model). To validate this assumption, we train models with 1M and 1B parameters on different data mixtures. By training 512 models with 1M parameters on 1B tokens and then fitting a LightGBM model, we are able to predict the optimal data mixture among 64 models that are 1000× larger (1B parameters) and trained 25× longer (25B tokens), as depicted in Figure 1. More specifically, the predicted top-1 mixture indeed has the lowest validation loss among the 64 models with 1B parameters. In summary, our empirical results support our rank-invariance hypothesis.
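As a concrete sketch of that check (with made-up numbers, not the paper's actual measurements), one could compare the regression's predicted ranking against the observed 1B-model losses:

```python
# Sketch of validating rank invariance: compare predicted losses for the 64
# candidate mixtures (from the regression fit on 1M-parameter runs) against
# the losses actually observed after training the 1B-parameter models.
# All values here are synthetic stand-ins.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
predicted_losses = rng.normal(2.5, 0.05, size=64)                  # from the regression
observed_losses = predicted_losses + rng.normal(0, 0.01, size=64)  # stand-in for 1B runs

rho, pval = spearmanr(predicted_losses, observed_losses)
print(f"Spearman rank correlation: {rho:.3f} (p={pval:.2g})")

# Rank invariance also implies the predicted top-1 mixture should be the
# observed best; check whether the argmins agree.
print("top-1 match:", np.argmin(predicted_losses) == np.argmin(observed_losses))
```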


Models citing this paper 5


Datasets citing this paper 2

Spaces citing this paper 1

Collections including this paper 7