
Model Card for DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst

Overview

DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst is developed by deepAuto.ai and builds upon the VAGOsolutions/Llama-3.1-SauerkrautLM-8B-Instruct model. Our approach trains a latent diffusion model on the base model's pretrained weights and uses it to optimize those weights for the Winogrande and ARC-Challenge datasets.

Through this process, we learn the distribution of the base model's weight space, which lets us explore optimal configurations. We then sample multiple sets of weights and apply the model-soup averaging technique to identify the best-performing weights on both datasets. These weights are merged via linear interpolation to produce the final weights of DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst, as sketched below.
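The sketch below illustrates only the model-soup merging step, under stated assumptions: candidate weight sets sampled from the latent diffusion model are assumed to already be available as PyTorch state dicts, and the name candidate_state_dicts and the uniform interpolation coefficients are illustrative rather than the exact pipeline.

```python
import torch

def soup_average(state_dicts, coeffs=None):
    """Linearly interpolate several state dicts; uniform coefficients give a plain model soup."""
    if coeffs is None:
        coeffs = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        # Accumulate in float32 for numerical stability, then cast back (e.g. to BF16).
        acc = sum(c * sd[key].to(torch.float32) for c, sd in zip(coeffs, state_dicts))
        merged[key] = acc.to(state_dicts[0][key].dtype)
    return merged

# Illustrative usage: `candidate_state_dicts` would hold the best-performing
# weight sets sampled from the latent diffusion model.
# merged_weights = soup_average(candidate_state_dicts)
# model.load_state_dict(merged_weights)
```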

This approach has led to improved performance on previously unseen leaderboard tasks, all without any additional task-specific training.

This work is currently in progress.

Evaluation

Results

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|-------|
| Avg.                | 28.64 |
| IFEval (0-shot)     | 80.33 |
| BBH (3-shot)        | 31.10 |
| MATH Lvl 5 (4-shot) | 11.56 |
| GPQA (0-shot)       | 5.26  |
| MuSR (0-shot)       | 11.52 |
| MMLU-PRO (5-shot)   | 32.07 |
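For reference, scores of this kind can be reproduced locally with lm-evaluation-harness. The snippet below is a rough, assumption-laden sketch rather than the exact leaderboard setup: the "leaderboard" task group name and the lm_eval.simple_evaluate Python API depend on the installed harness version, so verify both before running.

```python
import lm_eval

# Rough sketch of a local re-run of the leaderboard tasks; check task names
# against your installed lm-evaluation-harness version before using.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst,dtype=bfloat16",
    tasks=["leaderboard"],  # assumed group covering IFEval, BBH, MATH Lvl 5, GPQA, MuSR, MMLU-PRO
    batch_size="auto",
)
print(results["results"])
```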
Model size: 8.03B parameters · Tensor type: BF16 (Safetensors)
Inference Examples
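A minimal text-generation example with the transformers library is shown below; the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain model soups in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```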
