
# Llama-3.1-8b-instruct_4bitgs64_hqq_calib-Llama-3.1-SauerkrautLM-8b-Instruct-dare-merge

Llama-3.1-8b-instruct_4bitgs64_hqq_calib-Llama-3.1-SauerkrautLM-8b-Instruct-dare-merge is a merged language model built from two parents: mobiuslabsgmbh/Llama-3.1-8b-instruct_4bitgs64_hqq_calib and VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct. The merge was performed with mergekit, a toolkit for combining pretrained language models, using the DARE merge method and the configuration below.

## 🧩 Merge Configuration

```yaml
slices:
  - sources:
      - model: mobiuslabsgmbh/Llama-3.1-8b-instruct_4bitgs64_hqq_calib
        layer_range: [0, 31]
      - model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
        layer_range: [0, 31]
merge_method: dare
base_model: mobiuslabsgmbh/Llama-3.1-8b-instruct_4bitgs64_hqq_calib
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
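
For readers who want to reproduce a merge like this one, the sketch below uses mergekit's Python API to load the YAML above and run it. This is a minimal sketch, assuming the config is saved as `merge_config.yaml`; the output directory is a placeholder, and the exact `MergeOptions` fields may differ across mergekit versions.

```python
# Minimal sketch: running the merge via mergekit's Python API.
# Assumes the YAML above is saved as merge_config.yaml; the output
# directory name is a placeholder, and option fields may vary by version.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge_config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged-model",          # placeholder output directory
    options=MergeOptions(
        cuda=False,            # set True to merge on GPU
        copy_tokenizer=True,   # copy the base model's tokenizer alongside the weights
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

mergekit also ships a `mergekit-yaml` command-line entry point that accepts the same configuration file, which is the more common way to run one-off merges.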

## Model Features

This merged model combines the HQQ 4-bit quantized, calibrated build of Llama-3.1-8B-Instruct with Llama-3.1-SauerkrautLM-8b-Instruct, which was fine-tuned with Spectrum Fine-Tuning for German and English. The result is intended to handle both generative tasks and nuanced multilingual understanding, particularly in German and English, making it suitable for a wide range of applications.
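
For illustration, a minimal sketch of loading the merged model with Hugging Face Transformers follows. The repository id and prompt are placeholders, and this assumes the merged weights load as a standard Llama checkpoint in float16; depending on how the HQQ-quantized parent was materialized, loading details may differ.

```python
# Minimal sketch: loading and prompting the merged model with Transformers.
# Assumes the merge saved a standard Llama checkpoint; the repo id below
# is a placeholder and may need adjusting.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "path/or/repo-of-the-merged-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the merge's dtype: float16
    device_map="auto",
)

# Llama 3.1 instruct models use a chat template; apply it for prompting.
messages = [{"role": "user", "content": "Was ist Modell-Merging?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```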

## Evaluation Results

The evaluation results below come from the parent models, not the merge itself. The HQQ 4-bit version of Llama-3.1-8B-Instruct achieved the listed scores on ARC, HellaSwag, and MMLU, while the SauerkrautLM model reports improved multilingual (German-English) capability without publishing scores for these benchmarks. The merged model is expected to inherit these strengths, but it has not been benchmarked directly.

| Benchmark | Llama-3.1-8B-Instruct (HQQ 4-bit) | Llama-3.1-SauerkrautLM-8b-Instruct |
|---|---|---|
| ARC (25-shot) | 60.32 | Not specified |
| HellaSwag (10-shot) | 79.21 | Not specified |
| MMLU (5-shot) | 67.07 | Not specified |
| Average | 68.00 | Not specified |
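
The parent-model scores above can be re-checked against the merged model with EleutherAI's lm-evaluation-harness. The sketch below uses its `simple_evaluate` Python API; task names and the model path are assumptions, and each benchmark's few-shot setting (ARC 25-shot, HellaSwag 10-shot, MMLU 5-shot) needs its own run.

```python
# Minimal sketch: benchmarking the merged model with lm-evaluation-harness.
# Assumes a recent lm-eval version exposing simple_evaluate; the model
# path is a placeholder. Shown for ARC only; repeat with the appropriate
# num_fewshot for HellaSwag (10) and MMLU (5).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=./merged-model,dtype=float16",  # placeholder path
    tasks=["arc_challenge"],
    num_fewshot=25,  # 25-shot ARC, matching the table above
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```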

## Limitations

While the merged model benefits from the strengths of both parent models, it may also inherit some limitations. For instance, the HQQ quantization process could introduce certain biases or inaccuracies in specific contexts, and the fine-tuning on a limited dataset may not cover all nuances of the languages involved. Users should be aware of these potential issues and exercise caution when deploying the model in sensitive applications. Additionally, despite efforts to ensure appropriate behavior, the possibility of encountering uncensored content remains.

In summary, Llama-3.1-8b-instruct_4bitgs64_hqq_calib-Llama-3.1-SauerkrautLM-8b-Instruct-dare-merge combines the complementary strengths of its parent models, an efficient quantized base and a multilingual fine-tune, while also carrying forward some of their limitations.
