
QuantFactory/Llama2-7B-Hindi-finetuned-GGUF

This is a quantized version of subhrokomol/Llama2-7B-Hindi-finetuned, created using llama.cpp.
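
The GGUF files can be loaded directly with llama-cpp-python. Below is a minimal sketch; the filename pattern is an assumption and should be checked against the repository's actual file list:

```python
# Assumes llama-cpp-python is installed (pip install llama-cpp-python).
# The GGUF filename below is a guess; check the repository file list for the
# exact name of the quantization you want.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama2-7B-Hindi-finetuned-GGUF",
    filename="*Q4_K_M.gguf",   # glob pattern; picks a 4-bit variant if present
    n_ctx=2048,                # context window
)

output = llm("नमस्ते, आप कैसे हैं?", max_tokens=64)
print(output["choices"][0]["text"])
```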

Original Model Card

Fine-tune Llama-2-7B-hf on a Hindi dataset after transtokenization

This model was trained for 3 hours on a 24 GB RTX A500 GPU, using 1% of the zicsx/mC4-Hindi-Cleaned-3.0 dataset.

We used Hugging Face PEFT (LoRA) with PyTorch for training.
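
For reference, a minimal sketch of this kind of PEFT-LoRA setup; the rank, alpha, and target modules below are illustrative assumptions, not the values actually used for this model:

```python
# Illustrative PEFT-LoRA configuration; hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora_config = LoraConfig(
    r=16,                                  # LoRA rank (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```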

The transtokenization process is described in --

Format: GGUF
Model size: 6.74B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
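
To fetch a single quantized file rather than the whole repository, huggingface_hub can be used. The filename below is an assumption; list the repository files to find the exact name of the quantization level you want:

```python
# Download one quantized GGUF file from the repo with huggingface_hub.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "QuantFactory/Llama2-7B-Hindi-finetuned-GGUF"
print(list_repo_files(repo_id))  # inspect the available GGUF files

path = hf_hub_download(
    repo_id=repo_id,
    filename="Llama2-7B-Hindi-finetuned.Q4_K_M.gguf",  # assumed filename
)
print(path)
```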


Model tree for QuantFactory/Llama2-7B-Hindi-finetuned-GGUF
Base model: subhrokomol/Llama2-7B-Hindi-finetuned (quantized into this model)

Dataset used to train QuantFactory/Llama2-7B-Hindi-finetuned-GGUF: zicsx/mC4-Hindi-Cleaned-3.0