
# The Quantized Command R Model

Original base model: CohereForAI/c4ai-command-r-v01
Link: https://huggingface.co/CohereForAI/c4ai-command-r-v01

## Special Notice

1. The model was quantized with `AutoModelForCausalLM.from_pretrained` from the `transformers` package; a hedged sketch of the recipe follows these notes.
2. For the version quantized with the `auto-gptq` package, see https://huggingface.co/shuyuej/Command-R-GPTQ.
3. This is the smaller variant, produced by setting `group_size=1024`.
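
The exact quantization script lives in the repository linked under Source Code below; the snippet here is only a minimal sketch of a `transformers`-based GPTQ recipe, assuming a `"c4"` calibration dataset and an illustrative output directory. `bits=4` and `group_size=1024` match the configuration in the next section.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# group_size=1024 matches this repository's configuration; the calibration
# dataset ("c4") is an assumption made for illustration only.
gptq_config = GPTQConfig(
    bits=4,
    group_size=1024,
    dataset="c4",
    desc_act=False,
    tokenizer=tokenizer,
)

# Quantization happens inside from_pretrained when a GPTQConfig is passed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)

# Save the quantized weights (safetensors) for re-use.
model.save_pretrained("command-r-gptq-g1024")
tokenizer.save_pretrained("command-r-gptq-g1024")
```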

## Quantization Configurations

```json
"quantization_config": {
    "batch_size": 1,
    "bits": 4,
    "block_name_to_quantize": null,
    "cache_block_outputs": true,
    "damp_percent": 0.1,
    "dataset": null,
    "desc_act": false,
    "exllama_config": {
      "version": 1
    },
    "group_size": 1024,
    "max_input_length": null,
    "model_seqlen": null,
    "module_name_preceding_first_block": null,
    "modules_in_block_to_quantize": null,
    "pad_token_id": null,
    "quant_method": "gptq",
    "sym": true,
    "tokenizer": null,
    "true_sequential": true,
    "use_cuda_fp16": false,
    "use_exllama": true
  },
```
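
To load this pre-quantized checkpoint, a minimal sketch is shown below. It assumes the repository id `shuyuej/Command-R-Smaller-HF-GPTQ` and a CUDA device, since the ExLlama kernels enabled above (`use_exllama: true`) require a GPU; the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shuyuej/Command-R-Smaller-HF-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# The GPTQ configuration stored in this repository is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",          # ExLlama GPTQ kernels need a CUDA device
    torch_dtype=torch.float16,
)

# Build a chat-formatted prompt with the Command R chat template.
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```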

## Source Code

The quantization source code is available at https://github.com/vkola-lab/medpodgpt/tree/main/quantization.
