
Why can't I get the model widget to work?

#2 by codelion (Lambda Security org) - opened • edited Apr 7, 2023

@SFconvertbot I cannot get the model widget to do inference. At first, I thought it was due to the need to set trust_remote_code to true, but even after adding the safetensors weights the widget is not working. I can see similar models working fine at https://huggingface.co/bigcode/santacoder and https://huggingface.co/TabbyML/J-350M.
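
For context, this is roughly how I load it locally with plain transformers; a minimal sketch, where the model id is a placeholder since the repo isn't named in this thread:

```python
# Minimal local-inference sketch; MODEL_ID is a placeholder for the
# fine-tuned SantaCoder checkpoint this discussion is about.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "lambdasec/santacoder-finetune"  # hypothetical id

# trust_remote_code=True is needed because the repo ships custom model code.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

inputs = tokenizer("def hello():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```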

Hi,

The API doesn't run remote code, so it's not a matter of the safetensors weights being there.
TabbyML/J-350M is a regular transformers model.
https://huggingface.co/bigcode/santacoder is a special model and is run using https://github.com/huggingface/text-generation-inference, which has its own serving code for it.

If it's the same architecture as santacoder, we could possibly run inference the same way.

@olivierdehaene
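
For reference, once a text-generation-inference server is up (like the santacoder deployment), it exposes an HTTP generate endpoint; a rough sketch of querying one, assuming an instance is already serving locally on port 8080:

```python
# Rough sketch of querying a locally running text-generation-inference
# server; assumes one is already serving on localhost:8080.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "def fibonacci(n):",
        "parameters": {"max_new_tokens": 32},
    },
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```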

Lambda Security org

@Narsil this model is a fine-tuned version of SantaCoder. Is there a way I can enable it to use https://github.com/huggingface/text-generation-inference ?

Not externally.

As a first step, I would recommend using Spaces and asking for a community grant; motivate the model a bit, explain what it's trying to do, etc. If there's enough visibility, we would definitely enable text-generation-inference on it.
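
For what it's worth, such a Space can be quite small; a minimal Gradio sketch along these lines (the model id below is a placeholder, not the actual repo name):

```python
# Minimal Gradio Space sketch (app.py). MODEL_ID is a placeholder;
# replace it with the actual fine-tuned checkpoint.
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "lambdasec/santacoder-finetune"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

def generate(prompt: str) -> str:
    # Greedy generation kept short so the demo stays responsive on CPU.
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

gr.Interface(fn=generate, inputs="text", outputs="text").launch()
```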

codelion changed discussion status to closed
