Ollama availability

#8
opened by ramesh3012

Hi,

I would like to see this model available in Ollama. Can you please let me know when it will be available?

Thanks
Ramesh Rajamani

Unfortunately, llama.cpp currently does not support some of the techniques used by Phi-3, such as LongRoPE, Flash Attention, and the CLIP ViT projector... Ollama uses llama.cpp as its inference backend. So...


Looks like I made a mistake... llava-phi3 on Ollama can use a CLIP ViT model as part of its GGUF model files and combine it with the Phi-3 text model. I just ran into issues when I tried to convert Phi-3/3.5 Vision to GGUF with llama.cpp.
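For reference, this is roughly what the attempted conversion looks like: a minimal sketch assuming a local clone of llama.cpp and a locally downloaded Hugging Face snapshot of Phi-3.5-vision-instruct (both paths are placeholders). The `convert_hf_to_gguf.py` script handles text-only checkpoints like plain Phi-3; for the vision variants, the CLIP ViT / projector weights are where the issues mentioned above come up.

```python
import subprocess
from pathlib import Path

# Assumed local paths -- adjust to wherever llama.cpp and the model snapshot live.
LLAMA_CPP = Path("llama.cpp")                 # local clone of ggerganov/llama.cpp
MODEL_DIR = Path("Phi-3.5-vision-instruct")   # local Hugging Face snapshot of the model
OUT_FILE = Path("phi-3.5-vision-f16.gguf")

# Run llama.cpp's standard HF-to-GGUF converter. For text-only Phi-3 this
# produces a usable GGUF; for the vision models the image-encoder weights
# are not handled by this path, which is where the conversion breaks down.
subprocess.run(
    [
        "python",
        str(LLAMA_CPP / "convert_hf_to_gguf.py"),
        str(MODEL_DIR),
        "--outfile", str(OUT_FILE),
        "--outtype", "f16",
    ],
    check=True,
)
```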
