jackyjin committed
Commit
2cdb49d
1 Parent(s): 95f2cf3

Update README.md


add processor definition

Files changed (1)
  1. README.md +3 -1
README.md CHANGED

@@ -44,7 +44,7 @@ The model supports multi-image and multi-prompt generation. Meaning that you can
 Below we used [`"llava-hf/llava-1.5-7b-hf"`](https://huggingface.co/llava-hf/llava-1.5-7b-hf) checkpoint.
 
 ```python
-from transformers import pipeline
+from transformers import pipeline,AutoProcessor
 from PIL import Image
 import requests
 
@@ -65,6 +65,8 @@ conversation = [
     ],
   },
 ]
+processor = AutoProcessor.from_pretrained(model_id)
+
 prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
 
 outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
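
For context, the change fixes the README's pipeline example, which previously called `processor.apply_chat_template` without ever defining `processor`. Below is a minimal sketch of how the updated snippet fits together end to end. The `model_id` assignment follows the checkpoint named in the excerpt, but the `pipeline` task string, the image URL, and the question text are placeholders filled in for illustration, since the diff only shows a few surrounding lines.

```python
from transformers import pipeline, AutoProcessor
from PIL import Image
import requests

# Checkpoint referenced in the README excerpt above.
model_id = "llava-hf/llava-1.5-7b-hf"

# The README drives generation through a pipeline; the task string here is assumed.
pipe = pipeline("image-to-text", model=model_id)

# Placeholder image; the URL actually used in the README is not shown in this diff.
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Placeholder conversation; the README's real prompt text is not visible here.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image"},
        ],
    },
]

# The line added by this commit: load the processor so that
# apply_chat_template below has something to call.
processor = AutoProcessor.from_pretrained(model_id)

prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
```

Using the processor's chat template keeps the prompt format in sync with the template expected by the checkpoint, rather than hard-coding the prompt string by hand.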