feihu.hf committed
Commit 6a7051e
1 Parent(s): 19a986b

update README & config.json

Files changed (2)
  1. README.md +23 -1
  2. config.json +1 -1
README.md CHANGED
@@ -33,7 +33,8 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
  - Number of Parameters (Non-Embedding): 1.31B
  - Number of Layers: 28
  - Number of Attention Heads (GQA): 12 for Q and 2 for KV
- - Context Length: Full 32,768 tokens
+ - Context Length: Full 131,072 tokens
+ - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.

  **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill-in-the-middle tasks on this model.

@@ -48,6 +49,27 @@ With `transformers<4.37.0`, you will encounter the following error:
  KeyError: 'qwen2'
  ```

+ ### Processing Long Texts
+
+ The current `config.json` is set for a context length of up to 32,768 tokens.
+ To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
+
+ For supported frameworks, you could add the following to `config.json` to enable YaRN:
+ ```json
+ {
+   ...,
+   "rope_scaling": {
+     "factor": 4.0,
+     "original_max_position_embeddings": 32768,
+     "type": "yarn"
+   }
+ }
+ ```
+
+ For deployment, we recommend using vLLM.
+ Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
+ Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
+ We advise adding the `rope_scaling` configuration only when processing long contexts is required.

  ## Evaluation & Performance
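
As a concrete illustration of the `rope_scaling` change described in the new "Processing Long Texts" section, the block can be written into a locally downloaded copy of the model's `config.json` with a few lines of Python. This is a minimal sketch: the checkpoint path is a hypothetical placeholder, and the values are copied from the README snippet above.

```python
# Minimal sketch: enable YaRN on a local checkpoint by patching its config.json.
# The path below is a hypothetical placeholder for wherever the checkpoint lives.
import json
from pathlib import Path

config_path = Path("./Qwen2.5-Coder-1.5B/config.json")  # assumed local path
config = json.loads(config_path.read_text())

# Values copied from the "Processing Long Texts" section added in this commit.
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

config_path.write_text(json.dumps(config, indent=2) + "\n")
print("rope_scaling set to:", config["rope_scaling"])
```

As the README notes, this is best done only when inputs actually exceed 32,768 tokens, since static YaRN scaling can affect quality on shorter texts.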
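
For the vLLM deployment path recommended above, a minimal offline-inference sketch looks as follows. The model ID and `max_model_len` are assumptions for illustration; vLLM reads the YaRN `rope_scaling` settings from `config.json`, so no extra flags are shown here.

```python
# Minimal sketch: offline inference with vLLM, as recommended in the README.
# The model ID is an assumed placeholder. max_model_len=131072 presumes the
# YaRN rope_scaling block has been added to config.json; otherwise keep it
# at or below 32768.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-Coder-1.5B", max_model_len=131072)
params = SamplingParams(temperature=0.0, max_tokens=128)

# Base (non-instruct) model, so a plain completion prompt is used.
outputs = llm.generate(["def quicksort(arr):"], params)
print(outputs[0].outputs[0].text)
```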
 
config.json CHANGED
@@ -17,7 +17,7 @@
  "num_key_value_heads": 2,
  "rms_norm_eps": 1e-06,
  "rope_theta": 1000000.0,
- "sliding_window": 32768,
+ "sliding_window": 131072,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.44.0",