dinalt committed on
Commit a4d2883
Parent: 44e3886

Update README.md

Files changed (1)
  1. README.md +81 -65
README.md CHANGED
@@ -1,77 +1,114 @@
---
library_name: transformers
- tags: []
---

# Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->

- ## Model Details

### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

- Use the code below to get started with the model.
-
- [More Information Needed]

## Training Details

@@ -140,15 +177,7 @@ Use the code below to get started with the model.

## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

@@ -162,23 +191,12 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

#### Hardware

- [More Information Needed]

#### Software

[More Information Needed]

- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]

## Glossary [optional]

@@ -196,6 +214,4 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

## Model Card Contact

- [More Information Needed]
-
-
---
library_name: transformers
+ license: mit
+ datasets:
+ - teknium/OpenHermes-2.5
+ - Open-Orca/OpenOrca
+ - cognitivecomputations/dolphin
+ - LDJnr/Capybara
+ - abacusai/SystemChat
---

# Model Card for Model ID

+ Walsh_Instruct-1.7b

+ ## Model Details

+ - Model Dimension: 2048
+ - Hidden Layers: 32
+ - Attention Heads: 32
+ - Feedforward Dimension: 8192
+ - Feedforward Network Type: Conventional MLP with GeLU activation
+ - Vocabulary Size: 32000
+ - Max Sequence Length: 16K (14-bit absolute positional encoding via Walsh matrix)
+ - Weight Initialization: DeepNet, https://arxiv.org/abs/2203.00555
+ - Pretraining Datasets: RedPajama-Data-1T, mostly "books" and some Wikipedia.

### Model Description

+ This is an instruction-tuned fork of my "dinalt/walsh-1-7b" model... mostly for fun.

+ Hadamard-Walsh 1.7B is an experimental model using a new positional encoder. The encoder represents absolute positions with a combination of rows from the Hadamard-Walsh matrix (https://en.wikipedia.org/wiki/Hadamard_code). Each row corresponds to a binary digit in the positional code, where the presence of a row codes for a one and its absence for a zero. While training, the base offset into the sequence is randomly chosen for each batch. The result is that the model is very proficient at sequences much longer than those seen in training. A sketch of the scheme follows.
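+ A minimal sketch of the encoding idea, assuming PyTorch; the function names (hadamard, walsh_positions) and the exact row selection are illustrative, and the real encoder in the repository may differ in scaling and details:
+
+ ```
+ import torch
+
+ def hadamard(n: int) -> torch.Tensor:
+     # Sylvester construction: H_2k = [[H_k, H_k], [H_k, -H_k]]
+     h = torch.ones(1, 1)
+     while h.shape[0] < n:
+         h = torch.cat([torch.cat([h, h], dim=1), torch.cat([h, -h], dim=1)], dim=0)
+     return h
+
+ def walsh_positions(positions: torch.Tensor, n_bits: int = 14, d_model: int = 2048) -> torch.Tensor:
+     # One Hadamard row per binary digit of the position: a set bit adds that row.
+     rows = hadamard(d_model)[1 : n_bits + 1]  # (n_bits, d_model); skip the all-ones row
+     bits = ((positions.unsqueeze(-1) >> torch.arange(n_bits)) & 1).float()  # (..., n_bits)
+     return bits @ rows  # (..., d_model)
+
+ # During training, a random base offset per batch exposes the model to large positions:
+ # codes = walsh_positions(offset + torch.arange(seq_len))
+ ```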
 
+ Aside from the unusual positional encoder, the most interesting aspect of this model is the application of DITTO training:

+ Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation
+ https://arxiv.org/abs/2206.02369

+ As described in the paper, the procedure is very effective at eliminating sentence-level repetition, and it also reduces perplexity slightly.

+ I will see about posting the code for running the training and generating a DITTO dataset later, although the "ditto-loss" function is already in the model implementation. A sketch of the objective follows.
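+ A rough sketch of the paper's per-token objective on a pseudo-repetitive sample (a sentence of rep_len tokens repeated to fill the context), assuming PyTorch; the names ditto_loss, rep_len, and lam (the paper's λ penalization factor) are illustrative, and the "ditto-loss" shipped with this model may differ:
+
+ ```
+ import torch
+ import torch.nn.functional as F
+
+ def ditto_loss(logits: torch.Tensor, targets: torch.Tensor, rep_len: int, lam: float = 0.5) -> torch.Tensor:
+     # Probability the model assigns to each target token: (batch, seq)
+     probs = F.softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
+     # Probability of the same token one repetition earlier, treated as a constant
+     prev = probs[:, :-rep_len].detach()
+     cur = probs[:, rep_len:]
+     # Penalize the current probability for drifting away from lam * previous
+     return -torch.log(1.0 - (cur - lam * prev).abs() + 1e-6).mean()
+ ```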
 
+ - **Developed by:** Jason dinAlt
+ - **Model type:** Causal language model. Instruction following. Text generation.

+ ### Model Sources [optional]

+ - **Repository:** https://huggingface.co/dinalt/walsh-1-7b

+ ## Uses

+ This is a toy instruction-following model. It's occasionally reliable at following directions.

+ ### Direct Use

+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

## Bias, Risks, and Limitations

+ This is an uncensored instruction-following model. No attempt has been made to make the model "safe." It may offend your sensibilities.
+ It will likely provide inaccurate information. Use at your own risk. Whatever you do, don't put it in charge of the global defense grid!

## How to Get Started with the Model

+ The easiest way to get started with the model is to use text-generation-webui, which needs to be started with the "--trust-remote-code" flag:
+
+ https://github.com/oobabooga/text-generation-webui
+
+ It appears to work best with the "Big O" and "Simple-1" generation presets.
+
+ ### Prompt Format
+ As an instruction model, it has been trained to use the ChatML prompt format:
+ ```
+ <|im_start|>system
+ Provide some context and/or instructions to the model.
+ <|im_end|>
+ <|im_start|>user
+ The user’s message goes here
+ <|im_end|>
+ <|im_start|>assistant
+ ```
+ For details, see: https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/includes/chat-markup-language.md#chatml
+
+ ### Loading
+ The model implementation is all my own, so you will need to use "trust_remote_code" to load the model.
+
+ ```
+ import torch
+ from transformers import (
+     AutoTokenizer,
+     AutoModelForCausalLM,
+ )
+
+ model_id = "dinalt/walsh-1-7b"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     trust_remote_code=True,
+     # flash_attention_2 requires bfloat16 or float16
+     torch_dtype=torch.bfloat16,
+     # One of ["flash_attention_2", "sdpa", "eager"]
+     attn_implementation="flash_attention_2",
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ ```
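+
+ A minimal single-prompt generation sketch using the ChatML format above; the prompt text and sampling settings are illustrative (not the "Big O"/"Simple-1" presets), and it assumes the model and tokenizer from the snippet above, with the model moved to a GPU:
+
+ ```
+ # Hypothetical example prompt; assumes model.to("cuda") has been called.
+ prompt = (
+     "<|im_start|>system\n"
+     "You are a helpful assistant.\n"
+     "<|im_end|>\n"
+     "<|im_start|>user\n"
+     "Write a limerick about Hadamard matrices.\n"
+     "<|im_end|>\n"
+     "<|im_start|>assistant\n"
+ )
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ with torch.no_grad():
+     output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
+ print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
+ ```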
+
+ For batch instruction generation, see my example code here:
+ https://discuss.huggingface.co/t/implimentation-of-stopping-criteria-list/20040/16?u=dinalt

## Training Details

## Environmental Impact

+ It keeps my house warm in the winter...

## Technical Specifications [optional]

#### Hardware

+ 6 x RTX 4090

#### Software

[More Information Needed]

## Glossary [optional]

## Model Card Contact

+ [More Information Needed]