Crystalcareai committed
Commit 10f8c9d
1 Parent(s): 571375b

Update README.md

Files changed (1):
  1. README.md +11 -0
README.md CHANGED
@@ -9,5 +9,16 @@ Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, based o
 9    The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit (https://github.com/arcee-ai/EvolKit), ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
10    
11    Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.
12  + 
13  + # Evaluations
14  + We will be submitting this model to the OpenLLM Leaderboard for more conclusive benchmarking; in the meantime, here are our internal benchmarks (these will be updated as results come in):
15  + 
16  + | Benchmark  | Score |
17  + |------------|-------|
18  + | IF_Eval    | 81.1  |
19  + | MMLU Pro   | 35.8  |
20  + | TruthfulQA | 64.4  |
21  + 
22  + 
23    # note
24    This readme will be edited regularly on September 10, 2024 (the day of release). After the final readme is in place we will remove this note.
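
The README above describes an 8B instruct model; below is a minimal sketch of loading and prompting it with `transformers`. It assumes the Hugging Face repo id `arcee-ai/Llama-3.1-SuperNova-Lite` (inferred from the model name, not stated in this diff) and the standard chat-template workflow for Llama-3.1-style instruct models; it is an illustration, not an official snippet from Arcee.ai.

```python
# Minimal loading/prompting sketch, not an official snippet.
# Assumptions: repo id inferred from the model name; bf16 weights; a chat
# template shipped with the tokenizer (standard for Llama-3.1 instruct models).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Llama-3.1-SuperNova-Lite"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # an 8B model fits in bf16 on a single modern GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain model distillation in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same repo id would plug into standard evaluation tooling (for example, lm-evaluation-harness) to produce benchmark-style numbers comparable to the table added in this commit.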