Crystalcareai committed
Commit 877924c
1 Parent(s): 6fd8187

Update README.md

Files changed (1):
  1. README.md +3 -2
README.md CHANGED
@@ -6,9 +6,10 @@
 
 ## Overview
 
-Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, derived from the high-performance Arcee-SuperNova model. Built to offer exceptional instruction-following capabilities and domain-specific adaptability, this model was distilled from a 405B parameter architecture using a cutting-edge distillation pipeline. The model leverages offline logits and an instruction dataset generated with EvolKit (https://github.com/arcee-ai/EvolKit), ensuring high accuracy and usability across a range of tasks.
+Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model, leveraging offline logits extracted from the 405B parameter variant. This 8B variation of Llama-3.1-SuperNova maintains high performance while offering exceptional instruction-following capabilities and domain-specific adaptability.
 
-Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world application scenarios, offering a smaller footprint without sacrificing the power needed for demanding generative AI tasks. It’s ideal for organizations looking to harness large-scale model capabilities in a more compact, efficient format.
+The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit (https://github.com/arcee-ai/EvolKit), ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
 
+Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.
 # note
 This readme will be edited regularly on September 10, 2024 (the day of release). After the final readme is in place we will remove this note.
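The distillation setup the new overview describes — training an 8B student on offline logits precomputed from the 405B teacher — typically reduces to a KL-divergence loss between temperature-softened distributions. The sketch below is a minimal stdlib-only illustration of that idea, not Arcee.ai's actual pipeline; the function names, the temperature value, and the toy logits are all assumptions for exposition.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw logits to a probability distribution,
    # subtracting the max for numerical stability.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions.
    # In *offline* distillation, teacher_logits are computed once
    # (e.g. by the large teacher model) and stored, so the teacher
    # never needs to be loaded during student training.
    p = softmax(teacher_logits, temperature)  # teacher distribution
    q = softmax(student_logits, temperature)  # student distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that reproduces the teacher's logits exactly incurs zero loss;
# any mismatch yields a positive loss that training would minimize.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # → 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)  # → True
```

In practice this loss is computed per token over the vocabulary dimension; storing offline logits for the full vocabulary is why such pipelines usually keep only the top-k teacher logits per position.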