Th3r0 committed
Commit f386160
1 Parent(s): 494624c

Added a wrapper for all models; changed formatting and fixed spelling mistakes.

Files changed (1):
app.py +57 -19
app.py CHANGED
@@ -12,17 +12,52 @@ import pandas as pd
 def parse_pipe_sa(pipe_out_text: str):
     output_list = list(pipe_out_text)
     pipe_label = output_list[0]['label']
-    pipe_score = output_list[0]['score']
+    pipe_score = float(output_list[0]['score'])*100
 
     parsed_prediction = 'NULL'
 
     if pipe_label == 'NEGATIVE' or pipe_label == 'LABEL_0':
-        parsed_prediction = f'This model thinks the sentiment is negative with a confidence score of {pipe_score}'
+        parsed_prediction = f'This model thinks the sentiment is NEGATIVE. \nConfidence score of {pipe_score:.3f}%'
     elif pipe_label == 'POSITIVE' or pipe_label == 'LABEL_1':
-        parsed_prediction = f'This model thinks the sentiment is positive with a confidence score of {pipe_score}'
+        parsed_prediction = f'This model thinks the sentiment is POSITIVE. \nConfidence score of {pipe_score:.3f}%'
 
     return parsed_prediction
 
+# Parse NLI pipeline results
+def parse_pipe_nli(pipe_out_text: str):
+    output_list = pipe_out_text
+    pipe_label = output_list['label']
+    pipe_score = float(output_list['score'])*100
+
+    parsed_prediction = 'NULL'
+
+    if pipe_label == 'NEGATIVE' or pipe_label == 'LABEL_0':
+        parsed_prediction = f'This model thinks the clauses CONFIRM each other. \nConfidence score of {pipe_score:.3f}'
+    elif pipe_label == 'POSITIVE' or pipe_label == 'LABEL_1':
+        parsed_prediction = f'This model thinks the clauses are Neutral. \nConfidence score of {pipe_score:.3f}'
+    elif pipe_label == 'POSITIVE' or pipe_label == 'LABEL_2':
+        parsed_prediction = f'This model thinks the clauses CONTRADICT each other. \nConfidence score of {pipe_score:.3f}'
+
+    return parsed_prediction
+
+# Parse STS pipeline results
+def parse_pipe_sts(pipe_out_text: str):
+    output_list = pipe_out_text
+    pipe_label = output_list['label']
+    pipe_score = float(output_list['score'])*100
+
+    parsed_prediction = 'NULL'
+
+    if pipe_label == 'NO SIMILARITY' or pipe_label == 'LABEL_0':
+        parsed_prediction = f'This model thinks the clauses have NO similarity. \nConfidence score of {pipe_score:.3f}%'
+    elif pipe_label == 'LITTLE SIMILARITY' or pipe_label == 'LABEL_1':
+        parsed_prediction = f'This model thinks the clauses have LITTLE similarity. \nConfidence score of {pipe_score:.3f}%'
+    elif pipe_label == 'MEDIUM OR HIGHER SIMILARITY' or pipe_label == 'LABEL_2':
+        parsed_prediction = f'This model thinks the clauses have MEDIUM to HIGH similarity. \nConfidence score of {pipe_score:.3f}%'
+
+    return parsed_prediction
+
+#pretty sure this can be removed
 loraModel = AutoPeftModelForSequenceClassification.from_pretrained("Intradiction/text_classification_WithLORA")
 #tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
 tokenizer1 = AutoTokenizer.from_pretrained("albert-base-v2")
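All three parsers follow the same pattern: read `label` and `score` out of the pipeline result, scale the score to a percentage, and map the label to a human-readable verdict. One quirk worth noting: in `parse_pipe_nli` the second and third branches both compare against `'POSITIVE'`, so only the `LABEL_1`/`LABEL_2` checks actually distinguish them. A table-driven rewrite (a sketch only, not part of this commit; the mappings simply mirror the f-strings above) would remove that duplication and the copy-paste risk:

```python
# Sketch: one generic parser shared by the NLI and STS tasks. The
# LABEL_n keys are the raw ids a Hugging Face pipeline emits when the
# checkpoint ships no custom id2label mapping.
NLI_VERDICTS = {
    'LABEL_0': 'the clauses CONFIRM each other',
    'LABEL_1': 'the clauses are NEUTRAL',
    'LABEL_2': 'the clauses CONTRADICT each other',
}
STS_VERDICTS = {
    'LABEL_0': 'the clauses have NO similarity',
    'LABEL_1': 'the clauses have LITTLE similarity',
    'LABEL_2': 'the clauses have MEDIUM to HIGH similarity',
}

def parse_pipe(pipe_out: dict, verdicts: dict) -> str:
    """Turn one {'label': ..., 'score': ...} pipeline result into text."""
    label = pipe_out['label']
    score = float(pipe_out['score']) * 100
    if label not in verdicts:
        return 'NULL'
    return f'This model thinks {verdicts[label]}. \nConfidence score of {score:.3f}%'

# Hand-made pipeline result, just to show the output shape:
print(parse_pipe({'label': 'LABEL_2', 'score': 0.913}, NLI_VERDICTS))
# This model thinks the clauses CONTRADICT each other.
# Confidence score of 91.300%
```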
@@ -64,13 +99,13 @@ AlbertwithLORA_pipe = pipeline("text-classification",model=sa_merged_model1, tok
 
 #NLI models
 def AlbertnoLORA_fn(text1, text2):
-    return AlbertnoLORA_pipe({'text': text1, 'text_pair': text2})
+    return parse_pipe_nli(AlbertnoLORA_pipe({'text': text1, 'text_pair': text2}))
 
 def AlbertwithLORA_fn(text1, text2):
-    return AlbertwithLORA_pipe({'text': text1, 'text_pair': text2})
+    return parse_pipe_nli(AlbertwithLORA_pipe({'text': text1, 'text_pair': text2}))
 
 def AlbertUntrained_fn(text1, text2):
-    return ALbertUntrained_pipe({'text': text1, 'text_pair': text2})
+    return parse_pipe_nli(ALbertUntrained_pipe({'text': text1, 'text_pair': text2}))
 
 
 # Handle calls to Deberta--------------------------------------------
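For context, the hunk header shows where these wrappers' pipelines come from: `pipeline("text-classification", model=sa_merged_model1, tokenizer=...)`. Called with a single `{'text', 'text_pair'}` dict, such a pipeline appears to return one `{'label', 'score'}` dict, which is why `parse_pipe_nli` indexes the result directly instead of taking element `[0]` the way `parse_pipe_sa` does. A minimal sketch of that setup, assuming the LoRA checkpoint is merged via the standard PEFT API (the repo id below is a placeholder; the real one sits outside this diff):

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Placeholder adapter checkpoint: load the base model + LoRA adapter...
peft_model = AutoPeftModelForSequenceClassification.from_pretrained(
    "some-user/albert-nli-lora"  # hypothetical repo id
)
# ...then fold the adapter weights into the base weights (assumption:
# this is how sa_merged_model1 is produced upstream of this hunk).
sa_merged_model1 = peft_model.merge_and_unload()

tokenizer1 = AutoTokenizer.from_pretrained("albert-base-v2")
AlbertwithLORA_pipe = pipeline(
    "text-classification", model=sa_merged_model1, tokenizer=tokenizer1
)
```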
@@ -86,14 +121,14 @@ DebertawithLORA_pipe = pipeline("text-classification",model=sa_merged_model2, to
 
 #STS models
 def DebertanoLORA_fn(text1, text2):
-    return DebertanoLORA_pipe({'text': text1, 'text_pair': text2})
+    return parse_pipe_sts(DebertanoLORA_pipe({'text': text1, 'text_pair': text2}))
 
 def DebertawithLORA_fn(text1, text2):
-    return DebertawithLORA_pipe({'text': text1, 'text_pair': text2})
+    return parse_pipe_sts(DebertawithLORA_pipe({'text': text1, 'text_pair': text2}))
     #return ("working2")
 
 def DebertaUntrained_fn(text1, text2):
-    return DebertaUntrained_pipe({'text': text1, 'text_pair': text2})
+    return parse_pipe_sts(DebertaUntrained_pipe({'text': text1, 'text_pair': text2}))
 
 #helper functions ------------------------------------------------------
 
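With the parsers in place, every wrapper now returns a display-ready string instead of a raw dict. A quick smoke test (a sketch assuming the definitions from this diff are in scope; the stub stands in for the real pipeline so nothing has to be downloaded):

```python
# Stub pipeline: always reports LABEL_1 with 73.1% confidence.
def fake_pipe(inputs):
    return {'label': 'LABEL_1', 'score': 0.731}

DebertawithLORA_pipe = fake_pipe  # swap the stub in for the real pipe
print(DebertawithLORA_fn("A man is awake", "A man is sleeping"))
# This model thinks the clauses have LITTLE similarity.
# Confidence score of 73.100%
```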
@@ -321,10 +356,11 @@ with gr.Blocks(
     with gr.Column(variant="panel"):
         gr.Markdown("""
             <h2>Specifications</h2>
-            <p><b>Model:</b> Tiny Bert <br>
+            <p><b>Model:</b> Bert Base Uncased <br>
+            <b>Number of Parameters:</b> 110 Million <br>
             <b>Dataset:</b> IMDB Movie review dataset <br>
             <b>NLP Task:</b> Text Classification</p>
-            <p>Text classification is an NLP task that focuses on automatically ascribing a predefined category or labels to an input prompt. In this demonstration the Tiny Bert model has been used to classify the text on the basis of sentiment analysis, where the labels (negative and positive) will indicate the emotional state expressed by the input prompt. The tiny bert model was chosen as in its base state its ability to perform sentiment analysis is quite poor, displayed by the untrained model, which often fails to correctly ascribe the label to the sentiment. The models were trained on the IMDB dataset which includes over 100k sentiment pairs pulled from IMDB movie reviews. We can see that when training is performed over [XX] of epochs we see an increase in X% of training time for the LoRA trained model.</p>
+            <p>Text classification is an NLP task that focuses on automatically ascribing a predefined category or label to an input prompt. In this demonstration the Bert Base Uncased model has been used to classify the text on the basis of sentiment analysis, where the labels (negative and positive) indicate the emotional state expressed by the input prompt.<br><br>The models were trained on the IMDB dataset, which includes over 100k sentiment pairs pulled from IMDB movie reviews.<br><br><b>Results:</b><br>The LoRA fine-tuned model performs comparably to the conventionally trained model. The difference arises in training time: the conventional model takes almost 30 mins to train through 2 epochs, while the LoRA model takes half the time to train through 4 epochs.</p>
             """)
 
     with gr.Column(variant="panel"):
@@ -371,9 +407,10 @@ with gr.Blocks(
         gr.Markdown("""
             <h2>Specifications</h2>
             <p><b>Model:</b> Albert <br>
+            <b>Number of Parameters:</b> 11 Million <br>
             <b>Dataset:</b> Stanford Natural Language Inference Dataset <br>
-            <b>NLP Task:</b> Natual Languae Infrencing</p>
-            <p>Natural Language Inference (NLI) which can also be referred to as Textual Entailment is an NLP task with the objective of determining the relationship between two pieces of text. In this demonstration the Albert model has been used to determine textual similarity ascribing a correlation score by the comparison of the two input prompts to determine if. Albert was chosen due to its substandard level of performance in its base state allowing room for improvement during training. The models were trained on the Stanford Natural Language Inference Dataset is a collection of 570k human-written English sentence pairs manually labeled for balanced classification, listed as positive, negative or neutral. We can see that when training is performed over [XX] epochs we see an increase in X% of training time for the LoRA trained model compared to a conventionally tuned model. </p>
+            <b>NLP Task:</b> Natural Language Inferencing</p>
+            <p>Natural Language Inference (NLI), also referred to as Textual Entailment, is an NLP task with the objective of determining the relationship between two pieces of text, i.e. whether the pairs contradict or confirm one another.<br><br>The models were trained on the Stanford Natural Language Inference Dataset, a collection of 570k human-written English sentence pairs manually labeled for balanced classification as positive, negative or neutral.<br><br><b>Results</b><br>While total training time for the conventional model may be lower, per epoch the LoRA model takes 1.5 mins vs the conventional model's 3 mins, a significant improvement.</p>
             """)
     with gr.Column(variant="panel"):
         nli_p1 = gr.Textbox(placeholder="Prompt One",label= "Enter Query")
@@ -381,21 +418,21 @@ with gr.Blocks(
         nli_btn = gr.Button("Run")
         btnNLIStats = gr.Button("Display Training Metrics")
         btnTensorLinkNLICon = gr.Button(value="View Conventional Training Graphs", link="https://huggingface.co/m4faisal/NLI-Conventional-Fine-Tuning/tensorboard")
-        btnTensorLinkNLILora = gr.Button(value="View LoRA Training Graphs", link="https://huggingface.co/m4faisal/NLI-Lora-Fine-Tuning-10K/tensorboard")
+        btnTensorLinkNLILora = gr.Button(value="View LoRA Training Graphs", link="https://huggingface.co/m4faisal/NLI-Lora-Fine-Tuning-10K/tensorboard")
         gr.Examples(
             [
-                "I am with my friends",
+                "A man is awake",
                 "People like apples",
-                "Dogs like bones",
+                "A game with multiple people playing",
             ],
             nli_p1,
             label="Try asking",
         )
         gr.Examples(
             [
-                "I am happy",
+                "A man is sleeping",
                 "Apples are good",
-                "Bones like dogs",
+                "Some people are playing a game",
             ],
             nli_p2,
             label="Try asking",
@@ -430,9 +467,10 @@ with gr.Blocks(
         gr.Markdown("""
             <h2>Specifications</h2>
             <p><b>Model:</b> Roberta Base <br>
+            <b>Number of Parameters:</b> 125 Million <br>
             <b>Dataset:</b> Semantic Text Similarity Benchmark <br>
             <b>NLP Task:</b> Semantic Text Similarity</p>
-            <p>Semantic text similarity measures the closeness in meaning of two pieces of text despite differences in their wording or structure. This task involves two input prompts which can be sentences, phrases or entire documents and assessing them for similarity. In our implementation we compare phrases represented by a score that can range between zero and one. A score of zero implies completely different phrases, while one indicates identical meaning between the text pair. This implementation uses a DeBERTa-v3-xsmall and training was performed on the semantic text similarity benchmark dataset which contains over 86k semantic pairs and their scores. We can see that when training is performed over [XX] epochs we see an increase in X% of training time for the LoRA trained model compared to a conventionally tuned model.</p>
+            <p>Semantic text similarity measures the closeness in meaning of two pieces of text despite differences in their wording or structure. The task takes two input prompts, which can be sentences, phrases or entire documents, and assesses them for similarity.<br><br>This implementation uses the Roberta base model; training was performed on the Semantic Text Similarity Benchmark dataset, which contains over 86k semantic pairs and their scores.<br><br><b>Results</b><br>For a comparable result, the LoRA-trained model trains for 30 epochs in 14.5 mins vs the conventional model's 24 mins, a 60% increase in efficiency.</p>
             """)
     with gr.Column(variant="panel"):
         sts_p1 = gr.Textbox(placeholder="Prompt One",label= "Enter Query")
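The diff shows the widgets but not the event wiring that connects them. For readers unfamiliar with Gradio, the Run buttons are presumably bound in the usual Blocks pattern; a minimal self-contained sketch (names echo the hunks above; the output textbox and the stub function are assumptions, not code from this commit):

```python
import gradio as gr

def DebertanoLORA_fn(text1, text2):
    # Stand-in for the real pipeline wrapper defined in app.py.
    return f"compared {text1!r} with {text2!r}"

with gr.Blocks() as demo:
    sts_p1 = gr.Textbox(placeholder="Prompt One", label="Enter Query")
    sts_p2 = gr.Textbox(placeholder="Prompt Two", label="Enter Query")
    sts_out = gr.Textbox(label="Result")  # assumed output widget
    sts_btn = gr.Button("Run")
    # click() routes the two prompts through the wrapper into the output.
    sts_btn.click(fn=DebertanoLORA_fn, inputs=[sts_p1, sts_p2], outputs=sts_out)

demo.launch()
```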