metadata
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
  - en
library_name: sentence-transformers
license: apache-2.0
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:6300
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      Item 3—Legal Proceedings See discussion of Legal Proceedings in Note 10 to
      the consolidated financial statements included in Item 8 of this Report.
    sentences:
      - >-
        What financial measures are presented on a non-GAAP basis in this Annual
        Report on Form 10-K?
      - Which section of the report discusses Legal Proceedings?
      - >-
        What criteria was used to audit the internal control over financial
        reporting of The Procter & Gamble Company as of June 30, 2023?
  - source_sentence: >-
      A portion of the defense and/or settlement costs associated with such
      litigation is covered by indemnification from third parties in limited
      cases.
    sentences:
      - >-
        How did the writers' and actors' strikes affect the Company's
        entertainment segment in 2023?
      - >-
        Can indemnification from third parties also contribute to covering
        litigation costs?
      - >-
        What was the balance of net cash used in financing activities for Costco
        for the 52 weeks ended August 28, 2022?
  - source_sentence: >-
      In the company, to have a diverse and inclusive workforce, there is an
      emphasis on attracting and hiring talented people who represent a mix of
      backgrounds, identities, and experiences.
    sentences:
      - >-
        What does AT&T emphasize to ensure they have a diverse and inclusive
        workforce?
      - >-
        What drove the growth in marketplace revenue for the year ended December
        31, 2023?
      - >-
        What was the effect of prior-period medical claims reserve development
        on the Insurance segment's benefit ratio in 2023?
  - source_sentence: >-
      Internal control over financial reporting is a process designed to provide
      reasonable assurance regarding the reliability of financial reporting and
      the preparation of financial statements for external purposes in
      accordance with generally accepted accounting principles. It includes
      various policies and procedures that ensure accurate and fair record
      maintenance, proper transaction recording, and prevention or detection of
      unauthorized use or acquisition of assets.
    sentences:
      - >-
        How much did net cash used in financing activities decrease in fiscal
        2023 compared to the previous fiscal year?
      - How does Visa ensure the protection of its intellectual property?
      - >-
        What is the purpose of internal control over financial reporting
        according to the document?
  - source_sentence: >-
      Non-GAAP earnings from operations and non-GAAP operating profit margin
      consist of earnings from operations or earnings from operations as a
      percentage of net revenue excluding the items mentioned above and charges
      relating to the amortization of intangible assets, goodwill impairment,
      transformation costs and acquisition, disposition and other related
      charges. Hewlett Packard Enterprise excludes these items because they are
      non-cash expenses, are significantly impacted by the timing and magnitude
      of acquisitions, and are inconsistent in amount and frequency.
    sentences:
      - >-
        What specific charges are excluded from Hewlett Packard Enterprise's
        non-GAAP operating profit margin and why?
      - >-
        How many shares were outstanding at the beginning of 2023 and what was
        their aggregate intrinsic value?
      - >-
        What was the annual amortization expense forecast for
        acquisition-related intangible assets in 2025, according to a specified
        financial projection?
model-index:
  - name: BGE base Financial Matryoshka
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.7157142857142857
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8571428571428571
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8871428571428571
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9314285714285714
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7157142857142857
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2857142857142857
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.1774285714285714
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09314285714285712
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7157142857142857
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8571428571428571
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8871428571428571
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9314285714285714
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8274896625809096
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7939818594104311
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7969204030602811
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.7142857142857143
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8571428571428571
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8871428571428571
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9314285714285714
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7142857142857143
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2857142857142857
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.1774285714285714
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09314285714285712
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7142857142857143
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8571428571428571
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8871428571428571
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9314285714285714
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8267670378473014
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7930204081632654
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7958033409607879
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.7157142857142857
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8514285714285714
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8828571428571429
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.93
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7157142857142857
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2838095238095238
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17657142857142857
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09299999999999999
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7157142857142857
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8514285714285714
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8828571428571429
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.93
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.825504930245723
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7918724489795919
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7945830508495424
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.7142857142857143
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8428571428571429
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8742857142857143
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9214285714285714
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7142857142857143
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.28095238095238095
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17485714285714282
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09214285714285712
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7142857142857143
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8428571428571429
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8742857142857143
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9214285714285714
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8203162516614704
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7878543083900227
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7909435994513387
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.6828571428571428
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.81
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.85
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9042857142857142
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6828571428571428
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.27
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16999999999999998
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09042857142857143
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6828571428571428
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.81
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.85
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9042857142857142
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7926026006937184
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7570844671201811
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7606949750229449
            name: Cosine Map@100

BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
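
The three modules above form the standard BGE pipeline: a BERT encoder, CLS-token pooling, and L2 normalization. As an illustrative sketch (not from the original card), the same embedding can be reproduced with plain transformers:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NickyNicky/bge-base-financial-matryoshka")
model = AutoModel.from_pretrained("NickyNicky/bge-base-financial-matryoshka")

batch = tokenizer(
    ["Which section of the report discusses Legal Proceedings?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (0) Transformer
cls_embedding = token_embeddings[:, 0]                   # (1) CLS-token pooling
embedding = F.normalize(cls_embedding, p=2, dim=1)       # (2) Normalize to unit length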

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'Non-GAAP earnings from operations and non-GAAP operating profit margin consist of earnings from operations or earnings from operations as a percentage of net revenue excluding the items mentioned above and charges relating to the amortization of intangible assets, goodwill impairment, transformation costs and acquisition, disposition and other related charges. Hewlett Packard Enterprise excludes these items because they are non-cash expenses, are significantly impacted by the timing and magnitude of acquisitions, and are inconsistent in amount and frequency.',
    "What specific charges are excluded from Hewlett Packard Enterprise's non-GAAP operating profit margin and why?",
    'How many shares were outstanding at the beginning of 2023 and what was their aggregate intrinsic value?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
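
Because the model was trained with MatryoshkaLoss over 768/512/256/128/64 dimensions, embeddings can also be truncated to a smaller size at load time with only a modest quality drop (see the evaluation tables below). A minimal sketch using the library's truncate_dim option:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka", truncate_dim=256)
embeddings = model.encode(["Which section of the report discusses Legal Proceedings?"])
print(embeddings.shape)
# (1, 256)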

Evaluation

Metrics

Information Retrieval (dataset: dim_768)

Metric Value
cosine_accuracy@1 0.7157
cosine_accuracy@3 0.8571
cosine_accuracy@5 0.8871
cosine_accuracy@10 0.9314
cosine_precision@1 0.7157
cosine_precision@3 0.2857
cosine_precision@5 0.1774
cosine_precision@10 0.0931
cosine_recall@1 0.7157
cosine_recall@3 0.8571
cosine_recall@5 0.8871
cosine_recall@10 0.9314
cosine_ndcg@10 0.8275
cosine_mrr@10 0.794
cosine_map@100 0.7969

Information Retrieval (dataset: dim_512)

Metric Value
cosine_accuracy@1 0.7143
cosine_accuracy@3 0.8571
cosine_accuracy@5 0.8871
cosine_accuracy@10 0.9314
cosine_precision@1 0.7143
cosine_precision@3 0.2857
cosine_precision@5 0.1774
cosine_precision@10 0.0931
cosine_recall@1 0.7143
cosine_recall@3 0.8571
cosine_recall@5 0.8871
cosine_recall@10 0.9314
cosine_ndcg@10 0.8268
cosine_mrr@10 0.793
cosine_map@100 0.7958

Information Retrieval (dataset: dim_256)

Metric Value
cosine_accuracy@1 0.7157
cosine_accuracy@3 0.8514
cosine_accuracy@5 0.8829
cosine_accuracy@10 0.93
cosine_precision@1 0.7157
cosine_precision@3 0.2838
cosine_precision@5 0.1766
cosine_precision@10 0.093
cosine_recall@1 0.7157
cosine_recall@3 0.8514
cosine_recall@5 0.8829
cosine_recall@10 0.93
cosine_ndcg@10 0.8255
cosine_mrr@10 0.7919
cosine_map@100 0.7946

Information Retrieval (dataset: dim_128)

Metric Value
cosine_accuracy@1 0.7143
cosine_accuracy@3 0.8429
cosine_accuracy@5 0.8743
cosine_accuracy@10 0.9214
cosine_precision@1 0.7143
cosine_precision@3 0.281
cosine_precision@5 0.1749
cosine_precision@10 0.0921
cosine_recall@1 0.7143
cosine_recall@3 0.8429
cosine_recall@5 0.8743
cosine_recall@10 0.9214
cosine_ndcg@10 0.8203
cosine_mrr@10 0.7879
cosine_map@100 0.7909

Information Retrieval (dataset: dim_64)

Metric Value
cosine_accuracy@1 0.6829
cosine_accuracy@3 0.81
cosine_accuracy@5 0.85
cosine_accuracy@10 0.9043
cosine_precision@1 0.6829
cosine_precision@3 0.27
cosine_precision@5 0.17
cosine_precision@10 0.0904
cosine_recall@1 0.6829
cosine_recall@3 0.81
cosine_recall@5 0.85
cosine_recall@10 0.9043
cosine_ndcg@10 0.7926
cosine_mrr@10 0.7571
cosine_map@100 0.7607
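
The five tables above correspond to the dim_768, dim_512, dim_256, dim_128, and dim_64 evaluations from the metadata. As a hedged sketch of how such numbers are typically produced with this library, using InformationRetrievalEvaluator (the queries, corpus, and relevant_docs below are illustrative placeholders, not the actual evaluation data):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka")

# Placeholder data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "Which section of the report discusses Legal Proceedings?"}
corpus = {"d1": "Item 3—Legal Proceedings See discussion of Legal Proceedings in Note 10 to the consolidated financial statements included in Item 8 of this Report."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries, corpus=corpus, relevant_docs=relevant_docs, name="dim_768"
)
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP scores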

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string (min: 6 tokens, mean: 46.8 tokens, max: 512 tokens)
    • anchor: string (min: 8 tokens, mean: 20.89 tokens, max: 51 tokens)
  • Samples:
    • positive: "Retail sales mix by product type for company-operated stores shows beverages at 74%, food at 22%, and other items at 4%."
      anchor: "What are the primary products sold in Starbucks company-operated stores?"
    • positive: "The pre-tax adjustment for transformation costs was $136 in 2021 and $111 in 2020. Transformation costs primarily include costs related to store and business closure costs and third party professional consulting fees associated with business transformation and cost saving initiatives."
      anchor: "What was the purpose of pre-tax adjustments for transformation costs by The Kroger Co.?"
    • positive: "HP's Consolidated Financial Statements are prepared in accordance with United States generally accepted accounting principles (GAAP)."
      anchor: "What principles do HP's Consolidated Financial Statements adhere to?"
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
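
A minimal sketch of constructing a loss with these parameters via the sentence-transformers API (the base model is taken from the card; matryoshka_weights defaults to 1 per dimension, matching the JSON above):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
# Inner loss: in-batch negatives ranking over (anchor, positive) pairs
inner_loss = MultipleNegativesRankingLoss(model)
# Wrapper applies the same objective at each truncated embedding size
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])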
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 40
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
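
These values map directly onto the library's trainer arguments; a hedged sketch, assuming the run used SentenceTransformerTrainingArguments (output_dir is a hypothetical path):

from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # hypothetical output path
    eval_strategy="epoch",
    per_device_train_batch_size=40,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # the "no_duplicates" sampler above
)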

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 40
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.9114 9 - 0.7311 0.7527 0.7618 0.6911 0.7612
1.0127 10 1.9734 - - - - -
1.9241 19 - 0.7638 0.7748 0.7800 0.7412 0.7836
2.0253 20 0.8479 - - - - -
2.9367 29 - 0.7775 0.7842 0.7902 0.7473 0.7912
3.0380 30 0.524 - - - - -
3.9494 39 - 0.7831 0.7860 0.7915 0.7556 0.7939
4.0506 40 0.3826 - - - - -
4.9620 49 - 0.7896 0.7915 0.7927 0.7616 0.7983
5.0633 50 0.3165 - - - - -
5.9747 59 - 0.7925 0.7946 0.7943 0.7603 0.7978
6.0759 60 0.2599 - - - - -
6.9873 69 - 0.7918 0.7949 0.7951 0.7608 0.7976
7.0886 70 0.2424 - - - - -
8.0 79 - 0.7925 0.7956 0.7959 0.7612 0.7989
8.1013 80 0.2243 - - - - -
8.9114 88 - 0.7927 0.7956 0.7961 0.7610 0.7983
9.1139 90 0.2222 0.7909 0.7946 0.7958 0.7607 0.7969

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.2.0+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
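
To approximate this environment, one possible pin set is below (a reproduction hint, not an official requirements file; install a CUDA 12.1 build of torch 2.2.0 separately for your platform):

pip install "sentence-transformers==3.0.1" "transformers==4.41.2" "accelerate==0.31.0" "datasets==2.19.1" "tokenizers==0.19.1"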

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}