
SentenceTransformer based on nreimers/MiniLM-L6-H384-uncased

This is a sentence-transformers model finetuned from nreimers/MiniLM-L6-H384-uncased. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nreimers/MiniLM-L6-H384-uncased
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 22.7M parameters (F32, safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
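The pooling module above performs plain mean pooling over the token embeddings of a BertModel. For reference, equivalent embeddings can be computed with the transformers library directly; this is a minimal sketch, assuming the checkpoint is published as sarwin/rp-embed and using illustrative input sentences:

import torch
from transformers import AutoTokenizer, AutoModel

# Assumed repository id, taken from this card
tokenizer = AutoTokenizer.from_pretrained("sarwin/rp-embed")
model = AutoModel.from_pretrained("sarwin/rp-embed")

sentences = ["An example sentence", "Each sentence maps to a 384-dimensional vector"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # [batch, seq_len, 384]

# Mean pooling over real tokens only (padding positions are masked out)
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # torch.Size([2, 384])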

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sarwin/rp-embed")
# Run inference
sentences = [
    'Computationally efficient fixed complexity LLL algorithm for lattice-reduction-aided multiple-input–multiple-output precoding',
    'In multiple-input–multiple-output broadcast channels, lattice reduction (LR) preprocessing technique can significantly improve the precoding performance. Among the existing LR algorithms, the fixed complexity Lenstra–Lenstra–Lovasz (fcLLL) algorithm applying limited number of LLL loops is suitable for the real-time communication system. However, fcLLL algorithm suffers from higher average complexity. Aiming at this problem, a computationally efficient fcLLL (CE-fcLLL) algorithm for LR-aided (LRA) precoding is developed in this study. First, the authors analyse the impact of fcLLL algorithm on the signal-to-noise ratio performance of LRA precoding by a power factor (PF) which is defined to measure the relation of reduced basis and transmit power of LRA precoding. Then, they propose a CE-fcLLL algorithm by designing a new LLL loop and introducing new early termination conditions to reduce redundant and inefficient LR operation in fcLLL algorithm. Finally, they define a PF loss factor to optimise the PF threshold and the number of LLL loops, which can lead to a performance-complexity tradeoff. Simulation results show that the proposed algorithm for LRA precoding can achieve better bit-error-rate performance than the fcLLL algorithm with remarkable complexity savings in the same upper complexity bound.',
    'ABSTRACTThe success of the open innovation (OI) paradigm is still debated and literature is searching for its determinants. Although firms’ internal social context is crucial to explain the success or failure of OI practices, such context is still poorly investigated. The aim of the paper is to analyse whether internal social capital (SC), intended as employees’ propensity to interact and work in groups in order to solve innovation issues, mediates the relationship between OI practices and innovation ambidexterity (IA). Results, based on a survey research developed in Finland, Italy and Sweden, suggest that collaborations with different typologies of partners (scientific and business) achieve good results in terms of IA, through the partial mediation of the internal SC.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
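Beyond pairwise similarity, the same embeddings can drive semantic search over a corpus. The sketch below ranks corpus entries by cosine similarity to a query; the corpus titles and query string are hypothetical examples, and the repository id is assumed to be sarwin/rp-embed:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sarwin/rp-embed")  # assumed repository id

# Hypothetical corpus of paper titles, purely for illustration
corpus = [
    "Lattice-reduction-aided precoding for MIMO broadcast channels",
    "Open innovation and internal social capital in European firms",
    "Fuzzy decision trees for obstructive sleep apnea screening",
]
query = "low-complexity LLL lattice reduction for MIMO precoding"

corpus_embeddings = model.encode(corpus)   # shape: [3, 384]
query_embedding = model.encode([query])    # shape: [1, 384]

# Cosine similarity between the query and every corpus entry
scores = model.similarity(query_embedding, corpus_embeddings)[0]

# Print corpus entries from most to least similar
for idx in scores.argsort(descending=True):
    print(f"{scores[idx].item():.3f}  {corpus[int(idx)]}")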

Training Details

Training Dataset

Unnamed Dataset

  • Size: 730,454 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 4 tokens, mean 15.97 tokens, max 48 tokens
    • sentence_1 (string): min 18 tokens, mean 193.95 tokens, max 512 tokens
  • Samples:
    • sentence_0: E-government in a corporatist, communitarian society: the case of Singapore
      sentence_1: Singapore was one of the early adopters of e-government initiatives in keeping with its status as one of the few developed Asian countries and has continued to be at the forefront of developing e-government structures. While crediting the city-state for the speed of its development, observers have critiqued that the republic limits pluralism, which directly affects e-governance initiatives. This article draws on two recent government initiatives, the notions of corporatism and communitarianism and the concept of symmetry and asymmetry in communication to present the e-government and e-governance structures in Singapore. Four factors are presented as critical for the creation of a successful e-government infrastructure: an educated citizenry; adequate technical infrastructures; offering e-services that citizens need; and commitment from top government officials to support the necessary changes with financial resources and leadership. However, to have meaningful e-governance there has to be political plural...
    • sentence_0: Multicast routing representation in ad hoc networks using fuzzy Petri nets
      sentence_1: In an ad hoc network, each mobile node plays the role of a router and relays packets to final destinations. The network topology of an ad hoc network changes frequently and unpredictable, so that the routing and multicast become extremely challenging. We describe the multicast routing representation using fuzzy Petri net model with the concept of immediately reachable set in wireless ad hoc networks which all nodes equipped with GPS unit. It allows structured representation of network topology, and has a fuzzy reasoning algorithm for finding multicast tree and improves the efficiency of the ad hoc network routing scheme. Therefore when a packet is to be multicast to a group by a multicast source, a heuristic algorithm is used to compute the multicast tree based on the local network topology with a multicast source. Finally, the simulation shows that the percentage of the improvement is more than 15% when compared the IRS method with the original method.
    • sentence_0: A Prognosis Tool Based on Fuzzy Anthropometric and Questionnaire Data for Obstructive Sleep Apnea Severity
      sentence_1: Obstructive sleep apnea (OSA) are linked to the augmented risk of morbidity and mortality. Although polysomnography is considered a well-established method for diagnosing OSA, it suffers the weakness of time consuming and labor intensive, and requires doctors and attending personnel to conduct an overnight evaluation in sleep laboratories with dedicated systems. This study aims at proposing an efficient diagnosis approach for OSA on the basis of anthropometric and questionnaire data. The proposed approach integrates fuzzy set theory and decision tree to predict OSA patterns. A total of 3343 subjects who were referred for clinical suspicion of OSA (eventually 2869 confirmed with OSA and 474 otherwise) were collected, and then classified by the degree of severity. According to an assessment of experiment results on g-means, our proposed method outperforms other methods such as linear regression, decision tree, back propagation neural network, support vector machine, and learning vector quantization. The proposed method is highly viable and capable of detecting the severity of OSA. It can assist doctors in pre-diagnosis of OSA before running the formal PSG test, thereby enabling the more effective use of medical resources.
  • Loss: MultipleNegativesRankingLoss with these parameters (a reproduction sketch follows this list):
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
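
The dataset and loss above correspond to a standard in-batch-negatives setup over (title, abstract) pairs. The following is a minimal sketch of how a comparable run could be launched with the Sentence Transformers v3 trainer; the toy dataset, output directory, and exact trainer invocation are assumptions, not the original training script:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Base model; sentence-transformers adds a mean-pooling head automatically
model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")

# Toy stand-in for the real 730,454-pair dataset: sentence_0 = title, sentence_1 = abstract
train_dataset = Dataset.from_dict({
    "sentence_0": ["Example paper title", "Another example title"],
    "sentence_1": ["Example abstract describing the contribution.", "Another example abstract."],
})

# cos_sim with scale 20.0 matches the parameters listed above (these are also the defaults)
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",             # assumed path
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()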
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.0110 500 0.4667
0.0219 1000 0.179
0.0329 1500 0.1543
0.0438 2000 0.1284
0.0548 2500 0.1123
0.0657 3000 0.101
0.0767 3500 0.0989
0.0876 4000 0.0941
0.0986 4500 0.0827
0.1095 5000 0.0874
0.1205 5500 0.0825
0.1314 6000 0.0788
0.1424 6500 0.0728
0.1533 7000 0.0768
0.1643 7500 0.0707
0.1752 8000 0.0691
0.1862 8500 0.0666
0.1971 9000 0.0644
0.2081 9500 0.0615
0.2190 10000 0.0651
0.2300 10500 0.0604
0.2409 11000 0.0595
0.2519 11500 0.0622
0.2628 12000 0.0537
0.2738 12500 0.0564
0.2848 13000 0.0622
0.2957 13500 0.052
0.3067 14000 0.0475
0.3176 14500 0.0569
0.3286 15000 0.0511
0.3395 15500 0.0476
0.3505 16000 0.0498
0.3614 16500 0.0527
0.3724 17000 0.0556
0.3833 17500 0.0495
0.3943 18000 0.0482
0.4052 18500 0.0556
0.4162 19000 0.0454
0.4271 19500 0.0452
0.4381 20000 0.0431
0.4490 20500 0.0462
0.4600 21000 0.0473
0.4709 21500 0.0387
0.4819 22000 0.041
0.4928 22500 0.0472
0.5038 23000 0.0435
0.5147 23500 0.0419
0.5257 24000 0.0395
0.5366 24500 0.043
0.5476 25000 0.0419
0.5585 25500 0.0394
0.5695 26000 0.0403
0.5805 26500 0.0436
0.5914 27000 0.0414
0.6024 27500 0.0418
0.6133 28000 0.0411
0.6243 28500 0.035
0.6352 29000 0.0397
0.6462 29500 0.0392
0.6571 30000 0.0373
0.6681 30500 0.0373
0.6790 31000 0.0363
0.6900 31500 0.0418
0.7009 32000 0.0377
0.7119 32500 0.0321
0.7228 33000 0.0331
0.7338 33500 0.0373
0.7447 34000 0.0342
0.7557 34500 0.0335
0.7666 35000 0.0323
0.7776 35500 0.0362
0.7885 36000 0.0376
0.7995 36500 0.0364
0.8104 37000 0.0396
0.8214 37500 0.0321
0.8323 38000 0.0358
0.8433 38500 0.0299
0.8543 39000 0.0304
0.8652 39500 0.0317
0.8762 40000 0.0334
0.8871 40500 0.0331
0.8981 41000 0.0326
0.9090 41500 0.0325
0.9200 42000 0.0321
0.9309 42500 0.0316
0.9419 43000 0.0321
0.9528 43500 0.0353
0.9638 44000 0.0315
0.9747 44500 0.0326
0.9857 45000 0.031
0.9966 45500 0.0315

Framework Versions

  • Python: 3.12.2
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.3
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1
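
To approximate this environment, the library versions above can be pinned at install time; a sketch (choose the PyTorch 2.3.1 build matching your CUDA setup):

pip install sentence-transformers==3.0.1 transformers==4.42.3 accelerate==0.32.1 datasets==2.20.0 tokenizers==0.19.1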

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
