[FEEDBACK] Daily Papers

#32
by kramp HF staff (Hugging Face org) - opened • edited Jul 25

Note that this is not a post about adding new papers, it's about feedback on the Daily Papers community update feature.

How to submit a paper to the Daily Papers, like @akhaliq (AK)?

  • Submitting is available to paper authors
  • Only recent papers (less than 7 days old) can be featured on the Daily

Then drop the arXiv ID in the form at https://huggingface.co/papers/submit

  • Add media to the paper (images, videos) when relevant
  • You can start a discussion to engage with the community

Please check out the documentation
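
If it helps, here is a minimal sketch for checking an arXiv ID before using the submission form above: it validates the ID format and checks whether the paper page already exists on the Hub (the https://huggingface.co/papers/&lt;arxiv_id&gt; URL pattern is the one used throughout this thread). This is an illustrative helper, not an official Hugging Face API.

```python
# Minimal sketch: sanity-check an arXiv ID and see whether its paper page
# already exists before submitting via https://huggingface.co/papers/submit.
# Illustrative only; not part of any official Hugging Face client.
import re
import requests

ARXIV_ID = re.compile(r"^\d{4}\.\d{4,5}(v\d+)?$")  # e.g. 2405.20797

def paper_page_exists(arxiv_id: str) -> bool:
    if not ARXIV_ID.match(arxiv_id):
        raise ValueError(f"{arxiv_id!r} does not look like an arXiv ID")
    # Paper pages follow the huggingface.co/papers/<arxiv_id> pattern.
    resp = requests.get(f"https://huggingface.co/papers/{arxiv_id}", timeout=10)
    return resp.status_code == 200

print(paper_page_exists("2405.20797"))
```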

We are excited to share our recent work on MLLM architecture design titled "Ovis: Structural Embedding Alignment for Multimodal Large Language Model".

Paper: https://arxiv.org/abs/2405.20797
Github: https://github.com/AIDC-AI/Ovis
Model: https://huggingface.co/AIDC-AI/Ovis-Clip-Llama3-8B
Data: https://huggingface.co/datasets/AIDC-AI/Ovis-dataset

Hugging Face org

@Yiwen-ntu for now we support only videos as paper covers in the Daily.


We are excited to share our work titled "Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models": https://arxiv.org/abs/2406.12644

We are excited to share our recent work, "Pooling And Attention: What Are Effective Designs For LLM-Based Embedding Models?"

Paper: https://arxiv.org/abs/2409.02727
Github: https://github.com/yixuantt/PoolingAndAttn
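
Not the paper's code, but for readers unfamiliar with the design space it studies, here is a generic sketch of two common pooling choices for LLM-based embeddings (mean pooling vs. last-token pooling); the model name below is only a placeholder.

```python
# Generic illustration of two common pooling strategies for LLM embeddings.
# NOT the paper's code; "gpt2" is a placeholder decoder-only model.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModel.from_pretrained(model_name)

texts = ["what are effective designs for embedding models?",
         "pooling and attention in LLMs"]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # (batch, seq, dim)

mask = batch["attention_mask"].unsqueeze(-1)       # (batch, seq, 1)

# Mean pooling: average token states, ignoring padding.
mean_emb = (hidden * mask).sum(1) / mask.sum(1)

# Last-token pooling: take the state of the final non-padded token.
last_idx = batch["attention_mask"].sum(1) - 1      # (batch,)
last_emb = hidden[torch.arange(hidden.size(0)), last_idx]

print(mean_emb.shape, last_emb.shape)
```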


We're thrilled to share our latest work, "Skip-and-Play: Depth-Driven Pose-Preserved Image Generation for Any Objects"
Paper: https://arxiv.org/abs/2409.02653

teaser.PNG


We're thrilled to share our latest work, "Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models", the first first-order FL method with shared randomness that significantly enhances the scalability of existing federated full-parameter tuning approaches by achieving high computational efficiency, reduced communication overhead, and fast convergence, all while maintaining competitive model accuracy.

Paper: https://arxiv.org/abs/2409.06277
Github: https://github.com/allen4747/Ferret
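
For readers curious what "shared randomness" buys here, below is a toy sketch of the general idea: a client sends a few projection coefficients plus a seed instead of its full update, and the server regenerates the same random directions from that seed to reconstruct an approximation. This is a generic illustration, not Ferret's actual algorithm (see the paper and repo for that).

```python
# Toy sketch of gradient compression via shared randomness (generic idea,
# not Ferret's algorithm): client and server derive the same random
# directions from a shared seed, so only k coefficients need to be sent.
import numpy as np

def random_basis(seed, k, d):
    rng = np.random.default_rng(seed)
    return rng.standard_normal((k, d)) / np.sqrt(k)

d, k, seed = 10_000, 64, 1234        # model size, #directions, shared seed
grad = np.random.randn(d)            # stand-in for a client's local update

# Client: send only k projection coefficients (plus the seed).
basis = random_basis(seed, k, d)
coeffs = basis @ grad                # shape (k,)

# Server: regenerate the same basis from the seed and reconstruct.
recon = random_basis(seed, k, d).T @ coeffs

cos = float(grad @ recon / (np.linalg.norm(grad) * np.linalg.norm(recon)))
print("compression ratio:", k / d, "cosine similarity:", cos)
```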

Hi, I'd like to share our paper "beeFormer: Bridging the Gap Between Semantic and Interaction Similarity in Recommender Systems".

Paper: https://arxiv.org/pdf/2409.10309
Github: https://github.com/recombee/beeformer

🚀 Excited to share our latest preprint: "CodonTransformer: a multispecies codon optimizer using context-aware neural networks"!

CodonTransformer is a groundbreaking deep learning model that optimizes DNA sequences for heterologous protein expression across 164 species.
By leveraging the Transformer architecture and a novel training strategy named STREAM, it generates host-specific DNA sequences with natural-like codon patterns, minimizing negative regulatory elements.

💥 Website
https://adibvafa.github.io/CodonTransformer/

โญ GitHub (Please give us a :star:!)
https://github.com/Adibvafa/CodonTransformer

🤖 Colab Notebook (Try it out!)
https://adibvafa.github.io/CodonTransformer/GoogleColab

🪼 Model
https://huggingface.co/adibvafa/CodonTransformer

๐Ÿ“ Paper
https://www.biorxiv.org/content/10.1101/2024.09.13.612903

Please share with anyone interested!
banner.png
