Columns: title (string, 4 to 172 chars) · link (string, 27 to 86 chars) · article (string, 4 to 40.1k chars)
Gambling In The Probability Space
https://hf.co/blog/TuringsSolutions/gambling-the-probability-space
8. References
Taxonomy Completion with Embedding Quantization and an LLM-based Pipeline: A Case Study in Computational Linguistics
https://hf.co/blog/dcarpintero/taxonomy-completion
Resources
How to Optimize TTFT of 8B LLMs with 1M Tokens to 20s
https://hf.co/blog/iofu728/ttft-1m-20s
How to Optimize KV Cache in Decoding?
Create a Diffusers-compatible Dataset for Stable Diffusion Fine-tuning
https://hf.co/blog/nroggendorff/create-diffusers-dataset
Step 4. 🎉 You Did It! 🎉 (finally)
Bringing Open-Source Models to Spreadsheets 🚀
https://hf.co/blog/fdaudens/hugging-face-on-sheets
What's Next?
Introducing HelpingAI-Flash: Emotionally Intelligent Conversational AI for All Devices
https://hf.co/blog/Abhaykoul/helpingai-flash
Conclusion
Introduction to State Space Models (SSM)
https://hf.co/blog/lbourdois/get-on-the-ssm-train
References
Announcing Finance Commons and the Bad Data Toolbox: Pioneering Open Data and Advanced Document Processing
https://hf.co/blog/Pclanglais/finance-commons-bad-data-toolbox
Will we solve PDF parsing before AGI?
Mixedbread 🤝 deepset: Announcing our New German/English Embedding Model
https://hf.co/blog/shadeMe/deepset-mixedbread-new-german-embedding-model
Use it with the Mixedbread Embedders
Swarm Neural Networks (SNN) for Image Generation
https://hf.co/blog/TuringsSolutions/snndiffusion
References
Querying Datasets with the Datasets Explorer Chrome Extension
https://hf.co/blog/cfahlgren1/querying-datasets-with-sql-in-the-browser
It's Open Source 🤗
Deploy hundreds of open source models on one GPU using LoRAX
https://hf.co/blog/macadeliccc/deploy-hundreds-of-models-on-one-gpu
Citations
Structured Harm Reporting in AI: New Research Paper at AIES and DEFCON event!
https://hf.co/blog/evijit/coordinatedflaws-aies-defcon
Looking Ahead
Unleash ML Power on iOS: Apple Silicon Optimization Secrets
https://hf.co/blog/fguzman82/coreml-async-batch-prediction
References
How OpenGPT 4o works
https://hf.co/blog/KingNish/opengpt-4o-working
Conclusion
Market Research using AI Evolutionary Algorithms and Multimodal Regression
https://hf.co/blog/tonyassi/market-research-ai
About Me
Introducing Ghost 8B Beta: A Game-Changing Language Model
https://hf.co/blog/lamhieu/introducing-ghost-8b-beta-a-game-changing-language
Ghost 8B Beta, a groundbreaking language model, is poised to revolutionize the field of natural language processing. Developed with a focus on exceptional multilingual capabilities, superior knowledge acquisition, and cost-effectiveness, this model promises to unlock a new era of AI-powered applic...
The Rise of Agentic Data Generation
https://hf.co/blog/mlabonne/agentic-datagen
Conclusion
Mixture of Agents Model (MAM): An AI-Driven Full-Stack Development Team
https://hf.co/blog/dnnsdunca/mam-model
References
Is AI carbon footprint worrisome?
https://hf.co/blog/as-cle-bert/is-ai-carbon-footprint-worrisome
References
Optimizing a RAG System for Semantic Search
https://hf.co/blog/Woziii/rag-semantic-search-space-huggingface
3. Integration into Gradio.
In-browser LLM app in pure Python: Gemini Nano + Gradio-Lite
https://hf.co/blog/whitphx/in-browser-llm-gemini-nano-gradio-lite
Further reading and references
Introducing HelpingAI-15B: Emotionally Intelligent Conversational AI
https://hf.co/blog/Abhaykoul/introducing-helpingai-15b
Emotional Quotient (EQ)
How to run Gemini Nano locally in your browser
https://hf.co/blog/Xenova/run-gemini-nano-in-your-browser
References:
MMLU-Pro-NoMath
https://hf.co/blog/sam-paech/mmlu-pro-nomath
And also to the original MMLU which MMLU-Pro heavily draws from:
RegMix: Data Mixture as Regression for Language Model Pre-training
https://hf.co/blog/SivilTaram/regmix
Try RegMix on your dataset
MInference 1.0: 10x Faster Million Context Inference with a Single GPU
https://hf.co/blog/liyucheng/minference10
View more information about MInference
Enhancing Search Capabilities for Non-English Datasets in the Dataset Viewer
https://hf.co/blog/asoria/fts-dataset-viewer
Considerations
Introducing the Polish ASR Leaderboard (PAL) and Benchmark Intended Grouping of Open Speech (BIGOS) Corpora
https://hf.co/blog/michaljunczyk/introducing-polish-asr-leaderboard
References
Metric and Relative Monocular Depth Estimation: An Overview. Fine-Tuning Depth Anything V2 👐 📚
https://hf.co/blog/Isayoften/monocular-depth-estimation-guide
References
The Great LLM Showdown: Amy's Quest for the Perfect LLM
https://hf.co/blog/wolfram/the-great-llm-showdown
A Call for Improvement
BM25 for Python: Achieving high performance while simplifying dependencies with *BM25S*⚡
https://hf.co/blog/xhluca/bm25s
Does BM25S replace other libraries?
Practical arXiv Tips: How to Get More Attention for Your Paper
https://hf.co/blog/JessyTsu1/arxiv-trick
4. Timing Your arXiv Submission to the Cutoff
Swarm Neural Networks: Revolutionizing Function and API Call Execution
https://hf.co/blog/TuringsSolutions/swarmneuralnetworks
References
_Repetita iuvant_: how to improve AI code generation
https://hf.co/blog/as-cle-bert/repetita-iuvant-how-to-improve-ai-code-generation
References
RAG chatbot using llama3
https://hf.co/blog/not-lain/rag-chatbot-using-llama3
Dedication
GPM: Generative Password Manager
https://hf.co/blog/apehex/gpm
Improvements
ColPali: Efficient Document Retrieval with Vision Language Models 👀
https://hf.co/blog/manu/colpali
Acknowledgments
Advanced RAG: Fine-Tune Embeddings from HuggingFace for RAG
https://hf.co/blog/lucifertrj/finetune-embeddings
Co-Author: Shivaya Pandey
Image-based search engine
https://hf.co/blog/not-lain/image-retriever
Acknowledgement
EU Training Data Transparency: A Proposal for a Sufficiently Detailed Summary 📑📚🖼️🇪🇺
https://hf.co/blog/yjernite/eu-data-template
Additional Resources
Transformers
https://hf.co/blog/Esmail-AGumaan/attention-is-all-you-need
Citation:
Systems of Representation Are All You Need
https://hf.co/blog/TuringsSolutions/systemsofrepresentation
EuclAId 750 Google Pro 1.0
A Guide to Designing New Functional Proteins and Improving Protein Function, Stability, and Diversity with Generative AI
https://hf.co/blog/AmelieSchreiber/protein-optimization-and-design
Concluding Remarks
Building a Neural Network Classifier from the Ground Up: A Step-by-Step Guide
https://hf.co/blog/dcarpintero/building-a-neural-network-for-image-classification
References
How I train a LoRA: m3lt style training overview
https://hf.co/blog/alvdansen/training-lora-m3lt
Final Observations
Financial Analysis with Langchain and CrewAI Agents
https://hf.co/blog/herooooooooo/financial-analysis-with-langchain-and-crewai
CrewAI
Train custom AI models with the trainer API and adapt them to 🤗
https://hf.co/blog/not-lain/trainer-api-and-mixin-classes
Outro
Formatting Datasets for Chat Template Compatibility
https://hf.co/blog/nroggendorff/format-mayo
Usage
Part 2: Enhancing the Motoku LLM Retrieval System with OpenAI Embeddings and Prompt-based Retrieval
https://hf.co/blog/theeseus-ai/motoku-retrieval-2
Conclusion
Finetuning CLIP can be done locally with decent results (even if you are GPU poor).
https://hf.co/blog/herooooooooo/clip-finetune
this is the journal of me following clipfinetune. I have already posted this on Medium but am trying to slowly migrate my stuff here.

import os
import datasets
from dataclasses <...
Building a Motoku LLM Retrieval System Using Internet Computer Protocol, Motoko, and Node.js
https://hf.co/blog/theeseus-ai/icp-retrieval-system
Conclusion
Building an AI-Powered Card Counter with TensorFlow
https://hf.co/blog/theeseus-ai/card-counting
Conclusion
Tokenization Is A Dead Weight
https://hf.co/blog/apehex/tokenization-is-a-dead-weight
Resources
Evaluate RAG pipeline using HuggingFace Open Source Models
https://hf.co/blog/lucifertrj/evaluate-rag
Try BeyondLLM
Build Agentic Workflow using OpenAGI and HuggingFace models
https://hf.co/blog/lucifertrj/openagi-blog
Join the Community
MotionLCM: The Fastest and Best Motion Generation Model
https://hf.co/blog/EvanTHU/motionlcm
📜 Citation
💃Introducing the first LLM-based Motion understanding model: MotionLLM
https://hf.co/blog/EvanTHU/motionllm
📜 Citation
🚨 ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming
https://hf.co/blog/sted97/alert
Further Resources
Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges
https://hf.co/blog/singh96aman/judgingthejudges
Can LLMs serve as reliable judges ⚖️? We aim to identify the right metrics for evaluating Judge LLMs and understand their sensitivity to prompt guidelines, engineering, and specificity. With this paper, we want to raise caution ⚠️ against blindly using LLMs as a human proxy. ...
Claude-3.5 Evaluation Results on Open VLM Leaderboard
https://hf.co/blog/KennyUTC/claude3-5
(image) Claude3.5-Sonnet is the latest large mul...
seemore: Implement a Vision Language Model from Scratch
https://hf.co/blog/AviSoori1x/seemore-vision-language-model
Bringing everything together to implement Seemore: the simple Vision Language Model
SeeMoE: Implementing a MoE Vision Language Model from Scratch
https://hf.co/blog/AviSoori1x/seemoe
TL;DR: In this blog I implement a mixture-of-experts vision language model, consisting of an image encoder, a multimodal projection module, and a mixture-of-experts decoder language model, in pure PyTorch. The resulting implementation can be thought of as a scaled-down version of Grok 1.5 Vision and GPT-4 V...
Shape Rotation 101: An Intro to Einsum and Jax Transformers
https://hf.co/blog/dejavucoder/einsum
Attention block
Open-source embeddings and LLMs outperform Gemini and OpenAI for Web Navigation while being faster and cheaper
https://hf.co/blog/dhuynh95/evaluating-open-source-and-closed-models
Conclusion
Recommendation to Revisit the Diffuser Default LoRA Parameters
https://hf.co/blog/alvdansen/revisit-diffusers-default-params
Over the last year I have trained hundreds of LoRA finetunes with SDXL, and in the short time that I've spent back in the consulting space, I have tested with over a dozen startup apps that offer finetuning services on their platforms. I have seen, very consistently, the same general quality results from these tr...
Introducing Synthetic Data Workshop: Your Gateway to Easy Synthetic Dataset Creation
https://hf.co/blog/davanstrien/synthetic-data-workshop
Next steps
Extracting Concepts from LLMs: Anthropic’s recent discoveries 📖
https://hf.co/blog/m-ric/extracting-concepts-from-llms
4+. Moving forward
Enhancing Image Model Dreambooth Training Through Effective Captioning: Key Observations
https://hf.co/blog/alvdansen/enhancing-lora-training-through-effective-captions
In the realm of Dreambooth and LoRA training, especially when fine-tuning models for SDXL, the nuances of how you approach captioning can significantly impact the model's performance. Here are five key observations based on my experiences that can guide you in optimizing your training data for more precise and de...
Unveiling CIVICS: A New Dataset for Examining Cultural Values in Language Models
https://hf.co/blog/giadap/civics
Future Directions
Introducing the Ultimate SEC LLM: Revolutionizing Financial Insights with Llama-3-70B
https://hf.co/blog/Crystalcareai/llama-3-sec
References
Train a Terrible Tic-Tac-Toe AI
https://hf.co/blog/nroggendorff/ttt-ai
This is a dumb project, and it won't work
Thoughts on LoRA Training Pt 2: Where to Train
https://hf.co/blog/alvdansen/thoughts-on-lora-training-pt-2-training-services
This is a pretty quick follow-up, but there were some immediate "where do I start" questions I want to answer. First and foremost, if you have never trained a LoRA before, start somewhere that has presets, as in a notebook or platform that has preset training parameter values. I beg of yo...
Thoughts on LoRA Training #1
https://hf.co/blog/alvdansen/thoughts-on-lora-training-1
I talk to many people about training LoRAs, from a variety of backgrounds. Some are very new to it, while others are well-established with impressive model portfolios. I aim to make this a series of posts, and possibly an article, discussing my thoughts on LoRA training and my suggestions. For my part, I w...
MobileNet-V4 (now in timm)
https://hf.co/blog/rwightman/mobilenetv4
PyTorch Implementation
Against mixing environment setup with code
https://hf.co/blog/ucheog/separate-env-setup-from-code
Use python-dotenv [say what?!]
SwanLab and Transformers: Power Up Your NLP Experiments
https://hf.co/blog/Andyrasika/swanlab-transformers
Conclusion
CryptGPT: Privacy-Preserving Language Models Using Vigenere Cipher (Part 1)
https://hf.co/blog/diwank/cryptgpt-part1
A Challenge for Cryptanalysts and LLM Researchers
The CVPR Survival Guide: Discovering Research That's Interesting to YOU!
https://hf.co/blog/harpreetsahota/cvpr2024-survival-guide
🔊 Now, let's check all this out in the app! Turn your audio on because I'll explain what I'm doing!
Uncensor any LLM with abliteration
https://hf.co/blog/mlabonne/abliteration
References
Low Latency CPU Based Educational Value Classifier With Generic Educational Value
https://hf.co/blog/kenhktsui/edu-value-classifier-cpu
Citation
An Optimal Lossy Variant of Speculative Decoding
https://hf.co/blog/vivien/optimal-lossy-variant-of-speculative-decoding
Conclusion and further work
Reports on the Hub: A First Look at Self-governance in Open Source AI Development
https://hf.co/blog/frimelle/self-governance-open-source-ai
(image) Hugging Face has a unique position as the most widely used open-source platform for AI models. As in many open-source projects, one of the invalu...
Building a Vision Mixture-of-Expert Model from several fine-tuned Phi-3-Vision Models
https://hf.co/blog/mjbuehler/phi-3-vision-cephalo-moe
Citation
Running Large Multimodal Models on an AI PC's NPU
https://hf.co/blog/bconsolvo/llava-gemma-2b-aipc-npu
Conclusions and calls to action
Saving Memory Using Padding-Free Transformer Layers during Finetuning
https://hf.co/blog/mayank-mishra/padding-free-transformer
References
An Analysis of Chinese LLM Censorship and Bias with Qwen 2 Instruct
https://hf.co/blog/leonardlin/chinese-llm-censorship-analysis
Recommendations
Aligning Large Language Models with BRAIn
https://hf.co/blog/gauravpandey1/brain
Experimental results
What CI/CD practitioners know that ML engineers don’t… yet
https://hf.co/blog/Manialgie/what-cicd-practitioners-know-that-ml-engineers-don
TL;DR
BrAIn: next generation neurons?
https://hf.co/blog/as-cle-bert/brain-next-generation-neurons
References
Training an Object Detection Model with AutoTrain
https://hf.co/blog/abhishek/object-detection-autotrain
Conclusion
Orchestrating Small Language Models (SLM) using JavaScript and the Hugging Face Inference API
https://hf.co/blog/rrg92/orchestrating-small-llms-javascript-inference-api
Other Endpoints
Orchestrating Small Language Models (SLM) using JavaScript and the Hugging Face Inference API (in Portuguese)
https://hf.co/blog/rrg92/orquestrando-small-llms-javascript-api-inferencia
Other Endpoints
Announcing Occiglot-Fineweb
https://hf.co/blog/malteos/occiglot-fineweb
Insights and Next steps
🦙⚗️ Using Llama3 and distilabel to build fine-tuning datasets
https://hf.co/blog/dvilasuero/synthetic-data-with-llama3-distilabel
Full pipeline code
Fine-tune and deploy open LLMs as containers using AIKit - Part 1: Running on a local machine
https://hf.co/blog/sozercan/finetune-deploy-aikit-part1
📚 Additional Resources
Virtual Try-On using IP-Adapter Inpainting
https://hf.co/blog/tonyassi/virtual-try-on-ip-adapter
About Me
LLM Data Engineering 3: Data Collection Magic, Methods for Acquiring Top Training Data (in Chinese)
https://hf.co/blog/JessyTsu1/data-collect-zh
Data Version Control
LLM Data Engineering 3——Data Collection Magic: Acquiring Top Training Data
https://hf.co/blog/JessyTsu1/data-collect
Data Version Control
I ran 580 model-dataset experiments to show that, even if you try very hard, it is almost impossible to know that a model is degrading just by looking at data drift results
https://hf.co/blog/santiviquez/data-drift-estimate-model-performance
In my opinion, data drift detection methods are very useful when we want to understand what went wrong with a model, but they are not the right tools to know how my model's performance is doing. Essentially, using data drift as a proxy for performance monitoring is not a great idea. I wanted to prov...