Columns: title (string, 4–172 chars) · link (string, 27–86 chars) · article (string, 4–40.1k chars)
FiftyOne Computer Vision Datasets Come to the Hugging Face Hub
https://hf.co/blog/jamarks/fiftyone-datasets-come-to-hf-hub
📚Resources
⚗️ 🔥 Building High-Quality Datasets with distilabel and Prometheus 2
https://hf.co/blog/burtenshaw/distilabel-prometheus-2
Resource
Expert-Level Tutorials on Stable Diffusion & SDXL: Master Advanced Techniques and Strategies
https://hf.co/blog/MonsterMMORPG/expert-level-tutorials-on-stable-diffusion-gen-ai
Tutorial Videos
Wikipedia's Treasure Trove: Advancing Machine Learning with Diverse Data
https://hf.co/blog/frimelle/wikipedias-treasure-trove-ml-data
More Wikimedia data on Hugging Face - How?
Introducing Tenzin 1.0:
https://hf.co/blog/Tar9897/my-first-model
Tenzin: A Technical Exploration into Achieving Artificial General Intelligence. Artificial General Intelligence (AGI) represents the zenith of artificial intelligence research: a machine capable of understanding, learning, and applying knowledge across a wide array of tasks at a level comparable to human intelligen...
Mergoo: Efficiently Build Your Own MoE LLM
https://hf.co/blog/alirezamsh/mergoo
Learn More
Fine-tuning LLMs with Singular Value Decomposition
https://hf.co/blog/fractalego/svd-training
References
Introducing UNA-ThePitbull Series
https://hf.co/blog/fblgit/una-thepitbull
Bonus
Indexify: Bringing HuggingFace Models to Real-Time Pipelines for Production Applications
https://hf.co/blog/rishiraj/announcing-indexify
Start Using Indexify
HelpingAI 9B: Cutting Edge Emotionally Intelligent AI
https://hf.co/blog/KingNish/helpingai-9b
Conclusion:
How to directly access 150k+ Hugging Face Datasets with DuckDB and query using GPT-4o
https://hf.co/blog/chilijung/access-150k-hugging-face-datasets-with-duckdb
Start asking questions
FaceChain-FACT: Open-source 10-second portrait generation, reusing massive LoRA styles, a base-model-friendly portrait application.
https://hf.co/blog/haoyufirst/facechain-fact
Expansion & Co-construction
Revolutionizing Human-Computer Interaction: The Emotional Intelligence and Ethical Impact of HelpingAI-9B
https://hf.co/blog/Abhaykoul/helpingai
7.3. Closing Remarks
So WTF is an Audio Embedding Model?
https://hf.co/blog/cappuch/audio-embedding-wtf
What Can Audio Embedding Models Be Used For
Orchestration of Experts: The First-Principle Multi-Model System
https://hf.co/blog/alirezamsh/leeroo-multi-model-system
Citation
How to Fine-Tune Custom Embedding Models Using AutoTrain
https://hf.co/blog/abhishek/finetune-custom-embeddings-autotrain
Summary
GPU Poor Savior: Revolutionizing Low-Bit Open Source LLMs and Cost-Effective Edge Computing
https://hf.co/blog/NicoNico/green-bit-llm
Links:
Not Legal Advice on AI Training Data in Japan
https://hf.co/blog/leonardlin/ai-training-data-in-japan
Terms of Service and Synthetic Data
Sales Forecasting with Image Regression
https://hf.co/blog/tonyassi/image-regression
About Me
AI has a problem with objectifying women
https://hf.co/blog/sasha/objectifying-women-in-ai
May 24, 2024. Last week, OpenAI did a much-publicized demo (https://www.wired.com/story/openai-gpt-4o-model-gives-chatgpt-a-snappy-flirty-upgrade/) of their new chatbot, GPT-4o, now endowed with a speech interface. One of the voices used during their demo, nicknamed Sky, insta...
Training MoE on AWS Trainium
https://hf.co/blog/ynakashima/training-moe-on-aws-trainium
Conclusion
Let's talk about LLM evaluation
https://hf.co/blog/clefourrier/llm-evaluation
Acknowledgements
Synthetic dataset generation techniques: generating custom sentence similarity data
https://hf.co/blog/davanstrien/synthetic-similarity-datasets
Conclusion
Journey With Me Into The Mind of Large Language Models: Interesting Findings in AnthropicAI's Scaling Monosemanticity paper.
https://hf.co/blog/Jaward/journey-with-me-into-the-mind-of-llms
One of the many unknowns with LLMs is the why behind the responses they give: it's unclear why certain responses are chosen over others, which shows how little we know of what's happening inside these models. To get a deeper sense of this, they tried Sparse Dictionary Learning on a larger model (Claude ...
Enjoy the Power of Phi-3 with ONNX Runtime on your device
https://hf.co/blog/Emma-N/enjoy-the-power-of-phi-3-with-onnx-runtime
Enjoy Phi-3 on your device
What is going on with AlphaFold3?
https://hf.co/blog/as-cle-bert/what-is-going-on-with-alphafold3
References
Decoding GPT-4o: In-Depth Exploration of Its Mechanisms and Creating Similar AI
https://hf.co/blog/KingNish/decoding-gpt-4o
Making of Similar AI
Sora
https://hf.co/blog/Kvikontent/sora
Ready to try it out?
Explaining the SDXL latent space
https://hf.co/blog/TimothyAlexisVass/explaining-the-sdxl-latent-space
Back to top
Diffusion Models
https://hf.co/blog/Esmail-AGumaan/diffusion-models
Citation:
Evaling llm-jp-eval (evals are hard)
https://hf.co/blog/leonardlin/llm-jp-eval-eval
With training of shisa-v2 (https://wandb.ai/augmxnt/shisa-v2/) starting in earnest, I've been digging a bit more into llm-jp-eval (https://github.com/llm-jp/llm-jp-eval), which I used as a quick and simple benchmark to help track shisa-v1 (especially the ba...
2024-04-22 - Hub Incident Post Mortem
https://hf.co/blog/mcpotato/hub-incident-post-mortem-20240422
Timeline
Hugging Face + Google Visual Blocks
https://hf.co/blog/radames/hugging-face-google-visual-blocks
Acknowledgements
Multimodal Augmentation for Documents: Recovering “Comprehension” in “Reading and Comprehension” task
https://hf.co/blog/danaaubakirova/doc-augmentation
References
Synthetic dataset generation techniques: Self-Instruct
https://hf.co/blog/davanstrien/self-instruct
Using Self Instruct
Glaze and the Effectiveness of Anti-AI Methods for Diffusion Models
https://hf.co/blog/parsee-mizuhashi/glaze-and-anti-ai-methods
Conclusion
RFDiffusion Potentials
https://hf.co/blog/AmelieSchreiber/rfdiffusion-potentials
Example 3: Combining substrate_contacts, monomer_ROG, and monomer_contacts for motif scaffolding
Exploration of Job Application Automation with Data Scraping
https://hf.co/blog/herooooooooo/automation-job-applications-with-python-and-ollama
Conclusion
Everything About Long Context Fine-tuning
https://hf.co/blog/wenbopan/long-context-fine-tuning
What's Next
Advancing Open-source Large Language Models in the Medical & Healthcare Domain
https://hf.co/blog/aaditya/openbiollm
Detailed Medical Subjectwise accuracy
Energy Star Ratings for AI Models
https://hf.co/blog/sasha/energy-star-ai-proposal
Future Work
Train Custom Models on Hugging Face Spaces with AutoTrain SpaceRunner
https://hf.co/blog/abhishek/autotrain-spacerunner
Did you know you could train your custom models on Hugging Face Spaces? Yes, it's possible and super easy to do with AutoTrain SpaceRunner 💥 All you need is a Hugging Face account (which you probably have already) and a payment method attached to your account (in case you want to use GPUs; CPU training is free...
makeMoE: Implement a Sparse Mixture of Experts Language Model from Scratch
https://hf.co/blog/AviSoori1x/makemoe-from-scratch
Putting it all together
Can we create pedagogically valuable multi-turn synthetic datasets from Cosmopedia?
https://hf.co/blog/davanstrien/cosmochat
TODO
Evalverse: Revolutionizing Large Language Model Evaluation with a Unified, User-Friendly Framework
https://hf.co/blog/Yescia/evalverse-llm-evaluation-opensource
Citation
🧑‍⚖️ "Replacing Judges with Juries" using distilabel
https://hf.co/blog/alvarobartt/replacing-judges-with-juries-distilabel
References
Fish Speech V1 - New Multilingual Open Source TTS Model
https://hf.co/blog/lengyue233/fish-speech-1
Next Steps
Google Search with LLM
https://hf.co/blog/nand-tmp/google-search-with-llm
How to use RAG method to access the entire internet with ML
Token Merging for fast LLM inference: Background and first trials with Mistral
https://hf.co/blog/samchain/token-merging-fast-inference
Links
⚗️ 🧑🏼‍🌾 Let's grow some Domain Specific Datasets together
https://hf.co/blog/burtenshaw/domain-specific-datasets
5. Review and share the dataset
Expanding Model Context and Creating Chat Models with a Single Click
https://hf.co/blog/maywell/llm-feature-transfer
Links
Estimating Memory Consumption of LLMs for Inference and Fine-Tuning for Cohere Command-R+
https://hf.co/blog/Andyrasika/memory-consumption-estimation
Conclusion
Post-OCR-Correction: 1 billion words dataset of automated OCR correction by LLM
https://hf.co/blog/Pclanglais/post-ocr-correction
Current results and use cases
Can We Train Chat Models with Raw Data?
https://hf.co/blog/maywell/layer-aware-1
This simple experiment was designed and conducted based on empirical intuition rather than theoretical grounds.
RealWorldQA, What's New?
https://hf.co/blog/KennyUTC/realworldqa
Takeaway
How to Finetune phi-3 on MacBook Pro
https://hf.co/blog/abhishek/phi3-finetune-macbook
In this blog, I'll show you how you can train/finetune the latest Phi-3 model from Microsoft on your MacBook Pro! You'll need an M1 or M2 Mac to do this. We will be using AutoTrain Advanced! To install AutoTrain Advanced, you can do: pip install autotrain-advanced ...
Fine Tuning a LLM Using Kubernetes with Intel® Xeon® Scalable Processors
https://hf.co/blog/dmsuehir/llama2-fine-tuning-k8s
Citations
LLM Comparison/Test: Llama 3 Instruct 70B + 8B HF/GGUF/EXL2 (20 versions tested and compared!)
https://hf.co/blog/wolfram/llm-comparison-test-llama-3
TL;DR: Observations & Conclusions
Outpainting III - Inpaint Model
https://hf.co/blog/OzzyGT/outpainting-inpaint-model
4.- Final touch-ups
Outpainting II - Differential Diffusion
https://hf.co/blog/OzzyGT/outpainting-differential-diffusion
This is the third guide about outpainting; if you want to read about the other methods, here they are: Outpainting I - Controlnet version (https://huggingface.co/blog/OzzyGT/outpainting-controlnet), Outpainting III - Inpaint Model (https://huggingface.co/blog/OzzyGT/outpainting-inpaint-model)...
Outpainting I - Controlnet version
https://hf.co/blog/OzzyGT/outpainting-controlnet
6.- Outpaint tip with IP Adapter
Exploring Emotionally Intelligent AI with HelpingAI
https://hf.co/blog/Abhaykoul/emotionally-intelligent-ai
8.1. Embracing the Era of Emotionally Intelligent AI
Fine-tune Llama 3 with ORPO
https://hf.co/blog/mlabonne/orpo-llama-3
References
Starting Tiny with Protein LLaMA
https://hf.co/blog/monsoon-nlp/greenbeing-and-protein-models
Limitations and Safety Notes
Mixture of Depth is Vibe
https://hf.co/blog/joey00072/mixture-of-depth-is-vibe
Few Gotchas
Custom architectures with HuggingFace 🤗
https://hf.co/blog/not-lain/custom-architectures-with-huggingface
push to hub 🤗
Run the strongest open-source LLM, Llama 3 70B, with just a single 4GB GPU!
https://hf.co/blog/lyogavin/llama3-airllm
Does Llama3’s success herald the rise of open-source models?
On Coding Your First Attention
https://hf.co/blog/Jaward/coding-your-first-attention
While you don't necessarily have to code the attention block of a transformer from scratch to understand how it works, it sure is the closest you can get to a first-principles understanding of why and how transformers behave the way they do. ...
SVGDreamer: Text Guided Vector Graphics Generation with Diffusion Model
https://hf.co/blog/xingxm/svgdreamer
References
Releasing Youtube-Commons: a massive open corpus for conversational and multimodal data
https://hf.co/blog/Pclanglais/youtube-commons
We announce today the release of Youtube-Commons (https://huggingface.co/datasets/PleIAs/YouTube-Commons) on Hugging Face:
- Youtube-Commons is the largest corpus of video to date under an entirely free license.
- Youtube-Commons comprises 2 million videos in CC-By with documented pro...
Design choices for Vision Language Models in 2024
https://hf.co/blog/gigant/vlm-design
Where are Vision-Language Models headed?
It's raining diffusion personalization techniques☔️🎭🖼️
https://hf.co/blog/linoyts/zero-shot-personalization
Recently, generating high-quality portraits from reference photos was made possible with as little as a single reference image and without any optimization⚡️ ...
History of State Space Models (SSM) in 2022
https://hf.co/blog/lbourdois/ssm-2022
Citation
What Historical AI Breakthroughs Have Been Unappreciated by the Mainstream Media?
https://hf.co/blog/Smooke/ai-breakthroughs-unappreciated-by-mainstream-media
Recently had the chance to interview Joshua Browder (https://twitter.com/jbrowder1), the Founder/CEO of DoNotPay (https://donotpay.com): https://hackernoon.com/multimodal-is-the-most-unappreciated-ai-breakthrough-says-donotpayceo-joshua-browder ...
Analysis on evaluating 7-billion-parameter Italian LLMs
https://hf.co/blog/giux78/analysis-on-ita-llm
Conclusion
DS-MoE: Making MoE Models More Efficient and Less Memory-Intensive
https://hf.co/blog/bpan/ds-moe
Estimated reading time: 4 minutes. Mixture-of-Experts (MoE) language models are known for their ability to reduce computing needs by 2 to 4 times compared to traditional dense models, without sacrificing performance. This makes them especially useful in situations where computing resources are limi...
RAG Empowerment: Cohere C4AI Command-R and Transformers Unveiled
https://hf.co/blog/Andyrasika/command-r-transformer
Conclusion
🐦 The IBIS Challenge
https://hf.co/blog/nikgr/the-ibis-challenge
Join the IBIS Challenge: an open competition in Inferring and predicting transcription factor Binding Specificities (IBIS). Deciphering human gene regulation is a cornerstone of modern molecular biology and biomedicine. On the regulatory...
The LASER technique: Evaluating SVD compression
https://hf.co/blog/fractalego/mistral-laser-svd
Citation
Open Source All About Data Processing, Dataverse
https://hf.co/blog/EujeongChoi/dataverse-opensource-for-data-processing
4. Future Work and Contribution Points
Many-shot jailbreaking
https://hf.co/blog/vladbogo/many-shot-jailbreaking
Conclusion
Aurora-M: The First Open Source Biden-Harris Executive Order Red teamed Multilingual Language Model
https://hf.co/blog/mayank-mishra/aurora
Conclusion
Gecko: Versatile Text Embeddings Distilled from Large Language Models
https://hf.co/blog/vladbogo/gecko
Conclusion
Finetune Mixtral 8x7B with AutoTrain
https://hf.co/blog/abhishek/autotrain-mixtral-dgx-cloud-local
In this blog, I'll show you how you can fine-tune Mixtral 8x7B on your own dataset using AutoTrain (https://github.com/huggingface/autotrain-advanced). The amount of coding used in this blog post will be quite small. We will be writing zero lines of code! Since Mixtral ...
How do Textual Inversion tokens destroy prompts?
https://hf.co/blog/Isamu136/textual-inversion-prompt-destruction
Conclusion and Future Direction
Experiments with Bitnet 1.5 (~ngmi~)
https://hf.co/blog/joey00072/experiments-with-bitnet-1-5
Training code
Create Mixtures of Experts with MergeKit
https://hf.co/blog/mlabonne/frankenmoe
References
Elevate Responses: RAG with LlamaIndex & MongoDB
https://hf.co/blog/Andyrasika/mongodb-llamaindex-rag
Conclusion
Samantha Mistral Instruct 7b - Comprehensive Bulleted Notes
https://hf.co/blog/cognitivetech/samantha-mistral-instruct-7b-bulleted-notes
Thanks
Policy Questions Blog 1: AI Data Transparency Remarks for NAIAC Panel 📚🔍⚖️
https://hf.co/blog/yjernite/naiac-data-transparency
A Minimum Standard for Meaningful Data Disclosure
Protein similarity and Matryoshka embeddings
https://hf.co/blog/monsoon-nlp/proteins-matryoshka-embeddings
🦠🧬🤖🪆 Future Thoughts
A brief analysis of automerger data, feat. SLERP and DARE-TIES LLM merging
https://hf.co/blog/kgourgou/a-first-look-at-automerger-data
To sum up
Data exploration and filtering with Nomic Atlas
https://hf.co/blog/visheratin/nomic-data-cleaning
Conclusion
Giskard Bot: Identifying robustness, performance and ethical vulnerabilities in the Top 10 Most Popular Hugging Face Models
https://hf.co/blog/JMJM/vulnerabilities-top-10-hf-models
Conclusion
Releasing Common Corpus: the largest public domain dataset for training LLMs
https://hf.co/blog/Pclanglais/common-corpus
We announce today the release of Common Corpus (https://huggingface.co/collections/PleIAs/common-corpus-65d46e3ea3980fdcd66a5613) on Hugging Face:
- Common Corpus is the largest public domain dataset released for training LLMs.
- Common Corpus includes 500 billion words from a wide di...
What's Automatic Differentiation?
https://hf.co/blog/andmholm/what-is-automatic-differentiation
Personal
Dive Deeper into Yi-9B
https://hf.co/blog/lorinma/yi-9b-divedeep
📌 Related Resources
Sparse Mixture of Experts Language Model from Scratch: Extending makeMoE with Expert Capacity
https://hf.co/blog/AviSoori1x/makemoe2
Why is Expert Capacity even important?
VideoMamba: State Space Model for Efficient Video Understanding
https://hf.co/blog/vladbogo/video-mamba
Conclusion
Better RAG 3: The text is your friend
https://hf.co/blog/hrishioa/retrieval-augmented-generation-3-structure
Conclusion