| title | link | article |
|---|---|---|
"Diffusers Image Fill" guide | https://hf.co/blog/OzzyGT/diffusers-image-fill |
<p>
This guide was an idea I had for a while but was asked by <a href="https://github.com/pietrobolcato" rel="nofollow">pietrobolcato</a> <a href="https://github.com/huggingface/diffusers/discussions/7482#discussioncomment-10529470" rel="nofollow">here</a> so finally made the decision to do it before it gets too old ... |
All LLMs Write Great Code, But Some Make (A Lot) Fewer Mistakes | https://hf.co/blog/onekq/all-llms-write-great-code | A huge thanks to 🤗HuggingFace🤗
Training Flux Locally on Mac | https://hf.co/blog/AlekseyCalvin/mac-flux-training | <p>For all those struggling to set this up right now.</p> <p><em><strong>(rearticulated by A.C.T. soon® from a post/repo by Hughescr and the ai-toolkit Flux training script by Ostris)</strong></em></p> <p>This workflow is not grounded in Diffusers. However, I have not yet encountered a working Diffu... |
The Impact of Real-Time Summarization on Decision-Making | https://hf.co/blog/megoyaw3/impact-of-real-time-summarization | Final Words! |
Improving performance with Arena Learning in post training | https://hf.co/blog/satpalsr/arena-learning-post-train-data-performance-improve | References |
Fine Tuning a LLM Using Kubernetes with Intel® Gaudi® Accelerator | https://hf.co/blog/omarkhleif/gaudi-k8s-llm-finetuning | Citations |
Introducing AISAK-O | https://hf.co/blog/mandelakori/aisak-o | Beta Testing Opportunity |
Full Training Tutorial and Guide and Research For a FLUX Style | https://hf.co/blog/MonsterMMORPG/full-training-tutorial-and-research-for-flux-style | More Example Images - Last One Is Trained Dataset |
Fine-tuning a token classification model for legal data using Argilla and AutoTrain | https://hf.co/blog/bikashpatra/legal-data-token-classification-fine-tuning | 9. Acknowledgements |
Llama-3.1 8B Carrot - Capx AI | https://hf.co/blog/adarshxs/capx-vision | Conclusion |
Getty Images Brings High-Quality, Commercially Safe Dataset to Hugging Face | https://hf.co/blog/andreagagliano/gettyimages-brings-dataset-to-huggingface | <p><em>Andrea Gagliano, Head of AI/ML at Getty Images</em></p> <p>Hey Hugging Face community! We are Getty Images, and we’re excited to partner with Hugging Face to share something we think you’ll love – AI/ML scientists are now able to access a new sample dataset of our own wholly owned creative images and associate...
LLM Inference at scale with TGI | https://hf.co/blog/martinigoyanes/llm-inference-at-scale-with-tgi | Relevant metrics per use case |
Meet Yi-Coder: A Small but Mighty LLM for Code | https://hf.co/blog/lorinma/yi-coder | Citation |
Converting Models to Core ML | https://hf.co/blog/fguzman82/frompytorch-to-coreml | References and Resources |
The Environmental Impacts of AI -- Primer | https://hf.co/blog/sasha/ai-environment-primer | 📕 References 📕 |
10 Star Webflow (no-code) Players Providing Premium Services | https://hf.co/blog/megoyaw3/best-webflow-players-in-the-market | 10. Creativecorner |
Selective fine-tuning of Language Models with Spectrum | https://hf.co/blog/anakin87/spectrum | Main References |
Key Insights into the Law of Vision Representations in MLLMs | https://hf.co/blog/Borise/law-vision-representation-in-mllms | In the end |
Extending *Transformer layers as Painters* to DiT's | https://hf.co/blog/NagaSaiAbhinay/transformer-layers-as-painters-dit | References & Citations |
To what extent are we responsible for our content and how to create safer Spaces? | https://hf.co/blog/davidberenstein1957/responsibility-for-ai-content-and-safer-spaces | <p>This is a brief blog that outlines some thoughts surrounding the question: To what extent are we responsible for our content and how to create safer Spaces? Certainly relevant for the Telegram CEO Pavel Durov but no less important for people like you and me.</p> <p>😅 My own "oops"-moment. I created a space with...
Understanding Vector Quantization in VQ-VAE | https://hf.co/blog/ariG23498/understand-vq | Bringing it together |
DEMO: French Spoken Language Understanding with the new speech resources from NAVER LABS Europe | https://hf.co/blog/mzboito/naver-demo-french-slu | Acknowledgments:
How to integrate Apify with Hugging Face | https://hf.co/blog/airabbitX/how-to-integrate-apify-with-huggging-face | Conclusion
How to Use SSAST Model Weights in the HuggingFace Ecosystem? | https://hf.co/blog/Syoy/use-ssast-model-weights-with-huggingface | References |
Searching for better (Full) ImageNet ViT Baselines | https://hf.co/blog/rwightman/vit-sbb-imagenet-full | <p><code>timm</code> 1.0.9 was just released. Included are a few new ImageNet-12k and ImageNet-12k -> ImageNet-1k weights in my <a href="https://huggingface.co/collections/timm/searching-for-better-vit-baselines-663eb74f64f847d2f35a9c19">Searching for Better ViT Baselines</a> series.</p> <div class="max-w-full ov...
Introducing AuraFace: Open-Source Face Recognition and Identity Preservation Models | https://hf.co/blog/isidentical/auraface | Try It Out |
Efficient Deep Learning: A Comprehensive Overview of Optimization Techniques 👐 📚 | https://hf.co/blog/Isayoften/optimization-rush | References |
MicroJAX | https://hf.co/blog/joey00072/microjax | Pytree |
2D Parallelism using Ray PyTorch | https://hf.co/blog/huseinzol05/2d-parallelism-ray-pytorch | 2D Parallelism |
Social Bias NER with BERT | https://hf.co/blog/maximuspowers/bias-entity-recognition | Resources: |
Easy, Fast, and Effective Topic Modeling For Beginners with FASTopic | https://hf.co/blog/bobxwu/fastopic | Tutorial: Use FASTopic to analyze the News of the New York Times. |
Building DoRA Support for Embedding Layers in PEFT | https://hf.co/blog/ariG23498/peft-dora | Conclusion: The Joy of Contributing to Open Source |
How No-Code Platforms Are Making Tech More Accessible to Everyone | https://hf.co/blog/megoyaw3/no-code-platforms-makes-tech-more-accessible | Conclusion |
Processing Parquets 102 | https://hf.co/blog/hlky/processing-parquets-102 | Conclusion |
How to build an incremental Web Crawler with Apify | https://hf.co/blog/airabbitX/a-step-by-step-guide-to-integrating-apify-and-hugg | Advanced Setup with Follow-Up Task |
How to communicate in a Pull Request? | https://hf.co/blog/ariG23498/comm-pr | <p><a href="https://cdn-uploads.huggingface.co/production/uploads/608aabf24955d2bfc3cd99c6/oJDsjjFA53jL5AEGUd0Ai.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/608aabf24955d2bfc3cd99c6/oJDsjjFA53jL5AEGUd0Ai.png"/></a></p> <p>Hi there! I'm Aritra, and let me tell yo...
dstack: Your LLM Launchpad - From Fine-Tuning to Serving, Simplified | https://hf.co/blog/chansung/alignment-handbook-with-dstack | <strong>Bonus</strong> |
Is Prompt Caching the new RAG? | https://hf.co/blog/airabbitX/is-prompt-caching-the-new-rag | <p>Recently, Anthropic, the company behind Claude, announced a remarkable new feature called <a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching?ref=airabbit.blog" rel="nofollow">Prompt Caching</a>. This breakthrough development makes the processing of lengthy documents more affordable t...
Using Writer Framework with Hugging Face Spaces | https://hf.co/blog/samjulien/writer-framework-spaces | Conclusion |
What are Embeddings and Vector Databases? | https://hf.co/blog/qdrddr/what-are-embeddings-and-vector-databases | Advantages & Disadvantages of Embeddings: |
Extractive Question Answering with AutoTrain | https://hf.co/blog/abhishek/extractive-qa-autotrain | Training the model on Hugging Face Hub |
How to get GPT to talk like a consultant | https://hf.co/blog/airabbitX/how-to-get-gpt-to-talk-like-a-consultant | Conclusion |
Web Scraping 102 | https://hf.co/blog/hlky/web-scraping-102 | Stage 2: Retrieval |
Self-Hosting LLaMA 3.1 70B (or any ~70B LLM) Affordably | https://hf.co/blog/abhinand/self-hosting-llama3-1-70b-affordably | Conclusion |
Tensor Parallelism | https://hf.co/blog/huseinzol05/tensor-parallelism | Production API |
Web Scraping 101 | https://hf.co/blog/hlky/web-scraping-101 | Stage 1b: More Recon! |
Llama-3.1-Storm-8B: Improved SLM with Self-Curation + Model Merging | https://hf.co/blog/akjindal53244/llama31-storm8b | Appendix |
∞🧙🏼♂️AnyClassifier - Generating Synthetic Data For Text Classification | https://hf.co/blog/kenhktsui/anyclassifier | Citation |
Data Formats 101 | https://hf.co/blog/hlky/data-formats-101 | <strong>Parquet</strong> |
Processing Parquets 101 | https://hf.co/blog/hlky/processing-parquets-101 | Conclusion |
Outperforming Claude 3.5 Sonnet with Phi-3-mini-4k for graph entity relationship extraction tasks | https://hf.co/blog/rcaulk/phi-3-mini-4k-instruct-graph | Models |
I Trained a 2D Game Animation Generation Model to Create Complex, Cool Game Actions (Fully Open-Source) | https://hf.co/blog/lyogavin/godmoeanimation | 07 Business Opportunities |
Create Dynamic Typed Videos with 'Type Byte🐧' | https://hf.co/blog/prithivMLmods/type-byte | <strong>Try It Out!</strong> |
Perspectives for first principles prompt engineering | https://hf.co/blog/KnutJaegersberg/first-principles-prompt-engineering | References |
Powering the Future: Be.Ta Labs’ Revolutionary 100% Solar-Powered AI Operation | https://hf.co/blog/Severian/powering-the-future-beta-labs-revolutionary-100-so | <strong>Join the Green AI Revolution</strong> |
**What** is Retrieval-based Voice Conversion WebUI? | https://hf.co/blog/Blane187/what-is-rvc | Conclusion |
BERT for Bias Detection in Text | https://hf.co/blog/maximuspowers/bias-detection-in-text | What's Next: |
RAG vs Fine-Tuning for LLMs: A Comprehensive Guide with Examples | https://hf.co/blog/airabbitX/rag-vs-fine-tuning-for-llms-a-com | Choosing the Right Approach |
Deploying Hugging Face models with Viam: Use models on any robot in the real world | https://hf.co/blog/ariellemadeit/deploy-models-with-viam | Next steps |
How to Set Up and Run Ollama on a GPU-Powered VM (vast.ai) | https://hf.co/blog/airabbitX/how-to-set-up-and-run-ollama | <p>In this tutorial, we'll walk you through the process of setting up and using Ollama for private model inference on a VM with GPU, either on your local machine or a rented VM from <a href="https://cloud.vast.ai/?ref_id=145250&ref=airabbit.blog" rel="nofollow">Vast.ai</a> or <a href="https://runpod.io/?ref=7su8g...
Deploying a Private Hugging Face Model for Inference with RunPod and AnythingLLM (serverless) | https://hf.co/blog/airabbitX/deploy-hf-private-model | Conclusion |
The Workflow of PEFT | https://hf.co/blog/ariG23498/workflow-peft | Conclusion |
Parquet in Action: A Beginners Guide | https://hf.co/blog/cfahlgren1/intro-to-parquet-format | Reading Entire Footer |
20 New SDXL Fine Tuning Tests and Their Results (Better Workflow Obtained and Published) | https://hf.co/blog/MonsterMMORPG/20-new-sdxl-training-experiments-new-workflow | Old Best Config VS New Best Config |
Context Parallelism | https://hf.co/blog/huseinzol05/context-parallelism | Improvement |
⭐ PySpark and 🤗 Hugging Face Parquet Files | https://hf.co/blog/asoria/pyspark-hugging-face-datasets | 6. Conclusion |
Advanced AI-Driven Code Analysis: A Multi-Agent Framework for Comprehensive Software Optimization | https://hf.co/blog/Alyosha11/forker | Conclusion |
Bulleted Notes eBook Summary: A Different Way to Chat with PDF | https://hf.co/blog/cognitivetech/bulleted-notes-ebook-summary | I hope you'll find this tool as invaluable as I do. |
Your AI, Everywhere | https://hf.co/blog/wolfram/your-ai-everywhere | Conclusion |
Unlocking Creativity with Text-to-Image Generation: Exploring LoRA Models and Styles | https://hf.co/blog/prithivMLmods/lora-adp-01 | Conclusion |
Batch size 30 AdamW vs Batch Size 1 Adafactor SDXL Training Comparison | https://hf.co/blog/MonsterMMORPG/adamw-vs-adafactor-sdxl-fine-tuning-comparison | <p style="margin-left:0px;">I was hanging out in the OneTrainer Discord yesterday and saw a comment from a very experienced long-time user. He was saying AdamW is better than Adafactor, so I asked for his config, which you can see here: <a href="https://gist.github.com/FurkanGozukara/5e9ee7d2b2070abb9a173dab342e1221" rel="nofol...
The Myth of Running Out of Data: Why Infinite Math Makes AI Training Limitless | https://hf.co/blog/TuringsSolutions/runningoutofdatamyth | <p>The rapid advancement of artificial intelligence (AI) has ignited a fascinating debate: Are we running out of data to fuel its growth? Some experts express concern that the vast amounts of text and images used for AI training are finite, potentially hindering future progress. However, this notion overlooks a fund...
ArabicWeb24: Creating a High Quality Arabic Web-only Pre-training Dataset | https://hf.co/blog/MayFarhat/arabicweb24 | 5. Citation |
Agentic Task Delegation - Making Agents whole again | https://hf.co/blog/adarshxs/agentic-task-delegation | Conclusion |
HelpingAI2-6B : Revolutionizing Conversational AI with Emotional Intelligence | https://hf.co/blog/Abhaykoul/helpingai-6b | Buy Me a Coffee: |
Creating and Uploading a Dataset with Unsloth: An Adventure in Wonderland | https://hf.co/blog/dimentox/unsloth-mistral-training | Complete Code Notebook |
The case for specialized pre-training: ultra-fast foundation models for dedicated tasks | https://hf.co/blog/Pclanglais/specialized-pre-training | The case for language model specialization |
Local AI with Docker's Testcontainers | https://hf.co/blog/Tonic/localai-testcontainers | Ask Questions Below ! 👇🏻 |
How to use Instruct Embeddings Correctly | https://hf.co/blog/Tonic/instruct-embeddings-and-advanced-rag | What You DO WANT To Be Doing in RAG |
9 Notable Quotes From Mark Zuckerberg's Essay in Favor of Open Source AI | https://hf.co/blog/Smooke/mark-zuckerberg-open-source-ai-quotes-hackernoon | <p><a href="https://cdn-uploads.huggingface.co/production/uploads/64862a25cf5ad5e1f0482ef2/PUqJO2YA-8pUNFwwZ0E63.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/64862a25cf5ad5e1f0482ef2/PUqJO2YA-8pUNFwwZ0E63.png"/></a></p> <p><b>ICYMI</b> You can now read <a href="h...
Crazy Challenge: Run Llama 405B on a 8GB VRAM GPU | https://hf.co/blog/lyogavin/run-llama-405b-on-4gb-vram | Open Source Project AirLLM |
🔥 Argilla 2.0: the data-centric tool for AI makers 🤗 | https://hf.co/blog/dvilasuero/argilla-2-0 | Argilla changes this with |
Clarity AI Upscaler Reproduction | https://hf.co/blog/1aurent/clarity-ai-upscaler-reproduction | Takeaways |
Build static HTML spaces | https://hf.co/blog/severo/build-static-html-spaces | Conclusion |
Train a Llama model from scratch | https://hf.co/blog/nroggendorff/train-with-llama-architecture | 8. Pushing the Trained Model to Hugging Face Hub |
Simulating Monte Carlo Algorithms With Gaussian Probability | https://hf.co/blog/TuringsSolutions/simulatingmontecarlo | References |
Fine-tune Llama 3.1 Ultra-Efficiently with Unsloth | https://hf.co/blog/mlabonne/sft-llama3 | Conclusion |
Encoding Video Locations with SatCLIP: A New Frontier in Geographic Machine Learning | https://hf.co/blog/Alyosha11/satclip-video | Conclusion |
Utilizing Gaussian Probability Space to Simulate Monte Carlo Algorithms with Particle Swarm Optimization | https://hf.co/blog/TuringsSolutions/gaussianprobabilitytosimulatrmontecarlo | References |
ZebraLogic: Benchmarking the Logical Reasoning Ability of Language Models | https://hf.co/blog/yuchenlin/zebra-logic | Citations |
MobileNet Baselines | https://hf.co/blog/rwightman/mobilenet-baselines | <p>Those who follow me know that I can't resist an opportunity to update an old baseline.</p> <p>When the <a href="https://arxiv.org/abs/2404.10518" rel="nofollow">MobileNet-V4</a> paper came out I noted that they re-ran their MobileNet-V1 baseline to get a 74% ImageNet accuracy. The original models were around 71%...
Abliterating Refusal and Code LLMs | https://hf.co/blog/monsoon-nlp/refusal-in-code-llms | <p>In April, "<a href="https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction" rel="nofollow">Refusal in LLMs is mediated by a single direction</a>" was posted to the AI Alignment Forum, followed by <a href="https://arxiv.org/abs/2406.11717" rel="nofollow">a paper on...
Finetuning PaliGemma with AutoTrain | https://hf.co/blog/abhishek/paligemma-finetuning-autotrain | Training using UI |
Announcing BigCodeBench-Hard, and More | https://hf.co/blog/terryyz/bigcodebench-hard | Citation |
AI and its Role in Revolutionizing Dating and Relationships | https://hf.co/blog/Alyosha11/capx-capybara | The Future of AI-Powered Relationships |
Are We Ready for Multi-Image Reasoning? Launching VHs: The Visual Haystacks Benchmark! | https://hf.co/blog/davidchan/visual-haystacks | Ready to get started? |
MMLU-PRO-ITA, a new eval for Italian LLMs | https://hf.co/blog/giux78/mmlu-pro-ita | <p>In a previous <a href="https://medium.com/@giuxale/an-analyses-on-italian-llms-models-evaluations-51bffe1d44d1" rel="nofollow">post</a>, we, as the <a href="https://mii-lab.it/" rel="nofollow"><strong>mii-llm</strong></a> lab, described an analysis of evaluating Italian LLMs on several commonly used benchmarks and ...
Fine-tuning Mistral on Your Dataset | https://hf.co/blog/nroggendorff/finetune-mistral | Step 8: The cursed child |
Fine Tuning TinyLlama for Text Generation with TRL | https://hf.co/blog/nroggendorff/finetune-tinyllama | 8. Pushing the Trained Model to Hugging Face Hub |
Ghost 8B Beta Released: Game-Changing Language Model | https://hf.co/blog/lamhieu/ghost-8b-beta-released-game-changing-language-mode | Links |
End of preview.
Created by the following code:
```python
!pip install -Uq datasets

import requests
from bs4 import BeautifulSoup, Comment
import pandas as pd
from datasets import Dataset


def get_content(url):
    response = requests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, "html.parser")
        return soup


url = "https://huggingface.co/blog/community"
soup = get_content(url)

articles = soup.find_all("article")
titles = [article.h4.text for article in articles]
links = [
    f'https://hf.co{article.find("a", class_="block px-3 py-2 cursor-pointer").get("href")}'
    for article in articles
]


def get_article(soup):
    # Find all comments in the document
    comments = soup.find_all(string=lambda text: isinstance(text, Comment))

    # Identify the start and end marker comments
    start_comment = None
    end_comment = None
    for comment in comments:
        comment_text = comment.strip()
        if comment_text == "HTML_TAG_START":
            start_comment = comment
        elif comment_text == "HTML_TAG_END":
            end_comment = comment

    # Check if both markers were found
    if start_comment and end_comment:
        # Collect all elements between the start and end comments
        contents = []
        current = start_comment.next_sibling
        while current and current != end_comment:
            contents.append(current)
            current = current.next_sibling
        # Convert the collected elements to a single HTML string
        return "".join(str(item) for item in contents)
    else:
        return "Start or end comment not found."


article_soups = [get_content(link) for link in links]
articles = [get_article(article_soup) for article_soup in article_soups]

df = pd.DataFrame({
    "title": titles,
    "link": links,
    "article": articles,
})

# Create a Hugging Face Dataset object and push it to the Hub
dataset = Dataset.from_pandas(df)
dataset.push_to_hub("ariG23498/community-blogs")
```
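Since the `article` column stores raw HTML (as the preview above shows), downstream users will often want plain text. A minimal sketch of one way to strip the markup, reusing BeautifulSoup, which the build script already imports; the `html_to_text` helper is a hypothetical name, not part of the script:

```python
from bs4 import BeautifulSoup

def html_to_text(html: str) -> str:
    # Collapse a scraped article snippet to its visible text,
    # dropping tags and normalizing whitespace between elements.
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

print(html_to_text("<p>This guide was an idea I had for a while.</p>"))
# → This guide was an idea I had for a while.
```

The same function can be mapped over the dataset (e.g. with `Dataset.map`) if a text-only variant is needed.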