| url | title | category | tags | score | source |
|---|---|---|---|---|---|
| https://www.linkedin.com/feed/update/urn:li:activity:7426000879515930624?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7426000879515930624%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Hesham Haroon — Arabic text in LLMs costs 2-5x more tokens than English. This isn't a theoretical number | LLM Fine-tuning | ["Arabic LLMs", "token cost"] | 6 | linkedin |
| https://docs.anthropic.com/en/docs/claude-code/hooks?ref=dailydev | Hooks - Anthropic | LLM Fine-tuning | ["hooks", "AI", "anthropic"] | 8 | edge |
| https://www.kdnuggets.com/2023/04/chatgpt-works-model-behind-bot.html?utm_source=rss&utm_medium=rss&utm_campaign=how-chatgpt-works-the-model-behind-the-bot | How ChatGPT Works: The Model Behind The Bot - KDnuggets | LLM Fine-tuning | ["chatgpt", "model", "overview"] | 8 | raindrop |
| https://blog.bytebytego.com/p/how-llms-learn-from-the-internet?ref=dailydev | How LLMs Learn from the Internet: The Training Process | LLM Fine-tuning | ["llm", "training", "internet"] | 8 | edge |
| https://app.daily.dev/posts/jd3xukclz | How LLMs Learn from the Internet: The Training Process | LLM Fine-tuning | ["LLMs", "Training Process"] | 9 | dailydev |
| https://app.daily.dev/posts/how-large-language-models-learn-e6yymrydq | How Large Language Models Learn | LLM Fine-tuning | ["Learning", "Large Language Models"] | 8 | dailydev |
| https://blog.bytebytego.com/p/how-transformers-architecture-powers?ref=dailydev | How Transformers Architecture Powers Modern LLMs | LLM Fine-tuning | ["transformers", "architecture", "llms"] | 9 | edge |
| https://app.daily.dev/posts/hjutpjr8i | How to Add Persistent Memory to Any LLM (Supermemory Tutorial) \| daily.dev | LLM Fine-tuning | ["persistent memory", "supermemory"] | 8 | edge |
| https://www.freecodecamp.org/news/how-to-compress-your-prompts-and-reduce-llm-costs/?ref=dailydev | How to Compress Your Prompts and Reduce LLM Costs | LLM Fine-tuning | ["compress prompts", "LLM costs"] | 8 | edge |
| https://app.daily.dev/posts/i0lvgfaij | How to Fine-Tune Open Source LLMs with Nebius Token Factory \| Full Tutorial \| daily.dev | LLM Fine-tuning | ["fine-tune", "open source", "LLMs"] | 9 | edge |
| https://dev.to/dmitrybaraishuk/how-to-summarize-huge-documents-with-llms-beyond-token-limits-and-basic-prompts-57ao | How to Summarize Huge Documents with LLMs: Beyond Token Limits and Basic Prompts - DEV Community | LLM Fine-tuning | ["summarization", "LLMs", "documents"] | 8 | edge |
| https://app.daily.dev/posts/how-to-run-an-llm-on-your-laptop-hkq0dph1q | How to run an LLM on your laptop | LLM Fine-tuning | ["run LLM", "laptop"] | 7 | dailydev |
| https://huggingface.co/datasets/HuggingFaceFW/finetranslations | HuggingFaceFW/finetranslations · Datasets at Hugging Face | LLM Fine-tuning | ["HuggingFace", "datasets"] | 8 | edge |
| https://gpthuman.ai/ | Humanize AI. Generate Undetectable AI Content. | LLM Fine-tuning | ["ai-content", "generation"] | 8 | edge |
| https://aisharenet.com/hunyuanocr/ | HunyuanOCR - Tencent Hunyuan's open-source optical character recognition expert model \| AI分享圈 | LLM Fine-tuning | ["hunyuanocr", "OCR", "model"] | 6 | edge_bookmarks |
| https://medium.com/unfiltered-voices/i-tried-the-most-popular-chatgpt-hacks-7e4c18508a20 | I Tried the Most Popular ChatGPT Hacks | LLM Fine-tuning | ["chatgpt", "hacks"] | 7 | raindrop |
| https://towardsdatascience.com/implementing-vibe-proving-with-rl/?ref=dailydev | Implementing Vibe Proving with Reinforcement Learning \| Towards Data Science | LLM Fine-tuning | ["reinforcement learning", "Vibe Proving"] | 7 | edge |
| https://aisharenet.com/infinitystar/ | InfinityStar - ByteDance's open-source unified spatiotemporal autoregressive video generation framework \| AI分享圈 | LLM Fine-tuning | ["video", "generation", "AI"] | 8 | edge_bookmarks |
| https://app.daily.dev/posts/introducing-align-evals-streamlining-llm-application-evaluation-ecyjmtarb | Introducing Align Evals: Streamlining LLM Application Evaluation | LLM Fine-tuning | ["LLM", "Evaluation"] | 8 | dailydev |
| https://www.databricks.com/blog/introducing-mosaic-ai-model-training-fine-tuning-genai-models?ref=dailydev | Introducing Mosaic AI Model Training for Fine-Tuning GenAI Models | LLM Fine-tuning | ["Mosaic AI", "fine-tuning"] | 9 | raindrop |
| https://www.linkedin.com/feed/update/urn:li:activity:7292332914791317504?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7292332914791317504%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Jason Kuperberg — Meet OpenDeepResearcher: A new open-source AI system that automatically creates | LLM Fine-tuning | ["OpenDeepResearcher", "AI System", "Open Source"] | 8 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7415093445452742656?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7415093445452742656%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Jasper Adams — Ranking on Claude AI takes less than 28 days. And almost no one is doing it yet | LLM Fine-tuning | ["Claude AI", "ranking", "performance"] | 6 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7423678610261159936?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7423678610261159936%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Javi Lopez — ⚡ Google Genie 3 but OPEN SOURCE. Not even 48h later and the Chinese did it agai | LLM Fine-tuning | ["open source", "Google Genie"] | 8 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7280645495943782401?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7280645495943782401%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Jeff Boudier — What do Transformers see? 👀 See for yourself with this amazing Vision Transform | LLM Fine-tuning | ["Transformers", "Vision", "AI"] | 7 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7432494103348539392?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7432494103348539392%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Joshua Lochner — Okay, this is actually insane... You can now run LFM2.5-1.2B-Thinking (a 1.2B pa | LLM Fine-tuning | ["LFM", "AI", "Model"] | 9 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7397772637013299200?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7397772637013299200%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Julian G. — Experimenting with various quantized HuggingFace models and found a gem! > Mode | LLM Fine-tuning | ["HuggingFace", "quantized", "models"] | 8 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7399964455633178624?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7399964455633178624%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Julian G. — Here are some prompts that were never released. They are quite complex, but give | LLM Fine-tuning | ["prompts", "complex"] | 7 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7403441715942166529?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7403441715942166529%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Julien Chaumond — We got Claude Code to train an open LLM 🤯 Not just to write the training script | LLM Fine-tuning | ["Claude Code", "training"] | 9 | linkedin |
| https://huggingface.co/blog/jupyter-agent-2?ref=dailydev | Jupyter Agents: training LLMs to reason with notebooks | LLM Fine-tuning | ["Jupyter", "LLMs", "training"] | 9 | edge |
| https://machinelearningmastery.com/kv-caching-in-llms-a-guide-for-developers/?ref=dailydev | KV Caching in LLMs: A Guide for Developers - MachineLearningMastery.com | LLM Fine-tuning | ["kv caching", "LLMs", "developers"] | 8 | edge |
| https://www.linkedin.com/feed/update/urn:li:activity:7423042810771263488?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7423042810771263488%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Kaiden Kane — You are prompting CLAUDE wrong... Anthropic just released this prompting master | LLM Fine-tuning | ["prompting", "CLAUDE"] | 6 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7413602653001289729?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7413602653001289729%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Kumud Deepali R. — 🚨 Choosing the wrong LLM in 2026 will cost you time, money, and momentum. With | LLM Fine-tuning | ["LLM", "cost"] | 7 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7386326260051902464?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7386326260051902464%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Kyutai — Try asking your favorite speech LLM whether you're speaking in a low voice or a | LLM Fine-tuning | ["speech LLM", "voice"] | 5 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7409240309265645569?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7409240309265645569%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Kyutai — 🏠 Introducing CASA: a new way to input visual information into large language mo | LLM Fine-tuning | ["visual information", "large language models"] | 8 | linkedin |
| https://www.promptingguide.ai/research/llm-agents | LLM Agents \| Prompt Engineering Guide | LLM Fine-tuning | ["LLM", "agents"] | 8 | edge_bookmarks |
| https://www.digitalocean.com/community/tutorials/llm-finetuning-domain-specific-models?ref=dailydev | LLM Fine-Tuning: A Guide for Domain-Specific Models \| DigitalOcean | LLM Fine-tuning | ["fine-tuning", "domain-specific", "guide"] | 9 | edge |
| https://app.daily.dev/posts/llm-model-storage-with-nfs-download-once-infer-everywhere-0uv0zekk8 | LLM Model Storage with NFS: Download Once, Infer Everywhere \| daily.dev | LLM Fine-tuning | ["LLM", "model storage", "NFS"] | 7 | edge |
| https://llmskirmish.com/?ref=dailydev | LLM Skirmish | LLM Fine-tuning | ["LLM", "skirmish"] | 7 | edge |
| https://bbycroft.net/llm | LLM Visualization | LLM Fine-tuning | ["llm", "visualization"] | 7 | edge_bookmarks |
| https://app.daily.dev/posts/llm-caching-proxy-server-that-emulates-popular-llms-with-the-ability-to-simulate-failures-bmilegvj7 | LLM caching proxy server that emulates popular LLMs with the ability to simulate failures | LLM Fine-tuning | ["Caching Proxy", "LLMs"] | 7 | dailydev |
| https://softwaremill.com/llms-for-legaltech/ | LLMs for LegalTech \| SoftwareMill | LLM Fine-tuning | ["legaltech", "llms"] | 7 | raindrop |
| https://newsletter.eng-leadership.com/p/llms-common-terms-explained-simply?ref=dailydev | LLMs: Common terms explained, simply | LLM Fine-tuning | ["LLMs", "terminology"] | 7 | edge |
| https://app.daily.dev/posts/kjxoircvo | LLMs: Common terms explained, simply | LLM Fine-tuning | ["Common Terms", "LLMs"] | 7 | dailydev |
| https://app.daily.dev/posts/langbase-ai-sdk-for-building-declarative-and-composable-ai-powered-llm-products--marqxnv3d | Langbase AI SDK for building declarative and composable AI-powered LLM products. | LLM Fine-tuning | ["AI SDK", "LLM products", "composable"] | 7 | dailydev |
| https://www.linkedin.com/feed/update/urn:li:activity:7411693611970502656?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7411693611970502656%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Leon Chlon, PhD — Chain-of-thought is 3-5× longer than it needs to be, burning money. We just prov | LLM Fine-tuning | ["chain-of-thought", "efficiency"] | 8 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7415787725976731648?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7415787725976731648%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Leon Chlon, PhD — LLM hallucinations aren't bugs. They're compression artifacts. We just built a C | LLM Fine-tuning | ["LLM hallucinations", "compression artifacts"] | 6 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7399092010348892160?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7399092010348892160%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Leon Jose — 50 LLM Use Cases You're Missing Out On. Before this tech wave passes you by. Ev | LLM Fine-tuning | ["LLM", "use cases", "technology"] | 8 | linkedin |
| https://www.modelscope.cn/home | Ling-flash-2.0 · Model Library | LLM Fine-tuning | ["model", "library"] | 7 | edge |
| https://www.linkedin.com/feed/update/urn:li:activity:7425226809144713217?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7425226809144713217%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Lior Alexander — We solved "Code Generation." Now we're facing the "Verification Gap." Teams hea | LLM Fine-tuning | ["code generation", "verification", "AI"] | 8 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7415709234442588160?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7415709234442588160%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Lior Alexander — You can now run 70B LLMs on a 4GB GPU. AirLLM just made massive models usable o | LLM Fine-tuning | ["LLM", "GPU", "AirLLM"] | 8 | linkedin |
| https://www.marktechpost.com/2024/06/08/list-of-activities-and-their-corresponding-suitable-llms-in-the-artificial-intelligence-ai-world-right-now-a-comprehensive-guide/?ref=dailydev | List of Activities and Their Corresponding Suitable LLMs in the Artificial Intelligence AI World Right Now: A Comprehensive Guide | LLM Fine-tuning | ["activities", "llms", "guide"] | 8 | raindrop |
| https://dev.to/hadil/litellm-vs-bifrost-comparing-python-and-go-for-production-llm-gateways-4dg5?ref=dailydev | LiteLLM vs Bifrost: Comparing Python and Go for Production LLM Gateways - DEV Community | LLM Fine-tuning | ["Python", "Go", "production"] | 7 | edge |
| https://www.producthunt.com/products/llama-3-405b#llama-7 | Llama - 3.1-405B: an open source model to rival GPT-4o / Claude-3.5 | LLM Fine-tuning | ["Llama", "open source model"] | 9 | raindrop |
| https://github.com/MIATECHPARTNERS/PromptChains | MIATECHPARTNERS/PromptChains: Prompt chains maximize intelligence and results when using LLMs | LLM Fine-tuning | ["prompt-chains", "LLM"] | 8 | edge |
| https://aisharenet.com/moss-speech/ | MOSS-Speech - Fudan University's open-source speech-to-speech large model \| AI分享圈 | LLM Fine-tuning | ["moss-speech", "speech", "model"] | 7 | edge_bookmarks |
| https://www.linkedin.com/feed/update/urn:li:activity:7328264946520051712?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7328264946520051712%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Manthan Patel — LLMs are AI models, but not all AI models are LLMs. Building upon traditional a | LLM Fine-tuning | ["AI models", "traditional models", "LLMs"] | 7 | linkedin |
| https://www.anthropic.com/research/mapping-mind-language-model | Mapping the Mind of a Large Language Model - Anthropic | LLM Fine-tuning | ["llm", "mapping", "language"] | 8 | edge |
| https://www.linkedin.com/feed/update/urn:li:activity:7041020961881497600?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7041020961881497600%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Martin Backes 🚀 — ChatGPT #UX Research Prompts. These 116 #ChatGPT prompts by Caitlin D. Sullivan | LLM Fine-tuning | ["ChatGPT", "prompts"] | 7 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7330254935114067969?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7330254935114067969%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Mary Newhauser — Fine-tuning massive LLMs used to be painfully slow. Or downright impossible (me | LLM Fine-tuning | ["fine-tuning", "massive LLMs", "performance"] | 9 | linkedin |
| https://app.daily.dev/posts/master-llm-prompting-tips-for-better-results-sn9fmnd7a | Master LLM Prompting: Tips for Better Results | LLM Fine-tuning | ["prompting", "tips"] | 8 | dailydev |
| https://www.riis.com/blog/mastering-gpt-5-api-a-complete-guide-to-the-new-features | Mastering GPT-5 API: A Complete Guide to the New Features - RIIS | LLM Fine-tuning | ["GPT-5", "API", "features"] | 9 | edge |
| https://www.linkedin.com/feed/update/urn:li:activity:7435635830175731712?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7435635830175731712%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Maxime Labonne — 🪄 Introduction to Post-Training. I'm releasing 53 slides on post-training, cover | LLM Fine-tuning | ["post-training", "presentation", "slides"] | 7 | linkedin |
| https://www.marktechpost.com/2023/04/14/meet-alibabas-chatgpt-competitor-tongyi-qianwen-a-large-language-model-that-will-be-embedded-in-its-tmall-genie-smart-speakers-and-workplace-messaging-platform-dingtalk/ | Meet Alibaba's ChatGPT Competitor Tongyi Qianwen: a Large Language Model that will be Embedded in its Tmall Genie Smart Speakers and Workplace Messaging Platform DingTalk | LLM Fine-tuning | ["tongyi qianwen", "chatgpt competitor"] | 7 | raindrop |
| https://www.marktechpost.com/2023/06/16/meet-fingpt-an-open-source-financial-large-language-model-llms/ | Meet FinGPT: An End-To-End Open-Source Framework For Economic Large Language Models (FinLLMs) | LLM Fine-tuning | ["fingpt", "financial"] | 8 | raindrop |
| https://www.marktechpost.com/2023/04/18/meet-minigpt-4-an-open-source-ai-model-that-performs-complex-vision-language-tasks-like-gpt-4/ | Meet MiniGPT-4: An Open-Source AI Model That Performs Complex Vision-Language Tasks Like GPT-4 | LLM Fine-tuning | ["MiniGPT-4", "vision-language", "open source"] | 9 | raindrop |
| https://www.marktechpost.com/2023/04/21/meet-redpajama-an-ai-project-to-create-fully-open-source-large-language-models-beginning-with-the-release-of-a-1-2-trillion-token-dataset/ | Meet RedPajama: An AI Project to Create Fully Open-Source Large Language Models Beginning with the Release of a 1.2 Trillion Token Dataset | LLM Fine-tuning | ["redpajama", "open-source", "language models"] | 8 | raindrop |
| https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory?ref=dailydev | MemAlign: Building Better LLM Judges From Human Feedback With Scalable Memory \| Databricks Blog | LLM Fine-tuning | ["LLM", "human-feedback", "memory"] | 9 | edge |
| https://www.youtube.com/watch?v=quOe8V2n9rU | Mercury 2: The First Reasoning Diffusion Language Model (1,000+ tokens/sec) | LLM Fine-tuning | ["Mercury", "Language Model"] | 8 | youtube_watch_later |
| https://www.linkedin.com/feed/update/urn:li:activity:7404915281342730240?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7404915281342730240%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Merve Noyan — vibe train is here 🚂😄 you can ask Claude to fine-tune open vision language model | LLM Fine-tuning | ["Claude", "fine-tuning"] | 8 | linkedin |
| https://www.marktechpost.com/2023/06/13/microsoft-ai-introduces-orca-a-13-billion-parameter-model-that-learns-to-imitate-the-reasoning-process-of-lfms-large-foundation-models/ | Microsoft AI Introduces Orca: A 13-Billion Parameter Model that Learns to Imitate the Reasoning Process of LFMs (Large Foundation Models) | LLM Fine-tuning | ["orca", "model"] | 8 | raindrop |
| https://www.marktechpost.com/2023/04/13/microsoft-ai-open-sources-deepspeed-chat-an-end-to-end-rlhf-pipeline-to-train-chatgpt-like-models/ | Microsoft AI Open-Sources DeepSpeed Chat: An End-To-End RLHF Pipeline To Train ChatGPT-like Models | LLM Fine-tuning | ["deepspeed", "chatgpt", "open-source"] | 9 | raindrop |
| https://unsloth.ai/docs/models/minimax-m25 | MiniMax-M2.5: How to Run Guide \| Unsloth Documentation | LLM Fine-tuning | ["MiniMax-M2.5", "guide"] | 7 | edge |
| https://deepwiki.com/huggingface/transformers/5-model-architectures | Model Architectures \| huggingface/transformers \| DeepWiki | LLM Fine-tuning | ["model-architectures", "transformers", "huggingface"] | 9 | edge |
| https://www.producthunt.com/products/keywords-ai#model-playground-by-keywords-ai | Model playground by Keywords AI - The easiest way to do output iteration with Llama3.1 405B | LLM Fine-tuning | ["model playground", "Llama"] | 8 | raindrop |
| https://app.daily.dev/posts/upjwjrpay | MoonshotAI/Kimi-K2: Kimi K2 is the large language model series developed by Moonshot AI team | LLM Fine-tuning | ["Kimi-K2", "language model"] | 8 | dailydev |
| https://blog.bytebytego.com/p/multimodal-llms-basics-how-llms-process?ref=dailydev | Multimodal LLMs Basics: How LLMs Process Text, Images, Audio & Videos | LLM Fine-tuning | ["multimodal", "LLMs", "processing"] | 7 | edge |
| https://addyo.substack.com/p/my-llm-coding-workflow-going-into?ref=dailydev | My LLM coding workflow going into 2026 - by Addy Osmani | LLM Fine-tuning | ["llm", "coding workflow"] | 8 | edge |
| https://app.daily.dev/posts/hbtn9pln7 | My LLM coding workflow going into 2026 \| daily.dev | LLM Fine-tuning | ["LLM", "coding workflow"] | 8 | edge |
| https://www.linkedin.com/feed/update/urn:li:activity:7381324012712513536?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7381324012712513536%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Nearform — Chunking text for LLMs and embeddings may not sound glamorous, but it's critical | LLM Fine-tuning | ["Chunking", "Text Processing", "Embeddings"] | 8 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7399492648148824064?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7399492648148824064%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Niels Rogge — This is an amazing read which explains the inner workings of today's LLMs! If | LLM Fine-tuning | ["LLMs", "inner workings", "reading"] | 9 | linkedin |
| https://www.linkedin.com/feed/update/urn:li:activity:7406758809102163969?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7406758809102163969%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Niels Rogge — Wonder why OpenAI heavily recommends the Responses API for their GPT-5 models, a | LLM Fine-tuning | ["Responses API", "GPT-5"] | 8 | linkedin |
| https://github.com/NirDiamant/prompt_engineering | NirDiamant/Prompt_Engineering: This repository offers a comprehensive collection of tutorials and implementations for Prompt Engineering techniques, ranging from fundamental concepts to advanced strategies. It serves as an essential resource for mastering the art of effectively communicating with and leveraging large language models in AI applications. | LLM Fine-tuning | ["prompt engineering", "tutorials"] | 9 | edge |
| https://app.daily.dev/posts/official-implementation-of-graph-of-thoughts-solving-elaborate-problems-with-large-language-models-bi7mxuxtl | Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models" | LLM Fine-tuning | ["Graph of Thoughts", "LLM"] | 9 | dailydev |
| https://simonwillison.net/2025/Nov/22/olmo-3/?ref=dailydev | Olmo 3 is a fully open LLM | LLM Fine-tuning | ["open LLM", "Olmo 3"] | 7 | edge |
| https://app.daily.dev/posts/open-source-observability-for-your-llm-application-based-on-opentelemetry-bxs7t4oqq | Open-source observability for your LLM application, based on OpenTelemetry | LLM Fine-tuning | ["observability", "OpenTelemetry", "LLM application"] | 6 | dailydev |
| https://aider.chat/docs/llms/openai.html | OpenAI \| aider | LLM Fine-tuning | ["openai", "aider"] | 7 | edge_bookmarks |
| https://docs.copilotkit.ai/reference/classes/llm-adapters/OpenAIAdapter | OpenAIAdapter | LLM Fine-tuning | ["openai", "llm-adapter"] | 8 | edge_bookmarks |
| https://www.comet.com/docs/opik/ | Opik Documentation - Open-Source LLM Observability & Optimization | LLM Fine-tuning | ["observability", "optimization", "documentation"] | 8 | edge |
| https://platform.openai.com/docs/guides/optimizing-llm-accuracy | Optimizing LLM Accuracy - OpenAI API | LLM Fine-tuning | ["llm", "accuracy", "openai"] | 8 | edge |
| https://app.daily.dev/posts/optimizing-generative-ai-models-with-quantization-8hdqpqlye | Optimizing generative AI models with quantization | LLM Fine-tuning | ["generative AI", "quantization"] | 8 | dailydev |
| https://aisharenet.com/ouro/ | Ouro - A novel recurrent language model open-sourced by ByteDance's Seed team \| AI分享圈 | LLM Fine-tuning | ["language model", "AI"] | 7 | edge_bookmarks |
| https://developers.googleblog.com/en/own-your-ai-fine-tune-gemma-3-270m-for-on-device/ | Own your AI: Learn how to fine-tune Gemma 3 270M and run it on-device - Google Developers Blog | LLM Fine-tuning | ["fine-tuning", "Gemma", "on-device"] | 9 | edge |
| https://www.linkedin.com/feed/update/urn:li:activity:7428273030231056384?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7428273030231056384%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Paolo Perrone — In 2019, GPT-2 took an entire OpenAI team and months to train (undisclosed cost | LLM Fine-tuning | ["GPT-2", "Training", "Cost"] | 7 | linkedin |
| https://github.com/mlc-ai/mlc-llm/issues/2273 | Phi-3 mini 4k instruct with MICROSOFT's quantization · Issue #2273 · mlc-ai/mlc-llm | LLM Fine-tuning | ["quantization", "mlc-llm", "issue"] | 8 | edge |
| https://app.daily.dev/posts/prompting-vs-rag-vs-finetuning-0iuudm8gy | Prompting vs. RAG vs. Finetuning | LLM Fine-tuning | ["prompting", "RAG", "finetuning"] | 9 | dailydev |
| https://felloai.com/2025/09/qwen-3-max-ai-all-you-need-to-know-about-alibabas-1-trillion-parameter-llm/ | Qwen 3 Max AI: All You Need to Know About Alibaba's 1-Trillion Parameter LLM \| Fello AI | LLM Fine-tuning | ["qwen", "alibaba", "llm"] | 7 | edge |
| https://huggingface.co/Qwen/Qwen3-235B-A22B | Qwen/Qwen3-235B-A22B · Hugging Face | LLM Fine-tuning | ["Qwen", "Hugging Face", "model"] | 9 | edge |
| https://huggingface.co/Qwen/Qwen3-32B | Qwen/Qwen3-32B · Hugging Face | LLM Fine-tuning | ["huggingface", "qwen"] | 8 | edge |
| https://huggingface.co/Qwen/Qwen3-Coder-Next | Qwen/Qwen3-Coder-Next · Hugging Face | LLM Fine-tuning | ["qwen", "huggingface", "coder"] | 8 | edge_bookmarks |
| https://app.daily.dev/posts/qwenlm-qwen3-omni-qwen3-omni-is-a-natively-end-to-end-omni-modal-llm-developed-by-the-qwen-team-at-hb28tt5xj | QwenLM/Qwen3-Omni: Qwen3-omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, images, and video, as well as generating speec | LLM Fine-tuning | [] | 9 | dailydev |