url | title | category | tags | score | source |
|---|---|---|---|---|---|
https://www.linkedin.com/feed/update/urn:li:activity:7423311127788548096?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7423311127788548096%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Rados Mijatovic — Claude Code best practices (from real usage, not hype) ⚡️ Been playing seriousl | LLM Fine-tuning | ["Claude Code", "best practices"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7432487561757106176?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7432487561757106176%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Ravi N — Training an LLM across 1,000 GPUs is not a compute problem. It is a consistency | LLM Fine-tuning | ["LLM", "Training", "GPUs"] | 9 | linkedin |
https://blog.dailydoseofds.com/p/recursive-language-models?ref=dailydev | Recursive Language Models - by Avi Chawla | LLM Fine-tuning | ["recursive", "language models"] | 7 | edge |
https://app.daily.dev/posts/redplanethq-core-your-personal-plug-and-play-memory-layer-for-llms-fnnmjppvw | RedPlanetHQ/core: Your personal plug and play memory layer for LLMs | LLM Fine-tuning | ["Memory Layer", "LLMs"] | 9 | dailydev |
https://app.daily.dev/posts/regular-ml-inference-vs-llm-inference-vwrarnvqn | Regular ML Inference vs. LLM Inference | daily.dev | LLM Fine-tuning | ["ml inference", "llm inference"] | 7 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7381391124592095232?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7381391124592095232%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Saeed Al Hasan .AI Master — OpenAI Launches Codex 🚀 💻 OpenAI just released Codex – the AI that writes code | LLM Fine-tuning | ["Codex", "AI Writing"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7036654870422040576?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7036654870422040576%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Sam Szuchan — Not only can ChatGPT help you write copy faster—it'll help you write it better t | LLM Fine-tuning | ["ChatGPT", "Writing", "Productivity"] | 7 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7412998633643696128?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7412998633643696128%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Sarthak Rastogi — LMCache: reuse KV cache to speed up AI inference Long-context and multi-turn LLM | LLM Fine-tuning | ["KV cache", "AI inference"] | 8 | linkedin |
https://github.com/ScrapeGraphAI/toonify | ScrapeGraphAI/toonify: Toonify: Compact data format reducing LLM token usage by 30-60% | LLM Fine-tuning | ["toonify", "llm", "data format"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7417238989239328768?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7417238989239328768%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Sebastian Messingfeld — Vision-LLMs make prototyping surprisingly smooth. Not long ago, building someth | LLM Fine-tuning | ["prototyping", "vision-LLMs", "development"] | 8 | linkedin |
https://github.com/SebastianKuehnrich/hf-finetuning-german-sentiment | SebastianKuehnrich/hf-finetuning-german-sentiment | LLM Fine-tuning | ["finetuning", "German sentiment"] | 7 | edge |
https://app.daily.dev/posts/selfhostllm-calculate-the-gpu-memory-you-need-for-llm-inference-hdjcrlgxe | SelfHostLLM: Calculate the GPU memory you need for LLM inference | LLM Fine-tuning | ["GPU", "memory", "inference"] | 7 | dailydev |
https://huggingface.co/ServiceNow-AI/Apriel-1.5-15b-Thinker | ServiceNow-AI/Apriel-1.5-15b-Thinker · Hugging Face | LLM Fine-tuning | ["huggingface", "model"] | 9 | edge |
https://aisharenet.com/soulx-podcast/ | SoulX-Podcast - an open-source conversational speech synthesis model from Soul AI Lab | AI分享圈 | LLM Fine-tuning | ["dialogue", "voice synthesis", "AI"] | 7 | edge_bookmarks |
https://rentry.org/sdmodels#dreambooth | Stable Diffusion Models | LLM Fine-tuning | ["stable diffusion", "dreambooth"] | 7 | raindrop |
https://www.linkedin.com/feed/update/urn:li:activity:7411422473893597184?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7411422473893597184%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Stanislav Beliaev — President of OpenAI shared a system prompt that gives GPT-5.2 Codex persistent m | LLM Fine-tuning | ["GPT-5", "system prompt"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7315687453187551232?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7315687453187551232%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Steve Nouri — 🚨I can Generate a One-Minute Cartoons. No Editing. No Stitching. Just AI. (Promp | LLM Fine-tuning | ["cartoons", "AI", "generation"] | 6 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7430939363758288896?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7430939363758288896%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Sumanth P — Fine-tune LLM agents without fine-tuning LLMs! Memento is a memory based contin | LLM Fine-tuning | ["Fine-tuning", "LLM Agents", "Memory"] | 8 | linkedin |
https://nader.substack.com/p/supercharge-your-gpt-model-custom | Supercharge Your GPT Model: Custom Data Fine-Tuning using Node.js | LLM Fine-tuning | ["fine-tuning", "GPT"] | 9 | raindrop |
https://claude.com/import-memory | Switch to Claude without starting over | Claude | LLM Fine-tuning | ["Claude", "memory import"] | 7 | edge |
https://towardsdatascience.com/ten-lessons-of-building-llm-applications-for-engineers/?ref=dailydev | Ten Lessons of Building LLM Applications for Engineers | Towards Data Science | LLM Fine-tuning | ["llm", "applications", "engineering"] | 9 | edge |
https://llmstxt.org/ | The /llms.txt file – llms-txt | LLM Fine-tuning | ["llms", "text"] | 7 | edge |
https://blog.bytebytego.com/p/the-architecture-behind-open-source?ref=dailydev | The Architecture Behind Open-Source LLMs | LLM Fine-tuning | ["open-source", "architecture"] | 8 | edge |
https://deepwiki.com/karpathy/nanochat/3.2-hyperparameter-scaling-and-auto-configuration | The Complexity Dial: Auto-Configuration System | karpathy/nanochat | DeepWiki | LLM Fine-tuning | ["auto-configuration", "hyperparameter", "scaling"] | 8 | edge |
https://medium.com/data-science-collective/comprehensive-guide-to-fine-tuning-llm-4a8fd4d0e0af | The Comprehensive Guide to Fine-tuning LLM | by Sunil Rao | Data Science Collective | Medium | LLM Fine-tuning | ["fine-tuning", "guide", "medium"] | 9 | edge |
https://app.daily.dev/posts/the-developer-s-guide-to-smarter-fine-tuning-unlock-custom-ai-for-every-business-challenge-etrnasyvy | The Developer’s Guide to Smarter Fine-tuning: Unlock custom AI for every business challenge | LLM Fine-tuning | ["fine-tuning", "AI"] | 8 | dailydev |
https://www.kdnuggets.com/easiest-way-of-running-llama-3-locally?ref=dailydev | The Easiest Way of Running Llama 3 Locally - KDnuggets | LLM Fine-tuning | ["llama-3", "local"] | 8 | raindrop |
https://app.latitude.so/projects/19453/versions/1bd33f04-8d2c-4ee5-aca0-6890ec73369a/documents/45b88591-718d-47d0-a584-9c7bdbaddede | The Open-Source LLM Development Platform - Latitude | LLM Fine-tuning | ["LLM", "development", "platform"] | 9 | edge |
https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook#introduction | The Smol Training Playbook - a Hugging Face Space by HuggingFaceTB | LLM Fine-tuning | ["training", "Hugging Face"] | 8 | edge |
https://www.superprompt.com/library | The Superprompt Library - Expert ChatGPT & Claude Prompts | Superprompt | LLM Fine-tuning | ["superprompt", "ChatGPT"] | 9 | edge |
https://superprompt.com/library | The Superprompt Library - Expert ChatGPT & Claude Prompts | Superprompt | LLM Fine-tuning | ["Superprompt", "ChatGPT", "prompts"] | 9 | edge |
https://thetokencompany.com/ | The Token Company - LLM Input Compression API | bear-1 & bear-1.1 Models | LLM Fine-tuning | ["llm", "api", "compression"] | 8 | edge |
https://www.kdnuggets.com/the-ultimate-guide-to-approach-llms?ref=dailydev | The Ultimate Guide to Approach LLMs - KDnuggets | LLM Fine-tuning | ["llms", "guide"] | 9 | raindrop |
https://arxiv.org/html/2408.13296v1 | The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies, Research, Best Practices, Applied Research Challenges and Opportunities (Version 1.0) | LLM Fine-tuning | ["fine-tuning", "guide", "research"] | 9 | edge |
https://app.daily.dev/posts/the-ultra-fast-llm-quantization-export-library-awrfs0i81 | The Ultra-Fast LLM Quantization & Export Library | daily.dev | LLM Fine-tuning | ["quantization", "export", "library"] | 8 | edge |
https://app.daily.dev/posts/the-best-approach-to-compare-llm-outputs-qev5ctp3d | The best approach to compare LLM outputs | LLM Fine-tuning | ["LLM Outputs", "Comparison"] | 7 | dailydev |
https://app.daily.dev/posts/the-complete-guide-to-llm-observability-for-2026-vabqmevzx | The complete guide to LLM observability for 2026 | daily.dev | LLM Fine-tuning | ["LLM", "observability", "guide"] | 9 | edge |
https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/ | The first AI model based on Yann LeCun’s vision for more human-like AI | LLM Fine-tuning | ["AI", "model", "Yann LeCun"] | 8 | edge |
https://app.daily.dev/posts/the-web-crawling-api-for-llms-tgx8tyeka | The web crawling API for LLMs | LLM Fine-tuning | ["web crawling", "API", "LLMs"] | 7 | dailydev |
https://bigdataboutique.com/blog/thinking-fast-and-failing-slow-why-llm-as-a-judge-fails-aa1d67?ref=dailydev | Thinking Fast and Failing Slow: Why LLM as a Judge Fails - BigData Boutique | LLM Fine-tuning | ["LLM", "judge", "failure"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7432185203957125120?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7432185203957125120%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Thrishma Reddy — For every 1 token writing code, 4 tokens were spent on overhead. I parsed the r | LLM Fine-tuning | ["token usage", "code writing", "overhead"] | 5 | linkedin |
https://blog.vllm.ai/2025/12/14/halugate.html?ref=dailydev | Token-Level Truth: Real-Time Hallucination Detection for Production LLMs | vLLM Blog | LLM Fine-tuning | ["hallucination", "detection", "llms"] | 9 | edge |
https://machinelearningmastery.com/top-7-small-language-models-you-can-run-on-a-laptop/?ref=dailydev | Top 7 Small Language Models You Can Run on a Laptop - MachineLearningMastery.com | LLM Fine-tuning | ["small models", "laptop"] | 6 | edge |
https://poloclub.github.io/transformer-explainer/ | Transformer Explainer: LLM Transformer Model Visually Explained | LLM Fine-tuning | ["transformer", "visualization"] | 8 | edge |
https://huggingface.co/blog/faster-transformers | Tricks from OpenAI gpt-oss YOU 🫵 can use with transformers | LLM Fine-tuning | ["transformers", "openai"] | 7 | edge |
https://blog.gopenai.com/understand-gpt-tokens-and-models-comparison-16acc771a01c?gi=13828a1fa272&ref=dailydev | Understand GPT Tokens and Models Comparison | LLM Fine-tuning | ["GPT", "tokens comparison"] | 8 | raindrop |
https://sebastianraschka.com/blog/2025/qwen3-from-scratch.html?ref=dailydev | Understanding and Implementing Qwen3 From Scratch | LLM Fine-tuning | ["Qwen3", "implementation"] | 8 | edge |
https://sebastianraschka.com/blog/2025/llm-evaluation-4-approaches.html?ref=dailydev | Understanding the 4 Main Approaches to LLM Evaluation (From Scratch) | LLM Fine-tuning | ["llm", "evaluation", "approaches"] | 8 | edge |
https://medium.com/javascript-scene/unit-testing-chatgpt-prompts-introducing-riteway-for-sudolang-52761c34abc4 | Unit Testing ChatGPT Prompts: Introducing Riteway for SudoLang | LLM Fine-tuning | ["chatgpt", "prompt testing", "development"] | 8 | raindrop |
https://www.linkedin.com/feed/update/urn:li:activity:7428796779866898432?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7428796779866898432%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Unsloth AI — You can now run MiniMax-2.5 locally! 🚀 At 230B parameters, it's the strongest LL | LLM Fine-tuning | ["MiniMax", "Local", "Parameters"] | 9 | linkedin |
https://blog.dailydoseofds.com/p/upgrading-the-huggingface-fine-tuning?ref=dailydev | Upgrading the HuggingFace Fine-Tuning Skill - by Avi Chawla | LLM Fine-tuning | ["HuggingFace", "fine-tuning", "blog"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7230288944033148929?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7230288944033148929%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Vaibhav Srivastav — 500M parameters is all you need to automate away your chores - Using Qwen 0.5B o | LLM Fine-tuning | ["Automation", "Qwen", "Parameters"] | 7 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7374087892920578048?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7374087892920578048%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Vaibhav Srivastav — BOOM! IBM just released an updated SmolDocling - tiny 258M param SoTA VLM - Apac | LLM Fine-tuning | ["IBM", "VLM", "AI Models"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7371953168144011264?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7371953168144011264%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Vaibhav Srivastav — BOOM! Starting today you can use open source frontier LLMs in Visual Studio Code | LLM Fine-tuning | ["Open Source", "LLMs", "Visual Studio Code"] | 8 | linkedin |
https://blog.dailydoseofds.com/p/verbalized-sampling-in-llms?ref=dailydev | Verbalized Sampling in LLMs - by Avi Chawla | LLM Fine-tuning | ["sampling", "LLMs"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7412108743758393344?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7412108743758393344%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Vikram Ranabhatt — Why LLMs Require a Dedicated Evaluation Framework Unlike traditional software, | LLM Fine-tuning | ["LLM evaluation", "framework"] | 7 | linkedin |
https://huggingface.co/blog/hf-skills-training | We Got Claude to Fine-Tune an Open Source LLM | LLM Fine-tuning | ["fine-tuning", "open source", "LLM"] | 9 | edge |
https://jigsawstack.com/blog/what-even-is-a-small-language-model-now--ai?ref=dailydev | What Even Is a Small Language Model Now? - JigsawStack | LLM Fine-tuning | ["small-language-model", "AI", "blog"] | 7 | edge |
https://www.youtube.com/watch?v=PqbB07n_uQ4 | What It's Like To be a Computer: An Interview with GPT-3 | LLM Fine-tuning | ["GPT-3", "AI", "interview"] | 9 | youtube_liked_videos |
https://app.daily.dev/posts/dcaeluz68 | asgeirtj/system_prompts_leaks | LLM Fine-tuning | ["system prompts", "leaks"] | 5 | dailydev |
https://github.com/baaivision/emu3.5 | baaivision/Emu3.5: Native Multimodal Models are World Learners | LLM Fine-tuning | ["emu3.5", "multimodal", "models"] | 7 | edge_bookmarks |
https://github.com/bin123apple/AutoCoder | bin123apple/AutoCoder: We introduced a new model designed for the Code generation task. Its test accuracy on the HumanEval base dataset surpasses that of GPT-4 Turbo (April 2024) and GPT-4o. | LLM Fine-tuning | ["code", "generation", "model"] | 9 | raindrop |
https://github.com/cheahjs/free-llm-api-resources | cheahjs/free-llm-api-resources: A list of free LLM inference resources accessible via API. | LLM Fine-tuning | ["free resources", "API"] | 8 | edge |
https://app.daily.dev/posts/yfgq0dca7 | google/langextract: A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization. | LLM Fine-tuning | ["information extraction", "structured data"] | 8 | dailydev |
https://app.daily.dev/posts/google-langextract-a-python-library-for-extracting-structured-information-from-unstructured-text-us-n2edyvcao | google/langextract: A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization. | LLM Fine-tuning | ["Python", "library", "structured information"] | 7 | dailydev |
https://github.com/huggingface/trl | huggingface/trl: Train transformer language models with reinforcement learning. | LLM Fine-tuning | ["transformer", "reinforcement", "learning"] | 9 | edge |
https://github.com/inclusionAI/Ling-V2 | inclusionAI/Ling-V2: Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI. | LLM Fine-tuning | ["moe", "llm", "open-source"] | 8 | edge |
https://huggingface.co/inclusionAI/Ling-flash-2.0 | inclusionAI/Ling-flash-2.0 · Hugging Face | LLM Fine-tuning | ["huggingface", "llm", "inclusionAI"] | 7 | edge |
https://github.com/joselpart/rasbt-llms-from-scratch | joselpart/rasbt-llms-from-scratch: Implement a ChatGPT-like LLM in PyTorch from scratch, step by step | LLM Fine-tuning | ["chatgpt", "pytorch", "implementation"] | 9 | edge |
https://github.com/karpathy/nanoGPT | karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs. | LLM Fine-tuning | ["gpt", "finetuning", "training"] | 9 | raindrop |
https://github.com/krillinai/KrillinAI | krillinai/KrillinAI: Video translation and dubbing tool powered by LLMs. The video translator offers 100 language translations and one-click full-process deployment. The video translation output is optimized for platforms like YouTube, TikTok. (AI video translation and dubbing tool: bidirectional translation across 100 languages, one-click full-pipeline deployment, with output adapted for Douyin, Xiaohongshu, Bilibili, WeChat Channels, TikTok, YouTube, and similar formats.) | LLM Fine-tuning | ["video translation", "LLM", "deployment"] | 8 | edge_bookmarks |
https://github.com/langwatch/langwatch | langwatch/langwatch: The open LLM Ops platform - Traces, Analytics, Evaluations, Datasets and Prompt Optimization ✨ | LLM Fine-tuning | ["llm", "ops", "analytics"] | 8 | edge |
https://github.com/jujumilk3/leaked-system-prompts/blob/main/anthropic-claude-3.7-sonnet_20250224.md | leaked-system-prompts/anthropic-claude-3.7-sonnet_20250224.md at main · jujumilk3/leaked-system-prompts | LLM Fine-tuning | ["prompts", "leaked"] | 6 | edge |
https://build.nvidia.com/nvidia/llama-3_3-nemotron-super-49b-v1_5 | llama-3.3-nemotron-super-49b-v1.5 Model by NVIDIA | NVIDIA NIM | LLM Fine-tuning | ["NVIDIA", "model", "Llama"] | 8 | edge |
https://github.com/lllyasviel/FramePack | lllyasviel/FramePack: Lets make video diffusion practical! | LLM Fine-tuning | ["video", "diffusion"] | 7 | edge |
https://karpathy.github.io/2026/02/12/microgpt/ | microgpt | LLM Fine-tuning | ["microgpt", "gpt", "fine-tuning"] | 8 | edge |
https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/discussions/34 | microsoft/Phi-3-mini-4k-instruct · Quantization | LLM Fine-tuning | ["quantization", "Phi-3", "Hugging Face"] | 8 | edge |
https://github.com/microsoft/guidance/ | microsoft/guidance: A guidance language for controlling large language models. | LLM Fine-tuning | ["guidance", "language", "models"] | 8 | raindrop |
https://github.com/mlc-ai/web-llm | mlc-ai/web-llm: High-performance In-browser LLM Inference Engine | LLM Fine-tuning | ["in-browser", "LLM inference"] | 9 | edge |
https://huggingface.co/moonshotai/Kimi-K2.5 | moonshotai/Kimi-K2.5 · Hugging Face | LLM Fine-tuning | ["Kimi-K2.5", "Hugging Face"] | 7 | edge |
https://huggingface.co/neo4j/text-to-cypher-Gemma-3-27B-Instruct-2025.04.0?clone=true | neo4j/text-to-cypher-Gemma-3-27B-Instruct-2025.04.0 · Hugging Face | LLM Fine-tuning | ["neo4j", "text-to-cypher", "Hugging Face"] | 7 | edge |
https://app.daily.dev/posts/ngafar-llama-scan-transcribe-pdfs-with-local-llms-ayyivymca | ngafar/llama-scan: Transcribe PDFs with local LLMs | LLM Fine-tuning | ["transcribe PDFs", "local LLMs"] | 7 | dailydev |
https://app.daily.dev/posts/notebooks-100-fine-tuning-llm-notebooks-on-google-colab-kaggle-and-more--dghxdap0w | notebooks — 100+ Fine-tuning LLM Notebooks on Google Colab, Kaggle, and more. | LLM Fine-tuning | ["fine-tuning", "notebooks"] | 8 | dailydev |
https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1_5 | nvidia/Llama-3_3-Nemotron-Super-49B-v1_5 · Hugging Face | LLM Fine-tuning | ["NVIDIA", "Hugging Face", "Llama"] | 8 | edge |
https://github.com/openai/evals?utm_medium=email&_hsmi=250083182&_hsenc=p2ANqtz--fiDJnGPVqHEhbVZ4epUBu51Q6Lh3RhbIg6lFctFgRWT6MNFUvi-9dG2zxmloASDFmU6nhTTCGuja3CmMcAdjON6OB3Q&utm_content=250083182&utm_source=hs_automation | openai/evals: Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. | LLM Fine-tuning | ["llm", "evaluation", "benchmarks"] | 9 | raindrop |
https://github.com/run-llama/notebookllama?ref=dailydev | run-llama/notebookllama at dailydev | LLM Fine-tuning | ["llama", "notebook", "fine-tuning"] | 7 | edge |
https://app.daily.dev/posts/tekaratzas-rustgpt-an-transformer-based-llm-written-completely-in-rust-qzl7kvwtx | tekaratzas/RustGPT: An transformer based LLM. Written completely in Rust | LLM Fine-tuning | [] | 8 | dailydev |
https://github.com/toon-format/toon | toon-format/toon: 🎒 Token-Oriented Object Notation (TOON) – Compact, human-readable, schema-aware JSON for LLM prompts. Spec, benchmarks, TypeScript SDK. | LLM Fine-tuning | ["toon", "llm", "json"] | 8 | edge |
https://huggingface.co/unsloth/MiniMax-M2.5-GGUF | unsloth/MiniMax-M2.5-GGUF · Hugging Face | LLM Fine-tuning | ["MiniMax-M2.5", "Hugging Face"] | 7 | edge |
https://app.daily.dev/posts/fmginfj18 | unslothai/unsloth: 80% faster 50% less memory LLM finetuning | LLM Fine-tuning | ["Faster", "Memory"] | 8 | dailydev |
https://app.daily.dev/posts/vllm-high-throughput-llm-serving-memory-efficient-gpu-inference-python-c-cuda-hugging-face-6qidsbktl | vllm — High-throughput LLM serving (memory-efficient GPU inference); Python/C++ (CUDA), Hugging Face | LLM Fine-tuning | ["high-throughput", "LLM serving"] | 8 | dailydev |
https://app.daily.dev/posts/cyvmgtomh | vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs | LLM Fine-tuning | ["vllm", "inference", "serving engine"] | 8 | dailydev |
https://app.daily.dev/posts/every-langgraph-user-we-know-is-making-the-same-mistake--f5b5swlv4 | Every LangGraph User We Know is Making the Same Mistake! | daily.dev | LangChain / LangGraph | ["langgraph", "mistakes", "best practices"] | 6 | edge |
https://dev.to/aiengineering/a-beginners-guide-to-getting-started-with-addmessages-reducer-in-langgraph-4gk0 | A Beginner’s Guide to Getting Started with add_messages Reducer in LangGraph - DEV Community | LangChain / LangGraph | ["langgraph", "beginner", "guide"] | 7 | edge |
https://blog.langchain.com/agent-middleware/?ref=dailydev | Agent Middleware | LangChain / LangGraph | ["agent", "middleware"] | 7 | edge |
https://developers.googleblog.com/en/announcing-the-genkit-extension-for-gemini-cli/?ref=dailydev | Announcing the Genkit Extension for Gemini CLI - Google Developers Blog | LangChain / LangGraph | ["Gemini", "CLI", "extension"] | 6 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7424511913100922880?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7424511913100922880%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Atai Barkai — Introducing the Generative UI Research Canvas ✨ Combine LangChain, Tavily, Tako | LangChain / LangGraph | ["Generative UI", "LangChain", "research"] | 8 | linkedin |
https://eu.smith.langchain.com/o/6f799d53-3c6a-4081-a835-f721d32a935c/projects/p/7178450d-a6ca-4103-a4cb-c00351b264fe?timeModel=%7B%22duration%22%3A%221d%22%7D | BRAIN - LangSmith | LangChain / LangGraph | ["langsmith", "projects", "AI"] | 7 | edge |
https://docs.langchain.com/oss/javascript/langchain/multi-agent/subagents-personal-assistant | Build a personal assistant with subagents - Docs by LangChain | LangChain / LangGraph | ["langchain", "personal-assistant", "docs"] | 8 | edge |
https://redis.com/blog/build-ecommerce-chatbot-with-redis/ | Build an Ecommerce Chatbot with Redis, LangChain, and OpenAI | Redis | LangChain / LangGraph | ["ecommerce", "chatbot", "redis"] | 8 | raindrop |
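Each row above follows the six-column layout declared in the header (url, title, category, tags, score, source). A minimal parsing sketch, assuming one row per line and that only the title column may itself contain ` | ` separators; the helper name `parse_row` is illustrative, not part of any dataset tooling:

```python
import json

def parse_row(line: str) -> dict:
    """Parse one pipe-delimited bookmark row into a record.

    The url opens the row and the last three fields (tags JSON, score,
    source) are fixed, so any extra pipes must belong to the title and
    can be re-joined after splitting.
    """
    line = line.rstrip().rstrip("|")          # drop the trailing table pipe
    parts = [p.strip() for p in line.split("|")]
    return {
        "url": parts[0],
        "title": " | ".join(parts[1:-4]),     # re-join pipes inside the title
        "category": parts[-4],
        "tags": json.loads(parts[-3]),        # tags column is a JSON list
        "score": int(parts[-2]),
        "source": parts[-1],
    }
```

Splitting from both ends like this avoids mis-parsing titles such as "Regular ML Inference vs. LLM Inference | daily.dev", where the embedded pipe would otherwise shift every later column.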