url | title | category | tags | score | source |
|---|---|---|---|---|---|
https://www.linkedin.com/feed/update/urn:li:activity:7401652397674291201?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7401652397674291201%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | diginomica — Graph database company Neo4j mined 150,000 Reddit posts to forecast plot outcome | Knowledge Graphs & Neo4j | ["Neo4j", "forecasting"] | 8 | linkedin |
https://github.com/getzep/graphiti | getzep/graphiti: Build Real-Time Knowledge Graphs for AI Agents | Knowledge Graphs & Neo4j | ["knowledge-graphs", "AI"] | 8 | edge |
https://github.com/getzep/graphiti?ref=dailydev | getzep/graphiti: Build and query dynamic, temporally-aware Knowledge Graphs | Knowledge Graphs & Neo4j | ["knowledge graphs", "dynamic querying"] | 8 | raindrop |
https://github.com/jacomyal/sigma.js | jacomyal/sigma.js: A JavaScript library aimed at visualizing graphs of thousands of nodes and edges | Knowledge Graphs & Neo4j | ["javascript", "graph", "visualization"] | 8 | edge |
https://github.com/kuzudb/kuzu | kuzudb/kuzu: Embedded property graph database built for speed. Vector search and full-text search built in. Implements Cypher. | Knowledge Graphs & Neo4j | ["graph database", "vector search"] | 9 | edge |
http://localhost:5173/neo4j-dashboard | migRaven NEXUS UI | Knowledge Graphs & Neo4j | ["neo4j", "dashboard", "UI"] | 7 | edge_bookmarks |
https://huggingface.co/neo4j/text-to-cypher-Gemma-3-27B-Instruct-2025.04.0 | neo4j/text-to-cypher-Gemma-3-27B-Instruct-2025.04.0 · Hugging Face | Knowledge Graphs & Neo4j | ["neo4j", "text-to-cypher", "huggingface"] | 8 | edge |
https://www.yfiles.com/demos | yFiles - Demos | Knowledge Graphs & Neo4j | ["demos", "visualization"] | 5 | edge |
https://app.daily.dev/posts/pokerbattle-ai-the-first-ever-cash-poker-tournament-for-llms-laoo14phc | PokerBattle.ai — The first-ever cash poker tournament for LLMs | LLM Fine-tuning | ["LLM", "poker", "tournament"] | 7 | edge |
https://app.daily.dev/posts/refactor-an-existing-codebase-using-prompt-driven-development-drs8kyi95 | Refactor an Existing Codebase using Prompt Driven Development | LLM Fine-tuning | ["prompt driven", "development", "refactor"] | 7 | edge |
https://app.daily.dev/posts/a-foundational-guide-to-evaluation-of-llm-apps-lrrb9l3mj | A Foundational Guide to Evaluation of LLM Apps | LLM Fine-tuning | ["evaluation", "LLM", "guide"] | 9 | edge |
https://app.daily.dev/posts/accelerating-large-language-models-with-nvfp4-quantization-ope22pwoi | Accelerating large language models with NVFP4 quantization | LLM Fine-tuning | ["large-language-models", "quantization"] | 8 | edge |
https://app.daily.dev/posts/conversing-with-large-language-models-using-dapr-9tfmgithc | Conversing with Large Language Models using Dapr | LLM Fine-tuning | ["Dapr", "large language models"] | 7 | edge |
https://app.daily.dev/posts/ttqlfvb3q | Google Just Dropped Transformer 2.0: Meet "Nested Learning" | LLM Fine-tuning | ["Transformer", "Nested Learning"] | 8 | edge |
https://app.daily.dev/posts/how-do-llms-work--k2ddtksgh | How Do LLMs Work? | LLM Fine-tuning | ["llms", "how to", "tutorial"] | 8 | edge |
https://app.daily.dev/posts/how-to-build-your-own-custom-llm-memory-layer-from-scratch-2og42b4ow | How to Build Your Own Custom LLM Memory Layer from Scratch | LLM Fine-tuning | ["custom", "LLM", "memory"] | 8 | edge |
https://app.daily.dev/posts/how-to-evaluate-and-select-the-right-llm-for-your-genai-application-xqyon1gj7 | How to Evaluate and Select the Right LLM for Your GenAI Application | LLM Fine-tuning | ["evaluate", "LLM"] | 8 | edge |
https://app.daily.dev/posts/llm-routing-in-production-choosing-the-right-model-for-every-request-zj72a5hr3 | LLM routing in production: Choosing the right model for every request | LLM Fine-tuning | ["llm", "routing", "production"] | 8 | edge |
https://app.daily.dev/posts/y4vkzlusv | Why users shouldn’t choose their own LLM models | LLM Fine-tuning | ["llm", "models", "users"] | 7 | edge |
https://app.daily.dev/posts/your-llm-provider-will-go-down-here-s-what-should-happen-next--yth4hy0aa | Your LLM Provider Will Go Down. Here's What Should Happen Next | LLM Fine-tuning | ["llm", "provider", "downtime"] | 6 | edge |
https://note.com/guruguruhyena/n/nb29af65f9107 | Stable Diffusion models - guruguruhyena - note | LLM Fine-tuning | ["stable diffusion", "models"] | 6 | raindrop |
https://medium.com/aimonks/10-chatgpt-4o-hacks-to-make-you-better-than-99-of-chatgpt-users-e7edd0818e75 | 10 ChatGPT-4o Hacks To Make You Better Than 99% of ChatGPT Users | LLM Fine-tuning | ["chatgpt", "hacks"] | 8 | raindrop |
https://blog.dailydoseofds.com/p/5-llm-fine-tuning-techniques-250?ref=dailydev | 5 LLM Fine-tuning Techniques - by Avi Chawla | LLM Fine-tuning | ["fine-tuning", "techniques", "llm"] | 9 | edge |
https://app.daily.dev/posts/7-llm-generation-parameters-4mavwom35 | 7 LLM Generation Parameters | LLM Fine-tuning | ["LLM", "Generation Parameters"] | 8 | dailydev |
https://blog.dailydoseofds.com/p/7-llm-generation-parameters?ref=dailydev | 7 LLM Generation Parameters - by Avi Chawla | LLM Fine-tuning | ["LLM", "generation parameters"] | 8 | edge |
https://blog.dailydoseofds.com/p/8-key-llm-development-skills-for?ref=dailydev | 8 Key LLM Development Skills for AI Engineers | LLM Fine-tuning | ["LLM", "skills", "AI"] | 9 | edge |
https://app.daily.dev/posts/klgqsoffa | 8 Key LLM Development Skills for AI Engineers | LLM Fine-tuning | ["LLM", "Development Skills"] | 9 | dailydev |
https://app.daily.dev/posts/8-key-llm-development-skills-for-ai-engineers-nf16jg6o8 | 8 Key LLM Development Skills for AI Engineers | LLM Fine-tuning | ["LLM", "Development Skills"] | 9 | dailydev |
https://app.daily.dev/posts/a-beginner-s-field-guide-to-large-language-models-from-tokens-to-agents-fwjyydhci | A Beginner’s Field Guide to Large Language Models: From Tokens to Agents | LLM Fine-tuning | ["field guide", "large language models"] | 8 | edge |
https://devblogs.microsoft.com/foundry/a-developers-guide-to-fine-tuning-gpt-4o-for-image-classification-on-azure-ai-foundry/?ref=dailydev | A Developer's Guide to Fine-Tuning GPT-4o for Image Classification on Azure AI Foundry - Azure AI Foundry Blog | LLM Fine-tuning | ["fine-tuning", "gpt-4o", "image classification"] | 9 | edge |
https://blog.dailydoseofds.com/p/a-foundational-guide-to-evaluation-3ec?ref=dailydev | A Foundational Guide to Evaluation of LLM Apps (Part B) | LLM Fine-tuning | ["evaluation", "LLM apps"] | 8 | edge |
https://app.daily.dev/posts/a-guide-to-llm-evals-y9wpcrtpx | A Guide to LLM Evals | LLM Fine-tuning | ["llm", "evals", "guide"] | 8 | edge |
https://github.com/AI-Maker-Space/LLM-Ops-Cohort-1 | AI-Maker-Space/LLM-Ops-Cohort-1: Following emerging LLM Ops best practices, learn the key technologies that enable Generative AI practitioners to leverage tools like LangChain, LlamaIndex, and more to build complex LLM applications. | LLM Fine-tuning | ["LLM", "LangChain", "Cohort"] | 9 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7187333501988257792?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7187333501988257792%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Abhishek Thakur — AutoTrain-finetuned Llama3 beats the base instruct model on all but one benchmark | LLM Fine-tuning | ["AutoTrain", "Llama3", "benchmark"] | 9 | linkedin |
https://addyosmani.com/blog/ai-coding-workflow/?ref=dailydev | AddyOsmani.com - My LLM coding workflow going into 2026 | LLM Fine-tuning | ["LLM", "coding workflow"] | 8 | edge |
https://agenta.ai/ | Agenta - Prompt Management, Evaluation, and Observability for LLM apps | LLM Fine-tuning | ["prompt", "management", "llm"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7391467445774901248?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7391467445774901248%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Akshay Pachaar — A simple trick cuts your LLM costs by 50%! Just stop using JSON and use this in… | LLM Fine-tuning | ["cost reduction", "LLM", "JSON"] | 7 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7366114771491803136?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7366114771491803136%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Alex Wang — Chat GPT 5 just got a face. 👩🏼💻 And these real-time AI personas are going main… | LLM Fine-tuning | ["Chat GPT 5", "AI Personas", "Real-time"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7384181282911473664?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7384181282911473664%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Alex Wang — LLMs are writing more code than ever. But can you trust it? Sonar just published… | LLM Fine-tuning | ["Code Generation", "Trust"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7433110959310831617?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7433110959310831617%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Alexander May — Your hot take about LLMs isn't original. I wrote about the developer outrage cycle… | LLM Fine-tuning | ["LLMs", "developer outrage"] | 6 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7415552680074641408?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7415552680074641408%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Alexandru Dan — You can teach Claude Code to Self Improve its Skills! I watched the Developers… | LLM Fine-tuning | ["Claude Code", "self-improvement"] | 7 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7426173131192668160?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7426173131192668160%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | AlphaSignal — You can now parse any document with one 1.7B parameter model. dots-ocr delivers… | LLM Fine-tuning | ["document parsing", "model", "AI"] | 7 | linkedin |
https://app.daily.dev/posts/an-open-source-stack-for-industrial-grade-llm-applications-t8wjkf0ak | An open-source stack for industrial-grade LLM applications | LLM Fine-tuning | ["open-source", "LLM applications", "stack"] | 9 | dailydev |
https://www.linkedin.com/feed/update/urn:li:activity:7434290699153362944?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7434290699153362944%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | André Lindenberg — LLMs don’t fail because of model quality. They fail because they have no shared… | LLM Fine-tuning | ["model quality", "LLMs"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7420405767658307584?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7420405767658307584%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Ari Singh — Claude Code just got a lot cheaper. A few days ago, it became possible to run C… | LLM Fine-tuning | ["Claude Code", "cost"] | 7 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7415389907164233728?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7415389907164233728%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Artem Luko — 💡 AI Engineering Tip: Slash your token costs by 300% 😵 👉 Get AI Credits - https… | LLM Fine-tuning | ["Token Costs", "AI Credits", "Engineering Tip"] | 7 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7435032665411723264?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7435032665411723264%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Ash Lewis — Choosing the right model is hard. Keeping it accurate in production is harder. | LLM Fine-tuning | ["model selection", "production accuracy"] | 7 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7383807744295800832?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7383807744295800832%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Ashwanth S — This prompt architecture just replaced $15K/month prompt engineers. And 99% of… | LLM Fine-tuning | ["prompt architecture", "cost reduction", "engineering"] | 7 | linkedin |
https://awesomegeminiprompts.tech/?ref=producthunt | Awesome Gemini Prompts - The Ultimate Collection | LLM Fine-tuning | ["prompts", "gemini", "collection"] | 8 | edge |
https://www.pondhouse-data.com/blog/azure-ai-content-filters | Azure OpenAI Content Filters: The Good, The Bad, and The Workarounds | LLM Fine-tuning | ["Azure", "content filters"] | 8 | edge |
https://learn.microsoft.com/en-us/azure/foundry/openai/concepts/model-retirements?view=foundry-classic&tabs=text | Azure OpenAI in Microsoft Foundry Model Retirements - Microsoft Learn | LLM Fine-tuning | ["Azure", "model retirement"] | 7 | edge |
https://aisharenet.com/bee/ | Bee - a full-stack multimodal large-model project open-sourced jointly by Tencent Hunyuan and Tsinghua University - AI分享圈 | LLM Fine-tuning | ["multimodal", "model", "AI"] | 8 | edge_bookmarks |
https://www.linkedin.com/feed/update/urn:li:activity:7399831492169924608?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7399831492169924608%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Ben Burtenshaw — Finally, NanoChat has landed in transformers! 🚀 And we went wild on this deep dive… | LLM Fine-tuning | ["NanoChat", "transformers"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7321148914923921408?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7321148914923921408%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Ben Meer — 5 Claude Prompts that will supercharge your productivity: Claude just released… | LLM Fine-tuning | ["Claude", "productivity", "prompts"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7392127105511088128?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7392127105511088128%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Ben Niehaus — Can you reverse the prompt from an LLM? Is it possible that someone could take… | LLM Fine-tuning | ["prompt", "LLM"] | 6 | linkedin |
https://blog.vllm.ai/2026/02/27/rocm-attention-backend.html?ref=dailydev | Beyond Porting: How vLLM Orchestrates High-Performance Inference on AMD ROCm - vLLM Blog | LLM Fine-tuning | ["vLLM", "inference"] | 8 | edge |
https://devblogs.microsoft.com/foundry/beyond-the-prompt-why-and-how-to-fine-tune-your-own-models/ | Beyond the Prompt - Why and How to Fine-tune Your Own Models - Microsoft Foundry Blog | LLM Fine-tuning | ["fine-tuning", "models", "blog"] | 9 | edge |
https://dev.to/hadil/bifrost-the-fastest-llm-gateway-for-production-ready-ai-systems-40x-faster-than-litellm-2i51?ref=dailydev | Bifrost: The Fastest LLM Gateway for Production-Ready AI Systems (40x Faster Than LiteLLM) - DEV Community | LLM Fine-tuning | ["LLM", "gateway"] | 9 | edge |
https://github.com/Bklieger/groqbook | Bklieger/groqbook: Generate entire books in seconds using Groq and Llama3 | LLM Fine-tuning | ["book", "generation", "llama3"] | 8 | raindrop |
https://www.claude.com/blog/building-skills-for-claude-code | Building Skills for Claude Code - Claude | LLM Fine-tuning | ["Claude Code", "skills", "AI"] | 8 | edge |
https://huggingface.co/ByteDance/Ouro-2.6B-Thinking | ByteDance/Ouro-2.6B-Thinking · Hugging Face | LLM Fine-tuning | ["ByteDance", "Hugging Face", "model"] | 7 | edge_bookmarks |
https://app.daily.dev/posts/categories-of-inference-time-scaling-for-improved-llm-reasoning-bguxcwmhk | Categories of Inference-Time Scaling for Improved LLM Reasoning | LLM Fine-tuning | ["inference", "scaling"] | 8 | edge |
https://github.com/CaviraOSS/pagelm?ref=dailydev | CaviraOSS/PageLM | LLM Fine-tuning | ["pagelm", "llm", "fine-tuning"] | 7 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7416586634135228416?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7416586634135228416%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Charly Wargnier — What?! You can now run 70B LLMs on a 4GB GPU 🤯 AirLLM is a memory-optimized… | LLM Fine-tuning | ["LLMs", "GPU", "AirLLM"] | 9 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7430929963442253824?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7430929963442253824%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Charly Wargnier — 🚨 Every prompt you need for ChatGPT, Gemini, and Claude is now in one massive library… | LLM Fine-tuning | ["ChatGPT", "prompt library"] | 7 | linkedin |
https://llm-ui.com/chat | Chat - llm-ui | LLM Fine-tuning | ["chat", "llm-ui"] | 8 | edge |
https://dev.to/techiesdiary/chatgpt-prompts-for-developers-216d?ref=dailydev | ChatGPT - Prompts for developers | LLM Fine-tuning | ["chatgpt", "prompts"] | 8 | raindrop |
https://answerharbor.com/2025/11/15/claude-ai-complete-guide-to-anthropics-ai-assistant/?fi=0&cid=3c4ac6a6-e084-40ba-8d49-57498b22786e&sub=claide.ai&utm_source=claide.ai&hide_featured=1 | Claude AI: Complete Guide to Anthropic’s AI Assistant - AnswerHarbor | LLM Fine-tuning | ["Claude AI", "Anthropic", "assistant"] | 8 | edge |
https://claudelog.com/ | Claude Code Docs, Guides & Best Practices - ClaudeLog | LLM Fine-tuning | ["Claude", "best practices"] | 8 | edge |
https://code.claude.com/docs/en/microsoft-foundry | Claude Code on Microsoft Foundry - Claude Code Docs | LLM Fine-tuning | ["Claude", "Microsoft", "foundry"] | 8 | edge |
https://docs.anthropic.com/en/docs/claude-code/settings | Claude Code settings - Anthropic | LLM Fine-tuning | ["Claude", "AI", "settings"] | 7 | edge |
https://www.anthropic.com/news/skills | Claude Skills: Customize AI for your workflows - Anthropic | LLM Fine-tuning | ["claude", "customize", "workflows"] | 7 | edge |
https://platform.openai.com/docs/models/compare?model=gpt-5-pro | Compare models - OpenAI API | LLM Fine-tuning | ["openai", "api", "models"] | 9 | edge |
https://github.com/ConardLi/easy-dataset | ConardLi/easy-dataset: A powerful tool for creating fine-tuning datasets for LLMs | LLM Fine-tuning | ["fine-tuning", "datasets", "github"] | 9 | edge_bookmarks |
https://context7.com/ | Context7 - Up-to-date documentation for LLMs and AI code editors | LLM Fine-tuning | ["documentation", "AI code editors"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7407070996580503553?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7407070996580503553%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Daniel Han — You can now fine-tune LLMs and deploy them directly on your phone! 🚀 We collabed… | LLM Fine-tuning | ["fine-tuning", "mobile"] | 9 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7429907272736174080?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7429907272736174080%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Daniel Han — You can now train LLMs in VS Code for free via Colab & Unsloth AI. 🔥 We made a g… | LLM Fine-tuning | ["train LLMs", "VS Code", "Colab"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7427009839308062721?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7427009839308062721%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Daniel Han — You can now train MoE models 12× faster with 35% less VRAM via our new Triton ke… | LLM Fine-tuning | ["MoE models", "Triton kernel", "training speed"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7292372816593633280?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7292372816593633280%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | David Zhang — Introducing deep-research - my own open source implementation of OpenAI's new Deep Research… | LLM Fine-tuning | ["OpenAI", "Deep Research", "Implementation"] | 8 | linkedin |
https://app.daily.dev/posts/decoding-generation-parameters-and-the-llm-application-lifecycle-1q51gs3mm | Decoding, Generation Parameters, and the LLM Application Lifecycle | LLM Fine-tuning | ["decoding", "generation", "llm"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7416328462887604226?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7416328462887604226%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Dileep Krishna — Here’s a cheat code for your Claude Code, from Boris Cherny, who created Claude Code… | LLM Fine-tuning | ["cheat code", "Claude Code"] | 7 | linkedin |
https://www.kdnuggets.com/2023/04/dolly-20-chatgpt-open-source-alternative-commercial.html?utm_source=rss&utm_medium=rss&utm_campaign=dolly-2-0-chatgpt-open-source-alternative-for-commercial-use | Dolly 2.0: ChatGPT Open Source Alternative for Commercial Use - KDnuggets | LLM Fine-tuning | ["dolly", "open-source", "chatgpt alternative"] | 8 | raindrop |
https://www.linkedin.com/feed/update/urn:li:activity:7427666557868158976?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7427666557868158976%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Eduardo Ordax — Train and inference GPT in only 200 lines of pure dependency-free Python. This… | LLM Fine-tuning | ["GPT", "Python", "Training"] | 9 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7195392557734895616?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7195392557734895616%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Eric Vyacheslav — Llama 3 is now capable of generating GPT-4-level answers instantly. This is Llama… | LLM Fine-tuning | ["LLama 3", "GPT-4", "AI"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7414561743995236352?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7414561743995236352%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Eric Vyacheslav — You can now parse any document with one 1.7B parameter model. dots-ocr delivers… | LLM Fine-tuning | ["Document Parsing", "Model", "OCR"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7413606898010099713?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7413606898010099713%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Evgenii Kniazev, PhD — This matches an insight I’ve had about the temporal resolution of LLMs. In my ex… | LLM Fine-tuning | ["temporal resolution", "insight"] | 7 | linkedin |
https://blog.dailydoseofds.com/p/fine-tuning-and-deploying-llm-with?ref=dailydev | Fine-tuning and Deploying LLM with Unsloth, SGLang and Runpod | LLM Fine-tuning | ["fine-tuning", "deployment"] | 9 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7404553055846785024?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7404553055846785024%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Florian Hönicke — Jailbreak LLMs with one simple trick. 🧠 They discover scaling laws for jailbreaking… | LLM Fine-tuning | ["jailbreak", "scaling laws"] | 8 | linkedin |
https://blog.alexewerlof.com/p/base-models-vs-instruct-models?ref=dailydev | Foundation vs. Instruct vs. Thinking Models | LLM Fine-tuning | ["models", "comparison"] | 7 | edge |
https://cookbook.openai.com/examples/gpt-5/gpt-5_frontend | Frontend coding with GPT-5 | LLM Fine-tuning | ["gpt-5", "frontend", "coding"] | 7 | edge |
https://blog.google/innovation-and-ai/technology/developers-tools/functiongemma/ | FunctionGemma: New Gemma model for function calling | LLM Fine-tuning | ["function calling", "Gemma"] | 8 | edge_bookmarks |
https://cookbook.openai.com/examples/gpt-5/gpt-5_new_params_and_tools | GPT-5 New Params and Tools | LLM Fine-tuning | ["gpt-5", "params", "tools"] | 8 | edge |
https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide | GPT-5 prompting guide | LLM Fine-tuning | ["GPT-5", "prompting"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7434262704472899585?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7434262704472899585%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Gadi Evron — Toward [un]prompted con, Knostic is open-sourcing OpenAnt, our LLM-based vulnerability… | LLM Fine-tuning | ["vulnerability", "open-source"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7279088935421530112?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7279088935421530112%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | GenAI Works — Open source alternative to ChatGPT that runs 100% offline on your computer 👇 T… | LLM Fine-tuning | ["Open Source", "ChatGPT", "Offline"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7298687915109359616?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7298687915109359616%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | GenAI Works — See How LLMs Work—In 3D 🧠 Ever wondered how large language models process infor… | LLM Fine-tuning | ["3D", "LLMs", "information processing"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7413622394428317696?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7413622394428317696%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Ghadeer A. — 🚨 BREAKING: Anthropic just released ALL the Claude Code secrets. Claude Code feel… | LLM Fine-tuning | ["Claude Code", "secrets"] | 8 | linkedin |
https://learn.microsoft.com/en-us/azure/foundry-classic/concepts/model-catalog-content-safety?view=foundry-classic | Guardrails & controls for Models Sold Directly by Azure (classic) - Microsoft Learn | LLM Fine-tuning | ["Azure", "model controls"] | 8 | edge |
https://www.linkedin.com/feed/update/urn:li:activity:7262221054637424640?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7262221054637424640%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Hafizal Johari — Breaking Barriers in Technical Analysis with LLM! I’m super excited to share a… | LLM Fine-tuning | ["Technical Analysis", "LLM", "Barriers"] | 8 | linkedin |
https://www.linkedin.com/feed/update/urn:li:activity:7422001708517568513?updateEntityUrn=urn%3Ali%3Afs_updateV2%3A%28urn%3Ali%3Aactivity%3A7422001708517568513%2CFEED_DETAIL%2CEMPTY%2CDEFAULT%2Cfalse%29 | Hamada Mahdi — Code is cheap now. Software is not. AI coding tools can get you to 60-70% of th… | LLM Fine-tuning | ["AI coding tools", "efficiency"] | 8 | linkedin |