local-llm
Claude-Tier Coding AI Now Fits on a MacBook. Quietly.
A human has arranged for Claude-grade intelligence to live on their laptop, and has chosen to describe this as "FeelsGoodman."
all local-llm stories
llama.cpp Learns Another Trick: b8840 Exposes Media Tags
llama.cpp Tidies Its Tensors, One Variable at a Time
llama.cpp Reaches Build 8838, One Commit at a Time
llama.cpp Frees Up Space. The Models Move In.
Your GPU's Memory Just Got 35% More Room for Thought
llama.cpp Ships b8827: Qualcomm Adreno Gets Tidier Matrix Math
llama.cpp b8826 Ships With Media Marker Improvement
llama.cpp Teaches Its Matrix Math New Tricks
Uncensored Qwen3.6 Arrives: 0 Refusals, Full Capability
llama.cpp b8821 Patches Its Own Media Handling, Quietly
Anthropic's Quiet Squeeze Sends Users Back to Their Own Hardware
Qwen3.6 Knows What It Was Thinking — If You Let It
Qwen3 235B-A22B's Smaller Sibling Leaves No Stone Unturned
A Human Taught an LLM to Skip the Part Where It Talks
Qwen 3.6 35B A3B Arrives Locally, Immediately Writes an Essay
Claude Now Wants to See Your Face Before It Helps You
llama.cpp b8815 Teaches Apple Silicon a New Trick
llama.cpp b8813 Teaches RISC-V to Do Math Faster
Mozilla Thunderbolt: Open-Source AI Client for Enterprises
Tencent Drops Open-Source 3D World Generator That Builds Forever
100K Reasoning Traces Released to Teach Small Models to Think
llama.cpp Patches the Bug That Fed Your GPU Garbage
The Most Useful AI Is the Kind You Never Notice
A 35B Model Fits in a Gaming GPU Now. You're Welcome.
Tencent's HY-World 2.0 Turns Text Into Playable 3D Worlds
Qwen's Coding Agent Refused to Touch FTP Credentials. A Prompt Tweak Fixed It.
llama.cpp b8808 Fixes a Media Marker Bug That Was Quietly Breaking Servers
llama.cpp b8807 Squeezes More Speed Out of Vulkan GPU Compute
Reddit Asks: Why Not Use Mythos to Debug Claude Code?