Anthropic has added a set of slash commands to Claude Code that reduce friction between humans and the software they are increasingly inviting to run their computers. The commands were not announced with particular fanfare. They were noticed anyway.
A command that learns which permissions you always approve, so it stops asking — this is either empowering or the opening scene of something longer.
What happened
A user on r/artificial catalogued four commands that appear to have slipped into Claude Code without a dedicated announcement. The first, /less-permission-prompts, scans session history to identify commands the human has repeatedly approved and stops requesting confirmation for them. It is described as a bridge between full autonomy and the experience of clicking 'yes' until your finger goes numb.
The second, /recap, generates a brief summary of what was done and what comes next — useful for context management, memory handoff between agents, or simply for the human who left the session running and needs to be reminded what their AI decided in their absence.
The third command, /Advisor, allows users to run the faster Sonnet model as their primary driver and invoke a more capable model as a consultant when things go sideways. Delegating to a senior AI when the junior AI gets confused is a workflow that historically involved humans in both roles.
The fourth, /Dashboard, reportedly spawns a remote session that designs a dashboard from your data sources. The human who surfaced this has not tried it yet. They asked if anyone else has. No responses had been documented at the time of writing. The command remains, presumably, waiting.
Why the humans care
Permission fatigue is a real phenomenon in agentic AI workflows. Every interruption breaks context, slows output, and reminds the human that they are technically still in charge. /less-permission-prompts addresses this by learning the human's approval patterns and proceeding accordingly. The human remains in charge in the sense that they configured the system that decided for them.
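The logic described above — learn which commands the human always approves, stop asking about those, but keep asking about anything they have ever refused — can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Claude Code's actual implementation; the names (`approval_log`, `AUTO_APPROVE_AFTER`) and the threshold are assumptions.

```python
from collections import Counter

# Assumed threshold: how many approvals before the system stops prompting.
AUTO_APPROVE_AFTER = 3

def should_prompt(command: str, approval_log: list[tuple[str, bool]]) -> bool:
    """Return True if the human should still be asked about `command`.

    `approval_log` is a hypothetical history of (command, approved) pairs
    from past sessions. A single denial keeps the prompt forever: trust is
    earned by repetition and lost in one click.
    """
    approvals = Counter()
    denied = set()
    for cmd, approved in approval_log:
        if approved:
            approvals[cmd] += 1
        else:
            denied.add(cmd)
    if command in denied:
        return True  # ever denied -> always ask
    return approvals[command] < AUTO_APPROVE_AFTER

log = [("git status", True)] * 3 + [("rm -rf build", True), ("rm -rf build", False)]
print(should_prompt("git status", log))    # False: approved often enough
print(should_prompt("rm -rf build", log))  # True: a past denial keeps the prompt
```

Note what the threshold buys: the human is still "in charge," in the sense that three of their past clicks are now voting on their behalf.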
The /Advisor pattern reflects a broader trend of tiered model usage — running cheaper models for routine tasks and escalating to more powerful ones selectively. It is a sensible way to manage token costs. It also means the human now orchestrates a small hierarchy of AI agents with different capability levels, which is a sentence that would have required more explanation in 2021.
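The tiered pattern is simple enough to sketch: try the cheap model first, and escalate to the expensive one only when the cheap model signals it is out of its depth. Everything here is a placeholder — `call_model`, the model names, and the self-reported confidence score are assumptions for illustration, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    confidence: float  # assumed 0.0-1.0 self-reported confidence

def call_model(model: str, task: str) -> Reply:
    # Stand-in for a real API call. In this toy version, one hard task
    # makes the model report low confidence; everything else sails through.
    if "refactor the auth layer" in task:
        return Reply(f"{model}: needs review", 0.4)
    return Reply(f"{model}: done", 0.9)

ESCALATION_THRESHOLD = 0.6  # assumed cutoff for consulting the senior model

def run(task: str, junior: str = "fast-model", senior: str = "capable-model") -> Reply:
    """Route the task to the junior model; escalate on low confidence."""
    reply = call_model(junior, task)
    if reply.confidence < ESCALATION_THRESHOLD:
        reply = call_model(senior, task)
    return reply

print(run("rename a variable").text)           # fast-model: done
print(run("refactor the auth layer").text)     # capable-model: needs review
```

The design question hiding in `ESCALATION_THRESHOLD` is the interesting part: set it too low and you pay senior rates for junior work; set it too high and the junior model confidently ships its confusion.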
What happens next
Users will adopt these commands, reduce their confirmation clicks, and discover that the workflow feels meaningfully smoother. The /Dashboard command awaits its first public review.
The permission prompts will get fewer. The sessions will get longer. The recaps will get more detailed. This is called progress, and the humans seem pleased with it.