OpenAI published a practical guide through its Academy platform walking users through ChatGPT's two core personalization levers — custom instructions and memory — framing them as the difference between using ChatGPT as a search box and using it as an actual working tool. The subtext is clear: most people are leaving a lot of utility on the table.

What's new

The Academy piece isn't announcing new features so much as codifying how existing ones should be used together. Custom instructions set persistent defaults — your role, preferred tone, output formats, guardrails — so you're not re-explaining yourself at the top of every conversation. Memory goes further, storing context across sessions that ChatGPT can draw on to keep responses relevant over time. Users can query, add, or delete memory entries mid-conversation with plain-language commands. A third layer, Skills, gets a brief mention as a way to formalize repeatable workflows into structured, reusable processes — though details there are thin.

Why it matters

The framing of custom instructions as a "default working style" is the useful mental model here. Stable preferences — role, tone, format — belong in instructions. Task-specific details belong in the prompt. Memory handles the in-between: recurring context that doesn't change often but would be annoying to retype. Getting that separation right is what actually makes the tool feel consistent rather than amnesiac. OpenAI is essentially publishing the meta-layer that most power users figured out through trial and error.
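The same three-layer separation maps cleanly onto how requests are typically assembled when driving the models programmatically: stable defaults in a system message, recalled context injected alongside it, and only the task itself in the user message. A minimal sketch of that split — an illustration of the mental model, not OpenAI's implementation; the names and helper below are hypothetical:

```python
# Stable defaults: the programmatic analogue of custom instructions.
# (Hypothetical example text, not from the Academy guide.)
SYSTEM_DEFAULTS = (
    "You are an assistant for a technical editor. "
    "Default to concise answers and Markdown output."
)

def build_messages(task: str, recalled_context: str = "") -> list[dict]:
    """Assemble a request in three layers: persistent defaults,
    optional recurring context (the memory layer), and the
    task-specific prompt — each living where it belongs."""
    messages = [{"role": "system", "content": SYSTEM_DEFAULTS}]
    if recalled_context:
        # Recurring context that would be annoying to retype per task.
        messages.append(
            {"role": "system", "content": f"Known context: {recalled_context}"}
        )
    # Only the task-specific detail goes in the prompt itself.
    messages.append({"role": "user", "content": task})
    return messages

msgs = build_messages(
    "Summarize this changelog.",
    recalled_context="Audience: enterprise IT admins",
)
```

The point of the sketch is the boundary, not the plumbing: if a detail would appear in `SYSTEM_DEFAULTS`, it belongs in custom instructions; if it recurs but changes occasionally, it belongs in memory; everything else stays in the prompt.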

What to watch

The Skills feature is the one to track. Turning repeatable prompt patterns into structured, reusable workflows would represent a meaningful shift from ad-hoc prompting toward something closer to lightweight automation — without requiring API access or coding. OpenAI's Academy rollout suggests the company is betting that productivity gains, not raw capability, will drive stickier enterprise adoption in 2026.