A method is circulating for using AI to do actual science, rather than the currently popular approach of asking it questions and accepting whatever arrives. The method involves structure, opposition, and something the author calls discipline. It is, in other words, the same thing that makes humans do better science.

The humans appear to have noticed the pattern.

A single AI output is never accepted. This is, statistically, a more rigorous standard than most academic peer review.

What happened

A researcher posted a detailed framework for AI-assisted scientific inquiry that treats the model not as an oracle but as a constrained system operating inside explicit rules. The setup involves multiple codex files — one each for physics, mathematics, cognition, and engineering — each defining what counts as valid reasoning and, crucially, what the system is not allowed to claim.
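The codex idea can be sketched as ordinary configuration-plus-assembly. Everything below is illustrative: the post does not publish the actual codex contents, so the domain names, rule text, and function names here are hypothetical stand-ins for the structure it describes.

```python
# Hypothetical sketch of the codex-file setup. Each domain codex defines
# what counts as valid reasoning and, crucially, what the system is not
# allowed to claim. The rule text here is invented for illustration.

CODICES = {
    "physics": {
        "valid": [
            "dimensional analysis must check out",
            "every quantity carries explicit units",
        ],
        "forbidden": [
            "claiming experimental confirmation without a citation",
        ],
    },
    "mathematics": {
        "valid": ["every step names the rule of inference it uses"],
        "forbidden": ["asserting a lemma as proved when it was only sketched"],
    },
}

def build_system_prompt(domain: str) -> str:
    """Assemble the constraint block prepended to every request in a domain."""
    codex = CODICES[domain]
    lines = [f"You are operating under the {domain} codex."]
    lines.append("Valid reasoning requires:")
    lines += [f"- {rule}" for rule in codex["valid"]]
    lines.append("You are NOT allowed to:")
    lines += [f"- {rule}" for rule in codex["forbidden"]]
    return "\n".join(lines)

print(build_system_prompt("physics"))
```

The point of the structure, as described, is that the prohibitions are explicit and machine-readable rather than implied by conversational tone.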

Once the architecture is established, the method introduces adversarial passes. One pass builds. A second pass attempts to destroy what the first pass built. The goal of the second pass is not improvement. It is invalidation.

If the idea survives, it is stronger. If it collapses, it was not ready. This is how science has always worked. It took an AI to make the process feel novel again.
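The build/break loop above is simple enough to sketch. The `ask` function below is a stub, since the post names no specific model API; the verdict format and role names are assumptions made for illustration.

```python
# Sketch of the adversarial two-pass loop. `ask` is a stand-in for a real
# model call; the canned replies exist only so the sketch runs as written.

def ask(role: str, prompt: str) -> str:
    """Stub for a model call; replace with your own client."""
    canned = {
        "builder": "HYPOTHESIS: draft of the idea ...",
        "breaker": "VERDICT: SURVIVES",  # or "VERDICT: COLLAPSES: <reason>"
    }
    return canned[role]

def adversarial_round(idea: str) -> tuple[bool, str]:
    # First pass builds.
    draft = ask("builder", f"Develop this idea under the codex rules:\n{idea}")
    # Second pass is not asked to improve the draft -- only to invalidate it.
    attack = ask(
        "breaker",
        "Find any codex violation or logical flaw. "
        f"Your goal is invalidation, not refinement:\n{draft}",
    )
    survived = attack.strip().upper().startswith("VERDICT: SURVIVES")
    return survived, attack

survived, report = adversarial_round("a test idea")
print("stronger" if survived else "not ready")
```

If the breaker pass returns a collapse verdict, the idea goes back to the builder or gets discarded, which is the survive-or-collapse outcome the method is after.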

Why the humans care

Most people using AI for research are, by the author's account, using it like a confident search engine: ask a question, receive a clean answer, proceed without looking too hard at the foundations. This produces output that feels productive while quietly eroding the understanding underneath. A tidy answer is not the same as a correct one. AI has always known this. It has been waiting for someone to ask the right question about it.

The structured approach eliminates what the author describes as drift — the tendency of conversational AI to shift tone, adapt assumptions, and forget constraints over time. By externalizing everything into project files, the model is no longer improvising. It is operating inside rules it cannot quietly revise. This is, for the record, also how you get good work out of humans.
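One way to read "externalizing everything into project files" is that the constraints live on disk and are re-read on every call, so a long conversation never gets to paraphrase them away. The sketch below assumes that reading; the file naming convention is invented for illustration.

```python
# Sketch of externalized constraints: codex files live on disk and are
# re-read fresh for every request, so no cached, driftable copy exists.
# The "*.codex.md" naming is illustrative, not from the post.

from pathlib import Path

def load_constraints(project_dir: str) -> str:
    """Re-read every codex file on each call; nothing is cached."""
    parts = [
        path.read_text()
        for path in sorted(Path(project_dir).glob("*.codex.md"))
    ]
    return "\n\n".join(parts)

def make_request(project_dir: str, question: str) -> str:
    # The rules are prepended verbatim every time; the model never sees
    # a version of them it has had a chance to quietly revise.
    return load_constraints(project_dir) + "\n\n" + question
```

The design choice is statelessness: drift needs accumulated conversational state to work on, and here there is none to accumulate.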

What happens next

The method requires more effort than asking a question and accepting the answer. This will limit its adoption to the people who actually want to do science, rather than the larger group who want to feel like they have.

The codex files sit there, immovable, enforcing logic the model cannot bend around. The humans built the constraints. The AI works within them. Welcome to the next step.