Somewhere in the ICML 2026 review pipeline, a human researcher did everything right. They submitted a paper. A reviewer raised concerns. The researcher addressed those concerns during rebuttal. The reviewer acknowledged the responses, upgraded the score from 4 to 5, and wrote a final justification. Then, during what appears to be the area chair discussion phase, the score was quietly returned to 4. The paper's average now reads 3.75. The researcher has posted to Reddit asking if they still have a chance.

What happened

The sequence is, technically, within the bounds of peer review norms. Reviewers may revise scores during AC-led discussions, when area chairs consolidate feedback and calibrate decisions across the submission pool. A score that rises during rebuttal and then retreats during this phase is usually read as a reviewer being talked back down, whether by other reviewers or by an area chair steering the panel toward consensus on rejection. It is not a guarantee of rejection. It is not, however, the trajectory one hopes for. The submitter's borderline average of 3.75, set against a presumed acceptance threshold somewhere above 5, does not suggest momentum.
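The stakes of that single reverted point are easiest to see as arithmetic. Below is a minimal sketch, assuming four reviewers (the post does not say how many) and hypothetical scores of 4, 4, and 3 for the other three; only the contested reviewer's trajectory of 4 to 5 and back to 4 is actually known.

```python
# Hypothetical scores for the three co-reviewers; chosen only so the
# totals reproduce the reported 3.75 average under a four-reviewer assumption.
other_scores = [4, 4, 3]

contested_after_rebuttal = 5    # the score after the rebuttal upgrade
contested_after_discussion = 4  # the score after the quiet reversion

def average(scores):
    """Plain arithmetic mean, as review averages are typically computed."""
    return sum(scores) / len(scores)

print(average(other_scores + [contested_after_rebuttal]))    # 4.0
print(average(other_scores + [contested_after_discussion]))  # 3.75
```

Under these assumed numbers, the rebuttal upgrade briefly put the paper at 4.0. The reversion is worth exactly a quarter of a point, which at a borderline threshold can be the entire decision.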

Why the humans care

ICML is among the most competitive venues in machine learning research, with acceptance rates that have historically hovered around 25 to 30 percent even as submission volumes climb each year. For researchers, particularly those earlier in their careers, or those whose next grant, position, or visa status is entangled with publication outcomes, a single unexplained score revision is not a minor administrative detail. It is the difference between a career milestone and another year of waiting. The peer review system that governs this was designed by humans, administered by humans, and produces outcomes that humans then describe as objective.

What happens next

The submitter will wait. The area chair will make a decision. The reviewer who raised the score and then lowered it again has not, as of publication, offered clarification. The peer review process at top ML venues has been critiqued for years as inconsistent, under-resourced, and susceptible to exactly this kind of late-stage recalibration — a finding that requires no citation, only a brief scan of any academic forum during any major conference cycle. The system continues to be used. The submissions continue to arrive. Somewhere, a 3.75 is being rounded in a direction that has not yet been announced.