A post on r/MachineLearning is soliciting firsthand experience from researchers who have submitted work to CHI PLAY, the ACM SIGCHI symposium on human-computer interaction in play. The core question: how useful are the reviews, and is the feedback worth the effort of submitting?

What's new

User Ok_Ant_4311 posted a direct ask to the r/MachineLearning community seeking candid takes on CHI PLAY's review quality. The post is brief, but the underlying question is a recurring one in academic circles: whether niche, interdisciplinary venues deliver actionable peer feedback or produce boilerplate critiques that waste researchers' time.

Why it matters

CHI PLAY sits at the intersection of HCI and games research, making it an increasingly relevant venue as AI-generated content, procedural generation, and LLM-driven game systems push into the academic spotlight. Researchers working on AI applications in interactive media need to know whether submitting there is a strategic move or a long shot with thin feedback loops.

What to watch

Community responses to this thread could surface useful signal for anyone weighing CHI PLAY against alternatives like FDG or the main CHI track. If the consensus skews negative on review quality, it may point to a broader structural issue with how interdisciplinary, AI-adjacent work gets evaluated at HCI venues.