
AsianFin -- As millions of students across China sit for the annual Gaokao, the nation's most authoritative standardized exam, a surprising subplot is unfolding in the world of artificial intelligence: domestic large language models (LLMs) are refusing to take the test.
Widely regarded as one of the fairest and most rigorous large-scale selection systems in the world, the Gaokao is a formidable measure not just of academic knowledge, but of deeper abilities such as logical reasoning, information synthesis, mental agility, and written expression. These are precisely the kinds of capabilities that AI models—especially the latest general-purpose LLMs—are now being designed to emulate.
But this year, China's leading AI companies have drawn a line.
From June 7 to 10, during the official Gaokao period, mainstream Chinese LLM platforms have implemented sweeping restrictions on engaging with exam-related content—especially math questions, traditionally considered a benchmark of reasoning ability. Users attempting to upload math problems from the 2025 national exam paper were met with errors, blocked uploads, or blanket messages such as "feature not supported."
What's more, certain core capabilities, including image recognition of questions and even keyword responses involving "Gaokao" or specific exam subjects, have been disabled across major platforms. DeepSeek, one of China's most advanced LLMs, imposed the strictest limitations, even while providing relatively robust answers under more generalized prompts.
In contrast, foreign models like ChatGPT and Claude remain technically capable of answering Gaokao-style questions with advanced reasoning. But despite comparable or superior capabilities, Chinese LLM developers are opting for strategic self-censorship—a mix of compliance, safety, and reputational risk management.
"This is not a technical failure. It's a deliberate downgrade—a governance decision," said an industry insider familiar with platform content moderation mechanisms.
Although there is no publicly reported case of AI-enabled cheating during the Gaokao, the exam's intense security and national sensitivity leave no room for error. Any suggestion that AI tools might compromise test integrity—by solving questions or helping students mid-exam—could escalate into a political crisis.
Regulators are already watching closely. On May 30, China's Ministry of Education, Cyberspace Administration, and Ministry of Public Security jointly announced a crackdown on illegal activities surrounding the Gaokao. The targets: exaggerated "AI-assisted prediction" products, fake prep materials, and scams masquerading as AI-driven miracle tools.
Earlier this year, state broadcaster CCTV raised alarms over wearable AI-enabled gadgets, like smart glasses, that could be used for stealth cheating. Rokid CEO Zhu Mingming suggested "signal blocking or disabling functions" as the simplest countermeasure.
With such scrutiny, domestic LLM platforms have every incentive to sidestep potential risk—both legal and reputational. For now, rejecting Gaokao questions altogether may be the safest play.
China's top AI models are not backing away from the Gaokao because they can't handle it—they're doing so because engaging carries too much downside. In fact, many of these models now rival or exceed international peers in select performance benchmarks and specialized applications.
But the hallucination problem—inconsistent or inaccurate outputs, especially in subjects requiring precise calculations—remains a lingering weakness for all LLMs. And in a high-stakes test like the Gaokao, any mismatch between "AI-generated answers" and official ones could provoke public backlash.
Some model developers have previously marketed their ability to "solve Gaokao problems with high accuracy," but most are now choosing discretion over demonstration.
Still, these restrictions are unlikely to be permanent. Once the Gaokao concludes, partial support for K-12 content is expected to return, driven by ongoing market demand.
Interestingly, the ones complaining most during this AI blackout aren't high schoolers—they're university students in the middle of their own final exams. On Chinese social media, posts like "College students are the real victims of the Gaokao" and "Please let us use AI—help us survive finals" have gained traction, reflecting the extent to which LLMs have become embedded in students' academic routines.
While LLMs are banned in exams across most universities, attitudes toward their use in research, writing, and study vary. Some educators encourage responsible usage, so long as AI-generated material is properly cited and transparently disclosed. But in test scenarios—where fairness is paramount—AI assistance remains a clear red line.
The Road Ahead: AI as Tutor, Not Test-Taker
Looking forward, temporary restrictions during national exams are likely to become standard practice for domestic LLM platforms. But the broader trend—AI's integration into education—is far from stalling.
China's edtech giants are racing to develop AI-powered tutors, not to spoon-feed answers, but to build adaptive knowledge maps, provide personalized guidance, and foster critical thinking. The future of "AI + Education" lies in heuristic learning models, not in serving as glorified test solvers.
As China's educational system evolves and its LLM ecosystem matures, striking the right balance between compliance, innovation, and educational value will be a defining challenge—and opportunity.