Governing AI before it governs us
Artificial intelligence is no longer a distant technology. It is already reshaping economies, warfare and even how societies think. In the years ahead, AI may become as indispensable as electricity — present everywhere, rarely noticed but almost impossible to live without. That very ubiquity raises an important question: How do we govern AI globally before it begins to govern us?
History shows that every technological breakthrough cuts both ways, and AI is no exception. It empowers individuals and institutions, boosts productivity and accelerates innovation. But its risks are broader, deeper and more difficult to contain than those associated with earlier technologies such as computers or the internet. In a world already marked by geopolitical tension and social fragmentation, unmanaged AI could amplify existing fault lines.
AI is not a single tool but a cluster of enabling technologies capable of autonomous decision-making across industry, agriculture, the military and public governance. For ordinary people, this has immediate consequences. The long-anticipated replacement of labor by machines is arriving faster than expected. Labor markets are undergoing structural shifts as standardized, highly substitutable jobs in manufacturing and services disappear. In advanced economies, these changes overlap with deindustrialization, risking further social dislocation. In many developing countries, the gap between workforce skills and technological trajectories threatens to create inequalities more severe than the traditional digital divide.
Security risks are even more complex. Low-cost unmanned aerial vehicles have already demonstrated how limited AI applications can reshape modern warfare. As unmanned systems become more intelligent and autonomous, their implications for national security and strategic stability remain uncertain.
Meanwhile, in the geopolitical "grey zone", generative AI has made cognitive warfare more efficient, persistent and difficult to detect. Unlike traditional disinformation campaigns, AI-driven information manipulation can pollute public discourse over the long term, deepen social polarization and intensify mistrust.
Beyond economics and security lies an even more profound challenge: values. AI systems increasingly shape how people receive information, make judgments and understand the world. Debates over autonomous weapons, algorithmic bias and discrimination remain unresolved. Yet the harder question is how human cognition itself may evolve when continuously mediated by AI. Technology does not merely serve society; it reshapes it.
These risks do not stop at national borders. Once AI systems scale globally, no country can remain insulated. Effective governance, therefore, cannot be purely national. But international cooperation on AI governance faces two obstacles. The first is geopolitical rivalry, which complicates trust. The second, and perhaps more fundamental, is speed. Capital, talent and resources are flooding into AI development, while policymaking and international coordination move far more slowly. Traditional governance mechanisms struggle to keep pace with exponential technological change.
This mismatch calls for a new governance paradigm — one that uses advanced technology to help govern technology itself. A promising approach is what might be called human-AI co-governance.
Under such a framework, AI systems would assist in identifying risks, issuing early warnings and supporting the preliminary design of governance mechanisms. Humans, however, would remain fully accountable for defining objectives, setting boundaries, making final decisions and conducting ethical oversight. Coordination among countries — an inherently political and non-standardized task — would remain a human responsibility. In this model, AI enhances governance capacity without replacing human judgment.
Applied globally, human-AI co-governance could make international governance more efficient, responsive and adaptive. It could also serve as a template for addressing other transnational challenges shaped by rapid technological change.
Where should global cooperation begin? A practical starting point lies in technical safety standards — the lowest common denominator among states. Regardless of political differences, safety standards are indispensable. Civil aviation offers a useful precedent: shared rules and norms ensure safety without erasing national differences.
Through human-AI co-governance, countries can jointly develop technical safety standards for AI, including clear rules to protect national and individual security. These might include prohibiting AI systems from independently making wartime decisions, or establishing unified labeling requirements for AI-generated images and videos. This requires closer cooperation among national technology authorities, law-enforcement agencies, and leading technology firms, supported by multilateral coordination mechanisms that accelerate implementation.
Equally important is inclusiveness. Countries and companies at the technological frontier should invest in capacity building and knowledge sharing, helping developing countries cultivate technical and managerial talent and strengthen digital infrastructure. Without such efforts, AI risks widening the gap between the Global North and the Global South.
Importantly, cooperation on technical safety standards can also help buffer global governance from great-power rivalry. Competition between China and the United States in AI is real and unavoidable, but this need not lead to decoupling or confrontation. On issues such as technical safety, both sides share an interest in preventing systemic risks that no country could manage alone. Practical cooperation in this domain can build trust and create a foundation for addressing other global challenges.
Global governance of AI is not a distant aspiration but a present necessity. The challenge is that technology now evolves faster than the institutions designed to regulate it. The solution is not to slow innovation, but to govern it more intelligently. By working with AI systems rather than against them, and by anchoring innovation in shared responsibility and safety, the international community can shape a future in which AI serves human progress rather than undermining it.
Jia Zifang is an associate professor at the Institute of International Relations at China Foreign Affairs University. Wang Dong is a professor at the School of International Studies and executive director of the Institute for Global Cooperation and Understanding at Peking University.
The views don't necessarily represent those of China Daily.