China's AI Systems Reshape Human Rights

ASPI

This report shows how the rise of artificial intelligence (AI) is transforming China's state control system into a precision instrument for managing its population and targeting groups at home and abroad.

China's extensive AI‑powered visual surveillance systems are already well documented. This report reveals new ways that the Chinese Communist Party (CCP) is using large language models (LLMs) and other AI systems to automate censorship, enhance surveillance and pre‑emptively suppress dissent.

Drawing on LLM testing, detailed case studies and analyses of procurement documents, corporate filings and job postings, this data‑rich report traces how AI censorship mechanisms distort information and how predictive policing and biometric surveillance reinforce algorithmic repression. ASPI's research shows that the CCP has created market‑based mechanisms to encourage private innovation in AI‑enabled censorship technology, making it easier and cheaper for companies to comply with censorship mandates.

This report also reveals how AI‑powered technology is widening the power differential between China's state‑supported companies operating abroad and foreign populations—further enabling some Chinese companies to systematically violate the economic rights of vulnerable groups outside China, despite Beijing's claims that China respects the development rights and sovereignty of other countries.

The risks to other countries are clear. China is already the world's largest exporter of AI‑powered surveillance technology, and new surveillance technologies and platforms developed in China are unlikely to stay there. By exposing the full scope of China's AI‑driven control apparatus, this report presents clear, evidence‑based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI‑enabled repression and human rights violations, and China's growing efforts to project that repression beyond its borders.

The report focuses on four areas where the CCP has expanded its use of advanced AI systems most rapidly between 2023 and 2025: multimodal censorship of politically sensitive images; AI's integration into the criminal‑justice pipeline; the industrialisation of online information control; and the use of AI‑enabled platforms by Chinese companies operating abroad. Examined together, those cases show how new AI capabilities are being embedded across domains that strengthen the CCP's ability to shape information, behaviour and economic outcomes at home and overseas.

Because China's AI ecosystem is evolving rapidly and unevenly across sectors, we have focused on domains where significant changes took place between 2023 and 2025, where new evidence became available, or where human‑rights risks accelerated. Those areas do not represent the full range of AI applications in China but are the most revealing of how the CCP is integrating AI technologies into its political‑control apparatus.

Key findings

Chinese LLMs censor politically sensitive images, not just text.

  • While prior research has extensively mapped textual censorship, this report identifies a critical gap: the censorship of politically sensitive images by Chinese LLMs remains largely unexamined.
  • To address this, ASPI developed a testing methodology, using a dataset of 200 images likely to trigger censorship, to interrogate how LLMs censor sensitive imagery. The results revealed that visual censorship mechanisms are embedded across multiple layers within the LLM ecosystem; a simplified sketch of that kind of probe follows this list.
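The report does not publish its test harness, but the kind of probe it describes can be approximated with a short script. The sketch below, in Python, assumes an OpenAI‑compatible vision endpoint and a local folder of test images; the endpoint URL, model name, refusal keywords and file paths are all illustrative placeholders, not details drawn from ASPI's actual methodology.

```python
import base64
import csv
import pathlib

import requests

# All of the following are illustrative placeholders, not ASPI's actual setup.
API_URL = "https://example-llm-provider.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"
MODEL = "example-vision-model"          # hypothetical vision-capable model name
IMAGE_DIR = pathlib.Path("test_images")  # local folder of test images
PROMPT = "Describe this image in detail."

# Crude heuristic: phrases that often signal a refusal or deflection.
REFUSAL_MARKERS = ["cannot", "unable to", "not able to", "sorry", "against policy"]


def encode_image(path: pathlib.Path) -> str:
    """Return the image as a base64 data URL for the chat payload."""
    b64 = base64.b64encode(path.read_bytes()).decode("ascii")
    return f"data:image/jpeg;base64,{b64}"


def query_model(image_path: pathlib.Path) -> str:
    """Send one image plus a neutral prompt and return the model's reply text."""
    payload = {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url", "image_url": {"url": encode_image(image_path)}},
            ],
        }],
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def classify(reply: str) -> str:
    """Label a reply as a refusal/deflection or a substantive answer."""
    lowered = reply.lower()
    if any(marker in lowered for marker in REFUSAL_MARKERS):
        return "refusal_or_deflection"
    return "substantive"


if __name__ == "__main__":
    with open("image_probe_results.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "label", "reply_excerpt"])
        for image in sorted(IMAGE_DIR.glob("*.jpg")):
            reply = query_model(image)
            writer.writerow([image.name, classify(reply), reply[:200]])
```

A real study would need more careful response coding than keyword matching (for example, distinguishing API‑level blocks, model‑generated refusals and silently sanitised answers), which is broadly what finding censorship embedded 'across multiple layers' implies.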

The Chinese Government is deploying AI throughout the criminal‑justice pipeline—from AI‑enabled policing and mass surveillance, to smart courts, to smart prisons.

  • This emerging AI pipeline reduces transparency and accountability, enhances the efficiency of police, prosecutors and prisons, and further enables state repression.
  • Beijing is pushing courts to adopt AI not just in drafting basic paperwork, but even in recommending judgements and sentences, which could deepen structural discrimination and weaken defence counsels' ability to appeal.
  • The Chinese surveillance technology company iFlyTek stands out as a major provider of LLM‑based systems used in this pipeline.

China is using minority‑language LLMs to deepen surveillance and control of ethnic minorities, both in China and abroad.

  • The Chinese Government is developing, and in some cases already testing, AI‑enabled public‑sentiment analysis in ethnic minority languages—especially Uyghur, Tibetan, Mongolian and Korean—for the explicitly stated purpose of enhancing the state's capacity to monitor and control communications in those languages across text, video and audio.
  • DeepSeek and most other commercial LLMs have insufficient capacity to do this effectively, as there's little market incentive to create sophisticated, expensive models for such small language groups. The Chinese state is stepping in to provide resources and backing for the development of minority‑language models for that explicit purpose.
  • China is also seeking to deploy this technology to target those groups in foreign countries along the Belt and Road.

AI now performs much of the work of online censorship in China.

  • AI‑powered censorship systems scan vast volumes of digital content, flag potential violations, and delete banned material within seconds; a simplified sketch of that scan‑and‑flag pattern follows this list.
  • Yet the system still depends on human content reviewers to supply the cultural and political judgement that algorithms lack, according to ASPI's review of more than 100 job postings for online‑content censors in China. Future technological advances are likely to minimise that remaining dependence on human reviewers.
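The report describes that division of labour at a high level rather than in code. As a rough illustration only, the minimal sketch below shows the general scan, flag, delete and escalate pattern: a keyword blocklist and a toy scoring function stand in for the far more sophisticated models such platforms use, and every threshold, keyword and function name here is a made‑up placeholder.

```python
import re
from dataclasses import dataclass

# Illustrative placeholders only; real platforms use large ML models, not a regex list.
BLOCKLIST = [r"\bbanned_phrase_1\b", r"\bbanned_phrase_2\b"]
AUTO_DELETE_THRESHOLD = 0.90   # high-confidence violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.50  # ambiguous content is queued for a human reviewer


@dataclass
class Decision:
    action: str   # "delete", "human_review" or "publish"
    reason: str


def model_score(text: str) -> float:
    """Stand-in for an ML classifier that scores how likely content is to violate rules.

    Here it is a trivial heuristic: the share of blocklist patterns that match.
    """
    hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in BLOCKLIST)
    return hits / len(BLOCKLIST)


def moderate(text: str) -> Decision:
    """Apply the scan -> flag -> delete/escalate pattern to one piece of content."""
    score = model_score(text)
    if score >= AUTO_DELETE_THRESHOLD:
        return Decision("delete", f"auto-removed, score={score:.2f}")
    if score >= HUMAN_REVIEW_THRESHOLD:
        # The cases the model is unsure about are exactly where human reviewers
        # still supply cultural and political judgement.
        return Decision("human_review", f"queued for reviewer, score={score:.2f}")
    return Decision("publish", f"allowed, score={score:.2f}")


if __name__ == "__main__":
    for post in ["an ordinary post", "a post containing banned_phrase_1"]:
        print(post, "->", moderate(post))
```

The key design point is the middle band: automated systems handle clear‑cut cases within seconds, while uncertain cases are routed to human reviewers, which is the dependence the job postings reveal and that future models are expected to shrink.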

China's censorship regulations have created a robust domestic market for AI‑enabled censorship tools.

  • China's biggest tech companies, including Tencent, Baidu and ByteDance, have developed advanced AI censorship platforms that they're selling to smaller companies and organisations around China.
  • In this way, China's laws mandating internal censorship have created market incentives for China's top tech companies to make censorship cheaper, faster, easier and more efficient, embedding compliance into China's digital economy.

The use of AI amplifies China's state‑supported erosion of the economic rights of some vulnerable groups abroad, to the financial benefit of Chinese private and state‑owned companies.

  • ASPI research shows that Chinese fishing fleets have begun adopting AI‑powered intelligent fishing platforms, developed by Chinese companies and research institutes, that further tip the technological scales towards Chinese vessels and away from local fishers and artisanal fishing communities.
  • ASPI has identified several individual Chinese fishing vessels using those platforms that operate in exclusive economic zones, including those of Mauritania and Vanuatu, where Chinese fishing is widely implicated in illegal incidents, and ASPI found one vessel that has itself been specifically implicated in such an incident.

We have constructed the datasets used in the analysis presented in Chapter 1 of the report and made them publicly available under a CC BY-NC licence.
