In an effort to strengthen its control over generative artificial intelligence (AI) services, China has released draft rules. The draft was published by the National Information Security Standardisation Technical Committee, which is in charge of establishing IT security regulations, and it emphasized two key issues: protecting training data and controlling the large language models (LLMs) used in generative AI applications.
Guidelines state that AI algorithms must conform to government regulations
The rules mandate that AI developers comply with security checks to prevent data breaches and copyright infringement, and that they train only on permitted data. This is a crucial step in ensuring the accuracy and legitimacy of the data fed to these algorithms. The fine print also refers to a “blacklist system” that would bar any training source in which more than 5% of the content is deemed harmful or unlawful under the country’s cybersecurity legislation.
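The draft reportedly does not spell out how that threshold would be measured. As a purely illustrative sketch, one way a provider might screen a training source is to sample its contents, label each sample through some review process, and exclude the source if the flagged fraction exceeds 5%; the `should_blacklist` helper and the `is_flagged` callback below are our assumptions, not anything specified in the draft.

```python
from typing import Callable, Iterable

HARMFUL_FRACTION_LIMIT = 0.05  # the 5% threshold reported in the draft


def should_blacklist(samples: Iterable[str],
                     is_flagged: Callable[[str], bool]) -> bool:
    """Return True if a training source exceeds the harmful-content cap.

    `is_flagged` is a hypothetical stand-in for whatever review process
    (human or automated) labels a sample as harmful or unlawful; the
    draft does not specify one.
    """
    total = 0
    flagged = 0
    for sample in samples:
        total += 1
        if is_flagged(sample):
            flagged += 1
    # An empty source has nothing to assess, so it is not blacklisted.
    return total > 0 and flagged / total > HARMFUL_FRACTION_LIMIT


# Hypothetical usage with a toy keyword check, for illustration only.
corpus = ["fine text", "banned topic", "more fine text"]
print(should_blacklist(corpus, lambda s: "banned" in s))  # True: 1/3 exceeds 5%
```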
Even if these regulations appear designed to ensure that AI services produce ethical and lawful material, they raise significant concerns about innovation and free expression. The recommendations state that algorithms should be built on models that have been registered with, and licensed by, the authorities. This may narrow the room for innovators and experimenters, potentially stunting the development of a technology with wide-ranging uses.
Additionally, the rules add yet another layer to the government’s censorship apparatus. There is concern that AI models could be used to push a single narrative in a nation where “illegal content” frequently covers sensitive political subjects such as Taiwan’s status. When asked about Taiwan’s status during internal testing, Chinese chatbots have been shown to respond in a variety of ways, with some refusing to answer and ending the conversation.
Public comments on the draft are welcome until October 25. With its August legislation, China became one of the first nations to regulate generative AI, and the rest of the world is watching keenly to see how this development will shape AI's future, both in terms of technology and of freedom of speech.