A Conversation with Xu Ke: The New Regulations Take Effect, So What's Next for Generative AI?

Babbitt

Source: Economic Observer (ID: eeo-com-cn), Author: Chen Bai

Introduction

**1. To resolve the contradiction between relatively stable legislation and rapid technological iteration, law and technology must innovate simultaneously and resonate at the same frequency.**

**2. In legislating for new technologies, we might as well add some "sunset clauses." A sunset clause is a figurative term: the law has an institutional life cycle and, like the sun, will eventually "set."**

**3. A particularly noteworthy misunderstanding, and one that is easy to fall into, is the tendency to artificially pick winners during implementation, which in turn distorts the market's spontaneous allocation of resources.**

Image source: Generated by Unbounded AI tool

The Interim Measures for the Administration of Generative Artificial Intelligence Services, which have attracted much industry attention, come into force on August 15, 2023.

From the public consultation in April, to the official release in July, to implementation in August, regulation of artificial intelligence has this time moved faster than the market expected.

The reasons for this fast pace can be found on many levels. Although the technology and industry carnival sparked by generative AI is calming down, and competition among enterprises has moved from the "war of a hundred models" into the deep water of vertical applications, **the social anxiety brought by AI has never faded from view.**

AI face-swapping, fraud, copyright ownership: these technical, legal, and ethical controversies have never stopped. "Godfather of AI" Geoffrey Hinton warned at the beginning of this year that the threat artificial intelligence poses to humanity may be even more urgent than climate change: if someone uses it to fabricate images and videos and spread misinformation, its destructive power far exceeds that of pure text. Since 2023, global technology leaders, including OpenAI co-founders Sam Altman and Elon Musk, have signed multiple rounds of open letters calling for attention to the risks of AI.

Correspondingly, the pace of supervision in various countries is also accelerating. In May of this year, the European Union passed the negotiating mandate for the Artificial Intelligence Act. The amended proposal further raised the penalties for violations, from a maximum of 30 million euros or 6% of the offending company's global annual turnover for the previous financial year to a maximum of 40 million euros or 7%. The increase in penalties reflects the determination and resolve of the EU authorities in supervising artificial intelligence.

Since the beginning of 2023, the new industrial opportunities brought by technologies represented by artificial intelligence have attracted worldwide attention and triggered a new round of industrial competition. In this context, how the rhythm of regulation can adapt to the pace of technological innovation, and how to minimize the technology's negative impact while leaving the industry enough room to develop, has become the central question for regulating the technology.

Xu Ke is an associate professor at the University of International Business and Economics and director of its Digital Economy and Legal Innovation Research Center, with deep research in cyber law and digital-economy legislation. In his view, to solve the "pace problem" of AI regulation in the face of rapidly iterating technology, it is advisable to add some **"sunset clauses"** (that is, provisions that limit how long certain norms remain in force) when drafting rules. And when facing an industry, *don't pick a winner* is also a very important criterion.

(What follows is the conversation with Associate Professor Xu Ke.)

1. The Pace Problem

**Economic Observer: This round of rules for generative AI seems to have come faster than for previous technologies?**

Xu Ke: In recent years, information technologies such as the Internet, artificial intelligence, blockchain, big data, cloud computing, and the Internet of Things have posed challenges to our existing social operating systems.

Judging from past experience, from Internet finance to Bitcoin and blockchain, we have accumulated considerable experience in regulating the impact of emerging technologies, and of course some lessons as well: **lessons of regulating too early and stifling innovation, and of regulating too late and letting risk spread.**

For example, the unchecked growth of Internet finance caused systemic and stakeholder risks that later proved irreparable. So when we once again face a disruptive technological change, we can see that the pace of supervision this time is relatively fast and front-loaded.

However, a closer look at the Measures introduced this time shows that the overall approach is relatively industry-friendly. From the April draft for comments to the final release, it is clear that the government's regulatory thinking has shifted from an initial focus on risk prevention to **seeking a balance between development and security**. This should be the regulator's basic attitude toward governing the general artificial intelligence industry represented by generative AI. To put it bluntly, the regulator hopes to install the brakes before the car gets on the road, so that technology and industry can run more steadily and technology does not spin out of control.

**To resolve the contradiction between relatively stable legislation and rapid technological iteration, law and technology must innovate simultaneously and resonate at the same frequency.** To this end, the second paragraph of Article 16 of the Measures stipulates: "The relevant competent state authorities shall, in light of the characteristics of generative artificial intelligence technology and its service applications in relevant industries and fields, improve scientific supervision methods compatible with innovation and development, and formulate corresponding classified and graded regulatory rules or guidelines."

In essence, this is agile governance of new technology. But it does not mean that legislation must be enacted whenever a new technology appears; we can also respond in more diverse, flexible, and innovative ways, such as technical benchmarks, technology ethics, industry standards, and corporate self-compliance.

2. Sunset Clauses

**Economic Observer: Technological change is accelerating. Does this make legislating on certain issues, such as data security and privacy protection, difficult?**

Xu Ke: China already has the Data Security Law, the Personal Information Protection Law, and the Civil Code, a relatively complete set of institutional tools for data security, personal information, and privacy protection. So no matter how the technology changes, mature existing rules can be recombined, like Lego bricks, and applied to new technical scenarios.

Of course, beyond the problems common to emerging technologies, each new technology inevitably raises new problems of its own, and the law needs a new response. But new problems mean constant change, and our understanding of them changes too. **Take generative artificial intelligence: the drafting of the Measures was itself a process of deepening the regulators' understanding. In this sense, the new institutional response must be provisional, which is the logic behind the word "Interim" in the Measures' title.**

Actually, I think we can go further. I have long suggested that in legislating for new technologies, we might as well add some "sunset clauses." **A sunset clause is a figurative term: the law has an institutional life cycle and, like the sun, will eventually "set."** More concretely, it is a provision that sets an effective implementation period for the law; once the period expires, the law must be repealed or further revised.

In the digital economy, and especially in technological innovation, I think regulatory norms need to pay more attention to sunset clauses. The static legislative thinking we were accustomed to in the past lags behind the dynamic pattern of rapid technological iteration. As innovation accelerates, we must recognize the limits of our own understanding; **agile governance itself implies timely adjustment and institutional adaptability.**

3. Don’t Pick a Winner

**Economic Observer: Industrial policy for new energy vehicles is regarded as a specimen of "overtaking on the curve"; now many localities have begun introducing computing-power subsidies for artificial intelligence. Does this mean industrial policy is becoming increasingly important for the industrial transformation driven by new technologies?**

Xu Ke: From the logic of promoting industrial development, the real underlying value of industrial policy, and of regulatory norms as well, is to provide fertile soil for the industry's growth, reduce firms' cost of trial and error, and eliminate negative market externalities, leaving the rest to the market itself. **A particularly noteworthy misunderstanding, and one that is easy to fall into, is the government's tendency to artificially pick winners during implementation, which in turn distorts the market's spontaneous allocation of resources.**

In innovative industries especially, the principle of not picking winners becomes even more important, because technical paths often diverge sharply: just as no one previously foresaw that generative AI and large models would break out, development is a process of continuous, spontaneous evolution. Of course, there was considerable luck in how the new energy vehicle industry came to be singled out, and we can understand the inertia behind this kind of regulation. After all, in the face of unknown uncertainty, people are used to seeking some deterministic coordinates.

But in artificial intelligence, the divergence of technology paths is even greater and the prospects are harder to predict. What the government should do is lay the groundwork, provide public resources, and actively support infrastructure such as computing power, data, and chips, while still maintaining "technology neutrality" and "competition neutrality," because in the end it is the market and the technology itself that will determine the winner.
