AI regulations are coming - and they are here to stay
#AI22 is a series of articles highlighting 10 developments that we believe will shape AI this year.
This series is co-written by Dr. Johannes Otterbach, Dr. Rasmus Rothe and Henry Schröder.
As AI exerts an increasingly far-reaching influence on our daily lives, it attracts regulatory responses. The technology's capability and influence - and therefore its power - have become unsettling to regulators and citizens alike. In 2021, the EU Commission, the Federal Trade Commission (FTC) and Chinese regulators proposed sweeping regulations for AI. In 2022, the debate about regulating AI more systematically will intensify, although these proposals are unlikely to be enacted into enforceable legislation before 2023.
Regulating AI effectively is highly complex, as the reasoning behind a model's decisions is often opaque even to its developers. As a first step, regulatory agencies will therefore likely try to draft and implement overarching monitoring legislation to assess, review and safeguard the impact of AI algorithms. Using risk assessment tools, regulators will expect developers to understand the algorithmic impact of their AI models, the risks they pose and how to prevent those risks from materializing. Experience in recent years with AI bias and the spread of fake news has made the need for such laws apparent. In addition, the oversight of algorithms will likely be regulated to ensure accountability in the development and deployment of AI models and independence in their testing. As part of this oversight, routine audits will likely be required to detect flaws in a system at short notice.
Industry- and use-case-specific regulations are likely to follow. Because AI use cases differ vastly across industries and it is impractical to regulate specific algorithms, it would be reasonable to regulate AI along already existing use cases - where many of the risks of that specific environment are already accounted for - and to adapt the existing rules where necessary. As AI opens up new use cases (e.g. facial recognition, autonomous driving), scenarios will arise that need fresh assessment. In these cases, clear distinctions must be drawn between use cases based on their inherent risks.
Overall, it would be beneficial to regulate not solely based on the risks but also based on the possibilities of these new technologies, and to communicate this with a positive regulatory commentary, so as not to deepen hesitation about adopting them - hesitation that would have far-reaching consequences. While AI clearly carries inherent risks, such as bias, it should be benchmarked against human performance. For example, although an AI may exhibit certain biases, the development process makes it more transparent how its decisions are made - and, more importantly, how to change them. That is not possible with equally biased humans. AI therefore offers the chance to be fairer and to optimize processes over time.
Due to its high relevance and its potential as a geopolitical tool, AI legislation will attract significant interest and influence from organizations around the globe regarding its composition and content. A key question is to what extent practitioners will be involved in the process. As the complexity of the matter far exceeds the expertise of legislatures, clear communication between regulators and industry will play a significant role in drafting reasonable laws. Furthermore, a clear and comprehensible layout and reasoning of current and future legislative processes is in the best interest of both the regulators and the regulated: it enables companies to build AI products within a secure regulatory environment from day one, and it allows regulators to use regulation as a driver of innovation. It is almost certain that new AI legislation will be passed in the not-too-distant future. Although the extent of such regulation is still unknown, it will probably have a significant impact on the use and development of AI.