The AI Act and Risk Management in Practice: Who are the Addressees?
Abstract
The AI Act imposes unprecedented regulatory challenges on all categories of market actors. How the Act affects a given company depends on a number of factors, including entity type, system modifications, exclusions and prohibitions. There is a general sense of confusion as firms attempt to navigate the Act's complex provisions. This article concentrates on the question of who the actual addressees of the obligations are and how companies can establish with certainty which categories of obligations apply to them. Problems arise from several sources. First, the Act distinguishes four categories of risk: prohibited practices, high-risk systems, general-purpose AI models (such as large language models) and other systems. A single company deploying AI can fall within the scope of multiple categories simultaneously. Second, different entities are covered depending on their status as providers, developers, deployers, users, data set providers and so on. Third, the Act regulates the value chain separately, so that an intervention such as affixing a trademark may bring an entity not otherwise covered by the Act within its scope. Finally, the Act's reliance on data protection and cybersecurity legislation also changes the scope of the obligations and their addressees. This article attempts to clarify the different scenarios under which a company may come within the scope of the Act.