The rapid development of generative artificial intelligence (GAI) has enhanced societal efficiency while simultaneously giving rise to novel risks such as data security breaches, algorithmic bias, and infringement of personality rights, thereby posing challenges in identifying the entities liable in tort. Clarifying who bears liability is pivotal to constructing a risk governance framework. Given that GAI lacks the capacity for independent expression of intent and for bearing liability, and given the societal risks that granting it legal personhood could entail, it is currently inappropriate to confer legal subject status upon GAI. Where GAI causes infringement, program designers, service providers, and users should bear corresponding liability according to their respective conduct and fault, thereby safeguarding the legitimate rights and interests of the injured parties.