This paper addresses the inadequacy of the current U.S. tort liability system in handling the catastrophic risks posed by advanced AI systems. The author proposes imposing punitive damages, even in the absence of malice or recklessness, to incentivize caution in AI development. Additional suggestions include recognizing AI development as an abnormally dangerous activity and requiring liability insurance for AI systems. The paper concludes by acknowledging the limits of tort liability and exploring complementary policies for mitigating catastrophic AI risk.