A study compared the views of domain experts and superforecasters on existential risks such as nuclear war and artificial intelligence (AI). It found that domain experts tended to be more pessimistic than superforecasters about the likelihood of catastrophe and extinction. According to the Economist:
The median superforecaster reckoned there was a 2.1% chance of an AI-caused catastrophe, and a 0.38% chance of an AI-caused extinction, by the end of the century. AI experts, by contrast, assigned the two events a 12% and 3% chance, respectively.
Superforecasters recognized AI's potential as a force multiplier for other risks but were more uncertain about the risks it poses on its own. The study also highlighted differences in how the two groups perceived societal responses to AI and the limits of human intelligence in dealing with such risks.
Read more in the Economist.