Growing Clamour For ‘Worldwide Ban’ On Further Development Of Artificial Intelligence
New York: There is a growing demand for a halt to further development of artificial intelligence (AI).
Eliezer Yudkowsky, who heads research at the Machine Intelligence Research Institute, has demanded an immediate shutdown of all training of AI systems more powerful than ChatGPT.
The artificial intelligence researcher, writing in Time Magazine, opined that a 6-month moratorium was not enough.
Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, former US presidential candidate Andrew Yang, and Prof. Yuval Noah Harari of the Hebrew University of Jerusalem were among more than 1,100 signatories to a recent open letter that called on all artificial intelligence labs to immediately pause training of AI systems more powerful than OpenAI's GPT-4 for at least six months.
However, Yudkowsky said the open letter — which warned of ‘profound risks’ to society and humanity — “understated” the dangers posed by generative technology and artificial intelligence.
“I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it,” Yudkowsky warned.
After launching ChatGPT in November, OpenAI released GPT-4 last month, heightening concerns that artificial intelligence surpassing human cognitive ability may arrive sooner than expected.
“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter of tech titans stated.
Yudkowsky has argued that the creation of a superhumanly smart AI, without precision and preparation, could result in the death of everyone on Earth.
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in maybe possibly some remote chance, but as in that is the obvious thing that would happen,” he wrote in TIME.
Yudkowsky has warned that if an overly powerful AI is built under current conditions, every member of the human species and all biological life on Earth could die soon after.