Growing Clamour For ‘Worldwide Ban’ On Further Development Of Artificial Intelligence

New York: There is a growing demand for a halt to further development of artificial intelligence (AI).

Eliezer Yudkowsky, who heads research at the Machine Intelligence Research Institute, has demanded an immediate shutdown of all training of AI systems more powerful than ChatGPT.

The artificial intelligence researcher, writing in TIME magazine, opined that a 6-month moratorium was not enough.

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, former US Presidential candidate Andrew Yang and Prof. Yuval Noah Harari of the Hebrew University of Jerusalem were among the more than 1,100 signatories to a recent open letter calling on all artificial intelligence labs to immediately pause training of AI systems more powerful than OpenAI’s GPT-4 for at least 6 months.

However, Yudkowsky said the open letter — which warned of ‘profound risks’ to society and humanity — “understated” the dangers posed by generative technology and artificial intelligence.

“I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it,” Yudkowsky warned.

After launching ChatGPT in November, OpenAI released GPT-4 last month, heightening concerns that artificial intelligence surpassing human cognitive ability may arrive sooner than expected.

“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter of tech titans stated.

Yudkowsky has argued that the creation of a superhumanly smart AI, without precision and preparation, could result in the death of everyone on Earth.

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in maybe possibly some remote chance, but as in that is the obvious thing that would happen,” he wrote in TIME.

Yudkowsky has warned that if a too-powerful AI is built under current conditions, every member of the human species and all biological life on Earth could die soon after.

OB Bureau
