Featured

Growing Clamour For ‘Worldwide Ban’ On Further Development Of Artificial Intelligence

New York: There is a growing demand for a halt to further development of artificial intelligence (AI).

Eliezer Yudkowsky, who heads research at Machine Intelligence Research Institute, has demanded an immediate shutdown of all training of AI systems more powerful than ChatGPT.

The artificial intelligence researcher, writing in Time Magazine, opined that a 6-month moratorium was not enough.

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, former US Presidential candidate Andrew Yang and Prof. Yuval Noah Harari of the Hebrew University of Jerusalem were recently among over 1,100 signatories to an open letter that called on all artificial intelligence labs to immediately pause training of AI systems more powerful than OpenAI’s GPT-4 for at least 6 months.

However, Yudkowsky said the open letter — which warned of ‘profound risks’ to society and humanity — “understated” the dangers posed by generative technology and artificial intelligence.

“I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it,” Yudkowsky warned.

After launching ChatGPT in November, OpenAI released GPT-4 last month, heightening concerns that artificial intelligence surpassing human cognitive ability may arrive sooner than expected.

“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter of tech titans stated.

Yudkowsky has argued that the creation of a superhumanly smart AI, without precision and preparation, could result in the death of everyone on Earth.

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in maybe possibly some remote chance, but as in that is the obvious thing that would happen,” he wrote in TIME.

Yudkowsky has warned that if an overly powerful AI is built under current conditions, every member of the human species and all biological life on Earth could die soon after.

OB Bureau