Oxford, Cambridge, and OpenAI Jointly Issue AI Warning Report "The Malicious Use of Artificial Intelligence"

Driverless cars, drones, video generation... these AI applications are exciting, but they could also bring disasters unlike anything seen before – what if a driverless vehicle were maliciously manipulated to run you down? Here is what AI policy researchers have to say.

On February 20th, 26 researchers from Oxford, Cambridge, OpenAI, the Electronic Frontier Foundation (a non-profit digital rights organization), the Center for a New American Security (an American think tank), and other research institutes released a report warning about the potential malicious use of AI and proposing corresponding prevention and mitigation measures.


The impact of emerging technologies on human security has long been a hot topic. The report, published by these heavyweight institutions, is titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation." It predicts rapid growth in cybercrime over the next decade, the abuse of drones, and the use of "bots" to manipulate everything from elections to social media, and it calls on governments and businesses to take seriously the dangers of current artificial intelligence applications.

The report also recommends the following interventions to mitigate the threat of malicious use of artificial intelligence:

Policymakers and technical researchers need to work together now to understand and prepare for the malicious use of artificial intelligence.

Artificial intelligence has many positive applications, but it is a double-edged sword; AI researchers and engineers should be mindful of the possibility of abuse and work actively to anticipate it.

AI researchers should learn best practices from disciplines with a longer history of handling such risks, such as computer security.

The range of stakeholders involved in preventing and mitigating the malicious use of AI should be actively expanded.

The 100-page report focuses on three security domains particularly relevant to the malicious use of artificial intelligence: digital security, physical security, and political security. It argues that AI can break the traditional trade-off between the scale and the efficiency of attacks, making large-scale, finely targeted, and highly efficient attacks possible.

Examples include automated hacking, speech synthesis used to impersonate targets, targeted spam crafted from information harvested from social media, and new types of cyberattack that exploit vulnerabilities in AI systems themselves.

Similarly, the proliferation of drones and cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends – causing autonomous vehicles to crash, turning commercial drones into targeted missiles, or hijacking critical infrastructure for ransom. The rise of autonomous weapons systems on the battlefield could likewise lead to a loss of meaningful human control.

In the political arena, public opinion can be manipulated through detailed profiling, targeted propaganda, and cheap but highly convincing fake videos, at a scale that was previously unimaginable. With artificial intelligence, personal information can be aggregated, analyzed, and processed en masse, enabling surveillance that invades privacy and fundamentally shifts the balance of power between individuals, businesses, and states.

To mitigate these risks, the authors explore several interventions: rethinking cybersecurity, exploring different models of open information sharing, promoting a culture of responsibility, and developing policies and technologies that favor security defenses.

One of the report's authors, Dr. Seán Ó hÉigeartaigh, Executive Director of the Centre for the Study of Existential Risk at the University of Cambridge, said:

Artificial intelligence is a game changer. This report imagines what the world could look like in the next five to ten years.

The world we live in could become one rife with harms caused by the misuse of artificial intelligence, and we need to take ownership of these problems because the risks are real. There are choices we must make now, and our report is a call to action for governments, institutions, and individuals around the world.

For decades, hype around artificial intelligence and machine learning outstripped the facts. That is no longer the case. This report looks at practices that no longer work and suggests broader approaches: for example, how to design software and hardware that is less vulnerable to attack, and what types of laws and international regulations might be adopted.

However, at the outset of the report, the authors also acknowledge: "We have not given a solution to the long-term equilibrium between AI-enabled security attacks and the corresponding defenses."

There have also been critical voices online. GigaOm commentator Jon Collins pointed out that none of the proposed risks is described quantitatively; the report only characterizes them as "plausible," which is not a scientific description of risk. How big are the risks? We don't know. Moreover, the report states in many places that "the authors of this report hold differing views." In this respect, the report's value may lie more in raising awareness than in guiding concrete action.
