Pentagon’s Dark Move: AI Weapons Poised to Kill Unsupervised!

According to a New York Times report, the US government is seriously considering using AI-controlled drones that can decide for themselves whether to kill human targets.

According to the Times, nations like China, Israel, and the United States are creating deadly autonomous weapons that can select targets using artificial intelligence.

Critics claim that the use of “killer robots” would be a terrifying advancement because it would leave decisions about life and death on the battlefield to machines with little to no human oversight.

The US is one of the few nations—along with Russia, Australia, and Israel—that oppose any move by the United Nations to impose a binding resolution that forbids the use of AI killer drones.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

The Pentagon is planning swarms of thousands of AI-enabled drones, according to a notice published earlier this year.

US Deputy Secretary of Defense Kathleen Hicks said in an August speech that technologies like AI-controlled drone swarms will allow the US to offset the numerical superiority of China’s People’s Liberation Army (PLA) in both weapons and manpower.

“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, Reuters reported.

Frank Kendall, the secretary of the Air Force, told The Times that AI drones must be capable of making lethal decisions, albeit under human supervision.

“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said.

“I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”

New Scientist reported that Ukraine used AI-controlled drones in its war with Russia in October, though it is unclear whether any of them caused human casualties.

A film on the subject will be shown on Monday at an event at the UN Convention on Certain Conventional Weapons organized by the Campaign to Stop Killer Robots and hosted by Stuart Russell, a senior AI scientist at the University of California, Berkeley, and others.

The campaign issues a warning: “Machines don’t see us as people, just another piece of code to be processed and sorted. From smart homes to the use of robot dogs by law enforcement, A.I. technologies and automated decision-making are now playing a significant role in our lives. At the extreme end of the spectrum of automation lie killer robots.”

“Killer robots don’t just appear – we create them,” the campaign added. “If we allow this dehumanisation we will struggle to protect ourselves from machine decision-making in other areas of our lives. We need to prohibit autonomous weapons systems that would be used against people, to prevent this slide to digital dehumanisation.”

Russell argues that the development and use of autonomous weapons, like drones, tanks, and automatic machine guns, would be disastrous for human security and freedom, and that there is not much time left to stop them.

“The technology illustrated in the film is simply an integration of existing capabilities. It is not science fiction. In fact, it is easier to achieve than self-driving cars, which require far higher standards of performance,” Russell said.

As the Campaign to Stop Killer Robots also notes, some worry that AI-powered devices are inexpensive enough to produce in large quantities that they could end up in the hands of terrorists or rogue states, who could use them to wreak havoc and suppress populations, as the movie portrays.

“A treaty banning autonomous weapons would prevent large-scale manufacturing of the technology,” the campaign notes. “It would also provide a framework to police nations working on the technology, and the spread of dual-use devices and software such as quadcopters and target recognition algorithms.”

“Professional codes of ethics should also disallow the development of machines that can decide to kill a human,” Russell said.

In a 2017 episode of “Black Mirror,” killer robot dogs prowl the planet, pursuing people the machines identify as a “threat.”

In an interview with Entertainment Weekly, Charlie Brooker, the creator of the iconic episode, discussed the reasoning behind his vision of a robopocalypse.

When asked about the inspiration for the story, the interviewer compared it to a hybrid of “Night of the Living Dead” and “Boston Dynamics videos on YouTube.”

“That’s actually scarily correct,” Brooker said. “It was from watching Boston Dynamics videos, but crossed with — have you seen the film All Is Lost? I wanted to do a story where there was almost no dialogue. And with those videos, there’s something very creepy watching them where they get knocked over, and they look sort of pathetic laying there, but then they slowly manage to get back up.”

Last week, the Los Angeles Police Department used a robot dog to resolve an armed standoff.

Remotely controlled by a member of the SWAT team, the robot approached the bus and entered through an opening, then circled the bus while transmitting real-time video feeds to officers.

The robot also spoke to the man through its speaker to convince him to give up. After about an hour and forty-five minutes, the man got off the bus and was taken into custody by the authorities.

Drones in the air are also utilizing artificial intelligence. DroneSense is a drone software platform for public safety that converts unprocessed data from drones into intelligence that emergency responders, such as police, fire, and ambulance services, can use. Multiple drone users can collaborate, review what each drone sees, and even follow a drone’s flight path in real time with the DroneSense OpsCenter.

To handle a variety of public safety scenarios, hundreds of teams have used the DroneSense public safety platform. The AI-driven application assists SWAT teams in gathering intelligence from the scene, assessing storm and tornado damage, and even utilizing thermal imagery to locate individuals who go missing.

Drones can find and identify people of interest in crowds with the help of Neurala, a deep learning neural network. It can also generate a real-time damage report and inspect large industrial pieces of machinery, like telephone towers. The startup claims that instead of the hours or days that the industry requires, their AI-powered software can identify an individual in a crowd in as little as 20 minutes.

Scale helps train drones for aerial imaging by utilizing AI and machine learning. Drones can detect, map, and identify anything from individual objects like cars to houses within a community with the help of machine learning software.

AeroVironment produces AI-powered autonomous drones for a range of military uses. The company’s aircraft range from the Switchblade, which is equipped with a precision attack payload, to a three-foot-long covert spy plane. Its unmanned aerial vehicles (UAVs) are also used in agriculture to map field areas, identify crop health issues, and evaluate irrigation challenges.

Artificial intelligence is undoubtedly a valuable tool that humanity can greatly benefit from, but it is also a dangerous weapon that has the potential to impose an oppressive regime on a population.

Artificial intelligence is the new existential threat of the Information Age, much as nuclear weapons were the threat the Cold War had to contain.

Preventing politicians from abusing AI to impose harsh regimes will require a growing public understanding of the technology’s benefits and drawbacks, as well as new institutions that guarantee accountability and transparency.
