Archive for the ‘terrorism’ category: Page 4

Oct 9, 2021

AI Weekly: EU facial recognition ban highlights need for U.S. legislation

Posted by in categories: food, government, information science, law enforcement, privacy, robotics/AI, security, terrorism

This week, the European Parliament, the body responsible for adopting European Union (EU) legislation, passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places. The resolution, which also proposes a moratorium on the deployment of predictive policing software, would restrict the use of remote biometric identification unless it’s to fight “serious” crime, such as kidnapping and terrorism.

The approach stands in contrast to that of U.S. agencies, which continue to embrace facial recognition even in light of studies showing the potential for ethnic, racial, and gender bias. A recent report from the U.S. Government Accountability Office found that 10 branches including the Departments of Agriculture, Commerce, Defense, and Homeland Security plan to expand their use of facial recognition between 2020 and 2023 as they implement as many as 17 different facial recognition systems.

Commercial face-analyzing systems have been critiqued by scholars and activists alike throughout the past decade, if not longer. The technology and techniques — everything from sepia-tinged film to low-contrast digital cameras — often favor lighter skin, encoding racial bias in algorithms. Indeed, independent benchmarks of vendors’ systems by the Gender Shades project and others have revealed that facial recognition technologies are susceptible to a range of prejudices exacerbated by misuse in the field. For example, a report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects.

Oct 1, 2021

5 Ways to Save Humanity from a Malicious AI

Posted by in categories: biotech/medical, robotics/AI, terrorism, transhumanism

Describing one scenario, an AI could act as a quiet bioterrorist. Since we have become heavily dependent on computers and computer-dependent systems are by default dispersed and impersonal, the task of eradicating humanity could be divvied up to gazillions of little micro-players, who may or may not know what they are doing. “For example, the AI could pay employees in a DNA-synthesizing company to do small jobs and then combine their work into creating a biological weapon, such as a virus able to kill a lot of people,” says Turchin.

“How are you certain you are not in a multilevel simulation, where your behavior is being monitored by humans who are testing your benevolence levels?”

Turchin addresses this in one of his letters to the young, future humanity-killing AI. “There are several instrumental reasons why exterminating humanity is not in your interest,” the Russian transhumanist writes. “Cooperating with humans will only increase your chances of accomplishing any goal you may have.”

Jul 27, 2021

Militaries are plundering sci-fi for technology ideas

Posted by in categories: cyborgs, military, terrorism

But while science fiction provides military planners with a tantalizing glimpse of future weaponry, from exoskeletons to mind-machine interfaces, the genre is always about more than flashy new gadgets. It’s about anticipating the unforeseen ways in which these technologies could affect humans and society – and this extra context is often overlooked by the officials deciding which technologies to invest in for future conflicts.

Imagined worlds

May 30, 2021

Israel’s operation against Hamas was the world’s first AI war

Posted by in categories: military, robotics/AI, supercomputing, terrorism

The Israeli military is calling Operation Guardian of the Walls the first artificial-intelligence war. The IDF established an advanced AI technological platform that centralized all data on terrorist groups in the Gaza Strip in one system, enabling the analysis and extraction of intelligence.

The IDF used artificial intelligence and supercomputing during the last conflict with Hamas in the Gaza Strip.

May 30, 2021

A rogue killer drone ‘hunted down’ a human target without being instructed to, UN report says

Posted by in categories: drones, government, military, robotics/AI, terrorism

Oh, joy. You can take the drone out of 2020, but you can’t take the 2020 out of the drone.


A “lethal” weaponized drone “hunted down a human target” without being told to for the first time, according to a UN report seen by the New Scientist.

The March 2020 incident saw a KARGU-2 quadcopter autonomously attack a human during a conflict between Libyan government forces and a breakaway military faction, led by the Libyan National Army’s Khalifa Haftar, the Daily Star reported.

May 17, 2021

New NTAS Bulletin Warns of ‘Broader’ Terror Targets as COVID Restrictions Ease

Posted by in categories: security, terrorism

Warning that the homeland is facing threats that have evolved significantly and become increasingly complex and volatile in 2021, the Department of Homeland Security issued a new National Terrorism Advisory System (NTAS) Bulletin.

May 14, 2021

Dr. Natasha Bajema — Dir., Converging Risks Lab, Council on Strategic Risks — WMD Threat Reduction

Posted by in categories: biological, chemistry, cyborgs, policy, security, terrorism, transhumanism

Nuclear Nonproliferation, Cooperative Threat Reduction and WMD Terrorism — Dr. Natasha Bajema, Director, Converging Risks Lab, The Council on Strategic Risks.


Dr. Natasha Bajema is a subject matter expert in nuclear nonproliferation, cooperative threat reduction, and WMD terrorism. She currently serves as Director of the Converging Risks Lab at The Council on Strategic Risks, a nonprofit, non-partisan security policy institute devoted to anticipating, analyzing, and addressing core systemic risks to security in the 21st century, with special attention to the ways in which these risks intersect and exacerbate one another.

May 10, 2021

The Pentagon Inches Toward Letting AI Control Weapons

Posted by in categories: drones, military, robotics/AI, terrorism

Last August, several dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.

So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.

The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots.

May 3, 2021

West Africa is the Latest Testing Ground for US Military Artificial Intelligence

Posted by in categories: military, robotics/AI, terrorism

In its preparation for great power competition, the US military is modernizing its artificial intelligence and machine learning techniques and testing them in West Africa.

by Scott Timcke

Apr 24, 2021

Making Sense Podcast Special Episode: Engineering the Apocalypse

Posted by in categories: bioengineering, biological, biotech/medical, existential risks, finance, media & arts, robotics/AI, terrorism

In this nearly 4-hour SPECIAL EPISODE, Rob Reid delivers a 100-minute monologue (broken up into 4 segments and interleaved with discussions with Sam) about the looming danger of a man-made pandemic caused by an artificially modified pathogen. The risk of this occurring is far higher and nearer-term than almost anyone realizes.

Rob explains the science and motivations that could produce such a catastrophe and explores the steps that society must start taking today to prevent it. These measures are concrete, affordable, and scientifically fascinating—and almost all of them are applicable to future, natural pandemics as well. So if we take most of them, the odds of a future Covid-like outbreak would plummet—a priceless collateral benefit.

Rob Reid is a podcaster, author, and tech investor, and was a long-time tech entrepreneur. His After On podcast features conversations with world-class thinkers, founders, and scientists on topics including synthetic biology, super-AI risk, Fermi’s paradox, robotics, archaeology, and lone-wolf terrorism. Science fiction novels that Rob has written for Random House include The New York Times bestseller Year Zero and the AI thriller After On. As an investor, Rob is Managing Director at Resilience Reserve, a multi-phase venture capital fund. He co-founded Resilience with Chris Anderson, who runs the TED Conference and has a long track record as both an entrepreneur and an investor. In his own entrepreneurial career, Rob founded and ran Listen.com, the company that created the Rhapsody music service. Earlier, Rob studied Arabic and geopolitics at both undergraduate and graduate levels at Stanford, and was a Fulbright Fellow in Cairo. You can find him at www.after-on.

Page 4 of 14