Archive for the ‘policy’ category: Page 73

Apr 28, 2016

Trump acknowledges the power of 3D Printing

Posted by in categories: 3D printing, geopolitics, military, policy, robotics/AI

Don’t kill the messenger; I’m just sharing.


Yesterday Trump acknowledged the power of technology to help the USA in his future plans.

In a major foreign policy speech yesterday, Republican presidential candidate Donald Trump said the U.S. needs to make better use of “3D printing, artificial intelligence, and cyberwarfare.”

Continue reading “Trump acknowledges the power of 3D Printing” »

Apr 27, 2016

If You Care About the Earth, Vote for the Least Religious Presidential Candidate

Posted by in categories: energy, existential risks, genetics, geopolitics, policy, transportation

My new Vice Motherboard article on environmentalism and why going green isn’t enough. Only radical technology can restore the world to a pristine condition—and that requires politicians not afraid of the future:


I’m worried that conservatives like Cruz will try to stop new technologies that could change our battle to restore a degrading Earth.

But there are people who can save the endangered species on the planet. And they will soon dramatically change the nature of animal protection. Those people may have little to do with wildlife, but their genetics work holds the answer to stable animal population levels in the wild. In as little as five years, we may begin stocking endangered wildlife in places where poachers have hunted animals to extinction. We’ll do this like we stock trout streams in America. Why spend resources in a losing battle to save endangered wildlife from being poached when you can spend the same amount to boost animal population levels ten-fold? Maybe even 100-fold. This type of thinking is especially important in our oceans, which we’ve bloody well fished to near death.

Continue reading “If You Care About the Earth, Vote for the Least Religious Presidential Candidate” »

Apr 23, 2016

Regulating Drone Airspace Using ‘Smart Markets’

Posted by in categories: drones, policy, robotics/AI

With commercially operated autonomous drones potentially on the horizon, a policy problem is likely to emerge: allocation of scarce airspace and preferred flight paths. “Smart markets” could help.

Read more

Apr 21, 2016

What Should the World Do With Its Nuclear Weapons? — By Joseph Cirincione | The Atlantic

Posted by in categories: geopolitics, governance, government, nuclear weapons, policy, weapons

“At the possible brink of a new nuclear arms race, questions answered during the Cold War will need to be reopened.”

Read more

Apr 21, 2016

Post-Paris: Taking Forward the Global Climate Change Deal | Chatham House

Posted by in categories: environmental, geopolitics, governance, government, law, policy, science, sustainability, treaties

“Inevitably, the compromises of the Paris Agreement make it both a huge achievement and an imperfect solution to the problem of global climate change.”

Read more

Apr 7, 2016

LIU Adds Cyber Extortion Endorsement to Product Recall Policies

Posted by in categories: cybercrime/malcode, policy, robotics/AI

Hmmmm;


Liberty International Underwriters (LIU), part of Liberty Mutual Insurance, has launched a cyber extortion endorsement to its Product Recall and Contamination insurance policy for food and beverage companies.

This endorsement offers coverage to food and beverage policyholders for cyber extortion monies and consultant costs up to the policy sub-limit for acts against production and day-to-day operations.

Continue reading “LIU Adds Cyber Extortion Endorsement to Product Recall Policies” »

Mar 18, 2016

Who’s Afraid of Existential Risk? Or, Why It’s Time to Bring the Cold War out of the Cold

Posted by in categories: defense, disruptive technology, economics, existential risks, governance, innovation, military, philosophy, policy, robotics/AI, strategy, theory, transhumanism

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid — to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focusses on how we might prevent, if not predict, any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to put a halt to artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation — rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

Continue reading “Who's Afraid of Existential Risk? Or, Why It's Time to Bring the Cold War out of the Cold” »

Mar 16, 2016

The 21st century Star Wars — By Dr Patricia Lewis | The World Today

Posted by in categories: governance, government, law, policy, satellites, security, space, transparency, treaties, weapons

“Modern life relies on satellite systems but they are alarmingly vulnerable to attack as they orbit the Earth. Patricia Lewis explains why defending them from hostile forces is now a primary concern for states.”

Read more

Mar 1, 2016

Autonomous Killing Machines Are More Dangerous Than We Think

Posted by in categories: cybercrime/malcode, drones, ethics, law, military, policy, robotics/AI

I see articles and reports like the following about the military actually considering fully autonomous missiles, drones with missiles, etc., and I have to ask myself what happened to logical thinking.


A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”

The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work at the office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets of their own choosing, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”

Continue reading “Autonomous Killing Machines Are More Dangerous Than We Think” »

Feb 28, 2016

Report Cites Dangers of Autonomous Weapons

Posted by in categories: cybercrime/malcode, military, policy, robotics/AI

I agree 100% with this report by a former Pentagon official on AI systems involving missiles.


A new report written by a former Pentagon official who helped establish United States policy on autonomous weapons argues that such weapons could be uncontrollable in real-world environments where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries.

In recent years, low-cost sensors and new artificial intelligence technologies have made it increasingly practical to design weapons systems that make killing decisions without human intervention. The specter of so-called killer robots has touched off an international protest movement and a debate within the United Nations about limiting the development and deployment of such systems.

Continue reading “Report Cites Dangers of Autonomous Weapons” »

Page 73 of 93