
The term ‘ethical AI’ is finally starting to mean something

Since OpenAI first described its new AI language-generating system called GPT-3 in May, hundreds of media outlets (including MIT Technology Review) have written about the system and its capabilities. Twitter has been abuzz about its power and potential. The New York Times published an op-ed about it. Later this year, OpenAI will begin charging companies for access to GPT-3, hoping that its system can soon power a wide variety of AI products and services.


Earlier this year, the Ada Lovelace Institute, the independent, London-based research organisation I direct, hosted a panel at the world’s largest AI conference, CogX, called The Ethics Panel to End All Ethics Panels. The title was both a tongue-in-cheek bit of self-promotion and a nod to a very real need: to put to bed the seemingly endless stream of panels, think-pieces, and government reports preoccupied with ruminating on the abstract ethical questions posed by AI and new data-driven technologies. We had grown impatient with conceptual debates and high-level principles.

And we were not alone. 2020 has seen the emergence of a new wave of ethical AI – one focused on the tough questions of power, equity, and justice that underpin emerging technologies, and directed at bringing about actionable change. It supersedes the two waves that came before it: the first wave, defined by principles and dominated by philosophers, and the second wave, led by computer scientists and geared towards technical fixes. Third-wave ethical AI has seen a Dutch Court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically-decided exam results, and US companies voluntarily restrict their sales of facial recognition technology. It is taking us beyond the principled and the technical, to practical mechanisms for rectifying power imbalances and achieving individual and societal justice.

Between 2016 and 2019, 74 sets of ethical principles or guidelines for AI were published. This was the first wave of ethical AI, in which we had just begun to understand the potential risks and threats of rapidly advancing machine learning and AI capabilities and were casting around for ways to contain them. In 2016, AlphaGo had just beaten Lee Sedol, prompting serious consideration of the likelihood that general AI was within reach. And algorithmically-curated chaos on the world’s duopolistic platforms, Google and Facebook, had surrounded the two major political earthquakes of the year – Brexit and Trump’s election.

Facebook is training robot assistants to hear as well as see

In June 2019, Facebook’s AI lab, FAIR, released AI Habitat, a new simulation platform for training AI agents. It allowed agents to explore various realistic virtual environments, like a furnished apartment or cubicle-filled office. The AI could then be ported into a robot, which would gain the smarts to navigate through the real world without crashing.

In the year since, FAIR has rapidly pushed the boundaries of its work on “embodied AI.” In a blog post today, the lab announced three more milestones: two new algorithms that allow an agent to quickly create and remember a map of the spaces it navigates, and the addition of sound to the platform so that agents can be trained to hear.
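Neither the Habitat code nor FAIR’s new algorithms are reproduced in the post, but the mapping idea is easy to illustrate. Below is a minimal, hypothetical sketch in Python: a toy agent wanders a grid world, senses the cells around it, and accumulates what it has seen into a persistent occupancy map. The grid world, sensing radius, and random-walk policy are all invented for illustration and are not FAIR’s method.

```python
# Illustrative sketch only: not FAIR's Habitat code or its mapping algorithms.
# A toy agent incrementally builds an occupancy map of a 2D grid world as it
# moves, which is the core idea behind "create and remember a map of the
# spaces it navigates".
import numpy as np

GRID = 20                      # size of the hypothetical environment, in cells
UNKNOWN, FREE, WALL = -1, 0, 1

# Ground-truth world the agent cannot see directly: border walls only.
world = np.zeros((GRID, GRID), dtype=int)
world[0, :] = world[-1, :] = world[:, 0] = world[:, -1] = WALL

# The agent's own map starts entirely unknown.
agent_map = np.full((GRID, GRID), UNKNOWN, dtype=int)

def sense_and_update(pos, agent_map, world, radius=2):
    """Mark every cell within `radius` of the agent with its true occupancy."""
    x, y = pos
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < GRID and 0 <= ny < GRID:
                agent_map[nx, ny] = world[nx, ny]

def random_walk_step(pos, agent_map, rng):
    """Move one cell in a random direction, avoiding cells already mapped as walls."""
    x, y = pos
    moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    moves = [(nx, ny) for nx, ny in moves
             if 0 <= nx < GRID and 0 <= ny < GRID and agent_map[nx, ny] != WALL]
    return moves[rng.integers(len(moves))] if moves else pos

rng = np.random.default_rng(0)
pos = (GRID // 2, GRID // 2)
for _ in range(500):
    sense_and_update(pos, agent_map, world)
    pos = random_walk_step(pos, agent_map, rng)

explored = np.mean(agent_map != UNKNOWN)
print(f"fraction of environment mapped: {explored:.2%}")
```

Because the map persists across steps, the agent “remembers” where walls were and stops bumping into them; the real systems do this with learned policies and far richer sensory input.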

Artificial Intelligence Defeats Human F-16 Pilot In Virtual Dogfight

The plan in the next big war will probably be to let waves of AI fighters wipe out the enemy’s targets (anti-aircraft systems, enemy fighters, enemy airfields, etc.), in however many waves that takes, and then send human pilots in behind them.


An artificial intelligence algorithm defeated a human F-16 fighter pilot in a virtual dogfight sponsored by the Defense Advanced Research Projects Agency Thursday.

AI automatic tuning delivers step forward in quantum computing

Researchers at Oxford University, in collaboration with DeepMind, the University of Basel and Lancaster University, have created a machine learning algorithm that interfaces with a quantum device and ‘tunes’ it faster than human experts, without any human input. They have dubbed it a “Minecraft explorer for quantum devices.”

Classical computers are composed of billions of transistors, which together can perform complex calculations. Small imperfections in these transistors arise during manufacturing, but do not usually affect the operation of the computer. However, in a quantum computer similar imperfections can strongly affect its behavior.

In prototype semiconductor quantum computers, the standard way to correct these imperfections is by adjusting input voltages to cancel them out. This process is known as tuning. However, identifying the right combination of voltage adjustments takes a lot of time even for a single quantum device, which makes it virtually impossible for the billions of devices required to build a useful general-purpose quantum computer.
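The Oxford/DeepMind algorithm itself is not described here, but the tuning task can be sketched as an automated search over gate voltages. The toy model below is entirely hypothetical – the device response, the hidden offsets, and the simple hill-climbing search are stand-ins, not the published method – but it shows the shape of the problem: a machine compensating for unknown fabrication imperfections without human input.

```python
# Minimal sketch of tuning as an automated search problem; this is NOT the
# Oxford/DeepMind algorithm, just an illustration of compensating hidden
# device imperfections by adjusting gate voltages.
import numpy as np

rng = np.random.default_rng(1)
true_offsets = rng.normal(0.0, 0.2, size=3)   # hypothetical fabrication imperfections (volts)

def device_response(voltages):
    """Toy model: the device performs best when voltages cancel the hidden offsets."""
    return -np.sum((voltages + true_offsets) ** 2)

def auto_tune(n_iters=200, step=0.05):
    """Simple hill-climbing over gate voltages, with no human in the loop."""
    v = np.zeros(3)
    best = device_response(v)
    for _ in range(n_iters):
        candidate = v + rng.normal(0.0, step, size=3)
        score = device_response(candidate)
        if score > best:          # keep an adjustment only if the device improves
            v, best = candidate, score
    return v, best

tuned_v, score = auto_tune()
print("compensating voltages:", np.round(tuned_v, 3))
print("hidden offsets:       ", np.round(-true_offsets, 3))
```

A real device offers only noisy, expensive measurements rather than a clean objective function, which is why a learned explorer that decides where to measure next beats brute-force search.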

From sociology of quantification to ethics of quantification

Quantifications are produced by several disciplinary ‘houses’ in a myriad of different styles. Concerns about the unethical use of algorithms and the unintended consequences of metrics, as well as warnings about statistical and mathematical malpractice, are all part of a general malaise, symptoms of our addiction to quantification. What problems are shared by all these instances of quantification? After reviewing existing concerns across different domains, this perspective article illustrates the need for, and the urgency of, an encompassing ethics of quantification. The difficulties of disciplining the existing regime of numerification are addressed, and obstacles and lock-ins are identified. Finally, policy indications for different actors are suggested.

Future mental health care may include diagnosis via brain scan and computer algorithm

Most of modern medicine has physical tests or objective techniques to define much of what ails us. Yet there is currently no blood test, genetic test, or other impartial procedure that can definitively diagnose a mental illness, and certainly none that can distinguish between different psychiatric disorders with similar symptoms. Experts at the University of Tokyo are combining machine learning with brain imaging tools to redefine the standard for diagnosing mental illnesses.

“Psychiatrists, including me, often talk about symptoms and behaviors with patients and their teachers, friends and parents. We only meet patients in the hospital or clinic, not out in their daily lives. We have to make medical conclusions using subjective, secondhand information,” explained Dr. Shinsuke Koike, M.D., Ph.D., an associate professor at the University of Tokyo and a senior author of the study recently published in Translational Psychiatry.

“Frankly, we need objective measures,” said Koike.
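The excerpt does not spell out the Tokyo team’s pipeline, but the general recipe of “machine learning plus brain imaging” looks roughly like the hypothetical sketch below: imaging-derived features for two diagnostic groups are fed to a classifier and evaluated by cross-validation. The data are synthetic and the feature count is invented; only the workflow is meant to be illustrative, not the study’s actual method.

```python
# Hypothetical sketch of the general approach (not the Tokyo team's pipeline):
# train a classifier on brain-imaging-derived features to separate diagnostic
# groups, using synthetic data in place of real MRI measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_per_group, n_features = 100, 50      # e.g. 50 regional volume/thickness measures (invented)

# Two simulated groups whose feature distributions differ slightly.
group_a = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
group_b = rng.normal(0.3, 1.0, size=(n_per_group, n_features))
X = np.vstack([group_a, group_b])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Standardize features, fit a linear SVM, and report cross-validated accuracy,
# which is how such classifiers are typically evaluated as diagnostic aids.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

The hard part in practice is not fitting the model but showing that its accuracy holds up on scans collected at other hospitals and scanners, which is what an “objective measure” would ultimately require.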

Gearing for the 20/20 Vision of Our Cybernetic Future — The Syntellect Hypothesis, Expanded Edition | Press Release

“A neuron in the human brain can never equate the human mind, but this analogy doesn’t hold true for a digital mind, by virtue of its mathematical structure, it may – through evolutionary progression and provided there are no insurmountable evolvability constraints – transcend to the higher-order Syntellect. A mind is a web of patterns fully integrated as a coherent intelligent system; it is a self-generating, self-reflective, self-governing network of sentient components… that evolves, as a rule, by propagating through dimensionality and ascension to ever-higher hierarchical levels of emergent complexity. In this book, the Syntellect emergence is hypothesized to be the next meta-system transition, developmental stage for the human mind – becoming one global mind – that would constitute the quintessence of the looming Cybernetic Singularity.” –Alex M. Vikoulov, The Syntellect Hypothesis https://www.ecstadelic.net/e_news/gearing-for-the-2020-visio…ss-release

#SyntellectHypothesis


Ecstadelic Media Group releases the new 2020 expanded edition of The Syntellect Hypothesis: Five Paradigms of the Mind’s Evolution by Alex M. Vikoulov as eBook and Paperback (Press Release, San Francisco, CA, USA, January 15, 2020 10.20 AM PST)


Named “The Book of the Year” by futurists and academics alike in 2019 and maintaining high rankings in Amazon charts in Cybernetics, Physics of Time, Phenomenology, and Phenomenological Philosophy, it has now been released as The 2020 Expanded New Deluxe Edition (2020e) in eBook and paperback versions. In one volume, the author covers it all: from quantum physics to your experiential reality, from the Big Bang to the Omega Point, from the ‘flow state’ to psychedelics, from ‘Lucy’ to the looming Cybernetic Singularity, from natural algorithms to the operating system of your mind, from geo-engineering to nanotechnology, from anti-aging to immortality technologies, from oligopoly capitalism to Star-Trekonomics, from the Matrix to Universal Mind, from Homo sapiens to Holo syntellectus.

New Algorithm Paves the Way Towards Error-Free Quantum Computing

To keep these calculations from becoming intractable, the researchers came up with several shortcuts and simplifications that focus on the most important interactions, while still providing a result precise enough to be practically useful.

To test their approach, they put it to work on a 14-qubit IBM quantum computer accessed via the company’s IBM Quantum Experience service. They were able to visualize correlations between all pairs of qubits and even uncovered long-range interactions between qubits that had not been previously detected and will be crucial for creating error-corrected devices.
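The paper’s method is not detailed in this excerpt, so as a purely illustrative stand-in, the sketch below estimates a pairwise error-correlation matrix from simulated measurement records: the kind of “correlations between all pairs of qubits” picture described above. The error rates and the shared noise source coupling two particular qubits are invented for the example.

```python
# Illustrative sketch only (not the paper's algorithm): estimate pairwise
# correlations between qubit error events from a batch of measurement records.
import numpy as np

rng = np.random.default_rng(7)
n_qubits, n_shots = 14, 10_000

# Simulated error events: independent flips plus a shared noise source that
# couples qubits 2 and 9, standing in for a long-range correlated process.
independent = rng.random((n_shots, n_qubits)) < 0.05
shared = (rng.random(n_shots) < 0.03)[:, None] & np.isin(
    np.arange(n_qubits), [2, 9])[None, :]
errors = (independent | shared).astype(float)

# Pairwise correlation matrix of error events across all qubit pairs.
corr = np.corrcoef(errors, rowvar=False)
np.fill_diagonal(corr, 0.0)

i, j = np.unravel_index(np.argmax(corr), corr.shape)
print(f"most correlated pair: qubits {i} and {j} (corr = {corr[i, j]:.2f})")
```

A correlation matrix like this is what an error-correction designer would inspect to decide which noise sources must be modelled jointly rather than independently.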

They also used simulations to show that they could apply the algorithm to a quantum computer as large as 100 qubits without the calculations becoming intractable. As well as helping to devise error-correction protocols that cancel out the effects of noise, the researchers say, their approach could be used as a diagnostic tool to uncover the microscopic origins of noise.