
Robert Long – Artificial Sentience, Digital Minds

Robert Long is a research fellow at the Future of Humanity Institute. His work sits at the intersection of the philosophy of AI safety and AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever’s “slightly conscious” tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird.

Audio & transcript: https://theinsideview.ai/roblong.
Michaël: https://twitter.com/MichaelTrazzi.
Robert: https://twitter.com/rgblong.

Robert’s blog: https://experiencemachines.substack.com.

OUTLINE
00:00:00 Intro.
00:01:11 The LaMDA Controversy.
00:07:06 Defining AGI And Consciousness.
00:10:30 The Slightly Conscious Tweet.
00:13:16 Could Large Language Models Become Conscious?
00:18:03 Blake Lemoine Does Not Negotiate With Terrorists.
00:25:58 Could We Actually Test Artificial Consciousness?
00:29:33 From Metaphysics To Illusionism.
00:35:30 How We Could Decide On The Moral Patienthood Of Language Models.
00:42:00 Predictive Processing, Global Workspace Theories and Integrated Information Theory.
00:49:46 Have You Tried DMT?
00:51:13 Is Valence Just The Reward in Reinforcement Learning?
00:54:26 Are Pain And Pleasure Symmetrical?
01:04:25 From Charismatic AI Systems to Artificial Sentience.
01:15:07 Sharing The World With Digital Minds.
01:24:33 Why AI Alignment Is More Pressing Than Artificial Sentience.
01:39:48 Why Moral Personhood Could Require Memory.
01:42:41 Last Thoughts And Further Readings.

AI Ethics And The Almost Sensible Question Of Whether Humans Will Outlive AI

I have a question for you that seems to be garnering a lot of hand-wringing and heated debate these days. Are you ready? Will humans outlive AI? Mull that one over. I am going to unpack the question and examine closely the answers and how those answers have been elucidated. My primary intent is to highlight how the question itself and the surrounding discourse are inextricably rooted in AI Ethics.


A worthy question is whether humans will outlive AI, though the worthiness of the question is perhaps different from what you might think. All in all, important AI Ethics ramifications arise.

William MacAskill: ‘There are 80 trillion people yet to come. They need us to start protecting them’

All those numbers seem incalculably abstract but, according to the moral philosopher William MacAskill, they should command our attention. He is a proponent of what’s known as longtermism – the view that the deep future is something we have to address now. How long we last as a species, and what level of wellbeing we achieve, says MacAskill, may have a lot to do with the decisions we make and the actions we take now and in the foreseeable future.

That, in a nutshell, is the thesis of his new book, What We Owe the Future: A Million-Year View. The Dutch historian and writer Rutger Bregman calls the book’s publication “a monumental event”, while the US neuroscientist Sam Harris says that “no living philosopher has had a greater impact” upon his ethics.

We tend to think of moral philosophers as whiskery sages, but MacAskill is a youthful 35 and a disarmingly informal character in person, or rather on a Zoom call from San Francisco, where he is promoting the book.

How Scientists Revived Organs in Pigs an Hour After They Died

Although OrganEx helps revitalize pigs’ organs, it is far from bringing a deceased animal back to life. Rather, the organs were better protected from the low oxygen levels that occur during heart attacks or strokes.

“One could imagine that the OrganEx system (or components thereof) might be used to treat such people in an emergency,” said Porte.

The technology could also help preserve donor organs, but there’s a long way to go. To Dr. Brendan Parent, director of transplant ethics and policy research at NYU Grossman School of Medicine, OrganEx may force a rethink for the field. For example, is it possible that someone could have working peripheral organs but never regain consciousness? As medical technology develops, death becomes a process, not a moment.

Can we make the future a million years from now go better?

You can buy What We Owe the Future here: https://www.basicbooks.com/titles/william-macaskill/what-we-…541618626/

In his new book about longtermism, What We Owe the Future, the philosopher William MacAskill argues that concern for the long-term future should be a key moral priority of our time. There are three central claims that justify this view. 1. Future people matter. 2. There could be a lot of them. 3. We can make their lives go better. In this video, we focus on the third claim.

We’ve had the opportunity to read What We Owe the Future in advance thanks to the Forethought Foundation. They reached out asking if we could make a video on the occasion of the book launch. We were happy to collaborate and to help spread the ideas of longtermist philosophy as far as possible.

Interested in donating to safeguard the long-term future of humanity? You can donate to an expert managed fund at: https://www.givingwhatwecan.org/charities/longtermism-fund.


🟠 Patreon: https://www.patreon.com/rationalanimations.

AI Ethics Wary About Worsening Of AI Asymmetry Amid Humans Getting The Short End Of The Stick

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that they are computational and lack human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, no AI today has a semblance of common sense, nor any of the cognitive wonder of robust human thinking.

ML/DL is a form of computational pattern matching.
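To make the pattern-matching point concrete, here is a minimal sketch in Python of a toy nearest-neighbor classifier (the feature vectors and labels are hypothetical, chosen purely for illustration): it labels new inputs solely by their similarity to stored examples, with no grasp of what the labels mean.

    # A toy nearest-neighbor classifier: "learning" here is just storing
    # examples, and "prediction" is matching the closest stored pattern.
    from math import dist  # Euclidean distance (Python 3.8+)

    # Hypothetical training examples: (feature vector, label) pairs.
    examples = [
        ((0.9, 0.1), "cat"),
        ((0.8, 0.2), "cat"),
        ((0.1, 0.9), "dog"),
        ((0.2, 0.8), "dog"),
    ]

    def classify(point):
        """Return the label of the stored example nearest to `point`."""
        _, label = min(examples, key=lambda ex: dist(ex[0], point))
        return label

    print(classify((0.85, 0.15)))  # "cat" -- a matched pattern, not comprehension
    print(classify((0.15, 0.95)))  # "dog"

Real ML/DL systems fit far richer statistical patterns than this, but the underlying operation is the same kind of similarity-driven matching rather than human-style understanding.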


AI Asymmetry is growing larger and worsening, particularly with the advent of fully autonomous systems. Society needs to be aware of this and to consider devising remedies, such as arming more people with AI to essentially fight fire with fire.

Roadmap: AI’s next big steps in the world (BCIs, Xiaomi CyberOne, Tesla Optimus, UBI, Sam Altman…)

Read the paper: https://lifearchitect.ai/roadmap/
The Memo: https://lifearchitect.ai/memo/

Sources: See the paper above.

Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.


Music:
Under licence.

Hyundai Motor Group Launches Boston Dynamics AI Institute to Spearhead Advancements in Artificial Intelligence & Robotics

Boston Dynamics gets into AI.


SEOUL/CAMBRIDGE, MA, August 12, 2022 – Hyundai Motor Group (the Group) today announced the launch of Boston Dynamics AI Institute (the Institute), with the goal of making fundamental advances in artificial intelligence (AI), robotics and intelligent machines. The Group and Boston Dynamics will make an initial investment of more than $400 million in the new Institute, which will be led by Marc Raibert, founder of Boston Dynamics.

As a research-first organization, the Institute will work on solving the most important and difficult challenges facing the creation of advanced robots. Elite talent across AI, robotics, computing, machine learning and engineering will develop technology for robots and use it to advance their capabilities and usefulness. The Institute’s culture is designed to combine the best features of university research labs with those of corporate development labs while working in four core technical areas: cognitive AI, athletic AI, organic hardware design, and ethics and policy.

“Our mission is to create future generations of advanced robots and intelligent machines that are smarter, more agile, perceptive and safer than anything that exists today,” said Marc Raibert, executive director of the Boston Dynamics AI Institute. “The unique structure of the Institute — top talent focused on fundamental solutions with sustained funding and excellent technical support — will help us create robots that are easier to use, more productive, able to perform a wider variety of tasks, and that are safer working with people.”