Archive for the ‘ethics’ category

Jan 21, 2025

Mind the Anticipatory Gap: Genome Editing, Value Change and Governance

Posted by in categories: biotech/medical, ethics, governance, mobile phones

I was recently a co-author on a paper about anticipatory governance and genome editing. The lead author was Jon Rueda, and the others were Seppe Segers, Jeroen Hopster, Belén Liedo, and Samuela Marchiori. It’s available open access here on the Journal of Medical Ethics website. There is a short (900 word) summary available on the JME blog. Here’s a quick teaser for it:

Transformative emerging technologies pose a governance challenge. Back in 1980, a little-known academic at the University of Aston in the UK, named David Collingridge, identified the dilemma that has come to define this challenge: the control dilemma (also known as the ‘Collingridge Dilemma’). The dilemma states that, for any emerging technology, we face a trade-off between our knowledge of its impact and our ability to control it. Early on, we know little about it, but it is relatively easy to control. Later, as we learn more, it becomes harder to control, because technologies tend to diffuse throughout society and become embedded in social processes and institutions. Think about our recent history with smartphones. When Steve Jobs announced the iPhone back in 2007, we didn’t know just how pervasive and all-consuming this device would become. Now we do, but it is hard to put the genie back in the bottle (as some would like to do).

The field of anticipatory governance tries to address the control dilemma. It aims to carefully manage the rollout of an emerging technology so as to avoid losing control just as we learn more about the technology’s effects. Anticipatory governance has become popular in the world of responsible innovation and design. In the field of bioethics, approaches to anticipatory governance often try to anticipate future technical realities and ethical concerns, and to incorporate differing public opinions about a technology. But there is a ‘gap’ in current approaches to anticipatory governance.

Dec 27, 2024

Why ethics is becoming AI’s biggest challenge

Posted by in categories: economics, ethics, robotics/AI

Teams designing AI should include linguistics and philosophy experts, parents, young people, and everyday people with different life experiences from different socio-economic backgrounds.

Dec 27, 2024

AI And Cybersecurity: The Good, The Bad, And The Future

Posted by in categories: cybercrime/malcode, ethics, information science, robotics/AI

• Ethics: As AI gets more powerful, we need to address ethical issues such as algorithmic bias, misuse, privacy and civil liberties.

• AI Regulation: Governments and organizations will need to develop regulations and guidelines for the responsible use of AI in cybersecurity to prevent misuse and ensure accountability.

AI is a game changer in cybersecurity, for both good and bad. While AI gives defenders powerful tools to detect, prevent and respond to threats, it also equips attackers with superpowers to breach defenses. How we use AI for good and to mitigate the bad will determine the future of cybersecurity.

Dec 21, 2024

“Life Will Get Weird The Next 3 Years!” — Future of AI, Humanity & Utopia vs Dystopia | Nick Bostrom

Posted by in categories: biotech/medical, ethics, military, robotics/AI

Thank you to today’s sponsors:
Eight Sleep: Head to https://impacttheory.co/eightsleepAugust24 and use code IMPACT to get $350 off your Pod 4 Ultra.
Netsuite: Head to https://impacttheory.co/netsuiteAugust24 for Netsuite’s one-of-a-kind flexible financing program for a few more weeks!
Aura: Secure your digital life with proactive protection for your assets, identity, family, and tech – Go to https://aura.com/impacttheory to start your free two-week trial.

Welcome to Impact Theory, I’m Tom Bilyeu and in today’s episode, Nick Bostrom and I dive into the moral and societal implications of AI as it becomes increasingly advanced.

Dec 16, 2024

No, Extreme Human Longevity Won’t Destroy the Planet

Posted by in categories: ethics, life extension, robotics/AI, transhumanism

Futurism, transhumanism, bioethics, ethics, science, philosophy, artificial intelligence, personhood.

Dec 10, 2024

Time to shift from artificial intelligence to artificial integrity

Posted by in categories: ethics, law, robotics/AI

There are contexts where human cognitive and emotional intelligence takes precedence over AI, which serves a supporting role in decision-making without overriding human judgment. Here, AI “protects” human cognitive processes from pitfalls such as bias, heuristic thinking, or decision-making that activates the brain’s reward system and leads to incoherent or skewed results. In this human-first mode, artificial integrity can assist judicial processes by analyzing previous cases and their outcomes, for instance, without substituting for a judge’s moral and ethical reasoning. For this to work well, the AI system would also have to show how it arrives at its conclusions and recommendations, taking into account any cultural context or values that apply differently across regions or legal systems.

4 – Fusion Mode:

Artificial integrity in this mode is a synergy between human intelligence and AI capabilities, combining the best of both worlds. Autonomous vehicles operating in Fusion Mode would have AI managing the vehicle’s operations, such as speed, navigation, and obstacle avoidance, while human oversight, potentially through emerging technologies like Brain-Computer Interfaces (BCIs), would offer real-time input on complex ethical dilemmas. For instance, in unavoidable crash situations, a BCI could enable direct communication between the human brain and the AI, allowing ethical decision-making to occur in real time and blending AI’s precision with human moral reasoning. These kinds of advanced integrations between humans and machines will require artificial integrity at the highest level of maturity: artificial integrity would ensure not only technical excellence but also ethical robustness, guarding against any exploitation or manipulation of neural data while prioritizing human safety and autonomy.

Dec 3, 2024

Infants have no conception of morality

Posted by in category: ethics

The question of whether morality is innate has been hotly debated in developmental psychology for decades.

An international study with LMU participation now provides evidence that our moral sense is not innate.

Dec 2, 2024

The Scientific American Goes Woke + Laura Helmuth’s Resignation. By Michael Shermer

Posted by in categories: ethics, neuroscience

“An Unscientific American” discusses the resignation of Laura Helmuth from her position as editor-in-chief at Scientific American. The author, Michael Shermer, argues that her departure exemplifies the risks of blending facts with ideology in scientific communication.

Helmuth faced a backlash after posting controversial political remarks on social media, which led to her eventual resignation. Shermer reflects on how the magazine’s editorial direction has shifted towards progressive ideology, suggesting this has compromised its scientific integrity. He argues that had Helmuth made disparaging comments about liberal viewpoints, the consequences would likely have been more severe.

Nov 27, 2024

How AI Dragons Set GenAI on Fire This Year

Posted by in categories: cybercrime/malcode, ethics, robotics/AI

While LLMs are trained on massive, diverse datasets, SLMs concentrate on domain-specific data. In such cases, the data is often from within the enterprise. This makes SLMs tailored to industries or use cases, thereby ensuring both relevance and privacy.

As AI technologies expand, so do concerns about cybersecurity and ethics. The rise of unsanctioned and unmanaged AI applications within organisations, also referred to as ‘Shadow AI’, poses challenges for security leaders in safeguarding against potential vulnerabilities.

Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries. This shift is expected to bring significant operational benefits, including improved risk assessment and enhanced decision-making capabilities.

Nov 24, 2024

OpenAI is funding research into ‘AI morality’

Posted by in categories: ethics, information science, robotics/AI

One of the leading AI companies is funding academic research into algorithms that can predict humans’ moral judgements.
