Nearly four months ago, Chinese researcher He Jiankui announced that he had edited the genes of twin babies with CRISPR. CRISPR, also known as CRISPR/Cas9, can be thought of as “genetic scissors” that can be programmed to edit DNA in any cell. Last year, scientists used CRISPR to cure dogs of Duchenne muscular dystrophy. This was a huge step forward for gene therapies, suggesting that CRISPR might one day treat otherwise incurable diseases. However, a global community of scientists believes it is premature to use CRISPR in human babies because of inadequate scientific review and a lack of international consensus on the ethics of when and how this technology should be used.
What does this have to do with AI self-driving cars?
AI Self-Driving Cars Will Need to Make Life-or-Death Judgements
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect of the AI of self-driving cars is the need for the AI to make “judgments” about driving situations, ones that involve life-or-death matters.
What police would do with the information has yet to be determined. The head of West Midlands Police (WMP) told New Scientist that they won’t be preemptively arresting anyone; instead, the idea would be to use the information to provide early intervention from social or health workers, either to help keep potential offenders on the straight and narrow or to protect potential victims.
But data ethics experts have voiced concerns that the police are stepping into an ethical minefield they may not be fully prepared for. Last year, WMP asked researchers at the Alan Turing Institute’s Data Ethics Group to assess a redacted version of the proposal, and last week they released an ethics advisory in conjunction with the Independent Digital Ethics Panel for Policing.
While the authors applaud the force for attempting to develop an ethically sound and legally compliant approach to predictive policing, they warn that the ethical principles in the proposal are not developed enough to deal with the broad challenges this kind of technology could throw up, and that “frequently the details are insufficiently fleshed out and important issues are not fully recognized.”
Daily life during a pandemic means social distancing and finding new ways to remotely connect with friends, family and co-workers. And as we communicate online and by text, artificial intelligence could play a role in keeping our conversations on track, according to new Cornell University research.
Humans having difficult conversations said they trusted artificially intelligent systems (the “smart” reply suggestions in texts) more than the people they were talking to, according to a new study, “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust,” published online in the journal Computers in Human Behavior.
“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system,” said Jess Hohenstein, a doctoral student in the field of information science and the paper’s first author. “This introduces a potential to take AI and use it as a mediator in our conversations.”
Dr. Ezekiel Emanuel, an American oncologist and bioethicist, is a senior fellow at the Center for American Progress, Vice Provost for Global Initiatives at the University of Pennsylvania, and chair of its Department of Medical Ethics and Health Policy. Speaking on MSNBC on Friday, March 20, he said that Tesla and SpaceX CEO Elon Musk had told him it would probably take 8–10 weeks to get ventilator production started at Musk’s Tesla and SpaceX factories.
The mission of healthy life extension, or healthy longevity promotion, raises a broad variety of questions and tasks, relating to science and technology, individual and communal ethics, and public policy, especially health and science policy. Despite this wide variety, the related questions may be classified into three groups. The first group concerns the feasibility of accomplishing life extension. Is it theoretically and technologically possible? What are our grounds for optimism? What are the means to ensure that the life extension will be healthy life extension? The second group concerns the desirability of life extension for the individual and for society, provided it someday becomes possible through scientific intervention.
How, then, will life extension affect the perception of personhood? How will it affect the availability of resources for the population? The third and final group can be termed normative. What actions should we take? Assuming that life extension is scientifically possible and socially desirable, and that its implications are either demonstrably positive or, in the case of a negative forecast, amenable to mitigation, what practical implications should these determinations have for public policy, in particular health policy and research policy, in a democratic society? Should we pursue the goal of life extension? If so, how? How can we make it an individual and social priority? Given the rapid aging of the population and the increasing incidence and burden of age-related diseases, on the pessimistic side, and the rapid development of medical technologies, on the optimistic side, these become vital questions of social responsibility. Indeed, these questions are often asked by almost anyone thinking about the possibility of human life extension and its meaning for oneself, for the people in one’s close circle, and for the entire global community. Many of these questions are rather standard, and so are the answers often given to them. Below, some of these frequently asked questions and frequently given answers are presented, with specific reference to the possibility and desirability of healthy human life extension, and to the normative actions that individuals and society can undertake to achieve this goal.
The news did not sit well with Chinese scientists, who are still recovering from the CRISPR baby scandal. “It makes you wonder, if their reason for choosing to do this in a Chinese laboratory is because of our high-tech experimental setups, or because of loopholes in our laws?” lamented one anonymous commentator on China’s popular social media app, WeChat.
Their frustration is understandable. Earlier in April, a team from southern China came under international fire for inserting extra copies of human “intelligence-related” genes into macaque monkeys. And despite efforts to revamp its reputation in biomedical research ethics, China does have laxer rules for primate research than Western countries.
If you’re feeling icked out, you’re not alone. The morality and ethics of growing human-animal hybrids are far from clear. But creepiness aside, scientists do have two reasons for wading into these uncomfortable waters.
WUHAN (China Daily/ANN): With the public eagerly anticipating effective drugs to cure the novel coronavirus pneumonia, a medical ethics committee at the forefront of fighting the outbreak in Wuhan has quickened the pace of approving clinical trials.
Several programmes related to the diagnosis and treatment of the disease have gained ethical approval from Huazhong University of Science and Technology and are being carried out by the university, including two drugs that are under clinical trials, said Chen Jianguo, vice-president of the university.
The two drugs are remdesivir, a drug being developed by US-based pharmaceutical company Gilead, and chloroquine phosphate, which is available on the market to treat malaria.
Are we facing a golden digital age or will robots soon run the world? We need to establish ethical standards in dealing with artificial intelligence — and to answer the question: What still makes us as human beings unique?