As artificial intelligence takes off, how do we efficiently integrate it into our lives and our work? Bridging the gap between promise and practice, Jann Spiess, an associate professor of operations, information, and technology at Stanford Graduate School of Business, is exploring how algorithms can be designed to most effectively support—rather than replace—human decision-makers.

This research, published on the arXiv preprint server, is particularly pertinent as prediction machines are integrated into real-world applications. Mounting evidence suggests that high-stakes decisions made with AI assistance are often no better than those made without it.

From credit reports, where an overreliance on AI may lead to misinterpretation of risk scores, to content moderation, where models may depend on certain words to flag toxicity, leading to misclassifications—successful implementation lags behind the technology’s remarkable capabilities.

Long-read sequencing technologies analyze long, continuous stretches of DNA. These methods have the potential to improve researchers’ ability to detect complex genetic alterations in cancer genomes. However, the complex structure of cancer genomes means that standard analysis tools, including existing methods specifically developed to analyze long-read sequencing data, often fall short, leading to false-positive results and unreliable interpretations of the data.

These misleading results can compromise our understanding of how tumors evolve and respond to treatment, and ultimately how patients are diagnosed and treated.

To address this challenge, researchers developed SAVANA, a new algorithm which they describe in the journal Nature Methods.

The advancement of artificial intelligence (AI) and the study of neurobiological processes are deeply interlinked, as a deeper understanding of the former can yield valuable insights into the latter, and vice versa. Recent neuroscience studies have found that mental state transitions, such as the transition from wakefulness to slow-wave sleep and then to rapid eye movement (REM) sleep, modulate temporary interactions in a class of neurons known as layer 5 pyramidal two-point neurons (TPNs), aligning them with a person’s mental states.

These are interactions between information originating from the external world, broadly referred to as the receptive field (RF1), and inputs emerging from internal states, referred to as the contextual field (CF2). Past findings suggest that RF1 and CF2 inputs are processed at two distinct sites within the neurons, known as the basal site and apical site, respectively.
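To make the two-site picture concrete, here is a minimal sketch of one common way such two-point processing is formalized: the basal (RF1) drive determines what the neuron signals, while the apical (CF2) drive modulates how strongly it signals it. The weights, input sizes, and sigmoid gain below are illustrative assumptions, not the model used in the studies described here.

```python
import numpy as np

def two_point_neuron(rf_input, cf_input, w_basal, w_apical):
    """Toy two-point unit: basal (RF) input sets the feedforward drive,
    apical (CF) input scales how strongly that drive is expressed."""
    basal = np.dot(w_basal, rf_input)    # RF processed at the basal site
    apical = np.dot(w_apical, cf_input)  # CF processed at the apical site
    # Context amplifies or attenuates the basal drive rather than
    # creating output on its own (sigmoid gain between 0 and 1).
    gain = 1.0 / (1.0 + np.exp(-apical))
    return basal * gain

rng = np.random.default_rng(0)
rf = rng.normal(size=8)                  # external, receptive-field input
cf = rng.normal(size=8)                  # internal, contextual input
print(two_point_neuron(rf, cf, rng.normal(size=8), rng.normal(size=8)))
```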

Current AI algorithms employing attention mechanisms, such as transformers, Perceiver, and Flamingo models, are inspired by the capabilities of the human brain. In their current form, however, they do not reliably emulate high-level perceptual processing and the imaginative states experienced by humans.
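For reference, the core operation behind the transformer-style models mentioned above is scaled dot-product attention; the short sketch below applies that standard operation to random matrices, whose shapes and contents are arbitrary placeholders.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query mixes the values V according to how well it matches
    the keys K (softmax of scaled dot products)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(4, 16)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 16)
```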

A team of researchers at Google Quantum AI, led by Craig Gidney, has outlined advances in quantum computer algorithms and error correction methods that could allow such computers to crack Rivest–Shamir–Adleman (RSA) encryption keys with far fewer resources than previously thought. The development, the team notes, suggests encryption experts need to begin work toward developing next-generation encryption techniques. The paper is published on the arXiv preprint server.

RSA is an encryption technique developed in the late 1970s that involves generating public and private keys; the former is used for encryption and the latter for decryption. Current standards call for using a 2,048-bit encryption key. Over the past several years, research has suggested that quantum computers would one day be able to crack RSA encryption, but because quantum development has been slow, researchers believed that it would be many years before it came to pass.
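As a rough illustration of the public/private key relationship described above, the toy example below uses deliberately tiny primes; real deployments use 2,048-bit moduli, and the quantum threat is precisely that factoring the public modulus would reveal the private exponent.

```python
# Toy RSA with tiny primes, purely to illustrate how the key pair works;
# real keys use 2,048-bit moduli that are infeasible to factor classically.
p, q = 61, 53                      # secret primes
n = p * q                          # public modulus (3233)
phi = (p - 1) * (q - 1)            # Euler's totient of n
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message
print(ciphertext, recovered)
```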

Some in the field have accepted a theory that a quantum computer capable of cracking such codes in a reasonable amount of time would have to have at least 20 million qubits. In this new work, the team at Google suggests it could theoretically be done with as few as a million qubits—and it could be done in a week.

A research team from the University of South China has developed a set of algorithms to help optimize radiation-shielding design for new types of nuclear reactors.

Their achievement, published in the journal Nuclear Science and Techniques and shared by TechXplore, will help engineers meet the demanding requirements of next-generation reactors, including transportable models as well as those intended for marine and space environments.

Safety is of paramount concern when it comes to nuclear energy, especially considering the public’s perception of this clean energy source following some notable accidents over the past 68 years.

Machine-learning algorithms can now estimate the “brain age” of infants with unprecedented precision by analyzing electrical brain signals recorded using electroencephalography (EEG).

A team led by Sarah Lippé at Université de Montréal’s Department of Psychology has developed a method that can determine in minutes whether a baby’s brain development is advanced, delayed or in line with their chronological age.

This breakthrough promises to enable early screening and personalized monitoring of developmental disorders in babies.
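The article does not spell out the team’s EEG pipeline, but the general brain-age idea can be sketched as a regression problem: predict age from EEG-derived features, then read the gap between predicted and chronological age. The synthetic features and ridge model below are assumptions for illustration only, not the method developed by Lippé’s team.

```python
# Minimal brain-age sketch: regress age from EEG-derived features and
# interpret the gap between predicted and chronological age.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_babies, n_features = 200, 32            # e.g., spectral power per channel/band
X = rng.normal(size=(n_babies, n_features))   # placeholder EEG features
age_months = rng.uniform(1, 24, size=n_babies)

model = Ridge(alpha=1.0).fit(X, age_months)
predicted = model.predict(X)
brain_age_gap = predicted - age_months    # >0 suggests "advanced", <0 "delayed"
print(brain_age_gap[:5])
```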

Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, whether a given action will result in a positive (that is, rewarding) outcome. The study of how organisms learn from experience to correctly anticipate rewards has been a productive research field for well over a century, since Ivan Pavlov’s seminal psychological work. In his most famous experiment, dogs were trained to expect food some time after a buzzer sounded. These dogs began salivating as soon as they heard the sound, before the food had arrived, indicating they’d learned to predict the reward. In the original experiment, Pavlov estimated the dogs’ anticipation by measuring the volume of saliva they produced. But in recent decades, scientists have begun to decipher the inner workings of how the brain learns these expectations.

Meanwhile, in close contact with this study of reward learning in animals, computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.

The contribution of our new work, published in Nature, is to show that a recent development in computer science, one that yields significant improvements in performance on reinforcement learning problems, may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain. It also opens up new avenues of research into the brain’s dopamine system, with potential implications for learning and motivation disorders.

Reinforcement learning is one of the oldest and most powerful ideas linking neuroscience and AI. In the late 1980s, computer science researchers were trying to develop algorithms that could learn how to perform complex behaviours on their own, using only rewards and punishments as a teaching signal. These rewards would serve to reinforce whatever behaviours led to their acquisition. To solve a given problem, it’s necessary to understand how current actions result in future rewards. For example, a student might learn by reinforcement that studying for an exam leads to better scores on tests. In order to predict the total future reward that will result from an action, it’s often necessary to reason many steps into the future.
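A standard way to formalize this multi-step reward prediction is temporal-difference (TD) learning, in which value estimates are nudged by prediction errors. The tiny three-state chain and parameters below are illustrative assumptions, not the specific model from the paper.

```python
# Temporal-difference learning on a three-state chain: reward arrives only
# at the end (like food after the buzzer), yet earlier states gradually
# come to predict it.
import numpy as np

n_states, alpha, gamma = 3, 0.1, 0.9   # states, learning rate, discount
V = np.zeros(n_states)                 # value estimates: predicted future reward

for episode in range(500):
    for s in range(n_states):
        reward = 1.0 if s == n_states - 1 else 0.0
        v_next = 0.0 if s == n_states - 1 else V[s + 1]
        # Prediction error: what happened minus what was expected.
        td_error = reward + gamma * v_next - V[s]
        V[s] += alpha * td_error       # nudge the prediction toward reality

print(V)  # earlier states converge to the discounted future reward
```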

Kirigami is a traditional Japanese art form that entails cutting and folding paper to produce complex three-dimensional (3D) structures or objects. Over the past decades, this creative practice has also been applied in the context of physics, engineering, and materials science research to create new materials, devices and even robotic systems.

Researchers at Sichuan University and McGill University recently devised a new approach for the inverse engineering of kirigami, which does not rely on advanced computational tools and numerical algorithms. This new method, outlined in a paper published in Physical Review Letters, could simplify the design of intricate kirigami for a wide range of real-world applications.

“This work is a natural extension of our previous work on kirigami,” Damiano Pasini, senior corresponding author of the paper, told Phys.org.

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle’s gambling habits.

Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
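As a small, self-contained example of the numerical-integration use case, the sketch below estimates π by sampling random points in the unit square; the sample count is an arbitrary choice, and the error shrinks roughly as 1/√n.

```python
# Monte Carlo estimate of pi: the fraction of random points landing inside
# the quarter circle approximates its area, pi/4.
import random

def estimate_pi(n_samples: int) -> float:
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:        # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))
```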

Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.