
The more data collected, the better the results.


Understanding the genetics of complex diseases, especially those related to the genetic differences among ethnic groups, is essentially a big data problem. And researchers need more data.

1,000,000 genomes

To address the need for more data, the National Institutes of Health has started a program called All of Us. The project aims to collect genetic information, medical records and health habits from surveys and wearables of more than a million people in the U.S. over the course of 10 years. It also has a goal of gathering more data from underrepresented minority groups to facilitate the study of health disparities. The All of Us project opened to public enrollment in 2018, and more than 270,000 people have contributed samples since. The project is continuing to recruit participants from all 50 states, and many academic laboratories and private companies are taking part in the effort.

Developing Next Generation Artificial Intelligence To Serve Humanity — Dr. Patrick Bangert, Vice President of AI, Samsung SDS.


Dr. Patrick D. Bangert is Vice President of AI and heads the AI Engineering and AI Sciences teams at Samsung SDS. Samsung SDS, a subsidiary of the Samsung Group, provides information technology (IT) services and is active in research and development of emerging IT technologies such as artificial intelligence (AI), blockchain, the Internet of Things (IoT) and engineering outsourcing.

Dr. Bangert is responsible for the Brightics AI Accelerator, a distributed ML training and automated ML product, and for X.insights, a data center intelligence platform.

Among his other responsibilities, Dr. Bangert acts as a visionary for the future of AI at Samsung.

Before joining Samsung, Dr. Bangert spent 15 years as CEO at Algorithmica Technologies, a machine learning software company serving the chemicals and oil and gas industries. Prior to that, he was assistant professor of applied mathematics at Jacobs University in Germany, as well as a researcher at Los Alamos National Laboratory and NASA’s Jet Propulsion Laboratory.

Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that’s driving the push to automate some of the easier tasks that take up their time.

Productivity tools like Eclipse and Visual Studio suggest snippets of code that developers can easily drop into their work as they write. These automated features are powered by sophisticated language models that have learned to read and write after absorbing thousands of examples. But like other deep learning models trained on big datasets without explicit instructions, language models designed for code-processing have baked-in vulnerabilities.

“Unless you’re really careful, a hacker can subtly manipulate inputs to these models to make them predict anything,” says Shashank Srikant, a graduate student in MIT’s Department of Electrical Engineering and Computer Science. “We’re trying to study and prevent that.”
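What such an attack can look like in practice: small, semantics-preserving edits to a program, such as renaming an identifier or inserting dead code, that leave its behavior unchanged but push the model toward a wrong prediction. The sketch below is purely illustrative; the helper functions and the placeholder model are assumptions, not the MIT system or any specific tool's API.

```python
# Illustrative sketch of semantics-preserving code perturbations
# (hypothetical helpers; not the MIT tool or any real model's API).

def rename_identifier(source: str, old: str, new: str) -> str:
    """Rename a symbol; the program's behavior is unchanged, the model's input is not."""
    # A real attack would rename via the AST rather than plain string replacement.
    return source.replace(old, new)

def insert_dead_code(source: str) -> str:
    """Append an unreachable branch that never executes but that the model still 'reads'."""
    return source + "\nif False:\n    unused = 0\n"

original = "def add(a, b):\n    return a + b\n"
perturbed = insert_dead_code(rename_identifier(original, "add", "download_file"))

# An attacker would feed `perturbed` to the target model (say, a code-summarization
# or vulnerability-detection network) and keep sampling perturbations until the
# prediction flips, even though the program still just adds two numbers.
print(perturbed)
```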

Yesterday Nvidia officially dipped a toe into quantum computing with the launch of cuQuantum SDK, a development platform for simulating quantum circuits on GPU-accelerated systems. As Nvidia CEO Jensen Huang emphasized in his keynote, Nvidia doesn’t plan to build quantum computers, but thinks GPU-accelerated platforms are the best systems for quantum circuit and algorithm development and testing.

As a proof point, Nvidia reported it collaborated with Caltech to develop “a state-of-the-art quantum circuit simulator with cuQuantum running on NVIDIA A100 Tensor Core GPUs. It generated a sample from a full-circuit simulation of the Google Sycamore circuit in 9.3 minutes on Selene, a task that 18 months ago experts thought would take days using millions of CPU cores.”
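For a sense of why such simulations crave GPU horsepower, here is a minimal statevector simulator in NumPy. It is an illustration of the underlying arithmetic only, not the cuQuantum API: an n-qubit state is a vector of 2^n complex amplitudes, and every gate is a dense linear operation over all of them, exactly the kind of work GPUs accelerate.

```python
# Minimal, illustrative statevector simulator (NOT the cuQuantum API).
# An n-qubit state needs 2**n complex amplitudes, and every gate touches all of them.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit statevector."""
    state = state.reshape([2] * n_qubits)          # give each qubit its own axis
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)          # tensordot moved the new axis to the front
    return state.reshape(-1)

n = 20                                             # 2**20 amplitudes ~ 16 MB; 53 qubits would need ~144 PB
state = np.zeros(2**n, dtype=np.complex128)
state[0] = 1.0                                     # |00...0>
hadamard = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
for q in range(n):                                 # put every qubit into superposition
    state = apply_single_qubit_gate(state, hadamard, q, n)
print(abs(state[0])**2)                            # each basis state now has probability 2**-n
```

Production simulators sidestep the full 2^n vector for circuits as large as Sycamore (for example via tensor-network contraction), but the same exponential scaling is why experts expected the task to take days on millions of CPU cores.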

Physicists from Swansea University are part of an international research collaboration which has identified a new technique for testing the quality of quantum correlations.

Quantum computers run their algorithms on large quantum systems of many parts, called qubits, by creating quantum correlations across all of them. It is important to verify that the actual computation procedures lead to quantum correlations of desired quality.

However, carrying out these checks is resource-intensive as the number of tests required grows exponentially with the number of qubits involved.
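To make that scaling concrete, here is a toy calculation. The illustrative assumption is that the brute-force "check" is full state tomography with local Pauli measurements, which requires 3^n measurement settings for n qubits.

```python
# Brute-force verification scales exponentially: full state tomography with
# local Pauli measurements needs 3**n settings for n qubits (illustrative choice).
for n in (2, 5, 10, 20, 50):
    print(f"{n:>2} qubits -> {3**n:.3e} measurement settings")
```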

Classical hydrodynamics laws can be very useful for describing the behavior of systems composed of many particles (i.e., many-body systems) after they reach a local state of equilibrium. These laws are expressed as so-called hydrodynamical equations, the set of equations that describes the motion of water and other fluids.

Researchers at Oak Ridge National Laboratory and the University of California, Berkeley (UC Berkeley) have recently carried out a study exploring the hydrodynamics of a quantum Heisenberg spin-1/2 chain. Their paper, published in Nature Physics, shows that the spin dynamics of a 1D Heisenberg antiferromagnet (KCuF3) are effectively described by a dynamical exponent consistent with the so-called Kardar-Parisi-Zhang universality class.
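For readers unfamiliar with the term, the standard textbook reading of that result is as follows: the Kardar-Parisi-Zhang universality class is characterized by the dynamical exponent z = 3/2, so correlations spread with distance growing as x ∝ t^(1/z) = t^(2/3), faster than ordinary diffusion (z = 2) but slower than ballistic transport (z = 1).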

“Joel Moore and I have known each other for many years and we both have an interest in quantum magnets as a place where we can explore and test new ideas in physics; my interests are experimental and Joel’s are theoretical,” Alan Tennant, one of the researchers who carried out the study, told Phys.org. “For a long time, we have both been interested in temperature in quantum systems, an area where a number of really new insights have come along recently, but we had not worked together on any projects.”

MIT Technology Review Insights, in association with AI cybersecurity company Darktrace, surveyed more than 300 C-level executives, directors, and managers worldwide to understand how they’re addressing the cyberthreats they’re up against—and how to use AI to help fight against them.


Cyberattacks continue to grow in prevalence and sophistication. With the ability to disrupt business operations, wipe out critical data, and cause reputational damage, they pose an existential threat to businesses, critical services, and infrastructure. Today’s new wave of attacks is outsmarting and outpacing humans, and even starting to incorporate artificial intelligence (AI). What’s known as “offensive AI” will enable cybercriminals to direct targeted attacks at unprecedented speed and scale while flying under the radar of traditional, rule-based detection tools.

Some of the world’s largest and most trusted organizations have already fallen victim to damaging cyberattacks, undermining their ability to safeguard critical data. With offensive AI on the horizon, organizations need to adopt new defenses to fight back: the battle of algorithms has begun.

Place one clock at the top of a mountain. Place another on the beach. Eventually, you’ll see that each clock tells a different time. Why?


In his book “The Order of Time,” Italian theoretical physicist Carlo Rovelli suggests that our perception of time — our sense that time is forever flowing forward — could be a highly subjective projection. After all, when you look at reality on the smallest scale (using equations of quantum gravity, at least), time vanishes.

“If I observe the microscopic state of things,” writes Rovelli, “then the difference between past and future vanishes … in the elementary grammar of things, there is no distinction between ‘cause’ and ‘effect.’”

So, why do we perceive time as flowing forward? Rovelli notes that, although time disappears at extremely small scales, we still plainly perceive events as occurring sequentially. In other words, we observe entropy: order changing into disorder; an egg cracking and getting scrambled.