
In 1918, the American chemist Irving Langmuir published a paper examining the behavior of gas molecules sticking to a solid surface. Guided by the results of careful experiments, as well as his theory that solids offer discrete sites for the gas molecules to fill, he worked out a series of equations that describe how much gas will stick, given the pressure.
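For reference, the central result of that work, now known as the Langmuir adsorption isotherm, expresses the fraction θ of occupied surface sites as a function of pressure P and an equilibrium constant K (the standard textbook form, given here for context rather than quoted from the paper):

```latex
\theta = \frac{K P}{1 + K P}
```

At low pressure the coverage grows linearly with P, while at high pressure the discrete sites saturate and θ approaches 1.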

Now, about a hundred years later, an “AI scientist” developed by researchers at IBM Research, Samsung AI, and the University of Maryland, Baltimore County (UMBC) has reproduced a key part of Langmuir’s Nobel Prize-winning work. The system, an AI functioning as a scientist, also rediscovered Kepler’s third law of planetary motion, which gives the time it takes one space object to orbit another from the distance separating them, and produced a good approximation of Einstein’s relativistic time-dilation law, which shows that time slows down for fast-moving objects.
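For concreteness, these are the two rediscovered laws in their textbook forms (added here for context, with T the orbital period, a the orbital distance, G the gravitational constant, M the central mass, v the object’s speed, and c the speed of light):

```latex
T^2 = \frac{4\pi^2}{G M}\, a^3,
\qquad
\Delta t' = \frac{\Delta t}{\sqrt{1 - v^2 / c^2}}
```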

A paper describing the results was published in Nature Communications on April 12.

As quantum advantage has been demonstrated on different quantum computing platforms using Gaussian boson sampling [1–3], quantum computing is moving to the next stage: demonstrating quantum advantage in solving practical problems. Two typical problems of this kind are computer-aided material design and drug discovery, in which quantum chemistry plays a critical role in answering questions such as “Which candidate is best?”. Many recent efforts have been devoted to developing advanced quantum algorithms for solving quantum chemistry problems on noisy intermediate-scale quantum (NISQ) devices [2,4–14], but implementing these algorithms for complex problems is limited by the available qubit counts, coherence times, and gate fidelities. Specifically, without error correction, quantum simulations of quantum chemistry are viable only if low-depth quantum algorithms are used to suppress the total error rate. Recent advances in error-mitigation techniques make it possible to model many-electron problems with a dozen qubits and circuit depths in the tens on NISQ devices [9], but such circuit sizes and depths are still a long way from practical applications.

The gap between the quantum resources that are available and those actually required for practical quantum simulations has renewed interest in divide-and-conquer (DC) based methods [15–19]. Realistic material and (bio)chemical systems often involve complex environments, such as surfaces and interfaces. For these systems, the Schrödinger equation is far too complicated to solve exactly, so approximate practical methods of applying quantum mechanics must be developed [20]. One popular scheme is to divide the complex problem under consideration into as many parts as possible until each becomes simple enough for an adequate solution; this is the philosophy of DC [21]. The DC method is particularly suitable for NISQ devices, since the sub-problem for each part can in principle be solved with fewer computational resources [15–18,22–25]. One successful application of DC estimated the ground-state potential energy surface of a ring of 10 hydrogen atoms using density matrix embedding theory (DMET) on a trapped-ion quantum computer, decomposing a 20-qubit problem into ten 2-qubit problems [18].
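As a rough illustration of the DC philosophy, the Python sketch below decomposes a hydrogen ring into single-atom fragments and sums the fragment energies. The fragment solver is a hypothetical placeholder, not DMET or any real quantum-chemistry package; in a real workflow, each fragment problem would be solved by a small quantum circuit.

```python
# Minimal sketch of divide and conquer (DC): split one large
# electronic-structure problem into small fragment problems, solve
# each one cheaply, and sum the fragment energies.

def solve_fragment(atom_index: int) -> float:
    """Hypothetical stand-in for a small fragment solver.

    In the DMET experiment cited in the text, each fragment problem
    required only 2 qubits instead of the full 20-qubit simulation.
    Here we simply return a dummy per-fragment energy (hartree).
    """
    return -0.5  # placeholder; a real solver returns the fragment energy

def dc_ground_state_energy(n_atoms: int) -> float:
    """DC estimate: one single-atom fragment per atom, energies summed."""
    return sum(solve_fragment(i) for i in range(n_atoms))

if __name__ == "__main__":
    # A 10-atom hydrogen ring becomes ten small fragment problems
    # instead of one 20-qubit problem.
    print(dc_ground_state_energy(10))
```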

DC often treats all subsystems at the same computational level and estimates physical observables by summing the corresponding quantities of the subsystems. In practical simulations of complex systems, however, the particle–particle interactions within and between subsystems may exhibit completely different characteristics. Long-range Coulomb interactions can be well approximated as quasiclassical electrostatic interactions, since empirical methods, such as empirical force field (EFF) approaches [26], describe these interactions well. As the distance between particles decreases, the repulsive exchange interactions between electrons of the same spin become important, so quantum mean-field approaches, such as Hartree–Fock (HF), are needed to characterize these electronic interactions.
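To make the quasiclassical-electrostatics point concrete, here is a minimal Python sketch of the kind of long-range term an empirical force field supplies: the pairwise Coulomb energy between point charges. The charges and geometry are made-up illustration values, not taken from the paper.

```python
import itertools
import math

def coulomb_energy(charges, positions):
    """Quasiclassical electrostatic energy, E = sum_{i<j} q_i*q_j / r_ij,
    in atomic units (charges in e, distances in bohr, energy in hartree)."""
    energy = 0.0
    for (qi, ri), (qj, rj) in itertools.combinations(
            zip(charges, positions), 2):
        energy += qi * qj / math.dist(ri, rj)
    return energy

# Two unit charges 10 bohr apart: a weak 0.1-hartree interaction that is
# cheap to evaluate classically; no quantum treatment is needed at this range.
print(coulomb_energy([1.0, 1.0], [(0, 0, 0), (10, 0, 0)]))
```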

Space travel, exploration, and observation involve some of the most complex and dangerous scientific and technical operations ever carried out. As a result, the field tends to throw up exactly the kinds of problems that artificial intelligence (AI) is proving outstandingly helpful with.

Because of this, astronauts, scientists, and others whose job it is to chart and explore the final frontier are increasingly turning to machine learning (ML) to tackle the everyday and extraordinary challenges they face.


AI is revolutionizing space exploration, from autonomous spaceflight to planetary exploration and charting the cosmos. ML algorithms help astronauts and scientists navigate and study space, avoid hazards, and classify features of celestial bodies.

Quantum computing promises to be a revolutionary tool, making short work of equations that classical computers might never finish. Yet the workhorse of a quantum device, known as a qubit, is a delicate object prone to collapsing out of its computational state.

Keeping enough qubits in their ideal state long enough for computations has so far proved a challenge.

In a new experiment, scientists were able to keep a qubit in its ideal state for twice as long as normal. Along the way, they demonstrated the practicality of quantum error correction (QEC), a process that keeps quantum information intact for longer by introducing room for redundancy and error removal.
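The experiment’s actual QEC scheme is more sophisticated, but the redundancy idea can be illustrated with the classical three-bit repetition code below (a minimal sketch, not the code used in the study): encode one logical bit as three copies, let noise flip each copy independently, and recover the bit by majority vote.

```python
import random

def encode(bit: int) -> list[int]:
    """Encode one logical bit as three redundant copies."""
    return [bit, bit, bit]

def noisy_channel(codeword: list[int], flip_prob: float = 0.1) -> list[int]:
    """Flip each copy independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in codeword]

def decode(codeword: list[int]) -> int:
    """Majority vote: correct as long as at most one copy flipped."""
    return int(sum(codeword) >= 2)

# With p = 0.1 per copy, the logical error rate drops to
# 3*p^2*(1-p) + p^3 = 0.028, versus 0.1 without redundancy.
trials = 100_000
failures = sum(decode(noisy_channel(encode(0))) != 0 for _ in range(trials))
print(failures / trials)
```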

A quantum computational solution for engineering materials: researchers at Argonne are exploring the possibility of solving the electronic structures of complex molecules using a quantum computer. If you know the atoms that compose a particular molecule or solid material, the interactions between those atoms can be determined computationally by solving quantum mechanical equations, at least if the molecule is small and simple. However, solving these equations, which is critical for fields from materials engineering to drug design, requires a prohibitively long computational time for complex molecules and materials.

A fascinating methodological proposal.


Models are scientific models, theories, hypotheses, formulas, equations, naïve models based on personal experiences, superstitions (!), and traditional computer programs. In a Reductionist paradigm, these Models are created by humans, ostensibly by scientists, and are then used, ostensibly by engineers, to solve real-world problems. Model creation and Model use both require that these humans Understand the problem domain, the problem at hand, the previously known shared Models available, and how to design and use Models. A Ph.D. degree could be seen as a formal license to create new Models[2]. Mathematics can be seen as a discipline for Model manipulation.

But now, by avoiding the use of human-made Models and switching to Holistic Methods, data scientists, programmers, and others no longer have to Understand the problems they are given themselves. They are no longer asked to provide a computer program or to otherwise solve a problem in a traditional Reductionist or scientific way. Holistic Systems like DNNs can provide solutions to many problems by first learning about the domain from data and solved examples, and then, in production, matching new situations against this gathered experience. These matches are guesses, but with sufficient learning the results can be highly reliable.

We will initially use computer-based Holistic Methods to solve individual and specific problems, such as self-driving cars. Over time, increasing numbers of Artificial Understanders will be able to provide immediate answers — guesses — to wider and wider ranges of problems. We can expect to see cellphone apps with such good command of language that it feels like talking to a competent co-worker. Voice will become the preferred way to interact with our personal AIs.

This video will cover the philosophy of artificial intelligence, the branch of philosophy that explores what artificial intelligence is, along with other philosophical questions surrounding it, such as: Can a machine act intelligently? Is the human brain essentially a computer? Can a machine be alive the way a human is? Can it have a mind and consciousness? Can we build A.I. and align it with our values and ethics? If so, which ethical systems do we choose?

We’re going to cover all of those questions and possible answers to them in what will hopefully be an easy-to-understand, 101-style manner.


0:00 Introduction.
0:45 What is Artificial Intelligence?
1:13 René Descartes.
2:11 Alan Turing & the ‘Turing Test’
3:42 A.I.M.A. & A.I.
4:45 Intelligent Agents.
5:40 Newell’s Definition.
6:26 Weak A.I. vs Strong A.I.
7:31 Narrow A.I. vs General A.I. vs Super Intelligence.
10:00 Computationalism.
10:44 Approaches to A.I.
13:32 Can a Machine Have Consciousness?
14:23 The ‘Chinese Room’
16:30 Critical Responses.
17:18 The ‘Hard Problem of Consciousness’
18:47 Philosophical Zombies.
21:20 New Questions in the Philosophy of A.I.
21:34 Singularitarianism.
24:40 A.I. Alignment.
26:45 The Orthogonality Thesis.
27:36 The Ethics of A.I.
30:56 Conclusion.

Descartes, R., 1637, in Haldane, E. and Ross, G.R.T., translators, 1911, The Philosophical Works of Descartes, Volume 1, Cambridge, UK: Cambridge University Press.

Russell, S. and Norvig, P., 2009, Artificial Intelligence: A Modern Approach, 3rd edition, Upper Saddle River, NJ: Prentice Hall.

The team was able to produce blur-free, high-resolution images of the universe by incorporating an AI algorithm into their image processing.

Before reaching ground-based telescopes, cosmic light interacts with the Earth’s atmosphere. That is why the majority of advanced ground-based telescopes are located at high altitudes, where the atmosphere is thinner. Even so, the Earth’s changing atmosphere often obscures the view of the universe.

The atmosphere blocks certain wavelengths and distorts the light arriving from great distances. This interference can prevent the accurate reconstruction of space images, which is critical for unraveling the mysteries of the universe. The resulting blurry images can obscure the shapes of astronomical objects and cause measurement errors.

Are you ready for the future of #ai? In this video, we showcase the top 10 AI tools to watch in 2023. From advanced machine learning algorithms to cutting-edge deep learning #technologies, these tools will revolutionize the way we work, learn, and interact with the world. Join us as we explore the #innovative capabilities of these AI tools and discover how they can boost your productivity, streamline your operations, and enhance your decision-making. Don’t miss this exciting glimpse into the future of artificial intelligence!