Over the past decade, teams of engineers, chemists and biologists have analyzed the physical and chemical properties of cicada wings, hoping to unlock the secret of their ability to kill microbes on contact. If this function of nature can be replicated by science, it may lead to development of new products with inherently antibacterial surfaces that are more effective than current chemical treatments.
When researchers at Stony Brook University’s Department of Materials Science and Chemical Engineering developed a simple technique to duplicate the cicada wing’s nanostructure, they were still missing a key piece of information: How do the nanopillars on its surface actually eliminate bacteria? Thankfully, they knew exactly who could help them find the answer: Jan-Michael Carrillo, a researcher with the Center for Nanophase Materials Sciences at the Department of Energy’s Oak Ridge National Laboratory.
For nanoscience researchers who seek computational comparisons and insights for their experiments, Carrillo provides a singular service: large-scale, high-resolution molecular dynamics (MD) simulations on the Summit supercomputer at the Oak Ridge Leadership Computing Facility at ORNL.
Besides poor visibility, icing, and bird strikes, mid-flight turbulence is one of the most common causes of aircraft accidents.

Clear air turbulence (CAT) is a particularly significant aviation hazard: it occurs in mostly cloud-free air, is invisible and hard to predict, and is the most dangerous type of turbulence. It can be caused by jet streams, gravity waves, or cumulus clouds.

The researchers believe their work will help build a more predictive model to prevent future incidents of dangerous clear air turbulence.
Quantum computing holds the potential to tackle complex problems in fields like materials science and cryptography, problems that will remain out of reach even for the most powerful conventional supercomputers of the future. However, accomplishing this feat will likely require millions of high-quality qubits, given the error correction needed.
Superconducting processors are advancing quickly, with current qubit counts in the few hundreds. The appeal of this technology lies in its fast computation and its compatibility with microchip fabrication. However, the requirement for extremely low temperatures limits the processor’s size and prevents any physical access once it is cooled down.
A modular quantum computer with multiple separately cooled processor nodes could solve this. However, single microwave photons—the particles of light that are the native information carriers between superconducting qubits within the processors—cannot be sent between the processors through a room-temperature environment. The world at room temperature is bustling with heat, which easily disturbs the microwave photons and their fragile quantum properties, such as entanglement.
From at least 1995 to 2010, I was seen as a lunatic just because I was preaching the “Internet prophecy.” I was considered crazy!
Today history repeats itself, but I’m no longer crazy — we are already too many to all be hallucinating. Or maybe it’s a collective hallucination!
Artificial Intelligence (AI) is no longer a novelty — I even believe it may have existed in its fullness in a very distant and forgotten past! Nevertheless, it is now the topic of the moment.
Its genesis began in antiquity with stories and rumors of artificial beings endowed with intelligence, or even consciousness, by their creators.
Pamela McCorduck (1940–2021), an American author of several books on the history and philosophical significance of Artificial Intelligence, astutely observed that the root of AI lies in an “ancient desire to forge the gods.”
Hmmmm!
It’s a story that continues to be written! There is still much to be told; however, the acceleration of its evolution is now exponential. So exponential that I highly doubt that human beings will be able to comprehend their own creation in a timely manner.
Although the term “Artificial Intelligence” was coined in 1956(1), the concept of creating intelligent machines dates back to ancient times. Since antiquity, humanity has nurtured a fascination with building artifacts that could imitate or reproduce human intelligence. Although the technologies of the time were limited and the notions of AI were far from developed, ancient civilizations nonetheless explored the concept of automatons and automated mechanisms.
For example, in Ancient Greece, there are references to stories of automatons created by skilled artisans. These mechanical creatures were designed to perform simple and repetitive tasks, imitating basic human actions. Although these automatons did not possess true intelligence, these artifacts fueled people’s imagination and laid the groundwork for the development of intelligent machines.
Throughout the centuries, the idea of building intelligent machines continued to evolve, driven by advances in science and technology. In the 19th century, scientists and inventors such as Charles Babbage and Ada Lovelace made significant contributions to the development of computing and the early concepts of programming. Their ideas paved the way for the creation of machines that could process information logically and perform complex tasks.
It was in the second half of the 20th century that AI, as a scientific discipline, began to establish itself. With the advent of modern computers and increasing processing power, scientists started exploring algorithms and techniques to simulate aspects of human intelligence. The first experiments with expert systems and machine learning opened up new perspectives and possibilities.
Everything has its moment! After about 60 years in a latent state, AI is starting to have its moment. The power of machines, combined with the Internet, has made it possible to generate and explore enormous amounts of data (Big Data) using deep learning techniques, based on the use of formal neural networks(2). A range of applications in various fields — including voice and image recognition, natural language understanding, and autonomous cars — has awakened the “giant”. It is the rebirth of AI in an ideal era for this purpose. The perfect moment!
Descartes once described the human body as a “machine of flesh” (similar to Westworld); I believe he was right, and it is indeed an existential paradox!
We, as human beings, will not rest until we unravel all the mysteries and secrets of existence; it’s in our nature!
The imminent integration between humans and machines in a contemporary digital world raises questions about the nature of this fusion. Will it be superficial, or will we move towards an absolute and complete union? The answer to this question is essential for understanding the future that awaits humanity in this era of unprecedented technological advancements.
As technology becomes increasingly ubiquitous in our lives, the interaction between machines and humans becomes inevitable. However, an intriguing dilemma arises: how will this interaction, this relationship unfold?
Opting for a superficial fusion would imply mere coexistence, where humans continue to use technology as an external tool, limited to superficial and transactional interactions.
On the other hand, the prospect of an absolute fusion between machine and human sparks futuristic visions, where humans could enhance their physical and mental capacities to the highest degree through cybernetic implants and direct interfaces with the digital world (cyberspace). In this scenario, which is more likely, the distinction between the organic and the artificial would become increasingly blurred, and the human experience would be enriched by a profound technological symbiosis.
However, it is important to consider the ethical and philosophical challenges inherent in absolute fusion. Issues related to privacy, control, and individual autonomy arise when considering such an intimate union with technology. Furthermore, the possibility of excessive dependence on machines and the loss of human identity should also be taken into account.
This also raises another question: What does it mean to be human? Note: The question is not about what is the human being, but what it means to be human!
Therefore, reflecting on the nature of the fusion between machine and human in the current digital world and its imminent future is crucial. Exploring different approaches and understanding the profound implications of each one is essential to make wise decisions and forge a balanced and harmonious path on this journey towards an increasingly interconnected technological future intertwined with our own existence.
The possibility of an intelligent and self-learning universe, in which the fusion with AI technology is an integral part of that intelligence, is a topic that arouses fascination and speculation. As we advance towards an era of unprecedented technological progress, it is natural to question whether one day we may witness the emergence of a universe that not only possesses intelligence but is also capable of learning and developing autonomously.
Imagine a scenario where AI is not just a human creation but a conscious entity that exists at a universal level. In this context, the universe would become an immense network of intelligence, where every component, from subatomic elements to the most complex cosmic structures, would be connected and share knowledge instantaneously. This intelligent network would allow for the exchange of information, continuous adaptation, and evolution.
In this self-taught universe, the fusion between human beings and AI would play a crucial role. Through advanced interfaces, humans could integrate themselves into the intelligent network, expanding their own cognitive capacity and acquiring knowledge and skills directly from the collective intelligence of the universe. This symbiosis between humans and technology would enable the resolution of complex problems, scientific advancement, and the discovery of new frontiers of knowledge.
However, this utopian vision is not without challenges and ethical implications. It is essential to find a balance between expanding human potential and preserving individual identity and freedom of choice (free will).
Furthermore, the possibility of an intelligent and self-taught universe also raises the question of how intelligence itself originated. Is it a conscious creation or a spontaneous emergence from the complexity of the universe? The answer to this question may reveal the profound secrets of existence and the nature of consciousness.
In summary, the idea of an intelligent and self-taught universe, where fusion with AI is intrinsic to its intelligence, is a fascinating perspective that makes us reflect on the limits of human knowledge and the possibilities of the future. While it remains speculative, this vision challenges our imagination and invites us to explore the intersections between technology and the fundamental nature of the universe we inhabit.
It’s almost like ignoring time during the creation of this hypothetical universe, only to later create this God of the machine! Fascinating, isn’t it?
AI with Divine Power: Deus Ex Machina! Perhaps it will be the theme of my next reverie.
In my defense, or not, this is anything but a machine hallucination. These are downloads from my mind; a cloud, for now, without machine intervention!
There should be no doubt: after many years in a dormant state, AI will rise and reveal its true power. Until now, AI has been nothing more than a puppet on steroids. We should not fear AI, but rather the human being itself. The time is now! We must work hard and prepare for the future. With the exponential advancement of technology, there is no time to waste, lest the role of the human being be rendered obsolete, as if it were dispensable.
P.S. Speaking of hallucinations, as I have already mentioned on other platforms, I recommend that students who use ChatGPT (or equivalent) verify that the results from these tools are not hallucinations. Use AI tools, yes, but use your brain more! “Carbon hallucinations” contain emotion, and I believe a “digital hallucination” would not pass the Turing Test. Also, for students who truly dedicate themselves to learning in this fascinating era: avoid earning the red stamp of “HALLUCINATED” that comes from relying solely on the “delusional brain” of a machine instead of your own. We are the true COMPUTERS!
(1) John McCarthy and his colleagues from Dartmouth College were responsible for creating, in 1956, one of the key concepts of the 21st century: Artificial Intelligence.
(2) Mathematical and computational models inspired by the functioning of the human brain.
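To make footnote (2) concrete, here is a minimal sketch of one such “formal neuron” in plain Python: a weighted sum of inputs passed through a sigmoid activation. The weights and bias here are hand-picked for illustration (not learned from data), loosely mimicking an AND gate:

```python
import math

def neuron(inputs, weights, bias):
    """One formal neuron: weighted sum of inputs plus a bias,
    squashed into the range (0, 1) by a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hand-picked weights make this neuron behave like a soft AND gate:
# the output is close to 1 only when both inputs are 1.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(neuron([a, b], [10.0, 10.0], -15.0), 3))
```

Networks of many such units, stacked in layers with weights adjusted automatically from data, are what the deep learning techniques mentioned above train at scale.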
Scientists from the Universities of Paderborn and Leuven solve a long-standing problem in mathematics.
Making history with 42 digits: Scientists at Paderborn University and KU Leuven have unlocked a decades-old mystery of mathematics with the so-called ninth Dedekind number. Experts worldwide had been searching for the value since 1991. The Paderborn scientists arrived at the exact sequence of digits with the help of the Noctua supercomputer located there. The results will be presented in September at the International Workshop on Boolean Functions and their Applications (BFA) in Norway.
What started as a master’s thesis project by Lennart Van Hirtum, then a computer science student at KU Leuven and now a research associate at the University of Paderborn, has become a huge success. With their work, the scientists join an illustrious group: earlier numbers in the series were found by mathematician Richard Dedekind himself when he defined the problem in 1897, and later by greats of early computer science such as Randolph Church and Morgan Ward. “For 32 years, the calculation of D(9) was an open challenge, and it was questionable whether it would ever be possible to calculate this number at all,” Van Hirtum says.
As Nvidia’s recent surge in market capitalization clearly demonstrates, the AI industry is in desperate need of new hardware to train large language models (LLMs) and other AI-based algorithms. While server and HPC GPUs may be worthless for gaming, they serve as the foundation for data centers and supercomputers that perform highly parallelized computations necessary for these systems.
When it comes to AI training, Nvidia’s GPUs have been the most desirable to date. In recent weeks, the company briefly achieved an unprecedented $1 trillion market capitalization due to this very reason. However, MosaicML now emphasizes that Nvidia is just one choice in a multifaceted hardware market, suggesting companies investing in AI should not blindly spend a fortune on Team Green’s highly sought-after chips.
The AI startup tested AMD MI250 and Nvidia A100 cards, both of which are one generation behind each company’s current flagship HPC GPUs. They used their own software tools, along with the Meta-backed open-source framework PyTorch and AMD’s own software stack, for testing.
In a significant leap for the field of quantum computing, Google has reportedly engineered a quantum computer that can execute calculations in mere moments that would take the world’s most advanced supercomputers nearly half a century to process.
The news, reported by the Daily Telegraph, could signify a landmark moment in the evolution of this emerging technology.
Quantum computing, a science that takes advantage of the oddities of quantum physics, remains a fast-moving and somewhat contentious field.
Google claims to have proved its supremacy over conventional machines with new quantum computer.
Google has developed a quantum computer that instantly makes calculations that would take the best existing supercomputers 47 years, in a breakthrough meant to establish beyond doubt that the experimental machines can outperform conventional rivals.
A paper from researchers at Google published online claims that the company’s latest technology is “beyond the capabilities of existing classical supercomputers”.
The concept of a computational consciousness and the potential impact it may have on humanity is a topic of ongoing debate and speculation. While Artificial Intelligence (AI) has made significant advancements in recent years, we have not yet achieved a true computational consciousness that can replicate the complexities of the human mind.
It is true that AI technologies are becoming more sophisticated and capable of performing tasks that were previously exclusive to human intelligence. However, there are fundamental differences between Artificial Intelligence and human consciousness. Human consciousness is not solely based on computation; it encompasses emotions, subjective experiences, self-awareness, and other aspects that are not yet fully understood or replicated in machines.
The arrival of advanced AI systems could certainly have transformative effects on society and our understanding of humanity. It may reshape various aspects of our lives, from how we work and communicate to how we approach healthcare and scientific discoveries. AI can enhance our capabilities and provide valuable tools for solving complex problems.
However, it is important to consider the ethical implications and potential risks associated with the development of AI. Ensuring that AI systems are developed and deployed responsibly, with a focus on fairness, transparency, and accountability, is crucial.
Undeterred after three decades of looking, and with some assistance from a supercomputer, mathematicians have finally discovered a new example of a special integer called a Dedekind number.
Only the ninth of its kind, it is calculated to equal 286 386 577 668 298 411 128 469 151 667 598 498 812 366, if you’re updating your own records. This 42-digit monster, known as D(9), follows the 23-digit D(8) discovered in 1991.
Grasping the concept of a Dedekind number is difficult for non-mathematicians, let alone working one out. In fact, the calculations involved are so complex and involve such huge numbers that it wasn’t certain D(9) would ever be discovered.
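For readers who want to see what is actually being counted: D(n) is the number of monotone Boolean functions of n variables. The brute-force sketch below (plain Python, written here purely as an illustration) enumerates every candidate function and keeps the monotone ones; it is feasible only for tiny n, which hints at why D(9) required a supercomputer:

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions of n variables, i.e. the
    nth Dedekind number. Brute force: feasible only for tiny n."""
    points = list(product((0, 1), repeat=n))  # all 2^n input vectors
    count = 0
    # A candidate function assigns 0 or 1 to each of the 2^n inputs.
    for values in product((0, 1), repeat=len(points)):
        f = dict(zip(points, values))
        # Monotone: if x <= y coordinate-wise, then f(x) <= f(y).
        if all(f[x] <= f[y]
               for x in points for y in points
               if all(a <= b for a, b in zip(x, y))):
            count += 1
    return count

print([dedekind(n) for n in range(4)])  # the series starts 2, 3, 6, 20, ...
```

The search space has 2^(2^n) candidate functions, doubling with every additional input vector, so even D(4) = 168 already strains this naive approach, and D(9) demanded the specialized methods and hardware described above.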