
Beyond the hype surrounding artificial intelligence (AI) in the enterprise lies the next step: artificial consciousness. The first piece in this practical AI innovation series outlined the requirements for this technology, delving deeply into compute power, the core capability needed to enable artificial consciousness. This piece looks at the control and storage technologies and requirements that are not only necessary for enterprise AI deployment but also essential to achieving artificial consciousness.

Controlling unprecedented compute power

While artificial consciousness is impossible without a dramatic rise in compute capacity, that is only part of the challenge. Organizations must harness that compute power with proper control plane nodes, the familiar backbone of the high-availability server clusters that deliver it. These nodes are essential for managing and orchestrating complex computing environments efficiently.

The American public intellectual and creator of the television series Closer to Truth, Robert Lawrence Kuhn, has written perhaps the most comprehensive article on the landscape of theories of consciousness in recent memory. In this review of the consciousness landscape, Àlex Gómez-Marín celebrates Kuhn's rejection of the monopoly of materialism and uncovers the radical implications of these new accounts of consciousness for meaning, artificial intelligence, and human immortality.

The scientific study of consciousness was not sanctioned by the mainstream until the nineties. Let us not forget that science stands on the shoulders of giants but also on the three-legged stool of data, theory, and socio-political wants. Thirty years later, the field has grown into a vibrant milieu of approaches blessed and burdened by covert assumptions, contradictory results, and conflicting implications. If the study of behaviour and cognition has become the Urban East, consciousness studies are the current Wild West of science and philosophy.

Brain-inspired hardware emulates the structure and working principles of a biological brain and may address the hardware bottleneck for fast-growing artificial intelligence (AI). Current brain-inspired silicon chips are promising but still fall short of fully mimicking brain function for AI computing. Here, we develop Brainoware, living AI hardware that harnesses the computational power of 3D biological neural networks in a brain organoid. Brain-like 3D in vitro cultures compute by receiving and sending information via a multielectrode array. Under spatiotemporal electrical stimulation, this approach not only exhibits nonlinear dynamics and fading-memory properties but also learns from training data. Further experiments demonstrate real-world applications in solving nonlinear equations. This approach may provide new insights into AI hardware.
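The organoid here plays the role of a reservoir: a fixed, high-dimensional nonlinear dynamical system whose response to stimulation is decoded by a trained linear readout. A minimal in-silico sketch of that reservoir-computing scheme, using a random recurrent network in place of the organoid (the network size, input scaling, and toy task below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (the in-silico stand-in for the organoid).
# As in reservoir computing, only the linear readout is trained.
N = 200                                     # reservoir neurons
W_in = rng.uniform(-0.5, 0.5, N)            # input weights
b = rng.uniform(-0.2, 0.2, N)               # bias breaks odd symmetry
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 -> fading memory

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t + b)
        states.append(x.copy())
    return np.array(states)

# Toy nonlinear task requiring memory: y(t) = u(t-1) * u(t-2)
u = rng.uniform(-1, 1, 1200)
y = np.roll(u, 1) * np.roll(u, 2)
X = run_reservoir(u)

# Discard a warm-up, fit a ridge-regression readout, test on held-out data
warm, split = 100, 1000
A, t = X[warm:split], y[warm:split]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ t)

pred = X[split:] @ W_out
nrmse = np.sqrt(np.mean((pred - y[split:]) ** 2)) / np.std(y[split:])
print(f"test NRMSE: {nrmse:.3f}")
```

Because only the readout `W_out` is trained, "training" reduces to a single least-squares solve; the spectral radius below 1 is what gives the reservoir the fading-memory property the abstract mentions.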

Artificial intelligence (AI) is reshaping the future of human life across various real-world fields such as industry, medicine, society, and education [1]. The remarkable success of AI has been largely driven by the rise of artificial neural networks (ANNs), which process vast real-world datasets (big data) on silicon computing chips [2,3]. However, current AI hardware keeps AI from reaching its full potential: training ANNs on it produces massive heat and is heavily time- and energy-consuming [4–6], significantly limiting the scale, speed, and efficiency of ANNs. Moreover, current AI hardware is approaching its theoretical limits, with progress no longer following Moore's law [7,8], and faces challenges stemming from the physical separation of data from data-processing units, known as the von Neumann bottleneck [9,10]. Thus, AI needs a hardware revolution [8,11].

A breakthrough in AI hardware may be inspired by the structure and function of the human brain, which has a remarkably efficient ability, known as natural intelligence (NI), to process and learn from spatiotemporal information. A human brain forms a 3D living biological network of about 200 billion cells linked to one another via hundreds of trillions of nanometer-sized synapses [12,13]. This efficiency makes the human brain ideal hardware for AI. Indeed, a typical human brain expends about 20 watts, while current AI hardware consumes about 8 million watts to drive a comparable ANN [6]. Moreover, the human brain can effectively process and learn from noisy data at minimal training cost through neuronal plasticity and neurogenesis [14,15], avoiding the huge energy consumption incurred by current high-precision computing approaches doing the same job [12,13].

Over the past decades, computer scientists have developed various computing tools that can help solve challenges in quantum physics. These include large-scale deep neural networks that can be trained to predict the ground states of quantum systems, an approach now referred to as neural quantum states (NQSs).
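Concretely, an NQS replaces the exponentially large wavefunction with a parametrized network ψ_θ(s) over spin configurations s, and tunes θ to minimize the variational energy ⟨ψ_θ|H|ψ_θ⟩ / ⟨ψ_θ|ψ_θ⟩. The sketch below is a deliberately tiny, hypothetical example: a 4-spin transverse-field Ising chain with an RBM-style ansatz, enumerating all 16 basis states instead of the Monte Carlo sampling a real NQS calculation would use:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Transverse-field Ising chain on n=4 spins (open boundary):
# H = -J sum_i Z_i Z_{i+1} - h sum_i X_i, built from Kronecker products.
n, J, h = 4, 1.0, 1.0
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_at(op, i):
    """Place single-site operator op on site i, identity elsewhere."""
    out = np.ones((1, 1))
    for k in range(n):
        out = np.kron(out, op if k == i else I2)
    return out

H = sum(-J * kron_at(Z, i) @ kron_at(Z, i + 1) for i in range(n - 1))
H = H + sum(-h * kron_at(X, i) for i in range(n))

# RBM-style neural quantum state: log-amplitude of a configuration
# s in {-1,+1}^n is sum_j log cosh(W s + b)_j, with 8 hidden units.
configs = np.array(list(product([1, -1], repeat=n)), dtype=float)  # 16 x 4

def psi(theta):
    W, bias = theta[:n * 8].reshape(8, n), theta[n * 8:]
    return np.exp(np.sum(np.log(np.cosh(configs @ W.T + bias)), axis=1))

def energy(theta):
    p = psi(theta)
    return p @ H @ p / (p @ p)          # Rayleigh quotient <H>

# Minimize <H> with finite-difference gradient descent; the system is
# small enough to enumerate every state, so no sampling noise here.
theta = 0.1 * rng.normal(size=n * 8 + 8)
eps, lr = 1e-5, 0.05
for _ in range(300):
    grad = np.zeros_like(theta)
    for k in range(theta.size):
        tp = theta.copy(); tp[k] += eps
        tm = theta.copy(); tm[k] -= eps
        grad[k] = (energy(tp) - energy(tm)) / (2 * eps)
    theta -= lr * grad

E0 = np.linalg.eigvalsh(H)[0]           # exact ground energy for comparison
print(f"variational: {energy(theta):.4f}  exact: {E0:.4f}")
```

By the variational principle the learned energy can never drop below the exact ground-state energy from `eigvalsh`; plain gradient descent is used only for clarity, whereas production NQS codes rely on stochastic reconfiguration or natural-gradient optimizers.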

SearchGPT is just a “prototype” for now. The service is powered by the GPT-4 family of models and will only be accessible to 10,000 test users at launch, OpenAI spokesperson Kayla Wood tells The Verge. Wood says that OpenAI is working with third-party partners and using direct content feeds to build its search results. The goal is to eventually integrate the search features directly into ChatGPT.

It’s the start of what could become a meaningful threat to Google, which has rushed to bake in AI features across its search engine, fearing that users will flock to competing products that offer the tools first. It also puts OpenAI in more direct competition with the startup Perplexity, which bills itself as an AI “answer” engine. Perplexity has recently come under criticism for an AI summaries feature that publishers claimed was directly ripping off their work.