Archive for the ‘information science’ category: Page 31
Mar 27, 2024
AI’s Learning Path: Surprising Uniformity Across Neural Networks
Posted by Dan Breeden in categories: information science, robotics/AI
Summary: Neural networks, regardless of their complexity or training method, follow a surprisingly uniform path from ignorance to expertise in image classification tasks. Researchers found that neural networks classify images by identifying the same low-dimensional features, such as ears or eyes, debunking the assumption that network learning methods are vastly different.
This finding could pave the way for developing more efficient AI training algorithms, potentially reducing the significant computational resources currently required. The research, grounded in information geometry, hints at a more streamlined future for AI development, where understanding the common learning path of neural networks could lead to cheaper and faster training methods.
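As a rough illustration of the idea (a minimal sketch, not the researchers' method or code), one way to compare learning paths is to record two differently sized classifiers' predictions on a fixed probe set across training epochs and project those snapshots into a shared low-dimensional space; overlapping paths would suggest the models traverse similar regions of prediction space. The dataset, architectures, and epoch count below are arbitrary choices for the sketch.

```python
# Minimal sketch: compare the learning trajectories of two classifiers by
# recording their softmax outputs on a fixed probe set after each training
# epoch, then projecting those prediction snapshots into a shared 2-D space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_probe, y_train, _ = train_test_split(X / 16.0, y, test_size=200, random_state=0)

def trajectory(model, epochs=15):
    """Softmax outputs on the probe set after each partial_fit epoch."""
    snapshots = []
    for _ in range(epochs):
        model.partial_fit(X_train, y_train, classes=np.arange(10))
        snapshots.append(model.predict_proba(X_probe).ravel())
    return np.array(snapshots)

# Two deliberately different architectures.
traj_a = trajectory(MLPClassifier(hidden_layer_sizes=(64,), random_state=1))
traj_b = trajectory(MLPClassifier(hidden_layer_sizes=(256, 128), random_state=2))

# Project both trajectories into the same low-dimensional space and compare.
pca = PCA(n_components=2).fit(np.vstack([traj_a, traj_b]))
path_a, path_b = pca.transform(traj_a), pca.transform(traj_b)
print("mean distance between the two learning paths:",
      np.linalg.norm(path_a - path_b, axis=1).mean())
```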
Mar 24, 2024
God’s Number Revealed: 20 Moves Proven Enough to Solve Any Rubik’s Cube Position
Posted by Quinn Sena in categories: alien life, computing, information science, mathematics
Year 2010 😗😁
The world has waited with bated breath for three decades, and now finally a group of academics, engineers, and math geeks has discovered the number that explains life, the universe, and everything. That number is 20, and it’s the maximum number of moves it takes to solve a Rubik’s Cube.
Known as God’s Number, the magic number required about 35 CPU-years and a good deal of man-hours to pin down. Why? Because there are 43,252,003,274,489,856,000 possible positions of the cube, and the computer algorithm that finally cracked God’s Algorithm had to solve them all. (The terms God’s Number/Algorithm are derived from the fact that if God were solving a Cube, he/she/it would do it in the most efficient way possible. The Creator did not endorse this study, and could not be reached for comment.)
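A quick back-of-envelope check using the article's own figures shows how much heavy lifting was involved; the implied throughput of tens of billions of positions per CPU-second is a strong hint that the solver leaned on symmetry and group-theoretic shortcuts rather than grinding through each position in isolation.

```python
# Back-of-envelope arithmetic using the figures quoted above: ~4.3 x 10^19
# cube positions covered in roughly 35 CPU-years of computation.
positions = 43_252_003_274_489_856_000        # total Rubik's Cube positions
cpu_seconds = 35 * 365.25 * 24 * 3600         # ~35 CPU-years in seconds

print(f"positions per CPU-second: {positions / cpu_seconds:.2e}")
# -> roughly 4 x 10^10 positions handled per CPU-second
```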
Mar 24, 2024
Bayesian neural networks using magnetic tunnel junction-based probabilistic in-memory computing
Posted by Dan Breeden in categories: information science, particle physics, robotics/AI
Bayesian neural networks (BNNs) combine the generalizability of deep neural networks (DNNs) with a rigorous quantification of predictive uncertainty, which mitigates overfitting and makes them valuable for high-reliability or safety-critical applications. However, the probabilistic nature of BNNs makes them more computationally intensive on digital hardware and so far, less directly amenable to acceleration by analog in-memory computing as compared to DNNs. This work exploits a novel spintronic bit cell that efficiently and compactly implements Gaussian-distributed BNN values. Specifically, the bit cell combines a tunable stochastic magnetic tunnel junction (MTJ) encoding the trained standard deviation and a multi-bit domain-wall MTJ device independently encoding the trained mean. The two devices can be integrated within the same array, enabling highly efficient, fully analog, probabilistic matrix-vector multiplications. We use micromagnetics simulations as the basis of a system-level model of the spintronic BNN accelerator, demonstrating that our design yields accurate, well-calibrated uncertainty estimates for both classification and regression problems and matches software BNN performance. This result paves the way to spintronic in-memory computing systems implementing trusted neural networks at a modest energy budget.
The powerful ability of deep neural networks (DNNs) to generalize has driven their wide proliferation in the last decade to many applications. However, particularly in applications where the cost of a wrong prediction is high, there is a strong desire for algorithms that can reliably quantify the confidence in their predictions (Jiang et al., 2018). Bayesian neural networks (BNNs) can provide the generalizability of DNNs, while also enabling rigorous uncertainty estimates by encoding their parameters as probability distributions learned through Bayes’ theorem such that predictions sample trained distributions (MacKay, 1992). Probabilistic weights can also be viewed as an efficient form of model ensembling, reducing overfitting (Jospin et al., 2022). In spite of this, the probabilistic nature of BNNs makes them slower and more power-intensive to deploy in conventional hardware, due to the large number of random number generation operations required (Cai et al., 2018a).
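To make the core operation concrete, here is a minimal software sketch (assuming NumPy, arbitrary layer sizes, and the standard Gaussian reparameterization; this is not the authors' simulation code) of the probabilistic matrix-vector multiply the spintronic bit cell performs in analog: each weight carries a trained mean and standard deviation, every pass samples a fresh weight matrix, and the spread across passes is the uncertainty estimate.

```python
# Minimal software sketch of a Gaussian-weight probabilistic matrix-vector
# multiply: each weight has a trained mean (domain-wall MTJ in the paper) and a
# trained standard deviation (stochastic MTJ). Repeating the sampled forward
# pass yields a predictive distribution whose spread reflects uncertainty.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4                               # hypothetical layer sizes
mu = rng.normal(0.0, 0.5, (n_out, n_in))         # trained means
sigma = rng.uniform(0.05, 0.2, (n_out, n_in))    # trained std devs (positive)

def probabilistic_mvm(x, samples=100):
    """Stochastic matrix-vector products: one sampled weight matrix per pass."""
    outs = []
    for _ in range(samples):
        w = mu + sigma * rng.standard_normal(mu.shape)   # reparameterization
        outs.append(w @ x)
    return np.array(outs)

x = rng.normal(size=n_in)
outs = probabilistic_mvm(x)
print("predictive mean:", outs.mean(axis=0))
print("predictive std :", outs.std(axis=0))      # uncertainty estimate
```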
Mar 24, 2024
Probabilistic Neural Computing with Stochastic Devices
Posted by Dan Breeden in categories: information science, robotics/AI
The brain has effectively proven a powerful inspiration for the development of computing architectures in which processing is tightly integrated with memory, communication is event-driven, and analog computation can be performed at scale. These neuromorphic systems increasingly show an ability to improve the efficiency and speed of scientific computing and artificial intelligence applications. Herein, it is proposed that the brain’s ubiquitous stochasticity represents an additional source of inspiration for expanding the reach of neuromorphic computing to probabilistic applications. To date, many efforts exploring probabilistic computing have focused primarily on one scale of the microelectronics stack, such as implementing probabilistic algorithms on deterministic hardware or developing probabilistic devices and circuits with the expectation that they will be leveraged by eventual probabilistic architectures. A co-design vision is described by which large numbers of devices, such as magnetic tunnel junctions and tunnel diodes, can be operated in a stochastic regime and incorporated into a scalable neuromorphic architecture that can impact a number of probabilistic computing applications, such as Monte Carlo simulations and Bayesian neural networks. Finally, a framework is presented to categorize increasingly advanced hardware-based probabilistic computing technologies.
Keywords: magnetic tunnel junctions; neuromorphic computing; probabilistic computing; stochastic computing; tunnel diodes.
© 2022 The Authors. Advanced Materials published by Wiley-VCH GmbH.
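One common abstraction for such stochastic devices is the "probabilistic bit": a binary unit whose output is drawn with a sigmoid-shaped probability of its input. The toy sketch below (a simplified software stand-in, not the hardware co-design described above) couples two such units and shows they perform Gibbs-style sampling from a Boltzmann distribution, the primitive that Monte Carlo and Bayesian workloads can build on.

```python
# Simplified software stand-in for a stochastic device (e.g., a low-barrier
# magnetic tunnel junction) treated as a probabilistic bit: its binary output
# is drawn with probability sigmoid(drive). Two coupled p-bits updated this way
# Gibbs-sample the two-spin Boltzmann distribution for energy E = -J*s0*s1.
import numpy as np

rng = np.random.default_rng(1)

def p_bit(drive):
    """Stochastic binary output in {-1, +1} with P(+1) = sigmoid(drive)."""
    return 1 if rng.random() < 1.0 / (1.0 + np.exp(-drive)) else -1

J, steps = 1.5, 10_000
state = np.array([1, 1])
agree = 0
for _ in range(steps):
    i = rng.integers(2)                 # pick a unit to update
    drive = 2.0 * J * state[1 - i]      # conditional field from its neighbor
    state[i] = p_bit(drive)
    agree += state[0] == state[1]

# Positive coupling J makes the two p-bits agree most of the time.
print("fraction of time the two p-bits agree:", agree / steps)
```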
Mar 24, 2024
Emerging Artificial Neuron Devices for Probabilistic Computing
Posted by Dan Breeden in categories: biological, finance, information science, robotics/AI
Probabilistic computing with stochastic devices.
In recent decades, artificial intelligence has been successfully employed in the fields of finance, commerce, and other industries. However, imitating high-level brain functions, such as imagination and inference, poses several challenges, as these functions rely on a particular type of noise in biological neuron networks. Probabilistic computing algorithms based on restricted Boltzmann machines and Bayesian inference, implemented in silicon electronics, have progressed significantly in mimicking probabilistic inference. However, the quasi-random noise generated by additional circuits or algorithms presents a major challenge for silicon electronics in realizing the true stochasticity of biological neuron systems. Artificial neurons based on emerging devices with inherent stochasticity, such as memristors and ferroelectric field-effect transistors, can produce uncertain non-linear output spikes, which may be the key to bringing machine learning closer to the human brain. In this article, we present a comprehensive review of the recent advances in emerging stochastic artificial neurons (SANs) in terms of probabilistic computing. We briefly introduce biological neurons, neuron models, and silicon neurons before presenting the detailed working mechanisms of various SANs. Finally, the merits and demerits of silicon-based and emerging neurons are discussed, and the outlook for SANs is presented.
Keywords: brain-inspired computing, artificial neurons, stochastic neurons, memristive devices, stochastic electronics.
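For a flavor of the probabilistic workloads mentioned above, the sketch below (illustrative only, with arbitrary sizes and random weights) implements the hidden-layer update of a restricted Boltzmann machine: each hidden neuron fires with probability equal to the sigmoid of its input. In software the Bernoulli draw comes from a pseudo-random number generator; the review's point is that a memristive or ferroelectric stochastic artificial neuron could supply that randomness intrinsically at the device level.

```python
# Hidden-unit update of a restricted Boltzmann machine using stochastic
# neurons: each hidden neuron spikes with probability sigmoid(Wv + b).
import numpy as np

rng = np.random.default_rng(42)

n_visible, n_hidden = 6, 3                    # hypothetical RBM sizes
W = rng.normal(0, 0.5, (n_hidden, n_visible))
b = np.zeros(n_hidden)

def sample_hidden(v):
    """One stochastic-neuron layer update: spike ~ Bernoulli(sigmoid(Wv + b))."""
    p_fire = 1.0 / (1.0 + np.exp(-(W @ v + b)))
    return (rng.random(n_hidden) < p_fire).astype(int), p_fire

v = np.array([1, 0, 1, 1, 0, 1])              # example visible (input) vector
spikes, p_fire = sample_hidden(v)
print("firing probabilities:", np.round(p_fire, 3))
print("sampled spikes      :", spikes)
```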
Mar 24, 2024
OpenAI’s GPT-5, their next-gen foundation model is coming soon
Posted by Kelvin Dafiaghor in categories: information science, robotics/AI
A hot potato: ChatGPT, the chatbot that turned machine learning algorithms into a new gold rush for Wall Street speculators and Big Tech companies, is merely a “storefront” for large language models within the Generative Pre-trained Transformer (GPT) series. Developer OpenAI is now readying yet another upgrade for the technology.
OpenAI is busily working on GPT-5, the next generation of the company’s multimodal large language model that will replace the currently available GPT-4 model. Anonymous sources familiar with the matter told Business Insider that GPT-5 will launch by mid-2024, likely during summer.
OpenAI is developing GPT-5 with third-party organizations and recently showed a live demo of the technology geared to use cases and data sets specific to a particular company. The CEO of the unnamed firm was impressed by the demonstration, stating that GPT-5 is exceptionally good, even “materially better” than previous chatbot tech.
Mar 23, 2024
Time travel is close to becoming a reality, astrophysicist claims • Earth
Posted by Paul Battista in categories: information science, time travel
Can you imagine going back in time to visit a lost loved one? This heart-wrenching desire is what propelled astrophysicist Professor Ron Mallett on a lifelong quest to build a time machine. After years of research, Professor Mallett claims to have finally developed the revolutionary equation for time travel.
The idea of bending time to our will – revisiting the past, altering history, or glimpsing into the future – has been a staple of science fiction for over a century. But could it move from fantasy to reality?
Professor Mallett’s obsession with time travel and its equation has its roots in a shattering childhood experience. When he was just ten years old, his father, a television repairman who fostered his son’s love of science, tragically passed away from a heart attack.
Mar 23, 2024
Debates on the nature of artificial general intelligence
Posted by Cecile G. Tamura in categories: business, Elon Musk, government, humor, information science, robotics/AI, transportation
The term “artificial general intelligence” (AGI) has become ubiquitous in current discourse around AI. OpenAI states that its mission is “to ensure that artificial general intelligence benefits all of humanity.” DeepMind’s company vision statement notes that “artificial general intelligence…has the potential to drive one of the greatest transformations in history.” AGI is mentioned prominently in the UK government’s National AI Strategy and in US government AI documents. Microsoft researchers recently claimed evidence of “sparks of AGI” in the large language model GPT-4, and current and former Google executives proclaimed that “AGI is already here.” The question of whether GPT-4 is an “AGI algorithm” is at the center of a lawsuit filed by Elon Musk against OpenAI.
Given the pervasiveness of AGI talk in business, government, and the media, one could not be blamed for assuming that the meaning of the term is established and agreed upon. However, the opposite is true: What AGI means, or whether it means anything coherent at all, is hotly debated in the AI community. And the meaning and likely consequences of AGI have become more than just an academic dispute over an arcane term. The world’s biggest tech companies and entire governments are making important decisions on the basis of what they think AGI will entail. But a deep dive into speculations about AGI reveals that many AI practitioners have starkly different views on the nature of intelligence than do those who study human and animal cognition—differences that matter for understanding the present and predicting the likely future of machine intelligence.
Mar 22, 2024
Quantum Entanglement Transforms Next-Generation Sensors
Posted by Saúl Morales Rodriguéz in categories: information science, particle physics, quantum physics
Researchers have revolutionized quantum sensing with an algorithm that simplifies the assessment of Quantum Fisher Information, thereby enhancing the precision and utility of quantum sensors in capturing minute phenomena.
Quantum sensors help physicists understand the world better by measuring time passage, gravity fluctuations, and other effects at the tiniest scales. For example, one quantum sensor, the LIGO gravitational wave detector, uses quantum entanglement (the interdependence of quantum states between particles) within a laser beam to detect distance changes caused by gravitational waves that are up to one thousand times smaller than the width of a proton!
LIGO isn’t the only quantum sensor harnessing the power of quantum entanglement: entangled particles are generally more sensitive to the parameter being measured, yielding more precise measurements.
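The standard result behind this is that, for a pure probe state acquiring a phase via a generator G, the quantum Fisher information equals four times the variance of G, so N unentangled qubits give a sensitivity scaling like N while an entangled GHZ state scales like N². The short NumPy check below (illustrative; not the algorithm from the article) verifies both cases.

```python
# Numerical check of entanglement-enhanced phase sensing: for a pure state,
# F_Q = 4 * Var(G). A product state of N qubits gives F_Q = N (shot-noise
# scaling); an entangled GHZ state gives F_Q = N^2 (Heisenberg scaling).
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

N = 4
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

# Generator of the phase: G = sum_i Z_i / 2
G = sum(kron_all([Z / 2 if j == i else I2 for j in range(N)]) for i in range(N))

def qfi(state):
    mean = state.conj() @ G @ state
    mean_sq = state.conj() @ (G @ G) @ state
    return float(4 * (mean_sq - mean**2).real)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
product = kron_all([plus] * N)                # unentangled probe

ghz = np.zeros(2**N)                          # entangled probe: (|0...0> + |1...1>)/sqrt(2)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

print("product state F_Q:", qfi(product))     # ~ N   = 4
print("GHZ state     F_Q:", qfi(ghz))         # ~ N^2 = 16
```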