
Archive for the ‘information science’ category: Page 14

Aug 1, 2024

A higher-dimensional model can help explain cosmic acceleration without dark energy

Posted by in categories: cosmology, information science, quantum physics

Dark energy remains among the greatest puzzles in our understanding of the cosmos. In the standard model of cosmology, Lambda-CDM, it is accounted for by adding to Einstein's field equations a cosmological constant term, first introduced by Einstein himself. This constant is very small and positive, and there is no complete theoretical understanding of why it takes such a tiny value. Moreover, dark energy has some peculiar features, such as negative pressure and the fact that it does not dilute with cosmic expansion, which make at least some of us uncomfortable.
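For reference, the Lambda-CDM bookkeeping this paragraph describes can be written out explicitly; these are the standard textbook equations, not anything specific to the higher-dimensional model proposed in the post:

```latex
% Einstein's field equations with the cosmological constant term:
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
% The second Friedmann (acceleration) equation shows how a small positive
% Lambda drives accelerated expansion once matter has diluted:
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}
% Equivalently, dark energy behaves as a fluid with equation of state
% w = p/(\rho c^2) = -1: negative pressure and an energy density that
% stays constant as the universe expands.
```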

Jul 30, 2024

AI brain images create realistic synthetic data to use in medical research

Posted by in categories: biotech/medical, information science, robotics/AI, supercomputing

An AI model developed by scientists at King’s College London, in close collaboration with University College London, has produced three-dimensional, synthetic images of the human brain that are realistic and accurate enough to use in medical research.

The model and images have helped scientists better understand what the human brain looks like, supporting research to predict, diagnose and treat conditions such as dementia, stroke, and multiple sclerosis.

The algorithm was created using the NVIDIA Cambridge-1, the UK’s most powerful supercomputer. One of the fastest supercomputers in the world, the Cambridge-1 allowed researchers to train the AI in weeks rather than months and produce images of far higher quality.

Jul 28, 2024

Novel algorithm for discovering anomalies in data outperforms current software

Posted by in categories: biotech/medical, information science, robotics/AI

An algorithm developed by Washington State University researchers can better find data anomalies than current anomaly-detection software, including in streaming data.

The work, reported in the Journal of Artificial Intelligence Research, makes fundamental contributions to artificial intelligence (AI) methods that could have applications in many domains that need to quickly find anomalies in large amounts of data, such as in cybersecurity, power grid management, misinformation, and medical diagnostics.

Being able to better find the anomalies would mean being able to more easily discover fraud, disease in a medical setting, or important unusual information, such as an asteroid whose signals overlap with the light from other stars.
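The article gives no details of the WSU algorithm itself, so as a generic illustration of the streaming-anomaly-detection problem it addresses, here is a minimal running z-score detector using Welford's online mean/variance update. The class name and threshold are illustrative, not taken from the paper:

```python
class StreamingZScoreDetector:
    """Flag points that sit far from the running mean of a data stream."""

    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford)
        self.threshold = threshold

    def update(self, x):
        """Return True if x is anomalous relative to the data seen so far."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        # Welford's online update of mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = StreamingZScoreDetector(threshold=3.0)
stream = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9, 8.0]
flags = [detector.update(x) for x in stream]
print(flags)  # only the final spike is flagged
```

Real systems (and presumably the WSU work) must also handle concept drift and correlated features, which this single-pass statistic ignores.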

Jul 27, 2024

New method for 3D quantitative phase imaging eliminates need for digital phase recovery algorithms

Posted by in categories: information science, transportation

Quantitative phase imaging (QPI) is a powerful technique that reveals variations in optical path length caused by weakly scattering samples, enabling the generation of high-contrast images of transparent specimens. Traditional 3D QPI methods, while effective, are limited by the need for multiple illumination angles and extensive digital post-processing for 3D reconstruction, which can be time-consuming and computationally intensive.

In this innovative study, the research team developed a wavelength-multiplexed diffractive optical processor capable of all-optically transforming distributions of multiple 2D objects at various axial positions into intensity patterns, each encoded at a unique wavelength channel.

This allows for the capture of quantitative phase images of input objects located at different axial planes using an intensity-only image sensor, eliminating the need for digital phase recovery algorithms.

Jul 27, 2024

Models, metaphors and minds

Posted by in categories: biological, computing, information science, life extension, neuroscience

The idea of the brain as a computer is everywhere. So much so we have forgotten it is a model and not the reality. It’s a metaphor that has led some to believe that in the future they’ll be uploaded to the digital ether and thereby achieve immortality. It’s also a metaphor that garners billions of dollars in research funding every year. Yet researchers argue that when we dig down into our grey matter our biology is anything but algorithmic. And increasingly, critics contend that the model of the brain as computer is sending scientists (and their resources) nowhere fast. Is our attraction to the idea of the brain as computer an accident of current human technology? Can we find a better metaphor that might lead to a new paradigm?

Jul 26, 2024

Brain Organoid Computing for Artificial Intelligence

Posted by in categories: biotech/medical, information science, robotics/AI

Brain-inspired hardware emulates the structure and working principles of a biological brain and may address the hardware bottleneck for fast-growing artificial intelligence (AI). Current brain-inspired silicon chips are promising but still limited in their power to fully mimic brain function for AI computing. Here, we develop Brainoware, living AI hardware that harnesses the computation power of 3D biological neural networks in a brain organoid. Brain-like 3D in vitro cultures compute by receiving and sending information via a multielectrode array. Applying spatiotemporal electrical stimulation, this approach not only exhibits nonlinear dynamics and fading memory properties but also learns from training data. Further experiments demonstrate real-world applications in solving non-linear equations. This approach may provide new insights into AI hardware.

Artificial intelligence (AI) is reshaping the future of human life across various real-world fields such as industry, medicine, society, and education [1]. The remarkable success of AI has been largely driven by the rise of artificial neural networks (ANNs), which process vast numbers of real-world datasets (big data) using silicon computing chips [2, 3]. However, current AI hardware keeps AI from reaching its full potential, since training ANNs on current computing hardware produces massive heat and is heavily time- and energy-consuming [4–6], significantly limiting the scale, speed, and efficiency of ANNs. Moreover, current AI hardware is approaching its theoretical limit, with progress slowing and no longer following Moore’s law [7, 8], and it faces challenges stemming from the physical separation of data from data-processing units, known as the ‘von Neumann bottleneck’ [9, 10]. Thus, AI needs a hardware revolution [8, 11].

A breakthrough in AI hardware may be inspired by the structure and function of the human brain, which has a remarkably efficient ability, known as natural intelligence (NI), to process and learn from spatiotemporal information. For example, a human brain forms a 3D living complex biological network of about 200 billion cells linked to one another via hundreds of trillions of nanometer-sized synapses [12, 13]. This high efficiency makes the human brain ideal hardware for AI. Indeed, a typical human brain expends about 20 watts of power, while current AI hardware consumes about 8 million watts to drive a comparable ANN [6]. Moreover, the human brain can effectively process and learn information from noisy data at minimal training cost through neuronal plasticity and neurogenesis [14, 15], avoiding the huge energy consumption incurred when the same job is done by current high-precision computing approaches [12, 13].

Jul 26, 2024

Creation of a deep learning algorithm to detect unexpected gravitational wave events

Posted by in categories: information science, physics, robotics/AI

Starting with the direct detection of gravitational waves in 2015, scientists have relied on a bit of a kludge: they can only detect those waves that match theoretical predictions, which is rather the opposite of how science is usually done.
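The "kludge" described here is matched filtering: correlating the data stream against precomputed template waveforms, so only signals resembling a template are found. A toy illustration with a synthetic burst buried in Gaussian noise (the template shape, amplitude, and location are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2048, 64
# Toy windowed-sinusoid "template" standing in for a predicted waveform
template = np.sin(2 * np.pi * np.arange(m) / 16) * np.hanning(m)

data = rng.normal(0.0, 1.0, n)          # detector noise
true_loc = 1200
data[true_loc:true_loc + m] += 3.0 * template  # inject the signal

# Normalized cross-correlation of the data with the template
corr = np.correlate(data, template, mode="valid")
local_energy = np.convolve(data ** 2, np.ones(m), mode="valid")
corr /= np.linalg.norm(template) * np.sqrt(local_energy)

peak = int(np.argmax(np.abs(corr)))
print(peak)  # correlation peaks near the injected location
```

A signal with no matching template produces no such peak, which is exactly the blind spot the deep-learning approach in the post is meant to cover.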

Jul 26, 2024

Optimization algorithm successfully computes the ground state of interacting quantum matter

Posted by in categories: information science, quantum physics, robotics/AI

Over the past decades, computer scientists have developed various computing tools that could help to solve challenges in quantum physics. These include large-scale deep neural networks that can be trained to predict the ground states of quantum systems. This method is now referred to as neural quantum states (NQSs).
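As a bare-bones illustration of the variational idea behind NQS, with a plain parameter vector standing in for the neural network and exact linear algebra in place of Monte Carlo sampling, one can minimize the Rayleigh quotient of a tiny spin Hamiltonian by gradient descent and compare against exact diagonalization:

```python
import numpy as np

# Two-spin transverse-field Ising Hamiltonian: H = -Z1 Z2 - g (X1 + X2)
g = 1.0
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
H = -np.kron(Z, Z) - g * (np.kron(X, I) + np.kron(I, X))

def energy(psi):
    """Rayleigh quotient <psi|H|psi> / <psi|psi> for a real vector psi."""
    return psi @ H @ psi / (psi @ psi)

rng = np.random.default_rng(0)
psi = rng.normal(size=4)   # variational "parameters": the 4 amplitudes
lr = 0.05
for _ in range(2000):
    # Gradient of the Rayleigh quotient with respect to psi
    grad = 2 * (H @ psi - energy(psi) * psi) / (psi @ psi)
    psi -= lr * grad

e_var = energy(psi)
e_exact = np.linalg.eigvalsh(H).min()   # exact ground-state energy
print(f"variational: {e_var:.6f}, exact: {e_exact:.6f}")
```

An actual NQS replaces the raw amplitude vector with a neural network over spin configurations and estimates the energy gradient by sampling, which is what lets the method scale to systems far too large to diagonalize.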

Jul 25, 2024

Network properties determine neural network performance

Posted by in categories: information science, mapping, mathematics, mobile phones, robotics/AI, transportation

Machine learning influences numerous aspects of modern society, empowers new technologies, from AlphaGo to ChatGPT, and increasingly materializes in consumer products such as smartphones and self-driving cars. Despite the vital role and broad applications of artificial neural networks, we lack systematic approaches, such as network science, to understand their underlying mechanism. The difficulty is rooted in many possible model configurations, each with different hyper-parameters and weighted architectures determined by noisy data. We bridge the gap by developing a mathematical framework that maps the neural network’s performance to the network characteristics of the line graph governed by the edge dynamics of stochastic gradient descent differential equations. This framework enables us to derive a neural capacitance metric to universally capture a model’s generalization capability on a downstream task and predict model performance using only early training results. The numerical results on 17 pre-trained ImageNet models across five benchmark datasets and one NAS benchmark indicate that our neural capacitance metric is a powerful indicator for model selection based only on early training results and is more efficient than state-of-the-art methods.
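The neural capacitance machinery itself is beyond a short sketch, but the task it solves, predicting final performance from early training results, can be illustrated with a much cruder stand-in: fitting a power-law learning curve to the first few epochs and extrapolating to the end of training. The curves below are synthetic, and the power-law form is an assumption, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_and_extrapolate(acc_early, t_final, c=0.5):
    """Fit acc(t) = a - b * t^(-c) to early epochs; predict acc at t_final.

    With the exponent c fixed, the model is linear in (a, b), so a plain
    least-squares fit suffices.
    """
    t = np.arange(1, len(acc_early) + 1)
    A = np.stack([np.ones_like(t, dtype=float), -t ** (-c)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, acc_early, rcond=None)
    return a - b * t_final ** (-c)

# Synthetic "true" accuracy curves for three candidate models, 50 epochs each
t = np.arange(1, 51)
curves = [0.90 - 0.4 * t ** -0.5,
          0.85 - 0.2 * t ** -0.5,
          0.80 - 0.1 * t ** -0.5]

# Predict epoch-50 accuracy from the first 5 (slightly noisy) epochs
preds = [fit_and_extrapolate(c[:5] + rng.normal(0, 0.002, 5), 50) for c in curves]
finals = [c[-1] for c in curves]
print(np.argsort(preds), np.argsort(finals))  # early extrapolation recovers the ranking
```

The point of the paper's metric is to get this kind of early ranking in a principled way, from the training dynamics themselves rather than an assumed curve shape.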

Jul 25, 2024

The Clinical, Philosophical, Evolutionary and Mathematical Machinery of Consciousness: An Analytic Dissection of the Field Theories and a Consilience of Ideas

Posted by in categories: biotech/medical, evolution, information science, mathematics, neuroscience, quantum physics

The Cartesian model of mind-body dualism concurs with religious traditions. However, science has supplanted this idea with an energy-matter theory of consciousness, where matter is equivalent to the body and energy replaces the mind or soul. This equivalency is analogous to the concept of the interchange of mass and energy as expressed by Einstein’s famous equation E = mc². Immanuel Kant, in his Critique of Pure Reason, provided the intellectual and theoretical framework for a theory of mind or consciousness. Any theory of consciousness must include the fact that a conscious entity, as far as is known, is a wet biological medium (the brain), of stupendously high entropy. This organ or entity generates a field that must account for the “binding problem”, which we will define. This proposed field, the conscious electro-magnetic information (CEMI) field, also has physical properties, which we will outline. We will also demonstrate the seamless transition of the Kantian philosophy of the a priori conception of space and time, the organs of perception and conception, into the CEMI field of consciousness. We will explore the concept of the CEMI field and its neurophysiological correlates, and in particular, synchronous and coherent gamma oscillations of various neuronal ensembles, as in William J Freeman’s experiments in the early 1970s with olfactory perception in rabbits. The expansion of the temporo-parietal-occipital (TPO) cortex in hominid evolution epitomizes metaphorical and abstract thinking. This area of the cortex, with synchronous thalamo-cortical oscillations, has the best fit for a minimal neural correlate of consciousness. Our field theory shifts consciousness from an abstract idea to a tangible energy with defined properties and a mathematical framework. Even further, it is not a coincidence that the cerebral cortex is very thin with respect to the diameter of the brain.
This is in keeping with its fantastically high entropy, as we see in the event horizon of a black hole and the conformal field theory/anti-de Sitter (CFT/AdS) holographic model of the universe. We adumbrate the uniqueness of consciousness of an advanced biological system such as the human brain and draw insight from Avicenna’s Gedankenexperiment, the floating man thought experiment. The multi-system high-volume afferentation of a biological wet system honed after millions of years of evolution, its high entropy, and the CEMI field variation inducing currents in motor output pathways are proposed to spark the seeds of consciousness. We will also review Karl Friston’s free energy principle, the concept of belief-update in a Bayesian inference framework, the minimization of the divergence of prior and posterior probability distributions, and the entropy of the brain. We will streamline these highly technical papers, which view consciousness as a minimization principle akin to Hilbert’s action in deriving Einstein’s field equation or Feynman’s sum of histories in quantum mechanics. Consciousness here is interpreted as flow of probability densities on a Riemannian manifold, where the gradient of ascent on this manifold across contour lines determines the magnitude of perception or the degree of update of the belief-system in a Bayesian inference model. Finally, the science of consciousness has transcended metaphysics, and its study is now rooted in the latest advances of neurophysiology and neuroradiology, under the aegis of mathematics.
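The belief-update step mentioned here is ordinary Bayesian inference: prior times likelihood gives the posterior, and the KL divergence from prior to posterior is one measure of how much the belief moved. A minimal discrete example, with hypotheses and likelihood values invented purely for illustration:

```python
import numpy as np

# Two competing hypotheses with a flat prior
prior = np.array([0.5, 0.5])          # P(hypothesis)
likelihood = np.array([0.9, 0.2])     # P(observation | hypothesis), assumed

# Bayes' rule: posterior proportional to prior * likelihood
posterior = prior * likelihood
posterior /= posterior.sum()

# KL divergence D(posterior || prior): the "size" of the belief update
kl = float(np.sum(posterior * np.log(posterior / prior)))
print(posterior, round(kl, 4))
```

In the free-energy framing, perception is cast as driving exactly this kind of divergence term toward a minimum over continuous probability densities rather than two discrete hypotheses.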

Keywords: anatomy & physiology; brain anatomy; disorders of consciousness; philosophy.

Copyright © 2020, Kesserwani et al.

Page 14 of 322