Archive for the ‘information science’ category: Page 109

Nov 15, 2022

Closed-form continuous-time neural networks

Posted by in categories: information science, robotics/AI

Physical dynamical processes can be modelled with differential equations that may be solved with numerical approaches, but this is computationally costly as the processes grow in complexity. In a new approach, dynamical processes are modelled with closed-form continuous-depth artificial neural networks. Improved efficiency in training and inference is demonstrated on various sequence modelling tasks including human action recognition and steering in autonomous driving.
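
The paper's core idea admits a compact illustration. Below is a minimal NumPy sketch of a closed-form continuous-time (CfC) update, in which the hidden state at time t is a learned, time-gated blend of two branches rather than the output of a numerical ODE solver. The branch names, tanh heads, and shapes are simplifying assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_cell(x, I, t, Wf, Wg, Wh, bf, bg, bh):
    """One closed-form continuous-time (CfC) update (illustrative).

    Instead of numerically integrating a differential equation, the
    hidden state at time t is computed directly as a time-dependent
    blend of two learned branches, gated by a third branch that
    plays the role of a learned time constant.
    """
    z = np.concatenate([x, I])          # hidden state + current input
    f = np.tanh(Wf @ z + bf)            # time-constant (gating) branch
    g = np.tanh(Wg @ z + bg)            # branch dominant near t = 0
    h = np.tanh(Wh @ z + bh)            # branch dominant as t grows
    gate = sigmoid(-f * t)              # closed-form time gate
    return gate * g + (1.0 - gate) * h  # new hidden state at time t
```

Because the state is evaluated in closed form rather than by stepping a solver, inference cost does not grow with solver steps, which is where the reported efficiency gains come from.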

Nov 14, 2022

Computer scientists succeed in solving algorithmic riddle from the 1950s

Posted by in categories: computing, information science, mapping, mathematics

For more than half a century, researchers around the world have been struggling with an algorithmic problem known as “the single source shortest path problem.” The problem is essentially about how to devise a mathematical recipe that best finds the shortest route between a node and all other nodes in a network, where there may be connections with negative weights.

Sound complicated? Possibly. But in fact, this type of calculation is already used in a wide range of the apps and technologies that we depend upon to find our way around—as Google Maps guides us across landscapes and through cities, for example.

Now, researchers from the University of Copenhagen’s Department of Computer Science have succeeded in solving the single source shortest path problem, a riddle that has stumped researchers and experts for decades.
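
The researchers' new algorithm is far more involved than can be shown here, but the classical baseline it improves upon, Bellman-Ford, conveys the shape of the problem: repeatedly relax every edge, and negative weights are handled naturally. A minimal sketch, with illustrative naming:

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths, allowing negative edge weights.

    n:      number of nodes, labeled 0..n-1
    edges:  list of (u, v, w) triples, where w may be negative
    Runs in O(n * m) time; the new algorithm is far faster.
    Returns a distance list, or None if a reachable negative cycle
    makes "shortest path" ill-defined.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):               # n-1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one extra round: any further
        if dist[u] + w < dist[v]:        # improvement means a negative
            return None                  # cycle is reachable
    return dist
```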

Nov 12, 2022

Artificial Intelligence is the Magic Tool the World was Waiting For

Posted by in categories: business, economics, information science, robotics/AI, sustainability, transportation

Artificial Intelligence (AI) is rapidly changing the world. Daily advances in AI capabilities have led to a number of innovations, including autonomous vehicles, self-piloting aircraft, robotics, and more. Some AI technologies support predictions about the future and more accurate decision-making. AI is the best friend of technology leaders who want to make the world a better place with unfolding inventions.

Whether humans agree or not, AI developments are slowly impacting all aspects of society, including the economy. However, some technologies might also bring challenges and risks to the working environment. To keep track of AI development, good leaders head the AI world to ensure trust, reliability, safety and accuracy.

Intelligent behaviour has long been considered a uniquely human attribute. But as computer science and IT networks evolved, artificial intelligence, and the people who stood behind it, came into the spotlight. AI in today’s world is both advancing and being kept under control. Without a transformation here, AI will never fully resolve the problems and dilemmas of business with data and algorithms alone. Wise leaders do not merely create and capture vital economic value; they build more sustainable and legitimate organisations. Leaders in AI sectors have eyes to see AI decisions and ears to hear employees’ perspectives.

Nov 12, 2022

Clever Machines Learn How to Be Curious

Posted by in categories: computing, information science, neuroscience

“You can think of curiosity as a kind of reward which the agent generates internally on its own, so that it can go explore more about its world,” Agrawal said. This internally generated reward signal is known in cognitive psychology as “intrinsic motivation.” The feeling you may have vicariously experienced while reading the game-play description above — an urge to reveal more of whatever’s waiting just out of sight, or just beyond your reach, just to see what happens — that’s intrinsic motivation.

Humans also respond to extrinsic motivations, which originate in the environment. Examples of these include everything from the salary you receive at work to a demand delivered at gunpoint. Computer scientists apply a similar approach called reinforcement learning to train their algorithms: The software gets “points” when it performs a desired task, while penalties follow unwanted behavior.
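
As a concrete illustration, the sketch below folds an intrinsic curiosity bonus into a standard reinforcement-learning reward. It is a deliberate simplification of curiosity modules such as the one Agrawal co-developed, which predict in a learned feature space and condition on the agent's action; the function names and the beta weight here are illustrative assumptions.

```python
import numpy as np

def curiosity_reward(extrinsic, state, next_state, predict_next, beta=0.2):
    """Total reward = extrinsic reward + intrinsic curiosity bonus.

    The bonus is the error of a learned forward model: the worse the
    agent predicts what happens next, the more "surprising" the
    transition, and the more the agent is rewarded for exploring it.
    """
    predicted = predict_next(state)                   # learned forward model
    surprise = float(np.mean((predicted - next_state) ** 2))
    return extrinsic + beta * surprise                # beta scales curiosity
```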

Nov 11, 2022

A Brain-Inspired Chip Can Run AI With Far Less Energy

Posted by in categories: information science, robotics/AI

An energy-efficient chip called NeuRRAM fixes an old design flaw to run large-scale AI algorithms on smaller devices, reaching the same accuracy as wasteful digital computers.

Nov 11, 2022

New Theory Cracks Open the Black Box of Deep Learning

Posted by in categories: information science, robotics/AI

A new idea is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

Nov 10, 2022

AI Researchers At Mayo Clinic Introduce A Machine Learning-Based Method For Leveraging Diffusion Models To Construct A Multitask Brain Tumor Inpainting Algorithm

Posted by in categories: biotech/medical, information science, privacy, robotics/AI

The number of AI and, in particular, machine learning (ML) publications related to medical imaging has increased dramatically in recent years. A current PubMed search using the MeSH keywords “artificial intelligence” and “radiology” yielded 5,369 papers in 2021, more than five times the number found in 2011. ML models are constantly being developed to improve healthcare efficiency and outcomes, from classification to semantic segmentation, object detection, and image generation. Numerous published reports in diagnostic radiology, for example, indicate that ML models can perform as well as or even better than medical experts in specific tasks, such as anomaly detection and pathology screening.

It is thus undeniable that, when used correctly, AI can assist radiologists and drastically reduce their labor. Despite the growing interest in developing ML models for medical imaging, significant challenges can limit such models’ practical applications or even predispose them to substantial bias. Data scarcity and data imbalance are two of these challenges. On the one hand, medical imaging datasets are frequently much smaller than natural image datasets such as ImageNet, and pooling institutional datasets or making them public may be impossible due to patient privacy concerns. On the other hand, even the medical imaging datasets that data scientists do have access to are often far from balanced.

In other words, the volume of medical imaging data for patients with specific pathologies is significantly lower than for patients with common pathologies or healthy people. Using insufficiently large or imbalanced datasets to train or evaluate a machine learning model may result in systemic biases in model performance. Synthetic image generation is one of the primary strategies to combat data scarcity and data imbalance, in addition to the public release of deidentified medical imaging datasets and the endorsement of strategies such as federated learning, enabling machine learning (ML) model development on multi-institutional datasets without data sharing.
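
To make the inpainting strategy concrete, here is a heavily simplified, RePaint-style sketch of how a denoising diffusion model can synthesize a masked region (say, a tumor site) while preserving the surrounding anatomy. The model and scheduler interfaces are assumptions for illustration, not the Mayo Clinic team's actual code.

```python
import torch

@torch.no_grad()
def diffusion_inpaint(model, image, mask, scheduler, steps=1000):
    """RePaint-style diffusion inpainting (simplified sketch).

    mask == 1 marks tissue to keep from the original image;
    mask == 0 marks the region the model should synthesize.
    `model` predicts noise and `scheduler` provides forward-noising
    and reverse-denoising steps (assumed DDPM-like interface).
    """
    x = torch.randn_like(image)                       # start from pure noise
    for t in reversed(range(steps)):
        known = scheduler.add_noise(image, t)         # forward-noise original
        eps = model(x, t)                             # predict the noise in x
        unknown = scheduler.reverse_step(x, eps, t)   # one denoising step
        x = mask * known + (1 - mask) * unknown       # stitch the regions
    return x
```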

Nov 9, 2022

Chirping toward a Quantum RAM

Posted by in categories: computing, information science, mobile phones, nanotechnology, quantum physics

A new quantum random-access memory device reads and writes information using a chirped electromagnetic pulse and a superconducting resonator, making it significantly more hardware-efficient than previous devices.

Random-access memory (or RAM) is an integral part of a computer, acting as a short-term memory bank from which information can be quickly recalled. Applications on your phone or computer use RAM so that you can switch between tasks in the blink of an eye. Researchers working on building future quantum computers hope that such systems might one day operate with analogous quantum RAM elements, which they envision could speed up the execution of a quantum algorithm [1, 2] or increase the density of information storable in a quantum processor. Now James O’Sullivan of the London Centre for Nanotechnology and colleagues have taken an important step toward making quantum RAM a reality, demonstrating a hardware-efficient approach that uses chirped microwave pulses to store and retrieve quantum information in atomic spins [3].

As with quantum computers themselves, experimental demonstrations of quantum memory devices are in their early days. One leading chip-based platform for quantum computation uses circuits made from superconducting metals. In this system, the central processing is done with superconducting qubits, which send and receive information via microwave photons. At present, however, there exists no quantum memory device that can reliably store these photons for long times. Luckily, scientists have a few ideas.

Nov 9, 2022

Speaking the same language: How artificial neurons mimic biological neurons

Posted by in categories: biological, chemistry, information science, robotics/AI

Artificial intelligence has long been a hot topic: a computer algorithm “learns” by being taught with examples of what is “right” and what is “wrong.” Unlike a computer algorithm, the human brain works with neurons, the cells of the brain. These are trained and pass on signals to other neurons. This complex network of neurons and the connecting pathways, the synapses, controls our thoughts and actions.

Biological signals are much more diverse than those in conventional computers. For instance, neurons in a biological neural network communicate with ions, biomolecules and neurotransmitters. More specifically, neurons communicate either chemically, by emitting messenger substances such as neurotransmitters, or electrically, via so-called “action potentials” or “spikes.”

Artificial neurons are a current area of research. Here, efficient communication between biology and electronics requires artificial neurons that realistically emulate the function of their biological counterparts. This means artificial neurons capable of processing the diversity of signals that exist in biology. Until now, most artificial neurons have only emulated their biological counterparts electrically, without taking into account the wet biological environment of ions, biomolecules and neurotransmitters.
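
For a sense of what emulating a neuron "electrically" involves, here is a minimal leaky integrate-and-fire model, the simplest spiking abstraction: the membrane voltage leaks toward rest, integrates incoming current, and fires (then resets) when it crosses a threshold. The membrane constants are illustrative textbook values, not parameters from the work described.

```python
import numpy as np

def lif_neuron(currents, dt=1e-3, tau=0.02, v_rest=-65e-3,
               v_thresh=-50e-3, v_reset=-70e-3, r_m=1e7):
    """Leaky integrate-and-fire neuron (illustrative constants, SI units).

    currents: input current (A) at each time step of length dt (s).
    Returns a boolean spike train of the same length.
    """
    v, spikes = v_rest, []
    for i_in in currents:
        dv = (-(v - v_rest) + r_m * i_in) * dt / tau  # leak + integration
        v += dv
        if v >= v_thresh:                             # threshold crossing
            spikes.append(True)
            v = v_reset                               # reset after spiking
        else:
            spikes.append(False)
    return spikes
```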

Nov 8, 2022

Digital Doubles and Second Selves

Posted by in categories: augmented reality, automation, big data, computing, cyborgs, evolution, futurism, information science, innovation, internet, life extension, machine learning, neuroscience, posthumanism, robotics/AI, singularity, software, supercomputing

This time I come to talk about a new concept in this Age of Artificial Intelligence and the already insipid world of Social Networks. Initially, quite a few years ago, I named it “Counterpart” (long before the TV series “Counterpart” and “Black Mirror”, or even the movie “Transcendence”).

It was the essence of the ETER9 Project that was taking shape in my head.

Over the years, and with the evolution of technologies (and of the human being himself), the “Counterpart” concept has kept getting better; with each passing day, it makes more sense!

Imagine a purely digital receptacle with the basics inside, like that Intermediate Software (BIOS(1)) that computers have between the Hardware and the Operating System. That receptacle waits for you. One way or another, it waits patiently for you, as if waiting for a Soul to come alive in the ether of digital existence.

Continue reading “Digital Doubles and Second Selves” »