
Archive for the ‘robotics/AI’ category: Page 121

Jun 2, 2024

Memristor-based adaptive neuromorphic perception in unstructured environments

Posted by in categories: information science, robotics/AI, transportation

Differential neuromorphic computing, as a memristor-assisted perception method, holds the potential to enhance subsequent decision-making and control processes. Although conventional PID control and the proposed differential neuromorphic computing share the fundamental principle of adjusting outputs in response to feedback, they diverge significantly in how they manipulate data (Supplementary Discussion 12 and Fig. S26): our method leverages the nonlinear characteristics of the memristor and a dynamic selection scheme to execute more complex data manipulation than the linear, coefficient-based error correction of PID. Additionally, the intrinsic memory function of memristors in our system enables real-time adaptation to changing environments, a significant advantage over the static parameter configuration of PID systems. To perform similar adaptive control functions in tactile experiments, a von Neumann architecture follows a multi-step process involving several data movements:

1. Input data about the piezoresistive film state is transferred to the system memory via an I/O interface.
2. This sensory data is moved from memory to the cache.
3. It is then forwarded to the Arithmetic Logic Unit (ALU) and waits for processing.
4. Historical tactile information is also transferred from memory to the cache, unless it is already present.
5. This historical data is forwarded to the ALU.
6. The ALU combines the current sensory and historical data and returns the updated historical data to the cache.

In contrast, our memristor-based approach simplifies this process to three primary steps:

1. The ADC reads data from the piezoresistive film.
2. The ADC reads the current state of the memristor, which represents the historical tactile stimuli.
3. The DAC, controlled by FPGA logic, updates the memristor state based on the inputs.
This process reduces the costs of operation and enhances data processing efficiency.
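The three-step loop can be sketched as a toy simulation. This is not the authors' code: the update rule, the parameter values, and the function names are all hypothetical stand-ins, chosen only to show how the memristor's conductance state can double as stored tactile history, so each cycle needs just a sensor read, a state read, and a state write.

```python
# Toy model of the memristor-assisted perception loop (illustrative only).
# The memristor state stands in for accumulated tactile history; the
# retention/weight blend is a hypothetical update rule, not the paper's.

def memristor_update(state, sensor_reading, retention=0.9, weight=0.1):
    """Blend new tactile input into the stored device state."""
    return retention * state + weight * sensor_reading

def perception_cycle(state, sensor_reading):
    # Step 1: ADC reads the piezoresistive film (sensor_reading).
    # Step 2: ADC reads the memristor state (state), i.e., the history.
    # Step 3: DAC writes the updated state back to the device.
    return memristor_update(state, sensor_reading)

state = 0.0
for reading in [1.0, 1.0, 0.2]:   # a press-then-release touch sequence
    state = perception_cycle(state, reading)
print(round(state, 4))
```

Because the history lives in the device itself, no cache or memory round-trips are needed between cycles.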

In real-world settings, robotic tactile systems must process large amounts of tactile data and respond as quickly as possible, ideally within 100 ms, similar to human tactile systems58,59. State-of-the-art robotic tactile technologies can process sudden changes in force, such as slip detection, at millisecond levels (from 500 μs to 50 ms)59,60,61,62, and the response time of our tactile system has also reached this level. For visual processing, suppose a vehicle travels at 40 km per hour in an urban area and control must be effective every 1 m. That requirement translates to a maximum allowable response time of 90 ms for the entire processing pipeline, which includes sensors, operating systems, middleware, and applications such as object detection, prediction, and vehicle control63,64. When incorporating our proposed memristor-assisted method into conventional camera systems, the additional time delay comprises the delay of the filter circuits (less than 1 ms) and the switching time of the memristor device, which ranges from nanoseconds (ns) down to picoseconds (ps)21,65,66,67. Compared with the required overall response time of the pipeline, these additions are negligible, demonstrating the potential of applying our method in real-world driving scenarios68. Although our memristor-based perception method meets the response-time requirements of the described scenarios, it faces several challenges that must be addressed for real-world applications. Apart from common issues such as variability in device performance and the nonlinear dynamics of memristive responses, our approach needs to overcome the following challenges:
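The 90 ms figure quoted above follows directly from the stated speed and control distance; a quick back-of-the-envelope check:

```python
# Latency budget for a vehicle at 40 km/h that must act every 1 m.

speed_kmh = 40.0
speed_ms = speed_kmh * 1000 / 3600   # ~11.11 m/s
budget_s = 1.0 / speed_ms            # time available to travel 1 m
print(f"{budget_s * 1000:.0f} ms")   # → 90 ms
```

Against this budget, a sub-millisecond filter delay plus a nanosecond-scale memristor switch is indeed negligible.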

Currently, the modulation voltage applied to memristors is preset based on the external sensory feature, and the control algorithm is based on hard threshold comparison. This setting lacks the flexibility required for diverse real-world environments where sensory inputs and required responses can vary significantly. Therefore, it is crucial to develop a more automatic memristive modulation method along with a control algorithm that can dynamically adjust based on varying application scenarios.
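The "hard threshold comparison" limitation described above can be made concrete with a tiny sketch. All thresholds and voltages here are hypothetical values for illustration; the point is only that a preset lookup like this cannot adapt when sensory statistics shift.

```python
# Illustration (hypothetical values) of hard-threshold memristor
# modulation: a normalized sensory feature is mapped to one of a few
# preset voltages. This rigidity is exactly what a more automatic,
# dynamically adjusting control algorithm would replace.

def modulation_voltage(feature, thresholds=(0.3, 0.7),
                       voltages=(0.0, 0.5, 1.0)):
    """Map a normalized sensory feature to a preset modulation voltage."""
    low, high = thresholds
    if feature < low:
        return voltages[0]
    if feature < high:
        return voltages[1]
    return voltages[2]

print(modulation_voltage(0.5))   # → 0.5
```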

Jun 2, 2024

A 3D ray traced biological neural network learning model

Posted by in categories: biological, information science, robotics/AI

In artificial neural networks, many models are trained for a narrow task using a specific dataset. They face difficulties in solving problems that include dynamic input/output data types and changing objective functions. Whenever the input/output tensor dimension or the data type is modified, the machine learning models need to be rebuilt and subsequently retrained from scratch. Furthermore, many machine learning algorithms that are trained for a specific objective, such as classification, may perform poorly at other tasks, such as reinforcement learning or quantification.

Even if the input/output dimensions and the objective functions remain constant, the algorithms do not generalize well across different datasets. For example, a neural network trained on classifying cats and dogs does not perform well on classifying humans and horses despite both of the datasets having the exact same image input1. Moreover, neural networks are highly susceptible to adversarial attacks2. A small deviation from the training dataset, such as changing one pixel, could cause the neural network to have significantly worse performance. This problem is known as the generalization problem3, and the field of transfer learning can help to solve it.

Transfer learning4,5,6,7,8,9,10 addresses the problems presented above by allowing knowledge transfer from one neural network to another. A common way to use supervised transfer learning is to obtain a large pre-trained neural network and retrain it for a different but closely related problem. This significantly reduces training time and allows the model to be trained on a less powerful computer. Many researchers have taken pre-trained neural networks such as ResNet-5011 and retrained them to classify malicious software12,13,14,15. Another application of transfer learning is tackling the generalization problem, where the testing dataset is completely different from the training dataset. For example, every human has unique electroencephalography (EEG) signals because of their distinctive brain structure. Transfer learning addresses this by pretraining on a general-population EEG dataset and then retraining the model for a specific patient16,17,18,19,20. As a result, the neural network is dynamically tailored to a specific person and can interpret their EEG signals properly. Labeling large datasets by hand is tedious and time-consuming. In semi-supervised transfer learning21,22,23,24, either the source dataset or the target dataset is unlabeled, so the network can learn which pieces of information to extract and process without many labels.
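The supervised recipe described above — freeze a pre-trained feature extractor, retrain only a small output head — can be sketched without any deep-learning library. Everything below is a toy stand-in: real systems use networks like ResNet-50, and the frozen weights, data, and learning rate here are invented for illustration.

```python
# Minimal transfer-learning sketch: the "pre-trained" layer is frozen,
# and only the head weights are fit to the new task by gradient descent.

def features(x, frozen_w):
    """Frozen pre-trained layer: a fixed linear map (toy ResNet stand-in)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in frozen_w]

def train_head(data, frozen_w, lr=0.1, epochs=200):
    """Retrain only the head on the target task (squared-error descent)."""
    head = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = features(x, frozen_w)
            pred = sum(h * fi for h, fi in zip(head, f))
            err = pred - y
            head = [h - lr * err * fi for h, fi in zip(head, f)]
    return head

frozen_w = [[1.0, 0.0], [0.0, 1.0]]                  # pretend pre-trained weights
target_task = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]  # new, related task
head = train_head(target_task, frozen_w)
```

Because only the head is updated, training is fast and cheap — the same reason fine-tuning a real pre-trained network works on modest hardware.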

Jun 2, 2024

Geoffrey Hinton

Posted by in category: robotics/AI

Timestamps:
Early inspirations (00:00:00)
Meeting Ilya Sutskever (00:05:05)
Ilya’s intuition (00:06:12)
Understanding of LLMs (00:09:00)
Scaling neural networks (00:15:15)
What is language? (00:18:30)
The GPU revolution (00:21:35)
Human Brain Insights (00:25:05)
Feelings & analogies (00:29:05)
Problem selection (00:32:58)
Gradient processing (00:35:21)
Ethical implications (00:36:52)
Selecting talent (00:40:15)
Developing intuition (00:41:49)
The road to AGI (00:43:50)
Proudest moment (00:45:00)

Jun 2, 2024

AI headphones let wearer listen to a single person in a crowd, by looking at them just once

Posted by in category: robotics/AI

Noise-canceling headphones have gotten very good at creating an auditory blank slate. But allowing certain sounds from a wearer’s environment through the erasure still challenges researchers. The latest edition of Apple’s AirPods Pro, for instance, automatically adjusts sound levels for wearers — sensing when they’re in conversation, for instance — but the user has little control over whom to listen to or when this happens.

A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to “enroll” them. The system, called “Target Speech Hearing,” then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker.
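The "enrollment" idea can be sketched at a high level: represent each voice in the scene by an embedding vector, capture the target speaker's embedding once, then keep only the source most similar to it. This is not the UW team's actual system — the embeddings below are hand-made vectors, and a real system learns them from audio.

```python
# Hedged sketch of embedding-based speaker enrollment: after a one-time
# "enroll" step, the source closest (by cosine similarity) to the
# enrolled embedding is kept and everything else is suppressed.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_target(enrolled, sources):
    """Return the name of the source closest to the enrolled embedding."""
    return max(sources, key=lambda name: cosine(enrolled, sources[name]))

enrolled = [0.9, 0.1, 0.2]                  # captured during the 3-5 s look
sources = {
    "target_speaker": [0.88, 0.12, 0.19],
    "background_talker": [0.1, 0.9, 0.3],
    "traffic_noise": [0.2, 0.2, 0.95],
}
print(select_target(enrolled, sources))     # → target_speaker
```

Because selection depends only on embedding similarity, it keeps working as the listener moves and no longer faces the speaker.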


Jun 1, 2024

Cambridge Scientists Develop “Third Thumb” That Could Redefine Human Capability

Posted by in categories: biotech/medical, cyborgs, robotics/AI

Researchers at Cambridge have shown that the Third Thumb, a robotic prosthetic, can be quickly mastered by the public, enhancing manual dexterity. The study stresses the importance of inclusive design to ensure technologies benefit everyone, with significant findings on performance across different demographics.

Cambridge researchers demonstrated that people can rapidly learn to control a prosthetic extra thumb, known as a “third thumb,” and use it effectively to grasp and handle objects.

The team tested the robotic device on a diverse range of participants, which they say is essential for ensuring new technologies are inclusive and can work for everyone.

Jun 1, 2024

How to Raise Your Artificial Intelligence: A Conversation with Alison Gopnik and Melanie Mitchell

Posted by in categories: internet, robotics/AI

JULIEN CROCKETT: Let’s start with the tension at the heart of AI: we understand and talk about AI systems as if they are both mere tools and intelligent actors that might one day come alive. Alison, you’ve argued that the currently popular AI systems, LLMs, are neither intelligent nor dumb—that those are the wrong categories by which to understand them. Rather, we should think of them as cultural technologies, like the printing press or the internet. Why is a “cultural technology” a better framework for understanding LLMs?

Jun 1, 2024

How FinalSpark Wants to Contribute to a Low Carbon Future. The Energy-saving Potential of Biocomputing

Posted by in categories: futurism, robotics/AI

One of the trade-offs of today’s technological progress is the big energy costs necessary to process digital information. To make AI models using silicon-based processors, we need to train them with huge amounts of data. The more data, the better the model. This is perfectly illustrated by the current success of large language models, such as ChatGPT. The impressive abilities of such models are due to the fact that huge amounts of data were used for their training.

The more data we use to teach digital AI, the better it becomes, but also the more computational power is needed.

This is why, to develop AI further, we need to consider alternatives to the current status quo in silicon-based technologies. Indeed, we have recently seen many publications on this topic involving Sam Altman, the CEO of OpenAI.

Jun 1, 2024

Bilingual AI brain implant helps stroke survivor communicate in Spanish and English

Posted by in category: robotics/AI

The implant uses a form of AI to turn the man’s brain activity into sentences, allowing him to participate in a bilingual conversation and “switch between languages.”

May 31, 2024

Research brings together humans, robots and generative AI to create art

Posted by in category: robotics/AI

Researchers at Carnegie Mellon University’s Robotics Institute (RI) have developed a robotic system that interactively co-paints with people. Collaborative FRIDA (CoFRIDA) can work with users of any artistic ability, inviting collaboration to create art in the real world.

May 31, 2024

Sony Will Use AI to Cut Film Costs, Says CEO Tony Vinciquerra

Posted by in categories: economics, entertainment, robotics/AI

The next “Spider-Verse” film may have a new animation style: AI.

Sony Pictures Entertainment (SPE) CEO Tony Vinciquerra does not mince words when it comes to artificial intelligence. He likes the tech — or at the very least, he likes the economics.

“We are very focused on AI. The biggest problem with making films today is the expense,” Vinciquerra said at Sony’s Thursday (Friday in Japan) investor event. “We will be looking at ways to…produce both films for theaters and television in a more efficient way, using AI primarily.”
