Archive for the ‘robotics/AI’ category: Page 205

May 12, 2024

MultiOn Innovates With Virtual Agents

Posted in category: robotics/AI

In the brand-new world of AI, we’re slowly learning that there’s a big difference between a small, routine transaction like buying a hamburger and something much more complex and high-stakes.


For reference, here’s more on the company’s mission statement.

May 12, 2024

Microsoft, Sanctuary AI tie up to build better general-purpose robots

Posted in category: robotics/AI

The robotics company will use Microsoft’s Azure infrastructure for training, inference, networking and storage.

May 12, 2024

AI systems are getting better at tricking us

Posted in category: robotics/AI

But what we perceive as deception is AI mindlessly achieving the goals we’ve set for it.

May 11, 2024

Deep Learning Illustrated, Part 3: Convolutional Neural Networks

Posted in category: robotics/AI

An illustrated and intuitive guide on the inner workings of a CNN.

May 11, 2024

AlphaFold 3.0: the AI protein predictor gets an upgrade

Posted in category: robotics/AI

Hear the biggest stories from the world of science | 8 May 2024.

May 11, 2024

“Bionic eye” discovers Plato’s final resting place

Posted in categories: cyborgs, robotics/AI, transhumanism

This led to the creation of a “bionic eye” that uses a combination of AI and several advanced scanning techniques, including optical imaging, thermal imaging, and tomography (the technique used for CT scans), to capture differences between parts of the scrolls that were blank and those that contained ink — all without having to physically unroll them.

Where’s Plato? On April 23, team leader Graziano Ranocchia announced that the group had managed to extract about 1,000 words from a scroll titled “The History of the Academy” and that the words revealed Plato’s burial place: a private part of the garden near a shrine to the Muses.

The recovered text, which accounted for about 30% of the scroll, also revealed that Plato may have been sold into slavery between 404 and 399 BC — historians previously thought this had happened later in the philosopher’s life, around 387 BC.

May 11, 2024

Scientists uncover quantum-inspired vulnerabilities in neural networks: the role of conjugate variables in system attacks

Posted in categories: mathematics, quantum physics, robotics/AI

In a recent study merging the fields of quantum physics and computer science, Dr. Jun-Jie Zhang and Prof. Deyu Meng have explored the vulnerabilities of neural networks through the lens of the uncertainty principle in physics. Their work, published in the National Science Review, draws a parallel between the susceptibility of neural networks to targeted attacks and the limitations imposed by the uncertainty principle—a well-established theory in quantum physics that highlights the challenges of measuring certain pairs of properties simultaneously.

The researchers’ quantum-inspired analysis of neural network vulnerabilities suggests that adversarial attacks leverage a trade-off between the precision of input features and that of their computed gradients. “When considering the architecture of deep neural networks, which involve a loss function for learning, we can always define a conjugate variable for the inputs by determining the gradient of the loss function with respect to those inputs,” Dr. Jun-Jie Zhang, whose expertise lies in mathematical physics, stated in the paper.

The researchers hope this work will prompt a reevaluation of the assumed robustness of neural networks and encourage a deeper understanding of their limitations. By subjecting a neural network model to adversarial attacks, Dr. Zhang and Prof. Meng observed a trade-off between the model’s accuracy and its resilience.
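To make the “conjugate variable” idea concrete, here is a minimal, hypothetical sketch (my own illustration, not code from the paper): for a one-layer logistic-regression “network” with a cross-entropy loss, the gradient of the loss with respect to the input plays the role of the conjugate variable, and an FGSM-style perturbation along its sign exploits exactly the trade-off described above. All names and the model setup are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(w, b, x, y):
    """Cross-entropy loss and its gradient with respect to the input x.
    grad_x is the input's "conjugate variable" in the sense discussed above."""
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w
    return loss, grad_x

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # toy "network" weights
b = 0.1
x = rng.normal(size=4)   # a clean input
y = 1.0                  # its true label

loss0, g = loss_and_input_grad(w, b, x, y)

# FGSM-style attack: a tiny step along the sign of the input gradient.
eps = 0.1
x_adv = x + eps * np.sign(g)
loss1, _ = loss_and_input_grad(w, b, x_adv, y)

print(loss1 > loss0)  # the small, targeted perturbation increases the loss
```

Even a perturbation far below the scale of the input features reliably raises the loss, which is the vulnerability the uncertainty-principle analogy is meant to capture.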

May 11, 2024

Optimizing Graph Neural Network Training with DiskGNN: A Leap Toward Efficient Large-Scale Learning

Posted in categories: innovation, robotics/AI

Graph Neural Networks (GNNs) are crucial in processing data from domains such as e-commerce and social networks because they manage complex structures. Traditionally, GNNs operate on data that fits within a system’s main memory. However, with the growing scale of graph data, many networks now require methods to handle datasets that exceed memory limits, introducing the need for out-of-core solutions where data resides on disk.

Despite their necessity, existing out-of-core GNN systems struggle to balance efficient data access with model accuracy. Current systems face a trade-off: they either suffer slow input/output from small, frequent disk reads or compromise accuracy by handling graph data in disconnected chunks. Previous solutions such as Ginex and MariusGNN, while pioneering, show significant drawbacks in training speed or accuracy as a result.

The DiskGNN framework, developed by researchers from Southern University of Science and Technology, Shanghai Jiao Tong University, Centre for Perceptual and Interactive Intelligence, AWS Shanghai AI Lab, and New York University, emerges as a transformative solution specifically designed to optimize the speed and accuracy of GNN training on large datasets. This system utilizes an innovative offline sampling technique that prepares data for quick access during training. By preprocessing and arranging graph data based on expected access patterns, DiskGNN reduces unnecessary disk reads, significantly enhancing training efficiency.
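The offline-sampling idea can be sketched in a toy form (my own illustration under assumed names, not DiskGNN’s actual implementation): sample each minibatch’s node IDs ahead of time, then pack those feature rows contiguously on disk in consumption order, so the training loop issues one sequential read per batch instead of many scattered row reads.

```python
import os
import tempfile
import numpy as np

num_nodes, feat_dim, batch_size, num_batches = 1000, 16, 32, 5
rng = np.random.default_rng(42)
features = rng.normal(size=(num_nodes, feat_dim)).astype(np.float32)

# Offline phase: decide in advance which nodes each minibatch will touch.
batches = [rng.choice(num_nodes, size=batch_size, replace=False)
           for _ in range(num_batches)]

# Pack feature rows on disk in exactly the order training will consume them.
packed_path = os.path.join(tempfile.mkdtemp(), "packed.bin")
packed = np.memmap(packed_path, dtype=np.float32, mode="w+",
                   shape=(num_batches * batch_size, feat_dim))
for i, ids in enumerate(batches):
    packed[i * batch_size:(i + 1) * batch_size] = features[ids]
packed.flush()

# Online phase: each minibatch is a single contiguous block read.
reader = np.memmap(packed_path, dtype=np.float32, mode="r",
                   shape=(num_batches * batch_size, feat_dim))
batch0 = reader[0:batch_size]
print(np.allclose(batch0, features[batches[0]]))  # True: same data, one read
```

The design choice being illustrated is that disk throughput favors large sequential reads, so trading an offline preprocessing pass for contiguous layout removes the scattered random reads that slow naive out-of-core training.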

May 11, 2024

The Quest for AGI Continues Despite Dire Warnings From Experts

Posted in category: robotics/AI

Musk, Gates, Hawking, Altman and Putin all fear artificial general intelligence (AGI). But what is AGI, and why might it be an advantage that more people are trying to develop it despite very serious risks?

“We are all so small and weak. Imagine how easy life would be if we had an owl to help us build nests,” said one sparrow to the flock. Others agreed:

“Yes, and we could use it to look after our elderly and our children. And it could give us good advice and keep an eye on the cat.”

May 11, 2024

Nick Bostrom’s ‘Deep Utopia’ On Our AI Future: Can We Have Meaning And Fun?

Posted in categories: cosmology, robotics/AI

A new book by Nick Bostrom is a major publishing and cultural event. His 2014 book, Superintelligence, helped to wake the world up to the impact of the first Big Bang in AI, the arrival of deep learning. Since then we have had a second Big Bang in AI, with the introduction of transformer systems like GPT-4. Bostrom’s previous book focused on the downside potential of advanced AI. His new one explores the upside.

Deep Utopia is an easier read than its predecessor, although its author cannot resist using some of the phraseology of professional philosophers, so readers may have to look up words like “modulo” and “simpliciter.” Despite its density and its sometimes grim conclusions, Superintelligence had a sprinkling of playful self-ridicule and snark. There is much more of this in the current offering.

The structure of Deep Utopia is deeply odd. The book’s core is a series of lectures by an older version of the author, which are interrupted a couple of times by conflicting bookings of the auditorium, and once by a fire alarm. The lectures are attended and commented on by three students, Kelvin, Tessius and Firafax. At one point they break the theatrical fourth wall by discussing whether they are fictional characters in a book, a device reminiscent of the 1991 novel Sophie’s World.
