Archive for the ‘information science’ category: Page 171

Aug 18, 2021

Val Kilmer Recreated His Speaking Voice Using Artificial Intelligence and Hours of Old Audio

Posted in categories: biotech/medical, education, information science, robotics/AI

🥲👍


Val Kilmer marked the release of his acclaimed documentary “Val” (now streaming on Amazon Prime Video) in a milestone way: He recreated his old speaking voice by feeding hours of recorded audio of himself into an artificial intelligence algorithm. Kilmer lost the ability to speak after undergoing throat cancer treatment in 2014. Kilmer’s team recently joined forces with software company Sonantic and “Val” distributor Amazon to “create an emotional and lifelike model of his old speaking voice” (via The Wrap).

Aug 18, 2021

An AI expert explains why it’s hard to give computers something you take for granted: Common sense

Posted in categories: information science, physics, robotics/AI

Quick – define common sense

Despite being both universal and essential to how humans understand the world around them and learn, common sense has defied a single precise definition. G. K. Chesterton, an English philosopher and theologian, famously wrote at the turn of the 20th century that “common sense is a wild thing, savage, and beyond rules.” Modern definitions agree that, at minimum, it is a natural, rather than formally taught, human ability that allows people to navigate daily life.

Aug 18, 2021

Team develops AI to decode brain signals and predict behavior

Posted in categories: information science, robotics/AI

An artificial neural network (AI) designed by an international team involving UCL can translate raw data from brain activity, paving the way for new discoveries and a closer integration between technology and the brain.

The new method could accelerate discoveries of how brain activities relate to behaviors.

The study, published today in eLife, was co-led by the Kavli Institute for Systems Neuroscience in Trondheim and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and funded by Wellcome and the European Research Council. It shows that a convolutional neural network, a specific type of deep learning algorithm, is able to decode many different behaviors and stimuli from a wide variety of brain regions in different species, including humans.
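
The announcement names a convolutional network trained on raw recordings but does not spell out the architecture. The general recipe (take a window of multi-channel signal, convolve over time, regress the behavioral variable) can be sketched briefly; everything below, from the channel count and window length to the layer sizes and the 2-D position target, is illustrative rather than the study's model:

```python
import torch
import torch.nn as nn

class NeuralDecoder(nn.Module):
    """Illustrative sketch: a 1-D CNN mapping a window of raw multi-channel
    brain-activity samples to a behavioural variable (here, 2-D position).
    All sizes are invented for the example, not taken from the study."""
    def __init__(self, n_channels=64, n_outputs=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.readout = nn.Linear(64, n_outputs)

    def forward(self, x):              # x: (batch, channels, time)
        return self.readout(self.features(x).squeeze(-1))

# training pairs each signal window with the behaviour recorded alongside it
model = NeuralDecoder()
signal = torch.randn(8, 64, 512)       # placeholder raw recordings
behaviour = torch.randn(8, 2)          # placeholder 2-D positions
loss = nn.functional.mse_loss(model(signal), behaviour)
loss.backward()
```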

Aug 17, 2021

DeepMind Introduces PonderNet, A New AI Algorithm That Allows Artificial Neural Networks To Learn To “Think For A While” Before Answering

Posted in categories: information science, robotics/AI

Artificial intelligence, machine learning, data science.
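
The one-line excerpt does not explain the mechanism, so a brief gloss may help: in PonderNet, a recurrent cell emits at every step both a prediction and a halting probability, and training minimizes the loss expected under the induced halting distribution (the paper also adds a KL regularizer toward a geometric prior, omitted here). The PyTorch sketch below illustrates that idea under those assumptions; it is not DeepMind's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PonderNetSketch(nn.Module):
    """Illustrative PonderNet-style model: a GRU cell that, at each step,
    emits a prediction y_n and a halting probability lambda_n."""
    def __init__(self, in_dim, hidden_dim, out_dim, max_steps=10):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hidden_dim)
        self.predict = nn.Linear(hidden_dim, out_dim)
        self.halt = nn.Linear(hidden_dim, 1)
        self.max_steps = max_steps

    def forward(self, x):
        h = x.new_zeros(x.size(0), self.cell.hidden_size)
        not_halted = x.new_ones(x.size(0))   # probability of still running
        ys, ps = [], []
        for n in range(self.max_steps):
            h = self.cell(x, h)
            lam = torch.sigmoid(self.halt(h)).squeeze(-1)
            if n == self.max_steps - 1:
                p_n = not_halted             # force a halt at the final step
            else:
                p_n = not_halted * lam       # halt exactly at step n
                not_halted = not_halted * (1 - lam)
            ys.append(self.predict(h))
            ps.append(p_n)
        # shapes: (steps, batch, out_dim) and (steps, batch)
        return torch.stack(ys), torch.stack(ps)

def expected_loss(ys, ps, target):
    """Reconstruction term of the objective: per-step loss weighted by the
    probability of halting at that step."""
    step_losses = torch.stack(
        [F.mse_loss(y, target, reduction="none").mean(-1) for y in ys])
    return (ps * step_losses).sum(0).mean()

model = PonderNetSketch(in_dim=4, hidden_dim=32, out_dim=1)
ys, ps = model(torch.randn(8, 4))
expected_loss(ys, ps, torch.randn(8, 1)).backward()
```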

Aug 17, 2021

How (And Where) The Brain Analyzes Math and Language Spoken Simultaneously

Posted in categories: information science, mathematics, neuroscience

Summary: A study reveals how the brain analyzes different types of speech, which may be linked to how we comprehend sentences and solve mathematical equations.

Source: SfN

Separate math and language networks segregate naturally when listeners pay attention to one type over the other, according to research recently published in the Journal of Neuroscience.

Aug 16, 2021

The Genius of 3D Printed Rockets

Posted in categories: engineering, information science, space travel

3D printed rockets save on up-front tooling, enable rapid iteration, decrease part count, and facilitate radically new designs. For your chance to win two seats on one of the first Virgin Galactic flights to space and support a great cause, go to https://www.omaze.com/veritasium.

Thanks to Tim Ellis and everyone at Relativity Space for the tour!
https://www.relativityspace.com/
https://youtube.com/c/RelativitySpace.

Aug 15, 2021

UAT Virtual Let’s Talk Tech Open House

Posted in categories: bioengineering, biological, genetics, information science, internet, robotics/AI

University of Advancing Technology’s Artificial Intelligence (AI) degree explores the theory and practice of engineering tools that simulate thinking, patterning, and advanced decision behaviors in software systems. Drawing inspiration from fields ranging from biology to design, UAT’s Artificial Intelligence program teaches students to build software systems that solve complex problems. Students will work with technologies including voice recognition, simulation agents, machine learning (ML), and the Internet of Things (IoT).

Students pursuing this specialized computer programming degree develop applications using evolutionary and genetic algorithms, cellular automata, artificial neural networks, agent-based models, and other artificial intelligence methodologies. UAT’s degree in AI covers the fundamentals of general and applied artificial intelligence including core programming languages and platforms used in computer science.

Aug 14, 2021

New Algorithm Trains Drones To Fly Around Obstacles at High Speeds

Posted in categories: drones, information science, robotics/AI

New algorithm could enable fast, nimble drones for time-critical operations such as search and rescue.

If you follow autonomous drone racing, you likely remember the crashes as much as the wins. In drone racing, teams compete to see which vehicle is better trained to fly fastest through an obstacle course. But the faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes, therefore, are a common and often spectacular occurrence.

Aug 13, 2021

Progress in algorithms makes small, noisy quantum computers viable

Posted in categories: information science, quantum physics, robotics/AI

As reported in a new article in Nature Reviews Physics, instead of waiting for fully mature quantum computers to emerge, Los Alamos National Laboratory and other leading institutions have developed hybrid classical/quantum algorithms to extract the most performance—and potentially quantum advantage—from today’s noisy, error-prone hardware. Known as variational quantum algorithms, they use the quantum boxes to manipulate quantum systems while shifting much of the workload to classical computers to let them do what they currently do best: solve optimization problems.

“Quantum computers have the promise to outperform classical computers for certain tasks, but on currently available quantum hardware they can’t run long algorithms. They have too much noise as they interact with the environment, which corrupts the information being processed,” said Marco Cerezo, a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos and a lead author of the paper. “With variational quantum algorithms, we get the best of both worlds. We can harness the power of quantum computers for tasks that classical computers can’t do easily, then use classical computers to complement the computational power of quantum devices.”

Current noisy, intermediate-scale quantum computers have between 50 and 100 qubits, lose their “quantumness” quickly, and lack error correction, which requires more qubits. Since the late 1990s, however, theoreticians have been developing algorithms designed to run on an idealized large, error-corrected, fault-tolerant quantum computer.
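
As a concrete, if toy, picture of the division of labor, here is a one-parameter variational eigensolver in plain NumPy/SciPy: the “quantum” step, replaced here by exact linear algebra, evaluates the energy of a parameterized state, while an off-the-shelf classical optimizer tunes the circuit parameter. The Hamiltonian and ansatz are invented for illustration and share only the variational structure with the Los Alamos work:

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices and a toy one-qubit Hamiltonian
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X

def ansatz(theta):
    """Ry(theta) applied to |0>: the parameterized 'circuit'."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    """What the quantum device would estimate: <psi(theta)|H|psi(theta)>.
    On real hardware this expectation comes from repeated noisy shots."""
    psi = ansatz(params[0])
    return np.real(psi.conj() @ H @ psi)

# classical outer loop: a standard optimizer tunes the circuit parameter
result = minimize(energy, x0=[0.1], method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"variational: {result.fun:.4f}  exact ground state: {exact:.4f}")
```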

Aug 13, 2021

Classical variational simulation of the Quantum Approximate Optimization Algorithm

Posted in categories: computing, information science, quantum physics

In this work, we introduce a classical variational method for simulating QAOA, a hybrid quantum-classical approach for solving combinatorial optimization problems with prospects of quantum speedup on near-term devices. We employ a self-contained approximate simulator based on neural quantum state (NQS) methods borrowed from many-body quantum physics, departing from the traditional exact simulations of this class of quantum circuits.

We successfully explore previously unreachable regions of the QAOA parameter space, owing to the good performance of our method near optimal QAOA angles. Model limitations are discussed in terms of lower fidelities in quantum-state reproduction away from that optimum. Because of this different area of applicability and its relatively low computational cost, the method is introduced as complementary to established numerical methods for the classical simulation of quantum circuits.

Classical variational simulations of quantum algorithms provide a natural way to both benchmark and understand the limitations of near-future quantum hardware. On the algorithmic side, our approach can help answer a fundamentally open question in the field, namely whether QAOA can outperform classical optimization algorithms or quantum-inspired classical algorithms based on artificial neural networks [48, 49, 50].
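
For readers unfamiliar with the algorithm being simulated: depth-p QAOA alternates a cost-dependent phase with a mixing rotation and tunes the angles to maximize the expected objective. The NumPy sketch below simulates depth-1 QAOA for MaxCut on a triangle graph exactly; the paper's point is precisely that NQS methods can approximate this kind of simulation when the circuit is too large to store exactly, which this small example does not attempt:

```python
import numpy as np
from itertools import product

# Exact statevector simulation of depth-1 QAOA for MaxCut on a triangle.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

# diagonal of the cost operator: number of cut edges for each bitstring
bits = np.array(list(product([0, 1], repeat=n)))
cost = np.array([sum(b[i] != b[j] for i, j in edges) for b in bits])

def qaoa_state(gamma, beta):
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # |+...+>
    psi = psi * np.exp(-1j * gamma * cost)               # e^{-i gamma C}
    for q in range(n):                                   # e^{-i beta X_q}
        psi = psi.reshape(2 ** q, 2, 2 ** (n - q - 1))
        a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
        psi[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
        psi[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
        psi = psi.reshape(-1)
    return psi

def expected_cut(gamma, beta):
    probs = np.abs(qaoa_state(gamma, beta)) ** 2
    return probs @ cost

# crude grid search over the two QAOA angles
grid = np.linspace(0, np.pi, 60)
best = max((expected_cut(g, b), g, b) for g in grid for b in grid)
print(f"best expected cut {best[0]:.3f} of max {cost.max()} "
      f"at gamma={best[1]:.2f}, beta={best[2]:.2f}")
```

An NQS simulator replaces the dense statevector above with a neural-network parameterization whose weights are variationally fitted to follow the circuit, trading exactness for memory that no longer grows exponentially with qubit count.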