Archive for the ‘information science’ category: Page 105

Dec 7, 2022

Researchers develop a scaled-up spintronic probabilistic computer

Posted by in categories: chemistry, information science, particle physics, quantum physics, robotics/AI

Researchers at Tohoku University, the University of Messina, and the University of California, Santa Barbara (UCSB) have developed a scaled-up version of a probabilistic computer (p-computer) with stochastic spintronic devices that is suitable for hard computational problems like combinatorial optimization and machine learning.

Moore’s law predicts that computers get faster every two years because of the evolution of semiconductor chips. While this has historically held, the continued evolution is starting to lag. The revolutions in machine learning and artificial intelligence mean that much higher computational ability is required. Quantum computing is one way of meeting these challenges, but significant hurdles to the practical realization of scalable quantum computers remain.

A p-computer harnesses naturally stochastic building blocks called probabilistic bits (p-bits). Unlike the bits in traditional computers, p-bits fluctuate between states. A p-computer can operate at room temperature and acts as a domain-specific computer for a wide variety of applications in machine learning and artificial intelligence. Just as quantum computers try to solve inherently quantum problems, p-computers attempt to tackle probabilistic algorithms, widely used for complicated computational problems in combinatorial optimization and sampling.
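A p-bit's behavior can be sketched in a few lines. The update rule below follows the common formulation in the p-computing literature, where the bit's time-averaged value tracks the tanh of its input; the input value and sample count here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_bit(input_current: float) -> int:
    """One sample from a probabilistic bit: the output fluctuates
    between -1 and +1, with its average biased toward
    tanh(input_current)."""
    return 1 if rng.uniform(-1.0, 1.0) < np.tanh(input_current) else -1

# With zero input a p-bit spends roughly equal time in both states;
# a positive input biases it toward +1.
samples = [p_bit(1.0) for _ in range(10_000)]
print(np.mean(samples))  # close to tanh(1.0) ≈ 0.76
```

Tuning the inputs (and the couplings between many such p-bits) is what lets a p-computer encode optimization and sampling problems.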

Dec 5, 2022

Stephen Wolfram on the Wolfram Physics TOE, Blackholes, Infinity, and Consciousness

Posted by in categories: alien life, cryptocurrencies, economics, information science, mathematics, particle physics, robotics/AI

Stephen Wolfram is at his jovial peak in this technical interview regarding the Wolfram Physics project (theory of everything).
Sponsors: https://brilliant.org/TOE for 20% off. http://algo.com for supply chain AI.

Link to the Wolfram project: https://www.wolframphysics.org/


Dec 5, 2022

Scientists create AI neural net that can unlock digital fingerprint-secured devices

Posted by in categories: information science, mobile phones, privacy, robotics/AI, security

Computer scientists at New York University and Michigan State University have trained an artificial neural network to create fake digital fingerprints that can bypass locks on cell phones. The fakes are called “DeepMasterPrints”, and they present a significant security flaw for any device relying on this type of biometric data authentication. By exploiting weaknesses inherent in the ergonomic constraints of cellular devices, the DeepMasterPrints were able to imitate more than 70% of the fingerprints in a testing database.

An artificial neural network is a type of artificial intelligence comprising computer algorithms modeled after the human brain’s ability to recognize patterns. The DeepMasterPrints system was trained to analyze sets of fingerprint images and generate a new image based on the features that occurred most frequently. This “skeleton key” could then be used to exploit the way cell phones authenticate user fingerprints.

In cell phones, the necessarily small size of fingerprint readers creates a weakness in the way they verify a print. In general, phone sensors only capture a partial image of a print when a user is attempting to unlock the device, and that piece is then compared to the phone’s authorized print image database. Since a partial print has fewer distinguishing characteristics than a full print, a DeepMasterPrint needs to match fewer features to pass as a fingerprint. It’s worth noting that the concept of exploiting this flaw is not unique to this particular study; however, generating unique images rather than using actual or synthesized images is a new development.
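A back-of-envelope calculation shows why partial matching weakens security. The per-comparison false-match rates and template count below are hypothetical, chosen only to illustrate the effect, and the comparisons are assumed independent:

```python
def unlock_probability(p: float, k: int) -> float:
    """Chance that one synthetic print falsely matches at least one of
    k enrolled partial templates, assuming independent comparisons."""
    return 1.0 - (1.0 - p) ** k

# Hypothetical numbers: a strict full-print matcher vs. a phone that
# stores a dozen partial templates with a looser per-comparison rate.
print(unlock_probability(0.0001, 1))  # ≈ 0.0001 (full print, one template)
print(unlock_probability(0.01, 12))   # ≈ 0.114  (partial prints)
```

Even a modest increase in the per-comparison false-match rate, multiplied across many stored partials, makes a dictionary-style attack with a few "master prints" plausible.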

Dec 3, 2022

UK rules that AI cannot patent inventions

Posted by in categories: government, information science, robotics/AI

The UK government has announced that artificial intelligence algorithms that come up with new technologies will not be able to patent their inventions.

The Intellectual Property Office said on Tuesday that it also plans to tweak existing laws to make it easier for people and institutions to use AI, machine learning and data mining software in order to rapidly advance research and innovation without requiring extensive permissions from copyright owners.

Dec 3, 2022

AI predicts crime a week before it happens — study

Posted by in categories: information science, robotics/AI

‘It will tell you what’s going to happen in future,’ says University of Chicago professor. ‘It’s not magical, there are limitations… but it works really well’

New AI crime prediction tech is reminiscent of the 2002 sci-fi film Minority Report, based on the 1956 short story by Philip K. Dick

An artificial intelligence algorithm that can predict crimes a week in advance with 90 per cent accuracy has been demonstrated for the first time.

Dec 2, 2022

AI art is nearing a Renaissance, but ‘algorithm aversion’ could turn off human skeptics

Posted by in categories: information science, robotics/AI

Will human biases prevent us from enjoying computer-generated creative works?

Dec 1, 2022

This Artificial Intelligence Paper Presents an Advanced Method for Differential Privacy in Image Recognition with Better Accuracy

Posted by in categories: biotech/medical, finance, information science, robotics/AI

The use of machine learning has grown considerably in several areas thanks to its performance in recent years. The computing capacity of modern computers and graphics cards has allowed deep learning to achieve results that sometimes exceed those given by experts. However, its use in sensitive areas such as medicine or finance raises confidentiality issues. A formal privacy guarantee called differential privacy (DP) prohibits adversaries with access to machine learning models from obtaining data on specific training points. The most common training approach for differential privacy in image recognition is differentially private stochastic gradient descent (DPSGD). However, the deployment of differential privacy is limited by the performance deterioration caused by current DPSGD systems.
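For context, the DPSGD baseline can be sketched as a single update step: clip each per-example gradient to bound its sensitivity, average, then add calibrated Gaussian noise. The learning rate, clipping norm, and noise multiplier below are illustrative defaults, not values from any particular system:

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.1, rng=None):
    """One DPSGD update: clip each per-example gradient, average the
    clipped gradients, add Gaussian noise, then take a descent step."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Noise std scales with the clipping norm and shrinks with batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

params = np.zeros(3)
grads = [np.full(3, 10.0), np.full(3, 0.1)]  # one large, one small gradient
new_params = dpsgd_step(params, grads)
```

The injected noise is what protects individual training points, and it is also the source of the performance deterioration the paper targets.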

The existing methods for differentially private deep learning still fall short because, during stochastic gradient descent, they accept every model update regardless of whether the corresponding objective function value improves. For some updates, adding noise to the gradients worsens the objective function value, especially when convergence is imminent. These effects degrade the resulting models: the optimization target deteriorates and the privacy budget is wasted. To address this problem, a research team from Shanghai University in China suggests simulated annealing-based differentially private stochastic gradient descent (SA-DPSGD), an approach that accepts a candidate update with a probability that depends on the quality of the update and the number of iterations.

Concretely, a model update is accepted if it improves the objective function value; otherwise, the update is rejected with a certain probability. To avoid settling into a local optimum, the authors use probabilistic rather than deterministic rejections and limit the number of consecutive rejections. The simulated annealing algorithm is thus used to accept or reject model updates probabilistically during stochastic gradient descent.
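The acceptance rule described above can be sketched as follows. This is an illustration of the general simulated-annealing pattern, not the paper's exact schedule; the rejection cap of 16 is a hypothetical value:

```python
import math
import random

def accept_update(delta_loss: float, temperature: float,
                  consecutive_rejections: int,
                  max_rejections: int = 16) -> bool:
    """Simulated-annealing acceptance test for a candidate model update."""
    if delta_loss <= 0:                       # objective improved: accept
        return True
    if consecutive_rejections >= max_rejections:
        return True                           # cap on continuous rejections
    # Worsening updates are accepted only occasionally, and less often
    # as the temperature cools over the course of training.
    return random.random() < math.exp(-delta_loss / temperature)
```

With a temperature that decays over iterations, training is exploratory early on and nearly greedy as convergence approaches, which is exactly when noisy updates are most likely to hurt.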

Dec 1, 2022

New AI-enabled study unravels the principles of aging

Posted by in categories: biotech/medical, information science, life extension, robotics/AI

New work from Gero, conducted in collaboration with researchers from Roswell Park Comprehensive Cancer Center and Genome Protection Inc. and published in Nature Communications, demonstrates the power of AI combined with analytical tools borrowed from the physics of complex systems to provide insights into the nature of aging, resilience and future medical interventions for age-related diseases including cancer.

Longevity. Technology: Modern AI systems exhibit superhuman-level performance in medical diagnostics applications, such as identifying cancer on MRI scans. This time, the researchers went one step further and used AI to work out principles that describe how the biological process of aging unfolds in time.

The researchers trained an AI algorithm on a large dataset composed of multiple blood tests taken over the life course of tens of thousands of aging mice to predict the future health state of an animal from its current state. The artificial neural network precisely projected the health condition of an aging mouse with the help of a single variable, termed the dynamic frailty indicator (dFI), which accurately characterises the damage an animal accumulates throughout life [1].

Dec 1, 2022

We built an algorithm that predicts the length of court sentences — could AI play a role in the justice system?

Posted by in categories: information science, law, robotics/AI

Artificial intelligence could help create transparency and consistency in the legal system – our model shows how.

Nov 30, 2022

In reinforcement learning, slower networks can learn faster

Posted by in categories: entertainment, information science

We then tested the new algorithms, called DQN with Proximal updates (or DQN Pro) and Rainbow Pro, on a standard set of 55 Atari games. We can see from the graph of the results that the Pro agents outperform their counterparts: the basic DQN agent is able to obtain human-level performance after 120 million interactions with the environment (frames), and Rainbow Pro achieves a 40% relative improvement over the original Rainbow agent.

Further, to ensure that proximal updates do in fact result in smoother and slower parameter changes, we measure the norm differences between consecutive DQN solutions. We expect the magnitude of our updates to be smaller when using proximal updates. In the graphs below, we confirm this expectation on the four different Atari games tested.
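The proximal-update idea can be sketched as a gradient step on the usual loss plus a term whose gradient pulls the parameters back toward the previous solution. This is a schematic of the idea rather than the DQN Pro implementation, and the proximal coefficient `c` is an assumed hyperparameter:

```python
import numpy as np

def proximal_td_step(theta, theta_prev, grad, lr=0.1, c=1.0):
    """Gradient step on the usual loss plus a proximal term whose
    gradient, c * (theta - theta_prev), pulls the new parameters
    back toward the previous solution, keeping updates small."""
    return theta - lr * (grad + c * (theta - theta_prev))

theta_prev = np.zeros(2)
theta = np.array([2.0, 2.0])
grad = np.zeros(2)  # with no loss gradient, only the proximal pull acts
theta_new = proximal_td_step(theta, theta_prev, grad)
print(theta_new)  # [1.8 1.8] — drawn toward theta_prev
```

Because the extra term vanishes when `theta` equals `theta_prev`, it dampens large parameter swings without changing the fixed points of the underlying objective.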

Overall, our empirical and theoretical results support the claim that when optimizing for a new solution in deep RL, it is beneficial for the optimizer to gravitate toward the previous solution. More importantly, we see that simple improvements in deep-RL optimization can lead to significant gains in the agent’s performance. We take this as evidence that further exploration of optimization algorithms in deep RL would be fruitful.