Archive for the ‘information science’ category: Page 99

Jan 11, 2023

Open-Sourcing And Accelerating Precision Health Of The Future: Progress, Potential and Possibilities Podcast episode

Posted in categories: biotech/medical, food, health, information science, robotics/AI

Simon Waslander is the Director of Collaboration at the CureDAO Alliance for the Acceleration of Clinical Research (https://www.curedao.org/), a community-owned platform for the precision health of the future.

CureDAO is creating an open-source platform, within a decentralized autonomous organization (DAO), to discover how millions of factors, such as foods, drugs, and supplements, affect human health. The goal is to make suffering optional through the creation of a "WordPress of health data".

Jan 11, 2023

Neural network expert explains NEURALINK (in simple language)

Posted in categories: information science, internet, robotics/AI

00:00 Trailer.
05:54 Tertiary brain layer.
19:49 Curing paralysis.
23:09 How Neuralink works.
33:34 Showing probes.
44:15 Neuralink will be wayyy better than prior devices.
1:01:20 Communication is lossy.
1:14:27 Hearing Bluetooth, WiFi, Starlink.
1:22:50 Animal testing & brain proxies.
1:29:57 Controlling muscle units w/ Neuralink.

I had the privilege of speaking with James Douma, a self-described deep-learning dork. James’ experience and technical understanding are not easily found. I think you’ll find his words intriguing and insightful. This is one of several conversations James and I plan to have.

Jan 11, 2023

AI creates high-resolution brain images from low-field strength MR scans

Posted in categories: biotech/medical, information science, robotics/AI

Portable, low-field strength MRI systems have the potential to transform neuroimaging, provided that their low spatial resolution and low signal-to-noise ratio (SNR) can be overcome. Researchers at Harvard Medical School are harnessing artificial intelligence (AI) to achieve this goal. They have developed a machine learning super-resolution algorithm that generates synthetic images with high spatial resolution from lower resolution brain MRI scans.

The convolutional neural network (CNN) algorithm, known as LF-SynthSR, converts low-field strength (0.064 T) T1- and T2-weighted brain MRI sequences into isotropic images with 1 mm spatial resolution and the appearance of a T1-weighted magnetization-prepared rapid gradient-echo (MP-RAGE) acquisition. Describing their proof-of-concept study in Radiology, the researchers report that the synthetic images exhibited high correlation with images acquired by 1.5 T and 3.0 T MRI scanners.

Morphometry, the quantitative size and shape analysis of structures in an image, is central to many neuroimaging studies. Unfortunately, most MRI analysis tools are designed for near-isotropic, high-resolution acquisitions and typically require T1-weighted images such as MP-RAGE. Their performance often drops rapidly as voxel size and anisotropy increase. As the vast majority of existing clinical MRI scans are highly anisotropic, they cannot be reliably analysed with existing tools.
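To make the resolution gap concrete, the toy sketch below resamples an anisotropic volume onto a 1 mm isotropic grid with simple nearest-neighbour interpolation. This is not LF-SynthSR itself; a learned super-resolution network replaces this naive interpolation step with synthesis that adds plausible anatomical detail. The array sizes and voxel spacings are hypothetical.

```python
import numpy as np

def resample_isotropic(vol, spacing, new_spacing=1.0):
    """Resample an anisotropic volume to isotropic voxels via
    nearest-neighbour index mapping (a crude stand-in for a
    learned super-resolution model)."""
    in_shape = np.array(vol.shape)
    spacing = np.array(spacing, dtype=float)
    out_shape = np.round(in_shape * spacing / new_spacing).astype(int)
    # For each output voxel, pick the nearest source voxel.
    idx = [np.minimum((np.arange(n) * new_spacing / s).astype(int), m - 1)
           for n, s, m in zip(out_shape, spacing, in_shape)]
    return vol[np.ix_(idx[0], idx[1], idx[2])]

# Hypothetical low-field scan: 1.6 x 1.6 x 5 mm voxels.
low_res = np.random.rand(128, 128, 36)
iso = resample_isotropic(low_res, spacing=(1.6, 1.6, 5.0))
print(iso.shape)  # (205, 205, 180): a 1 mm isotropic grid
```

Interpolation like this preserves voxel counts but not information; the point of the CNN approach is that the synthesized 1 mm volume is realistic enough for standard morphometry tools, which, as noted above, assume near-isotropic T1-weighted input.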

Jan 10, 2023

ChatGPT’s insanely powerful searches could be coming to your smartphone soon

Posted in categories: information science, mobile phones, robotics/AI

It is also looking at a possible investment from Microsoft.

OpenAI, the artificial intelligence research company, is building an iOS app powered by its globally popular chatbot ChatGPT, which helps users search for answers using an iMessage-like interface. A beta version of the app is currently being tested, and a demo version was shared on the professional networking site LinkedIn.

Launched in November last year, ChatGPT made global news for its ease of answering even complex questions in a conversational manner. The algorithm that powers the chatbot, GPT-3.5, is built by OpenAI and is trained to learn what humans mean when they ask a question.

Jan 10, 2023

This Company Is Using Generative AI To Design New Antibodies

Posted in categories: biotech/medical, information science, robotics/AI

You have probably heard of ChatGPT and DALL-E, a new class of AI-powered software tools that can create new images or write text. These algorithms bring to life any idea you may have by piecing together fragments of what they have previously seen, such as images annotated with meta-descriptions of what they represent, to generate original content from user-defined input. Now generative AI technology is revolutionizing drug discovery. Absci Corporation (Nasdaq: ABSI) is using machine learning to transform the field of antibody therapeutics: Absci put out a press release today announcing the ability to create new antibodies with the use of generative AI.

Generative AI: You’ve seen it with images like DALL-E, you’ve seen it with text like ChatGPT. Now you can see it with protein design as well.

Jan 10, 2023

Machine Learning Accelerates Drug Formulation Development, Changing the Game for Pharmaceutical Research

Posted in categories: biotech/medical, information science, robotics/AI

New study demonstrates the potential for machine learning to accelerate the development of innovative drug delivery technologies.

Scientists at the University of Toronto have successfully tested the use of machine learning models to guide the design of long-acting injectable drug formulations. The potential for machine learning algorithms to accelerate drug formulation could reduce the time and cost associated with drug development, making promising new medicines available faster.

The study will be published today (January 10, 2023) in the journal Nature Communications.
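The core idea of ML-guided formulation design can be sketched with a toy model: fit a regressor mapping formulation descriptors to a release property, then score new candidates before synthesizing them. Everything below is invented for illustration (the descriptors, the linear rule, and the data are synthetic; the actual study used real experimental release data and richer models).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical formulation descriptors: polymer fraction, drug load,
# particle size (all synthetic; a real dataset has many more features).
X = rng.uniform(0.0, 1.0, size=(200, 3))
# Synthetic target: fraction of drug released at day 30, generated
# from a made-up linear rule plus measurement noise.
true_w = np.array([-0.5, 0.3, -0.2])
y = 0.6 + X @ true_w + rng.normal(0.0, 0.02, size=200)

# Fit a linear model by ordinary least squares; real work would use
# richer models such as gradient-boosted trees or neural networks.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Score a new candidate formulation without making it in the lab.
candidate = np.array([1.0, 0.4, 0.5, 0.3])  # [bias, features...]
predicted_release = float(candidate @ w)
print(round(predicted_release, 2))
```

The time savings come from this screening loop: instead of formulating and assaying every candidate, only the most promising predictions go to the bench.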

Jan 10, 2023

DeepMind AI invents faster algorithms to solve tough maths puzzles

Posted in categories: information science, mathematics, robotics/AI

Researchers at DeepMind in London have shown that artificial intelligence (AI) can find shortcuts in matrix multiplication, a fundamental type of mathematical calculation, by turning the problem into a game and then leveraging the machine-learning techniques that another of the company’s AIs used to beat human players in games such as Go and chess.

The AI discovered algorithms that break decades-old records for computational efficiency, and the team’s findings, published on 5 October in Nature, could open up new paths to faster computing in some fields.

“It is very impressive,” says Martina Seidl, a computer scientist at Johannes Kepler University in Linz, Austria. “This work demonstrates the potential of using machine learning for solving hard mathematical problems.”
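The classic example of the kind of shortcut involved is Strassen’s 1969 algorithm, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8; applied recursively to matrix blocks, that one saved multiplication compounds into a real asymptotic speedup. The sketch below shows Strassen’s scheme (not one of DeepMind’s newly discovered algorithms):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen)
    instead of the naive 8. The entries may themselves be matrix
    blocks, which is how the saving compounds recursively."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(strassen_2x2(A, B))  # matches A @ B: [[19 22] [43 50]]
```

AlphaTensor searched the space of such multiplication schemes as a single-player game, finding decompositions that beat long-standing records for some matrix sizes.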

Jan 10, 2023

New Algorithm Closes Quantum Supremacy Window

Posted in categories: computing, information science, quantum physics

That general question is still hard to answer, again in part because of those pesky errors. (Future quantum machines will compensate for their imperfections using a technique called quantum error correction, but that capability is still a ways off.) Is it possible to get the hoped-for runaway quantum advantage even with uncorrected errors?

Most researchers suspected the answer was no, but they couldn’t prove it for all cases. Now, in a paper posted to the preprint server arxiv.org, a team of computer scientists has taken a major step toward a comprehensive proof that error correction is necessary for a lasting quantum advantage in random circuit sampling — the bespoke problem that Google used to show quantum supremacy. They did so by developing a classical algorithm that can simulate random circuit sampling experiments when errors are present.
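A back-of-the-envelope calculation shows why uncorrected errors are so corrosive: if each gate succeeds with probability 1 − ε, the fidelity of a circuit with G gates decays roughly like (1 − ε)^G, so deep circuits produce outputs dominated by noise that classical algorithms can exploit. The numbers below are illustrative only, not taken from the paper.

```python
# Illustrative fidelity decay for a noisy quantum circuit: with a
# per-gate error rate eps, overall fidelity falls off roughly as
# (1 - eps) ** G for a circuit of G gates.
def circuit_fidelity(eps, gates):
    return (1.0 - eps) ** gates

# Hypothetical 0.5% error per gate, circuits of growing depth.
for gates in (100, 1000, 5000):
    print(gates, circuit_fidelity(0.005, gates))
```

Even a seemingly small per-gate error rate drives fidelity toward zero exponentially with depth, which is the regime where the new classical simulation algorithm gains its foothold.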

Jan 10, 2023

Information Fabricated

Posted in categories: biotech/medical, cybercrime/malcode, information science, robotics/AI


Hackers constantly improve at penetrating cyber defenses to steal valuable documents. So some researchers propose using an artificial-intelligence algorithm to hopelessly confuse them, once they break in, by hiding the real deal amid a mountain of convincing fakes. The algorithm, called Word Embedding–based Fake Online Repository Generation Engine (WE-FORGE), generates decoys of patents under development. But someday it could “create a lot of fake versions of every document that a company feels it needs to guard,” says its developer, Dartmouth College cybersecurity researcher V. S. Subrahmanian.

If hackers were after, say, the formula for a new drug, they would have to find the relevant needle in a haystack of fakes. This could mean checking each formula in detail—and perhaps investing in a few dead-end recipes. “The name of the game here is, ‘Make it harder,’” Subrahmanian explains. “‘Inflict pain on those stealing from you.’”
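A toy version of the decoy idea can be sketched in a few lines: generate fake variants of a sensitive document by swapping its key terms for plausible alternatives, so an attacker must evaluate every variant. This is not WE-FORGE itself, which derives substitutes from word embeddings trained on patent corpora; the document and substitution lists here are invented.

```python
import itertools

# Hypothetical sensitive "formula" and plausible substitutes for its
# key terms (all invented for illustration).
real_doc = "compound uses 40mg acetyl donor with palladium catalyst"
substitutes = {
    "40mg": ["20mg", "60mg", "80mg"],
    "acetyl": ["methyl", "benzyl"],
    "palladium": ["nickel", "copper", "platinum"],
}

# Enumerate every combination of original-or-substitute terms,
# keeping only variants that differ from the real document.
options = [[term] + alts for term, alts in substitutes.items()]
decoys = []
for combo in itertools.product(*options):
    doc = real_doc
    for term, choice in zip(substitutes, combo):
        doc = doc.replace(term, choice)
    if doc != real_doc:
        decoys.append(doc)

print(len(decoys))  # the real needle now hides among 47 fakes
```

Even this crude scheme forces the "check each formula in detail" cost Subrahmanian describes; embedding-based substitution makes the fakes far harder to rule out at a glance.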

Jan 9, 2023

Are quantum computers about to break online privacy?

Posted in categories: computing, encryption, information science, quantum physics

A new algorithm is probably not efficient enough to crack current encryption keys — but that’s no reason for complacency, researchers say.