
Archive for the ‘robotics/AI’ category: Page 320

Jan 17, 2024

AI can copy HANDWRITING — can you tell it apart from the real thing?

Posted in categories: law, robotics/AI

AI tools like ChatGPT can draft letters, tell jokes and even give legal advice – but only in the form of computerized text.

Now, scientists have created an AI that can imitate human handwriting, which could herald fresh issues regarding fraud and fake documents.


Jan 17, 2024

DeepMind’s Latest AI System, AlphaGeometry, Aces High-School Math

Posted in categories: economics, education, mathematics, robotics/AI

(Bloomberg) — Google DeepMind, Alphabet Inc.’s research division, said it has taken a “crucial step” towards making artificial intelligence as capable as humans. It involves solving high-school math problems.

Jan 17, 2024

Companies that use AI to replace workers will ultimately lose, Stanford University professor says

Posted in category: robotics/AI

Speaking at the World Economic Forum in Davos this week, a Stanford University professor said companies should use AI to augment workers rather than to replace an entire occupation or task.

Jan 17, 2024

Innatera shows RISC-V neuromorphic edge AI microcontroller

Posted in category: robotics/AI

Dutch chip startup Innatera has shown its neuromorphic microcontroller for edge AI sensor applications based on the RISC-V open instruction set architecture.

Jan 17, 2024

BrainChip demonstrates its neuromorphic processor on Microchip’s 32-bit MPU at CES 2024

Posted in category: robotics/AI

BrainChip, a neuromorphic computing device provider, will demonstrate its Akida neuromorphic processor running on Microchip’s embedded platform at CES 2024. The demonstration will use two evaluation boards, Microchip’s SAMv71 Ultra board and SAMA7G54-EK board, with a particular focus on showcasing the efficiency of the Akida neuromorphic processor when integrated with a 32-bit microprocessor unit. BrainChip aims to highlight its capabilities in always-on machine learning tasks, including keyword spotting and visual wake words.

“We look forward to demonstrating the potential and ease of integrating Akida for always-on machine learning applications on embedded devices at CES,” says Rob Telson, vice president of Ecosystem and Partnerships at BrainChip.

Neuromorphic computing systems are designed to execute parallel and distributed processing, mimicking the neural structure and functioning of the human brain. BrainChip Akida is an example of such a neuromorphic computing processor, which is designed for edge applications. It operates on an event-based principle, remaining dormant until activated, thereby reducing power consumption.
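The power advantage of this event-based principle can be illustrated with a minimal sketch (plain Python, not BrainChip’s actual API; the threshold neuron below is a hypothetical toy model): computation is triggered only when an input event arrives, so sparse inputs translate directly into fewer operations.

```python
def run_event_driven(events, threshold=1.0):
    """Toy event-driven neuron: accumulates incoming events and
    fires when the accumulated potential crosses a threshold.
    Time steps with no event cost no computation at all."""
    potential = 0.0
    ops = 0          # count of update operations actually performed
    spikes = []
    for t, value in events:      # iterate only over actual events
        ops += 1
        potential += value
        if potential >= threshold:
            spikes.append(t)     # neuron "fires"
            potential = 0.0      # reset after spiking
    return spikes, ops

# Sparse input: 3 events over a nominal 1000-step window.
events = [(10, 0.6), (500, 0.6), (900, 0.3)]
spikes, ops = run_event_driven(events)
# A clocked design would perform ~1000 updates; here only 3 are needed.
```

The same idea at chip scale — staying dormant between events rather than polling on every clock tick — is what lets event-based processors cut power consumption on sparse sensor data.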

Jan 17, 2024

Google Scientists Discovered 380,000 New Materials Using Artificial Intelligence

Posted in categories: economics, robotics/AI, solar power, supercomputing, sustainability

New advancements in technology frequently necessitate the development of novel materials – and thanks to supercomputers and advanced simulations, researchers can bypass the time-consuming and often inefficient process of trial-and-error.

The Materials Project, an open-access database founded at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) in 2011, computes the properties of both known and predicted materials. Researchers can focus on promising materials for future technologies – think lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, or faster transistors for the next generation of computers.

Jan 17, 2024

Waymo’s Driverless Cars Are Hitting the Highway Sans Safety Drivers in Arizona

Posted in categories: business, robotics/AI, sustainability, transportation

To back up the decision, Waymo pointed to its safety record and its history of building and operating self-driving trucks on highways. (The company shuttered its self-driving truck project last year to focus on taxis.) Including highways should also decrease route times for riders—especially from the airport—with some rides taking half the time.

Although highways are simpler to navigate than city streets—where cars contend with twists, turns, signs, stoplights, pedestrians, and pets—the stakes are higher. A crash at 10 or 20 miles per hour is less likely to cause major injury than one at highway speeds. And while it’s relatively straightforward (if less than ideal) for a malfunctioning robotaxi to stop or pull to the side of the road and await human help in the city, such tactics won’t do on the highway, where it’s dangerous for cars to suddenly slow or stop.


Jan 17, 2024

Amazing Robot Controlled By Rat Brain Continues Progress

Posted in categories: biological, cyborgs, robotics/AI

Some technologies are so cool they make you do a double take. Case in point: robots being controlled by rat brains. Kevin Warwick, once a cyborg and still a researcher in cybernetics at the University of Reading, has been working on creating neural networks that can control machines. He and his team have taken the brain cells from rats, cultured them, and used them as the guidance control circuit for simple wheeled robots. Electrical impulses from the bot enter the batch of neurons, and responses from the cells are turned into commands for the device. The cells can form new connections, making the system a true learning machine. Warwick hasn’t released any new videos of the rat brain robot for the past few years, but the three older clips we have for you below are still awesome. He and his competitors continue to move this technology forward – animal cyborgs are real.

The skills of these rat-robot hybrids are very basic at this point. Mainly the neuron control helps the robot to avoid walls. Yet that obstacle avoidance often shows clear improvement over time, demonstrating how networks of neurons can grant simple learning to the machines. Whenever I watch the robots in the videos below I have to do a quick reality check – these machines are being controlled by biological cells! It’s simply amazing.

Jan 17, 2024

BIDCell: Biologically-informed self-supervised learning for segmentation of subcellular spatial transcriptomics data

Posted in categories: biotech/medical, robotics/AI

Thirdly, more recent approaches have begun to leverage deep learning (DL) methods. DL models such as U-Net [12] have provided solutions for many image analysis challenges. However, they require ground truth to be generated for training. DL-based methods for SST cell segmentation include GeneSegNet [13] and SCS [14], though supervision is still required in the form of initial cell labels or based on hard-coded rules. Further limitations of existing methods encountered during our benchmarking, such as lengthy code runtimes, are included in Supplementary Table 1. The self-supervised learning (SSL) paradigm can provide a solution to overcome the requirement of annotations. While SSL-based methods have shown promise for other imaging modalities [15,16], direct application to SST images remains challenging. SST data are considerably different from other cellular imaging modalities and natural images (e.g., regular RGB images), as they typically contain hundreds of channels, and there is a lack of clear visual cues that indicate cell boundaries. This creates new challenges such as (i) accurately delineating cohesive masks for cells in densely-packed regions, (ii) handling high sparsity within gene channels, and (iii) addressing the lack of contrast for cell instances.

While these morphological and DL-based approaches have shown promise, they have not fully exploited the high-dimensional expression information contained within SST data. It has become increasingly clear that relying solely on imaging information may not be sufficient to accurately segment cells. There is growing interest in leveraging large, well-annotated scRNA-seq datasets [17], as exemplified by JSTA [18], which proposed a joint cell segmentation and cell type annotation strategy. While much of the literature has emphasised the importance of accounting for biological information such as transcriptional composition, cell type, and cell morphology, the impact of incorporating such information into segmentation approaches remains to be fully understood.

Here, we present a biologically-informed deep learning-based cell segmentation (BIDCell) framework (Fig. 1a) that addresses the challenges of cell body segmentation in SST images through key innovations in the framework and learning strategies. We introduce (a) biologically-informed loss functions with multiple synergistic components; and (b) explicit incorporation of prior knowledge from single-cell sequencing data to enable the estimation of different cell shapes. The combination of our losses and the use of existing scRNA-seq data to supplement subcellular imaging data improves performance, and BIDCell is generalisable across different SST platforms. Alongside the development of our segmentation method, we created a comprehensive evaluation framework for cell segmentation, CellSPA, which assesses five complementary categories of criteria for identifying optimal segmentation strategies. This framework aims to promote the adoption of new segmentation methods for novel biotechnological data.
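The paper’s actual loss functions are not reproduced here, but the general idea of summing several synergistic, biologically-motivated terms can be sketched as follows. All term names, weights, and inputs below are hypothetical illustrations, not the authors’ code: a nucleus-coverage term, a term comparing in-mask gene expression against a reference profile (e.g. derived from scRNA-seq), and a compactness penalty.

```python
import numpy as np

def composite_loss(pred_mask, nuclei_mask, expr_inside, expr_profile,
                   w_nuclei=1.0, w_expr=1.0, w_compact=0.1):
    """Toy composite segmentation loss with three synergistic terms:
    - nucleus term: the predicted cell mask should cover the nucleus,
    - expression term: gene counts inside the mask should match a
      reference expression profile (e.g. from scRNA-seq),
    - compactness term: penalise overly large, sprawling masks."""
    # Nucleus coverage: fraction of nucleus pixels missed by the mask.
    nuclei_term = 1.0 - (pred_mask * nuclei_mask).sum() / nuclei_mask.sum()
    # Expression agreement: squared distance between normalised profiles.
    p = expr_inside / (expr_inside.sum() + 1e-8)
    q = expr_profile / (expr_profile.sum() + 1e-8)
    expr_term = float(np.sum((p - q) ** 2))
    # Compactness: mean mask activation discourages oversized masks.
    compact_term = float(pred_mask.mean())
    return (w_nuclei * nuclei_term
            + w_expr * expr_term
            + w_compact * compact_term)

# A mask that fully covers its nucleus and matches the reference
# profile incurs only the small compactness penalty.
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
nuclei = np.zeros((8, 8)); nuclei[3:5, 3:5] = 1.0
loss = composite_loss(mask, nuclei,
                      expr_inside=np.array([5.0, 3.0, 2.0]),
                      expr_profile=np.array([0.5, 0.3, 0.2]))
```

The point of such a composite design is that no single term is sufficient: imaging cues anchor the mask spatially while the expression term injects the scRNA-seq prior, which is the kind of synergy the BIDCell framework is built around.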

Jan 17, 2024

Sam Altman Says Human-Tier AI Is Coming Soon

Posted in categories: employment, robotics/AI

Human-tier AI will change the world and jobs “much less than we think,” OpenAI CEO Sam Altman said while attending the World Economic Forum.
