Achieving such a concept — commonly referred to as AGI — is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.
It’s also a cause for concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with “long-term planning” skills could pose an existential risk to humanity.
But what exactly is AGI and how will we know when it’s been attained? Once on the fringe of computer science, it’s now a buzzword that’s being constantly redefined by those trying to make it happen.