“For AI to be motivated towards a goal, it must know what it wants.”
The number of possible board configurations in a game of Go exceeds the number of atoms in the known universe, but it is still finite. In the real world, there are infinite possibilities for what might happen next, and uncertainty is rampant. How realistic, then, is AGI?
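The comparison above is easy to verify with a rough calculation. A minimal sketch: each of the 19×19 points on a Go board is empty, black, or white, giving 3^361 raw configurations as an upper bound (most are illegal, but even the legal count, roughly 2×10^170, dwarfs the commonly cited estimate of ~10^80 atoms in the observable universe). The figures here are the standard back-of-the-envelope numbers, not from the article itself.

```python
# Upper bound on Go board configurations: 361 points, 3 states each.
board_points = 19 * 19
upper_bound = 3 ** board_points

# Commonly cited estimate for atoms in the observable universe: ~1e80.
atoms_in_universe = 10 ** 80

# 3**361 has 173 decimal digits, versus 81 for 1e80.
print(len(str(upper_bound)))           # 173
print(upper_bound > atoms_in_universe)  # True
```

Python's arbitrary-precision integers make the exact comparison trivial; the point is that even this finite, astronomically large space is still qualitatively smaller than the open-ended state space of the real world.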
A recent research paper published in Frontiers in Ecology and Evolution explores obstacles to AGI. Biological systems with degrees of general intelligence — organisms ranging from humble microbes to the humans reading this — are capable of improvising to meet their goals. What prevents AI from improvising?