Where Are the Thinking Machines?
While most of today’s mainstream press coverage and popular hype focuses on the threats or promises of “strong” superintelligent AI, the lion’s share of AI research and breakthroughs happening behind the scenes belongs to the less ambitious field of “weak” or specialized AI. After more than half a century of being seemingly perpetually ten to twenty years away from the birth of the “strong” AI that so captures our imaginations, why has the bulk of research and funding gone to “weak” AI instead? TOPBOTS explores, delving into the history and future of AI research.
The field of artificial intelligence was founded in the 1950s on a platform of ambition and optimism. Early pioneers were confident they would soon create machines displaying “Strong” or “human-like” AI. Rapid developments in computational power during that era contributed to an overall buoyant atmosphere among researchers.
Nearly 70 years later, Strong AI remains out of reach, while the market overflows with “Weak” or “Narrow” AI programs that learn through rote iteration or extract patterns from massive curated datasets rather than from sparse experience, as humans do.
What happened to derail the ambitions of those early researchers? And how are cutting-edge programmers today looking to kick-start a resurgence toward true thinking machines?