The Avalon survival game involves an AI agent that generates landscapes and interactions with the player, such as finding food, hunting, and avoiding danger.

Generally Intelligent Searches for Ways to Enable AI to ‘Think’ More Like Humans

When we think of AI, we usually think of it as super intelligent. Yet some folks think there is a general intelligence that most AI so far has been sadly lacking, so they started a company they call Generally Intelligent. Its mission is the search for intelligent life within AI, intelligence that is more like the human kind we are all familiar with. Researchers say one problem in the search for AI that can equal a human is that today's systems have too much of the artificial and not enough of the human.

Kyle Wiggers, the AI writer for techcrunch.com, explores what passes for super intelligent and what is just generally intelligent in a recent article. More importantly, he writes about the latest efforts to implant humanity into AI, and about whether experts even think that is possible.

“We believe that generally intelligent computers will someday unlock extraordinary potential for human creativity and insight,” CEO Kanjun Qiu told TechCrunch in an email interview. “However, today’s AI models are missing several key elements of human intelligence, which inhibits the development of general-purpose AI systems that can be deployed safely … Generally Intelligent’s work aims to understand the fundamentals of human intelligence in order to engineer safe AI systems that can learn and understand the way humans do.”

Kanjun Qiu and Josh Albrecht are the co-founders of Generally Intelligent. Qiu came from Dropbox and co-founded Ember Hardware, which designed laser displays for VR headsets. Albrecht co-launched several companies, including BitBlinder (a privacy-preserving torrenting tool) and CloudFab (a 3D-printing services company). When they went looking for funding, they took a different approach with investors.

While Generally Intelligent’s co-founders might not have traditional AI research backgrounds — Qiu was an algorithmic trader for two years — they’ve managed to secure support from several luminaries in the field. Among those contributing to the company’s $20 million in initial funding (plus over $100 million in options) are Tom Brown, former engineering lead for OpenAI’s GPT-3; former OpenAI robotics lead Jonas Schneider; Dropbox co-founders Drew Houston and Arash Ferdowsi; and the Astera Institute.

“The ambition for Avalon to build hundreds or thousands of tasks is an intensive process — it requires a lot of evaluation and assessment. Our funding is set up to ensure that we’re making progress against the encyclopedia of problems we expect Avalon to become as we continue to build it out,” she said. “We have an agreement in place for $100 million — that money is guaranteed through a drawdown setup which allows us to fund the company for the long term. We have established a framework that will trigger additional funding from that drawdown, but we’re not going to disclose that funding framework as it is akin to disclosing our roadmap.”

Finding the Missing Agents

How this new idea of Qiu and Albrecht’s is supposed to work is explained a little further in Wiggers’s article, but we can say it involves what they call “agents.” The agents build on research that has already been done and extrapolate from it: the plan is to take lots of proven prior work and expand upon it with the company’s new algorithms.

Generally Intelligent built a simulated research environment where AI agents — entities that act upon the environment — train by completing increasingly hard and complex tasks inspired by animal evolution and the cognitive milestones of infant development. The goal, Qiu says, is to train lots of different agents powered by different AI technologies under the hood in order to understand what the different components of each are doing.
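To make that curriculum idea concrete, here is a minimal toy sketch of an agent working through tasks ordered from easiest to hardest. It is illustrative only: the Task and Agent classes, the skill-versus-difficulty success rule, and the task names are hypothetical stand-ins, not Generally Intelligent's actual Avalon environment or API.

```python
import random

# Toy "curriculum" loop in the spirit of training agents on increasingly
# difficult tasks. Everything here is a hypothetical stand-in, not Avalon code.

class Task:
    def __init__(self, name, difficulty):
        self.name = name
        self.difficulty = difficulty  # 0.0 (trivial) to 1.0 (hard)

    def run_episode(self, skill):
        # The agent tends to succeed when its skill exceeds the task's
        # difficulty, with noise to mimic environment randomness.
        return random.random() < max(0.05, skill - self.difficulty + 0.5)


class Agent:
    def __init__(self):
        self.skill = 0.0

    def update(self, succeeded):
        # Stand-in for a real learning update (e.g. an RL gradient step):
        # each success nudges the skill estimate upward.
        if succeeded:
            self.skill = min(1.0, self.skill + 0.02)


def train(agent, curriculum, episodes_per_task=200):
    """Walk the agent through tasks sorted from easiest to hardest."""
    for task in sorted(curriculum, key=lambda t: t.difficulty):
        successes = 0
        for _ in range(episodes_per_task):
            succeeded = task.run_episode(agent.skill)
            agent.update(succeeded)
            successes += succeeded
        print(f"{task.name:<8} success rate: {successes / episodes_per_task:.2f}")


if __name__ == "__main__":
    curriculum = [
        Task("eat", 0.1),      # find and eat food
        Task("hunt", 0.5),     # track and catch prey
        Task("survive", 0.9),  # avoid danger over a long horizon
    ]
    train(Agent(), curriculum)
```

In a real system the update step would be an actual learning algorithm and each task would be a full simulated scenario, but the ordering from easy to hard captures the spirit of the approach described above.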

“We believe such [agents] could empower humans across a wide range of fields, including scientific discovery, materials design, personal assistants and tutors and many other applications we can’t yet fathom,” Qiu said. “Using complex, open-ended research environments to test the performance of agents on a significant battery of intelligence tests is the approach most likely to help us identify and fill in those aspects of human intelligence that are missing from machines. [A] structured battery of tests facilitates the development of a real understanding of the workings of [AI], which is essential for engineering safe systems.”

Many of the goals they have set have been pursued by others as well. John Carmack’s Keen Technologies and DeepMind’s Gato have similar objectives; Gato, for example, can perform hundreds of tasks, from playing games to controlling robots.

And not everyone is on board with the idea that AI can perform like a human being. Luminaries like Mila founder Yoshua Bengio and Facebook VP and chief AI scientist Yann LeCun have repeatedly argued that so-called artificial general intelligence isn’t technically feasible — at least not today.

read more at techcrunch.com