Generally Intelligent secures cash from OpenAI vets to build capable AI systems • TechCrunch

A new AI research company is launching out of stealth today with an ambitious goal: to research the fundamentals of human intelligence that machines currently lack. Called Generally Intelligent, it plans to do this by turning these fundamentals into an array of tasks to be solved and by designing and testing different systems’ ability to learn to solve them in highly complex 3D worlds built by their team.

“We believe that generally intelligent computers will someday unlock extraordinary potential for human creativity and insight,” CEO Kanjun Qiu told TechCrunch in an email interview. “However, today’s AI models are missing several key elements of human intelligence, which inhibits the development of general-purpose AI systems that can be deployed safely … Generally Intelligent’s work aims to understand the fundamentals of human intelligence in order to engineer safe AI systems that can learn and understand the way humans do.”

Qiu, the former chief of staff at Dropbox and the co-founder of Ember Hardware, which designed laser displays for VR headsets, co-founded Generally Intelligent in 2021 after shutting down her previous startup, Sourceress, a recruiting company that used AI to scour the web. (Qiu blamed the high-churn nature of the leads-sourcing business.) Generally Intelligent’s second co-founder is Josh Albrecht, who co-launched a number of companies, including BitBlinder (a privacy-preserving torrenting tool) and CloudFab (a 3D printing services company).

While Generally Intelligent’s co-founders might not have traditional AI research backgrounds — Qiu was an algorithmic trader for two years — they’ve managed to secure support from several luminaries in the field. Among those contributing to the company’s $20 million in initial funding (plus over $100 million in options) are Tom Brown, former engineering lead for OpenAI’s GPT-3; former OpenAI robotics lead Jonas Schneider; Dropbox co-founders Drew Houston and Arash Ferdowsi; and the Astera Institute.

Qiu said that the unusual funding structure reflects the capital-intensive nature of the problems Generally Intelligent is attempting to solve.

“The ambition for Avalon to build hundreds or thousands of tasks is an intensive process — it requires a lot of evaluation and assessment. Our funding is set up to ensure that we’re making progress against the encyclopedia of problems we expect Avalon to become as we continue to build it out,” she said. “We have an agreement in place for $100 million — that money is guaranteed through a drawdown setup which allows us to fund the company for the long term. We have established a framework that will trigger additional funding from that drawdown, but we’re not going to disclose that funding framework as it is akin to disclosing our roadmap.”

Image Credits: Generally Intelligent

What convinced them? Qiu says it’s Generally Intelligent’s approach to the problem of AI systems that struggle to learn from others, extrapolate safely, or learn continuously from small amounts of data. Generally Intelligent built a simulated research environment where AI agents — entities that act upon the environment — train by completing increasingly harder, more complex tasks inspired by milestones in animal evolution and infant cognitive development. The goal, Qiu says, is to train many different agents, each powered by a different AI technology under the hood, in order to understand what the different components of each are doing.
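The curriculum idea described above — advancing an agent to a harder task only once it reliably solves the current one — can be sketched in a few lines of Python. This is a hypothetical illustration with invented names (`ToyAgent`, `train_on_curriculum`), not Generally Intelligent’s actual code:

```python
# Hedged sketch of curriculum training: an agent moves through tasks
# ordered from easiest to hardest, advancing only after it reliably
# solves the current one. All names here are hypothetical.

class ToyAgent:
    """Stand-in learner whose success rate rises with practice."""

    def __init__(self):
        self.skill = 0.0

    def practice(self, task_difficulty: float) -> float:
        self.skill += 0.1  # each round of practice improves the agent
        # success rate falls as difficulty outstrips accumulated skill
        return min(1.0, self.skill / task_difficulty)


def train_on_curriculum(agent, difficulties, threshold=0.9, max_rounds=100):
    """Walk an agent through tasks easiest-first, one milestone at a time."""
    for difficulty in difficulties:
        for _ in range(max_rounds):
            if agent.practice(difficulty) >= threshold:
                break  # milestone reached; move on to the next task
    return agent


agent = train_on_curriculum(ToyAgent(), difficulties=[0.5, 1.0, 2.0])
print(agent.skill)
```

The design choice the curriculum embodies is that the ordering of tasks, not just their content, shapes what the agent ends up able to do.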

“We believe such [agents] could empower humans across a wide range of fields, including scientific discovery, materials design, personal assistants and tutors and many other applications we can’t yet fathom,” Qiu said. “Using complex, open-ended research environments to test the performance of agents on a significant battery of intelligence tests is the approach most likely to help us identify and fill in those aspects of human intelligence that are missing from machines. [A] structured battery of tests facilitates the development of a real understanding of the workings of [AI], which is essential for engineering safe systems.”

Currently, Generally Intelligent is primarily focused on studying how agents deal with object occlusion (i.e., when an object becomes visually blocked by another object) and object persistence, and on understanding what’s actively happening in a scene. Among the more challenging areas the lab is investigating is whether agents can internalize the rules of physics, like gravity.

Generally Intelligent’s work brings to mind earlier work from Alphabet’s DeepMind and OpenAI, which sought to study the interactions of AI agents in gamelike 3D environments. For example, OpenAI in 2019 explored how hordes of AI-controlled agents set loose in a virtual environment could learn increasingly sophisticated ways to hide from and seek each other. DeepMind, meanwhile, last year trained agents with the ability to succeed at problems and challenges, including hide-and-seek, capture the flag and finding objects, some of which they didn’t encounter during training.

Game-playing agents might not sound like a technical breakthrough, but it’s the assertion of experts at DeepMind, OpenAI and now Generally Intelligent that such agents are a step toward more general, adaptive AI capable of physically grounded and human-relevant behaviors — like AI that can power a food-preparing robot or an automatic package-sorting machine.

“In the same way that you can’t build safe bridges or engineer safe chemicals without understanding the theory and components that comprise them, it’ll be difficult to make safe and capable AI systems without theoretical and practical understanding of how the components impact the system,” Qiu said. “Generally Intelligent’s goal is to develop general-purpose AI agents with human-like intelligence in order to solve problems in the real world.”

Image Credits: Generally Intelligent

Indeed, some researchers have questioned whether efforts to date toward “safe” AI systems are truly effective. For instance, in 2019, OpenAI released Safety Gym, a suite of tools designed to develop AI models that respect certain “constraints.” But constraints as defined in Safety Gym wouldn’t preclude, say, an autonomous car programmed to avoid collisions from driving two centimeters away from other cars at all times or doing any number of other unsafe things in order to optimize for the “avoid collisions” constraint.
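The specification gap described above — a constraint that forbids collisions but says nothing about near-misses — can be made concrete with a toy example. The code below is a hypothetical illustration (not Safety Gym’s actual API): the cost function counts only collisions, so a policy that tailgates at two centimeters satisfies the constraint perfectly while real-world risk stays near its maximum.

```python
# Toy illustration of an under-specified safety constraint (hypothetical,
# not Safety Gym's API). The constraint counts only actual collisions,
# so it is blind to how dangerously close an agent gets.

def collision_cost(distance_m: float) -> float:
    """Binary constraint: 1.0 if the cars touch, else 0.0."""
    return 1.0 if distance_m <= 0.0 else 0.0


def real_world_risk(distance_m: float) -> float:
    """What we actually care about: risk grows as the gap shrinks."""
    return 1.0 / (1.0 + distance_m)


# A policy hugging other cars at 2 cm is "safe" by the constraint...
print(collision_cost(0.02))      # 0.0 — no constraint violation
# ...even though its real-world risk is close to the maximum of 1.0.
print(round(real_world_risk(0.02), 2))
```

An optimizer given only `collision_cost` has no incentive to keep any margin at all, which is exactly the failure mode the critics point to.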

Safety-focused systems aside, a host of startups are pursuing AI that can accomplish a vast range of diverse tasks. Adept is developing what it describes as “general intelligence that enables humans and computers to work together creatively to solve problems.” Elsewhere, legendary computer programmer John Carmack raised $20 million for his latest venture, Keen Technologies, which seeks to create AI systems that can theoretically perform any task that a human can.

Not every AI researcher is of the opinion that general-purpose AI is within the realm of possibility. Even after the release of systems like DeepMind’s Gato, which can perform hundreds of tasks, from playing games to controlling robots, luminaries like Mila founder Yoshua Bengio and Facebook VP and chief AI scientist Yann LeCun have repeatedly argued that so-called artificial general intelligence isn’t technically feasible — at least not today.

Will Generally Intelligent prove the skeptics wrong? The jury’s out. But with a team numbering around 12 people and a board of directors that includes Neuralink founding team member Tim Hanson, Qiu believes it has an excellent shot.
