We’ve all been astonished at how chatbots seem to understand the world. But what if they were truly connected to the real world? What if the dataset behind the chat interface was physical reality itself, captured in real time by interpreting the input of billions of sensors sprinkled around the globe? That’s the idea behind Archetype AI, an ambitious startup launching today. As cofounder and CEO Ivan Poupyrev puts it, “Think of ChatGPT, but for physical reality.”
Archetype’s foundation model is called Newton. Yes, they know about Apple’s long-lamented handheld device killed by Steve Jobs in 1997, and no, they don’t seem to care. The new Newton is designed to process data from sensors of all kinds and answer questions, generate charts, or even produce computer code to report what’s happening in the world. For Poupyrev himself, Archetype is the fulfillment of a long-held belief that the digital world can provide a means of deep engagement with the physical one. Though I didn’t know it at the time, I had my own hand in this: Soviet-born Poupyrev’s fascination with manipulating the world through tech was triggered in part by my book Hackers, which had been slipped to his father during a visit to China. “The idea that things could be hacked and their nature could be changed by inventing new technology inspired me for the rest of my life,” he tweeted about the book in 2020. He also honed his English by reading and rereading the book. You’re welcome.
Poupyrev’s journey after he left the Soviet Union and became a computer scientist included stints at Sony, Disney, and, until last March, Google’s ATAP division. That’s where he led a team working on Soli, a project that built tiny radar devices into wearable gadgets to allow them to respond to a person’s gestures and movements. The demos were impressive, but that approach had limits. “Analyzing sensors was really hard. You had to read them by hand,” he says. When LLMs appeared, Poupyrev and his colleagues realized that, with modifications, they could make sensor data more powerful by providing a way for humans to easily explore and monitor data collected across vast swaths of time and space. Instead of a large language model, it would be a large behavior model. “We were excited to see how they can work with real-time data from the physical world,” he says. They were particularly excited to do it outside of Google, free of the constraints of working within a giant organization. In March last year, Poupyrev left, eventually joined by four others, to start Archetype, now funded by a $13 million seed round.
“The physical world is where we have most of our problems, because it is so complex and fast moving that things are beyond our perception to fully understand,” says Brandon Barbello, a cofounder who is also Archetype’s COO. “We put sensors in all kinds of things to help us, but sensor data is too difficult to interpret. There’s a potential to use AI to understand that sensor data—then we can finally understand these problems and solve them.”
When I visited Archetype’s founding team of five, currently working out of a cramped room in the Palo Alto office of its lead funder, venture capital firm Venrock, they showed me some illuminating demos that, they assured me, only hinted at Newton’s vast potential impact. They placed a motion sensor inside a box and prompted Newton to imagine that the container was an Amazon package with fragile cargo that should be carefully monitored. When the box was dropped, the display running the model broke the news that the package might be damaged. One can easily imagine a shipment of vaccines with motion, temperature, and GPS sensors monitored to verify that it arrives with full effectiveness.
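Archetype hasn’t published Newton’s inner workings or its interface, so the Python below is only a minimal sketch of that box demo’s logic. Everything in it, from the simulated sensor trace to the `ask_newton` stand-in, is invented for illustration; the point is the shape of the idea: raw accelerometer samples plus a plain-language framing go in, and a plain-language verdict comes out.

```python
# A toy sketch, not Archetype's code: detect_events and ask_newton are
# invented stand-ins for whatever the real model does.
import math

# Simulated accelerometer trace (x, y, z in g's): at rest, a moment of
# near free fall, then a hard impact.
TRACE = [
    (0.0, 0.0, 1.0),   # at rest: gravity only
    (0.0, 0.0, 1.0),
    (0.1, 0.0, 0.1),   # falling: total acceleration near 0 g
    (0.1, 0.2, 0.1),
    (1.5, 2.0, 6.5),   # impact spike
    (0.0, 0.0, 1.0),
]

def detect_events(trace, freefall_g=0.3, impact_g=4.0):
    """Flag samples that look like free fall or a hard impact."""
    events = []
    for i, (x, y, z) in enumerate(trace):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude < freefall_g:
            events.append((i, "free fall"))
        elif magnitude > impact_g:
            events.append((i, f"impact of {magnitude:.1f} g"))
    return events

def ask_newton(framing, events):
    """Stand-in for the model call: turn events into a verdict."""
    if any("impact" in label for _, label in events):
        return framing + " It appears the package was dropped and may be damaged."
    return framing + " No rough handling detected."

framing = "This box is an Amazon package carrying fragile cargo."
print(ask_newton(framing, detect_events(TRACE)))
```

In the real system, of course, the model rather than a hand-tuned threshold decides what counts as a drop; that is precisely the hand-coded sensor analysis Poupyrev says he wants to leave behind.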
One key use case is using Newton “to talk to a house or chat with a factory,” says Barbello. Instead of needing a complex dashboard or custom-built software to make sense of the data from a home or industrial facility wired with sensors, you can have Newton tell you what’s happening in plain language, ChatGPT style. “You’re no longer looking sensor by sensor, device by device, but you actually have a real-time mirror of the whole factory,” Barbello says.
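Speculatively again, here is what that “real-time mirror” might reduce to in miniature: rolling the latest reading from every sensor in a facility into one plain-language answer instead of a dashboard. The sensor names and alert thresholds below are made up for the sketch.

```python
# A miniature "real-time mirror": summarize a facility's latest sensor
# readings in plain language. All names and thresholds are invented.
READINGS = {
    "press_3/vibration_mm_s": 4.1,
    "press_3/temp_c": 61.0,
    "conveyor_1/speed_m_s": 1.2,
    "kiln_2/temp_c": 240.0,
}

# Alert levels for the sensors that have them.
ALERT_LEVELS = {
    "press_3/vibration_mm_s": 3.5,
    "kiln_2/temp_c": 300.0,
}

def factory_status(readings, alerts):
    """Answer 'what is happening?' without going sensor by sensor."""
    problems = [
        f"{name} reads {value}, above its alert level of {alerts[name]}"
        for name, value in readings.items()
        if name in alerts and value > alerts[name]
    ]
    if not problems:
        return "All monitored systems are within normal ranges."
    return "Attention needed: " + "; ".join(problems) + "."

print(factory_status(READINGS, ALERT_LEVELS))
# -> Attention needed: press_3/vibration_mm_s reads 4.1, above its
#    alert level of 3.5.
```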
Naturally, Amazon—owner of some of the world’s most digitally sophisticated logistics operations—is one of Archetype’s backers, through its Industrial Innovation Fund. “This has the potential to further optimize the flow of goods through our fulfillment centers and improve the speed of delivery for customers, which is obviously a big goal for us,” says Franziska Bossart, who heads the fund. Archetype is also exploring the health care market. Stefano Bini, a professor at UC San Francisco’s Department of Orthopaedic Surgery, has been working with sensors that can assess the recovery progress after a person has knee replacement surgery. Newton might help him in his quest for a single metric, perhaps drawn from multiple sensors, that “can literally measure the impact of any intervention in health care,” he says.
Another early Archetype client is Volkswagen, which is running some early tests of Archetype’s model. Surprisingly, these don’t involve autonomous driving, though Archetype very much wants its technology to be used for that. One Volkswagen experiment involves a scenario where a car’s sensors can analyze movement, perhaps in concert with a sensor on a driver’s person, to figure out when its owner is returning from the store and needs an extra hand. “If we recognize human intention in that scenario, I can automatically open that back gate, and maybe place my stuff into specially heated or cooled locations,” says Brian Lathrop, senior principal scientist at Volkswagen’s Silicon Valley innovation center. That mundane task, believes Lathrop, is just the beginning of what becomes possible when AI can digest reams of sensor data into human-centric insights. Volkswagen’s interests include the safety of people outside vehicles as well as passengers and drivers. “What happens when you network all those cameras from those millions of vehicles on the roadway, sitting in parking lots, on driveways?” he says. “If you have AI looking at all these data feeds, it opens up an incredible amount of possibilities and use cases.”
It’s not hard to imagine the dark side of a trillion-sensor monitoring system providing instant answers to questions about what’s happening at any location in its dense network. When I mention to Poupyrev and Barbello that this seems a trifle dystopian, they assure me they’ve thought of this. Unlike camera feeds, they say, radar and other sensor data is relatively benign. (Camera data, however, is one of the sensor inputs that Archetype can process.) “The customers we are working with are focusing on solving their specific problems with a broad variety of sensors without affecting privacy,” says Poupyrev. Volkswagen’s Lathrop agrees. “When we’re using Archetype software, I’m detecting behavior, not identity. If someone walks up to my wife and tries to grab her purse, that’s a behavior you can detect without identifying the person.” On the other hand, there’s evidence that the way people walk—something high-quality radar might well detect—is as distinctive as a fingerprint. Just sayin’.
Archetype’s vision isn’t unique. Robotics companies in particular are looking at using generative AI and sensors; one company is literally called Physical Intelligence. Poupyrev acknowledges the competition but says that Archetype is distinguished by its breadth. “We address a much more generic market where pretty much anybody who has sensor data, irrespective of use case, should be able to make it useful and functional for them,” he says. Barbello adds that Archetype’s technology is more powerful because of its breadth. “Our models are able to learn from more examples of how the physical world works than those foundations served by more narrow physical models,” he says. “We think our approach is the best way to tackle this problem of understanding the entire physical world.” Got it. But after that, maybe we can tackle the new problems that come when Archetype’s mission is accomplished.
Until a year ago, Archetype’s team all worked together at ATAP, an R&D lab inside Google, embedded within the Motorola division whose acquisition closed in 2012. Its DNA was a strange mixture of mobile-directed research, the values of the US Department of Defense’s Advanced Research Projects Agency, and Google’s own fixation on making bets on exotic breakthroughs. I talked about ATAP in 2013 when writing about an underappreciated, and sadly discontinued, animation project called Spotlight Stories, and its first sensor-driven mini-movie, Windy Day.
Windy Day came from Motorola’s in-house moonshot division, Advanced Technology and Projects (ATAP). This research group, begun in May 2012 (the same month that Google’s $12.5 billion purchase of Motorola Mobility became official), shares the high ambitions of its parent company’s own long-term research group, Google X. But ATAP has a different model: DARPA, the US Defense Department’s Advanced Research Projects Agency. DARPA’s accomplishments include game-changers like lasers, stealth bombers, and that little thing we call the internet.
Over the years, DARPA has honed a process for producing its breakthroughs, a regimen that Motorola Mobility’s new CEO, Dennis Woodside, thought was worth emulating. So he hired DARPA’s charismatic director, Regina Dugan, as a senior vice president. Making the jump with Dugan was her deputy, Ken Gabriel.
They promptly built a mini-DARPA inside Motorola. Like DARPA, ATAP engages researchers for two-year stints, directing them to take on a project just at the point where new technologies make a groundbreaking advance possible. Project leaders are free to contract with outsiders to assist, enlisting some of the world’s best minds for edgy research with specific goals. Some of those goals are just what you’d expect from a commercial R&D division. Just as DARPA undertakes certain projects with obvious military application, ATAP looks at obvious stuff like speeding up graphics, developing thinner materials, or making your phone’s battery last longer. If the result won’t be at least five to ten times better, a project is a nonstarter. But ATAP doesn’t just explore the stuff that other people want. “It’s also our job at ATAP to do things that they don’t know to ask for,” says Gabriel.
Joe asks, “How do battery-powered garden tools like mowers, snow blowers, and leaf blowers compare to their gas-powered counterparts?”
Thanks for asking, Joe. Are you asking in terms of environmental impact? Because you know the answer—electric is the way to go. As far as choosing which one does the best job, after extensive research (Google, ChatGPT, and consulting BJ, the guy who actually clears my driveway), I find consensus. Battery-powered tools are lower maintenance, easier to use, and a lot less noisy. But they can’t take on heavier jobs, in part because the charge doesn’t last long and in part because they don’t have the power. If you live in Buffalo, a battery-powered snow blower might not do the trick for you. Or if you own an estate like Saltburn, it’s going to take a lot of recharging to cover all that ground with an electric mower. Also, there’s this: BJ tells me the local fire department recently had to put out a huge conflagration ignited by a leaf blower’s battery charger.
But, people, how long are we going to stay on this cursed path of burning energy to tend our gardens? Google once hired a bunch of goats to eat its grass! Leaves can get gathered by good ol’ rakes. Or let them mulch in place! And the best way to clear a driveway piled with snow is to hire some neighborhood ragamuffins with snow shovels, pay them some cash, and give them a belt of hot chocolate. Mother Nature will honor you!
You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.
Taiwan’s earthquake this week hit a magnitude of 7.4. Naturally, people are asking: Will this slow down Nvidia?
While our LLMs talk to sensors, the chips that run them might one day communicate via light.
Two years ago a lone hacker took down North Korea’s internet. Now he’s revealing his identity. Brave soul.
Turkey’s love affair with crypto was symbolized by Faruk Özer, who is now beginning an 11,196-year prison term for fraud. What will Bitcoin be worth then?
For some female scientists, the cold was not the most treacherous part of Antarctica.
Don’t miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today.