Lambda Labs raises $15M for AI-optimized hardware infrastructure


Lambda Labs, an AI infrastructure company, this week announced it has raised $15 million in a venture funding round. The latest investment brings the company’s total raised to $19 million, following an earlier $4 million round from Gradient Ventures and Bloomberg Beta.

San Francisco-based Lambda made a controversial debut in 2013, launching a facial recognition API for developers building apps for Google Glass, Google’s ill-fated augmented reality headset. The API, which soon expanded to other platforms, enabled apps to do things like “remember this face” and “find your friends in a crowd,” Lambda CEO Stephen Balaban told TechCrunch at the time. It was used by thousands of developers and was, at least at one point, handling over 5 million API calls per month.

Since then, however, Lambda has pivoted to selling hardware systems designed for AI, machine learning, and deep learning applications. Among these are the TensorBook, a laptop with a dedicated GPU, and a workstation product with up to four desktop-class GPUs for AI training. Lambda also offers servers, including one designed to be shared between teams and another, called Echelon, that Balaban describes as “datacenter-scale.”

Above: One of Lambda Labs’ workstations.

“Machine learning teams spend a huge amount of their time building and managing their own compute infrastructure,” Balaban said during a showcase at VentureBeat’s Transform 2021 conference. “If they’re using on-premises infrastructure, they’re spending time designing servers and workstations, buying and sourcing GPUs and CPUs, negotiating with vendors, understanding power and cooling, and then doing actual deployments in the datacenter. If they’re running their infrastructure in the cloud, they need to spend time designing the machine images to deploy with that instance, write provisioning scripts, and keep those machine images up to date. Basically, [companies are] paying a bunch of money to people who have Ph.D.s in computer science to do Linux system administration for you. And having your machine learning team do Linux system administration just doesn’t make sense.”

Software plus hardware

A number of startups offer preconfigured hardware for AI development, including Graphcore. But Balaban says Lambda’s major differentiator is its software tools.

Every Lambda machine comes preinstalled with Lambda Stack, a collection of machine learning development frameworks that includes Google’s TensorFlow and Facebook’s PyTorch. Developers can update the frameworks with a single console command, and a model trained on a local machine can be copied to a Lambda server running in the cloud.

“Our customers include Apple, Intel, Microsoft, Amazon Research, Tencent, Kaiser Permanente, MIT, Stanford, Harvard, Caltech, and the Department of Defense,” Balaban said. “We have [thousands] of users, [and] most of the Fortune 500 and almost every major research university in the U.S. — as well as [many of] the research labs at the Department of Transportation and Department of Energy — use Lambda hardware and Lambda Stack.”

Balaban also claims that the company, which was founded in 2012, has been cash-flow positive since November 2013 and is on track for a $60 million revenue run rate in 2021.

