The Surging Opportunity for Robotic Automation

The House Fund
May 18, 2021 · 14 min read

Ken Goldberg and The House Fund discuss the opportunities and challenges in state-of-the-art robotics for Manufacturing and Logistics.

Today we talk with renowned roboticist Ken Goldberg about the current state of robotics, and how the rise of e-commerce, supercharged by COVID-19, has accelerated the need for automation in Manufacturing and Logistics. Ken is in conversation with Cameron Baradar, Partner at The House Fund.

Ken Goldberg is the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley's Industrial Engineering and Operations Research (IEOR) Department, Director of the AUTOLab (automation research) and CITRIS People and Robots (CPAR), and Founding Member and Chair of the Berkeley AI Research (BAIR) Lab Steering Committee. He holds joint appointments at UC Berkeley in EECS, Art Practice, and the School of Information, and at UCSF in Radiation Oncology.

The History of Robotics

CB: Before we talk about today, let’s take a look back at the field of robotics. You have spoken before of the “three waves of robot grasping research” and how each wave has influenced the field. Can you talk about that framework and why grasping is such a challenge?

KG: Grasping is hard, which surprises a lot of people. The perception is, if machines can play chess and Go far better than humans, then they should be able to pick up any object. But the fact is, most robots are klutzes.

If the object happens to be a cube, a robot can pick it up. But for more interesting objects — for example, a hairbrush — the robot must figure out where to put its grasping contact points. And if the hairbrush is sitting in a pile of other things, figuring out how to get in there and grasp it gets tricky.

It’s harder to predict the motion of an object moving across a table than it is to predict the motion of an asteroid a million miles away.

Uncertainty in physics, perception, and control makes robotic grasping a grand challenge. Credit: Ken Goldberg

There’s uncertainty in the perception, because even though you may have a high-res camera, you still don’t know precisely where things are in 3D space. We can scan with cameras, but we can’t build a perfect three-dimensional model of the world.

There’s uncertainty in control. If I tell my robot to move its hand to a specific point in space, it will always be slightly off. A small error makes a big difference. If I’m grasping something and move one of my fingertips just a bit, I’m likely to drop it.

And there’s uncertainty in the physics. If I’m trying to grab something that’s pressing up against other things, there are frictional interactions that are very difficult to predict. It’s harder to predict the motion of an object moving across a table than it is to predict the motion of an asteroid a million miles away. These are really fundamental and deep questions. They’re not easy to resolve.
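The control-uncertainty point can be made concrete with a toy Monte Carlo sketch. The 3 mm grasp tolerance and the noise levels below are illustrative assumptions, not measured robot parameters; the point is only how quickly reliability degrades as positioning noise grows.

```python
import random

def grasp_succeeds(offset_mm, tolerance_mm=3.0):
    """Toy model: a parallel-jaw grasp holds only if the fingertip
    lands within tolerance_mm of the planned contact point."""
    return abs(offset_mm) <= tolerance_mm

def success_rate(noise_std_mm, trials=100_000, seed=0):
    """Estimate grasp reliability under Gaussian positioning noise."""
    rng = random.Random(seed)
    hits = sum(
        grasp_succeeds(rng.gauss(0.0, noise_std_mm))
        for _ in range(trials)
    )
    return hits / trials

# Tripling the control noise sharply reduces reliability.
low_noise = success_rate(1.0)   # tolerance is 3 sigma: ~99.7% success
high_noise = success_rate(3.0)  # tolerance is 1 sigma: ~68% success
```

A small increase in noise moves the system from near-perfect to dropping roughly a third of objects, which is why sub-millimeter errors matter so much in grasping.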

So, it’s a grand challenge for robots to pick up a novel object. People have been looking at it for a long time. The classic technique, what I call the First Wave, is still very popular. First Wave uses analytic techniques from physics and mechanics, drawing on theory and research that’s been around for 200-plus years. The theory says that if you know the geometry of an object and where the contacts are, you can answer the fundamental question: will the object be immobilized? The theory is elegant and beautiful, and it’s infinitely interesting and complicated. The trouble is that it doesn’t solve the problem because of uncertainty. We don’t know the geometry or frictional forces.

Grasping requires incredible skill and dexterity. Here, a robot with a suction gripper picks up an amorphous 3D-printed object on the left, on which mechanical grippers would have struggled to find a solid grasping point. On the right, grippers are required to pick up a multi-colored woven ball, which would have been too challenging for suction to hold on to. Cover of Science Robotics, January 2019. Photo by Adriel Olmos.

That’s where the Second Wave comes in, which is about learning from examples — robots trying things and exploring and building empirical models. That’s the central idea behind deep learning, and it has been surprisingly successful. The challenge is that the learning curve is very slow. It takes a long time to get good enough to pick up arbitrary objects with high reliability. We can get up to seventy, eighty percent quickly, then it starts to asymptote. It takes a long time to improve beyond that.

I’m excited about the Third Wave, a synthesis of the first two. The trick is how to put them together, because they’re very different frameworks. In our case, we figured out that we could use the analytic model not in the real world, but in simulation, to very quickly generate examples.

In simulation, we can do experiments a thousand times faster and generate millions of examples overnight. That's the foundation for the ideas Jeff Mahler and I developed for Dex-Net, a project to develop highly reliable robot grasping through deep simulation-to-reality transfer learning.
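As a rough illustration of that First-plus-Second-Wave idea, the sketch below uses an analytic grasp condition inside a simulation loop to mass-produce labeled examples. The object (a 2D ellipse), the friction angle, and the sampling scheme are simplified stand-ins chosen for this example, not the actual Dex-Net pipeline; the label comes from a standard antipodal, friction-cone test.

```python
import math
import random

def ellipse_contact(theta, a=2.0, b=1.0):
    """Contact point and inward surface normal on the ellipse
    x^2/a^2 + y^2/b^2 = 1, parameterized by angle theta."""
    p = (a * math.cos(theta), b * math.sin(theta))
    # Outward normal is the gradient of the implicit surface; negate for inward.
    g = (p[0] / a**2, p[1] / b**2)
    norm = math.hypot(*g)
    return p, (-g[0] / norm, -g[1] / norm)

def antipodal(theta1, theta2, friction_angle=math.radians(15)):
    """Analytic label: both inward contact normals must lie within the
    friction cone of the line connecting the two contacts."""
    (p1, n1), (p2, n2) = ellipse_contact(theta1), ellipse_contact(theta2)
    axis = (p2[0] - p1[0], p2[1] - p1[1])
    length = math.hypot(*axis)
    if length < 1e-9:
        return False
    axis = (axis[0] / length, axis[1] / length)
    # Angle between the closing direction at each contact and its normal.
    a1 = math.acos(max(-1.0, min(1.0, axis[0] * n1[0] + axis[1] * n1[1])))
    a2 = math.acos(max(-1.0, min(1.0, -axis[0] * n2[0] - axis[1] * n2[1])))
    return a1 <= friction_angle and a2 <= friction_angle

def generate_dataset(n=10_000, seed=0):
    """Sample random grasp candidates and label them analytically;
    scaled up, this is how simulation yields millions of examples."""
    rng = random.Random(seed)
    return [
        ((t1, t2), antipodal(t1, t2))
        for t1, t2 in (
            (rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi))
            for _ in range(n)
        )
    ]
```

Each labeled pair costs microseconds instead of a physical trial, which is what makes overnight dataset generation feasible.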

Another key ingredient is depth maps, which are a very reduced representation of objects that concentrate on the geometry and ignore color and texture and everything else, because those factors are less relevant for grasping.
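Concretely, a depth map arrives as a single-channel image of distances. Below is a minimal preprocessing sketch; the camera working range and the convention of mapping missing readings to the far plane are assumptions for illustration, not taken from any specific system.

```python
import numpy as np

def preprocess_depth(depth_mm, near_mm=300.0, far_mm=1000.0):
    """Turn a raw depth image (millimeters, 0 = missing reading) into a
    normalized single-channel input: geometry only, no color or texture."""
    depth = depth_mm.astype(np.float32)
    invalid = depth <= 0
    depth = np.clip(depth, near_mm, far_mm)
    # Scale to [0, 1] over the assumed working range of the camera.
    depth = (depth - near_mm) / (far_mm - near_mm)
    depth[invalid] = 1.0  # treat missing pixels as background (far plane)
    return depth[..., None]  # add a channel axis for the network

# A fake 4x4 depth frame standing in for a real RGB-D sensor reading.
frame = np.array([[0, 400, 500, 650],
                  [300, 450, 550, 700],
                  [350, 480, 600, 0],
                  [380, 500, 620, 1000]], dtype=np.float32)
normalized = preprocess_depth(frame)  # shape (4, 4, 1), values in [0, 1]
```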

Depth map from Alan Zucconi

I’m practical. I’m from the school of, let’s engineer a system that works reliably. There are certain things, like being able to manipulate the robot arm using inverse kinematics, that have been studied for 50-plus years. Let’s not throw that out the window. Take the pieces we know and build on top of them.
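Inverse kinematics for simple arms is exactly the kind of long-studied building block worth keeping. For a planar 2-link arm there is a closed-form solution; the unit link lengths here are arbitrary, and real arms with more joints typically use numerical solvers instead.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a planar 2-link arm:
    given a target (x, y), return joint angles (theta1, theta2)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle; outside [-1, 1] means unreachable.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # one of the two elbow solutions
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics, used here to check the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Running the forward kinematics on the IK output recovers the original target, which is the basic sanity check for any IK routine.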

While COVID accelerated need, there are significant challenges ahead for robotics in Manufacturing.

CB: Here we are today, in the midst of a global pandemic that has affected every corner of the world. It has brought up unprecedented challenges, particularly in manufacturing and logistics, that can be solved by robotics.

Let's talk about manufacturing first. From pharmaceuticals to automotive to electronics, with borders closed and lockdowns in effect, manufacturing was brought to a standstill, and it is another landscape that would greatly benefit from automation.

Robots have historically been used for very repetitive, highly scripted manufacturing tasks like painting and welding. But until recently, robotic systems have been incapable of performing more advanced manufacturing tasks, because the technology hasn't been flexible enough to adapt to new conditions.

How has the introduction of deep learning in the Second and Third Wave of robotics opened new opportunities?

KG: In manufacturing, robots must perform for a sustained period of time, reliably and cost effectively, and that is hard. There's not a big margin for error. In car manufacturing, for example, if one bolt is not tightened properly, the car will eventually have a major failure. And that's a nightmare for companies. They want Six Sigma, where failures happen on the order of one in a million. The bar is high. And that poses many interesting additional questions for researchers.

Deep learning is very good at function approximation. My name for it is hyperparametric function approximation. “Deep Learning” sounds a little too magical. It can work extremely well, but it’s hard to prove. You can’t get guarantees, and industry is wary. If they put something out on a production line — if it’s Intel making chips — they don’t want anything to go wrong. A deep learning method can adjust the heating values, and it may reduce production time most of the time, but if it burns up a chip, they’re not going to be happy about that. That’s an ongoing challenge.

Manufacturing tends to be conservative. That’s reality. There are areas where we’re starting to see deep learning, especially with something that has to be variable, where every system is slightly different. For example, radiation treatment customized to the individual. We patented a way to 3D print a customized implant that shields the person getting radiation treatment.

Patented 3D printed shield for treating an oral cancer patient. Ken and team used robot motion planning to optimize the curved channels through which the radioactive seed is moved.

Let’s say someone has mouth cancer and needs to get radiation. Right now, there are standardized small, medium, and large shields they put in your mouth to direct the radiation. But with 3D printing, we can customize, personalize, and optimize the shape — it’s an area that’s really exciting and interesting, where I think we’ll see more innovation.

Manufacturing tends to be conservative. That’s reality. There are areas where we’re starting to see deep learning, especially with something that has to be variable, where every system is slightly different.

The Manufacturing industry is paying a lot of attention to 3D printing right now, which is useful for both rapid prototyping and design, but can also be used for fabrication, and new materials like carbon fiber are being used for new structures. Both of those require solving a lot of subtle geometric problems, and techniques from AI can be relevant there: How do you optimize where you place support structures or struts, say, in an aircraft wing? How do you design a chair that’s going to have maximal strength but minimal weight because it’s going to be used in an airplane? There’s also the idea of analyzing the structural strength and properties of a material, like heat resistance — how do I change a material’s thickness to distribute heat more accurately? All of those are generally done with trial and error by humans. But there’s a degree to which they can be automated.

There’s also a hope that we can start to learn from humans in manufacturing environments to do certain tasks. An estimated 1,000 human hands touch a typical iPhone in production. And it’s not because manufacturers don’t want to automate. They’re trying to automate. But it’s very, very difficult, because there are delicate, nuanced things that have to be press fit just right. It’s been hard to automate those, but there’s a hope that we can learn by observing, and experiment to get a system that could automate certain aspects of those.

Logistics, a promising frontier for robotics

CB: Let's move on to logistics. The pandemic has supercharged e-commerce logistics, opening up a big need for automation and robotics in the coming decade. We were already seeing an increase in e-commerce before the pandemic (12.7 percent YoY) — now, we've shot to 30.1 percent growth, representing almost $350B in spending (in H1 2020).

It’s a true shift in consumer behavior, and while we’ll have to wait for the pandemic to subside to see what post-pandemic norms will look like, COVID is estimated to have accelerated e-commerce adoption by 5 years. This leap is leading to problems including incredible labor shortages — warehouse managers who can’t keep roles filled, and even when they do, they struggle to keep their employees safe from COVID or overuse injuries from the repetitive nature of the job.

What's interesting is that, given where we are with robotics, hiring employees is still the answer for large companies, as witnessed by Amazon's big 2020 hiring spree (427,300 new employees between January and October), showing they weren't ready, or perhaps didn't have the time, to adopt automation at that scale. Yet there are some robotics solutions that are making big headway on grasping, which could open up hundreds of billions in market value.

How far do you think the tech is from being a viable solution for such expansion?

KG: Logistics is a sweet spot for robotics, because there's a fault tolerance. In a parcel sorting environment, or warehouse, you can drop things — it's not the end of the world. People don't expect Six Sigma performance like they do in manufacturing. You can even have 3, 4, or 5% failures. That's acceptable when you're dealing with lots and lots of packages.

In industry, as we’ve learned from Ambi Robotics (our AI/Robotics sorting startup with Jeff Mahler, in The House Fund portfolio), there are very clear criteria: I want to see X units-per-hour come through this system. And it’s not just ‘pick it up,’ but scan it, and put it into the right bin. It’s really demanding. And a robot must do that over and over, day after day, with very little downtime.

COVID is estimated to have accelerated e-commerce adoption by 5 years (TC). This leap is leading to problems including incredible labor shortages — warehouse managers who can’t keep roles filled, and even when they do, they struggle to keep their employees safe from COVID or overuse injuries from the repetitive nature of the job.

Logistics is a sweet spot for robotics, because there's a fault tolerance. In a parcel sorting environment, or warehouse, you can drop things — it's not the end of the world. People don't expect Six Sigma performance like they do in manufacturing.

People make a distinction between research and engineering, but I consider them both research. They both require ingenuity and patience, diligence and tuning. In China, it’s prestigious to work in a factory, and we need to adopt that attitude.

In a factory, you pick up the same thing over and over. We have techniques to train a robot to do that one thing. But in a warehouse every problem is different.

I don’t think logistics is an area that’s winner take all, by the way. There are areas, like social media, where as soon as Facebook got so big, nobody could catch up. Logistics is different. By nature of the variance of warehouses, item types, and job types that will all require different robotic solutions, there are going to be a lot of different approaches. There will be many winners in different niches.

As for Amazon's hiring spree — we are at a turning point in deciding when to use humans or robots for e-commerce sorting. Amazon continues to hire massive numbers of people because there's such turnover in sorting — it's backbreaking, tedious, and mind-numbing. And robots haven't yet been up to the task. Although, we believe our robot at Ambi is ready.

I don’t think logistics is an area that’s winner take all… By nature of the variance of warehouses, item types, and job types that will all require different robotic solutions, there are going to be a lot of different approaches. There will be many winners…

There will always be a need for humans in e-commerce; the demand is huge and still growing.

How to turn tech into a business

CB: One thing we help our entrepreneurs think through is how to solve true customer pain points to find product-market fit. For instance, the challenge of pick-and-pack labor is a clear pain point for a startup to address.

I'm sure you do this with your research students regularly too — when a new breakthrough technology emerges, how do you help your students apply it to specific problems? Or put another way, how do you think through which tech is right for a particular problem? Are there frameworks that can guide entrepreneurs in applying the right tech to their startups?

KG: The biggest thing is having your eyes open about the transition from the lab into practice. That’s a big gulf and it’s important to understand your secret weapons. That’s true for research, too.

Why are you going to solve this problem when all the smart people out there have not? You’ve got to have a good answer for that, which is normally something like: I have a new theorem and I’m going to see how I can apply that here. Or, I’m using this new piece of hardware that nobody else has — a brand new sensor or actuator — or some combination of those that basically says, I can do this better than anyone else. And then, is it defensible? If you come out with something that someone can look at and copy, somebody is going to copy it.

You have to understand the industry, and have the humility to say, okay, I don't know enough, how am I going to learn more? You have to talk to people and really spend time.

We've been doing that at Ambi Robotics. The team spent a year side by side with people in the warehouses, understanding how they work and what matters, and building trust with the people on the shop floor.

Grasping as a way to comprehend the world

CB: In a piece for the MIT Technology Review, Will Knight writes: “Developments in machine dexterity may also be significant for the advancement of artificial intelligence” noting that “manual dexterity played a critical role in the evolution of human intelligence.” What are your thoughts on this?

KG: I like that, because the opposable thumb, the ability to grasp, allowed Homo sapiens to make and use tools. This had a significant effect on evolution and allowed human thinking to evolve.

The hand is very complex. For example, sensing: our ability to feel a tiny groove, a little scratch, something microscopically small. It's amazing our fingers can do that. We don't know how to solve that. Tactile sensing is another grand challenge for robotics.

Robots are not coming for our jobs

CB: I've had many conversations with your colleague Michael Jordan about artificial intelligence as a human-centric engineering discipline. These systems are increasingly finding their way into applications that can have profound effects on people's lives, and this human element needs to be at the center of the conversation. How do you think about this when building and implementing robots that will work alongside humans, potentially supplanting them?

KG: I think a lot about this. I’ve been working as a professional artist for a number of years, and my artwork raises questions about ethics and hype and illusions around technology, particularly robotics, and conventional wisdom.

Ken Goldberg’s autonomous garden raises the question: is a robot capable of tending a garden? The implication is no — there are things robots could never do.

There is a long history of fear around robots that, at least in the West, goes all the way back to the ancient Greeks and Pygmalion. The word "robot" comes from a 1920 science fiction play, Karel Čapek's R.U.R., and carries forward the Frankenstein theme that robots will run amok and do away with us. That idea is very deeply rooted.

It’s part of my job to go out and explain where there is danger, and also to reassure people, because the robots are not coming that fast. Humans have many good years left. The New York Times might state that jobs are going to be wiped out by robots, but that’s not going to happen. People said that at the turn of the century, and in the mid-century. There have always been these predictions, and they are almost always exaggerated.

The New York Times might state that jobs are going to be wiped out by robots, but that’s not going to happen.

One thing we've all been reminded of this year is the inequality that exists globally and nationally. It's important to be aware of that and sensitive to it. A mistake many technologists make is to say, "this robot will replace 4.3 workers." But the reality is, if you've got that attitude, those workers on the shop floor will make sure your system doesn't work.

If a robot were going to take over my job, I wouldn't be happy about it either. But if a robot is coming to reduce injuries (which, by the way, are very high in pick-and-pack and warehousing) and monotony, then the intention and impact will be fundamentally different. Robots can do dull and dangerous jobs, freeing people to do other things, like managing the flows of elements in the factories and warehouses — and even recovering objects when the robots drop things!

The House Fund is a pre-seed and early-stage venture capital fund focused on the boldest Berkeley startups.

Working on something new? Get in touch at thehouse.fund
