If AI can replace some jobs, should we?
In this week’s episode, Chris talks with Missy Cummings, professor and director of George Mason University's Autonomy and Robotics Center. Missy spent eleven years (1988–1999) as a naval officer and military pilot and was one of the United States Navy's first female fighter pilots, flying the F/A-18 Hornet. In October 2021, the Biden administration named Cummings a new senior advisor for safety at the National Highway Traffic Safety Administration (NHTSA). Her appointment to the NHTSA was met with criticism from Tesla CEO Elon Musk, as well as personal harassment and death threats from Tesla advocates, in response to her previous statements critical of Tesla. Missy’s research interests include artificial intelligence, human-robot interaction, and the socio-ethical impact of technology. Cummings has written on the brittleness of machine learning and on future applications for drones. In addition, she has spoken critically about the safety of Tesla's Full Self-Driving capability, particularly its reliance on computer vision.
- Hello everyone. I am Chris Hyams, CEO of Indeed. My pronouns are he and him, and welcome to the next episode of "Here To Help". For accessibility, I'll offer a quick visual description. I am a middle-aged man with dark-rimmed glasses. I'm wearing a black T-shirt. Behind me is the North Austin skyline. At Indeed, our mission is to help people get jobs. This is what gets us out of bed in the morning and what keeps us going all day. And what powers that mission is people. "Here to Help" is a look at how experience, strength, and hope inspire people to want to help others. My very special guest today is Missy Cummings. Missy was one of the US Navy's first ever female fighter pilots, and she's now a leading academic in the fields of engineering, automation, and robotics. Missy calls herself a tech futurist whose job, she says, is to make tech work: not to stop it, but to help it get better. Missy has had an extraordinary academic career across multiple institutions. She was an NROTC instructor at Penn State. She was assistant professor of engineering at Virginia Tech. After earning her PhD at the University of Virginia, she was Professor of Aeronautics and Astronautics at MIT. Missy is currently director of George Mason University's Autonomy and Robotics Center. At the same time, she's also professor at the Duke University Pratt School of Engineering and the Duke Institute for Brain Sciences, and an affiliate professor with the University of Washington's Aeronautics and Astronautics department. She is an influential figure in the field of artificial intelligence, or AI, particularly in relation to autonomous systems and human-robot interaction. She is a prolific researcher and advocate for the responsible development and integration of AI technologies. She's been a vocal advocate for ensuring that the development and deployment of AI systems prioritize human wellbeing, safety, and societal impact. I am very excited for this conversation today, Missy. Thank you so much for joining me.
- Thanks for having me. I know my bio's really a mouthful. I just tell people, in general: just say she's a really great gal.
- All right, well we can start with that. Let me actually start the conversation where we always start by asking you, how are you doing right now?
- Well, right now, I am always amazed at once you're a fighter pilot, you're always a fighter pilot. And what that means is I'm always pushing myself and pushing my limits. And I just finished a couple of days hiking through Mount Washington and I'm not 25 anymore. That's how I'm doing.
- We have a lot to talk about, specifically around AI and its implications. But before we dive into that, I'd love to hear a little bit about how you got to where you are today. Can you tell us a little bit about your experience as a fighter pilot in the US Navy and how that led to your academic interests and research?
- Yeah, I think it's an unusual combination where somebody first went into the military, and I was on the front lines, the tip of the spear, as they say, and then changed over into academia after they left the military. But those two experiences for me are unbelievably intertwined. And I did an "Atlantic" article about this a few years ago. Frankly, I don't know how other academics become academics without first being a fighter pilot. 'Cause it's a dog-eat-dog world out there, even in academia. And so I find my fighter pilot skills translate quite well. But I grew up in the military basically. My dad was an enlisted mechanic on aircraft in the Navy. And so despite the fact that it is the Navy, I never knew a maritime Navy as much as I knew an aviation Navy. And it was a big influence. And then it's not surprising that I went to college. In my father's line of the family, I was the first person to actually go to and finish college. And it was great, meaning a great, challenging experience. And then of course I graduated in 1988, and if anybody remembers their history, 1986 and 1987 were "Top Gun". "Top Gun" was everywhere. And that's when I found out that I could be a pilot. Up to that time, I thought I'd be an intel officer, and then I was at the Naval Academy and found out I could be a pilot, and then of course you see "Top Gun", well, why wouldn't you want to do that? And that's what I did. And I was very much that person who likes to live life on the edge. I feel the need, the need for speed. That was definitely me. I'm a better driver now only because I've been working on driving. But there were many years where, when I look back on my younger self and my driving, I'm just horrified at who I was when I was younger. But there was that experience when I flew fighters, the F-18 Hornet; I flew them for three years. And on average, one person a month died that I knew flying fighters. And it was always the human-machine interaction. No one that I knew ever died in war. They always died in training accidents. And it's because the capabilities of the machine far exceeded those of the human. And so that really motivated me to go back to school and get my PhD, because I realized I was never going to be able to fix that while I was in the military. The way to fix that was from the outside.
- We have a lot to talk about with respect to AI, but I'd love to start with your current role as director of George Mason University's Autonomy and Robotics Center. Can you talk about the focus of your research there?
- So I recently moved to George Mason from Duke University because I had been working for the Biden Administration in the National Highway Traffic Safety Administration, helping them with policy around self-driving cars. And prior to that I had done a lot of work with the Defense Innovation Board for the Secretary of Defense and the military. And both those experiences really highlighted to me just what a problem it was in government. And, present company not included in this shade I'm about to throw, I'm shocked at the level of just ignorance in the C-suite of what AI and autonomy really is. Again, no one on this call. And so that really motivated me to start looking around and thinking, just like when I had been a fighter pilot, I want to fix this thing. I started really thinking about how I can fix the problem where we do not have enough, if any, qualified people in various agencies to really speak authoritatively about AI, what it can do, what it can't do, how to think about its limitations, 'cause companies are there to sell. That's their job. They're there to make you believe in their product, but you also need people on the other side to be able to assess the claims and promises. And so at that time, I was spending a lot of time in Washington DC with NHTSA. And so George Mason made me an offer. President Greg Washington, who'd been a friend of mine from many years ago, became the new president. He really reached out, and I decided to move to DC permanently so that I could begin new education and research programs focused on what I call translational AI. There are a lot of academics who like to tweak algorithms and show that my algorithm, my computer vision algorithm for example, can get up to 89.2% accuracy over the 89% accuracy that some of my peers got. And that's great for fundamental research, but we need to move beyond fundamental research to translational research, which you hear a lot about in medicine, but you don't hear a lot about in AI. It's great that we have a lot of these ideas about artificial intelligence, but who's actually making it work in the real world? And I think that my experience with self-driving cars, seeing what worked and what didn't work, that's really where I am now: trying to help translate the fundamentals of AI into the practicalities of AI.
- AI's been around for decades and decades, and the ideas behind it even longer. But it's clearly having a moment right now. I actually just looked at Google Trends last night, and if you look at the trend for the topic artificial intelligence from 2004 to the present, it hovered at pretty much the same level up until mid-2022, when it jumped about 3X, and then another 3X this year alone. So we're at almost 10 times the interest in the last year that there has been over the last 20 years, even though this work has been going on for a very, very long time. What is going on right now that you think has everyone losing their minds?
- It's funny when you say that, 'cause I think you and I need to get in rocking chairs on the front porch to talk about those dag nabbit young kids. We've been doing AI since, I mean, truly since before I was born. Licklider, if you don't know anything about Licklider, it's great to reach back and look at his work in the fifties to see how some things he got right and some things he got wrong about predictions for artificial intelligence. And old curmudgeons like us like to sit around and talk about how people think neural nets are something they just came up with, and it's true, I've been using them in my research for as long as I've been doing research. So I think we're in a moment because of several factors. I think computational resources, the speed of processing, the availability of data, kind of all the fundamental ingredients in the recipe, are far more accessible and available, and the computational resources are faster than we've ever seen before. So I think there were some key enablers to actually get us to the point where we could start running these algorithms, which we have all known about, if you've been in this business, for 50 years. And I also think the public is in a different mental space. I do think that, almost post-COVID, we had so much drama going on around COVID, there was always something there with COVID. And then when COVID started to go away and it no longer filled our everyday timeline, I think people may be addicted to leaping to the next terrible thing that could affect us all. And so somehow we've managed to transition over to AI is now going to be the job killer and the end of humanity as we know it. What we're seeing with AI, especially large language models, I was thinking about this over the weekend while I was hiking. If you've never seen the Charlie Chaplin movies of old, it's really worth reaching back and looking at those, because there was a real palpable fear that as factories were going from primarily human skilled labor to automated labor, this would be the job killer. It would cripple mankind as we knew it. And there was a real fear there. And I think that's what happens in people's brains: they see what they perceive to be a threat, and certainly what's been advertised as a threat in the media, and they don't realize that there are actually new areas, new work areas, new opportunities that we could have never seen. Indeed, it's because of factories that we were able to make that next leap into the industrial revolution. So I think that we're a little traumatized from COVID. I think the media has not been super helpful because they haven't been super informative. But then I would say the last guilty party in all this is academia, because we're not training enough people that have the right balance of technical skills, but also socio-technical skills. People who can understand AI and who can communicate what AI is and what it isn't.
- So you just recently published an amazing article that I got to read this week in "IEEE Spectrum" called "What Self-Driving Cars Tell Us About AI Risks". And you have these lessons, basically, to help people understand. Can you talk a little bit about what this arena of self-driving cars can tell us about some of the larger risks around AI in general?
- Yeah, so I put together this little set of five lessons learned, because this goes back to communication. Look, we could sit here and talk about hyperparameters and loss functions, and we could really get into the details of AI, but unless you're a well-seasoned current user of those kinds of systems, those messages are going to be lost on you. So I wanted to develop something easy to understand for everyone, at all levels, without needing to be an engineer, to help them understand what they were up against. And so the first lesson that I learned was that we so desperately, we being engineers and computer scientists, want to automate out the human, because we say we're going to automate out human error wherever it is. Which is hilarious to me, because somehow people forget that they, as the developers and the creators of the technology, are humans, and that they are also easily vulnerable to errors. And I saw this when I worked for NHTSA, the National Highway Traffic Safety Administration; it was kind of shocking the number of bonehead coding errors that I saw that resulted in accidents. Errors get made, that's 100% predictable. But that's why we've developed code-checking algorithms. Code-checking algorithms can't catch everything, so then you need different kinds of testing. The military in particular has these kinds of developmental testing stages. And then right before you go operational, you have some operational tests, and you have to understand how those testing levels change and what you have to do. And we're just not there yet in the self-driving world. Not that aviation has been flawless; there have been accidents like the 737 MAX, which showed that even aviation can sometimes make mistakes. But for the most part, aviation has such a strong safety record because there is so much effort put into the testing and certification aspect. And I think that the self-driving world, and any other AI endeavor where you're going to be putting something potentially critical out in the world, has got to take testing more seriously.
- In the long arc of history, there's always been pushback against all types of automation. And if you look over the last couple hundred years, we work fewer hours in safer conditions, we have a higher quality of life, and the technology has led to more opportunity. At the same time, every time technological advances happen, there is disruption in the short term. So there are people who lose their jobs, and then it's the job of folks like us to figure out how to help them find what is next. What do you envision the impact of artificial intelligence will be over the next couple of decades on the larger employment sector?
- Yeah, I get asked this question quite a bit and I want to be honest. I'm not saying that some jobs will not be lost, but on the whole, there's going to be a huge net gain in jobs, for some of the same reasons that I just talked about. The AI and maintenance field, we're just at the very beginning of what is going to be huge growth in that field. I think a lot about the meaningfulness of work and what it means for humans to have meaningful work. Dignified work. When you talk to taxi cab drivers, and I've talked to them all over the world, this is meaningful work for them. They like the transportation aspect. They like working with people, for the most part. So should we replace that job? I'm not actually in a conundrum here, because I don't see that happening anytime soon. And I will tell you for sure, even if we have some low-speed shuttles being deployed that could potentially replace bus drivers, first of all, we don't have enough bus drivers. So I think this is a reality we have to realize: we don't have enough bus drivers, and depending on whether you're in a rural area, we don't have enough taxi cab drivers. We don't even have enough Uber and Lyft drivers.
- We don't have enough pilots either while we're at it.
- That's right, that's right. We don't have enough pilots. So I think we also have to remember that there are some jobs, particularly as you move into rural America, that need augmentation. So that's how I think about AI and autonomy: we're really going to augment human capabilities as opposed to flat-out replacing them. And trucking, I've got news for everyone. It's not happening in trucking for a long, long, long time, especially the last mile, because there are just so many nuances in trucking that we can't really automate in the short term. I'm not saying it'll never happen, but it's not going to happen in my lifetime. But I also tell people, look, if you are a journalist and it's your job to write the sports column, and AI can do just as good a job as you can and come up with the same quality of article about who beat who in whatever A versus B rivalry, honestly, it's probably good that your job got replaced so that you can move on to more meaningful work. And what does that mean? Meaningful for humans means that it's not just rote work, what we call in robotics the dull, dirty, and dangerous. Those are not good jobs for people. They're repetitive. Like mining: robots in mines, best application ever. Dangerous, repetitive digging, exposure to dust and chemicals that the human body is not built for. So that's the best place to start automating out some jobs. I'm very optimistic here. Jobs that can be replaced probably should be replaced, and that will elevate humankind so that we can do more meaningful work. Now, I would have one caveat to that, something that's burbling just under the surface, and it's this: what are computer scientists doing with data annotators? Is this meaningful work? And ooh, this is one of my favorite topics to really get into right now, because there is a problem with data annotation, and is this meaningful work? It wouldn't be for me, but it's raising the standard of living for people in India, for example. So I think that we will continue to have these arguments, and it's good to have these arguments, because then maybe we can insert some technology and some processes until we get automated data annotation working, which I'm not even sure is really possible at the levels we need it to work. We need to find better ways of working. So even inside AI creation, I think we could do a lot to make more meaningful work.
- Yeah, and I think one of the interesting things here is augmentation. A big part of what you said was that your inspiration for going into academia was the connection between humans and machines, and your research is around human-machine interaction. You told a story last week about the Roomba. I'd love to hear you retell that. And I'm curious about the implications: how these systems not only simplify things, but might actually require us to change how we adapt and work with them.
- Yeah, as a teacher of human-robot interaction, when I start talking about the man-machine, human-machine interaction, I like to use the Roomba as the example because it's just so classic. So I've worked a lot with iRobot over the years, and even one of the iRobot researchers told me that the only good robot that they really produce is the garage robot, 'cause it's the simpler robot. And he said that the real reason that the Roomba works and that people like it is because it forces them, it changes their behavior. It makes them a cleaner person. Because if you've ever had at least the older versions of the Roomba, you would have to pick your floor up. You couldn't have anything on the floor, because if there was an obstacle, it would get caught and stuck on the obstacle, and you could buy these sensors to make sure it didn't go downstairs. So it just made you cleaner, it made you a better person, to pick up the floor, and thus the better you did, the better your robot would work. And so we are sort of seeing that come out in a scary way in self-driving, because I do hear quite often, well, we need to create new lanes for these cars. We need to put extra sensors in these cars. We need to put all this technology in the roads to make sure these cars work. And while I get why these ideas are coming forth, what they do is reduce the uncertainty of the world. If you're not an engineer, you just don't hear all the dollar signs that are in there. And it's one of the things that I saw firsthand in North Carolina: there were cities, towns that were so sure that self-driving cars were coming that they started changing city plans around parking. The idea was that we're going to take out parking garages, because your car was going to circle for you and just float out somewhere, or maybe go to an outlying parking lot and wait until you needed it back. And so city planners were starting to draw up plans to make sure that this would happen. And they also started painting the white lines on the road around gutters so that Teslas and other cars would know to avoid those and could track on the white line. And I get that we're trying to make the world easy for the technology to work, but that ends up costing the taxpayers millions of dollars. I mean, just painting the lines on the roads, what it takes to paint the lines up here in New Hampshire and Massachusetts, ha, after one season of snow, all that paint's gone anyway. So I think that we need to start thinking about the realities: it's one thing for you to pick up your floor to make your Roomba work, but it's a completely different animal when you have to start spending millions of dollars and changing your infrastructure to make a technology work. And I saw a "Wall Street Journal" article today: still, nobody really wants them.
- Talk a little bit about this need and about what you think might help in building more multifaceted or well-rounded technologists.
- People with a strong command of both the humanities and technology, engineering, computer science. There are just not enough of us out there. And that's because academia is just so stovepiped. I mean, I have to tell you, if you think I beat my head against a wall about self-driving, that's nothing compared to beating my head against a wall trying to get academia to move off these stovepiped platforms. And even computer science, depending on the department at the university that you're in, they are incredibly stovepiped. People do not want to play in the sandbox with somebody else. I mean, it goes back to resources and fiefdoms and power and control. But until we fundamentally change how we're educating people and what we consider to be valuable, meaning the kinds of degrees, it's going to be very hard for us to make a lot of headway in this area. I think there should be "computer science and": computer science and journalism, computer science and health.
- We always close with the same question. And this podcast started actually right at the very start of COVID. And so what, if anything, has left you with some hope or optimism for the future?
- I'm very optimistic, despite my whining and complaining all the time. Look at the fact that we're where we are. I tell people this all the time. I read an article recently that said self-driving cars are four to eight times more likely to get into an accident than a human. And people think that's negative. It's just a statement of fact. And it's also like, are you kidding me? This is amazing from an engineering perspective: only four to eight times more likely to get into an accident than a human. This is just tremendous. I don't want to undersell the progress that people are making. ChatGPT... it can be amazing, but it can also be an amazing disaster. But still, the fact remains that we've gotten to a place where, and it tickles me, there are reporters who think that we're achieving sentience and intelligence. I mean, it's that good. It's that much of a party trick that some people just swear by it. So when I take a step back and think about the progress that we've made in such a short time span, if you're looking at history on a larger scale, it is amazing. So if we could get this far in a short period of time, what else is out there? So I'm extremely optimistic, but my job is also to try to keep people safe and to keep that interaction productive. One of the things that I worry about is self-driving cars. They have not killed anyone yet, but if that starts to happen, and if they expand like GM and Google want, then someone will die. And at that point we could really handicap the public's acceptance. And so I think this is really critical. We are making amazing progress; it doesn't have to be overnight. And companies need to be mindful that, I know they want to make money, they've got shareholders that they have to report to, but in the end, there is this larger issue of: don't rush too quickly to get those short-term gains. 'Cause it could really hurt you and the advancement of the technology in the long term.
- Missy Cummings, thank you so much for joining me today for this conversation. Thank you so much for everything that you do to make the world a little bit smarter and a little bit safer.
- Thank you for having me.