David Autor: The Human Side of AI
MIT economist David Autor joins the St. Louis Fed’s Scott Wolla to explore how AI is reshaping the labor market and how technology can serve as a collaborator in the classroom.
David Autor, MIT’s Daniel and Gail Rubinfeld professor of economics, is a leading thinker and researcher on how AI could reshape the labor market. While he says AI could revolutionize how we work, it won’t replace human intuition. In this episode, Autor joins St. Louis Fed Economic Education Officer Scott Wolla and explains how he challenges students to use technology as a collaborator, not a crutch. He also discusses his approach for pushing students to focus on how economics can solve real-world problems.
Scott Wolla (VO): Welcome to Teach Economics from the St. Louis Fed, where we bring you classroom-tested insights and perspectives that can electrify your teaching and transform how students engage with economics.
On this episode, we’re joined by economist David Autor.
David is one of the most influential economic thinkers of our time, known for his research on the labor market impacts of technological change, globalization, and more. He is the Daniel and Gail Rubinfeld Professor in the MIT Department of Economics, co-director of the NBER Labor Studies Program, and co-director of the James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work.
On top of that, he’s won several teaching awards, including the MacVicar Faculty Fellowship for excellence in undergraduate teaching and the James A. and Ruth Levitan Award for Excellence in Teaching.
The Economist magazine called him “the academic voice of the American worker,” and he was dubbed “twerpy MIT economist” by comedian John Oliver.
As if that’s not enough, he’s also the captain of the MIT Economics hockey team.
[cut to interview]
Wolla: What position do you play?
David Autor: Mostly defense. I’m not fast enough for offense. And my shot is not good enough, but I can get in other people’s way if I work hard at it.
Wolla: Well, that’s important. I’m also really interested in your back story. Let’s start at the beginning. What initially sparked your interest in economics?
Autor: That happened quite late in life, by accident. I was a master’s student in public policy at the Harvard Kennedy School. I got there based on my professional experience, both in education and in computers and software development. I had worked for several years in San Francisco at a learning center for poor kids and adults, teaching computer skills.
I went to get a master’s in public policy thinking: I want to study that better and learn more about that. Then, once I got to the Kennedy School, I said, “Oh, I should apply for a Ph.D. to study this,” but I didn’t know how I was going to study it. And because I was applying for the Ph.D., I had to take the upper-level stats and economics classes, and I had never, ever taken economics. I just didn’t see it as relevant to me because I thought it was about the study of money. Then, I was in a class with Adam Jaffe, who works on innovation, now at Brandeis. And it was just mind-blowing to me. I felt like, Oh, wow. There’s a whole rigorous methodology for understanding the type of questions I like to think about and had never known how to approach.
I studied psychology as an undergraduate, with an informal concentration in computer science. I’d ultimately chosen against both computer science and psychology as fields—computer science, because I really liked the methods but felt the problems weren’t things I really wanted to work on; and psychology, because I thought the problems were interesting but didn’t really love the approach.
So, economics felt like this unknown toolkit for taking social problems, things affecting lots of people, and approaching them with the kind of rigor and formalism of computer science.
I found that incredibly, incredibly engaging and exciting, and especially because I was very interested in the labor market and because of the work I had done, saying: “Well, how is changing technology affecting the opportunities for different sets of people? The value of their skills? What they need to learn? Where they can go?” So, that’s how I got started in economics.
I had to do a lot of remediation. I took Calculus 1B with Harvard undergraduates when I was 30 years old. And I took as much economics as I could take, which I took quite slowly because it was really hard for me.
I did not think I was heading for a career in economics. I hoped to get a job at a public policy program. Partway through my public policy degree, it occurred to me: If I were doing life over again, I would go get a Ph.D. in economics. But I guess that window has closed.
I was very surprised to find myself in the middle of the economics job market when I graduated. That was really due to the help of my advisers—Larry Katz and Dick Murnane and Tom Kane. I also worked with Frank Levy and then was hired by MIT in 1999. And I’ve been here since that time.
Wolla: That’s really amazing. Were there particular economics teachers or professors along the way that really inspired you or helped draw you into economics?
Autor: Adam Jaffe was inspirational himself. He taught this class on advanced welfare economics. It put together the notions of consumer preference and efficiency and choice, and then general equilibrium and game theory. It was kind of a whistle-stop tour of economics as it relates to policy, because it was in a policy school. So, it wasn’t just the theory; it was always coming back to: How do we think about environmental regulation? How do we think about risk and safety regulation? And so on. So, Adam was great.
And then other teachers who I had along the way who were incredibly inspirational were Dick Murnane at the Harvard Ed School. He taught these classes on education and the economy and was very focused on technological change.
Larry Katz, from whom I took graduate labor. From Guido Imbens and Gary Chamberlain, I learned statistics and econometrics, as well as causal inference.
It was the first semester that Harvard economics had offered a class in causal inference. I misread the course catalog; I thought it said “casual” inference, and I thought, Oh, that sounds fun and useful. So, that’s why I signed up.
I was so inspired by teachers, and I was incredibly engaged by the literature. Actually, my hero was Alan Krueger, whose papers I read. I thought they were incredibly creative and asked the questions that I cared most about. If you’d woken me in the middle of the night during grad school and asked, “Hey, who do you want to be when you grow up?” I would have said, “Alan Krueger. Now, let me go back to sleep.”
I had the privilege of working with Alan and Larry Katz on a paper while I was a graduate student. Alan was a mentor to me as well, although Larry Katz invested so much time and was so generous in supporting my work.
Wolla: That’s great. You know, teachers are really influential, open a lot of doors for people. And I’m glad that you had some really great people in your life doing that for you.
Autor: I really think if I had taken economics as an undergraduate, I never would have studied it. The way I see it taught is so different from what economists actually do, so much less on-point. So, being thrown into it as a graduate student, especially within a public policy program—it was economics as we use it to understand, frame, and solve problems, to analyze them and gather data, and to put those things together. So, I think being thrown into the deep end that way was really the right way to go.
Certainly, in the way I teach, I try very hard to impart that approach to undergraduates. I imagine the thought exercise of taking undergraduate economics with [Gregory] Mankiw’s book and thinking, Well, this is unrealistic. It’s not very relevant. It sort of misses all the nuance of human behavior. So, why would I go further with it? I would have been turned off.
Wolla: That’s really interesting. In fact, this podcast oftentimes deals with the undergraduate level, especially at the principles level. So, picking up that thread a little bit: If you were going to teach an introductory- or principles-level micro or macro course next semester, what are some subtle changes you would make to set it apart from the norm?
Autor: I teach a class called Microeconomic Theory and Public Policy that is really an alternative intro class to our big classes. It’s at a slightly higher level, but basically any undergraduate who asks me if they can enroll in my class, I say yes. It starts with questions like: What are people willing to pay for health insurance? Or, why don’t they buy it? Or, how do minimum wages affect labor markets? Or, how do we understand trade or credentialing? Then it says, “Well, we’re going to need three things to do that. We’re going to need a conceptual apparatus, theory, to help us formalize these questions, so we can set expectations and find testable, falsifiable predictions. Then, we’re going to need data, to look at outcomes. But we’re also going to need a framework for causal inference, so we can actually connect evidence to cause and effect.”
I basically teach those simultaneously. Everything I do is theory, research design, and evidence, and every part of the class is an instantiation of that, using different tools. I do teach consumer theory, I do teach general equilibrium, I do teach trade, I teach a little bit of game theory, but it’s always motivated through that lens.
I also teach instrumental variables. I teach regression discontinuity, I teach difference-in-differences, just to help people understand how the methods connect to the framework and connect to the results. And I don’t think this is a radical thing to do. This is the way we do research. I find the textbooks totally unnatural because they’re so disconnected from the work that we do.
I’m an applied economist. I start with questions, but I realize that to answer a question—a research question—it’s not sufficient to say, “I’m interested in this.” You have to set it up: How do I want to formally think about this? And what should I be looking for? How would I know if this was true or false? And what is the evidentiary frame that’s going to allow me to distinguish cause and effect, as opposed to just correlation? Then, what do I do with that once I know it? What does that imply?
So, I feel like teaching is just teaching students about the way we work and the way we use economics.
Wolla: That’s really great. And I’m really excited to hear that you teach undergraduate level and introductory courses. I think that’s super.
Autor: Well, I have to say, it’s been incredibly beneficial for me. It’s hard for me to emphasize how uninformed I was when I was hired and how little I knew. I had really done only remedial, bootstrapped economics education. And I didn’t have the math, when I was starting, to understand a lot of it. I understood it much more intuitively than I did formally.
When I was asked to teach my first semester class at MIT—an undergraduate applied-theory class—I was really, really intimidated, really intimidated. And I found it extremely difficult. I would—I’ve said this before; I’m not ashamed of this—I would sit in my office late at night and cry, as I was trying to prepare my lecture notes, and ask myself: Why did they hire me? Don’t they realize I don’t know this stuff? Why? I do not belong here. So, the experience of having to teach that material was incredibly clarifying for me.
Now it’s so second nature to me to think about income effects and substitution effects, and the Slutsky equation and the expenditure function—these are tools. You don’t hear those terms used very much; they’re not something you would hear at a seminar. But consumer theory and producer theory are foundational to the way we set up almost all economic models. Having to teach those things, I learned a ton. It may have been much better that I had to work with sophisticated materials and learn to build my own skill set. I think part of the reason that I’ve had some success in teaching is because none of it is first nature to me.
So, I have to work really hard to make it as clear to myself as I can. And if I can do that, I feel like I can make it clear to others as well.
Wolla: Yeah, I was going to ask about that. Coming into economics a little bit later in life, you had some work experience in there as well. And, in my own experience, I think good teaching is about teaching yourself first and then going into the classroom and teaching others.
Autor: Absolutely.
Wolla: So your path may have actually made you a better educator, because you came at it from this other direction. Do you think that’s fair?
Autor: Yeah. The things I care about I developed in my professional life. I didn’t become interested in labor economics because I saw it in a class; I was interested in the labor market, and then I wanted to study it. So, I’ve always been led by what I want to do.
First of all, the topics I teach, I’m very committed to; second, I never found any of it easy. So, I don’t assume anyone else will find it easy. (I’m sure some find it easier than I do.) I try to really lay things out as clearly as possible and explain them in a way that I understand and would have understood.
Then, of course—I think everybody does this—when you have to teach something, you’re incredibly motivated to understand it from every conceivable angle, because you know you could get a question that could really throw you off your game. And it’s super unfun to be standing in front of a class of undergraduates and saying, “Yeah, yeah, I’m not sure,” or “Oh, yeah, I can’t solve that,” or “No, that doesn’t seem right. I’m not sure. I’ll get back to you.” So, you’re highly, highly motivated. And because I find it super exciting, I think that also helps a lot in teaching. Again, if I had to teach undergraduate micro principles out of a textbook, I don’t think I’d find it exciting, and I wouldn’t be very successful at it. It’s because I designed the class in a way that I find really illuminating that I enjoy it and can convey that to students.
Wolla: One of the areas where you’ve done a lot of work, and I’ve benefited from this work, is automation, and more recently, AI. I know you think about this a lot. You’ve researched it; you’ve written about it. One of the things I’ve learned from reading your work is that automation comes with the fear that machines are going to take all of our jobs, right? And that fear is historical; you can go back and find different points in history, probably throughout history, when people have had it. But in the past, the number of jobs has increased along with automation. Now, with AI, people are wondering: Is this time different? Is AI similar to the past, or is this really something different when it comes to labor markets?
Autor: This time is different, but every other time has also been different from the previous one. So, I would say one commonality is this: Although I don’t think there’s a lot of evidence of automation wholesale reducing the amount of employment over long periods of time, it can be highly disruptive, and not everyone benefits—it’s not Pareto-improving, right?
People are always deriding the Luddites, smashing the machines, and how foolish they were to not recognize that this was progress. But their livelihoods were wiped out by the water frames and the power loom, and so on, and the artisanal way of life was destroyed by the first Industrial Revolution.
And there was this period in British history, called Engels’ Pause, when working-class living standards basically did not rise for essentially the first six decades of the first Industrial Revolution. They didn’t start to improve until the late 1800s.
This has been true of the computer era as well, right? The computer has made our lives more convenient. It’s given us great toys. It’s increased productivity. But it’s been highly, highly inequitable in the sense that it’s been extremely complementary to people who have judgment, analytical skills, interpersonal skills. It’s a force multiplier for that type of expertise.
But for people whose primary work in offices and factories was following formal rules and procedures, whether that was typing, copying, filing, whether that was assembling, doing repetitive motions—skilled work often requiring literacy and numeracy, and so on—those procedures were very formal and therefore were much easier to codify in software.
So, I do think people should take very seriously the implications of automation, but I don’t think running out of jobs is one of them. That should be the least of our concerns.
We are, in the United States as in much of the industrialized world, in a demographic crunch where we’re actually running out of workers, not running out of work. And, of course, the U.S. is exacerbating this as aggressively as it can by cutting off immigration, which has been our major source of labor force growth. The growth of the U.S. labor force is now projected to be the slowest in all of U.S. history.
Just to summarize: Economists are often too sanguine about automation. They say, “Well, you know, it raises productivity. What could possibly be wrong with that?” And, of course, the answer is: Nothing’s wrong with raising productivity, but it has distributional implications, just like international trade, right? Trade also raises GDP and effectively raises productivity. But we have understood for a long time that trade is not Pareto-improving, that it changes prices, and that means it changes the value of factors, which means it changes the relative and real wages of low- and high-skilled workers, for example.
Automation is the same way. But AI is different again from prior waves, so it’s going to have different implications. It’s a wide-open question. A lot of people are working on this—I’m one of them. And I will say, first, I don’t think it’s going to be the same as the last wave. Because why would it be? It has really different properties, right? Artificial intelligence is not a better, cheaper or faster version of something that we’ve had. It has fundamentally different strengths and limitations. As a starting point, AI is bad with facts and numbers, which sounds ironic when we’re describing something done by computers.
Traditional computing is really good, incredibly good, incredibly efficient, 100% reliable at basic calculation, information storage and retrieval, following rules and procedures, right? It’s completely deterministic, in the sense that what got coded in is what comes out.
But AI is not that. Fundamentally, it is deterministic; it’s running a program. But it’s totally unpredictable to any human operator in what it’s going to do at any given time, including to the people who create it. So, it’s suited to a different set of problems. And we’re just figuring out what those will be, and then thinking about what that means for workers. That’s a problem I’ve been trying to frame.
The earlier work that I did, starting with Frank Levy and Dick Murnane, later with Larry Katz and Ernest Moghalu, essentially proposed saying, “Let’s think about computers or other machines as doing different tasks, and they can replace these routine tasks that are codifiable.” That means they implicitly don’t do the rest. In some cases, that’s very complementary because it makes those other things more useful. In some cases, it doesn’t have much of an effect because it’s sort of a sideshow.
In the most recent work I’ve done with my colleague Neil Thompson at the MIT Computer Science and Artificial Intelligence Laboratory, we’ve taken that work an additional step and said, “You could think of machines as doing different tasks, but what does that imply for workers, given that workers do many tasks, not just one?” Actually, most people are doing a bundle of tasks, and if some of those tasks are automated, what does that imply for the value of the labor you’re supplying in the remaining tasks?
The answer we suggest is: It depends a lot on where, in that bundle of things you do, the automation falls.
Let’s take a very concrete example: taxi drivers. They do two things: They drive vehicles, and they navigate, which are really different skills. Everybody can drive, or at least everybody thinks they can drive. But very few people (and I’m not one of them) know roads well and are good at routing in real time. So, you might ask a driver: “If you could have one of those two things automated, which would you prefer?” A natural response might be: “The navigation; that’s hard.”
But, in fact, you should answer the opposite. Because, if the navigation is automated, anyone can do your job. Whereas, if the driving is automated, since most people don’t know how to navigate, you still have specialized expertise. In fact, you could say, “Look, I’m more effective. I can now just navigate; I don’t even have to sit at the wheel. And I can get places faster.”
The broader point we’re making is that it matters where the automation falls. If you think of your tasks arrayed from least to most expert, some are supporting tasks we all have to do. We all have to file receipts, fill out timesheets, report on different activities. We do them, they take our time, but they’re really not what we’re expert in. And if we could skip them, we would be more productive, not less productive, because they’re kind of a time sink. On the other hand, there are things that are our central skills, for which we’re really paid—for teaching, or research, or writing.
If those things found cheap automation substitutes, our expertise would be much less valuable. So, when you’re thinking about how automation relates to the value of the labor people supply, it’s not sufficient to say, “Oh, their job is very exposed to computerization or artificial intelligence”—which a lot of papers do—because, well, what does that exposure mean? Is it exposed because automation will take over their supporting tasks and basically allow them to focus more effectively on what they’re really good at? Or is it going to automate their expert tasks, the thing they do for which they are really paid, that makes them valuable, and make that a commodity? In that case, yes, they are still doing the supporting tasks, but the main thing they do is no longer scarce and therefore no longer well paid.
So, that’s the idea that we frame up in that paper, called “Expertise.” Then we present evidence on this over the last four decades and say, “We can look at occupations’ exposures, not just to what fraction of tasks are removed or automated, but also to where those tasks fall in the expertise distribution.”
And we show that for occupations that become less expert because of automation—because their expert tasks are automated away—wages don’t rise; they tend to fall, at least in relative terms, although employment often grows. It’s like Uber and taxis: allowing entry of less-expert people to do the work. They cost less, and there are more people who can do it. It’s effectively a supply shift.
Alternatively, for occupations that become more expert because their supporting tasks are automated, relative wages tend to rise, but employment doesn’t tend to grow as much. Because you’re now making the work more specialized, you’re making it more expensive, and you’re making it harder, with a smaller set of people who can do it.
So, we find that this very simple approach—distinguishing between the quantity of tasks and the expertise of tasks—has a lot of empirical power for explaining not only which wages rose and fell, but also which occupations grew and contracted. And they move in opposite directions. So, we’re now taking that set of insights, which is backward-looking, and trying to apply it in a forward-looking way, asking: Where do we expect artificial intelligence to not just remove tasks but also change expertise? For whom? And what does that imply?
Now, let me just say that new technologies are not just about automation, right? They’re often giving us new capabilities and creating new tasks, or we create new tasks with them. Much of the work that we do essentially didn’t exist 100 years ago. People weren’t leaving the farms at the turn of the 20th century saying, “I guess we’ll eventually do search engine optimization and online banking support.” Right? Those things didn’t exist. So, it’s dangerous to focus exclusively on automation. But we’re pretty bad at predicting what the new work will be and where it will appear. So, that makes that part of the problem harder.
So, our work tries to grapple with that, but I would say we’re much better at predicting automation, and even at measuring automation, than we are at predicting and measuring new work.
[cut to VO]
Wolla (VO): We’re going to take a quick break.
When we come back, David Autor will explain why developing good judgment is key to using AI as a helper rather than a replacement. He’ll also discuss his teaching style that makes economics relevant to real-world problems students actually care about.
[FRE Promo]
Wolla (VO): Welcome back to Teach Economics.
Before the break, David explained that while technological change has always disrupted labor markets, it doesn’t reduce overall employment; rather it redistributes work across the labor force.
We’ll continue the conversation by discussing the impact of AI in the classroom.
[cut to interview]
Wolla: I talked to a lot of teachers and professors, and everyone is grappling with how to deal with AI. On one level, it changes the way teachers assign, like, an essay test, because LLMs are such expert writers, whereas students are not. How should educators think about preparing their students for the future labor market in terms of all these changes?
Autor: Yeah, super good and super hard question. Fundamentally, the human skill that will continue to be incredibly, fundamentally valuable is the ability to translate from a formal understanding of the world into a complex, high-dimensional setting. Right?
Think about: What do doctors do? What do lawyers do? What do contractors do? What do repair people do? They have a formal body of knowledge of medicine, of law, of construction, of how plumbing systems work, for example. But then they face problems that are not cut-and-paste. Every construction project is different. Every time you go to replace someone’s old water heater, you find a different constellation of problems in their basement. Most legal cases, most medical cases have lots of differentiation.
So, where human expertise comes in is in translating from that formal body of knowledge to that specific instantiation. It’s high stakes. It’s one-off. It matters if you do this well. There is not a known right answer. How do I care for this patient? How do I land this plane? How do I architect this piece of software? What’s the best way to remodel this house? What will happen if I pull out this water heater; will valves start leaking? Have I prepared for that? So, what our capabilities allow us to do, especially with practice, is develop the judgment to work in those complex domains. And I think that will continue to be true.
People now talk a lot about “agentic AI.” What does that term mean (other than just hype)? It means the machine understands the overarching goal. It’s not just carrying out a set of rote tasks; it understands how they all fit together. It says, “Oh, that didn’t work.” My experience is that AI is not good at that. Despite the claims, I have not experienced AI systems that are—I would not want to give them agency over anything with important stakes.
But in fact, we are exercising agency all the time, because we understand the big-picture goal as well as all the specific steps along the way. And then we use lots and lots and lots of tools to make us effective in doing that. So, when we think about using AI for education, we want people to be really good at picking tools and using them well, but we need them to develop the judgment to use those tools well. And that’s the real danger, right?
Actually, James Manyika, who’s the head of tech at Google, and I are writing an article for The Atlantic about this. We make a distinction between what we call “automation” and “collaboration.”
You can think of the tools you encounter as falling roughly into two buckets (they’re not of equal size). One is automation tools. These are tools that basically say, “This used to be an expert task. Someone needed to know something. Now the machine’s totally got this for you”—your dishwasher, the automatic transmission in your car, the thing that takes your toll as you drive through on the highway, and so on. Or even elevators: it used to take an elevator operator for you to use one; now there are just buttons.
But that was a transition. That was an automation transition. And those things basically make a promise to the user: “Hey, you don’t need to understand this because I’ve got this for you.” That’s fine. That’s great. If you can do it, do it. Wonderful.
But many, many tools don’t have that property. They are collaboration tools. They require you to bring some relevant expertise to the table. They’re kind of a force multiplier for that. Like, a stethoscope is useful to a doctor; not useful to me. A chainsaw—valuable to some; hazardous to many. So, you need to know what you’re doing; it’s conditional on that. Then you can go further with it.
So, when you’re choosing to use AI, you can try to use it to automate. Just say, “I don’t need to understand this; AI will do it for me.” Or, you can say, “I do understand this, but I can interact with AI to speed things up or to give me ideas or to quickly rewrite. But I’m the supervisor; I have agency. I’m supervising this.”
I think the danger in education is that we’re asking people to master that agency—it’s the hardest thing. AI seems to offer the shortcut of saying, “You don’t need to get this. I’ve got this for you.” But, if people use AI primarily for automation in their education, they’re not going to learn that agency.
They need effectively to learn how to use it collaboratively rather than using it as a replacement. The temptation is greatest in education because learning is hard. People say, “We want learning to be fun,” and so on. Yeah, it’s fun, but it’s always effortful. It’s just like playing a sport is fun, right? But it’s always effortful. The fact that you like playing a sport doesn’t mean it’s easy; it means it’s rewarding work. And learning is rewarding work, or it’s not rewarding work, but it’s always work. And that’s why successful education is challenging, because you’re trying to get people to do that work.
That’s why I think classrooms can work, when they work well, because they’re kind of a high-stakes social environment. It’s not, like, here’s a person spouting information in front of the room. In the ideal case, they’re engaging you. They’re asking you questions. There are stakes, right? They’re going to ask you a question, and you’re going to get it right or wrong. They’re going to make a joke. People enjoy that. The tension in the room affects how that joke lands. Everyone is engaged, just like you’re engaged when you’re watching a movie. You’re engaged when you’re in a conversation. That’s what’s difficult in education—making that happen.
If it was just a matter of providing information, libraries would have solved our ignorance problems centuries ago, and online education would be a great success; everyone can just watch videos. But that doesn’t work. Learning requires engagement, requires effort. How do you get people to engage, to commit to that effort? It has to be through some setting that overcomes that friction, that makes them want to pay that cost. Classrooms can do that, if successful.
It’s something that we’re not good at finding close substitutes for. Obviously, there are people who are autodidacts. There are smart people. They’re self-taught. Give them all the books in the world, and they just absorb information. But that’s a very small percentage of people. For everybody else, education needs to be designed in a way that draws them in, gives them incentives and support in a way that makes them able to do that hard work of learning.
Wolla: You have a lot of expertise in your field. So, when you use, say, an LLM, and you get the output, you read it with your expertise. You can see where the problems lie and make those corrections?
Autor: Yes. I was writing a report last night, and I used, for the first time, an AI tool into which I could feed the report, and it would attempt to help me write as I wrote. For the first few sentences, it was helpful, because it helped me write a summary of the paper pretty quickly. But then it was actually totally worthless, because it was just sort of pattern-matching on words. Even though it had “read” the paper and could paraphrase things out of it, it didn’t clearly understand the substance of the arguments I was making. So, it would just throw in the most likely completion of a sentence, which half the time was relevant and half the time was irrelevant. I could see that it wasn’t thinking about the problem.
That’s not its fault, right? But in many cases, these things fall so far short of the hype that surrounds them. That doesn’t mean they’re not great. I want to be clear. One can think that AI has limitations and still think it’s incredibly useful. I think it’s incredibly useful and will only become more so. But we need to recognize what it is and what it isn’t.
Wolla: Exactly. And in that scenario, you were able to identify those things because you’ve already developed the expertise. It becomes this tool that you can—You have good judgment ...
Autor: If I just let those sentences go, they sounded good, but some were vacuous, and some were just absolutely wrong. They were the opposite of what I was trying to say. They sounded good. They used the right words, and they were well-constructed, but they might make the opposite of the point I was trying to make. If I didn’t know what I wanted to say, I could be very misled by that.
Wolla: For students who are in the process of acquiring that human capital but become dependent on AI tools, I think the fear is that they never develop the level of expertise where they can turn around and judge whether the output is valuable.
Autor: Absolutely. We see this in all kinds of settings, right? Pilots who use autopilot a lot become rusty at what’s called “hand flying” the aircraft. And there have been bad accidents that occurred partly because of unsuccessful handoffs.
Expertise has a short shelf life. You actually need to keep using it for it to stay relevant, for it to stay fresh, and for it to keep developing. Even as we see the virtues of human expertise, we should recognize: It’s hard to acquire. It has a short shelf life. We’re fallible. Even the best experts aren’t their best selves at all times. Sometimes people are sleepy. Sometimes people are inattentive.
There’s tremendous opportunity for complementarity. There’s absolutely every reason to think that we can use AI to supplement that, to allow people, experts, to be better at being themselves, being better experts, and people who are not as expert to have guidance and guardrails to do high-stakes work with fewer errors and with better judgment.
I rely on my car to warn me of things I might back up over, to pay attention on the highway to try to keep me in my lane, to warn me if I’m changing lanes. Those are all collaborating with me, right? I’m a pretty good driver—at least I believe that I am. But I’m definitely better with this set of tools that’s assisting me. And that is actually AI in the car. I’m sure a day will come when I won’t have to pay attention at all. But that day isn’t here. And, in many cases, it’s dangerous to give the illusion “the car has autopilot; don’t pay attention.” It’s true that you don’t need to pay attention 99% of the time, but there’s that other 1%—you don’t know when it’s coming—when you really need to pay attention. It’s actually a danger of automation that overpromises; it causes people to disconnect and not engage with, for example, driving, and then they put themselves in very dangerous situations.
There’s tremendous opportunity here. I do not think that AI means the death of all human expertise. I think it means amplifying it in many realms and supplementing it for people who have judgment but who are not at the frontier.
In specific domains, it will replace expertise. I expect we’ll have fewer computer coders over the long run. I think we’ll have fewer language translators. We already have fewer medical transcriptionists. We may have fewer people who do illustration for advertising, and so on. And that has pluses and minuses, right? It lowers prices, and so on. It makes things more convenient. It obviously devalues existing human expertise and puts some people out of work.
So, even if one says, “There’s lots of opportunity; this can complement expertise,” one should not think that that means it’s all good. It might be good on average, but very few people are average. Some people are benefiting a lot, and some people are really being disadvantaged.
And no one is actually experiencing the mean.
Wolla: So, what advice would you give to students? My daughter is a senior in high school, headed off to university next year. Given that scenario, what kind of advice would you give to students today when they’re thinking about what to study after high school? Or, what kind of training should they get to prepare for the labor market?
Autor: David Deming has done a lot of work on this—David Deming at the Harvard Kennedy School. And I’ve been very influenced by his work. There is a lot of valuable work that essentially uses a combination of formal knowledge, judgment, management, and interpersonal skills.
Think about what a lot of professionals do. They are not just sitting at a desk working on their own. They’re communicating with others, right? They’re communicating with patients, with clients, whether those clients are law clients or contracting clients. I think people should say, “Look, I need to have a formal body of knowledge. I need to have an area of specialization. But my goal should not be primarily to be a technician, but to be a person at the intersection where that knowledge is applied in interaction with other people in some setting.” That could be in the professions, but not only in the professions.
There’s tons and tons of skilled blue-collar work, of work in the trades, and even in personal care work as well. Where this shows up the most is in health care, at all levels. Not just medical doctors, but people who are X-ray technicians, people who do physical therapy, and people who are nurse practitioners. They are all highly skilled, trained people. But they are at the interface between the technology and the patient. And there will be demand there, in my view, for a very, very long time to come. (Others will disagree.)
So, one piece of advice is choosing that domain, and the other is the discipline of learning—and learning really is a discipline. You have to figure out: I don’t want to cheat myself out of skills by relying on machines to do things I do not understand. I want machines to do things that I understand, so they can save me a ton of time. But I need to understand enough that I can build on it, use my judgment, filter it, and say, “Wait, that number doesn’t make any sense.”
But it’s not all downside here, either. My son, who’s in college, is majoring in economics, with a concentration in computer science as well. I know that large language models have helped him a lot in programming classes. He was sort of throwing himself into the deep end, and I think he was able to solve problems that he couldn’t have solved on his own. But he still learns a lot along the way. You can say, “I can take on harder challenges if I have better tools to support my learning and help me fill things in.”
So, I think there’s opportunity; it’s not all just risk.
Wolla: So, what are some common misconceptions students bring into the economics class?
Autor: I would say the most common misconception—not misconception, but the most common preconception—is “The assumptions are unrealistic. So, this is not useful.” And I think the first part of that is true: The assumptions are not realistic. But whether it’s useful or not really depends on what you’re trying to understand.
I think economics credits people with essentially—as many will say—trying to do the best they can with what they have. It treats people as rational. Our stylized models make people perfectly rational—and sometimes that just gets absurd—but rational in the sense of trying to make decent choices under constrained circumstances.
I think that’s a very good description of human behavior. And I prefer it to descriptions that say, “We’re irrational,” or “We’re led around by social forces that we don’t understand.” So, I think it’s a very good starting point.
When you’re trying to explain things that happen in markets, that’s even a better description, because even if individual actors are not optimizing, the market actually will tend to move toward an efficient equilibrium, which kind of arbitrages mistakes away. So, I think that the assumptions are crude and can be quite unattractive.
Milton Friedman made this point a long time ago: You should evaluate economics not just in terms of whether you like the ingredients, but by whether it explains much with little. Does it provide you with a framework for thinking about hard problems, one that allows you to get insight, make testable predictions, and learn from them?
So, I think that’s the one misconception.
I think the other is that most people, undergraduates, think economics is about that cross that you write on the blackboard with supply and demand, and that’s the end of it. It’s very sterile.
I want people to see that economics is so central to so many decisions people are making about their health care; about their employment; about what food they consume in developing countries, when resources are scarce, and the thing that you would actually ideally like to buy doesn’t have as much nutrition as the thing that’s much cheaper but less palatable; and, how we understand the integration of markets through international trade—who wins and who loses, and why.
I want people to understand that economics is such a useful tool for the things they actually really care about. I wouldn’t do it if it weren’t that way. If I had learned economics in the 1950s, in the Samuelson era, when it was much more about macroeconomics, when it was much more about formally proving things, I wouldn’t have found that interesting either.
I’m repeating myself, but I think it’s fundamentally incredibly applicable, and the insights are first-order; they’re approximations. But in fact, they’re approximating the part of it that’s often the most relevant. So, if you have to simplify, then what you want to bring out is the thing that explains the most of the phenomenon.
I think economics often does that pretty well.
Wolla: As an educator, what experiences have been the most rewarding for you?
Autor: Having done this for a while now, the thing that feels most rewarding for me is that when I teach, I don’t feel like I’m just teaching this or that paper, or this or that concept. I feel like I’m having a conversation in which this broad subject is economics and how we approach that. Like for undergraduates, how do we think about so many policy problems? Then at the graduate level, often I’m teaching a series of a dozen lectures on labor markets and wage setting, and skills and technology. I kind of think of all of the lectures as being one big story.
It’s more than just “I’m teaching you ‘Here’s how they did it here; here’s how they did it there.’” I want them to view all of these as, if not a world view, at least a kind of tapestry of different pieces that go together.
So that’s super gratifying for me. And the other thing that’s gratifying for me is when I make a funny joke. Fundamentally, if I were funnier, I would have done comedy, but I’m not funny enough. On the other hand, with teaching, I have a captive audience. So, if I can land a good joke every once in a while—I mean, obviously, I’m not there for that, but—it’s really gratifying.
Wolla: That’s great. Do you have any final thoughts or advice for fellow economic educators?
Autor: I would say, especially if you’re teaching undergraduates, I would teach in the way that you do research or understand research, rather than in the units in a textbook. Start from a problem that you want to understand, and then think: What is the theory I need to build up? What is the methodology? And what is the best evidence? Let’s talk about that.
You can still cover a lot of ground. I cover all of consumer theory in my undergraduate class. And in the second lecture, I say, “Look, I’m going to teach you these five axioms, and I’m not going to teach you anything more for the next three or four weeks. Everything that follows comes just from those axioms, and all you will do is derive implications. But along the way, I’m going to show you things about preferences, about minimum wages, about health care, about people substituting among foods in extreme poverty, and so on. And it’s all going to follow from exactly that.”
So, I’m not skimping on the theory. I’m just trying to explain. I wouldn’t teach it if I didn’t think a lot could be learned from it. I’m not teaching it just ritualistically because it’s always been there. But that’s how all of us learn, right? As academics, learning is a big part of our job. We have to be learning. Our knowledge stock would otherwise go quickly out of date. But we learn it because we are trying to understand something. You know, you’re not motivated to read a paper and to understand it unless you think it’s somehow relevant to you.
Teaching should always be making a case for the relevance of what you’re teaching.
Wolla: That’s great. Your students are very lucky to have you. I wish I would have had a few more professors like you along the way.
Autor: I’m super lucky in an infinite number of ways to even be in the field I’m in. But it’s also quite a privilege to be at MIT, where the students are obviously highly skilled. They’re also highly motivated. And they’re actually pretty humble.
One thing I would say is, I think at a lot of top institutions, people feel like, “Hey, I made it. Here I am”—at Harvard, Yale or Princeton, whatever. And I think at MIT, they feel like, “Here I am, first day of boot camp. I’m going to get the stuffing beat out of me by my instructors.” I don’t beat the stuffing out of anyone, but they expect it to be challenging. They don’t view themselves as knowing it all and having accomplished it all. They expect to work hard, and it’s easy to open their eyes with things because they’re not so impressed with the fact that they already know everything about the world. So, it’s really a great place to teach both undergraduates and graduates.
I also want to say, finally, that I have had the best colleagues. I’ve sung the praises of my advisers, but I got a lot of my education here as a faculty member from Josh Angrist, Daron Acemoglu, Amy Finkelstein and Esther Duflo, and many others who spent a lot of time with me. That’s a privilege, a real privilege of having gotten such a lucky break.
Wolla: That’s great. Yeah. Good mentors and colleagues are a huge help.
David, thank you so much for spending time with me today. Thank you for your contributions to economic education. I really appreciate it.
Autor: Thank you so much, Scott. It’s really been a lot of fun. I’m glad to have the opportunity to do it. Thanks for inviting me on the program. And thanks for the work you’re doing with this podcast. I hope it inspires students to want to see what’s valuable in economics, and professors to actually use the classroom the way they view their research—not as a chore, but as part of the exciting way of approaching the world that we do as scholars.
Wolla: Well, that’s great.
[cut to VO]
Wolla (VO): Thank you for listening to my conversation with economist David Autor.
If you like this show, please subscribe anywhere you get podcasts.
And be sure to leave us a review; each one really helps.
I’m Scott Wolla, and from the St. Louis Fed, you’ve been listening to Teach Economics.
---
If you have difficulty accessing this content due to a disability, please contact us at economiceducation@stls.frb.org or call the St. Louis Fed at 314-444-8444 and ask for Economic Education.