Conversations about AI and labor often focus on the dichotomy between automation, or replicating tasks that people perform, and augmentation, or enabling people to do things they previously couldn’t. (Charter has also written about this distinction, arguing in favor of the latter.) “But task automation and labor augmentation are not polar opposites,” write economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb in a recent piece in Science magazine. They argue that task automation isn’t all bad, and that in some cases, it could decrease income inequality. If AI automates a task that requires a specialized skill, for example, it could boost the productivity of less-skilled workers and create new employment opportunities for them.

We spoke with Gans, a professor of strategic management at the University of Toronto’s Rotman School of Management and co-author of Power and Prediction: The Disruptive Economics of Artificial Intelligence, about the automation versus augmentation debate and how AI could decrease income inequality. Here are excerpts from our conversation, lightly edited for length and clarity: 

Can you tell me about your background in AI?

We've been studying AI for over a decade now, since we started seeing startups trying to use AI through our Creative Destruction Lab here at the University of Toronto. We got curious about it, and we ended up writing a book called Prediction Machines back in 2018, which basically pointed out that all the recent advances in AI were really just an advance in statistics. So, in effect, they were less exciting than what you'd see in a popular culture movie, but still quite significant. It's from that perspective that we've approached everything, including what the impacts on jobs might be.

Have the recent advancements changed how you view AI? 

The advances have been quite significant quite quickly, more so than a lot of people expected. Precisely where all that's going to lead is still a bit of an open question, but there's certainly been an observable increase in productivity for all manner of people.

You've co-authored a few articles that challenge the dichotomy between automation and augmentation, arguing that “one person’s automation can be another’s augmentation, and the two are not mutually exclusive.” Can you unpack that argument? 

When you think about how something like ChatGPT works, for instance, it makes it very easy to write a letter. What is it doing when it does that? It is automating the writing task for that letter. Well, that means that if you are a busy person, and writing a letter is part of your job, that's saving you time. And to the extent that it's allowing you to write an even better letter than you would have otherwise, that's increasing your productivity. So there we have something that is technically automation but is actually augmenting people, in the plain meaning of the word augment. To say, ‘Oh, we should stop doing automation and concentrate on augmentation’ is not really a useful statement, since pretty much everything we are doing is some form of automation. From that perspective, I can't go around to scientists saying, ‘Oh, you're trying to automate something—bad.’ That amounts to saying, ‘Oh, therefore don't do anything,’ which doesn't seem like the right answer.

You also argue that some tasks, like writing, are barriers to entry to certain professions for people who don't have those skills. So if you automate that task, you open up those jobs to more people.

That's right. AI is very good at imitating what a skillful person could do in a task. Your job may be a multitude of tasks, some of which require skills that you do not have. If AI is able to provide those skills, it enhances your overall productivity. If you've already got them, of course, it's not going to do very much.

One historical example you’ve written about when it comes to technology and its impact on occupations is London taxi drivers, who have had to pass an exam demonstrating their knowledge of the city's thousands of streets and landmarks since the nineteenth century. Can you talk about that example?

You have to learn the whole street map of London. And not only that, you have to learn the shortest distance between any two points. You have to memorize it. It's something that takes you two to four years, depending on how good you are. But now basically every single person who has a smartphone has that skill, and has it for free. So this thing that was a unique capability is now very widespread. Effectively, that means anyone can be as good as a London taxi driver, even if you've never been to London before, or very close to it. And so you have to ask, ‘Why are we forcing these people to still learn all this stuff?’ That doesn't seem like a good idea, creating a four-year entry barrier to be a taxi driver. But more importantly, it's a dramatic democratization of that skillset.

Are there other examples that come to mind of technologies that have automated certain tasks, lowering the barrier to entry for certain jobs?

I mean, people used to have a skill in doing arithmetic and computations, and they used to be called computers. Then electronic computers came along and they just did that skill. So it's not unprecedented that we've had machines come and replicate cognitive skills, just as we've had machines replicate physical ones. And in the case of computing, that allowed a lot more people to do those jobs.

One of your arguments is that AI could decrease inequality by opening up more professions to more people. A concern someone could have with that argument is that the technology wouldn't be decreasing inequality by bringing everyone up, but by bringing everyone down to low-wage work and devaluing expertise. How would you respond to that?

Well, it's very hard to work out all of these things, but AI is not doing this to every single skill that people have. Moreover, there's actually a limited number of workers out there. What tends to happen in this situation is, yes, if you are earning a premium for a particular skill, it wipes that out. The taxi driver is a great case of that, because you can identify this as almost the only skill they're better at (although I suppose they might be better drivers as well). But in most of these other situations, the skill is a barrier to entry to the job, and automating it unlocks other skills.

For instance, say you have a landscape gardening business, but you are an immigrant and you don't speak English particularly well. When you're communicating over email with clients, AI now allows you to do so in a more fluent manner. And that allows you to earn money from your actual skill, which is gardening. Yes, that's going to mean the native English-speaking gardeners aren't going to earn quite as much as they did previously, but there's a whole lot of others who earn a lot more. That's how a lot of these barriers tend to work. They've worked that way in the past, and there's nothing about AI that suggests that won't happen again.

I'm sure we'll be able to point to jobs like the taxi drivers, where, yes, AI has completely transformed things. But for the vast majority of jobs, I don't think that'll be the case. Even the very high-skilled people most likely have other skills going on as well, ones that aren't replaced by AI, so it's not like they're going to lose those advantages altogether. That's why it's kind of hard to predict what's going to happen with inequality. But that's our point: it's not obvious that AI is going to be creating massive amounts of new inequality. It doesn't seem like that sort of thing.

One argument you sometimes hear when talking about AI and jobs is this phrase, “AI won't replace people, but people with AI will replace people without AI.” I’m curious to hear what you think about that idea. 

It's a bit cute. I mean, yeah, everybody should learn how to use AI. But there's part of me that doesn't know what that really means, because it makes what I think is a huge mistake, which is lumping everything AI does into one thing called AI. What's actually going on is so varied that if we hadn't bothered to call all of this AI, because some academics wanted to market it that way or because it was inspired by the human brain, none of this would be an issue. We could have said the same thing in the 1980s: ‘Well, computers aren't going to replace jobs, but someone who can use a computer is going to replace someone who can't use a computer.’

So the criticism is that it's not unique to AI?

No, it's not unique. It's just saying, ‘Give someone a tool, they're better off.’ If they learn how to use a useful tool, that’s fine.

Do you think that further out, AI will be so baked into the ways companies operate that there won’t be ‘people who know how to use AI’ and ‘people who don’t know how to use AI’?

They're going to look like apps to us, or they're going to be something that we don't even see. So learning to use AI as if it was some sort of screwdriver seems kind of stupid. It's not like that. That doesn't mean companies don't have to invest in AI and find new applications and all those sorts of things. But the idea that we should be teaching it in school? Teaching what in school? I don't know what it would be.

I think if you are using one of these large language models, it really helps to know what it can and can't do. And we know a bit about that, and some of it we can educate people on. Sadly, even the people who invented it don't know everything about what it can and can't do.

Here's what I don't like about the ‘you're going to be replaced by someone who uses AI’ idea: it's stoking this fear. It's very odd. This technology has appeared that is so wondrously useful in so many things, and accompanying it is such an outcry of fear. I don't know if the writers' strike or the actors' strike is about AI, but boy are they talking it up. When you think about it for a few seconds, it's like: do you really believe that what you do is so at risk from AI?

Well, for TV writers, their main task is writing, which large language models are good at. I wouldn’t say that ChatGPT's writing is at the level of TV writing, but—

Let's fast-forward to a future where it can do that. Is it really replacing what they're good at? None of these things just write on their own; they write better when you tell them what to do. I tried this. I started prompting ChatGPT, and then I'd look at the ending and say, ‘I don't like this ending. Change the ending, making it blah, blah, blah.’ I did this about five times, and in about 15 minutes, I had a story. If I wanted to, I could go through it, edit it, beef it up, and work on it for only a few more hours. So maybe this is a task that would've taken me days that can now be done in less time. But think about what that all means. ChatGPT did none of it, in fact. It was all my story idea. Occasionally it would come up with a way of handling what would've been a minor problem I had to solve, and it put down a solution: a single solution, not all the options for me.

I sent the story out, and people said they really liked it. I was kind of surprised, because I thought it was fairly badly written. But they liked the story, which means that basically I had been able to use prompts to get a core idea out. What they were saying is that they liked the idea of it; ChatGPT just allowed me to communicate that a little more easily.

So if you are writing anything for entertainment consumption, my guess is it could really save some time on some of these things, get you a first draft or whatever, but the ideas are all coming from you. No studio executive can say, ‘Well, I can fire all my writers now. Just go ask the machine to do it.’ There is still that element of having to tell it what to do, and then having to look at the result at the end and ask, ‘Is this good?’

I just don't believe that for significant writing tasks, meaning writing that people are actually going to consume, as opposed to the email that no one really reads anyway, writers are going to lose the thing that they were actually good at. In fact, a lot of the annoying tasks are going to go away. Now, I can imagine that there were less-skilled writers who would be handed something to do that can now be done by ChatGPT. It's not as good with ChatGPT, but it gets done in two seconds so that the other person can work on it. So you might imagine that the writing teams get reduced in size. But I don't know that for sure; I could just see that happening. That means the writers aren't all in the same boat here.

In your article, you compare AI to the computer, which is a skill-biased technology. Why do you think AI will have a different impact on the labor market than the computer did?

Well, I'm not sure it's going to have a different impact. I think there are a lot of similarities. Computers were able to do a whole lot of very routine tasks that were the same thing over and over again, like sorting checks. They could do it a lot quicker and a lot more accurately, without the drudgery. AI is not so much going to be doing that. Each task will be somewhat different; it won't be the automation of a routine task. It might be something very non-routine, and a letter for a specific circumstance is a good example of that. So in that sense, there does seem to be a difference.

But even that is a hard line to hold. The one thing they've been trying to use AI to crack is the ability to pick up an object and put it in a box. The whole point of Amazon's automated production process was to get people out of the warehouse, but in the end, the people are still in the warehouse because they still can't get a machine to take items from one box and put them into another. Only people can do that. You kind of feel like, ‘Surely AI, with everything else that it's doing, will be able to do this.’ But they haven't been able to crack it yet. And it's the same with self-driving cars. We haven't gotten there yet, even though it looked like we would; that last bit turned out to be very difficult. The only way we can conceive of having self-driving cars is if we get rid of all the people driving cars. But what that's basically saying is we can't have self-driving cars in an environment as complicated as one where people are around.