ChatGPT is making its way into workplaces, and organizations are grappling with what policies to put in place and how to prepare their workers.
We recently spoke with George Westerman, a senior lecturer at MIT’s Sloan School of Management, founder of the university’s skills-focused Global Opportunity Initiative, and the co-author of Leading Digital: Turning Technology Into Business Transformation, about the AI-related skills employers should be prioritizing right now. Here is a transcript of our conversation, edited for length and clarity:
You've written that technology changes quickly, but organizations change much more slowly. Can you unpack that as it applies to ChatGPT and other generative AI technologies?
Technology only really creates value for companies when you change the way that you're doing business. No matter how fast the technology is moving, the real question is how fast you can change your business. There is a lot of sense-making that has to happen. You've got a lot of trial and error that has to happen. And then of course you need to get people to adopt things and to use them really well. So that's really the slow part here.
When we think about ChatGPT, certainly there's going to be a lot of adoption by individuals doing whatever they're going to do. That happens all the time. But when you actually have to go out and change what you're doing in an intentional way, you’ve got to do all your planning, all your rollout, all those things. And that takes time.
Do you anticipate that change will be driven more by individuals adopting ChatGPT and workplaces having to catch up to what their employees are already doing?
This happens a lot with any consumer-facing technology: the consumers tend to lead the businesses. We'll see it here too. It's already being used all over the place by students in classrooms and by individuals at work, but there's less happening inside companies yet. So it'll be the same way: The adoption will get ahead of the companies, and the companies will have to catch up.
Some companies are already creating their ChatGPT policies. What would be your recommendation to organizations as they figure out what their own policy should look like?
The first response to anything unknown is always to say, ‘No, we don't want to do that.’ So the first and wrong answer is just to block it. If you block it in-house, it'll just be used out of the house, and then you won't have any idea what's happening there. That's number one. Number two, you might want to set up some policy just to say, ‘Here are the dangers that can happen. Here's what's acceptable and what's not.’ Get that out pretty quickly so that people have some idea what the right way is, and then they can choose whether to follow it or not. And at that point, once you get better at it, then you can start thinking about other policies, other ways you might use this.
This stuff is really pretty early. I think it's too early to create a real GPT policy yet, other than to say no, and I think no is the wrong answer. You just want to help people get comfortable with what's possible as you get clearer on what specifically you want to do. You want to put out guidelines, help people understand where the swim lanes are, so they can at least tell if they're going too far one way or the other. And then put in the more specific policy guidance once we all know what we're doing. That could be a while. Certainly it's not in the next couple of weeks or months.
What’s an example of what those early guidelines might look like?
‘Here are the places where GPT might violate fair use or copyright restrictions, so if you're going to use GPT in those environments, you need to be really careful.’ Something like that. Or, for example, in classrooms: ‘Here's how we think about plagiarism. So where does GPT fall on that line, and how do you want to think about using it without committing plagiarism?’
Worker trust in AI is far from universal. To what extent do you see differing levels of employee trust being an issue—for instance, cases where one employee is eager to incorporate AI in their work and one of their colleagues on the same team is more wary? Or workers who have a manager who mistrusts AI, or vice versa?
Hopefully a little less than we had in the political debates of the last couple of years. If we have these different questions of trust and distrust, of using and not using, I would encourage a regular conversation about them on my team. Let's open up the feelings, let's think about our values, and we can figure out how to work together when we disagree on things.
One thing is that there is a natural distrust of automation, because for hundreds of years there's been a question of whether the machine will replace me. So what can you do as a manager to help with this? Have the conversation about how this might actually help people do better, how it might help them focus on more interesting work rather than the routine work. Reassure them that this isn't taking their job; it's allowing them to do more with less, or to do more interesting stuff. Another one would be, ‘We don't know the value of this thing, so please experiment with it. And I just want you to know that if it does turn out to automate really well, we'll look out for you.’
Now, that's assuming that you aren't going to replace people. If you are going to replace people, then the conversation has to be much more like, ‘This will happen over time. If it turns out that this will displace people, then we will work really hard to find the people that get displaced a good place to go.’
How much should employers be prioritizing AI literacy right now as an employee skill?
We want to think about levels of knowledge when we think about training in technologies in general. There were people out there saying that the right answer is to train everybody to program, and I just completely disagree. You want to give people enough knowledge to understand, without making them go overboard on the technical concepts. The same thing will happen with AI. Some of your technical people may need to know the depths of how AI works; other technical people may not have to deal with it at all. The people who are non-technical are going to have to understand the AI-enabled tools much better: what the limits are, where the biases are, that kind of thing.
So not necessarily training them on AI, but training them on the tools instead. For example, if I write customer-service emails all day, I don't need to know how AI works, but I probably need to know how the new tools coming out of AI are going to help me write those emails more easily. So we want to think about the levels of expertise that are required and tailor the training to the level of expertise required in that role.
For example, you might need to know Excel for your job, but you don't need to know the programming intricacies that make Excel what it is?
Right. Let's take that even farther. Some of us may need to know Excel, others need to be really expert at pivot tables, but not everybody.
Should employers worry that certain skills that GPT can replace will atrophy over time?
I think it depends on whether you expect an apocalypse to come anytime soon. If we expect that the tools will remain there, then there's a question about whether the skills we had before still matter. An example: I used to love driving my manual-transmission car, and I got really good at it. I don't know whether I could do it anymore, but when do I ever have to? The automatic transmission does that for me. Similarly, I used to be really, really good at doing basic calculations in my head. Now I've got a calculator, so how important is it that I can multiply three-digit numbers in my head? It doesn't matter. So when we think about atrophying, that's kind of a value-laden statement. There are always skills you're gaining and skills you're losing. And the question is, what is the cost of the skills you're losing? There may not be a cost at all.
Attention is a different thing. You don't want to start abdicating all of your judgment to these tools. For example, people driving Teslas every once in a while get in accidents and get killed. They trusted the car too much; it's not infallible, and it will make some mistakes. If they turn themselves completely over to it, bad things will happen. And that can happen when something is really pretty good: you stop paying attention. That's not really atrophying a skill, that's just an attention thing.
Does this feel like an inflection point to you, in terms of how organizations should be reckoning with their skilling strategies? Or is this just part of the same continuation of technological advancement that’s been happening forever?
One thing we've been learning with advanced automation, especially in an office environment, is that automation's very good at the routine work. If you are a person who does routine work, then you're going to need to figure out how to do non-routine work. We're seeing the same thing in our studies of advanced manufacturing: When a technician moves from traditional manufacturing to an advanced manufacturing context, suddenly you need to think about critical thinking, understanding systems, troubleshooting. So when we think about skilling in a world of fast-moving automation, we want to make sure that we're skilling up people with these thinking skills: the creative skills, the critical-thinking and systems-thinking kinds of skills.
Because they're going to need that more. The other thing they're going to need more of is probably to brush up on the things that are uniquely human. The circle of what is uniquely human gets smaller and smaller over time, but certainly there's empathy and there's interaction and there's creativity. So that's a long answer to say if we're still training skills for routine jobs, that's not good for the employer or employee. What you want to do is more critical thinking, higher-order thinking, because that's something where the humans can continue to contribute to the story as advanced automation takes over their routine stuff.
Presumably critical thinking will also become more important in learning to evaluate AI-generated information…
Yeah. So there's some training to be able to understand: What are the limits? What are the biases? One of the things I've learned about a self-driving car is that it works really well in certain situations and a little less well in others. So I've learned to pay attention, especially in those situations where it's maybe not as good as it usually is.
We don't have enough experience yet to know what good or bad practice looks like, but certainly we know critical thinking itself. There are hundreds of courses out there and thousands of papers written on it. We know what critical thinking is. We know how to train critical thinking. How do we take those frameworks of critical thinking and apply them in an AI context?
We talked about unconscious bias, we talked about institutional bias, and we've got to get into algorithmic bias too. For example, Florida had a system to assess whether somebody coming out of prison was likely to reoffend, and they used the system to make judgments about parole. It turned out the system was heavily racially biased, and they learned that later. Certainly also, employment systems that predict who's most promotable will often be based on the people who have risen in the organization before, and those people tend to be old white men. So thinking about algorithmic bias would be another element here.
And for everything that organizations can do to make their products and their marketing better, the bad people out there will find ways to make their products and their processes better, too. So you can expect that phishing emails will become more personalized and more professional-looking. You can expect that the people calling into call centers and trying to trick you into giving them passwords, or into changing account numbers so they get paid instead of somebody else, will start sounding an awful lot like people who are trusted in the organization. You can expect, moving beyond the text world, that deepfake videos and voices will become pretty compelling. So what that means is we need to help people be more aware of these threats that are coming down the line. We also need to double down on the verification measures that people sometimes forget to do, because it's going to come down more and more to human judgment on whether this person is a crook or an honest person, and that's going to become more and more difficult.
In terms of organizational priorities around employee learning, we’ve talked about critical thinking and more emphasis on human skills—are there any others that should be high on the list right now?
There's a phrase out there called ‘prompt engineering,’ which is how to put your queries into ChatGPT the right way so you get the right answers. That could be a skill that's being emphasized. But we did some work on these human skills. We looked at 41 different frameworks of human skills and came up with our set of four that are really the ones that can help you thrive and move forward.
We kept hearing that you hire for hard skills, you fire for soft skills. And so of course, typical professor, I'm like, ‘Well, how do we define soft skills?’ We went out and found these 41 different frameworks. There was commonality among them, but there were also some things that just felt like they weren't quite there yet. So we synthesized those 41 frameworks, did a lot of interviews with experts, and came up with our two-by-two human-skills matrix, because that's how we think in management schools. The top is doing, the bottom is leading, the left is me, the right is other. Just thinking in terms of those four quadrants—how I think, how I interact, how I manage myself, how I lead others—can be a really powerful way of thinking about what these human skills are. It can really help you see where you might be good and where you might need some work. You can also think from an occupational side: which of these four is more or less important in different occupations?
So these are skills that will become increasingly important going forward?
We say these are the skills that can help you thrive in a time of rapidly advancing technological change. Now, will all of them become increasingly important? I don't know. Project management, for example: the computers may get really good at that. I saw one paper, about five years ago now, where they had a computer identify the project plan, hire the gig staff, and get the whole thing developed, all by the computer. But these are the skills we've seen in our frameworks and our analysis that are still useful in this world.
Do you see AI as a tool that could help with the development of these skills? For instance, I had a conversation recently with somebody who said that they envision using ChatGPT for rehearsals for difficult feedback conversations.
Absolutely. This is something I've been wanting to do for a long time. We haven't known how to code it, but we're starting to look into it now. For a lot of these human skills, the best way to train or assess them is through role play, and we've developed role plays for some of these things, but that requires having a human actor involved. Can the computer be the other actor? Then you don't need to get two people together. You can just practice these things whenever you want.
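To make the role-play idea concrete, here is a minimal sketch of what such a practice session could look like in code, assuming the OpenAI Python client; the model name, the persona, and the coaching-note setup are illustrative assumptions, not a description of the work Westerman mentions.

# A minimal sketch of the role-play idea above: the model plays a colleague
# receiving difficult feedback while you practice delivering it.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and persona are illustrative only.
from openai import OpenAI

client = OpenAI()

# The "other actor": a persona the model stays in for the whole session,
# plus a request for a short coaching note after each of your turns.
messages = [
    {
        "role": "system",
        "content": (
            "You are role-playing a colleague who has repeatedly missed deadlines. "
            "Stay in character and respond realistically to the feedback you receive. "
            "After each reply, add one short coaching note on how the feedback was delivered."
        ),
    }
]

print("Practice session started. Type 'quit' to end.")
while True:
    user_turn = input("You: ")
    if user_turn.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": user_turn})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model would do
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("Colleague:", reply)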