A recent paper in the journal Current Opinion in Psychology highlights one potential challenge in AI adoption: the risk that the use of tools such as digital assistants can lead workers to act less ethically than they would otherwise.

“Users can ‘tell’ their AI assistants not only what to do, but how to do it,” the paper’s authors write. “These new forms of power could bypass existing social and regulatory checks on unethical behavior”—a risk they attribute to “a well-studied phenomena whereby people are less sensitive to the moral consequences of indirect actions (where actions occur through intermediaries).”

“When you are interacting with somebody face to face, you have this sense of feeling a positive kind of pressure to treat that other person well, and to have the interaction go well,” explains Nathanael Fast, an associate professor of management and organization at the University of Southern California’s Marshall School of Business, who co-authored the paper with USC computer-science professor Jonathan Gratch. “When you remove the human from the process… it removes some of that potency.”

We spoke to Fast, the co-director of the Psychology of Technology Institute and the head of USC’s Neely Center for Ethical Leadership and Decision-Making, about how AI affects decision-making and how organizations can respond to those effects. Here are excerpts from our conversation, edited for length and clarity:

What does research tell us about how AI use affects ethical decision-making?

It's a very, very complicated question, and most of the ways that it affects us are ways that we don't really understand yet. That being said, researchers like John Gratch and myself have been doing research for several years to try to anticipate this day and the technologies that are emerging. One of the big areas of research that we've been doing is looking at what happens when you replace humans with AI. The more AI develops, the more capacity we have to remove humans from work-related processes they would otherwise be part of.

And so psychologically, what happens when we do that? That's a big question. One of the areas of work that John has done is on this concept called ‘other regard.’ When you are interacting with somebody face to face, you have this sense of feeling a positive kind of pressure to treat that other person well, and to have the interaction go well. And when you remove the human from the process, even in ways that are not AI—if you don't see them, but you're interacting over email or those types of things—it removes some of that potency. The more we're using AI-based assistants, the more we're finding that other regard actually goes away. We're more likely to program our AI assistants to negotiate more harshly on our behalf than we would if it was us doing it. That has implications for ethical decision-making: People are going to be more likely to be unethical, because that psychological construct of other regard is not going to be present.

You’ve done some research on how AI and perceptions of power affect decision-making. Can you explain the relationship there?

The thing that's really interesting about AI and power is that when we're interacting with humans, there's a very clear sense of hierarchy, because people are who they are. Hierarchy is determined in both formal and informal ways. The formal ways are roles and titles and positions in a company. Informal ways are status characteristics and things like physical size or intimidation factors. And those aren't changing back and forth during the course of a conversation, for example.

One of the interesting things about AI is that when we're interacting with large language models or AI-based avatars, for example, they can actually completely change the way that they come across in an interaction. They can use low-power language, low-power tone, interact with us in a way that makes us feel powerful, or maybe we can program them to be more of a boss or whatnot. We might see a day where we have not only digital assistants but a digital partner that in some cases is an assistant and in some cases can go into boss mode and be a little bit more bossy, or whatever. That's really exciting, but it also has a lot of maybe scary implications attached to it, too, in terms of manipulation.

When might one setting be preferable to the other?

I think the digital assistant is probably more enjoyable and probably leads to psychology that is better in most cases. When you look at the psychological literature on power, which I've been studying for many, many years, there are a lot of different effects. When people feel more powerful, or you randomly assign them to a condition that makes them feel more powerful, they tend to have an illusion of control. They feel more optimistic and confident, and they're more action oriented. They also tend to see the bigger picture. And so there are a lot of ways in which feeling powerful is adaptive.

On the other hand, some of those feelings and psychological states can lead us down paths of risk-taking, in some cases unethical risk-taking. They can lead people to be more confident in the knowledge they have than they should be. In some cases, the people in our studies lost money when they were betting on trivia questions and things like that, whereas randomly making people feel lower-power made them more cautious, and they lost less money. And so my sense is that having a virtual assistant will make people feel better and maybe more action-oriented and stuff like that, but if you're doing a task that really requires attention to detail and not making any mistakes and not getting it wrong, it might be good to switch that knob a little bit and feel a little bit less powerful, because then you're going to be more careful and vigilant.

What should managers and workplace leaders do with that knowledge?

Some of the programming decisions are going to be made by the companies developing the technology, and so we don't necessarily have a choice over those as managers. I'm doing some research with Juliana Schroeder and Maya Cratsley where we're looking at developing a technological intelligence scale: In the same way that we have emotional intelligence, we really need to develop a sense of technological intelligence. And so I think one of the most important things that managers can do is try these things out, but not commit to them until there's some evidence that they're actually going to be useful. I'd hope that if we look at our own personal use of technologies, we can see a whole bunch of examples of testing out and trying technologies that we ended up abandoning because they actually wasted time, or weren't really useful.

Social media is a good example of that for many people who thought, ‘Hey, this is a great way to get my ideas out. I'm going to create an Instagram and a Twitter and all these other accounts.’ And then, if people are really honest with themselves, in many cases it actually wasted more time, led to negative emotions, and [made them realize] that they could get more work done some other way. So the advice would be: test these things out, don't reject them without trying them or considering them, but also have an evaluative mindset all the way through to ask, is this really helping our productivity or is it harming our productivity going forward?

What other considerations should go into that evaluation? Is there an ethical-behavior component?

There are a lot of different ethical frameworks, so there's no one way to think about ethics. There's consequentialism: What are the outcomes, and how do we maximize the benefits and minimize the harms? There's duty-based ethics, so adhering to norms and laws and so on. There's virtue ethics, where you're identifying core values and trying to stay true to those. The problem with ethics in the age of AI is that it's very difficult to use any of those frameworks, because we don't understand the technology very well. Let's say we're going to use consequentialism and say, ‘Okay, let's maximize the benefits and minimize the harms.’ Well, we really don't know yet what the harms are ultimately going to be, or what the benefits are going to be.

We have to be very thoughtful and very conscious of that. That's why I really believe in tech intelligence. We can look at social media as an example. When social media first came out, people thought it was either really fun and going to connect people, or really boring or irrelevant. Nobody really looked at it and said, ‘Oh, this is going to cause polarization or major distraction.’ And that's partly because, when it first came out, it didn't use algorithms to govern what came next in our feeds. But as it evolved and started using more artificial intelligence to dictate the next piece of content that comes down your feed, that had major implications for our attention, for our emotional experiences, and also societally for things like polarization.

And so thinking through those types of things is important, because when we're using digital assistants, it might be that on day one, a digital assistant that is developed is really helpful. It maybe summarizes a website or something like that for us and saves us some time on that front. But part of what happens is that these things evolve over time, and as we add more artificial intelligence or other algorithms to the products, they change. And so this is a long answer to your question, but basically, we have to be very thoughtful about what's happening at the individual, psychological level, but also what's happening to the culture of our organizations. Are people interacting in really positive ways? Are they interacting more than they were before? Or instead, are we in a situation where virtual assistants are doing most of the work and the people in the organization are starting to pull away from having lots of interactions? We don't know the answers to these questions, but these are things we should be looking at.

What safeguards can organizations put in place to prevent that potential harm to culture?

Constant check-ins and constant monitoring. It's not a one-time decision that you make at the beginning and are done with; rather, it's an ongoing process as the technology iterates and changes over time. And so having ongoing conversations where there are evaluations of the different technologies being used is a really important safeguard that organizations need to adopt.

Can you say more about your work on technological intelligence?

It's not something that we've put out yet, but we're developing a scale that would assess two components. One is people's knowledge and understanding of technology. That includes how to use it, what's out there, how you make use of it and how it works, things like that. And then a second component of tech intelligence is the ability to evaluate the harms and benefits of these technologies. We're not being very intelligent if we're using tools and technologies effectively but doing it in a way that undermines our goals and so on. Those two components both need to be present for tech intelligence to be present.

Is it something that's best cultivated on an individual level, or something employers should be teaching to their workers?

That is one of the questions. First we're designing the measure, and then we're going to be launching a lot of research studies about how best to develop tech intelligence. That's a huge part of the work at the USC Neely Center for Ethical Leadership and Decision-Making. A big part of what we're trying to do is elevate people's tech intelligence. And our sense is that absolutely, it can be developed at the individual level as well as societally.

I think you see that with AI. If you think about social media initially, it took us a long time to figure out that we should be thinking about some of the downstream negative consequences. It took over a decade for us to really get a sense of that. When it comes to generative AI, that's the conversation from day one. And so in some ways you might say that society, tech leaders, and organizational leaders are getting more intelligent about these technologies. That doesn't make it necessarily easy to figure out, but at least I think our TQ is going up.

If people have the tech intelligence to understand some of these effects—for example, the loss of other regard—does that lessen their potency at all?

Generally with psychological effects like that, we can know about them, but they still affect us. One example: Azim Shariff has talked about how we've evolved to be concerned about privacy in the context of other people—if there are other people in the room, we're very skillful at understanding privacy concerns, whereas when it's a technology, we haven't evolved for that yet. We just haven't evolved to be thinking about the effects in that way. And so even understanding those effects, or being informed about them, it's still hard for that knowledge to get through to our psychology.

Roshni Raveendhran and Peter Carnevale and I have some work where we've looked at the fear of negative evaluation. One of the things that causes people to be averse to behavior tracking is that they don't want somebody looking over their shoulder, because they feel like their manager is going to evaluate them or judge them negatively—or even if they're judging them positively, they feel this extra pressure to perform in a certain way that reduces a sense of autonomy. And so people don't generally like to be micromanaged, but when it comes to technology doing that same tracking, all of a sudden people are more open to it and more accepting of tracking, because it eliminates that negative judgment component.

We've also found the same thing with managers. When they have to do uncomfortable tasks that might draw judgment from their subordinates—micromanagement or checking in repeatedly or things like that, which might annoy them—they tend to prefer to do that through virtual agents and remove themselves from the process, allowing a virtual assistant or virtual avatar to do the interaction for them, because they want to reduce that feeling of negative evaluation. So that implies that perhaps for some of the tasks that might be more unethical or whatever, these new tools allow managers to hide behind them a little bit.

You’ve written that we often fail to think through the consequences when adopting new technologies. Can you say more?

One example of that is the privacy paradox. In the research community, among psychologists studying the psychology of technology, they talk about the privacy paradox: People say that they care about privacy, but then when it comes to adopting new technologies, there's almost no consideration of it. I think there are a few reasons why people don't think very carefully about each new technology they adopt. One of them is that we tend to see the benefits more clearly. The benefits are more salient to us when we're thinking about adopting. The harms are more hidden. So if we're thinking about downloading a new app or something like that, it's like, ‘Oh, this will help me do my work more efficiently,’ and the harms are not really present. They're harder for us to imagine. So one of the problems is the imbalance there.

Another problem is the feeling people have that there's nothing they can do, that this stuff is inevitable. ‘I do care about privacy, but there's nothing I can do about it, because all the tools and applications and technologies that I have to use for my work are taking away my privacy. So I can either not have a job or give up my privacy.’ It's that kind of determinism that is problematic for people. It's tough to feel like we have a choice about this stuff.

The third one would just be the tech intelligence itself: getting more conscious of the fact that there are always benefits and there are always harms. I think a good metaphor for this is medications. We all know that medications have side effects, that they have some impact. The problem with technologies, especially with AI, is that they've become increasingly powerful. It's almost equivalent to putting a drug out into society and saying, ‘Hey, look what this did to people. Look how that's changing people.’ We would never do that with a drug. But with these powerful technologies, I do worry that we're not being more cautious and careful about what the products are before we actually launch them out to the public.