“In a way, we’re all startups again,” declared A.Team founder and CEO Raphael Ouzan at the AI x Future of Work Summit the company hosted this past week. The acceleration of generative AI has forced even the most established companies to experiment and rapidly transform their existing business models. “The homepage on Google generates hundreds of billions of dollars—you don't want to mess with that,” said Ouzan. “And yet they are messing with that, and they're putting a component of generative AI right on that homepage that literally says, ‘This is experimental. Watch out for the results.’” 

Embracing a startup mindset around AI is one of several pieces of advice for organizations that we heard at two separate AI events this week. At both A.Team’s summit and the Evident AI Symposium, we listened to insights from leaders—including a chief people officer at an automotive tech supplier and multiple data and analytics executives at banks—about how AI is changing their ways of doing business.

Here are some takeaways from those events:

  • Continually remind employees to apply their judgment over the long term. Vahe Andonians, chief technology officer and chief product officer of Cognaize, said that we need humans to apply their judgment to AI outputs even when we no longer need to be checking their facts. “I'm going to steal from Nietzsche,” he said, “It should be humans because we are the only ones that can suffer. AI is not going to suffer…so the judgment layer should be us.” George Lee, co-head of the office of applied innovation at Goldman Sachs, added that one thing his team has been focused on is how to encourage employees to keep a sharp eye, even when the AI system is performing well: “After the 10th experience of it looking just great, are you going to pay attention?” (A recent study on BCG consultants, covered by Charter, illustrated the danger of employees ‘switching off their brains’ when working with impressive AI systems.)
  • Just do something. That was the advice of Jeff McMillan, head of analytics, data, and innovation at Morgan Stanley Wealth Management, who led the effort to create a tool, built on OpenAI’s GPT-4, for the firm’s wealth advisors. “We went from driving horse-drawn carriages around the streets of New York City and then someone gave me a BMW,” he said. McMillan noted that prioritizing speed led to a less-than-smooth rollout—“We piloted for nine months. We had…20,000 pieces of feedback because the day we went live with this thing, it was not producing high quality”—but that the experience enabled his team to learn a lot, quickly, about how to move forward. His advice is to “get the smartest people you have in the room, engage with these tools in a controlled playground, and invent.” What you shouldn’t do, says McMillan, is spend too much time deciding which model you’re going to go with. “Honestly, they’re all BMWs.”
  • Focus more on the input data and less on the model. “The hard part was not getting the [large language model] to work,” says Morgan Stanley's McMillan, about the tool the firm rolled out for its wealth advisors. “The hardest part was getting a hundred thousand documents to be AI quality,” he said about the knowledge base they fed the model. “AI is stupid when you give it bad content…I would argue our greatest achievement is the quality of the inputs.” The unparalleled importance of input data was a sentiment echoed by many at both conferences, including Bill Schaninger, a former senior partner at McKinsey, at the A.Team summit: “If you've done a lousy job…[with] your routine knowledge management, any model you build and/or test on will be garbage.” For example, personalized employee onboarding, which Schaninger sees as a promising generative AI use case, is only going to work if the company’s internal knowledge base is well-documented and the model stays up to date with new information.
  • Internal generative AI use cases are coming first. Something we heard at both events is that many companies are focusing on internal generative AI applications before turning to external applications, like customer self-service. This is particularly true for financial institutions, where regulations abound and the risks are greater. “We don't expect to see a lot of banking clients interacting with a chatbot to get financial advice in 2024,” said Foteini Agrafioti, chief science officer of RBC and head of Borealis AI. As far as internal applications go, a few banks mentioned experiments with programming copilots and knowledge assistants, which workers can query to quickly get information about company knowledge and policies.
  • Some industries are eager for a world beyond large language models (LLMs). “I'm looking forward to the day that we stop talking about LLMs,” said RBC’s Agrafioti, who explained the potential for foundation models that are more tailored to banking. “If you look at our entire data assets—language data, we got a ton of that for sure—but our core asset is transactions, market data, and risk, and fraud. That’s where the core of our business lies.” Agrafioti talked about the potential for large transaction models that have the same generalized abilities as LLMs today. Similar to how you can prompt an LLM to be your chef or tutor, explains Agrafioti, you could prompt a large transaction model to adjudicate for you, price a specific banking product, and understand your risk tolerance. “We're moving towards a world where we have foundation models behind servicing all of our client needs and treating clients consistently across the enterprise. We've been at this for the last three years…We knew how to build foundation models, and we've been accelerating that path for the assets in our businesses that matter the most.”
  • There will be two types of companies, said David Rice, global chief operating officer of commercial banking at HSBC. The first group will take the productivity improvements from AI and cut their headcount while maintaining the same amount of output. The second group will use the productivity improvements as an opportunity to expand their business without cutting headcount. “I'm an optimist, and history would suggest that the financial services industry will do the latter,” said Rice. Goldman Sachs’ Lee similarly left room for optimism. “When I joined the firm, we had hundreds of NASDAQ traders, now we have less than a handful,” said Lee. “This routine of substitution of capital for labor happens, and yet our headcount continues to grow. I think it's going to be a very similar situation.”

Correction: This story has been updated to reflect Lee's title. He is the co-head of the office of applied innovation at Goldman Sachs, not the co-head of applied innovation.
