Date: 2024-08-16

Sam Altman Just Made the 1 Mistake No CEO Should Ever Make

The company says any similarity of its 'Sky' AI voice to Scarlett Johansson's is just an unfortunate coincidence.


Sam Altman. Getty Images

Original article: https://www.inc.com/jason-aten/sam-altman-just-made-1-mistake-no-ceo-should-ever-make.html
Peter Burgess COMMENTARY



EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

MAY 23, 2024

Last week, OpenAI had an event to announce its latest product, GPT-4o, which is faster and more capable than GPT-4. While it's technically the same model, it adds multimodal support, which--along with its increased speed--makes it more useful and fun to use.

By all accounts, the event was a success. As an added bonus, OpenAI preempted much of what Google would eventually demo at its I/O event by holding its own announcement the day before.

You'd think the company would be riding high right now. Instead, it's facing an intense--and self-inflicted--controversy over the voice of its latest chat product, which sounds eerily like Scarlett Johansson's.

It's not just that the voice, which OpenAI calls Sky, sounds like Johansson's. I mean, it definitely does. But the real problem is that--according to a statement released by the actress--the company reached out to her more than a year ago, and she declined to give permission for her voice to be used. OpenAI, it seems, just decided to do it anyway.

'Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system,' Johansson wrote in her statement. 'He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI.'

She continued, 'After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family, and the general public all noted how much the newest system named 'Sky' sounded like me.'

In response, OpenAI's CEO, Sam Altman, says that the voice isn't modeled after Johansson's.

'The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers,' Altman said. 'We cast the voice actor behind Sky's voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better.'

The thing is, the problem here isn't that the company didn't 'communicate better.' The problem is that it asked for permission to do a thing, got turned down, and went ahead and did the thing anyway. Then, when it got caught, Altman tried to pretend the whole thing was just an unfortunate coincidence. There are probably two reasons the company might have done this.

First, it seems pretty clear that Altman is enchanted by the 2013 movie Her, for which Johansson provided the voice of an AI assistant that gets romantically involved with a human. Before the company's Spring Update event, Altman tweeted the word 'her.'

Second, OpenAI seems to very much think it can do what it wants with other people's intellectual property or representations. It has built its entire business on the idea that it can use basically any information on the internet for free to train its model and then charge for it as a product.

When asked if the company trained its text-to-video generation tool, Sora, on YouTube videos, its CTO, Mira Murati, refused to answer the question, saying instead only that 'it was publicly available or licensed data.' When pressed by The Wall Street Journal's Joanna Stern as to whether that includes YouTube, Murati answered that she's 'actually not sure about that.' It seems highly unlikely that the person responsible for the company's technology would not know how that technology works.

This is becoming a pattern. The company wants everyone to believe that this whole thing is just a mistake, or a misunderstanding, or some kind of weird coincidence, but none of that is even remotely believable.

That, by the way, is the real problem. If you're building a company that makes a product people think might have the potential to bring about the end of humanity, trust seems like a thing you should guard jealously.

Instead, Altman has offered an explanation that isn't credible. If a company's CEO is willing to lie so blatantly about something so obvious, it raises all kinds of questions. What else is the company not being honest about?

Trust, after all, is your most valuable asset. Altman has turned breaking trust into a habit, and that's a real problem for the company he leads. In fact, I'd argue that breaking trust is the one thing no CEO should ever do because--as the leader--your credibility is your company's credibility.

OpenAI has been the hottest tech company over the past two years, and ChatGPT one of the fastest-growing products ever, but all of that is put at risk if people can't trust that the company cares about doing the right thing. That's a problem that goes far beyond whether the company used one of the most famous voices on the planet without permission.



The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

Copyright © 2005-2021 Peter Burgess. All rights reserved. This material may only be used for limited low profit purposes: e.g. socio-enviro-economic performance analysis, education and training.