Inside the Crisis at Google
Culture war narratives give Google's organizational coherence a bit too much credit. There's more to the story.
It’s not like artificial intelligence caught Sundar Pichai off guard. I remember sitting in the audience in January 2018 when the Google CEO said it was as profound as electricity and fire. His proclamation stunned the San Francisco audience that day, so bullish it still seems a bit absurd, and it underscores how bizarre it is that his AI strategy now appears unmoored. The latest AI crisis at Google — where its Gemini image and text generation tool produced insane responses, including portraying Nazis as people of color — is now spiraling into the worst moment of Pichai’s tenure. Morale at Google is plummeting, with one employee telling me it’s the worst he’s ever seen. And more people are calling for Pichai’s ouster than ever before. Even the relatively restrained Ben Thompson of Stratechery demanded his removal on Monday.

Yet so much — too much — coverage of Google’s Gemini incident views it through the culture war lens. For many, Google either caved to wokeness or bowed to those who’d prefer not to address AI bias. These interpretations are wanting, and frankly incomplete explanations for why the crisis escalated to this point. The culture war narrative gives Google too much credit for being a well-organized, politics-driven machine. And the magnitude of the issue runs even deeper than Gemini’s skewed responses.

There’s now little doubt that Google steered its users’ Gemini prompts by adding words that pushed the outputs toward diverse responses — forgetting when not to ask for diversity, like with the Nazis — but the way those added words got there is the real story. Even employees on Google’s Trust and Safety team are puzzled by where exactly the words came from, a product of Google scrambling to set up a Gemini unit without clear ownership of critical capabilities, and a reflection of the lack of accountability within some parts of Google. "Organizationally at this place, it's impossible to navigate and understand who's in rooms and who owns things,” one member of Google’s Trust and Safety team told me. “Maybe that's by design so that nobody can ever get in trouble for failure.”

Organizational dysfunction is still common within Google, something it’s worked to fix through recent layoffs, and it showed up in the formation of its Gemini team. Moving fast while chasing OpenAI and Microsoft, Google gave its Product, Trust and Safety, and Responsible AI teams input into the training and release of Gemini. And their coordination clearly wasn’t good enough. In his letter to Google employees addressing the Gemini debacle this week, Pichai singled out “structural changes” as a remedy to prevent a repeat, acknowledging the failure. Those structural changes may turn into a significant rework of how the organization operates. “The problem is big enough that replacing a single leader or merging just two teams probably won’t cut it,” the Google Trust and Safety employee said.

Already, Google is rushing to fix some of the deficiencies that contributed to the mess. On Friday, a ‘reset’ day at Google, and through the weekend — when Google employees almost never work — the company’s Trust and Safety leadership called for volunteers to test Gemini’s outputs to prevent further blunders. “We need multiple volunteers on stand-by per time block so we can activate rapid adversarial testing on high priority topics,” one executive wrote in an internal email.
And as the crisis brewed internally, it escalated externally when Google shared the same type of opaque public statements and pledges about doing better that have worked for its core products. That underestimated how different the public’s relationship with generative AI is from its relationship with other technology, and it made matters worse. Unlike search, which points you to the web, generative AI is the core experience, not a route elsewhere. Using a generative tool like Gemini is a tradeoff. You get the benefit of a seemingly magical product. But you give up control. While you may get answers quickly, or a cool-looking graphic, you lose touch with the source material. To use it means putting more trust in giant companies like Google, and to maintain that trust Google needs to be extremely transparent. Yet what do we really know about how its models operate? By continuing on as if it were business as usual, Google contributed to the magnitude of the crisis.

Now, some close to Google are starting to ask if it’s focused in the right places, coming back to Pichai’s strategic plan. Was it really necessary, for instance, for Google to build a $20 per month chatbot, when it could simply imbue its existing technology — including Gmail, Docs, and its Google Home smart speakers — with AI? These are all worthwhile questions, and the open wondering about Pichai’s job is fair, but the current wave of generative AI is still so early that Google has time to adjust. On Friday, for instance, Elon Musk sued OpenAI for betraying its founding agreement, a potential setback for the company’s main competitor. Google, which just released a powerful Gemini 1.5 model, will have at least a few more shots before a true moment for panic sets in. But everyone within the company knows it can’t afford many more incidents like the previous week’s, from Pichai to the workers pulling shifts this weekend.

Start selling to enterprises with the API that expands your TAM (sponsor)
WorkOS is a modern identity management platform for B2B SaaS. It provides easy-to-use APIs for authentication, user identity, and complex enterprise features like SSO and SCIM provisioning. It's a drop-in replacement for Auth0 and supports up to 1 million monthly active users for free. It's perfect for companies frustrated with the high costs, opaque pricing, and lack of enterprise capabilities of legacy auth vendors.

What Else I’m Reading, Etc.
Musk sues OpenAI for breach of contract [CNBC]
Microsoft tried to sell Bing to Apple [9to5mac]
Why the Apple Car failed [New York Times]
Zuck is having a moment [Axios]
We need self-driving cars. Let’s not burn them, okay? [Newcomer]
The New York Times wants to know who leaked its Daily episode [Vanity Fair]
Salesforce CEO Marc Benioff is buying up land in Hawaii with unclear intentions [NPR]
I joined CNBC to discuss the Gemini incident and Pichai’s performance [YouTube]

Quote Of The Week
“Google has lost its way. It's the best company to compete with. Even investors have stopped asking ‘What if Google does it?’”
Playground CEO Suhail Doshi, who runs a competing AI image generator.

Number Of The Week
25%
Gartner predicts traffic to search engines will drop by 25% by 2026. It also predicted 50% of people would limit or abandon their social media use by 2025.

This week on Big Technology Podcast: NVIDIA's AI Moat & Origins — With Bryan Catanzaro
Bryan Catanzaro is NVIDIA's VP of applied deep learning research.
He joins Big Technology Podcast to discuss why NVIDIA is building more than just chips, examining the software and algorithms that help tech companies build and run AI models. Join us for a conversation about how NVIDIA sees the world, what's led to its success, and what makes it indispensable. In the second half, we discuss how Bryan helped kick off NVIDIA's push into AI, from the very start to where it is today. You can listen on Apple, Spotify, or wherever you get your podcasts.

Send me news, gossip, and scoops?
I’m always looking for new stories to write about, no matter how big or small, from within the tech giants and the broader tech industry. You can share your tips here. I will never publish identifying details without permission.

Thanks again for reading. Please share Big Technology in Slack, with a friend, or on social media if you like it! And hit that Like button; we promise you won’t have to work the weekend checking Big Technology for all the bad things it might do.

My book Always Day One digs into the tech giants’ inner workings, focusing on automation and culture. I’d be thrilled if you’d give it a read. You can find it here.

Questions? Email me by responding to this email, or by writing alex.kantrowitz@gmail.com.
News tips? Find me on Signal at 516-695-8680.

Thank you for reading Big Technology! Paid subscribers get this weekly column, breaking news insights from a panel of experts, monthly stories from Amazon vet Kristi Coulter, and plenty more. Please consider signing up here.
© 2024 Alex Kantrowitz