Google squashes AI teams together in push for fresh models


Google is consolidating the various teams working on generative AI under the DeepMind team in a bid to accelerate development of more capable systems.

The decision, handed down by CEO Sundar Pichai on Thursday, builds on previous consolidation efforts around AI development at the Chocolate Factory. Last year, the search giant combined its DeepMind team and its Brain division to form a group confusingly called Google DeepMind. This team was ultimately responsible for training the models behind Google's Gemini chatbot.

Pichai now plans to move all teams building AI models under one roof.

"All of this work will now sit in Google DeepMind and scale our capacity to deliver capable AI for users, partners and customers," he wrote, adding that the change will allow Google Research to focus more of its attention on investigating foundational and applied computer science.

Google's so-called Responsible AI team is also joining the DeepMind crew, where it's expected to play a larger role earlier in the development of new models. Google says it already moved other responsibility groups under its central Trust and Safety team and plans to build out more extensive AI testing and evaluation protocols.

The moves were presumably made in an effort to avoid embarrassing snafus, which have plagued many early generative AI offerings, including Google's Bard and later Gemini chatbots.

In February, Google was forced to suspend Gemini's text-to-image generation capabilities after it failed to accurately represent Europeans and Americans in specific historical contexts. For example, when asked to generate images of a German soldier from 1943, it would consistently generate images of non-White characters.

To avoid this, Pichai says Google is also "standardizing launch requirements for AI-powered features and increasing investments in 'red team' testing for vulnerabilities and broader evaluations to help ensure responses are accurate and responsive to our users."

In other words, now that Google has established itself as a generative AI heavyweight alongside Microsoft, OpenAI, and others, it's going to spend a little more time finding ways to make its models misbehave before its users have a chance to.

Circumventing AI guardrails to generate abusive, toxic, or unsafe content or suggestions, like how to commit suicide, has become a major issue for model builders.

  • Meta lets Llama 3 LLM out to graze, claims it can give Google and Anthropic a kicking
  • Google fires 28 staff after sit-in protest against Israeli cloud deal ends in arrests
  • Google laying off staff again and moving some roles to 'hubs,' freeing up cash for AI investments
  • UK unions publish AI bill to protect workers from 'risks and harms' of tech

The consolidation efforts also extend to AI hardware development. The changes will see the Platform and Ecosystem and Devices and Services teams combined into a new group called Platforms and Devices. As part of this move, employees working on computational photography and on-device intelligence will also join the combined hardware division.

Pichai closed his memo with a warning to employees to leave their personal vendettas at home.

"This is a business and not a place to act in a way that disrupts coworkers or makes them feel unsafe, to attempt to use the company as a personal platform, or to fight over disruptive issues or to debate policies," he wrote.

Pichai's comments appear to be in reference to the 28 employees fired this week for protesting Google's dealings with the Israeli government; however, we have no doubt they also apply to the ethical minefield the field of AI is quickly becoming. ®