Google DeepMind at ICLR 2024

Research
Developing next-generation AI agents, exploring new approaches, and pioneering foundational learning
Next week, AI researchers from around the world will gather for the Twelfth International Conference on Learning Representations (ICLR), taking place May 7-11 in Vienna, Austria.
Raia Hadsell, Vice President of Research at Google DeepMind, will deliver a keynote reflecting on the past 20 years in the field and highlighting how the lessons learned are shaping the future of AI for the benefit of humanity.
We will also present live demonstrations showing how we bring our foundational research into reality, from developing Robotics Transformers to creating toolkits and open-source models such as Gemma.
Teams from across Google DeepMind will present more than 70 papers this year. Here are some research highlights:
Problem-solving agents and human-inspired approaches
Large language models (LLMs) are already revolutionizing advanced AI tools, yet their full potential remains untapped. For example, LLM-based AI agents capable of taking effective digital actions could transform digital assistants into more helpful and intuitive tools.
AI assistants that follow natural language instructions to carry out web-based tasks on people's behalf would be huge timesavers. In an oral presentation, we introduce WebAgent, an LLM-driven agent that learns from self-experience to navigate and manage complex tasks on real-world websites.
To further increase the general usefulness of LLMs, we focused on boosting their problem-solving skills. We show how we achieved this by equipping an LLM-based system with a traditionally human approach: producing and using "tools". Separately, we present a training technique that ensures language models produce more socially acceptable outputs. Our approach uses a sandbox rehearsal space that represents the values of society.
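To make the "producing and using tools" idea concrete, here is a minimal, hypothetical sketch of how an LLM-based agent might alternate between querying a model and executing the tools it requests. The `solve` loop, the tool registry, and the scripted stand-in model are illustrative assumptions, not the method described in our paper.

```python
# A minimal, hypothetical sketch of the general "produce and use tools" pattern
# for an LLM-based problem solver. The `llm` callable is a stand-in for a real
# language model; none of this reflects the specific method in the paper.
import json
from typing import Callable, Dict

# Registry of tools the model may request by name.
TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy tool for illustration
}

def solve(task: str, llm: Callable[[str], str], max_steps: int = 5) -> str:
    """Alternate between querying the model and executing any tool it requests."""
    context = task
    for _ in range(max_steps):
        response = llm(context)
        try:
            # A JSON response is treated as a tool request, e.g. {"tool": ..., "input": ...}
            request = json.loads(response)
        except ValueError:
            return response  # plain text is treated as the final answer
        result = TOOLS[request["tool"]](request["input"])
        context += f"\nTool '{request['tool']}' returned: {result}"
    return "Step budget exhausted."

# Scripted stand-in for an LLM, for demonstration only.
scripted = iter(['{"tool": "calculator", "input": "12 * 7"}', "The answer is 84."])
print(solve("What is 12 * 7?", llm=lambda _: next(scripted)))  # -> The answer is 84.
```

In a real system the model itself would decide which tool to call and when; the scripted stand-in only demonstrates the control flow.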
Pushing boundaries in vision and coding
Our Dynamic Scene Transformer (DyST) model leverages real-world single-camera videos to extract 3D representations of objects in a scene and their movements.
Until recently, large AI models mostly focused on text and images, laying the groundwork for large-scale pattern recognition and data interpretation. Now, the field is moving beyond these static realms to embrace the dynamics of real-world visual environments. And with computing demands rising across the field, it is important that models are built and optimized for maximum efficiency.
When you watch a video on a flat screen, you intuitively grasp the 3D nature of the scene. Machines, however, struggle to emulate this ability without explicit supervision. We showcase our Dynamic Scene Transformer (DyST) model, which leverages real-world single-camera videos to extract 3D representations of the objects in a scene and their movements. What's more, DyST also enables the generation of novel versions of the same video, with user control over camera angles and content.
Emulating human cognitive strategies also makes for better AI code generators. When programmers write complex code, they typically "decompose" the task into simpler subtasks. With ExeDec, we introduce a novel code-generation approach that harnesses this decomposition strategy to boost the programming and generalization abilities of AI systems.
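As a rough illustration of what decomposition buys a program synthesizer, the sketch below splits one input/output task into two simpler subtasks and solves each by searching a tiny library of primitives. The subtask split, the primitive library, and the search are toy assumptions for exposition, not the ExeDec method itself.

```python
# A toy illustration of decomposition-based program synthesis: a hard
# input/output task is split into simpler subtasks, each solved by searching a
# small library of primitives, then composed. The subtask split and the
# primitive library are assumptions for exposition, not the ExeDec method.
from typing import Callable, Dict, List, Tuple

Primitive = Callable[[list], list]

LIBRARY: Dict[str, Primitive] = {
    "dedup": lambda xs: list(dict.fromkeys(xs)),  # drop duplicates, keep order
    "sort": sorted,
    "reverse": lambda xs: xs[::-1],
}

def synthesize_step(inp: list, out: list) -> str:
    """Find one primitive mapping `inp` to `out` (a stand-in for a learned synthesizer)."""
    for name, fn in LIBRARY.items():
        if fn(inp) == out:
            return name
    raise ValueError("no single primitive solves this subtask")

def synthesize(subtasks: List[Tuple[list, list]]) -> List[str]:
    """Solve each predicted subtask independently, then compose the steps in order."""
    return [synthesize_step(inp, out) for inp, out in subtasks]

# Overall task: [3, 1, 3, 2] -> [1, 2, 3], decomposed into two execution subgoals.
subtasks = [([3, 1, 3, 2], [3, 1, 2]),  # subgoal 1: remove duplicates
            ([3, 1, 2], [1, 2, 3])]     # subgoal 2: sort
print(synthesize(subtasks))             # -> ['dedup', 'sort']
```

Searching for a whole program that solves the full task in one step is much harder than solving each subgoal in isolation, which is the intuition behind decomposition.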
In a parallel Spotlight paper, we explore a novel use of machine learning not only to generate code, but to optimize it, introducing a dataset for robust benchmarking of code performance. Code optimization is difficult, requiring complex reasoning, and our dataset enables exploration of a range of ML techniques. We demonstrate that the resulting learned strategies outperform human-crafted code optimizations.
ExeDec introduces a novel code-generation approach that harnesses a decomposition strategy to boost the programming and generalization abilities of AI systems.
Advancing foundational learning
Our research teams are tackling AI's big questions, from exploring the essence of machine cognition to understanding how advanced AI models generalize, while also working to overcome key theoretical challenges.
For both humans and machines, causal reasoning and the ability to predict events are closely related concepts. In a Spotlight presentation, we explore how reinforcement learning is affected by prediction-based training objectives, and draw parallels to prediction-related changes in brain activity.
When AI agents are able to generalize well to new scenarios, is it because, like humans, they have learned an underlying causal model of their world? This is a critical question in advanced AI. In an oral presentation, we reveal that such models have indeed learned an approximate causal model of their training data, and discuss the deep implications.
Another critical question in AI is trust, which depends in part on how accurately models can estimate the uncertainty in their outputs, a crucial factor for reliable decision-making. We have made great strides in uncertainty estimation within Bayesian deep learning, using a simple and essentially cost-free method.
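As a point of reference for what "estimating uncertainty in outputs" means in practice, the generic sketch below derives a mean prediction and a disagreement-based uncertainty score from several sampled predictions, for example from an ensemble or posterior samples. This is a common baseline illustration only; the essentially cost-free Bayesian method in our paper is different.

```python
# A generic, minimal sketch of quantifying predictive uncertainty from several
# sampled predictions (e.g. an ensemble or posterior samples). It illustrates
# the idea of uncertainty in model outputs only; the paper's essentially
# cost-free Bayesian method is different.
import numpy as np

def predictive_uncertainty(sampled_probs: np.ndarray):
    """sampled_probs has shape (num_samples, num_classes), one softmax output
    per sampled model. Returns the averaged prediction and a simple
    disagreement-based uncertainty score."""
    mean_probs = sampled_probs.mean(axis=0)          # averaged class probabilities
    disagreement = sampled_probs.std(axis=0).mean()  # spread across samples
    return mean_probs, disagreement

# Three hypothetical samples that largely agree -> low uncertainty score.
samples = np.array([[0.90, 0.10], [0.85, 0.15], [0.92, 0.08]])
mean, score = predictive_uncertainty(samples)
print(mean, score)
```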
Finally, we explore game theory's Nash equilibrium (NE), a state in which no player benefits from changing their strategy if all others stick to theirs. Beyond simple two-player games, even approximating a Nash equilibrium is computationally intractable, but in an oral presentation we reveal new state-of-the-art approaches for negotiating deals, from poker to auctions.
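For readers who want the definition spelled out, the short example below checks the Nash equilibrium condition exhaustively on a classic two-player matrix game, the prisoner's dilemma, chosen purely for illustration; it does not show the approximation methods from the paper.

```python
# A self-contained check of the Nash equilibrium condition on a two-player
# matrix game: a pair of pure strategies is an equilibrium if neither player
# can raise their own payoff by unilaterally switching. The game below is the
# classic prisoner's dilemma, used only as an example.
import numpy as np

# payoffs[player, row_strategy, col_strategy]; strategy 0 = cooperate, 1 = defect.
payoffs = np.array([
    [[-1, -3],   # player 0's payoffs
     [ 0, -2]],
    [[-1,  0],   # player 1's payoffs
     [-3, -2]],
])

def is_nash(row: int, col: int) -> bool:
    """Neither player benefits from a unilateral deviation at (row, col)."""
    best_for_row_player = payoffs[0][:, col].max()  # best response to `col`
    best_for_col_player = payoffs[1][row, :].max()  # best response to `row`
    return (payoffs[0][row, col] == best_for_row_player
            and payoffs[1][row, col] == best_for_col_player)

equilibria = [(r, c) for r in range(2) for c in range(2) if is_nash(r, c)]
print(equilibria)  # [(1, 1)]: mutual defection is the unique pure-strategy equilibrium
```

This exhaustive check is only feasible for tiny games; the computational intractability mentioned above is precisely why finding or approximating equilibria in richer settings is hard.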
Bringing together the AI community
We are proud to sponsor ICLR and support initiatives including Queer in AI and Women in Machine Learning. These partnerships not only promote research collaboration, but also foster a vibrant, diverse community in AI and machine learning.
If you're at ICLR, be sure to visit our booth and our Google Research colleagues next door. Discover our pioneering research, meet our teams hosting workshops, and engage with our experts presenting throughout the conference. We look forward to connecting with you!
3 May 2024