
Inside the new open-source AI that helps anyone track a changing planet

Welcome to Eye on AI, with AI correspondent Sharon Goldman filling in for Jeremy Kahn, who is traveling. In this edition… A new open-source AI platform helps nonprofits and public agencies track a changing planet… Getty Images narrowly wins, but mostly loses, its landmark UK lawsuit against image generator Stability AI… Anthropic forecasts $70 billion in revenue… China offers tech giants cheap energy to power domestic AI chips… Amazon employees push back on the company's AI expansion.

I’m excited to share today’s AI for Good story in Eye on AI: Imagine if conservation groups, scientists, and local governments could easily use AI to address challenges like deforestation, crop failure, or wildfire risk, without any AI expertise at all.

Until now, that has been out of reach, requiring vast, hard-to-access datasets, large budgets, and specialized AI knowledge that most nonprofits and public agencies lack. Platforms like Google Earth AI, released earlier this year, and other proprietary systems have shown what’s possible when satellite data is combined with AI, but these are closed systems that require cloud infrastructure and developer expertise.

That’s changing with OlmoEarth, a new open-source, no-code platform that runs powerful AI models trained on millions of Earth observations – from satellites, radar, and environmental sensors, including open data from NASA, NOAA, and the European Space Agency – to analyze and predict planetary change in near real time. It was developed by Ai2, the Allen Institute for Artificial Intelligence, a Seattle-based nonprofit research lab founded in 2014 by the late Microsoft co-founder Paul Allen.

The first partners are already putting OlmoEarth into action: in Kenya, researchers are mapping crops to help farmers and officials boost food security. In the Amazon, environmentalists are monitoring deforestation in near real time. In mangrove mapping, early tests showed 97% accuracy while cutting processing time in half, helping governments act faster to protect fragile coastlines.

I spoke with Patrick Beukema, who heads the Ai2 team that built OlmoEarth, a project that started earlier this year. Beukema said the goal goes beyond simply releasing a powerful model: many organizations struggle to connect raw satellite and sensor data to usable AI systems, so Ai2 built OlmoEarth as a complete, end-to-end platform.

“Organizations are finding it very difficult to build pipelines from all these satellites and sensors; even the most basic things are very difficult – a model might need to connect to 40 different channels from three different satellites,” he explained. “We’re just trying to democratize access for these organizations that are working on these really important problems and super important missions — and we believe that technology should be publicly available and easy to use.”

One concrete example Beukema gave me involved wildfire risk assessment. A key variable in assessing wildfire risk is how wet a forest is, because that determines how flammable it is. “Right now, what people do is go out into the forest and collect sticks or logs and weigh them before and after they dry, to get one measure of how moist the site is,” he said. “Park rangers do this work, but it is very expensive and arduous.”

Using OlmoEarth, AI can now estimate forest moisture from space: the team trained the model using years of specialized field data from forest and wildfire managers, linking those ground-based measurements to satellite observations from dozens of channels, including radar, infrared, and optical imagery. Over time, the model learned to predict how wet or dry an area is just by analyzing this combination of signals.

Once trained, it can draw a continuous map of moisture levels across entire regions, updating it as new satellite data arrives, at a cost millions of times lower than traditional methods. The result: near-real-time wildfire risk maps that can help planners and rangers act faster.
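To make the idea concrete, here is a minimal sketch of the kind of pipeline described above: fit a model that maps multi-channel satellite observations to ground-measured moisture, then apply it to new pixels where no field visit happened. This is not OlmoEarth's actual code; the channel counts, weights, and data are invented stand-ins, and a simple least-squares fit stands in for the real model.

```python
# Minimal sketch (not OlmoEarth's implementation): learn a mapping from
# multi-channel satellite observations to ground-measured fuel moisture.
# All dimensions and values below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each field site is observed through 5 satellite channels
# (e.g. radar backscatter, infrared, optical bands).
n_sites, n_channels = 200, 5
X = rng.normal(size=(n_sites, n_channels))

# Synthetic "ground truth": moisture as a linear mix of channels plus noise,
# standing in for the stick-weighing field measurements described above.
true_w = np.array([0.8, -0.3, 0.5, 0.1, -0.6])
y = X @ true_w + rng.normal(scale=0.05, size=n_sites)

# Fit by ordinary least squares; the trained weights can then be applied
# to every pixel of new satellite imagery to map moisture across a region.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

new_pixels = rng.normal(size=(3, n_channels))  # new observations, no field visit
moisture_map = new_pixels @ w                  # one moisture estimate per pixel
```

In the real system the "fit" step is a large pretrained Earth-observation model rather than a linear regression, but the economics are the same: the expensive ground measurements are needed only once for training, after which every incoming satellite pass yields a fresh moisture map.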

“We hope this will help people on the front lines do this important work,” Beukema said. “This is our goal.”

With that said, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharonggoldman

If you want to learn more about how AI can help your company succeed and hear from industry leaders about where this technology is headed, I hope you’ll consider joining Jeremy and me at Fortune Brainstorm AI San Francisco on December 8-9. Speakers confirmed to appear so far include Google Cloud CEO Thomas Kurian, Intuit CEO Sasan Goodarzi, Databricks CEO Ali Ghodsi, Glean CEO Arvind Jain, Amazon Devices & Services head Panos Panay, and many more. Register now.

Fortune on artificial intelligence

Palantir’s Quarterly Revenue Hits $1.2 Billion, But Stock Falls After Massive Rally – By Jessica Matthews

Amazon says its AI shopping assistant Rufus is so effective that it’s on pace to generate $10 billion in additional sales – By Dave Smith

Sam Altman Sometimes Wishes OpenAI Was Public So Haters Could Short-Sell Stock – ‘I’d Like to See Them Burn For That’ – By Marco Quiroz-Gutierrez

Artificial intelligence is enabling criminals to launch ‘tailored attacks at scale’ – but it can also help companies fortify their defences, say tech industry leaders – By Angelica Ang

Artificial intelligence in the news

Getty Images mostly loses landmark UK lawsuit against image generator Stability AI. Reuters reported today that a London court ruled that Getty narrowly succeeded, but mostly lost, in its case against Stability AI, finding that Stable Diffusion infringed Getty’s trademarks by reproducing its watermark in AI-generated images. But the judge rejected Getty’s broader copyright claims, saying Stable Diffusion “does not store or reproduce any copyrighted works,” a technical distinction that lawyers said reveals gaps in copyright protection in the UK. The mixed ruling leaves unresolved the central question of whether training AI models on copyrighted data constitutes infringement, an outcome that both companies claimed as a partial victory. Getty said it plans to use the ruling to bolster its parallel lawsuit in the United States, while calling on governments to strengthen transparency and intellectual property rules for artificial intelligence.

Anthropic expects revenues of $70 billion and cash flows of $17 billion in 2028. Anthropic, the maker of the Claude chatbot, is forecasting explosive growth, anticipating revenues of $70 billion by 2028, up from about $5 billion this year, according to The Information. The company expects most of this growth to come from companies using its AI models through the API, and expects to double OpenAI’s comparable revenue next year. Unlike OpenAI, the maker of ChatGPT, which is burning billions on computing costs, Anthropic expects to be cash flow positive by 2027 and generate up to $17 billion in cash the following year. These numbers could help it target a valuation between $300 billion and $400 billion in its next funding round, positioning the four-year-old startup as a financially viable challenger to OpenAI’s dominance.

China is offering tech giants cheap energy to boost domestic AI chips. China is increasing financial support for its largest data centers, cutting electricity bills by up to 50% for facilities running domestic AI chips, in a bid to reduce dependence on Nvidia and boost the local semiconductor industry, according to the Financial Times. Local governments in provinces such as Gansu, Guizhou, and Inner Mongolia are offering new incentives after tech giants including ByteDance, Alibaba, and Tencent complained that Chinese chips from Huawei and Cambricon are less energy efficient and more expensive to operate. The move underscores Beijing’s push to make its AI infrastructure self-sufficient even as the country’s data centers strain energy supplies; local chips still require 30% to 50% more electricity than Nvidia’s.

Amazon employees oppose the company’s AI expansion. Last week, a group of Amazon employees published an open letter warning that the company’s push to move “too quickly” on artificial intelligence comes at the expense of climate goals, worker protections, and democratic accountability. The signatories, who say they help build and deploy Amazon’s AI systems, argue that the company’s planned $150 billion data center expansion will increase carbon emissions and water use, especially in drought-prone regions, even as it continues to provide cloud tools to oil and gas companies. They also criticize Amazon’s growing ties to government surveillance and military contracts, and claim that internal AI initiatives are accelerating automation without adequately supporting workers. The group calls for three commitments: no AI powered by dirty energy, no AI built without employee input, and no AI for violence or mass surveillance.

Eye on artificial intelligence research

What if large AI models could read each other’s minds instead of text chatting? That’s the idea behind a new paper by researchers at CMU, Meta AI, and MBZUAI titled “Thought Communication in Multiagent Collaboration.” The team proposes a system called ThoughtComm, which would allow AI agents to share their latent “thoughts” — the hidden representations behind their reasoning — rather than simply exchanging words or tokens. To do this, they use a sparse autoencoder, a type of neural network that compresses complex information into a smaller set of the most important features, helping to reveal the “ideas” that really matter. By knowing which thoughts agents exchange and which thoughts they keep private, the framework lets them coordinate and reason together more efficiently, suggesting a future in which AI systems collaborate not by talking, but by “thinking” in sync.
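The sparse-autoencoder step described above can be sketched in a few lines. This is a toy illustration, not the paper's code: the dimensions, weights, and top-k sparsity rule are arbitrary, untrained stand-ins meant only to show the mechanism of compressing a hidden "thought" into a handful of shareable features.

```python
# Toy illustration (not ThoughtComm's implementation) of sharing a hidden
# "thought" vector as a sparse set of features rather than as text tokens.
import numpy as np

rng = np.random.default_rng(1)
d_hidden, d_features, k = 64, 256, 8   # widths and sparsity level are made up

W_enc = rng.normal(scale=0.1, size=(d_features, d_hidden))  # untrained encoder
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_features))  # untrained decoder

def encode_topk(thought, k):
    """Project the hidden state into a larger feature space and keep only
    the k strongest activations: the few 'ideas that really matter'."""
    acts = np.maximum(W_enc @ thought, 0.0)   # ReLU feature activations
    sparse = np.zeros_like(acts)
    top = np.argsort(acts)[-k:]               # indices of the top-k features
    sparse[top] = acts[top]
    return sparse

agent_a_thought = rng.normal(size=d_hidden)   # a hidden state, not tokens
message = encode_topk(agent_a_thought, k)     # what actually gets shared
reconstructed = W_dec @ message               # agent B's view of the thought
```

In a trained system the encoder and decoder would be learned so that `reconstructed` is a faithful copy of the original thought; the point of the sketch is just that the channel carries at most k active features instead of a full token stream.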

AI calendar

November 10-13: Web Summit, Lisbon.

November 26-27: World Conference on Artificial Intelligence, London.

December 2-7: NeurIPS, San Diego

December 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

Brain food

How AI companies quietly trained on paywalled journalism

I wanted to highlight a new Atlantic investigation by writer Alex Reisner, which reveals how Common Crawl, a nonprofit that collects billions of web pages to create a free archive of the internet, may have become a backdoor for training AI on paywalled content. Reisner reported that despite Common Crawl’s public claim that it avoids content behind paywalls, its datasets include entire articles from major news outlets, and those articles have ended up in the training data of thousands of AI models.

Common Crawl maintains that it is doing nothing wrong. When publishers pressed to have their content removed, Common Crawl director Rich Skrenta brushed off the complaints, saying, “You don’t have to put your content online if you don’t want it to be online.” Skrenta told Reisner that he views the archive as a kind of digital time capsule, “a crystal cube on the moon,” preserving a record of civilizational knowledge. Whatever one makes of that, the episode highlights the growing tension between AI’s thirst for data and the journalism industry’s fight over copyright.

Fortune Brainstorm AI returns to San Francisco December 8-9 to convene the smartest people we know — technologists, entrepreneurs, Fortune Global 500 executives, investors, policymakers, and the brightest minds — to explore and interrogate the most pressing questions about AI at another pivotal moment. Register here.


2025-11-04 18:11:00
