Nowadays, Machine Learning (ML) can assist in identifying climate models based on daily output. How? Firstly, it is important to understand ML as the process of developing algorithms that enable computers to learn and make predictions or decisions based on data, without being explicitly programmed for each scenario. In that context, ML techniques, such as Convolutional Neural Networks (or CNNs), are increasingly utilized in Climate Science to evaluate climate models; identify model characteristics; and assess model performance in comparison to observational data.

The study “Identifying climate models based on their daily output using machine learning”, by researchers Lukas Brunner and Sebastian Sippel, sheds light on how ML classifiers – such as the CNNs mentioned before – can be trained to robustly identify climate models from daily temperature output.

By analyzing individual daily temperature maps, ML methods can separate models from observations and from each other, even in the presence of considerable noise from internal variability on weather timescales (Brunner & Sippel, 2023). Internal variability refers to the fluctuations in the climate system that arise from various processes within the Earth’s atmosphere, oceans, and land surfaces, which we might refer to as weather. Hence, the ML approach allows for the identification of models and observations based on short timescales, providing new ways to evaluate and interpret model differences.

Separating models from observations, and from each other

The study used daily temperature maps from 43 models of the sixth phase of the Coupled Model Intercomparison Project (CMIP6) and four different observational datasets. Additionally, ICON-Sapphire, one of the Earth system models developed by nextGEMS, was utilized as an experimental km-scale model. On that basis, two different statistical and ML methods were used to separate models from observations, and from each other.

Firstly, through logistic regression, the researchers were able to distinguish between models and observations, in part because the learned coefficients can be inspected directly (Brunner & Sippel, 2023). The coefficients learned by the logistic regression classifier reveal that many well-known climatological model biases already emerge as important for identifying daily maps. Other regions, like the Arctic, are in contrast not relevant for the daily classification at all.
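As a toy illustration of this idea, the sketch below trains a plain logistic-regression classifier to separate synthetic “model” maps from synthetic “observations”, where the model has a fabricated regional warm bias. All dimensions, sample counts and the bias itself are invented for illustration and are not taken from the paper; the point is only that the learned coefficients concentrate on the biased region, mirroring how the authors inspect which areas drive the classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: "maps" are flattened 8x16 temperature fields.
# The "model" has a small warm bias in one region relative to "observations".
n, npix = 200, 8 * 16
obs = rng.normal(0.0, 1.0, size=(n, npix))
model = rng.normal(0.0, 1.0, size=(n, npix))
model[:, :32] += 0.8          # invented regional climatological bias

X = np.vstack([obs, model])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = obs, 1 = model

# Plain logistic regression fitted by gradient descent
w, b = np.zeros(npix), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.1 * np.mean(p - y)                # gradient step on intercept

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = np.mean(pred == y)
# The coefficients w are largest over the biased pixels, i.e. the
# classifier "points at" the region where the model differs from obs.
```

In the same spirit, mapping the fitted coefficients back onto the 8×16 grid would show which regions the classifier relies on, which is the interpretability advantage the authors exploit.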

It is important to mention that logistic regression is a linear method and, after bias correction with the mean seasonal cycle, it is no longer skilful. To complement logistic regression, a second method was therefore used, namely a CNN, especially because its larger number of trainable parameters can also learn more complex, non-linear relationships within the data (Brunner & Sippel, 2023).

Main findings and future directions

Some of the main results of this research work relate to the high accuracy achieved by CNN classifiers in identifying models and observational datasets, even when faced with complex classification tasks. An overall accuracy of 83% was achieved in identifying 43 models and four observational datasets (Brunner & Sippel, 2023). Moreover, CNNs could pick up unique patterns specific to each dataset, enabling successful separation from other datasets. Generally, it is important to take away that dependencies between models – and observations – emerge even on daily time scales.

On another note, Brunner and Sippel (2023) clarified that misclassifications often occurred within model families or were related to common “ancestors”, indicating shared features among related datasets. However, the study revealed the ability of the CNN to correctly identify a significant portion of test samples, even those from distant time periods and under different climate scenarios.

The authors plan a follow-up study to analyze the origin of the classification skill in more detail, using explainable ML techniques and domain-specific approaches from Climate Sciences. In other words, the follow-up study will investigate the coupling of atmosphere and ocean, the surface energy balance in models, and targeted masking of regions to understand model performance dependencies.

If you are interested in working with this method, feel free to do so! The researchers have made the code used in the paper freely available on GitHub.


Brunner, L., & Sippel, S. (2023). Identifying climate models based on their daily output using machine learning. Environmental Data Science, 2.

Aerosols, defined as tiny particles suspended in the atmosphere, play a pivotal role in the Earth’s climate system. Despite their minuscule size, often smaller than a human hair, these particles exert a significant influence on the planet’s climate. They originate from natural sources, including sea spray, dust storms, and wildfires, as well as human activities, such as industrial emissions and transportation.

How do aerosols impact our climate?

Aerosols affect the climate through both direct and indirect mechanisms. They can absorb or scatter solar radiation, and act as nuclei for cloud droplet formation. These dynamics result in:

Cooling the Earth’s surface by reflecting solar radiation.

Warming the atmosphere by absorbing heat.

Influencing cloud formation, brightness, and lifetime.

While aerosols generally contribute to cooling the Earth, quantifying this effect is a complex scientific challenge due to uncertainties, particularly related to indirect effects.

Challenges in Aerosol Modeling

Accurately incorporating aerosol interactions within climate simulations has been a persistent difficulty. Coarse-scale climate simulations with interactive aerosol models exist, but they are computationally very expensive, and running them within km-scale climate simulations is not feasible. Therefore, aerosols in today’s km-scale climate simulations are not interactive but prescribed based on historical data. This approach fails to account for real-time atmospheric processes, such as the removal of aerosols by winds and precipitation. In other words, this approach overlooks critical dynamics.
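Why interactivity matters can be sketched with a toy box model, a deliberate simplification that is in no way the nextGEMS aerosol scheme: with constant emission and first-order removal, the equilibrium aerosol burden depends directly on the removal rate, so a change in precipitation changes the burden, which is a feedback that prescribed aerosol fields cannot capture. All numbers below are illustrative.

```python
import numpy as np

def aerosol_burden(emission, removal_rate, dt, nsteps):
    """Toy box model of an aerosol burden B with constant emission E
    and first-order removal (e.g. washout by precipitation):
        dB/dt = E - k * B
    integrated with a simple forward-Euler step.  Illustrative only;
    not the nextGEMS aerosol module."""
    burden = 0.0
    history = []
    for _ in range(nsteps):
        burden += dt * (emission - removal_rate * burden)
        history.append(burden)
    return np.array(history)

# The steady-state burden is E / k: doubling the removal rate (more
# rain) halves the equilibrium burden.
wet = aerosol_burden(emission=1.0, removal_rate=0.2, dt=0.1, nsteps=2000)
dry = aerosol_burden(emission=1.0, removal_rate=0.1, dt=0.1, nsteps=2000)
```

A prescribed-aerosol simulation would hold the burden fixed regardless of the simulated rainfall, which is exactly the missing dynamic noted above.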

Interactive aerosols in High-Resolution Climate Models

New high-resolution climate models, like those developed in the nextGEMS project, are addressing the limitations mentioned above. These models resolve essential processes in the Earth’s system down to a few kilometers, enabling detailed simulations of phenomena such as thunderstorms and tropical cyclones.

nextGEMS aims to integrate aerosols interactively within these advanced climate models. The process begins with a complex aerosol module, which operates at coarse resolutions. Scientists then simplify the micro-physical aerosol processes, before coupling the simplified version with the new climate model.

This research has produced a streamlined and efficient model, making the aerosol simulations more accurate and usable. The model’s design facilitates understanding and adaptation for other researchers, enabling detailed simulations of long-term processes on a global scale. It now includes intricate processes, such as the transport of aerosols by winds, cloud formation, precipitation, and the scattering or absorption of solar radiation.

Practical Implications and Future Research

The nextGEMS model provides a groundbreaking tool for examining specific events or broader phenomena related to aerosol movement through the Earth system. It improves our understanding of aerosol-cloud-radiation interactions and helps quantify aerosols’ cooling effects more precisely. This advancement is crucial for better predicting both short-term weather events and long-term climate trends.

Practical applications of this model include investigating how future wildfires across regions like the Congo and the Amazon could affect local cloud formation and precipitation, or assessing the potential damage tropical cyclones may cause to coastal areas of Japan and Florida. Additionally, simulation with interactive aerosols could help us to better understand the radiative forcing and therefore better estimate by how much aerosols actually cool the Earth.

Future research will focus on tracking phenomena over time with high-resolution data and conducting regional studies in areas with unique characteristics or significant events. Furthermore, it will enable long-term simulations to explore different emissions scenarios. Collaboration with other projects and scientists will continuously refine the model, fostering interdisciplinary research within and beyond the nextGEMS initiative.

By understanding aerosols and their interactions with our climate, we strengthen our ability to predict and mitigate the impacts of Climate Change, contributing to a sustainable future for our planet.

Visualisations created by: Latest Thinking GmbH

by Thorsten Mauritsen, MISU

Model performance in the renewable energy sector

By directly and more physically simulating the specific events (e.g., tropical cyclones, rainfall extremes, blockings) most associated with hazards, nextGEMS provides an improved basis for assessing risk globally. The importance of simulating fine scales for assessing hazards, but also for other applications, is well understood and motivates the patchwork of downscaling approaches known as the value chain. A Challenge Problem, co-defined with stakeholder groups, will thereby help guide the development of the SR-ESMs, and their associated workflows, in ways that better expose their information content to application communities. This will allow us to “short-circuit” the value chain and develop a new model of Integrated Assessment. Activities are planned in the form of pilot projects on near-surface (wind/solar) renewable energy, marine productivity, and changing weather- or climate-related hazards.

For the challenge problem in the renewable energy sector, we addressed specific challenges:

Challenge 1: What is the minimal amount of information needed to optimize the design of a regional renewable energy system, and how can we extract this information from global storm-resolving models?

Challenge 2: How does the potential renewable energy output landscape change with a changing climate?

During the Cycle 2 Hackathon, our stipend holders were provided with meteogram station data from two different models, along with temporally sparse snapshots of the full three-dimensional model output. The main goal was to either find a condensation of the high-frequency output or suggest other output variables to support the design of renewable energy systems.

Support was given by technical consultants (members of the modelling groups) to help participants understand the model output and ways of accessing it, so that the focus could lie on the science rather than on solving technical problems.

Additionally, the teams were supported by energy consultants, who helped them understand, for instance, design parameters (hub height, diffuse versus direct efficiency, etc.). In particular, we had two sessions with Iberdrola and with Vestas Wind Systems. These sessions were particularly useful for the participants to understand the problems that the energy industry is facing, and provided an opportunity to discuss their results with experts. There was also a discussion of how participants could develop a career in the renewable energy sector.

Expert interacting with Hackathon participant.

Findings during the Cycle 2 Hackathon

The approach taken in the Hackathon was to use the ability of the nextGEMS models to resolve the mesoscales and represent the motions and fields relevant for renewable energy production. We primarily used dedicated high-frequency output for a series of stations, but also the complete mapped output.

For example, the wind at rotor height of a typical wind power plant, around 100 m above ground, is used directly to calculate the power output of a typical turbine. Figure 1 shows how the output depends on the wind speed. The output starts at a minimum wind speed and increases with the third power of the wind speed up to a maximum value, which is limited by the generator size. At high wind speeds the turbines automatically turn off to limit wear and for safety reasons.
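The power curve described above can be sketched as follows. The cut-in (3 m/s), rated (12 m/s, 3 MW) and cut-out (25 m/s) values are illustrative assumptions, not taken from any specific turbine or from the Hackathon material.

```python
import numpy as np

def turbine_power(v, cut_in=3.0, rated_speed=12.0, rated_power=3.0,
                  cut_out=25.0):
    """Idealised power curve (MW) for an illustrative 3 MW turbine.

    Below cut-in there is no output; between cut-in and rated speed
    the output grows with the cube of the wind speed; above rated
    speed the generator caps the output; above cut-out the turbine
    shuts down for safety.  All parameter values are illustrative.
    """
    v = np.asarray(v, dtype=float)
    p = rated_power * (v / rated_speed) ** 3   # cubic region
    p = np.where(v < cut_in, 0.0, p)           # below cut-in: off
    p = np.where(v >= rated_speed, rated_power, p)  # generator limit
    p = np.where(v >= cut_out, 0.0, p)         # storm shutdown
    return p
```

Applied to a wind-speed time series, e.g. `turbine_power([2.0, 6.0, 12.0, 26.0])`, this reproduces the four regimes visible in Figure 1.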

Because the power output curve is highly non-linear in the wind speed, we were interested in seeing how strongly the estimated power is biased when using lower temporal resolution. Figure 2 shows the output for four different stations. First, we note that the estimated output decreases monotonically with lower resolution, suggesting that all stations, except perhaps the EURECA ocean site, mostly see winds in the cubic part of the power curve. Here, a different combination of turbine, generator and tower height could be used to extract more energy at these sites. We also see that the bias is very low when degrading from 3-minute to 1-hour means, and even monthly mean wind speeds provide a reasonable estimate. It is important to note that this is the average of the instantaneous wind speed; if the wind components were averaged instead, the degradation would be much larger.
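The effect of the averaging order can be checked with synthetic wind data (all values below are invented for illustration, not station data from the Hackathon). Applying the cubic part of the power curve to time-averaged wind speeds underestimates the true mean power, by Jensen's inequality, and averaging the wind components first loses both gustiness and direction variability, so it degrades the estimate much more.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "3-minute" wind components (m/s): a mean flow plus gusts.
n = 10 * 24 * 20                       # ten days of 3-minute samples
u = 6.0 + 2.0 * rng.normal(size=n)
v = 2.0 * rng.normal(size=n)
speed = np.hypot(u, v)                 # instantaneous wind speed

# In the cubic range of the power curve, power is proportional to
# speed**3, so speed**3 serves as a power proxy here.
p_true = np.mean(speed**3)             # average instantaneous power
p_mean_speed = np.mean(speed)**3       # curve applied to mean speed
p_mean_comp = np.hypot(u.mean(), v.mean())**3  # curve on mean vector

# p_mean_comp < p_mean_speed < p_true: averaging the components is
# much worse than averaging the instantaneous speed.
```

This is the distinction made above: monthly means of the instantaneous wind speed remain a reasonable estimate, while monthly means of the wind components would not.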

Figure 1. Dependency of power output to wind speed.
Figure 2. Power output for four different stations.
Figure 3. Bias of model performance compared to observations. Y-axis shows the deviation from the observation.

Model Performance

We checked how the models perform in terms of representing the observed wind speed at the flat-surface Cabauw site in the Netherlands. It turns out that both the IFS and ICON grossly underestimate the occurrence of high wind speeds.

This bias in both the IFS and the standard ICON-Sapphire setup is related to the parameterisation of turbulent drag. This can be seen from the 10 km resolution test simulation conducted using the TTE scheme, which exhibits a more realistic distribution at Cabauw (see figure 3).

Solar Power generation

Figure 4 shows a comparison of the monthly mean modelled downwelling shortwave radiation with observations at the Cabauw site. Both IFS and ICON do a good job in reproducing the observations. The IFS model was also analysed at three different resolutions, but there is no obvious difference between them.

To convert the downwelling shortwave radiation to power output one must take into account various losses, here assumed to amount to 12 percent, as well as the temperature degradation, here assumed to be -0.5 %/K (see Figure 5).
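The conversion just described can be written out directly. The 12 % system loss and the -0.5 %/K temperature coefficient follow the text; the function name, the 20 % nominal panel efficiency and the 25 °C reference temperature are illustrative assumptions of this sketch.

```python
def pv_power(irradiance, cell_temp, area=1.0, efficiency=0.20,
             system_loss=0.12, temp_coeff=-0.005, t_ref=25.0):
    """Convert downwelling shortwave radiation (W/m^2) into PV
    output (W) for a panel of the given area.

    Applies a nominal panel efficiency, a fixed 12 % system loss and
    a linear temperature degradation of -0.5 %/K relative to a 25 C
    reference, matching the assumptions quoted in the text.  The
    20 % efficiency and 25 C reference are illustrative values.
    """
    temp_factor = 1.0 + temp_coeff * (cell_temp - t_ref)
    return irradiance * area * efficiency * (1.0 - system_loss) * temp_factor
```

For example, a hot panel at 45 °C under 1000 W/m² delivers 10 % less than the same panel at the 25 °C reference, which is the effect that erodes the sub-tropical advantage visible in Figure 5.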

Figure 4. Model performance compared to observations throughout the year for Cabauw, NL.

Figure 5. The left panel shows a map from ICON of the solar energy reaching the surface in kWh per year. The right panel shows the maximum power that can be extracted with such panels. We see that although there is a lot of radiation available in the sub-tropics, e.g. the Sahara, much of this advantage is counteracted by the warm temperatures.

Challenges for the renewable energy industry

What became clear through the Hackathon was that the wind industry is already working with quite advanced modelling tools for site planning and short-term forecasting of wind power, along with on-site observations. The situation is slightly less advanced for solar power, partly because the modelling tools are not nearly of the same quality, owing to the difficulty of modelling clouds. New demands on the industry to also assess the impact of the changing climate on production, safety and durability/maintenance needs are a challenge that the industry is not well equipped to meet, and where more research is urgently needed. The industry is also looking forward to leveraging DestinationEarth digital twin simulations in its workflows.

