Meredith Mejia • August 7th, 2023.
During the seventeenth century, two schools of thought characterized the philosophy of science: deductive reasoning (promoted by René Descartes) and inductive reasoning (promoted by Sir Francis Bacon). In the context of machine learning forecasting, deductive reasoning could be applied by utilizing known climate data and established scientific principles to make predictions about future seasonal patterns, while inductive reasoning involves analyzing historical climate data and looking for trends or patterns that are indicative of specific seasonal outcomes.
According to Sal Khan, “deductive reasoning is taking some set of facts and using that to come up with some other set of facts. Inductive reasoning is looking for a trend or for a pattern and then generalizing.” Today, the distinction between these two scientific regimes manifests itself in a climate science problem: seasonal forecasting.
Seasonal forecasting is the task of predicting a climate variable, such as temperature, two weeks to one year ahead of time. Currently, the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Oceanic and Atmospheric Administration (NOAA) use dynamical forecasting models, an example of deductive thinking. These dynamical models start from a set of facts: the observed state of the atmosphere and ocean, and the physical equations that govern how they evolve. They use this physical information to arrive at another set of facts: a forecast of the future climate.
This was largely considered the best forecasting method available until recent years, when (a) climate change added a new variable to forecasting and (b) machine learning (ML) forecasting was introduced. Now, attitudes are shifting toward the latter.
Meteorologists use forecasting algorithms, mathematical systems that predict future conditions from past time-series data, to create weather forecasts. Right now, these reports can predict a 7-day forecast with 80% accuracy and a 5-day forecast with 90% accuracy.
At ClimateAi, we use deep learning, a form of inductive reasoning, for seasonal forecasting. Deep learning models are known as neural networks, and under this paradigm, a neural network is shown many examples of the predictor (the initial climate state) and the predictand (the future climate state). From these examples, the model uses an optimization algorithm to learn a pattern between the predictor and predictand. Crucially, no physical equations are directly encoded in a neural network, which is why it’s sometimes referred to as a black box.
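The predictor-to-predictand idea can be sketched in a few lines. This is a minimal, illustrative example on synthetic data (not ClimateAi’s model): it learns a mapping from a “current anomaly” to a “future anomaly” purely from examples, with no physics encoded.

```python
import numpy as np

# A toy inductive learner: fit a predictor -> predictand mapping from
# examples alone, with no physical equations encoded. All data is synthetic.
rng = np.random.default_rng(0)

x = rng.normal(size=200)                        # predictor: current climate anomaly
y = 0.8 * x + rng.normal(scale=0.1, size=200)   # predictand: the anomaly months later
# The true relationship (0.8x plus noise) is never told to the model.

A = np.column_stack([x, np.ones_like(x)])       # design matrix for y ~ w*x + b
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares "optimization"
```

The fitted weight recovers the underlying pattern from data alone, which is the inductive step a neural network performs at much larger scale.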
In layman’s terms, inductive reasoning aims to develop a theory while deductive reasoning aims to draw conclusions from an existing one. Both have their virtues and drawbacks, but we find that inductive reasoning is compatible with emerging ways of thinking, especially in the realm of machine learning forecasting.
Why do we favor inductive reasoning over deductive forecasting algorithms? The climate is changing, and in some cases historical data carries less weight as an algorithmic input. Deductive reasoning bases conclusions on accepted facts, but when it comes to weather, the future may not look like the past.
Here is a basic step-by-step explanation of how artificial intelligence forecasting works:
1. Data Collection
This includes satellite imagery, radar data, weather station observations, and other relevant sources. An extensive and diverse dataset must be available for analysis and training weather AI models.
2. Data Preprocessing
To ensure accuracy and reliability, data is cleaned up and prepared for analysis. This involves removing any outliers, handling missing data, and normalizing the data to make it consistent and suitable for analysis.
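As a rough sketch of these steps (generic, assumed operations rather than ClimateAi’s exact pipeline), here is how missing data, outliers, and normalization might be handled for a temperature series:

```python
import numpy as np

# A generic preprocessing sketch: fill missing values, clip outliers,
# and normalize a temperature series (values below are made up).
def preprocess(series):
    s = np.asarray(series, dtype=float)
    s = np.where(np.isnan(s), np.nanmean(s), s)      # handle missing data
    mu, sigma = s.mean(), s.std()
    s = np.clip(s, mu - 3 * sigma, mu + 3 * sigma)   # clip 3-sigma outliers
    return (s - s.mean()) / s.std()                  # normalize: zero mean, unit variance

# 95.0 is a sensor glitch; np.nan is a missing reading.
clean = preprocess([21.0, 22.5, np.nan, 20.8, 21.7, 20.1,
                    22.0, 21.3, 19.9, 95.0, 21.5, 20.6])
```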
3. Feature Extraction
To help the AI identify patterns and relationships, variables like temperature, humidity, wind speed and direction, atmospheric pressure, cloud cover, and precipitation are extracted.
4. Model Training
In this step, the AI forecasting models are trained using the preprocessed data and extracted features. We use neural networks to have models try to find correlations that can predict future weather conditions.
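To illustrate the training step, here is a toy single-hidden-layer network fit by gradient descent on synthetic data. This is a sketch of the mechanics, not ClimateAi’s architecture:

```python
import numpy as np

# A toy neural network trained by gradient descent on a nonlinear pattern.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(256, 1))   # "preprocessed features"
y = np.sin(X)                           # nonlinear pattern the model must discover

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)            # forward pass: hidden layer
    pred = h @ W2 + b2                  # forward pass: output
    err = pred - y
    # backward pass: gradients of 0.5 * mean squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2      # gradient descent update
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

After training, the network’s error is well below that of simply predicting the mean, showing it has learned the correlation from data.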
5. Model Evaluation
Models are then tested on a separate dataset, and the most accurate one is chosen. Common evaluation metrics include accuracy, precision, recall, and mean squared error.
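Model selection on a held-out set can be sketched as follows; the forecasts and observations here are hypothetical numbers for illustration:

```python
import numpy as np

# Score candidate models on held-out data with mean squared error,
# then keep the most accurate one.
def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

y_test = [20.1, 21.4, 19.8, 22.0]              # held-out observations (deg C)
candidates = {
    "model_a": [20.0, 21.0, 20.0, 22.5],       # hypothetical model outputs
    "model_b": [22.0, 19.0, 21.5, 20.0],
}
scores = {name: mse(y_test, fc) for name, fc in candidates.items()}
best = min(scores, key=scores.get)             # lowest error wins
```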
6. Prediction
Once the models are trained and evaluated, they can be used to predict future weather conditions. By inputting current weather data, the AI models generate forecasts for various parameters.
7. Iterative Learning
Models are periodically retrained with new data to continuously improve forecasting accuracy. This allows them to adapt to changing weather patterns and improve their prediction capabilities over time.
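A minimal illustration of why retraining matters under a changing climate: a trivial “model” (a mean) refit on recent data tracks a warming trend better than the same model trained once on old data. All numbers below are synthetic.

```python
import numpy as np

# Retraining sketch: compare a model fit once on early data against one
# periodically refit on the most recent window of observations.
series = np.linspace(20.0, 24.0, 40)     # steadily warming temperatures

static_pred = series[:10].mean()         # trained once, on early data only
window_pred = series[-11:-1].mean()      # "retrained" on the latest window

latest = series[-1]
static_err = abs(latest - static_pred)
window_err = abs(latest - window_pred)
```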
8. Output and Visualization
Forecasts are then converted into graphical representations, maps, and charts that make the information digestible and informative.
That’s a basic explanation of how AI makes forecasting smarter. Industries like finance, sales, manufacturing, agriculture, and healthcare use various machine learning algorithms to predict future events and optimize operations.
Random Forests: Combine multiple decision trees built independently on different subsets of training data. The final prediction is determined by aggregating the predictions of each tree.
Support Vector Machines (SVM): Works by finding an optimal hyperplane that separates data points from different classes with the largest margin. Particularly effective with smaller datasets and can handle both linear and non-linear relationships.
Artificial Neural Networks (ANN): Our choice for artificial intelligence forecasting, ANNs are inspired by the structure and function of the human brain. Each neuron applies a mathematical transformation to its input and passes the result to the next layer.
Gradient Boosting Machines (GBM): Builds models iteratively, with each subsequent model aiming to correct the errors of the previous ones. Models are combined by weighted averaging and are very popular in AI weather prediction.
Long Short-Term Memory (LSTM): This method of AI for forecasting is a type of recurrent neural network designed to model sequential data and capture long-term dependencies. Can retain information over long periods and selectively update or forget information based on context.
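To make the boosting idea above concrete, here is a minimal sketch of gradient boosting for regression, using a constant-per-split “stump” as the weak learner on made-up numbers. Each stage fits the residual errors left by the previous stages:

```python
import numpy as np

# Minimal gradient boosting: each stage fits a one-split "stump" to the
# residuals of the ensemble built so far, then is added with a learning rate.
def fit_boosted(X, y, n_stages=50, lr=0.1):
    pred = np.zeros_like(y)
    stages = []
    for _ in range(n_stages):
        resid = y - pred                      # errors of the current ensemble
        thr = np.median(X)                    # naive split point
        left = resid[X <= thr].mean()         # stump prediction on each side
        right = resid[X > thr].mean()
        stages.append((thr, left, right))
        pred = pred + lr * np.where(X <= thr, left, right)
    return stages, pred

# Toy data: a step function the stumps can recover.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
stages, pred = fit_boosted(X, y)
```

After 50 stages the ensemble’s predictions converge onto the targets, which is the error-correcting behavior that makes GBMs popular.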
All of these models are extremely valuable, but at ClimateAi, we’ve chosen neural networks as our main AI weather prediction model for the following reasons:
Machine learning is reaching the level of sophistication where it can be employed successfully in just about any industry. By learning from and interacting with datasets, AI models can function without human oversight. Here’s why it stands out for AI weather forecasting.
Climate forecasting employs variables such as atmospheric pressure, wind speed, humidity, temperature, precipitation, and storm severity. Such complex variables require the intricate processing capabilities that AI can provide.
Beyond weather forecasting, machine learning allows our platform to offer complex solutions that apply to real-world business matters. Our supply chain platform, for example, applies forecasting findings to issues like inventory management and asset protection.
Inspired by the human brain, neural networks can adapt and learn based on new data, patterns, and reactions. The agricultural sector takes advantage of ClimateAi’s ability to adapt and learn to further its research and development undertakings. Growers use advanced forecasting machine learning to influence decisions about growing locations, seed types, harvest dates, and more.
Climate variables often have disproportionate effects on one another. Arguably the most persuasive quality of machine learning for forecasting is the ability to evaluate risks with non-linear or difficult-to-measure relationships.
Turn risks into a competitive advantage with ClimateAi’s climate risk platform. Businesses can uncover hidden patterns and correlations, enabling them to make more informed decisions in an uncertain climate environment.
Feature extraction enables the identification and extraction of relevant patterns from raw weather data. This permits more focused weather pattern analyses, increasing accuracy and efficiency. ClimateAi’s forecasting algorithms permit businesses to draw meaningful conclusions from their own datasets and improve asset diligence strategies that protect portfolios and investments.
Machine learning forecasting models can handle large datasets, taking raw data and identifying meaningful patterns and relationships. This makes them well equipped for real-time weather forecasting at a global level, with effectively no limit on the amount of information they can process.
What does this mean for businesses? When you scale your operations, AI can scale with you.
Climate variables come from diverse sources: radar data, weather balloons, satellite imagery, sensor networks, social media, and global monitoring feeds. Merging as many data sources as possible improves the precision of climate predictions; this is what ML forecasting algorithms bring to the table.
AI gathers data from manufacturing, agricultural, supply chain, and finance sectors to optimize things like demand planning and asset protection.
AI models aren’t limited by the datasets they were trained with. As more weather data becomes accessible, models undergo retraining, resulting in more accurate forecasts that change with the times (this quality is increasingly valuable with climate change’s looming presence).
Businesses can take advantage of machine learning for forecasting with ClimateLens-Adapt, a platform that makes climate data applicable to real-world enterprise scenarios.
Many stakeholders in climate-sensitive sectors hesitate to trust a “black-box” algorithm. Therefore, an important question is which school of thought offers more interpretable seasonal forecasts?
Dr. Zachary Lipton, a professor at Carnegie Mellon University, isolates two types of interpretability: “transparency (‘How does the model work?’) and post-hoc explanation (‘What else can the model tell me?’)” (Lipton, 2016).
From the lens of transparency, dynamical models are more interpretable. Each component of a dynamical model is solving physical equations governing the movement of the atmosphere and the ocean. In contrast, the exact purpose and behavior of each component of a neural network are unclear and often uninterpretable.
However, from the lens of post-hoc explanation, neural networks are promising and offer certain advantages over dynamical models. Neural networks can be used to generate “saliency maps,” which indicate the most important pixels in an image for the neural network’s prediction. In climate datasets, the “image” is a global grid of a specific variable, such as temperature. In climate science, saliency maps are uniquely informative because, in every image, each pixel refers to a specific location, given by latitude and longitude. This allows for a saliency map to be averaged across multiple years. Below is the average of 20 years of saliency maps for ClimateAi’s neural network that forecasts El Niño.
In the saliency map, the gray to white shading indicates how important a region is to the neural network prediction, and the purple to yellow shading indicates how closely tied a region is to El Niño (a metric called R2). From this saliency map, we can interpret the network’s behavior; we can conclude that the network bases its forecasts on activity in the Pacific Ocean, specifically in regions that have a high R2 with El Niño. This is in line with what we would expect, as El Niño is a phenomenon of warm and cold temperatures in the equatorial Pacific. These post-hoc visualizations are a unique benefit of neural networks. After dynamical models have been run, they offer no built-in way to generate a saliency map.
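A saliency map of this kind can be sketched generically: perturb each input pixel and measure how much the model’s prediction moves. The toy below uses a hypothetical linear model over a tiny lat-lon grid, not ClimateAi’s network:

```python
import numpy as np

# Toy saliency map: per-pixel sensitivity of a model's prediction,
# averaged over many input grids. The "model" is a hypothetical linear
# map that only weights a small box of a 4x8 lat-lon grid.
rng = np.random.default_rng(2)
weights = np.zeros((4, 8))
weights[1:3, 2:5] = 1.0                      # the only region the model uses

def predict(grid):
    return float((weights * grid).sum())     # stand-in for a trained model

def saliency(grid, eps=1e-4):
    base = predict(grid)
    sal = np.zeros_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            bumped = grid.copy()
            bumped[i, j] += eps              # perturb one "pixel" (location)
            sal[i, j] = abs(predict(bumped) - base) / eps
    return sal

# Average the per-sample maps, as with the 20-year average in the figure.
avg_map = np.mean([saliency(rng.normal(size=(4, 8))) for _ in range(20)], axis=0)
```

Because each pixel is a fixed latitude-longitude location, averaging the maps across samples is meaningful, and the averaged map highlights exactly the region the model relies on.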
Despite their differences, the interpretability of dynamical models and machine learning models has an important similarity. Neither method can be decomposed into specific El Niño simulators, and Dr. Lipton identifies decomposability as a form of transparency. In a dynamical model, El Niño emerges as a result of the overall interactions between the atmosphere and ocean components, but there is no specific module or code that directly simulates it (Guilyardi, 2017). (This property of El Niño is called emergence.) Likewise, in a neural network, there is no specific layer or filter that can be physically interpreted as the El Niño simulator; the El Niño forecast emerges from the interaction of the weights and layers as a whole.
Both neural networks and dynamical models have emergent properties which are not strictly interpretable or decomposable, so we must validate their underlying systems. ClimateAi builds trust in neural networks much as ECMWF builds trust in its dynamical models: we compare ClimateAi’s neural network forecasts, the ECMWF’s dynamical model forecasts (labeled in the graph below as “SEAS5”), and the true “Target” state of El Niño.
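That validation step can be sketched with standard skill metrics. The series below are invented stand-ins for the neural network forecast, SEAS5, and the Target index:

```python
import numpy as np

# Score two forecast systems against the observed target El Niño index
# using correlation and RMSE. All series here are made-up illustrations.
target = np.array([0.5, 1.2, -0.3, -1.0, 0.8, 2.1, -0.6, 0.1])
nn_fc  = np.array([0.4, 1.0, -0.1, -0.8, 0.9, 1.8, -0.4, 0.2])  # "neural network"
seas5  = np.array([0.7, 0.9,  0.2, -0.5, 0.5, 1.5, -0.2, 0.4])  # "SEAS5"

def skill(forecast, truth):
    corr = float(np.corrcoef(forecast, truth)[0, 1])         # pattern agreement
    rmse = float(np.sqrt(np.mean((forecast - truth) ** 2)))  # amplitude error
    return corr, rmse

nn_corr, nn_rmse = skill(nn_fc, target)
s5_corr, s5_rmse = skill(seas5, target)
```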
At ClimateAi, we are actively researching ways to develop transparent methods of seasonal forecasting. By dissecting the strengths and weaknesses of deep learning and dynamical models, we are excited to view this task from the lens of an age-old contrast in the philosophy of science. In order to develop the most reliable and interpretable seasonal forecasts, we hope to leverage both deductive and inductive reasoning.
Built with the agricultural sector in mind, ClimateAi’s ML forecasting platform provides growers with actionable insights surrounding temperature trends, chill hours, and transportation planning.
The rise in energy crises and power outages in the West is showcasing the inadequacies of current forecasting methods. Using AI forecasting, the energy sector can predict climate-driven demand years in advance.
Financial security is only as strong as the company’s diversification and resilience. ClimateAi’s weather AI platform makes it possible to build climate-proof portfolios.
Our sophisticated weather AI platforms help the food & beverage industry predict supply and demand, optimize supply chain networks, and prevent material shortages.
The interconnectedness of global manufacturing networks means a weather event in China can impact operations across the globe. Machine learning’s adaptable models mean forecasts can stay up-to-date with changing weather norms.
Most infrastructure built decades ago wasn’t made to withstand changing water patterns and storm intensities. AI’s inductive-reasoning basis makes it possible to identify at-risk areas weeks to years ahead.
AI for forecasting has emerged as a powerful tool, revolutionizing the field and adapting to the challenges posed by climate change. As past data becomes a less reliable guide to the future, the ability to harness AI to make smarter predictions is increasingly valuable. AI is making waves in the weather forecasting world, addressing these challenges and providing invaluable insights for diverse industries.
The quest to find the best forecasting methods remains a priority for everyone. Dedicated to continuous improvement, ClimateAi is committed to refining AI weather forecasting techniques and ensuring accurate and reliable predictions for a better understanding of our ever-evolving climate.
I’d like to thank Dr. V. Balaji and Dr. Travis O’Brien for their helpful discussions regarding this topic. Also, I’d like to thank Max Evans, Garima Jain, Mattias Castillo, Aranildo Lima, Brent Lunghino, Himanshu Gupta, Dr. Carlos Gaitan, Jarrett Hunt, Omeed Tavasoli, and Dr. Patrick T. Brown for their collaboration on our project.
Guilyardi, Eric. “Challenges with ENSO in Today’s Climate Models.” National Oceanic and Atmospheric Administration (2015).
Khan, Salman. “Inductive & deductive reasoning.” Khanacademy.com.
Lipton, Zachary C. “The Mythos of Model Interpretability.” arXiv preprint arXiv:1606.03490 (2016).