Verif: A Weather-Prediction Verification Tool for Effective Product Development

Thomas N. Nipen, Norwegian Meteorological Institute, Oslo, Norway
Roland B. Stull, University of British Columbia, Vancouver, Canada
Cristian Lussana, Norwegian Meteorological Institute, Oslo, Norway
Ivar A. Seierstad, Norwegian Meteorological Institute, Oslo, Norway

Open access

Abstract

Verif is an open-source tool for verifying weather predictions against a ground truth. It is suitable for a range of applications and designed for iterative product development involving fine-tuning of algorithms, comparing methods, and addressing scientific issues with the product. The tool generates verification plots based on user-supplied input files containing predictions and observations for multiple point-locations, forecast lead times, and forecast initialization times. It supports over 90 verification metrics and diagrams and can evaluate deterministic and probabilistic predictions. An extensive set of command-line flags control how the input data are aggregated, filtered, stratified, and visualized. The broad range of metrics and data manipulation options allows the user to gain insight from both summary scores and detailed time series of individual weather events. Verif is suitable for many applications, including assessing numerical weather prediction models, climate models, reanalyses, machine learning models, and even the fidelity of emerging observational sources. The tool has matured through long-term development at the Norwegian Meteorological Institute and the University of British Columbia. Verif comes with an extensive wiki page and example input files covering a wide range of prediction applications, allowing students and researchers interested in verification to get hands-on experience with real-life datasets. This article describes the functionality of Verif version 1.3 and shows how the tool can be used for effective product development.

© 2023 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Thomas N. Nipen, thomasn@met.no

The mission of the global weather enterprise is to continually improve the quality of weather information products. Verification—the act of comparing a prediction against a truth—is an essential activity in all parts of this value chain, whether it is for delivering an observation-based product, a numerical weather prediction (NWP) product, or a postprocessed product tailored for a specific end-user. Verification helps to ensure that new products are of higher quality than their predecessors and is a prerequisite for adoption by the community. Furthermore, verification results give users a better idea of a product’s strengths, weaknesses, and uncertainty.

Verification also plays a major role in the day-to-day work of product developers. Verification is used for diagnosing problems with existing products, understanding the behavior of algorithms being developed, and for making comparisons between multiple candidate algorithms. This highly iterative process requires extensive fine-tuning of algorithms to ensure that the product performs well under a wide range of weather situations and supports its end-users’ needs.

Verif is a simple yet flexible verification tool designed for effective product development. Verif was developed at the University of British Columbia in a research environment that delivers customized weather forecasts for government agencies and private companies. It has been further extended at the Norwegian Meteorological Institute (MET Norway), where it is used to refine operational weather forecasts for the popular weather app Yr (www.yr.no) and for hydrological and climatological products available on SeNorge (www.senorge.no). This development has resulted in several official releases, the latest of which, version 1.3, is presented here. Verif is suitable for a wide variety of deterministic and probabilistic applications and is easy to install. Full documentation of the tool is available on the Verif website (https://github.com/WFRT/verif); see the sidebar for how to get started.

Introducing the Verif command-line tool

Verif is a command-line tool that produces verification figures by processing input files containing predictions and observations. The user controls what verification results are computed and presented by specifying various command-line flags. A typical workflow consists of developing a prediction system in a preferred programming language (e.g., FORTRAN, Python, R), exporting the predictions and observations into a Verif-supported format, and using the Verif program to investigate the results in a language-independent way. As performance issues (such as systematic biases) with the prediction system are identified, algorithms can be fine-tuned or new methods can be developed, and the process is repeated. This iterative cycle can be a very efficient way to develop prediction products. Central to this cycle is Verif’s two standardized input file formats: tabular text files (simple to create) and NetCDF formatted files (faster to read). A description of how to organize data into either of these formats is presented in the tutorials on the Verif website.
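To illustrate the export step, the sketch below writes matched forecasts and observations to a Verif-style tabular text file from Python. This is a minimal sketch only: the column names and layout shown are assumptions for illustration, and the authoritative description of the text and NetCDF formats is given in the tutorials on the Verif website.

import numpy as np

def write_verif_text(filename, dates, leadtimes, stations, obs, fcst):
    # dates: YYYYMMDD integers; leadtimes: hours; stations: (id, lat, lon, elev) tuples
    # obs and fcst: arrays with shape (date, leadtime, station)
    # NOTE: the header below is an assumed, simplified layout for illustration only
    with open(filename, "w") as f:
        f.write("date leadtime location lat lon altitude obs fcst\n")
        for d, date in enumerate(dates):
            for l, leadtime in enumerate(leadtimes):
                for s, (sid, lat, lon, elev) in enumerate(stations):
                    f.write(f"{date} {leadtime} {sid} {lat} {lon} {elev} "
                            f"{obs[d, l, s]:.2f} {fcst[d, l, s]:.2f}\n")

# Hypothetical example: one station, one initialization time, two lead times
write_verif_text("demo.txt", [20230101], [6, 12], [(18700, 59.94, 10.72, 94)],
                 obs=np.array([[[2.1], [3.4]]]), fcst=np.array([[[1.8], [3.0]]]))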

Getting started with Verif

Verif requires a Python installation, and can easily be installed by running the following command in a terminal:

pip install verif

To get started with Verif, download the example datasets from the website (https://github.com/WFRT/verif). Verif is run by issuing commands in a terminal, such as this:

verif MEPS.nc ECMWF.nc -m mae -latrange 58,62 -x month -lw 4

A window with a verification figure will then pop up (Fig. SB1). Here two input files (MEPS.nc and ECMWF.nc), each with its own forecasts, are provided by the user. The command plots mean absolute error (-m) for all points within the latitude range 58°–62°N (-latrange) for each month (-x) and plots lines with a line width (-lw) of four pixels. Run verif --help to see a description of all available command-line arguments.

Fig. SB1. An example Verif verification plot showing mean absolute error for different months.

This workflow separates data generation from verification analysis and can also be effective when collaborating with other research groups. Researchers often have preferred programming languages or frameworks that they develop within, but they can still collaborate by sharing verification files in Verif’s standardized format. In this way, researchers can easily compare their results and understand the strength of methods without having to invest in the codebase of others.

Verif’s more than 70 command-line arguments control how the input data are processed, what verification metric(s) to compute, and how the resulting verification scores are visualized (Fig. 1). The entire data pipeline is run from beginning to end each time a Verif command is executed. Many of the command-line arguments are optional. This design choice enables inexperienced users to use the tool without understanding all the features, as Verif will revert to defaults where options are omitted. At the same time, Verif allows experienced users to perform complex verification by selecting specific sets of command-line arguments.

Fig. 1. Verif’s pipeline for parsing, processing, scoring, and visualizing input data. Optional steps are shown by dashed boxes.

Each input file contains observations and predictions that are represented by three dimensions: time, lead time, and location. Time represents the initialization time of an NWP model run, or the valid time of an observational product. Lead time is the duration between a forecast’s valid time and its initialization time; observational products have a single lead time of 0 h. Location is a vector of points, each defined by its latitude, longitude, and elevation, and corresponds to the locations of the verifying observations. The user may therefore need to interpolate gridded forecasts to these points. Gridded observations can also be used, but the grid points must then be serialized into a vector of points.
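Because Verif expects point locations, gridded forecasts must first be matched to the observation sites. The snippet below is a minimal, Verif-independent sketch of a nearest-neighbor lookup; a real workflow might instead use bilinear interpolation or proper great-circle distances.

import numpy as np

def nearest_gridpoint(grid_lats, grid_lons, field, point_lat, point_lon):
    # All grid arrays are 2D; returns the field value at the grid point closest
    # to (point_lat, point_lon) using a simple latitude-weighted planar distance.
    dlon = np.cos(np.deg2rad(point_lat)) * (grid_lons - point_lon)
    dist2 = (grid_lats - point_lat) ** 2 + dlon ** 2
    j, i = np.unravel_index(np.argmin(dist2), dist2.shape)
    return field[j, i]

# Hypothetical 1-degree grid and a single station near Oslo
lats, lons = np.meshgrid(np.arange(55, 65), np.arange(5, 15), indexing="ij")
field = np.random.rand(*lats.shape)
value = nearest_gridpoint(lats, lons, field, 59.94, 10.72)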

Verif supports forecasts in deterministic and probabilistic form. The user can provide probabilistic forecasts either as a set of exceedance probabilities (e.g., the probability of precipitation exceeding 0.1 mm) or as a set of quantile levels (e.g., the 90th percentile). Probabilistic forecasts are typically generated by extracting probabilities from the empirical distribution of ensemble members or by a statistical model that predicts the probability distribution directly. Verif does not put any restrictions on what type of prediction variable or what type of observations it accepts, provided the data can be represented by the supported dimensions.
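For example, the following Verif-independent sketch shows how empirical exceedance probabilities and quantiles can be derived from ensemble members before being written to an input file.

import numpy as np

members = np.array([0.0, 0.2, 0.0, 1.5, 0.4, 0.0, 2.3, 0.1])  # ensemble precipitation (mm)
p_exceed = np.mean(members > 0.1)   # empirical probability of exceeding 0.1 mm
q90 = np.quantile(members, 0.9)     # empirical 90th percentile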

The parsed data structures are then passed through a series of optional data processing steps. First, anomalies can be calculated by subtracting (or dividing) a climatology reference from all observations and predictions. This is important in some applications where a prediction system’s skill appears artificially high when mixing predictions for locations and seasons that have different climatologies (Hamill and Juras 2006).

Next, the data can be aggregated in time. This is useful if input data are hourly, but the user wants to assess aggregated values (e.g., 24 h precipitation accumulations). This can only be applied to deterministic forecasts since neither exceedance probabilities nor quantiles can be aggregated when the necessary temporal covariance structures are not known.
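A minimal, Verif-independent sketch of what this aggregation step does for deterministic values:

import numpy as np

hourly = np.random.rand(48)                   # 48 hourly precipitation amounts (mm)
daily = hourly.reshape(-1, 24).sum(axis=1)    # two 24-h accumulations to be verified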

The data are then optionally filtered, retaining only specified times, lead times, and locations. Filtering allows the user to isolate the performance of the prediction system for specific seasons or geographical regions. The observations and predictions are then stratified into chunks and combined to produce verification scores. The data can be stratified along any of the three dimensions.
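Conceptually, the stratify-and-score step groups the matched pairs along one dimension and computes a score for each group. A minimal, Verif-independent sketch using synthetic data (here, mean absolute error per lead time):

import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(size=(30, 10, 5))                       # dimensions: (time, leadtime, location)
fcst = obs + rng.normal(scale=0.5, size=obs.shape)
mae_by_leadtime = np.abs(fcst - obs).mean(axis=(0, 2))   # one score per lead time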

These scores are then visualized in a graph. When multiple input files are provided, each input appears as a separate labeled line in the graph. To ensure a fair comparison, the filtering stage ensures that only times, lead times, and locations common to all files are used. Cases for which one or more sources do not have data available are therefore ignored. Verif includes options for customizing the visual elements of the plots, including axis limits, axis labels, font sizes, colors, and line and marker styles. Plots can be exported into a variety of image file formats, such as PNG, JPEG, and PDF. The computed verification results can also be exported to tabular text files for further downstream use.

A number of other verification packages exist that provide many of the same metrics and diagrams as Verif (see the appendix on verification software in Jolliffe and Stephenson 2012). Examples include libraries of functions such as Verification (NCAR Research Application Program 2015), SpatialVx (Gilleland 2022), and s2dVerification (Manubens et al. 2018) in R and climpred (Brady and Spring 2021) in Python. Verif differs in that it provides a command-line interface on top of this functionality, allowing the user to verify forecasts without having to program the whole processing pipeline in Fig. 1 from scratch. Model Evaluation Tools (MET; Brown et al. 2021) is a comprehensive verification framework that provides a suite of tools for computing and visualizing verification scores. Verif aims to be a lightweight alternative by focusing on the subset of functionality that is most relevant for point-based forecasting. Verif is much narrower in scope and does not support spatial verification techniques, which are commonly used to evaluate the predictive potential of NWP models. Also, Verif does not have MET’s extensive functionality for importing data from commonly used formats and requires its users to set up the data files on their own. This includes the need to interpolate gridded forecasts to observation points.

Metrics and diagrams

There are many aspects that describe the quality of a prediction (Murphy 1993). A rich set of verification techniques has been developed over time, addressing the challenges of forecast verification (Ebert et al. 2013). For a comprehensive overview of verification techniques, see Jolliffe and Stephenson (2012) and Wilks (2011). Verif currently supports 67 metrics, which are scoring rules for observations and forecasts, and 28 diagrams, which are specific ways of visualizing one or more metrics. A selection of the most commonly used ones is presented in Table 1 (deterministic) and Table 2 (probabilistic). When a metric is chosen, Verif plots the stratified dimension on the x axis and the computed scores on the y axis (Fig. 2a). When scores are stratified by location, they can be plotted on a map (Fig. 2b). Diagrams, such as the Taylor diagram (Taylor 2001), elegantly combine several metrics (centered root-mean-square error, correlation, and forecast variance) into one figure (Fig. 2c).
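As a Verif-independent illustration, the three quantities combined in a Taylor diagram can be computed as follows (synthetic data for demonstration only):

import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(size=1000)
fcst = 0.8 * obs + rng.normal(scale=0.5, size=obs.shape)
std_fcst = np.std(fcst)                # forecast standard deviation
corr = np.corrcoef(fcst, obs)[0, 1]    # correlation with observations
crmse = np.sqrt(np.mean(((fcst - fcst.mean()) - (obs - obs.mean())) ** 2))  # centered RMSE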

Table 1. List of the most commonly used deterministic metrics and diagrams (marked with *) available in Verif and what prediction quality they assess. For a complete list, see the wiki page at https://github.com/WFRT/verif/wiki.

Table 2. As in Table 1, but for probabilistic metrics and diagrams.

Fig. 2. Example metrics and diagrams in Verif. (a) Mean absolute error (MAE) as a function of lead time; (b) MAE by location; (c) Taylor diagram; (d) reliability diagram.

The choice of metric or diagram depends on the application and the end-user. Some users are sensitive to large prediction errors whereas others are sensitive to specific thresholds (e.g., around the freezing point of water). Other users are sensitive to long-run properties, such as bias and distribution of forecasts. For probabilistic predictions, properties such as reliability (Fig. 2d) and sharpness are important (Gneiting et al. 2007).
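To illustrate the reliability property, the following Verif-independent sketch bins forecast probabilities and compares them with observed relative frequencies, which is the computation underlying a reliability diagram such as Fig. 2d:

import numpy as np

rng = np.random.default_rng(2)
prob = rng.uniform(size=10000)            # forecast probabilities of an event
event = rng.uniform(size=10000) < prob    # synthetic outcomes from a perfectly reliable forecast
bins = np.linspace(0, 1, 11)
idx = np.digitize(prob, bins) - 1
mean_prob = np.array([prob[idx == b].mean() for b in range(10)])   # mean probability per bin
obs_freq = np.array([event[idx == b].mean() for b in range(10)])   # observed frequency per bin
# A reliable forecast has obs_freq close to mean_prob in every bin.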

Product development with Verif

We have used Verif extensively in our development process for MET Norway’s public weather forecasts on time scales ranging from nowcasting to subseasonal forecasting. Continually improving our products is essential to stay relevant in a competitive market of prediction products. An example of such an improvement was the introduction of citizen observations into our operational postprocessing system to increase the local accuracy of our weather forecasts (Nipen et al. 2020).

Introducing such a change requires a significant amount of method development through trial and error, resulting in potentially hundreds of method configurations that must be evaluated. Understanding the behavior of the proposed methods and the consequences of the algorithmic choices we make is critical. Verif was very useful as a diagnostic tool to detect weaknesses in proposed methods.

Verif supports both case studies and summary scores as investigative techniques. Although individual events do not contribute much to overall performance, studying time series plots (Fig. 3a) can often lead to physical insight that helps us understand underlying deficiencies in the product. Weaknesses can stem from systematic errors that are specific to certain times, regions, or weather situations. In Fig. 3a, for example, a clear daytime warm bias is evident in the time series plot. Verif’s many filtering options are important for pinpointing these events. The goal then becomes to define the problem and confirm the hypothesis with summary scores.

Fig. 3. Example plots often used to support product development. (a) A time series diagram showing predictions and observations for two prediction systems; (b) impact diagram of mean absolute error (MAE) of temperature. Red (blue) dots are conditional forecasts where inclusion of citizen observations improves (degrades) the prediction. The size of the circle is proportional to the total share that the conditional samples have toward the total MAE. The bars along the side are marginal sums for that temperature; (c) impact diagram showing the impact of citizen observations on prediction MAE during daytime (1200–1500 UTC) in summer months (July and August).

When the error patterns are not clear and occur intermittently, summary scores can be a more suitable diagnostic tool. Summary scores aggregate results over many events, revealing subtle patterns that may indicate a problem. The two strategies complement each other, and switching between them rapidly is common in product development.

Assessing the added value of a proposed method relative to an existing system is also very common. For this we use an impact diagram, which shows the average impact for every combination of joint forecasts. The diagram plots the two systems on separate axes and shows the average improvement/degradation of a chosen metric for the joint forecasts (Fig. 3b). This particular figure suggests that the method has added value, but not for forecast temperatures above 20°C. Impact diagrams can also be shown on a map (Fig. 3c), helping to identify regions where the product is underperforming.
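The sketch below illustrates the idea behind such an impact diagram in a Verif-independent way: pairs of forecasts from two systems are binned jointly, and the average change in absolute error is computed for each bin (synthetic data; not Verif’s own implementation):

import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(10, 5, size=5000)
fcst_ref = obs + rng.normal(0, 2.0, size=obs.shape)    # existing system
fcst_new = obs + rng.normal(0, 1.5, size=obs.shape)    # proposed system
edges = np.arange(-10, 31, 5)                          # joint forecast bins (degrees C)
impact = np.full((len(edges) - 1, len(edges) - 1), np.nan)
for i in range(len(edges) - 1):
    for j in range(len(edges) - 1):
        mask = ((fcst_ref >= edges[i]) & (fcst_ref < edges[i + 1]) &
                (fcst_new >= edges[j]) & (fcst_new < edges[j + 1]))
        if mask.any():
            # Negative values mean the proposed system reduces the absolute error
            impact[i, j] = np.mean(np.abs(fcst_new[mask] - obs[mask])
                                   - np.abs(fcst_ref[mask] - obs[mask]))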

The insights we gain from Verif’s many metrics and diagrams guide our development process. In simple cases, algorithms can be fine-tuned to alleviate deficiencies identified by verification. After adjusting an algorithm, a broad set of summary scores must then be checked to see that other properties of the forecasts have not been affected. Sometimes, an algorithm is abandoned and an entirely different approach is developed. Other times, alternative input data sources must be explored.

Learning verification with Verif

Verif can be used to learn about verification. The Verif website (https://github.com/WFRT/verif) provides in-depth tutorials on Verif’s functionality, with examples of the metrics, diagrams, and plotting options. We believe verification is best learned by doing and have therefore made available several Verif-compatible datasets to complement the tutorials. Each dataset focuses on a particular application, giving researchers a good starting point to learn about the specific verification aspects for that application. At the time of writing, we have included datasets for temperature reanalysis, precipitation nowcasting, and short-range precipitation and wind forecasting. We encourage the community to contribute datasets for other applications. These datasets can be published to the Verif community on Zenodo (https://zenodo.org/communities/verif/), a platform for storing research datasets.

Verif’s tutorials, datasets, and simple installation instructions make Verif a suitable tool for teaching verification. Instructors can quickly bring theory to practice with real-life datasets, giving students hands-on experience with verification. The tool allows students to go beyond textbook examples, allowing them to explore and discover on their own the many fascinating aspects of verification.

Outlook

Verif’s continual development is guided by feedback from our users, and the Verif website includes an issue tracker for reporting bugs. In the future, we wish to make it easier for Python developers to call Verif’s core functionality (such as metrics and diagrams) directly, and we aim to improve support for ensemble forecasts given as scenarios (i.e., not just quantiles and exceedance probabilities). We also wish to add support for determining whether differences in performance between competing forecasts are statistically significant, as this can be helpful when deciding whether or not to adopt a new forecasting system.

The standardized input formats described earlier are designed to be simple to produce data for. In settings where the user is already writing code to generate the forecast product, creating Verif files is usually straightforward since the data are already available within the programming environment. In other settings, such as when verifying output from an externally provided forecasting system, putting together the Verif files can be more challenging. We aim to support this use case better by developing scripts that can convert commonly used forecast and observation formats into the Verif-supported formats.

Acknowledgments.

We thank three anonymous reviewers for their helpful comments that improved this article. This work has been partly funded by the Canadian Natural Sciences and Engineering Research Council (NSERC) and British Columbia Hydro and Power Authority (BC Hydro). We thank former and current members of the UBC Weather Forecast Research Team for extensive testing and suggestions leading to improvements to Verif. We also acknowledge the helpful comments and suggestions we have received from Verif’s community of users.

Data availability statement.

The verification figures in this article are based on freely accessible data that have been published to the Verif community on Zenodo (https://zenodo.org/communities/verif/).

References

Brady, R. X., and A. Spring, 2021: climpred: Verification of weather and climate forecasts. J. Open Source Software, 6, 2781, https://doi.org/10.21105/joss.02781.

Brown, B., and Coauthors, 2021: The Model Evaluation Tools (MET): More than a decade of community-supported forecast verification. Bull. Amer. Meteor. Soc., 102, E782–E807, https://doi.org/10.1175/BAMS-D-19-0093.1.

Ebert, E., and Coauthors, 2013: Progress and challenges in forecast verification. Meteor. Appl., 20, 130–139, https://doi.org/10.1002/met.1392.

Gilleland, E., 2022: Comparing spatial fields with SpatialVx: Spatial forecast verification in R. NCAR, 69 pp., https://doi.org/10.5065/4px3-5a05.

Gneiting, T., F. Balabdaoui, and A. E. Raftery, 2007: Probabilistic forecasts, calibration and sharpness. J. Roy. Stat. Soc., 69B, 243–268, https://doi.org/10.1111/j.1467-9868.2007.00587.x.

Hamill, T. M., and J. Juras, 2006: Measuring forecast skill: Is it real skill or is it the varying climatology? Quart. J. Roy. Meteor. Soc., 132, 2905–2923, https://doi.org/10.1256/qj.06.25.

Jolliffe, I. T., and D. B. Stephenson, Eds., 2012: Forecast Verification: A Practitioner’s Guide in Atmospheric Science. 2nd ed. John Wiley and Sons, 296 pp.

Manubens, N., and Coauthors, 2018: An R package for climate forecast verification. Environ. Modell. Software, 103, 29–42, https://doi.org/10.1016/j.envsoft.2018.01.018.

Murphy, A. H., 1993: What is a good forecast? An essay on the nature of goodness in weather forecasting. Wea. Forecasting, 8, 281–293, https://doi.org/10.1175/1520-0434(1993)008<0281:WIAGFA>2.0.CO;2.

NCAR Research Application Program, 2015: Verification: Weather forecast verification utilities, version 1.42. R package, https://cran.r-project.org/package=verification.

Nipen, T. N., I. A. Seierstad, C. Lussana, J. Kristiansen, and Ø. Hov, 2020: Adopting citizen observations in operational weather prediction. Bull. Amer. Meteor. Soc., 101, E43–E57, https://doi.org/10.1175/BAMS-D-18-0237.1.

Taylor, K. E., 2001: Summarizing multiple aspects of model performance in a single diagram. J. Geophys. Res., 106, 7183–7192, https://doi.org/10.1029/2000JD900719.

Wilks, D. S., 2011: Statistical Methods in the Atmospheric Sciences. International Geophysics Series, Vol. 100, Academic Press, 676 pp.
