Air Quality Modeling
7.5 Air Quality Model Design and Simulation
7.5.3 Modeling Verification
7.5.3.1 Introduction
A fundamental requirement for the use of models in policy is that their predictions be credible. This means that the model must not only get the right answer, but that it must get the right answer for the right reasons. The second point is important, because a model that predicts the correct current concentration of a pollutant because of a cancellation of errors cannot be relied on to provide correct predictions for altered conditions, for example, for scenarios of reduced emissions, as would typically be required for policy evaluation.
Full model evaluation and validation should include:

- Thorough peer review of the science underlying the model.
- Evaluation of the model’s ability to predict concentrations of the pollutant of interest by comparing predictions against measurements, preferably over a wide range of meteorological conditions (this operational evaluation tests the model’s ability to get the right answer).
- Comparison of the performance of two or more models.
- More detailed evaluation of the model’s ability to correctly predict the concentrations of other chemical species involved in the chemical scheme (this diagnostic evaluation tests whether the right answer is obtained for the right reasons).
Air quality models are evaluated by comparing their predictions against ambient measurements. A variety of statistical measures of the agreement or disagreement between predicted and observed values can be used. Although statistical analysis alone may not reveal the cause of a discrepancy, it can offer valuable insight into the nature of the mismatch.
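As an illustration, the following is a minimal sketch of some commonly used performance statistics (mean bias, root-mean-square error, normalized mean bias, and the correlation coefficient) computed for paired predictions and observations; the function name, data values, and units are hypothetical.

```python
import numpy as np

def evaluation_statistics(predicted, observed):
    """Common statistics for paired model predictions and ambient
    observations (illustrative helper, not a standard API)."""
    p = np.asarray(predicted, dtype=float)
    o = np.asarray(observed, dtype=float)
    residual = p - o
    return {
        "mean_bias": residual.mean(),                      # average over/underprediction
        "rmse": np.sqrt((residual ** 2).mean()),           # overall error magnitude
        "normalized_mean_bias": residual.sum() / o.sum(),  # bias relative to observed total
        "correlation": np.corrcoef(p, o)[0, 1],            # agreement in pattern, not magnitude
    }

# Hypothetical hourly ozone (ppb) at one monitor
predicted = [42.0, 55.3, 61.8, 58.2, 49.5]
observed = [40.1, 57.0, 66.4, 60.0, 47.2]
print(evaluation_statistics(predicted, observed))
```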
7.5.3.2 Ambient Data Comparison
Confidence in a model’s performance is gained only if its predictions stand up to rigorous comparison with data. Ideally, the data used for comparison are an extension of the data used to initialize the model: if time-dependent observational data are available, data at the time corresponding to the start of the simulation should be used to initialize the model, and data for all subsequent times should be reserved for comparison with model results.
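A minimal sketch of this division of the data, assuming hourly observations indexed by hours since the start of the simulation (all values synthetic):

```python
import numpy as np

hours = np.arange(0, 25)                                # hours since simulation start
obs = 50.0 + 10.0 * np.sin(2.0 * np.pi * hours / 24.0)  # synthetic hourly ozone, ppb

init_value = obs[hours == 0][0]   # observation at t = 0 initializes the model
held_out = obs[hours > 0]         # all later observations are reserved for evaluation
print(f"initialize with {init_value:.1f} ppb; {held_out.size} values held out for comparison")
```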
Predictions should be compared with data for as many parameters and locations as possible. If a model predicts meteorological and gas variables, its predictions should be compared with observed temperatures, pressures, wind velocities, and concentrations, at as many horizontal and vertical locations as possible.
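Continuing in the same spirit, a sketch of tabulating simple error statistics per station and per variable; the station names, variables, and values are all hypothetical.

```python
import numpy as np

# Paired (predicted, observed) series for several hypothetical monitors and variables
series = {
    ("downtown", "ozone_ppb"): ([42.0, 55.3, 61.8], [40.1, 57.0, 66.4]),
    ("suburban", "ozone_ppb"): ([38.5, 47.2, 52.9], [36.0, 49.8, 55.1]),
    ("downtown", "temperature_K"): ([295.2, 298.7, 301.1], [294.8, 299.5, 302.0]),
}

for (site, variable), (pred, obs) in series.items():
    r = np.asarray(pred) - np.asarray(obs)   # residuals at this site for this variable
    print(f"{site:9s} {variable:14s} bias={r.mean():+6.2f}  rmse={np.sqrt((r**2).mean()):5.2f}")
```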
Data for times after the start of a simulation should not be used to force or nudge the model toward a better solution. Nudging model results toward observations by data assimilation prevents a model from being prognostic: if a known result is used to push the model to that result, the model cannot be used to predict future events. Even when past events are simulated, nudging or data assimilation impairs any evaluation of the model’s accuracy.
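To make the distinction concrete, here is a minimal sketch of Newtonian relaxation, one common form of nudging, in which an extra term −(c − c_obs)/τ pulls the model state toward an observation; the physics, values, and timescale are all illustrative. A run integrated this way tracks the observations it is fed, so its agreement with those observations no longer tests prognostic skill.

```python
def step(c, forcing, dt, obs=None, tau=4 * 3600.0):
    """Advance concentration c by one time step dt (s). If obs is given,
    add a Newtonian-relaxation (nudging) term -(c - obs)/tau that pulls
    the solution toward the observation (stronger for smaller tau)."""
    dcdt = forcing(c)
    if obs is not None:
        dcdt -= (c - obs) / tau
    return c + dt * dcdt

forcing = lambda c: -1.0e-5 * c   # toy "model physics": first-order decay
c_free = c_nudged = 80.0
for hour in range(24):
    c_free = step(c_free, forcing, 3600.0)                # prognostic run
    c_nudged = step(c_nudged, forcing, 3600.0, obs=60.0)  # run nudged toward a fixed observation
print(c_free, c_nudged)   # the nudged run is drawn toward the observed value
```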
7.5.3.3 Sensitivity Tests
For regional modeling, common sensitivity tests examine changes in boundary conditions, initial conditions, and emissions. One test is to set all inflow gas and aerosol concentrations at the horizontal boundaries to zero and compare the results with the baseline case and with data. Another is to set all initial gas and particle concentrations to zero. A third is to examine the effect of changes in emissions on model results. On a global scale, similar sensitivity tests for emissions and initial conditions can be run.
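A schematic of how such runs might be organized, assuming a driver function run_model whose interface (boundary-condition, initial-condition, and emission fields) is entirely hypothetical; a real chemical transport model would replace the toy response used here.

```python
import numpy as np

def run_model(boundary, initial, emissions):
    """Stand-in for a full model run; returns a concentration field (ppb).
    The linear toy response is only a placeholder for illustration."""
    return 0.3 * boundary + 0.2 * initial + 0.5 * emissions

boundary = np.full((4, 4), 40.0)    # inflow concentrations at horizontal boundaries
initial = np.full((4, 4), 35.0)     # initial concentrations
emissions = np.full((4, 4), 50.0)   # emission-driven contribution

baseline = run_model(boundary, initial, emissions)
tests = {
    "zero boundary": run_model(np.zeros_like(boundary), initial, emissions),
    "zero initial": run_model(boundary, np.zeros_like(initial), emissions),
    "emissions -50%": run_model(boundary, initial, 0.5 * emissions),
}
for name, field in tests.items():
    print(f"{name:15s} mean change vs baseline: {(field - baseline).mean():+.1f} ppb")
```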
7.5.3.4 Model Accuracy
For policy purposes it would be desirable to state that a model prediction is uncertain to within ±X%. Such a definitive statement cannot be made, because model uncertainty depends on many factors, some of them specific to the particular application. Overall, model uncertainty includes contributions from uncertainties in the input data (meteorology, emissions, etc.) and in the model itself. The latter include uncertainty in parameters such as chemical reaction rates, uncertainties in the science on which the model is based, and uncertainties in the implementation of that science in numerical form. In addition, the process of model evaluation is itself somewhat uncertain, because of measurement uncertainties and because of the problem of incommensurability: a point measurement is compared with a model prediction that represents an average over a grid cell volume.
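One common way to characterize the input- and parameter-uncertainty components is Monte Carlo sampling; below is a minimal sketch using a toy steady-state relation (concentration = emission rate / first-order loss rate) with assumed lognormal uncertainties standing in for real emission and rate-parameter uncertainties.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed (illustrative) uncertainties: ~30% on emissions, ~20% on the loss rate
emission = 100.0 * rng.lognormal(mean=0.0, sigma=0.3, size=n)  # arbitrary units
loss_rate = 2.0 * rng.lognormal(mean=0.0, sigma=0.2, size=n)   # 1/time

concentration = emission / loss_rate   # toy steady-state model
lo, hi = np.percentile(concentration, [2.5, 97.5])
print(f"median {np.median(concentration):.1f}, 95% range [{lo:.1f}, {hi:.1f}]")
```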
Even if a statement in the desired form could be made about the model, further uncertainty arises because policy applications require the prediction of some future or unknown state. This future state will involve emission changes, as new facilities are built, or as emissions of existing facilities are controlled, and will also correspond to unknown meteorological conditions, and possibly also to changed surface conditions (e.g., changes in land use).