Meaningful Verification

Meaningful verification cannot occur without first determining what you are trying to learn and understanding the data set. In this demonstration we are trying to learn about the impact of QPF inputs. The two RFC case studies involved different types of data and forecasts, but in each case the verification approach used to understand the impact of QPF included:

  • Understanding the data, especially the basin aggregation in the OHRFC case and the ensemble spreads in the MARFC case
  • Assessing the impact of lead time
  • Summarizing general hydrologic forecast performance in terms of error and bias
  • Examining hydrologic forecast performance in more detail, as measured by error, bias, and correlation
  • Examining hydrologic forecast skill
  • Looking at conditional scores such as forecast reliability and/or forecast discrimination
  • Comparing scores for important subsets of the data (such as high-flow events or fast-response basins only)
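The core measures in the list above can be sketched in a few lines of Python. The data here are synthetic stand-ins for paired forecast/observation values, and the specific score definitions (mean error for bias, an MSE-based skill score against sample climatology) are common textbook choices, not necessarily the exact formulations used in the OHRFC and MARFC case studies.

```python
import numpy as np

# Hypothetical paired samples (e.g., 6-hour flows for one basin);
# replace with real verification pairs from your archive.
rng = np.random.default_rng(42)
obs = rng.gamma(shape=2.0, scale=50.0, size=200)        # "observed" flows
fcst = obs + rng.normal(loc=5.0, scale=20.0, size=200)  # biased, noisy "forecasts"
ref = np.full_like(obs, obs.mean())                     # reference: sample climatology

def mean_error(f, o):
    """Bias: positive means over-forecasting on average."""
    return np.mean(f - o)

def rmse(f, o):
    """Root-mean-square error: typical magnitude of forecast error."""
    return np.sqrt(np.mean((f - o) ** 2))

def skill_score(f, o, r):
    """MSE-based skill relative to a reference forecast (1 = perfect, 0 = no skill)."""
    return 1.0 - np.mean((f - o) ** 2) / np.mean((r - o) ** 2)

corr = np.corrcoef(fcst, obs)[0, 1]  # linear association between forecasts and obs
print(f"bias  = {mean_error(fcst, obs):.2f}")
print(f"rmse  = {rmse(fcst, obs):.2f}")
print(f"corr  = {corr:.2f}")
print(f"skill = {skill_score(fcst, obs, ref):.2f}")

# Conditional verification: recompute scores on an important subset,
# here the high-flow events (observations above the 90th percentile).
hi = obs > np.percentile(obs, 90)
print(f"high-flow bias = {mean_error(fcst[hi], obs[hi]):.2f}")
```

The same subsetting idea extends to lead time (score each lead separately) and to basin type (e.g., fast-response basins only), which is how the lead-time and subset items in the list are typically carried out.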

Some of the scores and measures examined were specific to each case, but this overall approach should work for almost any case.

Remember that the ultimate goal of meaningful forecast verification is to understand and quantify forecast performance, and to use that information to improve the forecast system. Improving a forecast system is more effective and efficient when we understand the impact of a model input such as QPF: how that impact manifests itself, where it is most obvious in the study area, and when it is most prominent within the forecast period.

Photo of Boulder Creek, Boulder, CO April 2006.