WRF With NAM Initialization

This page shows some of the results when NAM is used to initialize WRF. Many of the trends are about the same as in the runs where RUC was used to initialize WRF, so not every RUC graph from the Results and Discussion pages is duplicated for NAM. The NAM forecast product is produced only once every 6 hours, as opposed to RUC, which is produced every hour. But NAM forecasts 84 hours into the future, as opposed to RUC's 9- or 12-hour forecasts, so the procedure to produce these NAM-based forecasts is different. The range of dates was still Jan 1, 2009 through Aug 31, 2009, but a forecast was produced once every 18 hours and each forecast was 36 hours long. For most graphs, the first two forecast hours were not included because two forecast hours are needed to remove the bias in the NAM data.

Even so, every time from January through August is still covered by two NAM-based forecasts, because every 18-hour time period has 34 hours of coverage. The RUC graphs have 7 hours of coverage for every 11-hour time period. So these 324 NAM-based runs have almost three times as many data points per graph point as the graphs of the 525 RUC-based runs. The NAM data is provided in increments of three hours, so in the graphs based on forecast hour, its data is shown as discrete points at three-hour intervals.
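As a rough arithmetic check of those coverage figures (a sketch using only the numbers quoted above, not the actual data):

    nam_runs, nam_hours_per_run, nam_cycle = 324, 34, 18   # 36-hour forecast minus 2 hours, one run every 18 hours
    ruc_runs, ruc_hours_per_run, ruc_cycle = 525, 7, 11    # 7 usable hours per 11-hour RUC cycle

    print(nam_hours_per_run / nam_cycle)   # ~1.9: on average, each hour is covered by about two NAM-based forecasts
    print(ruc_hours_per_run / ruc_cycle)   # ~0.64: 7 hours of coverage per 11-hour period

    # Total forecast hours feeding the graphs, and their ratio
    print(nam_runs * nam_hours_per_run)                                   # 11016
    print(ruc_runs * ruc_hours_per_run)                                   # 3675
    print(nam_runs * nam_hours_per_run / (ruc_runs * ruc_hours_per_run))  # ~3.0, "almost three times as many"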




The graph on the left shows how WRF starts out with a bias equal to NAM's bias, because NAM provides all of the initial conditions. Over the first couple of forecast hours, WRF is able to remove that bias. WRF retains a little bias of its own, but it is not clear whether that comes from the NAM data or whether WRF creates it on its own. If it is the former, then perhaps my grids were designed too small. My design approach was to use lean buffer areas in my three domains, but if the NAM bias is working its way through to my innermost domain, I may need larger domains.

The graph on the right shows the number of data points that go into each graph point, both for the graph on the left and for the two graphs in the next section, which show the Mean Absolute Error (MAE) of forecast wind speed and power.







The graph on the left shows the mean absolute error in the forecast wind speed. The graph on the right shows the same error with the bias removed. There is a noticeable increase in error with time into the forecast, which would be expected, although NAM's error seems to grow at a slower rate than WRF's after the first 8 or 10 hours.

Also, the bias correction helps the NAM data more than the WRF data, which implies that NAM's error is accounted for by its bias to a greater degree than WRF's error is.
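As a point of reference, here is a minimal sketch of how these two error measures could be computed for the data at one forecast hour; the function names and the numbers are purely illustrative, not taken from the actual processing scripts.

    import numpy as np

    def mae(forecast, observed):
        # Mean absolute error of the forecast wind speed.
        return np.mean(np.abs(forecast - observed))

    def mae_bias_removed(forecast, observed):
        # Subtract the mean bias (forecast minus observed), then recompute the MAE.
        bias = np.mean(forecast - observed)
        return np.mean(np.abs(forecast - bias - observed))

    # Made-up example: a forecast that runs 1 m/s high on average.
    f = np.array([6.0, 7.0, 9.0, 11.0])
    o = np.array([5.0, 6.5, 7.5, 10.0])
    print(mae(f, o))               # 1.0
    print(mae_bias_removed(f, o))  # 0.25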





This graph shows the number of data points that go into each speed bin, as measured by a sodar, in the following graphs. Obviously, the majority of the points are at the lower speeds. The data at the higher speeds is less reliable because fewer observations were available to create it. The sodar is also less likely to report the higher speeds because of reduced confidence in those measurements.






These graphs show the bias of the NAM data and of the WRF runs initialized with the NAM data. The bias in forecast wind speed is on the left. The bias in forecast power divided by capacity ("normalized" power) is on the right.
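To make the "normalized" part concrete, here is a small sketch of how the per-speed bias could be tallied; the capacity value is a placeholder rather than the turbine rating used in this study, and the functions are hypothetical.

    import numpy as np

    CAPACITY_KW = 1500.0  # placeholder rated capacity

    def normalized_power(power_kw):
        # Power divided by capacity, so the bias is expressed as a fraction of capacity.
        return power_kw / CAPACITY_KW

    def bias_by_speed_bin(forecast, observed, sodar_speed, bin_edges):
        # Mean (forecast - observed), grouped by the sodar-measured speed bin
        # that forms the x-axis of these graphs.
        bins = np.digitize(sodar_speed, bin_edges)
        return {b: float(np.mean(forecast[bins == b] - observed[bins == b]))
                for b in np.unique(bins)}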


WRF initialized with NAM seems to have about the same bias as WRF initialized with RUC, as seen on the Results and Discussion page. The NAM data itself, however, seems to have a lower bias than the RUC data.






These are the graphs of the Mean Absolute Error (MAE). As was seen with the RUC-based data, the error goes down at the low speeds, in the same region where the bias approaches zero. Then, as the bias rises past zero at the higher speeds, the error increases as well.





These graphs show the error with the bias removed, which helps distinguish between systematic error and random error. As mentioned elsewhere, the bias is removed from the same data from which it is calculated. For forecasts, this might be considered cheating, but it is still useful because it represents the best possible correction that could be expected if bias were accumulated over a long time. It shows the limits of what a bias correction to the data could accomplish.
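A small sketch of that distinction, with made-up numbers: the "in-sample" correction below uses the bias computed from the very data being scored, which is what these graphs do, while a correction usable in a real forecast could only draw on bias accumulated from earlier data.

    import numpy as np

    rng = np.random.default_rng(0)
    obs = rng.uniform(3.0, 12.0, size=200)              # made-up observed speeds (m/s)
    fcst = obs + 0.8 + rng.normal(0.0, 1.0, size=200)   # forecast with a built-in +0.8 m/s bias

    def mae(f, o):
        return np.mean(np.abs(f - o))

    # In-sample correction: bias taken from the same data it is applied to (the best case).
    print(mae(fcst - np.mean(fcst - obs), obs))

    # More realistic correction: bias accumulated over an earlier period, applied to later forecasts.
    earlier_bias = np.mean(fcst[:100] - obs[:100])
    print(mae(fcst[100:] - earlier_bias, obs[100:]))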