Kevin Matthew Nuss (left) and Todd Haynes (right) with a SODAR
This is me on the left and Todd Haynes on the right. We are standing in front of a SODAR, an instrument used to measure wind speeds up to 200 meters above the ground.
Main Research Page

Below is a list of topics, most of which have their own page that goes into more detail. I hope to add more topics as I develop something to say about them, but keeping web pages up to date is not a high priority. Here is what I have to offer so far.

There is an additional set of research pages that discusses results from my wind forecasting research in 2009 at Boise State University. I learned a lot about running WRF in 2009; I ran WRF over 20,000 times that year. There are many graphs and some casual discussion of what I believe the graphs tell us. Several pages branch off from the main page, Results and Discussion. That main page has a better introduction to the methods used and some fuller explanations of the procedures, so it is a good place to start. The site map, in the left sidebar, can be used to see a list of all those results pages.

I have an overview of my research history at this page: My History of Doing Research. The description is mostly a narrative of various aspects of the jobs and job functions. You may find it interesting if you happen to be curious about me, but it is written at a personal level rather than as something useful to your research.

One way or another, people ask me for help with WRF errors. I never have THE answer, but I have developed several things to try, so I made a page for that called WRF Errors - CFL Errors, SIGSEGV Segmentation Faults, and Stopping or Hanging. Besides, if I don't put the information somewhere, I'll forget it when I need it.

Using Google Earth to Display Data

I wrote programs to take the output from WRF and display the information in Google Earth. Google Earth uses a data format called Keyhole Markup Language (KML). To the left is a JPG image from Google Earth showing the kinds of data I create. To see more pictures and download a file that can be run in Google Earth, go to this page: WRF Data in Google Earth. Some people are not aware that Google Earth can also display time-varying data. I use this feature to create wind speed contours (isotachs) that change with time. Being able to zoom in and see what is happening in the context of the terrain is helpful in understanding the model simulations. One interesting side benefit of using Google Earth is that I was able to find wind turbines matching the size of those used in the study area. By adding them as well, I can watch the wind speed contours change with time and location around the very objects that use that wind. I have an additional page with pictures of WRF data displayed in Google Earth. Those pictures show how higher resolution terrain height data helps create grid cells that more closely match the terrain. Here is the link to that page: High Resolution Terrain Data Example.
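To show the time-varying idea concretely, here is a small sketch of how a series of contour snapshots can be tagged with KML TimeSpan elements so Google Earth animates them. This is not my actual program; the function name and the sample coordinates are made up for illustration.

```python
def isotach_kml(frames):
    """Build a KML document whose Placemarks carry TimeSpan tags.

    frames: list of (begin, end, coords) tuples, where begin/end are ISO
    timestamps and coords is a 'lon,lat,alt' string for one contour line.
    """
    placemarks = []
    for begin, end, coords in frames:
        placemarks.append(
            "    <Placemark>\n"
            f"      <TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>\n"
            f"      <LineString><coordinates>{coords}</coordinates></LineString>\n"
            "    </Placemark>\n")
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            '  <Document>\n' + "".join(placemarks) +
            '  </Document>\n</kml>\n')

# one hour-long frame of a (made-up) isotach ring
frames = [("2009-07-08T09:00:00Z", "2009-07-08T10:00:00Z",
           "-115.10,42.90,0 -115.12,42.91,0 -115.10,42.90,0")]
print(isotach_kml(frames))
```

Saved with a .kml extension, a file like this opens directly in Google Earth, and the time slider appears whenever TimeSpan tags are present.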

Comparing Planetary Boundary Layer and Surface Layer Schemes in WRF

Part of my research has been to try various physics options in WRF to find combinations that work best for short-term wind forecasts in a particular study area. I created a poster to present at the WRF Users Workshop, June 23-26, 2009. Click on the picture to the left to download that poster. Unfortunately, this version is slightly different from the one I took to the workshop. That poster had a background picture of the study area; for some reason, after uploading it to this website, it is not viewable upon download. The version offered here has the same data content but no background picture, though it does have a smaller, different picture of the study area embedded in it.

To get more information, both graphs and discussion about the results of the comparisons, go to this page: Comparing WRF Physics Options.

Comparing WRF to the Data Used to Initialize WRF

Another page compares how well WRF did in improving the forecast over what was provided in the RUC and NAM forecasts from NCEP. That information is at WRF vs Initialization Data. Some similar information, but over a longer time period and with more forecasts, is included throughout the Results and Discussion pages previously mentioned.

Using WRF Data to Initialize Fluent

Fluent is a computational fluid dynamics (CFD) program used to design everything from miniature water nozzles to large airplanes; we used it to model wind over terrain. As part of my research, I wrote programs to extract information from WRF output files and use that data to initialize Fluent for high resolution CFD simulations of the areas of interest. Some details can be found here: Initialize Fluent With WRF Data.

A Couple of Thoughts About Using Observation Nudging (fdda)

The RUC and NAM data I used to initialize WRF and to provide boundary conditions for a forecast run obviously needed to include the thousands of observations that become available every hour. These get the RUC and NAM to an initial state consistent with reality. Sophisticated techniques and accumulated analysis judge the quality of each reporting site, both in general and for a particular report. That's great. I feel no need to duplicate that effort by processing the same observations; they just come in as part of the initialization data. However, I have a few observations of my own that I could have included in my high resolution forecast, and I could get them a little sooner and more frequently than RUC and NAM make their forecasts available. Great. I have tools like WRF's fdda observational nudging to handle my local observations. And it works just as advertised. But how much does it really help? In my limited experience, not very much, and precisely because it does work as advertised.

For my area of interest in southern Idaho, RUC and NAM have a wind speed bias lower than the reality reported by my sodar. That's OK. If they were perfect, my higher resolution forecast might not even be needed. The first hour or two of my forecast removes this bias (and goes on to create a small bias to the high speed side). That is not a problem, because it takes an hour or two for the RUC or NAM data to become available, plus a little more time to run my forecast far enough to get high resolution data out. By then, my model's bias is consistently there (and thus removable), and my forecast only degrades slowly with time. The initial bias is gone before the usable data is available.
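For reference, observation nudging is controlled in the &fdda section of namelist.input. A minimal sketch, assuming WRF 3.x option names; the values here are illustrative, not the ones I used:

```
&fdda
 obs_nudge_opt  = 1, 1, 1,               ! turn on obs nudging per domain
 max_obs        = 150000,
 obs_nudge_wind = 1, 1, 1,               ! nudge wind fields
 obs_coef_wind  = 6.e-4, 6.e-4, 6.e-4,   ! nudging strength
 obs_rinxy      = 240., 240., 180.,      ! horizontal radius of influence (km)
 obs_twindo     = 0.6667, 0.6667, 0.6667,! half-width of the time window (hours)
 obs_ionf       = 2, 2, 2,
 obs_idynin     = 0,
 obs_dtramp     = 40.,
/
```

The obs_twindo setting is the "time window of influence" I discuss here; check the WRF User's Guide for your version before copying any of these names or values.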

So I bring in my own observations from a nearby met tower. Voila! The nudging immediately removes the bias from the incoming RUC and NAM data. Great! But does that actually help? Let me explain why it doesn't. How long do I want the time window of influence to last for my met tower? An hour sounds good. Persistence, meaning "just use the previous hour's wind speed as a forecast for the next hour," generally works pretty well, and that is similar to what I get with observational nudging. But persistence gets worse over longer time periods. Using the actual wind speeds from three hours ago as a forecast for this hour's speed is a worse idea, because weather does change. For the same reason, a long time window of influence is not so good for observational nudging.
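The lag behavior of persistence is easy to demonstrate with a toy example. This uses a synthetic autocorrelated wind series, not observed data, so only the shape of the result matters: the persistence error grows as the lag grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic hourly wind speeds with some persistence (an AR(1) process);
# purely illustrative, not observations from any real site
w = np.empty(2000)
w[0] = 8.0
for t in range(1, w.size):
    w[t] = 0.9 * w[t - 1] + 0.8 + rng.normal(0.0, 1.0)

# mean absolute error of "forecast this hour with the value from lag hours ago"
for lag in (1, 3, 6):
    mae = np.abs(w[lag:] - w[:-lag]).mean()
    print(f"persistence at {lag} h lag: MAE = {mae:.2f}")
```

The error climbs steadily with lag, which is the same reason a long nudging time window stops paying off.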

But my single observation, or perhaps a few, only affects my run for the specified time window of influence. When the influence is over, my model generally seems to go back to what it would have been without the observation. And since the incoming weather from the lateral boundary conditions never saw my observations, and my WRF configuration has its own limitations, why would a permanent effect linger in my small, high resolution model? The influence fades, just as advertised. And it should; as mentioned, I don't want a persistence forecast from too many hours ago.

So the reasonable influence from a local observation fades from my forecast almost before the time that usable data from my forecast is available. The observation is approaching uselessness. Not totally, just close. I need to do more research, but I think I may also see a rebound effect from the nudging. That could make the nudging worse than useless if that rebound is in the usually useful portion of my forecast.

I am overstating my case to make a point. And the same argument does not apply to the larger RUC and NAM forecasts themselves, because they and their lateral boundary providers use wider ranging observations, so the incoming lateral boundary conditions do not override the local observations (I hope); they are all part of the same system. And for hindcasts, such as those used to model past air quality problems or to assess wind energy potential from historical data, fdda may be just the thing to keep a local, high resolution model on track and free from the biases of the lower resolution forecasts/analyses.

I had higher hopes that assimilating local observations would greatly improve a local high resolution wind forecast. Eventually I saw that the benefit was minimal. And it should be minimal for the same reasons that a persistence forecast should have a limited time scope.

Custom Changes to WRF Source Code

WRF outputs its results at a runtime-specified interval. Each output is an instantaneous snapshot of the model data. For my research, I wanted wind speed data averaged over a time period: sometimes an hour, sometimes 10 minutes or less. This can be accomplished by having WRF output instantaneous data frequently and then averaging that data, but huge data files get generated that way, even if wind is output separately from other data. A better solution is to change the WRF source code so that instantaneous data is taken from every time step and accumulated over short periods, from which averages can be calculated. The averaging periods and the heights at which to accumulate the data are specified at runtime. And since WRF uses a pressure-based vertical coordinate, and pressure changes over time, the wind speeds have to be interpolated to the fixed heights before being accumulated for output. The page describing this in a little more detail is here: WRF Source Code Changes.
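The accumulate-and-average idea can be sketched outside of WRF. This is not the actual Fortran change, just the logic: every time step, interpolate wind speed from the model's (time-varying) level heights onto fixed heights, add into an accumulator, and emit the mean when the averaging period ends. FIXED_HEIGHTS and AVG_STEPS stand in for the runtime-specified values.

```python
import numpy as np

FIXED_HEIGHTS = np.array([50.0, 80.0, 120.0])  # meters AGL
AVG_STEPS = 6                                  # time steps per averaging period

acc = np.zeros(FIXED_HEIGHTS.size)
count = 0
averages = []

def step(level_heights, wind_speed):
    """Accumulate one time step of a single model column."""
    global count
    # interpolate from the model levels to the fixed output heights
    acc[:] += np.interp(FIXED_HEIGHTS, level_heights, wind_speed)
    count += 1
    if count == AVG_STEPS:
        averages.append(acc / count)   # emit the period mean
        acc[:] = 0.0
        count = 0

for _ in range(AVG_STEPS):
    # in the real model the level heights drift a little each step because
    # the vertical coordinate is pressure-based; these values are synthetic
    step(np.array([40.0, 90.0, 150.0]), np.array([5.0, 7.0, 9.0]))

print(averages[0])   # mean wind speed at each fixed height
```

In WRF itself the accumulation naturally lives in the solver loop, with the accumulator arrays added through the Registry so they can be written out.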

Using High Resolution Terrain Height Data

The terrain height data that comes with WRF has a resolution of 30 arc seconds, which is around 1 km. Since my research uses grid cell sizes of 1 km and even less, I wanted to use higher resolution terrain height data. I was able to find 1/3 arc second data on a USGS site, but had to download it carefully and make some changes before I could use it in WRF. And by displaying grid cell heights in Google Earth, I am able to see how incorporating the higher resolution data makes the grid cells match the terrain more closely. Each grid cell is now an average of data having 10 m horizontal resolution rather than an interpolation of the surrounding 1 km resolution terrain heights. The page describing the process of using this higher resolution data can be found here: High Resolution Terrain Height Data in WRF. If you would like to see how the grid cells generated by WPS follow the actual terrain better when using the higher resolution terrain height data, go to this page: High Resolution Terrain Data Example.
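For anyone attempting the same conversion: geogrid reads custom static data as binary tiles described by a small "index" file. A hedged sketch of what that file might look like for 1/3 arc second (roughly 10 m) elevation data; the coordinates and tile sizes here are illustrative, not the ones from my dataset:

```
type = continuous
signed = yes
projection = regular_ll
dx = 0.00009259
dy = 0.00009259
known_x = 1.0
known_y = 1.0
known_lat = 42.0
known_lon = -116.0
wordsize = 2
tile_x = 1200
tile_y = 1200
tile_z = 1
units = "meters MSL"
description = "USGS 1/3 arc second topography"
```

A matching entry in GEOGRID.TBL then points geogrid at the directory holding these tiles; check the WPS documentation for the exact field names your version expects.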

Some WRF Benchmarks and Profile Results on a Single Computer

I ran some performance tests on a single computer to see how it performed using different compile options for serial, shared memory, distributed memory, and combinations. I also compiled it for a serial run but with profiling options turned on so I could see how different physics and dynamics options contribute to the run time. The results are on this page: WRF Benchmarks and Profiling.

Changes to CALWRF For Use With WRF Version 3

An associate has used my WRF output as input to a program called CALMET. At this website, I downloaded a program called CALWRF that converts WRF output to CALMET input. However, it had two problems that prevented it from reading data generated by version 3 of WRF. Both the entire downloaded zip file and the calwrf.f source code with my changes are on this website's Downloadable Files page.

The first problem caused this error message: "This is not a wrfout file No 3D.DAT will be created." It occurred because the program specifically looks for "OUTPUT FROM WRF V2" in the WRF output data. That check is on line 1458 of calwrf.f; I just removed the "2".

The second problem is that the program looks for an attribute called "DYN_OPT", which is no longer part of the WRF output file. That attribute used to indicate whether the file came from WRF ARW or WRF NMM. The program did not use the information; it just looked for it. So, on line 1425, I had it look for "FEEDBACK" instead. Below are the results of running "diff" on the file before and after:

<       vattnames(3)='DYN_OPT'
>       vattnames(3)='FEEDBACK'
<       if(INDEX(value_chr,'OUTPUT FROM WRF V2') == 0)then
>       if(INDEX(value_chr,'OUTPUT FROM WRF V') == 0)then

Maybe it is common knowledge, but apparently CALMET needs WRF forecasts to span 5:00 AM local time. I do not know the reasons, but it had something to do with the predawn transitions. Since I have been focused on wind, I sometimes turn microphysics off in WRF. This causes the WRF output to not include several moisture variables, even though they are defined in the Registry for output. If you need to do this and have already learned enough about making Registry changes, you can force WRF to output the moisture variables by changing the line in the Registry that starts with "package   passiveqv     mp_physics==0".
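For illustration, here is roughly what that Registry change looks like. The exact list of variables to add (qc and qr below) is my assumption; it depends on your WRF version and on which moisture fields CALMET actually needs:

```
# before: with mp_physics = 0, only qv is carried in the moist 4D array
package   passiveqv     mp_physics==0   -   moist:qv
# after: also carry qc and qr so they appear in the wrfout files
package   passiveqv     mp_physics==0   -   moist:qv,qc,qr
```

As with any Registry change, WRF must be recompiled afterward for it to take effect.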

If I remember right, it took a couple of tries to get my calwrf.inp file right, so here is an example of what I used, in case it helps you:

WRF Output Run
calwrf.lst               ! Log file name
calwrf3.dat              ! Output file name
-1,-1,-1,-1,-1,-1        ! Beg/End I/J/K ("-" for all)
2009070809               ! Start date (UTC yyyymmddhh, "-" for all)
2009070821               ! End   date (UTC yyyymmddhh, "-" for all)
1                        ! Number of WRF output files ( 1 only now)
wrflnk                   ! File name of wrf output (Loop over files)

And I just made a symbolic link named wrflnk to the actual wrfout_d01 file that I wanted it to use.