My History of Doing Research

I would include more pictures of the Alaska work, but even the few pictures I have are in old-fashioned "slide projector" format and are out of state at my parents'. Eventually, I will retrieve them and convert them to digital format. But I have a couple more Alaska pictures on this page: Some Alaska Stories.

Moose Watching

My undergraduate degree was in Wildlife Management from the University of Alaska, Fairbanks. While trying to get my first job with AK Fish and Game, I volunteered with the U.S. Forest Service for a weekend of following a radio-collared bull moose. This was done in Denali National Park in central Alaska. There was a full-time, paid crew assigned to the job, and giving them a weekend off was the goal. We had three teams of two people, plus the supervising researcher. Each team had an eight-hour shift of following the moose, carrying a stopwatch and clipboard, and recording what the moose was doing at the end of each two-minute interval. Christy and I had the 5:00pm to 1:00am shift. It being late spring in Alaska, there was enough light to see all night. Most of the time, it was of course a rather tedious job, but with great scenery. The moose traveled significantly during the evening, so at the end of each shift, the boss used an antenna to find the radio collar and bring in the replacement crew. However, there was an interesting development on the second night. Our bull had joined a rather loosely formed group of bulls, cows, and calves on a hillside. Several moose (or, as the plural is sometimes jokingly given, "moosers") acted spooked. Then a large grizzly bear stood on its haunches a couple hundred meters away. Then it went down and was hidden by the willow bushes. We didn't know where it was. A large moose, like our subject, is about two meters tall at the shoulders. He was able to lift his head and see much better than either Christy or me. So we stayed with our moose, who had become very attentive too, and hoped he would provide advance warning of any danger. That was the only time during our two eight-hour shifts that we did not record the subject as either browsing on willow bushes or chewing its cud. We still recorded his actions, but it was simply that he was attentively watching. Even when walking, the moose was continuously grabbing bites from the willow bushes it was passing.
Apparently, moose make the most of easy foraging during the summer.

During my time off, I took a bus tour of Denali. It was extraordinary. We saw quite a few moose (yawn), a dozen bears, Dall sheep, and mountain goats. I even saw a red fox catch a ptarmigan. I recommend it if you get a chance.

Salmon Management with AK Fish and Game

I spent a couple field seasons working with salmon at the Alaska Department of Fish and Game. Most of the work involved the management function of monitoring how many fish were traveling upstream. The commercial salmon fisheries were managed on a weekly basis by setting how many days fishermen could catch salmon. The primary method used by fishermen was the purse seine, but some used trolling for the larger species of salmon. By keeping counts of how many fish escaped harvesting, a sustainable population could be maintained by limiting the days of fishing the next week.

Salmon counts can be made by flying in a small plane along a river, estimating the number in the stream, and comparing to counts in both the same and previous years. This is much less difficult than it sounds. I did a little bit of this with my boss Alex near Yakutat. But primarily, I was stationed at a weir, which is basically a picket fence across the river made of vertical, metal electrical conduit. There was nothing electrical about it; conduit is just a cheap source of properly sized pipe. As needed, an opening is made in the weir and the salmon are counted as they pass through. I worked at four such sites, and I remember counts of 33,000 at one and 133,000 at another. Salmon rivers during the spawn are amazing. To make the most of the weirs and the weir operators, other data were also collected - different kinds of data and different sampling frequencies at different sites. Some salmon are captured, and length, weight, gender, and scale samples are collected. The age of a salmon can be determined by the number of rings that have developed on its scales, a little bit like tree rings. There doesn't seem to be much variance in pink salmon, but with the other four Alaskan species, not all the individuals stay out in the ocean the same number of years. By taking a scale from a very specific location on the fish and sending it back to a lab in town, a fish's age can be determined and changes in the population's age distribution can be monitored.

Kevin Matthew Nuss with a Helley-Smith sampler
 No! The lights are not on my hat; they are on a pole behind me.

Trap Bay

During one autumn, I helped with some field research for the U.S. Forest Service (USFS) at Trap Bay on Admiralty Island for about 10 weeks. Earlier that summer, I had met and provided some assistance to a few USFS folks. This research was related to hydrology. They needed a couple of us to stay at a remote cabin so that gravel movement measurements versus stream height could be made as the river rose. Unfortunately, this meant that we mostly worked in the rain and at night. We did not have electricity at our cabin, but the work site had a small generator to light our work. It was decades later before I came across someone else who had used, or even knew what, a Helley-Smith was; that's the net-like device in the picture to the left that is used to capture the moving gravel.

We did have several other hydrological measurements that we made at regular weekly or 10-day intervals. We determined erosion rates by measuring the length of fixed rods protruding from a stream bank. We measured maximum ground water levels by how far cork shavings were pushed up in a glass tube that was placed in a pipe in the ground. We collected allochthonous (foreign to the creek, e.g., leaves, stems, insects) material in trays suspended over a creek. We monitored and maintained temperature, humidity, and precipitation instruments. We read a lot because there was not enough to keep us very busy, but they needed someone to be there when the rain came and the creek rose.

One project was to take core samples of the streambed. Since the stones can be various sizes and would be displaced if a drill were used, a rather interesting method was employed. It involved pounding a narrow copper pipe into the streambed. A wand with a nozzle from a compressed CO2 canister was inserted into the pipe, which froze the stones and gravel to the pipe. A hand winch on a tripod was used to extract the pipe. Then, with the pipe and gravel laid on top of collector bins, a blow torch was used to melt the ice. Each bin represented a different depth and, back at the lab, the streambed material was categorized for each depth.

The Research Vessel Kittiwake
Prince of Wales Island

I was recruited for one September project to recover tags from salmon. The project was part of a study to define which areas of the coastal waters were properly part of the Canadian or U.S. salmon fisheries. Earlier in the summer, purse seines were used to catch Pink salmon at various locations off the coast of southern Southeast Alaska and nearby Canada. These salmon were tagged with Peterson tags, which are numbered, nickel-size plastic disks. And of course, the fish were then released. In the follow-up portion in which I participated, Fish and Gamers visited various salmon streams to count the fish and recover and count any tags. This ties the proportion of the population in a spawning river to a location off the coast. So, with five other Fish and Gamers, a captain, and an engineer/cook, I spent two weeks on a 32 ft research ship that had been converted from a WWII tugboat used to move troop barges. We circled Prince of Wales Island, surveying dozens of streams and rivers. The ship carried three 13 ft boats (Boston Whalers) with 25 horsepower motors. Daily, we went out in pairs, each pair to one or more rivers in the area. At night, we slept on the ship while the captain took us to the next area of the island.

At each river, we made estimates of the number of live fish and the number of dead. The tags were bright red, so whenever we saw one, we tried to recover it by spearing the fish with a five-pronged, 8 ft spear. And we did keep count of the few tags that we saw but could not recover. Aside from the spears, each of us also had to carry a weapon. Thankfully, Prince of Wales is a black-bear-only island, so we could get by with 12 gauge shotguns with slugs and shot. They are lighter than the large caliber rifles needed in brown bear areas. With the daily sea travel, the pump shotguns were also easier to keep in working order than bolt action rifles. We cleaned them every evening and used heavy protective oil, but by the next evening, they had already developed superficial rust because of the salt water and humidity.

Salmon rivers are surprising to those not familiar with them. In one very small creek, the fish were so thick that some got caught under my feet as I walked in hip waders. In another location, we had to bring in a helicopter because the river was too big and too long to survey in one day. It is slow going to walk in and along rivers in rough country. The helicopter dropped off and picked up each pair of us at different stretches of the river. Kirk and I had a 5 km stretch. I don't remember how many tags we recovered (maybe several dozen), but Kirk estimated 90,000 untagged, live Pink salmon in the river and I estimated 110,000 dead ones. The counts amaze me every time I think of them.

Hugh Smith Lake

One of the locations at which I worked a weir is worth singling out: Hugh Smith Lake. Even in recent historical times it had been a fjord, but an earthquake had caused a landslide, blocking off the outlet and turning it into a freshwater lake. If I remember right, it was approximately 5 km long, 1 km wide, and 200 m deep, with the bottom third still being salt water. Our only access to our cabin on the lake was by float plane. We had small motorboats, both on the lake and on the ocean. The outlet river was perhaps 30 m long. The primary salmon species was Coho. We saw a few Sockeye/Red salmon, but merely dipped them from the trap directly to the lake side of the weir. With the Coho, we caught, measured, tagged, and took a scale sample from every one entering the lake. We had a trap formed in the weir that allowed us to dip net and anesthetize every fish. The anesthesia was a plastic barrel of water with Tricaine added. Coho are large and can get injured during handling if they put up a struggle. To determine the time they spent in the lake before spawning, we read the tags from the same fish after they had crossed the lake and were in the feeder rivers. In the rivers, which were shallow, we used dip nets to catch them. While in the nets, they were generally calm enough for us to read the numbers from the t-bar tags. Some of the fish had been tagged as fry leaving the lake, with coded wires inserted into their heads. A specified percentage of the males with such wire tags were sacrificed as they returned, and the heads were sent to the lab for some kind of analysis.

At Hugh Smith Lake, we also fertilized the entire lake on a weekly basis. The objective was to determine whether salmon populations could be augmented if more nutrients were available in the cold, deep Alaska lake. The assumption seemed to be that lack of nutrients limited the number of young salmon fry that survived long enough to migrate to the ocean. I don't know how or when they were delivered, but we had a supply of plastic barrels filled with liquid fertilizer. A small gasoline pump was used to pump the diluted fertilizer and deliver it to the lake with a boom that was suspended off the stern of the motorboat. We had a predefined crisscross pattern on the lake to help create even distribution.

Additionally, during the three months I was stationed there, we made a few sonar surveys of fish in the lake. This was done at night. Before dark, we would place several battery-powered flashing lights on the shore at specific locations. After dark, we made transects across the lake, from light to light, towing a sonar that could detect even small fish. These were used to determine the count, distribution, and depth of salmon fry. And because of the presence of another fish, a small, non-migrating species called the stickleback, we used a trawl net to get a population sample to determine the proportions of the species.

Beowulf Cluster at Boise State University

Cluster Related Research

While I was taking computer science classes at Boise State University, a professor there hired me to help with specifying, purchasing, implementing, and administering a Beowulf cluster. At the time, it was a respectable "Commercial Off The Shelf" (COTS) cluster. It had 64 nodes, each a tower PC. Each motherboard had two hyperthreaded processors, for a total of 128 physical Intel Xeon processors that appeared as 256 logical ones. It had only gigabit Ethernet connectivity. This was before there was a lot of good support for clusters, so there was quite a bit of work getting it set up: physically, with firmware (BIOS settings), and with software. By current standards, this is an unimpressive setup, but at the time it was quite a step forward for the university.

After the hardware was set up and the OS software installed, there were benchmarks of processor and network throughput. Aside from the implementation, I also helped convert existing codes developed or used by researchers at BSU. In general, it is only recently that many researchers have moved to parallel computing. Now, most people are aware of the possibility of parallelizing computationally intensive computer code, and many have made the move to do so. And of course, the added potential of using GPUs has arisen and is gaining more and more acceptance as a way to boost processing power.

Miscellaneous Weather Related Research

When one of the BSU professors wanted to use the cluster to run WRF, a weather modeling code, I was asked to help. Initially, the main problem was getting it to compile on the installed Linux system. There were a lot of incompatibility issues with the WRF code, the WPS code, and the libraries needed for them, e.g., NetCDF, zlib, and PNG. The scripts that came with WRF to help with the compilation were not working well with our setup. At the time, I was new to the configuration and compile scripts that are common in Linux. I learned a lot about manipulating them and about library versioning. Installing other codes was just as difficult at that time.

Eventually, WRF was suitably set up and a few runs were made to model the winter temperature inversions that are common in Boise, ID. Vis5D was used to display the time sequence of development at a regional weather modeling consortium meeting and at an ID Dept. of Environmental Quality meeting. Inversions cause pollution buildup by trapping the city's emissions in the valley containing Boise. The university had bought a Scintec SODAR, co-owned with Washington State University. We wanted it for wintertime inversion research, and WSU wanted it for summertime research/regulation of burning crop stubble. It was a difficult instrument to maintain and to use. One limitation was its need for external power, which required a location near buildings. But it created too much noise to be near occupied buildings.

Wind Research at BSU

The first research I did that focused on wind for wind energy was actually some contract work that I did with a professor at BSU. This was the same professor for whom I set up and ran WRF for temperature inversions. Several decades earlier, he had done research in an area of Wyoming in which a wind developer was now interested. The professor had used instrumented aircraft to make measurements of the "hydraulic jumps," also called gravity waves, in certain mountain passes. The wind developer thought that 1) the mountain passes might be a good location and 2) the hydraulic jumps might be a help by increasing wind speeds. He wanted us to model the phenomenon. We also modeled several potential sites of interest, creating turbine-height wind speed estimates based on simulations of random months picked from 30 years of reanalysis data. We started off using NCL to plot the WRF-generated NetCDF files, but eventually we turned to outputting results as KML so Google Earth could be used to interactively view averaged data.

A grant was obtained to do further research in wind forecasting. It was only for a year but greatly added to my knowledge of weather modeling. I had read enough to run WRF adequately, especially with added meteorological expertise from the professor, but in trying to optimize the wind forecasts, a lot of experimentation increased my modeling abilities. Additionally, I was able to attend the WRF tutorial. I am so glad to have had some experience with WRF beforehand. The presented material itself did help, but having the experts there to answer accumulated questions was probably the most beneficial. I ran over 20,000 simulations during that one-year grant, trying to find combinations of physics schemes and grids that gave model results that matched our SODAR data. By then, BSU had acquired a Second Wind Triton SODAR. It was a great tool, especially since it had solar-cell/battery power, and the support people were helpful. Most of the WRF runs were 9-hour forecasts, starting once every 11 hours during the 8 months of available SODAR data. I had been requested to limit both the number of running jobs and the number of jobs in the queue so others could have easy access to the cluster. So as you can imagine, extensive scripts and programs were developed to generate the runs, monitor their progress, and post-process the results for statistical comparison to SODAR data. The SODAR itself was situated near a production wind farm, and part of the grant involved converting wind forecasts to power output and comparing to actual wind farm output. Most of the data was statistical in nature, with line graphs, bar charts, and scatter points, but I was able to expand my abilities in generating KML data and displaying it on Google Earth. Google Earth provides topographical and surface details and thus more context for the data. You can see more information about such things on the WRF Data in Google Earth page and the WTOOLS page.
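The throttling part of that job machinery is simple enough to sketch. This is a hypothetical Python outline (the real scripts are long gone; `submit` and `is_done` stand in for the cluster's actual scheduler commands) of keeping no more than a fixed number of jobs outstanding while working through a long list of runs:

```python
import time

MAX_ACTIVE = 8  # assumed courtesy limit on simultaneously running/queued jobs

def submit(run_id):
    # stand-in for a real scheduler submission call; returns a job id
    return f"job-{run_id}"

def throttle_submit(run_ids, is_done, poll_delay=0.0):
    """Submit runs, never exceeding MAX_ACTIVE outstanding jobs.

    is_done(job_id) stands in for polling the scheduler's queue.
    Returns job ids in submission order.
    """
    pending = list(run_ids)
    active, order = [], []
    while pending or active:
        # drop jobs the scheduler reports as finished
        active = [j for j in active if not is_done(j)]
        # top up to the courtesy limit
        while pending and len(active) < MAX_ACTIVE:
            job = submit(pending.pop(0))
            active.append(job)
            order.append(job)
        if active:
            time.sleep(poll_delay)  # a real script would wait before re-polling
    return order
```

A real version would shell out to the scheduler's queue-listing command instead of calling `is_done`, and sleep a minute or so between polls.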

There are two program changes about which others from around the world have contacted me. One is having WRF output wind speeds as an average over a specified time period, interpolated to fixed heights above ground. I made these changes to better match the ten-minute, fixed-height SODAR data. Normally, WRF outputs most variables, and winds in particular, as instantaneous values for the most recent time step. I wanted to create an average over all time steps since the last output. Also, WRF outputs at pressure-level heights. Since pressure changes with time, at each time step I had to interpolate to the fixed heights that our SODAR uses before accumulating for the average. More information about what I did can be found here: WRF Source Code Changes. Another change that was popular, but about which I haven't heard recently, is a change to the program that converts WRF output to CALMET input. A colleague ran CALMET as part of the wind research, and I had to change the conversion program so it would work with version 3 of WRF output. The program had some version 2 checks in it and also did not account for another minor version 3 change. Those changes, if anyone still has use for them, are described at the bottom of the Main Research Page.
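As a rough illustration of those two steps (this is not the actual Fortran change inside WRF), here is the idea in Python: at every model time step, interpolate the instantaneous wind profile from the model's time-varying level heights to fixed heights above ground, accumulate, and emit the mean at each output time. The particular heights and the linear interpolation are assumptions for the sketch:

```python
FIXED_HEIGHTS = [40.0, 60.0, 80.0]  # m AGL, chosen to match a SODAR's levels

def interp_to_height(levels, winds, h):
    """Linearly interpolate wind speed to height h (m AGL)."""
    for (z0, w0), (z1, w1) in zip(zip(levels, winds), zip(levels[1:], winds[1:])):
        if z0 <= h <= z1:
            return w0 + (w1 - w0) * (h - z0) / (z1 - z0)
    raise ValueError("height outside model levels")

class WindAverager:
    def __init__(self):
        self.sums = [0.0] * len(FIXED_HEIGHTS)
        self.count = 0

    def add_step(self, level_heights, wind_speeds):
        # level heights change from step to step because pressure changes,
        # so interpolation must happen before accumulation
        for i, h in enumerate(FIXED_HEIGHTS):
            self.sums[i] += interp_to_height(level_heights, wind_speeds, h)
        self.count += 1

    def output(self):
        # called at each output time; returns the means and resets
        means = [s / self.count for s in self.sums]
        self.sums = [0.0] * len(FIXED_HEIGHTS)
        self.count = 0
        return means
```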

Island Park Instrumentation Towers
We constructed two of these 24 m towers with eddy covariance, energy flux, soil moisture, sonic anemometer, and other related hydrology instrumentation.

Hydrology Research at BSU

After the wind research grant, I began work on some hydrology-related research at BSU. This was part of a multi-year grant, but I was not a part of it at the beginning. Some of the job was to provide computer support for others working on the grant, but I was also able to use my WRF experience. One of the new tools I learned was Generic Mapping Tools (GMT), which is a common open source tool used for publication-quality figures. Initially I used gnuplot for several line and scatter plots, but GMT has similar capabilities, so I began to use it for all figures. After I developed the techniques and some scripts, I helped others in our group switch to GMT; our boss was really big on that tool.

The WRF-related work focused on adding irrigation of croplands to the Noah land surface scheme in WRF. It took significant research to determine a few suitable approaches to implementing irrigation, but several were coded and tried. These WRF runs were approximately 15-month simulations so that the effects of irrigation could be quantified for an entire growing season. The additional months were "spin-up" to simulate the continuing soil moisture effects from the previous growing season. In general, the snowmelt between growing seasons was enough to replenish the soil moisture, but we didn't know that until we modeled it. One thing that greatly simplified the implementation of irrigation was the fact that our area of interest was in arid southern Idaho, where about 95% of the cropland is irrigated. Older WRF land use data distinguishes between irrigated and non-irrigated cropland, but the newer data does not. For southern Idaho, I just had the model irrigate all cropland. Some of the results of adding cropland irrigation to WRF can be seen here: WRF_Users_Workshop_Presentation, 2011. At the last minute, my boss added some esoteric slides to the presentation that came from a similar but different set of runs. I made the original set of presentation slides, but these pages were later added: 3-5, 10, 11, 17-20, and 31. And there are some more "backup" slides at the end that were removed.
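The irrigate-all-cropland simplification amounts to a one-line test inside the land surface loop. A toy Python sketch of the idea (the category name, water depth, and cell layout are all made up for illustration; the real change lived inside the Noah scheme's Fortran):

```python
CROPLAND = "cropland"  # stands in for the land-use category code in the data

def apply_irrigation(cells, depth_mm, growing_season=True):
    """Add depth_mm of water to the soil moisture of every cropland cell.

    Since the newer land-use data no longer separates irrigated from
    non-irrigated cropland, every cropland cell is treated as irrigated.
    """
    if not growing_season:
        return cells
    return [
        dict(cell, soil_moisture_mm=cell["soil_moisture_mm"] + depth_mm)
        if cell["land_use"] == CROPLAND else cell
        for cell in cells
    ]
```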

A colleague was using the Noah Land Surface Model (LSM) uncoupled from WRF. That model uses external weather data and runs the LSM without the feedback to the weather that WRF has. I researched and coded some irrigation schemes, but my colleague developed a scheme that was more suitable for the uncoupled version. By looking at the external forcing data, which was 32 km reanalysis data, we could tell that it classified all the cropland as grassland or shrubland. This resulted in temperatures that were too high, so the model tended to evaporate too much water, and thus the model used more than the actual amount of irrigation water. The colleague was new to many of the details of weather data and Fortran programming, so I provided support as both advice and debugging/compilation help.

I did some work with a hydrology program named SWAT. Mostly I tracked down and fixed two problems. The first was the limit on soil-vegetation units in the model. A study area is divided into areas, usually sub-basins of a watershed. Each sub-basin has units, not specified geographically, that each contain a different combination of soil type and land use. The maximum number of these units was hard-coded in the program. I found the necessary places to increase this number and figured out how to recompile the program. I don't fully understand the process, but the second bug, for which I created a workaround, had to do with temperature data. Apparently, there is a way to provide temperature data in a spreadsheet format. A program for which I didn't have source code converted it to a specialized, flat text format. However, certain temperatures were formatted incorrectly during the conversion. These were temperatures very near zero degrees Celsius. If a few of these misformatted temperatures appeared on a single line of the file, they were misread by the program, without any indication of error. If there were more than a few misformatted temperatures on a line, then the program would die with an error. I wrote a Java program that would read the temperature file, look for those misformatted temperatures, and output a corrected temperature file.
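The exact file layout and the exact nature of the misformatting are lost to me now, so this is only a hypothetical reconstruction of the repair idea (in Python rather than the original Java): scan each line, find near-zero temperatures written in a form the reader mishandles (here assumed to be values missing their leading zero, like "-.5"), and rewrite them in normal form:

```python
import re

# matches ".5" or "-.5" but not the "." inside a well-formed "1.5"
BAD_TEMP = re.compile(r"(?<![\d.])(-?)\.(\d+)")

def fix_line(line):
    """Rewrite near-zero temperatures that lack a leading zero."""
    return BAD_TEMP.sub(lambda m: f"{m.group(1)}0.{m.group(2)}", line)

def fix_file(in_path, out_path):
    """Read a flat text temperature file and write a corrected copy."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            dst.write(fix_line(line))
```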

I made several sets of changes to VIC, another hydrology model. Others determined that the Shuffled Complex Evolution (SCE) method was a good way to calibrate the parameters for VIC. So for the first change, I wrote the code that implemented the SCE methodology by making the appropriate modifications to VIC's input files, making the appropriate calls to run VIC, and collecting and comparing the results to measurements. Since I wanted it to be easy to use, it was quite an extensive bit of programming. I was pleased with the results. But since SCE uses a lot of runs to home in on the optimal parameters, we needed a way to speed up VIC. So for the second change, I created a parallel VIC. In case you are unfamiliar with the concept, that means that I made the program changes to allow multiple computers to work on a single VIC run by dividing up the work between them. The parallelization was actually pretty simple, but it did take quite a while to study VIC and make sure my approach was reasonable for that program. The third change to VIC was for a colleague who was working on combining VIC, a surface model, with MODFLOW, an aquifer model. A major problem with combining those two models is that VIC performs its calculations in a different order. It calculates all the changes for a single cell for all the time steps before moving on to the next cell. But MODFLOW makes calculations for all the cells for one time step before moving on to the next time step. So the order in which VIC does things needed to be changed. If one is looking at the code, it seems like adding data structures for every cell, instead of one at a time, would be enough to make the change in the loops. But VIC uses two not-so-good techniques that hide problems. The first is its use of the vegetation library. Since it is called a library, one might assume that it is static and can be used by all cells. But the input data for a cell can change the library! So a copy of the library needs to be made for every cell. The second problem is that VIC uses "static" variables in its procedures. Some of these hold information for a cell from one time step to another. I made a struct to hold all such variables and made a copy of it for each cell. The code that previously referenced the static variables now references the corresponding variable in the current cell's struct. It was not trivial to do, but I was able to make all the needed changes and test them within a few days. For the data set I was provided (2346 cells with daily time steps for 5 years), I got identical results from the pre- and post-change programs. I also developed some code to make the passing of data between VIC and MODFLOW easier. It reads some of the configuration files of both programs to get units and options and stores them in Fortran modules; MODFLOW is written in Fortran, VIC in C. And the new routines make unit conversions and fill arrays. These changes did not combine the two models but were meant to help my colleague do so.
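Stripped of everything VIC-specific, the calibration wrapper was a write-run-score loop: put candidate parameters into the model's input files, run the model, and score the output against measurements. In this Python sketch, a naive random search stands in for SCE itself (which evolves shuffled complexes of candidate points), and `run_model` is a toy stand-in for editing VIC's inputs and invoking it:

```python
import random

def run_model(params):
    # toy stand-in for editing VIC's input files and running the model;
    # here the "model" is just a line a*t + b over ten time steps
    a, b = params
    return [a * t + b for t in range(10)]

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def calibrate(obs, bounds, iters=500, seed=0):
    """Search the parameter space for the lowest-error parameter set."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(iters):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        err = rmse(run_model(params), obs)
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```

The real wrapper replaced the random draw with SCE's complex-shuffling logic and replaced `run_model` with file edits plus a VIC invocation, but the surrounding plumbing had this shape.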

The picture up and to the left is from some field work that became part of this job. To study moisture in the soil and the air, three sites were identified, and eddy covariance, energy flux, soil moisture, and other instrumentation was installed. The grassland and shrubland locations were done before my time. I made a few trips to Island Park, ID to help with the forested site. Because of the trees, those two towers needed to be 24 m tall, which was quite challenging. Many people worked on the construction of the towers and everyone did whatever was needed, but I was the primary "mule" on the ground who hauled everything up using a rope and a single pulley. The pulley was attached to the ever-rising top of the tower. My job was hard work, but I suspect it was even harder on the people who had to maneuver all that steel once I got it up to them.