Getting started using WDSS-II for research

Note that this page deals only with research use. The free version of WDSS-II has restricted use. If you wish to use WDSS-II in real time, you should license a version from the University of Oklahoma.
Currently, we have versions of the software for Linux 2.4 (such as Fedora Core 1 and Red Hat Enterprise 3) and Linux 2.6 (such as Fedora Core 2 and Red Hat Enterprise 4). The WDSS-II tar file you downloaded contains tools and algorithms for interacting with, and building, weather data sets. For more information on WDSS-II, please see the WDSS-II webpage.
- Software Installation
- Obtaining radar data
- A note on data formats
- A note on using scripts
- Processing WSR-88D data
- A note on rearranging index files
- Creating cartesian lat-lon grids from data from one or more radars
- Lightning data
Software Installation

- Download the software to your local machine.
- Create a directory and untar the file there:
- mkdir /home/username/WDSS2
- cd /home/username/WDSS2
- tar xvfz <name-of-downloaded-file>
This creates three subdirectories:
- bin: this contains all the algorithms and tools that you can run
- lib: this contains all the libraries to which the binaries are linked.
- w2config: this contains configuration information for the software.
Then set the following environment variables (e.g. in your shell startup file):
- export PATH=/home/username/WDSS2/bin:$PATH
- export LD_LIBRARY_PATH=/home/username/WDSS2/lib:$LD_LIBRARY_PATH
- export W2_CONFIG_LOCATION=/home/username/w2config:/home/username/WDSS2/w2config
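Put together, the settings above can live in your shell startup file. A minimal sketch, assuming the install location used above (the WDSS2_HOME variable is just a convenience introduced here, not something WDSS-II itself requires):

```shell
# Add WDSS-II to the environment (adjust the install path to your machine).
WDSS2_HOME=/home/username/WDSS2
export PATH=$WDSS2_HOME/bin:$PATH
export LD_LIBRARY_PATH=$WDSS2_HOME/lib:$LD_LIBRARY_PATH
# Personal config overrides are searched before the stock w2config:
export W2_CONFIG_LOCATION=/home/username/w2config:$WDSS2_HOME/w2config
```

After sourcing this, the WDSS-II binaries (ldm2netcdf, w2merger, wg, etc.) should resolve directly from your PATH.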
Obtaining radar data

Radar data can be ordered from NCDC. Download and uncompress the data: use the command tar xvf filename.tar, then uncompress *.Z . These data are in Level-II format. To save space, you may want to recompress the files with gzip * or bzip2 * -- WDSS-II tools can read data compressed with gzip or bzip2 directly.
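As a concrete sketch (the directory and the order filename are hypothetical):

```shell
# Unpack a Level-II order from NCDC; the tar filename is made up for illustration.
mkdir -p /home/username/downloadeddata
cd /home/username/downloadeddata
tar xvf NCDC_order.tar    # extract the individual volume files
uncompress *.Z            # the volumes arrive .Z-compressed
# Optional: recompress to save disk -- WDSS-II reads gzip/bzip2 directly.
gzip *
```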
A note on data formats

The Level-II radar files can be converted to netcdf using the tool 'ldm2netcdf' that is part of the WDSS-II distribution. See the section on "Creating Netcdf Products from Level-II files" for details (see also the data format documentation). You can read these netcdf files into Matlab, C, Java, Fortran, Perl, Python, etc.; you can find a list of these interfaces at Unidata's NetCDF page. Remember that the files are probably gzipped: unzip them using "gunzip name-of-file" before attempting to use these NetCDF interfaces. Also, WDSS-II by default stores data in a "sparse-grid format" where the data are run-length encoded; general-purpose netcdf tools may not be able to handle this. You can tell WDSS-II not to use sparse grids by setting a flag in the file w2config/misc/dataformat. The WDSS-II display will automatically uncompress on the fly, so you don't have to run gunzip to view the data using wg.
You can view the structure of a NetCDF file using the tool "ncdump.pl":
- ncdump.pl name-of-netcdf-file
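For example, to look at one of the converted files (the directory layout and filename here are hypothetical; WDSS-II writes the files gzipped by default):

```shell
# Inspect the structure of a WDSS-II netcdf file with ncdump.pl.
cd /home/username/datacase/case_1/Reflectivity/00.50
gunzip 20020506-213434.netcdf.gz    # general netcdf tools need the unzipped file
ncdump.pl 20020506-213434.netcdf
```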
A note on using scripts

Here is a script for running algorithms. Follow the instructions in the script itself on how to run it and how to modify things.
The script in this case was written for a student to study whether cleaned up reflectivity data (cleaned up using a neural network) improved the performance of two algorithms -- w2segmotion and netssap. To do that, we need to generate both the raw data and the edited data, then run the two algorithms on both data streams. That is what the script does.
- converts the original radar data to netcdf and dealiases the velocity
- converts the LB (real-time queue) to an archive case (XML file). This is required so that the algorithms that follow will stop and exit; if they read data from an LB, they will never exit, since they will be in real-time streaming mode.
- runs the neural net to clean up the reflectivity data and produce ReflectivityQC and ReflectivityQComposite
- runs w2vil to generate the ReflectivityMaximum product
- runs segmotion and netssap on the unedited stream and puts the output in a subdirectory called "unedited"
- runs segmotion and netssap on the quality-controlled stream and puts the output in a subdirectory called "qc"
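The steps above could be sketched as a single shell script. The flag patterns follow the examples elsewhere on this page; all paths are examples, and the exact options for w2vil, w2segmotion and netssap (including which product each one tracks) are assumptions that should be checked against each tool's usage message (type the tool name and <ret>):

```shell
#!/bin/sh
# Sketch of the processing chain described above; all paths/flags are examples.
CASE=/home/username/datacase/case_1

# 1. Convert Level-II to netcdf, dealiasing velocity as we go (-d)
ldm2netcdf -s KTLX -i /home/username/downloadeddata -o $CASE -a -1 -p KTLX2002 -d

# 2. Convert the real-time fam index to an archive-case XML so later tools exit
replaceIndex -i $CASE/code_index.fam -o $CASE/code_index.xml

# 3. Neural-net cleanup -> ReflectivityQC, ReflectivityQComposite
w2qcnn -u -i xml:$CASE/code_index.xml -o $CASE

# 4. ReflectivityMaximum product
w2vil -i xml:$CASE/code_index.xml -o $CASE

# 5. Run the two algorithms on both streams, into separate subdirectories
w2segmotion -i xml:$CASE/code_index.xml -o $CASE/unedited
netssap     -i xml:$CASE/code_index.xml -o $CASE/unedited
w2segmotion -i xml:$CASE/code_index.xml -o $CASE/qc
netssap     -i xml:$CASE/code_index.xml -o $CASE/qc
```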
Level-II to unedited Netcdf files

Run the program ldm2netcdf:
- Type: ldm2netcdf <ret> to get a list of options
- You will provide an input directory and an output directory, and indicate that you are processing archived data. With the -s option, you provide the radar name. Example:
- ldm2netcdf -s KTLX -i /home/username/downloadedata/subdirectory -o /home/username/datacase/case_1/ -a -1 -p KTLX2002 --verbose
- Depending on the filenames, you may need to change the prefix option.
- The raw reflectivity files will be in a subdirectory called Reflectivity
- The raw velocity files will be in a subdirectory called
Getting dealiased velocity data
To get edited velocity data (required if you need shear/rotation products, etc.), you will also need to add the -d option when you run ldm2netcdf.
- The edited velocity files will be in a subdirectory called
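Concretely, the earlier ldm2netcdf example with dealiasing turned on would look like this (same hypothetical paths as before):

```shell
# As before, but -d additionally produces dealiased (edited) velocity.
ldm2netcdf -s KTLX -i /home/username/downloadedata/subdirectory \
           -o /home/username/datacase/case_1/ -a -1 -p KTLX2002 -d --verbose
```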
Getting cleaned up reflectivity data

To get cleaned up reflectivity data, you will need to run the algorithm w2qcnn (qc = quality control).
- Type: w2qcnn <ret> to get a list of options
- You will be providing an input URL (to a directory called code_index.fam that was created by ldm2netcdf) and an output directory. Example:
- w2qcnn -u -i /home/username/datacase/case_1/code_index.fam -o /home/username/datacase/case_1 --verbose
- In the above example, I am sending the output of qcnn to the same directory as the input, but you don't have to.
- The cleaned up reflectivity data will be in a directory called ReflectivityQC
- Note: If you use code_index.fam as the input, you are using something developed for real-time use, and therefore the program will not actually exit. You can stop the program by typing Ctrl-C (look at top and make sure that the program is done first). Alternatively, you can convert the real-time fam index into an archive-case XML file using replaceIndex (see the description of replaceIndex below).
Creating Azimuthal Shear and Rotation Products

To create azimuthal shear and rotation products, run a program called w2circ on the dealiased (edited) velocity products.
- Type w2circ <ret> to get a list of options.
- This is similar to running w2qcnn, so read the comments there.
- w2circ -i /home/username/datacase/case_1/code_index.fam -o /home/username/datacase/case_1 --verbose
- The required data will be in subdirectories called AzShear and Divergence
Running the Single-Radar Severe Storms Analysis Package (SSAP)

The Severe Storms Analysis Package (SSAP) includes the following algorithms that will run on the radar data:
- Storm-cell identification and tracking (SCIT, also called the Cell Table)
- Hail Detection Algorithm (HDA)
- Mesocyclone Detection Algorithm (MDA)
- Tornado Detection Algorithm (TDA)
- cd WDSS2
- mkdir netssapdat
- cd netssapdat
- Download ssapdat.tgz into that directory.
- Untar the file (tar xvfz ssapdat.tgz)
- You should have a number of .dat files
- Whenever you run "netssap" you need to be in this directory.
- Type: replaceIndex <ret> to get a list of options
- You will provide the input code_index.fam and get out an XML file. You don't need to do any replacement. Example:
- replaceIndex -i /home/username/datacase/case_1/code_index.fam -o /home/username/datacase/case_1/code_index.xml
- You can use this XML file as the input to an algorithm by specifying its location in the -i for that algorithm.
Database description

In the XML database, the "selections" tag provides a unique name for each piece of data, and the "params" tag provides the location of the netcdf file in which the data can be found. Since the XML file is sorted, you can use it as the input to your program if you need a list of data files.
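For instance, a small helper to pull the contents of the "params" entries out of the index from the command line. This is a sketch: the function name is made up, and it assumes each params element sits on a single line and simply strips the surrounding tags without further parsing.

```shell
# list_index_params: print the contents of every <params> element in an
# archive-case XML index file (one element per output line).
list_index_params() {
    grep -o '<params>[^<]*</params>' "$1" |
        sed -e 's/<params>//' -e 's,</params>,,'
}

# e.g.: list_index_params /home/username/datacase/case_1/code_index.xml
```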
You will also need to pre-create a cache for the combination of the radars and grid dimensions using createCache. This way, the actual combination will be at real-time speeds. You don't have to do this if you are processing an archive case from a single radar -- the cache will be created on demand. Just be aware that the very first time (depending on the size of your grid), the merger will take much longer than usual. The next time you run things on the same grid, it should be faster.
To summarize: if you are doing a multi-radar case, you need createCache (run once beforehand) and w2simulator, and you then need to run w2simulator, w2merger and w2segmotionll simultaneously.
createCache -- pre-create computations for radar/grid combinations
- Type: createCache <ret> to get a list of options.
- Example: createCache -i KTLX -t "37 -100 20" -b "30.5 -93 1" -s "0.05 0.05 1" --verbose
- This program will take a long time because it will have to do the computations for all the possible elevation angles.
- Repeat for every radar in your domain.
- Make sure that the domain you provide with the -t, -b and -s options is exactly the same domain you will later provide to w2merger.
- The cached domain is in ~/.w2mergercache. A 20x400x400 grid will take approximately 1 GB of hard-drive space. You can reduce the storage requirement by limiting the VCPs with the -v option.
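Since the cache must be built for every radar over exactly the grid the merger will use, a small loop helps keep the numbers consistent. The radar list is an example; the grid values are the ones from the createCache example above:

```shell
# Build merger caches for several radars over one common grid.
TOP="37 -100 20"        # -t value (as in the example above)
BOT="30.5 -93 1"        # -b value
SPACING="0.05 0.05 1"   # -s value
for RADAR in KTLX KINX KSRX; do
    createCache -i "$RADAR" -t "$TOP" -b "$BOT" -s "$SPACING" --verbose
done
```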
w2simulator -- synchronization

To run archived data through a multi-radar process, the files on disk have to be fed to the process in "real time".
- Type: w2simulator <ret> to get a list of options. (Very old builds of WDSS-II used a different name for this tool.)
- Provide inputs and an output directory. For example:
- w2simulator -i "xml:/home/wdssii/data/tri_radar_demo/ktlx_052001/code_index.xml xml:/home/wdssii/data/tri_radar_demo/kinx_052001/code_index.xml xml:/home/wdssii/data/tri_radar_demo/ksrx_052001/code_index.xml" -o /tmp/simulation --verbose
- Depending on how fast your machine is, you may want to provide a "-r 0.2" to slow the simulation down.
- You might also want to specify begin time, end time or common time (b, e, c) options if you don't want to simulate over the entire time period.
- You need to provide the indexes created in the output directory as the input to w2merger, i.e. /tmp/simulation/index_0.fam, etc. in this example.
- Type w2merger <ret> to get a list of options
- If you are running w2simulator, the input indexes will be the simulation directory -i "/tmp/simulation/index_0.fam /tmp/simulation/index_1.fam /tmp/simulation/index_2.fam", for example. On a single-radar case, you can provide the original data LB here.
- If you are running w2simulator, you need to provide the -r option
since you are simulating real-time.
- The recommended blending strategy is TimeAndDistanceWeighted (5), i.e. -C 5
- Make sure to provide the correct type of data to merge. Examples:
- -I ReflectivityQC
- -I AzShear
- Provide -e 60 to make an output once every 60 seconds. Without the -e option you get a rapidly updating grid that your I/O hardware may not be able to handle properly.
- If you are merging data from multiple radars, it is a good idea to feed in motion estimates as well. You can get motion estimates in a feedback loop by running the algorithm w2segmotionll on the merged grid. Make sure to run w2merger and w2segmotionll simultaneously, feeding the output of w2merger to w2segmotionll and the output of w2segmotionll as the -M option to w2merger.
Motion Estimates (to feed into the merger, optional)
- Type w2segmotionll <ret> to get a list of options
- Use "-T MergedReflectivityQComposite" to track based on the composite field produced.
- Use -O 5 to estimate motion over 5 minutes (every minute is too frequent, and the changes would be tiny).
- Don't worry about the advection (forecast) options.
- Run the merger with the -M option and specify the path to the outputs of the segmotion.
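Putting the pieces together, the feedback loop looks roughly like this: three processes started together, with segmotion's output fed back to the merger via -M. All paths, the Motion output location, and the exact flag combinations here are illustrative assumptions; check each tool's usage message before running:

```shell
#!/bin/sh
# Sketch: run simulator, merger and segmotion simultaneously (paths are examples).
SIM=/tmp/simulation
MERGED=/home/username/merged

w2simulator -i "xml:/home/username/datacase/ktlx/code_index.xml xml:/home/username/datacase/kinx/code_index.xml" \
            -o $SIM --verbose &

w2merger -i "$SIM/index_0.fam $SIM/index_1.fam" -o $MERGED \
         -I ReflectivityQC -C 5 -e 60 -r \
         -t "37 -100 20" -b "30.5 -93 1" -s "0.05 0.05 1" \
         -M $MERGED/Motion --verbose &

w2segmotionll -i $MERGED/code_index.fam -o $MERGED \
              -T MergedReflectivityQComposite -O 5 --verbose &

wait   # all three run until the simulation ends
```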
Lightning data

- ltgArchivedDataConvertor will read archived NLDN lightning data and convert it into time-based flash products.
- ltgIngest does the same thing as the archived data convertor, but reads from a socket in real-time.
- w2ltg produces lightning flash density grids. It also makes forecasts based on other inputs (Reflectivity at various isotherms).
- w2lma_ingest reads archived and real-time source data from New Mexico Tech lightning mapping array ASCII files.
- w2ltg can be used to produce 3D lightning density (specify the -3 option).
To use wg, you need to have the glib2 RPMs for GTK2 installed. To see whether they already exist on your machine, cd to the WDSS2/bin directory and type "ldd wg". Any library in that list that is not found needs to be located and installed -- typically via whichever RPM provides that library for your distro.
- On the source selector (found in the "Products" control window on w2), find and select "Add Source".
- Give your data case a name "case1"
- Select the right protocol (fam or xml)
- Browse to the code_index.fam or code_index.xml file.
- Select "Add Source"
- On the Products control, move over the tab named "case1"
- If necessary, click "connect"
- You should be able to select any of the products displayed.
- You may need to move the time-selector to "All" to get a complete list of data (when doing an Add Source).
Remember that, for research, you can visualize the netcdf files using other software as well.