


Cloud and Radiation Diagnostics of the NCEP Operational Global Analysis/Forecast System

National Centers for Environmental Prediction

(have questions? please contact kenneth.campana@noaa.gov)




NOTES from the Global Climate & Weather Modeling Branch briefings, beginning in January 2005; a.k.a. the Peter Caplan Memorial Notes. Both the MAP Briefings and the Technical Briefings. DISCLAIMER: these NOTES are not meant to be minutes or accurate transcriptions; rather, they are highlights of the speakers' main points. Readers desiring additional information are advised to contact the speaker directly!

INVENTORY of GLOBAL Branch disk space; Total values on dew, Total values on mist, ALL users on dew, and ALL users on mist.

ALSO: Total amount used, as well as percentage of LIMIT, on dew AND on mist!!

Global Model PRE-IMPLEMENTATION checklist for: GFS Spring 2006

HISTORICAL LISTS of Numerical Models used by JNWPU, NMC, NCEP (1955-present)

last changed JANUARY 2006

In compiling the lists, there has been no intent to list the many changes to individual models that have occurred through the years (e.g., changes to numerics, physical processes, etc.), nor the multitude of changes that have occurred in producing initial analyses for the numerical models. There have been a number of other models, such as the Coupled Ocean-Atmosphere Model (1995), which have been run routinely but not as part of the 'operational suite'; these are NOT included either. Here, the intent is to show, in a chronological sense, implementations of distinctively new operational models and, in some instances, dates when forecast length has been increased to satisfy other operational requirements. There is an attempt to record dates when models went out of service, generally occurring when a new model was implemented. However, there are exceptions (e.g., the NGM continues to run in support of MOS)! These will be noted as they are 'discovered'! A more detailed, highly informative companion to these lists may be found in a Bulletin of the American Meteorological Society paper, co-authored by Kalnay, Lord and McPherson, entitled 'Maturity of Operational Numerical Weather Prediction', December 1998, p. 2753. Additional information regarding the 'early' years may be found in NMC Office Note 72, by Shuman, 1972, entitled 'The Research and Development Program at the National Meteorological Center'.

HISTORICAL list of JNWPU/NMC/NCEP NUMERICAL MODELS, chronologically.

DIAGNOSTICS for JANUARY
2002
OPNL vs parallel X - Diagnostics for 2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
2006
2007
2008


DIAGNOSTICS for FEBRUARY
2002
OPNL vs parallel X - Diagnostics for 2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
2006
2007
OPNL vs parallel X - Diagnostics for 2007
2008

DIAGNOSTICS for MARCH
2002
OPNL vs parallel X - Diagnostics for 2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
2006
2007
OPNL vs parallel X - Diagnostics for 2007
2008

DIAGNOSTICS for APRIL
2001
2002
OPNL vs parallel X - Diagnostics for 2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
OPNL vs parallel X - Diagnostics for 2005
2006
2007
OPNL vs parallel X - Diagnostics for 2007
2008

DIAGNOSTICS for MAY
2001
2002
OPNL vs parallel X - Diagnostics for 2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
OPNL vs parallel X - Diagnostics for 2005
2006
2007
2008

DIAGNOSTICS for JUNE
2001
2002
OPNL vs parallel X - Diagnostics for 2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
2006
2007
2008

DIAGNOSTICS for JULY
2001
OPNL vs parallel X - Diagnostics for 2001
2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
OPNL vs parallel X - Diagnostics for 2005
2006
OPNL vs parallel X - Diagnostics for 2006
2007

DIAGNOSTICS for AUGUST
2001
2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
OPNL vs parallel X - Diagnostics for 2005
2006
2007

DIAGNOSTICS for SEPTEMBER
2001
2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
OPNL vs parallel X - Diagnostics for 2005
2006
2007

DIAGNOSTICS for OCTOBER
2001
2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
OPNL vs parallel X - Diagnostics for 2005
2006
2007

DIAGNOSTICS for NOVEMBER
2001
2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
OPNL vs parallel X - Diagnostics for 2005
2006
2007

DIAGNOSTICS for DECEMBER
2001
OPNL vs parallel X - Diagnostics for 2001
2002
OPNL vs parallel X - Diagnostics for 2002
2003
OPNL vs parallel X - Diagnostics for 2003
2004
OPNL vs parallel X - Diagnostics for 2004
2005
OPNL vs parallel X - Diagnostics for 2005
2006
2007

SUBSECTIONS found below

PURPOSE of the site

USAF Cloud data

Latest 1-degree (2 DAYs old) WWMCA Analyses and TOVS daily composite

Latest 0.5-degree CLAVRx Analyses

USAF cloud statistics

TOVS gridded cloud product

USAF-WWMCA, hourly analyses .vs. Experimental TOVS clouds

Cloud VERIFICATION, station data

S.Moorthi CLW test Aug 1999

CLW comparison (14-20) Feb 2001 Parallels X and Y vs OPNL

Branch Meeting 14Jun01

Moorthi test of 'bug' fix 21Nov01

Corrected Cloud Overlap 19Nov01

Convective Cloud cover 19Nov01

Parallel-X- vs OPERATIONAL global model: Latest forecast

IMPLEMENTATION Presentation to NCEP director: RRTM - 8/11/03

NEW USAF cloud analysis .vs. OLD RTNEPH : 21-25 Jun 2002

Experimental TOVS clouds .vs. RTNEPH : 25 May-25 Jun 2002

TEST using a more relaxed data time-window for the WWMCA USAF cloud analysis

8-DAY comparison of old and wide time-windows for 24-31 DEC 2002

FEB 2003-Moorthi's T254 convection/evaporation tests


Purpose

The purpose of this site is to present CLOUD and RADIATION diagnostics of the global analysis/forecast system. Some of them have been accumulated routinely ('over the counter') since the early 1990's. They are not intended to present a complete picture of model performance; rather, they complement diagnostics on display elsewhere on NCEP's Environmental Modeling Center's WEB site.

This site is intended to be a working site and as such is subject to errors and to frequent changes. Comments are always welcome.


U.S.A.F. Cloud info

Cloud verification data are from the United States Air Force (USAF) operational cloud analysis system. Until June 26, 2002, this was the Real-Time Nephanalysis (RTNEPH); that analysis process was documented by Hamill et al. (1992). The latest cloud analysis system is part of the Cloud Depiction and Forecast System II (CDFS II), which has been documented by Northrop Grumman Information Technology (March 2002). This cloud analysis is called the World-Wide Merged Cloud Analyses (WWMCA). NCEP has been receiving the USAF cloud data via NOAA-NESDIS since mid-1992 over the DOD/NOAA-NESDIS Shared Processing Network. The RTNEPH data was received at 8 synoptic times per day, but the WWMCA data is hourly. Both the WWMCA and RTNEPH data are on polar stereographic grids of the Northern and Southern hemispheres, where horizontal resolution is 23.8125 km and 47.625 km, respectively, at 60 degrees latitude. The RTNEPH data sources were primarily the infrared and visible channels on two DMSP satellites, with some information from NOAA polar orbiters. A major addition to the WWMCA process is the use of geostationary satellites (GOES, METEOSAT, GMS). However, surface observations are used when available, and a manual bogus is employed during each analysis cycle. Vertically, clouds are allowed to exist in 'up to' 4 distinct cloud layers at each data point.

For purposes of these diagnostics, the cloud analyses are horizontally compacted to an equal angle 1-degree global grid, AND vertically compacted into High, Middle, Low and Boundary Layer (since Jan99) cloud domains. The Total cloud is also used. High clouds are defined as having pressures less than 450mb, Middle clouds as having pressures between 450-650 mb, and Low clouds as having pressures higher than 650 mb. These definitions are valid for points equatorward of 45 deg latitude. During the mid-1990's the pressure boundaries of the layered clouds poleward of 45 deg latitude were changed, in an attempt to mimic the lowering of the tropopause in polar regions. Since Jan 99 the lowest 10% of the atmosphere is considered the domain of a Boundary Layer cloud, thus decreasing Low cloud values. Clouds having valid bases and tops may span more than one of these vertical domains.
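The vertical compaction just described amounts to a simple classification rule. Here is an illustrative Python sketch (not the operational code): the function name and the default 1000 mb surface pressure are assumptions, and it applies only to points equatorward of 45 deg, since the poleward boundaries differ as noted above.

```python
def cloud_domain(p_hpa, psfc_hpa=1000.0):
    """Classify a cloud-layer pressure (hPa) into the diagnostic domains.

    High   : pressure less than 450 hPa
    Middle : 450-650 hPa
    Low    : higher than 650 hPa, above the boundary layer
    BL     : lowest 10% of the atmosphere (pressure > 0.9 * surface pressure)
    """
    if p_hpa > 0.9 * psfc_hpa:   # Boundary Layer check first (since Jan 99)
        return "BL"
    if p_hpa < 450.0:
        return "High"
    if p_hpa <= 650.0:
        return "Middle"
    return "Low"
```

Because the BL check is applied first, a cloud in the lowest 10% of the atmosphere is counted as BL rather than Low, which is why the Jan 99 change decreased Low cloud values.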

Both the old RTNEPH and the new WWMCA cloud data were available for 5 days in June 2002. Comparisons between the 2 analyses for this time period have been made.


Latest WWMCA synoptic analyses

The U.S.A.F. WWMCA analyses are received 24 times per day, every hour. NCEP places the data on a 1-deg grid, as described above, for Total, High, Middle, Low, and Boundary Layer cloud. Generally, only data in a (-2,0) hour window surrounding each synoptic time is used. However, polar satellite data is older, and poleward of 45 deg the window is opened to (-3,0) in a smooth latitudinal transition. Originally (-1,0) and (-2,0) windows were used, but to avoid a significant number of missing data in monthly means the windows were widened on 23 Dec 2002 (see discussion below: data time-windows). Missing data is portrayed in light green. Graphical displays are found for 00Z, 01Z, 02Z, 03Z, 04Z, 05Z, 06Z, 07Z, 08Z, 09Z, 10Z, 11Z, 12Z, 13Z, 14Z, 15Z, 16Z, 17Z, 18Z, 19Z, 20Z, 21Z, 22Z, and 23Z.

The daily mean WWMCA cloud is computed for the most recent day when all 24 synoptic 1-degree data files are potentially available. See WWMCAdaily !!

Daily composite of ascending and descending TOVS cloud products are compared to the daily mean WWMCA cloud in aTOVSdaily !!

Mean data over 11-regions of the globe, labeled 'Rico STATS' in reference to Rico Allegrino of NESDIS (who first worked with me on this!), are shown for the TOVS clouds and the Max-overlap WWMCA clouds which are seen in the WWMCAdaily and aTOVSdaily graphical plots!!

For a bit of information on the TOVS product, go to gridded TOVS data

An attempt has been made to take into account cloud overlap for the WWMCA data when comparisons are made between the WWMCA and TOVS Low/Middle cloud data. The desire is to make a fairer comparison between the WWMCA and a satellite-only product. The 2 'gif' files labelled 'atvrtMx...' show Low and Middle WWMCA clouds computed using Maximum overlap. This gives a lower bound for satellite-view WWMCA layered clouds. In this case the overlap begins in the highest cloudy domain, so High clouds will be identical in the normal and overlapped cases.

CAREFUL: there are several components of this processing which prevent this from being a "proper" comparison:

1. The overlapping should be done at each WWMCA point on the native grid, before the move to the 1-degree grid,

2. There is no removal of surface data and bogus-data, as, unlike the old RTNEPH, the WWMCA does not have this information,

3. The H,M,L domains are not identical; the WWMCA has a Low and a Boundary Layer cloud, rather than just Low cloud as in TOVS; the WWMCA pressure boundaries are a function of latitude, and the TOVS boundaries are not,

4. No attempt has been made to match the time of the satellite data orbits with the synoptic WWMCA (that is a future project). ====> 12/01/03 : the FUTURE is NOW : Points 1, 3, 4 are being used, please click here


Latest CLAVRx analyses

As of Autumn 2004, CLAVRx data has been archived routinely at NCEP in near real-time. This completely satellite-derived product is from the CLouds from AVHRR (CLAVR) algorithm originally developed at NOAA/NESDIS by L. Stowe (1991: Cloud and Aerosol products at NOAA/NESDIS, "Global and Planetary Change", p 25-32), and updated by A. Heidinger (web-site: click here ). Data is available at 0.5-degree global resolution for Ascending (nighttime) and Descending (daytime) orbits, as well as 4 synoptic times (where data is at most 1.5 hours away from the advertised Greenwich hour). The synoptic files contain data from all available satellites (NOAA 15,16,17, as of Autumn 2004). Since early 2005, synoptic data has been obtained from an operational NESDIS/OSDPD site, thereby getting data only from NOAA 16,17 (NOAA 17,18 in Summer/Autumn 2005)! CLAVR is a multi-spectral cloud detection algorithm based on a sequence of radiance threshold tests, which differ for land, sea, day, and night. Tests include intensity discrimination against the background, multi-spectral differences and ratios, and spatial uniformity in a set of pixel arrays. The NCEP archive consists of a subset of the original ~130 cloud parameters, where the 'hdf' formatted data has been converted to binary data via software graciously provided by A. Jelenak and A. Heidinger, NOAA/NESDIS. The subset contains 13 parameters thought to be useful to NCEP modelers, which have been placed in Grib-1 format. Parameters are: Total cloud amount, Water and Ice cloud amounts, H,M,L cloud amounts, Observation time (GMT in minutes), Cloud top temperature, and Cloud properties (emissivity, optical depth, effective particle size, liquid water path, ice path). H cloud contains tops at pressures less than 440mb, L contains cloud tops at pressures greater than 680mb, and M clouds are found in between; surface pressure is assumed to be 1000mb everywhere.
An estimate of cloud top pressure is found from the following formula, assuming a standard lapse rate for all points: Tc/Tsfc = (Pc/Psfc)**(lapse_rate*R/g), where the lapse rate (-dT/dz) is positive. Most of the cloud properties are only calculated for daytime data and realistically represent the topmost portion of a cloud mass. Cloud properties are an average of all the cloudy pixels in a 1/2-degree grid cell (not an average of all pixels in the cell). CURRENT data from the NCEP archive are shown here and DATA files missing from the NOMADS site are listed here
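The cloud-top pressure formula can be inverted for Pc. Below is a minimal sketch assuming the standard 6.5 K/km lapse rate and dry-air constants; the function and constant names are illustrative and not taken from the CLAVRx code.

```python
# Sketch: invert Tc/Tsfc = (Pc/Psfc)**(R*Gamma/g) for Pc, with a
# standard (positive) lapse rate Gamma = -dT/dz. All names/values
# are assumptions for illustration, not the operational constants.

R = 287.05      # J kg-1 K-1, dry-air gas constant
G = 9.80665     # m s-2, gravitational acceleration
GAMMA = 0.0065  # K m-1, standard-atmosphere lapse rate

def cloud_top_pressure(t_cloud_k, t_sfc_k, p_sfc_hpa=1000.0):
    """Estimate cloud-top pressure (hPa) from cloud-top temperature (K),
    assuming a constant standard lapse rate below the cloud top."""
    exponent = G / (R * GAMMA)  # ~5.26 for the standard atmosphere
    return p_sfc_hpa * (t_cloud_k / t_sfc_k) ** exponent
```

A cloud top equal to the surface temperature falls at the (assumed 1000 mb) surface; colder tops map to progressively lower pressures.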

U.S.A.F. cloud statistics

Following the work of Mitchell and Hahn (1989), a set of USAF statistical scores, modified by Hou (1993), is used to validate the model's total and layered cloud over various geographical regions. The scores are calculated from 2-dimensional contingency tables of cloud fraction obtained from both the model and the USAF data. Most of the data is shown for 43 regions covering the globe. The contingency tables consist of 11 cloud fraction categories: 0-0.05, 0.05-0.15, ..., 0.85-0.95, 0.95-1.0. The scores shown are traditional ones, correlation and bias, as well as the less traditional USAF 20/20 and CONTRAST scores. The USAF 20/20 score, ranging from 0 to 1, provides a measure of how well 2 gridded cloud fields agree: its value is the fraction of points where cloud fractions differ by less than 0.2 (which is 20% cloud amount, hence the 20/20 name). A value of 1 means perfect agreement within 0.2 cloud fraction. The CONTRAST score, ranging from 0 to 1, measures the sharpness of the gradient between clear and overcast regions: its value is a measure of similarity to a 'U-shaped' cloud frequency distribution, where 1 means a perfect binary distribution (cloud fractions either 0 or 1) and 0 means cloud fraction values are constant (a perfectly flat distribution).
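The contingency-table binning and the 20/20 score lend themselves to a short sketch. This is an illustration of the definitions given above, not the USAF or NCEP code; the names are assumptions, and the CONTRAST score is omitted because its exact formula is not spelled out here.

```python
# Illustrative sketch of the 11-category binning and the 20/20 score.

# Category edges: 0-0.05, 0.05-0.15, ..., 0.85-0.95, 0.95-1.0
EDGES = [0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95]

def bin_category(frac):
    """Map a cloud fraction (0-1) to its contingency-table category (0-10)."""
    return sum(1 for edge in EDGES if frac >= edge)

def score_2020(model, obs):
    """USAF 20/20 score: fraction of points where the two gridded
    cloud fields agree within 0.2 cloud fraction (1 = perfect)."""
    pairs = list(zip(model, obs))
    return sum(abs(m - o) < 0.2 for m, o in pairs) / len(pairs)
```

In practice both fields would first be binned and accumulated into an 11x11 contingency table per region; correlation and bias then follow from the same table.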

The scoring regions are described below:

1 = global points
2 = global points (LAND)
3 = global points (OCEAN)
4 = N. Hemisphere
5 = S. Hemisphere
6 = N. Hemisphere (LAND)
7 = N. Hemisphere (OCEAN)
8 = S. Hemisphere (LAND)
9 = S. Hemisphere (OCEAN)
10 = Tropics (20N-20S)
11 = Tropics (20N-20S) (LAND)
12 = Tropics (20N-20S) (OCEAN)
13 = N. mid-lat (60N-20N)
14 = S. mid-lat (60S-20S)
15 = N. mid-lat (60N-20N) (LAND)
16 = N. mid-lat (60N-20N) (OCEAN)
17 = S. mid-lat (60S-20S) (LAND)
18 = S. mid-lat (60S-20S) (OCEAN)
19 = Eastern Pacific Ocean (130W-75W)
20 = Central Pacific Ocean (170E-130W)
21 = Northern Pacific Ocean (140E-110W)
22 = Southern Pacific Ocean (140E-75W)
23 = Tropical Pacific (all pts) (110E-75W)
24 = Indonesia/Philippines (all pts) (85E-170E)
25 = Australia (all pts) (110E-160E)
26 = East Asia (all pts)
27 = East Asia (LAND)
28 = Central Asia Desert
29 = Indian Ocean Region (all pts)
30 = Central South Africa (12N-50S) (all pts)
31 = Central South Africa (12N-50S) (LAND)
32 = North Africa Desert
33 = North Atlantic/Europe (15W-50E) (all pts)
34 = Europe (LAND)
35 = North Atlantic (OCEAN)
36 = South Atlantic (OCEAN)
37 = Equatorial Atlantic (OCEAN)
38 = Eastern N America (all pts)
39 = Eastern N America (LAND)
40 = Western N America (all pts)
41 = Western N America (LAND)
42 = South America (all pts)
43 = South America (LAND)

TOVS 1-deg Cloud Product

NOAA/NESDIS produces daily composites of cloud fraction from the polar-orbiting HIRS instrument. One-degree data is produced daily for the Ascending and Descending satellite orbits. That data is merged into a daily composite (Asc+Dsc). There are 4 cloud products: Total, High (pressures less than 440mb), Middle (440-680 mb), and Low (pressures greater than 680mb). Vertical domains for the layered clouds are taken from ISCCP layering. Global plots are found in the monthly packages, starting with July 2001.

WWMCA .vs. TOVS : 1-deg Ascending and Descending grids 12/01/03

Routine comparison of WWMCA and TOVS clouds is being made each day (for today minus 2; e.g. if today is Dec 1, calculations are made for Nov 29). Graphical plots and 'Rico STATS' are located here .

Missing data is plotted 'black' in the '.gif' files with 'miss' in the name. ASC and DSC are ascending and descending orbits, respectively. 'Obstim' contains plots of orbital times in the data, AND 'prtCFRAC' contains text versions of the statistical means of data in 11 regions of the globe. The following processes represent an attempt to make a fair comparison between the clouds, where:

1. The WWMCA is separated into H,M,L layers, on the 1-degree 'boxes' grid = (360,180) arrays! The pressure definitions for each domain are identical to TOVS - e.g. use 440mb and 680mb, globally, to do the vertical separation.

2. Multi-layer WWMCA data at each grid point is maximally overlapped, thereby 'flying a satellite' over the USAF analyses.

3. Since the WWMCA is an hourly analysis, driven by GOES data, do not use data older than 1 hour. This essentially means we cannot compare polar regions (TESTS, opening the window to 3 hours, adds a few, but not many, polar grid points to the intercomparison).

4. Create Ascending and Descending WWMCA data from both TOVS orbital times and the nearest hourly WWMCA synoptic analysis. If WWMCA data is missing (too old), 'reach out' 1 older and 1 newer hourly analysis trying to fill in data, primarily in polar regions.

5. Reprocess TOVS data by removing data points where WWMCA clouds are 'missing'. Now both grids have identical non-missing point-data, thus ensuring proper statistical intercomparisons.
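Step 2 above ('flying a satellite' over the analyses via maximum overlap) can be illustrated at a single grid point. This is a hedged sketch of the idea, not the operational recipe, which may differ in detail; the function name is an assumption.

```python
# Sketch of a satellite view of layered cloud under maximum overlap:
# each lower layer is only visible where it exceeds the running
# maximum of the layers above it. High cloud is unchanged because
# the overlap begins in the highest cloudy domain.

def satellite_view(high, mid, low):
    """Return (high, mid, low) cloud fractions as a downward-looking
    satellite would see them under a maximum-overlap assumption."""
    seen_high = high
    seen_mid = max(0.0, mid - high)             # Middle hidden under High
    seen_low = max(0.0, low - max(high, mid))   # Low hidden under both
    return seen_high, seen_mid, seen_low
```

This gives the lower bound for satellite-view layered clouds mentioned in the earlier section: any visible Middle or Low cloud must protrude beyond the cloud above it.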

Model Cloud .vs. WWMCA obs; past 10 days

Total cloud fractional coverage has been extracted from the WWMCA observations, up to 24 times per day, and from the GFS (Global Forecast System) 00Z model. OBS and FCST data are from the gridpoint nearest to a station's latitude/longitude coordinates, without horizontal interpolation. The WWMCA data is changed when a new observation is made (generally every hour). Cloud fraction values of -1.0 signify MISSING wwmca observations. The observed wwmca data are compared: with model means from the 6-hour forecast (the GFS analysis cycle, 4 times per day, labeled as 'avnanalysis'); with 12-hour means from the GFS (0-12) and (12-24) forecasts starting at 00Z of each of the 10 days (labeled as 'avnday1'); and with the 10-day GFS forecast made at 00Z of the first date listed (12-hour averages, labeled 'avn10dyfcst'). To get a 'feel' for the variability in wwmca point data, there is also a comparison between the nearest two wwmca points to each station (labeled as 'wwmcacompare').

Data is shown for a 10-day period, starting 11 days ago. There are 20 stations (3-character abbreviations are mine) plotted in the Northern Hemisphere: Boston (BOS), Washington National (DCA), Cape Hatteras (HAT), Oklahoma City (OKC), Edmonton (EDM), Victoria BC (VIC), Phoenix (PHX), Bermuda (BDA), London UK (LON), Timbuktu (TBU), Naples IT (NPL), Norrkoping SWE (NRR), Moscow RUS (MOS), Baghdad IQ (BGD), Karachi PK (KAR), New Delhi IN (NDE), Beijing CH (BEI), Tokyo JP (TOK), Singapore (SGN), and a point in the stratus region off the California coast (35N, 125W), labeled (CAS). There are 7 stations in the Southern Hemisphere: Capetown SA (CTW), Zanzibar (ZAN), Perth Australia (PER), Sydney Australia (SYD), Easter Island (EIS), Brasilia Brazil (BRS), and Buenos Aires Argentina (BAR). The stations were chosen to cover the globe and may be updated in the future. Plots of layered clouds, in addition to the Total cloud seen here, will be made in the future. Current NH plots may be viewed at nhSTATIONS and SH plots may be seen at shSTATIONS .

T62L28 prognostic cloud test

S. Moorthi has run a GDAS test using a cloud water parameterization at T62 resolution with 28 layers for August 1999. A series of monthly mean cloud verification graphics are shown for the cloud statistics, described above. There are also zonal and 2-dimensional comparisons with the USAF cloud analyses. The cloud statistics are in SCORES , while the 2-dimensional and zonal comparisons among cloud water forecasts, operational global model (T126) forecasts, and RTNEPH are shown in 2D and ZNL .

T170 L42 Parallel X, Parallel Y, Operational (14-20)Feb 2001

Pre-implementation testing in Feb 2001. Monthly mean cloud verification graphics are shown for the cloud statistics, described above. There are also zonal and 2-dimensional comparisons with the RTNEPH data. The cloud statistics are in SCORES , while the 2-dimensional and zonal comparisons among cloud water forecasts, operational global model forecasts, and RTNEPH are shown in 2D and ZNL .

Presentation at Branch Meeting 14 Jun 01

Discussion centered on the change in the regional statistics of the operational MRF model run when the new cloud and radiation schemes were implemented in mid-May. The so-called 'surface flux' file data has been routinely compacted into zonal and regional monthly means since the early 1990's. That software was updated to save daily mean forecast data (fcst hours 12-36) for zonal means and 9 regions (global, N/S Hemisphere, Tropics (0-30deg), Mid-Lat, and Polar (60-90deg) for each hemisphere). Daily mean data, relative to each regional area, may be found in REGIONS . Some differences occur in the cloud and radiation data between May 15 and 16, which seem related to the change in the structure of the convective cloud model (the old scheme modeled convective clouds as deep columnar towers sitting in a 'bed' of lower cloud). The reader is invited to view the plots in REGIONS . Some points to ponder: Total cloud changes very little globally (Fig 1), whereas a Northern Hemisphere decrease from 0.57 to 0.54 is driven by a tropical decrease (Fig 2). The increase (0.52-0.56) in the Southern Hemisphere is driven by the mid-latitudes (Fig 3). Similar statements can be applied to low cloud on those figures, except that the low cloud doesn't change in a Southern Hemispheric sense because the tropical decrease is balanced by a mid-latitude increase. Polar regions are generally cloudy during the transition to the new model. High cloud may show a small increase in the Northern Hemisphere, whereas the middle cloud decreases everywhere - from 0.26 to 0.20 globally (Fig 4). Northern Hemisphere high cloud increases strongly in the tropics while decreasing elsewhere (Fig 5). Boundary Layer cloud decreases, especially in the Southern Hemisphere (Fig 6); the decrease is driven by oceanic points, while cloud increases over land (Fig 7). Again, this appears to be due to a tropical decrease, whereas mid-latitude and polar regions experience an increase during the transition to the new model (Fig 8).
The vertical locations of the clouds have also changed, much of it again related to the removal of the columnar convective cloud in the new scheme. High cloud top pressure generally decreases by 15 mb, while base pressure does not change much (Fig 9). Globally, the high clouds have become thicker, though this may be more of a Northern Hemisphere phenomenon. Removal of the artificial convective 1-layer anvil may account for this. Low clouds appear to shift upward approximately 40 mb (Fig 10), which is more of an oceanic, midlat/tropical effect; see Fig 11 and Fig 12 . There are more cloud plots, as well as plots of mean surface heat and moisture fluxes, precipitation, and various radiative fluxes at the earth's surface and T.O.A., in REGIONS .


Results of Moorthi's Test of 21Nov01 case

S. Moorthi has run 3 T170L42 5-day forecasts from an initial time of 21 November 2001. Using the Parallel codes, he has run a standard parallel-X- run, a run with a fix to the error in the computation of aerosol properties, and a no-aerosol test. I have looked at forecast differences for days 1 and 5 (averages of the 2 relevant sflux files) in several of the cloud and radiation fields. At day 1, the impact of the bugfix is small and generally of order similar to removal of the aerosol physics in the SW radiation code. Removal of aerosols allows more downward SW radiation to reach the earth's surface (enhancing sensible heat flux, increasing 2-meter temperature, ...), while the bug fix generally reduces downward SW by less than 5 W/m2, presumably in regions where there is little change to cloudiness. By day 5, other interactions have caused the differences to become larger, but nothing systematic appears in the 'BUGfix' tests. The no-aerosol forecast differences seem to show straight-line artificialities in the parameterization? A few relevant plots may be seen in the TESTAREA .


Results of testing corrected cloud overlap in the cloud diagnostics 19Nov01 case

During a model run, the diagnostic calculation of TOTAL atmospheric cloud, as well as clouds in the H, M, L, BL domains, is made by assuming the same cloud overlap as the model's radiation parameterization. However, when the model's cloud parameterization was changed from a diagnostic formulation to an explicit liquid water scheme in May 2001, AND the cloud overlap in the radiation scheme was changed, the diagnostic calculation was, inadvertently, NOT!!! That is, the new overlap used in the radiation scheme became random, while the diagnostic calculation continued to use a combined maximum/random overlap (BL clouds were maximum overlapped). Since maximum overlapping stacks cloud layers on top of each other, it should always give cloud values less than or equal to a purely random overlap assumption. Data displayed here shows the differences associated with a change in cloud overlap for the diagnostic calculation (NO model IMPACT occurs, as this is purely a diagnostic computation!). One 24-hour forecast is shown below, and the first day is validated against the RTNEPH 'observations' in order to give a sense of the differences to be expected. The operational calculation yields the following global cloud fractions for T,H,M,L,BL: .50,.11,.28,.19,.24, while for the more-proper overlap the values become .59,.20,.31,.24,.30! Plots are shown in cloud TESTAREA . While the proper overlap gives larger numbers, a retuning of the cloud fraction algorithms may be needed. Operationally, the model's radiation parameterization 'sees' up to 10% more cloud than the diagnostic numbers would lead us to believe!
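The claim that maximum overlap always gives total cloud less than or equal to random overlap can be illustrated directly. This is a minimal sketch of the two pure overlap assumptions only; the operational diagnostic uses a combined maximum/random overlap, which falls between the two extremes, and the function names here are illustrative.

```python
# Total cloud fraction in a column under two pure overlap assumptions.

from functools import reduce

def total_random(layer_fracs):
    """Random overlap: layers are statistically independent, so total
    cloud is 1 minus the product of the clear-sky fractions."""
    clear = reduce(lambda c, f: c * (1.0 - f), layer_fracs, 1.0)
    return 1.0 - clear

def total_maximum(layer_fracs):
    """Maximum overlap: layers stack directly on top of each other,
    so total cloud is just the largest single-layer fraction."""
    return max(layer_fracs, default=0.0)
```

For two half-cloudy layers, random overlap gives 0.75 total cloud while maximum overlap gives only 0.50, consistent with the diagnostic-versus-radiation discrepancy described above.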


Plots of the diagnostic convective cloud arrays 19Nov01 case

We do not normally look at the convective cloud array in the model's FLUX file! In the old diagnostic cloud scheme, this quantity was computed from a relationship between convective rainfall rate and convective cloud fraction developed by Julia Slingo. The maximum cloud fraction allowed is 0.8! The rainfall rate was accumulated from the convective parameterization over the interval between cloud calculations (so it's not an instantaneous value), and cloud top/base pressures were obtained from the minimum/maximum values over that interval. In May 2001, the cloud parameterization was changed to an explicit liquid water scheme, which is not dependent on this convective cloud data. However, its calculation remains in the model forecasts. I have several one-case plots of the convective cloud arrays in conv cloud TESTAREA . I am wondering if the 3 records (convective top/base/fraction) are useful data, whether they are being used at NCEP (I believe they were originally requested by someone in NCEP?), and whether we can replace the 3 records with some other model data (such as TOA downward SW and TOA SW and LW clear-sky radiative fluxes). The latter would keep the flux file at its current size!


FLUX FILE comparison of Parallel-X- with Operational

Day 1 and day 5 forecasts made with the Parallel -X- are compared with the Operational forecasts using the 'so-called' flux file data. The day 1 and day 5 forecasts are made from the same initial time. In order to compare with WWMCA observed analyses, we wait until the validating data is available (current day 'minus' 6 for the day 5 forecast). Thus the plots are for forecasts generated 6 days ago. The operational forecast is currently made at T382L64 (0-180 hrs) and the -X- is at the same resolution. The operational flux file data is obtained from the /com directories, and it contains averages over 6 hours, thus requiring 4 files to cover a complete day! The -X- is the NCO real-time parallel, has the same 6-hr averages, and is found in the /com/gfs/para directories. The CLOUD SCORES are computed using data from the respective native grids, but they are binned into the same 43 geographical regions (described above). Comparisons are shown for 2-Dimensional Cloud+Radiative Fluxes , Zonal Means , and Regional Cloud Scores


Comparison between use of different data time-windows for WWMCA 1-degree processing

The WWMCA processing of data from the native polar stereographic grid to a global 1-degree, equal angle grid uses several quality control (QC) checks. One is the 'age' of the data. Plots of data age (hours before synoptic time) on the polar stereographic grids for one case (15 Oct 2002) show that it can vary quite a bit where polar orbiter data is used. GOES data is generally less than an hour old (occasionally it is up to 2 hours old when there has been trouble processing an orbital swath!). Currently the QC-age check, in hours, is: 1 hour equatorward of 45 deg, 2 hours poleward of 55 deg, and a linear transition between 45-55. A one-case test has been done for a current time (16 Dec 2002), in which the QC-age algorithm has been relaxed to 2 hours equatorward of 45 deg and 3 hours poleward of 55 deg (linear transition 45-55). Comparison of synoptic data for the OPNL and TEST shows that for several synoptic times (e.g. 16Z), there is a significant reduction in the number of missing (green shading) data points on the resulting 1-deg grid. In the operational processing, constantly missing (old!) data can show problems in the daily mean itself! The test DAILY MEAN, labeled 'wider time', shows only some small differences in cloud amount, but there is a removal of all the residual missing data.
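The latitude-dependent QC-age check described above amounts to a piecewise-linear function of latitude. Here is an illustrative sketch using the relaxed (test) values of 2 and 3 hours; the function and parameter names are assumptions, and the pre-test operational values would be inner=1, outer=2.

```python
# Sketch of the QC data-age limit as a function of latitude:
# a constant inner value equatorward of lat0, a constant outer
# value poleward of lat1, and a linear transition in between.

def max_data_age(lat_deg, inner=2.0, outer=3.0, lat0=45.0, lat1=55.0):
    """Maximum allowed observation age (hours before synoptic time)
    at a given latitude. Defaults are the relaxed test values."""
    alat = abs(lat_deg)
    if alat <= lat0:
        return inner
    if alat >= lat1:
        return outer
    return inner + (outer - inner) * (alat - lat0) / (lat1 - lat0)
```

An observation would then be rejected from the 1-degree synoptic grid when its age exceeds this limit at the grid point's latitude.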


FURTHER comparison of above time-windows for WWMCA 1-degree processing

The above change was 'implemented' on 23 December 2002 at 12Z! HOWEVER, there seems to be some strange monthly mean December 2002 data for Total and BL cloud, especially south of the BERING SEA. This is a region where there has been missing data in past monthly means. Because of concern that opening the time-window may be allowing small-sample-sized data points to harm the monthly mean, the period 24-31 December has been reprocessed for both the old and new (wider) time windows. The 8-day mean T,H,M,L,BL cloud amounts and the number of data points used to construct the mean at each grid point are SHOWN HERE . There seems to be very little effect on cloud amount, other than some missing data re-appearing in the narrower (old) time window! However, the merge area between GOES and polar orbiter data is quite evident in TOT, LO, BL cloud amounts for both 'windows'. The number of data points used to create the 8-day means are also displayed. Since there are up to 24 observations per day, the maximum value is 192 (24*8) on these plots. Check the plot labeled 'TOTPT.windo_2431dec02.gif', and one can see 5 GOES images covering the globe: regions where greater than 160 data points are used (more than 20 obs per day). The polar orbiter regions have much less data, with some of the cusp regions, such as the area south of the Bering Sea, using fewer than 20 data points (of order 2-3 obs per day) in the older method and fewer than 40 (5 obs per day) in the newly 'operational' wide time window. It seems that the strange behavior in the December 2002 mean data is not due to the magnitude of the time-window, but rather due to an imperfect merging of GOES and polar orbiter data, leaving a small number of observations for the calculation of monthly mean values relative to the rest of the globe.

Convection Experiments

S. Moorthi performed several month-long (Feb 2003) GDAS/FCST model experiments with changes to the global model's convection scheme. Click HERE to see day 1 and day 3 comparisons between the original (labeled in several places as exp85) and the reduced-evaporation (labeled as exp95) tests. Click HERE to see the comparison between the original test and the reduced-evaporation test for 1 day. Sorry, but there are NO comparisons with the GFS operational model!