Search Results
You are looking at 1–3 of 3 items for
- Author or Editor: Chin-Fei Hsu
Abstract
As a result of a 1980–81 drought, statistically derived outlooks of monthly and seasonal precipitation began to be issued to Illinois officials who were making management decisions relating to water supplies and agricultural activities. Outlooks of above, near, or below normal precipitation have subsequently been issued operationally over a 3-year period for four areas of Illinois. They are assessed here as to their skill and major uses. This assessment shows that 56% of the seasonal outlooks were correct, as opposed to 33% expected by chance and 30% when persistence alone was used to forecast the coming season. The seasonal outlooks were correct most often in fall (67%) and least often in winter (42%). Monthly operational outlooks were correct 52% of the time. The skill levels in the monthly outlooks during the operational period were very similar to those in earlier experimental tests, being 53% correct in monthly tests for 1940–79. However, the seasonal tests using 1970–80 showed 41% accuracy compared with 56% in the 3-year operational period, with the higher operational value attributed to sampling vagaries. The monthly outlooks were correct in detecting the occurrence or ending of extreme conditions, including the wet month that ended the 1980–81 drought, the lack of above-normal rain during potential flooding conditions in the spring of 1982, and the deficient rainfall in summer 1983. The magnitude of the conditional probability associated with the most likely monthly outlook category was closely related to its correctness. When the maximum probability obtained was 50% or more, 64% of the monthly outlooks specifying that category were correct, but when it was 35 to 45%, only 39% of the outlooks were correct.
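The verification approach described in this abstract, scoring three-category outlooks against a 33% chance baseline and stratifying correctness by the conditional probability assigned to the favored category, can be illustrated with a minimal sketch. The outlook records and numbers below are hypothetical placeholders, not the Illinois operational outlooks:

```python
# Illustrative sketch (hypothetical data, not the operational outlooks):
# verify three-category precipitation outlooks and stratify correctness
# by the conditional probability assigned to the favored category.

# (forecast category, observed category, probability of favored category)
records = [
    ("above", "above", 0.55),
    ("near",  "below", 0.40),
    ("below", "below", 0.50),
    ("above", "near",  0.35),
    ("below", "below", 0.60),
    ("near",  "near",  0.45),
]

def percent_correct(recs):
    """Percentage of outlooks whose category matched the observation."""
    hits = sum(1 for fcst, obs, _ in recs if fcst == obs)
    return 100.0 * hits / len(recs)

# Chance skill for three equally likely categories is about 33%.
print(f"Overall: {percent_correct(records):.0f}% correct (chance = 33%)")

# Stratify by the maximum conditional probability, mirroring the
# 50%-or-more versus 35-45% comparison in the abstract.
high = [r for r in records if r[2] >= 0.50]
low = [r for r in records if 0.35 <= r[2] <= 0.45]
for label, group in (("P >= 50%", high), ("35% <= P <= 45%", low)):
    if group:
        print(f"{label}: {percent_correct(group):.0f}% correct over {len(group)} outlooks")
```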
Abstract
The statistical relationships between monthly precipitation (P) and shallow groundwater levels (GW) in 20 wells scattered across Illinois, with data for 1960–84, were defined using autoregressive integrated moving average (ARIMA) modeling. A lag of 1 month between P and GW was the strongest temporal relationship found across Illinois, followed by no (0) lag in the northern two-thirds of Illinois where mollisols predominate, and a lag of 2 months in the alfisols of southern Illinois. Spatial comparison of the 20 P–GW correlations with several physical conditions (aquifer types, soils, and physiography) revealed that the parent soil materials of outwash alluvium, glacial till, thick loess (≥2.1 m), and thin loess (<2.1 m) best defined regional relationships for drought assessment.
Equations developed from ARIMA modeling using 1960–79 data for each region were used to estimate GW levels during the 1980–81 drought, and the estimates averaged within 25 to 45 cm of actual levels. These estimates are considered adequate to allow a useful assessment of drought onset, severity, and termination in other parts of the state. The techniques and equations should be transferable to regions of comparable soils and climate.
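As a rough illustration of the kind of model the abstract describes, the sketch below fits an ARIMA model with precipitation lagged by 1 month as an exogenous regressor, using the statsmodels library and synthetic monthly series. The (1, 0, 0) order, the synthetic data, and the single 1-month lag are assumptions for illustration, not the authors' fitted equations:

```python
# Illustrative sketch, not the authors' model: relate monthly groundwater
# levels (GW) to precipitation (P) lagged by one month with an ARIMA-type
# model, fitted on 1960-79 and used to estimate later levels.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = pd.date_range("1960-01", periods=300, freq="MS")  # 1960-84

# Synthetic monthly precipitation (cm) and a groundwater level (cm below
# land surface) that responds to precipitation one month earlier.
precip = pd.Series(rng.gamma(shape=2.0, scale=4.0, size=len(months)), index=months)
gw = 300 - 3.0 * precip.shift(1) + rng.normal(0, 5, size=len(months))
gw = gw.dropna()

# Exogenous regressor: precipitation lagged 1 month, aligned with GW.
exog = precip.shift(1).loc[gw.index].to_frame("P_lag1")

# Fit on the 1960-79 record, then estimate levels for 1980 onward,
# analogous to estimating drought-period levels from earlier data.
train = gw.loc[:"1979-12"]
model = ARIMA(train, exog=exog.loc[train.index], order=(1, 0, 0))
res = model.fit()

future_exog = exog.loc["1980-01":]
estimates = res.forecast(steps=len(future_exog), exog=future_exog)
print(estimates.head())
```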
Abstract
As part of research concerned with operational seeding and evaluation techniques, analyses were made of two warm-season seeding projects involving rainfall enhancement: a 5-year (1975–79) aircraft seeding program conducted in 15 southwestern Kansas counties, and a ground generator seeding project conducted in 3 counties of northwestern Oklahoma in 1972–76. Data for 153 and 111 seeding days in Kansas and Oklahoma, respectively, were used. Rainfall data were obtained from the climatic raingage network of the National Weather Service. Seeding-day data were stratified according to meteorological parameters, including synoptic storm type, storm motion, and plume movement derived from the low-level wind field. Comparisons of 24-hour rainfall amounts in target and control areas were made. Movable controls, determined from storm motions obtained from hourly radar data and upper-level winds, were used to minimize contamination of the controls by the seeding agent. In the southwestern Kansas operation, results indicated a target-area increase of 9% in warm-season rainfall, but this modest increase does not provide firm support for seeding enhancement given the natural variability of rainfall, rainfall sampling deficiencies, and other sources of sampling error. In the Oklahoma project, no substantial support was established for seeding-induced rainfall from the ground generator operations.
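The target versus control comparison of 24-hour seeding-day rainfall can be summarized, in its simplest form, by a ratio of mean rainfall in the two areas. The sketch below uses hypothetical daily totals and omits the stratification by storm type and the movable-control selection described in the abstract:

```python
# Minimal sketch, not the project evaluation code: compare 24-hour rainfall
# in a seeded target area with a control area on seeding days. The daily
# totals below are hypothetical placeholders.

target_mm = [5.1, 0.0, 12.7, 3.3, 8.9, 0.5, 22.1, 1.2]    # seeding-day target totals
control_mm = [4.8, 0.2, 11.5, 3.6, 7.5, 0.7, 20.4, 1.0]   # paired control totals

def ratio_of_means(target, control):
    """Target/control ratio of mean seeding-day rainfall; values above 1
    suggest (but do not by themselves demonstrate) a target excess."""
    return (sum(target) / len(target)) / (sum(control) / len(control))

r = ratio_of_means(target_mm, control_mm)
print(f"Target/control ratio = {r:.2f} ({(r - 1) * 100:+.0f}% apparent difference)")
```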