Search Results

Jahrul M. Alam and John C. Lin

Abstract

An improved treatment of advection is essential for atmospheric transport and chemistry models. Eulerian treatments are generally plagued with instabilities, unrealistic negative constituent values, diffusion, and dispersion errors. A higher-order Eulerian model reduces one type of error at significant cost but magnifies another. The cost of semi-Lagrangian models is too high for many applications. Furthermore, traditional trajectory “Lagrangian” models do not solve both the dynamical and tracer equations simultaneously in the Lagrangian frame. A fully Lagrangian numerical model is therefore presented for calculating atmospheric flows. The model employs a Lagrangian mesh of particles to approximate the nonlinear advection processes for all dependent variables simultaneously. Verification results for simulating sea-breeze circulations in a dry atmosphere are presented. Comparison with Defant’s analytical solution for the sea-breeze system enabled quantitative assessment of the model’s convergence and stability. An average of 20 particles in each cell of an 11 × 20 staggered grid system is required to predict the two-dimensional sea-breeze circulation, for a total of about 4400 particles in the Lagrangian mesh. Comparison with Eulerian and semi-Lagrangian models shows that the proposed fully Lagrangian model is more accurate for the sea-breeze circulation problem. Furthermore, the Lagrangian model is about 20 times as fast as the semi-Lagrangian model and about 2 times as fast as the Eulerian model. These results point toward the value of constructing an atmospheric model based on the fully Lagrangian approach.
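
The core of such a fully Lagrangian scheme is easy to state in code: because every particle carries its own copy of the dependent variables, advection amounts to nothing more than moving the particles. The Python sketch below is a minimal illustration under assumed simplifications (a toy flow field and a single forward-Euler step), not the actual scheme of Alam and Lin.

    import numpy as np

    def advect_particles(x, y, u, v, dt):
        """Move particles with the velocities they carry; no separate
        advection term is needed for u, v, or any tracer the particles hold."""
        return x + u * dt, y + v * dt

    # ~20 particles per cell of an 11 x 20 grid -> ~4400 particles, as in the abstract
    rng = np.random.default_rng(0)
    n = 4400
    x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
    u, v = np.sin(2 * np.pi * y), np.zeros(n)  # toy shear flow (assumption)
    x, y = advect_particles(x, y, u, v, dt=0.01)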

Full access
Jui-Lin F. Li, Martin Köhler, John D. Farrara, and C. R. Mechoso

Abstract

When sea surface temperatures are prescribed at its lower boundary, the University of California, Los Angeles (UCLA) atmospheric general circulation model (AGCM) produces a realistic simulation of planetary boundary layer (PBL) stratocumulus cloud incidence. Despite this success, net surface solar fluxes are generally overpredicted in comparison to data derived from the Earth Radiation Budget Experiment (ERBE) in regions characterized by persistent stratocumulus cloud decks. It is suggested that this deficiency is due to the highly simplified formulation of the PBL cloud optical properties. A new formulation of PBL cloud optical properties is developed based on an estimate of the stratocumulus cloud liquid water path. The January and July mean net surface solar fluxes simulated by the revised AGCM are closer to ERBE-derived values in regions where stratocumulus clouds are frequently observed. The area-averaged estimated error reductions range from 24 (Peru region) to 53 W m⁻² (South Pacific storm track region). The results emphasize that surface heat fluxes are very sensitive to the radiative properties of stratocumulus clouds and that a realistic simulation of both the geographical distribution of stratocumulus clouds and their optical properties is crucial.
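
The abstract does not spell out the new optical-property formulation, so the sketch below only illustrates the general idea of tying optical properties to liquid water path, using the standard relation tau = 3 LWP / (2 rho_w r_e); the effective radius is an assumed illustrative value, not one from the paper.

    RHO_W = 1000.0  # density of liquid water, kg m^-3

    def cloud_optical_depth(lwp_kg_m2, r_e_m=10e-6):
        """Cloud optical depth from liquid water path and an assumed effective radius."""
        return 3.0 * lwp_kg_m2 / (2.0 * RHO_W * r_e_m)

    print(cloud_optical_depth(0.1))  # LWP of 100 g m^-2 -> tau = 15.0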

Full access
Derek V. Mallia, Adam Kochanski, Dien Wu, Chris Pennell, Whitney Oswald, and John C. Lin

Abstract

Presented here is a new dust modeling framework that uses a backward-Lagrangian particle dispersion model coupled with a dust emission model, both driven by meteorological data from the Weather Research and Forecasting (WRF) Model. This new modeling framework was tested for the spring of 2010 at multiple sites across northern Utah. Initial model results for March–April 2010 showed that the model was able to replicate the 27–28 April 2010 dust event; however, it was unable to reproduce a significant wind-blown dust event on 30 March 2010. During this event, the model significantly underestimated PM2.5 concentrations (4.7 vs 38.7 μg m⁻³) along the Wasatch Front. The backward-Lagrangian approach presented here allowed for easy identification of dust source regions with misrepresented land cover and soil types, which required an update to WRF. Changes were also applied to the dust emission model to better account for dust emitted from dry lake basins. These updates significantly improved the dust simulations, with the modeled PM2.5 comparing much more favorably to observations (average of 30.3 μg m⁻³), and also improved the timing of the frontal passage within WRF. The dust model was also applied in a forecasting setting, where it was able to replicate the magnitude of a large dust event, albeit with a 2-h lag. These results suggest that the dust modeling framework presented here has the potential to replicate past dust events, identify source regions of dust, and be used for short-term forecasting applications.
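
Many wind-blown dust schemes share a threshold-friction-velocity structure: no emission until the friction velocity u* exceeds a soil-dependent threshold, then a power-law flux. The sketch below shows that generic structure only; the constant, exponent, and functional form are assumptions for illustration, not the values in the authors' emission model.

    def dust_flux(u_star, u_star_t, c=1.0e-5):
        """Vertical dust flux (arbitrary units) from friction velocity u_star,
        zero below the threshold u_star_t (generic illustrative form)."""
        if u_star <= u_star_t:
            return 0.0
        return c * u_star**4 * (1.0 - u_star_t / u_star)

    print(dust_flux(0.6, 0.4))  # emitting
    print(dust_flux(0.3, 0.4))  # below threshold: 0.0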

Full access
Kenneth P. Bowman, John C. Lin, Andreas Stohl, Roland Draxler, Paul Konopka, Arlyn Andrews, and Dominik Brunner

In October 2011 an American Geophysical Union Chapman Conference was held in Grindelwald, Switzerland, titled “Advances in Lagrangian Modeling of the Atmosphere.” Lagrangian models are being applied to a wide range of high-impact atmospheric phenomena, such as the transport of volcanic ash and the dispersion of radioactive releases. One common theme that arose during the meeting is the need for improved access to the output products of forecast models and reanalysis systems, which are used as inputs to trajectory and dispersion models. The steady increases in horizontal and vertical resolution in forecast models and data assimilation systems have not been accompanied by changes in model output products, such as higher-frequency winds and the provision of important auxiliary parameters (e.g., heating rates and subgrid-scale mixing properties). This paper discusses the principles of Lagrangian kinematic models and recommends changes in model output practices that would lead directly to significant improvements in the accuracy of trajectory and dispersion calculations.
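
The Lagrangian kinematic models discussed here integrate dx/dt = v(x, t) with winds interpolated from gridded model output, which is why the output frequency and auxiliary fields recommended in the paper bear directly on trajectory accuracy. A minimal Python sketch follows; the wind function is a stand-in for interpolation from archived model fields, and the midpoint step is one common integrator choice, not a prescription from the paper.

    import numpy as np

    def wind(x, t):
        """Placeholder for winds interpolated from model output at position x, time t."""
        return np.array([10.0, np.sin(t / 3600.0)])  # toy field, m s^-1 (assumption)

    def trajectory_step(x, t, dt):
        """One midpoint step of dx/dt = v(x, t)."""
        v1 = wind(x, t)
        v2 = wind(x + v1 * dt, t + dt)
        return x + 0.5 * (v1 + v2) * dt

    x = np.array([0.0, 0.0])
    for k in range(24):  # 24 hourly steps
        x = trajectory_step(x, t=k * 3600.0, dt=3600.0)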

Full access
Wenli Wang, Kun Yang, Long Zhao, Ziyan Zheng, Hui Lu, Ali Mamtimin, Baohong Ding, Xin Li, Lin Zhao, Hongyi Li, Tao Che, and John C. Moore

Abstract

Snow depth over the interior of the Tibetan Plateau (TP) in state-of-the-art reanalysis products is almost an order of magnitude higher than observed. This large bias stems primarily from excessive snowfall, but inappropriate representation of shallow-snow processes also contributes to excessive snow depth and snow cover. This study investigated the issue with respect to the parameterization of fresh snow albedo. The characteristics of TP snowfall were investigated using ground truth data. Snow in the interior of the TP is usually only a few centimeters deep, and the albedo of fresh snow depends on snow depth and is frequently less than 0.4. Such low albedo values contrast with the high values (~0.8) used in the existing snow schemes of land surface models. The SNICAR radiative transfer model can reproduce the observed low albedo of fresh shallow snow, and a fresh snow albedo scheme was derived from it in this study. Finally, the impact of the fresh snow albedo on snow ablation was examined at 45 meteorological stations on the TP using the Noah-MP land surface model with the new scheme incorporated. Allowing albedo to change with snow depth produces quite realistic snow depths compared with observations. In contrast, the typically assumed fresh snow albedo of 0.82 leads to snow depths that are too large during the ablation period, averaged across the 45 stations. The transparency of shallow snow is therefore particularly important for snow ablation in the TP interior, where snow is thin and radiation is strong.
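
As an illustration of what a depth-dependent fresh snow albedo can look like, the sketch below relaxes the surface albedo from a bare-ground value to a deep-snow value with an e-folding depth; the exponential form and all constants are assumptions for illustration, whereas the paper's scheme is derived from SNICAR radiative transfer calculations.

    import numpy as np

    def fresh_snow_albedo(depth_m, alpha_ground=0.2, alpha_deep=0.82, d0_m=0.05):
        """Blend ground and deep-snow albedo with snow depth (illustrative form)."""
        return alpha_deep - (alpha_deep - alpha_ground) * np.exp(-depth_m / d0_m)

    print(fresh_snow_albedo(0.02))  # few-cm snow: ~0.40, well below the deep value
    print(fresh_snow_albedo(0.50))  # deep snow: ~0.82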

Free access
John C. Lin, Logan Mitchell, Erik Crosman, Daniel L. Mendoza, Martin Buchert, Ryan Bares, Ben Fasoli, David R. Bowling, Diane Pataki, Douglas Catharine, Courtenay Strong, Kevin R. Gurney, Risa Patarasuk, Munkhbayar Baasandorj, Alexander Jacques, Sebastian Hoch, John Horel, and Jim Ehleringer

Abstract

Urban areas are responsible for a substantial proportion of anthropogenic carbon emissions around the world. As global populations increasingly reside in cities, the role of urban emissions in determining the future trajectory of carbon emissions is magnified. Consequently, a number of research efforts have been started in the United States and beyond, focusing on observing atmospheric carbon dioxide (CO2) and relating its variations to carbon emissions in cities. Because carbon emissions are intimately tied to socioeconomic activity through the combustion of fossil fuels, and many cities are actively adopting emission reduction plans, such urban carbon research efforts give rise to opportunities for stakeholder engagement and guidance on other environmental issues, such as air quality.

This paper describes a research effort centered in the Salt Lake City, Utah, metropolitan region, which is the locus for one of the longest-running urban CO2 networks in the world. The Salt Lake City area provides a rich environment for studying anthropogenic emissions and for understanding the relationship between emissions and socioeconomic activity when the CO2 observations are enhanced with a) air quality observations, b) novel mobile observations from platforms on light-rail public transit trains and a news helicopter, c) dense meteorological observations, and d) modeling efforts that include atmospheric simulations and high-resolution emission inventories.

Carbon dioxide and other atmospheric observations are presented, along with associated modeling work. Examples in which the work benefited from and contributed to the interests of multiple stakeholders (e.g., policymakers, air quality managers, municipal government, urban planners, industry, and the general public) are discussed.

Open access
Jia-Lin Lin, George N. Kiladis, Brian E. Mapes, Klaus M. Weickmann, Kenneth R. Sperber, Wuyin Lin, Matthew C. Wheeler, Siegfried D. Schubert, Anthony Del Genio, Leo J. Donner, Seita Emori, Jean-Francois Gueremy, Frederic Hourdin, Philip J. Rasch, Erich Roeckner, and John F. Scinocca

Abstract

This study evaluates the tropical intraseasonal variability, especially the fidelity of Madden–Julian oscillation (MJO) simulations, in 14 coupled general circulation models (GCMs) participating in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). Eight years of daily precipitation from each model’s twentieth-century climate simulation are analyzed and compared with daily satellite-retrieved precipitation. Space–time spectral analysis is used to obtain the variance and phase speed of dominant convectively coupled equatorial waves, including the MJO, Kelvin, equatorial Rossby (ER), mixed Rossby–gravity (MRG), and eastward inertio–gravity (EIG) and westward inertio–gravity (WIG) waves. The variance and propagation of the MJO, defined as the eastward wavenumbers 1–6, 30–70-day mode, are examined in detail.
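
A sketch of the space-time spectral step: take a two-dimensional FFT of precipitation in (time, longitude) and sum the variance falling in the eastward wavenumber 1–6, 30–70-day band. The synthetic data below are a stand-in for satellite-retrieved precipitation, and the sign convention separating eastward from westward propagation should be checked against one's own transform definitions.

    import numpy as np

    nlon, nt = 144, 2920                       # 2.5-deg grid, 8 years of daily data
    rng = np.random.default_rng(1)
    precip = rng.standard_normal((nt, nlon))   # stand-in for daily tropical precip

    P = np.fft.fft2(precip)                    # axes: (frequency, zonal wavenumber)
    freq = np.fft.fftfreq(nt, d=1.0)           # cycles per day
    wnum = np.fft.fftfreq(nlon, d=1.0 / nlon)  # integer zonal wavenumbers

    F, K = np.meshgrid(freq, wnum, indexing="ij")
    # With numpy's exp(+2*pi*i*(f*t + k*x)) reconstruction, eastward propagation
    # corresponds to f and k of opposite sign (an assumption worth verifying).
    mjo_band = ((np.abs(F) >= 1 / 70) & (np.abs(F) <= 1 / 30)
                & (np.abs(K) >= 1) & (np.abs(K) <= 6) & (F * K < 0))
    mjo_variance = (np.abs(P[mjo_band]) ** 2).sum() / (nt * nlon) ** 2
    print(mjo_variance)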

The results show that current state-of-the-art GCMs still have significant problems and display a wide range of skill in simulating the tropical intraseasonal variability. The total intraseasonal (2–128 day) variance of precipitation is too weak in most of the models. About half of the models have signals of convectively coupled equatorial waves, with Kelvin and MRG–EIG waves especially prominent. However, the variances are generally too weak for all wave modes except the EIG wave, and the phase speeds are generally too fast, being scaled to excessively deep equivalent depths. An interesting result is that this scaling is consistent within a given model across modes, in that both the symmetric and antisymmetric modes scale similarly to a certain equivalent depth. Excessively deep equivalent depths suggest that these models may not have a large enough reduction in their “effective static stability” by diabatic heating.
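
The equivalent-depth argument rests on the shallow-water phase speed c = sqrt(g * h_e): a wave that propagates too fast maps onto an equivalent depth that is too large. A quick numerical illustration (depths chosen for illustration; observed convectively coupled waves cluster near roughly 25 m):

    import math

    for h_e in (12, 25, 50, 200):                    # equivalent depth, m
        print(h_e, round(math.sqrt(9.81 * h_e), 1))  # phase speed, m s^-1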

The MJO variance approaches the observed value in only 2 of the 14 models, but is less than half of the observed value in the other 12 models. The ratio between the eastward MJO variance and the variance of its westward counterpart is too small in most of the models, which is consistent with the lack of highly coherent eastward propagation of the MJO in many models. Moreover, the MJO variance in 13 of the 14 models does not come from a pronounced spectral peak, but usually comes from part of an overreddened spectrum, which in turn is associated with too strong persistence of equatorial precipitation. The two models that arguably do best at simulating the MJO are the only ones having convective closures/triggers linked in some way to moisture convergence.

Full access
John R. Gyakum, Marco Carrera, Da-Lin Zhang, Steve Miller, James Caveen, Robert Benoit, Thomas Black, Andrea Buzzi, Clément Chouinard, M. Fantini, C. Folloni, Jack J. Katzfey, Ying-Hwa Kuo, François Lalaurette, Simon Low-Nam, Jocelyn Mailhot, P. Malguzzi, John L. McGregor, Masaomi Nakamura, Greg Tripoli, and Clive Wilson

Abstract

The authors evaluate the performance of current regional models in an intercomparison project for a case of explosive secondary marine cyclogenesis occurring during the Canadian Atlantic Storms Project and the Genesis of Atlantic Lows Experiment of 1986. Several systematic errors are found that had been identified in the refereed literature in prior years. There is a high (low) sea level pressure bias and a cold (warm) tropospheric temperature error in the oceanic (continental) regions. Though individual model participants produce central pressures of the secondary cyclone close to those observed during the final stages of its life cycle, systematically weak systems are simulated during the critical early stages of the cyclogenesis. Additionally, the simulations produce an excessively weak (strong) continental anticyclone (cyclone); the implications of these errors are discussed in terms of the secondary cyclogenesis. Little relationship is found between strong performance in predicting the mass field and skill in predicting a measurable amount of precipitation. The bias scores in the precipitation study indicate a tendency for all models to overforecast precipitation. Results for the measurable threshold (0.2 mm) indicate that the largest gain in precipitation scores results from increasing the horizontal resolution from 100 to 50 km, with a negligible benefit occurring as a consequence of increasing the resolution from 50 to 25 km. The importance of a horizontal resolution increase from 100 to 50 km is also generally shown for the errors in the mass field. However, little improvement in the prediction of the cyclogenesis is found by increasing the horizontal resolution from 50 to 25 km.
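
The bias score used in the precipitation study is the frequency bias at a threshold: the number of forecasts at or above the threshold divided by the number of observations at or above it, with values above 1 indicating the overforecasting reported above. A minimal sketch with toy arrays (the 0.2-mm value is the measurable threshold from the abstract):

    import numpy as np

    def bias_score(forecast, observed, threshold=0.2):
        """Frequency bias at a precipitation threshold (mm)."""
        return np.sum(forecast >= threshold) / np.sum(observed >= threshold)

    fcst = np.array([0.0, 0.5, 1.2, 0.3, 0.1])
    obs = np.array([0.0, 0.4, 0.9, 0.1, 0.0])
    print(bias_score(fcst, obs))  # 3/2 = 1.5 -> overforecasting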

Full access
Leo J. Donner, Bruce L. Wyman, Richard S. Hemler, Larry W. Horowitz, Yi Ming, Ming Zhao, Jean-Christophe Golaz, Paul Ginoux, S.-J. Lin, M. Daniel Schwarzkopf, John Austin, Ghassan Alaka, William F. Cooke, Thomas L. Delworth, Stuart M. Freidenreich, C. T. Gordon, Stephen M. Griffies, Isaac M. Held, William J. Hurlin, Stephen A. Klein, Thomas R. Knutson, Amy R. Langenhorst, Hyun-Chul Lee, Yanluan Lin, Brian I. Magi, Sergey L. Malyshev, P. C. D. Milly, Vaishali Naik, Mary J. Nath, Robert Pincus, Jeffrey J. Ploshay, V. Ramaswamy, Charles J. Seman, Elena Shevliakova, Joseph J. Sirutis, William F. Stern, Ronald J. Stouffer, R. John Wilson, Michael Winton, Andrew T. Wittenberg, and Fanrong Zeng

Abstract

The Geophysical Fluid Dynamics Laboratory (GFDL) has developed a coupled general circulation model (CM3) for the atmosphere, oceans, land, and sea ice. The goal of CM3 is to address emerging issues in climate change, including aerosol–cloud interactions, chemistry–climate interactions, and coupling between the troposphere and stratosphere. The model is also designed to serve as the physical system component of earth system models and models for decadal prediction in the near-term future—for example, through improved simulations in tropical land precipitation relative to earlier-generation GFDL models. This paper describes the dynamical core, physical parameterizations, and basic simulation characteristics of the atmospheric component (AM3) of this model. Relative to GFDL AM2, AM3 includes new treatments of deep and shallow cumulus convection, cloud droplet activation by aerosols, subgrid variability of stratiform vertical velocities for droplet activation, and atmospheric chemistry driven by emissions with advective, convective, and turbulent transport. AM3 employs a cubed-sphere implementation of a finite-volume dynamical core and is coupled to LM3, a new land model with ecosystem dynamics and hydrology. Its horizontal resolution is approximately 200 km, and its vertical resolution ranges approximately from 70 m near the earth’s surface to 1–1.5 km near the tropopause and 3–4 km in much of the stratosphere. Most basic circulation features in AM3 are simulated as realistically as, or more realistically than, in AM2. In particular, dry biases have been reduced over South America. In coupled mode, the simulation of Arctic sea ice concentration has improved. AM3 aerosol optical depths, scattering properties, and surface clear-sky downward shortwave radiation are more realistic than in AM2. The simulation of marine stratocumulus decks remains problematic, as in AM2. The most intense 0.2% of precipitation rates occur less frequently in AM3 than observed. The last two decades of the twentieth century warm in CM3 by 0.32°C relative to 1881–1920. The Climatic Research Unit (CRU) and Goddard Institute for Space Studies analyses of observations show warming of 0.56° and 0.52°C, respectively, over this period. CM3 includes anthropogenic cooling by aerosol–cloud interactions, and its warming by the late twentieth century is somewhat less realistic than in CM2.1, which warmed 0.66°C but did not include aerosol–cloud interactions. The improved simulation of the direct aerosol effect (apparent in surface clear-sky downward radiation) in CM3 evidently acts in concert with its simulation of cloud–aerosol interactions to limit greenhouse gas warming.

Full access
Stanley G. Benjamin, Stephen S. Weygandt, John M. Brown, Ming Hu, Curtis R. Alexander, Tatiana G. Smirnova, Joseph B. Olson, Eric P. James, David C. Dowell, Georg A. Grell, Haidao Lin, Steven E. Peckham, Tracy Lorraine Smith, William R. Moninger, Jaymes S. Kenyon, and Geoffrey S. Manikin

Abstract

The Rapid Refresh (RAP), an hourly updated assimilation and model forecast system, replaced the Rapid Update Cycle (RUC) as an operational regional analysis and forecast system among the suite of models at the NOAA/National Centers for Environmental Prediction (NCEP) in 2012. The need for an effective hourly updated assimilation and modeling system for the United States for situational awareness and related decision-making has continued to increase for various applications, including aviation (and transportation in general), severe weather, and energy. The RAP is distinct from the previous RUC in three primary aspects: a larger geographical domain (covering North America), use of the community-based Advanced Research version of the Weather Research and Forecasting (WRF) Model (ARW) in place of the RUC forecast model, and use of the Gridpoint Statistical Interpolation analysis system (GSI) instead of the RUC three-dimensional variational data assimilation (3DVar). As part of the RAP development, modifications have been made to the community ARW model (especially in the model physics) and to the GSI assimilation system, some based on model and assimilation design innovations developed initially for the RUC. Upper-air forecast verification is included against both rawinsondes and aircraft reports, the latter allowing hourly verification. In general, the RAP produces forecasts superior to those from the RUC, and its skill continued to increase from 2012 through RAP version 3 as of 2015. In addition, the RAP can improve on persistence forecasts in the 1–3-h forecast range for surface, upper-air, and ceiling forecasts.
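
A persistence forecast in this context simply holds the most recent observation or analysis fixed over the 1–3-h window, so the RAP adds value only where its error is smaller than that baseline. The sketch below shows the comparison with toy numbers; it is a generic illustration, not the RAP verification code.

    import numpy as np

    def rmse(pred, truth):
        return np.sqrt(np.mean((pred - truth) ** 2))

    obs_now = np.array([280.1, 281.4, 279.8])    # e.g., 2-m temperature, K (toy values)
    obs_in_1h = np.array([280.9, 282.2, 280.3])
    model_1h = np.array([280.7, 282.0, 280.6])

    print(rmse(obs_now, obs_in_1h))   # persistence error
    print(rmse(model_1h, obs_in_1h))  # model error; smaller means the model beats persistence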

Full access