Methane is the second most important anthropogenic greenhouse gas after CO2. As a short-lived climate forcer (atmospheric lifetime ~10 years), it offers a lever for slowing near-term climate change. Major anthropogenic sources include oil and gas exploration and use, livestock, landfills, coal mining, and rice cultivation; wetlands are the dominant natural source. However, the magnitude and spatial distribution of these sources are highly uncertain and difficult to constrain.
Fig. Simulated methane concentrations using emissions constrained by satellite observations.
The hydroxyl radical (OH) is the primary oxidant for many non-CO2 greenhouse gases and halocarbons. It also regulates the production of tropospheric ozone, a major air pollutant. As such, changes in tropospheric OH could have large implications for both future climate and air quality. However, we currently lack a predictive understanding of OH on decadal-to-centennial timescales, as evidenced by the disagreement among global models in their simulations of OH.
Fig. OH concentrations in a 6000-year equilibrium simulation with a coupled chemistry-climate model.
Carbon dioxide (CO2) is an atmospheric trace gas and the largest anthropogenic radiative forcer. CO2 levels have risen from 280 ppm in pre-industrial times to more than 400 ppm today, largely due to fossil fuel emissions, and can be measured from ground stations, aircraft, and satellites. The traditional paradigm for ground-based trace gas measurement has been a sparse network of high-precision instruments; the resulting concentrations are used to estimate emission fluxes, validate numerical models, and quantify changes in physical processes. By contrast, the BEACO2N project (http://beacon.berkeley.edu/Overview.aspx) aims to better constrain the emissions and physical processes governing CO2 by deploying a high density of moderate-precision instruments.
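The statistical rationale for a dense moderate-precision network can be illustrated with a toy calculation: for uncorrelated noise, averaging N sensors reduces random error by 1/sqrt(N). This is only a sketch; the sensor precisions and concentration below are hypothetical, not BEACO2N specifications.

```python
import numpy as np

rng = np.random.default_rng(0)
true_co2 = 415.0  # ppm, hypothetical background concentration

# One high-precision sensor (0.1 ppm noise) vs. a network of 50
# moderate-precision sensors (1 ppm noise) sampling the same air mass.
n_trials = 10_000
precise = true_co2 + 0.1 * rng.standard_normal(n_trials)
network = true_co2 + 1.0 * rng.standard_normal((n_trials, 50))
network_mean = network.mean(axis=1)

# Uncorrelated noise averages down as sigma / sqrt(N):
print(precise.std())       # ~0.10 ppm
print(network_mean.std())  # ~1.0 / sqrt(50), i.e. ~0.14 ppm
```

Of course, the real advantage of a dense network is spatial coverage, which no amount of averaging at a single site can provide.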
Fig. We constructed a custom, hourly, 1-km CO2 emission inventory for the Bay Area.
Inverse models use observations of a physical system to quantify the state variables driving its evolution. This requires a physical (forward) model that relates a set of input variables (the state vector) to a set of output variables (the observation vector). A critical step in solving the inverse problem is determining how much information the observations contain and choosing the state vector accordingly; this is non-trivial when using a large ensemble of observations with large errors.
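The information-content question can be made concrete with the standard linear Gaussian (Bayesian) solution. The sketch below is generic, not the specific model used here: the dimensions, covariances, and forward matrix K are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Forward model y = K x + noise, mapping n state elements (e.g. emission
# fluxes) to m observations (e.g. concentrations).
n, m = 5, 50
K = rng.random((m, n))
x_true = rng.random(n)
y = K @ x_true + 0.01 * rng.standard_normal(m)

x_a = np.full(n, 0.5)        # prior (a priori) state estimate
S_a = 0.25 * np.eye(n)       # prior error covariance
S_o = 0.01**2 * np.eye(m)    # observation error covariance

# Posterior (maximum a posteriori) solution of the linear Gaussian problem:
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_o) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_o) @ (y - K @ x_a)

# The averaging kernel A quantifies the information content of the
# observations; trace(A) counts the independently constrained pieces
# of information (degrees of freedom for signal), at most n.
A = S_hat @ K.T @ np.linalg.inv(S_o) @ K
print(np.trace(A))  # approaches n when the observations are informative
```

When observations are sparse or noisy, trace(A) falls well below n, signaling that the state vector should be coarsened or regularized.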
Fig. Illustration of using a Gaussian mixture model and radial basis functions for defining the state vector.
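In the spirit of the figure, one way to coarsen a high-dimensional gridded state vector is to project it onto a small set of radial basis functions. The sketch below uses Gaussian RBFs on an invented 1-D grid with hypothetical dimensions and widths; in practice a Gaussian mixture model could be used to place the basis functions.

```python
import numpy as np

# Toy 1-D "grid" of 100 emission cells; the full state vector is too
# large to constrain, so it is expressed with 8 Gaussian radial basis
# functions instead.
grid = np.linspace(0.0, 1.0, 100)
centers = np.linspace(0.0, 1.0, 8)  # RBF centers (a Gaussian mixture
                                    # model fit could place these)
width = 0.08

# W[i, j] = value of basis function j at grid cell i, with rows
# normalized so each cell is a weighted blend of basis functions.
W = np.exp(-((grid[:, None] - centers[None, :]) ** 2) / (2 * width**2))
W /= W.sum(axis=1, keepdims=True)

# A smooth emission field is well captured by the 8 coefficients:
field = 1.0 + np.sin(2 * np.pi * grid)
coeffs, *_ = np.linalg.lstsq(W, field, rcond=None)
recon = W @ coeffs
print(np.abs(recon - field).max())  # small reconstruction error
```

The inversion then solves for the 8 coefficients rather than the 100 cell values, matching the dimension of the state vector to the information content of the observations.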