
Modeling Dietary Exposure with Special Sections on Modeling Aggregate and Cumulative Exposure

Barbara J. Petersen, in Hayes' Handbook of Pesticide Toxicology (Third Edition), 2010

50.4 Uncertainty

The EPA (1992) classified uncertainty in exposure assessments into three categories: scenario uncertainty, parameter uncertainty, and model uncertainty. Examples of how these uncertainties may arise in an exposure assessment follow.

50.4.1 Scenario Uncertainty

Scenario uncertainties include descriptive errors, aggregation errors, and incomplete analysis. For instance, for residues on imported crops, scenario uncertainty may result from incorrect information regarding the regions in which the product is used and how it is used.

50.4.2 Parameter Uncertainty

Parameter uncertainty includes measurement errors, sampling errors, variability, and the use of surrogate data. Two examples of measurement uncertainty are the presumed tendencies of some survey respondents to underestimate their body weights and to underreport their food consumption. In the first example, parameter uncertainty may lead to overestimation of exposures, whereas in the second it may lead to underestimation. Sampling errors may result from sampling too few observations or from nonrepresentative sampling. Studies of residential exposures, for example, often include very few measurements and typically are conducted for a limited number of scenarios.
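A minimal numerical sketch of the direction of these two biases, using a simple body-weight-normalized exposure formula (exposure = residue concentration × consumption / body weight); all values below are hypothetical and chosen only for illustration:

```python
# Hypothetical illustration of how reporting errors bias a body-weight-normalized
# exposure estimate (mg/kg bw/day): exposure = concentration * consumption / body weight.

def daily_exposure(residue_mg_per_kg_food, consumption_kg_per_day, body_weight_kg):
    """Simple deterministic dietary exposure estimate in mg/kg bw/day."""
    return residue_mg_per_kg_food * consumption_kg_per_day / body_weight_kg

residue = 0.5            # mg residue per kg food (hypothetical)
true_consumption = 0.3   # kg food per day (hypothetical)
true_body_weight = 70.0  # kg (hypothetical)

true_exposure = daily_exposure(residue, true_consumption, true_body_weight)

# Respondent underestimates body weight by 10% -> exposure is overestimated.
overestimate = daily_exposure(residue, true_consumption, 0.9 * true_body_weight)

# Respondent underreports food consumption by 20% -> exposure is underestimated.
underestimate = daily_exposure(residue, 0.8 * true_consumption, true_body_weight)

print(f"true: {true_exposure:.5f}, underreported weight: {overestimate:.5f}, "
      f"underreported consumption: {underestimate:.5f}")
```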

50.4.3 Model Uncertainty

A comparison of the results of the various analyses provides the assessor with a measure of the impact of uncertainty in the exposure model used. Another example of model uncertainty is the use of the wrong model to represent the degradation, over time, of air (or soil) concentrations.
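As a hedged illustration of this kind of model uncertainty, the short sketch below compares two plausible but different degradation models for an air (or soil) concentration, a single first-order decay and a biphasic decay; the parameter values are hypothetical, and the point is only that the choice of functional form changes the predicted concentrations:

```python
import math

# Two candidate models for the decline of an air (or soil) concentration over time.
# Choosing the wrong functional form is one source of model uncertainty.
# All parameter values below are hypothetical.

C0 = 10.0  # initial concentration (ug/m^3)

def first_order(t, k=0.10):
    """Single first-order decay, C(t) = C0 * exp(-k t)."""
    return C0 * math.exp(-k * t)

def biphasic(t, f_fast=0.7, k_fast=0.25, k_slow=0.02):
    """Biphasic decay: a fast-clearing fraction plus a slowly clearing residue."""
    return C0 * (f_fast * math.exp(-k_fast * t) + (1 - f_fast) * math.exp(-k_slow * t))

for t in (0, 7, 30, 90):  # days
    print(f"day {t:3d}: first-order {first_order(t):6.2f}  biphasic {biphasic(t):6.2f}")
```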


URL: https://www.sciencedirect.com/science/article/pii/B9780123743671000501


Risk Assessment, Uncertainty

D. Schwela, in Encyclopedia of Toxicology (Third Edition), 2014

Uncertainty in Exposure Assessment

As pointed out by WHO/International Programme on Chemical Safety, the first step of an exposure analysis is to establish a framework designed to reflect the links between the pollutant source and human exposure. This framework is a conceptual model that serves as the umbrella for exposure scenarios. Figure 4 shows a generic conceptual model of exposure to chemicals, depicting the sources of their release and the pathways of exposure. Depending on the source, there are different levels of scenarios, such as those describing the release of a compound from transport or from building materials. Each exposure from one of the sources may be described in a particular scenario. All of these scenarios may be combined to yield larger and more complex scenarios that characterize, for example, the inhalation exposure route.


Figure 4. Generic conceptual model of exposure to chemicals.

In exposure assessment, uncertainty arises from insufficient knowledge about relevant exposure scenarios, exposure models, and model inputs. Uncertainty pertains to different steps and approaches in the assessment and can be classified into three broad categories:

Scenario uncertainty;

Model uncertainty; and

Exposure factor uncertainty.

Scenario uncertainty is the uncertainty in specifying the exposure scenario that is consistent with the scope and purpose of the exposure assessment. This uncertainty includes errors

In scenario description (e.g., wrong or incomplete information);

Of assessment (e.g., choice of the wrong model);

Of incomplete analysis (e.g., overlooking an important exposure route); and

In the specification of

the agent to be considered;

exposed populations;

spatial and temporal information (e.g., geographic applicability, seasonal applicability);

microenvironments;

population activities;

sources of the released agent;

exposure pathways;

exposure events;

exposure routes; and

available risk management measures.

Model uncertainty is the uncertainty due to gaps in scientific knowledge that prevent an adequate capture of the correct causal relations among exposure factors. This uncertainty principally arises from

Modeling errors (i.e., non-consideration of important exposure factors); and

Relation (dependency) errors (i.e., erroneous interpretation of correlations).

In addition, when using models, the following sources of uncertainty are to be considered:

Linking the selected model to the adopted scenario (model boundaries);

Model dependencies;

Model assumptions;

Model detail (i.e., simple or complex);

Model extrapolation;

Model implementation and technical model aspects (e.g., errors in software and hardware); and

Model input data (diversity of substances and humans, variation among individuals, statistical errors of estimates, systematic bias, aggregation bias, measurement errors).

Exposure factor uncertainty arises in the specification of numerical values for human exposure factors, such as those in the US Environmental Protection Agency (EPA) Exposure Factors Handbook, the EU Technical Guidance Document, and the KTL (the National Public Health Institute of Finland) Exposure Factors Sourcebook for Europe. Exposure assessment involves the specification of values for exposure factors, either for direct determination of the exposure or as input to mechanistic, empirical, or distribution-based models used to fill the exposure scenario with adequate information. Sources of exposure factor uncertainty include the following:

Measurement errors (random or systematic);

Sample uncertainty;

Data type (e.g., surrogate data, expert judgment, default data, modeling data, measurement data);

Extrapolation uncertainty; and

Uncertainty in the determination of the statistical distribution used to represent distributed parameter values.

All these uncertainties have to be considered in exposure assessment.
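As an illustration of the last bullet above (uncertainty in the choice of statistical distribution), the sketch below fits two candidate distributions to the same hypothetical body-weight sample and compares an upper percentile; the data and parameters are invented for illustration only:

```python
import numpy as np
from scipy import stats

# Hypothetical body-weight sample (kg). Which distribution represents it,
# normal or lognormal, is itself a source of exposure factor uncertainty.
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=np.log(70), sigma=0.25, size=200)

# Fit both candidate distributions to the same data.
mu, sd = stats.norm.fit(sample)
shape, loc, scale = stats.lognorm.fit(sample, floc=0)

# The chosen distribution changes the tail used in the exposure model.
p95_norm = stats.norm.ppf(0.95, mu, sd)
p95_lognorm = stats.lognorm.ppf(0.95, shape, loc, scale)
print(f"95th percentile body weight: normal {p95_norm:.1f} kg, lognormal {p95_lognorm:.1f} kg")
```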

The overarching consideration in increasing the level of sophistication of the exposure and uncertainty analysis is whether to conduct a higher-tier analysis. The exact form of analysis in a given tier may vary depending on the specific technical and regulatory context. The level of detail in the quantification of assessment uncertainties, however, should match the degree of refinement in the underlying exposure or risk analysis. Where appropriate to an assessment objective, exposure assessments should be iteratively refined over time to incorporate new data, information, and methods to reduce uncertainty and improve the characterization of variability.

Tier 0 uncertainty analysis is performed for routine screening assessments, where it is only possible to characterize uncertainty by established default uncertainty factors. These screening-level assessments are designed to demonstrate whether the projected exposures or risks are unlikely to exceed reference values.

Where the screening assessment indicates a concern, a Tier 1 (qualitative) analysis examines how likely it is that the exposure or risk levels of concern may be exceeded, and by how much. The main objective of Tier 1 uncertainty analysis is to characterize the influence of each identified source of uncertainty, independently, on the results of the assessment. In a qualitative uncertainty analysis, the uncertainties in each of the major elements of the exposure or risk analysis are usually described, often together with a statement of the estimated magnitude and direction of the uncertainty. If this Tier 1 analysis does not provide a sufficient basis to reach a risk management decision, it forms the basis for a Tier 2 uncertainty analysis.

Tier 2 uncertainty analysis consists of a deterministic point-estimate sensitivity analysis that typically examines the sensitivity of results to input assumptions by using modified input values. This analysis aims to identify the relative contribution of the uncertainty in a given parameter value (e.g., inhalation rate, emission rate) or a model component to the total uncertainty in the exposure or risk estimate. A sensitivity analysis performed in this way may provide high, average, or low predictions corresponding to the range of values considered for each of the inputs. If this Tier 2 analysis does not provide a sufficient basis to reach a risk management decision, it forms the basis for a Tier 3 uncertainty analysis.
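The sketch below illustrates a Tier 2-style point-estimate sensitivity analysis on a simple, hypothetical inhalation exposure model (exposure = concentration × inhalation rate × time fraction / body weight); the model, parameter names, and low/central/high values are assumptions made for illustration, not values from the source:

```python
# Hypothetical Tier 2-style point-estimate sensitivity analysis:
# vary one input at a time (low/central/high) in a simple inhalation exposure model,
# exposure (mg/kg/day) = concentration * inhalation_rate * time_fraction / body_weight.

central = {"conc_mg_m3": 0.05, "inhalation_m3_day": 16.0,
           "time_fraction": 0.8, "body_weight_kg": 70.0}
ranges = {"conc_mg_m3": (0.01, 0.2), "inhalation_m3_day": (10.0, 22.0),
          "time_fraction": (0.5, 1.0), "body_weight_kg": (50.0, 90.0)}

def exposure(p):
    return p["conc_mg_m3"] * p["inhalation_m3_day"] * p["time_fraction"] / p["body_weight_kg"]

base = exposure(central)
print(f"central estimate: {base:.5f} mg/kg/day")
for name, (low, high) in ranges.items():
    results = []
    for value in (low, high):
        p = dict(central, **{name: value})  # modify one input, keep the rest central
        results.append(exposure(p))
    # A wide low/high spread flags the inputs that dominate the uncertainty.
    print(f"{name:18s} low {results[0]:.5f}  high {results[1]:.5f}")
```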

The starting point for any Tier 3 analysis is the quantification of probability distributions for each of the key exposure or risk model inputs (e.g., mean and standard deviation of fitted statistical distributions, such as normal, log-normal, or Weibull distributions). Tier 3 uncertainty analysis examines the combined influence of the input uncertainties on the predictions by propagating them either analytically (e.g., Taylor series approximation) or numerically (e.g., Monte Carlo simulation). More comprehensive quantitative analyses (e.g., two-dimensional Monte Carlo analysis) of exposure assessment uncertainties often rely on modeling approaches that characterize variability and uncertainty in the model inputs and parameters separately. In principle, the outputs from variability and uncertainty analysis are used to quantify the nature of the variability in the predicted distribution of exposures and the uncertainties associated with different percentiles of the predicted population exposure or risk estimates.
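A minimal sketch of Tier 3-style numerical propagation by Monte Carlo simulation, using the same hypothetical inhalation exposure model and assumed input distributions (a full two-dimensional analysis would additionally nest an outer loop over the uncertain distribution parameters):

```python
import numpy as np

# Hypothetical Tier 3-style Monte Carlo propagation: sample the inputs of a
# simple inhalation exposure model from assumed distributions and examine
# the resulting distribution of exposure estimates. All distributions are illustrative.
rng = np.random.default_rng(42)
n = 100_000

conc = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=n)   # mg/m^3
inhalation = rng.normal(16.0, 2.5, size=n)                   # m^3/day
time_fraction = rng.uniform(0.5, 1.0, size=n)                # unitless
body_weight = rng.normal(70.0, 12.0, size=n)                 # kg

exposure = conc * inhalation * time_fraction / body_weight   # mg/kg/day

for q in (50, 90, 95, 99):
    print(f"P{q}: {np.percentile(exposure, q):.5f} mg/kg/day")
```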

The ECHA also recommends a tiered approach (qualitative, deterministic, and probabilistic analysis) for exposure assessment in the implementation projects for the European chemicals regulation REACH, as does the European Food Safety Authority for dietary exposure assessment.


URL: https://www.sciencedirect.com/science/article/pii/B978012386454300573X

Cerebral Malformations

William D. Graf, Shihui Yu, in Rosenberg's Molecular and Genetic Basis of Neurological and Psychiatric Disease (Fifth Edition), 2015

Molecular genetics

Any gene involved in any process of neurodevelopment can potentially cause a cerebral malformation if it is mutated or dysregulated. Genetic/genomic defects affecting the induction of the ectoderm or the pattern formation of the neural tube in the early stages of embryonic neurodevelopment, or affecting the regulation of neuronal migration and differentiation in the later stages of fetal neurodevelopment, will generally cause major malformations. Genetic/genomic alterations at all levels, from single-nucleotide mutations to whole chromosomes, have been found to cause cerebral malformations, with all known modes of inheritance involved. Some examples of genes associated with microcephaly, lissencephaly, cobblestone cortex, heterotopia, and polymicrogyria are listed in Table 13.1.

Genomic Screening by CMA

Genomic copy-number variants (CNVs) identified by CMA can be clearly pathogenic or presumably benign, depending on their size, location, and gene content relative to a given region of the genome. In addition, CMA may identify some CNVs as "variants of uncertain significance" (VUS) because of insufficient knowledge about the functions and clinical relevance of the gene content. Genome-wide CMA scanning will continue to reveal novel pathogenic CNVs and VUS. However, the uncertainty of some VUS will diminish as more is learned about the biological function of human genomic variations and their relevance to clinical phenotypes. By thoroughly scanning the human genome, CMA can frequently identify incidental findings that are discovered unintentionally and are unrelated to the original aim of the test.22 In general, CMA has numerous advantages over conventional chromosome karyotyping, such as high sensitivity, high throughput, and a high detection rate. CMA has become the first-line genetic test in the etiological evaluation of fetuses with ultrasound abnormalities and of infants and children with neurodevelopmental disorders (NDD), including cerebral malformations.23,24 CMA has clinical utility for diagnosis and genetic counseling as well as potential utility for patient management. Up to one-third of all pathogenic CNVs may contain genetic information that leads to useful clinical action.25

Postnatal CMA

A genomic cause of an NDD will be identified by CMA in 5–20% of affected infants and children, depending on the severity of the phenotype and the CMA platform applied. The detection rate is even higher in children with dysmorphic features and other organ system anomalies. Clinically, the majority of patients with primary microcephaly, especially those with cerebral dysgenesis, multiple organ anomalies, or dysmorphic facial features, have syndromes that are unrecognizable even to the most experienced clinicians. Conversely, it is not uncommon for diagnoses made by CMA to involve specific clinical features that were present but not apparent, or not yet manifest, at the time of testing. These situations support the concept of a genotype-first approach over the traditional phenotype-first approach in clinical practice.

Prenatal CMA

Prenatal diagnosis through CMA using uncultured cells from chorionic-villus sampling or amniocentesis fluid has identified additional, clinically relevant information when compared with the previous standards of chromosome karyotyping.6 Prenatal CMA is most beneficial after ultrasonographic examination has identified fetal structural anomalies.26 Expanded prenatal genetic testing involves benefits, limitations, and consequences. Inconclusive prenatal test results (e.g., neuroimaging differences with CNVs of uncertain significance) cause parental anxiety and clinical dilemmas. Uncertainty scenarios underscore the critical need for comprehensive pretest consultation with informed consent, judicious reporting of test results, and access to qualified genetic counselors in the process of enabling parents to make well-informed decisions.

Genomic Screening by NGS Techniques

Although still in their adolescence, NGS techniques have had a revolutionary impact on clinical genetic/genomic testing and are gradually making their way into clinical laboratories. NGS will replace conventional candidate-gene approaches, in which the recognition of certain features of a known phenotype directs a clinician to a specific gene test in an attempt to confirm a suspected diagnosis. NGS-based genetic/genomic tests include disease-targeted gene panels (NGS-panels), whole-exome sequencing (WES), and whole-genome sequencing (WGS). Both NGS-panels and WES require an enrichment step for the desired genomic regions before NGS can be performed, but this step is not necessary for WGS. WGS techniques uncover virtually all variants in an individual's genome simultaneously, whereas WES typically evaluates all known genes and NGS-panels target a group of selected genes (from several to hundreds) related to certain diseases or disorders for which both allelic and locus heterogeneity are substantial. Unlike conventional "targeted and specific" genetic testing strategies and NGS-panels, which mostly target only known disease genes for diagnostic purposes, WES/WGS scans the whole exome/genome for both discovery and diagnostic purposes. In many cases, research discoveries may be directly translated into clinical diagnoses when compelling evidence exists for a causal relationship between a novel variant and a unique phenotype.

Postnatal NGS Analysis

WES has rapidly become a popular diagnostic test in the characterization of possible genetic causes of nonspecific or unusual disease presentations.27 Available data indicate that WES identifies the underlying genetic defect in 25–30% of patients without a clear clinical diagnosis or with negative test results for genes known to be associated with the suspected disorders.28 However, there are some major limitations to the clinical utility of WES, including: 1) excessive cost; 2) capture methods restricted to "known genes"; 3) the inability of WES to sequence noncoding regulatory or deep-intronic regions of known genes that may contain the etiologic mutations; 4) the inadequacy of WES in covering 5–10% of coding regions because of pseudogenes (or repetitive regions, including trinucleotide repeats) or GC-rich regions that hinder capture and sequencing procedures; 5) the shortfall of WES technology in detecting germline CNVs owing to biased enrichment during capture procedures (PCR amplicons), uneven genomic distribution of exons, and insufficient software; 6) overwhelming numbers of variants without a full understanding of their biological implications for human health or disease; 7) false-positive findings due to imperfect data-filtering algorithms; 8) the magnitude of data per sample and the time required for data interpretation prior to clinical use; 9) the lack of tools, standards, regulations, and policies to integrate meaningful output for physicians and families; and 10) many unsettled issues relating to ethics, privacy, consent, and legal protections.

Instead of targeting all exons of the known genes in WES, NGS-panels target a group of selected genes related to certain diseases or disorders. Many disease-specific diagnostic assays by NGS-panel methods are now commercially available for genetically heterogeneous constitutional disorders. Considering the limitations of WES, NGS-panel strategies are expected to remain the major application of NGS in diagnostic testing for the next few years.29 NGS-panels can avoid some of the WES limitations by: 1) filling in missing NGS content with supplemental Sanger sequencing and other complementary technologies; 2) supplementing disease-targeted sequencing tests with CNV detection approaches currently missed by WES services; 3) limiting the numbers of VUS and incidental findings that are unrelated to the indication for testing; and 4) providing disease-specific expertise already residing in laboratories that previously carried out disease-targeted testing. Thus, the American College of Medical Genetics and Genomics (ACMG) recommended that WES or WGS should be reserved for those cases in which disease-targeted testing is negative or unlikely to return a positive result in a timely and cost-effective manner.30

WGS is currently applied mostly at the research level, and few examples have successfully demonstrated utility in patient management for single-gene disorders.31,32 WGS has some advantages over WES and NGS-panels, such as fewer sample biases during preparation, more comprehensive genome coverage, and easier identification of large deletions/duplications and other genomic abnormalities. When understanding of noncoding regions improves, sequencing costs decrease, and software and sequencing technologies can detect all types of mutations across all genetic loci, WGS is anticipated to become the standard clinical method enabling a "genotype-first" screening approach, even in the practice of neonatal or fetal medicine, in an attempt to attain individualized healthcare, predetermine optimal patient management, and eventually eliminate the need for CMA as a separate test. Furthermore, with growing understanding of the effects of newly discovered genetic variation, a more comprehensive genetic/genomic testing era will encompass DNA sequencing as well as transcriptomics, proteomics, and epigenomics.

Prenatal NGS Analysis

Both invasive and noninvasive methods of prenatal diagnosis of genetic/genomic abnormalities by NGS are available; however, the use of these technologies in the prenatal period raises many ethical and policy questions.33,34 As with prenatal CMA analysis, NGS-based prenatal testing requires chorionic villus sampling or amniocentesis fluid, from which DNA is extracted. Noninvasive prenatal detection of common fetal aneuploidies with maternal plasma cell-free DNA (cfDNA) using various NGS platforms has been widely applied in clinical settings.35 NGS methods to detect genomic CNVs using cfDNA have also been achieved.36 Recently, several groups have provided promising solutions for the noninvasive detection of fetal Mendelian diseases, especially for couples who already have an affected child (proband).37

Genotype–Phenotype Correlations

Patterns of morphological development are continually being correlated with new data regarding the genetic programming of the CNS. The overarching goal is to create an integrated scheme that explains brain malformations in terms of both morphogenesis and genetic gradients along the axes of the neural tube and its segmentation.


URL: https://www.sciencedirect.com/science/article/pii/B9780124105294000139

Assessing and reporting uncertainties in dietary exposure analysis

Susanne Kettler, ... David Tennant, in Food and Chemical Toxicology, 2015

4.1.1 Model-specific scenario uncertainties

Scenario uncertainty is not a major factor in deterministic models, since the exposure route of interest and the population of concern are pre-defined. However, it is sometimes unclear what time interval is relevant to the toxicological end-point. For chronic exposures this can vary from a few weeks or months to a lifetime, depending upon the characteristics of the hazard. This factor can govern whether children should be considered separately or whether populations from different age groups should be combined. Other factors may also determine whether specific population groups should be addressed in the scenario.
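A minimal sketch of the kind of deterministic calculation discussed here, computing chronic dietary exposure separately for two illustrative age groups; the foods, concentrations, consumption rates, and body weights are hypothetical:

```python
# Hypothetical deterministic chronic dietary exposure, computed separately for two
# age groups: exposure (mg/kg bw/day) = sum(consumption_i * concentration_i) / body weight.

concentrations = {"apple": 0.02, "bread": 0.01, "milk": 0.005}  # mg/kg food (hypothetical)

groups = {
    # mean daily consumption (kg/day) and mean body weight (kg); all values hypothetical
    "children_2_6": {"consumption": {"apple": 0.15, "bread": 0.08, "milk": 0.40}, "bw": 18.0},
    "adults":       {"consumption": {"apple": 0.10, "bread": 0.15, "milk": 0.25}, "bw": 70.0},
}

for name, g in groups.items():
    intake = sum(concentrations[f] * amount for f, amount in g["consumption"].items())
    print(f"{name}: {intake / g['bw']:.6f} mg/kg bw/day")
```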


URL: https://www.sciencedirect.com/science/article/pii/S0278691515001167

Climate change and its impact on the projected values of groundwater recharge: A review

Dima Al Atawneh, ... Edoardo Bertone, in Journal of Hydrology, 2021

3.4.1 Identification and quantification of sources of uncertainty

Among the aforementioned sources of uncertainty, inter-model uncertainty was the source most often quantified (20%), along with emission scenario uncertainty, while natural variability was often not properly accounted for (Fig. 6). Further, only one study quantified the uncertainty from downscaling methods through GCM-RCM coupling (Moeck et al., 2016).


Fig. 6. Percentage of studies quantifying different types of uncertainty.

Many approaches are used to quantify or identify the uncertain variables that have a potential influence on the projected GWR values. The use of probability distributions from an ensemble of multiple GCMs, or from running the same GCM under multiple emission scenarios (Foley, 2010), was the most common approach to characterise the uncertainty associated with the occurrence of certain future scenarios. Examples include the probability of exceedance (Crosbie et al., 2012, 2013b; Goodarzi et al., 2016; Haidu and Nistor, 2020; Klammler et al., 2013; Lindquist et al., 2019; Rodriguez-Huerta et al., 2020; Sishodia et al., 2018) and the Pearson Type III probability distribution (Lauffenburger et al., 2018). Other techniques, such as kernel density estimation, were used by Adane et al. (2019) to draw frequency distributions for parameters such as rainfall and temperature, and Monte Carlo simulation was used by Ng et al. (2010) to quantify the uncertainty. Similar approaches can also be used for sensitivity analysis, to identify and weight the variables with the greatest impact on future projections. These methods represent a means of demonstrating uncertainty and a way of communicating the uncertainty around the predicted results (Teng et al., 2017).
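A minimal sketch of the probability-of-exceedance approach, computing an empirical exceedance probability from a hypothetical ensemble of projected groundwater recharge changes (the ensemble values and plotting-position formula are illustrative assumptions, not results from the cited studies):

```python
import numpy as np

# Hypothetical ensemble of projected groundwater recharge changes (%) from multiple
# GCM/emission-scenario combinations; the empirical probability of exceedance
# summarizes the ensemble spread as a single uncertainty curve.
ensemble = np.array([-22.0, -15.0, -9.0, -4.0, -1.0, 2.0, 5.0, 9.0, 14.0, 21.0])

sorted_vals = np.sort(ensemble)[::-1]                                     # largest change first
exceedance = np.arange(1, len(sorted_vals) + 1) / (len(sorted_vals) + 1)  # Weibull plotting position

for value, p in zip(sorted_vals, exceedance):
    print(f"P(recharge change >= {value:6.1f}%) ~ {p:.2f}")
```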


URL: https://www.sciencedirect.com/science/article/pii/S0022169421006508

Wave climate projections along the French coastline: Dynamical versus statistical downscaling methods

Amélie Laugel, ... Fernando Méndez, in Ocean Modelling, 2014

5 Discussions and conclusion

This work compared the distributions of bulk wave parameters (significant wave height, mean wave period, mean wave direction and wave energy flux) obtained using two state-of-the-art downscaling methods applied to one global climate model (GCM) and three greenhouse gas emission scenarios (B1, A1B and A2) (IPCC, 2007). A weather pattern-based statistical downscaling method and a dynamical downscaling method were applied using the ARPEGE-CLIMAT GCM to project the future sea state climate along the French Atlantic, English Channel and North Sea coastlines for the target period of 2061–2100. Under both methods and all three scenarios, wave parameters were compared not only in terms of wave climatology but also considering joint distributions and seasonal and interannual variabilities.

In the general context of estimating the potential impact of climate change on the wave climate, this work aimed to compare the results of two projection methods. The validation step first compared SWH simulations obtained using the two methods for the present wave climate against in situ observations at the Gascogne buoy in the Bay of Biscay over the period 2000–2009 (cf. Appendix A); it demonstrated the ability of both methods to estimate SWH climatology, although the DD method provides results slightly closer to the observations than the SD method. We therefore decided to evaluate the ability of the SD method to model the future sea state with respect to the DD projections, which were taken as reference data. In addition, it was emphasized that uncertainties associated with downscaling methods should be accounted for when estimating the impact of climate change on seasonal wave variability, in the same manner as model and scenario uncertainties.

We showed that the SD projections reproduce the mean wave climate as well as the DD projections do under the three examined climate scenarios, with some differences observed for high and low values of the wave parameters. SWH dynamics at the monthly, seasonal and annual scales are well reproduced. In particular, the two methods show a very similar future sea state for spring and autumn, in terms of both mean values and distributions, while the summer and winter projections reveal some differences. The largest of these discrepancies for the A1B scenario corresponds to the higher SWH values obtained from the SD projections in summer along the Atlantic coastline and the lower SWH values obtained from the SD projections in winter in the Brittany region, relative to the DD method. In terms of seasonal projections, these differences between the two methods increase from scenario B1 to A1B and from A1B to A2.

The exhaustive comparison of the SD and DD methods helps explain the aforementioned differences. First, the analysis of the annual joint distributions of (SWH, Tm02) and (POW, θm) indicated that the SD method affects the distributions of wind sea and young waves. Nevertheless, it also shows that developed waves and swells are similarly projected by the two methods, as are uni-directional and multi-directional sea states. Second, the SD method may exhibit limitations in modeling the tails of the wave probability distributions. Comparison of monthly SWH percentiles between the two methods revealed lower values for the SD projections in winter from the 75th percentile upward, reaching approximately 2 m for the 99th percentile along the French Atlantic coastline. Finally, these differences are linked to the loss of monthly variability under the SD method in comparison with the DD method.
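A minimal sketch of the percentile comparison described above, using synthetic stand-ins for the SD and DD significant wave height samples; the distributions are invented and only illustrate how such a diagnostic can be computed:

```python
import numpy as np

# Synthetic stand-ins for dynamically (DD) and statistically (SD) downscaled
# daily significant wave height (m) for a winter season; comparing upper
# percentiles mirrors the diagnostic discussed in the text.
rng = np.random.default_rng(0)
swh_dd = rng.weibull(1.6, size=3100) * 2.5   # reference (DD-like) sample, heavier tail
swh_sd = rng.weibull(1.8, size=3100) * 2.3   # lighter-tailed (SD-like) sample

for q in (50, 75, 90, 95, 99):
    dd, sd = np.percentile(swh_dd, q), np.percentile(swh_sd, q)
    print(f"P{q:02d}: DD {dd:5.2f} m  SD {sd:5.2f} m  diff {sd - dd:+5.2f} m")
```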

Nevertheless, the estimates of the future mean wave climate obtained using the SD method based on weather type classification match the estimates obtained using the DD method, at least for the dominant features, but with a much lower computational cost. Similar results were obtained for seasonal mean values, joint distributions and future interannual variability. However, some improvements of the SD method could be considered to enhance the modeling of wind seas and energetic sea states. These improvements could concern the definition of the predictor, such as an increase in the spatial resolution or the inclusion of a model dedicated to extreme values in the SD method to better capture the upper tail of the SWH distribution.

Finally, considering the limitations and advantages of both downscaling methods, the authors recommend that the advantages of each technique should be exploited. The SD method could be applied with an ensemble of GCMs and scenarios to estimate the potential impact of climate change on the mean wave climatology and the associated uncertainties. Application of the DD method could also complete the estimation of the possible impact of climate change on more energetic sea state conditions and quantify the downscaling uncertainties inherent to the projection of the future wave climate.


URL: https://www.sciencedirect.com/science/article/pii/S1463500314001334

Castles built on sand or predictive limnology in action? Part B: Designing the next monitoring-modelling-assessment cycle of adaptive management in Lake Erie

George B. Arhonditsis, ... Vincent Cheng, in Ecological Informatics, 2019

4 Risks and uncertainties with the implementation of best management practices: what does the literature suggest?

A variety of costly BMPs have been designed to mitigate pollution from diffuse sources in agricultural and urban areas (Leitão et al., 2018; Edwards et al., 2016; Dietz, 2007; Sharpley et al., 2006). Although their implementation has been based on the stipulation that both their short- and long-term effectiveness are guaranteed, emerging evidence suggests moderate water quality improvements in many watersheds and broad variability in BMP performance, which is often much lower than the specifications of the original design derived from BMP experimental studies (Jarvie et al., 2013; Kleinman et al., 2011). This form of scenario uncertainty can be attributed to a number of factors, such as suboptimal design, lack of landowner participation (Fig. 5a, b), erroneous selection of BMPs (Fig. 5c), failure to address non-point pollution sources, inadequate coverage of the watershed, lag time between BMP implementation and distinct improvements in downstream conditions, different efficiency for particulate and soluble nutrient forms (Fig. 5d, e), and variability induced by extreme events and other weather-related anomalies (Liu et al., 2017; Meals et al., 2010). In Lake Erie, Smith et al. (2018) noted that the majority of local farmers apply P fertilizers at or below the current recommendations and are erroneously singled out as the main culprit for the recent re-eutrophication. It was asserted that agronomic changes (e.g., no-tillage adoption, crop cultivar advances) in the surrounding watersheds and the lack of appropriate fertility guidance and practices to protect water quality could instead be primarily responsible for the recent trends in nutrient biogeochemical cycles (Smith et al., 2018). The same study also questioned whether the "law of unintended consequences" has received sufficient consideration in the local decision-making process, as environmental interventions can conceivably have long-term damaging effects on ecosystem services given our limited knowledge of complex ecosystem interactions (Smith et al., 2018; May and Spears, 2012).


Fig. 5. (a) Areal nutrient balance for the USA and Canada, where dotted lines indicate cumulative P inputs of fertilizer and manure and the dashed line represents P uptake by crops (Bouwman et al., 2013); (b) areal nutrient balance for Ontario, Canada, with estimated P accumulation in soil for 1973–2013 (International Plant Nutrition Institute, 2013); (c) scatterplot of reported BMP effectiveness for SRP and TP for filter strips and conservation tillage (Gitau et al., 2005), in which negative values indicate that the BMP acts as a P source; (d) and (e) probability distributions of BMP effectiveness for SRP and TP reduction for reduced tillage and wetland restoration, respectively (Igras, 2016).

In the same context, Osmond et al. (2012) raised concerns that many important empirical findings from past conservation practices across North America have not been incorporated into current BMP guides. For example, earlier work in the area cautioned that the focus on sediment erosion control (no-till conservation, buffer strips, and fall fertilization) may entail a trade-off with elevated losses of bioavailable phosphorus (Gebhardt et al., 1985; Logan et al., 1979), and indeed recent studies by Jarvie et al. (2017) and Baker et al. (2017) have attributed the re-appearance of HABs to the unintended consequences of conservation decisions adopted 20–50 years ago (Fig. 6). More recently, Liu et al. (2017, 2018) identified that BMP performance assessments are predominantly based on short-term experimental studies, whereas long-term monitoring has registered variable performance trends. For example, Mitsch et al. (2012) observed a gradual degradation of constructed wetlands in terms of their effectiveness for SRP removal within 15 years of monitoring, while Kieta et al. (2018) reported limited efficiency of vegetative buffer strips in the Great Lakes basin, where the majority of nutrients are transported with the spring freshet during the non-growing season. Similarly, Li and Babcock (2014) reported long-term orthophosphate areal export rates from green roofs comparable to those of highly intensive agricultural areas. In order to minimize the discrepancy between expected and actual environmental effects, Liu et al. (2018) proposed a framework to incorporate BMP life-cycle effectiveness into watershed management plans by explicitly accounting for: (i) the variability in the starting efficiency of each BMP type in reducing the severity of runoff and pollutant concentrations, due to local condition differences and installation practices; (ii) intrinsic variability of operational performance due to watershed geophysical conditions, differential response to storm events, and seasonality; (iii) non-linearity of BMP effectiveness in response to different loading regimes as well as the expected decline in performance over time, which in turn enforces the need for regular maintenance; and (iv) lagged manifestation of water quality improvements after BMP adoption due to nutrient spiraling downstream or recycling in receiving water bodies (Fig. 6).


Fig. 6. Risks and uncertainties with the implementation of best management practices (BMPs) in the Maumee River watershed. Our study highlights the importance of designing land-use management scenarios that accommodate recent conceptual and technical advancements regarding the life-cycle effectiveness of various BMPs, the variability in their starting operational efficiency, and their differential response to storm events and seasonality.

Promoting watershed management plans often requires financial incentives, such as tax credits, cost-sharing, reimbursements, insurance, and certification price premiums (Tuholske and Kilbert, 2015). The aforementioned discrepancy in timing between BMP implementation and water quality improvement can make financial incentives unappealing if a "pay-per-performance" practice is adopted. Failure of selected BMPs to achieve loading reduction targets should be viewed cumulatively as direct budget losses, environmental capital depreciation, and socio-economic values at risk (Wolf et al., 2017; Farber et al., 2002). The incorporation of BMP uncertainties into scenario analysis would introduce financial risk assessment into strategic agro-environmental management decisions by weighting the amount of the proposed financial incentives against the risk of not attaining nutrient reduction goals (Palm-Forster et al., 2016). The Chesapeake Bay Program (CBP) protocol can serve as an exemplary case of comprehensive validation guidance for BMP effectiveness based on rigorous assessment of both treatment risks (known probabilities associated with BMP performance) and uncertainty (lack of knowledge surrounding these probabilities). The CBP protocol is based on transparency and inclusivity, and as such it incorporates detailed literature review, expert elicitation, data collection from local BMPs, and rigorous analysis (CBP, 2015).

To the best of our knowledge, none of the current watershed models accounts for the life-cycle non-stationarity or the overall uncertainty in BMP effectiveness. In particular, SWMM5 does consider concentration-dependent removal of pollutants with specific BMPs during peak and base flows, but it still relies on deterministic values of statistically significant median influent- and effluent-event concentrations (Rossman and Huber, 2016). Other major ecohydrological models, such as SWAT and HSPF, are based either on a deterministic (pre-specified constant) nutrient removal effectiveness or on empirical relationships of variable statistical power (Dorioz et al., 2006). In particular, SWAT treats the impact of vegetative filter strips on dissolved phosphorus removal as a linear function of surface runoff reduction. Nonetheless, the corresponding regression model explains <30% of the observed variability, while the empirical reduction efficiency ranges from 43% to −31% near zero runoff reduction (Dillaha et al., 1989). As a first step toward accommodating BMP uncertainty, we thus propose a moderate enhancement with a stochastic time-invariant representation of BMP effectiveness in watershed models (Griffin, 1995), followed by the introduction of time-variant probability distributions for BMP life-cycle performance (Liu et al., 2018). The proposed stochastic augmentation would allow sampling over the uncertainty of BMP scenarios with Monte Carlo simulations, thereby providing a pragmatic tool to assess the likelihood of achieving the proposed nutrient-loading reduction goals. These probabilities can then be subjected to sequential updating through the iterative monitoring-modelling-assessment cycles of adaptive management, whereby our degree of confidence in the success of a selected BMP strategy can be refined.
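A minimal sketch of the proposed stochastic time-invariant representation, assuming illustrative Beta distributions for the effectiveness of two BMP types and using Monte Carlo sampling to estimate the probability of meeting a phosphorus-loading reduction target; none of the numbers are literature values:

```python
import numpy as np

# Hypothetical stochastic (time-invariant) representation of BMP effectiveness:
# instead of a fixed removal fraction, sample effectiveness from a distribution
# and estimate the probability of meeting a phosphorus-loading reduction target.
rng = np.random.default_rng(7)
n = 50_000

baseline_load = 100.0   # t P/yr from the watershed (hypothetical)
target_load = 60.0      # management target, i.e., a 40% reduction (hypothetical)

# Effectiveness of two BMP types, Beta-distributed on [0, 1]; parameters are
# illustrative only. Negative performance (a BMP acting as a P source) would
# require a distribution that allows values below zero.
eff_buffer = rng.beta(4, 6, size=n)     # vegetative buffer strips
eff_wetland = rng.beta(5, 4, size=n)    # constructed wetlands

treated_fraction = 0.5  # share of the load routed through each BMP (hypothetical)
load = baseline_load * (1 - treated_fraction * eff_buffer) * (1 - treated_fraction * eff_wetland)

print(f"median residual load: {np.median(load):.1f} t P/yr")
print(f"probability of meeting the target: {np.mean(load <= target_load):.2f}")
```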


URL: https://www.sciencedirect.com/science/article/pii/S157495411830267X

The role of agent-based models in wildlife ecology and management

Adam J. McLane, ... Danielle J. Marceau, in Ecological Modelling, 2011

1 Introduction

Wildlife species are under tremendous pressure from both natural and anthropogenic influences, including climate change, pollution, and habitat loss and fragmentation. Identification and protection of critical habitats is central to the management of species at risk, and the need to designate habitats as critical for species persistence is universally recognized by scientists, resource managers, and the general public. However, critical habitat designations will be challenged if they affect stakeholders who bear the lost opportunity costs of economic activity (Rosenfeld and Hatfield, 2006). As a result, political decision-makers involved in conservation planning of critical habitats face difficult challenges when it comes to balancing economic development and the maintenance of a healthy environment.

Conservation planning is the process of locating, configuring, implementing and maintaining areas that are managed to promote the persistence of biodiversity (Margules and Pressey, 2000). Effective conservation planning also acknowledges the complexity imposed by dynamic updating of priorities for both biodiversity patterns and processes as decisions are made. For instance, anticipated changes to species distributions in response to environmental and/or landscape change will influence decisions about conservation design (Pressey et al., 2007). Scenario planning is one important component of conservation planning, and is necessary for assisting the development of knowledge and planning tools required by managers and decision makers. A technique for making decisions in the face of uncontrollable, irreducible uncertainty, scenario planning offers managers a method for creating more resilient conservation policies by considering multiple possible futures, both socio-economic and ecological (Peterson et al., 2003). Benefits of using scenario planning include increased understanding of key uncertainties, the incorporation of alternative perspectives into conservation planning, and greater resilience of decisions to surprise. This approach has direct implication for the process of delineating critical habitats for species at risk, since in addition to determining wildlife habitat space and usage, conservation planning of wildlife habitats also involves the analysis of future habitat-linked population demographics under various land-use development scenarios.

To better inform management in the determination of critical habitat, wildlife research has long focused on understanding wildlife use of habitats and, when combined with the availability of resources, what animals select and avoid on the landscape, and how and why they select the features that they do (Morris et al., 2008). Specifically, information on wildlife's adaptive behaviors of habitat selection, movement ecology, and responses to a dynamic environment is integral to successful conservation and scenario planning. For instance, an examination of the underlying processes and mechanisms of habitat selection by the individual will provide the ability to distinguish habitat use based on adaptive preferences, maladaptive preferences (ecological traps), or non-ideal habitat selection (i.e., the fitness consequences of habitat selection; for an example see Arlt and Pärt, 2007). This distinction is of considerable value in the ranking of habitat types for conservation planning. Next, the movement ecology of the organism, which includes the internal state, motion capacity, and navigation capacity of the individual, provides insight into how wildlife is affected by matrix heterogeneity, and can generate emergent properties that improve our understanding of the demographics of stochastic, spatially structured populations (Revilla and Wiegand, 2008). Because the dynamic nature of the environment plays such an influential role in affecting organism state, behavioral decisions, and motion, a spatially explicit representation of the animal's actual environment in habitat modeling can improve the effectiveness of conservation planning, since it can highlight the causal links between organism movement and environmental change (Nathan et al., 2008). Finally, the capacity to accommodate the dynamism of the environment, the spatial patterns of inter- and intra-species mechanisms, and the feedbacks and adaptations inherent in these systems allows one to explore how animals will respond to and be affected by future and novel changes in their landscape, which is an essential criterion for scenario planning.

Management of wildlife therefore requires the stewardship and/or conservation of cognizant and adaptive individuals that interact with one another and their environment, the combination of which comprises very diverse and dynamic populations. It is this diversity and dynamism that make populations robust and capable of handling perturbations in environmental conditions, and therefore this information should not be overlooked. What is needed is a thorough understanding of the individual behaviors and motivations of wildlife involved in habitat selection and use, and the ability to project these fitness-maximizing decision and movement rules in a spatio-temporal context to assess how animals will respond to future changes in their environment. A range of habitat models is available, capable of addressing one or more of these issues independently or in concert; for instance, resource-selection models (e.g., Johnson et al., 2004), dynamic optimization models (e.g., Chubaty et al., 2009), and population-level land-use change models (e.g., Copeland et al., 2009), to name but a few. Our intent is not to conduct a systematic comparison of each approach, as they often complement, rather than supersede, one another. Rather, we review here a further methodology that can accommodate spatio-ecological information and that links detailed knowledge of animal behavior and movement with explicit and dynamic environmental variables: agent-based modeling.

Agent-based models (ABMs) are computational simulation tools capable of incorporating intelligence by combining elements of learning, adaptation, evolution, and fuzzy logic. Specifically, ABMs rely on a bottom-up approach that begins by explicitly considering the components of a system (i.e., individual agents) and tries to understand how the system's properties emerge from the interactions among these components (Grimm, 1999; Grimm et al., 2005). A community of agents acts independently of any controlling intelligence: agents are goal-driven and try to fulfill specific objectives; they are aware of and can respond to changes in their environment; they can move within that environment; and they can be designed to learn and adapt their state and behavior in response to stimuli from other agents and their environment. This emphasis on interactions between agents and their environment is what distinguishes agent-based modeling (also referred to as individual-based modeling) from other systemic modeling approaches (Marceau, 2008).
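A minimal sketch of this bottom-up structure, with agents that perceive local habitat quality, move toward better cells, and update an internal energy state; it is a generic illustration, not a model of any particular species or published ABM:

```python
import random

# Minimal agent-based sketch: agents perceive habitat quality in neighbouring grid
# cells, move to the best one, and gain energy from the cell they occupy.
# Purely illustrative; the grid, quality values, and energy rule are invented.

SIZE = 20
random.seed(1)
habitat = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]  # quality in [0, 1]

class Animal:
    def __init__(self, x, y):
        self.x, self.y, self.energy = x, y, 1.0

    def step(self):
        # Perceive the Moore neighbourhood (with wraparound) and move to the best cell.
        neighbours = [((self.x + dx) % SIZE, (self.y + dy) % SIZE)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        self.x, self.y = max(neighbours, key=lambda c: habitat[c[0]][c[1]])
        self.energy += habitat[self.x][self.y] - 0.5  # gain minus a maintenance cost

agents = [Animal(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(50)]
for _ in range(100):
    for a in agents:
        a.step()

print(f"mean final energy: {sum(a.energy for a in agents) / len(agents):.2f}")
```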

Over the past fifteen years, ABMs have been applied to address a broad range of issues related to environmental resource management, such as water, forest, and agro-ecosystem management (see review by Bousquet and Le Page, 2004). ABMs have also been extensively used in ecology to study species relationships and population dynamics, and to understand how animals perceive, learn, and adapt to their environment (DeAngelis and Mooij, 2005). Recently, ABMs have begun to be used across disciplines to address human-wildlife interactions and their management (An et al., 2005; Anwar et al., 2007). The recent proliferation of ABMs in ecological applications, and specifically in the realm of animal movement and behavior (Wang and Grimm, 2007; Stillman, 2008), suggests they could play a key role in understanding habitat selection and use for conservation planning. Further, the ability of ABMs to incorporate dynamic representations of the environment through cellular automata (CA) also suggests a critical function for these models in future-scenario development and the implementation of management strategies.

Because wildlife ecology and management need to represent individual animals as adaptive, responsive entities, the use of an ABM as a management tool is advantageous: dynamic interplay between agents is readily accommodated, realistic environmental conditions can be approximated, and hypothetical scenarios can be simulated. An ABM specifically developed for use in the determination of critical habitat is one that explicitly incorporates individual fitness-seeking behaviors of animal movement in a spatially realistic representation of the environment that is then subjected to alternative scenarios of land-use development.

This paper explores the role of ABMs in wildlife habitat selection, with the purpose of unifying different fields of study (i.e., behavioral ecology, animal-movement ecology, geographical information science, and computational intelligence) into a cohesive realm for the benefit of wildlife conservation planning, and it emphasizes the need for a multidisciplinary approach. It is aimed specifically at those in the disciplines of behavioral ecology, animal-movement ecology, geography, and geocomputation, as well as at on-site managers and decision makers responsible for the management and conservation of wildlife and wildlife habitat. Wildlife management is becoming increasingly multidisciplinary and marked by greater stakeholder participation as it moves away from near-exclusive reliance on biological science and decision-making by so-called experts (Riley et al., 2002). ABMs are an excellent tool for wildlife management, since they allow for the integration of expertise from multiple disciplines, as well as the interests of stakeholders outside the core sciences. The paper begins by describing the fundamental elements required to develop the specific wildlife-management ABM set forth in this paper, from the representation of space to the animal agent attributes. Next, each key element is specifically addressed, with a thorough review of how ecologists have implemented it in their models. A summary of the trends is then provided, with an evaluation of the models' fit with the objectives of the wildlife-management ABM, and where future directions lie. The paper concludes with a description of the ecological data requirements needed to implement these ABMs, how the models can then be robustly evaluated, and the tools available to ecologists and managers to create ABMs.


URL: https://www.sciencedirect.com/science/article/pii/S0304380011000524