File
  • ecog.05360.pdf (1.89 MB)
Title
  • Explainable artificial intelligence enhances the ecological interpretability of black-box species distribution models
Author(s)
  1. Ryo, Masahiro
  2. Angelov, Boyan
  3. Mammola, Stefano
  4. Kass, Jamie
  5. Benito, Blas M.
  6. Hartig, Florian
Year of publication 2020
LeibnizOpen
Publication type
  1. Article
Published online
  • 2020-11-17
Published in
Source citation
  • 44:199–205
FRL collection
Copyright year
  • 2020
License
Publisher version
  • https://doi.org/10.1111/ecog.05360
Supplementary material
  • https://doi.org/10.5281/zenodo.4048271
Publication status
Review status
Language of publication
Abstract/Summary
  • Species distribution models (SDMs) are widely used in ecology, biogeography and conservation biology to estimate relationships between environmental variables and species occurrence data and make predictions of how their distributions vary in space and time. During the past two decades, the field has increasingly made use of machine learning approaches for constructing and validating SDMs. Model accuracy has steadily increased as a result, but the interpretability of the fitted models, for example the relative importance of predictor variables or their causal effects on focal species, has not always kept pace. Here we draw attention to an emerging subdiscipline of artificial intelligence, explainable AI (xAI), as a toolbox for better interpreting SDMs. xAI aims at deciphering the behavior of complex statistical or machine learning models (e.g. neural networks, random forests, boosted regression trees), and can produce more transparent and understandable SDM predictions. We describe the rationale behind xAI and provide a list of tools that can be used to help ecological modelers better understand complex model behavior at different scales. As an example, we perform a reproducible SDM analysis in R on the African elephant and showcase some xAI tools such as local interpretable model-agnostic explanation (LIME) to help interpret local-scale behavior of the model. We conclude with what we see as the benefits and caveats of these techniques and advocate for their use to improve the interpretability of machine learning SDMs.
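The LIME technique mentioned in the abstract can be illustrated in a few lines: perturb the environmental inputs around one site, weight the perturbed samples by proximity to that site, and fit a weighted linear surrogate to the black-box predictions so its coefficients explain the model locally. The sketch below is illustrative only and does not reproduce the paper's R analysis: the toy suitability function `sdm_predict`, the kernel width, and all variable names are hypothetical stand-ins for a fitted SDM.

```python
import numpy as np

# Hypothetical black-box "SDM": habitat suitability from two
# environmental variables (stand-in for a fitted ML model).
def sdm_predict(X):
    temp, precip = X[:, 0], X[:, 1]
    return 1.0 / (1.0 + np.exp(-(1.5 * temp - temp**2 + 0.8 * precip)))

rng = np.random.default_rng(42)
x0 = np.array([0.5, 0.2])  # the single site to explain

# LIME idea: perturb around x0, query the black box, weight
# samples by proximity, fit a local weighted linear surrogate.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = sdm_predict(Z)
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.3**2))  # proximity kernel

A = np.hstack([np.ones((len(Z), 1)), Z])         # intercept + features
W = np.diag(w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted least squares

# Local explanation: sign/magnitude of each surrogate coefficient.
print(dict(zip(["intercept", "temperature", "precipitation"], coef.round(3))))
```

The surrogate's coefficients are a local explanation: at this site, both toy variables raise predicted suitability, even though the global response to temperature is hump-shaped.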
Subject indexing
local explainable artificial intelligence
local xAI
local interpretable machine learning
local habitat suitability modeling
local species distribution model
local ecological modeling
Subject classification (DDC)
List of contributors
  1. https://orcid.org/0000-0002-5271-3446
  2. https://orcid.org/0000-0001-5068-4234
  3. https://orcid.org/0000-0002-4471-9055
  4. https://orcid.org/0000-0002-9432-895X
  5. https://orcid.org/0000-0001-5105-7232
  6. https://frl.publisso.de/adhoc/uri/SGFydGlnLCBGbG9yaWFu
Label
Funders
  1. Projekt DEAL
  2. JSPS Overseas Research Fellowships
  3. European Research Council
  4. Horizon 2020
  5. Okinawa Institute of Science and Technology Graduate University
Grant number
  1. -
  2. -
  3. 647038
  4. 882221
  5. -
Funding program
  1. Open Access fund
  2. Overseas Research Fellowship; Postdoctoral Fellowships for Foreign Researchers program
  3. BIODESERT project
  4. -
  5. -
Files
Object type article
Described by
@id frl:6430220.rdf
Created on 2021-11-12T15:15:36.861+0100
Created by 317
Describes frl:6430220
Edited by 317
Last edited 2021-11-15T14:13:15.016+0100
Object last modified Fri Nov 12 15:16:47 CET 2021
Cf. frl:6430220
OAI id
  1. oai:frl.publisso.de:frl:6430220
Metadata visibility public
Data visibility public
Subject of
