Modified: June 2, 2025

Interactive tool for model evaluation

Our goal is to develop an interactive model evaluation tool (MET) to evaluate Species Distribution Models (SDMs) and to synthesize expert and quantitative evaluation to inform model improvement and application.

What are the problems we’re trying to solve?

  • Unfair, ambiguous, and scattered critiques of models

  • Frustration and miscommunication among model users, evaluators, and modelers

  • Miscommunication with downstream recipients of model information and opinions

  • Missed opportunities for model improvements

  • Loss of model information that could usefully inform actions and decisions, because of how users interact with model outputs, which stems from:

    • Modelers’ intentions about the utility and applicability of their models left unstated, and mismatched with evaluators’ purpose(s)

    • An unstructured path of model interaction and evaluation

    • Evaluation purposes not well aligned with the model’s utility

    • Model appropriateness treated as a yes/no question instead of a gradient of appropriateness

We propose integrating multiple modules within an interactive tool, as follows:

Interactive model evaluation tool

This module integrates all modules described below and lets multiple users navigate, query, and react to model materials.

Model development module

It will allow modelers to upload model inputs, outputs and metadata to the MET.

We are using the ODMAP Shiny app developed by Zurell et al. (2020) as a template for uploading model metadata.
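To make the upload workflow concrete, below is a minimal sketch of a Shiny upload module for this step. The module name, input fields, and accepted formats are illustrative assumptions, not the ODMAP app’s actual code or schema.

```r
# Minimal sketch of a Shiny upload module for the Model development module.
# Assumptions: a GeoTIFF prediction layer and a CSV metadata table; these
# field names and formats are illustrative, not the ODMAP schema.
library(shiny)
library(terra)

mod_upload_ui <- function(id) {
  ns <- NS(id)
  tagList(
    fileInput(ns("prediction"), "Model prediction (GeoTIFF)", accept = ".tif"),
    fileInput(ns("metadata"), "Model metadata (CSV)", accept = ".csv")
  )
}

mod_upload_server <- function(id) {
  moduleServer(id, function(input, output, session) {
    prediction <- reactive({
      req(input$prediction)
      terra::rast(input$prediction$datapath)   # read the uploaded raster
    })
    metadata <- reactive({
      req(input$metadata)
      read.csv(input$metadata$datapath)        # read the uploaded metadata table
    })
    # Return reactives so the evaluation and synthesis modules can reuse them
    list(prediction = prediction, metadata = metadata)
  })
}
```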

We will evaluate SDMs for multiple species from:

However, we will also be testing:

Model Evaluation module

It will enable evaluators to provide feedback on model materials. We are using the ODMAP Shiny app developed by Zurell et al. (2020) as a template to create a model evaluation tool.
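Below is a hedged sketch of how evaluator feedback could be captured and stored in a Shiny module; the rating scale, comment field, and CSV store are placeholder choices, not the final design.

```r
# Sketch of a feedback-capture module for the Model Evaluation module.
# The rating scale, comment field, and CSV store are placeholder choices.
library(shiny)

mod_feedback_ui <- function(id) {
  ns <- NS(id)
  tagList(
    selectInput(ns("item"), "Model material", c("Inputs", "Outputs", "Metadata")),
    sliderInput(ns("rank"), "Appropriateness (1 = low, 5 = high)", min = 1, max = 5, value = 3),
    textAreaInput(ns("comment"), "Comments / recommendations"),
    actionButton(ns("submit"), "Submit feedback")
  )
}

mod_feedback_server <- function(id, evaluator) {
  moduleServer(id, function(input, output, session) {
    observeEvent(input$submit, {
      entry <- data.frame(
        evaluator = evaluator,
        item      = input$item,
        rank      = input$rank,
        comment   = input$comment,
        time      = as.character(Sys.time())
      )
      # Append to a simple CSV store; a database could replace this later
      store <- "feedback.csv"
      write.table(entry, store, sep = ",", row.names = FALSE,
                  append = file.exists(store), col.names = !file.exists(store))
    })
  })
}
```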

Reporting module

It will allow evaluators to summarize their evaluation and generate a report.
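One plausible implementation is rendering a parameterized R Markdown template; the template name and parameters below are hypothetical.

```r
# Sketch: render an evaluation summary from a parameterized R Markdown
# template. "report_template.Rmd" and its parameters are hypothetical and
# would need to be declared in the template's YAML header.
library(rmarkdown)

generate_report <- function(feedback, species, out_file = "evaluation_report.html") {
  rmarkdown::render(
    input       = "report_template.Rmd",
    output_file = out_file,
    params      = list(feedback = feedback, species = species)
  )
}

# Example call, assuming feedback was read from the evaluation module's store:
# generate_report(read.csv("feedback.csv"), species = "Species X")
```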

Synthesis module

It will combine feedback from multiple evaluators with relevant model materials to inform model use and improvement.
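A minimal sketch of how feedback from multiple evaluators could be aggregated, assuming the CSV layout used in the feedback sketch above.

```r
# Sketch: summarize feedback from multiple evaluators per model material.
# Assumes the feedback.csv layout used in the evaluation sketch above.
library(dplyr)

feedback <- read.csv("feedback.csv")

synthesis <- feedback %>%
  group_by(item) %>%
  summarise(
    n_evaluators = n_distinct(evaluator),
    mean_rank    = mean(rank),
    comments     = paste(comment, collapse = " | "),
    .groups      = "drop"
  )

print(synthesis)
```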

  • Interactive tool: it will allow multiple users to:

    • Interact with multiple dataset formats (maps, text, tables, etc.) in a Shiny app.

    • Display multiple panels of information (e.g., comparing predictions for two species).

    • Write comments, suggestions, recommendations (text).

    • Rank model inputs, outputs and metadata

    • Save and submit feedback (option to save it temporarily and resume later).

    • Record strategic information about users’ and evaluators’ purposes (option to save it temporarily and resume later).

    • Revisit evaluation

    • Track the path taken by evaluators to provide feedback and extract information.

    • Provide feedback on other evaluators’ feedback

    • Generate reports.

  • Hosted: the MET will be stored in a GitHub repository defined by ECCC, and a functional version of the app will be made available by the contractor on a website defined by ECCC, with some demonstration projects (the Ontario Breeding Bird Atlas models (Atlas-3) and the upcoming v5 models from the Boreal Avian Modelling project (BAM)).

  • Portability: users will be able to install and run the tool locally on different platforms or environments with minimal changes.

  • Upload data sets: it will allow users to populate the MET by uploading models’ inputs, outputs, and metadata for multiple species.

  • Compile, store, and synthesize: the tool should allow users to compile, store, and summarize feedback and actions proposed by multiple users.

  • Track and report changes and progress: the MET will provide the functionality to track, consolidate, and visualize changes and progress in model evaluation (e.g., species X has been evaluated 3 times by Y, main actions include A, B, and C, improvements are “spatial predictions” and model performance metrics, etc.).

  • Modular: the MET will be developed using independent modules, each handling a specific functionality (for instance, a module to upload metadata, a module to capture evaluators’ feedback, etc.); see the app-wiring sketch after this list. Breaking down the complexity of the tool into modules will help with maintenance, functionality updates, troubleshooting, etc. The contractor will provide the functionality to effectively link these modules.

  • Expandable: we want to expand the MET by adding new modules that will only be conceptualized in this contract (e.g., model application & risk assessment).

  • Flexible: it will also frequently be necessary to add items or sections to specific modules of the MET (e.g., a new task that requires feedback from evaluators).

  • Generalizable: our vision is that the MET can be used with multiple types of models and in multiple projects.

  • Spatially explicit: users will have the opportunity to visualize, interact with, and provide feedback on multiple raster and vector layers (predictions, uncertainty, covariates, etc.); see the map sketch after this list.

  • Open source: we want to develop the MET using only open-source resources, software, and tools.

  • Comparative: we want to provide the capability to view multiple data outputs (e.g., predictions of multiple species), either side by side or through toggling between layers, to gain a deeper understanding of model outputs.

  • Multiple formats: ability to upload, read and interact with multiple formats (text, tables, rasters, vectors).
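To illustrate the modular design requirement above, here is a minimal sketch of a top-level Shiny app wiring independent modules together; the module and tab names mirror the earlier sketches and are assumptions, not the final architecture.

```r
# Sketch of a top-level app wiring independent modules together.
# Module and tab names mirror the earlier sketches and are illustrative only.
library(shiny)

ui <- fluidPage(
  titlePanel("Model Evaluation Tool (MET)"),
  tabsetPanel(
    tabPanel("Model development", mod_upload_ui("upload")),
    tabPanel("Model evaluation",  mod_feedback_ui("feedback"))
  )
)

server <- function(input, output, session) {
  model_data <- mod_upload_server("upload")      # reactives shared across modules
  mod_feedback_server("feedback", evaluator = "demo_user")
}

shinyApp(ui, server)
```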
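For the “Spatially explicit” and “Comparative” requirements, a hedged sketch of toggling between prediction rasters for two species in a leaflet map; the file paths and group names are placeholders.

```r
# Sketch: toggle between prediction layers for two species in a leaflet map.
# File paths and group names are placeholders; recent leaflet versions accept
# terra SpatRaster objects, older ones need raster::raster() instead.
library(terra)
library(leaflet)

pred_a <- terra::rast("predictions_species_A.tif")
pred_b <- terra::rast("predictions_species_B.tif")

leaflet() |>
  addTiles() |>
  addRasterImage(pred_a, group = "Species A", opacity = 0.7) |>
  addRasterImage(pred_b, group = "Species B", opacity = 0.7) |>
  addLayersControl(
    overlayGroups = c("Species A", "Species B"),
    options = layersControlOptions(collapsed = FALSE)
  )
```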