Computational Model Library Peer Review

Authors who submit their computational models to the CoMSES Net Computational Model Library can request peer review of their models. If a model passes review, it is granted a peer reviewed badge and a DOI.

Models must remain private during peer review so that you can continue to adjust your computational model files and metadata in response to reviewer concerns. Publishing a codebase release locks the files associated with that release (but not its metadata), so you would need to draft a new release to address any reviewer concerns about the files included in that release.

Review Criteria

The CoMSES Net Computational Model Peer Review process uses a straightforward checklist to verify that a computational model’s source code and documentation meet baseline standards derived from “good enough” practices in the software engineering and scientific communities we serve. The goal of this process is to encourage the publication and sharing of higher quality models that align with the FAIR Principles for Research Software (FAIR4RS) and promote “frictionless reuse”, enabling others to more easily understand, reuse, or extend a model.

Reviewers should evaluate computational models based on the following criteria:

  1. Ease of Execution. Can the model be run with a reasonable amount of effort? This may involve compiling the code into an executable, resolving input data dependencies, or installing software library dependencies, all of which should be clearly documented by the author(s).
  2. Thorough Documentation. Is the model accompanied by detailed narrative documentation? This should be provided as a standalone document, as comments or other in-code descriptions (e.g., in NetLogo’s info tab) are not sufficient. Narrative documentation should adhere to the ODD protocol or an equivalent documentation framework and present a cogent, high-level overview of how the model works as well as essential internal details and assumptions. The documentation should ideally be comprehensive enough for another computational modeler to replicate the model and its results without needing to refer to the source code. Visual aids like flowcharts, equations, and diagrams are highly encouraged.
  3. Code Quality. Is the source code clean, well-structured, and easy to understand? Code should have semantically meaningful variable names and relevant comments that clearly explain methods, functions, and parameters. Technical debt hinders comprehension and reuse; examples include unused or duplicated code, excessive use of global variables, and overly complex, difficult-to-follow logic. Clean, well-documented, and well-structured code is easier for others to review, reuse, or extend; the sketch after this list illustrates the kind of naming and commenting reviewers look for.
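To make these criteria concrete, below is a minimal, hypothetical sketch in Python (models in the library are written in many languages). Every name and value in it is invented for illustration; it is not taken from any CoMSES model or review template. It shows descriptive names instead of abbreviations, labeled constants instead of magic numbers, docstrings on each function, stated dependencies and run instructions, and a fixed random seed for reproducibility.

    """Toy resource-harvesting model (hypothetical example, not from CoMSES).

    Dependencies: Python standard library only.
    Usage: python harvest_model.py
    """

    import random

    GROWTH_RATE = 0.1        # fraction of resource regrown per time step
    HARVEST_FRACTION = 0.5   # fraction of the local resource an agent consumes

    def harvest(agent_energy, patch_resource):
        """Consume a fixed fraction of the patch's resource and return the
        updated (agent_energy, patch_resource) pair."""
        harvested = patch_resource * HARVEST_FRACTION
        return agent_energy + harvested, patch_resource - harvested

    def regrow(patch_resource, carrying_capacity=1.0):
        """Logistic regrowth of the patch toward its carrying capacity."""
        growth = GROWTH_RATE * patch_resource * (1 - patch_resource / carrying_capacity)
        return patch_resource + growth

    if __name__ == "__main__":
        random.seed(42)  # fixed seed so the run is reproducible
        energy, resource = 0.0, random.random()
        for _ in range(10):
            energy, resource = harvest(energy, resource)
            resource = regrow(resource)
        print(f"final energy: {energy:.3f}, remaining resource: {resource:.3f}")

Contrast this with the same logic written as a single function with opaque names (e.g., def upd(a, g)) and unlabeled constants scattered through the body: the behavior would be identical, but a reviewer or future reuser would have to reverse-engineer the intent, which is exactly the technical debt the checklist flags.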

For examples of computational models that have successfully passed peer review, please visit the Computational Model Library.

Reviewers are not required to assess the theoretical soundness, scientific merit, or validity of model outputs. However, they may privately raise any concerns about these or other aspects of the model with the review editors.
