Our mission is to help computational modelers at all levels engage in the establishment and adoption of community standards and good practices for developing and sharing computational models. Model authors can freely publish their model source code in the Computational Model Library alongside narrative documentation and open science metadata, following emerging open science norms that facilitate software citation, reproducibility, interoperability, and reuse. Model authors can also request peer review of their computational models to receive a DOI.
All users of models published in the library must cite model authors when they use and benefit from their code.
Please check out our model publishing tutorial and contact us if you have any questions or concerns about publishing your model(s) in the Computational Model Library.
We also maintain a curated database of over 7,500 publications of agent-based and individual-based models, with detailed metadata on code availability and bibliometric information on the landscape of ABM/IBM publications, which we welcome you to explore.
Displaying 10 of 1171 results
This is an agent-based simulation of a city in the context of overtourism; the study area is the city of Santa Marta, Colombia. Its purpose is to illustrate the spatial and temporal distribution of residents and tourists in the city. The simulation analyzes patterns that emerge from the interaction between the critical components of the tourist urban system: residents, urban space, tourist sites, and tourists. The agent-based model (ABM) is built with the GAMA platform and uses public input data from statistical agencies, geographic information systems, tourism websites, reports, and academic articles. The ABM also assesses some of the measures used to address overtourism, an aspect that has received little analysis for destinations experiencing overtourism but that the ABM approach makes tractable. The results indicate that the city is at high risk of overtourism, with spatial and temporal differences in the population distribution, and they illustrate the effects of two management measures at different scales. Another notable result is the proposed tourism intensity indicator (OVsm), which addresses the tendency of the tourism intensity indicators used in the overtourism literature to overestimate tourism pressures.
A minimal genetic algorithm was previously developed to solve an elementary arithmetic problem. It has been modified to explore the effect of a mutator gene and the consequent entry into a hypermutation state. The phenomenon seems relevant to some types of tumorigenesis and, more generally, to cells and tissues subjected to chronic sublethal environmental or genomic stress.
Some scholars have long supposed that organisms speed up their own evolution by varying the mutation rate, but evolutionary biologists are not convinced that evolution can select for a mechanism that promotes more (often harmful) mutations in anticipation of an environmental challenge.
The model aims to shed light on these controversial points of view, and it also provides the features required to examine the role of sex and genetic recombination in the diffusion of mutator genes.
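For readers who want a concrete picture of the mechanism, the sketch below shows a generic minimal genetic algorithm with a heritable mutator allele that raises the per-locus mutation rate (hypermutation). It is an illustrative stand-in, not the published model: the arithmetic target, mutation rates, selection scheme, and hypermutation factor are all assumed values.

```python
# Minimal, illustrative sketch of a genetic algorithm with a "mutator gene":
# individuals evolve integer genomes toward an arithmetic target, and a
# mutator allele multiplies the per-locus mutation rate (hypermutation).
# All parameter names and values are illustrative assumptions, not the
# published model's.
import random

TARGET = 100          # elementary arithmetic goal: genome should sum to TARGET
GENOME_LEN = 10
BASE_MU = 0.01        # baseline per-locus mutation probability
HYPER_FACTOR = 20     # multiplier applied when the mutator allele is active
POP_SIZE = 200
GENERATIONS = 300

def new_individual():
    return {"genes": [random.randint(0, 20) for _ in range(GENOME_LEN)],
            "mutator": random.random() < 0.1}   # 10% start as mutators

def fitness(ind):
    return -abs(sum(ind["genes"]) - TARGET)     # closer to TARGET is better

def mutate(ind):
    mu = BASE_MU * (HYPER_FACTOR if ind["mutator"] else 1)
    for i in range(GENOME_LEN):
        if random.random() < mu:
            ind["genes"][i] = random.randint(0, 20)
    if random.random() < BASE_MU:               # the mutator locus itself can flip
        ind["mutator"] = not ind["mutator"]

def reproduce(parent):
    child = {"genes": parent["genes"][:], "mutator": parent["mutator"]}
    mutate(child)
    return child

pop = [new_individual() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 2]             # simple truncation selection
    pop = survivors + [reproduce(random.choice(survivors))
                       for _ in range(POP_SIZE - len(survivors))]

mutator_freq = sum(ind["mutator"] for ind in pop) / POP_SIZE
print(f"best fitness: {fitness(pop[0])}, mutator frequency: {mutator_freq:.2f}")
```

Tracking the mutator frequency over generations is the quantity of interest here, mirroring the question of whether hypermutation can be favoured by selection.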
This model is pertinent to our JASSS publication “Raising the Spectrum of Polarization: Generating Issue Alignment with a Weighted Balance Opinion Dynamics Model”. It shows how, based on the mechanisms of our Weighted Balance Theory (a development of Fritz Heider’s Cognitive Balance Theory), agents can self-organize in a multi-dimensional opinion space and form an emergent ideological spectrum. The degree of issue alignment and polarization realized by the model depends mainly on the agent-specific ‘equanimity parameter’ epsilon.
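As a rough illustration of this kind of multi-dimensional opinion dynamics, the sketch below updates agents' opinion vectors with a simplified interpersonal-attitude rule that includes an equanimity offset epsilon. The update rule, parameter values, and alignment measure are simplifying assumptions and do not reproduce the exact Weighted Balance update from the paper.

```python
# Loose, illustrative sketch of a weighted-balance style opinion update in a
# multi-dimensional opinion space. This simplified rule is a stand-in, not the
# published update rule; epsilon plays the role of the agent-specific
# "equanimity parameter" that biases interpersonal attitudes toward positivity.
import numpy as np

rng = np.random.default_rng(0)
N, D = 100, 3                       # agents, opinion dimensions (assumed values)
ALPHA = 0.1                         # step size (assumption)
opinions = rng.uniform(-1, 1, size=(N, D))
epsilon = np.full(N, 0.2)           # equanimity parameter per agent (assumption)

def attitude(i, j):
    """Interpersonal attitude: average opinion agreement, shifted by equanimity."""
    agreement = np.mean(opinions[i] * opinions[j])      # in [-1, 1]
    return np.clip(agreement + epsilon[i], -1.0, 1.0)

def step():
    i, j = rng.choice(N, size=2, replace=False)
    a = attitude(i, j)
    # A positive attitude pulls i toward j's opinions; a negative one pushes away.
    opinions[i] = np.clip(opinions[i] + ALPHA * a * (opinions[j] - opinions[i]), -1, 1)

for _ in range(50_000):
    step()

# A crude issue-alignment measure: mean absolute correlation between dimensions.
corr = np.corrcoef(opinions.T)
alignment = np.mean(np.abs(corr[np.triu_indices(D, k=1)]))
print(f"issue alignment (mean |corr|): {alignment:.2f}")
```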
This is an agent-based framework for simulating the diffusion of a piece of misinformation according to the SBFC model, in which fake news and its debunking compete in a social network. By introducing new classes of agents, the model moves closer to reality and proposes different strategies for mitigating and controlling misinformation.
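A rough sketch of an SBFC-style competition between a hoax and its debunking on a network is given below. The compartments (Susceptible, Believer, Fact-checker), the transition probabilities, and the network generator are illustrative assumptions rather than the published model's parameterisation.

```python
# Rough sketch of an SBFC-style process: Susceptible (S) agents can become
# Believers (B) of a hoax or Fact-checkers (F) through contact with neighbours,
# and believers may later be debunked. All probabilities and the network are
# illustrative assumptions.
import random
import networkx as nx

BETA = 0.2      # spreading probability per exposure (assumed)
ALPHA = 0.3     # relative credibility of the hoax vs. the debunking (assumed)
P_VERIFY = 0.1  # probability a believer fact-checks on their own (assumed)

G = nx.barabasi_albert_graph(1000, 3, seed=1)
state = {n: "S" for n in G}
seeds = random.sample(list(G), 10)
for n in seeds[:5]:
    state[n] = "B"     # initial hoax spreaders
for n in seeds[5:]:
    state[n] = "F"     # initial debunkers

def step():
    new_state = dict(state)
    for n in G:
        if state[n] == "S":
            nb = [state[m] for m in G.neighbors(n)]
            n_b, n_f = nb.count("B"), nb.count("F")
            if random.random() < BETA * (n_b + n_f) / max(len(nb), 1):
                # Exposed: adopt hoax or debunking in proportion to exposure
                # weighted by the hoax's credibility.
                p_hoax = ALPHA * n_b / max(ALPHA * n_b + (1 - ALPHA) * n_f, 1e-9)
                new_state[n] = "B" if random.random() < p_hoax else "F"
        elif state[n] == "B" and random.random() < P_VERIFY:
            new_state[n] = "F"  # believer verifies and switches to fact-checker
    state.update(new_state)

for _ in range(50):
    step()
print({s: list(state.values()).count(s) for s in "SBF"})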
This is an extended replication of Abelson's and Bernstein's early computer simulation model of community referendum controversies, which was originally published in 1963 and is often cited but seldom analysed in detail. The replication is written in NetLogo 6.3.0 and accompanied by an ODD+D protocol and class and sequence diagrams.
This replication replaces the original scales for attitude position and interest in the referendum issue, which were distributed between 0 and 1, with values initialised according to a normal distribution with mean 0 and variance 1. This makes the simulation results more easily comparable with scales derived from empirical survey data, such as the European Value Study, which are often obtained via factor analysis or principal component analysis from answers to sets of questions.
Another difference is that this model is run not only for Abelson's and Bernstein's ten-week referendum campaign but also for an arbitrary length of time, so that one can find out whether the distributions of attitude position and interest in the (still one-dimensional) issue stabilise in the long run.
Prior to COVID-19, female academics accounted for 45% of assistant professors, 37% of associate professors, and 21% of full professors in business schools (Morgan et al., 2021). The pandemic arguably widened this gender gap, but little systemic data exists to quantify it. Our study set out to answer two questions: (1) How much will the COVID-19 pandemic have impacted the gender gap in U.S. business school tenured and tenure-track faculty? and (2) How much will institutional policies designed to help faculty members during the pandemic have affected this gender gap? We used agent-based modeling coupled with archival data to develop a simulation of the tenure process in business schools in the U.S. and tested how institutional interventions would affect this gender gap. Our simulations demonstrated that the gender gap in U.S. business schools was on track to close but would need further interventions to reach equality (50% females). In the long-term picture, COVID-19 had a small impact on the gender gap, as did dependent care assistance and tenure extensions (unless only women received tenure extensions). Changing performance evaluation methods to better value teaching and service activities and increasing the proportion of female new hires would help close the gender gap faster.
IMine is a flexible framework that can adopt multiple convergence criteria to solve influence mining problems. It can use any diffusion model, as well as resilience, to compute the influence of a set of nodes, depending on the use case.
The code is written and tested in R v3.5.
A simple model is constructed using C# in order to capture key features of market dynamics while also producing reasonable results for the individual insurers. A replication of Taylor's model is also constructed in order to compare results with the new premium-setting mechanism. To enable the comparison of the two premium mechanisms, the rest of the model set-up is maintained as in the Taylor model. As in the Taylor example, homogeneous customers are represented as a total market exposure which is allocated amongst the insurers.
In each time period, the model undergoes the following steps (a minimal sketch of this loop is given after the list):
1. Insurers set competitive premiums per exposure unit
2. Losses are generated based on each insurer’s share of the market exposure
3. Accounting results are calculated for each insurer
…
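The fragment below is a minimal Python sketch of this per-period loop (the published model itself is written in C#). Premium setting, exposure allocation, loss generation, and accounting are reduced to toy formulas, and all names and parameter values are assumptions.

```python
# Toy sketch of the per-period loop: (1) insurers set premiums per exposure
# unit, (2) losses are generated in proportion to each insurer's share of the
# market exposure, (3) a simple accounting result updates capital.
# Parameters and the exposure-allocation rule are illustrative assumptions.
import random

TOTAL_EXPOSURE = 1_000.0
EXPECTED_LOSS_RATE = 0.05        # expected loss cost per exposure unit (assumed)

class Insurer:
    def __init__(self, name, loading):
        self.name = name
        self.loading = loading    # profit/expense loading on top of expected losses
        self.capital = 500.0

    def set_premium(self):
        # Step 1: competitive premium per exposure unit
        return EXPECTED_LOSS_RATE * (1 + self.loading)

insurers = [Insurer(f"I{i}", loading=random.uniform(0.05, 0.30)) for i in range(5)]

for period in range(10):
    premiums = {ins: ins.set_premium() for ins in insurers}
    # Cheaper insurers attract a larger share of the total exposure (assumed rule).
    inv = {ins: 1 / p for ins, p in premiums.items()}
    shares = {ins: v / sum(inv.values()) for ins, v in inv.items()}

    for ins in insurers:
        exposure = TOTAL_EXPOSURE * shares[ins]
        premium_income = exposure * premiums[ins]
        # Step 2: losses generated in proportion to the insurer's exposure share
        losses = sum(random.expovariate(1 / EXPECTED_LOSS_RATE)
                     for _ in range(int(exposure)))
        # Step 3: simple accounting result
        ins.capital += premium_income - losses

for ins in insurers:
    print(f"{ins.name}: loading={ins.loading:.2f}, capital={ins.capital:.1f}")
```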
This is a simulation of an insurance market where the premium moves according to the balance between supply and demand. In this model, insurers set their supply with the aim of maximising their expected utility gain while operating under imperfect information about both customer demand and underlying risk distributions.
There are seven types of insurer strategy. One type follows a rational strategy within the bounds of imperfect information. The other six types also seek to maximise their utility gain, but base their market expectations on a chartist strategy under which the market premium is extrapolated from trends in past insurance prices. These are subdivided according to whether the insurer is trend-following or contrarian (counter-trend), and further according to whether the trend is estimated from short-term, medium-term, or long-term data.
Customers are modelled as a whole and allocated between insurers according to available supply. Customer demand is calculated according to a logit choice model based on the expected utility gain of purchasing insurance for an average customer versus the expected utility gain of non-purchase.
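The following sketch illustrates the kind of binary logit demand rule described above: the share of the market that purchases insurance depends on the expected utility gain of insuring at a given premium versus staying uninsured. The utility function (CARA), the risk parameters, and the logit scale are illustrative assumptions, not the model's calibration.

```python
# Hedged sketch of a logit demand rule: aggregate demand depends on the
# expected utility gain of insuring at the market premium versus not insuring,
# for an "average" customer. All parameter values are assumptions.
import math

RISK_AVERSION = 1.0      # CARA risk-aversion coefficient (assumed)
LOSS_PROB = 0.1          # probability the average customer suffers a loss (assumed)
LOSS_SIZE = 4.0          # size of the loss (assumed)
LOGIT_SCALE = 50.0       # sensitivity of demand to the utility difference (assumed)
MARKET_SIZE = 1_000.0    # total potential exposure

def cara_utility(w):
    return -math.exp(-RISK_AVERSION * w)

def expected_utility(insured, premium, wealth=5.0):
    if insured:
        return cara_utility(wealth - premium)              # loss fully covered
    return (LOSS_PROB * cara_utility(wealth - LOSS_SIZE)
            + (1 - LOSS_PROB) * cara_utility(wealth))

def demand(premium):
    """Share of the market that buys, via a binary logit on the utility gain."""
    gain = expected_utility(True, premium) - expected_utility(False, premium)
    share = 1 / (1 + math.exp(-LOGIT_SCALE * gain))
    return MARKET_SIZE * share

for p in (0.4, 0.8, 1.5, 3.0):
    print(f"premium {p:.1f} -> demand {demand(p):.0f}")
```

Under these toy parameters, demand is high near the actuarially fair premium (0.4) and falls away as premiums rise, which is the qualitative behaviour the logit allocation is meant to capture.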
This is an agent-based model of a simple insurance market with two types of agents: customers and insurers. Insurers set premium quotes for each customer according to an estimate of the customer's underlying risk based on past claims data. Customers either renew existing contracts or select the cheapest quote from a subset of insurers. Insurers then estimate their resulting capital requirement based on a 99.5% VaR of their aggregate loss distributions. These estimates exhibit an underestimation bias due to the winner's curse effect.
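As a hedged illustration of the capital-requirement step, the sketch below estimates a 99.5% VaR of an aggregate loss distribution by Monte Carlo. The frequency and severity assumptions are placeholders, and a comment indicates where the winner's curse bias described above would enter.

```python
# Illustrative sketch of a 99.5% VaR capital estimate: simulate the aggregate
# annual loss distribution for the risks an insurer has won and take the 99.5th
# percentile. Frequency/severity assumptions are placeholders, not the
# published model's calibration.
import numpy as np

rng = np.random.default_rng(42)

def var_capital(n_policies, claim_freq, mean_severity, n_sims=20_000, level=0.995):
    """Monte Carlo estimate of the 99.5% VaR of aggregate losses."""
    counts = rng.poisson(n_policies * claim_freq, size=n_sims)        # claim counts
    losses = np.array([rng.lognormal(mean=np.log(mean_severity) - 0.5, sigma=1.0,
                                     size=c).sum() for c in counts])  # aggregate loss per sim
    return np.quantile(losses, level)

# If claim_freq is estimated from past data on the policies the insurer won
# (underpriced risks are won more often), the plug-in estimate tends to sit
# below the true value -- the winner's curse bias mentioned above.
print(f"estimated 99.5% VaR capital: {var_capital(1_000, 0.05, 10.0):,.0f}")
```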