Our mission is to help computational modelers at all levels engage in the establishment and adoption of community standards and good practices for developing and sharing computational models. Model authors can freely publish their model source code in the Computational Model Library alongside narrative documentation and open science metadata, following emerging open science norms that facilitate software citation, reproducibility, interoperability, and reuse. Model authors can also request peer review of their computational models to receive a DOI.
All users of models published in the library must cite model authors when they use and benefit from their code.
Please check out our model publishing tutorial and contact us if you have any questions or concerns about publishing your model(s) in the Computational Model Library.
We also maintain a curated database of over 7,500 publications of agent-based and individual-based models, with detailed metadata on code availability and bibliometric information on the landscape of ABM/IBM publications, which we welcome you to explore.
Displaying 10 of 610 results for "agent based".
An agent-based framework that simulates the diffusion of a piece of misinformation according to the SBFC model, in which fake news and its debunking compete in a social network. By introducing new classes of agents, the model moves closer to reality and proposes different strategies for mitigating and controlling misinformation.
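The abstract does not include code, but the competing-diffusion idea can be illustrated with the minimal sketch below. This is an assumption-laden illustration, not the published implementation: the agent states, transition probabilities, and network are all hypothetical placeholders.

```python
import random
import networkx as nx

# Hypothetical sketch: fake news and its debunking compete on a random network.
# States, probabilities, and network size are illustrative, not the SBFC values.
def step(graph, state, p_believe=0.3, p_debunk=0.2):
    new_state = dict(state)
    for node in graph.nodes:
        if state[node] != "susceptible":
            continue
        neighbour_states = [state[n] for n in graph.neighbors(node)]
        if "believer" in neighbour_states and random.random() < p_believe:
            new_state[node] = "believer"          # adopts the fake news
        elif "fact_checker" in neighbour_states and random.random() < p_debunk:
            new_state[node] = "fact_checker"      # adopts the debunking
    return new_state

g = nx.erdos_renyi_graph(200, 0.05)
state = {n: "susceptible" for n in g.nodes}
state[0], state[1] = "believer", "fact_checker"   # seed both messages
for _ in range(20):
    state = step(g, state)
print(sum(v == "believer" for v in state.values()), "believers after 20 steps")
```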
This is an extended replication of Abelson and Bernstein's early computer simulation model of community referendum controversies, which was originally published in 1963 and is often cited but seldom analysed in detail. The replication is written in NetLogo 6.3.0 and accompanied by an ODD+D protocol and class and sequence diagrams.
This replication replaces the original scales for attitude position and interest in the referendum issue, which were distributed between 0 and 1, with values initialised according to a normal distribution with mean 0 and variance 1. This makes simulation results more easily comparable with scales derived from empirical survey data, such as the European Values Study, which are often constructed via factor analysis or principal component analysis from the answers to sets of questions.
Another difference is that this model is run not only for Abelson and Bernstein's ten-week referendum campaign but for an arbitrary length of time, so that one can examine whether the distributions of attitude position and interest in the (still one-dimensional) issue stabilise in the long run.
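As a minimal illustration of the changed initialisation (hypothetical Python, not the NetLogo source), attitude position and interest are drawn from a standard normal distribution rather than a uniform [0, 1] scale:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_agents = 500

# Original model: attitude and interest uniformly distributed on [0, 1].
attitude_uniform = rng.uniform(0.0, 1.0, n_agents)

# Replication: standard normal (mean 0, variance 1), comparable to
# factor-analytic scales from surveys such as the European Values Study.
attitude = rng.normal(loc=0.0, scale=1.0, size=n_agents)
interest = rng.normal(loc=0.0, scale=1.0, size=n_agents)
```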
Prior to COVID-19, female academics accounted for 45% of assistant professors, 37% of associate professors, and 21% of full professors in business schools (Morgan et al., 2021). The pandemic arguably widened this gender gap, but little systematic data exists to quantify it. Our study set out to answer two questions: (1) How much will the COVID-19 pandemic have impacted the gender gap in U.S. business school tenured and tenure-track faculty? and (2) How much will institutional policies designed to help faculty members during the pandemic have affected this gender gap? We used agent-based modeling coupled with archival data to develop a simulation of the tenure process in U.S. business schools and tested how institutional interventions would affect this gender gap. Our simulations demonstrated that the gender gap in U.S. business schools was on track to close but would need further interventions to reach equality (50% female). In the long-term picture, COVID-19 had a small impact on the gender gap, as did dependent care assistance and tenure extensions (unless only women received tenure extensions). Changing performance evaluation methods to better value teaching and service activities and increasing the proportion of female new hires would help close the gender gap faster.
A simple model is constructed in C# to capture key features of market dynamics while also producing reasonable results for the individual insurers. A replication of Taylor's model is also constructed in order to compare results with the new premium-setting mechanism. To enable the comparison of the two premium mechanisms, the rest of the model set-up is kept as in the Taylor model. As in the Taylor example, homogeneous customers are represented as a total market exposure which is allocated amongst the insurers.
In each time period, the model undergoes the following steps:
1. Insurers set competitive premiums per exposure unit
2. Losses are generated based on each insurer’s share of the market exposure
3. Accounting results are calculated for each insurer
…
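A minimal, hypothetical sketch of this per-period loop is given below. The original model is written in C#; Python is used here purely for illustration, only the three listed steps are sketched, and the pricing, allocation, and loss-generation rules are placeholder assumptions rather than the paper's mechanisms.

```python
import random

def run_period(insurers, total_exposure, expected_loss_rate=0.05):
    # 1. Each insurer sets a competitive premium per exposure unit
    #    (placeholder cost-plus-loading rule, not the paper's mechanism).
    for ins in insurers:
        ins["premium"] = expected_loss_rate * (1.0 + ins["loading"])

    # Allocate the homogeneous total market exposure among insurers;
    # here cheaper insurers simply attract proportionally more exposure.
    inverse_premiums = [1.0 / ins["premium"] for ins in insurers]
    for ins, w in zip(insurers, inverse_premiums):
        ins["exposure"] = total_exposure * w / sum(inverse_premiums)

    # 2. Losses are generated based on each insurer's share of the exposure.
    for ins in insurers:
        ins["losses"] = sum(random.expovariate(1.0 / expected_loss_rate)
                            for _ in range(int(ins["exposure"])))

    # 3. Accounting results: premium income minus generated losses.
    for ins in insurers:
        ins["result"] = ins["premium"] * ins["exposure"] - ins["losses"]
    return insurers

insurers = [{"loading": 0.1}, {"loading": 0.2}, {"loading": 0.3}]
print(run_period(insurers, total_exposure=1000.0))
```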
This is a simulation of an insurance market where the premium moves according to the balance between supply and demand. In this model, insurers set their supply with the aim of maximising their expected utility gain while operating under imperfect information about both customer demand and underlying risk distributions.
There are seven types of insurer strategies. One type follows a rational strategy within the bounds of imperfect information. The other six types also seek to maximise their utility gain, but base their market expectations on a chartist strategy, under which the market premium is extrapolated from trends in past insurance prices. These chartist strategies are subdivided according to whether the insurer is trend-following or contrarian (counter-trend), and further by whether the trend is estimated from short-term, medium-term, or long-term data.
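A hedged sketch of how a chartist insurer might extrapolate the market premium from past prices follows; the linear-trend rule, look-back windows, and contrarian sign flip are illustrative assumptions, not the model's exact specification.

```python
def chartist_forecast(past_premiums, window, contrarian=False):
    """Extrapolate the next-period market premium from a linear trend
    over the last `window` observations (hypothetical rule)."""
    recent = past_premiums[-window:]
    trend = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
    if contrarian:
        trend = -trend  # counter-trend expectation
    return recent[-1] + trend

history = [1.00, 1.02, 1.05, 1.04, 1.08, 1.12]
print(chartist_forecast(history, window=3))                   # short-term trend follower
print(chartist_forecast(history, window=6, contrarian=True))  # long-term contrarian
```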
Customers are modelled as a whole and allocated between insurers according to available supply. Customer demand is calculated according to a logit choice model based on the expected utility gain of purchasing insurance for an average customer versus the expected utility gain of non-purchase.
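The binary logit allocation might look roughly like the sketch below; the utility inputs and the sensitivity parameter are placeholders, not the model's calibrated values.

```python
import math

def insured_share(eu_purchase, eu_no_purchase, sensitivity=1.0):
    """Share of the customer population that buys insurance, from a binary
    logit choice between purchase and non-purchase (hypothetical form)."""
    a = math.exp(sensitivity * eu_purchase)
    b = math.exp(sensitivity * eu_no_purchase)
    return a / (a + b)

# Example: expected utility gain of 0.2 from purchase vs. 0.0 from non-purchase.
share = insured_share(0.2, 0.0)
print(f"{share:.2%} of total market exposure is insured")
```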
This is an agent-based model of a simple insurance market with two types of agents: customers and insurers. Insurers set premium quotes for each customer according to an estimation of their underlying risk based on past claims data. Customers either renew existing contracts or else select the cheapest quote from a subset of insurers. Insurers then estimate their resulting capital requirement based on a 99.5% VaR of their aggregate loss distributions. These estimates demonstrate an under-estimation bias due to the winner’s curse effect.
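As a rough sketch of the capital-requirement step (not the authors' code), the 99.5% VaR of an insurer's aggregate loss distribution can be estimated by Monte Carlo simulation; the Poisson frequency / lognormal severity model and all parameter values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimated_capital_requirement(n_policies, est_freq, sev_mu, sev_sigma,
                                  n_sims=20_000, quantile=0.995):
    """Monte Carlo estimate of the 99.5% VaR of aggregate annual losses
    under an assumed Poisson/lognormal loss model (illustrative only)."""
    claim_counts = rng.poisson(n_policies * est_freq, size=n_sims)
    aggregate = np.array([rng.lognormal(sev_mu, sev_sigma, size=c).sum()
                          for c in claim_counts])
    return np.quantile(aggregate, quantile)

print(estimated_capital_requirement(1_000, est_freq=0.1, sev_mu=7.0, sev_sigma=1.0))
```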
The development and popularisation of new energy vehicles has become a global consensus, yet the shortage and unreasonable layout of electric vehicle charging infrastructure (EVCI) have severely restricted the development of electric vehicles. In the literature, many methods can be used to optimise the layout of charging stations (CSs) and produce good layout designs, but more realistic evaluation and validation are needed to assess these layout options. This study proposes an agent-based simulation (ABS) model to evaluate EVCI layout designs by simulating the driving and charging behaviours of electric taxis (ETs). In a case study of Shenzhen, China, GPS trajectory data were used to extract the temporal and spatial patterns of ETs, which were then used to calibrate and validate the behaviour of ETs in the simulation. The ABS model was developed in a GIS context of an urban road network with hourly travel speeds across the 24 hours of the day to account for the effects of traffic conditions. After the high-resolution simulation, evaluation results on the performance of the EVCI and the behaviours of ETs can be provided both in detail and in summary. Sensitivity analysis demonstrates the accuracy of the simulation implementation and aids in understanding the effect of model parameters on system performance. A multiobjective layout optimisation technique based on the Pareto frontier was used with two goals: maximising the time satisfaction of ET users and reducing the workload variance of the EVCI. Simulation evaluation shows that new CS location plans based on Pareto analysis can significantly improve both metrics.
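A minimal sketch of the Pareto-dominance test underlying such a multiobjective comparison is shown below; the objective values are hypothetical. A candidate layout is kept on the frontier if no other layout is at least as good on both objectives and strictly better on at least one.

```python
def pareto_front(layouts):
    """layouts: list of (time_satisfaction, workload_variance) pairs,
    where satisfaction is maximised and variance is minimised."""
    front = []
    for i, (sat_i, var_i) in enumerate(layouts):
        dominated = any(
            sat_j >= sat_i and var_j <= var_i and (sat_j > sat_i or var_j < var_i)
            for j, (sat_j, var_j) in enumerate(layouts) if j != i
        )
        if not dominated:
            front.append((sat_i, var_i))
    return front

candidates = [(0.82, 4.1), (0.85, 5.0), (0.80, 3.5), (0.85, 4.0)]
print(pareto_front(candidates))  # non-dominated charging-station layouts
```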
This ABM introduces a new individual decision-making model, BNE, into an agent-based model of pedestrian evacuation in order to properly model individual behaviours and motions in emergency situations. Three types of behavioural models have been developed, namely the Shortest Route (SR) model, the Random Follow (RF) model, and the BNE model, to better reproduce evacuation dynamics in a tunnel space. A series of simulation experiments was conducted to evaluate the simulation performance of the proposed ABM.
The Communication-Based Model of Perceived Descriptive Norm Dynamics in Digital Networks (COMM-PDND) is an agent-based model created to examine the dynamics of perceived descriptive norms in the context of digital network structures. The model, developed as part of a master’s thesis titled “The Dynamics of Perceived Descriptive Norms in Digital Network Publics: An Agent-Based Simulation,” emphasizes the critical role of communicative interactions in norm formation, focusing on how communication processes shape perceived descriptive norms.
The COMM-PDND is tuned to explore the effects of normative deviance in digital social networks. It provides functionalities for manipulating agents according to their network position, and has a versatile set of customizable parameters, making it adaptable to a wide range of research contexts.
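As an illustration only (the operationalisation below is an assumption, not the COMM-PDND specification), a perceived descriptive norm can be computed as the share of an agent's observed network contacts who display the behaviour in question:

```python
import networkx as nx

def perceived_descriptive_norm(graph, behaviours, agent):
    """Fraction of the agent's network neighbours whose communicated
    behaviour the agent observes as norm-conforming (hypothetical rule)."""
    neighbours = list(graph.neighbors(agent))
    if not neighbours:
        return 0.0
    return sum(behaviours[n] for n in neighbours) / len(neighbours)

g = nx.barabasi_albert_graph(100, 2)                 # stylised digital network
behaviours = {n: int(n % 3 == 0) for n in g.nodes}   # 1 = displays the behaviour
print(perceived_descriptive_norm(g, behaviours, agent=0))
```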
The myside bias is a well-documented cognitive bias in the evaluation of arguments, in which reasoners in a discussion tend to overvalue arguments that confirm their prior beliefs while undervaluing arguments that attack those beliefs. This agent-based model, written in NetLogo, simulates a group discussion among myside-biased agents within a Bayesian setting. It is designed to investigate the effects of the myside bias on the ability of groups to reach a consensus or collectively track the correct answer to a given binary issue.
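A hedged sketch of the core idea follows; the discounting scheme is an assumption for illustration, not the model's exact updating rule. A Bayesian agent updates on an argument's likelihood ratio but shrinks the evidential weight of arguments that contradict its current belief.

```python
def myside_update(prior, likelihood_ratio, bias=0.5):
    """Update P(hypothesis) on an argument with the given likelihood ratio
    (LR > 1 supports the hypothesis). A myside-biased agent discounts the
    LR of disconfirming arguments by `bias` (hypothetical formulation)."""
    believes = prior > 0.5
    confirms = likelihood_ratio > 1.0
    lr = likelihood_ratio
    if believes != confirms:
        lr = lr ** (1.0 - bias)  # shrink disconfirming evidence toward LR = 1
    odds = prior / (1.0 - prior) * lr
    return odds / (1.0 + odds)

print(myside_update(0.7, likelihood_ratio=0.5))          # biased agent barely moves
print(myside_update(0.7, likelihood_ratio=0.5, bias=0))  # unbiased Bayesian update
```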