Simulations are increasingly replacing physical tests to accelerate analysis and reduce research and development costs. Simulation results may, however, deviate from reality. It is therefore essential to perform a statistical calibration and model validation process to make optimal use of the few available experimental data and to ground the simulation models in reality. By combining algorithms from the field of data science with advanced simulation know-how, it can be ensured that the simulations mirror the real problems as closely as possible. This can yield substantial savings in both the number of design cycles and overall costs.

Statistical calibration is the process of tuning the model calibration parameters such that the simulation results best match a series of experimental measurements. Without proper calibration, simulation results are meaningless or may even deliver misinformation about the real system. An accurate model calibration is therefore generally viewed as a crucial step in building a simulation model with predictive capability.

The calibration parameters usually cannot be measured directly in experimental studies. They often represent physical quantities, such as material properties, that are difficult to measure. Sometimes, however, the calibration parameters are merely parameters of the numerical solution scheme and thus not present in the actual physical system. Simulation tools are nowadays applied to highly detailed physical problems, potentially taking several physical fields into account.

Statistical calibration allows the uncertainty in all aspects of the model to be quantified, including the uncertainty in the adjusted calibration parameters. It also allows the discrepancy between the model and the observed data to be determined for the optimized calibration parameters. Such model discrepancies are essential for model validation and for detecting inadequacies of the model.
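As a minimal illustration of the calibration and discrepancy steps described above, the following sketch tunes a single calibration parameter of a toy simulation model by least squares against a handful of measurements, then inspects the remaining model discrepancy. The model form, the parameter name `theta`, and all data values are purely hypothetical assumptions chosen for illustration, not taken from any real study:

```python
# Hypothetical simulation model: the response depends on an observable
# input x and an unmeasurable calibration parameter theta.
def simulate(x, theta):
    return theta * x + 0.5 * x ** 2

# Synthetic "experimental" measurements (illustrative values only,
# generated near theta = 1.3 with small measurement noise).
inputs = [0.0, 0.5, 1.0, 1.5, 2.0]
measurements = [0.00, 0.78, 1.80, 3.08, 4.60]

# Calibration objective: sum of squared residuals between the
# simulation output and the experimental measurements.
def sum_sq_residuals(theta):
    return sum((simulate(x, theta) - y) ** 2
               for x, y in zip(inputs, measurements))

# Deterministic calibration by a simple grid search over candidate
# parameter values (a gradient-based or Bayesian method would be used
# in practice; the grid keeps the sketch self-contained).
candidates = [i / 100 for i in range(0, 301)]
theta_hat = min(candidates, key=sum_sq_residuals)

# Model discrepancy at the optimized calibration parameter: the
# residuals that remain even after the best possible tuning.
discrepancies = [simulate(x, theta_hat) - y
                 for x, y in zip(inputs, measurements)]

print(f"calibrated theta = {theta_hat:.2f}")
print("discrepancies:", [round(d, 3) for d in discrepancies])
```

The residuals left over at `theta_hat` are the raw material for the model validation step: if they are large or systematically structured, the model is inadequate regardless of how well the parameter was tuned.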