The value of a software system is affected by many drivers as the system grows more complex; many of these drivers are not given their due importance, software maintenance in particular. If a problem is solved with an overly complex solution, a company may end up paying too much for developers with the necessary expertise, or for training staff to understand the implemented IT system.
Prominent examples are common financial models in quantitative finance, such as the LIBOR market model or credit default swap calculators. These can evolve into complex software solutions comprising notoriously unstable data I/O from various sources, a non-trivial mathematical simulation engine, various calibration procedures including manual parameter changes, and more.
Such models are often implemented as large-scale Monte Carlo simulations. To improve their performance, several enhancements can be made to the Monte Carlo framework (e.g., variance reduction techniques or drift corrections), yet these make the code base larger and more error-prone. In the end, such non-trivial enhancements increase complexity and may drive up the costs of implementing, operating and maintaining the models.
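To illustrate the trade-off, the following sketch compares a plain Monte Carlo estimator with one enhanced by antithetic variates, a standard variance reduction technique. The model and all parameters (a European call under Black-Scholes dynamics) are illustrative assumptions, not taken from any particular production system; the point is that even this mild enhancement adds code and a subtle invariant (paired draws) that must be maintained.

```python
import numpy as np

# Illustrative parameters: European call under risk-neutral
# geometric Brownian motion (Black-Scholes dynamics).
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 100_000

rng = np.random.default_rng(42)
z = rng.standard_normal(n)

def discounted_payoff(z):
    # Terminal stock price for each normal draw, then the
    # discounted call payoff max(S_T - K, 0).
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Plain Monte Carlo estimator.
plain = discounted_payoff(z)

# Antithetic variates: reuse every draw z together with its mirror -z
# and average the pair, which cancels part of the sampling noise.
antithetic = 0.5 * (discounted_payoff(z) + discounted_payoff(-z))

print(f"plain      : {plain.mean():.4f} "
      f"(std err {plain.std(ddof=1) / np.sqrt(n):.4f})")
print(f"antithetic : {antithetic.mean():.4f} "
      f"(std err {antithetic.std(ddof=1) / np.sqrt(n):.4f})")
```

Both estimators converge to the same price (about 10.45 for these parameters), but the antithetic version attains a visibly smaller standard error at the cost of extra code paths to test and maintain.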
An alternative is to avoid complexity and look for simple IT architectures (e.g., parallelization, Web distribution, computation by screen saver) that exploit cheap CPU power, much like the SETI@home project. This trades complexity against raw computational power and the utilization of otherwise idle resources, or, in short, LOC vs. FLOPS.
Low-cost parallelization methods already exist, but they are not readily applicable: they often provide features far more complicated than the models at hand require, since these models belong to the class of so-called "embarrassingly parallel" problems.
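A minimal sketch of why such problems need so little architecture: each worker simulates an independent chunk with its own seed, and the only coordination is averaging the partial results at the end. The integrand below (E[max(Z, 0)] for a standard normal Z, whose exact value is 1/sqrt(2*pi)) is a toy stand-in for a real model, and the thread pool is a single-machine stand-in for any map/reduce-style distribution (process pools, Web workers, screen-saver clients).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(seed, n_paths):
    # Each chunk is fully independent: its own seed, no shared state,
    # no communication until the final average -- the defining property
    # of an "embarrassingly parallel" problem.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Toy integrand: E[max(Z, 0)] = 1/sqrt(2*pi) ~= 0.3989.
    return np.maximum(z, 0.0).mean()

n_workers, n_paths = 4, 250_000
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partial_means = list(
        pool.map(simulate_chunk, range(n_workers), [n_paths] * n_workers)
    )

# The "reduce" step is one line; all the scaling happens by adding
# workers to the "map" step, not by adding code.
estimate = float(np.mean(partial_means))
print(f"parallel estimate: {estimate:.4f}  (exact: {1 / np.sqrt(2 * np.pi):.4f})")
```

For a CPU-bound production model one would swap in a process pool or cross-machine distribution, but the structure, and hence the line count, stays essentially the same.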
The goal of this project is to find minimal-complexity IT architectures that address the modeling issues in this rich category of applications: easy setup and maintenance; cost efficiency (model results per dollar); security; performance; reliability; transparency.