*UnRisk* 5 & *Mathematica* 8—Blazingly Fast and Insightful Risk Analysis

with Michael Aichinger and Sascha Kratky

uni software plus GmbH

Sezmi Corporation

This talk presents the design and implementation of an item-item recommender (IIR) based on linear algebra operations. The IIR has strong performance and scalability properties. Design and algorithmic approaches are discussed for recommendation proofs, tuning, and diversification. Recommendations of movies, music, and houses will be demonstrated using a common user interface.

The algorithms discussed are from the fields of sparse matrix linear algebra, collaborative filtering, natural language processing, principal component analysis, association rule learning, and outlier detection.
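The linear-algebra core of such a recommender can be sketched in a few lines. The following Python/NumPy illustration is a minimal, hedged stand-in (cosine similarity between item columns of a made-up rating matrix; it is not the presenters' implementation):

```python
import numpy as np

# Toy user-item rating matrix: rows = users, columns = items.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Item-item cosine similarity from a single matrix product.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)  # ignore self-similarity

def recommend(user_ratings, k=2):
    """Score items by a similarity-weighted sum of the user's ratings."""
    scores = S @ user_ratings
    scores[user_ratings > 0] = -np.inf  # exclude already-rated items
    return np.argsort(scores)[::-1][:k]

print(recommend(np.array([5.0, 0, 0, 0])))
```

At scale the same computation runs on sparse matrices, which is what gives this design its performance and scalability properties.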

Consultant

Normal Windows applications open directly from the desktop and make Windows recognize their data files as belonging to them. For example, .doc files belong to a word processor application and .nb files belong to *Mathematica*. Files of the appropriate type also have their own icons inside Explorer. This feature of Windows is tremendously powerful, both because it offers easy access to data and because data can be usefully embedded in other software—such as XMIND or in HTML. Applications that behave like other Windows software are also much more acceptable to users unfamiliar with *Mathematica*. This talk will describe how to create an application that responds in the same way, without writing part of that application in another language, and will be based on my Super Widget Package, which is available free from my website. It will be illustrated using Hans-Gerlach Woudboer's Rapid Business Modeling software.

University of Houston Law Center

Insurers cluster. They generally group insurance applicants with similar perceived risk levels together and offer each group different contracts. This practice, particularly where fine-grained, reduces adverse selection that otherwise prevents desirable trade in risk. The practice also tends, however, to replicate inequalities in original endowments of risk that may be no fault of the insured. Coarse classification, by contrast, may reduce inequalities resulting from factors that are no fault of the insured, but may also result in incomplete risk transfer due to adverse selection.

Hitherto, most scholarship involving regulation of these insurance underwriting practices has considered clustering based on only one dimension—perceived risk—and has correlatively involved offers that vary in only one feature—price. This simplification is due in substantial part to the difficulties in modeling more complex clustering and contracting.

This talk looks at how the symbolic and numeric capabilities of *Mathematica* can collaborate to permit more realistic models of insurance underwriting and thus more realistic appraisals of justice in the regulation of insurance underwriting. It considers "two-dimensional underwriting" in which insurers cluster based not only on the perceived level of risk the insured might pose without undertaking risk avoidance, but also on the effective price the insured faces to reduce risk. It correlatively considers contracts that vary in two ways: the price charged and the level of care demanded. Particular emphasis is placed on the statistical capabilities advanced in Version 8. The ambition of the project is to generate models that produce results in "real time".

Business Laboratory

Google and Wolfram Research are two exceptional companies producing amazing technology. What makes them even better is the prospect of combining the two product platforms in ways that deliver very powerful analyses of difficult problems across a wide range of fields. Here we will discuss the journey of *Mathematica*-to-Google integration with two examples: (1) using Google Docs as a source of cloud-based data; and (2) generating objects in Google Earth using *Mathematica* as the computational authoring engine. We will then summarize our discussion by speculating on future ways to bring these two powerful platforms together.

Charles University in Prague, Czech Republic

We study the performance of portfolios of assets from the point of view of an extended efficient market hypothesis. We assume that investors make their decisions exclusively on the basis of the expected returns and the covariance structure of returns, and that this information is available to all investors. Further, we suppose that investor behavior is rational in the following sense: (1) investors choose the portfolio with the highest expected return among those with the same risk; and (2) investors choose the portfolio with the smallest risk among those with the same expected return (risk aversion). Under these assumptions, the whole market should be in equilibrium.
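Under these rationality assumptions, the global minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal Python/NumPy sketch with a made-up three-asset covariance matrix (illustrative only, not the presentation's data):

```python
import numpy as np

# Illustrative covariance matrix for three assets (made-up numbers).
Sigma = np.array([
    [0.040, 0.006, 0.002],
    [0.006, 0.090, 0.010],
    [0.002, 0.010, 0.160],
])

ones = np.ones(3)
inv = np.linalg.solve(Sigma, ones)  # Σ⁻¹ 1 without forming the inverse
w = inv / (ones @ inv)              # global minimum-variance weights

print(w, w @ Sigma @ w)             # weights and portfolio variance
```

By construction the weights sum to one, and the resulting variance is lower than that of, say, an equal-weight portfolio over the same assets.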

In practice, even when the data comes from a set of highly rated companies, it exhibits severe departures from what might be considered rational market behavior. In this contribution, we analyze the impact of this inconsistency on optimal portfolio selection and its influence on portfolio rebalancing. Data from the *Mathematica* `FinancialData` Integrated Data Source was used for the numerical illustrations and provided the impetus for this presentation.

The most difficult problems in modern finance often involve the determination of complex functions of simple securities like money, stocks, and equities. These problems are often resolved by using derivatives to minimize risk for a portfolio, purchasing bonds as a form of fixed risk investment, or investing in an annuity to establish a stable future cash flow. In this workshop, we will study how one can use built-in *Mathematica* functions to accomplish these tasks.

With Version 8, *Mathematica* has introduced a new array of finance functions that behave like operators on simple financial instruments like money, stocks, and equities. These higher-order financial functions allow us to calculate exotic options on securities, bonds with complex coupon payments, and any sequence of payments like cash flows and annuities. All of these can also be determined using either a schedule of forward rates or a term structure of interest rates.
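The underlying arithmetic for valuing a payment sequence off a term structure can be sketched concisely. The following Python function is a hedged illustration (continuously compounded zero rates and made-up numbers; it is not the new *Mathematica* functions themselves):

```python
import math

def present_value(cashflows, zero_rates):
    """Discount (time, amount) pairs with continuously compounded zero rates.

    zero_rates maps a time t (in years) to its zero rate r(t); each cash
    flow is discounted by exp(-r(t) * t).
    """
    return sum(amount * math.exp(-zero_rates[t] * t)
               for t, amount in cashflows)

# A 3-year annual-coupon bond: 5% coupon on face value 100.
flows = [(1, 5.0), (2, 5.0), (3, 105.0)]
curve = {1: 0.02, 2: 0.025, 3: 0.03}
print(round(present_value(flows, curve), 2))  # about 105.62
```

The same discounting machinery prices any cash-flow sequence, such as annuities, once the curve (forward rates or zero rates) is fixed.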

The addition of GPU computing capabilities to *Mathematica* allows for a greater fusion of high performance and symbolic computing than was previously possible. In this workshop we will show how to access the latest built-in CUDA-accelerated finance functions. In addition, *Mathematica*'s powerful symbolic manipulation tools allow one to easily create or modify kernels on the fly based on potentially complex parameters. We will demonstrate a framework for very general GPU-accelerated stochastic calculus in *Mathematica* using symbolic generation of CUDA kernels. Further, we will show examples of real-world problems from the field of finance, which can see considerable performance improvement in this framework.
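The shape of the kernels such a framework generates can be sketched in plain NumPy, where one vectorized array expression per time step stands in for a CUDA kernel with one thread per path. This is an illustrative Euler-Maruyama scheme for geometric Brownian motion, not the actual generated code:

```python
import numpy as np

def euler_maruyama_gbm(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Vectorized Euler-Maruyama paths for dS = mu*S dt + sigma*S dW.

    Each array operation acts on all paths at once; on a GPU the same
    expression would map one thread per path.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        s += mu * s * dt + sigma * s * dw
    return s

paths = euler_maruyama_gbm(100.0, 0.05, 0.2, 1.0, 252, 50_000)
print(paths.mean())  # close to 100 * exp(0.05), the exact GBM mean
```

Symbolic generation enters where the drift and diffusion terms above are replaced by expressions derived, and simplified, symbolically before compilation.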

PFK Technologies

In this presentation we will show how we used *Mathematica* to test a new optimization algorithm for target functions with equality and inequality constraints. The algorithm yields exact solutions to quadratic optimization problems.
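For the equality-constrained case, an exact solution of a quadratic program can be obtained by solving the KKT linear system. The Python sketch below illustrates that standard idea only; it is not the presenters' algorithm, which also handles inequality constraints:

```python
import numpy as np

def solve_eq_qp(Q, c, A, b):
    """Exact minimizer of 1/2 x^T Q x + c^T x subject to A x = b.

    Stationarity (Q x + A^T lam = -c) and feasibility (A x = b) form
    one linear system in (x, lam), solved exactly up to round-off.
    """
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # drop the Lagrange multipliers

# min x1^2 + x2^2 subject to x1 + x2 = 1  ->  x = (1/2, 1/2)
x = solve_eq_qp(np.eye(2) * 2, np.zeros(2), np.array([[1.0, 1.0]]), np.array([1.0]))
print(x)
```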

This talk examines a number of data import themes and breaks down how *Mathematica* handles large datasets in real-world applications. Topics include import performance across various data formats and types, as well as improvements coming in future versions of *Mathematica*, in an attempt to demystify operations involving very large data files on both personal and high-performance machines.

Department of Economics and Quantitative Methods, Faculty of Economics, University of West Bohemia in Pilsen, Czech Republic

This paper presents a comparative macroeconomic analysis based on GDP, inflation, and stock exchange index time series of the Visegrad countries over the period 1997 Q1 to 2010 Q4. A parametric representation of the data in 3D state space enables us to treat macroeconomic development as a curve. Based on a differential geometry approach, Frenet frames are constructed at selected sets of discrete points along the curve. Investigation of the frame translations provides both incremental and accumulated curve lengths, whereas the frame rotations generate traces on a unit sphere. All numerical results are presented with `ParametricPlot3D` and are discussed in detail.
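The arc-length and tangent parts of this construction are easy to sketch numerically. Here is a hedged Python illustration on a made-up four-point trajectory (the unit tangents are the Frenet T vectors, whose tips trace a path on the unit sphere):

```python
import numpy as np

def discrete_curve_length(points):
    """Accumulated arc length of a polyline in 3D state space."""
    seg = np.diff(points, axis=0)
    return np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])

def unit_tangents(points):
    """Unit tangent per segment: the discrete Frenet T vectors."""
    seg = np.diff(points, axis=0)
    return seg / np.linalg.norm(seg, axis=1, keepdims=True)

# Toy 'macroeconomic trajectory': (GDP, inflation, index) per quarter.
pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 2.0]])
print(discrete_curve_length(pts))   # incremental/accumulated lengths
print(unit_tangents(pts))           # points on the unit sphere
```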

GluonVision GmbH

A consulting project implementing greenhouse calculations.

Retired

This presentation illustrates Bayes' rule as the tool for inductive reasoning in the context of single-parameter binary trials such as a head/tail coin toss, a pass/fail regulatory inspection, or a guilty/not guilty jury decision. The binomial distribution quantifies the probability of *m* passes in *n* trials, given *p* equals the passing probability in each trial. Bayes' rule infers the probability distribution function (PDF) for *p* from the data (*m* passes in *n* trials) and easily generates stopping criteria by signaling during sequential trials when the desired precision for the inferred value of *p* is achieved. When precision after *n* trials is insufficient, the Bayesian may perform *k* additional trials, while the frequentist must start over with *k + n* additional trials for a total of *k + 2n* trials. This is because the Bayesian prior uses the available data while inferring the implications of new data. As used here, a likelihood function (LKF) differs from its underlying PDF by a constant factor, *kk*: LKF = *kk*·PDF and PDF = LKF/*kk*. Bayes' rule, in terms of likelihood functions, says the posterior LKF is the product of a prior LKF and a data LKF.

LKFpost[*p* | *m*, *n*] = LKFprior[*p*] × LKFdata[*m*, *n* | *p*].

The Bayesian credible interval (the range of *p* given *m* passes in *n* trials) is contrasted with the frequentist confidence interval (the range of *p* in a stated proportion of repeated experiments, each with *n* trials). The maximum likelihood estimate for *p* is shown to be a special case of the more general Bayesian inference for *p*.
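With a uniform prior, the posterior LKF for *p* is proportional to *p*^*m* (1 − *p*)^(*n* − *m*), and both the credible interval and the maximum likelihood estimate (which then coincides with the posterior mode) fall out directly. A minimal Python sketch on a grid, with illustrative numbers not taken from the talk:

```python
import numpy as np

def beta_posterior(m, n, alpha0=1.0, beta0=1.0, grid=10_001):
    """Posterior over p after m passes in n trials, Beta(1,1) prior by default.

    Returns the grid and the posterior as a normalized probability mass.
    """
    p = np.linspace(0.0, 1.0, grid)
    post = p**(alpha0 - 1 + m) * (1 - p)**(beta0 - 1 + n - m)
    return p, post / post.sum()

def credible_interval(p, post, mass=0.95):
    """Equal-tailed Bayesian credible interval from the posterior CDF."""
    cdf = np.cumsum(post)
    lo = p[np.searchsorted(cdf, (1 - mass) / 2)]
    hi = p[np.searchsorted(cdf, 1 - (1 - mass) / 2)]
    return lo, hi

p, post = beta_posterior(7, 10)          # 7 passes in 10 trials
print(credible_interval(p, post))        # 95% credible interval for p
print(p[post.argmax()])                  # posterior mode = MLE m/n = 0.7
```

Running additional trials simply multiplies in another data LKF, which is exactly the sequential-stopping behavior described above.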

Evolved Analytics LLC

Our presentation will use DataModeler to illustrate the ease and power of converting spreadsheet data into an interactive data story report (aka a computational document), using examples drawn from economics, wind turbine power generation, and industrial process monitoring.

Weber & Partner GbR

QuantLib is a widely recognized open source library for computational finance. With the new interface from *Mathematica* to QuantLib, the more than one thousand functions, instruments, curves, and so on from QuantLib are easily accessible to *Mathematica* users, allowing unprecedented financial modeling and computation in combination with the power of *Mathematica*. The talk demonstrates this with several examples, including curve building and calibration of CMS.

University of Economics in Katowice, Poland

The subject of this presentation belongs to computational economics, a new and intensively developing scientific discipline. Delay differential equations (DDEs), also known as mixed differential-difference equations, are used as models for phenomena in the life sciences, physics, technology, and chemistry. In economics, DDEs appeared in the early 1930s in works of, among others, R. Frisch and M. Kalecki and (later) in works of J. Tinbergen and R. M. Goodwin. The purpose of the presentation is to show how to use *Mathematica*'s `NDSolve` with `DDESteps` for solving DDEs appearing in the Kalecki business cycle model and the neoclassical Solow-Swan model with a time lag in the investment process.
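The essential numerical difficulty of a DDE is that the right-hand side references the solution at an earlier time, so the history on an initial interval must be kept. The toy Python sketch below uses a fixed-step Euler scheme with an index offset into the stored history; it is only a crude stand-in for `NDSolve`'s method-of-steps machinery, and the linear test equation is illustrative, not the Kalecki model:

```python
import math

def solve_dde_euler(a, b, tau, history, T, dt):
    """Fixed-step Euler for y'(t) = a*y(t) + b*y(t - tau).

    `history` gives y on [-tau, 0]; the delayed value is read back
    `lag` steps in the stored trajectory.
    """
    lag = round(tau / dt)
    ys = [history(-tau + i * dt) for i in range(lag + 1)]  # fill [-tau, 0]
    for _ in range(round(T / dt)):
        y_now, y_lag = ys[-1], ys[-1 - lag]
        ys.append(y_now + dt * (a * y_now + b * y_lag))
    return ys[lag:]  # values on [0, T]

# With b = 0 the delay drops out and the exact solution is e^{a t}.
ys = solve_dde_euler(a=-1.0, b=0.0, tau=1.0,
                     history=lambda t: 1.0, T=2.0, dt=0.001)
print(ys[-1])  # close to math.exp(-2)
```

Economic delay models such as Kalecki's gestation-lag cycle put a nonzero delayed term (b ≠ 0) into exactly this position, which is what produces oscillatory dynamics.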

*During the conference, not only will you hear about what's new, but you will also be privy to details about what's on the horizon in talks given by Wolfram executives, developers, and more. As such, you will be required to sign a non-disclosure agreement to attend our talks labeled "NDA".