**July 15-19: Room 330, HERBERT HOOVER MEMORIAL BUILDING**

Theme: advanced computational tools – supercomputing, applications of cloud and grid computing technologies, GPUs, PETSc, Big Data

The focus will be on advanced topics related to large-scale computing. The schedule is light in terms of talks, but I want this group to define and work on software projects of general value for solving economics problems, as well as to help all of us understand these technologies.

Questions: Just what are GPUs good for? How can Yongyang and I make good use of the GPUs at Blue Waters? Does anyone have good adaptive sparse grid code? How can we implement asynchronous parallelization? What software allows workers to create other workers that they control?

**Monday, July 15**

************************************************************************************************

**10am: Simon Scheidegger: “Using Adaptive Sparse Grids to Solve High-Dimensional Dynamic Models”**

We present a scalable and flexible method to compute global solutions of high-dimensional dynamic models. For this, we embed an adaptive sparse grid algorithm with piecewise multi-linear hierarchical basis functions in a dynamic programming framework. As the dimensionality increases, these grids grow considerably slower than standard tensor product grids. In addition, the grid is automatically refined locally and can thus handle steep gradients or even kinks, enabling us to capture local behavior of the policies of interest. To further increase the size of economic problems we can handle, our implementation is fully hybrid parallel (using both MPI and OpenMP), which lets us exploit modern high-performance computing architectures. With this setting, our time iteration algorithm scales nicely up to at least 1,200 parallel processes. We demonstrate the power of this method by applying it to an international real business cycle model with capital adjustment costs, irreversible investment, and more than 20 continuous-valued state variables.
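
The core idea of adaptive refinement with hierarchical basis functions can be seen in one dimension: add a hat function at a point only if its hierarchical surplus (the gap between the function and the current interpolant) is large. The sketch below is purely illustrative; the talk's method is multidimensional and hybrid-parallel, and none of this is the authors' code.

```python
def hat(x, center, width):
    """Piecewise-linear hat basis function supported on [center-width, center+width]."""
    return max(0.0, 1.0 - abs(x - center) / width)

def adaptive_interpolant(f, tol=1e-2, max_level=10):
    """Adaptively interpolate f on [0, 1] with hierarchical hat functions.

    Returns the kept (center, width, surplus) triples and the interpolant.
    """
    nodes = []                       # accepted basis functions

    def interp(x):
        return sum(s * hat(x, c, w) for c, w, s in nodes)

    # Boundary nodes are seeded with the function values directly.
    nodes.append((0.0, 1.0, f(0.0)))
    nodes.append((1.0, 1.0, f(1.0)))
    frontier = [(0.5, 0.5)]          # candidate (center, width) pairs
    level = 1
    while frontier and level <= max_level:
        next_frontier = []
        new_nodes = []
        for c, w in frontier:
            surplus = f(c) - interp(c)       # hierarchical surplus at c
            new_nodes.append((c, w, surplus))
            if abs(surplus) > tol:           # refine only where needed
                next_frontier += [(c - w / 2, w / 2), (c + w / 2, w / 2)]
        nodes += new_nodes
        frontier = next_frontier
        level += 1
    return nodes, interp
```

For a smooth function like x squared the surplus shrinks by a factor of four per level, so refinement stops quickly; near a kink the surpluses stay large and the grid keeps subdividing there, which is the local-adaptivity property the abstract refers to.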

************************************************************************************************

************************************************************************************************

**2pm: Baker and Bejarano: Guts of Big Data Optimal Policy**

We describe a solution approach for optimal policy problems involving high-dimensional heterogeneity in the underlying population, and policy instruments or feasible sets that exhibit nonconvexities. This is done by generating a large database of consumer outcomes at different points in the distribution of household types and tax policies. To do this, our implementation is parallel and utilizes HDF5 databases. Equidistributed sequences are used to populate the type space and policy space in order to increase the scalability and flexibility of the library.

See below for slides: “econsim_presentation_stanford.pdf”
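
One standard way to populate a high-dimensional space with equidistributed points is a low-discrepancy sequence. The sketch below uses a Halton sequence, chosen here purely for illustration; the library described in the talk may use a different construction.

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse: reflect the base-b digits of n about the point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, digit = divmod(n, base)
        inv += digit / denom
    return inv

def halton(n_points, primes=(2, 3, 5)):
    """First n_points of the Halton sequence in the unit cube of dimension len(primes)."""
    return [tuple(radical_inverse(i, p) for p in primes)
            for i in range(1, n_points + 1)]
```

Unlike a tensor-product grid, the number of points is chosen freely and independently of the dimension, which is what makes such sequences attractive for filling large type and policy spaces.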

************************************************************************************************

**Wednesday, July 17**

************************************************************************************************

**10am: Yongyang Cai: Numerical DP and Parallelization**

We present new advances in numerical dynamic programming, highlighting shape preservation, Hermite approximation, and parallelization in DP. Two new applications of DP are presented: one is dynamic portfolio optimization with transaction costs; another is dynamic and stochastic integration of climate and economy. Related papers are available at https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1542-4774.2010.tb00532.x

http://link.springer.com/article/10.1007%2Fs00186-012-0406-5

http://www.nber.org/authors_papers/yongyang_cai

The slides are below: DP&Parallel_Presentation.pdf
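
For readers new to the topic, the dynamic-programming core that the shape-preserving and Hermite refinements build on is plain value function iteration. Below is a minimal tabular sketch for a deterministic log-utility growth model; all parameter values are illustrative assumptions, not taken from the talk, and none of the talk's approximation machinery is implemented here.

```python
import math

def vfi(beta=0.95, alpha=0.3, n=50, tol=1e-6):
    """Tabular value function iteration for V(k) = max log(k^alpha - k') + beta V(k')."""
    grid = [0.1 + 0.9 * i / (n - 1) for i in range(n)]   # capital grid on [0.1, 1.0]
    v = [0.0] * n                                        # initial guess
    while True:
        v_new = []
        for k in grid:
            y = k ** alpha                               # output from capital k
            # Choose next period's capital k' from the grid, requiring c = y - k' > 0.
            best = max(math.log(y - kp) + beta * v[j]
                       for j, kp in enumerate(grid) if kp < y)
            v_new.append(best)
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            return grid, v_new
        v = v_new
```

Because the Bellman operator is a contraction with modulus beta, the loop is guaranteed to terminate, and the computed value function is increasing in capital.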

************************************************************************************************

************************************************************************************************

**2pm: Aldrich: GPU Computing in Economics**

This paper discusses issues related to GPU computing for economic problems. It highlights new methodologies and resources that are available for solving and estimating economic models, and emphasizes situations where they are useful and others where they are impractical. Two examples illustrate the different ways these GPU parallel methods can be employed to speed computation.

************************************************************************************************

**Friday, July 19**

************************************************************************************************

**10am: Bo Cowgill: PiCloud**

************************************************************************************************

************************************************************************************************

**2pm: Bejarano: PETSc and TAO**

I will discuss distributed-memory computing: its potential and challenges. I will talk about the usefulness of PETSc and TAO for efficiently solving a variety of computationally large problems in economics on large, distributed-memory supercomputer clusters.

**See below: bejarano_petsc_presentation.pdf**

************************************************************************************************

**Every day:** Judd: coming up with pesky questions

====================================================

====================================================

**July 22-26: Room 330, HERBERT HOOVER MEMORIAL BUILDING**

**Monday, July 22**

************************************************************************************************

**10-11:30am: Cai & Judd: solving for supergames with states**

(Presentation slides are below)

************************************************************************************************

**2:00-2:45pm: Rafael Valero — Smolyak method: efficient interpolation, anisotropic grid and adaptive domain**

(See below for a translation of a summary of Smolyak’s work)

************************************************************************************************

**Wednesday, July 24**

************************************************************************************************

**10-11:30am: Gregor Reich: “The Bus Engine Replacement Model with Serially Correlated Unobservables: A Deterministic Approach”**

Most dynamic discrete choice models assume iid extreme value type I distributed errors, in order to use the associated closed form solutions for the choice probabilities. If the unobservables are serially correlated, two major difficulties arise: First, the expected value function becomes a function of the previous period’s error, and thus needs to be approximated by numerical quadrature, and possibly interpolation. Second, the likelihood function evaluation involves integration over the serially correlated unobservables, which is, in general, a high-dimensional integral.

In this paper, we develop a nested fixed point algorithm to estimate the bus engine replacement model of Rust (1987) that allows for serial correlation in the error term in case of no replacement, thus violating the conditional independence assumption necessary to establish the closed form solution of the choice probabilities. We approximate the corresponding integrals (both for the expected value function and the likelihood function) using Gaussian quadrature rules. The expected value function is approximated using piecewise linear interpolation over an adaptive grid.
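
The quadrature step can be illustrated in isolation: to integrate a function of a normally distributed unobservable, Gauss-Hermite quadrature replaces the integral with a short weighted sum. The sketch below uses standard tabulated 5-point nodes and weights; the paper's actual quadrature and interpolation setup is richer than this.

```python
import math

# Standard tabulated 5-point Gauss-Hermite nodes and weights (weight exp(-x^2)).
GH_NODES = [-2.020182870456086, -0.958572464613819, 0.0,
            0.958572464613819, 2.020182870456086]
GH_WEIGHTS = [0.019953242059046, 0.393619323152241, 0.945308720482942,
              0.393619323152241, 0.019953242059046]

def normal_expectation(g, mu=0.0, sigma=1.0):
    """Approximate E[g(eps)] for eps ~ N(mu, sigma^2) via Gauss-Hermite quadrature."""
    # Change of variables eps = mu + sqrt(2)*sigma*x maps N(mu, sigma^2)
    # onto the exp(-x^2) weight that Gauss-Hermite rules integrate against.
    return sum(w * g(mu + math.sqrt(2.0) * sigma * x)
               for w, x in zip(GH_WEIGHTS, GH_NODES)) / math.sqrt(math.pi)
```

A 5-point rule is exact for polynomial integrands up to degree 9; for the smooth expected-value functions in the paper's setting, a handful of nodes per dimension is typically enough.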

************************************************************************************************

************************************************************************************************

**2-3:30pm: Schmedders-Nico: R&D tax credits – example of a dynamic oligopoly**

************************************************************************************************

**Friday, July 26**

************************************************************************************************

**10am: Ben Tengelsen: Computing supergame equilibria with Python**

************************************************************************************************

**2pm: Juan Ricardo: Using Mathematica to solve simple optimal tax problems**

************************************************************************************************

====================================================

====================================================

**July 29-Aug. 2: HERBERT HOOVER MEMORIAL BUILDING**

Themes: macroeconomics, macro public finance, and finance

**Monday, July 29: Room 130**

**9-10:15am: Judd — GSSA**

The paper and supplement are below: JMM GSSA. The slides are below: Summer_2013_GSSA.

**10:45am-noon: Hasanhodzic-Kotlikoff —**

**Generational Risk – Is It a Big Deal?: Simulating an 80-Period OLG Model with Aggregate Shocks**

**2:00-3:15pm: Baker, Bejarano, Evans and Judd — “Big Data Optimal Policy”**

See “Multidimensional Optimal Tax Notes” below.

**Tuesday, July 30: Room 330**

**9-10:15am: Lilia Maliar — EDS method: large-scale new Keynesian models with ZLB**

**10:45am-noon: Rick Evans and Jasmina Hasanhodzic**

Afternoon: (kept open to facilitate discussions and work)

**Wednesday, July 31: Room 130**

************************************************************************************************

**9-10:15am: Ole Wilms — “Asset pricing with fat tails”**

Most standard asset-pricing models assume that consumption growth is stationary (i.e., consumption has a unit root). We analyze the influence of mean reversion in the consumption process on the equity premium. We first assume that dividends are a fixed fraction of consumption. Later we drop this assumption and allow for time variation in the share of financial and labor income in aggregate consumption.

We find that (i) with trend-stationary consumption we can match the empirical equity premium with much lower degrees of risk aversion than in the standard models, and (ii) time variation in the income share processes further increases the equity premium.

************************************************************************************************

************************************************************************************************

**10:45am-noon: Konstantin Kucheryavyy – Ricardian models of international trade**

Computable general equilibrium models of international trade are predominantly based on either the Armington or the Krugman model. The crucial feature of both models is that all goods are differentiated: no two goods existing in the economy are the same. While the differentiation assumption greatly simplifies analysis and is not unrealistic, there is solid evidence that Ricardian forces are important in shaping the patterns of international trade. In a Ricardian model of trade, a good produced in different countries is still the same good; therefore, the price of this good in any location is no higher than the cheapest way to produce and ship it to that location. Solving Ricardian models in general form involves solving nonlinear complementary slackness conditions, and realistic models of trade can easily involve hundreds of thousands of them. We show how one can formulate such problems and solve them using the state-of-the-art solver PATH. Furthermore, we consider a Ricardian model of trade with the important assumption that transportation costs are additive (i.e., paid per quantity of good transported) rather than the conventional multiplicative form (i.e., paid per value of good transported). We study the implications of additive trade costs for the standard questions asked in international trade: welfare gains and patterns of trade.
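
The complementary-slackness structure can be shown on a toy example with production and shipping costs only: the price of a good in location j cannot exceed the cheapest cost of producing it anywhere and delivering it to j, and positive trade flows occur only on routes where that bound binds. Real models couple these conditions with demand and are solved with MCP software such as PATH; this sketch skips all of that.

```python
def ricardian_prices(cost):
    """cost[i][j]: unit cost of producing a good in country i and delivering it to j.

    Returns the equilibrium-consistent price in each destination and the set of
    routes with zero slack (the only routes on which trade can occur).
    """
    n = len(cost)
    # Price in j is the cheapest delivered cost: p_j = min_i c_ij.
    prices = [min(cost[i][j] for i in range(n)) for j in range(n)]
    # Complementary slackness: flow x_ij > 0 requires c_ij - p_j == 0.
    active = [(i, j) for j in range(n) for i in range(n)
              if cost[i][j] == prices[j]]
    return prices, active
```

With demand added, the prices and flows become unknowns linked by exactly these inequality-plus-slackness pairs, one per route, which is why realistic models generate the hundreds of thousands of complementarity conditions the abstract mentions.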

************************************************************************************************

************************************************************************************************

**2:00-3:15pm: Walt Pohl**

This talk introduces a new non-parametric procedure, the empirical projection method, for solving structural models. The method is a version of the Galerkin procedure, but has the crucial advantage that it requires no distributional assumptions for the random variables in the model. The procedure can be viewed as a GMM-type estimator, but is complementary to the usual applications of GMM: GMM provides estimates of structural parameters, while the empirical projection method takes structural parameter estimates and uses them to produce a time series of predictions for individual observations. In this way, it shares the advantage of calibration in that it allows models to be evaluated in dimensions other than the ability to pass overidentification tests. I apply this to evaluate the time-series predictive ability of several standard consumption-based asset pricing models, such as the standard CRRA model, the internal habit model of Ferson and Constantinides, and the external habit model of Campbell and Cochrane. I will also present some preliminary work on extending the numerical method of Judd, Kubler, and Schmedders for solving complete-market asset pricing models to recursive preferences such as Epstein-Zin.

************************************************************************************************

************************************************************************************************

**3:45-5:00pm: Yongyang Cai — Parallelization and GSSA**

We develop a parallel GSSA to solve high-dimensional problems efficiently. Using OpenMP on a 24-core computer, the parallel method runs about 10 times faster than its serial version for 100-, 160-, and 200-dimensional problems with quadratic approximation. For a 100-country optimal growth problem (with 200 state variables), our parallel GSSA method using a quadratic approximation and a 24-core computer converges in several hours.

The slides are below: GSSA&Parallel_Presentation.pdf

************************************************************************************************

************************************************************************************************

**Poster: Bulat Gafarov – Sparse grid for interpolation on a spherical domain**

Recent developments in solution techniques for DSGE models, such as GSSA and EDS, allow one to compute an approximate solution only on the essential ergodic set. In many examples the essential ergodic set has a spherical shape after rescaling. An n-ball constitutes only a tiny fraction of the encompassing hypercube in high-dimensional spaces, so one can achieve the same approximation precision with a smaller number of orthogonal basis functions whose support is restricted to the ball rather than to the encompassing hypercube. I propose a set of algebraic orthogonal basis functions and the corresponding sparse grid. These basis functions and nodes are designed for uniform interpolation of a smooth function using the collocation method and have properties similar to Chebyshev polynomials and Smolyak nodes on a hypercube. In particular, the number of nodes equals the number of corresponding basis functions, i.e., the identification is exact and the projection coefficients for a given function have a closed-form representation. As in the original case with a rectangular domain, the number of nodes and basis functions grows polynomially with the dimension of the problem, which makes the method suitable for high-dimensional problems.

************************************************************************************************

====================================================

====================================================

**August 5-7**

************************************************************************************************

**Monday, August 5:**

**10 am: Inna Tsener – Geometric Programming: approaches for solving dynamic economic models**

Numerical methods for solving dynamic economic models often require finding solutions to nonlinear optimization problems or systems of nonlinear equations. In this paper, we show how to compute solutions to such problems using a geometric programming (GP) technique. This technique solves optimization problems whose objective and constraint functions have a special form. We show how to represent a typical economic problem in a form suitable for GP methods. The GP method does not depend on the initial guess and is tractable in problems with high dimensionality. We illustrate the application of the GP technique in the context of several economically relevant examples.
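
The "special form" GP exploits is the posynomial: a sum of terms c * x^a with positive coefficients. Under the change of variables y = log(x) such an objective becomes convex, so any local method finds the global optimum regardless of the starting point. The one-variable sketch below only illustrates that change of variables; real GP solvers handle constraints and many variables.

```python
import math

def minimize_posynomial_1d(terms, y0=0.0, tol=1e-10, max_iter=100):
    """Minimize sum of c * x**a over x > 0, with terms = [(c, a), ...].

    Works in y = log(x), where f(y) = sum c * exp(a * y) is convex,
    and applies Newton's method. Returns the minimizing x.
    """
    y = y0
    for _ in range(max_iter):
        g = sum(c * a * math.exp(a * y) for c, a in terms)        # f'(y)
        h = sum(c * a * a * math.exp(a * y) for c, a in terms)    # f''(y) > 0
        step = g / h
        y -= step
        if abs(step) < tol:
            break
    return math.exp(y)
```

For example, minimizing 2x + 3/x^2 gives the stationarity condition 2 = 6/x^3, i.e. x = 3^(1/3), and Newton's iteration reaches it from the default start y0 = 0 in a handful of steps.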

************************************************************************************************

I encourage all to attend the SITE session on macro (http://www.stanford.edu/group/SITE/SITE_2013/2013_program.html).

Judd will be in town that week only on Monday and Tuesday — niece getting married in Logan, UT (I wonder why anyone wants to go to Logan in August; probably will still wonder after I return)

====================================================

**Later in August:**

Theme: Algebraic geometry and its applications to economics

Speakers: Judd, Renner, TJ Canann

- BER_SCE_GR.pdf
- DP&Parallel_presentation.pdf
- EDS_JMM2013.pdf
- GSSA details.pdf
- GSSA&Parallel_presentation.pdf
- GSSA_Summer 2013.pdf
- JMM GSSA SUPP.pdf
- JMM GSSA.pdf
- Judd_Supergames_Lecture2.pdf
- Judd_Supergames_Lecture_1.pdf
- Mathematica_SimpleLifeCycleModel.pdf
- MultiDimensional Optimal Tax Notes.pdf
- Parallel-DP-PiCloud.pdf
- Smolyak summary.pdf
- Stanford_07_13.pdf
- bejarano_petsc_presentation.pdf
- compCA2013_supergames.pdf
- econsim_presentation_stanford.pdf
- stanford-gmm-presentation.pdf