23 Model basics

23.1 Introduction

The option na.action determines how missing values are handled; it is a function. Setting it to na.warn (from the modelr package) produces a warning if there are any missing values. If it is not set, R uses the default, na.omit, which silently drops them.
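A minimal sketch of setting this option, following the convention used in the chapter:

```r
library(modelr)

# warn, rather than silently drop, when a model encounters missing values
options(na.action = na.warn)
```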

23.2 A simple model

Exercise 23.2.1

One downside of the linear model is that it is sensitive to unusual values because the distance incorporates a squared term. Fit a linear model to the simulated data below, and visualize the results. Rerun a few times to generate different simulated datasets. What do you notice about the model?
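The simulated data referred to is generated with rt(), which draws the errors from a Student’s \(t\)-distribution (this definition follows R for Data Science):

```r
library(tidyverse)
library(modelr)

sim1a <- tibble(
  x = rep(1:10, each = 3),
  y = x * 1.5 + 6 + rt(length(x), df = 2)
)
```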

Let’s run it once and plot the results:
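For example (sim1a_mod is an illustrative name):

```r
# fit a linear model to one draw of the simulated data and overlay the fit
sim1a_mod <- lm(y ~ x, data = sim1a)

ggplot(sim1a, aes(x = x, y = y)) +
  geom_point() +
  geom_abline(
    intercept = coef(sim1a_mod)[1],
    slope = coef(sim1a_mod)[2],
    colour = "red"
  )
```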

We can also do this more systematically, by generating several simulations and plotting the line.
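A sketch of one approach: simulate twelve datasets and plot each with its least squares fit (simt() and sims_t are illustrative names):

```r
simt <- function(i) {
  tibble(
    x = rep(1:10, each = 3),
    y = x * 1.5 + 6 + rt(length(x), df = 2),
    .id = i
  )
}

sims_t <- map_df(1:12, simt)

ggplot(sims_t, aes(x = x, y = y)) +
  geom_point() +
  geom_smooth(method = "lm", colour = "red", se = FALSE) +
  facet_wrap(~.id, ncol = 4)
```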

What if we did the same things with normal distributions?
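The same sketch with rnorm() in place of rt():

```r
sim_norm <- function(i) {
  tibble(
    x = rep(1:10, each = 3),
    y = x * 1.5 + 6 + rnorm(length(x)),
    .id = i
  )
}

sims_norm <- map_df(1:12, sim_norm)

ggplot(sims_norm, aes(x = x, y = y)) +
  geom_point() +
  geom_smooth(method = "lm", colour = "red", se = FALSE) +
  facet_wrap(~.id, ncol = 4)
```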

There are no large outliers, and the slopes are more similar across the simulated datasets.

The reason for this is that the Student’s \(t\)-distribution, from which we sample with rt(), has heavier tails than the normal distribution (rnorm()). This means that the Student’s \(t\)-distribution assigns a larger probability to values far from the center of the distribution.

For a normal distribution with mean zero and standard deviation one, the probability of being greater than 2 is,
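```r
# P(Z > 2) for Z ~ N(0, 1)
pnorm(2, lower.tail = FALSE)
#> [1] 0.02275013
```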

For a Student’s \(t\) distribution with degrees of freedom = 2, it is more than 3 times higher,
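```r
# P(T > 2) for T with 2 degrees of freedom
pt(2, df = 2, lower.tail = FALSE)
#> [1] 0.09175171
```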

Exercise 23.2.2

One way to make linear models more robust is to use a different distance measure. For example, instead of root-mean-squared distance, you could use mean-absolute distance:
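This is the distance function given in the exercise:

```r
measure_distance <- function(mod, data) {
  diff <- data$y - make_prediction(mod, data)
  mean(abs(diff))
}
```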

For the above function to work, we need to define a function, make_prediction(), that takes a numeric vector of length two (the intercept and slope) and returns the predictions,
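For example:

```r
# mod is c(intercept, slope)
make_prediction <- function(mod, data) {
  mod[1] + mod[2] * data$x
}
```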

Using the sim1a data, the best parameters of the least absolute deviation are:
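```r
# starting values of zero are an arbitrary but common choice
best <- optim(c(0, 0), measure_distance, data = sim1a)
best$par
```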

The parameters that minimize the least squares objective function are:
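A least squares analogue of measure_distance() (measure_distance_ls is an illustrative name):

```r
measure_distance_ls <- function(mod, data) {
  diff <- data$y - (mod[1] + mod[2] * data$x)
  sqrt(mean(diff^2))
}

best <- optim(c(0, 0), measure_distance_ls, data = sim1a)
best$par
```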

In practice, I suggest not using optim() to fit this model, and instead using an existing implementation. The rlm() and lqs() functions in the MASS package fit robust and resistant linear models.
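For instance:

```r
# robust regression via M-estimation; MASS:: avoids masking dplyr::select
MASS::rlm(y ~ x, data = sim1a)
```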

Exercise 23.2.3

One challenge with performing numerical optimization is that it’s only guaranteed to find a local optimum. What’s the problem with optimizing a three parameter model like this?
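The model referred to in the exercise is (from R for Data Science):

```r
model1 <- function(a, data) {
  a[1] + data$x * a[2] + a[3]
}
```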

The problem is that for any values a[1] = a1 and a[3] = a3, any other values of a[1] and a[3] where a[1] + a[3] == (a1 + a3) will have the same fit.

Depending on our starting points, we can find different optimal values:
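A sketch, reusing model1() inside a distance function (measure_distance_3 is an illustrative name):

```r
measure_distance_3 <- function(a, data) {
  diff <- data$y - model1(a, data)
  sqrt(mean(diff^2))
}

# different starting values for (a[1], a[2], a[3])
best3a <- optim(c(0, 0, 0), measure_distance_3, data = sim1)
best3a$par

best3b <- optim(c(0, 0, 1), measure_distance_3, data = sim1)
best3b$par
```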

In fact there are an infinite number of optimal values for this model.

23.3 Visualising models

Exercise 23.3.1

Instead of using lm() to fit a straight line, you can use loess() to fit a smooth curve. Repeat the process of model fitting, grid generation, predictions, and visualization on sim1 using loess() instead of lm(). How does the result compare to geom_smooth()?

I’ll use add_predictions() and add_residuals() to add the predictions and residuals from a loess regression to the sim1 data.
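A sketch (sim1_loess and sim1_lm are illustrative names; the var argument keeps the two models’ columns distinct):

```r
sim1_loess <- loess(y ~ x, data = sim1)
sim1_lm <- lm(y ~ x, data = sim1)

sim1 <- sim1 %>%
  add_predictions(sim1_lm, var = "pred_lm") %>%
  add_residuals(sim1_lm, var = "resid_lm") %>%
  add_predictions(sim1_loess, var = "pred_loess") %>%
  add_residuals(sim1_loess, var = "resid_loess")
```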

Plotting the predictions shows that loess produces a nonlinear, smooth line through the data:
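```r
ggplot(sim1, aes(x = x)) +
  geom_point(aes(y = y)) +
  geom_line(aes(y = pred_loess), colour = "red")
```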

The predictions of loess are the same as those of the default method for geom_smooth(), because geom_smooth() uses loess() by default for small datasets (fewer than 1,000 observations); the message geom_smooth() prints even tells us that.

We can plot the residuals (red), and compare them to the residuals from lm() (black). In general, the loess model has smaller residuals within the sample (out of sample is a different issue, and we haven’t considered the uncertainty of these estimates).
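For example:

```r
ggplot(sim1, aes(x = x)) +
  geom_ref_line(h = 0) +
  geom_point(aes(y = resid_lm)) +
  geom_point(aes(y = resid_loess), colour = "red")
```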

Exercise 23.3.2

add_predictions() is paired with gather_predictions() and spread_predictions(). How do these three functions differ?

The functions gather_predictions() and spread_predictions() allow for adding predictions from multiple models at once.

Taking the sim1_mod example,
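For example, take the linear model from the chapter and the loess model from the previous exercise, along with a grid of x values:

```r
sim1_mod <- lm(y ~ x, data = sim1)
sim1_loess <- loess(y ~ x, data = sim1)

grid <- sim1 %>%
  data_grid(x)
```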

The function add_predictions() adds only a single model at a time. To add two models:
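```r
# one call per model; the var argument names each prediction column
grid %>%
  add_predictions(sim1_mod, var = "pred_lm") %>%
  add_predictions(sim1_loess, var = "pred_loess")
```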

The function gather_predictions() adds predictions from multiple models by stacking the results and adding a column with the model name,
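```r
grid %>%
  gather_predictions(sim1_mod, sim1_loess)
```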

The function spread_predictions() adds predictions from multiple models in wide format, with one column of predictions per model, named after that model.
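For example:

```r
grid %>%
  spread_predictions(sim1_mod, sim1_loess)
```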

The function spread_predictions() is similar to the example which runs add_predictions() for each model, and is equivalent to running spread() after running gather_predictions():
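```r
grid %>%
  gather_predictions(sim1_mod, sim1_loess) %>%
  spread(model, pred)
```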

Exercise 23.3.3

What does geom_ref_line() do? What package does it come from? Why is displaying a reference line in plots showing residuals useful and important?

The geom geom_ref_line() adds a reference line to a plot. It comes from the modelr package. It is equivalent to running geom_hline() or geom_vline() with default settings that are useful for visualizing models. Putting a reference line at zero for residuals is important because good models (generally) should have residuals centered at zero, with approximately the same variance (or distribution) over the support of x, and no correlation with x. A zero reference line makes it easier to judge these characteristics visually.

Exercise 23.3.4

Why might you want to look at a frequency polygon of absolute residuals? What are the pros and cons compared to looking at the raw residuals?

Showing the absolute values of the residuals makes it easier to view the spread of the residuals. The model assumes that the residuals have mean zero, and since their distribution is then (approximately) symmetric about zero, folding the negative residuals onto the positive side effectively doubles the number of residuals available for assessing the spread.

However, using the absolute values of the residuals throws away information about the sign, meaning that the frequency polygon cannot show whether the model systematically over- or under-predicts the response.

23.4 Formulas and model families

Exercise 23.4.1

What happens if you repeat the analysis of sim2 using a model without an intercept? What happens to the model equation? What happens to the predictions?

To run a model without an intercept, add - 1 or + 0 to the right-hand side of the formula:
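```r
mod2 <- lm(y ~ x, data = sim2)
# mod2a (an illustrative name) drops the intercept
mod2a <- lm(y ~ x - 1, data = sim2)
```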

The predictions are exactly the same in the models with and without an intercept:
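```r
sim2 %>%
  data_grid(x) %>%
  spread_predictions(mod2, mod2a)
```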

Exercise 23.4.2

Use model_matrix() to explore the equations generated for the models I fit to sim3 and sim4. Why is * a good shorthand for interaction?

When x2 is a categorical variable, x1 * x2 produces the indicator variables x2b, x2c, and x2d, along with the variables x1:x2b, x1:x2c, and x1:x2d, which are the products of x1 and the x2* indicator variables:
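```r
model_matrix(sim3, y ~ x1 * x2)
```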

We can confirm that the variable x1:x2b is the product of x1 and x2b,
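```r
# backticks are needed because of the colon in the column name
model_matrix(sim3, y ~ x1 * x2) %>%
  summarise(all_equal = all(x1 * x2b == `x1:x2b`))
```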

and similarly for x1:x2c and x2c, and x1:x2d and x2d:
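```r
model_matrix(sim3, y ~ x1 * x2) %>%
  summarise(
    all_equal_c = all(x1 * x2c == `x1:x2c`),
    all_equal_d = all(x1 * x2d == `x1:x2d`)
  )
```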

For x1 * x2 where both x1 and x2 are continuous variables, model_matrix() creates variables x1, x2, and x1:x2:
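```r
model_matrix(sim4, y ~ x1 * x2)
```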

Confirm that x1:x2 is the product of x1 and x2,
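```r
model_matrix(sim4, y ~ x1 * x2) %>%
  summarise(all_equal = all(x1 * x2 == `x1:x2`))
```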

The asterisk * is good shorthand for an interaction since an interaction between x1 and x2 includes terms for x1, x2, and the product of x1 and x2.

Exercise 23.4.3

Using the basic principles, convert the formulas in the following two models into functions. (Hint: start by converting the categorical variable into 0-1 variables.)

The problem is to convert the formulas in the models into functions. I will assume that the function is only handling the conversion of the right hand side of the formula into a model matrix. The functions will take one argument, a data frame with x1 and x2 columns, and it will return a data frame. In other words, the functions will be special cases of the model_matrix() function.

Consider the right hand side of the first formula, ~ x1 + x2. In the sim3 data frame, the column x1 is an integer, and the variable x2 is a factor with four levels.
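For example:

```r
levels(sim3$x2)
#> [1] "a" "b" "c" "d"
```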

Since x1 is numeric, it is unchanged. Since x2 is a factor, it is replaced with columns of indicator variables for all but one of its levels. I will first consider the special case in which x2 takes only the levels it has in sim3. In this case, “a” is the reference level and is omitted, and new columns are made for “b”, “c”, and “d”.
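A sketch of the hard-coded version (model_matrix_mod1 is an illustrative name):

```r
model_matrix_mod1 <- function(.data) {
  mutate(.data,
    `(Intercept)` = 1,
    x2b = as.numeric(x2 == "b"),
    x2c = as.numeric(x2 == "c"),
    x2d = as.numeric(x2 == "d")
  ) %>%
    select(`(Intercept)`, x1, x2b, x2c, x2d)
}

model_matrix_mod1(sim3)
```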

A more general function for ~ x1 + x2 would not hard-code the specific levels in x2.
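One way to write it, reading the levels from the factor itself (model_matrix_mod1b is an illustrative name):

```r
model_matrix_mod1b <- function(.data) {
  # indicator columns for every level of x2 except the reference level
  lvls <- levels(.data$x2)[-1]
  for (lvl in lvls) {
    .data[[str_c("x2", lvl)]] <- as.numeric(.data$x2 == lvl)
  }
  .data[["(Intercept)"]] <- 1
  select(.data, `(Intercept)`, x1, all_of(str_c("x2", lvls)))
}

model_matrix_mod1b(sim3)
```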

Consider the right hand side of the second formula, ~ x1 * x2. The output data frame will consist of x1, columns with indicator variables for each level (except the reference level) of x2, and columns with the x2 indicator variables multiplied by x1.

As with the previous formula, first I’ll write a function that hard-codes the levels of x2.
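A sketch (model_matrix_mod2 is an illustrative name):

```r
model_matrix_mod2 <- function(.data) {
  mutate(.data,
    `(Intercept)` = 1,
    x2b = as.numeric(x2 == "b"),
    x2c = as.numeric(x2 == "c"),
    x2d = as.numeric(x2 == "d"),
    `x1:x2b` = x1 * x2b,
    `x1:x2c` = x1 * x2c,
    `x1:x2d` = x1 * x2d
  ) %>%
    select(
      `(Intercept)`, x1, x2b, x2c, x2d,
      `x1:x2b`, `x1:x2c`, `x1:x2d`
    )
}

model_matrix_mod2(sim3)
```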

For a more general function which will handle arbitrary levels in x2, I will extend the model_matrix_mod1b() function that I wrote earlier.
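A sketch (model_matrix_mod2b is an illustrative name):

```r
model_matrix_mod2b <- function(.data) {
  # start from the additive model matrix
  out <- model_matrix_mod1b(.data)
  # multiply each indicator column by x1 to create the interaction columns
  for (nm in setdiff(names(out), c("(Intercept)", "x1"))) {
    out[[str_c("x1:", nm)]] <- out[["x1"]] * out[[nm]]
  }
  out
}

model_matrix_mod2b(sim3)
```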

These functions could be further generalized to allow x1 and x2 to be either numeric or factors. However, if we generalized much more than that, we would soon end up reimplementing all of the model_matrix() function.

Exercise 23.4.4

For sim4, which of mod1 and mod2 is better? I think mod2 does a slightly better job at removing patterns, but it’s pretty subtle. Can you come up with a plot to support my claim?

Estimate models mod1 and mod2 on sim4,
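These are the models from the chapter:

```r
mod1 <- lm(y ~ x1 + x2, data = sim4)
mod2 <- lm(y ~ x1 * x2, data = sim4)
```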

and add the residuals from these models to the sim4 data,
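For example, gather_residuals() stacks the residuals from both models and records the model name in a model column (sim4_mods is an illustrative name):

```r
sim4_mods <- gather_residuals(sim4, mod1, mod2)
```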

Frequency plots of both the residuals,
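```r
# frequency polygon of the residuals, coloured by model
ggplot(sim4_mods, aes(x = resid, colour = model)) +
  geom_freqpoly(binwidth = 0.5) +
  geom_rug()
```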

and the absolute values of the residuals,
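```r
# frequency polygon of the absolute residuals
ggplot(sim4_mods, aes(x = abs(resid), colour = model)) +
  geom_freqpoly(binwidth = 0.5) +
  geom_rug()
```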

do not show much difference in the residuals between the models. However, mod2 appears to have fewer residuals in the tails of the distribution between 2.5 and 5 (although the most extreme residuals are from mod2).

This is confirmed by checking the standard deviation of the residuals of these models,
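```r
sim4_mods %>%
  group_by(model) %>%
  summarise(resid = sd(resid))
```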

The standard deviation of the residuals of mod2 is smaller than that of mod1.

23.5 Missing values

No exercises

23.6 Other model families

No exercises