Reading Data into R
===================

Preparing Data
--------------

Believe it or not, reading in data is often the hardest part of working with R. If you collect and store your data in Excel or Google Docs, you will need to carefully format your spreadsheet. It should obey the following rules:

1. The spreadsheet should contain a single sheet.

2. Row 1 should contain variable names in consecutive cells, starting with Cell A1. For convenience, the names should consist of lowercase words and contain no symbols or punctuation.

3. Subsequent rows (starting with Row 2) should contain your observations (data).

4. If the value of a variable is missing for a particular observation, the corresponding cell in the spreadsheet should be empty.

5. All other cells in the spreadsheet should be empty.

This sounds straightforward, but many spreadsheets that you find "in the wild" do not obey these rules. You will have to reformat these spreadsheets, usually by deleting empty rows and columns and by deleting notes and other annotations.

Even if your spreadsheet is formatted as above, R cannot open Excel files. To save your data to an R-compatible file format, export your data as a "Comma-Separated Value" (CSV) file. You can do this from the File menu in Excel.


Reading CSV Files
-----------------

If you have a properly-formatted CSV file, you can read it into R using the `read.csv` function. There are two ways to specify the file. To use your system's file chooser, run the command

```{r, eval=FALSE}
data <- read.csv(file.choose())
```

Alternatively, if you know the name of the file, you can pass it directly to the `read.csv` function. Note that if you pass the file name directly, you must either specify the full path to the file, or you must set the "working directory" to be the directory that contains the file. To set the working directory, either use the `setwd` function or run the *Set Working Directory* command from RStudio's **Session** menu.

Suppose that I want to open a file named "bikedata.csv", which is stored in the "~/Datasets" directory on my system. I first set the working directory to "~/Datasets" by choosing **Session** > *Set Working Directory* > *Choose Directory ...*. This will execute the command

```{r, eval=FALSE}
setwd("~/Datasets")
```

(In fact, if I do not want to use the menu system, then I can just type this command directly to achieve the same effect.) Once the working directory is set, I can read in the data to a variable named `bikedata` by executing the command

```{r, eval=FALSE}
bikedata <- read.csv("bikedata.csv")
```

One nice feature of the `read.csv` function is that it correctly handles web addresses. For example, to read the file "bikedata.csv" from the course website, you can run the command

```{r}
bikedata <- read.csv("http://ptrckprry.com/course/forecasting/data/bikedata.csv")
```

This downloads the file and reads the data into the `bikedata` variable.


--------------------------------------------------------------------------------

Basic Types and Operations
==========================

Variables
---------

In R, we use the term "variable" to refer to a name-value pair. You should not confuse this concept with the types of variables you have seen in your math classes (they are similar in some ways, but different in others). In the last section, when we ran the command `data <- read.csv(file.choose())`, we created a variable with name `data` and value equal to the contents of the chosen file.
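The value of a variable can be large: `bikedata`, for instance, holds the entire dataset we just read in. A quick way to check what such a variable contains is the `dim` function, which reports the number of rows and columns. (This check is a small aside, not part of the original notes.)

```{r}
# number of rows (observations) and columns (variables) in the data frame
dim(bikedata)
```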
To create a variable or to assign a new value to an existing variable, use the assignment command (`<-`), which is meant to look like an arrow pointing from the value to the variable name. For example, the command

```{r}
a <- 2.7
```

means "assign the value `2.7` to the variable named `a`". Another way to read this is "variable `a` gets the value `2.7`". When you have a variable, you can use the name in place of the value:

```{r}
a + 10
5 * a
```

You can see the value of a variable by typing its name and pressing enter:

```{r}
a
```

An alternative way to read in a file to the variable named `data` is to run the following sequence of commands:

```{r, eval=FALSE}
filename <- file.choose()
data <- read.csv(filename)
```

The first command asks the user to choose a file, and stores the resulting name in the `filename` variable. This variable contains the name of the file, but not the actual contents. The second command takes the name of the file, opens it, reads the contents into memory, and stores the result in the `data` variable.


Functions
---------

Besides variables, the other main concept you need to learn in R is that of a *function*. You are probably familiar with the concept of a function from your mathematics courses, and a function in R is very similar: a function is something that takes zero or more values, performs a sequence of actions, and returns a result. We have already seen three functions: `file.choose`, `read.csv`, and `setwd`.

We *call* a function by putting a pair of parentheses `()` after the function name. Many functions, including `read.csv` and `setwd`, require one or more values as input. We refer to these values as *arguments*, and we specify them by putting the values inside the parentheses. When we do so, we say that we are *passing the value* of the argument to the function.

Sometimes a function will have optional arguments. These are arguments that, if left unspecified, will be given reasonable default values. For example, by default, the `file.choose` function forces the user to choose an existing file. To allow the user to choose a name for a new file, pass the argument `new=TRUE` to the `file.choose` function:

```{r, eval=FALSE}
file.choose(new=TRUE)
```

Before, we did not specify the `new` argument, and it defaulted to the value `FALSE`.


Data Frames
-----------

The `read.csv` command opens the file and reads the data into a type of object called a "data frame". Conceptually, a data frame is just like a spreadsheet: it has columns, corresponding to variables, and rows, corresponding to observations. Each row and column has a name. Usually, the row names are the character strings "1", "2", etc., but this is not always the case.

To see the first 6 rows in the `bikedata` data frame, run the command

```{r}
head(bikedata)
```

We can see that there are nine columns, named `vehicle`, `colour`, `passing.distance`, `street`, `helmet`, `kerb`, `datetime`, `bikelane`, and `city`. To see a summary of the entire data frame, use the `summary` function:

```{r}
summary(bikedata)
```


Extracting Columns
------------------

Let's say we want to investigate the `passing.distance` variable. To do this, we must first *extract* that column from the `bikedata` data frame. There are three ways to do this:

```{r}
x <- bikedata$passing.distance
x <- bikedata[["passing.distance"]]
x <- bikedata[,"passing.distance"]
```

All three commands are equivalent ways to extract the `passing.distance` column and store it in a variable named `x`.
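If you want to convince yourself that these forms really do agree, you can compare two of them with the `identical` function. This is just a quick sanity check, not part of the original workflow:

```{r}
# TRUE if the two extraction methods return exactly the same vector
identical(bikedata$passing.distance, bikedata[["passing.distance"]])
```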
The `$` form is the most common, but you will sometimes see the other two forms, as well.


Vectors
-------

Data frame columns are stored in a data type called a "vector". Conceptually, a vector is a one-dimensional array of values, indexed by integers starting at `1`. Most functions in R operate on vectors.

You can access individual values by using double square brackets. For example, to see the first element of the vector, type the command

```{r}
x[[1]]
```

To see the fifth value, type the command

```{r}
x[[5]]
```

To see how many elements are contained in the vector, use the `length` function:

```{r}
length(x)
```

To see the last element, type

```{r}
x[[length(x)]]
```

To extract a subvector, use single square brackets. For example, the subvector consisting of the first 25 elements is

```{r}
x[1:25]
```

Here, `1:25` is shorthand for "integers 1 to 25". Since not all 25 values fit onto a single line, R wraps the values. At the start of each line, R prints the index of the first value on the line in square brackets. Looking at the output above, we can see that `1.410` is the 12th element and `1.492` is the 24th element of the result.

You may have asked yourself earlier why the output of `x[[1]]` and other similar commands was prefixed by `[1]`. The reason for this is that R doesn't have the concept of a "single value" or "scalar". The only way to represent the value of `x[[1]]` is as a length-one vector. The output
    [1] 2.114
denotes a vector with a single value (`2.114`), stored at index `1`.

Since there is no concept of a "scalar" in R, the command `x[[1]]` is equivalent to `x[1:1]`, which is also the same as `x[1]`. In other programming languages, `x[[1]]`, "the first element of `x`", and `x[1]`, "the subvector of `x` starting and ending at index `1`", would be different; in R, these are identical. Because of this, most people use single brackets instead of double brackets when indexing vectors, writing `x[1]` and `x[5]` instead of `x[[1]]` and `x[[5]]`.


--------------------------------------------------------------------------------

Plots and Descriptive Statistics
================================

Descriptive Statistics for Numeric Variables
--------------------------------------------

There are a variety of functions for computing descriptive statistics for the values stored in a vector.

* Sum of values:

    ```{r}
    sum(x)
    ```

* Measures of central tendency (sample mean and median):

    ```{r}
    mean(x)
    median(x)
    ```

* Measures of variability (sample standard deviation and sample variance):

    ```{r}
    sd(x)
    var(x)
    ```

* Extreme values (minimum and maximum):

    ```{r}
    min(x)
    max(x)
    ```

* Quantiles:

    ```{r}
    quantile(x, .25) # first quartile
    quantile(x, .75) # third quartile
    quantile(x, .99) # 99th percentile
    ```


Plots for Numeric Variables
---------------------------

We can use the `hist` command to make a histogram of the values stored in a vector:

```{r}
hist(x)
```

By default, the output looks fine when printed in black and white, but it isn't very pretty. We can specify the bin color, change the axis labels, and omit the main title by passing additional arguments to the `hist` function:

```{r, tidy=FALSE}
hist(x, col="steelblue", xlab="Passing Distance", ylab="Count", main="")
```

Use the `boxplot` and `qqnorm` commands to make boxplots and normal probability plots, as in the following examples:

```{r, tidy=FALSE}
boxplot(x, border="darkred", ylab="Passing Distance")
qqnorm(x, col="darkgreen", xlab="Normal Quantiles",
       ylab="Passing Distance Quantiles", main="")
```


Scatterplots
------------

We can make a scatter plot using the `plot` command. For example, to plot passing distance versus distance to the kerb, run the command

```{r, tidy=FALSE}
plot(bikedata$kerb, bikedata$passing.distance,
     xlab="Distance to Kerb", ylab="Passing Distance")
```

To connect the points with lines, use the `type="l"` argument. For example:

```{r, tidy=FALSE}
t <- 1:10
plot(t, t^2, type="l")
```

We can use the `lty` and `col` arguments to change the style and color of the line:

```{r, tidy=FALSE}
t <- 1:10
plot(t, t^2, type="l", lty=2, col="green")
```


Categorical Variables
---------------------

So far, we have seen how to use R to summarize and plot a numeric (quantitative) variable. R also has very good support for categorical (qualitative) variables, referred to as *factors*. To see the *levels*, the set of possible values for a factor variable, use the `levels` function. For example, to see the levels of the `colour` variable:

```{r}
levels(bikedata$colour)
```

To tabulate the values of the variable, use the `table` command, as in

```{r}
table(bikedata$colour)
```

Note: by default, the `table` command omits missing values.
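Before deciding how to handle the missing values, it can help to know how many there are. One quick way to count them, not shown in the original notes, is to sum the result of `is.na`:

```{r}
# number of observations with a missing colour
sum(is.na(bikedata$colour))
```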
To include these values in the output, include `useNA="ifany"` in the call to `table`:

```{r}
table(bikedata$colour, useNA="ifany")
```

We can present tabulated counts in a bar plot using the following commands:

```{r}
tab <- table(bikedata$colour, useNA="ifany")
barplot(tab)
```

Usually, it makes sense to arrange the table values in decreasing order. Here is an example with sorted counts that adds axis labels and changes the bar colors:

```{r, tidy=FALSE}
barplot(sort(tab, decreasing=TRUE),
        xlab="Colour", ylab="Count", col="steelblue")
```


--------------------------------------------------------------------------------

Inference
=========

Inference for a Population Mean
-------------------------------

We can use the `t.test` function to test a hypothesis about a population mean.

```{r}
t.test(bikedata$passing.distance)
```

This reports the t statistic, the degrees of freedom, the p-value, and the sample mean. The command also reports a 95% confidence interval for the population mean. To change the confidence level, use the `conf.level` argument, as in

```{r}
t.test(bikedata$passing.distance, conf.level=0.99)
```

By default, the null hypothesis is that the true (population) mean is equal to 0, and the alternative hypothesis is that the true mean is not equal to 0. To use a different null, pass the `mu` argument. To use a different alternative, pass `alternative="less"` or `alternative="greater"`. For example, to test the null hypothesis that the true mean is equal to 1.5 against the alternative that it is greater, run the command

```{r}
t.test(bikedata$passing.distance, alternative="greater", mu=1.5)
```

Note that for a one-sided alternative, the confidence interval is one-sided, as well.


Inference for a Population Proportion
-------------------------------------

To perform a test on a population proportion, use the `prop.test` function. This performs a test on a population proportion that is slightly different from the one we cover in the core statistics course, but it will give you a very similar answer. In the first argument, specify `x`, the number of successes; in the second argument, specify `n`, the number of trials. By default, the null value of the population proportion is `0.5`; to specify a different value, use the `p` argument.

For example, to test the null hypothesis that the true proportion of blue cars among those passing the rider on his route is exactly equal to 40%, we first tabulate the `colour` variable:

```{r}
table(bikedata$colour)
```

In this instance, the number of "successes" is equal to the number of blue cars, `636`. Recall that some of the values for the `colour` variable are missing. If the missingness is unrelated to the actual color, then we can safely ignore these values; in this case, the number of "trials" is equal to the sum of the counts for all of the non-missing values:

```{r}
sum(table(bikedata$colour))
```

Now, to test the proportion, we run the command:

```{r}
prop.test(636, 2341, p=0.40)
```

As with the `t.test` function, we can use a one-sided alternative or specify a different confidence level for the interval by using the `alternative` or `conf.level` argument, respectively.
Here is a test of the null that the true proportion is equal to `0.5` against the alternative that it is less, along with a one-sided 99% confidence interval:

```{r}
prop.test(636, 2341, p=0.50, alternative="less", conf.level=0.99)
```


--------------------------------------------------------------------------------

Linear Regression
=================

Model Fitting
-------------

We fit a linear regression model using the `lm` command. For example, suppose we want to fit a model with response variable `sqrt(passing.distance)` and predictors `helmet`, `vehicle`, and `kerb`. We would use the following command:

```{r}
model <- lm(sqrt(passing.distance) ~ helmet + vehicle + kerb, data = bikedata)
```

The formula syntax `y ~ x1 + x2 + x3` means use `y` as the response variable, use `x1`, `x2`, and `x3` as predictor variables, and include an intercept in the model. We can either store all of the variables in a data frame and use the `data` argument as above, or we can extract the variables first. For example, we could run the following commands to fit an equivalent model:

```{r}
sqrt.passing.distance <- sqrt(bikedata$passing.distance)
helmet <- bikedata$helmet
vehicle <- bikedata$vehicle
kerb <- bikedata$kerb
model1 <- lm(sqrt.passing.distance ~ helmet + vehicle + kerb)
```

In the latter instance, there is no need to pass the `data` argument to `lm`; the response and the predictor variables exist in the environment.


Inference for Regression Parameters
-----------------------------------

Once we have a fitted model, we can get the coefficient estimates, their standard errors, t statistics, and p-values with the `summary` command.

```{r}
summary(model)
```

This also reports some information about the residuals, along with the R^2 and the adjusted R^2 values. To get a confidence interval for a population regression coefficient, use the `confint` command. For example, we can get a 99% confidence interval for the coefficient of `helmetY` with

```{r}
confint(model, "helmetY", 0.99)
```


Regression Diagnostics
----------------------

Plotting a fitted model shows us regression diagnostics:

```{r}
plot(model)
```

We can extract the raw or standardized residuals with the `residuals` or `rstandard` command. For example, to make a scatter plot of the raw residuals versus `kerb`, use

```{r}
plot(bikedata$kerb, residuals(model))
```

For standardized residuals, use

```{r}
plot(bikedata$kerb, rstandard(model))
```

Here is a boxplot of standardized residuals versus `helmet`:

```{r}
boxplot(rstandard(model) ~ bikedata$helmet)
```


Forecasting
-----------

We use the `predict` command to forecast the response values for new observations. To use this command, we first make a data frame with the predictors for the new observations, then we pass this data frame and the model to the `predict` command. Suppose we want to get a prediction of `sqrt(passing.distance)` when riding with a helmet, being passed by an SUV, and having a kerb distance of 1.2 meters. In this case, we run the commands

```{r}
newdata <- data.frame(helmet="Y", vehicle="SUV", kerb=1.2)
predict(model, newdata)
```

We can also ask for the standard error of the fit:

```{r}
predict(model, newdata, se.fit = TRUE)
```

We can get a 95% confidence interval for the mean with

```{r}
predict(model, newdata, interval = "confidence", level = 0.95)
```

Here, the reported confidence interval is approximately `(1.157, 1.209)`.
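The same call works for several new observations at once: put one row per scenario in the data frame, and `predict` returns one fitted value (or interval) per row. Below is a small illustrative sketch, not from the original notes; the kerb distances of 0.5, 1.0, and 1.5 meters and the name `more.newdata` are made up for the example:

```{r}
# hypothetical scenarios: helmet worn, passed by an SUV, three kerb distances
more.newdata <- data.frame(helmet="Y", vehicle="SUV", kerb=c(0.5, 1.0, 1.5))
predict(model, more.newdata, interval = "confidence", level = 0.95)
```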
Finally, we can get a prediction interval for the response with

```{r}
predict(model, newdata, interval = "prediction", level = 0.95)
```

Here, the reported prediction interval is approximately `(0.897, 1.469)`.
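Keep in mind that `model` was fit on the square-root scale, so the intervals above are for `sqrt(passing.distance)`. Because the square-root transformation is monotone, a rough way to express the prediction interval on the original scale is to square its endpoints. This is a sketch of that idea, not part of the original notes; note that squaring the fitted value gives only an approximate point prediction on the original scale:

```{r}
# prediction interval on the sqrt scale, then squared to the original scale
pred <- predict(model, newdata, interval = "prediction", level = 0.95)
pred^2
```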