Chapter 8 Data wrangling, Part I

In the real world you’ll often create a data set (or be given one) in a format that is less than ideal for analysis. This can happen for a number of reasons. For example, the data may have been recorded in a manner convenient for collection and visual inspection, but which does not work well for analysis and plotting. Or the data may be an amalgamation of multiple experiments, in which each of the experimenters used slightly different naming conventions. Or the data may have come from an instrument that produces output in a fixed format. Sometimes important experimental information is even included in the column headers of a spreadsheet.

Whatever the case, we often find ourselves in the situation where we need to “wrangle” our data into a “tidy” format before we can proceed with visualization and analysis. The “R for Data Science” text discusses some desirable rules for “tidy” data in order to facilitate downstream analyses. These are:

  1. Each variable must have its own column.
  2. Each observation must have its own row.
  3. Each value must have its own cell.

In this lecture we’re going to walk through an extended example of wrangling some data into a “tidy” format.

8.2 Data

To illustrate a data wrangling pipeline, we’re going to use a gene expression microarray data set, based on the following paper:

  • Spellman PT, et al. 1998. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol Biol Cell 9(12): 3273-97.

In this paper, Spellman and colleagues tried to identify all the genes in the yeast genome (>6000 genes) that exhibited oscillatory behaviors suggestive of cell cycle regulation. To do so, they combined gene expression measurements from six different types of cell cycle synchronization experiments.

Download the Spellman data to your filesystem from this link (right-click the “Download” button and save to your Downloads folder or similar).

I suggest that once you download the data, you open it in a spreadsheet program (e.g. Excel) or use the RStudio Data Viewer to get a sense of what the data looks like.

Let’s load it into R, using the read_tsv() function, using the appropriate file path.
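Something like the following should work (the file name below is a placeholder; substitute the path to wherever you saved the data, and note that I’m assigning the result to a data frame named spellman):

library(tidyverse)
library(magrittr)  # for the %<>% compound assignment operator used later

# placeholder file name; use your actual file path
spellman <- read_tsv("spellman-combined.txt")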

The initial dimensions of the data frame are:
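(Checking with dim(); spellman is the name we gave the imported data frame above.)

dim(spellman)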

The six types of cell cycle synchronization experiments included in this data set are:

  1. synchronization by alpha-factor = “alpha”
  2. synchronization by cdc15 temperature sensitive mutants = “cdc15”
  3. synchronization by cdc28 temperature sensitive mutants = “cdc28”
  4. synchronization by elutriation = “elu”
  5. synchronization by cln3 mutant strains = “cln3”
  6. synchronization by clb2 mutant strains = “clb2”

8.3 Renaming data frame columns

Notice that when we imported the data we got a warning message: Missing column names filled in: 'X1' [1]. In a data frame, every column must have a name. The first column of our data set did not have a name in the header, so read_tsv automatically gave it the name X1.

Our first task is to give the first column a more meaningful name. This column gives “systematic gene names” – a standardized naming scheme for genes in the yeast genome. We’ll use dplyr::rename to rename X1 to gene. Note that rename can take multiple arguments if you need to rename multiple columns simultaneously.
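One way to write this, assigning the result back to the same data frame:

spellman %<>% rename(gene = X1)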

Note the use of the compound assignment operator – %<>% – from the magrittr package, which we introduced in our last class session.

8.4 Dropping unneeded columns

Take a look at the Spellman data again in your spreadsheet program (or the RStudio data viewer). You’ll notice there are some blank columns. For example, there is a column with the header “alpha” that has no entries. These are simply visual organizing elements that the creator of the spreadsheet added to separate the different experiments included in the data set.

We can use dplyr::select() to drop columns by prefixing the column names with the negative sign:
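(A sketch, again assigning back with the compound pipe.)

spellman %<>% select(-alpha)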

Note that usually select() keeps only the variables you specify. However if the first expression is negative, select will instead automatically keep all variables, dropping only those you specify.

8.4.1 Finding all empty columns

In the example above, we looked at the data and saw that the “alpha” column was empty, and thus dropped it. This worked because there are only a modest number of columns in the data frame in its initial form. However, if our data frame contained thousands of columns, this “look and see” procedure would not be efficient. Can we come up with a general solution for removing empty columns from a data frame?

When you load a data frame from a spreadsheet, empty cells are given the value NA. In previous class sessions we were introduced to the function is.na() which tests each value in a vector or data frame for whether it’s NA or not. We can count NA values in a vector by summing the output of is.na(). Conversely we can count the number of “not NA” items by using the negation operator (!):
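Here is a small illustration with a made-up vector:

x <- c(1, NA, 3, NA, NA)   # a made-up vector with missing values
sum(is.na(x))              # counts the NA values (3 here)
sum(!is.na(x))             # counts the non-NA values (2 here)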

This seems like it should get us close to a solution, but sum(is.na(...)) applied to a whole data frame counts NAs across the entire data frame, not column by column.

If we want sums of NAs by column, we instead use the colSums() function:
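(Applied to our data frame:)

colSums(is.na(spellman))   # number of missing values in each column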

Columns with all missing values can be more conveniently found by asking for those columns where the number of “not missing” values is zero:
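(For example, the following returns TRUE for every empty column.)

colSums(!is.na(spellman)) == 0

# or, to see just the names of the empty columns
names(spellman)[colSums(!is.na(spellman)) == 0]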

We can combine the colSums(!is.na()) idiom with the dplyr::select_if function to quickly remove all empty columns, like so:
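(One way to write this; select_if accepts either a predicate function or a logical vector, and the . below refers to the data frame piped in.)

spellman.clean <- spellman %>%
  select_if(colSums(!is.na(.)) > 0)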

8.4.2 Dropping columns by matching names

Only two time points from the cln3 and clb2 experiments were reported in the original publication. Since complete time series are unavailable for these two experimental conditions we will drop them from further consideration.

select() can be called with a number of “helper functions” (?select_helpers). Here we’ll illustrate the matches() helper function, which matches column names to a “regular expression”. Regular expressions (also referred to as “regex” or “regexp”) are a way of specifying patterns in strings. For the purposes of this document we’ll illustrate regexes by example; for a more detailed explanation of regular expressions see the regex help (?regex) and the chapter on strings in “R for Data Science”.

Let’s see how to drop all the “cln3” and “clb2” columns from the data frame using matches():
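(A sketch using two matches() calls, one per experiment name.)

spellman.clean %<>%
  select(-matches("cln3"), -matches("clb2"))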

If we wanted, we could have collapsed our two match statements into one as follows:
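(The equivalent single call, matching either name in one regular expression:)

spellman.clean %>%
  select(-matches("cln3|clb2"))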

In this second example, the character “|” specifies an OR match within the regular expression, so this regular expression matches column names that contain “cln3” OR “clb2”.

8.5 Merging data frames

Often you’ll find yourself in the situation where you want to combine information from multiple data sources. The usual requirement is that the data sources have one or more shared columns that allow you to relate the entities or observations (rows) between the data sets. dplyr provides a variety of join functions to handle different data merging operations.

To illustrate merging or joining data sources, we’ll add each gene’s “common name” and a description of the gene’s function to our Spellman data set. I’ve prepared a file with this information, based on data I downloaded from the Saccharomyces Genome Database.
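Loading it looks something like this (the file name below is a placeholder; substitute the path to wherever you saved the file):

# placeholder file name; use your actual file path
gene.info <- read_tsv("spellman-gene-info.tsv")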

Having loaded the data, let’s get a quick overview of its structure:
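(Using head(); glimpse() would work just as well.)

head(gene.info)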

In gene.info, the ftr.name column corresponds to the gene column in our Spellman data set. The std.name column gives the “common” gene name (not every gene has a common name so there are lots of NAs). The description column gives a brief textual description of what the gene product does.

To combine spellman.clean with gene.info we use the left_join function defined in dplyr. As noted in the description of the function, left_join(x, y) returns “all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns.” In addition, we have to specify the column to join by, using the by argument to left_join.
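Putting that together (a sketch; the by argument maps the gene column in our data to ftr.name in gene.info):

spellman.merged <- left_join(spellman.clean, gene.info,
                             by = c("gene" = "ftr.name"))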

By default, the newly added columns are placed at the end of the data frame, so we’ll reorder the variables to bring std.name and description to the second and third columns, preserving the order of all the other columns.

spellman.merged %<>%
  select(gene, std.name, description, everything())

spellman.merged
#> # A tibble: 6,178 x 76
#>    gene  std.name description alpha0 alpha7 alpha14 alpha21 alpha28 alpha35
#>    <chr> <chr>    <chr>        <dbl>  <dbl>   <dbl>   <dbl>   <dbl>   <dbl>
#>  1 YAL0… TFC3     Subunit of…  -0.15  -0.15   -0.21    0.17   -0.42  -0.44 
#>  2 YAL0… VPS8     Membrane-b…  -0.11   0.1     0.01    0.06    0.04  -0.26 
#>  3 YAL0… EFB1     Translatio…  -0.14  -0.71    0.1    -0.32   -0.4   -0.580
#>  4 YAL0… <NA>     Dubious op…  -0.02  -0.48   -0.11    0.12   -0.03   0.19 
#>  5 YAL0… SSA1     ATPase inv…  -0.05  -0.53   -0.47   -0.06    0.11  -0.07 
#>  6 YAL0… ERP2     Member of …  -0.6   -0.45   -0.13    0.35   -0.01   0.49 
#>  7 YAL0… FUN14    Integral m…  -0.28  -0.22   -0.06    0.22    0.25   0.13 
#>  8 YAL0… SPO7     Putative r…  -0.03  -0.27    0.17   -0.12   -0.27   0.06 
#>  9 YAL0… MDM10    Subunit of…  -0.05   0.13    0.13   -0.21   -0.45  -0.21 
#> 10 YAL0… SWC3     Protein of…  -0.31  -0.43   -0.3    -0.23   -0.13  -0.07 
#> # ... with 6,168 more rows, and 67 more variables: alpha42 <dbl>,
#> #   alpha49 <dbl>, alpha56 <dbl>, alpha63 <dbl>, alpha70 <dbl>,
#> #   alpha77 <dbl>, alpha84 <dbl>, alpha91 <dbl>, alpha98 <dbl>,
#> #   alpha105 <dbl>, alpha112 <dbl>, alpha119 <dbl>, cdc15_10 <dbl>,
#> #   cdc15_30 <dbl>, cdc15_50 <dbl>, cdc15_70 <dbl>, cdc15_80 <dbl>,
#> #   cdc15_90 <dbl>, cdc15_100 <dbl>, cdc15_110 <dbl>, cdc15_120 <dbl>,
#> #   cdc15_130 <dbl>, cdc15_140 <dbl>, cdc15_150 <dbl>, cdc15_160 <dbl>,
#> #   cdc15_170 <dbl>, cdc15_180 <dbl>, cdc15_190 <dbl>, cdc15_200 <dbl>,
#> #   cdc15_210 <dbl>, cdc15_220 <dbl>, cdc15_230 <dbl>, cdc15_240 <dbl>,
#> #   cdc15_250 <dbl>, cdc15_270 <dbl>, cdc15_290 <dbl>, cdc28_0 <dbl>,
#> #   cdc28_10 <dbl>, cdc28_20 <dbl>, cdc28_30 <dbl>, cdc28_40 <dbl>,
#> #   cdc28_50 <dbl>, cdc28_60 <dbl>, cdc28_70 <dbl>, cdc28_80 <dbl>,
#> #   cdc28_90 <dbl>, cdc28_100 <dbl>, cdc28_110 <dbl>, cdc28_120 <dbl>,
#> #   cdc28_130 <dbl>, cdc28_140 <dbl>, cdc28_150 <dbl>, cdc28_160 <dbl>,
#> #   elu0 <dbl>, elu30 <dbl>, elu60 <dbl>, elu90 <dbl>, elu120 <dbl>,
#> #   elu150 <dbl>, elu180 <dbl>, elu210 <dbl>, elu240 <dbl>, elu270 <dbl>,
#> #   elu300 <dbl>, elu330 <dbl>, elu360 <dbl>, elu390 <dbl>

8.6 Reshaping data with tidyr

The tidyr package provides functions for reshaping or tidying data frames. tidyr is yet another component of the tidyverse, and thus was already loaded when we called library(tidyverse).

We’re going to look at two functions, tidyr::gather() and tidyr::extract(), and how they can be combined with the dplyr functions we’ve seen previously. The reading assignment for today’s class session covers a variety of other functions defined in tidyr.

The Spellman data, as I provided it to you, is in what we would call “wide” format. Each column (besides the gene column) corresponds to an experimental condition and time point. For example, “alpha0” is the alpha-factor experiment at time point 0 mins; “alpha7” is the alpha-factor experiment at time point 7 mins, etc. The cells within each column correspond to the expression of a corresponding gene (given by the first column which we renamed gene) in that particular experiment at that particular time point.

In every column (except “gene”), the cells represent the same abstract property of interest – the expression of a gene of interest in a particular experiment/time point. Our first task will be to rearrange our “wide” data frame, which consists of many different columns representing gene expression, into a “long” data frame with just a single column representing expression. We’ll also create a new column to keep track of which experiment and time point each measurement came from.

8.6.1 Wide to long conversions using tidyr::gather

tidyr::gather() takes multiple columns, and collapses them together into a smaller number of new columns. When using gather() you give the names of the new columns to create, as well as the names of any existing columns gather() should not collect together.

Here we want to collapse all 73 of the expression columns – “alpha0” to “elu390” – into two columns: 1) a column to represent the expt/time point of the measurement, and 2) a column to represent the corresponding expression value. The columns we don’t want to touch are gene, std.name, and description.
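A sketch of the call (the key column is named expt.and.time, as used below; I’m calling the value column expression):

spellman.long <- spellman.merged %>%
  gather(key = "expt.and.time", value = "expression",
         -gene, -std.name, -description)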

Take a moment to look at the data in the “long format”:
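(Peeking at the first few rows:)

head(spellman.long)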

And compare the dimensions of the wide data to the new data:
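(Comparing the two versions:)

dim(spellman.merged)  # wide format
dim(spellman.long)    # long format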

As you see, we’ve gone from a data frame with 6178 rows and 76 columns (wide format), to a new data frame with 450994 rows and 5 columns (long format).

8.6.2 Extracting information from combined variables using tidyr::extract

The column expt.and.time violates one of our principles of tidy data: “Each variable must have its own column.” This column conflates two different types of information – the experiment type and the time point of the measurement. Our next task is to split this information up into two new variables, which will help to facilitate downstream plotting and analysis.

One complicating factor is that the different experiments/time combinations have different naming conventions:

  • The “alpha” and “elu” experiments are of the form “alpha0”, “alpha7”, “elu0”, “elu30”, etc. In this case, the first part of the string gives the experiment type (either alpha or elu) and the following digits give the time point.

  • In the “cdc15” and “cdc28” experiments the convention is slightly different; they are of the form “cdc15_0”, “cdc15_10”, “cdc28_0”, “cdc28_10”, etc. Here the part of the string before the underscore gives the experiment type, and the digits after the underscore give the time point.

Because of the differences in naming conventions, we will find it easiest to break up spellman.long into a series of sub-data sets corresponding to each experiment type in order to extract out the experiment and time information. After processing each data subset separately, we will join the modified sub-data frames back together.

8.6.4 Splitting columns
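First, let’s pull out the rows from the alpha-factor experiment (a sketch using filter() plus stringr::str_detect(), the same approach we’ll use for the cdc experiments below):

alpha.long <- spellman.long %>%
  filter(str_detect(expt.and.time, "alpha"))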

Having subsetted the data, we can now split expt.and.time into two new variables – expt and time. To do this we use tidyr::extract.
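Something like this (convert = TRUE asks extract() to convert the new columns to appropriate types):

alpha.long %<>%
  extract(expt.and.time,
          into = c("expt", "time"),
          regex = "(alpha)([[:digit:]]+)",
          convert = TRUE)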

Let’s take a moment to look at the regex argument to extract: regex = "(alpha)([[:digit:]]+)". The regex is specified as a character string. Each part we want to match and extract is surrounded by parentheses. In this case we have two sets of parentheses corresponding to the two matches we want to make. The first part of the regex is (alpha); here we’re looking to make an exact match to the string “alpha”. The second part of the regex reads ([[:digit:]]+). [[:digit:]] indicates we’re looking for a numeric digit. The + after [[:digit:]] indicates that we want to match one or more digits (i.e. to get a match we need to find at least one digit, but more than one digit is also a match).

Let’s take a look at the new version of alpha.long following application of extract:
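(Again using head():)

head(alpha.long)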

Notice our two new variables, both of which have appropriate types!

A data frame for the elutriation data can be created similarly:
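(A sketch following the same pattern:)

elu.long <- spellman.long %>%
  filter(str_detect(expt.and.time, "elu")) %>%
  extract(expt.and.time,
          into = c("expt", "time"),
          regex = "(elu)([[:digit:]]+)",
          convert = TRUE)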

8.6.4.1 A fancier regex for the cdc experiments

Now let’s process the cdc experiments (cdc15 and cdc28). As before we extract the corresponding rows of the data frame using filter and str_detect. We then split expt.and.time using tidyr::extract. In this case we carry out the two steps in a single code block using pipes:
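(A sketch of the combined pipeline:)

cdc.long <- spellman.long %>%
  filter(str_detect(expt.and.time, "cdc")) %>%
  extract(expt.and.time,
          into = c("expt", "time"),
          regex = "(cdc15|cdc28)_([[:digit:]]+)",
          convert = TRUE)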

The regex – "(cdc15|cdc28)_([[:digit:]]+)" – is slightly fancier in this example. As before, there are two parts we’re extracting: (cdc15|cdc28) and ([[:digit:]]+). The first parenthesized regex is an “OR” – i.e. match “cdc15” or “cdc28”. The second parenthesized regex is the same as we saw previously. Separating the two parenthesized regexes is an underscore (_). The underscore isn’t parenthesized because we only want it to participate in the match, not to be extracted into its own column.

8.6.5 Combining data frames

If you have two or more data frames with identical columns, their rows can be combined into a single data frame using rbind (defined in the base package). For example, to reassemble the alpha.long, elu.long, and cdc.long data frames into a single data frame we do:
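(The resulting object, spellman.final, is what we’ll work with from here on.)

spellman.final <- rbind(alpha.long, elu.long, cdc.long)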

8.6.6 Sorting data frame rows

Currently the spellman.final data frame is sorted by time point and experiment.

It might be useful instead to sort by gene and experiment. To do this we can use dplyr::arrange:
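(A sketch, sorting in place with the compound pipe:)

spellman.final %<>%
  arrange(gene, expt)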

8.7 Using your tidy data

Whew – that was a fair amount of work to tidy our data! But having done so we can now carry out a wide variety of very powerful analyses.

8.7.1 Visualizing gene expression time series

Let’s start by walking through a series of visualizations of gene expression time series. Each plot will show the expression of one or more genes, at different time points, in one or more experimental conditions. Our initial visualizations exploit the “long” versions of the tidy data.

First a single gene in a single experimental condition:
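(A sketch; the gene CLN2 and the alpha-factor experiment are arbitrary example choices.)

spellman.final %>%
  filter(std.name == "CLN2", expt == "alpha") %>%  # example gene and experiment
  ggplot(aes(x = time, y = expression)) +
    geom_line() +
    labs(x = "Time (mins)", y = "Expression",
         title = "CLN2 expression, alpha-factor synchronization")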

We can easily modify the above code block to visualize the expression of multiple genes of interest:
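(Again, the particular genes below are arbitrary example choices.)

genes.of.interest <- c("CLN1", "CLN2", "CLB1", "CLB2")  # example genes

spellman.final %>%
  filter(std.name %in% genes.of.interest, expt == "alpha") %>%
  ggplot(aes(x = time, y = expression, color = std.name)) +
    geom_line()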

By employing facet_wrap() we can visualize the expression of this set of genes in each of the experiment types:
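(A sketch, dropping the filter on expt and faceting instead:)

spellman.final %>%
  filter(std.name %in% genes.of.interest) %>%
  ggplot(aes(x = time, y = expression, color = std.name)) +
    geom_line() +
    facet_wrap(~ expt)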

The different experimental treatments were carried out for varying lengths of time due to the differences in their physiological effects. Plotting them all on the same time scale can obscure the patterns of oscillation we might be interested in, so let’s modify our code so that the plots share the same y-axis but have independently scaled x-axes.
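Setting scales = "free_x" in facet_wrap() accomplishes this:

spellman.final %>%
  filter(std.name %in% genes.of.interest) %>%
  ggplot(aes(x = time, y = expression, color = std.name)) +
    geom_line() +
    facet_wrap(~ expt, scales = "free_x")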

8.7.2 Finding the most variable genes

When dealing with very large data sets, one ad hoc filtering criterion that is often employed is to focus on those variables that exhibit the greatest variation (variation is a measure of the spread of data; we will give a precise definition in a later lecture). To do this, we first need to order our variables (genes) by their variation. Let’s see how we can accomplish this using our long data frame:
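(A sketch using group_by() and summarize(); na.rm = TRUE is needed because some expression values are missing.)

spellman.final %>%
  group_by(gene) %>%
  summarize(expression.var = var(expression, na.rm = TRUE)) %>%
  arrange(desc(expression.var))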

The code above calculates the variance of each gene but ignores the fact that we have different experimental conditions. To take into account the experimental design of the data at hand, let’s calculate the average variance across the experimental conditions:
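(A sketch; the object and column names here are my choices.)

gene.variances <- spellman.final %>%
  group_by(expt, gene) %>%
  summarize(expression.var = var(expression, na.rm = TRUE)) %>%
  group_by(gene) %>%
  summarize(avg.expression.var = mean(expression.var))

gene.variances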

Based on the average expression variance across experimental conditions, let’s get the names of the 1000 most variable genes:
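(Building on gene.variances from the previous step:)

top.genes <- gene.variances %>%
  arrange(desc(avg.expression.var)) %>%
  slice(1:1000) %>%
  pull(gene)

head(top.genes)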