Blog Feed

It’s Time to Ditch SPSS

If you were trained in a social science, there is a good chance that you have had to use, or still use, SPSS. It is a powerful program and arguably the industry standard in academic research, especially in the social sciences. The Statistical Package for the Social Sciences (SPSS) is a menu-driven software package that is easy enough to learn – as long as teachers provide assignments and tutorials – to get you analyzing data quickly. In other words, SPSS is a good program for statistical analysis, and the barrier to entry is not too high, making it a great fit for social scientists who are just learning statistics. However, SPSS does have its limitations, and there are options out there that are not only more robust, but completely open source (i.e. free). 

Photo by Kevin Ku on Unsplash
Open Source Software:

One of the advantages of menu-driven statistical software has been its ease of use. Fortunately, many of the open source options – like Python and R – have come a long way towards lowering the learning curve in recent years. Development environments such as RStudio and Jupyter Notebooks (among many others) have made these languages much easier to learn, and they come with many features unavailable in menu-driven programs like SPSS. In addition, when working on projects with Python or R, you can save your work in a way that allows you to replicate, expand, and share your analysis easily. Finally, what really sets languages like R and Python apart – besides being free – is that there are a large number of packages available from a community of people who like to share and help answer questions. This culture of learning and sharing has created options for advanced statistical analysis and modeling – including machine learning – in addition to highly customizable data visualizations and tools for extracting, transforming, and loading (ETL) data to create efficient data pipelines.

Transparency:

The ability to be transparent in your data analysis cannot be stressed enough. In statistics, it is easy to put a variable in the wrong place or select the wrong test and still get results that are statistically significant (and wrong). Typically, the people reading your analysis never see your raw data or the steps you took to reach your results, and black-box programs like SPSS only compound this. With language-based programs like R and Python, you show every aspect of your work along the way, providing greater credibility in the process. Letting other researchers see how you reached your conclusion – or at least giving them the ability to – only strengthens your research and analysis. 
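
As a minimal sketch of what that transparency looks like in practice – the file and variable names here are hypothetical – every step from raw data to test lives in a script that anyone can rerun:

# Load packages
library(tidyverse)

# Read the raw data; the cleaning rule is visible, not hidden behind menu clicks
survey <- read_csv("survey_raw.csv") %>%
  filter(!is.na(score))

# The exact test that was run is on the record, too
t.test(score ~ group, data = survey)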

Demand:

One more advantage of learning a code-based language like Python or R is that those skills are in demand. If you are a graduate student, or an academic looking for jobs outside of the academy, data science skills are very marketable. Industries everywhere want greater insight into their sales, operations, and customer base. In addition, the federal government and many academic institutions need people with the skills to handle large databases that aren’t easily compatible with menu-driven software programs like SPSS. 

What about SAS and Stata?

Both SAS and Stata are statistical software packages that rely on a code-based language. These programs are not free, but they are very powerful, offer a lot of options, and work well with large datasets. For years, SAS has been the preferred software in manufacturing and the medical field, while Stata is often used in political science and survey research. Many organizations continue to use them simply because they don’t want to spend the time and money to train their employees in other languages, not to mention the legacy code they have relied on for years, if not decades. The big drawback of these two programs, besides cost, is that you can’t easily access the packages available in open source software like Python or R.

Versatility of Python and R

If you know how to program in Python or R, it is much easier to switch over to other languages. To code effectively in Python or R, you have to clearly understand your variables and your algorithm. In other words, you have to tell the software exactly what to do. Once you get past the basics, these languages are not difficult to understand. The problem for most people, though, is getting past the basics, which just takes some patience and, honestly, a lot of trial and error. Once you are comfortable summarizing, analyzing, and visualizing data with language-based software, you’ll have a deeper understanding of what the data is actually doing. More importantly, you can easily transfer those skills to other programs. 

Personally, I began on SPSS. It did more than what I needed for my classes in behavioral statistics. When I took a couple of applied statistics courses we used Minitab, in part because it was developed and is still housed at Penn State (so it was free to us). Those two applied stats classes didn’t care what software you used, but they had clear tutorials that went along with our free access to Minitab, so I generally used that. Eventually, I started taking even more advanced stats classes, which worked exclusively in R. We were required to turn in all homework through R Markdown, knitting the document to HTML. We got up and running quickly with the mosaic package and were able to do some interesting analysis and data visualization right away. After I completed my graduate certificate in Applied Statistics, I just kept learning how to code better and analyze data in R. Once I had a solid foundation in R, I was able to learn other languages like SQL, SAS, and Python fairly easily (when needed), because the conceptual foundation had already been set. As a result, I ended up being competitive for numerous jobs outside of academia in both government and industry.

Which option to choose?

This choice all depends on what kind of work you see yourself doing. Personally, I prefer R because it works like a really fancy calculator, making it great for modeling. There are also great packages for visualization, like ggplot2, in addition to packages like dplyr for extracting, transforming, and loading (ETL) data. With that being said, I have worked at places where everyone uses SAS, so I did too. Finally, most positions in data science and statistics now list Python (in addition to SQL) as a necessary language, so if I were just starting out, that is probably where I would invest my time. Relatedly, there is one software program you will almost never find in data job postings: SPSS.
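
As a small illustration of that dplyr-plus-ggplot2 workflow (the sales.csv file and its columns are made up for the example), a few lines take you from raw file to summary chart:

# Load the tidyverse (includes dplyr, ggplot2, and readr)
library(tidyverse)

# Hypothetical example: summarise revenue by region, then plot it
read_csv("sales.csv") %>%
  group_by(region) %>%
  summarise(total_revenue = sum(revenue)) %>%
  ggplot(aes(x = region, y = total_revenue)) +
  geom_col() +
  labs(x = "Region", y = "Total Revenue", title = "Revenue by Region")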

Thanks for reading!  

From Couch to Half Marathon

At the beginning of 2020 I set out to be more active and took up running as a hobby. Right as I completed the Couch to 5K program (C25K), lockdowns were being implemented across the country and I found myself with a lot more time on my hands. So, I next set out to improve my speed by getting my 5K time under 30 minutes, before shifting my focus to running my first-ever half marathon. This blog post takes you along on that journey through the data I collected along the way.

Going from couch to half marathon took me through three different running plans, using two different iPhone apps. The first plan I used was the Couch to 5K program (C25K), a standalone app and plan created by Active. To improve speed, I used the “Tempo Run: 5K” training plan, then built distance with the “Half Marathon Goal” plan, both found within the RunTracker Pro app. Each of these apps has simple-to-follow prompts telling you when to run, walk, or pick up the pace, and each plan is designed to progressively build speed and endurance over time.

The C25K training plan utilizes the run/walk method and includes 3 runs per week – each between 20 and 30 minutes – with the program lasting 9 weeks in total. Over the course of the 27 training runs, the proportion of walking decreases while the proportion of running increases, culminating with three 30-minute runs in the last week of the program. The “Tempo Run: 5K” plan consisted of three runs per week for a total of eight weeks, with the same structure each week: an interval run, a tempo run, and a base run. Similar to the C25K plan, runs progressively increase in both mileage and intensity throughout. Finally, the “Half Marathon Goal” plan consisted of four runs per week – a base run, an interval run, a tempo run, and a long run – for a total of twelve weeks. In this plan, each week ends with a long, slow distance (LSD) run, culminating in a final run of 2 hours and 15 minutes in the last week of the program. In the graphs below, we see good examples of both normal (bottom) and positively skewed (top) distributions when we look at the speeds and distances ran throughout these programs:

Overall Distribution of Running Distances & Paces

Given that each program had different goals, we see some clear distinctions between them. Unsurprisingly, the Half Marathon program featured the longest runs and the largest spread (i.e. variance) with respect to distance, but the least amount of variability with respect to speed. Another expected result came with the Tempo Run: 5K program, which featured the fastest runs and the least amount of variability in distance. These results are clearly represented in the box plots below:

Distribution of Running Distances and Paces by Program

Since there was an ordered component to these programs, the best way to view these data is through a scatter plot, which allows us to visualize progress over time. We can see that running pace improved at a significantly greater rate in the C25K and Faster 5K programs when compared to the Half Marathon plan, which makes sense given their respective goals. This also explains the curvature in the data when looking at running pace. When investigating distance, we see that most runs stayed within 2 to 4 miles throughout each program, with the exception of the long weekend runs in the Half Marathon plan, which clearly separate themselves from the pack linearly over time:

Scatter Plots of Running Distances & Paces over Time

Final Thoughts

While I initially did not set out to go from couch to half marathon, that is what ended up happening, thanks to a few inexpensive running apps and some extra time on my hands due to a global pandemic. The C25K app is a great resource for anyone who is looking to get into running. Employing the run/walk method, the program consists of 27 runs spread out over 9 weeks. To run faster, I completed the Tempo Run: 5K (i.e. Faster 5K) plan before tackling the Half Marathon Goal plan, both of which are included in the Runtracker Pro app. Both of these apps are inexpensive and helpful resources for those interested in getting into, or improving, their running.

One word of caution: many people who have completed these programs stress that you should not be afraid to add extra rest days or repeat workouts as needed. I would agree with that. More importantly, you absolutely should not skip ahead, nor should you run on back-to-back days in the beginning. The quickest way to halt any progress is through injury, so take your time and enjoy the run!

Below are links to posts breaking down each of the programs individually, along with the raw data and code used to create the charts and analysis.

Thanks for reading!

Couch to 5K

Faster 5K

Half Marathon Goal


# Clean up (clears out the previous environment)
rm(list = ls())

# Load Packages 
library(tidyverse)
library(wordcloud2)
library(mosaic)
library(readxl)
library(hrbrthemes)
library(viridis)

# Likert Data Packages
library(psych)
library(FSA)
library(lattice)
library(boot)
library(likert)

# install.packages("wordcloud")   # only needed once
library(wordcloud)
library(tm)


# Grid Extra for Multiplots
library("gridExtra")

# Multiple plot function (just copy paste code)

multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
  library(grid)

  # Make a list from the ... arguments and plotlist
  plots <- c(list(...), plotlist)

  numPlots = length(plots)

  # If layout is NULL, then use 'cols' to determine layout
  if (is.null(layout)) {
    # Make the panel
    # ncol: Number of columns of plots
    # nrow: Number of rows needed, calculated from # of cols
    layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
                    ncol = cols, nrow = ceiling(numPlots/cols))
  }

 if (numPlots==1) {
    print(plots[[1]])

  } else {
    # Set up the page
    grid.newpage()
    pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))

    # Make each plot, in the correct location
    for (i in 1:numPlots) {
      # Get the i,j matrix positions of the regions that contain this subplot
      matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))

      print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
                                      layout.pos.col = matchidx$col))
    }
  }
}


# Couch to Half

# Import data from CSV, no factors

Couch2Half <- read.csv("Couch2Half.csv", stringsAsFactors = FALSE)

Couch2Half <- Couch2Half %>%
  na.omit()

Couch2Half

Couch2Half %>% 
  count(Program)

ggplot(Couch2Half, aes(x = Program, fill = Program)) +
  geom_bar() + 
  labs( x ="", y = "Number of Runs", title = "Runs by Program",  subtitle = "Couch to Half Marathon", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
  scale_fill_manual(values=c('#999999','#E69F00', '#56B4E9'))

# Plot 1 - Density Plot of Running Distances

p1 <- ggplot(Couch2Half, aes(x=Distance)) + 
  geom_density(color="#E69F00", fill="#999999") + labs( x ="Distance (Miles)", y = "", title = "Running Distances",  subtitle = "Couch to Half Marathon", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12),
    plot.caption = element_text(hjust = 1, face = "italic"), 
    axis.text.y=element_blank(),
    axis.ticks.y=element_blank(),
    panel.background = element_blank())

# Plot 2 - Density Plot of Running Paces

p2 <- ggplot(Couch2Half, aes(x=Pace_MPH)) + 
  geom_density(color="#E69F00", fill="#56B4E9") + 
  labs( x ="Pace (Miles per Hour)", y = "", title = "Running Paces",  subtitle = "Couch to Half Marathon", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12),
    plot.caption = element_text(hjust = 1, face = "italic"), 
    axis.text.y=element_blank(),
    axis.ticks.y=element_blank(),
    panel.background = element_blank())

# Combine plots using multi-plot function:

multiplot( p1, p2, cols=1)


# Plot
p3 <- Couch2Half %>%
  ggplot( aes(x=Program, y= Distance, fill=Program)) +
    geom_boxplot() +
    scale_fill_viridis(discrete = TRUE, alpha=0.6) +
    geom_jitter(color="Black", size=0.4, alpha=0.9) + 
  labs( x ="", y = "Distance (Miles)", title = "Distance by Workout",  subtitle = "Couch to Half Marathon", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
  scale_fill_manual(values=c('#999999','#E69F00', '#56B4E9'))
  

# Plot
p4 <- Couch2Half %>%
  ggplot( aes(x=Program, y= Pace_MPH, fill=Program)) +
  geom_boxplot() +
    scale_fill_viridis(discrete = TRUE, alpha=0.6) +
    geom_jitter(color="Black", size=0.4, alpha=0.9) + 
  labs( x ="", y = "Speed (Miles per Hour)", title = "Speed by Workout",  subtitle = "Couch to Half Marathon", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
  scale_fill_manual(values=c('#999999','#E69F00', '#56B4E9'))


# Combine plots using multi-plot function
multiplot( p3, p4, cols=2)


p5 <- ggplot(Couch2Half, aes(x=Run, y= Pace_MPH, color = Program)) + geom_point() +  geom_smooth(method=lm , color="Black", se=TRUE) + labs( x ="Training Session", y = "Pace (Miles per Hour)", title = "Running Pace",  subtitle = "Couch to Half Marathon", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank()) + scale_color_manual(values=c('#999999','#E69F00', '#56B4E9'))



p6<- ggplot(Couch2Half, aes(x=Run, y= Distance, color = Program)) + geom_point() +  geom_smooth(method=lm , color="Black", se=TRUE) + labs( x ="Training Session", y = "Distance (Miles)", title = "Running Distance",  subtitle = "Couch to Half Marathon", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank()) + scale_color_manual(values=c('#999999','#E69F00', '#56B4E9'))

# Combine plots using multi-plot function:

multiplot( p5, p6, cols=1)


# Summary Statistics of Distance
favstats(Couch2Half$Distance)

# Summary Statistics of Pace
favstats(Couch2Half$Pace_MPH)

# Pearson Product Correlation of Distance over Time (session)
cor.test(Couch2Half$Session, Couch2Half$Distance, method = "pearson")

# Pearson Product Correlation of Pace over Time (session)
cor.test(Couch2Half$Session, Couch2Half$Pace_MPH, method = "pearson")

Stumbling Into Statistics

One of the most common questions I get is how I found my way into statistics and data science. Honestly, it wasn’t on purpose, nor was it something I ever imagined. It just happened to work out that way, and I have found that the job can be quite interesting and fulfilling. Nevertheless, for someone who hadn’t taken a stats class until his forties, imagining me as a data scientist may seem wild, so allow me to explain:

When I was in grad school, my research interests were in Online Learning and Self-Efficacy (i.e. confidence), both of which required a deeper understanding of survey design and measurement. Anyone who has done much survey research will tell you that it is relatively easy to conduct a survey, but incredibly difficult to do it well. Fortunately, many universities have courses specifically aimed at deepening those skills, but they often require additional coursework in statistics as a prerequisite. While I could have taken another class in social statistics, I wanted to see what it would be like to take a class in applied statistics. I was also curious whether I could hang in a class with people from the hard sciences. It turns out I could, and I lucked out with a great professor who further sparked my interest. Inspired to learn more, the following semester I took another course in applied statistics (Sampling Methods), in addition to the survey design class I had originally wanted to take. By that point, I was fully invested in learning as much as I could and followed up with coursework in Regression Methods and Design of Experiments. In these classes I learned to use scripting languages, like Python and R, to clean, visualize, and model data. Once I had those skills, things really started to take off.

The coursework where we were required to write in R helped me tremendously. First off, I am a strong believer that “writing is thinking.” When you use a scripting language for statistical analysis, you literally have to write out your models, which reinforces understanding. Since R is a vector-based language, it works like a really big calculator, making it great for modeling and visualizing, as well as for extracting, transforming, and loading (ETL) data. Statistics and machine learning can often seem like computer magic to some people. I can assure you that they’re not. Most of the time the math is based on relatively simple concepts, and we let the software do the heavy lifting with respect to calculation. Using a scripting language like R also gives you the ability to create projects that are replicable, transparent, and easy to share.
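
As a toy example of that calculator-like, vectorized behavior (the numbers here are made up), R operates on whole columns of data at once, with no loops required:

# Distances (miles) and times (minutes) for four hypothetical runs
miles   <- c(2.0, 3.1, 6.2, 13.1)
minutes <- c(24, 33, 62, 140)

# Element-wise division gives the pace for every run at once
pace <- minutes / miles

# Summaries are just as direct
mean(pace)   # average minutes per mile across the four runs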

Photo by Markus Spiske on Unsplash

By the end of grad school, I had leveraged these skills into part-time work in statistics and data science to bring in some extra income while doing something I enjoyed. Some of the projects I worked on ended up being a lot of fun and were very well received, which led to more and more work. Then came a global pandemic that upended how many people viewed their work/life balance, so I decided to find a full-time position as a data scientist and haven’t looked back since. 

Reflecting on my transition into statistics and data science, a few lessons stand out. The most important is the value of finding data projects to work on. There is a reason why educational theorists emphasize the importance of Project Based Learning (PBL). Being able to investigate a problem by finding, cleaning, transforming, analyzing, and communicating the story of the data is the most valuable experience you can have if you are looking into a career in data science. This is where a scripting language comes in. While menu-driven statistical programs like SPSS and Minitab used to be the norm, scripting languages such as Python, R, and SQL are now the standard, thanks to their flexibility and the fact that most of them are completely open source. Finally, I wasn’t prepared for how much my experience teaching and presenting would help me as a data scientist. Many people are scared of math, and many statisticians aren’t the best at communicating with non-statisticians. So, if you are able to tell the story of the data, you can bring a lot of value to an organization. 

Below are some resources I have found particularly useful along the way. If you have any questions or advice about data science, please leave them in the comments below!

Thanks for reading!

Resources

Running Through the Data: Half Marathon Goal by RunTracker

In the latter half of 2020, I set a new goal for myself: run 13.1 miles by the end of the year. Earlier in the year I had completed the Couch to 5K program and later set the goal of improving my time to under 30 minutes. Given the extra time at home thanks to a global pandemic, I set my sights on the half marathon distance. Since I was already familiar with the RunTracker app, I decided to stick with it and used the “Half Marathon Goal” training plan.

The Runtracker app, made by the Fitness 22 company, features a series of running plans tailored to individuals’ current fitness levels and goals. The “Half Marathon Goal” running plan consisted of four runs per week for a total of twelve weeks, with a consistent structure throughout most of the program. After a series of base runs in the first week, the next ten weeks featured a base run on Tuesdays, segments on Thursdays, intervals on Fridays, and a long run on Sundays. Workout durations increase steadily over the course of the first ten weeks before tapering in the final two weeks of the program.

My experience with this running plan was great once I got used to the structure. Previously, the most I had run was three days a week, while this program requires four. That meant there would be runs on consecutive days, which I was not used to. Having just finished a training plan geared towards speed work, I quickly learned I would need to slow down if I was going to keep from getting injured. Once I settled into the format, mileage built progressively and speed eventually followed. By the end of the twelve-week program, I was able to confidently run 13.1 miles using my usual training route, which coincidentally looked like a shoe:

Distance & Pace

Since my goal was to complete a half marathon, the primary variable of interest was obviously distance. Like most runners, I also tend to focus on times, so average running pace served as the secondary variable of interest. Distances ran throughout the training program ranged from 2.14 to 13.12 miles per run, with a mean of 4.85 miles per run. Running paces ranged from 5.16 to 6.1 miles per hour (11:38 to 9:50 min/mile), with a mean of 5.54 miles per hour (10:50 min/mile). The distributions of my runs by distance and speed for this program can be seen in the density plots below:
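
For anyone who wants to double-check those conversions, a small helper function (hypothetical, not part of the original analysis) turns an average speed in miles per hour into a minutes:seconds per-mile pace:

# Convert speed in miles per hour to a "minutes:seconds per mile" pace string
mph_to_pace <- function(mph) {
  min_per_mile <- 60 / mph
  sprintf("%d:%02d",
          as.integer(floor(min_per_mile)),
          as.integer(round((min_per_mile %% 1) * 60)))
}

mph_to_pace(c(5.16, 5.54, 6.10))   # "11:38" "10:50" "9:50"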

Comparing Workouts

When taking a closer look at these distributions by workout type, we can see some clear patterns in the data. Distances for base runs, interval sessions, and segments remained relatively close to one another, ranging from 2.14 to 6.02 miles per run. The long runs on Sundays, though, lived up to their name, ranging from 5.7 to 13.12 miles, with an average of 9.16 miles. Running pace was somewhat consistent across workout types, with each type averaging between 5.5 and 5.6 miles per hour. Distributions by workout type for distance and pace can be seen in the box plots below:

Training Progress

Given that there is an ordered component to training, we can look at these data linearly (i.e. with regression). Below are scatter plots of distances covered and running speeds over the course of the 46 training runs in the program. We see a slightly positive trend in training volume (mileage), while intensity (pace) remained relatively stable throughout the training program. When we take a closer look at the distance plot, we can see how the majority of training volume comes from the long runs on weekends, which is typical of most long-distance training programs:

Cadence & Heart Rate

Two important considerations for runners are heart rate and cadence. When runners let their heart rates get too high, they tire much more quickly. So, distance runners constantly work to keep their heart rate down while still running quickly. This can be aided by increasing cadence to approximately 180 steps per minute. Increasing cadence allows runners to develop better efficiency in their technique – typically by shortening the stride – which over time can lead to a lower heart rate. This translates into better performance with respect to both speed and endurance. In the plot below we can see that both cadence and heart rate are positively associated with running pace, with a clear interaction between these two variables as speed increases, represented by the slopes crossing one another:
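
One way to put a number on that interaction – a quick sketch, assuming the long-format HM_1A data frame created in the code at the end of this post – is to fit a linear model that lets the slope against pace differ by measurement type:

# Separate slopes for Cadence and Avg_Heart_Rate against pace;
# the interaction term estimates how much the two slopes differ
interaction_fit <- lm(BPM ~ Pace_MPH * Measurement, data = HM_1A)
summary(interaction_fit)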

Final Thoughts

The “Half Marathon Goal” plan on the RunTracker app is geared towards regular runners who are ready to tackle the 13.1-mile distance. The training structure consists of four runs per week: a base run, a segment session (mile repeats), an interval session, and one long run on the weekend. The variety of workouts in the program is designed primarily to build the strength and endurance to run a half marathon, with some speed work included to build anaerobic capacity as well. For anyone who has been running for a while and is ready to tackle longer distances, this program could be an excellent option.

Below are some links related to running a first half marathon, along with the raw data and code used to create the charts and analysis.

Thanks for reading!

Resources & Code

# FRONT MATTER

### Note: The HM_1.xlsx file will need to be converted to HM_1.csv to read in correctly. Also, all packages can be installed using the install.packages() function. This only needs to be done once before loading. 

## Clean up (clears out the previous environment)
rm(list = ls())

## Load Packages 
library(tidyverse)
library(wordcloud2)
library(mosaic)
library(readxl)
library(hrbrthemes)
library(viridis)

## Likert Data Packages
library(psych)
library(FSA)
library(lattice)
library(boot)
library(likert)

## Grid Extra for Multiplots
library("gridExtra")

## Multiple plot function (just copy paste code)

multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
  library(grid)

  # Make a list from the ... arguments and plotlist
  plots <- c(list(...), plotlist)

  numPlots = length(plots)

  # If layout is NULL, then use 'cols' to determine layout
  if (is.null(layout)) {
    # Make the panel
    # ncol: Number of columns of plots
    # nrow: Number of rows needed, calculated from # of cols
    layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
                    ncol = cols, nrow = ceiling(numPlots/cols))
  }

 if (numPlots==1) {
    print(plots[[1]])

  } else {
    # Set up the page
    grid.newpage()
    pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))

    # Make each plot, in the correct location
    for (i in 1:numPlots) {
      # Get the i,j matrix positions of the regions that contain this subplot
      matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))

      print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
                                      layout.pos.col = matchidx$col))
    }
  }
}


# HALF MARATHON GOAL by RUNTRACKER

## Import data from CSV, no factors

HM_1 <- read.csv("HM_1.csv", stringsAsFactors = FALSE)

HM_1 <- HM_1  %>%
  na.omit()

HM_1 


## Plot 1

p1 <- ggplot(HM_1 , aes(x=Distance)) + 
  geom_density(color="Pink", fill="Pink") + labs( x ="Distance (Miles)", y = "", title = "Running Distances",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12),
    plot.caption = element_text(hjust = 1, face = "italic"), 
    axis.text.y=element_blank(),
    axis.ticks.y=element_blank(),
    panel.background = element_blank())


## Plot 2

p2 <- ggplot(HM_1, aes(x=Pace_MPH)) + 
  geom_density(color="light blue", fill="light blue") + 
  labs( x ="Speed (Miles per Hour)", y = "", title = "Running Pace",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12),
    plot.caption = element_text(hjust = 1, face = "italic"), 
    axis.text.y=element_blank(),
    axis.ticks.y=element_blank(),
    panel.background = element_blank())


## Combine plots using multi-plot function:

multiplot( p1, p2, cols=1)

## Plot 3

p3 <- ggplot(HM_1 , aes(x= Session, y= Distance)) + geom_point(color="Black") +  geom_smooth(method=lm , color="Red", se=TRUE) + labs(x ="Training Session", y = "Distance (Miles)", title = "Running Distance",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
   theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

## Plot 4

p4<- ggplot(HM_1 , aes(x=Session, y= Pace_MPH)) + geom_point(color="Black") +  geom_smooth(method=lm , color="Blue", se=TRUE) + labs( x ="Training Session", y = "Speed (Miles per Hour)", title = "Running Pace",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

## Combine plots using multi-plot function
multiplot( p3, p4, cols=1)

## Summary Statistics of Distance
favstats(HM_1$Distance)

## Summary Statistics of Pace
favstats(HM_1$Pace_MPH)



## Pearson Product Correlation of Distance over Time (session)
cor.test(HM_1$Session, HM_1$Distance, method = "pearson")

## Pearson Product Correlation of Pace over Time (session)
cor.test(HM_1$Session, HM_1$Pace_MPH, method = "pearson")


## Plot
p5 <-  HM_1 %>%
  filter(Workout != "Race") %>%
  ggplot( aes(x=Workout, y= Distance, fill=Workout)) +
  geom_boxplot() +
    scale_fill_viridis(discrete = TRUE, alpha=0.6) +
    geom_jitter(color="Black", size=0.4, alpha=0.9) + 
  labs( x ="Workout Type", y = "Distance (Miles)", title = "Comparing Distances",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
    scale_fill_brewer(palette="Reds")
  
## Plot
p6  <-  HM_1 %>%
  filter(Workout != "Race") %>%
  ggplot( aes(x=Workout, y= Pace_MPH, fill=Workout)) +
    geom_boxplot() +
    scale_fill_viridis(discrete = TRUE, alpha=0.6) +
    geom_jitter(color="Black", size=0.4, alpha=0.9) + 
  labs( x ="Workout Type", y = "Speed (Miles per Hour)", title = "Comparing Paces",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
    scale_fill_brewer(palette="Blues")

## Combine plots using multi-plot function
multiplot( p5, p6, cols=2)

## Plot 7

p7 <- ggplot(HM_1 , aes(x= Cadence, y= Distance)) + geom_point(color="Black") +  geom_smooth(method=lm , color="Red", se=TRUE) + labs(x ="Average Running Cadence", y = "Distance (Miles)", title = "Cadence by Distance",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
   theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())


## Plot 8

p8<- ggplot(HM_1 , aes(x=Cadence, y= Pace_MPH)) + geom_point(color="Black") +  geom_smooth(method=lm , color="Green", se=TRUE) + labs( x ="Average Running Cadence", y = "Speed (Miles per Hour)", title = "Cadence by Pace",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())


## Plot 9

p9 <- ggplot(HM_1 , aes(x= Avg_Heart_Rate, y= Distance)) + geom_point(color="Black") +  geom_smooth(method=lm , color="Blue", se=TRUE) + labs(x ="Average Heart Rate", y = "Distance (Miles)", title = "Heart Rate by Distance",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
   theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

## Plot 10

p10<- ggplot(HM_1 , aes(x=Avg_Heart_Rate, y= Pace_MPH)) + geom_point(color="Black") +  geom_smooth(method=lm , color="Purple", se=TRUE) + labs( x ="Average Heart Rate", y = "Speed (Miles per Hour)", title = "Heart Rate by Pace",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

## Combine plots using multi-plot function
multiplot( p7, p8, p9, p10, cols=2)

## Pivot data from wide to long for next chart

HM_1A <- gather(HM_1, Measurement, BPM, Cadence, Avg_Heart_Rate)

HM_1A

## Plot 11

p11<- ggplot(HM_1A , aes(x=Pace_MPH, y= BPM, color = Measurement)) +
     geom_point() +
     geom_smooth(method = "lm", alpha = .15, aes(fill = Measurement)) + labs(x ="Average Pace (Miles per Hour)", y = "Beats per Minute", title = "Heart Rate & Cadence by Pace",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

p11

## Plot 12

p12<- ggplot(HM_1A , aes(x=Distance, y= BPM, color = Measurement)) +
     geom_point() +
     geom_smooth(method = "lm", alpha = .15, aes(fill = Measurement)) + labs( x ="Average Distance in Miles", y = "Beats per Minute", title = "Heart Rate & Cadence by Distance",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

p12

# Combine plots using multi-plot function
multiplot( p11, p12, cols=1)



## Plot 13
p13 <- ggplot(HM_1A , aes(x = Pace_MPH, y = BPM, color = Measurement) ) +
     geom_point() +
     geom_smooth(method = "lm", alpha = .15, aes(fill = Measurement)) + labs(x ="Average Pace (Miles per Hour)", y = "Beats per Minute", title = "Heart Rate & Cadence by Pace",  subtitle = "Half Marathon Goal by Runtracker", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"))

Running Through the Data: Tempo Run: 5K by Runtracker

In the Summer of 2020, I set a really simple goal for myself: run a 5k under 30 minutes. At the time, I had just completed the couch to 5k (C25K) program and was able to complete the distance in around 32-33 minutes, but couldn’t seem to get much quicker than that and wanted to see if trying a different training plan would help. After some experimenting, I settled on the Tempo Run: 5k Plan on the Runtracker app to help me break the 30-minute mark.

Runtracker is an app made by the Fitness 22 company, featuring a series of running plans tailored to individuals’ current fitness levels and goals. Since I was a runner who could currently run the 5k distance and ran about 3 times a week, the app recommended the “Tempo Run: 5k” plan. This running plan consisted of three runs per week for a total of eight weeks, with the same structure each week. The first run of the week consisted of interval training of various lengths throughout the program, while the second  run of the week was always a tempo run of steadily increasing durations. The third and final run each week was a 35-minute base run at an easy pace. This format remained consistent over the course of all 8 weeks and was built to progressively increase both mileage and intensity throughout.

Tempo Run: 5K Training Plan, by Runtracker

My experience with this running plan was great for a variety of reasons. The most structured kind of running I had done before was the run/walk method used in Couch to 5K (C25K). Interval sessions, which included high-intensity running, easy-pace running, and walking, helped build power and figure out pacing. Tempo sessions pushed me to find the gear between interval pace and easy pace, which helped develop the habit of running the second half of my runs faster than the first (i.e. “negative splits”). The long easy sessions on the weekends helped build confidence and efficiency. By the end of the program, I had taken minutes off my 5K time and had a far better understanding of pacing, which was the biggest takeaway for me. Many of the things I do now as a runner mirror the types of workouts I was first introduced to in this app, so this data has been fun to look at a few years removed.  

Training Progress 

To get a better picture of my progress throughout the program, three primary variables came into focus: Pace, measured in miles per hour (mph); Distance, measured in miles; and Training Session, numbered 1 to 24 and completed in order. Running paces ranged from 5.09 to 6.58 mph (11:47 min/mile to 9:07 min/mile), with a mean of 5.83 mph (10:18 min/mile), while distances ran ranged from 2.4 to 5.43 miles, with a mean of 3.44 miles per run. Since there is an ordered component to these workouts (by session), progress can be visualized through scatter plots. Below are plots of running distance and pace over the course of the 24 workout sessions. Notice how the spread in the data opens up as training progresses, especially with respect to distance ran. This “fanning effect” would normally be problematic in statistics, but for running it is often a desired feature of training: 

Image by Author
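
That widening spread (heteroscedasticity, in statistics terms) can also be checked directly. Here is a minimal sketch, assuming the Faster5K data frame and tidyverse setup from the code at the end of this post: fit a simple linear model of distance on session and see whether the absolute residuals grow over time.

# Fit distance as a linear function of training session
fan_fit <- lm(Distance ~ Session, data = Faster5K)

# Plot the absolute residuals by session; an upward trend indicates "fanning"
Faster5K %>%
  mutate(Abs_Resid = abs(resid(fan_fit))) %>%
  ggplot(aes(x = Session, y = Abs_Resid)) +
  geom_point() +
  geom_smooth(method = lm, se = TRUE) +
  labs(x = "Training Session", y = "|Residual| (Miles)",
       title = "Spread of Distances Widens as Training Progresses")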

Workout Type

As I mentioned above, the biggest takeaway from the program for me was a better understanding of pacing. Interval sessions, tempo runs, and base runs require very different kinds of effort, all of which can improve performance. Interval sessions remained the most consistent with respect to running pace, but had the largest range and the highest average number of miles ran. Tempo runs and base runs remained relatively consistent in terms of mileage, with tempo runs having the widest range of paces along with the highest average running pace. These findings can be better visualized through the box plots below for both paces and distances ran:

Comparing with C25K

In my previous blog post, we went through the data from the C25K program. Since both of these training plans were focused on the same distance, I thought it would be fun to compare progress side by side on the primary variable of interest, pace. The C25K program had a range of 4.01 to 5.51 mph, with an average of 4.79 mph, while the Tempo Run: 5K program had a range of 5.09 to 6.58 mph, with an average of 5.83 mph. Given that both programs had a sequential component (i.e. “training session”), these data can also be expressed as a regression. Below are box plots of running pace distributions (left) and scatter plots of running pace throughout training (right) for both programs. Notice how the Faster 5K program is noticeably higher on average than the C25K program, while the C25K program has a more positive slope. Since the Couch to 5K program is designed to take runners from sedentary to being able to complete a 3.1-mile run, gains are naturally going to be much greater (i.e. a higher slope) in the beginning, with later improvements occurring more incrementally:

Image by Author

Final Thoughts

The Tempo Run: 5K plan on the Runtracker app is geared towards regular runners who can currently run a 5K and are interested in improving performance. The training structure consists of three runs per week: one interval session, one tempo run, and one 35-minute steady-state run. The variety of workouts in the program is designed to build both aerobic (endurance) and anaerobic (speed) capacity. For anyone who can already run a 5K but hasn’t had structured training before, this program could be an excellent introduction. 

Below are some links related to improving 5K times, along with the raw data and code used to create the charts and analysis. If you are interested in my experience with Couch to 5K, you can find that post here, and for my first half marathon, you can find that post here.

Thanks for reading! 

Resources & Code:

# FRONT MATTER

### Note: The Faster5k.xlsx file will need to be converted to Faster5k.csv to read in correctly. Also, all packages can be installed using the install.packages() function. This only needs to be done once before loading. 

# Clean up (clears out the previous environment)
rm(list = ls())

# Load Packages 
library(tidyverse)
library(wordcloud2)
library(mosaic)
library(readxl)
library(hrbrthemes)
library(viridis)

# Likert Data Packages
library(psych)
library(FSA)
library(lattice)
library(boot)
library(likert)

# install.packages("wordcloud")   # only needed once
library(wordcloud)
library(tm)


# Grid Extra for Multiplots
library("gridExtra")

# Multiple plot function (just copy paste code)

multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
  library(grid)

  # Make a list from the ... arguments and plotlist
  plots <- c(list(...), plotlist)

  numPlots = length(plots)

  # If layout is NULL, then use 'cols' to determine layout
  if (is.null(layout)) {
    # Make the panel
    # ncol: Number of columns of plots
    # nrow: Number of rows needed, calculated from # of cols
    layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
                    ncol = cols, nrow = ceiling(numPlots/cols))
  }

 if (numPlots==1) {
    print(plots[[1]])

  } else {
    # Set up the page
    grid.newpage()
    pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))

    # Make each plot, in the correct location
    for (i in 1:numPlots) {
      # Get the i,j matrix positions of the regions that contain this subplot
      matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))

      print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
                                      layout.pos.col = matchidx$col))
    }
  }
}



# FASTER 5K

# Import data from CSV, no factors

Faster5K <- read.csv("Faster5k.csv", stringsAsFactors = FALSE)

Faster5K <- Faster5K %>%
  na.omit()

Faster5K

# Plot 1 - Density Plot of Running Distances

p1 <- ggplot(Faster5K, aes(x=Distance)) + 
  geom_density(color="light blue", fill="Pink") + labs( x ="Distance (Miles)", y = "", title = "Running Distances",  subtitle = "Tempo Run: 5K Training Plan", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12),
    plot.caption = element_text(hjust = 1, face = "italic"), 
    axis.text.y=element_blank(),
    axis.ticks.y=element_blank(),
    panel.background = element_blank())

p1

# Plot 2 - Density Plot of Running Speeds

p2 <- ggplot(Faster5K, aes(x=Pace_MPH)) + 
  geom_density(color="Pink", fill="light blue") + 
  labs( x ="Speed (Miles per Hour)", y = "", title = "Running Speeds",  subtitle = "Tempo Run: 5K Training Plan", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12),
    plot.caption = element_text(hjust = 1, face = "italic"), 
    axis.text.y=element_blank(),
    axis.ticks.y=element_blank(),
    panel.background = element_blank())

p2

# Combine plots using multi-plot function:

multiplot( p1, p2, cols=1)

# Plot 3 - Scatter Plot of Running Distance over Time

p3 <- ggplot(Faster5K, aes(x= Session, y= Distance)) + geom_point(color="Purple") +  geom_smooth(method=lm , color="Green", se=TRUE) + labs(x ="Training Session", y = "Distance (Miles)", title = "Running Distance",  subtitle = "Tempo Run: 5K Training Plan", caption = "Data source: TheDataRunner.com") +
   theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

p3

# Plot 4 - Scatter Plot of Running Speed over Time

p4<- ggplot(Faster5K, aes(x=Session, y= Pace_MPH)) + geom_point(color="Green") +  geom_smooth(method=lm , color="Purple", se=TRUE) + labs( x ="Training Session", y = "Speed (Miles per Hour)", title = "Running Speed",  subtitle = "Tempo Run: 5K Training Plan", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

p4

# Combine plots using multi-plot function
multiplot( p3, p4, cols=1)

# Summary Statistics of Distance
favstats(Faster5K$Distance)

# Summary Statistics of Pace
favstats(Faster5K$Pace_MPH)

# Pearson Product Correlation of Distance over Time (session)
cor.test(Faster5K$Session, Faster5K$Distance, method = "pearson")

# Pearson Product Correlation of Pace over Time (session)
cor.test(Faster5K$Session, Faster5K$Pace_MPH, method = "pearson")


# Pearson Product Correlation of Pace over Time (session) for the C25K plan
# (requires the C25K data frame from the Couch to 5K post to be loaded)
cor.test(C25K$Session, C25K$Pace_MPH, method = "pearson")

# Simple Linear Model of Distance & Session
Distance <- lm(Distance ~ Session, data = Faster5K)
summary(Distance)

# Simple Linear Model of Pace & Session
Speed <- lm(Pace_MPH ~ Session, data = Faster5K)
summary(Speed)


# Import data from CSV, no factors

Plans_5K <- read.csv("5K_Plans.csv",  stringsAsFactors = FALSE)

Plans_5K

# Plot
p7 <- Faster5K %>%
  ggplot( aes(x=Workout, y= Distance, fill=Workout)) +
    geom_boxplot() +
    scale_fill_viridis(discrete = TRUE, alpha=0.6) +
    geom_jitter(color="Black", size=0.4, alpha=0.9) + 
  labs( x ="", y = "Distance (Miles)", title = "Distance by Workout",  subtitle = "Tempo Run: 5K Running Plan", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
    scale_fill_brewer(palette="Greens")
  

# Plot
p8 <- Faster5K %>%
  ggplot( aes(x=Workout, y= Pace_MPH, fill=Workout)) +
  geom_boxplot() +
    scale_fill_viridis(discrete = TRUE, alpha=0.6) +
    geom_jitter(color="Black", size=0.4, alpha=0.9) + 
  labs( x ="", y = "Speed (Miles per Hour)", title = "Speed by Workout",  subtitle = "Tempo Run: 5K Running Plan", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
    scale_fill_brewer(palette="Purples")


# Combine plots using multi-plot function
multiplot( p7, p8, cols=1)

# Combine plots using multi-plot function
multiplot( p7, p8, cols=2)


# Combine plots using multi-plot function
multiplot( p1, p7, cols=2)


# Combine plots using multi-plot function
multiplot( p2, p8, cols=2)
aggregate(Faster5K$Pace_MPH, list(Faster5K$Workout), FUN=mean)  # mean pace by workout type


# Summarize Mean Distance & Pace by Workout Type
Faster5K  %>%
  group_by(Workout) %>%
  summarise_at(vars(Distance, Pace_MPH), list(Average = mean))

Plans_5K  %>%
  group_by(Program) %>%
  summarise_at(vars(Distance, Pace_MPH), list(Average = mean))

# Plot
p5 <- Plans_5K %>%
  ggplot( aes(x=Program, y= Pace_MPH, fill=Program)) +
  geom_boxplot() +
    scale_fill_viridis(discrete = TRUE, alpha=0.6) +
    geom_jitter(color="Black", size=0.4, alpha=0.9) + 
  labs( x ="", y = "Speed (Miles per Hour)", title = "Comparing Paces",  subtitle = "C25K & Tempo Run: 5K Training Plans", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
    scale_fill_brewer(palette="BuPu")

p5

# Plot
p6 <- Plans_5K %>%
  ggplot( aes(x=Program, y= Distance, fill=Program)) +
    geom_boxplot() +
    scale_fill_viridis(discrete = TRUE, alpha=0.6) +
    geom_jitter(color="Black", size=0.4, alpha=0.9) + 
  labs( x ="", y = "Distance (Miles)", title = "Comparing Distances",  subtitle = "C25K & Tempo Run: 5K Training Plans", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank(),
    legend.position = "none") +
    scale_fill_brewer(palette="PRGn")

p6


multiplot( p5, p6, cols=2)

t.test(Pace_MPH ~ Program, data = Plans_5K)

t.test(Distance ~ Program, data = Plans_5K)

# Plot

p10 <- ggplot(Plans_5K, aes(x=Session, y= Pace_MPH, color = Program )) + geom_point() +  geom_smooth(method=lm , se=TRUE,aes(color=Program)) + labs( x ="Training Session", y = "Speed (Miles per Hour)", title = "Pace Through Training",  subtitle = "C25K & Tempo Run: 5K Training Plans", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 12), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank()) + 
  scale_color_manual(values=c('blue', 'orange'))+
  theme(legend.position="none")


p10


multiplot( p5, p10, cols=2)

Running Through the Data: C25K

I am not typically one for New Year’s resolutions, but in 2020 I made a really small one: keeping better track of my activity with my Apple Watch. I thought that by simply tracking it, I might improve my overall activity level. I was pretty sedentary at the beginning, but after a few weeks I saw a noticeable improvement in my activity level, and I felt better. So, I decided to see if I could raise the bar a bit more by completing the Couch to 5K program, using the C25K running app.

C25K Training App by Active

The C25K running app is based on Josh Clark’s running program, which scaffolds participants through a series of manageable expectations. The training plan includes 3 runs per week – each between 20 and 30 minutes – with the program lasting 9 weeks in total. The most noticeable feature of the training plan is that it combines both running and walking. Over the course of the 27 training runs, the proportion of walking decreases while the proportion of running increases, culminating with three 30-minute runs in the last week of the program. 

C25K Training Plan Example (Week 1, Day 1)

This was my second time using the C25K program. The first time I tried the app, I completed it, generally enjoyed it, and even ran a few 5Ks afterwards. However, I ended up getting hurt and burned out, and within a few years was definitely back to square one. This time, I made my primary goal to stay injury- and pain-free, so I focused more closely on listening to my body, slowing down, and taking rest as needed. Since I am still running today, I decided to take a look back at those training runs and share the data with anyone who is interested.

Speed, Distance, & Progress

The two most obvious variables to look at were speed and distance. The distances ran throughout this program ranged from 2.01 to 3.74 miles, with an average of 2.65 miles per run. Running speed ranged from 4.02 mph (14:55 min/mile) to 5.51 mph (10:53 min/mile), with an average of 4.79 mph (12:31 min/mile). The distributions of my runs by distance and speed for the C25K program can be seen in the density plots below:

Distances & Speeds Ran in C25K Training Plan (Fall, 2020)

Since people are generally more interested in seeing progress, below are scatter plots of distances covered and running speeds over the course of the 27 training runs. At first glance, we see a strong positive trend in both training volume (mileage) and intensity (speed) over the duration of the training program. When you take a closer look at both scatter plots, you can see clear cycles ebbing and flowing along the positive slopes. Most training plans are designed to take on this kind of shape, so neither of these results is surprising:

Running Progress (Distance & Speed), Fall 2020

Looking back at the data two years removed, a number of interesting things stand out to me. The first is how tightly packed, and predictable, the data is. Both speed and distance remain very similar in adjacent runs. This is how the program is designed, and it makes complete sense when developing a fitness base. However, most of the training I do now is very different from that: speed and distance vary widely from run to run, to allow for different kinds of stress and recovery. The second notable finding was how strong the slope was for both variables. When first starting out, the good news is you are probably going to improve very quickly – although it may not feel like it at the time. The longer you run, the more the rate of improvement slows. Most of my work now as a runner is built on slow, gradual gains, so improvement of this magnitude over such a short period would put me at risk of injury today. The key difference for me now is that I can run much longer distances and have a much higher top speed, but the rate of progress is far less noticeable.
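
To put rough numbers on those slopes – a quick sketch, assuming a C25K data frame with Session, Distance, and Pace_MPH columns built from the data linked in the Resources & Code section below – simple linear models estimate the average gain per training session:

# Average change in pace (mph) per training session
c25k_pace_fit <- lm(Pace_MPH ~ Session, data = C25K)
summary(c25k_pace_fit)

# Average change in distance (miles) per training session
c25k_dist_fit <- lm(Distance ~ Session, data = C25K)
summary(c25k_dist_fit)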

Final Thoughts

With millions of downloads, the C25K app has consistently been one of the most popular training apps for new runners, and for good reason. Based on a series of running plans developed by Josh Clark in the '90s, the C25K training plans are structured to build runners up slowly, using a run / walk method. Whenever I talk with people who are interested in starting a running routine, one of the first things I recommend is that they get this app, primarily because it employs the run / walk method. Many people think running should not include walking, or that walking is cheating or a sign of weakness. Objectively, it is not. The longer you run, the more important it is to find your ideal pace in order to keep your heart rate down and your breathing under control. The run / walk method accomplishes this by slowly increasing the proportion of running to walking over time. Also, you would be surprised by how fast and how far some people who use the run / walk method can go.

A couple of words of caution about the program, though. First and foremost, no one training app is going to fit everyone. Depending on your current level of fitness and a variety of other factors, the training program may take longer than 9 weeks. One of the most consistent pieces of advice you will find on the C25K program is that you should not be afraid to repeat runs, repeat weeks, or add extra rest if your body needs it. I couldn't agree more. There were a few times when the increase in running volume felt like a lot (week 5, for example), so don't be scared to slow down or add some extra rest. Definitely don't skip ahead or run back-to-back days. The app is built so that you will get faster and run further as you progress through the program. That's baked in, but none of it will matter if you get hurt. Increasing speed or volume too quickly is the fastest way to injury, but if you listen to your body and aren't afraid to slow down (i.e. walk more), then C25K should work great.

Below are some links to C25K reviews, along with the raw data and code used to create the charts and analysis. For my next post, I plan to break down the data for the Faster 5k Training Plan that I used to shave a few minutes off my 5k time by introducing speed work.

Thanks for reading!  

Resources & Code:

C25K Running Data can be found here. The code I used (in R) to create plots and analysis is below:

# FRONT MATTER

### Note: The C25K.xlsx file will need to be converted to C25K.csv. Also, all packages can be downloaded using the install.packages() function. This only needs to be done once before loading.

## Load Packages 
library(tidyverse)
library(wordcloud2)
library(mosaic)
library(readxl)

## Grid Extra for Multiplots
library("gridExtra")

## Multiple plot function
multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
  library(grid)

  # Make a list from the ... arguments and plotlist
  plots <- c(list(...), plotlist)

  numPlots = length(plots)

  # If layout is NULL, then use 'cols' to determine layout
  if (is.null(layout)) {
    # Make the panel
    # ncol: Number of columns of plots
    # nrow: Number of rows needed, calculated from # of cols
    layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
                    ncol = cols, nrow = ceiling(numPlots/cols))
  }

 if (numPlots==1) {
    print(plots[[1]])

  } else {
    # Set up the page
    grid.newpage()
    pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))

    # Make each plot, in the correct location
    for (i in 1:numPlots) {
      # Get the i,j matrix positions of the regions that contain this subplot
      matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))

      print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
                                      layout.pos.col = matchidx$col))
    }
  }
}

# COUCH TO 5K

## Import data from CSV, no factors

C25K <- read.csv("C25K.csv",  stringsAsFactors = FALSE)

C25K

## Plot 1 - Density Plot of Running Distances

p1 <- ggplot(C25K, aes(x=Distance)) + 
  geom_density(color="Green", fill="Purple") + labs( x ="Distance (Miles)", y = "", title = "Distribution of Running Distances",  subtitle = "Couch to 5K Training Plan", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 14),
    plot.caption = element_text(hjust = 1, face = "italic"), 
    axis.text.y=element_blank(),
    axis.ticks.y=element_blank(),
    panel.background = element_blank())

## Plot 2 - Density Plot of Running Speeds

p2 <- ggplot(C25K, aes(x=Pace_MPH)) + 
  geom_density(color="Purple", fill="Green") + 
  labs( x ="Speed (Miles per Hour)", y = "", title = "Distribution of Running Speeds",  subtitle = "Couch to 5K Training Plan", caption = "Data source: TheDataRunner.com") +
  theme(plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 14),
    plot.caption = element_text(hjust = 1, face = "italic"), 
    axis.text.y=element_blank(),
    axis.ticks.y=element_blank(),
    panel.background = element_blank())

## Combine plots using multi-plot function:

multiplot( p1, p2, cols=1)

## Plot 3 - Scatter Plot of Running Distance over Time

p3 <- ggplot(C25K, aes(x=Session, y= Distance)) + geom_point(color="blue") +  geom_smooth(method=lm , color="red", se=TRUE) + labs(x ="Training Session", y = "Distance (Miles)", title = "Progression of Running Distance",  subtitle = "Couch to 5K Training Plan", caption = "Data source: TheDataRunner.com") +
   theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 14), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

## Plot 4 - Scatter Plot of Running Speed over Time

p4<- ggplot(C25K, aes(x=Session, y= Pace_MPH)) + geom_point(color="red") +  geom_smooth(method=lm , color="blue", se=TRUE) + labs( x ="Training Session", y = "Speed (Miles per Hour)", title = "Progression of Running Speed",  subtitle = "Couch to 5K Training Plan", caption = "Data source: TheDataRunner.com") +
  theme(
    plot.title = element_text(hjust = 0.5, size = 20, face = "bold"),
    plot.subtitle = element_text(hjust = 0.5, size = 14), 
    plot.caption = element_text(hjust = 1, face = "italic"),
    panel.background = element_blank())

## Combine plots using multi-plot function
multiplot( p3, p4, cols=1)

## Summary Statistics of Distance
favstats(C25K$Distance)

# Summary Statistics of Pace
favstats(C25K$Pace_MPH)

# Pearson Product Correlation of Distance over Time (session)
cor.test(C25K$Session, C25K$Distance, method = "pearson")

# Pearson Product Correlation of Pace over Time (session)
cor.test(C25K$Session, C25K$Pace_MPH, method = "pearson")

# Simple Linear Model of Distance & Session
Distance <- lm(Distance ~ Session, data =C25K)
summary(Distance)

# Simple Linear Model of Pace & Session
Speed <- lm(Pace_MPH ~ Session, data =C25K)
summary(Speed)

The Problem With Polling

I have gotten a lot of questions about political polls lately and I have found myself having the same conversation over and over about the reliability of polling in general. Those conversations have centered around the concept of Total Survey Error (TSE), or all the different ways that a survey can go wrong. So, I thought I would take a break from what I should be doing and write about the five basic forms of TSE.

Tallying the results of the presidential election on Nov. 2, 1948.
PHOTO: CBS PHOTO ARCHIVE/GETTY IMAGES

Coverage Error – This form of error occurs when your sampling frame (i.e. your list of people to potentially poll) does not accurately represent the population you are measuring. For example, if you want to conduct a poll by phone and your list only includes landlines, then you are leaving out everyone who does not have a landline. "Dewey Defeats Truman" is an example of this kind of TSE. Pollsters that year (1948) only surveyed people with telephones, who were typically far wealthier and more likely to vote for Dewey than for Truman, the eventual winner. This coverage issue also led to non-response bias (see below).

Specification Error – This error occurs when what is being measured isn't clearly defined. Typically, this is reserved for psychological constructs, which are oftentimes multidimensional. A political example of this would be ideology. We know that most people's political beliefs lie along a spectrum, and those beliefs may be nuanced and context-dependent. The Pew Research Center has an excellent example of measuring ideology as a construct. Fortunately, there is an easy way around this for political polls: ask respondents specifically which candidate(s) they are voting for.

Response Error – This form of bias has to do with who responded to the poll and, relatedly, who didn't. This can be unit nonresponse (i.e. someone refuses to participate) or item nonresponse (i.e. someone refuses to answer a specific question). Again using the phone poll example: if you had a list of all numbers (cell phones and landlines) to call for your poll, people with caller ID are less likely to pick up, and almost all cell phones have caller ID built in. This means that people with landlines – who are typically older – are more likely to answer; younger people, less so.

Measurement Error – This form of error is probably the most well studied in the world of survey methodology because it has so many parts to it. The order of the questions, the tone of the interviewer's voice or their appearance, and the wording of the questions themselves may unintentionally cause someone to answer a certain way. For example, I have seen many projections based solely on party identification, which does not account for people who plan on voting for one party in every race except one (i.e. "ticket splitters"). I imagine there will be a large number of people who cast their votes for all but one member of their preferred party this election. If you want to see an example of how not to predict an outcome, I humbly submit this one as an example of both specification error and measurement error.

Processing Error – Processing error covers all the ways that things can go wrong with the data AFTER it is collected. Some forms of this occur in encoding, editing, and weighting. The weighting piece is especially tricky, because it adjusts results based on known population parameters. For example, if 80% of a poll's respondents were female, we would need to adjust the weights of the male respondents to account for the fact that the population parameter is known to be roughly 50%. Now imagine that we are also accounting for race, income, education level, and age; you can see that things get complicated in a hurry. One strategy to handle this is an iterative approach known as "raking."
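
To make the weighting idea concrete, here is a toy sketch in R of a single post-stratification adjustment on gender, using the hypothetical 80/20 split above. The data are made up, and this is not the full raking procedure, which iterates this kind of adjustment across several variables at once (in practice, packages like survey implement raking proper):

# Hypothetical poll: 80% of respondents are female, but the population is ~50/50
set.seed(42)
poll <- data.frame(gender    = c(rep("F", 800), rep("M", 200)),
                   candidate = sample(c("A", "B"), 1000, replace = TRUE))

sample_share <- prop.table(table(poll$gender))   # F = 0.80, M = 0.20
pop_share    <- c(F = 0.5, M = 0.5)              # known population parameter

# Post-stratification weight for each respondent: population share / sample share
poll$weight <- as.numeric(pop_share[poll$gender] / sample_share[poll$gender])
unique(poll$weight)   # females weighted down to 0.625, males weighted up to 2.5

# Weighted vs. unweighted support for candidate "A"
weighted_A   <- sum(poll$weight[poll$candidate == "A"]) / sum(poll$weight)
unweighted_A <- mean(poll$candidate == "A")
c(weighted = weighted_A, unweighted = unweighted_A)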

Supporters of presidential candidate Hillary Clinton watch televised coverage of the U.S. presidential election at Comet Tavern in the Capitol Hill neighborhood of Seattle on Nov. 8. (Photo by Jason Redmond/AFP/Getty Images)

So, what does all this mean? There are lots of ways things can go wrong, and good surveys are incredibly expensive. They take time to construct and a shocking amount of money and manpower to collect. Also, many political polls are commissioned to drive media viewership, which means they are often more concerned with expediency than accuracy. That right there should be enough to give you pause. The 2016 election gave polling – and to a certain extent, statistics – a bad name. However, many people don't realize that the national polls (i.e. the popular vote) were right on the money. The popular vote is one model. The Electoral College tally is 51 models (all 50 states plus DC), which may take different strategies for collecting and analyzing, depending on the state. That leaves lots of room for mistakes. If we want to predict who will likely win the popular vote, the statistical evidence that Biden will win it is pretty solid. Does that mean it is a certainty? Objectively, no. Of course, the election is decided by the Electoral College, which again is 51 separate models. Some of those states are pretty clear. Others, not so much.

“A margin of error of plus or minus 3 percentage points at the 95 percent confidence level means that if we fielded the same survey 100 times, we would expect the result to be within 3 percentage points of the true population value 95 of those times.”

5 key things to know about the margin of error in election polls
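
The arithmetic behind that quote is straightforward. Here is a minimal R sketch of the usual margin-of-error formula for a proportion, which assumes simple random sampling (something real polls only approximate):

# Margin of error for a proportion at 95% confidence: MOE = z * sqrt(p * (1 - p) / n)
moe <- function(n, p = 0.5, z = 1.96) z * sqrt(p * (1 - p) / n)

moe(1000)   # ~0.031, i.e. roughly +/- 3 percentage points
moe(500)    # ~0.044 -- smaller samples widen the interval quickly

# Sample size needed for a +/- 3 point margin at p = 0.5
ceiling((1.96^2 * 0.25) / 0.03^2)   # about 1,068 respondents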

Finally, it appears that we are headed for record levels of turnout due in part to enthusiasm, mail-in voting, COVID-19, etc. The unprecedented nature of these factors only makes polling more fraught with potential error. I would encourage anyone following the polls closely to lower their expectations considerably. That doesn't mean the polls are wrong, but they should be viewed with a healthy amount of circumspection. With that being said, if you are like me and cannot help yourself, look at Nate Cohn's and Nate Silver's work. It is typically the most robust and transparent, and not surprisingly, theirs are often the most accurate predictions.

Tl;dr – Ignore the polls. We won’t really know much of anything until we see actual vote totals being counted. The rest is just theater.

EDA: Open & Closed Data

Introduction

Each summer, nearly 8,000 incoming students attend New Student Orientation (NSO) at Penn State's University Park campus. At the conclusion of NSO, each student is sent a survey to gather their perspectives on various aspects of their experiences at their respective sessions. Questions range from their experiences at check-in to their understanding of student services and various initiatives, such as Penn State's commitment to diversity and inclusion. These data provide an opportunity to assess which aspects of NSO warrant further exploration.

Cleaning & Inspecting the Data

Survey results were provided by the office of New Student Orientation for the sessions occurring in the summers of 2017, 2018, & 2019. Each of these databases was inspected to find which variables were consistent across all three spreadsheets. Variables that were not consistent across each survey were discarded, and the remaining variables were combined into one master database, coded by their respective years. The variables that were consistent across all three years were:

  • Leader Connection – The extent to which a meaningful connection was made with their Orientation Leader during NSO.
  • Substances – The extent to which their understanding changed related to the consequences of alcohol and drug use and abuse during NSO.
  • Assault Resources – The extent to which their understanding changed as a result of attending NSO related to reporting and support services Penn State provides for victims of sexual harassment and sexual assault.
  • Bystander Prevention – The extent to which their understanding changed as a result of attending NSO related to how to handle dangerous situations.
  • Health Resources – The extent to which their understanding changed as a result of attending NSO related to support services Penn State provides for mental health, physical health, and personal well-being.
  • Safety Resources – The extent to which their understanding changed as a result of attending NSO related to support services Penn State provides to help keep me safe.
  • Diversity / Inclusion – The extent to which their understanding changed related to the importance of diversity and inclusion on our campus.
  • Definition of Consent – An open-ended survey question asking participants to define the term “consent.”

The final step in cleaning the data was to remove any personally identifiable information and missing values. Basic demographic information, such as race, gender, sexual orientation, resident status, and matriculation date, was retained but not utilized for this analysis. Any observations that broke off from the survey prior to answering the 8 variables of interest were discarded as well.

Exploratory Data Analysis

Once the data were gathered and cleaned, an exploratory data analysis was conducted to examine patterns in the data. Survey questions that utilized a Likert scale were compared against one another, revealing similar, right-skewed distributions on each of the factors, with the exception of leader connection, which was more widely distributed among Likert responses (figure 3.1). The leader connection variable, when examined in a bar chart grouped by year (figure 3.2), showed the widest variety of distributions in comparison to the remaining variables visualized in the same way (figure 3.3). Since Likert scale data is ordinal in nature, a Kruskal-Wallis test was conducted on each variable to examine differences by year, followed by a post-hoc analysis using the Dunn-Bonferroni correction to reveal where differences may occur. Each variable showed an upward trend in Likert responses over time, with statistically significant differences (α < .05) in each variable over time, with the exception of the variable measuring the importance of diversity and inclusion at Penn State.
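
For anyone who wants to run this kind of test themselves, here is a minimal R sketch. The data frame (nso) and column names (leader_connection, year) are placeholders, and base R's pairwise Wilcoxon tests with a Bonferroni adjustment stand in for the Dunn-Bonferroni procedure used above (packages such as dunn.test or FSA implement Dunn's test directly):

# Kruskal-Wallis test: do Likert responses differ by NSO year?
kruskal.test(leader_connection ~ factor(year), data = nso)

# Post-hoc pairwise comparisons with a Bonferroni adjustment
pairwise.wilcox.test(nso$leader_connection, factor(nso$year),
                     p.adjust.method = "bonferroni")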

The open-ended survey question asking for a definition of consent revealed interesting results across the three surveys. In 2017 (figure 3.4) & 2018 (figure 3.5), the top two words used to define consent were "consent" and "yes," respectively. However, in 2019 (figure 3.6), the top two words were "given" and "freely." The 2019 version of the Results Will Vary interactive play, which all NSO students see, featured a scene related to consent. In this scene, the acronym F.R.I.E.S. is used to represent that consent must be freely given, reversible, informed, enthusiastic, & specific. Each of these words, along with the acronym itself, appears in the top 10 words for the 2019 survey, while none of them were found in the top 10 of the previous two NSO years.

Likert Data Comparison (figure 3.1)

Leader Connection (FIGURE 3.2)

Protocols, Services, & Resources (FIGURE 3.3)

2017 Open Ended Data (FIGURE 3.4)

Frequency chart of top 20 Words (FIGURE 3.5)

2018 Open Ended Data (FIGURE 3.6)

Frequency chart of top 20 Words (FIGURE 3.7)

2019 Open Ended Data (FIGURE 3.8)

Frequency chart of top 20 Words (FIGURE 3.9)

Conclusion / Suggestions for the Future

While it is difficult to make inferences from observational data, we can see some trends toward greater understanding among students who attend NSO at Penn State's University Park campus. These increases in understanding could be due to a variety of factors, including changes in the population of interest (e.g. incoming students) or changes within the NSO experience itself. The open-ended survey data defining consent showed the clearest picture of the differences between years, with the 2019 data pointing clearly toward connections made with a scene dedicated to consent in the Results Will Vary interactive play.

To gain a greater understanding of students' perceptions of New Student Orientation, a variety of opportunities exist. A clear definition of what kinds of insights you would like to gain from students regarding NSO should inform question formation. For example, an argument could be made that leader connection is a multi-dimensional construct that cannot be measured accurately with one question. Notably, leader connection was the variable with both the lowest score and the widest distribution of responses among participants, so further investigation is warranted. Finally, the use of open-ended survey responses could provide a wealth of feedback on specific initiatives, particularly if they are formed in conjunction with specific experiences during NSO. For example, the F.R.I.E.S. scene from the Results Will Vary interactive play demonstrated clear connections to changes in the understanding of consent. Future new student orientations could benefit from exploring these connections in other topics covered within Results Will Vary to measure both the effectiveness of the play and the perceptions of students.

Survey Data Analysis (The Hard Way)

Introduction

In the summer of 2019, Penn State held 38 New Student Orientation (NSO) sessions and 3 International Student Orientation (ISO) sessions, during which all incoming freshmen watched an interactive play titled "Results Will Vary." The play touches on a variety of topics related to the college experience and typical pitfalls for students in their first year. At the conclusion of each night of the play, incoming students filled out a card with questions they had about the play and/or the college experience.

The "Results Will Vary" database consists of 7,588 handwritten responses that incoming freshmen provided after being given the following prompt: "What lingering questions do you have regarding the show?" The 2019 freshman class at University Park was approximately 8,000 students, providing a coverage rate close to 100% and a response rate approaching 95%. With this information, we have the opportunity to gain perspectives on the effectiveness of the play and the general concerns of incoming students. After reading, counting, and numbering these cards, some themes began to emerge. Questions about consent, the consequences of underage drinking, alcohol, drugs, roommate issues, and various campus services rose to the top.

Sampling Procedure

Due to the size of the dataset and available resources, the decision was made to draw a sample from the full dataset (N = 7,588). Since we knew the size of the population, the desired sample size was calculated using the finite population correction.

Since this analysis is exploratory in nature, the population proportion is unknown, so we chose the most conservative estimate of .5, resulting in a desired sample of 366 cards.

The sample itself was drawn across all observations using a random number generator; a sketch of the calculation and sampling is shown below. A sample of this size provides us with a ± 5% margin of error at the .05 level of significance, and all analysis was conducted strictly on the sampled data.
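
For readers who want to reproduce the sample size, here is a minimal R sketch of the finite population correction described above (the object names and seed are placeholders, not the ones actually used):

# Desired sample size using the finite population correction (FPC)
# z = 1.96 (95% confidence), e = .05 margin of error, p = .5 (most conservative)
N <- 7588
z <- 1.96
e <- 0.05
p <- 0.5

n0 <- (z^2 * p * (1 - p)) / e^2      # ~384 for an effectively infinite population
n  <- n0 / (1 + (n0 - 1) / N)        # FPC shrinks this to ~366
ceiling(n)                           # 366 cards

# Draw the sample with a random number generator
set.seed(2019)                                 # placeholder seed for reproducibility
sampled_ids <- sample(1:N, size = ceiling(n))  # card numbers to transcribe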

Cleaning & Inspecting the Data

Once the cards were sampled, the text from each one was entered into Excel verbatim and open coded once again. Using RStudio, the text was cleaned by converting it to lowercase and removing punctuation. In text analysis, it is standard practice to remove extremely common words such as "if", "and", "but", and "the", as they have little to no value in determining key terms in the vocabulary. These words are called "stop words."

Zipf’s law, named after linguist and mathematician George Zipf, states that given a large sample of words, the frequency of any word is inversely proportional to its rank in the frequency table. This creates a long-tailed distribution as the number of words approaches infinity, with the most frequent word occurring approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc. By removing “stop words”, we trim the bulk of the data along the Zipfian distribution and gain a greater understanding of what the central themes are in the data. Within this database, we removed standard English stop words in addition to a custom list of stop words to uncover the central themes within the data. Those words were “can”, “people”, “get”, “penn”, “state”, “student”, “thing”, “what”, “what’s”, “just”, “one”, “know”, “like”, “students”, and “campus”.

Once “stop words” were removed, the result was 792 unique words, represented in a word cloud below:
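
A sketch of how this cleaning, stop word removal, and word cloud can be reproduced in R is below. The data frame (cards) and text column (response) are placeholders, and the tidytext package is an assumption on my part for the tokenizing step:

# Tokenize the responses, drop standard and custom stop words, and count words
library(tidyverse)
library(tidytext)     # unnest_tokens() and the stop_words lexicon
library(wordcloud2)

custom_stops <- tibble(word = c("can", "people", "get", "penn", "state", "student",
                                "thing", "what", "what's", "just", "one", "know",
                                "like", "students", "campus"))

word_counts <- cards %>%
  unnest_tokens(word, response) %>%        # lowercase, strip punctuation, tokenize
  anti_join(stop_words, by = "word") %>%   # remove standard English stop words
  anti_join(custom_stops, by = "word") %>% # remove the custom list above
  count(word, sort = TRUE)

# Word cloud of the remaining terms
word_counts %>% rename(freq = n) %>% wordcloud2()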

Upon further inspection, we see that the most frequent words are associated with alcohol, safety, health services, consent, and the consequences of underage drinking. These can be seen in the frequency plot below:

Developing Themes

After inspecting term frequency, open codes were revisited, resulting in 42 unique codes, with some observations requiring multiple codes. For example, one survey respondent wrote:

“I want to get involved in LGBT clubs / activities, but I am not out to my parents and they aren’t very accepting. Will they find out I am in those organizations (Through the internet or something)? Like, can they see what clubs I join?”

This example fell under the following four codes:

  1. Student resources
  2. FERPA / HIPPA / Privacy
  3. LGBTQ Community, concerns, resources
  4. Clubs

In addition, there were some responses that didn't fit within any themes, which were simply labeled "miscellaneous / unrelated." Examples of this include "What is your favorite color?" and "Who's got the best gas on campus?"

Questions regarding alcohol, underage drinking, Responsible Action Protocol (RAP), and the consequences of alcohol/underage drinking occupied the largest portion of the data. In addition, questions of campus safety, student resources, residence halls, and consent emerged as central themes in the data. Feedback on the show was generally complimentary, with some questions regarding the intended meaning of scenes. For example, some participants cited the “ASMR scene” and the “Rollercoaster Scene” as areas of confusion. The distribution of the codes can be found in the frequency plot of all themes:

Sentiment & Emotional Analysis

Sentiment analysis refers to the use of natural language processing, text analysis, and computational linguistics to systematically identify, extract, quantify, and study affective states and subjective information. It is widely applied to voice-of-the-customer materials, such as reviews of products and services, as well as open-ended survey data. With RStudio, we have access to open source packages that use natural language processing and text analysis to examine the sentiment and emotional content of each observation. Using the sentimentr package, the text was analyzed by comparing the data against known words that are associated with positive, negative, or neutral sentiments. Each sentence is extracted and scored from -1 to 1, with any sentence scoring above .3 considered positive and any sentence below -.3 considered negative. The remaining sentences – roughly 60% of the data – are considered neutral. This data was overwhelmingly neutral, with a mean of .031, a median of 0, and a standard deviation of .243. When investigating sentiment by participant, organized by time, we see patterns in the data suggesting that the sentiment of the audience changed at times throughout the run of the show:
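
A rough sketch of how this scoring can be reproduced with the sentimentr package is below (the data frame and column names are placeholders, and the cut points mirror the ±.3 thresholds described above):

# Sentence-level sentiment scores for the transcribed responses
library(sentimentr)
scores <- sentiment(get_sentences(cards$response))

# Classify each sentence using the +/- .3 cut points
scores$label <- cut(scores$sentiment,
                    breaks = c(-Inf, -0.3, 0.3, Inf),
                    labels = c("negative", "neutral", "positive"))

table(scores$label)      # most sentences land in the neutral band
mean(scores$sentiment)   # overall mean sentiment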

Finally, we conducted an emotional analysis, once again using the sentimentr package in RStudio. This analysis is conducted by comparing known words associated with various emotions (anger, disgust, fear, joy, sadness, surprise, anticipation, trust, etc.) against the data set, classifying each sentence within an emotional category. Trust (n = 153) was the most common emotion, with more than twice the number of occurrences of the second most common emotion, joy (n = 71). Fear (n = 59), anticipation (n = 57), sadness (n = 38), surprise (n = 20), anger (n = 15), & disgust (n = 13) rounded out the remaining emotions within the sampled data. These results are displayed in the frequency chart and pie chart below:
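
The emotional breakdown above came from sentimentr; as a simpler stand-in, here is a sketch using the syuzhet package, which tags text against the same NRC emotion categories (again, the data frame and column names are placeholders):

# Tag each response against the NRC emotion lexicon
library(syuzhet)
nrc <- get_nrc_sentiment(cards$response)

# Total occurrences of each emotion (dropping the overall positive/negative columns)
emotion_totals <- sort(colSums(nrc[, 1:8]), decreasing = TRUE)
barplot(emotion_totals, las = 2, main = "Emotion counts in sampled responses")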

Conclusion / Suggestions for the Future

The 2019 interactive theater experience, "Results Will Vary," discussed some of the common pitfalls of first-year students, including sex, consent, alcohol, drug use, and peer pressure. Following each performance of the show, the incoming freshmen who viewed the production were asked: "What lingering questions do you have about the show?" These questions were written on index cards and collected at the end of each orientation session, resulting in 7,588 responses. Survey cards (N = 7,588) were then read, numbered, sampled (n = 366), transcribed, and analyzed. The results indicated that the top questions from students were related to the consequences of alcohol and underage drinking. In addition, questions regarding campus safety, student resources, residence halls, and consent emerged as central themes from the audience members. With the sentimentr package in RStudio, the text was analyzed for overall sentiment and emotional content, which suggested that students enjoyed the show, with the most common emotional response being "trust," followed by "joy." This production, which is written and performed by current Penn State students, provides an interesting model for engaging in difficult conversations.

As the Office of New Student Orientation and Penn State's School of Theatre begin plans for the future, a survey instrument that includes both open- and closed-ended questions may provide a better window into student perceptions and understanding. In addition, an online questionnaire that can store and share results quickly through mobile devices should be considered. The inclusion of an online questionnaire, if properly executed, should also allow for more granularity within the data and increased speed of data collection and analysis.

2017 US News Rankings (Part 2)

The U.S. News and World Report has collected, compiled, and published a list of the top colleges and universities around the country. This report is based on annual surveys sent to each school, as well as general opinion surveys of university faculty and administrators who do not belong to the schools on the list. These rankings are among the most widely quoted of their kind in the United States and have played an important role for students making their college decisions. However, other factors may prove meaningful when making these decisions. The data may indicate an interaction between some of the explanatory variables, such as tuition, cost of living, enrollment, and rank, warranting further investigation:

Heat Plot (2017 US News & World Report School Rankings)

Data Description:

The data consist of 222 observations, with 8 variables describing the 2017 edition of the US News University Rankings, as well as the cost of living and population by state based on US Census Bureau predictions for 2017. The databases for this analysis are available on data.world and the US Census Bureau website.

Building the Model:

To determine the model, both stepwise selection and best subsets regression were used to identify the best fit. Before stepwise regression, the full model was evaluated:

According to the summary of the full model, the adjusted R-squared is 0.6588, indicating that the full model explains roughly 66% of the variance in the response variable. Since the p-value is below .001 (< 2.2e-16), this association does not appear to have occurred by chance. Based on the results of the ANOVA of the full model, we can expect that several explanatory variables, including tuition and enrollment, could be significant predictors when determining the best model fit.

Regression Assumptions of the Final Model:

The next step is to evaluate the regression assumptions, which are listed below:

  • Linear: Mean ranking at each set of the explanatory variables is a linear function of the explanatory variables.
  • Independent: Observations in the data set do not depend on one another.
  • Normal: Ranking at each set of the explanatory variables is normally distributed.
  • Equal Variance: Ranking at each set of the explanatory variables has equal variance (i.e. homoscedastic).

To analyze linearity and equal variance, a residual vs. fitted value plot is used. To evaluate normality, a normal Q-Q plot is generated:

According to the residual vs. fitted value plot, we see no clear pattern in the data and conclude that the equal variance assumption has been met. According to the Q-Q plot, we can see some deviation at the tails of the distribution, but it appears that the normality assumption has been met.

The Cook's distance plot indicates three potential outliers influencing the line of best fit. Surprisingly, BYU was not one of the outliers exerting the most leverage on the model. Instead, those were:

  • University of Central Florida (#51) – Rank: 176; Tuition: 22467; Enrollment: 54513.
  • University of Hawaii at Manoa (#56) – Rank: 169; Tuition: 33764; Enrollment: 13689.
  • SUNY College of Environmental Science and Forestry (#141) – Rank: 99; Tuition: 17620; Enrollment: 1839.

Based on these findings, the full model should suffice for concluding that there is a meaningful relationship between tuition, enrollment, and university ranking. However, further exploration is needed to determine whether or not this is the best model to explain the potential relationship between the explanatory variables and the response variable.

Model Development

Following both the stepwise and best subsets regression, we see that tuition, enrollment, and population are recommended as predictors in the regression model:

When comparing the reduced model to the full model through an F-test, we see that there is not a significant difference (p-value: .77) between the two models:

The stepwise regression indicates that the model with tuition, enrollment, and population has an AIC of 1628.18, while another model that includes cost of living has an AIC of 1630. A model that accounts for the interaction between population and cost of living is worth exploring. However, after testing for multicollinearity using Variance Inflation Factors, we see significant evidence of multicollinearity between population and cost of living:

Finally, a Variance Inflation Factor (VIF) test was conducted on the reduced model, which found no evidence of multicollinearity, suggesting that the reduced model is a better fit. Once the VIF test was conducted, the assumptions were checked again, finding similar results to the full model, with all conditions being met. The model was then cross-validated using k-fold cross-validation (a sketch of this step is included with the R code chunks below).

Summary & Conclusions:

After the initial exploratory data analysis (EDA), found here, a number of patterns emerged. Clearly, the cost of tuition is strongly associated with ranking in the US News & World Report. What was not clear were the effects of the other variables (enrollment, region, & cost of living) on the response variable. While enrollment is weakly associated with ranking, it is moderately associated with tuition and cost of living. After initially analyzing the full model, we find statistical evidence of a relationship between tuition and enrollment and university ranking. Best subsets and stepwise regression both suggested a model that included tuition, enrollment, and population as the predictor variables. Comparing this reduced model against the full model did not yield a significant difference between the two, suggesting that the smaller model would suffice. An additional model (model 3) was investigated to include the interaction of cost of living and population, but it showed significant evidence of multicollinearity between those two variables. Multicollinearity was then examined in the reduced model (model 2) using a VIF test, finding no evidence of multicollinearity within that model. The assumptions of linearity, normality, and equal variance were satisfied after examining a plot of residuals vs. fitted values as well as a Q-Q plot of residuals. With a sample of 221 observations and a p-value of less than .001, we have statistical evidence to suggest that tuition, enrollment, and population are significant predictors of performance in the US News and World Report's Best College Rankings. The final model is summarized below:

Limitations

As previously stated, cost of living data was available by state rather than by the city or county where the university is located. So, a university that is located in a community with a high cost of living may be in a state with an overall low COL index score, and vice versa. This reduces some of the precision in our predictions. In addition, this list consists of the 231 schools that opted to participate in the US News & World Report ranking. According to US News, there are over 4,000 colleges and universities in the United States. This raises the concern of non-response bias and limits generalizability beyond the scope of the participating institutions in the US News Rankings.

One example of this is the University of Minnesota, which chose not to participate in the US News Best College Rankings. Minnesota’s in-state undergraduate tuition and fees are $14,142. The enrollment is 19,819, and the state population is 5,568,155 (in 2017):

This results in a 95% prediction interval of 111 to 266 for the University of Minnesota's US News Best College Ranking. However, when we compare this to their US News 2019 ranking, we see that UMN is ranked #76 (tied with Virginia Tech). This suggests that the model is not an accurate predictor of any individual school's ranking, but rather serves as an illustration of the overall national relationships between tuition, enrollment, and population with regard to university ranking in this report.

R Code Chunks:

# Load required packages (assumed installed): mosaic, corrplot, leaps, and car
library(mosaic)      # mplot(), msummary()
library(corrplot)    # corrplot()
library(leaps)       # regsubsets()
library(car)         # vif()

# Heat Map Correlation Plot:
heat <- cor(rankingsreduced)
corrplot(heat, type = "upper", order = "hclust", 
         tl.col = "black", tl.srt = 45)

# Full Model with all variables:
fullmodel <- lm(Rank ~ Tuition + Enrollment + Region + CostOfLiving + Population, data = rankings ) 
 
# Model Summary (Full Model):
summary(fullmodel)

# ANOVA Table (Full Model):
anova(fullmodel)

# Residuals vs. Fitted:
mplot(fullmodel, which =1)

# QQ Plot:
mplot(fullmodel, which =2)

# Cook's Distance:
mplot(fullmodel, which =4)

#Stepwise Regression:
step(fullmodel, direction="both")

# Best subsets:
BestSubsets <- regsubsets(Rank ~ Tuition + Enrollment + Region + CostOfLiving + Population, data = rankings, method = "exhaustive", nbest = 2)  
Result <- summary(BestSubsets)

# Append fit statistics to include R^2, adj R^2, Mallows' Cp, BIC:
data.frame(Result$outmat, Result$rsq, Result$adjr2, Result$cp, Result$bic) 

# Model #2 (Based on Stepwise & Best Fits):
mod2 <- lm(Rank ~ Tuition + Enrollment + Population, data = rankings)
  
# Model Summary (Reduced Model):
msummary(mod2)

# Model Comparison between Full & Reduced Model (F-test):
anova(mod2, fullmodel)

# Model with Interaction of Population & Cost of Living:
mod3 <- lm(Rank ~ Tuition + Enrollment + Population + CostOfLiving + Population:CostOfLiving, data = rankings)  

# Model Summary (Model 3):
msummary(mod3)

# Variance inflation factor (Model 3):
VIFtest1 <- lm(formula = Rank ~ Tuition + Enrollment + Population + CostOfLiving + Population:CostOfLiving, data = rankings)
vif(VIFtest1)

# Variance inflation factor (reduced model):
VIFtest2 <- lm(formula = Rank ~ Tuition + Enrollment + Population, data = rankings)
vif(VIFtest2)

# Checking model accuracy against "real world" data:
minnesota <- data.frame(Tuition = 14142, Enrollment = 29819, Population = 5568155)

# 95% prediction interval for Minnesota's ranking:
predict(mod2, minnesota, interval="prediction")
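
# 10-fold cross-validation of the reduced model. The write-up above mentions
# k-fold cross-validation but not the package used; this sketch assumes caret.
library(caret)

set.seed(2017)
cv_fit <- train(Rank ~ Tuition + Enrollment + Population,
                data = rankings,
                method = "lm",
                trControl = trainControl(method = "cv", number = 10))
cv_fit   # reports RMSE, R-squared, and MAE averaged across the folds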