Sentiment Analysis of a Podcast

 

Text is increasingly being explored as a rich data source. Quantifying textual data can reveal trends, patterns, and characteristics that a human reader might initially miss, and combining quantitative analysis with the computing power of modern technology allows very large amounts of text to be processed quickly.

Here we present a sentiment analysis of an intriguing form of text – podcast transcripts – to illustrate the process of text analysis.

Podcast transcripts are a unique form of text because they were originally meant to be listened to, not read, creating a more intimate form of communication. The texts used in this example are the transcripts of the “Serial” podcast hosted by Sarah Koenig. “Serial” explores the investigation and trial of Adnan Syed, who was accused of the 1999 murder of his ex-girlfriend. The podcast consists of 12 episodes averaging 43 minutes and 7,480 words each. Here we examine the 12 episodes together as a single text.

Sentiment analysis involves processing text data to identify, quantify, and extract subjective information from the text. Using the tools available in the tidytext package in R, we can examine both the polarity (positive or negative tone) and the emotional associations (joy, anger, fear, trust, anticipation, surprise, disgust, and sadness) of the text. We present one method of sentiment analysis, which involves referencing a sentiment dictionary: a list of words coded according to the quality of interest. For polarity, each word is given a positive, negative, or neutral value; for emotions, each word is tagged with any associated emotions. As an example, the word “murder” is coded as negative and tagged with fear and sadness. We chose the NRC sentiment dictionary for this analysis because it is the only one of the standard lexicons that includes emotions, and it was created as a general-purpose lexicon that works well for different types of text.
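To make the mechanics concrete, here is a minimal sketch of this dictionary-lookup approach using tidytext; the `transcripts` data frame and its `text` column are hypothetical stand-ins for the episode transcripts.

    library(dplyr)
    library(tidytext)

    nrc <- get_sentiments("nrc")          # NRC word-emotion lexicon

    sentiment_counts <- transcripts %>%
      unnest_tokens(word, text) %>%       # split transcripts into one word per row
      inner_join(nrc, by = "word") %>%    # keep only words that appear in the lexicon
      count(sentiment, sort = TRUE)       # tally words per emotion/polarity category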

Starting with an overall visualization of the emotions and polarity of the podcast, a bar graph (Figure 1) displays the percentage of the text characterized by each emotion. Examining the text this way yields a particularly intriguing discovery: the most common emotion is trust, which may be surprising for a podcast about a murder investigation and trial. The next most common emotion is anticipation. This matches what one might expect in the context of podcasts: hosts want to keep their listeners interested in the story, so anticipation plays a key role in getting people to listen regularly.
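Continuing the sketch above, the percentages behind a figure like Figure 1 can be computed and plotted along these lines (ggplot2 assumed):

    library(ggplot2)

    sentiment_counts %>%
      filter(!sentiment %in% c("positive", "negative")) %>%   # emotions only
      mutate(percent = 100 * n / sum(n)) %>%
      ggplot(aes(x = reorder(sentiment, -percent), y = percent)) +
      geom_col() +
      labs(x = "Emotion", y = "Percent of matched words")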

Figure 2 shows that, overall, this text is positive, as a larger percentage of the words are coded positive than negative. Looking closer at which words occur most often within a specific sentiment or emotion, a sorted word cloud allows one to visually identify the most commonly used words coded as positive or negative.

The most frequently used negative words are crime, murder, kill, and calls. The most frequently used positive words are friend, talk, police, and pretty. It is important to examine the context of the most common words. Consider the word “pretty”: in the text, “pretty” was used as an adverb, not an adjective (e.g., “I’m pretty sure I was in school. I think– no?”). All 53 instances of “pretty” in the text were used to express uncertainty. However, the NRC dictionary defines and codes “pretty” as an adjective describing something as attractive. This mismatch between usage within the text and the dictionary affects the sentiment analysis, and one should carefully consider how to handle such words.
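One simple way to handle such a mismatch, once the usage has been verified as above, is to drop the offending words from the lexicon before joining; this is only a sketch, and the same fix applies to the proper name discussed below.

    # Words whose NRC coding does not match their usage in this text
    mismatched <- c("pretty", "don")

    nrc_adjusted <- get_sentiments("nrc") %>%
      filter(!word %in% mismatched)       # effectively treat these words as neutral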

Similarly, we can examine each emotion in more detail. These graphs allow one to see which words were most represented in each emotion.

[Figure: most frequent words associated with each emotion]

This graph again illustrates the importance of critically examining the results. The word “don” appears as a top positive word; however, in this text “don” is a person’s name and, like the other names, should be coded as neutral. The NRC lexicon instead codes “don” under its dictionary sense of a gentleman or mentor. Similar concerns may arise for other words with multiple meanings, and such words should be considered carefully, particularly when they are among the most frequently used in the text.

These graphs show a few of the many ways to quantify and visualize text data through a sentiment analysis in order to understand a text more objectively. As text analyses become more prevalent, it is imperative to engage actively in the process and to examine results critically, paying attention not only to the numbers and graphs but also to the subject matter of the text.

 


Software Review: Tableau as a Teaching Tool

Tableau is a unique and valuable teaching tool because it provides an easy interface for creating charts, graphs, and even maps. Students can explore data in sophisticated ways after only a short training session. Even better, students can get free licenses for the software, allowing faculty to use it in classes without incurring large financial commitments.

[Figure: map showing fatalities and event types for violent events in Africa]

What sets Tableau apart from other data visualization or business intelligence software is its intuitive, user-friendly drag-and-drop interface. For more sophisticated applications, this is supplemented by a variety of easy-to-understand menus. By using contextual menus and panels instead of typed code, Tableau lowers the learning curve for creating visualizations. For example, creating a line graph or a map is as easy as selecting the variables in question and choosing the appropriate type of visualization.

Classic tables like the one below are easy to construct and can also be augmented with color-coded hotspot analyses.

[Figure: highlight table of violent events in Egypt, Libya, South Sudan, and Sudan, broken down by country and event type]

Tableau provides the opportunity to construct data visualizations that are more complex than those generated by most traditional statistical packages. For example, the graphic below compares the number of conflicts over time for four African countries in a fairly standard line plot, but adds an additional variable, the number of fatalities, by varying line thickness.

[Figure: line graph of the number of violent events in four African countries (Egypt, Libya, South Sudan, and Sudan) between 1997 and 2015, with line thickness representing the number of fatalities]

For classes working with data, Tableau presents a significant opportunity for instructors to integrate more data into the classroom, especially with students who may not have experience with more advanced statistical software. Making it easier for students to explore and understand data, and to ask their own questions through investigative learning, encourages a deeper appreciation for data as it relates to their discipline. At the time of writing, Tableau is being used successfully in several of our classes at Grinnell College.

However, Tableau does have its drawbacks. In particular, visualizations created with Tableau are not as customizable as those built with more powerful languages such as R or JavaScript. In addition, Tableau is not designed for data analysis: it is a data visualization tool, not a statistical package. Another small downside is that data entered into Tableau must be formatted in a specific way. While Tableau can do some data manipulation, spreadsheet programs like Excel are much easier for this. As a result, Tableau’s role in classrooms or in research may be restricted to surface-level explorations of the data in question. Despite this limitation, Tableau remains a tool with great potential, especially in the possibilities it offers for creating quick and easy visualizations.

Student Spotlight: Racial Bias in the NYPD Stop-and-frisk Policy

Donald Trump recently came out in favor of an old New York Police Department (NYPD) “stop-and-frisk” policy that allowed police officers to stop, question, and frisk individuals for weapons or illegal items. The policy came under harsh criticism for racial profiling and was declared unconstitutional by a federal judge in 2013.

An earlier post by Krit Petrachainan showed potential racial discrimination against African-Americans within different precincts. Expanding on this topic, we decided to look at data from 2014, one year after the policy had been reformed but before major official policy changes had taken place.

More specifically, this study examined whether race (Black or White) influenced the chance of being frisked after being stopped in NYC in 2014, after taking physical appearance, the racial composition of precinct populations, and suspected crime types into account.

2014 Data From NYPD and Study Results

For this study, we used the 2014 Stop, Question and Frisk dataset retrieved from the New York City Police Department. After data cleaning, the final dataset had 22,054 observations. To address our research question, we built a logistic regression model and ran a drop-in-deviance test to determine the importance of the race variable in the model.

Our results suggest that, after a suspect is stopped, race did not significantly influence the chance of being frisked in NYC in 2014. A drop-in-deviance test on a logistic regression model predicting the likelihood of being frisked gave a G-statistic of 8.99 and a corresponding p-value of 0.061. This marginally non-significant p-value shows we do not have enough evidence to conclude that adding terms associated with race improves the predictive power of the model.
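A sketch of that drop-in-deviance (likelihood ratio) test in R, with `sqf` and the predictor names as hypothetical stand-ins for the cleaned dataset and its variables:

    # Reduced model: all predictors except race
    reduced <- glm(frisked ~ age + sex + build + black_pop + crime_type,
                   data = sqf, family = binomial)

    # Full model: adds the race terms
    full <- glm(frisked ~ race + age + sex + build + black_pop + crime_type,
                data = sqf, family = binomial)

    anova(reduced, full, test = "Chisq")   # G-statistic = drop in deviance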


Figure 1. Logistic regression plot predicting probability of being frisked from precinct population Black, compared across race

To better visualize the interactions between race and other variables, we created logistic regression plots predicting the probability of being frisked from either precinct proportion Black (Black Pop) or age, and bar charts comparing the proportion of suspects frisked across sex and race.

Interestingly, given that a suspect is stopped, both Black and White suspects are more likely to be frisked as the precinct proportion of Black residents increases. Furthermore, this trend is more pronounced for Black than for White suspects (Figure 1).
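A plot like Figure 1 can be sketched with ggplot2, again using the hypothetical `sqf` variables (with `frisked` coded 0/1):

    library(ggplot2)

    ggplot(sqf, aes(x = black_pop, y = frisked, color = race)) +
      geom_smooth(method = "glm", method.args = list(family = binomial)) +
      labs(x = "Precinct proportion Black",
           y = "Probability of being frisked, given stopped")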

Additionally, young Black suspects are much more likely than their White counterparts to be frisked, given that they are stopped. This difference diminishes as suspect age increases (Figure 2).


Figure 2. Logistic regression plot predicting probability of being frisked from age, compared across race

Finally, male suspects are much more likely to be frisked than female suspects, given that they are stopped (Figure 3). However, the bar charts indicate that the effect of race on the probability of being frisked does not depend on gender.


Figure 3. Proportion frisked by race, compared across sex

Is stop-and-frisk prone to racial bias?

Our results suggest that, given that a suspect is stopped and after taking other external factors into account, race did not significantly influence the chance of being frisked in NYC in 2014. However, the relationships between race and precinct proportion Black, age, and sex suggest that NYPD “stop-and-frisk” practices may still be prone to racial bias, posing a threat to minority citizens in NYC. It is crucial that the NYPD continue to evaluate its “stop-and-frisk” policy and make appropriate changes to the policy and/or police officer training in order to prevent racial profiling at any level of investigation.

*** This study by Linh Pham, Takahiro Omura, and Han Trinh won 2nd place in the 2016 Undergraduate Statistics Class Project Competition (USCLAP).

Check out the 2016 USCLAP Winners here.


5 Things To Do with a Data Set

Clustering

Like prediction and classification, understanding how the data are organized can help us analyze them. One way to tease out the structure of the data is through clustering: based on the patterns in the data, we group individual observational units into distinct clusters, defined so that observations within each cluster have similar characteristics. We can then analyze each group further and compare across groups. For example, marketers may want to identify customer segments in order to develop targeted marketing strategies. A cluster analysis will group customers so that people in the same segment tend to have similar needs but differ from those in other segments. Some popular clustering methods are multi-dimensional scaling and latent class analysis.
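As a concrete illustration, here is a minimal k-means sketch for the customer-segmentation example; k-means is one common clustering method alongside those named above, and the `customers` data frame of numeric behavioral measures is hypothetical:

    set.seed(1)                                        # make the clustering reproducible
    segments <- kmeans(scale(customers), centers = 4)  # four customer segments

    table(segments$cluster)                            # size of each segment
    aggregate(customers, by = list(segment = segments$cluster), FUN = mean)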

[Image: example of customer segmentation]

Image source: http://www.dynamic-concepts.nl/en/segmentation/

Classification

The first step is constructing a classification system. The categories can be created from either theory or observed statistical patterns, such as those detected using clustering techniques. The next step is to identify the category or group to which a new observation belongs. For example, a new email can be put in the spam bin or the non-spam bin based on its contents. In statistics and machine learning, logistic regression, linear classifiers, support vector machines, and linear discriminant analysis are popular techniques for classification problems.
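For instance, the spam example might look like the following logistic-regression sketch, where the `emails` data frame and its content features are hypothetical:

    # Fit a classifier on labeled emails (spam = 1, not spam = 0)
    fit <- glm(spam ~ n_links + n_exclamations + has_attachment,
               data = emails, family = binomial)

    # Put a new email in the spam bin if its predicted probability exceeds 0.5
    new_email <- data.frame(n_links = 7, n_exclamations = 3, has_attachment = TRUE)
    ifelse(predict(fit, new_email, type = "response") > 0.5, "spam", "not spam")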

Prediction

Predictive models can be built with the available data to tell you what is likely to happen. Such models assume either that past statistical patterns can be used to predict the future or that some theoretical model is valid. For example, Netflix recommends movies to users based on the movies and shows those users have watched in the past.

Can we predict who will be the next president, Clinton or Trump? Yes, we can. Based on polling data or the candidates’ speeches, one can build a predictive model for the 2016 presidential election. Nate Silver is well known for the accuracy of his predictions of both political and sporting events. Here is his prediction model for the 2016 presidential election:

[Figure: map of Nate Silver’s polls-only forecast for the 2016 presidential election]

Source: http://projects.fivethirtyeight.com/2016-election-forecast/

Predictive modeling uses regression analysis, including linear regression, multiple regression, and generalized linear models, as well as machine learning algorithms such as random forests and factor analysis. Time series analysis can be used to forecast the weather or next season’s sales of a product.
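The seasonal-sales forecast mentioned above might be sketched with a basic ARIMA model; the `sales` vector of past quarterly sales is hypothetical and the model order is illustrative, not tuned:

    sales_ts <- ts(sales, frequency = 4)        # quarterly time series
    fit <- arima(sales_ts, order = c(1, 1, 1))  # simple illustrative ARIMA(1,1,1)
    predict(fit, n.ahead = 4)$pred              # forecast the next four quarters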

Anomaly Detection

Anomaly detection identifies unexpected or abnormal events. In other words, we seek to find deviations from expected patterns. Detecting credit card fraud provides an example: credit card companies can analyze customers’ purchase behavior and history so that they can alert customers to possible fraud. Popular anomaly detection techniques include k-nearest neighbors, neural networks, support vector machines, and cluster analysis.
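As a deliberately simple sketch of the idea, one could flag purchases whose amounts are far from a customer’s typical spending; real fraud systems rely on the richer techniques listed above, and the `amounts` vector is hypothetical:

    # Standardize each purchase amount against the customer's history
    z <- (amounts - mean(amounts)) / sd(amounts)

    which(abs(z) > 3)   # purchases more than 3 standard deviations from the mean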

Decision Making

One of the most common motivations for analyzing data is to drive better decision making. When a company needs to promote a new product, it can employ data analysis to set a price that maximizes profit while avoiding price wars with competitors. Data analysis is so central to decision making that almost all analytic techniques – including not only the ones mentioned above but also geographical information systems, social network analysis, and qualitative analysis – can be applied.


Software Review: NVivo as a Teaching Tool

For the past few weeks, DASIL has been publishing a series of blog posts comparing this year’s two presidential candidates – Hillary Clinton and Donald Trump – using NVivo, a text analysis package. Given the increasing demand for qualitative data analysis in academic research and teaching, this blog post discusses the strengths and weaknesses of NVivo as a teaching tool for qualitative analysis.

Efficiency and reliability

Using software like NVivo for content analysis can add rigor to qualitative research. Word searches and coding done in NVivo produce more reliable results than the same work done manually, since the software rules out human error. Furthermore, NVivo is especially useful with large data sets – it would be extremely time-consuming to code hundreds of documents by hand with a highlighter pen.

Ease of use

NVivo is relatively simple to use. Users can import documents directly from word processing packages in various formats, including Word documents and PDFs, and code these documents easily on screen via the point-and-click interface. Teachers and students can quickly become proficient with the software.

NVivo and social media

NVivo allows users to import Tweets, Facebook posts, and YouTube comments and incorporate them as part of their data. Given the rise of social media and increased interest in studying its impact on society, this capability of NVivo may become more heavily used.

Segmenting and identifying patterns 

NVivo allows users to create clusters of nodes and organize their data into categories and themes, making it easy for researchers to identify patterns. At the same time, the use of word clouds and cluster analysis also provides insight into prevailing themes and topics across data sets.

Limitations

While NVivo is a useful tool for providing a reliable, general picture of the data, it is important to be aware of its limitations. It may be tempting to limit the data analysis process to automatic word searches that yield a list of nodes and themes, but in-depth analysis and critical thinking are still needed for meaningful results.

Although it is possible to search for particular words and their derivations, the various ways in which ideas can be expressed make it difficult to find all instances of a particular usage of a word or idea. Manual searches and evaluation of the automatic word searches help to ensure that the data are, in fact, thoroughly examined.

Once individual themes in a data set are found, NVivo doesn’t provide tools to map out how these themes relate to one another, making it difficult to visualize the inter-relationships of nodes and topics across data sets. Users need to think critically about the ways in which these themes emerge and relate to each other to gain a deeper understanding of the data.
