The Twitter Methods Ur-Post

For some years now, I’ve been analyzing conference Twitter data and sporadically posting about it online, including Twitter threads written from various airports. While I’ve had a method from the beginning, that method has evolved over time: most significantly, I got tired of endless hours of OpenRefine data cleaning and automated the network creation process. (To be clear: I love OpenRefine. But when you have a tedious, repetitive task, programming is your friend.) While my methods will likely continue to evolve – as technology changes and new research questions occur to me – this is the current state of my methodology, and it will be linked as the ur-post whenever I blog about Twitter analysis.

Data Collection

I collect my data from Twitter using Hawksey’s TAGS 6.0, which employs Google Sheets. Yes, I know there’s now a TAGS 6.1, but I subscribe to the philosophy of “if it ain’t broke, don’t fix it.”

The primary advantage of TAGS, for me, is the ability to “set and forget.” TAGS uses the Twitter Search API (as opposed to the Twitter Streaming API) in its limited, free tier, which means it can only capture Twitter data from the last 7 days. To get around this limitation, TAGS can be set up to query the API every hour and capture whatever new tweets have appeared since the tweet “archive sheet” was last updated. This means it can be set up whenever I remember to set it up – usually weeks if not months before a conference – and it will keep running until I remember to tell it to stop – again, usually weeks if not months after the conference ends.
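TAGS handles all of this inside Google Apps Script, so nothing below is part of my actual workflow, but the underlying “grab everything since last time” idea looks roughly like this sketch in Python (tweepy and the bearer token are my assumptions for illustration):

    # Rough sketch of the "fetch new tweets since last run" idea; not the
    # actual TAGS code, which lives in Google Apps Script. Assumes tweepy >= 4
    # and a Twitter API bearer token.
    import tweepy

    client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

    def fetch_new_tweets(query, since_id=None):
        """Return recent tweets matching `query` that are newer than `since_id`."""
        response = client.search_recent_tweets(  # recent search only reaches back ~7 days
            query=query,
            since_id=since_id,
            max_results=100,
            tweet_fields=["created_at", "author_id"],
        )
        return response.data or []

    # Run this on a schedule (TAGS uses an hourly trigger) and keep track of the
    # highest tweet ID you've seen, so each run only appends genuinely new tweets.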

I try to download this data regularly to my computer, following the data management principle LOCKSS: Lots Of Copies Keep Stuff Safe. Having the data available only in Google Sheets makes me dependent on Google to get to my data. By contrast, CSV files on my computer, which is Time Machined in two locations, have a decent chance of surviving anything short of a nuclear/zombie apocalypse.
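The download itself can be as simple as hitting the sheet’s CSV export URL. Here’s a quick sketch with pandas (the spreadsheet ID and tab gid are placeholders, and this assumes the sheet is visible to anyone with the link):

    # Sketch: pull a TAGS archive sheet down to a local CSV (LOCKSS!).
    # The spreadsheet ID and gid below are placeholders; the sheet must be
    # readable by anyone with the link for this to work.
    import pandas as pd

    SHEET_ID = "YOUR_SPREADSHEET_ID"
    GID = "0"  # the gid of the Archive tab

    export_url = (
        f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv&gid={GID}"
    )

    archive = pd.read_csv(export_url)
    archive.to_csv("conference_tweets_backup.csv", index=False)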

Data Cleaning/Pre-Processing

While the data that I get from TAGS is relatively clean, I do tidy it up a bit first. Most importantly, I deduplicate my dataset. Some of this duplication is my fault: when I track hashtag variants in separate archives and someone includes both variants in a single tweet (e.g. “aha2019” and “aha19”), that tweet gets captured twice. TAGS also seems to duplicate some of its collected data, though I haven’t figured out why – manual inspection of each tweet’s unique ID makes it clear when a tweet is an actual duplicate in the set vs. a delete-and-rewrite or a retweet of someone else’s tweet. Because deduplication is a simple matter of checking whether each tweet’s unique ID occurs only once in the dataset, I’ve automated that process.
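My actual script is on GitHub (linked at the bottom of this post), but the core of the step is just “keep one row per unique tweet ID.” In pandas that looks roughly like this (assuming the archive’s ID column is called id_str, as it is in my TAGS sheets):

    # Sketch of the deduplication step: keep one row per unique tweet ID.
    # Assumes the TAGS archive stores the tweet's unique ID in an "id_str" column.
    import pandas as pd

    archive = pd.read_csv("conference_tweets_backup.csv", dtype={"id_str": str})

    # Duplicate id_str values are true duplicates (the same tweet captured twice);
    # retweets and delete-and-rewrites get their own IDs, so they survive this.
    deduped = archive.drop_duplicates(subset="id_str", keep="first")
    deduped.to_csv("conference_tweets_deduped.csv", index=False)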

Depending on the analysis I want to conduct, I also tend to time-limit my dataset. Specifically, I delete any tweets (from a copy of the spreadsheet – no one panic!) that occur more than one “day” before the start of the event or one “day” after the end of the event. In this case, a “day” is defined as the GMT day, which may or may not correspond to the local timezone of the event. While this has the potential to cause slight discrepancies when comparing events across timezones – specifically, some events will have a few more hours of data capture before the event starts while others will have a few more hours after it ends – I don’t believe these discrepancies are statistically significant. If I ever do some hard math on the question, I’ll update this post with the results.
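For the curious, the trimming itself is a one-line filter once the timestamps are parsed. A rough pandas version (the column name, date format, and example dates are all placeholders; check your own archive before reusing this):

    # Sketch: keep only tweets from one GMT day before the event through one GMT
    # day after it ends. The dates and the "created_at" column name are placeholders.
    import pandas as pd

    tweets = pd.read_csv("conference_tweets_deduped.csv")
    tweets["created_at"] = pd.to_datetime(tweets["created_at"], utc=True)

    window_start = pd.Timestamp("2019-01-02", tz="UTC")  # one GMT day before the event
    window_end = pd.Timestamp("2019-01-07", tz="UTC")    # one GMT day after it ends

    trimmed = tweets[(tweets["created_at"] >= window_start) &
                     (tweets["created_at"] < window_end)]
    trimmed.to_csv("conference_tweets_trimmed.csv", index=False)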

Network Creation

Now to the fun stuff! I started analyzing conference tweets because I was interested in how people connect and share knowledge/ideas/opinions in this virtual space. As such, my primary interest lies in creating a social network from the Twitter data – who tweeted at/mentioned whom – which means transforming the TAGS archive spreadsheet of tweets into a network of Twitter handles (and/or hashtags). Because Twitter handles are unique IDs, all I need is an edge list of sources and targets for each tweet (the other captured data was and remains interesting-but-optional).

I originally did this manually. Aside from being tedious, this also created problems for replicability: what if I slipped up and missed or repeated something while creating my edge lists? I therefore wrote a Python script to do this work for me, with a few variants depending on whether I want to keep hashtags or date/time information, plus one for my students using TAGS 6.1.
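The real script (and its variants) lives in the GitHub repo linked at the bottom of this post; what follows is only a stripped-down sketch of the idea. For every tweet, the source is the tweet’s author and the targets are everyone @mentioned in the text (the column names are the ones in my TAGS archives; adjust if yours differ):

    # Stripped-down sketch of the edge-list step: one (Source, Target) row per
    # @mention in each tweet. "from_user" and "text" are the column names in my
    # TAGS archives.
    import re
    import pandas as pd

    MENTION = re.compile(r"@(\w+)")

    tweets = pd.read_csv("conference_tweets_trimmed.csv")

    edges = []
    for _, row in tweets.iterrows():
        source = str(row["from_user"]).lower()
        for target in MENTION.findall(str(row["text"])):
            edges.append({"Source": source, "Target": target.lower()})

    pd.DataFrame(edges).to_csv("edge_list.csv", index=False)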

Next I import the edge list into Gephi (though I’ve experimented with other software, Gephi’s old hat to me at this point and does what I need it to do) and allow it to sum repeated edges to give each edge a weight. That is, if I tweeted at or retweeted @epistolarybrown 173 times over the course of a conference, the edge from me to her would have weight 173.
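Gephi does that summing for you when you let it merge parallel edges on import, but the equivalent operation is just a group-and-count over the edge list. Roughly (again, a sketch rather than part of my workflow):

    # Sketch: collapse repeated (Source, Target) pairs into weighted edges,
    # i.e. what Gephi does when it merges parallel edges on import.
    import pandas as pd

    edges = pd.read_csv("edge_list.csv")

    weighted = (
        edges.groupby(["Source", "Target"])
             .size()
             .reset_index(name="Weight")
    )
    weighted.to_csv("weighted_edge_list.csv", index=False)

In the @epistolarybrown example above, those 173 rows collapse into a single edge with Weight 173.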

Network Analysis

At this point in the process, I use Gephi’s built-in algorithms to conduct my network analysis, usually with an emphasis on metrics like degree, betweenness centrality, network diameter/path lengths, and modularity classes. For an example of how that works out in practice, check out one of my conference blog posts! And if you have any questions, feel free to ping me via Twitter.
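I do all of this inside Gephi, but if you’d rather script it, roughly the same family of metrics is available in networkx. A sketch, with the caveat that networkx’s greedy modularity communities are not the same algorithm as Gephi’s modularity classes (Gephi uses Louvain):

    # Sketch: a similar set of metrics computed with networkx instead of Gephi.
    # Not part of my actual workflow; just for readers who prefer scripting.
    import pandas as pd
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    edges = pd.read_csv("weighted_edge_list.csv")
    G = nx.from_pandas_edgelist(edges, source="Source", target="Target",
                                edge_attr="Weight", create_using=nx.DiGraph)

    degree = dict(G.degree())                    # degree
    betweenness = nx.betweenness_centrality(G)   # betweenness centrality (unweighted)

    # Diameter and path lengths need a connected graph, so use the largest
    # component of the undirected version.
    und = G.to_undirected()
    giant = und.subgraph(max(nx.connected_components(und), key=len))
    diameter = nx.diameter(giant)
    avg_path = nx.average_shortest_path_length(giant)

    # Community detection: a greedy modularity method, not Gephi's Louvain.
    communities = greedy_modularity_communities(und)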

Code

All of my code is available on GitHub under an MIT License.