Dashboard Week - Day 3 - Rick and Morty API

by Lyon Abido

For the third day of dashboard week we were tasked with creating a dashboard from data gathered from an API. This was a wonderful opportunity to revisit and further practice connecting to an API and pulling data from it. What made this project even more enjoyable, especially towards the end of the day when we all presented our work, was that the data we used was centered on the famous Rick and Morty TV show. While I haven’t watched the show before, I was loosely aware of some of its characters, memes, and tropes. To learn more about the Rick and Morty API, please read through the documentation!

This article will be broken up into two parts: first, I will talk through how I connected to the Rick and Morty API, and second, I will talk through the draft dashboard that I created. Overall, this project was great practice for honing my Alteryx skills, working with APIs, and designing an interactive dashboard.

Before I delve into my Alteryx workflow, I have to shout out Liam Wood, because a blog that he wrote about this project was instrumental not only for me but for pretty much everyone else in my cohort. Liam did a fantastic job of documenting how he connected to the Rick and Morty API and pulled in the data that he wanted. His blog served as a reference and helped me understand what the data meant.


 This workflow outlines, in three broad parts, how I was able to connect to the Rick and Morty API and create a dataset, centered on character information, which informs the dashboard screenshots below.

The first part of the workflow focuses on actually connecting to the API and pulling in the relevant data. Something to stress about this part of the workflow is the concept of pagination. Given how the API is structured and the size of the data it contains, information is returned across several pages. In this case, according to the documentation, character information is spread across 42 pages. This means that if I want access to all of the character information, I need to somehow iterate through each of the pages and collect the information from each one. Put another way, I need to send 42 separate requests to the API in order to pull in all the data that I need. If I did not do this, I would only have the data corresponding to a single page, which would hugely skew and limit the data product that I created.
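Outside of Alteryx, the same pagination loop can be sketched in a few lines of Python. This is a rough illustration rather than my actual workflow: it assumes the response shape described in the API documentation, where each page holds its records under results and the URL of the next page under info.next (null on the last page), and simply follows that link until it runs out.

```python
import requests

BASE_URL = "https://rickandmortyapi.com/api/character"

def fetch_all_characters(get_json=None):
    """Collect every character record by walking the paginated endpoint.

    `get_json` exists so a stub can replace the real HTTP call in tests;
    by default it performs a GET with the `requests` library.
    """
    if get_json is None:
        get_json = lambda url: requests.get(url, timeout=10).json()

    characters = []
    url = BASE_URL
    while url:
        payload = get_json(url)
        characters.extend(payload["results"])   # this page's records
        url = payload["info"]["next"]           # None once we hit the last page
    return characters
```

With 42 pages, this loop issues 42 requests and returns one combined list, which mirrors what the iterative part of the Alteryx workflow accomplishes.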

The second part of the workflow focuses on cleaning and preparing the raw data returned by the API. As with most common APIs, the raw data from the Rick and Morty API arrives in a JSON structure. For this part, I worked on understanding what a row of data means and determined what I wanted my headers and fields to be. Once I collected those features and refined them to my liking, I pivoted the data into the overall shape of the dataset I wanted to bring into Tableau. This section of the workflow took the most time and could involve bringing in additional data (from other APIs, web scraping, or other ready-made datasets). In my case, I did not bring in any additional data; however, I would like to try that if I find a chance to revisit this project. For example, it would be interesting to somehow bring in personality type-related data to see the general personality traits of all the characters in the show. Alternatively, I could bring in other data from the Rick and Morty API, such as location data or episode data, which would involve a similar workflow process as well as some joins to consolidate all the information into a single dataset.
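As a loose Python illustration of this flattening step (my actual cleaning happened in Alteryx), the hypothetical helper below picks out headers and fields of the kind I described and collapses the nested origin and location objects into plain columns. The field names follow the character schema in the API documentation.

```python
def flatten_character(raw):
    """Turn one raw character record (a nested dict) into a flat row."""
    return {
        "id": raw["id"],
        "name": raw["name"],
        "status": raw["status"],                # "Alive", "Dead" or "unknown"
        "species": raw["species"],
        "type": raw["type"],
        "gender": raw["gender"],
        "origin": raw["origin"]["name"],        # nested object -> plain column
        "location": raw["location"]["name"],
        "episode_count": len(raw["episode"]),   # list of episode URLs -> a count
    }
```

Mapping each record through a function like this yields one tidy row per character, which is essentially what the pivot in the workflow produces.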

The third part of the workflow, which is the most straightforward, simply outputs the constructed dataset in a file format that Tableau can work with. In this case, I chose to output the data as a CSV file.
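For completeness, here is a minimal Python sketch of that output step, assuming the cleaned rows are plain dictionaries; the function name is my own, not part of any library.

```python
import csv

def write_characters_csv(rows, path):
    """Write flat character rows to a CSV file that Tableau can read directly."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()   # column names become the CSV header row
        writer.writerows(rows)
```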

With that, I can now talk about the dashboard draft that I created using the character-based dataset. For this draft, I wanted to create a simple, user-led dashboard experience. To that end, I used filter actions to give the user the means to click different elements of a viz and change the behavior of the related vizzes.

 Here’s what the dashboard looks like as a default.


In this first screenshot, you can see what the dashboard looks like overall, without any filtering applied. We can see three KPIs which represent the status of a character. That is, a character can be described as alive, dead, or unknown. Unfortunately, I wasn’t able to figure out what a status of unknown meant. Beyond that, we have three vizzes. The first is a treemap where each tile represents a gender. If you hover over a tile, you can see the number of characters with that gender. If you click on a tile, the two other vizzes reflect that selection. The second viz is a bar chart that lists out all the characters and how many versions there are of each. For those of you who are familiar with the show, this means that a character can exist in multiple dimensions. So, say, there could be a version of Morty in dimension A but also in dimensions B, C, D, and so on. Like the previous viz, a user is able to select a character and have the last viz be affected. The last viz, a colored text table, displays the type and species of each character. Lastly, the KPIs are dynamic: if a user selects a gender of “female” and a specific female character, the KPIs will change to reflect the status of that selected character.

 Here’s what the dashboard looks like when some selections are made.


To close off this article, I want to talk about some additional elements I would like to incorporate in the future to enhance this current draft. First, I would like to bring in information regarding the location and place of origin of the characters. For this draft, I omitted that data due to space constraints. Second, I would like to expand the data (by bringing in episode and location data) so that I could associate characters with further location characteristics (what dimension the character is in, and so on) and with the episodes and seasons in which they appeared. With this additional data, I could create a dashboard that showcases which characters made the most appearances and which dimensions are the most populated, and start to explore location characteristics, for example whether a given location has lots of genderless characters. Ultimately, this would give me more opportunities to explore the data and glean some interesting takeaways from it. Finally, I would like to play around with incorporating character portraits. That would be nice because we could see the different versions of a given character.

Overall, the day was a welcome challenge and a very playful twist on regular dashboarding. I feel more comfortable and confident working with APIs, and I saw a lot of awesome designs from the rest of my cohort.