My first four topics (crime, death, family, and messages/notes) emerged from a run with 25 topics, 2000 iterations, and 20 topic words printed, with stop words removed. The next three topics (characteristics of a man, love, and house/home) came from a run with 30 topics and 1500 iterations, and the final three (traveling, evidence, and time) from a run with 20 topics and 2500 iterations, with the same settings otherwise. Despite my changing the number of topics and iterations multiple times, themes relating to crime, mystery, and investigation kept surfacing, reflecting the central concern of the Sherlock Holmes stories. This suggests that topic modeling succeeds at highlighting the recurring topics that multiple texts have in common.
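For reference, runs like the ones described above can be reproduced with MALLET's command-line interface. The sketch below uses the settings from my first run; the directory and file names are placeholders, not the ones I actually used.

```shell
# Import the story files (one plain-text file per story) into MALLET's
# internal format, removing standard English stop words.
bin/mallet import-dir \
    --input sherlock_stories/ \
    --output sherlock.mallet \
    --keep-sequence \
    --remove-stopwords

# Train a topic model with the first run's settings:
# 25 topics, 2000 iterations, 20 topic words printed per topic.
bin/mallet train-topics \
    --input sherlock.mallet \
    --num-topics 25 \
    --num-iterations 2000 \
    --num-top-words 20 \
    --output-topic-keys topic_keys.txt \
    --output-doc-topics doc_topics.txt
```

Here `topic_keys.txt` lists the top 20 words for each topic, and `doc_topics.txt` records each topic's proportion in each text.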
I found it most difficult to identify common topics among the words that appeared when I decreased the number of iterations. However, those topics (characteristics, love, and house/home) all ended up making sense with regard to the Sherlock Holmes stories. Describing a person's characteristics is part of solving any mystery; the theme of love appears in the backstory each text gives for those involved in or affected by the crime; and house/home can represent Holmes in his own rooms or the home where a mystery is being solved. Being somewhat familiar with various Sherlock Holmes stories definitely helped me recognize the topics; if I had never read any Holmes stories, I would have had more trouble.
Overall, MALLET processes multiple texts efficiently to surface their common topics. The additional information it offers about these topics, such as the proportion of each text that a topic accounts for, makes it helpful for pointing out themes and ideas that one may have overlooked during a close reading.