Over the course of two semesters, I first read Dummit and Foote's Abstract Algebra to understand the intricacies of group actions and free groups. Afterwards, we moved on to Serre's Trees, in which I studied his beautiful proof of the Nielsen-Schreier theorem.
Theorem: Every subgroup of a free group is free.
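To get a feel for why this is striking: the commutator subgroup of the free group on two generators is itself free of countably infinite rank, so a subgroup can have strictly larger rank than the group containing it. For finite-index subgroups, the standard Nielsen-Schreier index formula makes this precise: a subgroup of index n in a free group of rank k is free of rank n(k − 1) + 1.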
With the guidance of Connor Lockhart, I read Hatcher's Topology of Numbers (ToN), a fantastic number theory book. ToN largely explores how much number theory can be uncovered by studying a structure known as the Farey diagram. My favorite section of the book presents a modern proof of Lagrange's theorem on continued fractions.
Theorem: A real number is a quadratic irrational if and only if it has an eventually periodic continued fraction representation.
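For a standard example of the theorem in action: √2 satisfies x² − 2 = 0, and its continued fraction is √2 = 1 + 1/(2 + 1/(2 + 1/(2 + ···))) = [1; 2, 2, 2, …], which is periodic from the second term on. By contrast, e = [2; 1, 2, 1, 1, 4, 1, 1, 6, …] never becomes periodic, and indeed e is not a quadratic irrational.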
At the end of the semester, I gave a presentation on the theorem and Hatcher's proof.
Perhaps the most common method to demonstrate the "significance" of one's data is to report a p-value. Unfortunately, many research scientists misinterpret low p-values, or outright manipulate their data, to claim that their findings unequivocally confirm their hypotheses. So, in 2016, the American Statistical Association, among other major research organizations, spoke out about the misuse of p-values and encouraged journals to watch more closely for potential "p-hacking".
I wanted to analyze how this restriction and scrutiny of p-value use affected how commonly p-values appeared in the literature over time. So, with the guidance of Michael Radwin, we designed an algorithm, run on a distributed computer system, to gather data from ~3.8 million published papers and chart their p-value usage. You can see a poster I made here.
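To give a rough sense of what the counting step looks like, here is a minimal single-machine sketch in Python. The actual pipeline ran on a distributed system over the full corpus; the file layout (plain-text papers named by year) and the regex below are illustrative assumptions, not the real implementation.

```python
import re
from collections import Counter
from pathlib import Path

# Matches common p-value report styles, e.g. "p < 0.05", "p = .003", "P<0.01".
# This pattern is a simplification for illustration.
P_VALUE_RE = re.compile(r"\bp\s*[<>=]\s*0?\.\d+", re.IGNORECASE)

def count_p_values(paper_path: Path) -> tuple[int, int]:
    """Return (year, number of p-value mentions) for one plain-text paper.

    Assumes files are named like '<year>_<id>.txt'; that naming scheme is
    purely hypothetical.
    """
    year = int(paper_path.stem.split("_")[0])
    text = paper_path.read_text(errors="ignore")
    return year, len(P_VALUE_RE.findall(text))

def usage_by_year(corpus_dir: str) -> Counter:
    """Aggregate p-value mentions per publication year across a corpus."""
    totals = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        year, n = count_p_values(path)
        totals[year] += n
    return totals

if __name__ == "__main__":
    # "papers/" is a placeholder path for a local sample of the corpus.
    print(usage_by_year("papers/"))
```

Charting the resulting per-year counts (normalized by the number of papers published that year) is what lets one see whether p-value use shifted after 2016.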