This step involves cleaning the gathered data. The drop_dups_and_na notebook shows how to remove duplicate articles and null values. If you want to follow our approach, you will have to manually inspect the text scraped from each unique website and build a set of regex rules; notebook Regex shows an example. The next step involves cleaning each article's title and prepending it to its body; notebook Titles shows an example. Lastly, we add a space around every punctuation mark and apply a global rule allowing at most one space between characters; notebook Punct shows an example.
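The pipeline above can be sketched as follows. This is a minimal illustration, not the notebooks themselves: the column names (`title`, `body`, `text`), the sample rows, and the boilerplate regex are assumptions; real regex rules come from manually inspecting each scraped site.

```python
import re
import pandas as pd

# Hypothetical scraped data; column names and rows are illustrative only.
df = pd.DataFrame({
    "title": ["Big News!", "Big News!", None],
    "body": ["Markets rose today.", "Markets rose today.", "Some text."],
})

# Step 1: drop duplicate articles and rows with missing values
# (as in the drop_dups_and_na notebook).
df = df.drop_duplicates(subset=["title", "body"]).dropna()

# Step 2: apply site-specific regex rules (placeholder pattern; in practice
# each site gets its own rules after manual inspection, as in notebook Regex).
boilerplate = re.compile(r"Subscribe to our newsletter|Advertisement", re.IGNORECASE)
df["body"] = df["body"].str.replace(boilerplate, "", regex=True)

# Step 3: prepend the cleaned title to the article body (as in notebook Titles).
df["text"] = df["title"].str.strip() + ". " + df["body"].str.strip()

# Step 4: pad every punctuation mark with a space on each side, then collapse
# any run of whitespace down to a single space (as in notebook Punct).
def space_punctuation(text: str) -> str:
    text = re.sub(r"([.,!?;:])", r" \1 ", text)
    return re.sub(r"\s+", " ", text).strip()

df["text"] = df["text"].map(space_punctuation)
print(df["text"].iloc[0])  # → Big News ! . Markets rose today .
```

After these steps the frame holds one deduplicated row whose `text` column has the title joined to the body and uniformly spaced punctuation, ready for tokenization.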