markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
A text file containing basic information about every NLSS episode must be organized into something usable. | with open(r'data\NLSS_Dockets.txt') as f:
    file = f.read()

shows = file.split('\n\n') #Split into one string per show
shows[:5] | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
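As a quick sanity check on the `'\n\n'` split, here is the same idea on a tiny synthetic docket (the sample text below is made up for illustration, not taken from the real file):

```python
# Two fake show blocks separated by a blank line, mimicking the file layout.
sample = "(Jan 1, 2017) (NL, RLS)\nGame A, Game B\n\n(Jan 2, 2017) (NL)\nGame C"
sample_shows = sample.split('\n\n')  # one element per show block
print(len(sample_shows))  # 2
```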
This text file was taken from a webpage, so it contains links to Nick's livestream. Let's get rid of these since they're not needed. | #enumerate gives us the index directly instead of tracking it by hand
for index, s in enumerate(shows):
    shows[index] = s.replace(' Nick View', '')
shows[-10:] | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
Now I need to split each show into its meaningful parts. Let's start with the games played on each episode. | games = []
for s in shows:
    g = s.split('\n') #The text file has the games on the second line
    games.append(g[1])
games | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
I'll have to clean these up later to make sure all the games are spelled consistently. | #Number of dockets, not individual games
len(games) | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
Now let's take a look at the first line of each entry, which contains the date of the show and the people who joined the show that day. They are separated in the file by parentheses. | date_crew = []
for s in shows:
    dc = s.split('\n')[0]
    date_crew.append(dc)
print(date_crew) | ['(August 24, 2017) (NL, RLS, CS, rob)', '(August 23, 2017) (NL, RLS, rob w/ Baer, LGW, HCJ)', '(August 21, 2017) (NL, RLS, JS, rob)', '(August 17, 2017) (NL w/ Sin, RLS, LGW, HCJ, Baer)', '(August 16, 2017) (NL, RLS w/ rob, Baer, LGW, Dan)', '(August 14, 2017) (NL, JS, MALF, LGW w/ Baer, HCJ)', '(August 10, 2017) (NL, RLS, rob, LGW)', '(August 7, 2017) (NL, RLS, JS, rob w/ Baer)', '(August 3, 2017) (NL, RLS, CS w/ rob, MALF)', '(August 2, 2017) (NL, RLS, LGW w/ Baer, Kory)', '(July 31, 2017) (NL, RLS, JS, rob w/ Sin, Baer, TB)', '(July 27, 2017) (NL, RLS, CS w/ MALF)', '(July 26, 2017) (NL, RLS w/ LGW, Sin, Baer, Dan)', '(July 24, 2017) (NL, RLS, JS w/ LGW)', '(July 20, 2017) (NL, RLS, CS, LGW w/ Baer)', '(July 19, 2017) (NL, RLS, LGW w/ MALF, rob, Baer)', '(July 13, 2017) (NL, RLS, rob w/ LGW, Baer)', '(July 12, 2017) (NL, RLS, rob w/ Baer)', '(July 10, 2017) part 1, part 2 (NL, RLS, JS w/ rob, Sin)', '(July 6, 2017) (NL, RLS, CS, rob w/ Baer)', '(July 5, 2017) (NL, RLS, rob w/ LGW, Baer)', '(July 3, 2017) (NL, RLS, rob w/ Sin, LGW, Baer)', '(June 22, 2017) (NL, RLS, CS, rob w/ JS)', '(June 21, 2017) Nick view (NL, RLS, rob w/ LGW, Baer)', '(June 19, 2017) Nick view (NL, RLS, JS w/ rob, Kate, Baer)', 'Solo (June 15, 2017) (NL w/ MALF, rob)', 'Solo (June 14, 2017) (NL w/ MALF, LGW, JS, Baer)', 'NLSS Masters (June 12, 2017) (NL, RLS, JS, MALF)', '(June 8, 2017) (NL, RLS w/ rob, MALF, Baer)', '(June 7, 2017) (NL, RLS, rob w/ Dan, Baer, Sin, Blueman)', '(June 5, 2017) Nick view (NL, RLS, JS w/ MALF)', '(June 1, 2017) Nick view (NL, RLS, CS w/ rob, Baer)', '(May 31, 2017) (NL, rob, LGW, Baer)', '(May 29, 2017) (NL, RLS, JS w/ rob, MALF)', '(May 25, 2017) (NL, RLS, rob w/ MALF, Baer)', '(May 24, 2017) (NL, RLS, rob, LGW w/ MALF, Dan)', '(May 22, 2017) (NL, RLS, JS w/ MALF)', '(May 18, 2017) (NL, RLS, CS, rob w/ Kate, Baer)', '(May 17, 2017) (NL, RLS w/ Dan, LGW)', '(May 15, 2017) (NL, RLS, JS w/ rob, LGW, MALF)', '(May 11, 2017) (NL, RLS, rob, Baer 
w/ LGW, Sin)', '(May 10, 2017) (NL, RLS, rob w/ Baer, LGW, Dan)', '(May 8, 2017) (NL, RLS w/ rob, MALF)', '(May 4, 2017) (NL, RLS, rob w/ MALF, Baer)', '(May 3, 2017) (NL, RLS, rob w/ MALF)', '(May 1, 2017) (NL, RLS, JS w/ rob, MALF)', '(April 27, 2017) (NL, RLS w/ JS, MALF, Baer, LGW, Kate)', '(April 26, 2017) (NL, RLS, rob, LGW w/ Baer)', '(April 24, 2017) (NL, RLS, rob, LGW w/ Baer)', '(April 20, 2017) Nick view (NL, RLS, JS, CS w/ rob)', '(April 19, 2017) Nick view (NL, RLS, LGW w/ rob, Baer)', '(April 6, 2017) part 1 part 2 Nick view (NL, RLS, CS w/ LGW)', '(April 5, 2017) (NL, RLS, rob w/ Baer)', '(April 3, 2017) (NL, JS w/ MALF, LGW, rob, Baer, Sin)', '(March 30, 2017) (NL, RLS, CS w/ BaerBaer, MALF)', '(March 29, 2017) Nick Video (NL, RLS, rob, LGW w/ Dan)', '(March 27, 2017) (NL, RLS, JS w/ rob)', '(March 23, 2017) (NL, RLS, CS w/ MALF)', '(March 22, 2017) Nick Video (NL, RLS, LGW w/ MALF, Baer)', '(March 20, 2017) (NL, RLS, LGW w/ Baer, rob)', '(March 16, 2017) (NL, CS, LGW w/ MALF, Baer)', '(March 15, 2017) (NL, RLS, LGW w/ Kate)', '(March 8, 2017) (NL, RLS, rob w/ LGW, Baer, Sin)', '(March 6, 2017) (NL, RLS, JS w/ LGW, Baer, Sin)', '(March 1, 2017) (NL, rob, LGW w/ MALF, Baer)', '(February 27, 2017) (NL, RLS, JS w/ rob, LGW, Baer)', '(February 23, 2017) (NL, RLS, MALF w/ LGW, Baer)', '(February 22, 2017) (NL, RLS, rob, LGW w/ Dan)', '(February 20, 2017) Nick view (NL, RLS, JS, LGW w/ rob, Baer, Sin)', '(February 16, 2017) (NL, RLS, LGW w/ rob, Baer, MALF)', '(February 15, 2017) (NL, RLS, rob, LGW w/ Mathas)', '(February 13, 2017) (NL, JS w/ rob, LGW)', '(February 9, 2017) (NL, RLS, rob, LGW w/ MALF, Baer)', '(February 8, 2017) part 1 part 2 (NL, RLS, CS w/ rob, LGW, MALF, Baer)', '(February 6, 2017) (NL, RLS, JS w/ MALF)', '(February 2, 2017) (NL, RLS, rob, LGW w/ Baer, MALF)', '(February 1, 2017) (NL, RLS, rob, LGW)', '(January 31, 2017) (NL, JS, MALF, rob w/ Sin)', '(January 26, 2017) (NL, MALF, rob, LGW)', '(January 25, 2017) (NL, rob, LGW w/ MALF, 
Dan, Sin, Kate)', '(January 23, 2017) part 1 part 2 (NL, JS, MALF)', '(January 19, 2017) (NL, MALF, rob, LGW)', '(January 18, 2017) (NL, rob, LGW w/ MALF)', '(January 16, 2017) part 1 part 2 (NL, JS, MALF w/ LGW, rob, Baer, Sin, Dan)', '(January 12, 2017) (NL, RLS, MALF w/ LGW, Crendor)', '(January 11, 2017) (NL, RLS w/ rob, LGW, Dan)', '(January 9, 2017) (NL, RLS, JS w/ Baer, rob, LGW)', '(January 5, 2017) (NL, RLS, rob w/ MALF, Baer, rob)', '(January 4, 2017) part 1 part 2 (NL, RLS, rob w/ MALF, LGW)', '(January 2, 2017) (NL, RLS, JS w/ LGW)', '(December 29, 2016) (NL, RLS, LGW w/ rob, Baer, JS)', '(December 28, 2016) (NL, RLS, LGW w/ rob, Baer)', '(December 26, 2016) (NL, RLS, JS w/ rob, Sin)', 'Bootleg (December 22, 2016) part 1 part 2 part 3 (NL, LGW w/ Kate)', '(December 21, 2016) (NL, RLS, CS w/ Baer, rob, LGW, Dan)', '(December 19, 2016) (NL, RLS, JS, rob w/ LGW)', '(December 15, 2016) (NL, RLS, MALF, rob w/ Kate, LGW)', '(December 14, 2016) (NL, RLS, CS w/ LGW)', '(December 12, 2016) (NL, RLS, JS, LGW)', '(December 8, 2016) (NL, RLS, MALF, rob w/ LGW)', '(December 7, 2016) (NL, RLS, rob, LGW w/ MALF)', '(December 6, 2016) (NL, RLS, MALF, LGW w/ rob)', '(December 5, 2016) (NL, RLS, JS, rob w/ LGW, Baer)', '(November 30, 2016) (NL, RLS, rob, LGW)', '(November 28, 2016) (NL, RLS, rob, LGW w/ Baer)', '(November 24, 2016) (NL, MALF, rob, LGW)', '(November 23, 2016) (NL, RLS, CS w/ rob, LGW, Dan, MALF, Baer)', '(November 21, 2016) (NL, RLS, rob, LGW w/ Baer, Sin)', '(November 17, 2016) (NL, RLS, MALF, rob w/ Baer, LGW)', '(November 16, 2016) (NL, RLS, CS w/ Mathas, rob)', '(November 14, 2016) (NL, RLS, JS, rob w/ LGW, BRex)', '(November 10, 2016) (NL, RLS, rob, LGW w/ JS, MALF)', '(November 9, 2016) (NL, JS, MALF, rob, LGW)', '(November 7, 2016) (NL, RLS, JS, LGW)', '(November 3, 2016) (NL, RLS w/ rob, LGW)', '(November 2, 2016) Nick view (NL, RLS w/ MALF, rob, LGW, Dan)', '(October 31, 2016) (NL, RLS, JS, Sin w/ rob, LGW)', '(October 27, 2016) (NL, RLS, MALF w/ 
LGW, rob)', '(October 26, 2016) Nick view (NL, RLS, CS, rob w/ LGW)', '(October 24, 2016) (NL, RLS, JS, LGW w/ rob, Baer)', '(October 20, 2016) (NL, RLS, MALF w/ rob, Baer, LGW)', '(October 19, 2016)(NL, RLS, CS, rob w/ Baer)', '(October 17, 2016) (NL, RLS, JS, MALF w/ rob, LGW, Baer)', '(October 13, 2016) (NL, RLS w/ rob, LGW)', '(October 12, 2016) (NL, RLS, CS, LGW w/ MALF, rob, Baer)', '(October 10, 2016) (NL, RLS, JS, rob w/ LGW, GhostBill)', '(October 6, 2016) (NL, RLS, MALF, LGW w/ rob, Baer)', '(October 5, 2016) (NL, RLS, rob, LGW w/ Baer, Sin)', '(October 3, 2016) (NL, RLS, JS w/ rob)', '(September 29, 2016) (NL, RLS, rob, LGW)', '(September 28, 2016) (NL, RLS, CS w/ rob, LGW)', '(September 26, 2016) (NL, RLS, JS w/ rob, Kate)', '(September 22, 2016) (NL, RLS, MALF w/ rob, LGW)', '(September 12, 2016) (NL, RLS, JS w/ MALF, rob, LGW)', '(September 10, 2016) (NL, RLS, MALF w/ rob, Baer)', '(September 8, 2016) (NL, RLS, MALF w/ alpacapatrol, LGW, Sin)', 'Solo (September 7, 2016) (NL w/ Sin, MALF, LGW, Baer, Dan, rob)', '(August 31, 2016) (NL, RLS w/ rob, LGW, Dan)', '(August 29, 2016) Nick view (NL, RLS, JS w/ rob, LGW)', '(August 25, 2016) (NL, RLS w/ rob, MALF, LGW)', '(August 24, 2016) (NL, MALF w/ rob, LGW, dan)', '(August 22, 2016) (NL, MALF w/ rob, LGW)', '(August 18, 2016) (NL, RLS w/ JS, MALF, rob, LGW)', '(August 17, 2016) (NL, RLS, CS w/ rob, Baer, LGW, Dan)', '(August 15, 2016) part 1 part 2 Nick view (NL, RLS, JS w/ rob, Baer, LGW)', '(August 10, 2016) (NL, RLS w/ rob, LGW)', '(August 8, 2016) (NL, RLS, JS w/ LGW, Baer)', '(August 4, 2016) Nick view (NL, RLS, MALF w/ rob, LGW, Sin, Baer)', '(August 3, 2016) part 1 part 2 (NL, RLS, CS w/ rob, LGW)', '(August 1, 2016) (NL, RLS, JS w/ Baer, LGW)', 'Bootleg (July 28, 2016) (NL, MALF w/ JS, LGW)', 'Solo (July 27, 2016) (NL, MALF)', 'Solo (July 25, 2016) (NL)', '(July 21, 2016) (NL, RLS w/ LGW, Sin, Mathas)', '(July 20, 2016) (NL, RLS, CS w/ Baer, LGW)', '(July 18, 2016) Nick view (NL, RLS w/ MALF, 
LGW)', '(July 14, 2016) (NL, RLS w/ LGW, Sin)', '(July 13, 2016) (NL, RLS, CS w/ Sin)', '(July 11, 2016) (NL, RLS w/ Baer, rob, LGW)', 'Solo (July 7, 2016) (NL w/ rob)', '(July 6, 2016) (NL, RLS, CS w/ LGW)', '(July 4, 2016) (NL, RLS, JS w/ rob, LGW)', '(June 30, 2016) (NL, RLS w/ MALF, rob, Baer, LGW)', '(June 29, 2016) (NL, RLS, CS w/ rob, LGW)', '(June 27, 2016) (NL, RLS, JS w/ rob, LGW)', '(June 23, 2016) (NL, RLS w/ rob, MALF, Dan)', '(June 20, 2016) (NL, RLS, JS w/ rob, Mathas)', '(June 9, 2016) Nick view (NL, RLS, rob w/ LGW)', '(June 8, 2016) (NL, RLS, CS w/ rob, LGW)', '(June 6, 2016) (NL, RLS, JS w/ LGW, MALF, Dan)', '(June 2, 2016) (NL, RLS w/ rob, LGW)', '(June 1, 2016) (NL, RLS, CS w/ Baer, LGW)', '(May 30, 2016) (NL, RLS, JS w/ rob, Baer, LGW)', '(May 26, 2016) (NL, RLS w/ rob, LGW, Baer)', '(May 25, 2016) (NL, RLS, CS w/ rob, MALF)', '(May 23, 2016) (NL, JS w/ rob, Dan, Sin, LGW)', 'Bootleg Solo (May 19, 2016) (NL w/ Mathas, rob, LGW, Sin)', 'Bootleg Solo (May 18, 2016) (NL w/ Sin, rob, LGW)', 'Bootleg (May 16, 2016) (NL, RLS, JS w/ rob, LGW)', '(May 12, 2016) Nick view (NL, RLS w/ rob, LGW)', '(May 11, 2016) (NL, RLS, CS w/ rob)', 'Solo (May 9, 2016) (NL w/ Sin, rob, LGW)', '(May 5, 2016) (NL, RLS w/ JS, rob)', '(May 4, 2016) (NL, RLS, rob w/ MALF)', '(May 2, 2016)(NL, RLS, JS w/ MALF)', 'Solo (April 21, 2016) (NL w/ Sin, rob, LGW)', 'Solo (April 20, 2016) (NL w/ Sin)', '(April 18, 2016) (NL, RLS, JS)', 'Bootleg (April 14, 2016) (NL, RLS w/ MALF, rob, LGW)', '(April 13, 2016) (NL, RLS w/ rob, MALF, Dan, LGW)', '(April 11, 2016) (NL, RLS, JS)', '(April 7, 2016) (NL, RLS w/ MALF, rob, LGW)', '(April 6, 2016) (NL, RLS w/ LGW)', '(April 4, 2016) (NL, RLS, JS w/ rob)', '(March 31, 2016) (NL, RLS w/ MALF, rob, LGW)', '(March 30, 2016) (NL, RLS, CS w/ JS, rob)', '(March 28, 2016) (NL, RLS w/ MALF, rob)', '(March 24, 2016) part 1 part 2 Nick view (NL, RLS w/ LGW)', '(March 23, 2016) part 1 part 2 (NL, RLS, CS w/ MALF, rob)', 'Bootleg Solo (March 3, 2016) 
(NL, RLS w/ rob, Dan, LGW)', '(March 2, 2016) (NL, RLS, CS w/ MALF)', '(February 29, 2016) (NL, RLS, JS w/ rob)', '[3 year NLversary!] (February 25, 2016) (NL, RLS, dan)', '(February 24, 2016) (NL, RLS, CS)', '(February 22, 2016) (NL, RLS, JS w/ rob)', '(February 18, 2016) (NL, RLS)', 'Bootleg Solo (February 17, 2016) (NL)', '(February 15, 2016) (NL, RLS, JS w/ MALF)', '(February 10, 2016) (NL, RLS, CS w/ MALF)', '(February 8, 2016) (NL, RLS, JS)', '(February 4, 2016) (NL, RLS w/ MALF)', '(February 3, 2016) (NL, RLS, CS w/ MALF)', '(February 1, 2016) (NL, RLS, JS)', '(January 28, 2016) (NL, RLS, MALF)', '(January 27, 2016) (NL, RLS)', '(January 25, 2016) (NL, RLS, JS, MALF)', '(January 21, 2016) (NL, RLS, MALF w/ rob)', '(January 20, 2016) (NL, RLS w/ rob)', '(January 18, 2016) (NL, RLS, JS)', '(January 13, 2016) (NL, RLS, CS w/ rob)', '(January 11, 2016) (NL, RLS, JS w/ rob)', '(January 7, 2016) (NL, RLS w/ MALF, rob)', '(January 6, 2016) (NL, RLS, CS w/ rob)', '(January 4, 2016) Nick view (NL, RLS, JS w/ rob)', '(December 31, 2015) (NL, RLS w/ rob)', '(December 30, 2015) (NL, RLS, CS)', '(December 28, 2015)(NL, RLS, JS)', 'Solo (December 24, 2015) (NL w/ Kate)', '(December 23, 2015) part 1 part 2 (NL, RLS, CS)', '(December 21, 2015) (NL, RLS, JS, w/ MALF)', 'Solo (December 19, 2015) (NL w/ Kate, Baer)', 'Solo (December 18, 2015) (NL w/ Kate)', '(December 10, 2015) (NL, RLS, MALF)', '(December 9, 2015) (NL, RLS, CS w/ rob)', '(December 7, 2015) part 1 part 2 (NL, RLS)', '(December 2, 2015) (NL, RLS, CS w/ rob)', '(November 30, 2015) part 1 part 2 (NL, RLS, MALF w/ rob)', 'Solo (November 26, 2015) (NL)', '(November 25, 2015) (NL, RLS, CS w/ rob)', '(November 23, 2015) (NL, RLS, JS w/ rob)', '(November 19, 2015) (NL, RLS, MALF)', '(November 18, 2015) part 1 part 2 (NL, RLS w/ Baer, rob)', '(November 16, 2015) (NL, RLS, JS)', '(November 12, 2015) (NL, RLS w/ rob, Baer)', '(November 11, 2015) (NL, RLS, CS w/ rob)', '(November 9, 2015) (NL, RLS, JS w/ Baer)', 
'(November 5, 2015) (NL, RLS, MALF)', '(November 4, 2015) (NL, RLS, CS)', '(November 2, 2015) part 1 part 2 (NL, RLS, JS)', '(October 29, 2015) (NL, RLS)', 'Bootleg Solo (October 28, 2015) part 1, part 2, part 3 (NL)', 'Bootleg (October 26, 2015) part 1 part 2 part 3 (NL, JS, MALF)', 'Bootleg Solo (October 22, 2015) part 1, part 2, part 3 (NL)', 'Solo (October 21, 2015) part 1 part 2 (NL)', '(October 19, 2015) part 1 part 2 (NL, JS, MALF)', '(October 15, 2015) part 1 part 2 (NL, MALF)', '(October 14, 2015) part 1 part 2 (NL, MALF)', 'October 7, 2015 (NL, RLS, CS w/ rob)', '(October 5, 2015) part 1 part 2 (NL, RLS, JS w/ rob)', '(October 1, 2015) part 1 part 2 (NL, RLS w/ rob, Dan)', '(September 30, 2015) part 1 part 2 (NL, RLS, CS w/ rob)', '(September 28, 2015) part 1 part 2 (NL, RLS, JS)', '(September 24, 2015) (NL, RLS)', '(September 23, 2015) (NL, RLS, CS w/ rob)', '(September 21, 2015) (NL, RLS, JS w/ rob)', '(September 17, 2015) part 1 part 2 Nick view (NL, RLS)', '(September 16, 2015) part 1 part 2 (NL, RLS, CS)', '(September 14, 2015) part 1 part 2 (NL, RLS, JS)', '(September 10, 2015) (NL, RLS)', '(September 9, 2015) part 1 part 2 (NL, RLS, CS w/ rob)', '(September 7, 2015) part 1 part 2 (NL, RLS, JS)', '(September 3, 2015) part 1 part 2 (NL, RLS w/ Baer, MALF)', '(September 2, 2015) part 1 part 2 Nick view (NL, RLS, CS w/ rob)', '(August 24, 2015) (NL, RLS, JS)', '(August 20, 2015) part 1 part 2 (NL, RLS)', '(August 19, 2015) part 1 part 2 Nick view (NL, RLS, CS w/ rob)', '(August 17, 2015) part 1 part 2 Nick view (NL, RLS, JS)', '(August 13, 2015) part 1 part 2 Nick view (NL, RLS, rob)', '(August 12, 2015) part 1 part 2 Nick view (NL, RLS, CS w/ JS)', '(August 10, 2015) part 1 part 2 part 3 Nick view (NL, RLS, JS)', 'Bootleg (August 6, 2015) part 1, part 2 (NL, RLS w/ rob, Baer, MALF)', '(August 5, 2015) part 1 part 2 Nick view (NL, RLS, CS)', '(August 3, 2015) part 1 part 2 (NL, RLS, JS w/ rob)', '(July 30, 2015) part 1 part 2 Nick view (NL, RLS)', 
'(July 29, 2015) part 1 part 2 part 3 (NL, RLS w/ rob, Mathas)', '(July 27, 2015) part 1 part 2 (NL, RLS, JS)', '(July 16, 2015) part 1 part 2 Nick view (NL, RLS w/ Brex)', '(July 15, 2015) part 1 part 2 Nick view (NL, RLS, CS)', '(July 13, 2015) part 1 part 2 Nick view (NL, RLS, JS w/ Baer)', '(July 9, 2015) part 1 part 2 part 3 Nick view (NL, RLS w/ Baer, MALF)', '(July 8, 2015) Part 1 Part 2 (NL, RLS, CS)', '(July 6, 2015) part 1 part 2 (NL, RLS, JS)', '(July 2, 2015) part 1 part 2 Nick view (NL, RLS w/ rob)', '(July 1, 2015) Nick view (NL, RLS, CS w/ rob)', '(June 29, 2015) part 1 part 2 Nick view (NL, RLS, JS)', '(June 25, 2015) Nick view (NL, RLS w/ rob)', '(June 24, 2015) Nick view (NL, RLS, CS w/ rob)', '(June 22, 2015) part 1 part 2 Nick view (NL, RLS, JS)', '(June 18, 2015) part 1 part 2 (NL, RLS)', '(June 17, 2015) part 1 part 2 (NL, RLS)', '(June 15, 2015) part 1 part 2 (NL, RLS, JS w/ MALF)', '(June 11, 2015) part 1 part 2 (NL, RLS w/ rob, MALF)', '(June 10, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', '(June 8, 2015) part 1 part 2 (NL, RLS w/ rob)', '(May 28, 2015) part 1 part 2 (NL w/ Baer, Fox)', '(May 27, 2015) part 1 part 2 (NL)', '(May 25, 2015) part 1 part 2 (NL, Arumba)', '(May 21, 2015) part 1 part 2 (NL, RLS)', '(May 20, 2015) part 1 part 2 (NL, RLS)', 'Bootleg (May 18, 2015) part 1 part 2 part 3 (NL, RLS w/ rob, Baer)', 'Bootleg (May 14, 2015) part 1 part 2 part 3 (NL, RLS)', 'Bootleg (May 13, 2015) part 1 part 2 part 3 (NL, RLS w/ rob, Baer)', 'Bootleg (May 11, 2015) part 1 part 2 part 3 (NL, RLS)', '(May 7, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', '(May 6, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', '(May 4, 2015) Part 1 part 2 (NL, RLS w/ rob, Baer)', 'Bootleg Solo (April 23, 2015) part 2 part 3 (NL)', 'Bootleg (April 22, 2015) part 1 part 2 part 3 (NL, RLS w/ rob, Baer)', '(April 20, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', 'Bootleg (April 16, 2015) part 1 part 2 part 3 (NL, RLS w/ rob, Baer)', '(April 15, 2015) part 1 part 2 
(NL, RLS w/ cobaltstreak, baer, rob)', 'Bootleg (April 13, 2015) part 1 part 2 part 3 (NL, RLS w/ rob, Baer)', '(April 9, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', '(April 8, 2015) part 1 part 2 part 3 (NL, RLS w/ rob)', '(April 6, 2015) part 1 part 2 (NL, RLS)', '(April 2, 2015) part 1 part 2 (NL, RLS w/ Baer)', '(April 1, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', '(March 26, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', 'Bootleg (March 25, 2015) part 1 part 2 part 3 (NL, RLS)', '(March 23, 2015) part 1 part 2 (NL, RLS)', '(March 19, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', '(March 18, 2015) part 1 part 2 (NL, RLS)', 'Solo (March 12, 2015) part 1 part 2 (NL)', '(March 11, 2015) part 1 part 2 (NL, RLS w/ rob, Baer)', 'Bootleg (February 26, 2015) part 1 part 2 part 3 (NL, RLS w/ rob, fox)', '[2 year NLversary!] Bootleg (February 25, 2015) part 1 part 2 part 3 (NL, RLS w/ JS, rob)', '(February 23, 2015) part 1 part 2 (NL, RLS w/ JS, rob)', 'Bootleg (February 19, 2015) part 1 part 2 part 3 (NL, RLS w/ rob)', '(February 18, 2015) part 1 part 2 (NL, RLS w/ JS, rob)', '(February 16, 2015) part 1 part 2 (NL, RLS w/ JS, rob)', '(February 12, 2015) part 1 part 2 (NL, RLS)', '(February 11, 2015) part 1 part 2 (NL w/ Baer)', '(February 9, 2015) part 1 part 2 (NL, RLS)', '(February 5, 2015) part 1 part 2 part 3 (NL, RLS w/ JS, rob)', '(February 2, 2015) part 1 part 2 (NL, RLS w/ rob)', '(January 29, 2015) part 1 part 2 (NL, RLS w/ rob)', '(January 28, 2015) part 1 part 2 (NL, RLS)', 'Bootleg (January 8, 2015) part 1, part 2, part 3 (NL, RLS)', '(January 7, 2015) part 1 part 2 (NL, RLS w/ JS, rob)', '(January 5, 2015) part 1 part 2 (NL, RLS w/ rob)', '(January 1, 2015) part 1 part 2 (NL, RLS)', '(December 31, 2014) part 1 part 2 (NL, RLS)', '(December 29, 2014) part 1, part 2 (NL, RLS w/ fox)', 'Bootleg (December 22, 2014) part 1, part 2 (NL, RLS)', '(December 18, 2014) part 1, part 2 (NL)', '(December 15, 2014) part 1, part 2 (NL, RLS)', 'Bootleg (December 11, 2014) 
part 1 part 2 (NL, RLS w/ Kate, Baer)', '(December 10, 2014) part 1, part 2 (NL, RLS)', '(December 8, 2014) part 1, part 2 (cat cam!) (NL, RLS w/ rob, Mag)', '(December 4,2014) part 1, part 2 (NL, RLS)', '(December 3, 2014) part 1, part 2 (NL, RLS)', '(November 27, 2014) part 1, part 2 (NL)', '(November 26, 2014) part 1, part 2 (NL)', '(November 24, 2014) part 1, part 2 (NL, RLS)', '(November 20, 2014) part 1, part 2 (NL, RLS)', 'Bootleg (November 19, 2014) part 1, part 2 (NLS, RLS, JS!)', '(November 17, 2014) part 1, part 2 (NL, RLS w/ rob, Baer)', '(November 13, 2014) part 1, part 2 (NL, RLS)', "(November 12, 2014) part 1 Bootleg Nick's view part 2 (NL, RLS w/ rob, Baer)", 'Bootleg (November 6, 2014) part 1 part 2 (NL, RLS w/ rob, Baer)', '(November 5, 2014) part 1, part 2 (NL, RLS)', '(November 3, 2014) part 1, part 2 (NL, RLS w/ rob)', '(October 30, 2014) part 1, part 2 (NL, RLS)', '(October 29, 2014) part 1, part 2 (NL, RLS)', '(October 27, 2014) part 1, part 2 (NL, RLS w/rob, Baer)', 'Bootleg (October 23, 2014) part 1, part 2 (NL, RLS)', '(October 22, 2014) Part 1, Part 2 (NL, RLS w/ rob, MALF)', '(October 20, 2014) Part 1, Part 2 (NL, RLS w/ rob, Baer)', '(October 16, 2014) part 1, part 2 (NL, RLS)', '(October 15, 2014) part 1, part 2, part 3 (NL, RLS w/ rob, Baer)', '(October 13, 2014) part 1, part 2 (NL, RLS)', '(October 9, 2014) part 1, part 2 (NL, RLS w/ rob, Baer)', '(October 8, 2014) part 1, part 2 (NL, RLS)', '(October 6, 2014) part 1, part 2, part 3 (NL, RLS w/ rob)', '(October 2, 2014) part 1, part 2 (NL, RLS w/ rob, Baer)', '(October 1, 2014) part 1, part 2 (NL, RLS)', '(September 29, 2014) part 1, part 2 (NL)', '(September 25, 2014) part 1, part 2, part 3 (NL, RLS w/ rob, Mag)', '(September 24, 2014) part 1, part 2 (NL, RLS w/ rob, Baer)', '(September 22, 2014) part 1, part 2 (NL, RLS w/ rob, Mag)', '(September 18, 2014) part 1 part 2 (NL, RLS)', '(September 17, 2014) part 1 part 2 (NL, RLS w/ JS, rob)', '(September 15, 2014) part 1, part 2 (NL, 
RLS w/ JS, rob)', '(September 11, 2014) part 1, part 2 (NL, RLS)', '(September 10, 2014) part 1, part 2 (NL, RLS w/ JS, Mag)', '(September 8, 2014) part 1, Part 2 (NL, RLS w/ JS)', '(August 27, 2014) part 1, part 2 (NL, RLS in person)', '(August 25, 2014) part 1, part 2 (NL, RLS, Kate in person)', '(August 13, 2014) part 1, part 2 (NL, RLS)', '(August 12, 2014) part 1, part 2 (NL, RLS w/ rob)', '(August 7, 2014) part 1, part 2 (NL, RLS w/ rob)', '(August 6, 2014) part 1, part 2 (NL, RLS)', '(August 4, 2014) part 1, part 2 (NL, RLS w/ rob)', '(July 31, 2014) part 1, part 2 (NL, RLS)', '(July 30, 2014) part 1, part 2 (NL, RLS)', '(July 28, 2014) part 1, part 2 (NL, RLS w/ Kate, Baer, rob)', '(July 24, 2014) part 1, part 2 (NL, RLS w/ rob, Baer)', '(July 21, 2014) part 1, part 2 (NL, RLS)', '(July 16, 2014) part 1, part 2, part 3 (NL, RLS)', '(July 14, 2014) part 1, part 2 (NL, RLS)', '(July 10, 2014) part 1, part 2 (NL, RLS w/ Baer)', '(July 9, 2014) part 1, part 2 (NL, RLS)', '(July 7, 2014) part 1, part 2 (NL, RLS w/ Baer)', '(July 2, 2014) part 1, part 2 (NL, RLS)', '(June 30, 2014) part 1, part 2 (NL, RLS w/ Kate, rob)', '(June 26, 2014) part 1, part 2 (NL, RLS)', '(June 25, 2014) part 1, part 2 (NL, RLS)', '(June 19, 2014) part 1, part 2 (NL, RLS w/ rob)', '(June 18, 2014) part 1, part 2 (NL, RLS w/ Kate, rob, Baer, Mathas)', '(June 16, 2014) part 1, part 2 (NL, RLS, JS)', '(June 12, 2014) part 1, part 2 (NL, RLS, JS)', '(June 11, 2014) part 1, part 2 (NL, RLS, JS w/ Mathas)', '(June 9, 2014) part 1, part 2 (NL, RLS, JS)', '(June 5, 2014) part 1, part 2 (NL, RLS, JS)', '(June 4, 2014) part 1, part 2 (NL, RLS, JS)', '(June 2, 2014) part 1, part 2 (NL, RLS, JS)', '(May 29, 2014) part 1, part 2 (NL, RLS, JS)', '(May 28, 2014) part 1, part 2 (NL, RLS, JS)', '(May 26, 2014) part 1, part 2 (NL, RLS, JS)', '(May 15, 2014) part 1, part 2 (NL, RLS)', '(May 14, 2014) part 1, part 2 (NL, RLS)', '(May 12, 2014) part 1, part 2 (NL, RLS)', '(May 8, 2014) part 1, part 2 (NL, 
RLS, JS)', '(May 5, 2014) part 1, part 2 (NL, RLS, JS w/ Mike Bithell)', '(May 1, 2014) part 1, part 2 (NL, RLS, JS)', '(April 30, 2014) part 1, part 2 (NL, RLS, JS)', '(April 28, 2014) part 1, part 2 (NL, RLS, JS)', '(April 24, 2014) part 1, part 2 (NL, RLS, JS)', '(April 23, 2014) part 1, part 2 (NL, RLS, JS)', '(April 21, 2014) part 1, part 2 (NL, RLS, JS)', '(April 17, 2014) part 1, part 2 (NL, RLS, JS)', '(April 16, 2014) part 1, part 2 (NL, RLS, JS)', '(April 7, 2014) part 1, part 2 (NL, RLS, JS)', '(April 3, 2014) part 1, part 2 (NL, RLS, JS)', '(April 2, 2014) part 1, part 2 (NL, RLS, JS)', '(March 31, 2014) part 1, part 2 (NL, RLS, JS)', '(March 27, 2014) part 1, part 2 (NL, RLS, JS)', '(March 26, 2014) part 1, part 2 (NL, RLS, JS)', '(March 24, 2014) part 1, part 2 (NL, RLS, JS)', '(March 13, 2014) part 1, part 2 (NL, RLS, JS)', '(March 12, 2014) part 1, part 2 (NL, RLS, JS)', '(March 10, 2014) part 1, part 2 (NL, RLS, JS)', '(March 6, 2014) part 1, part 2 (NL, RLS, JS)', '(March 5, 2014) part 1, part 2 (NL, RLS, JS)', '(March 3, 2014) part 1, part 2 (NL, RLS, JS)', '(February 27, 2014) part 1, part 2 (NL, RLS, JS)', '(February 26, 2014) part 1, part 2 (NL, RLS, JS)', '(February 24, 2014) part 1, part 2 (NL, RLS, JS)', '(February 20, 2014) part 1, part 2 (NL, RLS)', '(February 19, 2014) part 1, part 2 (NL, RLS, JS)', '(February 17, 2014) part 1, part 2 (NL, RLS, JS)', '(February 13, 2014) part 1, part 2 (NL, RLS)', '(February 12, 2014) part 1, part 2, part 3, part 4, part 5 (NL, RLS)', '(February 10, 2014) part 1, part 2 (NL, RLS, JS)', '(February 6, 2014) part 1, part 2 (NL, RLS, JS)', '(February 5, 2014) part 1, part 2 (NL, RLS, JS, MALF)', '(February 3, 2014) part 1, part 2 (NL, RLS w/ rob, MALF)', '(January 30, 2014) part 1, part 2 (NL, RLS, JS w/ Mike Bithell)', '(January 29, 2014) part 1, part 2 (NL, RLS, JS)', '(January 27, 2014) (NL, RLS, JS w/ Crendor)', '(January 20, 2014) part 1, part 2 (NL, RLS, JS)', '(January 16, 2014) part 1, part 2 (NL, 
RLS, JS)', '(January 15, 2014) part 1, part 2 (NL, RLS, JS)', '(January 13, 2014) part 1, part 2 (NL, RLS, MALF)', '(January 9, 2014) part 1, part 2 (NL, RLS, MALF w/ rob)', '(January 8, 2014) part 1, part 2 (NL, RLS, JS)', '(January 6, 2014) part 1, part 2 (NL, RLS, JS)', '(December 19, 2013), part 1, part 2 (NL, RLS, JS)', '(December 18, 2013), part 1, part 2 (NL, JS)', '(December 16, 2013), part 1, part 2 (NL, RLS, JS)', '(December 12, 2013), part 1, part 2 (NL, RLS, JS)', '(December 11, 2013), part 1, part 2 (NL, RLS, JS)', '(December 9, 2013), part 1, part 2 (NL, RLS, JS)', '(December 5, 2013), part 1, part 2 (NL, RLS, JS)', '(December 4, 2013), part 1, part 2 (NL, RLS, JS)', '(December 2, 2013), part 1, part 2 (NL, RLS, JS)', '(November 28, 2013), part 1, part 2 (NL, JS)', '(November 27, 2013), part 1, part 2 (NL, RLS, JS)', '(November 25, 2013), part 1, Part 2 (NL, RLS, JS)', '(November 21, 2013), part 1, part 2 (NL, RLS, JS)', '(November 20, 2013), part 1, part 2 (NL, RLS, JS)', '(November 18, 2013), part 1, part 2 (NL, RLS, JS)', '(November 14, 2013), part 1, part 2 (NL, RLS, JS)', '(November 13, 2013), part 1, part 2 (NL, RLS, MALF)', '(November 11, 2013), part 1, part 2 (NL, RLS, MALF)', '(November 7, 2013), part 1, part 2 (NL, RLS, JS, MALF)', '(November 6, 2013), part 1, part 2 (NL, RLS, JS)', '(November 4, 2013), part 1, part 2 (NL, RLS, MALF)', '(October 31, 2013) part 1 part 2 (NL, RLS, JS)', '(October 30, 2013), part 1, part 2 (NL, RLS, JS)', '(October 28, 2013) (NL, RLS, JS)', '(October 24, 2013) Part 1, Part 2 (NL, RLS, JS)', '(October 23, 2013) (NL, RLS, JS w/ rob)', '(October 21, 2013) (NL, RLS, JS)', '(October 17, 2013) (NL, RLS, JS w/ rob)', '(October 16, 2013) (NL, RLS, JS w/ rob)', '(October 14, 2013) (NL, RLS, JS)', '(October 10, 2013) (NL, RLS, JS w/ RPG)', '(October 9, 2013) (NL, RLS, JS)', '(October 7, 2013) (NL, RLS, JS)', '(October 3, 2013) (NL, RLS, JS)', '(October 2, 2013) (NL, RLS, JS)', '(September 30, 2013) Part 1, Part 2 (NL, 
RLS, JS)', '(September 26, 2013) (NL, RLS, JS)', '(September 25, 2013) Part 1, Part 2 (NL, RLS, JS)', '(September 23, 2013) (NL, RLS, JS)', '(September 19, 2013) (NL, RLS, JS)', '(September 18, 2013) (NL, RLS, JS, MALF)', '(September 16, 2013) (NL, RLS, JS)', '(September 12, 2013) (NL, RLS, JS)', '(September 11, 2013) (NL, RLS, JS)', '(September 9, 2013) (NL, RLS, JS)', '(September 5, 2013) (NL, RLS, JS)', '(September 4, 2013) (NL, RLS, JS w/ Ohm)', '(August 26, 2013) part 1, part 2 (NL, RLS, JS)', '(August 22, 2013) (NL, RLS, JS w/ Ohm)', '(August 21, 2013) (NL, RLS, JS w/ Ohm)', '(August 19, 2013) (NL, RLS, JS)', '(August 15, 2013) (NL, RLS, JS)', '(August 14, 2013) (NL, RLS, JS)', '(August 12, 2013) (NL, RLS, JS)', '(August 1, 2013) (NL, RLS, JS w/ Kate)', '(July 31, 2013) (NL, RLS, JS w/ Kate)', '(July 29, 2013) (NL, RLS, JS w/ Ohm)', '(July 25, 2013) (NL, RLS, JS w/ Kate)', '(July 24, 2013) (NL, RLS, JS w/ Kate, MALF)', '(July 22, 2013) (NL, RLS, JS w/ Mike Bithell)', '(July 18, 2013) (NL, RLS, JS w/ Kate)', '(July 17, 2013) part 1, part 2 (NL, RLS, JS w/ Kate)', '(July 15, 2013) (NL, RLS, JS)', '(July 11, 2013) (NL, RLS, JS)', '(July 8, 2013) (NL, RLS, JS)', '(July 4, 2013) (NL, RLS, JS w/ Ohm)', '(July 3, 2013) (NL, RLS, JS w/ Kate, Ohm, rob)', '(July 1, 2013) (NL, RLS, JS)', '(June 20, 2013) (NL, RLS, JS w/ Ohm, rob, Pixel)', '(June 19, 2013) (NL, RLS, JS w/ Ohm, rob)', '(June 17, 2013) (NL, RLS, JS w/ Ohm, Green)', '(June 13, 2013) (NL, RLS, JS w/ Ohm, rob, LGW)', '(June 12, 2013) (NL, RLS, JS w/ Ohm, rob, RPG, Mathas)', '(June 10, 2013) (NL, RLS, JS w/ Ohm, rob)', '(June 5, 2013) (NL, RLS, JS w/ Ohm)', '(June 3, 2013) (NL, RLS, JS w/ Green, rob)', '(May 30, 2013) (NL, RLS, JS w/ Ohm, rob, Green)', '(May 29, 2013) (NL, RLS, JS)', '(May 27, 2013) (NL, RLS, JS w/ Ohm, rob)', '(May 23, 2013) (NL, RLS, JS w/ Kate, Ohm)', '(May 22, 2013) (NL, RLS, JS w/ Kate, Ohm, rob, Green)', '(May 20, 2013) (NL, RLS, JS w/ Kate, Ohm, Green)', '(May 16, 2013) (NL, RLS w/ Ohm, 
rob, Pixel)', '(May 15, 2013) (NL, RLS, Ohm)', '(May 13, 2013) (NL, RLS w/ Ohm, Mathas, rob)', '(May 2, 2013) (NL, RLS, JS w/ Ohm, RPG, Green)', '(May 1, 2013) (NL, RLS, JS w/ Ohm)', '(April 29, 2013) (NL, RLS, JS w/ Ohm, rob, Mathas, Green)', '(April 25, 2013) (NL, RLS, JS w/ rob, Green)', '(April 24, 2013) (NL, RLS, JS w/ Ohm)', '(April 22, 2013) (NL, RLS, JS w/ Ohm, rob, Green, RPG)', '(April 18, 2013) (NL, RLS w/ RPG, Green, Ohm, rob)', '(April 17, 2013) (NL, RLS, JS w/ Green, Ohm)', '(April 15, 2013) (NL, RLS, JS w/ Ohm, Green, rob, Mathas, MALF)', '(April 11, 2013) (NL, RLS, JS w/ Green, rob)', '(April 10, 2013) (NL, RLS, JS w/ Green, Ohm)', '(April 8, 2013) (NL, RLS, JS w/ Ohm, RPG, MALF)', '(April 4, 2013) (NL, RLS, JS w/ Green, Ohm)', '(April 3, 2013) (NL, RLS, JS w/ MALF, Ohm)', '(April 1, 2013) (NL, RLS, JS w/ Ohm)', '(March 28, 2013) (NL, RLS, JS w/ RPG, Ohm)', '(March 27, 2013) (NL, RLS, JS w/ Ohm)', '(March 18, 2013) (NL, RLS, JS)', '(March 14, 2013) (NL, RLS, JS w/ MALF)', '(March 13, 2013) (NL, RLS, JS w/ Ohm)', '(March 11, 2013) (NL, RLS, JS w/ Ohm)', '(March 6, 2013) (NL, RLS, JS)', '(March 4, 2013) (NL, RLS, JS)', '(February 28, 2013) (NL, RLS, JS)', '(February 27, 2013) (NL, Kate)', '(February 25, 2013) (NL)']
| MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
I'm going to use regex to split these up. | import re

date = []
crew = []
for entry in date_crew:
    inner = re.search(r'\((.*)\)', entry).group(1) #greedy: spans first '(' to last ')'
    d = inner.split(')')[0] #date sits before the first ')'
    date.append(d)
    c = inner.split('(')[-1] #crew sits after the last '('
    crew.append(c) | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
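To make the regex behaviour concrete, here it is applied to a single entry copied from the printed list above. The greedy `(.*)` captures everything from the first `(` to the last `)`, so splitting the captured text once more recovers both pieces:

```python
import re

entry = '(August 24, 2017) (NL, RLS, CS, rob)'
inner = re.search(r'\((.*)\)', entry).group(1)  # 'August 24, 2017) (NL, RLS, CS, rob'
d = inner.split(')')[0]   # text before the first ')'
c = inner.split('(')[-1]  # text after the last '('
print(d, '|', c)  # August 24, 2017 | NL, RLS, CS, rob
```

Note this relies on the crew list never containing parentheses of its own, which holds for this file.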
Now I'll start creating a data frame of this information. | import pandas as pd

date_df = pd.DataFrame(date, columns=["Date"])
date_df.head()
games_df = pd.DataFrame(games, columns = ["Docket"])
games_df.head()
crew_df = pd.DataFrame(crew, columns = ["Crew"])
crew_df.head() | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
Now combine them | nlss_df = pd.DataFrame()
nlss_df['Date'] = date_df['Date']
nlss_df['Crew'] = crew_df['Crew']
nlss_df['Docket'] = games_df['Docket']
nlss_df.head()
nlss_df.describe() | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
I noticed that some lines had a link called "(continued)" in the games list. I want to get rid of these. While I'm at it, let's make the games docket contain the games as lists. | improved = []
#For each docket
for d in nlss_df['Docket']:
#Split docket into list of games
d = d.split(r',')
#For each game (iterate over a copy, since removing while iterating skips items)
for g in list(d):
#If game matches string to remove
if g == r" (continued)" or g == r" (Continued)":
#Remove game
d.remove(g)
improved.append(d)
nlss_df['Docket'] = improved
nlss_df.head()
nlss_df['Crew'] | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
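A note on the cell above: calling `remove()` on a list while iterating over it can silently skip the element that follows each removal. A standalone sketch of a filter that avoids the pitfall (hypothetical docket string):

```python
# Hypothetical docket containing a "(continued)" link
docket = "Isaac, (continued), Golf With Friends".split(",")

# Building a new list avoids mutating the one being iterated over
cleaned = [g for g in docket if g.strip().lower() != "(continued)"]
print(cleaned)
```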
I want to split on "w/" so each crew member is an individual item. I'm also going to put them into a list. | improved = []
#For each cast of crew
for e in nlss_df['Crew']:
#Split cast into list of members
e = e.split(r',')
#For each member (iterate over a copy, since the list is mutated below)
for m in list(e):
#If member contains a w/
if r'w/' in m:
both = m.split(r'w/')
e.remove(m)
e.extend(both)
improved.append(e)
improved[:20] | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
Strip extra spaces | fullstripped = []
for entry in improved:
stripped = []
for member in entry:
member = member.strip(' ')
stripped.append(member)
fullstripped.append(stripped)
fullstripped[:10] | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
Let's make the names consistent. Luckily I know the aliases that are used. Let's see what we're working with. | names = []
for entry in fullstripped:
for user in entry:
if user not in names:
names.append(user)
print(names) | ['NL', 'RLS', 'CS', 'rob', 'LGW', 'HCJ', 'Baer', 'JS', 'Sin', 'Dan', 'MALF', 'Kory', 'TB', 'Kate', 'Blueman', 'BaerBaer', 'Mathas', 'Crendor', 'BRex', 'GhostBill', 'alpacapatrol', 'dan', '', 'Brex', 'Fox', 'Arumba', 'baer', 'cobaltstreak', 'fox', 'Mag', 'NLS', 'JS!', 'RLS in person', 'Kate in person', 'Mike Bithell', 'RPG', 'Ohm', 'Pixel', 'Green']
| MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
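The accumulation loop above is an order-preserving de-duplication; in Python 3.7+ the same result drops out of `dict.fromkeys`, shown here on a toy cast list:

```python
cast = ["NL", "RLS", "NL", "rob", "RLS", "Baer"]

# dict keys preserve insertion order, so this de-duplicates in order
unique = list(dict.fromkeys(cast))
print(unique)
```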
Translated: Northernlion, RockLeeSmile, CobaltStreak, AlpacaPatrol, LastGreyWolf, HCJustin, BaerTaffy, JSmithOTI, Sinvicta, DanGheesling, MALF, FlackBlag, TotalBiscuit, LovelyMomo, Blueman, BaerTaffy, MathasGames, Crendor, BananasaurusRex, NOTREAL, AlpacaPatrol, DanGheesling, BananasaurusRex, MALF, Arumba, BaerTaffy, CobaltStreak, MALF, Magresta, Northernlion, JSmithOTI, RockLeeSmile, LovelyMomo, MikeBithell, RedPandaGamer, OhmWrecker, PrescriptionPixel, Green9090 | foo = "Northernlion, RockLeeSmile, CobaltStreak, AlpacaPatrol, LastGreyWolf, HCJustin, BaerTaffy, JSmithOTI, Sinvicta, DanGheesling, MALF, FlackBlag, TotalBiscuit, LovelyMomo, Blueman, BaerTaffy, MathasGames, Crendor, BananasaurusRex, NOTREAL, AlpacaPatrol, DanGheesling, NOTREAL, BananasaurusRex, MALF, Arumba, BaerTaffy, CobaltStreak, MALF, Magresta, Northernlion, JSmithOTI, RockLeeSmile, LovelyMomo, MikeBithell, RedPandaGamer, OhmWrecker, PrescriptionPixel, Green9090"
translated = foo.split(", ")
translated
guests = []
for cast in fullstripped:
guests.append([translated[names.index(user)] for user in cast])
#Replace first names with second names
guests[0] | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
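`translated[names.index(user)]` re-scans the `names` list for every user; building a dict once from the two parallel lists gives the same mapping with constant-time lookups (toy aliases below, not the full table):

```python
names = ["NL", "RLS", "rob"]
translated = ["Northernlion", "RockLeeSmile", "AlpacaPatrol"]

# Build the alias mapping once, then look up each cast member
alias = dict(zip(names, translated))
casts = [["NL", "rob"], ["RLS"]]
guests = [[alias[user] for user in cast] for cast in casts]
print(guests)
```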
Looking better. Let's swap it into our DF. | nlss_df['Crew'] = guests
nlss_df.head() | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
Adding more stats. File from https://sullygnome.com/channel/Northernlion/365/streams. This version can only go back 365 days. Can we create a column for date that matches the nlss_df format? If so, we can combine overlapping stats. I also have a larger CSV which I'm working on in FullCSV.ipynb. I will combine these once formatted correctly. | import os
import glob
print(os.getcwd())
allFiles = glob.glob(r"data\*.csv")
stream_df = pd.DataFrame()
l = []
for foo in allFiles:
stream_df = pd.read_csv(foo,index_col=None, header=0)
l.append(stream_df)
stream_df = pd.concat(l)
#stream_df = pd.read_csv(r'StreamStats365.csv')
stream_df
formatted = []
order = [1,0,2]
for date in stream_df['Stream start time']:
dmy = date.split(' ')[1:-1] #Day/Month/Year
dmy[0] = dmy[0][:-2] #Remove day suffixes
mdy = [dmy[i] for i in order]
formatted.append(str(mdy[0] + " " + mdy[1] + ", " + mdy[2]))
formatted[:15]
stream_df["Date"] = formatted
stream_df = stream_df.reset_index()
stream_df.index = stream_df["index"]
stream_df = stream_df.drop('index', axis=1)
stream_df.head() | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
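The reformatting above assumes timestamps shaped like 'Thursday 24th August 2017 18:02'; checked on one hypothetical value:

```python
# Hypothetical timestamp in the assumed "Weekday DDth Month YYYY HH:MM" shape
date = "Thursday 24th August 2017 18:02"

dmy = date.split(' ')[1:-1]  # ['24th', 'August', '2017']
dmy[0] = dmy[0][:-2]         # strip the 'st'/'nd'/'rd'/'th' suffix
order = [1, 0, 2]
mdy = [dmy[i] for i in order]
formatted = mdy[0] + " " + mdy[1] + ", " + mdy[2]
print(formatted)
```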
There was a day where an extra non-NLSS stream happened. It messes up our ordering so let's remove it. | stream_df[stream_df["Date"]=='January 4, 2017']
stream_df = stream_df[stream_df['Unnamed: 0'] != 0]
stream_df[stream_df["Date"]=='January 4, 2017']
combined = nlss_df.merge(stream_df)
#Drop erroneous columns
combined = combined.drop('Games', axis=1)
combined = combined.drop('Unnamed: 0', axis=1)
nlss_df.head()
combined.head()
result = pd.concat([nlss_df, combined], axis=1)
#Removes repeat columns
result = result.T.groupby(level=0).first().T
#Reorder columns
result = result[['Date','Crew','Docket','Stream start time','End time','Stream length','Avg Viewers','Peak viewers','Followers gained','Followers per hour','Views','Views per hour']]
nlss_df = result
nlss_df.loc[50]
nlss_df[70:85]
nlss_df.loc[nlss_df['Date']=="Wednesday 8th February 2017 22:15"] | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
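The `result.T.groupby(level=0).first().T` step above is what de-duplicates the repeated columns produced by the concat; a minimal sketch on a toy frame:

```python
import pandas as pd

left = pd.DataFrame({"Date": ["a"], "Crew": ["b"]})
right = pd.DataFrame({"Date": ["a"], "Views": [10]})
both = pd.concat([left, right], axis=1)  # 'Date' now appears twice

# Transpose, keep the first row per duplicated label, transpose back
dedup = both.T.groupby(level=0).first().T
print(sorted(dedup.columns))
```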
Let's Explore. Our stats have been compiled. Now let's look around. Which show had the most peak viewers? | mpv = nlss_df.loc[nlss_df['Peak viewers'].idxmax()]
print("Date:", mpv["Date"])
print("Peak viewers:", mpv['Peak viewers'])
print("Peak percentage:", (mpv['Peak viewers']/mpv['Views'])*100)
print("Total viewers:", mpv['Views'])
print("Games:", mpv['Docket'])
nlss_df.loc[nlss_df['Peak viewers'].idxmax()]
nlss_df.head() | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
NLSS Dataframe | len(nlss_df)
nlss_df.head()
nlss_df.tail() | _____no_output_____ | MIT | OldFiles/NLSS.ipynb | AndrewRyan95/Twitch_Sentiment_Analysis |
Boltzmann Machines. This notebook is based on the course __Deep Learning A-Z™: Hands-On Artificial Neural Networks__ on Udemy. [View Course](https://www.udemy.com/deeplearning/). Notebook Information- __notebook name__: `taruma_udemy_boltzmann`- __notebook version/date__: `1.0.0`/`20190730`- __notebook server__: Google Colab- __python version__: `3.6`- __pytorch version__: `1.1.0` | #### NOTEBOOK DESCRIPTION
from datetime import datetime
NOTEBOOK_TITLE = 'taruma_udemy_boltzmann'
NOTEBOOK_VERSION = '1.0.0'
NOTEBOOK_DATE = 1 # Set to 1 to append the date to the project name
NOTEBOOK_NAME = "{}_{}".format(
NOTEBOOK_TITLE,
NOTEBOOK_VERSION.replace('.','_')
)
PROJECT_NAME = "{}_{}{}".format(
NOTEBOOK_TITLE,
NOTEBOOK_VERSION.replace('.','_'),
"_" + datetime.utcnow().strftime("%Y%m%d_%H%M") if NOTEBOOK_DATE else ""
)
print(f"Notebook name: {NOTEBOOK_NAME}")
print(f"Project name: {PROJECT_NAME}")
#### System Version
import sys, torch
print("python version: {}".format(sys.version))
print("pytorch version: {}".format(torch.__version__))
#### Load Notebook Extensions
%load_ext google.colab.data_table
#### Download dataset
# ref: https://grouplens.org/datasets/movielens/
!wget -O boltzmann.zip "https://sds-platform-private.s3-us-east-2.amazonaws.com/uploads/P16-Boltzmann-Machines.zip"
!unzip boltzmann.zip
#### Set the dataset path
DATASET_DIRECTORY = 'Boltzmann_Machines/'
def showdata(dataframe):
print('Dataframe Size: {}'.format(dataframe.shape))
return dataframe | _____no_output_____ | MIT | notebook/taruma_udemy_boltzmann.ipynb | taruma/hidrokit-nb |
STEP 1-5 DATA PREPROCESSING | # Importing the libraries
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
movies = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/movies.dat', sep='::', header=None, engine='python', encoding='latin-1')
showdata(movies).head(10)
users = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/users.dat', sep='::', header=None, engine='python', encoding='latin-1')
showdata(users).head(10)
ratings = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/ratings.dat', sep='::', header=None, engine='python', encoding='latin-1')
showdata(ratings).head(10)
# Preparing the training set and the test set
training_set = pd.read_csv(DATASET_DIRECTORY + 'ml-100k/u1.base', delimiter='\t')
training_set = np.array(training_set, dtype='int')
test_set = pd.read_csv(DATASET_DIRECTORY + 'ml-100k/u1.test', delimiter='\t')
test_set = np.array(test_set, dtype='int')
# Getting the number of users and movies
nb_users = int(max(max(training_set[:, 0]), max(test_set[:, 0])))
nb_movies = int(max(max(training_set[:, 1]), max(test_set[:, 1])))
# Converting the data into an array with users in lines and movies in columns
def convert(data):
new_data = []
for id_users in range(1, nb_users+1):
id_movies = data[:, 1][data[:, 0] == id_users]
id_ratings = data[:, 2][data[:, 0] == id_users]
ratings = np.zeros(nb_movies)
ratings[id_movies - 1] = id_ratings
new_data.append(list(ratings))
return new_data
training_set = convert(training_set)
test_set = convert(test_set)
# Converting the data into Torch tensors
training_set = torch.FloatTensor(training_set)
test_set = torch.FloatTensor(test_set)
training_set | _____no_output_____ | MIT | notebook/taruma_udemy_boltzmann.ipynb | taruma/hidrokit-nb |
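`convert()` is a dense pivot of (user, movie, rating) triples into a users x movies matrix; the same idea in a tiny stdlib-only sketch with hypothetical ids:

```python
def to_dense(triples, n_users, n_movies):
    # One row per user, one column per movie, 0.0 where no rating exists
    mat = [[0.0] * n_movies for _ in range(n_users)]
    for user, movie, rating in triples:
        mat[user - 1][movie - 1] = float(rating)
    return mat

dense = to_dense([(1, 2, 5), (2, 1, 3)], n_users=2, n_movies=3)
print(dense)
```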
STEP 6 | # Converting the ratings into binary ratings 1 (Liked) or 0 (Not Liked)
training_set[training_set == 0] = -1
training_set[training_set == 1] = 0
training_set[training_set == 2] = 0
training_set[training_set >= 3] = 1
test_set[test_set == 0] = -1
test_set[test_set == 1] = 0
test_set[test_set == 2] = 0
test_set[test_set >= 3] = 1
training_set | _____no_output_____ | MIT | notebook/taruma_udemy_boltzmann.ipynb | taruma/hidrokit-nb |
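The four in-place masks above encode one threshold rule (-1 = unrated, 0 = 1-2 stars, 1 = 3+ stars); written out for a plain list the same mapping reads:

```python
ratings = [0, 1, 2, 3, 4, 5]

# -1 = unrated, 0 = not liked (1-2 stars), 1 = liked (3+ stars)
binary = [-1 if r == 0 else (0 if r <= 2 else 1) for r in ratings]
print(binary)
```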
STEP 7 - 10 Building RBM Object | # Creating the architecture of the Neural Network
# nv = number of visible nodes, nh = number of hidden nodes
class RBM():
def __init__(self, nv, nh):
# W: weight matrix (nh x nv); a: hidden bias; b: visible bias
self.W = torch.randn(nh, nv)
self.a = torch.randn(1, nh)
self.b = torch.randn(1, nv)
def sample_h(self, x):
# P(h | v): sigmoid activation, then a Bernoulli draw of the hidden states
wx = torch.mm(x, self.W.t())
activation = wx + self.a.expand_as(wx)
p_h_given_v = torch.sigmoid(activation)
return p_h_given_v, torch.bernoulli(p_h_given_v)
def sample_v(self, y):
# P(v | h): the symmetric step for the visible states
wy = torch.mm(y, self.W)
activation = wy + self.b.expand_as(wy)
p_v_given_h = torch.sigmoid(activation)
return p_v_given_h, torch.bernoulli(p_v_given_h)
def train(self, v0, vk, ph0, phk):
# Contrastive divergence (CD-k) update of the weights and biases
self.W += (torch.mm(v0.t(), ph0) - torch.mm(vk.t(), phk)).t()
self.b += torch.sum((v0 - vk), 0)
self.a += torch.sum((ph0 - phk), 0) | _____no_output_____ | MIT | notebook/taruma_udemy_boltzmann.ipynb | taruma/hidrokit-nb |
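`sample_h` boils down to a sigmoid followed by a Bernoulli draw; a minimal numpy sketch of that single step (illustrative shapes only, not the notebook's data):

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nh, batch = 5, 3, 2
W = rng.normal(size=(nh, nv))                    # hidden x visible weights
a = rng.normal(size=(1, nh))                     # hidden bias
v = rng.integers(0, 2, size=(batch, nv)).astype(float)

p_h = 1.0 / (1.0 + np.exp(-(v @ W.T + a)))       # P(h = 1 | v)
h = (rng.random(p_h.shape) < p_h).astype(float)  # Bernoulli sample
print(p_h.shape, np.unique(h))
```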
STEP 11 | nv = len(training_set[0])
nh = 100
batch_size = 100
rbm = RBM(nv, nh) | _____no_output_____ | MIT | notebook/taruma_udemy_boltzmann.ipynb | taruma/hidrokit-nb |
STEP 12-13 | # Training the RBM
nb_epochs = 10
for epoch in range(1, nb_epochs + 1):
train_loss = 0
s = 0.
for id_user in range(0, nb_users - batch_size, batch_size):
vk = training_set[id_user:id_user+batch_size]
v0 = training_set[id_user:id_user+batch_size]
ph0,_ = rbm.sample_h(v0)
for k in range(10):
_,hk = rbm.sample_h(vk)
_,vk = rbm.sample_v(hk)
vk[v0<0] = v0[v0<0]
phk,_ = rbm.sample_h(vk)
rbm.train(v0, vk, ph0, phk)
train_loss += torch.mean(torch.abs(v0[v0>=0] - vk[v0>=0]))
s += 1.
print('epoch: '+str(epoch)+' loss: '+str(train_loss/s)) | epoch: 1 loss: tensor(0.3424)
epoch: 2 loss: tensor(0.2527)
epoch: 3 loss: tensor(0.2509)
epoch: 4 loss: tensor(0.2483)
epoch: 5 loss: tensor(0.2474)
epoch: 6 loss: tensor(0.2478)
epoch: 7 loss: tensor(0.2467)
epoch: 8 loss: tensor(0.2461)
epoch: 9 loss: tensor(0.2482)
epoch: 10 loss: tensor(0.2491)
| MIT | notebook/taruma_udemy_boltzmann.ipynb | taruma/hidrokit-nb |
STEP 14 | # Testing the RBM
test_loss = 0
s = 0.
for id_user in range(nb_users):
v = training_set[id_user:id_user+1]
vt = test_set[id_user:id_user+1]
if len(vt[vt>=0]) > 0:
_,h = rbm.sample_h(v)
_,v = rbm.sample_v(h)
test_loss += torch.mean(torch.abs(vt[vt>=0] - v[vt>=0]))
s += 1.
print('test loss: '+str(test_loss/s)) | test loss: tensor(0.2403)
| MIT | notebook/taruma_udemy_boltzmann.ipynb | taruma/hidrokit-nb |
Volume-Weighted Moving Average (VWMA) https://www.tradingsetupsreview.com/volume-weighted-moving-average-vwma/ | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2018-12-01'
end = '2019-02-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
import talib as ta
df['SMA'] = ta.SMA(df['Adj Close'], timeperiod=3)
df['VWMA'] = ((df['Adj Close']*df['Volume'])+(df['Adj Close'].shift(1)*df['Volume'].shift(1))+(df['Adj Close'].shift(2)*df['Volume'].shift(2))) / (df['Volume'].rolling(3).sum())
df.head()
def VWMA(close, volume, n):
# Rolling sum of price*volume over the window, divided by the rolling volume sum
cv = (close * volume).rolling(n).sum()
tv = volume.rolling(n).sum()
vwma = cv / tv
return vwma
VWMA(df['Adj Close'],df['Volume'], 3)
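Checked by hand against the rolling definition used above, on a synthetic four-bar series: the last window has prices [11, 12, 13] and volumes [200, 300, 400], so VWMA = (2200 + 3600 + 5200) / 900:

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0, 13.0])
volume = pd.Series([100.0, 200.0, 300.0, 400.0])

# Rolling price*volume sum over the window, divided by the rolling volume sum
vwma3 = (close * volume).rolling(3).sum() / volume.rolling(3).sum()
print(vwma3.iloc[-1])
```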
plt.figure(figsize=(14,8))
plt.plot(df['Adj Close'])
plt.plot(df['VWMA'], label='Volume Weighted Moving Average')
plt.plot(df['SMA'], label='Simple Moving Average')
plt.legend(loc='best')
plt.title('Stock '+ symbol +' Volume Weighted Moving Average')
plt.xlabel('Date')
plt.ylabel('Price')
plt.show() | _____no_output_____ | MIT | Python_Stock/Technical_Indicators/Volume_Weighted_Moving_Average.ipynb | chunsj/Stock_Analysis_For_Quant |
Candlestick with VWMA | from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date))
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.plot(df['VWMA'], label='Volume Weighted Moving Average')
ax1.plot(df['SMA'], label='Simple Moving Average')
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.legend(loc='best') | _____no_output_____ | MIT | Python_Stock/Technical_Indicators/Volume_Weighted_Moving_Average.ipynb | chunsj/Stock_Analysis_For_Quant |
As a demonstration, create an ARMA22 model drawing innovations from three different distributions: a Bernoulli, a normal, and an inverse normal. Then build a Keras/TensorFlow model for the 1-d scattering transform to create "features", and use these features to classify which innovation distribution was used. | from blusky.blusky_models import build_model_1d
import matplotlib.pylab as plt
import numpy as np
from scipy.stats import bernoulli, norm, norminvgauss
def arma22(N, alpha, beta, rnd, eps=0.5):
inov = rnd.rvs(2*N)
x = np.zeros(2*N)
# arma22 mode
for i in range(2,N*2):
x[i] = (alpha[0] * x[i-1] + alpha[1]*x[i-2] +
beta[0] * inov[i-1] + beta[1] * inov[i-2] + eps * inov[i])
return x[N:]
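One step of the ARMA(2,2) recursion above, checked by hand with fixed (non-random) innovations:

```python
def arma22_step(x_prev, e_prev, e_now, alpha, beta, eps):
    # x_t = a1*x_{t-1} + a2*x_{t-2} + b1*e_{t-1} + b2*e_{t-2} + eps*e_t
    return (alpha[0] * x_prev[0] + alpha[1] * x_prev[1]
            + beta[0] * e_prev[0] + beta[1] * e_prev[1] + eps * e_now)

val = arma22_step((1.0, 0.5), (1.0, 0.0), 0.0,
                  alpha=[0.99, -0.1], beta=[0.2, 0.0], eps=0.5)
print(val)  # hand-computed: 0.99*1.0 - 0.1*0.5 + 0.2*1.0 = 1.14
```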
N = 512
k = 10
alpha = [0.99, -0.1]
beta = [0.2, 0.0]
eps = 1
series = np.zeros((24*k, N))
y = np.zeros(24*k)
for i in range(8*k):
series[i, :] = arma22(N, alpha, beta, norm(1.0), eps=eps)
y[i] = 0
for i in range(8*k, 16*k):
series[i, :] = arma22(N, alpha, beta, norminvgauss(1,0.5), eps=eps)
y[i] = 1
for i in range(16*k, 24*k):
series[i, :] = arma22(N, alpha, beta, bernoulli(0.5), eps=eps)*2
y[i] = 2
plt.plot(series[3*k,:200], '-r')
plt.plot(series[8*k,:200])
plt.plot(series[-3*k,:200])
plt.legend(['normal', 'inverse normal', 'bernoulli'])
#Hold out data:
k = 8
hodl_series = np.zeros((24*k, N))
hodl_y = np.zeros(24*k)
for i in range(8*k):
hodl_series[i, :] = arma22(N, alpha, beta, norm(1.0), eps=eps)
hodl_y[i] = 0
for i in range(8*k, 16*k):
hodl_series[i, :] = arma22(N, alpha, beta, norminvgauss(1,0.5), eps=eps)
hodl_y[i] = 1
for i in range(16*k, 24*k):
hodl_series[i, :] = arma22(N, alpha, beta, bernoulli(0.5), eps=eps)*2
hodl_y[i] = 2
# hold out data
plt.plot(hodl_series[0,:200], '-r')
plt.plot(hodl_series[8*k,:200])
plt.plot(hodl_series[16*k,:200])
plt.legend(['normal', 'inverse normal', 'bernoulli']) | _____no_output_____ | BSD-3-Clause | notebooks/One1DExample.ipynb | enthought/sandia-blusky |
The scattering transform reduces the timeseries to a set of features, which we use for classification. The separation between the series is more obvious looking at the log of the features (see below). A support vector machine has an easy time classifying these processes. | base_model = build_model_1d(N, 7, 6, concatenate=True)
result = base_model.predict(hodl_series)
plt.semilogy(np.mean(result[:,0,:], axis=0), '-r')
plt.semilogy(np.mean(result[8*k:16*k,0,:], axis=0), '-b')
plt.semilogy(np.mean(result[16*k:,0,:], axis=0), '-g')
from sklearn.svm import SVC
from sklearn.metrics import classification_report
model = build_model_1d(N, 7, 6, concatenate=True)
result = np.log(model.predict(series))
X = result[:,0,:]
rdf = SVC()
rdf.fit(X,y)
hodl_result = np.log(model.predict(hodl_series))
hodl_X = hodl_result[:,0,:]
y_pred = rdf.predict(hodl_X)
cls1 = classification_report(hodl_y, y_pred)
print(cls1) | precision recall f1-score support
0.0 0.95 0.91 0.93 64
1.0 0.97 0.95 0.96 64
2.0 0.94 1.00 0.97 64
accuracy 0.95 192
macro avg 0.95 0.95 0.95 192
weighted avg 0.95 0.95 0.95 192
| BSD-3-Clause | notebooks/One1DExample.ipynb | enthought/sandia-blusky |
Blusky's build_model_1d creates a regular Keras model, which you can use like any other (think VGG16, etc.). The order (order < J) defines the depth of the network. If you want a deeper network, increase this parameter. Here we attach a set of fully connected layers to classify like we did previously with the SVM. Dropping in a batch normalization here seems to be important for regularizing the problem. | from tensorflow.keras import Input, Model
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import BatchNormalization, Dense, Flatten, Lambda
from tensorflow.keras.utils import to_categorical
early_stopping = EarlyStopping(monitor="val_loss", patience=50, verbose=True,
restore_best_weights=True)
J = 7
order = 6
base_model = build_model_1d(N, J, order, concatenate=True)
dnn = Flatten()(base_model.output)
# let's add the "log" here like we did above
dnn = Lambda(lambda x : K.log(x))(dnn)
dnn = BatchNormalization()(dnn)
dnn = Dense(32, activation='linear', name='dnn1')(dnn)
dnn = Dense(3, activation='softmax', name='softmax')(dnn)
deep_model_1 = Model(inputs=base_model.input, outputs=dnn)
deep_model_1.compile(optimizer='rmsprop', loss='categorical_crossentropy')
history_1 = deep_model_1.fit(series, to_categorical(y),
validation_data=(hodl_series, to_categorical(hodl_y)),
callbacks=[early_stopping],
epochs=200)
y_pred = deep_model_1.predict(hodl_series)
cls_2 = classification_report(hodl_y, np.argmax(y_pred, axis=1))
base_model.output
plt.plot(history_1.history['loss'][-100:])
plt.plot(history_1.history['val_loss'][-100:])
print(cls_2) | precision recall f1-score support
0.0 0.92 0.84 0.88 64
1.0 0.94 0.95 0.95 64
2.0 0.91 0.97 0.94 64
accuracy 0.92 192
macro avg 0.92 0.92 0.92 192
weighted avg 0.92 0.92 0.92 192
| BSD-3-Clause | notebooks/One1DExample.ipynb | enthought/sandia-blusky |
Apply Signature Analysis to Cell Morphology FeaturesGregory Way, 2020Here, I apply [`singscore`](https://bioconductor.org/packages/devel/bioc/vignettes/singscore/inst/doc/singscore.html) ([Foroutan et al. 2018](https://doi.org/10.1186/s12859-018-2435-4)) to our Cell Painting profiles.This notebook largely follows the [package vignette](https://bioconductor.org/packages/devel/bioc/vignettes/singscore/inst/doc/singscore.html).I generate two distinct signatures.1. Comparing Clone A and E resistant clones to sensitive wildtype cell lines. * Clones A and E both have a confirmed _PSMB5_ mutation which is known to cause bortezomib resistance.2. Derived from comparing four other resistant clones to four other sensitive wildtype clones. * We do not know the resistance mechanism in these four resistant clones.However, we can hypothesize that the mechanisms are similar based on single sample enrichment using the potential PSMB5 signature.To review how I derived these signatures see `0.build-morphology-signatures.ipynb`. | suppressPackageStartupMessages(library(singscore))
suppressPackageStartupMessages(library(dplyr))
suppressPackageStartupMessages(library(ggplot2))
seed <- 1234
num_permutations <- 1000
set.seed(seed) | _____no_output_____ | BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Load Clone A/E (_PSMB5_ Mutations) Signature | sig_cols <- readr::cols(
feature = readr::col_character(),
estimate = readr::col_double(),
adj.p.value = readr::col_double()
)
sig_file <- file.path("results", "cloneAE_signature_tukey.tsv")
psmb_signature_scores <- readr::read_tsv(sig_file, col_types=sig_cols)
head(psmb_signature_scores, 2)
# Extract features that are up and down in the signature
up_features <- psmb_signature_scores %>% dplyr::filter(estimate > 0) %>% dplyr::pull(feature)
down_features <- psmb_signature_scores %>% dplyr::filter(estimate < 0) %>% dplyr::pull(feature) | _____no_output_____ | BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Load Four Clone Dataset | col_types <- readr::cols(
.default = readr::col_double(),
Metadata_Plate = readr::col_character(),
Metadata_Well = readr::col_character(),
Metadata_plate_map_name = readr::col_character(),
Metadata_clone_number = readr::col_character(),
Metadata_clone_type = readr::col_character(),
Metadata_plate_ID = readr::col_character(),
Metadata_plate_filename = readr::col_character(),
Metadata_treatment = readr::col_character(),
Metadata_batch = readr::col_character()
)
# Do not load the feature selected data
profile_dir <- file.path("..", "2.describe-data", "data", "merged")
profile_file <- file.path(profile_dir, "combined_four_clone_dataset.csv")
fourclone_data_df <- readr::read_csv(profile_file, col_types = col_types)
print(dim(fourclone_data_df))
head(fourclone_data_df, 2)
# Generate unique sample names (for downstream merging of results)
sample_names <- paste(
fourclone_data_df$Metadata_clone_number,
fourclone_data_df$Metadata_Plate,
fourclone_data_df$Metadata_Well,
fourclone_data_df$Metadata_batch,
sep = "_"
)
fourclone_data_df <- fourclone_data_df %>%
dplyr::mutate(Metadata_unique_sample_name = sample_names) | _____no_output_____ | BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Apply `singscore` | # Convert the four clone dataset into a feature x sample matrix without metadata
features_only_df <- t(fourclone_data_df %>% dplyr::select(!starts_with("Metadata_")))
# Apply the `rankGenes()` method to get feature rankings per feature for each sample
rankData <- rankGenes(features_only_df)
colnames(rankData) <- fourclone_data_df$Metadata_unique_sample_name
print(dim(rankData))
head(rankData, 3)
# Using the rank dataframe, up, and down features, get the sample scores
scoredf <- simpleScore(rankData, upSet = up_features, downSet = down_features)
# Merge scores with metadata features
full_result_df <- dplyr::bind_cols(
fourclone_data_df %>% dplyr::select(starts_with("Metadata_")),
scoredf
)
print(dim(full_result_df))
head(full_result_df, 2) | [1] 300 16
| BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Perform Permutation Testing to Determine Significance of Observation | # Generate a null distribution of scores by randomly shuffling ranks
permuteResult <- generateNull(
upSet = up_features,
downSet = down_features,
rankData = rankData,
centerScore = TRUE,
knownDirection = TRUE,
B = num_permutations,
seed = seed,
useBPPARAM = NULL
)
# Calculate p values and add to list
pvals <- getPvals(permuteResult, scoredf)
pval_tidy <- broom::tidy(pvals)
colnames(pval_tidy) <- c("names", "Metadata_permuted_p_value")
full_result_df <- full_result_df %>%
dplyr::left_join(
pval_tidy,
by = c("Metadata_unique_sample_name" = "names")
)
# Are there differences in quantiles across batch?
batch_info <- gsub("^.*_", "", rownames(t(permuteResult)))
batch_permute <- t(permuteResult) %>%
dplyr::as_tibble() %>%
dplyr::mutate(batch = batch_info)
permute_bounds <- list()
for (batch_id in unique(batch_permute$batch)) {
subset_permute <- batch_permute %>% dplyr::filter(batch == !!batch_id) %>% dplyr::select(!batch)
min_val <- quantile(as.vector(as.matrix(subset_permute)), 0.005)
max_val <- quantile(as.vector(as.matrix(subset_permute)), 0.995)
permute_bounds[[batch_id]] <- c(batch_id, min_val, max_val)
}
do.call(rbind, permute_bounds) | Warning message:
“`as_tibble.matrix()` requires a matrix with column names or a `.name_repair` argument. Using compatibility `.name_repair`.
This warning is displayed once per session.”
Visualize Results | min_val <- quantile(as.vector(as.matrix(permuteResult)), 0.05)
max_val <- quantile(as.vector(as.matrix(permuteResult)), 0.95)
apply_psmb_signature_gg <- ggplot(full_result_df,
aes(y = TotalScore,
x = Metadata_clone_number)) +
geom_boxplot(aes(fill = Metadata_treatment), outlier.alpha = 0) +
geom_point(
aes(fill = Metadata_treatment, group = Metadata_treatment),
position = position_dodge(width=0.75),
size = 0.9,
alpha = 0.7,
shape = 21) +
scale_fill_manual(name = "Treatment",
labels = c("bortezomib" = "Bortezomib", "DMSO" = "DMSO"),
values = c("bortezomib" = "#9e0ba3", "DMSO" = "#fcba03")) +
theme_bw() +
annotate("rect", ymin = min_val,
ymax = max_val,
xmin = 0,
xmax = length(unique(full_result_df$Metadata_clone_number)) + 1,
alpha = 0.2,
color = "red",
linetype = "dashed",
fill = "grey") +
xlab("") +
ylab("PSMB5 Signature Score") +
theme(axis.text.x = element_text(angle=90)) +
facet_wrap("Metadata_batch~Metadata_plate_ID", nrow=3) +
theme(strip.text = element_text(size = 8, color = "black"),
strip.background = element_rect(colour = "black", fill = "#fdfff4"))
output_fig <- file.path("figures", "signature", "psmb5_signature_apply_fourclone.png")
ggsave(output_fig, dpi = 500, height = 5, width = 10)
apply_psmb_signature_gg
summarized_mean_result_df <- full_result_df %>%
dplyr::group_by(
Metadata_batch, Metadata_plate_map_name, Metadata_clone_number, Metadata_treatment, Metadata_clone_type
) %>%
dplyr::mutate(mean_score = mean(TotalScore)) %>%
dplyr::select(
Metadata_batch, Metadata_plate_map_name, Metadata_clone_number, Metadata_clone_type, Metadata_treatment, mean_score
) %>%
dplyr::distinct() %>%
tidyr::spread(key = "Metadata_treatment", value = "mean_score") %>%
dplyr::mutate(treatment_score_diff = DMSO - bortezomib)
head(summarized_mean_result_df)
apply_psmb_signature_diff_gg <- ggplot(summarized_mean_result_df,
aes(y = treatment_score_diff,
x = Metadata_clone_number,
fill = Metadata_clone_type)) +
geom_boxplot(outlier.alpha = 0) +
geom_jitter(
width = 0.2,
size = 2,
alpha = 0.7,
shape = 21) +
scale_fill_manual(name = "Clone Type",
labels = c("resistant" = "Resistant", "wildtype" = "Wildtype"),
values = c("resistant" = "#9e0ba3", "wildtype" = "#fcba03")) +
theme_bw() +
xlab("") +
ylab("Difference PSMB5 Signature Score\nDMSO - Bortezomib") +
theme(axis.text.x = element_text(angle=90)) +
theme(strip.text = element_text(size = 8, color = "black"),
strip.background = element_rect(colour = "black", fill = "#fdfff4"))
output_fig <- file.path("figures", "signature", "psmb5_signature_apply_fourclone_difference.png")
ggsave(output_fig, dpi = 500, height = 4.5, width = 6)
apply_psmb_signature_diff_gg | _____no_output_____ | BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Load Four Clone Signature (Generic Resistance) | sig_file <- file.path("results", "fourclone_signature_tukey.tsv")
resistance_signature_scores <- readr::read_tsv(sig_file, col_types=sig_cols)
head(resistance_signature_scores, 2)
# Extract features that are up and down in the signature
up_resistance_features <- resistance_signature_scores %>%
dplyr::filter(estimate > 0) %>%
dplyr::pull(feature)
down_resistance_features <- resistance_signature_scores %>%
dplyr::filter(estimate < 0) %>%
dplyr::pull(feature) | _____no_output_____ | BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Load Clone A/E Dataset | # Do not load the feature selected data
profile_file <- file.path(profile_dir, "combined_cloneAcloneE_dataset.csv")
cloneae_cols <- readr::cols(
.default = readr::col_double(),
Metadata_CellLine = readr::col_character(),
Metadata_Plate = readr::col_character(),
Metadata_Well = readr::col_character(),
Metadata_batch = readr::col_character(),
Metadata_plate_map_name = readr::col_character(),
Metadata_clone_type = readr::col_character()
)
cloneAE_data_df <- readr::read_csv(profile_file, col_types = cloneae_cols)
print(dim(cloneAE_data_df))
head(cloneAE_data_df, 2)
# Generate unique sample names (for downstream merging of results)
cloneae_sample_names <- paste(
cloneAE_data_df$Metadata_CellLine,
cloneAE_data_df$Metadata_Plate,
cloneAE_data_df$Metadata_Well,
cloneAE_data_df$Metadata_batch,
sep = "_"
)
cloneAE_data_df <- cloneAE_data_df %>%
dplyr::mutate(Metadata_unique_sample_name = cloneae_sample_names) | _____no_output_____ | BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Apply `singscore` | # Convert the four clone dataset into a feature x sample matrix without metadata
features_only_res_df <- t(cloneAE_data_df %>% dplyr::select(!starts_with("Metadata_")))
# Apply the `rankGenes()` method to get feature rankings per feature for each sample
rankData_res <- rankGenes(features_only_res_df)
colnames(rankData_res) <- cloneAE_data_df$Metadata_unique_sample_name
print(dim(rankData_res))
head(rankData_res, 3)
# Using the rank dataframe, up, and down features, get the sample scores
scoredf_res <- simpleScore(rankData_res,
upSet = up_resistance_features,
downSet = down_resistance_features)
# Merge scores with metadata features
full_res_result_df <- dplyr::bind_cols(
cloneAE_data_df %>% dplyr::select(starts_with("Metadata_")),
scoredf_res
)
print(dim(full_res_result_df))
head(full_res_result_df, 2) | [1] 72 14
| BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Perform Permutation Testing | # Generate a null distribution of scores by randomly shuffling ranks
permuteResult_res <- generateNull(
upSet = up_resistance_features,
downSet = down_resistance_features,
rankData = rankData_res,
centerScore = TRUE,
knownDirection = TRUE,
B = num_permutations,
seed = seed,
useBPPARAM = NULL
)
# Calculate p values and add to list
pvals_res <- getPvals(permuteResult_res, scoredf_res)
pval_res_tidy <- broom::tidy(pvals_res)
colnames(pval_res_tidy) <- c("names", "Metadata_permuted_p_value")
full_res_result_df <- full_res_result_df %>%
dplyr::left_join(
pval_res_tidy,
by = c("Metadata_unique_sample_name" = "names")
) | Warning message:
“'tidy.numeric' is deprecated.
See help("Deprecated")” | BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Visualize Signature Results | min_val <- quantile(as.vector(as.matrix(permuteResult_res)), 0.05)
max_val <- quantile(as.vector(as.matrix(permuteResult_res)), 0.95)
append_dose <- function(string) paste0("Dose: ", string, "nM")
apply_res_signature_gg <- ggplot(full_res_result_df,
aes(y = TotalScore,
x = Metadata_CellLine)) +
geom_boxplot(aes(fill = Metadata_clone_type), outlier.alpha = 0) +
geom_point(
aes(fill = Metadata_clone_type, group = Metadata_clone_type),
position = position_dodge(width=0.75),
size = 0.9,
alpha = 0.7,
shape = 21) +
scale_fill_manual(name = "Clone Type",
labels = c("resistant" = "Resistant", "wildtype" = "WildType"),
values = c("resistant" = "#f5b222", "wildtype" = "#4287f5")) +
theme_bw() +
annotate("rect", ymin = min_val,
ymax = max_val,
xmin = 0,
xmax = length(unique(full_res_result_df$Metadata_CellLine)) + 1,
alpha = 0.2,
color = "red",
linetype = "dashed",
fill = "grey") +
xlab("") +
ylab("Generic Resistance Signature Score") +
theme(axis.text.x = element_text(angle=90)) +
facet_grid("Metadata_Dosage~Metadata_batch",
labeller = labeller(Metadata_Dosage = as_labeller(append_dose))) +
theme(strip.text = element_text(size = 8, color = "black"),
strip.background = element_rect(colour = "black", fill = "#fdfff4"))
output_fig <- file.path("figures", "signature", "generic_resistance_signature_apply_cloneAE.png")
ggsave(output_fig, dpi = 500, height = 5, width = 5)
apply_res_signature_gg
full_res_result_df$Metadata_Dosage <- factor(
full_res_result_df$Metadata_Dosage, levels = unique(sort(full_res_result_df$Metadata_Dosage))
)
full_res_result_df <- full_res_result_df %>%
dplyr::mutate(Metadata_group = paste0(Metadata_batch, Metadata_CellLine))
ggplot(full_res_result_df, aes(x = Metadata_Dosage, y = TotalScore, color = Metadata_CellLine, group = Metadata_group)) +
geom_point(size = 1) +
geom_smooth(aes(fill = Metadata_clone_type), method = "loess", lwd = 0.5) +
facet_wrap("~Metadata_batch", nrow = 2) +
theme_bw() +
scale_fill_manual(name = "Clone Type",
labels = c("resistant" = "Resistant", "wildtype" = "WildType"),
values = c("resistant" = "#f5b222", "wildtype" = "#4287f5")) +
ylab("Generic Resistance Signature Score") +
annotate("rect", ymin = min_val,
ymax = max_val,
xmin = 0,
xmax = length(unique(full_res_result_df$Metadata_CellLine)) + 2,
alpha = 0.2,
color = "red",
linetype = "dashed",
fill = "grey") +
theme(strip.text = element_text(size = 8, color = "black"),
strip.background = element_rect(colour = "black", fill = "#fdfff4"))
output_fig <- file.path("figures", "signature", "generic_resistance_signature_apply_cloneAE_xaxis_dosage.png")
ggsave(output_fig, dpi = 500, height = 5, width = 5) | Warning message in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :
“pseudoinverse used at 0.985”Warning message in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :
“neighborhood radius 2.015”Warning message in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :
“reciprocal condition number 4.2401e-17”Warning message in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :
“There are other near singularities as well. 4.0602”Warning message in predLoess(object$y, object$x, newx = if (is.null(newdata)) object$x else if (is.data.frame(newdata)) as.matrix(model.frame(delete.response(terms(object)), :
“pseudoinverse used at 0.985”Warning message in predLoess(object$y, object$x, newx = if (is.null(newdata)) object$x else if (is.data.frame(newdata)) as.matrix(model.frame(delete.response(terms(object)), :
“neighborhood radius 2.015”Warning message in predLoess(object$y, object$x, newx = if (is.null(newdata)) object$x else if (is.data.frame(newdata)) as.matrix(model.frame(delete.response(terms(object)), :
“reciprocal condition number 4.2401e-17”Warning message in predLoess(object$y, object$x, newx = if (is.null(newdata)) object$x else if (is.data.frame(newdata)) as.matrix(model.frame(delete.response(terms(object)), :
“There are other near singularities as well. 4.0602”
| BSD-3-Clause | 3.feature-differences/1.apply-signatures.ipynb | DavidStirling/profiling-resistance-mechanisms |
Generating an ROC Curve This notebook is meant to be an introduction to generating an ROC curve for multi-class prediction problems, and the code comes directly from a [Scikit-Learn demo](http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html). Please issue a comment on my Github account if you would like to suggest any changes to this notebook. | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from numpy import interp
# Import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]
# Add noisy features to make the problem harder
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
random_state=0)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True,
random_state=random_state))
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
##############################################################################
# Plot of a ROC curve for a specific class
plt.figure()
plt.plot(fpr[2], tpr[2], label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
##############################################################################
# Plot ROC curves for the multiclass problem
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
linewidth=2)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
linewidth=2)
for i in range(n_classes):
plt.plot(fpr[i], tpr[i], label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show() | _____no_output_____ | MIT | notebooks/ROC-Example.ipynb | gditzler/UA-ECE-523-Sp2018 |
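The `roc_curve` and `auc` calls above can feel opaque. To build intuition for what they compute, here is a minimal pure-Python sketch that sweeps thresholds and integrates the curve with the trapezoid rule; the labels and scores below are a made-up toy example for illustration, not the iris data (and, for simplicity, it assumes distinct scores with no ties):

```python
def roc_points(labels, scores):
    """Sweep thresholds from high to low, accumulating (FPR, TPR) points."""
    pairs = sorted(zip(scores, labels), reverse=True)  # highest score first
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc_trapezoid(points):
    """Integrate TPR over FPR with the trapezoid rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
points = roc_points(labels, scores)
print(auc_trapezoid(points))  # 0.75
```

On this toy data the result matches what `sklearn.metrics.roc_curve` plus `auc` return; the library versions additionally handle tied scores and drop collinear points.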
In-Class Coding Lab: Strings

The goals of this lab are to help you to understand:
- String slicing for substrings
- How to use Python's built-in String functions in the standard library.
- Tokenizing and Parsing Data
- How to create user-defined functions to parse and tokenize strings

Strings

Strings are immutable sequences

Python strings are immutable sequences. This means we cannot change them "in part", and there is implicit ordering. The characters in a string are zero-based, meaning the index of the first character is 0. We can leverage this in a variety of ways. For example: | x = input("Enter something: ")
print ("You typed:", x)
print ("number of characters:", len(x) )
print ("First character is:", x[0])
print ("Last character is:", x[-1])
## They're sequences, so you can definitely loop over them:
print("Printing one character at a time: ")
for ch in x:
print(ch) # print a character at a time! | Enter something: tony
You typed: tony
number of characters: 4
First character is: t
Last character is: y
Printing one character at a time:
t
o
n
y
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
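The claim that strings are immutable is easy to verify for yourself: assigning to a single character raises an error, and the usual workaround is to build a *new* string from pieces of the old one (the word below is just an example value):

```python
word = "tony"

# Trying to change a character in place fails: strings are immutable.
try:
    word[0] = "T"
except TypeError as err:
    print("Cannot assign:", err)

# Instead, build a new string from pieces of the old one.
capitalized = word[0].upper() + word[1:]
print(capitalized)  # Tony
```

Every string method that "changes" a string (`upper()`, `replace()`, etc.) really works this way: it returns a brand-new string and leaves the original untouched.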
Slices as substrings

Python lists and sequences use **slice notation**, which is a clever way to get a substring from a given string. Slice notation requires two values: a start index and an end index. The substring returned starts at the start index, and *ends at the position before the end index*. It ends at the position *before* so that when you slice a string into parts you know where you've "left off". For example: | state = "Mississippi"
print (state[0:4]) # Miss
print (state[4:len(state)]) # issippi | Miss
issippi
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
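A handy check on the "ends at the position before the end index" rule: the two pieces always glue back together into the original string, whatever split point you choose:

```python
state = "Mississippi"
for split in range(len(state) + 1):
    left = state[0:split]
    right = state[split:len(state)]
    # No character is lost or duplicated at the boundary.
    assert left + right == state
print(state[0:4] + "|" + state[4:len(state)])  # Miss|issippi
```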
In this next example, play around with the variable `split`, adjusting it to change how the string is split up. Re-run the cell several times with different values to get a feel for what happens. | state = "Mississippi"
split = 4 # TODO: play around with this number
left = state[0:split]
right = state[split:len(state)]
print(left, right)
state = "Mississippi"
split = 2 # TODO: play around with this number
left = state[0:split]
right = state[split:len(state)]
print(left, right)
state = "Mississippi"
split = 8 # TODO: play around with this number
left = state[0:split]
right = state[split:len(state)]
print(left, right)
state = "Mississippi"
split = 5 # TODO: play around with this number
left = state[0:split]
right = state[split:len(state)]
print(left, right) | Missi ssippi
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
Slicing from the beginning or to the end

If you omit the begin or end slice, Python will slice from the beginning of the string or all the way to the end. So if you say `x[:5]` it's the same as `x[0:5]`. For example: | state = "Ohio"
print(state[0:2], state[:2]) # same!
print(state[2:len(state)], state[2:]) # same
| Oh Oh
io io
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
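Negative indices (like the `x[-1]` used earlier for the last character) also work inside slices, counting from the end of the string:

```python
state = "Ohio"
print(state[-2:])  # last two characters: io
print(state[:-2])  # everything except the last two: Oh
# The negative form is equivalent to computing the position by hand:
print(state[:-2] == state[0:len(state) - 2])  # True
```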
Now Try It!

Split the string `"New Hampshire"` into two sub-strings, one containing `"New"` and the other containing `"Hampshire"` (without the space). | ## TODO: Write code here
state = "NewHampshire"
split = 3
left = state[0:split]
right = state[split:len(state)]
print(left, right) | New Hampshire
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
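The solution above sidesteps the space by editing the input string. With the actual string `"New Hampshire"`, one alternative is to locate the space with `find` so the split position isn't hard-coded, then skip one extra character so the space ends up in neither half:

```python
state = "New Hampshire"
split = state.find(" ")    # index of the space (3 here)
left = state[:split]       # "New"
right = state[split + 1:]  # "Hampshire" -- the +1 skips the space
print(left, right)
```

The same pattern works for any two-word string, which is the first step toward the tokenizing mentioned in the lab goals.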
Python's built-in String Functions

Python includes several handy built-in string functions (also known as *methods* in object-oriented parlance). To get a list of available functions, use the `dir()` function on any string variable, or on the type `str` itself. | print ( dir(str)) | ['__add__', '__class__', '__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'capitalize', 'casefold', 'center', 'count', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'format_map', 'index', 'isalnum', 'isalpha', 'isdecimal', 'isdigit', 'isidentifier', 'islower', 'isnumeric', 'isprintable', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'maketrans', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
Let's suppose you want to learn how to use the `count` function. There are 2 ways you can do this:
1. search the web for `python 3 str count`, or
1. bring up internal help with `help(str.count)`

Both have their advantages and disadvantages. I would start with the second one, and only fall back to a web search when you can't figure it out from the Python documentation. Here's the documentation for `count`: | help(str.count) | Help on method_descriptor:
count(...)
S.count(sub[, start[, end]]) -> int
Return the number of non-overlapping occurrences of substring sub in
string S[start:end]. Optional arguments start and end are
interpreted as in slice notation.
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
You'll notice in the help output it says S.count(). This indicates this function is a method function, which means you invoke it like this: `variable.count()`.

Now Try It

Try to use the count() method to count the number of `'i'`'s in the string `'Mississippi'`: | state = 'Mississippi'
#TODO: use state.count
state.count("i")
print(state.count('i')) | 4
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
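The help text above also mentions optional `start` and `end` arguments; they restrict the count to a slice of the string, using the same positions as slice notation, and the substring can be longer than one character:

```python
state = "Mississippi"
print(state.count("i"))        # 4 -- counts over the whole string
print(state.count("i", 0, 5))  # 2 -- only counts within state[0:5], i.e. "Missi"
print(state.count("ss"))       # 2 -- non-overlapping multi-character substrings
```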
TANGENT: The subtle difference between function and method.

You'll notice sometimes we call our function alone, and other times it's attached to a variable, as was the case in the example above. When we say `state.count('i')`, the period (`.`) between the variable and the function indicates this function is a *method function*. The key difference between the two is that a method is attached to a variable. To call a method function you must say `variable.function()`, whereas when you call a function it's just `function()`. The variable associated with the method call is usually part of the function's context. Here's an example: | name = "Larry"
print( len(name) ) # a function call len(name) stands on its own. Gets length of 'Larry'
print( name.__len__() ) # a method call name.__len__() does the same thing for its variable 'Larry' | 5
5
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
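The pairing is no coincidence: the built-in `len()` function works by calling the object's `__len__` method behind the scenes, and the same is true for other sequence types, not just strings:

```python
name = "Larry"
numbers = [10, 20, 30]

# The function form and the method form give identical answers...
print(len(name), name.__len__())        # 5 5
print(len(numbers), numbers.__len__())  # 3 3
# ...because len(x) is defined to invoke x.__len__().
```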
Now Try It

Try to figure out which built-in string function to use to accomplish this task. Write some code to find the text `'is'` in some text. The program should output the first position of `'is'` in the text. Examples:
```
When: text = 'Mississippi' then position = 1
When: text = "This is great" then position = 2
When: text = "Burger" then position = -1
``` | print(dir(str))
help(str.find)
# TODO: Write your code here
text = input("Enter some text: ")
text.find('is')
print("when text =", text,"then position =",text.find('is'))
text = input("Enter some text: ")
text.find('is')
print("when text =", text,"then position =",text.find('is'))
text = input("Enter some text: ")
text.find('is')
print("when text =", text,"then position =",text.find('is')) | Enter some text: fries
when text = fries then position = -1
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
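A close cousin of `find` is `index`: both return the first position of the substring, but when the substring is missing, `find` returns `-1` while `index` raises a `ValueError`. That difference matters when deciding how to handle a case like "Burger" above:

```python
text = "Burger"

print(text.find("is"))  # -1: not found, but no error is raised

try:
    text.index("is")
except ValueError:
    print("index() raises when the substring is missing")
```

Use `find` when a sentinel value is convenient, and `index` when a missing substring should be treated as an error.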
Now Try It

**Is that a URL?**

Try to write a rudimentary URL checker. The program should input a text string and then use the `startswith` function to check if the string begins with `"http://"` or `"https://"`. If it does, we can assume it is a URL. | ## TODO: write code here:
url = input("Enter a URL: ")
if url.startswith('http://'):
print("We can assume this is a URL")
elif url.startswith('https://'):
print("We can assume this is a URL")
else:
print("This is not a URL")
url = input("Enter a URL: ")
if url.startswith('http://'):
print("We can assume this is a URL")
elif url.startswith('https://'):
print("We can assume this is a URL")
else:
print("This is not a URL")
url = input("Enter a URL: ")
if url.startswith('http://'):
print("We can assume this is a URL")
elif url.startswith('https://'):
print("We can assume this is a URL")
else:
print("This is not a URL") | Enter a URL: jfksdlfjaldskl
This is not a URL
| MIT | content/lessons/07/Class-Coding-Lab/CCL-Strings.ipynb | jvrecca-su/ist256project |
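The if/elif chain above can be collapsed: `startswith` also accepts a tuple of prefixes and returns `True` if any of them matches, which keeps the whole URL check to a single test (the helper name and sample inputs below are just for illustration):

```python
def looks_like_url(text):
    # True when the text begins with either scheme.
    return text.startswith(("http://", "https://"))

print(looks_like_url("https://example.com"))  # True
print(looks_like_url("jfksdlfjaldskl"))       # False
```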
CH. 8 - Market Basket Analysis Activities

Activity 8.01: Load and Prep Full Online Retail Data | import matplotlib.pyplot as plt
import mlxtend.frequent_patterns
import mlxtend.preprocessing
import numpy
import pandas
online = pandas.read_excel(
io="./Online Retail.xlsx",
sheet_name="Online Retail",
header=0
)
online['IsCPresent'] = (
online['InvoiceNo']
.astype(str)
.apply(lambda x: 1 if x.find('C') != -1 else 0)
)
online1 = (
online
.loc[online["Quantity"] > 0]
.loc[online['IsCPresent'] != 1]
.loc[:, ["InvoiceNo", "Description"]]
.dropna()
)
invoice_item_list = []
for num in list(set(online1.InvoiceNo.tolist())):
tmp_df = online1.loc[online1['InvoiceNo'] == num]
tmp_items = tmp_df.Description.tolist()
invoice_item_list.append(tmp_items)
online_encoder = mlxtend.preprocessing.TransactionEncoder()
online_encoder_array = online_encoder.fit_transform(invoice_item_list)
online_encoder_df = pandas.DataFrame(
online_encoder_array,
columns=online_encoder.columns_
)
## COL in different order
online_encoder_df.loc[
20125:20135,
online_encoder_df.columns.tolist()[100:110]
] | _____no_output_____ | MIT | Chapter08/Activity8.01-Activity8.03/Activity8.01-Activity8.03.ipynb | PacktWorkshops/Applied-Unsupervised-Learning-with-Python |
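What `TransactionEncoder` produces -- one boolean column per distinct item, one row per invoice -- can be sketched in plain Python. The baskets below are made up for illustration, not taken from the Online Retail data:

```python
invoice_item_list = [
    ["TEA", "MUG"],
    ["MUG"],
    ["TEA", "SPOON"],
]

# Column order: the sorted vocabulary of every distinct item,
# mirroring the role of online_encoder.columns_ above.
columns = sorted({item for basket in invoice_item_list for item in basket})
encoded = [[item in basket for item in columns] for basket in invoice_item_list]

print(columns)  # ['MUG', 'SPOON', 'TEA']
for row in encoded:
    print(row)
```

The real encoder does the same membership test per invoice, just vectorized over the ~20k invoices and ~4k item descriptions in the full dataset.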
Activity 8.02: Apriori on the Complete Online Retail Data Set | mod_colnames_minsupport = mlxtend.frequent_patterns.apriori(
online_encoder_df,
min_support=0.01,
use_colnames=True
)
mod_colnames_minsupport.loc[0:6]
mod_colnames_minsupport[
mod_colnames_minsupport['itemsets'] == frozenset(
{'10 COLOUR SPACEBOY PEN'}
)
]
mod_colnames_minsupport['length'] = (
mod_colnames_minsupport['itemsets'].apply(lambda x: len(x))
)
## item set order different
mod_colnames_minsupport[
(mod_colnames_minsupport['length'] == 2) &
(mod_colnames_minsupport['support'] >= 0.02) &
(mod_colnames_minsupport['support'] < 0.021)
]
mod_colnames_minsupport.hist("support", grid=False, bins=30)
plt.xlabel("Support of item")
plt.ylabel("Number of items")
plt.title("Frequency distribution of Support")
plt.show() | _____no_output_____ | MIT | Chapter08/Activity8.01-Activity8.03/Activity8.01-Activity8.03.ipynb | PacktWorkshops/Applied-Unsupervised-Learning-with-Python |
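The `min_support=0.01` threshold is easier to read with the definition in hand: the support of an itemset is simply the fraction of transactions that contain every item in the set. A toy version, on made-up baskets:

```python
baskets = [
    {"PEN", "MUG"},
    {"PEN"},
    {"MUG", "SPOON"},
    {"PEN", "MUG", "SPOON"},
]

def support(itemset, baskets):
    # Count baskets of which the itemset is a subset.
    hits = sum(1 for basket in baskets if itemset <= basket)
    return hits / len(baskets)

print(support({"PEN"}, baskets))         # 0.75
print(support({"PEN", "MUG"}, baskets))  # 0.5
```

Apriori's trick is that support can only shrink as an itemset grows, so any itemset below the threshold lets it prune every superset without counting.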
Activity 8.03: Find the Association Rules on the Complete Online Retail Data Set | rules = mlxtend.frequent_patterns.association_rules(
mod_colnames_minsupport,
metric="confidence",
min_threshold=0.6,
support_only=False
)
rules.loc[0:6]
print("Number of Associations: {}".format(rules.shape[0]))
rules.plot.scatter("support", "confidence", alpha=0.5, marker="*")
plt.xlabel("Support")
plt.ylabel("Confidence")
plt.title("Association Rules")
plt.show()
rules.hist("lift", grid=False, bins=30)
plt.xlabel("Lift of item")
plt.ylabel("Number of items")
plt.title("Frequency distribution of Lift")
plt.show()
rules.hist("leverage", grid=False, bins=30)
plt.xlabel("Leverage of item")
plt.ylabel("Number of items")
plt.title("Frequency distribution of Leverage")
plt.show()
plt.hist(rules[numpy.isfinite(rules['conviction'])].conviction.values, bins = 30)
plt.xlabel("Conviction of item")
plt.ylabel("Number of items")
plt.title("Frequency distribution of Conviction")
plt.show()
| _____no_output_____ | MIT | Chapter08/Activity8.01-Activity8.03/Activity8.01-Activity8.03.ipynb | PacktWorkshops/Applied-Unsupervised-Learning-with-Python |
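The metrics in the `rules` table are all ratios of supports. For a rule A -> B: confidence is support(A and B) / support(A), lift is confidence divided by support(B) (lift > 1 means A and B co-occur more often than independence would predict), and leverage is support(A and B) minus support(A) * support(B). The supports below are made-up numbers for an illustrative rule, not values from the retail data:

```python
# Hypothetical supports for an illustrative rule {tea} -> {mug}.
support_a = 0.40   # fraction of invoices containing tea
support_b = 0.25   # fraction containing mug
support_ab = 0.20  # fraction containing both

confidence = support_ab / support_a            # estimate of P(mug | tea)
lift = confidence / support_b                  # co-occurrence vs. independence
leverage = support_ab - support_a * support_b  # absolute excess co-occurrence

print(confidence)  # 0.5
print(lift)        # 2.0 -> tea buyers pick up mugs twice as often as chance
print(leverage)
```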
Welcome to Jupyter Notebooks!

Author: Shelley Knuth
Date: 23 August 2019
Purpose: This is a general purpose tutorial designed to provide basic information about Jupyter notebooks

Outline
1. General information about notebooks
1. Formatting text in notebooks
1. Formatting mathematics in notebooks
1. Importing graphics
1. Plotting

General Information about Notebooks

What is a Jupyter Notebook?

It's an interactive web platform that allows one to create and edit live code, add text descriptions, and visualizations in a document that can be easily shared and displayed.

How to work with a Notebook

To run a cell, hit "shift" and "enter" at the same time. Don't be alarmed if your notebook runs for awhile - indicated by [*]. Sometimes it takes awhile.

Different cell types

Code and Markdown are the two I use most frequently.

Exercise

Write one sentence on what you are planning to do this weekend in a cell.

Opening, saving notebooks

Opening: File -> New Notebook -> Python 3
Saving: File -> Save as -> Save and Checkpoint (Ctrl + S)
Printing: File -> Print Preview
Download: File -> Download as PDF (or others)

Keyboard shortcuts

Toggle between edit and command mode with Esc and Enter, respectively. Once in command mode:
- Scroll up and down your cells with your Up and Down keys.
- Press A or B to insert a new cell above or below the active cell.
- M will transform the active cell to a Markdown cell.
- Y will set the active cell to a code cell.
- D + D (D twice) will delete the active cell.
- Z will undo cell deletion.
- Hold Shift and press Up or Down to select multiple cells at once.
- With multiple cells selected, Shift + M will merge your selection.
- Ctrl + Shift + -, in edit mode, will split the active cell at the cursor.
- You can also click and Shift + Click in the margin to the left of your cells to select them.

(from https://www.dataquest.io/blog/jupyter-notebook-tutorial/)

Formatting text in notebooks | __Bold__ or **bold**
_italics_ or *italics* | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Jupyter notebooks are __really__ cool! Jupyter notebooks are _really_ cool! Two spaces after text gives you a newline!

Headings | # title
## major headings
### subheadings
#### 4th level subheadings | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Jupyter notebooks are really cool! Do you know what else is cool? Turtles! And Bon Jovi!

Code

The best program to use for this is the `grep` command | The best program to use for this is the `grep` command | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Text color and size

The sky is blue! Sometimes the color doesn't turn out WELL | The sky is <font color = blue, size = 30>blue!</font>
Sometimes the <font color = blue>color</font> doesn't turn out <font size=30> WELL</font> | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Indent or list your text

> This is how!
- This is how!
  - This is how!
1. This is how! | > This is how!
- This is how!
- This is how!
1. This is how! | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
* This is also how! * This is also how! | * This is also how!
* This is also how! | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Hyperlinks

Sometimes copy and paste is just fine too! https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet

[I'm an inline-style link](https://www.google.com)
[I'm a reference-style link][Arbitrary case-insensitive reference text]
[I'm a relative reference to a repository file](../blob/master/LICENSE)
[You can use numbers for reference-style link definitions][1]
Or leave it empty and use the [link text itself].
URLs and URLs in angle brackets will automatically get turned into links. http://www.example.com or <http://www.example.com> and sometimes example.com (but not on Github, for example).
Some text to show that the reference links can follow later.
[arbitrary case-insensitive reference text]: https://www.mozilla.org
[1]: http://slashdot.org
[link text itself]: http://www.reddit.com
(from https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) | [I'm an inline-style link](https://www.google.com)
[I'm a reference-style link][Arbitrary case-insensitive reference text]
[I'm a relative reference to a repository file](../blob/master/LICENSE)
[You can use numbers for reference-style link definitions][1]
Or leave it empty and use the [link text itself].
URLs and URLs in angle brackets will automatically get turned into links.
http://www.example.com or <http://www.example.com> and sometimes
example.com (but not on Github, for example).
Some text to show that the reference links can follow later.
[arbitrary case-insensitive reference text]: https://www.mozilla.org
[1]: http://slashdot.org
[link text itself]: http://www.reddit.com
(from https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Mathematical Equations in Notebooks

$F=ma$

This is an equation, $x=y+z$, where $y=10$ and $z=20$ | $F = ma$
This is an equation, $x=y+z$, where $y=10$ and $z=20$ | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Superscripts and Subscripts

$y = x^3 + x^2 + 3x$
$F_g = m g$ | $y = x^3 + x^2 + 3x$
$F_g = m g$ | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Grouping $6.022\times 10^{23}$ | $6.022\times 10^{23}$
Greek Letters $\pi = 3.1415926$, $\Omega = 10$, $\delta$ | $\pi = 3.1415926$
$\Omega = 10$
$\delta$ | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Special Symbols $\pm$, $\gg$, $\ll$, $\infty$, $i = \sqrt{-1}$, $\int_a^b$ | $\pm$, $\gg$, $\ll$, $\infty$
$i = \sqrt{-1}$
$\int_a^b$ | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Fractions and Derivatives Fractions $\frac{1}{2}$; derivatives $\frac{dm}{dt}$, $\frac{\partial m}{\partial t}$ | Fractions
$\frac{1}{2}$
Derivatives
$\frac{dm}{dt}$
$\frac{\partial m}{\partial t}$ | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Matrices $$\begin{matrix} a & b \\ c & d \end{matrix}$$ | $$\begin{matrix} a & b \\ c & d \end{matrix}$$
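The bare `matrix` environment above renders without delimiters; standard LaTeX also provides bracketed variants (shown here as a brief aside) that add parentheses, square brackets, or vertical bars:

```latex
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad
\begin{bmatrix} a & b \\ c & d \end{bmatrix} \quad
\begin{vmatrix} a & b \\ c & d \end{vmatrix}$$
```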
Exercise Write out an equation where the total derivative of x over y is equal to the square root of 10 added to 7/8 pi: $\frac{dx}{dy} = \sqrt{10} + \frac{7}{8}\pi$ Exercise Write out an equation where x sub j is equal to a 2x2 matrix containing 10, 20, 30, and 40: $x_j = \begin{matrix} 10 & 20 \\ 30 & 40 \end{matrix}$ Importing Graphics Easy way: drag and drop, or "Edit -> Insert image" when in Markdown. Harder ways: Python: | from IPython.display import Image
Image("bonjovi.jpg")
| _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
HTML: | <img src="bonjovi.jpg"> | _____no_output_____ | CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Basic Programming with Python Print statements | print("Hello, World!") | Hello, World!
| CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Look at how the input changed (to the left of the cell). Look at the output! (This and several cells from https://www.dataquest.io/blog/jupyter-notebook-tutorial/) Anything run in the kernel persists in the notebook. Can run code and import libraries in the cells. Variables in Python * Variables are not declared * Variables are created at assignment time * Variable type is determined implicitly via assignment: x=2 (int), x=2.0 (float), z="hello" (str; single or double quotes), z=True (Boolean; note the capital "T" or "F") * Can convert types using conversion functions: int(), float(), str(), bool() * Python is case sensitive * Check a variable's type using the type() function (from https://github.com/ResearchComputing/Python_Spring_2019/blob/master/session1_overview/session1_slides.pdf) | z=10.0
print('z is: ', type(z) )
x=int(43.4)
print(x) | z is: <class 'float'>
43
| CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
Arithmetic in Python respects the order of operations * Addition: +* Subtraction: -* Multiplication: * * Division: / (returns float) * Floor Division: // (returns int or float; rounds down) * Mod: % (3%2 -> 1) * Exponentiation: ** 2**4 -> 16) Can concatenate strings using "+"(from https://github.com/ResearchComputing/Python_Spring_2019/blob/master/session1_overview/session1_slides.pdf) | x='hello '+'there'
print(x) | hello there
| CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
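The conversion functions listed above can be tried directly; a quick illustrative sketch:

```python
# Convert between basic types with int(), float(), str(), bool()
x = int("42")      # string -> int
y = float(3)       # int -> float
z = str(2.5)       # float -> string
w = bool(0)        # zero is falsy
print(x, y, z, w)  # 42 3.0 2.5 False
```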
Lists Multiple values can be grouped into a list | - lists
- basic plotting with matplotlib
- arrays and numpy; doing calculations
- plotting using numpy
- importing data from csv files
mylist=[1, 2, 10]
print(mylist) | [1, 2, 10]
| CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
* List elements accessed with [] notation* Element numbering starts at 0 | print(mylist[1]) | 2
| CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
* Lists can contain different variable types | mylist=[1, 'two', 10.0]
print(mylist) | [1, 'two', 10.0]
| CC-BY-4.0 | lessons/jupyter/general_jupyter_notebook_tutorial.ipynb | huxiaoni/espin |
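Lists also support negative indexing and slicing, which the cells above do not show; a brief sketch:

```python
mylist = [1, 'two', 10.0, 'four', 5]
print(mylist[-1])   # last element: 5
print(mylist[1:3])  # elements 1 and 2: ['two', 10.0]
print(len(mylist))  # number of elements: 5
```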
Experiment Description Produce PDPs for a randomly picked data sample from cslg. > This notebook is for experiment \ and data sample \. Initialization | %load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetching code and data(if you are using colab
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/pdp-exp1/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
| The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
| Apache-2.0 | pipelining/pdp-exp1/pdp-exp1_cslg-rand-5000_plotting.ipynb | ZeruiW/s2search |
Loading data | sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from s2search_score_pdp import pdp_based_importance, apply_order
sample_name = 'cslg-rand-5000'
f_list = ['title', 'abstract', 'venue', 'authors', 'year', 'n_citations']
pdp_xy = {}
pdp_metric = pd.DataFrame(columns=['feature_name', 'pdp_range', 'pdp_importance'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_pdp_{f}.npz')
if os.path.exists(file):
data = np.load(file)
sorted_pdp_data = apply_order(data)
feature_pdp_data = [np.mean(pdps) for pdps in sorted_pdp_data]
pdp_xy[f] = {
'y': feature_pdp_data,
'numerical': True
}
if f == 'year' or f == 'n_citations':
pdp_xy[f]['x'] = np.sort(data['arr_1'])
else:
pdp_xy[f]['y'] = feature_pdp_data
pdp_xy[f]['x'] = list(range(len(feature_pdp_data)))
pdp_xy[f]['numerical'] = False
pdp_metric.loc[len(pdp_metric.index)] = [f, np.max(feature_pdp_data) - np.min(feature_pdp_data), pdp_based_importance(feature_pdp_data, f)]
pdp_xy[f]['weird'] = feature_pdp_data[len(feature_pdp_data) - 1] > 30
print(pdp_metric.sort_values(by=['pdp_importance'], ascending=False))
| feature_name pdp_range pdp_importance
1 abstract 17.129012 6.155994
0 title 15.760054 3.788171
2 venue 12.542338 0.971027
4 year 2.190717 0.470416
5 n_citations 0.977882 0.193581
3 authors 0.000000 0.000000
| Apache-2.0 | pipelining/pdp-exp1/pdp-exp1_cslg-rand-5000_plotting.ipynb | ZeruiW/s2search |
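`pdp_based_importance` above comes from the repository's own `s2search_score_pdp` module. As a rough illustration only (an assumption here, not necessarily the repository's exact formula), a PDP-based importance is often taken as the spread, e.g. the standard deviation, of the mean-response curve:

```python
import numpy as np

def pdp_importance_sketch(pdp_curve):
    # Hypothetical importance measure: std of the partial-dependence curve.
    # A flat curve (feature has no effect on the score) gives zero importance.
    return float(np.std(np.asarray(pdp_curve, dtype=float)))

print(pdp_importance_sketch([1.0, 1.0, 1.0]))  # 0.0
print(pdp_importance_sketch([0.0, 2.0]))       # 1.0
```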
PDP | import matplotlib.pyplot as plt
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'Scores',
'pdp_xy': pdp_xy['title']
},
{
'xlabel': 'Abstract',
'pdp_xy': pdp_xy['abstract']
},
{
'xlabel': 'Authors',
'pdp_xy': pdp_xy['authors']
},
{
'xlabel': 'Venue',
'pdp_xy': pdp_xy['venue'],
'zoom': {
'inset_axes': [0.15, 0.45, 0.47, 0.47],
'x_limit': [4900, 5050],
'y_limit': [-9, 7],
'connects': [True, True, False, False]
}
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'Scores',
'pdp_xy': pdp_xy['year']
},
{
'xlabel': 'Citation Count',
'pdp_xy': pdp_xy['n_citations'],
'zoom': {
'inset_axes': [0.4, 0.2, 0.47, 0.47],
'x_limit': [-100, 500],
'y_limit': [-7.5, -6.2],
'connects': [True, False, False, True]
}
}
]
def pdp_plot(confs, title):
fig, axes = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
# plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
axess = axes if len(confs) == 1 else axes[subplot_idx]
axess.plot(conf['pdp_xy']['x'], conf['pdp_xy']['y'])
axess.grid(alpha = 0.4)
if ('ylabel' in conf):
axess.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
axess.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['pdp_xy']['weird']):
if (conf['pdp_xy']['numerical']):
axess.set_ylim([-9, -6])
pass
else:
axess.set_ylim([-15, 10])
pass
if 'zoom' in conf:
axins = axess.inset_axes(conf['zoom']['inset_axes'])
axins.plot(conf['pdp_xy']['x'], conf['pdp_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axess.indicate_inset_zoom(axins)
connects[0].set_visible(conf['zoom']['connects'][0])
connects[1].set_visible(conf['zoom']['connects'][1])
connects[2].set_visible(conf['zoom']['connects'][2])
connects[3].set_visible(conf['zoom']['connects'][3])
subplot_idx += 1
pdp_plot(categorical_plot_conf, "PDPs for four categorical features")
plt.savefig(os.path.join('.', 'plot', f'{sample_name}-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
# second fig
pdp_plot(numerical_plot_conf, "PDPs for two numerical features")
plt.savefig(os.path.join('.', 'plot', f'{sample_name}-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
| _____no_output_____ | Apache-2.0 | pipelining/pdp-exp1/pdp-exp1_cslg-rand-5000_plotting.ipynb | ZeruiW/s2search |
Alzhippo Pr0gress Possible Tasks - **Visualizing fibers** passing through ERC and hippo, for both ipsi and contra cxns (4-figs) (GK) - **Dilate hippocampal parcellations**, to cover entire hippocampus by nearest neighbour (JV) - **Voxelwise ERC-to-hippocampal** projections + clustering (Both) Visualizing fibers 1. Plot group average connectome 2. Find representative subject X (i.e., passes visual inspection match to the group) 3. Visualize fibers with parcellation 4. Repeat 3. on dilated parcellation 5. If connections appear more symmetric in 4., regenerate graphs with dilated parcellation 1. Plot group average connectome | import numpy as np
import networkx as nx
import nibabel as nib
import scipy.stats as stats
import matplotlib.pyplot as plt
from nilearn import plotting
import os
import seaborn as sns
import pandas
%matplotlib notebook
def matrixplotter(data, log=True, title="Connectivity between ERC and Hippocampus"):
plotdat = np.log(data + 1) if log else data
plt.imshow(plotdat)
labs = ['ERC-L', 'Hippo-L-noise', 'Hippo-L-tau',
'ERC-R', 'Hippo-R-noise', 'Hippo-R-tau']
plt.xticks(np.arange(0, 6), labs, rotation=40)
plt.yticks(np.arange(0, 6), labs)
plt.title(title)
plt.colorbar()
plt.show()
avg = np.load('../data/connection_matrix.npy')
matrixplotter(np.mean(avg, axis=2)) | _____no_output_____ | MIT | code/visualizing_tracts.ipynb | gkiar/alzhippo |
2. Find representative subject | tmp = np.reshape(avg.T, (355, 36))
tmp[0]
corrs = np.corrcoef(tmp)[-1]
corrs[corrs == 1] = 0
bestfit = int(np.where(corrs == np.max(corrs))[0])
print("Most similar graph: {}".format(bestfit))
dsets = ['../data/graphs/BNU1/combined_erc_hippo_labels/',
'../data/graphs/BNU3/',
'../data/graphs/HNU1/']
files = [os.path.join(d,f) for d in dsets for f in os.listdir(d)]
graph_fname = files[bestfit]
gx = nx.read_weighted_edgelist(graph_fname)
adjx = np.asarray(nx.adjacency_matrix(gx).todense())
matrixplotter(adjx)
print(graph_fname) | _____no_output_____ | MIT | code/visualizing_tracts.ipynb | gkiar/alzhippo |
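For the parcellation-dilation task listed at the top, one possible approach (a sketch only; the function name and the use of `scipy.ndimage` are my assumptions, not the project's actual code) is nearest-neighbour label propagation via a Euclidean distance transform:

```python
import numpy as np
from scipy import ndimage

def dilate_labels(labels, mask):
    """Give every voxel in `mask` the label of its nearest labeled voxel."""
    # indices of the nearest labeled (nonzero) voxel for every position
    _, inds = ndimage.distance_transform_edt(labels == 0, return_indices=True)
    filled = labels[tuple(inds)]
    return np.where(mask, filled, 0)

# toy volume: two seed labels at opposite corners, dilated to fill the mask
labels = np.zeros((3, 3, 3), dtype=int)
labels[0, 0, 0] = 1
labels[2, 2, 2] = 2
mask = np.ones_like(labels, dtype=bool)
out = dilate_labels(labels, mask)
print(out[0, 0, 1], out[2, 2, 1])  # nearest-neighbour labels: 1 2
```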
Optimization > Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil If there occur some changes in nature, the amount of action necessary for this change must be as small as possible. Maupertuis (18th century) **Optimization is the process of finding the best value from possible alternatives with regard to a certain criterion** ([Wikipedia](http://en.wikipedia.org/wiki/Mathematical_optimization)). Typically, such best value is the value that maximizes or minimizes the criterion. In this context, to solve a (mathematical) optimization problem is to find the maximum or minimum (a.k.a., a stationary point) of a function (and we can use maximum or minimum interchangeably because the maximum of a function is the minimum of the negative of that function). To solve an optimization problem, we first have to model the problem and define the objective, the variables, and the constraints of the problem. In optimization, these terms are usually defined as: 1. Objective function (or also, cost, loss, utility, or fitness function): a function describing what we want to optimize. 2. Design variable(s): variables that will be manipulated to optimize the cost function. 3. Constraint functions: a set of constraints, equalities or inequalities that constrain the possible solutions to possible values of the design variables (candidate solutions, feasible solutions, or feasible set). A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution. The optimization problem is the calculation of the minimum or maximum values of an objective function over a set of **unknown** possible values of the design variables. 
Even in the case of a finite number of possible values of the objective function and design variables (e.g., after discretization and a manual or a grid search), in general the evaluation of the objective function is computationally expensive and should be avoided. Of note, even if there is no other option, a random search is in fact more efficient than a manual or a grid search! See [Bergstra, Bengio (2012)](http://jmlr.csail.mit.edu/papers/volume13/bergstra12a/bergstra12a.pdf). A typical problem of optimization: [Knapsack problem](https://en.wikipedia.org/wiki/Knapsack_problem). Read more about that in [Introduction to Optimization](http://neos-guide.org/content/optimization-introduction) from the [NEOS Guide](http://neos-guide.org/). Some jargon in mathematical optimization - **Linear versus nonlinear optimization**: linear optimization refers to when the objective function and the constraints are linear mathematical functions. When the objective function is linear, an optimal solution is always found at the constraint boundaries and a local optimum is also a global optimum. See [Wikipedia 1](https://en.wikipedia.org/wiki/Linear_programming) and [Wikipedia 2](https://en.wikipedia.org/wiki/Nonlinear_programming). - **Constrained versus unconstrained optimization**: in unconstrained optimization there are no constraints. - **Convex optimization**: the field of optimization that deals with finding the minimum of convex functions (or the maximum of concave functions) over a convex constraint set. The convexity of a function facilitates the optimization because a local minimum must be a global minimum and first-order conditions (the first derivatives) are sufficient conditions for finding the optimal solution. Note that although convex optimization is a particular case of nonlinear optimization, it is a relatively simple optimization problem, with robust and mature methods of solution. See [Wikipedia](https://en.wikipedia.org/wiki/Convex_optimization). 
- **Multivariate optimization**: optimization of a function of several variables. - **Multimodal optimization**: optimization of a function with several local minima to find the multiple (locally) optimal solutions, as opposed to a single best solution. - **Multi-objective optimization**: optimization involving more than one objective function to be optimized simultaneously. - **Optimal control**: finding a control law for a given system such that a certain optimality criterion is achieved. See [Wikipedia](https://en.wikipedia.org/wiki/Optimal_control). - **Quadratic programming**: optimization of a quadratic function subject to linear constraints. See [Wikipedia](https://en.wikipedia.org/wiki/Quadratic_programming). - **Simplex algorithm**: linear optimization algorithm that begins at a starting vertex and moves along the edges of the polytope (the feasible region) until it reaches the vertex of the optimum solution. See [Wikipedia](https://en.wikipedia.org/wiki/Simplex_algorithm). Maxima and minima In mathematics, the maximum and minimum of a function are the largest and smallest values that the function takes at a point either within a neighborhood (local) or on the function's entire domain (global) ([Wikipedia](http://en.wikipedia.org/wiki/Maxima_and_minima)). For a function of one variable, if the maximum or minimum of a function is not at the limits of the domain and if at least the first and second derivatives of the function exist, a maximum or minimum can be found as a point where the first derivative of the function is zero. If the second derivative at that point is positive, then it's a minimum; if it is negative, it's a maximum. Figure. Maxima and minima of a function of one variable. - Note that the requirement that the second derivative at the extremum be positive for a minimum or negative for a maximum is a sufficient but not a necessary condition. 
For instance, the function $f(x)=x^4$ has an extremum at $x=0$ since $f'(x)=4x^3$ and $f'(0)=0$, but its second derivative at $x=0$ is also zero: $f''(x)=12x^2;\: f''(0)=0$. In fact, the requirement is that the first non-zero derivative at that point should be of even order and positive for a minimum or negative for a maximum: $f''''(0)=24$; the extremum is a minimum. Let's now apply optimization to solve a problem with a univariate function. | # import Python libraries
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import sympy as sym
from sympy.plotting import plot
import pandas as pd
from IPython.display import display
from IPython.core.display import Math | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
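Before the cardboard-box example, the derivative-test caveat above (for $f(x)=x^4$, the first three derivatives vanish at $x=0$ and the fourth is positive) can be checked symbolically:

```python
import sympy as sym

x = sym.symbols('x')
f = x**4
# first three derivatives vanish at x=0; the fourth is positive -> minimum
derivs = [sym.diff(f, x, n).subs(x, 0) for n in range(1, 5)]
print(derivs)  # [0, 0, 0, 24]
```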
Example 1: Maximum volume of a cardboard boxWe want to make a box from a square cardboard with side $a$ such that its volume should be maximum. What is the optimal distance where the square cardboard should be cut and folded to make a box with maximum volume? Figure. A box to be made from a cardboard such that its volume should be maximum. Where we should cut? If the distance where to cut and fold the cardboard is $b$, see figure above, the volume of the box will be:\begin{equation}\begin{array}{l l}V(b) = b(a-2b)(a-2b) \\\\V(b) = a^2b - 4ab^2 + 4b^3\end{array}\label{}\end{equation}In the context of optimization: **The expression for $V$ is the cost function, $b$ is the design variable, and the constraint is that feasible values of $b$ are in the interval $]0, \dfrac{a}{2}[$, i.e., $b>0$ and $b<\dfrac{a}{2}$.** The first and second derivatives of $V$ w.r.t. $b$ are:\begin{equation}\begin{array}{l l}\dfrac{\mathrm{d}V}{\mathrm{d}b} = a^2 - 8ab + 12b^2 \\\\\dfrac{\mathrm{d}^2 V}{\mathrm{d}b^2} = - 8a + 24b\end{array}\label{}\end{equation}We have to find the values for $b$ where the first derivative of $V$ is zero (the extrema) and then use the expression for the second derivative of $V$ to find whether each of these extrema is a minimum (positive value) or a maximum (negative value). Let's use Sympy for that: | a, b = sym.symbols('a b')
V = b*(a - 2*b)*(a - 2*b)
Vdiff = sym.expand(sym.diff(V, b))
roots = sym.solve(Vdiff, b)
display(Math(sym.latex('Roots:') + sym.latex(roots)))
roots | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
Discarding the solution $b=\dfrac{a}{2}$ (where $V=0$, which is a minimum), $b=\dfrac{a}{6}$ results in the maximum volume. We can check that by plotting the volume of the cardboard box for $a=1$ and $b: [0,\:0.5]$: | plot(V.subs({a: 1}), (b, 0, .5), xlabel='b', ylabel='V')
display(Math(sym.latex('V_{a=1}^{max}(b=%s)=%s'
%(roots[0].evalf(n=4, subs={a: 1}), V.evalf(n=3, subs={a: 1, b: roots[0]}))))) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
- Note that although the problem above is a case of nonlinear constrained optimization, because the objective function is univariate, well-conditioned and the constraints are linear inequalities, the optimization is simple. Unfortunately, this is seldom the case. Curve fitting as an optimization problem Curve fitting is the process of fitting a model, expressed in terms of a mathematical function, that depends on adjustable parameters to a series of data points and once adjusted, that curve has the best fit to the data points. The general approach to the fitting procedure involves the definition of a merit function that measures the agreement between data and model. The model parameters are then adjusted to yield the best-fit parameters as a problem of minimization (an optimization problem, where the merit function is the cost function). A classical solution, termed least-squares fitting, is to find the best fit by minimizing the sum of the squared differences between data points and the model function (the sum of squared residuals as the merit function). For more on curve fitting see the video below and the notebook [Curve fitting](http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/CurveFitting.ipynb). | from IPython.display import YouTubeVideo
YouTubeVideo('Rxp7o7_RxII', width=480, height=360, rel=0) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
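A minimal least-squares fit on synthetic data illustrates the idea (the linear model and noise level here are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0*x + 1.0 + rng.normal(0, 0.1, x.size)  # noisy line y = 2x + 1
slope, intercept = np.polyfit(x, y, 1)        # least-squares linear fit
print(round(slope, 2), round(intercept, 2))
```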
Gradient descent Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function ([Wikipedia](https://en.wikipedia.org/wiki/Gradient_descent)). In the gradient descent algorithm, a local minimum of a function is found by starting from an initial point and taking steps proportional to the negative of the derivative (gradient) of the function at the current point, checking at each step whether the current point is lower than the previous one, until a local minimum is reached (hopefully). It follows that, if\begin{equation}x_{n+1} = x_n - \gamma \nabla f(x_n)\label{}\end{equation}for $\gamma$ small enough, then $f(x_{n}) \geq f(x_{n+1})$. This process is repeated iteratively until the step size (which is proportional to the gradient!) is below a required precision (hopefully the sequence $x_{n}$ converges to the desired local minimum). Example 2: Minimum of a function by gradient descent From https://en.wikipedia.org/wiki/Gradient_descent: Calculate the minimum of $f(x)=x^4-3x^3+2$. | # From https://en.wikipedia.org/wiki/Gradient_descent
# The local minimum of $f(x)=x^4-3x^3+2$ is at x=9/4
cur_x = 6 # The algorithm starts at x=6
gamma = 0.01 # step size multiplier
precision = 0.00001
step_size = 1 # initial step size
max_iters = 10000 # maximum number of iterations
iters = 0 # iteration counter
f = lambda x: x**4 - 3*x**3 + 2 # lambda function for f(x)
df = lambda x: 4*x**3 - 9*x**2 # lambda function for the gradient of f(x)
while (step_size > precision) & (iters < max_iters):
prev_x = cur_x
cur_x -= gamma*df(prev_x)
step_size = abs(cur_x - prev_x)
iters+=1
print('True local minimum at {} with function value {}.'.format(9/4, f(9/4)))
print('Local minimum by gradient descent at {} with function value {}.'.format(cur_x, f(cur_x))) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
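The same loop can also be wrapped in a small reusable function (a sketch; the default step size and tolerance mirror the example above and are otherwise arbitrary):

```python
def gradient_descent(df, x0, gamma=0.01, tol=1e-5, max_iters=10000):
    """Minimize a univariate function given its derivative df."""
    x = x0
    for _ in range(max_iters):
        step = gamma*df(x)  # step is proportional to the gradient
        x -= step
        if abs(step) < tol:
            break
    return x

x_min = gradient_descent(lambda x: 4*x**3 - 9*x**2, x0=6.0)
print(round(x_min, 2))  # close to the true minimum at 9/4 = 2.25
```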
Multivariate optimization When there is more than one design variable (the cost function depends on more than one variable), it's a multivariate optimization. The general idea of finding minimum and maximum values where the derivatives are zero still holds for a multivariate function. The second derivative of a multivariate function can be described by the Hessian matrix:\begin{equation}\mathbf{H} = \begin{bmatrix}{\dfrac {\partial ^{2}f}{\partial x_{1}^{2}}}&{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{n}}}\\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{2}^{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{n}}}\\[2.2ex]\vdots &\vdots &\ddots &\vdots \\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{n}^{2}}}\end{bmatrix}\label{}\end{equation}Let's see now a classical problem in biomechanics where optimization is useful and there is more than one design variable. The distribution problem in biomechanicsUsing the inverse dynamics approach in biomechanics, we can determine the net force and torque acting on a joint if we know the external forces on the segments and the kinematics and inertial properties of the segments. But with this approach we are unable to determine the individual muscles forces that created such torque, as expressed in the following equation:\begin{equation}M_{total} = M_1 + M_2 + \dots + M_n = r_1F_1 + r_2F_2 + \dots + r_nF_n\label{}\end{equation}where $r_i$ is the moment arm of the force $F_i$ that generates a torque $M_i$, a parcel of the (known) total torque $M_{total}$. Even if we know the moment arm of each muscle (e.g., from cadaveric data or from image analysis), the equation above has $n$ unknowns. 
Because there is more than one muscle that potentially created such torque, there are more unknowns than equations, and the problem is underdetermined. So, the problem is how to find how the torque is distributed among the muscles of that joint. One solution is to consider that we (biological systems) optimize our effort in order to minimize energy expenditure, stresses on our tissues, fatigue, etc. The principle of least action, stated in the opening of this text, is an allusion that optimization might be ubiquitous in nature. With this rationale, let's solve the distribution problem in biomechanics using optimization and find the minimum force of each muscle necessary to complete a given task. The following cost functions have been proposed to solve the distribution problem in biomechanics:\begin{equation}\begin{array}{l l}\displaystyle\sum_{i=1}^N F_i \quad &\text{e.g., Seireg and Arvikar (1973)}\\\displaystyle\sum_{i=1}^N F_i^2 \quad &\\\displaystyle\sum_{i=1}^N \left(\dfrac{F_i}{pcsa_i}\right)^2 \quad &\text{e.g., Crowninshield and Brand (1981)}\\\displaystyle\sum_{i=1}^N \left(\dfrac{F_i}{M_{max,i}}\right)^3 \quad &\text{e.g., Herzog (1987)}\end{array}\label{}\end{equation}Where $pcsa_i$ is the physiological cross-sectional area of muscle $i$ and $M_{max,i}$ is the maximum torque muscle $i$ can produce. Each muscle force $F_i$ is a design variable and the following constraints must be satisfied:\begin{equation}\begin{array}{l l}0 \leq F_i \leq F_{max}\\\displaystyle\sum_{i=1}^N r_i \times F_i = M\end{array}\label{}\end{equation}Let's apply this concept to solve a distribution problem in biomechanics. Muscle force estimation Consider the following main flexors of the elbow joint (see figure below): biceps long head, biceps short head, and brachialis. Suppose that the elbow net joint torque determined using inverse dynamics is 20 Nm (flexor). How much did each of these muscles contribute to the net torque? Figure. 
A view in OpenSim of the arm26 model showing three elbow flexors (Biceps long and short heads and Brachialis). For the optimization, we will need experimental data for the moment arm, maximum moment, and *pcsa* of each muscle. Let's import these data from the OpenSim arm26 model: | # time elbow_flexion BIClong BICshort BRA
r_ef = np.loadtxt('./../data/r_elbowflexors.mot', skiprows=7)
f_ef = np.loadtxt('./../data/f_elbowflexors.mot', skiprows=7) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
The maximum isometric forces of these muscles are defined in the arm26 model as: Biceps long head: 624.3 N, Biceps short head: 435.56 N, and Brachialis: 987.26 N. Let's compute the maximum torques that each muscle could produce considering a static situation at the different elbow flexion angles: | m_ef = r_ef*1
m_ef[:, 2:] = r_ef[:, 2:]*f_ef[:, 2:] | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
And let's visualize these data: | labels = ['Biceps long head', 'Biceps short head', 'Brachialis']
fig, ax = plt.subplots(nrows=1, ncols=3, sharex=True, figsize=(10, 4))
ax[0].plot(r_ef[:, 1], r_ef[:, 2:])
#ax[0].set_xlabel('Elbow angle $(\,^o)$')
ax[0].set_title('Moment arm (m)')
ax[1].plot(f_ef[:, 1], f_ef[:, 2:])
ax[1].set_xlabel('Elbow angle $(\,^o)$', fontsize=16)
ax[1].set_title('Maximum force (N)')
ax[2].plot(m_ef[:, 1], m_ef[:, 2:])
#ax[2].set_xlabel('Elbow angle $(\,^o)$')
ax[2].set_title('Maximum torque (Nm)')
ax[2].legend(labels, loc='best', framealpha=.5)
ax[2].set_xlim(np.min(r_ef[:, 1]), np.max(r_ef[:, 1]))
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
These data don't have the *pcsa* value of each muscle. We will estimate the *pcsa* considering that the amount of maximum muscle force generated per area is constant and equal to 50 N/cm$^2$. Consequently, the *pcsa* (in cm$^2$) for each muscle is: | a_ef = np.array([624.3, 435.56, 987.26])/50 # 50 N/cm2
print(a_ef) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
Static versus dynamic optimization In the context of biomechanics, we can solve the distribution problem separately for each angle (instant) of the elbow; we will refer to that as static optimization. However, there is no guarantee that when we analyze all these solutions across the range of angles, they will be the best solution overall. One reason is that static optimization ignores the time history of the muscle force. Dynamic optimization refers to optimization over a period of time, for which we will need to input a cost function spanning the entire period of time at once. Dynamic optimization usually has a higher computational cost than static optimization. For now, we will solve the present problem using static optimization. Solution of the optimization problem For the present case, we are dealing with a problem of minimization that is multidimensional (a function of several variables), nonlinear, constrained, and we can't assume that the cost function is convex. Numerical optimization is hardly a simple task. There are many different algorithms and public and commercial software for performing optimization. For instance, look at [NEOS Server](http://www.neos-server.org/neos/), a free internet-based service for solving numerical optimization problems. We will solve the present problem using the [scipy.optimize](http://docs.scipy.org/doc/scipy/reference/optimize.html#module-scipy.optimize) package, which provides several optimization algorithms. We will use the function `minimize`:

```python
scipy.optimize.minimize(fun, x0, args=(), method=None, jac=None, hess=None,
                        hessp=None, bounds=None, constraints=(), tol=None,
                        callback=None, options=None)
"""Minimization of scalar function of one or more variables."""
```

Now, let's write Python functions for each cost function: | from scipy.optimize import minimize
def cf_f1(x):
"""Cost function: sum of forces."""
return x[0] + x[1] + x[2]
def cf_f2(x):
"""Cost function: sum of forces squared."""
return x[0]**2 + x[1]**2 + x[2]**2
def cf_fpcsa2(x, a):
"""Cost function: sum of squared muscle stresses."""
return (x[0]/a[0])**2 + (x[1]/a[1])**2 + (x[2]/a[2])**2
def cf_fmmax3(x, m):
"""Cost function: sum of cubic forces normalized by moments."""
return (x[0]/m[0])**3 + (x[1]/m[1])**3 + (x[2]/m[2])**3 | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
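A quick sanity check of the first two cost functions with arbitrary force values (redefined here so the snippet is self-contained):

```python
def cf_f1(x):
    """Cost function: sum of forces."""
    return x[0] + x[1] + x[2]

def cf_f2(x):
    """Cost function: sum of forces squared."""
    return x[0]**2 + x[1]**2 + x[2]**2

print(cf_f1([1, 2, 3]), cf_f2([1, 2, 3]))  # 6 14
```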
Let's also define the Jacobian for each cost function (which is an optional parameter for the optimization): | def cf_f1d(x):
"""Derivative of cost function: sum of forces."""
dfdx0 = 1
dfdx1 = 1
dfdx2 = 1
return np.array([dfdx0, dfdx1, dfdx2])
def cf_f2d(x):
"""Derivative of cost function: sum of forces squared."""
dfdx0 = 2*x[0]
dfdx1 = 2*x[1]
dfdx2 = 2*x[2]
return np.array([dfdx0, dfdx1, dfdx2])
def cf_fpcsa2d(x, a):
"""Derivative of cost function: sum of squared muscle stresses."""
dfdx0 = 2*x[0]/a[0]**2
dfdx1 = 2*x[1]/a[1]**2
dfdx2 = 2*x[2]/a[2]**2
return np.array([dfdx0, dfdx1, dfdx2])
def cf_fmmax3d(x, m):
"""Derivative of cost function: sum of cubic forces normalized by moments."""
dfdx0 = 3*x[0]**2/m[0]**3
dfdx1 = 3*x[1]**2/m[1]**3
dfdx2 = 3*x[2]**2/m[2]**3
return np.array([dfdx0, dfdx1, dfdx2]) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
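Analytic gradients like these are easy to get wrong, so a quick central finite-difference check is cheap insurance. A minimal sketch (only `cf_f2` is repeated here to keep the snippet self-contained; the test point is arbitrary):

```python
import numpy as np

def cf_f2(x):
    """Cost function: sum of forces squared."""
    return x[0]**2 + x[1]**2 + x[2]**2

def cf_f2d(x):
    """Analytic gradient of cf_f2."""
    return np.array([2*x[0], 2*x[1], 2*x[2]])

def num_grad(f, x, h=1e-6):
    """Central finite-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2*h)
    return g

x = np.array([100.0, 150.0, 400.0])  # arbitrary test point [N]
print(np.allclose(cf_f2d(x), num_grad(cf_f2, x)))  # True
```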
Let's define initial values: | M = 20 # desired torque at the elbow
iang = 69 # which will give the closest value to 90 degrees
r = r_ef[iang, 2:]
f0 = f_ef[iang, 2:]
a = a_ef
m = m_ef[iang, 2:]
x0 = f_ef[iang, 2:]/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0)) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
Inequality constraints (such as boundaries in our problem) can be entered with the parameter `bounds` of the `minimize` function: | bnds = ((0, f0[0]), (0, f0[1]), (0, f0[2]))
Equality constraints (such as the sum of torques equaling the desired torque in our problem), as well as inequality constraints, can be entered with the parameter `constraints` of the `minimize` function (and we can also opt to enter the Jacobian of these constraints): | # use this in combination with the parameter bounds:
cons = ({'type': 'eq',
'fun' : lambda x, r, f0, M: np.array([r[0]*x[0] + r[1]*x[1] + r[2]*x[2] - M]),
'jac' : lambda x, r, f0, M: np.array([r[0], r[1], r[2]]), 'args': (r, f0, M)})
# alternatively, enter everything as constraints (this reassignment replaces the cons above):
cons = ({'type': 'eq',
'fun' : lambda x, r, f0, M: np.array([r[0]*x[0] + r[1]*x[1] + r[2]*x[2] - M]),
'jac' : lambda x, r, f0, M: np.array([r[0], r[1], r[2]]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[0]-x[0],
'jac' : lambda x, r, f0, M: np.array([-1, 0, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[1]-x[1],
'jac' : lambda x, r, f0, M: np.array([0, -1, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[2]-x[2],
'jac' : lambda x, r, f0, M: np.array([0, 0, -1]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[0],
'jac' : lambda x, r, f0, M: np.array([1, 0, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[1],
'jac' : lambda x, r, f0, M: np.array([0, 1, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[2],
'jac' : lambda x, r, f0, M: np.array([0, 0, 1]), 'args': (r, f0, M)}) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
Although more verbose, when the Jacobians of all the constraints are also provided, this alternative seems better than passing bounds to the optimization process (smaller error in the final result and fewer iterations). Given the characteristics of the problem, if we use the function `minimize` we are limited to the SLSQP (Sequential Least SQuares Programming) solver. Finally, let's run the optimization for the four different cost functions and find the optimal muscle forces: | f1r = minimize(fun=cf_f1, x0=x0, args=(), jac=cf_f1d,
constraints=cons, method='SLSQP',
options={'disp': True})
f2r = minimize(fun=cf_f2, x0=x0, args=(), jac=cf_f2d,
constraints=cons, method='SLSQP',
options={'disp': True})
fpcsa2r = minimize(fun=cf_fpcsa2, x0=x0, args=(a,), jac=cf_fpcsa2d,
constraints=cons, method='SLSQP',
options={'disp': True})
fmmax3r = minimize(fun=cf_fmmax3, x0=x0, args=(m,), jac=cf_fmmax3d,
constraints=cons, method='SLSQP',
options={'disp': True}) | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
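The same recipe can be exercised end-to-end on made-up numbers (the moment arms, torque, and bounds below are hypothetical, not the notebook's `r_ef`/`f_ef` data): minimize the sum of squared forces subject to the torque equality constraint, then check the result. For this quadratic cost with inactive bounds the answer is known in closed form, x = M·r/‖r‖², which the solver should reproduce:

```python
import numpy as np
from scipy.optimize import minimize

r = np.array([0.04, 0.035, 0.02])   # hypothetical moment arms [m]
M = 20.0                            # desired torque [Nm]
x0 = np.array([50.0, 50.0, 50.0])   # initial guess [N]

cons = ({'type': 'eq',
         'fun': lambda x: r @ x - M,   # sum of torques equals M
         'jac': lambda x: r},)
bnds = ((0, 1000),) * 3

res = minimize(lambda x: np.sum(x**2), x0, jac=lambda x: 2*x,
               method='SLSQP', bounds=bnds, constraints=cons)

x_closed = M * r / (r @ r)  # closed-form solution for the quadratic cost
print(res.success, np.allclose(res.x, x_closed, rtol=1e-3))
```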
Let's compare the results for the different cost functions: | dat = np.vstack((np.around(r*100,1), np.around(a,1), np.around(f0,0), np.around(m,1)))
opt = np.around(np.vstack((f1r.x, f2r.x, fpcsa2r.x, fmmax3r.x)), 1)
er = ['-', '-', '-', '-',
np.sum(r*f1r.x)-M, np.sum(r*f2r.x)-M, np.sum(r*fpcsa2r.x)-M, np.sum(r*fmmax3r.x)-M]
data = np.vstack((np.vstack((dat, opt)).T, er)).T
rows = [r'$\text{Moment arm}\;[cm]$', r'$pcsa\;[cm^2]$', r'$F_{max}\;[N]$', r'$M_{max}\;[Nm]$',  # raw strings so '\t' etc. are not escape sequences
        r'$\sum F_i$', r'$\sum F_i^2$', r'$\sum(F_i/pcsa_i)^2$', r'$\sum(F_i/M_{max,i})^3$']
cols = ['Biceps long head', 'Biceps short head', 'Brachialis', 'Error in M']
df = pd.DataFrame(data, index=rows, columns=cols)
print('\nComparison of different cost functions for solving the distribution problem')
df | _____no_output_____ | MIT | courses/modsim2018/tasks/Tasks_DuringLecture18/BMC-master/notebooks/Optimization.ipynb | raissabthibes/bmc |
______

Copyright Pierian Data. For more information, visit us at www.pieriandata.com

Pandas Data Visualization Exercises

This is just a quick exercise to review the various plots we showed earlier. Use df3.csv to replicate the following plots.

IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output! | # RUN THIS CELL
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df3 = pd.read_csv('df3.csv')
print(len(df3))
print(df3.head()) | 500
weekday produced defective
0 1.Monday 73 7
1 2.Tuesday 75 10
2 3.Wednesday 86 7
3 4.Thursday 64 7
4 5.Friday 70 6
| MIT | pacote-download/02-Pandas-Data-Visualization-Exercise.ipynb | stephacastro/Coleta_dados |