Step 2 - Define Model

Now we'll define the model. See how our model consists of three blocks of `Conv2D` and `MaxPool2D` layers (the base) followed by a head of `Dense` layers. We can translate this design more or less directly into a Keras `Sequential` model just by filling in the appropriate parameters.
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

model = keras.Sequential([
    # First Convolutional Block
    layers.Conv2D(filters=32, kernel_size=5, activation="relu", padding='same',
                  # give the input dimensions in the first layer
                  # [height, width, color channels (RGB)]
                  input_shape=[128, 128, 3]),
    layers.MaxPool2D(),

    # Second Convolutional Block
    layers.Conv2D(filters=64, kernel_size=3, activation="relu", padding='same'),
    layers.MaxPool2D(),

    # Third Convolutional Block
    layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding='same'),
    layers.MaxPool2D(),

    # Classifier Head
    layers.Flatten(),
    layers.Dense(units=6, activation="relu"),
    layers.Dense(units=1, activation="sigmoid"),
])
model.summary()
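As a quick sanity check of the size/quantity trade-off in this architecture, here is a small pure-Python sketch (not part of the original notebook) that walks the 128x128 input through the three blocks: each `MaxPool2D` with its default 2x2 pool halves the spatial dimensions, while `Conv2D` with `padding='same'` keeps them and only sets the channel count.

```python
# Sketch: track the feature-map shape through the three Conv2D/MaxPool2D blocks.
# Assumes 'same' padding (Conv2D keeps height/width) and the default 2x2 pooling.
height = width = 128
shapes = []
for filters in [32, 64, 128]:
    # Conv2D: height/width unchanged, channels become `filters`
    # MaxPool2D: height and width are halved
    height //= 2
    width //= 2
    shapes.append((height, width, filters))

print(shapes)  # [(64, 64, 32), (32, 32, 64), (16, 16, 128)]
```

So by the time the maps reach `Flatten`, each is only 16x16 but there are 128 of them, which is why growing the filter count block-by-block stays affordable.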
Apache-2.0
notebooks/computer_vision/raw/tut5.ipynb
guesswhohaha/learntools
Notice in this definition how the number of filters doubled block-by-block: 32, 64, 128. This is a common pattern. Since the `MaxPool2D` layer is reducing the *size* of the feature maps, we can afford to increase the *quantity* we create.

Step 3 - Train

We can train this model just like the model from Lesson 1: compile it with an optimizer along with a loss and metric appropriate for binary classification.
model.compile(
    optimizer=keras.optimizers.Adam(epsilon=0.01),
    loss='binary_crossentropy',
    metrics=['binary_accuracy'],
)

history = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=40,
)

import pandas as pd

history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
Cascade Files

OpenCV comes with these pre-trained cascade files; we've relocated the .xml files for you into our own DATA folder.

Face Detection
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def detect_face(img):
    face_img = img.copy()
    face_rects = face_cascade.detectMultiScale(face_img)
    for (x, y, w, h) in face_rects:
        cv2.rectangle(face_img, (x, y), (x + w, y + h), (255, 255, 255), 10)
    return face_img
Apache-2.0
11_Face_Detection.ipynb
EliasPapachristos/Computer_Vision_with_OpenCV
Conjunction with Video
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = detect_face(frame)
    cv2.imshow('Video Face Detection', frame)
    c = cv2.waitKey(1)
    if c == 27:  # Esc key
        break

cap.release()
cv2.destroyAllWindows()
Looking up Trig Ratios

There are three ways you could find the value of a trig function at a particular angle.

**1. Use a table** - This is how engineers used to find trig ratios before the days of computers. For example, from the table below I can see that $\sin(60)=0.866$

| angle | sin | cos | tan |
| :---: | :---: | :---: | :---: |
| 0 | 0.000 | 1.000 | 0.000 |
| 10 | 0.174 | 0.985 | 0.176 |
| 20 | 0.342 | 0.940 | 0.364 |
| 30 | 0.500 | 0.866 | 0.577 |
| 40 | 0.643 | 0.766 | 0.839 |
| 50 | 0.766 | 0.643 | 1.192 |
| 60 | 0.866 | 0.500 | 1.732 |
| 70 | 0.940 | 0.342 | 2.747 |
| 80 | 0.985 | 0.174 | 5.671 |

The problem with this technique is that there will always be gaps in a table.

**2. Use a graph** - One way to try to fill these gaps is by consulting a graph of a trigonometric function. For example, the image below shows a plot of $\sin(\theta)$ for $0 \leq \theta \leq 360$

![](https://d17h27t6h515a5.cloudfront.net/topher/2017/December/5a2efe68_sine/sine.png)

These graphs are nice because they give a good visual sense for how these ratios behave, but they aren't great for getting accurate values. Which leads us to the **best** way to look up trig ratios...

**3. Use a computer!** This probably isn't a surprise, but Python has built-in functions to calculate sine, cosine, and tangent... In fact, you can even type "sin(60 degrees)" into **Google** and you'll get the correct answer!

![](https://d17h27t6h515a5.cloudfront.net/topher/2017/December/5a2f0062_img-1742/img-1742.jpg)

Note how I wrote "sin(60 degrees)" instead of just "sin(60)". That's because these functions generally expect their input to be in **radians**. Now let's calculate these ratios with Python.
# Python's math module has functions called sin, cos, and tan
# as well as the constant "pi" (which we will find useful shortly)
from math import sin, cos, tan, pi

# Run this cell. What do you expect the output to be?
print(sin(60))
-0.3048106211022167
MIT
3-object-tracking-and-localization/activities/8-vehicle-motion-and-calculus/Looking up Trig Ratios.ipynb
S1lv10Fr4gn4n1/udacity-cv
Did the output match what you expected?

If not, it's probably because we didn't convert our angle to radians.

EXERCISE 1 - Write a function that converts degrees to radians

Implement the following math in code:

$$\theta_{\text{radians}} = \theta_{\text{degrees}} \times \frac{\pi}{180}$$
from math import pi

def deg2rad(theta):
    """Converts degrees to radians"""
    return theta * (pi / 180)

assert(deg2rad(45.0) == pi / 4)
assert(deg2rad(90.0) == pi / 2)
print("Nice work! Your degrees to radians function works!")

for theta in [0, 30, 45, 60, 90]:
    theta_rad = deg2rad(theta)
    sin_theta = sin(theta_rad)
    print("sin(", theta, "degrees) =", sin_theta)
Nice work! Your degrees to radians function works!
sin( 0 degrees) = 0.0
sin( 30 degrees) = 0.49999999999999994
sin( 45 degrees) = 0.7071067811865475
sin( 60 degrees) = 0.8660254037844386
sin( 90 degrees) = 1.0
EXERCISE 2 - Make plots of cosine and tangent
import numpy as np
from matplotlib import pyplot as plt

def plot_sine(min_theta, max_theta):
    """
    Generates a plot of sin(theta) between min_theta and max_theta
    (both of which are specified in degrees).
    """
    angles_degrees = np.linspace(min_theta, max_theta)
    angles_radians = deg2rad(angles_degrees)
    values = np.sin(angles_radians)
    X = angles_degrees
    Y = values
    plt.plot(X, Y)
    plt.show()

# EXERCISE 2.1 Implement this! Try not to look at the
# implementation of plot_sine TOO much...
def plot_cosine(min_theta, max_theta):
    """
    Generates a plot of cos(theta) between min_theta and max_theta
    (both of which are specified in degrees).
    """
    angles_degrees = np.linspace(min_theta, max_theta)
    angles_radians = deg2rad(angles_degrees)
    values = np.cos(angles_radians)
    X = angles_degrees
    Y = values
    plt.plot(X, Y)
    plt.show()

plot_sine(0, 360)
plot_cosine(0, 360)

# # # # # SOLUTION CODE # # # #
from math import pi

def deg2rad_solution(theta):
    """Converts degrees to radians"""
    return theta * pi / 180

assert(deg2rad_solution(45.0) == pi / 4)
assert(deg2rad_solution(90.0) == pi / 2)

def plot_cosine_solution(min_theta, max_theta):
    """
    Generates a plot of cos(theta) between min_theta and max_theta
    (both of which are specified in degrees).
    """
    angles_degrees = np.linspace(min_theta, max_theta)
    angles_radians = deg2rad_solution(angles_degrees)
    values = np.cos(angles_radians)
    X = angles_degrees
    Y = values
    plt.plot(X, Y)
    plt.show()

plot_cosine_solution(0, 360)
In this notebook, we'll look at entry points for G10 vol, look for crosses with the largest downside sensitivity to SPX, indicatively price several structures and analyze their carry profile.

* [1: FX entry point vs richness](#1:-FX-entry-point-vs-richness)
* [2: Downside sensitivity to SPX](#2:-Downside-sensitivity-to-SPX)
* [3: AUDJPY conditional relationship with SPX](#3:-AUDJPY-conditional-relationship-with-SPX)
* [4: Price structures](#4:-Price-structures)
* [5: Analyse rates package](#5:-Analyse-rates-package)

1: FX entry point vs richness

Let's pull [GS FX Spot](https://marquee.gs.com/s/developer/datasets/FXSPOT_PREMIUM) and [GS FX Implied Volatility](https://marquee.gs.com/s/developer/datasets/FXIMPLIEDVOL_PREMIUM) and look at implied vs realized vol as well as current implied level as percentile relative to the last 2 years.
# imports assumed from earlier cells of the original notebook
from datetime import date

import pandas as pd
import matplotlib.pyplot as plt

from gs_quant.data import Dataset
from gs_quant.datetime import business_day_offset
from gs_quant.timeseries import volatility, percentiles

def format_df(data_dict):
    df = pd.concat(data_dict, axis=1)
    df.columns = data_dict.keys()
    return df.fillna(method='ffill').dropna()

g10 = ['USDJPY', 'EURUSD', 'AUDUSD', 'GBPUSD', 'USDCAD', 'USDNOK', 'NZDUSD',
       'USDSEK', 'USDCHF', 'AUDJPY']
start_date = date(2005, 8, 26)
end_date = business_day_offset(date.today(), -1, roll='preceding')

fxspot_dataset, fxvol_dataset = Dataset('FXSPOT_PREMIUM'), Dataset('FXIMPLIEDVOL_PREMIUM')

spot_data, impvol_data, spot_fx = {}, {}, {}
for cross in g10:
    spot = fxspot_dataset.get_data(start_date, end_date, bbid=cross)[['spot']].drop_duplicates(keep='last')
    spot_fx[cross] = spot['spot']
    spot_data[cross] = volatility(spot['spot'], 63)  # realized vol
    vol = fxvol_dataset.get_data(start_date, end_date, bbid=cross, tenor='3m',
                                 deltaStrike='DN', location='NYC')[['impliedVolatility']]
    impvol_data[cross] = vol.drop_duplicates(keep='last') * 100

spdata, ivdata = format_df(spot_data), format_df(impvol_data)
diff = ivdata.subtract(spdata).dropna()

_slice = ivdata['2018-09-01':'2020-09-08']
pct_rank = {}
for x in _slice.columns:
    pct = percentiles(_slice[x])
    pct_rank[x] = pct.iloc[-1]

for fx in pct_rank:
    plt.scatter(pct_rank[fx], diff[fx]['2020-09-08'])
plt.legend(pct_rank.keys(), loc='best', bbox_to_anchor=(0.9, -0.13), ncol=3)
plt.xlabel('Percentile of Current Implied Vol')
plt.ylabel('Implied vs Realized Vol')
plt.title('Entry Point vs Richness')
plt.show()
Apache-2.0
gs_quant/content/events/00_virtual_event/0003_trades.ipynb
KabbalahOracle/gs-quant
2: Downside sensitivity to SPX

Let's now look at beta and correlation with SPX across G10.
from gs_quant.timeseries import beta, correlation

spx_spot = Dataset('TREOD').get_data(start_date, end_date, bbid='SPX')[['closePrice']]
spx_spot = spx_spot.fillna(method='ffill').dropna()
df = pd.DataFrame(spx_spot)

# FX spot data
fx_spots = format_df(spot_fx)
data = pd.concat([spx_spot, fx_spots], axis=1).dropna()
data.columns = ['SPX'] + g10

beta_spx, corr_spx = {}, {}
# calculate rolling 84d (about 4m) beta and correlation to S&P
for cross in g10:
    beta_spx[cross] = beta(data[cross], data['SPX'], 84)
    corr_spx[cross] = correlation(data['SPX'], data[cross], 84)

fig, axs = plt.subplots(5, 2, figsize=(18, 20))
for j in range(2):
    for i in range(5):
        color = 'tab:blue'
        axs[i, j].plot(beta_spx[g10[i + j * 5]], color=color)
        axs[i, j].set_title(g10[i + j * 5])
        axs[i, j].set_ylabel('Beta', color=color)
        ax2 = axs[i, j].twinx()
        color = 'tab:orange'
        ax2.plot(corr_spx[g10[i + j * 5]], color=color)
        ax2.set_ylabel('Correlation', color=color)
plt.show()
3: AUDJPY conditional relationship with SPX

Let's focus on AUDJPY and look at its relationship with SPX when SPX is significantly up and down.
from scipy import stats
import seaborn as sns
from gs_quant.timeseries import returns

# resample data to weekly from daily & get weekly returns
wk_data = data.resample('W-FRI').last()
rets = returns(wk_data, 1)

sns.set(style='white', color_codes=True)

spx_returns = [-.1, -.05, .05, .1]
r2 = lambda x, y: stats.pearsonr(x, y)[0] ** 2

for ret in spx_returns:
    dns = rets[rets.SPX <= ret].dropna() if ret < 0 else rets[rets.SPX >= ret].dropna()
    j = sns.jointplot(x='SPX', y='AUDJPY', data=dns, kind='reg')
    j.set_axis_labels('SPX with {}% Returns'.format(ret * 100), 'AUDJPY')
    j.fig.subplots_adjust(wspace=.02)
plt.show()
Let's use the beta for all S&P returns to price a structure
sns.jointplot(x='SPX', y='AUDJPY', data=rets, kind='reg', stat_func=r2)
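As an aside on what "using spx beta to size" means in the next section: here is a minimal numpy sketch, with made-up return series (all numbers hypothetical, not gs_quant output), of estimating a beta by least squares and using it to scale a hedge notional.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly returns: AUDJPY moves ~0.4x the SPX move plus noise
spx_rets = rng.normal(0, 0.02, 250)
audjpy_rets = 0.4 * spx_rets + rng.normal(0, 0.005, 250)

# Least-squares beta of AUDJPY returns on SPX returns (slope of the fit)
beta_hat = np.polyfit(spx_rets, audjpy_rets, 1)[0]

# Size the FX hedge so its P&L roughly offsets a given SPX-linked exposure
spx_exposure = 10_000_000  # hypothetical $10m exposure
fx_notional = spx_exposure * beta_hat

print(round(beta_hat, 2), round(fx_notional))
```

With a beta well below 1, the FX hedge notional is a fraction of the equity exposure, which is the sizing logic behind the put structures priced below.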
4: Price structures

Let's now look at a few AUDJPY structures as potential hedges:

* Buy 4m AUDJPY put using spx beta to size. Max loss limited to premium paid.
* Buy 4m AUDJPY put spread (4.2%/10.6% OTMS). Max loss limited to premium paid.

For more info on this trade, check out our market strats piece [here](https://marquee.gs.com/content//article/2020/08/28/gs-marketstrats-audjpy-as-us-election-hedge)
from gs_quant.instrument import FXOption
from gs_quant.markets.portfolio import Portfolio

# buy 4m AUDJPY put
audjpy_put = FXOption(option_type='Put', pair='AUDJPY', strike_price='s-4.2%',
                      expiration_date='4m', buy_sell='Buy')
print('cost in bps: {:,.2f}'.format(audjpy_put.premium / audjpy_put.notional_amount * 1e4))

# buy 4m AUDJPY put spread (4.2%/10.6% OTMS)
put1 = FXOption(option_type='Put', pair='AUDJPY', strike_price='s-4.2%',
                expiration_date='4m', buy_sell='Buy')
put2 = FXOption(option_type='Put', pair='AUDJPY', strike_price='s-10.6%',
                expiration_date='4m', buy_sell='Sell')
fx_package = Portfolio((put1, put2))
cost = put2.premium / put2.notional_amount - put1.premium / put1.notional_amount
print('cost in bps: {:,.2f}'.format(cost * 1e4))
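To make the put spread's "max loss limited to premium paid" concrete, here is a small standalone sketch (illustrative strikes and premium only, not gs_quant output) of the payoff at expiry:

```python
# Payoff at expiry of a put spread: long the higher strike, short the lower one.
# Strikes and premium are hypothetical, quoted in the same units as spot.
def put_spread_payoff(spot, k_high, k_low, net_premium):
    long_put = max(k_high - spot, 0.0)
    short_put = -max(k_low - spot, 0.0)
    return long_put + short_put - net_premium

# e.g. 4.2%/10.6% OTMS off a hypothetical spot of 100
k_high, k_low, net_premium = 95.8, 89.4, 1.2

payoffs = [put_spread_payoff(s, k_high, k_low, net_premium) for s in range(70, 131)]
print(min(payoffs), max(payoffs))
```

The floor of the payoff is minus the net premium (the capped loss), while selling the lower strike cheapens the hedge at the cost of capping the payout at the strike spread less premium.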
...And some rates ideas:

* Sell straddle. Max loss unlimited.
* Sell 3m30y straddle, buy 2y30y straddle in a 0 pv package. Max loss unlimited.
from gs_quant.instrument import IRSwaption

leg = IRSwaption('Straddle', '30y', notional_currency='USD', expiration_date='3m', buy_sell='Sell')
print('PV in USD: {:,.2f}'.format(leg.dollar_price()))

leg1 = IRSwaption('Straddle', '30y', notional_currency='USD', expiration_date='3m',
                  buy_sell='Sell', name='3m30y ATM Straddle')
leg2 = IRSwaption('Straddle', '30y', notional_currency='USD', expiration_date='2y',
                  notional_amount='{}/pv'.format(leg1.price()), buy_sell='Buy',
                  name='2y30y ATM Straddle')
rates_package = Portfolio((leg1, leg2))
rates_package.resolve()
print('Package cost in USD: {:,.2f}'.format(rates_package.price().aggregate()))
print('PV Flat notionals ($$m):', round(leg1.notional_amount / 1e6, 1), ' by ',
      round(leg2.notional_amount / 1e6, 1))
5: Analyse rates package
dates = pd.bdate_range(date(2020, 6, 8), leg1.expiration_date, freq='5B').date.tolist()
with BackToTheFuturePricingContext(dates=dates, roll_to_fwds=True):
    future = rates_package.price()

rates_future = future.result().aggregate()
rates_future.plot(figsize=(10, 6), title='Historical PV and carry for rates package')

print('PV breakdown between legs:')
results = future.result().to_frame()
results /= 1e6
results.index = [leg1.name, leg2.name]
results.loc['Total'] = results.sum()
results.round(1)
Let's focus on the next 3m and how the calendar carries under different rates shocks.
import datetime as dt

dates = pd.bdate_range(dt.date.today(), leg1.expiration_date, freq='5B').date.tolist()
shocked_pv = pd.DataFrame(columns=['Base', '5bp per week', '50bp instantaneous'], index=dates)

p1, p2, p3 = [], [], []
with PricingContext(is_batch=True):
    for t, d in enumerate(dates):
        with CarryScenario(date=d, roll_to_fwds=True):
            p1.append(rates_package.price())
            with MarketDataShockBasedScenario({MarketDataPattern('IR', 'USD'):
                                               MarketDataShock(MarketDataShockType.Absolute, t * 0.0005)}):
                p2.append(rates_package.price())
            with MarketDataShockBasedScenario({MarketDataPattern('IR', 'USD'):
                                               MarketDataShock(MarketDataShockType.Absolute, 0.005)}):
                p3.append(rates_package.price())

shocked_pv.Base = [p.result().aggregate() for p in p1]
shocked_pv['5bp per week'] = [p.result().aggregate() for p in p2]
shocked_pv['50bp instantaneous'] = [p.result().aggregate() for p in p3]
shocked_pv /= 1e6
shocked_pv.round(1)
shocked_pv.plot(figsize=(10, 6), title='Carry + scenario analysis')
💡 Solutions

Before trying out these solutions, please start the [gqlalchemy-workshop notebook](../workshop/gqlalchemy-workshop.ipynb) to import all data. Also, this solutions manual is here to help you out, and it is recommended you try solving the exercises first by yourself.

Exercise 1

**Find out how many genres there are in the database.**

The correct Cypher query is:

```
MATCH (g:Genre)
RETURN count(g) AS num_of_genres;
```

You can try it out in Memgraph Lab at `localhost:3000`.

With GQLAlchemy's query builder, the solution is:
from gqlalchemy import match

total_genres = (
    match()
    .node(labels="Genre", variable="g")
    .return_({"count(g)": "num_of_genres"})
    .execute()
)

results = list(total_genres)
for result in results:
    print(result["num_of_genres"])
22084
MIT
solutions/gqlalchemy-solutions.ipynb
pyladiesams/graphdatabases-gqlalchemy-beginner-mar2022
Exercise 2

**Find out how many genres the movie 'Matrix, The (1999)' belongs to.**

The correct Cypher query is:

```
MATCH (:Movie {title: 'Matrix, The (1999)'})-[:OF_GENRE]->(g:Genre)
RETURN count(g) AS num_of_genres;
```

You can try it out in Memgraph Lab at `localhost:3000`.

With GQLAlchemy's query builder, the solution is:
matrix = (
    match()
    .node(labels="Movie", variable="m")
    .to("OF_GENRE")
    .node(labels="Genre", variable="g")
    .where("m.title", "=", "Matrix, The (1999)")
    .return_({"count(g)": "num_of_genres"})
    .execute()
)

results = list(matrix)
for result in results:
    print(result["num_of_genres"])
3
Exercise 3

**Find out the title of the movies that the user with `id` 1 rated.**

The correct Cypher query is:

```
MATCH (:User {id: 1})-[:RATED]->(m:Movie)
RETURN m.title;
```

You can try it out in Memgraph Lab at `localhost:3000`.

With GQLAlchemy's query builder, the solution is:
movies = (
    match()
    .node(labels="User", variable="u")
    .to("RATED")
    .node(labels="Movie", variable="m")
    .where("u.id", "=", 1)
    .return_({"m.title": "movie"})
    .execute()
)

results = list(movies)
for result in results:
    print(result["movie"])
Toy Story (1995) Grumpier Old Men (1995) Heat (1995) Seven (a.k.a. Se7en) (1995) Usual Suspects, The (1995) From Dusk Till Dawn (1996) Bottle Rocket (1996) Braveheart (1995) Rob Roy (1995) Canadian Bacon (1995) Desperado (1995) Billy Madison (1995) Clerks (1994) Dumb & Dumber (Dumb and Dumber) (1994) Ed Wood (1994) Star Wars: Episode IV - A New Hope (1977) Pulp Fiction (1994) Stargate (1994) Tommy Boy (1995) Clear and Present Danger (1994) Forrest Gump (1994) Jungle Book, The (1994) Mask, The (1994) Blown Away (1994) Dazed and Confused (1993) Fugitive, The (1993) Jurassic Park (1993) Mrs. Doubtfire (1993) Schindler's List (1993) So I Married an Axe Murderer (1993) Three Musketeers, The (1993) Tombstone (1993) Dances with Wolves (1990) Batman (1989) Silence of the Lambs, The (1991) Pinocchio (1940) Fargo (1996) Mission: Impossible (1996) James and the Giant Peach (1996) Space Jam (1996) Rock, The (1996) Twister (1996) Independence Day (a.k.a. ID4) (1996) She's the One (1996) Wizard of Oz, The (1939) Citizen Kane (1941) Adventures of Robin Hood, The (1938) Ghost and Mrs. Muir, The (1947) Mr. Smith Goes to Washington (1939) Escape to Witch Mountain (1975) Winnie the Pooh and the Blustery Day (1968) Three Caballeros, The (1945) Sword in the Stone, The (1963) Dumbo (1941) Pete's Dragon (1977) Bedknobs and Broomsticks (1971) Alice in Wonderland (1951) That Thing You Do! (1996) Ghost and the Darkness, The (1996) Swingers (1996) Willy Wonka & the Chocolate Factory (1971) Monty Python's Life of Brian (1979) Reservoir Dogs (1992) Platoon (1986) Basic Instinct (1992) E.T. 
the Extra-Terrestrial (1982) Abyss, The (1989) Monty Python and the Holy Grail (1975) Star Wars: Episode V - The Empire Strikes Back (1980) Princess Bride, The (1987) Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981) Clockwork Orange, A (1971) Apocalypse Now (1979) Star Wars: Episode VI - Return of the Jedi (1983) Goodfellas (1990) Alien (1979) Psycho (1960) Blues Brothers, The (1980) Full Metal Jacket (1987) Henry V (1989) Quiet Man, The (1952) Terminator, The (1984) Duck Soup (1933) Shining, The (1980) Groundhog Day (1993) Back to the Future (1985) Highlander (1986) Young Frankenstein (1974) Fantasia (1940) Indiana Jones and the Last Crusade (1989) Pink Floyd: The Wall (1982) Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) Batman Returns (1992) Sneakers (1992) Last of the Mohicans, The (1992) McHale's Navy (1997) Best Men (1997) Grosse Pointe Blank (1997) Austin Powers: International Man of Mystery (1997) Con Air (1997) Face/Off (1997) Men in Black (a.k.a. MIB) (1997) Conan the Barbarian (1982) L.A. 
Confidential (1997) Kiss the Girls (1997) Game, The (1997) I Know What You Did Last Summer (1997) Starship Troopers (1997) Big Lebowski, The (1998) Wedding Singer, The (1998) Welcome to Woop-Woop (1997) Newton Boys, The (1998) Wild Things (1998) Small Soldiers (1998) All Quiet on the Western Front (1930) Rocky (1976) Labyrinth (1986) Lethal Weapon (1987) Goonies, The (1985) Back to the Future Part III (1990) Bambi (1942) Saving Private Ryan (1998) Black Cauldron, The (1985) Flight of the Navigator (1986) Great Mouse Detective, The (1986) Honey, I Shrunk the Kids (1989) Negotiator, The (1998) Jungle Book, The (1967) Rescuers, The (1977) Return to Oz (1985) Rocketeer, The (1991) Sleeping Beauty (1959) Song of the South (1946) Tron (1982) Indiana Jones and the Temple of Doom (1984) Lord of the Rings, The (1978) Charlotte's Web (1973) Secret of NIMH, The (1982) American Tail, An (1986) Legend (1985) NeverEnding Story, The (1984) Beetlejuice (1988) Willow (1988) Toys (1992) Few Good Men, A (1992) Rush Hour (1998) Edward Scissorhands (1990) American History X (1998) I Still Know What You Did Last Summer (1998) Enemy of the State (1998) King Kong (1933) Very Bad Things (1998) Psycho (1998) Rushmore (1998) Romancing the Stone (1984) Young Sherlock Holmes (1985) Thin Red Line, The (1998) Howard the Duck (1986) Texas Chainsaw Massacre, The (1974) Crocodile Dundee (1986) ¡Three Amigos! (1986) 20 Dates (1998) Office Space (1999) Logan's Run (1976) Planet of the Apes (1968) Lock, Stock & Two Smoking Barrels (1998) Matrix, The (1999) Go (1999) SLC Punk! (1998) Dick Tracy (1990) Mummy, The (1999) Star Wars: Episode I - The Phantom Menace (1999) Superman (1978) Superman II (1980) Dracula (1931) Frankenstein (1931) Wolf Man, The (1941) Rocky Horror Picture Show, The (1975) Run Lola Run (Lola rennt) (1998) South Park: Bigger, Longer and Uncut (1999) Ghostbusters (a.k.a. 
Ghost Busters) (1984) Iron Giant, The (1999) Big (1988) 13th Warrior, The (1999) American Beauty (1999) Excalibur (1981) Gulliver's Travels (1939) Total Recall (1990) Dirty Dozen, The (1967) Goldfinger (1964) From Russia with Love (1963) Dr. No (1962) Fight Club (1999) RoboCop (1987) Who Framed Roger Rabbit? (1988) Live and Let Die (1973) Thunderball (1965) Being John Malkovich (1999) Spaceballs (1987) Robin Hood (1973) Dogma (1999) Messenger: The Story of Joan of Arc, The (1999) Longest Day, The (1962) Green Mile, The (1999) Easy Rider (1969) Talented Mr. Ripley, The (1999) Encino Man (1992) Sister Act (1992) Wayne's World (1992) Scream 3 (2000) JFK (1991) Teenage Mutant Ninja Turtles II: The Secret of the Ooze (1991) Teenage Mutant Ninja Turtles III (1993) Red Dawn (1984) Good Morning, Vietnam (1987) Grumpy Old Men (1993) Ladyhawke (1985) Hook (1991) Predator (1987) Gladiator (2000) Road Trip (2000) Man with the Golden Gun, The (1974) Blazing Saddles (1974) Mad Max (1979) Road Warrior, The (Mad Max 2) (1981) Shaft (1971) Big Trouble in Little China (1986) Shaft (2000) X-Men (2000) What About Bob? (1991) Transformers: The Movie (1986) M*A*S*H (a.k.a. MASH) (1970)
Exercise 4

**List 15 movies of 'Documentary' and 'Comedy' genres and sort them by title descending.**

The correct Cypher query is:

```
MATCH (m:Movie)-[:OF_GENRE]->(:Genre {name: "Documentary"})
MATCH (m)-[:OF_GENRE]->(:Genre {name: "Comedy"})
RETURN m.title
ORDER BY m.title DESC
LIMIT 15;
```

You can try it out in Memgraph Lab at `localhost:3000`.

With GQLAlchemy's query builder, the solution is:
movies = (
    match()
    .node(labels="Movie", variable="m")
    .to("OF_GENRE")
    .node(labels="Genre", variable="g1")
    .where("g1.name", "=", "Documentary")
    .match()
    .node(labels="Movie", variable="m")
    .to("OF_GENRE")
    .node(labels="Genre", variable="g2")
    .where("g2.name", "=", "Comedy")
    .return_({"m.title": "movie"})
    .order_by("m.title DESC")
    .limit(15)
    .execute()
)

results = list(movies)
for result in results:
    print(result["movie"])
What the #$*! Do We Know!? (a.k.a. What the Bleep Do We Know!?) (2004) Union: The Business Behind Getting High, The (2007) Super Size Me (2004) Super High Me (2007) Secret Policeman's Other Ball, The (1982) Richard Pryor Live on the Sunset Strip (1982) Religulous (2008) Paper Heart (2009) Original Kings of Comedy, The (2000) Merci Patron ! (2016) Martin Lawrence Live: Runteldat (2002) Kevin Hart: Laugh at My Pain (2011) Jeff Ross Roasts Criminals: Live at Brazos County Jail (2015) Jackass: The Movie (2002) Jackass Number Two (2006)
Exercise 5

**Find out the minimum rating of the 'Star Wars: Episode I - The Phantom Menace (1999)' movie.**

The correct Cypher query is:

```
MATCH (:User)-[r:RATED]->(:Movie {title: 'Star Wars: Episode I - The Phantom Menace (1999)'})
RETURN min(r.rating);
```

You can try it out in Memgraph Lab at `localhost:3000`.

With GQLAlchemy's query builder, the solution is:
rating = (
    match()
    .node(labels="User")
    .to("RATED", variable="r")
    .node(labels="Movie", variable="m")
    .where("m.title", "=", "Star Wars: Episode I - The Phantom Menace (1999)")
    .return_({"min(r.rating)": "min_rating"})
    .execute()
)

results = list(rating)
for result in results:
    print(result["min_rating"])
0.5
Can We Predict If a PGA Tour Player Won a Tournament in a Given Year?

Golf is picking up popularity, so I thought it would be interesting to focus my project here. I set out to find what sets apart the best golfers from the rest. I decided to explore their statistics and to see if I could predict which golfers would win in a given year. My original dataset was found on Kaggle, and the data was scraped from the PGA Tour website. From this data, I performed an exploratory data analysis to explore the distribution of players on numerous aspects of the game, discover outliers, and further explore how the game has changed from 2010 to 2018. I also utilized numerous supervised machine learning models to predict a golfer's earnings and wins.

To predict a golfer's wins, I used classification methods such as logistic regression and Random Forest classification. The best performance came from the Random Forest classifier.

1. The Data

pgaTourData.csv contains 1674 rows and 18 columns. Each row indicates a golfer's performance for that year.
# Player Name: Name of the golfer
# Rounds: The number of games that a player played
# Fairway Percentage: The percentage of time a tee shot lands on the fairway
# Year: The year in which the statistic was collected
# Avg Distance: The average distance of the tee-shot
# gir: (Green in Regulation) is met if any part of the ball is touching the putting surface
#      while the number of strokes taken is at least two fewer than par
# Average Putts: The average number of strokes taken on the green
# Average Scrambling: Scrambling is when a player misses the green in regulation,
#      but still makes par or better on a hole
# Average Score: Average Score is the average of all the scores a player has played in that year
# Points: The number of FedExCup points a player earned in that year
# Wins: The number of competitions a player has won in that year
# Top 10: The number of competitions where a player has placed in the Top 10
# Average SG Putts: Strokes gained: putting measures how many strokes a player gains (or loses) on the greens
# Average SG Total: The off-the-tee + approach-the-green + around-the-green + putting statistics combined
# SG:OTT: Strokes gained: off-the-tee measures player performance off the tee on all par-4s and par-5s
# SG:APR: Strokes gained: approach-the-green measures player performance on approach shots
# SG:ARG: Strokes gained: around-the-green measures player performance on any shot within 30 yards of the edge of the green
# Money: The amount of prize money a player has earned from tournaments

#collapse
# importing packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Importing the data
df = pd.read_csv('pgaTourData.csv')

# Examining the first 5 rows
df.head()

#collapse
df.info()

#collapse
df.shape
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
2. Data Cleaning

After looking at the dataframe, the data needs to be cleaned:

- For the columns Top 10 and Wins, convert the NaNs to 0s
- Change Top 10 and Wins into an int
- Drop NaN values for players who do not have the full statistics
- Change the column Rounds into int
- Change Points to int
- Remove the dollar sign ($) and commas in the column Money
# Replace NaN with 0 in Top 10
df['Top 10'].fillna(0, inplace=True)
df['Top 10'] = df['Top 10'].astype(int)

# Replace NaN with 0 in # of wins
df['Wins'].fillna(0, inplace=True)
df['Wins'] = df['Wins'].astype(int)

# Drop NaN values
df.dropna(axis=0, inplace=True)

# Change Rounds to int
df['Rounds'] = df['Rounds'].astype(int)

# Change Points to int
df['Points'] = df['Points'].apply(lambda x: x.replace(',', ''))
df['Points'] = df['Points'].astype(int)

# Remove the $ and commas in Money
df['Money'] = df['Money'].apply(lambda x: x.replace('$', ''))
df['Money'] = df['Money'].apply(lambda x: x.replace(',', ''))
df['Money'] = df['Money'].astype(float)

#collapse
df.info()

#collapse
df.describe()
3. Exploratory Data Analysis
#collapse_output
# Looking at the distribution of data
f, ax = plt.subplots(nrows=6, ncols=3, figsize=(20, 20))
distribution = df.loc[:, df.columns != 'Player Name'].columns

rows = 0
cols = 0
for i, column in enumerate(distribution):
    p = sns.distplot(df[column], ax=ax[rows][cols])
    cols += 1
    if cols == 3:
        cols = 0
        rows += 1
/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
  warnings.warn(msg, FutureWarning)
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
From the distributions plotted, most of the graphs are normally distributed. However, we can observe that Money, Points, Wins, and Top 10s are all skewed to the right. This could be explained by the separation between the best players and the average PGA Tour player. The best players have multiple Top 10 placings and wins that allow them to earn more from tournaments, while the average player has no wins and only a few Top 10 placings, which prevents them from earning as much.
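As a quick illustration of what "skewed to the right" means here, a small synthetic sketch (this uses made-up samples, not the PGA data, and assumes `scipy` is available): a symmetric sample has skewness near zero, while a right-skewed one has positive skewness.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# A symmetric (normal) sample has skewness near 0,
# while an exponential sample is right-skewed (positive skewness).
symmetric = rng.normal(size=100_000)
right_skewed = rng.exponential(size=100_000)

print(skew(symmetric))     # close to 0
print(skew(right_skewed))  # close to 2 (the exponential's theoretical skewness)
```

Applying `scipy.stats.skew` to the Money, Points, Wins, and Top 10 columns would quantify the right-skew visible in the histograms above.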
#collapse_output
# Looking at the number of players with Wins for each year
win = df.groupby('Year')['Wins'].value_counts()
win = win.unstack()
win.fillna(0, inplace=True)

# Converting win into ints
win = win.astype(int)
print(win)
Wins    0   1  2  3  4  5
Year                     
2010  166  21  5  0  0  0
2011  156  25  5  0  0  0
2012  159  26  4  1  0  0
2013  152  24  3  0  0  1
2014  142  29  3  2  0  0
2015  150  29  2  1  1  0
2016  152  28  4  1  0  0
2017  156  30  0  3  1  0
2018  158  26  5  3  0  0
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
From this table, we can see that most players end the year without a win. It's pretty rare to find a player who has won more than once!
# Looking at the percentage of players without a win in that year
players = win.apply(lambda x: np.sum(x), axis=1)
percent_no_win = win[0]/players
percent_no_win = percent_no_win*100
print(percent_no_win)

#collapse_output
# Plotting percentage of players without a win each year
fig, ax = plt.subplots()
bar_width = 0.8
opacity = 0.7
index = np.arange(2010, 2019)

plt.bar(index, percent_no_win, bar_width, alpha = opacity)
plt.xticks(index)
plt.xlabel('Year')
plt.ylabel('%')
plt.title('Percentage of Players without a Win')
_____no_output_____
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
From the bar chart above, we can observe that the percentage of players without a win hovers around 80%, with very little variation from 2010 to 2018.
#collapse_output
# Plotting the number of wins on a bar chart
fig, ax = plt.subplots()
index = np.arange(2010, 2019)
bar_width = 0.2
opacity = 0.7

def plot_bar(index, win, labels):
    plt.bar(index, win, bar_width, alpha=opacity, label=labels)

# Plotting the bars
rects = plot_bar(index, win[0], labels = '0 Wins')
rects1 = plot_bar(index + bar_width, win[1], labels = '1 Wins')
rects2 = plot_bar(index + bar_width*2, win[2], labels = '2 Wins')
rects3 = plot_bar(index + bar_width*3, win[3], labels = '3 Wins')
rects4 = plot_bar(index + bar_width*4, win[4], labels = '4 Wins')
rects5 = plot_bar(index + bar_width*5, win[5], labels = '5 Wins')

plt.xticks(index + bar_width, index)
plt.xlabel('Year')
plt.ylabel('Number of Players')
plt.title('Distribution of Wins each Year')
plt.legend()
_____no_output_____
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
Looking at the distribution of wins each year, we can see that it is rare for most players to win even one tournament on the PGA Tour. The majority of players do not win at all, and very few players win more than once a year.
# Percentage of people who did not place in the top 10 each year
top10 = df.groupby('Year')['Top 10'].value_counts()
top10 = top10.unstack()
top10.fillna(0, inplace=True)
players = top10.apply(lambda x: np.sum(x), axis=1)
no_top10 = top10[0]/players * 100
print(no_top10)
Year
2010    17.187500
2011    25.268817
2012    23.157895
2013    18.888889
2014    16.477273
2015    18.579235
2016    20.000000
2017    15.789474
2018    17.187500
dtype: float64
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
Looking at the percentage of players that did not place in the Top 10 by year, we can observe that only approximately 20% of players failed to place in the Top 10. In addition, the range of these yearly percentages is only 9.47 percentage points, so this statistic does not vary much from year to year.
# Who are some of the longest hitters
distance = df[['Year','Player Name','Avg Distance']].copy()
distance.sort_values(by='Avg Distance', inplace=True, ascending=False)
print(distance.head())
      Year     Player Name  Avg Distance
162   2018    Rory McIlroy         319.7
1481  2011     J.B. Holmes         318.4
174   2018   Trey Mullinax         318.3
732   2015  Dustin Johnson         317.7
350   2017    Rory McIlroy         316.7
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
Rory McIlroy is one of the longest hitters in the game, with an average driving distance of 319.7 yards in 2018. He was also the longest hitter in 2017, averaging 316.7 yards.
# Who made the most money
money_ranking = df[['Year','Player Name','Money']].copy()
money_ranking.sort_values(by='Money', inplace=True, ascending=False)
print(money_ranking.head())
     Year     Player Name       Money
647  2015   Jordan Spieth  12030465.0
361  2017   Justin Thomas   9921560.0
303  2017   Jordan Spieth   9433033.0
729  2015       Jason Day   9403330.0
520  2016  Dustin Johnson   9365185.0
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
We can see that Jordan Spieth made the most money in a single year, earning a total of 12 million dollars in 2015.
#collapse_output
# Who made the most money each year
money_rank = money_ranking.groupby('Year')['Money'].max()
money_rank = pd.DataFrame(money_rank)

indexs = np.arange(2010, 2019)
names = []
for i in range(money_rank.shape[0]):
    temp = df.loc[df['Money'] == money_rank.iloc[i,0],'Player Name']
    names.append(str(temp.values[0]))

money_rank['Player Name'] = names
print(money_rank)
           Money     Player Name
Year                            
2010   4910477.0     Matt Kuchar
2011   6683214.0     Luke Donald
2012   8047952.0    Rory McIlroy
2013   8553439.0     Tiger Woods
2014   8280096.0    Rory McIlroy
2015  12030465.0   Jordan Spieth
2016   9365185.0  Dustin Johnson
2017   9921560.0   Justin Thomas
2018   8694821.0   Justin Thomas
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
With this table, we can examine the top earner in each year. Some of the most notable results are Jordan Spieth's 12-million-dollar season in 2015 and Justin Thomas earning the most money in both 2017 and 2018.
#collapse_output
# Plot the correlation matrix between variables
corr = df.corr()
sns.heatmap(corr,
            xticklabels=corr.columns.values,
            yticklabels=corr.columns.values,
            cmap='coolwarm')
df.corr()['Wins']
_____no_output_____
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
From the correlation matrix, we can observe that Money is highly correlated with Wins, along with the FedExCup Points. We can also observe that fairway percentage, year, and rounds are not correlated with Wins. 4. Machine Learning Model (Classification) To predict winners, I used multiple machine learning models to explore which models could accurately classify whether a player is going to win in a given year. To measure the models, I used the Receiver Operating Characteristic Area Under the Curve (ROC AUC). The ROC AUC tells us how capable the model is of distinguishing players with a win. In addition, as the data is skewed, with 83% of players having no wins in a given year, ROC AUC is a much better metric than the accuracy of the model.
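To see why accuracy alone is misleading here, a minimal sketch with synthetic labels (the 83/17 split mirrors the dataset's imbalance, but the labels themselves are made up): a degenerate "model" that always predicts "no win" looks accurate while being useless.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic labels with the same imbalance as the dataset: ~83% non-winners
y_true = np.array([0] * 83 + [1] * 17)

# A degenerate "model" that always predicts the majority class
y_pred = np.zeros_like(y_true)   # hard class predictions
y_score = np.zeros(len(y_true))  # constant scores for every player

print(accuracy_score(y_true, y_pred))   # 0.83 -- looks good, but...
print(roc_auc_score(y_true, y_score))   # 0.5 -- no better than chance
```

This is why the models below are compared on ROC AUC rather than raw accuracy.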
#collapse
# Importing the Machine Learning modules
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn.feature_selection import RFE
from sklearn.metrics import classification_report
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler
_____no_output_____
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
Preparing the Data for Classification We know from the calculation above that the data for wins is skewed. Even without machine learning, we know that approximately 83% of the players do not record a win. Therefore, we will be utilizing ROC AUC as the metric for these models.
# Adding the Winner column to determine if the player won that year or not
df['Winner'] = df['Wins'].apply(lambda x: 1 if x>0 else 0)

# New DataFrame
ml_df = df.copy()

# Y value for machine learning is the Winner column
target = df['Winner']

# Removing the columns Player Name, Wins, and Winner from the dataframe to avoid leakage
ml_df.drop(['Player Name','Wins','Winner'], axis=1, inplace=True)
print(ml_df.head())

## Logistic Regression Baseline
per_no_win = target.value_counts()[0] / (target.value_counts()[0] + target.value_counts()[1])
per_no_win = per_no_win.round(4)*100
print(str(per_no_win)+str('%'))

#collapse_show
# Function for the logistic regression
def log_reg(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 10)
    clf = LogisticRegression().fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print('Accuracy of Logistic regression classifier on training set: {:.2f}'
          .format(clf.score(X_train, y_train)))
    print('Accuracy of Logistic regression classifier on test set: {:.2f}'
          .format(clf.score(X_test, y_test)))
    cf_mat = confusion_matrix(y_test, y_pred)
    confusion = pd.DataFrame(data = cf_mat)
    print(confusion)
    print(classification_report(y_test, y_pred))

    # Returning the 5 important features
    # rfe = RFE(clf, 5)
    # rfe = rfe.fit(X, y)
    # print('Feature Importance')
    # print(X.columns[rfe.ranking_ == 1].values)

    print('ROC AUC Score: {:.2f}'.format(roc_auc_score(y_test, y_pred)))

#collapse_show
log_reg(ml_df, target)
Accuracy of Logistic regression classifier on training set: 0.90
Accuracy of Logistic regression classifier on test set: 0.91
     0   1
0  345   8
1   28  38
              precision    recall  f1-score   support

           0       0.92      0.98      0.95       353
           1       0.83      0.58      0.68        66

    accuracy                           0.91       419
   macro avg       0.88      0.78      0.81       419
weighted avg       0.91      0.91      0.91       419

ROC AUC Score: 0.78
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
From the logistic regression, we got an accuracy of 0.90 on the training set and 0.91 on the test set, which was surprisingly accurate for a first run. However, the ROC AUC score of 0.78 could be improved. Therefore, I decided to add more features as a way of possibly improving the model.
## Feature Engineering
# Adding Domain Features
ml_d = ml_df.copy()

# Top 10 / Money might give us a better understanding on how well they placed in the top 10
ml_d['Top10perMoney'] = ml_d['Top 10'] / ml_d['Money']

# Avg Distance / Fairway Percentage to give us a ratio that determines how accurate and far a player hits
ml_d['DistanceperFairway'] = ml_d['Avg Distance'] / ml_d['Fairway Percentage']

# Money / Rounds to see on average how much money they would make playing a round of golf
ml_d['MoneyperRound'] = ml_d['Money'] / ml_d['Rounds']

#collapse_show
log_reg(ml_d, target)

#collapse_show
# Adding Polynomial Features to the ml_df
mldf2 = ml_df.copy()
poly = PolynomialFeatures(2)
poly = poly.fit(mldf2)
poly_feature = poly.transform(mldf2)
print(poly_feature.shape)

# Creating a DataFrame with the polynomial features
poly_feature = pd.DataFrame(poly_feature, columns = poly.get_feature_names(ml_df.columns))
print(poly_feature.head())

#collapse_show
log_reg(poly_feature, target)
Accuracy of Logistic regression classifier on training set: 0.90
Accuracy of Logistic regression classifier on test set: 0.91
     0   1
0  346   7
1   32  34
              precision    recall  f1-score   support

           0       0.92      0.98      0.95       353
           1       0.83      0.52      0.64        66

    accuracy                           0.91       419
   macro avg       0.87      0.75      0.79       419
weighted avg       0.90      0.91      0.90       419

ROC AUC Score: 0.75
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
Feature engineering produced no improvement in the ROC AUC score. In fact, as I added more features, both the accuracy and the ROC AUC score decreased. This could signal that another machine learning algorithm might better predict winners.
#collapse_show
## Random Forest Model
def random_forest(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 10)
    clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print('Accuracy of Random Forest classifier on training set: {:.2f}'
          .format(clf.score(X_train, y_train)))
    print('Accuracy of Random Forest classifier on test set: {:.2f}'
          .format(clf.score(X_test, y_test)))
    cf_mat = confusion_matrix(y_test, y_pred)
    confusion = pd.DataFrame(data = cf_mat)
    print(confusion)
    print(classification_report(y_test, y_pred))

    # Returning the 5 important features
    rfe = RFE(clf, n_features_to_select=5)
    rfe = rfe.fit(X, y)
    print('Feature Importance')
    print(X.columns[rfe.ranking_ == 1].values)

    print('ROC AUC Score: {:.2f}'.format(roc_auc_score(y_test, y_pred)))

#collapse_show
random_forest(ml_df, target)

#collapse_show
random_forest(ml_d, target)

#collapse_show
random_forest(poly_feature, target)
Accuracy of Random Forest classifier on training set: 1.00
Accuracy of Random Forest classifier on test set: 0.94
     0   1
0  340  13
1   14  52
              precision    recall  f1-score   support

           0       0.96      0.96      0.96       353
           1       0.80      0.79      0.79        66

    accuracy                           0.94       419
   macro avg       0.88      0.88      0.88       419
weighted avg       0.94      0.94      0.94       419

Feature Importance
['Year Points' 'Average Putts Points' 'Average Scrambling Top 10'
 'Average Score Points' 'Points^2']
ROC AUC Score: 0.88
Apache-2.0
_notebooks/2021_04_28_PGA_Wins.ipynb
brennanashley/lambdalost
Monte Carlo Methods In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnv We begin by importing the necessary packages.
import sys
import gym
import numpy as np
from collections import defaultdict

from plot_utils import plot_blackjack_values, plot_policy
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
env = gym.make('Blackjack-v0')
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Each state is a 3-tuple of:

- the player's current sum $\in \{0, 1, \ldots, 31\}$,
- the dealer's face up card $\in \{1, \ldots, 10\}$, and
- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).

The agent has two potential actions:

```
    STICK = 0
    HIT = 1
```

Verify this by running the code cell below.
print(f"Observation space: \t{env.observation_space}")
print(f"Action space: \t\t{env.action_space}")
Observation space: 	Tuple(Discrete(32), Discrete(11), Discrete(2))
Action space: 		Discrete(2)
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
for i_episode in range(3):
    state = env.reset()
    while True:
        print(state)
        action = env.action_space.sample()
        state, reward, done, info = env.step(action)
        if done:
            print('End game! Reward: ', reward)
            print('You won :)\n') if reward > 0 else print('You lost :(\n')
            break
(19, 10, False)
End game! Reward:  1.0
You won :)

(14, 6, False)
(15, 6, False)
End game! Reward:  1.0
You won :)

(16, 3, False)
End game! Reward:  1.0
You won :)
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Part 1: MC Prediction In this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy.

The function accepts as **input**:

- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.

It returns as **output**:

- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
def generate_episode_from_limit_stochastic(bj_env):
    episode = []
    state = bj_env.reset()
    while True:
        probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
        action = np.random.choice(np.arange(2), p=probs)
        next_state, reward, done, info = bj_env.step(action)
        episode.append((state, action, reward))
        state = next_state
        if done:
            break
    return episode
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
for i in range(5):
    print(generate_episode_from_limit_stochastic(env))
[((18, 2, True), 0, 1.0)]
[((16, 5, False), 1, 0.0), ((18, 5, False), 1, -1.0)]
[((13, 5, False), 1, 0.0), ((17, 5, False), 1, -1.0)]
[((14, 4, False), 1, 0.0), ((17, 4, False), 1, -1.0)]
[((20, 10, False), 0, -1.0)]
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.

Your algorithm has four arguments:

- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `generate_episode`: This is a function that returns an episode of interaction.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:

- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
    # initialize empty dictionaries of arrays
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    R = defaultdict(lambda: np.zeros(env.action_space.n))

    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()

        episode = generate_episode(env)
        n = len(episode)

        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(n+1)])

        for i, state in enumerate(states):
            returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:-(i+1)])
            N[state][actions[i]] += 1

    # compute Q table
    for state in returns_sum.keys():
        for action in range(env.action_space.n):
            Q[state][action] = returns_sum[state][action] / N[state][action]

    return Q, returns_sum, N
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
# obtain the action-value function
Q, R, N = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)

# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
                 for k, v in Q.items())

# plot the state-value function
plot_blackjack_values(V_to_plot)
Episode 500000/500000.
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Part 2: MC Control In this section, you will write your own implementation of constant-$\alpha$ MC control.

Your algorithm has four arguments:

- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:

- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.

(_Feel free to define additional functions to help you to organize your code._)
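For reference, the update that gives constant-$\alpha$ MC control its name (standard notation, where $G_t$ is the discounted return following the visit to the state-action pair) is:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left( G_t - Q(S_t, A_t) \right)$$

Unlike the sample-average update of Part 1, the fixed step size $\alpha$ lets estimates from recent episodes gradually overwrite older ones, which is useful here because the policy being evaluated keeps changing.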
def generate_episode_from_Q(env, Q, epsilon, n):
    """ generates an episode following the epsilon-greedy policy"""
    episode = []
    state = env.reset()
    while True:
        if state in Q:
            action = np.random.choice(np.arange(n), p=get_props(Q[state], epsilon, n))
        else:
            action = env.action_space.sample()
        next_state, reward, done, _ = env.step(action)
        episode.append((state, action, reward))
        state = next_state
        if done:
            break
    return episode

def get_props(Q_s, epsilon, n):
    policy_s = np.ones(n) * epsilon / n
    best_a = np.argmax(Q_s)
    policy_s[best_a] = 1 - epsilon + (epsilon / n)
    return policy_s

def update_Q(episode, Q, alpha, gamma):
    n = len(episode)
    states, actions, rewards = zip(*episode)
    discounts = np.array([gamma**i for i in range(n+1)])
    for i, state in enumerate(states):
        R = sum(rewards[i:] * discounts[:-(1+i)])
        Q[state][actions[i]] = Q[state][actions[i]] + alpha * (R - Q[state][actions[i]])
    return Q

def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
    nA = env.action_space.n

    # initialize empty dictionary of arrays
    Q = defaultdict(lambda: np.zeros(nA))
    epsilon = eps_start

    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()

        epsilon = max(eps_min, epsilon * eps_decay)
        episode = generate_episode_from_Q(env, Q, epsilon, nA)
        Q = update_Q(episode, Q, alpha, gamma)

    policy = dict((s, np.argmax(v)) for s, v in Q.items())
    return policy, Q
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
Episode 500000/500000.
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Next, we plot the corresponding state-value function.
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())

# plot the state-value function
plot_blackjack_values(V)
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Finally, we visualize the policy that is estimated to be optimal.
# plot the policy
plot_policy(policy)
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
jbdekker/deep-reinforcement-learning
Loading the premium students' data
df = pd.read_csv("../data/processed/premium_students.csv",parse_dates=[1,2],index_col=[0])
print(df.shape)
df.head()
(6260, 2)
MIT
notebooks/1_0_EDA_BASE_A.ipynb
teoria/PD_datascience
--- New helper columns
df['diffDate'] = (df.SubscriptionDate - df.RegisteredDate)
df['diffDays'] = [ item.days for item in df['diffDate']]

df['register_time'] = df.RegisteredDate.map( lambda x : int(x.strftime("%H")) )
df['register_time_AM_PM'] = df.register_time.map( lambda x : 1 if x>=12 else 0)
df['register_num_week'] = df.RegisteredDate.map( lambda x : int(x.strftime("%V")) )
df['register_week_day'] = df.RegisteredDate.map( lambda x : int(x.weekday()) )
df['register_month'] = df.RegisteredDate.map( lambda x : int(x.strftime('%m')) )

df['subscription_time'] = df.SubscriptionDate.map( lambda x : int(x.strftime("%H") ))
df['subscription_time_AM_PM'] = df.subscription_time.map( lambda x : 1 if x>=12 else 0)
df['subscription_num_week'] = df.SubscriptionDate.map( lambda x : int(x.strftime("%V")) )
df['subscription_week_day'] = df.SubscriptionDate.map( lambda x : int(x.weekday()) )
df['subscription_month'] = df.SubscriptionDate.map( lambda x : int(x.strftime('%m')) )

df.tail()
_____no_output_____
MIT
notebooks/1_0_EDA_BASE_A.ipynb
teoria/PD_datascience
--- Checking the distributions
df.register_time.hist()
df.subscription_time.hist()
df.register_time_AM_PM.value_counts()
df.subscription_time_AM_PM.value_counts()
df.subscription_week_day.value_counts()
df.diffDays.hist()
df.diffDays.quantile([.25,.5,.75,.95])
_____no_output_____
MIT
notebooks/1_0_EDA_BASE_A.ipynb
teoria/PD_datascience
Splitting the data into 2 periods.
lt_50 = df.loc[(df.diffDays <50) & (df.diffDays >3)]
lt_50.diffDays.hist()
lt_50.diffDays.value_counts()
lt_50.diffDays.quantile([.25,.5,.75,.95])

range_0_3 = df.loc[(df.diffDays < 3)]
range_3_18 = df.loc[(df.diffDays >= 3)&(df.diffDays < 18)]
range_6_11 = df.loc[(df.diffDays >= 6) & (df.diffDays < 11)]
range_11_18 = df.loc[(df.diffDays >= 11) & (df.diffDays < 18)]
range_18_32 = df.loc[(df.diffDays >= 18 )& (df.diffDays <= 32)]
range_32 = df.loc[(df.diffDays >=32)]

total_subs = df.shape[0]
(
    round(range_0_3.shape[0] / total_subs,2),
    round(range_3_18.shape[0] / total_subs,2),
    round(range_18_32.shape[0] / total_subs,2),
    round(range_32.shape[0] / total_subs,2)
)

gte_30 = df.loc[df.diffDays >=32]
gte_30.diffDays.hist()
gte_30.diffDays.value_counts()
gte_30.shape
gte_30.diffDays.quantile([.25,.5,.75,.95])

range_32_140 = df.loc[(df.diffDays > 32)&(df.diffDays <=140)]
range_140_168 = df.loc[(df.diffDays > 140)&(df.diffDays <=168)]
range_168_188 = df.loc[(df.diffDays > 168)&(df.diffDays <=188)]
range_188 = df.loc[(df.diffDays > 188)]

total_subs_gte_32 = gte_30.shape[0]
(
    round(range_32_140.shape[0] / total_subs,2),
    round(range_140_168.shape[0] / total_subs,2),
    round(range_168_188.shape[0] / total_subs,2),
    round(range_188.shape[0] / total_subs,2)
)
(
    round(range_32_140.shape[0] / total_subs_gte_32,2),
    round(range_140_168.shape[0] / total_subs_gte_32,2),
    round(range_168_188.shape[0] / total_subs_gte_32,2),
    round(range_188.shape[0] / total_subs_gte_32,2)
)
_____no_output_____
MIT
notebooks/1_0_EDA_BASE_A.ipynb
teoria/PD_datascience
NumPy Operations Arithmetic You can easily perform array with array arithmetic, or scalar with array arithmetic. Let's see some examples:
import numpy as np
arr = np.arange(0,10)

arr + arr
arr * arr
arr - arr
arr**3
_____no_output_____
Apache-2.0
Numpy/Numpy Operations.ipynb
aaavinash85/100-Days-of-ML-
Universal Array Functions
# Taking Square Roots
np.sqrt(arr)

# Calculating exponential (e^)
np.exp(arr)

np.max(arr)  # same as arr.max()

np.sin(arr)

np.log(arr)
<ipython-input-3-a67b4ae04e95>:1: RuntimeWarning: divide by zero encountered in log
  np.log(arr)
Apache-2.0
Numpy/Numpy Operations.ipynb
aaavinash85/100-Days-of-ML-
A Simple BERT Implementation Using a pretrained model, we will predict part of a sentence and judge whether two sentences are consecutive. Installing the libraries We install PyTorch-Transformers and the other required libraries.
!pip install folium==0.2.1
!pip install urllib3==1.25.11
!pip install transformers==4.13.0
_____no_output_____
MIT
section_2/03_simple_bert.ipynb
derwind/bert_nlp
Predicting part of a sentence We mask some of the words in a sentence and predict them using the BERT model.
import torch
from transformers import BertForMaskedLM
from transformers import BertTokenizer

text = "[CLS] I played baseball with my friends at school yesterday [SEP]"
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
words = tokenizer.tokenize(text)
print(words)
_____no_output_____
MIT
section_2/03_simple_bert.ipynb
derwind/bert_nlp
We mask part of the sentence.
msk_idx = 3
words[msk_idx] = "[MASK]"  # replace the word with [MASK]
print(words)
_____no_output_____
MIT
section_2/03_simple_bert.ipynb
derwind/bert_nlp
Convert the words into their corresponding indices.
word_ids = tokenizer.convert_tokens_to_ids(words) # convert words to indices word_tensor = torch.tensor([word_ids]) # convert to a tensor print(word_tensor)
_____no_output_____
MIT
section_2/03_simple_bert.ipynb
derwind/bert_nlp
Run the prediction using the BERT model.
msk_model = BertForMaskedLM.from_pretrained("bert-base-uncased") msk_model.cuda() # for GPU msk_model.eval() x = word_tensor.cuda() # for GPU y = msk_model(x) # prediction result = y[0] print(result.size()) # shape of the result _, max_ids = torch.topk(result[0][msk_idx], k=5) # the 5 largest values result_words = tokenizer.convert_ids_to_tokens(max_ids.tolist()) # convert indices to words print(result_words)
_____no_output_____
MIT
section_2/03_simple_bert.ipynb
derwind/bert_nlp
Judging whether sentences are consecutive Using the BERT model, we judge whether two sentences are consecutive. The function `show_continuity` below evaluates the continuity of two sentences and prints the result.
from transformers import BertForNextSentencePrediction def show_continuity(text, seg_ids): words = tokenizer.tokenize(text) word_ids = tokenizer.convert_tokens_to_ids(words) # convert words to indices word_tensor = torch.tensor([word_ids]) # convert to a tensor seg_tensor = torch.tensor([seg_ids]) nsp_model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased') nsp_model.cuda() # for GPU nsp_model.eval() x = word_tensor.cuda() # for GPU s = seg_tensor.cuda() # for GPU y = nsp_model(x, token_type_ids=s) # prediction result = torch.softmax(y[0], dim=1) print(result) # convert to probabilities with softmax print(str(result[0][0].item()*100) + "% probability that the sentences are consecutive.")
_____no_output_____
MIT
section_2/03_simple_bert.ipynb
derwind/bert_nlp
We give the `show_continuity` function two sentences that follow each other naturally.
text = "[CLS] What is baseball ? [SEP] It is a game of hitting the ball with the bat [SEP]" seg_ids = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ,1, 1] # 0: words of the first sentence, 1: words of the second sentence show_continuity(text, seg_ids)
_____no_output_____
MIT
section_2/03_simple_bert.ipynb
derwind/bert_nlp
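In the cells above and below, the segment ids are written out by hand. They can also be derived from the token list by locating the first `[SEP]`; the helper below (`make_seg_ids` is a name invented here for illustration, not part of the tutorial) sketches that:

```python
def make_seg_ids(words):
    # 0 for tokens up to and including the first [SEP], 1 for the rest
    first_sep = words.index("[SEP]")
    return [0 if i <= first_sep else 1 for i in range(len(words))]

words = ["[CLS]", "what", "is", "baseball", "?", "[SEP]",
         "it", "is", "a", "game", "[SEP]"]
print(make_seg_ids(words))  # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```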
We give the `show_continuity` function two sentences that do not follow each other naturally.
text = "[CLS] What is baseball ? [SEP] This food is made with flour and milk [SEP]" seg_ids = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1] # 0: words of the first sentence, 1: words of the second sentence show_continuity(text, seg_ids)
_____no_output_____
MIT
section_2/03_simple_bert.ipynb
derwind/bert_nlp
Binary Search or Bust> Binary search is useful for searching, but its implementation often leaves us searching for edge cases- toc: true - badges: true- comments: true- categories: [data structures & algorithms, coding interviews, searching]- image: images/binary_search_gif.gif Why should you care?Binary search is useful for searching through a set of values (which typically are sorted) efficiently. At each step, it reduces the search space by half, thereby running in $O(log(n))$ complexity. While it sounds simple enough to understand, it is deceptively tricky to implement and use in problems. Over the next few sections, let's take a look at binary search and it can be applied to some commonly encountered interview problems. A Recipe for Binary SearchingHow does binary search reduce the search space by half? It leverages the fact that the input is sorted (_most of the time_) and compares the middle value of the search space at any step with the target value that we're searching for. If the middle value is smaller than the target, then we know that the target can only lie to its right, thus eliminating all the values to the left of the middle value and vice versa. So what information do we need to implement binary search?1. The left and right ends of the search space 2. The target value we're searching for3. What to store at each step if anyHere's a nice video which walks through the binary search algorithm: > youtube: https://youtu.be/P3YID7liBug Next, let's look at an implementation of vanilla binary search.
#hide from typing import List, Dict, Tuple def binary_search(nums: List[int], target: int) -> int: """Vanilla Binary Search. Given a sorted list of integers and a target value, find the index of the target value in the list. If not present, return -1. """ # Left and right boundaries of the search space left, right = 0, len(nums) - 1 while left <= right: # Why not (left + right) // 2 ? # Hint: Doesn't matter for Python middle = left + (right - left) // 2 # Found the target, return the index if nums[middle] == target: return middle # The middle value is less than the # target, so look to the right elif nums[middle] < target: left = middle + 1 # The middle value is greater than the # target, so look to the left else: right = middle - 1 return -1 # Target not found
_____no_output_____
Apache-2.0
_notebooks/2022-05-03-binary-search-or-bust.ipynb
boolean-pandit/non-faangable-tokens
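As a quick sanity check on the midpoint computation used above: for non-negative integer bounds, `left + (right - left) // 2` gives exactly the same index as `(left + right) // 2`; the subtraction form only matters in languages where `left + right` can overflow.

```python
# verify the two midpoint formulas agree on a grid of bounds
for left in range(50):
    for right in range(left, 50):
        assert left + (right - left) // 2 == (left + right) // 2
print("both formulas give the same midpoint")
```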
Here're a few examples of running our binary search implementation on a list and target values
#hide_input nums = [1,4,9,54,100,123] targets = [4, 100, 92] for val in targets: print(f"Result of searching for {val} in {nums} : \ {binary_search(nums, val)}\n")
Result of searching for 4 in [1, 4, 9, 54, 100, 123] : 1 Result of searching for 100 in [1, 4, 9, 54, 100, 123] : 4 Result of searching for 92 in [1, 4, 9, 54, 100, 123] : -1
Apache-2.0
_notebooks/2022-05-03-binary-search-or-bust.ipynb
boolean-pandit/non-faangable-tokens
> Tip: Using the approach middle = left + (right - left) // 2 helps avoid overflow. While this isn&39;t a concern in Python, it becomes a tricky issue to debug in other programming languages such as C++. For more on overflow, check out this [article](https://ai.googleblog.com/2006/06/extra-extra-read-all-about-it-nearly.html). Before we look at some problems that can be solved using binary search, let's run a quick comparison of linear search and binary search on some large input.
def linear_search(nums: List[int], target: int) -> int: """Linear Search. Given a list of integers and a target value, find the index of the target value in the list. If not present, return -1. """ for idx, elem in enumerate(nums): # Found the target value if elem == target: return idx return -1 # Target not found #hide n = 1000000 large_nums = range(1, n + 1) target = 99999
_____no_output_____
Apache-2.0
_notebooks/2022-05-03-binary-search-or-bust.ipynb
boolean-pandit/non-faangable-tokens
Let's see the time it takes linear search and binary search to find $99999$ in a sorted list of numbers from $[1, 1000000]$ - Linear Search
#hide_input %timeit linear_search(large_nums, target)
5.19 ms ± 26.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Apache-2.0
_notebooks/2022-05-03-binary-search-or-bust.ipynb
boolean-pandit/non-faangable-tokens
- Binary Search
#hide_input %timeit binary_search(large_nums, target)
6.05 µs ± 46.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Apache-2.0
_notebooks/2022-05-03-binary-search-or-bust.ipynb
boolean-pandit/non-faangable-tokens
Hopefully, that drives the point home :wink:. Naïve Binary Search ProblemsHere's a list of problems that can be solved using vanilla binary search (or slightly modifying it). Anytime you see a problem statement which goes something like _"Given a sorted list.."_ or _"Find the position of an element"_, think of using binary search. You can also consider **sorting** the input in case it is an unordered collection of items to reduce it to a binary search problem. Note that this list is by no means exhaustive, but is a good starting point to practice binary search:- [Search Insert Position](https://leetcode.com/problems/search-insert-position/)- [Find the Square Root of x](https://leetcode.com/problems/sqrtx/)- [Find First and Last Position of Element in Sorted Array](https://leetcode.com/problems/find-first-and-last-position-of-element-in-sorted-array/)- [Search in a Rotated Sorted Array](https://leetcode.com/problems/search-in-rotated-sorted-array/)In the problems above, we can either directly apply binary search or adapt it slightly to solve the problem. For example, take the square root problem. We know that the square root of a positive number $n$ has to lie between $[1, n / 2]$. This gives us the bounds for the search space. Applying binary search over this space allows us to find a good approximation of the square root. See the implementation below for details:
def find_square_root(n: int) -> int: """Integer square root. Given a positive integer, return its square root. """ left, right = 1, n // 2 + 1 while left <= right: middle = left + (right - left) // 2 if middle * middle == n: return middle # Found an exact match elif middle * middle < n: left = middle + 1 # Go right else: right = middle - 1 # Go left return right # This is the closest value to the actual square root #hide_input nums = [1,4,8,33,100] for val in nums: print(f"Square root of {val} is: {find_square_root(val)}\n")
Square root of 1 is: 1 Square root of 4 is: 2 Square root of 8 is: 2 Square root of 33 is: 5 Square root of 100 is: 10
Apache-2.0
_notebooks/2022-05-03-binary-search-or-bust.ipynb
boolean-pandit/non-faangable-tokens
**INITIALIZATION:**- I use these three lines of code at the top of each of my notebooks because they help prevent problems when reloading the same project. The third line makes the visualizations render within the notebook.
#@ INITIALIZATION: %reload_ext autoreload %autoreload 2 %matplotlib inline
_____no_output_____
MIT
06. VGGNet Architecture/Mini VGGNet.ipynb
ThinamXx/ComputerVision
**LIBRARIES AND DEPENDENCIES:**- I have imported all the libraries and dependencies required for the project in one cell.
#@ IMPORTING NECESSARY LIBRARIES AND DEPENDENCIES: from keras.models import Sequential from keras.layers import BatchNormalization from keras.layers.convolutional import Conv2D from keras.layers.convolutional import MaxPooling2D from keras.layers.core import Activation from keras.layers.core import Flatten from keras.layers.core import Dense, Dropout from keras import backend as K from tensorflow.keras.optimizers import SGD from tensorflow.keras.datasets import cifar10 from keras.callbacks import LearningRateScheduler from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report import matplotlib.pyplot as plt import numpy as np
_____no_output_____
MIT
06. VGGNet Architecture/Mini VGGNet.ipynb
ThinamXx/ComputerVision
**VGG ARCHITECTURE:**- I will define the build method of the Mini VGGNet architecture below. It requires four parameters: the width of the input image, the height of the input image, the depth of the image, and the number of class labels in the classification task. The Sequential class, initialized below, is the building block of sequential networks, stacking one layer on top of another. Batch Normalization operates over the channels, so in order to apply BN, we need to know which axis to normalize over.
#@ DEFINING VGGNET ARCHITECTURE: class MiniVGGNet: # Defining VGG Network. @staticmethod def build(width, height, depth, classes): # Defining Build Method. model = Sequential() # Initializing Sequential Model. inputShape = (width, height, depth) # Initializing Input Shape. chanDim = -1 # Index of Channel Dimension. if K.image_data_format() == "channels_first": inputShape = (depth, width, height) # Initializing Input Shape. chanDim = 1 # Index of Channel Dimension. model.add(Conv2D(32, (3, 3), padding='same', input_shape=inputShape)) # Adding Convolutional Layer. model.add(Activation("relu")) # Adding RELU Activation Function. model.add(BatchNormalization(axis=chanDim)) # Adding Batch Normalization Layer. model.add(Conv2D(32, (3, 3), padding='same')) # Adding Convolutional Layer. model.add(Activation("relu")) # Adding RELU Activation Function. model.add(BatchNormalization(axis=chanDim)) # Adding Batch Normalization Layer. model.add(MaxPooling2D(pool_size=(2, 2))) # Adding Max Pooling Layer. model.add(Dropout(0.25)) # Adding Dropout Layer. model.add(Conv2D(64, (3, 3), padding="same")) # Adding Convolutional Layer. model.add(Activation("relu")) # Adding RELU Activation Function. model.add(BatchNormalization(axis=chanDim)) # Adding Batch Normalization Layer. model.add(Conv2D(64, (3, 3), padding='same')) # Adding Convolutional Layer. model.add(Activation("relu")) # Adding RELU Activation Function. model.add(BatchNormalization(axis=chanDim)) # Adding Batch Normalization Layer. model.add(MaxPooling2D(pool_size=(2, 2))) # Adding Max Pooling Layer. model.add(Dropout(0.25)) # Adding Dropout Layer. model.add(Flatten()) # Adding Flatten Layer. model.add(Dense(512)) # Adding FC Dense Layer. model.add(Activation("relu")) # Adding Activation Layer. model.add(BatchNormalization()) # Adding Batch Normalization Layer. model.add(Dropout(0.5)) # Adding Dropout Layer. model.add(Dense(classes)) # Adding Dense Output Layer. model.add(Activation("softmax")) # Adding Softmax Layer. 
return model #@ CUSTOM LEARNING RATE SCHEDULER: def step_decay(epoch): # Definig step decay function. initAlpha = 0.01 # Initializing initial LR. factor = 0.25 # Initializing drop factor. dropEvery = 5 # Initializing epochs to drop. alpha = initAlpha*(factor ** np.floor((1 + epoch) / dropEvery)) return float(alpha)
_____no_output_____
MIT
06. VGGNet Architecture/Mini VGGNet.ipynb
ThinamXx/ComputerVision
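To see what the `step_decay` callback above actually feeds the optimizer, the same formula can be evaluated by hand (constants copied from the notebook: initial LR 0.01, drop factor 0.25, drop every 5 epochs):

```python
import math

def step_decay(epoch, init_alpha=0.01, factor=0.25, drop_every=5):
    # same formula as the LearningRateScheduler callback above
    return init_alpha * (factor ** math.floor((1 + epoch) / drop_every))

for epoch in [0, 4, 9, 14]:
    print(epoch, step_decay(epoch))
# epoch indices 0-3 train at 0.01, 4-8 at 0.0025, 9-13 at 0.000625, and so on
```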
**VGGNET ON CIFAR10**
#@ GETTING THE DATASET: ((trainX, trainY), (testX, testY)) = cifar10.load_data() # Loading Dataset. trainX = trainX.astype("float") / 255.0 # Normalizing Dataset. testX = testX.astype("float") / 255.0 # Normalizing Dataset. #@ PREPARING THE DATASET: lb = LabelBinarizer() # Initializing LabelBinarizer. trainY = lb.fit_transform(trainY) # Converting Labels to Vectors. testY = lb.transform(testY) # Converting Labels to Vectors. labelNames = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"] # Initializing LabelNames. #@ INITIALIZING OPTIMIZER AND MODEL: callbacks = [LearningRateScheduler(step_decay)] # Initializing Callbacks. opt = SGD(0.01, nesterov=True, momentum=0.9) # Initializing SGD Optimizer. model = MiniVGGNet.build(width=32, height=32, depth=3, classes=10) # Initializing VGGNet Architecture. model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) # Compiling VGGNet Model. H = model.fit(trainX, trainY, validation_data=(testX, testY), batch_size=64, epochs=40, verbose=1, callbacks=callbacks) # Training VGGNet Model.
Epoch 1/40 782/782 [==============================] - 29s 21ms/step - loss: 1.6339 - accuracy: 0.4555 - val_loss: 1.1509 - val_accuracy: 0.5970 - lr: 0.0100 Epoch 2/40 782/782 [==============================] - 16s 21ms/step - loss: 1.1813 - accuracy: 0.5932 - val_loss: 0.9222 - val_accuracy: 0.6733 - lr: 0.0100 Epoch 3/40 782/782 [==============================] - 16s 21ms/step - loss: 0.9908 - accuracy: 0.6567 - val_loss: 0.8341 - val_accuracy: 0.7159 - lr: 0.0100 Epoch 4/40 782/782 [==============================] - 16s 21ms/step - loss: 0.8854 - accuracy: 0.6945 - val_loss: 0.8282 - val_accuracy: 0.7167 - lr: 0.0100 Epoch 5/40 782/782 [==============================] - 16s 21ms/step - loss: 0.7380 - accuracy: 0.7421 - val_loss: 0.6881 - val_accuracy: 0.7598 - lr: 0.0025 Epoch 6/40 782/782 [==============================] - 17s 21ms/step - loss: 0.6845 - accuracy: 0.7586 - val_loss: 0.6600 - val_accuracy: 0.7711 - lr: 0.0025 Epoch 7/40 782/782 [==============================] - 17s 21ms/step - loss: 0.6628 - accuracy: 0.7683 - val_loss: 0.6435 - val_accuracy: 0.7744 - lr: 0.0025 Epoch 8/40 782/782 [==============================] - 16s 21ms/step - loss: 0.6391 - accuracy: 0.7755 - val_loss: 0.6362 - val_accuracy: 0.7784 - lr: 0.0025 Epoch 9/40 782/782 [==============================] - 16s 21ms/step - loss: 0.6204 - accuracy: 0.7830 - val_loss: 0.6499 - val_accuracy: 0.7744 - lr: 0.0025 Epoch 10/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5912 - accuracy: 0.7909 - val_loss: 0.6161 - val_accuracy: 0.7856 - lr: 6.2500e-04 Epoch 11/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5812 - accuracy: 0.7936 - val_loss: 0.6054 - val_accuracy: 0.7879 - lr: 6.2500e-04 Epoch 12/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5730 - accuracy: 0.7978 - val_loss: 0.5994 - val_accuracy: 0.7907 - lr: 6.2500e-04 Epoch 13/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5698 - accuracy: 
0.7974 - val_loss: 0.6013 - val_accuracy: 0.7882 - lr: 6.2500e-04 Epoch 14/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5623 - accuracy: 0.8009 - val_loss: 0.5973 - val_accuracy: 0.7910 - lr: 6.2500e-04 Epoch 15/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5496 - accuracy: 0.8064 - val_loss: 0.5961 - val_accuracy: 0.7905 - lr: 1.5625e-04 Epoch 16/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5484 - accuracy: 0.8048 - val_loss: 0.5937 - val_accuracy: 0.7914 - lr: 1.5625e-04 Epoch 17/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5573 - accuracy: 0.8037 - val_loss: 0.5950 - val_accuracy: 0.7902 - lr: 1.5625e-04 Epoch 18/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5477 - accuracy: 0.8062 - val_loss: 0.5927 - val_accuracy: 0.7907 - lr: 1.5625e-04 Epoch 19/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5446 - accuracy: 0.8073 - val_loss: 0.5904 - val_accuracy: 0.7923 - lr: 1.5625e-04 Epoch 20/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5391 - accuracy: 0.8104 - val_loss: 0.5926 - val_accuracy: 0.7920 - lr: 3.9062e-05 Epoch 21/40 782/782 [==============================] - 17s 21ms/step - loss: 0.5419 - accuracy: 0.8080 - val_loss: 0.5915 - val_accuracy: 0.7929 - lr: 3.9062e-05 Epoch 22/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5438 - accuracy: 0.8099 - val_loss: 0.5909 - val_accuracy: 0.7925 - lr: 3.9062e-05 Epoch 23/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5467 - accuracy: 0.8075 - val_loss: 0.5914 - val_accuracy: 0.7919 - lr: 3.9062e-05 Epoch 24/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5376 - accuracy: 0.8103 - val_loss: 0.5918 - val_accuracy: 0.7920 - lr: 3.9062e-05 Epoch 25/40 782/782 [==============================] - 17s 21ms/step - loss: 0.5410 - accuracy: 0.8085 - val_loss: 0.5923 - val_accuracy: 0.7917 - lr: 
9.7656e-06 Epoch 26/40 782/782 [==============================] - 17s 21ms/step - loss: 0.5406 - accuracy: 0.8084 - val_loss: 0.5910 - val_accuracy: 0.7915 - lr: 9.7656e-06 Epoch 27/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5384 - accuracy: 0.8097 - val_loss: 0.5901 - val_accuracy: 0.7919 - lr: 9.7656e-06 Epoch 28/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5431 - accuracy: 0.8089 - val_loss: 0.5915 - val_accuracy: 0.7927 - lr: 9.7656e-06 Epoch 29/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5417 - accuracy: 0.8095 - val_loss: 0.5921 - val_accuracy: 0.7925 - lr: 9.7656e-06 Epoch 30/40 782/782 [==============================] - 17s 21ms/step - loss: 0.5385 - accuracy: 0.8108 - val_loss: 0.5900 - val_accuracy: 0.7926 - lr: 2.4414e-06 Epoch 31/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5451 - accuracy: 0.8073 - val_loss: 0.5910 - val_accuracy: 0.7923 - lr: 2.4414e-06 Epoch 32/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5402 - accuracy: 0.8103 - val_loss: 0.5899 - val_accuracy: 0.7925 - lr: 2.4414e-06 Epoch 33/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5405 - accuracy: 0.8091 - val_loss: 0.5909 - val_accuracy: 0.7928 - lr: 2.4414e-06 Epoch 34/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5427 - accuracy: 0.8091 - val_loss: 0.5914 - val_accuracy: 0.7921 - lr: 2.4414e-06 Epoch 35/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5416 - accuracy: 0.8105 - val_loss: 0.5906 - val_accuracy: 0.7928 - lr: 6.1035e-07 Epoch 36/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5375 - accuracy: 0.8109 - val_loss: 0.5905 - val_accuracy: 0.7927 - lr: 6.1035e-07 Epoch 37/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5372 - accuracy: 0.8092 - val_loss: 0.5900 - val_accuracy: 0.7923 - lr: 6.1035e-07 Epoch 38/40 782/782 
[==============================] - 16s 21ms/step - loss: 0.5438 - accuracy: 0.8090 - val_loss: 0.5907 - val_accuracy: 0.7927 - lr: 6.1035e-07 Epoch 39/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5424 - accuracy: 0.8097 - val_loss: 0.5906 - val_accuracy: 0.7922 - lr: 6.1035e-07 Epoch 40/40 782/782 [==============================] - 16s 21ms/step - loss: 0.5385 - accuracy: 0.8116 - val_loss: 0.5909 - val_accuracy: 0.7928 - lr: 1.5259e-07
MIT
06. VGGNet Architecture/Mini VGGNet.ipynb
ThinamXx/ComputerVision
**MODEL EVALUATION:**
#@ INITIALIZING MODEL EVALUATION: predictions = model.predict(testX, batch_size=64) # Getting Model Predictions. print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=labelNames)) # Inspecting Classification Report. #@ INSPECTING TRAINING LOSS AND ACCURACY: plt.style.use("ggplot") plt.figure() plt.plot(np.arange(0, 40), H.history["loss"], label="train_loss") plt.plot(np.arange(0, 40), H.history["val_loss"], label="val_loss") plt.plot(np.arange(0, 40), H.history["accuracy"], label="train_acc") plt.plot(np.arange(0, 40), H.history["val_accuracy"], label="val_acc") plt.title("Training Loss and Accuracy") plt.xlabel("Epoch") plt.ylabel("Loss/Accuracy") plt.legend() plt.show();
_____no_output_____
MIT
06. VGGNet Architecture/Mini VGGNet.ipynb
ThinamXx/ComputerVision
Which celebrity do I look like? Collecting photos Cropping the face region Extracting embeddings of the face region Comparing distances with celebrity faces Visualization Retrospective 1. Collecting photos 2. Cropping the face region Crop the face region from each image Convert it to a PIL image using Image.fromarray, for use in the visualization later
# importing the required modules import os import re import glob import pickle import pandas as pd import matplotlib.pyplot as plt import matplotlib.image as img import face_recognition %matplotlib inline from PIL import Image import numpy as np dir_path = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data' file_list = os.listdir(dir_path) print(len(file_list)) # loading the image files print('Number of celebrity image files:', len(file_list) - 5) # counting the photos left after subtracting my own added photos # checking the image file list print ("File list:\n{}".format(file_list)) # checking some of the image files # Set figsize here fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(24,10)) # flatten axes for easy iterating for i, ax in enumerate(axes.flatten()): image = img.imread(dir_path+'/'+file_list[i]) ax.imshow(image) plt.show() fig.tight_layout() # a function that crops only the face region, given an image file path def get_cropped_face(image_file): image = face_recognition.load_image_file(image_file) face_locations = face_recognition.face_locations(image) a, b, c, d = face_locations[0] cropped_face = image[a:c,d:b,:] return cropped_face # checking that the face region is cropped correctly image_path = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/이원재_02.jpg' cropped_face = get_cropped_face(image_path) plt.imshow(cropped_face)
_____no_output_____
MIT
celebrity.ipynb
peter1505/AIFFEL
Step 3. Extracting embeddings of the face region
# a function that computes the face embedding vector from a face region def get_face_embedding(face): return face_recognition.face_encodings(face, model='cnn') # a function that returns embedding_dict given a directory path def get_face_embedding_dict(dir_path): file_list = os.listdir(dir_path) embedding_dict = {} for file in file_list: try: img_path = os.path.join(dir_path, file) face = get_cropped_face(img_path) embedding = get_face_embedding(face) if len(embedding) > 0: # if the face region is not detected properly, len(embedding)==0 can occur # os.path.splitext(file)[0] holds the image file name with its extension removed embedding_dict[os.path.splitext(file)[0]] = embedding[0] # store each image file's embedding: key=person name, value=embedding vector # os.path.splitext(file)[0] extracts only the file name without its extension # embedding[0] is the element value we want to store except: continue return embedding_dict embedding_dict = get_face_embedding_dict(dir_path)
_____no_output_____
MIT
celebrity.ipynb
peter1505/AIFFEL
Step 4. Comparing with the collected celebrities
# a function that computes the distance between two images def get_distance(name1, name2): return np.linalg.norm(embedding_dict[name1]-embedding_dict[name2], ord=2) # let's check the distance between my own photos print('Distance between my own photos?:', get_distance('이원재_01', '이원재_02')) # create a function that compares the distance between name1 and name2, where name1 is fixed in advance and name2 is passed as an argument at call time def get_sort_key_func(name1): def get_distance_from_name1(name2): return get_distance(name1, name2) return get_distance_from_name1 # a function that prints a Top-5 list with look-alike rank, name, and embedding distance def get_nearest_face(name, top=5): sort_key_func = get_sort_key_func(name) sorted_faces = sorted(embedding_dict.items(), key=lambda x:sort_key_func(x[0])) rank_cnt = 1 # counts the rank pass_cnt = 1 # counts skipped entries (my own photos) end = 0 # counter used to stop after printing 5 look-alikes for i in range(top+15): rank_cnt += 1 if sorted_faces[i][0].find('이원재_02') == 0: # skip entries whose file name starts with my own photo's name pass_cnt += 1 continue if sorted_faces[i]: print('Rank {} : name({}), distance({})'.format(rank_cnt - pass_cnt, sorted_faces[i][0], sort_key_func(sorted_faces[i][0]))) end += 1 if end == 5: # once end reaches 5, five celebrities have been printed, so stop break # Who looks most like '이원재_01'? get_nearest_face('이원재_01') # Who looks most like '이원재_02'? get_nearest_face('이원재_02')
Rank 1 : name(이원재_01), distance(0.27525162596989655) Rank 2 : name(euPhemia), distance(0.38568278214648233) Rank 3 : name(공명), distance(0.445581489047543) Rank 4 : name(김동완), distance(0.44765017085662295) Rank 5 : name(강성필), distance(0.4536061116328271)
MIT
celebrity.ipynb
peter1505/AIFFEL
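`get_distance` above is simply the L2 (Euclidean) norm of the difference between two embedding vectors. A pure-Python sketch of the same computation on toy 3-d vectors (illustration only, not real face embeddings):

```python
import math

def l2_distance(v1, v2):
    # equivalent to np.linalg.norm(np.array(v1) - np.array(v2), ord=2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

print(l2_distance([0.0, 0.0, 0.0], [3.0, 4.0, 0.0]))  # 5.0
```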
Step 5. Trying out various fun visualizations
# setting the photo paths mypicture1 = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/이원재_01.jpg' mypicture2 = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/이원재_02.jpg' mc= os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/MC몽.jpg' gahee = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/가희.jpg' seven = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/SE7EN.jpg' gam = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/감우성.jpg' gang = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/강경준.jpg' gyung = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/강경현.jpg' gi = os.getenv('HOME')+'/aiffel/EXP_07_face_embedding/data/강기영.jpg' # let's keep the cropped faces a1 = get_cropped_face(mypicture1) a2 = get_cropped_face(mypicture2) b1 = get_cropped_face(mc) b2 = get_cropped_face(gahee) b3 = get_cropped_face(gam) plt.figure(figsize=(10,8)) plt.subplot(231) plt.imshow(a1) plt.axis('off') plt.title('1st') plt.subplot(232) plt.imshow(a2) plt.axis('off') plt.title('me') plt.subplot(233) plt.imshow(b1) plt.axis('off') plt.title('2nd') plt.subplot(234) print('''Rankings for mypicture Rank 1 : name(사쿠라), distance(0.36107689719729225) Rank 2 : name(트와이스나연), distance(0.36906292012955577) Rank 3 : name(아이유), distance(0.3703590842312735) Rank 4 : name(유트루), distance(0.3809516850126146) Rank 5 : name(지호), distance(0.3886670633997685)''')
Rankings for mypicture Rank 1 : name(사쿠라), distance(0.36107689719729225) Rank 2 : name(트와이스나연), distance(0.36906292012955577) Rank 3 : name(아이유), distance(0.3703590842312735) Rank 4 : name(유트루), distance(0.3809516850126146) Rank 5 : name(지호), distance(0.3886670633997685)
MIT
celebrity.ipynb
peter1505/AIFFEL
5. Arbitrary Value Imputation This technique was derived from Kaggle competitions. It consists of replacing NaN with an arbitrary value.
import pandas as pd df=pd.read_csv("titanic.csv", usecols=["Age","Fare","Survived"]) df.head() def impute_nan(df,variable): df[variable+'_zero']=df[variable].fillna(0) df[variable+'_hundred']=df[variable].fillna(100) df['Age'].hist(bins=50)
_____no_output_____
Apache-2.0
Feature - Handling missing Values/5. Arbitrary Value Imputation.ipynb
deepakkum21/Feature-Engineering
Advantages Easy to implement Captures the importance of missingness if there is one Disadvantages Distorts the original distribution of the variable If missingness is not important, it may mask the predictive power of the original variable by distorting its distribution Hard to decide which value to use
impute_nan(df,'Age') df.head() print(df['Age'].std()) print(df['Age_zero'].std()) print(df['Age_hundred'].std()) print(df['Age'].mean()) print(df['Age_zero'].mean()) print(df['Age_hundred'].mean()) import matplotlib.pyplot as plt %matplotlib inline fig = plt.figure() ax = fig.add_subplot(111) df['Age'].plot(kind='kde', ax=ax) df.Age_zero.plot(kind='kde', ax=ax, color='red') df.Age_hundred.plot(kind='kde', ax=ax, color='green') lines, labels = ax.get_legend_handles_labels() ax.legend(lines, labels, loc='best')
_____no_output_____
Apache-2.0
Feature - Handling missing Values/5. Arbitrary Value Imputation.ipynb
deepakkum21/Feature-Engineering
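The distortion listed under the disadvantages can be seen even on a toy sample: imputing one missing age with an arbitrary 0 or 100 pulls the mean in opposite directions, mirroring the `Age_zero`/`Age_hundred` statistics above (the numbers here are made up for illustration):

```python
import statistics

observed = [22.0, 38.0, 26.0, 35.0, 28.0]   # toy ages; a sixth value is missing
imputed_zero = observed + [0.0]              # like fillna(0)
imputed_hundred = observed + [100.0]         # like fillna(100)

print(statistics.mean(observed))         # 29.8
print(statistics.mean(imputed_zero))     # mean pulled down
print(statistics.mean(imputed_hundred))  # mean pulled up
```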
loading the libraries
import os import sys import pyvista as pv import trimesh as tm import numpy as np import topogenesis as tg import pickle as pk sys.path.append(os.path.realpath('..\..')) # no idea how or why this is not working without adding this to the path TODO: learn about path etc. from notebooks.resources import RES as res
_____no_output_____
MIT
notebooks/Toy_problem/Tp4_criterium_2_daylighting_potential.ipynb
Maxketelaar/thesis
loading the configuration of the test
# load base lattice CSV file lattice_path = os.path.relpath('../../data/macrovoxels.csv') macro_lattice = tg.lattice_from_csv(lattice_path) # load random configuration for testing config_path = os.path.relpath('../../data/random_lattice.csv') configuration = tg.lattice_from_csv(config_path) # load environment environment_path = os.path.relpath("../../data/movedcontext.obj") environment_mesh = tm.load(environment_path) # load solar vectors vectors = pk.load(open("../../data/sunvectors.pk", "rb")) # load vector intensities intensity = pk.load(open("../../data/dnival.pk", "rb"))
_____no_output_____
MIT
notebooks/Toy_problem/Tp4_criterium_2_daylighting_potential.ipynb
Maxketelaar/thesis
during optimization, arrays like these will be passed to the function:
variable = [0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
_____no_output_____
MIT
notebooks/Toy_problem/Tp4_criterium_2_daylighting_potential.ipynb
Maxketelaar/thesis
calling the objective function
# input is the decision variables, a reference lattice, the visibility vectors, their magnitude (i.e. direct normal illuminance for daylight), and a mesh of the environment # output is the total objective score in 100s of lux on the facade, and 100s of lux per each surface (voxel roofs) crit, voxcrit = res.crit_2_DL(variable, macro_lattice, vectors, intensity, environment_mesh)
_____no_output_____
MIT
notebooks/Toy_problem/Tp4_criterium_2_daylighting_potential.ipynb
Maxketelaar/thesis
generating mesh
meshes, _, _ = res.construct_vertical_mesh(configuration, configuration.unit) facademesh = tm.util.concatenate(meshes)
_____no_output_____
MIT
notebooks/Toy_problem/Tp4_criterium_2_daylighting_potential.ipynb
Maxketelaar/thesis
visualisation
p = pv.Plotter(notebook=True) configuration.fast_vis(p,False,False,opacity=0.1) # p.add_arrows(ctr_per_ray, -ray_per_ctr, mag=5, show_scalar_bar=False) # p.add_arrows(ctr_per_ray, nrm_per_ray, mag=5, show_scalar_bar=False) # p.add_mesh(roof_mesh) p.add_mesh(environment_mesh) p.add_mesh(facademesh, cmap='fire', scalars=np.repeat(voxcrit,2)) p.add_points(vectors*-300) # p.add_points(horizontal_test_points) p.show(use_ipyvtk=True)
_____no_output_____
MIT
notebooks/Toy_problem/Tp4_criterium_2_daylighting_potential.ipynb
Maxketelaar/thesis
Module and package
# importing a module; math for mathematical operations import math # checking all the methods of the module dir(math) # using one of the module's methods, sqrt, square root print(math.sqrt(25)) # importing only one function from the math module from math import sqrt # using this method; since only the function was imported from the module, it can be used # without the package name print(sqrt(25)) # printing all the methods of the math module print(dir(math)) # help for the sqrt function of the math module print(help(sqrt)) # random import random # random.choice(), picking an element at random print(random.choice(['Maça', 'Banana', 'Laranja'])) # random.sample(), a sample taken from a range of values print(random.sample(range(100), 10)) # module for statistics import statistics # creating a list of real numbers dados = [2.75, 1.75, 1.25, 0.25, 1.25, 3.5]
Apache-2.0
Cap04/.ipynb_checkpoints/modulos_pacotes-checkpoint.ipynb
carlos-freitas-gitHub/python-analytics
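The cell above stops right after creating `dados`; a small sketch of how the `statistics` module would summarize that list (continuing the example, not part of the original cell):

```python
import statistics

dados = [2.75, 1.75, 1.25, 0.25, 1.25, 3.5]

media = statistics.mean(dados)      # arithmetic mean
mediana = statistics.median(dados)  # middle value of the sorted data
moda = statistics.mode(dados)       # most frequent value

print(media, mediana, moda)
```

`mean` returns 10.75/6 ≈ 1.79, `median` averages the two middle values of the sorted list (1.25 and 1.75) to give 1.5, and `mode` returns 1.25, the only repeated value.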
Notebook Content
1. [Import Packages](1)
1. [Helper Functions](2)
1. [Input](3)
1. [Model](4)
1. [Prediction](5)
1. [Complete Figure](6)

1. Import Packages
Importing all necessary and useful packages in a single cell.
import numpy as np
import keras
import tensorflow as tf
from numpy import array
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import TimeDistributed
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras_tqdm import TQDMNotebookCallback
from sklearn.preprocessing import MinMaxScaler
from tqdm import tqdm_notebook
import matplotlib.pyplot as plt
import pandas as pd
import random
from random import randint
MIT
BareBones 1D CNN LSTM MLP - Sequence Prediction.ipynb
codeWhim/Sequence-Prediction
2. Helper Functions
Defining some helper functions which we will need later in the code.
# split a univariate sequence into samples
def split_sequence(sequence, n_steps, look_ahead=0):
    X, y = list(), list()
    for i in range(len(sequence) - look_ahead):
        # find the end of this pattern
        end_ix = i + n_steps
        # check if we are beyond the sequence
        if end_ix > len(sequence) - 1 - look_ahead:
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix + look_ahead]
        X.append(seq_x)
        y.append(seq_y)
    return array(X), array(y)

def plot_multi_graph(xAxis, yAxes, title='', xAxisLabel='number', yAxisLabel='Y'):
    linestyles = ['-', '--', '-.', ':']
    plt.figure()
    plt.title(title)
    plt.xlabel(xAxisLabel)
    plt.ylabel(yAxisLabel)
    for key, value in yAxes.items():
        plt.plot(xAxis, np.array(value), label=key, linestyle=linestyles[randint(0, 3)])
    plt.legend()

def normalize(values):
    values = array(values, dtype="float64").reshape((len(values), 1))
    # train the normalization
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaler = scaler.fit(values)
    # print('Min: %f, Max: %f' % (scaler.data_min_, scaler.data_max_))
    # normalize the dataset
    normalized = scaler.transform(values)
    return normalized, scaler
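To see what `split_sequence` produces, here is a list-based sketch of the same windowing logic on a toy sequence (plain lists instead of NumPy arrays, so the shapes are easy to read):

```python
def split_sequence_lists(sequence, n_steps, look_ahead=0):
    # same windowing logic as split_sequence above, but returning plain lists
    X, y = [], []
    for i in range(len(sequence) - look_ahead):
        end_ix = i + n_steps
        if end_ix > len(sequence) - 1 - look_ahead:
            break
        X.append(sequence[i:end_ix])
        y.append(sequence[end_ix + look_ahead])
    return X, y

# a window of 2 past values predicts the value right after the window
X, y = split_sequence_lists([10, 20, 30, 40, 50], n_steps=2, look_ahead=0)
print(X)  # [[10, 20], [20, 30], [30, 40]]
print(y)  # [30, 40, 50]

# with look_ahead=1 the target skips one step past the window
X2, y2 = split_sequence_lists([10, 20, 30, 40, 50], n_steps=2, look_ahead=1)
print(X2)  # [[10, 20], [20, 30]]
print(y2)  # [40, 50]
```

The `look_ahead` gap is what lets the model later predict a value several steps beyond the end of its input window.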
3. Input
3-1. Sequence PreProcessing
Splitting and Reshaping
n_features = 1
n_seq = 20
n_steps = 1

def sequence_preprocessed(values, sliding_window, look_ahead=0):
    # Normalization
    normalized, scaler = normalize(values)

    # Try the following if randomizing the sequence:
    # random.seed('sam')  # set the seed
    # raw_seq = random.sample(raw_seq, 100)

    # split into samples
    X, y = split_sequence(normalized, sliding_window, look_ahead)

    # reshape from [samples, timesteps] into [samples, subsequences, timesteps, features]
    X = X.reshape((X.shape[0], n_seq, n_steps, n_features))

    return X, y, scaler
3-2. Providing Sequence
Defining a raw sequence, the sliding window of data to consider, and the number of future timesteps to look ahead.
# define input sequences
sequence_val = [i for i in range(5000, 7000)]
sequence_train = [i for i in range(1000, 2000)]
sequence_test = [i for i in range(10000, 14000)]

# choose a number of time steps for the sliding window
sliding_window = 20

# choose a number of further time steps after the end of sliding_window
# until the target starts (gap between data and target)
look_ahead = 20

X_train, y_train, scaler_train = sequence_preprocessed(sequence_train, sliding_window, look_ahead)
X_val, y_val, scaler_val = sequence_preprocessed(sequence_val, sliding_window, look_ahead)
X_test, y_test, scaler_test = sequence_preprocessed(sequence_test, sliding_window, look_ahead)
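A quick sanity check on how many samples the windowing yields: each (window, target) pair consumes `sliding_window` input steps plus a `look_ahead` gap before its target, so a sequence of length `n` produces `n - sliding_window - look_ahead` samples. This matches the sample counts Keras reports when fitting below:

```python
def num_samples(n, sliding_window, look_ahead):
    # each sample needs sliding_window inputs plus a look_ahead gap
    # before its single target value
    return n - sliding_window - look_ahead

print(num_samples(1000, 20, 20))  # training sequence   -> 960 samples
print(num_samples(2000, 20, 20))  # validation sequence -> 1960 samples
```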
4. Model
4-1. Defining Layers
Adding 1D Convolution, Max Pooling, LSTM and finally Dense (MLP) layers
# define model
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=1, activation='relu'),
                          input_shape=(None, n_steps, n_features)))
model.add(TimeDistributed(MaxPooling1D(pool_size=1)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu', stateful=False))
model.add(Dense(1))
4-2. Training Model
An early stop is defined and can be used in the callbacks param of model fit. It is not used for now, since early stopping is not recommended during the first few iterations of experimentation with new data.
# Defining multiple metrics, leaving it to a choice; some may be useful
# and a few may even surprise on some problems
metrics = ['mean_squared_error',
           'mean_absolute_error',
           'mean_absolute_percentage_error',
           'mean_squared_logarithmic_error',
           'logcosh']

# Compiling Model
model.compile(optimizer='adam', loss='mape', metrics=metrics)

# Defining early stop, call it in the model fit callbacks
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

# Fit model
history = model.fit(X_train, y_train, epochs=100, verbose=3,
                    validation_data=(X_val, y_val))
Train on 960 samples, validate on 1960 samples Epoch 1/100 Epoch 2/100 Epoch 3/100 Epoch 4/100 Epoch 5/100 Epoch 6/100 Epoch 7/100 Epoch 8/100 Epoch 9/100 Epoch 10/100 Epoch 11/100 Epoch 12/100 Epoch 13/100 Epoch 14/100 Epoch 15/100 Epoch 16/100 Epoch 17/100 Epoch 18/100 Epoch 19/100 Epoch 20/100 Epoch 21/100 Epoch 22/100 Epoch 23/100 Epoch 24/100 Epoch 25/100 Epoch 26/100 Epoch 27/100 Epoch 28/100 Epoch 29/100 Epoch 30/100 Epoch 31/100 Epoch 32/100 Epoch 33/100 Epoch 34/100 Epoch 35/100 Epoch 36/100 Epoch 37/100 Epoch 38/100 Epoch 39/100 Epoch 40/100 Epoch 41/100 Epoch 42/100 Epoch 43/100 Epoch 44/100 Epoch 45/100 Epoch 46/100 Epoch 47/100 Epoch 48/100 Epoch 49/100 Epoch 50/100 Epoch 51/100 Epoch 52/100 Epoch 53/100 Epoch 54/100 Epoch 55/100 Epoch 56/100 Epoch 57/100 Epoch 58/100 Epoch 59/100 Epoch 60/100 Epoch 61/100 Epoch 62/100 Epoch 63/100 Epoch 64/100 Epoch 65/100 Epoch 66/100 Epoch 67/100 Epoch 68/100 Epoch 69/100 Epoch 70/100 Epoch 71/100 Epoch 72/100 Epoch 73/100 Epoch 74/100 Epoch 75/100 Epoch 76/100 Epoch 77/100 Epoch 78/100 Epoch 79/100 Epoch 80/100 Epoch 81/100 Epoch 82/100 Epoch 83/100 Epoch 84/100 Epoch 85/100 Epoch 86/100 Epoch 87/100 Epoch 88/100 Epoch 89/100 Epoch 90/100 Epoch 91/100 Epoch 92/100 Epoch 93/100 Epoch 94/100 Epoch 95/100 Epoch 96/100 Epoch 97/100 Epoch 98/100 Epoch 99/100 Epoch 100/100
4-3. Evaluating Model
Plotting training and validation error for each metric
# Plot Errors
for metric in metrics:
    xAxis = history.epoch
    yAxes = {}
    yAxes["Training"] = history.history[metric]
    yAxes["Validation"] = history.history['val_' + metric]
    plot_multi_graph(xAxis, yAxes, title=metric, xAxisLabel='Epochs')
5. Prediction
5-1. Single Value Prediction
Predicting a single value 20 steps (our look_ahead figure from above) past the end of the input window
# demonstrate prediction
x_input = array([i for i in range(100, 120)])
print(x_input)
x_input = x_input.reshape((1, n_seq, n_steps, n_features))
yhat = model.predict(x_input)
print(yhat)
[100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119] [[105.82992]]
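Note that `x_input` here is fed to the model without passing through the training scaler, which is worth keeping in mind when interpreting the output. The min-max transform applied by `MinMaxScaler(feature_range=(0, 1))` is simple enough to sketch by hand; below, the bounds 1000 and 1999 are the min and max of `sequence_train`:

```python
def minmax_scale(v, lo, hi):
    # maps [lo, hi] onto [0, 1], like MinMaxScaler(feature_range=(0, 1))
    return (v - lo) / (hi - lo)

def minmax_inverse(n, lo, hi):
    # undoes the transform, like scaler.inverse_transform
    return n * (hi - lo) + lo

lo, hi = 1000, 1999  # min and max of sequence_train

n = minmax_scale(1500, lo, hi)
assert abs(minmax_inverse(n, lo, hi) - 1500) < 1e-9  # round-trip recovers the value

# the unscaled demo input 100..119 lands well below the fitted range:
print(minmax_scale(100, lo, hi))  # ≈ -0.9009
```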
5-2. Sequence Prediction
Predicting the complete sequence (to determine closeness to the target) based on the test data; change the variable for any other sequence, though.
# Prediction from Training Set
predict_train = model.predict(X_train)

# Prediction from Test Set
predict_test = model.predict(X_test)

"""
df = pd.DataFrame(({"normalized y_train": y_train.flatten(),
                    "normalized predict_train": predict_train.flatten(),
                    "actual y_train": scaler_train.inverse_transform(y_train).flatten(),
                    "actual predict_train": scaler_train.inverse_transform(predict_train).flatten(),
                    }))
"""

df = pd.DataFrame(({
    "normalized y_test": y_test.flatten(),
    "normalized predict_test": predict_test.flatten(),
    "actual y_test": scaler_test.inverse_transform(y_test).flatten(),
    "actual predict_test": scaler_test.inverse_transform(predict_test).flatten()
}))
df
6. Complete Figure
Data, Target, Prediction - all in one single graph
xAxis = [i for i in range(len(y_train))]
yAxes = {}
yAxes["Data"] = sequence_train[sliding_window:len(sequence_train) - look_ahead]
yAxes["Target"] = scaler_train.inverse_transform(y_train)
yAxes["Prediction"] = scaler_train.inverse_transform(predict_train)
plot_multi_graph(xAxis, yAxes, title='')

xAxis = [i for i in range(len(y_test))]
yAxes = {}
yAxes["Data"] = sequence_test[sliding_window:len(sequence_test) - look_ahead]
yAxes["Target"] = scaler_test.inverse_transform(y_test)
yAxes["Prediction"] = scaler_test.inverse_transform(predict_test)
plot_multi_graph(xAxis, yAxes, title='')

print(metrics)
print(model.evaluate(X_test, y_test))
['mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_logarithmic_error', 'logcosh']
3960/3960 [==============================] - 1s 294us/step
[7.694095613258053, 0.00023503987094495595, 0.015312134466990077, 7.694095613258053, 0.00011939386936549021, 0.0001175134772149084]
**Libraries**
from google.colab import drive
drive.mount('/content/drive')

# ***********************
# *****| LIBRARIES |*****
# ***********************
%tensorflow_version 2.x
import pandas as pd
import numpy as np
import os
import json
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, Embedding, Activation, Flatten, Dense
from keras.layers import Conv1D, MaxPooling1D, Dropout
from keras.models import Model
from keras.utils import to_categorical
from keras.optimizers import SGD
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV

device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    print("GPU not found")
else:
    print('Found GPU at: {}'.format(device_name))

# ******************************
# *****| GLOBAL VARIABLES |*****
# ******************************
test_size = 0.2
convsize = 256
convsize2 = 1024
embedding_size = 27
input_size = 1000
conv_layers = [[convsize, 7, 3],
               [convsize, 7, 3],
               [convsize, 3, -1],
               [convsize, 3, -1],
               [convsize, 3, -1],
               [convsize, 3, 3]]
fully_connected_layers = [convsize2, convsize2]
num_of_classes = 2
dropout_p = 0.5
optimizer = 'adam'
batch = 128
loss = 'categorical_crossentropy'
MIT
models/Character_Level_CNN.ipynb
TheBlueEngineer/Serene-1.0
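Each `conv_layers` entry reads `[filters, kernel_size, pool_size]`, with `-1` meaning no pooling after that convolution. Assuming 'valid' convolutions and max pooling with stride equal to the pool size (as in the original character-level CNN this configuration resembles; the actual model cell may differ), the temporal length entering the fully connected head can be tracked with simple arithmetic:

```python
def length_after_conv_stack(length, conv_layers):
    # track the temporal length through valid convs and max pools
    for _filters, kernel, pool in conv_layers:
        length = length - kernel + 1              # 'valid' Conv1D
        if pool != -1:
            length = (length - pool) // pool + 1  # MaxPooling1D, stride = pool
    return length

convsize = 256
conv_layers = [[convsize, 7, 3], [convsize, 7, 3],
               [convsize, 3, -1], [convsize, 3, -1],
               [convsize, 3, -1], [convsize, 3, 3]]

final_len = length_after_conv_stack(1000, conv_layers)
print(final_len)                 # 33
print(final_len * convsize)      # 8448 features after Flatten
```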
**Utility functions**
# *****************
# *** GET FILES ***
# *****************
def getFiles(driverPath, directory, basename, extension):
    # Define a function that will return a list of files
    pathList = []                                         # Declare an empty list
    directory = os.path.join(driverPath, directory)
    for root, dirs, files in os.walk(directory):          # Iterate through roots, dirs and files recursively
        for file in files:                                # For every file in files
            if os.path.basename(root) == basename:        # If the parent directory of the current file matches the parameter
                if file.endswith('.%s' % (extension)):    # If the searched file ends in the parameter
                    path = os.path.join(root, file)       # Join together the root path and file name
                    pathList.append(path)                 # Append the new path to the list
    return pathList

# ****************************************
# *** GET DATA INTO A PANDAS DATAFRAME ***
# ****************************************
def getDataFrame(listFiles, maxFiles, minWords, limit):
    counter_real, counter_max, limitReached = 0, 0, 0
    text_list, label_list = [], []
    print("Word min set to: %i." % (minWords))
    # Iterate through all the files
    for file in listFiles:
        # Open each file and look into it
        with open(file) as f:
            if (limitReached):
                break
            if maxFiles == 0:
                break
            else:
                maxFiles -= 1
            objects = json.loads(f.read())['data']  # Get the data from the JSON file
            # Look into each object from the file and test for limiters
            for object in objects:
                if limit > 0 and counter_real >= (limit * 1000):
                    limitReached = 1
                    break
                if len(object['text'].split()) >= minWords:
                    text_list.append(object['text'])
                    label_list.append(object['label'])
                    counter_real += 1
                counter_max += 1
    if (counter_real > 0 and counter_max > 0):
        ratio = counter_real / counter_max * 100
    else:
        ratio = 0
    # Print the final result
    print("Lists created with %i/%i (%.2f%%) data objects." % (counter_real, counter_max, ratio))
    print("Rest ignored due to minimum words limit of %i or the limit of %i data objects maximum."
          % (minWords, limit * 1000))
    # Return the final lists
    return text_list, label_list, counter_real
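A quick way to see what `getFiles` matches: it only returns files whose immediate parent directory equals `basename` and whose name ends in the given extension. The sketch below reproduces that traversal logic on a throwaway directory tree built with `tempfile`, so no Drive mount is needed:

```python
import os
import tempfile

def get_files(driver_path, directory, basename, extension):
    # same traversal logic as getFiles above
    path_list = []
    directory = os.path.join(driver_path, directory)
    for root, dirs, files in os.walk(directory):
        for file in files:
            if os.path.basename(root) == basename and file.endswith('.%s' % extension):
                path_list.append(os.path.join(root, file))
    return path_list

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'data', 'text'))
os.makedirs(os.path.join(root, 'data', 'meta'))
open(os.path.join(root, 'data', 'text', 'a.json'), 'w').close()
open(os.path.join(root, 'data', 'text', 'b.txt'), 'w').close()   # wrong extension
open(os.path.join(root, 'data', 'meta', 'c.json'), 'w').close()  # wrong parent dir

found = get_files(root, 'data', 'text', 'json')
print([os.path.basename(p) for p in found])  # ['a.json']
```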
**Gather the path to files**
# ***********************************
# *** GET THE PATHS FOR THE FILES ***
# ***********************************
# Path to the content of the Google Drive
driverPath = "/content/drive/My Drive"

# Sub-directories in the driver
paths = ["processed/depression/submission",
         "processed/depression/comment",
         "processed/AskReddit/submission",
         "processed/AskReddit/comment"]

files = [None] * len(paths)

for i in range(len(paths)):
    files[i] = getFiles(driverPath, paths[i], "text", "json")
    print("Gathered %i files from %s." % (len(files[i]), paths[i]))
Gathered 750 files from processed/depression/submission. Gathered 2892 files from processed/depression/comment. Gathered 1311 files from processed/AskReddit/submission. Gathered 5510 files from processed/AskReddit/comment.
**Gather the data from files**
# ************************************
# *** GATHER THE DATA AND SPLIT IT ***
# ************************************
# Local variables
rand_state_splitter = 1000
test_size = 0.2
min_files = [750, 0, 1300, 0]
max_words = [50, 0, 50, 0]
limit_packets = [300, 0, 300, 0]
message = ["Depression submissions", "Depression comments",
           "AskReddit submissions", "AskReddit comments"]
text, label = [], []

# Get the pandas data frames for each category
print("Build the Pandas DataFrames for each category.")
for i in range(4):
    dummy_text, dummy_label, counter = getDataFrame(files[i], min_files[i], max_words[i], limit_packets[i])
    if counter > 0:
        text += dummy_text
        label += dummy_label
        dummy_text, dummy_label = None, None
        print("Added %i samples to data list: %s.\n" % (counter, message[i]))

# Splitting the data
x_train, x_test, y_train, y_test = train_test_split(text, label,
                                                    test_size=test_size,
                                                    shuffle=True,
                                                    random_state=rand_state_splitter)

print("Training data: %i samples." % (len(y_train)))
print("Testing data: %i samples." % (len(y_test)))

# Clear data no longer needed
del rand_state_splitter, min_files, max_words, message, dummy_label, dummy_text
Build the Pandas DataFrames for each category. Word min set to: 50. Lists created with 300000/349305 (85.88%) data objects. Rest ignored due to minimum words limit of 50 or the limit of 300000 data objects maximum. Added 300000 samples to data list: Depression submissions. Word min set to: 0. Lists created with 0/0 (0.00%) data objects. Rest ignored due to minimum words limit of 0 or the limit of 0 data objects maximum. Word min set to: 50. Lists created with 300000/554781 (54.08%) data objects. Rest ignored due to minimum words limit of 50 or the limit of 300000 data objects maximum. Added 300000 samples to data list: AskReddit submissions. Word min set to: 0. Lists created with 0/0 (0.00%) data objects. Rest ignored due to minimum words limit of 0 or the limit of 0 data objects maximum. Training data: 480000 samples. Testing data: 120000 samples.
**Process the data at a character-level**
# *******************************
# *** CONVERT STRING TO INDEX ***
# *******************************
print("Convert the strings to indexes.")
tk = Tokenizer(num_words=None, char_level=True, oov_token='UNK')
tk.fit_on_texts(x_train)
print("Original:", x_train[0])

# **********************************
# *** CONSTRUCT A NEW VOCABULARY ***
# **********************************
print("Construct a new vocabulary")
alphabet = "abcdefghijklmnopqrstuvwxyz"
char_dict = {}
for i, char in enumerate(alphabet):
    char_dict[char] = i + 1
print("dictionary")
tk.word_index = char_dict.copy()  # Use char_dict to replace the tk.word_index
print(tk.word_index)
tk.word_index[tk.oov_token] = max(char_dict.values()) + 1  # Add 'UNK' to the vocabulary
print(tk.word_index)

# *************************
# *** TEXT TO SEQUENCES ***
# *************************
print("Text to sequence.")
x_train = tk.texts_to_sequences(x_train)
x_test = tk.texts_to_sequences(x_test)
print("After sequences:", x_train[0])

# ***************
# *** PADDING ***
# ***************
print("Padding the sequences.")
x_train = pad_sequences(x_train, maxlen=input_size, padding='post')
x_test = pad_sequences(x_test, maxlen=input_size, padding='post')

# ************************
# *** CONVERT TO NUMPY ***
# ************************
print("Convert to Numpy arrays")
x_train = np.array(x_train, dtype='float32')
x_test = np.array(x_test, dtype='float32')

# **************************************
# *** GET CLASSES FOR CLASSIFICATION ***
# **************************************
y_test_copy = y_test
y_train_list = [x - 1 for x in y_train]
y_test_list = [x - 1 for x in y_test]
y_train = to_categorical(y_train_list, num_of_classes)
y_test = to_categorical(y_test_list, num_of_classes)
Convert the strings to indexes. Original: i did not think i had have to post in this subreddit i just feel empty and completely alone i am hanging out with friends but nothing makes me feel happy as i used to be i know people generally have it worse i just want someone to talk to and just be silly with Construct a new vocabulary dictionary {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7, 'h': 8, 'i': 9, 'j': 10, 'k': 11, 'l': 12, 'm': 13, 'n': 14, 'o': 15, 'p': 16, 'q': 17, 'r': 18, 's': 19, 't': 20, 'u': 21, 'v': 22, 'w': 23, 'x': 24, 'y': 25, 'z': 26} {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7, 'h': 8, 'i': 9, 'j': 10, 'k': 11, 'l': 12, 'm': 13, 'n': 14, 'o': 15, 'p': 16, 'q': 17, 'r': 18, 's': 19, 't': 20, 'u': 21, 'v': 22, 'w': 23, 'x': 24, 'y': 25, 'z': 26, 'UNK': 27} Text to sequence. After sequences: [9, 27, 4, 9, 4, 27, 14, 15, 20, 27, 20, 8, 9, 14, 11, 27, 9, 27, 8, 1, 4, 27, 8, 1, 22, 5, 27, 20, 15, 27, 16, 15, 19, 20, 27, 9, 14, 27, 20, 8, 9, 19, 27, 19, 21, 2, 18, 5, 4, 4, 9, 20, 27, 9, 27, 10, 21, 19, 20, 27, 6, 5, 5, 12, 27, 5, 13, 16, 20, 25, 27, 1, 14, 4, 27, 3, 15, 13, 16, 12, 5, 20, 5, 12, 25, 27, 1, 12, 15, 14, 5, 27, 9, 27, 1, 13, 27, 8, 1, 14, 7, 9, 14, 7, 27, 15, 21, 20, 27, 23, 9, 20, 8, 27, 6, 18, 9, 5, 14, 4, 19, 27, 2, 21, 20, 27, 14, 15, 20, 8, 9, 14, 7, 27, 13, 1, 11, 5, 19, 27, 13, 5, 27, 6, 5, 5, 12, 27, 8, 1, 16, 16, 25, 27, 1, 19, 27, 9, 27, 21, 19, 5, 4, 27, 20, 15, 27, 2, 5, 27, 9, 27, 11, 14, 15, 23, 27, 16, 5, 15, 16, 12, 5, 27, 7, 5, 14, 5, 18, 1, 12, 12, 25, 27, 8, 1, 22, 5, 27, 9, 20, 27, 23, 15, 18, 19, 5, 27, 9, 27, 10, 21, 19, 20, 27, 23, 1, 14, 20, 27, 19, 15, 13, 5, 15, 14, 5, 27, 20, 15, 27, 20, 1, 12, 11, 27, 20, 15, 27, 1, 14, 4, 27, 10, 21, 19, 20, 27, 2, 5, 27, 19, 9, 12, 12, 25, 27, 23, 9, 20, 8, 27] Padding the sequences. Convert to Numpy arrays
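The character-to-index step above goes through Keras, but the mapping itself is a dictionary lookup plus post-padding, which a plain-Python sketch can reproduce. The `UNK` index 27 covers spaces and anything else outside a-z, matching the vocabulary printed in the output above:

```python
alphabet = "abcdefghijklmnopqrstuvwxyz"
char_dict = {char: i + 1 for i, char in enumerate(alphabet)}
unk_index = max(char_dict.values()) + 1  # 27, like tk.word_index['UNK']

def char_to_sequence(text, maxlen):
    # map each character to its index, unknown chars to unk_index,
    # then pad with zeros at the end up to maxlen ('post' padding)
    seq = [char_dict.get(c, unk_index) for c in text.lower()][:maxlen]
    return seq + [0] * (maxlen - len(seq))

seq = char_to_sequence("i did", maxlen=10)
print(seq)  # [9, 27, 4, 9, 4, 0, 0, 0, 0, 0]
```

Compare with the cell output: the training example "i did not think ..." begins `[9, 27, 4, 9, 4, ...]`, exactly the indices this sketch produces for "i did".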