2.1. Hypothesis Creation 2.1.1 Store Hypotheses **1.** Stores with a larger staff should sell more. **2.** Stores with greater stock capacity should sell more. **3.** Larger stores should sell more. **4.** Stores with a wider assortment should sell more. **5.** Stores with competitors that are more ...
# year
df2['year'] = df2['date'].dt.year
# month
df2['month'] = df2['date'].dt.month
# day
df2['day'] = df2['date'].dt.day
# week of year (Series.dt.weekofyear was removed in pandas 2.0)
df2['week_of_year'] = df2['date'].dt.isocalendar().week
# year week
df2['year_week'] = df2['date'].dt.strftime( '%Y-%W' )
# competition since
df2['competition_since'] = df2.apply( lambd...
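The "competition since" derivation is truncated above. A minimal sketch of one common approach, assuming the Rossmann columns `competition_open_since_year` and `competition_open_since_month` (the toy values here are made up, not the dataset's):

```python
import pandas as pd

# Toy frame standing in for df2 (hypothetical values)
df = pd.DataFrame({
    'date': pd.to_datetime(['2015-07-31']),
    'competition_open_since_year': [2014],
    'competition_open_since_month': [3],
})

# Rebuild the date the competitor opened, then the months elapsed since
df['competition_since'] = df.apply(
    lambda x: pd.Timestamp(year=int(x['competition_open_since_year']),
                           month=int(x['competition_open_since_month']),
                           day=1),
    axis=1)
df['competition_time_month'] = (
    (df['date'] - df['competition_since']) / pd.Timedelta(days=30)).astype(int)
print(df['competition_time_month'].iloc[0])  # 17
```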
MIT
m03_v01_store_sales_predict.ipynb
artavale/Rossman-Forcast-Sales
3.0. STEP 03 - VARIABLE FILTERING
df3 = df2.copy()
df3.head().T
3.1. Row Filtering
df3 = df3[(df3['open'] != 0) & (df3['sales'] > 0 )]
3.2. Column Selection
cols_drop = ['customers', 'open', 'promo_interval', 'month_map']
df3 = df3.drop( cols_drop, axis=1 )
df3.columns
Calculate Detector Counts: NEI. Compute the AIA response using our full emission model, including non-equilibrium ionization.
import os
import time
import h5py
os.environ['MKL_NUM_THREADS'] = '1'
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['NUMEXPR_NUM_THREADS'] = '1'
import numpy as np
import astropy.units as u
import sunpy.sun.constants
import matplotlib.pyplot as plt
import dask
import distributed
import synthesizAR
from synthesizAR.in...
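The `client` handle passed to the ionization calculation further down is never defined in this excerpt; presumably it is a `distributed.Client`. A minimal local-cluster sketch (worker counts are placeholders, not the paper's actual configuration):

```python
from distributed import Client, LocalCluster

# Spin up a local scheduler plus workers; the real runs would instead
# connect to a cluster (addresses and worker counts here are assumptions)
cluster = LocalCluster(n_workers=4, threads_per_worker=1)
client = Client(cluster)
print(client)
```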
MIT
notebooks/cooling/detect_nei.ipynb
rice-solar-physics/synthetic-observables-paper-models
Load in the desired field and emission model
field = synthesizAR.Field.restore('/storage-home/w/wtb2/data/timelag_synthesis_v2/cooling/field_checkpoint/')
We are using an emission model which includes only the most dominant ions. Comparisons to the temperature response functions show these provide accurate coverage.
em_model = EmissionModel.restore('/storage-home/w/wtb2/data/timelag_synthesis_v2/base_emission_model.json')
Compute and store the non-equilibrium ionization populations for each loop
futures = em_model.calculate_ionization_fraction(
    field,
    '/storage-home/w/wtb2/data/timelag_synthesis_v2/cooling/nei/ionization_fractions.h5',
    interface=EbtelInterface,
    client=client)
em_model.save('/storage-home/w/wtb2/data/timelag_synthesis_v2/cooling/nei/emission_model.json')
futures =...
Or just reload the emission model
em_model = EmissionModel.restore('/storage-home/w/wtb2/data/timelag_synthesis_v2/cooling/nei/emission_model.json')
Compute the detector counts
aia = InstrumentSDOAIA([0, 10000]*u.s, field.magnetogram.observer_coordinate)
observer = synthesizAR.Observer(field, [aia], parallel=True)
observer.build_detector_files('/storage-home/w/wtb2/data/timelag_synthesis_v2/cooling/nei/', ds=0.5*u.Mm)
futures_flat = observer.flatten_detector_counts(...
And finally build the maps
futures_bin = observer.bin_detector_counts( '/storage-home/w/wtb2/data/timelag_synthesis_v2/cooling/nei/')
Sonic The Hedgehog 2 with Dueling DQN Step 1: Import the libraries
import time
import retro
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
from IPython.display import clear_output
import math
%matplotlib inline
import sys
sys.path.append('../../')
from algos.agents.dqn_agent import DDQNAgent
from algos.models.dqn_cnn import...
MIT
cgames/06_sonic2/sonic2_ddqn.ipynb
deepanshut041/Reinforcement-Learning-Basic
Step 2: Create our environment. Initialize the environment in the code cell below.
env = retro.make(game='SonicTheHedgehog2-Genesis', state='EmeraldHillZone.Act1', scenario='contest')
env.seed(0)

# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device: ", device)
Step 3: Viewing our Environment
print("The size of frame is: ", env.observation_space.shape)
print("No. of Actions: ", env.action_space.n)
env.reset()
plt.figure()
plt.imshow(env.reset())
plt.title('Original Frame')
plt.show()
possible_actions = {
    # No Operation
    0: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    # Left
    ...
Execute the code cell below to play Sonic 2 with a random policy.
def random_play():
    score = 0
    env.reset()
    for i in range(200):
        env.render()
        action = possible_actions[np.random.randint(len(possible_actions))]
        state, reward, done, _ = env.step(action)
        score += reward
        if done:
            print("Your Score at end of game is: ", score)...
Step 4: Preprocessing Frame
plt.figure()
plt.imshow(preprocess_frame(env.reset(), (1, -1, -1, 1), 84), cmap="gray")
plt.title('Pre Processed image')
plt.show()
Step 5: Stacking Frame
def stack_frames(frames, state, is_new=False):
    frame = preprocess_frame(state, (1, -1, -1, 1), 84)
    frames = stack_frame(frames, frame, is_new)
    return frames
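The `preprocess_frame`/`stack_frame` helpers above live in the repo's `algos/` package. For readers without it, a self-contained sketch of the same stacking idea (these helpers are hypothetical stand-ins, not the repo's code):

```python
import numpy as np
from collections import deque

STACK_SIZE = 4  # the network sees the last 4 frames

def stack_frame(frames, frame, is_new):
    """Keep a deque of the last STACK_SIZE frames; a new episode
    repeats the first frame so the stack is always full."""
    if is_new:
        frames = deque([frame] * STACK_SIZE, maxlen=STACK_SIZE)
    else:
        frames.append(frame)
    return frames

# Usage: stack two 84x84 grayscale frames
first = np.zeros((84, 84), dtype=np.float32)
frames = stack_frame(None, first, is_new=True)
frames = stack_frame(frames, np.ones((84, 84), dtype=np.float32), is_new=False)
state = np.stack(frames, axis=0)
print(state.shape)  # (4, 84, 84)
```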
Step 6: Creating our Agent
INPUT_SHAPE = (4, 84, 84)
ACTION_SIZE = len(possible_actions)
SEED = 0
GAMMA = 0.99           # discount factor
BUFFER_SIZE = 100000   # replay buffer size
BATCH_SIZE = 32        # Update batch size
LR = 0.0001            # learning rate
TAU = 1e-3             # for soft update of target parameters
UPDATE_EVERY = 100 ...
Step 7: Watching untrained agent play
env.viewer = None
# watch an untrained agent
state = stack_frames(None, env.reset(), True)
for j in range(200):
    env.render(close=False)
    action = agent.act(state, eps=0.01)
    next_state, reward, done, _ = env.step(possible_actions[action])
    state = stack_frames(state, next_state, False)
    if done:
        ...
Step 8: Loading Agent. Uncomment the line to load a pretrained agent.
start_epoch = 0
scores = []
scores_window = deque(maxlen=20)
Step 9: Train the Agent with DDQN
epsilon_by_episode = lambda frame_idx: EPS_END + (EPS_START - EPS_END) * math.exp(-1. * frame_idx / EPS_DECAY)
plt.plot([epsilon_by_episode(i) for i in range(1000)])

def train(n_episodes=1000):
    """
    Params
    ======
        n_episodes (int): maximum number of training episodes
    """
    for i_episode in range...
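The exponential epsilon schedule above can be checked numerically. A sketch with assumed endpoints (`EPS_START`, `EPS_END`, `EPS_DECAY` here are illustrative values, not necessarily the notebook's):

```python
import math

EPS_START, EPS_END, EPS_DECAY = 0.99, 0.01, 100

def epsilon_by_episode(i):
    # Decays from EPS_START toward EPS_END with time constant EPS_DECAY
    return EPS_END + (EPS_START - EPS_END) * math.exp(-1.0 * i / EPS_DECAY)

print(round(epsilon_by_episode(0), 2))       # 0.99 (fully exploratory at the start)
print(round(epsilon_by_episode(10_000), 2))  # 0.01 (essentially greedy late on)
```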
Step 10: Watch a Smart Agent!
env.viewer = None
# watch the trained agent (eps kept small so it mostly exploits)
state = stack_frames(None, env.reset(), True)
for j in range(10000):
    env.render(close=False)
    action = agent.act(state, eps=0.01)
    next_state, reward, done, _ = env.step(possible_actions[action])
    state = stack_frames(state, next_state, False)
    if done:
        ...
Analyze A/B Test Results. This project will assure you have mastered the subjects covered in the statistics lessons. The hope is for this project to be as comprehensive a review of these topics as possible. Good luck! Table of Contents - [Introduction](#intro) - [Part I - Probability](#probability) - [Part II - A/B Test](#ab_test) -...
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
import gc
%matplotlib inline

# We are setting the seed to assure you get the same answers on quizzes as we set up
random.seed(42)
MIT
Analyze_ab_test_results_notebook.ipynb
Rashwan94/Conversion-A-B-test
`1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.** a. Read in the dataset and take a look at the top few rows here:
df = pd.read_csv('ab_data.csv')
df.head()
b. Use the below cell to find the number of rows in the dataset.
df.shape
c. The number of unique users in the dataset.
df.user_id.nunique()
d. The proportion of users converted.
df[df['converted'] == 1]['user_id'].nunique() / df.user_id.nunique()
e. The number of times the `new_page` and `treatment` don't line up.
df[(df['group'] == 'treatment') & (df['landing_page'] == 'old_page')].shape[0] + \
    df[(df['group'] == 'control') & (df['landing_page'] == 'new_page')].shape[0]
f. Do any of the rows have missing values?
df.isnull().sum()
`2.` For the rows where **treatment** is not aligned with **new_page** or **control** is not aligned with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to provide how we should handle these rows. a. Now use the answer to the quiz to create a new datase...
remov = df[(df['group'] == 'treatment') & (df['landing_page'] == 'old_page')].append(
    df[(df['group'] == 'control') & (df['landing_page'] == 'new_page')])
df2 = df.append(remov).drop_duplicates(keep=False)
df2.shape

# Double Check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment')...
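`DataFrame.append` was removed in pandas 2.0, so the append-then-dedupe trick above no longer runs on current pandas. The same cleanup can be done with a boolean mask; a sketch on toy data (not the notebook's own cell):

```python
import pandas as pd

# Toy frame with two aligned and two mismatched rows (hypothetical data)
df = pd.DataFrame({
    'group':        ['treatment', 'treatment', 'control', 'control'],
    'landing_page': ['new_page',  'old_page',  'old_page', 'new_page'],
})

# Keep only rows where group and landing_page line up
mismatch = (((df['group'] == 'treatment') & (df['landing_page'] == 'old_page')) |
            ((df['group'] == 'control') & (df['landing_page'] == 'new_page')))
df2 = df[~mismatch]
print(len(df2))  # 2
```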
`3.` Use **df2** and the cells below to answer questions for **Quiz 3** in the classroom. a. How many unique **user_id**s are in **df2**?
df2.user_id.nunique()
b. There is one **user_id** repeated in **df2**. What is it?
df2[df2.duplicated(['user_id'], keep='first')]['user_id']
c. What is the row information for the repeat **user_id**?
df2[df2.duplicated(['user_id'], keep='last')]
df2.shape
d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**.
df2 = df2.drop_duplicates(subset='user_id', keep='first')
df2.shape
`4.` Use **df2** in the below cells to answer the quiz questions related to **Quiz 4** in the classroom. a. What is the probability of an individual converting regardless of the page they receive?
df2[df2['converted'] == 1].shape[0] / df2['converted'].shape[0]
b. Given that an individual was in the `control` group, what is the probability they converted?
prob_old = df2[df2['group'] == 'control']['converted'].mean()
prob_old
c. Given that an individual was in the `treatment` group, what is the probability they converted?
prob_new = df2[df2['group'] == 'treatment']['converted'].mean()
prob_new
d. What is the probability that an individual received the new page?
df2[df2['landing_page'] == 'new_page'].shape[0] / df2.shape[0]
e. Consider your results from a. through d. above, and explain below whether you think there is sufficient evidence to say that the new treatment page leads to more conversions. **Your answer goes here.** From the observations, each group has an almost equal proportion. The probability of conversion after seeing the old ...
p_new = df2[df2['converted'] == 1].shape[0] / df2['converted'].shape[0]
p_new
b. What is the **convert rate** for $p_{old}$ under the null?
p_old = df2[df2['converted'] == 1].shape[0] / df2['converted'].shape[0]
p_old
c. What is $n_{new}$?
n_new = df2[df2['landing_page'] == 'new_page'].shape[0] n_new
d. What is $n_{old}$?
n_old = df2[df2['landing_page'] == 'old_page'].shape[0] n_old
e. Simulate $n_{new}$ transactions with a convert rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**.
# Simulate n_new conversions under the null
new_page_converted = np.random.choice([0, 1], size=n_new, p=[1 - p_new, p_new])
f. Simulate $n_{old}$ transactions with a convert rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**.
# Simulate n_old conversions under the null
old_page_converted = np.random.choice([0, 1], size=n_old, p=[1 - p_old, p_old])
g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).
diff = new_page_converted.mean() - old_page_converted.mean() diff
h. Simulate 10,000 $p_{new}$ - $p_{old}$ values using this same process similarly to the one you calculated in parts **a. through g.** above. Store all 10,000 values in a numpy array called **p_diffs**.
p_diffs = []
for i in range(10000):
    new_conv = np.random.choice([0, 1], size=n_new, p=[1 - p_new, p_new])
    old_conv = np.random.choice([0, 1], size=n_old, p=[1 - p_old, p_old])
    p_diffs.append(new_conv.mean() - old_conv.mean())

obsv_diff = prob_new - prob_old
obsv_diff
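The 10,000-iteration loop above can be vectorized with binomial draws, which is substantially faster; a sketch with small placeholder sample sizes and a made-up null rate:

```python
import numpy as np

rng = np.random.default_rng(42)
p_new = p_old = 0.12        # null conversion rate (placeholder)
n_new, n_old = 5000, 5000   # placeholder sample sizes

# Each binomial draw is the number of conversions in one simulated sample;
# dividing by n gives the simulated conversion rate
p_diffs = (rng.binomial(n_new, p_new, size=10_000) / n_new
           - rng.binomial(n_old, p_old, size=10_000) / n_old)
print(p_diffs.shape)  # (10000,)
```

Under the null the two rates are equal, so the mean of `p_diffs` should sit near zero.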
i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
plt.hist(p_diffs, alpha=0.5)
plt.axvline(x=np.percentile(p_diffs, 2.5), color='red')
plt.axvline(x=np.percentile(p_diffs, 97.5), color='red')
plt.axvline(x=obsv_diff, color='green', linestyle='--')
plt.title('Sampling distribution of conversion rates')
plt.ylabel('Frequency')
plt.xlabel('Sample mean');
j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?
(np.array(p_diffs) > obsv_diff).mean()
k. Please explain using the vocabulary you've learned in this course what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages? **Put your answer here.** The value computed above is, t...
import statsmodels.api as sm

convert_old = len(df2.query('landing_page == "old_page" & converted == 1'))
convert_new = len(df2.query('landing_page == "new_page" & converted == 1'))
# n_old previously computed
# n_new previously computed
m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](https://docs.w3cub.com/statsmodels/generated/statsmodels.stats.proportion.proportions_ztest/) is a helpful link on using the built in.
stat, pval = sm.stats.proportions_ztest(count=[convert_old, convert_new],
                                        nobs=[n_old, n_new],
                                        value=0, alternative='smaller')
print('z-statistic=', stat)
print('p-value=', pval)
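A self-contained toy call to `proportions_ztest`, to make the argument order concrete (the counts and sample sizes here are made up, not the notebook's data):

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# 120/1000 vs 100/1000 conversions (hypothetical counts);
# count and nobs are parallel arrays, one entry per group
stat, pval = proportions_ztest(count=np.array([120, 100]),
                               nobs=np.array([1000, 1000]),
                               value=0, alternative='two-sided')
print(round(stat, 2))
```

With `value=0` the test uses the pooled proportion for the standard error, so the z-statistic compares group 0 against group 1 directly.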
z-statistic= 1.3116075339133115 p-value= 0.905173705140591
n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**? **Put your answer here.** With 0.05 type 1 error rate (0.95 confidence level), the null hypothesis gets rejected if Z-score of the...
df2['ab_page'] = pd.get_dummies(df2['landing_page']).iloc[:, 0]
df2['intercept'] = 1
df2.head()
c. Use **statsmodels** to instantiate your regression model on the two columns you created in part b., then fit the model using the two columns you created in part **b.** to predict whether or not an individual converts.
lm = sm.Logit(df2['converted'], df2[['intercept', 'ab_page']])
results = lm.fit()
Optimization terminated successfully. Current function value: 0.366118 Iterations 6
d. Provide the summary of your model below, and use it as necessary to answer the following questions.
results.summary2()
print('With all other elements held constant treatment users are',
      1/np.exp(-0.0150), 'times less likely to convert than control users')
With all other elements held constant treatment users are 1.015113064615719 times less likely to convert than control users
e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**? **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in **Part II**? **Put your answer here.** The p-va...
country = pd.read_csv('countries.csv')
country.head()
country.shape
country.isnull().sum()
country['country'].value_counts()
df2 = df2.join(country.set_index('user_id'), on='user_id')
df2[['UK', 'CA']] = pd.get_dummies(df2['country'], drop_first=True)
df2.head()
lm = sm.Logit(df2['converted'], df2[['intercept', 'ab_pag...
Optimization terminated successfully. Current function value: 0.366112 Iterations 6
h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model. Provide the summary results, and your concl...
results.summary2()
print('With every other factor held constant, users from the UK are', np.exp(0.0506).round(2),
      'times more likely to convert than users from the US')
print('With every other factor held constant, users from CA are', np.exp(0.0408).round(2),
      'times more likely to convert than users from the US')
With every other factor held constant, users from Uk are 1.05 times more likely to convert than users from the US With every other factor held constant, users from CA are 1.04 times more likely to convert than users from the US
**Does it appear that country had an impact on conversion?** No, it appears not. **Explanation:** The p-values associated with the coefficients of both Canada and the UK exceed the 0.05 (5%) threshold for a 95% confidence interval.
from subprocess import call
call(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])
Warning - the notebook needs the following files to run well. The database of metabolites used to see the usual ratios between the different elements was from ChEBI, specifically https://www.ebi.ac.uk/chebi/downloadsForward.do. Two files were used from this site: - the ChEBI_complete_3star.sdf file contains a...
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
MIT
Form_ratio_test(Needs files to run).ipynb
aeferreira/similarity_share
ChEBI with only 3-star annotated metabolites. Reading and Storing Formulas from a Database
# Open Database file
with open('ChEBI_complete_3star.sdf/ChEBI_complete_3star.sdf') as file:
    formula = False
    form = []
    for line in file:
        # Append all formulas
        if formula == True:
            form.append(line[:-1])
            #print(line[:-1])
        # Read next line
        if line.startswi...
Filter the formulas in the database. Filter out formulas with: - an unspecified R group or X halogen, - a section that could be repeated n times (polymers), - non-covalent bonds represented by a '.' such as '(C6H8O6)n.H2O', - isotopes such as C6H11[18F]O5, - no carbons or hy...
formulas = []
tot_form = len(form)
#c,r,n,p,x,n=0,0,0,0,0,0
for i in form:
    if 'C' in i:  # Has to have Carbon (this is actually: has to have carbon, Cl, Ca, Cu, Cr, Cs or Cd - fixed later)
        #c=c+1
        if 'R' not in i:  # No unspecified R groups
            #r=r+1
            if 'n' not in i:  # No polymer...
Nº of formulas after filtering: 43779 Nº of non-repeating formulas after filtering: 21164
There are 43779 formulas remaining after filtering; 21164 of those are unique formulas (more than half of the formulas were repeated).
#len(set(formulas)) #formulas = set(formulas) #formulas[:20]
Transform formulas from string format into Dictionary/DataFrame format and remove repeated formulas (done automatically by this process)
def formula_process(formula):
    """Transforms a formula in string format into a DataFrame.
    Element order: C, H, N, O, S, P, F, Cl."""
    # Make the row DataFrame to store the results
    #results = pd.DataFrame(np.zeros((1,8)), columns = ['C','H','O','N','S','P','Cl','F'])
    results = {}
    count = ''
    le...
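The parser above (truncated here) walks the formula string character by character. A compact regex-based alternative, shown as a sketch rather than the notebook's own implementation:

```python
import re

def formula_process(formula):
    """Parse 'C6H12O6' into {'C': 6, 'H': 12, 'O': 6}.
    An element symbol is one capital letter plus an optional lowercase
    letter; a missing count after a symbol means 1."""
    counts = {}
    for elem, num in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return counts

print(formula_process('C6H12O6'))  # {'C': 6, 'H': 12, 'O': 6}
print(formula_process('CHCl3'))    # {'C': 1, 'H': 1, 'Cl': 3}
```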
See the elements present in the list of formulas and the number of formulas each element appears in. Note that C appears in 21104 and H in 21159 formulas out of the 21164, when they should appear in all of them. This discrepancy is explained by the comment made in the formula filtering cell a bit abo...
#final_db[final_db['Se'].notnull()].T
final_db.notnull().sum()
Guarantee that each formula has at least 1 C and 1 H, and contains only the following elements: C, H, O, N, S, P, Cl and F. Only formulas with C, H, O, N, S, P, Cl and F were kept since they are (as we can also see above) by far the most common elements in metabolites and are the elements that can be considered (right now as it is...
# Only keep formulas that have carbon and hydrogen atoms
for i in range(2):
    teste = final_db.iloc[:, i].notnull()
    #print(final_db.iloc[:,i].isnull())
    final_db = final_db.loc[teste]

# Take out formulas that have an element outside of C,H,O,N,S,P,Cl and F
for i in range(8, len(final_db.columns)):
    te...
This finally filters the number of formulas from 21164 down to 19593 - the final number of formulas considered. As can be seen from this small filtering step, very few formulas had elements outside of the 8 main ones mentioned.
final_db.notnull().sum()

# Truncate the DataFrame to only the elements we want to see and replace NaNs with 0
db_df = final_db[['C','H','O','N','S','P','F','Cl']]
db_df = db_df.replace({np.nan: 0})
db_df
Calculate the distribution of ratios to carbon in the 19593 formulas - slowest cell of this analysis
# Calculate the different ratios
ratios_df = pd.DataFrame(index=db_df.index,
                         columns=['H/C','O/C','N/C','S/C','P/C','F/C','Cl/C'])
c = 0
for i in db_df.index:
    ratios_df.loc[i] = [db_df.loc[i,'H']/db_df.loc[i,'C'],
                       db_df.loc[i,'O']/db_df.loc[i,'C'],
                       db_df.loc[i,'N']...
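The row-by-row loop above is noted as the slowest cell. With the counts already in a numeric DataFrame, the whole ratio table can be computed in one vectorized division; a sketch on a toy frame (the two example formulas are made up stand-ins for `db_df`):

```python
import pandas as pd

# Toy stand-in for db_df: glucose and chloroform
db_df = pd.DataFrame({'C': [6, 1], 'H': [12, 1], 'O': [6, 0], 'Cl': [0, 3]},
                     index=['C6H12O6', 'CHCl3'])

# Divide every column by the C column, row-wise, then drop the trivial C/C
ratios_df = db_df.div(db_df['C'], axis=0).drop(columns='C')
ratios_df.columns = [f'{c}/C' for c in ratios_df.columns]
print(ratios_df.loc['C6H12O6', 'H/C'])  # 2.0
```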
Analysis of the 'Element'/Carbon ratios. Histograms and, more importantly, Cumulative Graphs of the ratios - not eliminating ratios equal to 0. Outside of the 'H/C' ratio, the ratios were dominated by formulas that didn't have the non-carbon element, which skewed the following results. Histograms of each of ...
f, ax = plt.subplots(figsize=(4,4))
data = ratios_df['H/C']
plt.hist(data, bins=np.arange(min(data), 4, 0.1))
plt.title(['H/C'])
plt.show()

for i in ratios_df.columns[1:]:
    data = ratios_df[i]
    plt.hist(data, bins=np.arange(min(data), 1.3, 0.05))
    plt.title(i)
    plt.show()
(Percent) Cumulative Graphs of the % of formulas by a certain element/Carbon ratio. First the general cumulative graph is presented, then two subsections of the graph in more detail: the 'end' of the curves (all except 'H/C') and the 'start' of the curves (especially 'H/C').
# Plot the cumulative graph and adjust parameters
f, ax = plt.subplots(figsize=(12,9))
for i in ratios_df.columns:
    data = ratios_df[i]
    # Make the histogram with the intended ranges between the bins
    values, base = np.histogram(data, bins=np.arange(min(data), 4, 0.05))  # Set X
    # Calculate the cumulative %...
Cumulative Graphs of the ratios - eliminating ratios equal to 0. Outside of the 'H/C' ratio, the ratios were dominated by formulas that didn't have the non-carbon element. Thus, for each 'element/C' ratio, the formulas that didn't have the corresponding element were not taken into account for the graph. (Percent)...
# Plot the Cumulative Graph and Adjust Parameters
f, ax = plt.subplots(figsize=(12,9))
for i in ratios_df.columns:
    data = ratios_df[i]
    values, base = np.histogram(data, bins=np.arange(min(data), 4, 0.05))
    # Subtract the number of formulas that don't have the other element (that is not carbon) in the ratio (...
ChEBI with all metabolites in the Database - Slower. This analysis is exactly the same as the one made before from the 2nd cell down; only the cell that reads the file is different. Reading and Storing Formulas from a Database
# Open Database file
with open('ChEBI_complete_3star.sdf/ChEBI_complete.sdf') as file:
    formula = False
    form2 = []
    for line in file:
        # Append all formulas
        if formula == True:
            form2.append(line[:-1])
            #print(line[:-1])
        # Read next line
        if line.startswith('...
Filter the formulas in the database. Filter out formulas with: - an unspecified R group or X halogen, - a section that could be repeated n times (polymers), - non-covalent bonds represented by a '.' such as '(C6H8O6)n.H2O', - isotopes such as C6H11[18F]O5, - no carbons or hy...
formulas2 = []
tot_form = len(form2)
#c,r,n,p,x,n=0,0,0,0,0,0
for i in form2:
    if 'C' in i:  # Has to have Carbon (this is actually: has to have carbon, Cl, Ca, Cu, Cr, Cs or Cd - fixed later)
        #c=c+1
        if 'R' not in i:  # No unspecified R groups
            #r=r+1
            if 'n' not in i:  # No polym...
Nº of formulas after filtering: 106052 Nº of non-repeating formulas after filtering: 37938
There are 106052 formulas remaining after filtering; 37938 of those are unique formulas (more than half of the formulas were repeated).
#len(set(formulas2)) #formulas2 = set(formulas2) #formulas2[:20]
Transform formulas from string format into Dictionary/DataFrame format and remove repeated formulas (done automatically by this process)
# Transform each formula into Dictionary or DataFrame format
# This also eliminates repeating formulas
db2 = {}
for i in formulas2:
    #print(i)
    db2[i] = formula_process(i)

# Transform information into a DataFrame
final_db2 = pd.DataFrame.from_dict(db2).T
final_db2
See the elements present in the list of formulas and the number of formulas each element appears in. Note that C appears in 37874 and H in 37933 formulas out of the 37938, when they should appear in all of them. This discrepancy is explained by the comment made in the formula filtering cell a bit abo...
#final_db2[final_db2['Se'].notnull()].T
final_db2.notnull().sum()
Guarantee that each formula has at least 1 C and 1 H, and contains only the following elements: C, H, O, N, S, P, Cl and F. Only formulas with C, H, O, N, S, P, Cl and F were kept since they are (as we can also see above) by far the most common elements in metabolites and are the elements that can be considered (right now as it is...
# Only keep formulas that have carbon and hydrogen atoms
for i in range(2):
    teste = final_db2.iloc[:, i].notnull()
    #print(final_db2.iloc[:,i].isnull())
    final_db2 = final_db2.loc[teste]

# Take out formulas that have an element outside of C,H,O,N,S,P,Cl and F
for i in range(8, len(final_db2.columns)):
    ...
This finally filters the number of formulas from 37938 down to 35245 - the final number of formulas considered. As can be seen from this small filtering step, very few formulas had elements outside of the 8 main ones mentioned.
final_db2.notnull().sum()

# Truncate the DataFrame to only the elements we want to see and replace NaNs with 0
db_df2 = final_db2[['C','H','O','N','S','P','F','Cl']]
db_df2 = db_df2.replace({np.nan: 0})
db_df2
Calculate the distribution of ratios to carbon in the 19593 formulas - slowest cell of the notebook
# Calculate the different ratios ratios_df2 = pd.DataFrame(index=db_df2.index, columns = ['H/C','O/C','N/C','S/C','P/C','F/C','Cl/C']) c = 0 for i in db_df2.index: ratios_df2.loc[i] = [db_df2.loc[i,'H']/db_df2.loc[i,'C'], db_df2.loc[i,'O']/db_df2.loc[i,'C'], db_df2...
_____no_output_____
MIT
Form_ratio_test(Needs files to run).ipynb
aeferreira/similarity_share
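The row-by-row loop in the cell above is what makes it the slowest in the notebook; the same ratios can be computed in a single vectorized division. A sketch on toy data (the counts below are illustrative, not from the real database):

```python
import pandas as pd

# Toy element-count table in the same shape as db_df2
db_df2 = pd.DataFrame(
    {"C": [6, 2], "H": [12, 6], "O": [6, 1], "N": [0, 0],
     "S": [0, 0], "P": [0, 0], "F": [0, 0], "Cl": [0, 0]},
    index=["C6H12O6", "C2H6O"],
)

# Divide every non-carbon column by the C column in one operation
ratios_df2 = db_df2.drop(columns="C").div(db_df2["C"], axis=0)
ratios_df2.columns = [f"{el}/C" for el in ratios_df2.columns]
print(ratios_df2.loc["C6H12O6", "H/C"])  # 2.0
```

`DataFrame.div(..., axis=0)` aligns the divisor Series with the rows, so no explicit loop over the index is needed.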
Analysis of the 'Element'/Carbon ratios. Histograms and, more importantly, Cumulative Graphs of the ratios - Not eliminating ratios with 0. Outside of the ['H/C'] ratios, the ratios were dominated by formulas that didn't contain the non-carbon element, which skewed the following results. Histograms of each of ...
f, ax = plt.subplots(figsize=(4,4)) data = ratios_df2['H/C'] plt.hist(data, bins=np.arange(min(data), 4, 0.1)) plt.title(['H/C']) plt.show() for i in ratios_df2.columns[1:]: data = ratios_df2[i] plt.hist(data, bins=np.arange(min(data), 1.3, 0.05)) plt.title(i) plt.show()
_____no_output_____
MIT
Form_ratio_test(Needs files to run).ipynb
aeferreira/similarity_share
(Percent) Cumulative Graphs of the % of formulas by a certain element/Carbon ratio. The general cumulative graph is presented first, and then two different subsections of the graph in more detail: the 'end' of the curves (all except 'H/C') and the 'start' of the curves (especially 'H/C').
# Plot the cumulative graph and adjust parameters f, ax = plt.subplots(figsize=(12,9)) for i in ratios_df2.columns: data = ratios_df2[i] # Make the histogram with the intended ranges between the bins values, base = np.histogram(data, bins=np.arange(min(data), 4, 0.05)) # Set X # Calculate the cumulative...
_____no_output_____
MIT
Form_ratio_test(Needs files to run).ipynb
aeferreira/similarity_share
Cumulative Graphs of the ratios - Eliminating ratios with 0. Outside of the ['H/C'] ratios, the ratios were dominated by formulas that didn't contain the non-carbon element. Thus, for each 'element/C' ratio, the formulas that didn't contain the corresponding element were not taken into account for the graph. (Percent)...
# Plot the Cumulative Graph and Adjust Parameters f, ax = plt.subplots(figsize=(12,9)) for i in ratios_df2.columns: data = ratios_df2[i] values, base = np.histogram(data, bins=np.arange(min(data), 4, 0.05)) # Subtract the number of formulas that don't have the other element (besides carbon) in the ratio...
_____no_output_____
MIT
Form_ratio_test(Needs files to run).ipynb
aeferreira/similarity_share
Conclusions about the ranges to use. Ranges conclusion: The ranges presented below (short_range) were applied in the form_checker_ratios function presented at the end of FormGeneration_Assignment.ipynb as an extra criterion for Formula Assignment, to prevent attribution of formulas with really rare ratios over formulas wit...
short_range = {'H/C':(0.5,2.2),'N/C':(0,0.6),'O/C':(0,1.2),'P/C':(0,0.3),'S/C':(0,0.5),'F/C':(0,0.5), 'Cl/C':(0,0.5)} stricter_ranges = {'H/C':(0.6,2.2),'N/C':(0,0.5),'O/C':(0,1),'P/C':(0,0.3),'S/C':(0,0.3),'F/C':(0,0.5), 'Cl/C':(0,0.5)}
_____no_output_____
MIT
Form_ratio_test(Needs files to run).ipynb
aeferreira/similarity_share
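As a sketch of how such ranges can act as an assignment filter (the real `form_checker_ratios` lives in FormGeneration_Assignment.ipynb and may differ in detail), a simple containment test could look like this:

```python
short_range = {'H/C': (0.5, 2.2), 'N/C': (0, 0.6), 'O/C': (0, 1.2), 'P/C': (0, 0.3),
               'S/C': (0, 0.5), 'F/C': (0, 0.5), 'Cl/C': (0, 0.5)}

def ratios_in_range(counts, ranges=short_range):
    """Return True if every element/C ratio falls inside its allowed range.

    counts: dict of element -> atom count; elements absent from the dict count as 0.
    """
    c = counts['C']
    for ratio_name, (lo, hi) in ranges.items():
        el = ratio_name.split('/')[0]
        ratio = counts.get(el, 0) / c
        if not lo <= ratio <= hi:
            return False
    return True

print(ratios_in_range({'C': 6, 'H': 12, 'O': 6}))  # True  (glucose, H/C = 2.0)
print(ratios_in_range({'C': 1, 'H': 4, 'O': 0}))   # False (H/C = 4 > 2.2)
```

A candidate formula that fails any single ratio range would then be dropped before assignment.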
Welcome! Below, we will learn to implement and train a policy to play atari-pong, using only the pixels as input. We will use convolutional neural nets, multiprocessing, and pytorch to implement and train our policy. Let's get started!
# install package for displaying animation !pip install JSAnimation # custom utilies for displaying animation, collecting rollouts and more import pong_utils %matplotlib inline # check which device is being used. # I recommend disabling gpu until you've made sure that the code runs device = pong_utils.device print(...
List of available actions: ['NOOP', 'FIRE', 'RIGHT', 'LEFT', 'RIGHTFIRE', 'LEFTFIRE']
MIT
pong/pong-PPO.ipynb
tomkommando/deep-reinforcement-learning
Preprocessing. To speed up training, we can simplify the input by cropping the images and using every other pixel
import matplotlib import matplotlib.pyplot as plt # show what a preprocessed image looks like env.reset() _, _, _, _ = env.step(0) # get a frame after 20 steps for _ in range(20): frame, _, _, _ = env.step(1) plt.subplot(1,2,1) plt.imshow(frame) plt.title('original image') plt.subplot(1,2,2) plt.title('preproces...
_____no_output_____
MIT
pong/pong-PPO.ipynb
tomkommando/deep-reinforcement-learning
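The crop-and-subsample idea can be sketched with plain NumPy slicing; the offsets below are a common choice for Atari Pong frames and may differ from what `pong_utils.preprocess_single` actually uses:

```python
import numpy as np

def preprocess_frame(frame):
    """Crop the score bar, keep every other pixel, and collapse to one channel."""
    cropped = frame[34:-16, :, :]        # drop the top score area and bottom border
    downsampled = cropped[::2, ::2, 0]   # every other pixel, single color channel
    return downsampled.astype(np.float32) / 255.0

frame = np.zeros((210, 160, 3), dtype=np.uint8)  # raw Atari frame shape
print(preprocess_frame(frame).shape)  # (80, 80)
```

Going from 210×160×3 to 80×80 cuts the input size by roughly a factor of 15, which is what makes the convolutional policy cheap to train.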
Policy Exercise 1: Implement your policy Here, we define our policy. The input is the stack of two different frames (which captures the movement), and the output is a number $P_{\rm right}$, the probability of moving right. Note that $P_{\rm left}= 1-P_{\rm right}$
import torch import torch.nn as nn import torch.nn.functional as F # set up a convolutional neural net # the output is the probability of moving right # P(left) = 1-P(right) class Policy(nn.Module): def __init__(self): super(Policy, self).__init__() ######## ## ## Modify y...
_____no_output_____
MIT
pong/pong-PPO.ipynb
tomkommando/deep-reinforcement-learning
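One possible way to fill in the policy skeleton (a sketch only; the layer sizes assume an 80×80 two-frame stack and are not the course's reference solution):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        # two stacked 80x80 frames in, one probability out
        self.conv1 = nn.Conv2d(2, 4, kernel_size=4, stride=2)   # -> 4 x 39 x 39
        self.conv2 = nn.Conv2d(4, 8, kernel_size=4, stride=2)   # -> 8 x 18 x 18
        self.fc1 = nn.Linear(8 * 18 * 18, 64)
        self.fc2 = nn.Linear(64, 1)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = x.reshape(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return torch.sigmoid(self.fc2(x))  # P(right), so P(left) = 1 - P(right)

policy = Policy()
probs = policy(torch.zeros(3, 2, 80, 80))
print(probs.shape)  # torch.Size([3, 1])
```

The final sigmoid keeps the output in (0, 1), so a single scalar suffices for the two-action policy.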
Game visualization. pong_utils contains a play function, given the environment and a policy. An optional preprocess function can be supplied. Here we play a game and show learning progress
pong_utils.play(env, policy, time=200) # try to add the option "preprocess=pong_utils.preprocess_single" # to see what the agent sees
_____no_output_____
MIT
pong/pong-PPO.ipynb
tomkommando/deep-reinforcement-learning
Function Definitions. Here you will define key functions for training. Exercise 2: write your own function for training (what I call the scalar function is the same as policy_loss up to a negative sign). PPO. Later on, you'll implement the PPO algorithm as well, and the scalar function is given by $\frac{1}{T}\sum^T_t \min\left...
def clipped_surrogate(policy, old_probs, states, actions, rewards, discount = 0.995, epsilon=0.1, beta=0.01): ######## ## ## WRITE YOUR OWN CODE HERE ## ######## actions = torch.tensor(actions, dtype=torch.int8, device=device) # convert states to policy (or prob...
_____no_output_____
MIT
pong/pong-PPO.ipynb
tomkommando/deep-reinforcement-learning
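The clipped objective translates almost term-for-term into PyTorch. A sketch on toy tensors, showing only the clipped term without the entropy bonus (`old` and `new` stand in for the old and new action probabilities):

```python
import torch

def clipped_surrogate_sketch(old_probs, new_probs, rewards, epsilon=0.1):
    ratio = new_probs / old_probs
    clipped = torch.clamp(ratio, 1 - epsilon, 1 + epsilon)
    # take the pessimistic (minimum) of the raw and clipped objectives
    return torch.min(ratio * rewards, clipped * rewards).mean()

old = torch.tensor([0.5, 0.5])
new = torch.tensor([0.6, 0.4])
R = torch.tensor([1.0, 1.0])
print(clipped_surrogate_sketch(old, new, R))  # tensor(0.9500)
```

Clipping the probability ratio to [1-ε, 1+ε] removes the incentive to push the new policy far from the old one in a single update, which is the core idea of PPO.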
Training. We are now ready to train our policy! WARNING: make sure to turn on GPU, which also enables multicore processing. It may take up to 45 minutes even with GPU enabled; without it, training will take much longer!
from parallelEnv import parallelEnv import numpy as np # keep track of how long training takes # WARNING: running through all 800 episodes will take 30-45 minutes # training loop max iterations episode = 500 # widget bar to display progress !pip install progressbar import progressbar as pb widget = ['training loop: '...
_____no_output_____
MIT
pong/pong-PPO.ipynb
tomkommando/deep-reinforcement-learning
enriched (main) Persons with Places: * write Persons (main) to .txt file * use this file to annotate LOC within prodigy * train a LOC-model * run it and annotate Persons
import ast import random from spacytei.train import batch_train persons = Person.objects.filter(is_main_person__isnull=False) persons.count() items = [x.written_name for x in persons] items = sorted(iter(items), key=lambda k: random.random()) filename = "person__main.txt" with open(filename, 'w', encoding="utf-8") as f...
_____no_output_____
MIT
add_places.ipynb
reading-in-the-alps/rita-invs
Read some raw data
#EDF file original_data_folder = Path('/Volumes/Macintosh HD - Data/Master Thesis/chb-mit-scalp-eeg-database-1.0.0') Patient = ['chb04','chb06','chb08','chb15','chb17','chb19'] raw_file = os.path.join(original_data_folder,Patient[0],'{}_{}.edf'.format(Patient[0],'28')) #Read in raw data raw = mne.io.read_raw_edf(raw_f...
_____no_output_____
MIT
src/03WelchFFT.ipynb
XiruiXian/Master-thesis-project
FFT
raw.plot_psd(picks=channels[0],n_overlap=128,n_fft=256,dB=False) psd,freqs = mne.time_frequency.psd_array_welch(raw_data[0][0],sfreq=raw.info['sfreq'], n_fft=256,n_overlap=128) plt.figure(figsize=(10, 4)) plt.plot(freqs,np.sqrt(psd)) plt.xlabel('Frequency(Hz)') plt.ylabel(r'V/$\sqrt{Hz}$') plt.title('{}_{} EEG Channel ...
_____no_output_____
MIT
src/03WelchFFT.ipynb
XiruiXian/Master-thesis-project
Explanations of RPS-LJE and Influence Function on German Credit Risk Analysis with XGBoost. Table 2 and Table 11 (appendix)
import numpy as np import torch import pandas as pd path = "../data" X_train_clean_res = pd.read_csv('{}/X_train_clean_res.csv'.format(path), index_col=0) y_train_clean_res = pd.read_csv('{}/Y_train_clean_res.csv'.format(path), index_col=0) X_test_clean = pd.read_csv('{}/X_test_clean.csv'.format(path), index_col=0) y_t...
_____no_output_____
MIT
models/Xgboost/experiments/case_study.ipynb
echoyi/RPS_LJE
Feature Engineering
SUFFIX_CAT = '__cat' for feat in df.columns: if isinstance(df[feat][0], list): continue factorizes_values = df[feat].factorize()[0] if SUFFIX_CAT in feat: df[feat] = factorizes_values else: df[feat + SUFFIX_CAT] = factorizes_values df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1...
[10:03:01] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror. [10:03:05] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror. [10:03:09] WARNING: /workspace/src/objective/regression_obj.cu:152...
MIT
day5.ipynb
ElaRom/dw_matrix_car
Hyperopt
def obj_func(params): print("Training with params: ") print(params) mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats) return {'loss': np.abs(mean_mae), 'status': STATUS_OK} #space xgb_reg_params = { 'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)), 'max_depth...
_____no_output_____
MIT
day5.ipynb
ElaRom/dw_matrix_car
Lab 9
import pandas as pd import altair as alt import matplotlib.pyplot as plt from vega_datasets import data alt.themes.enable('opaque') %matplotlib inline
_____no_output_____
MIT
labs/lab09.ipynb
CloeRomero/mat281_portfolio
In this lab we will use a _famous_ dataset, GapMinder. This is a reduced version that only considers countries, income, health and population. Is there any natural way to group these countries?
gapminder = data.gapminder_health_income() gapminder
_____no_output_____
MIT
labs/lab09.ipynb
CloeRomero/mat281_portfolio
Exercise 1 (1 pt.) Perform an exploratory analysis: at a minimum a `describe` of the dataframe and a suitable visualization, for example a _scatter matrix_ of the numerical values.
gapminder.describe() alt.Chart(gapminder).mark_circle(opacity=0.5).encode( alt.X(alt.repeat("column"),type='quantitative'), alt.Y(alt.repeat("row"),type='quantitative') ).properties( width=200, height=200 ).repeat( row=['population', 'health', 'income'], column=['income', 'health', 'population']...
_____no_output_____
MIT
labs/lab09.ipynb
CloeRomero/mat281_portfolio
__Question:__ Is there any variable that, at first glance, gives you hints of where countries could be separated into groups? __Answer:__ At first glance there does not seem to be any variable that lets us separate the countries into groups, since no identifiable groups of data points form; at most 1 or 2 data points isolated from the rest...
import numpy as np from sklearn.preprocessing import StandardScaler X_raw = np.array(gapminder.drop(columns='country')) X = StandardScaler().fit(X_raw).transform(X_raw)
_____no_output_____
MIT
labs/lab09.ipynb
CloeRomero/mat281_portfolio
Exercise 3 (1 pt.) Define a `KMeans` _estimator_ with `k=3` and `random_state=42`, then fit it with `X` and add the obtained _labels_ to a new column of the `gapminder` dataframe called `cluster`. Finally, produce the same plot as at the beginning, but colored by the obtained clusters.
from sklearn.cluster import KMeans k = 3 kmeans = KMeans(n_clusters=k, random_state=42) kmeans.fit(X) clusters = kmeans.labels_ gapminder["cluster"] = clusters alt.Chart(gapminder).mark_circle(opacity=0.8).encode( alt.X(alt.repeat("column"),type='quantitative'), alt.Y(alt.repeat("row"),type='quantitative'), ...
_____no_output_____
MIT
labs/lab09.ipynb
CloeRomero/mat281_portfolio
Exercise 4 (1 pt.) __Elbow rule__ __How do we choose the best number of _clusters_?__ In this exercise we have used a number of clusters equal to 3. The model fit will always improve as the number of clusters increases, but that does not mean the number of clusters is appropriate. In fact, if ...
elbow = pd.Series(name="inertia", dtype="float64").rename_axis(index="k") for k in range(1, 10): kmeans = KMeans(n_clusters=k, random_state=42).fit(X) elbow.loc[k] = kmeans.inertia_ # Inertia: Sum of distances of samples to their closest cluster center elbow = elbow.reset_index() alt.Chart(elbow).mark_line(poin...
_____no_output_____
MIT
labs/lab09.ipynb
CloeRomero/mat281_portfolio
Ex1 - Getting and knowing your Data. Check out [World Food Facts Exercises Video Tutorial](https://youtu.be/_jCSK4cMcVw) to watch a data scientist go through the exercises. Step 1. Go to https://www.kaggle.com/openfoodfacts/world-food-facts/data Step 2. Download the dataset to your computer and unzip it.
import pandas as pd import numpy as np
_____no_output_____
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb
KarimaCha/pandas_exercises
Step 3. Use the tsv file and assign it to a dataframe called food
food = pd.read_csv('~/Desktop/en.openfoodfacts.org.products.tsv', sep='\t')
//anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2717: DtypeWarning: Columns (0,3,5,19,20,24,25,26,27,28,36,37,38,39,48) have mixed types. Specify dtype option on import or set low_memory=False. interactivity=interactivity, compiler=compiler, result=result)
BSD-3-Clause
01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb
KarimaCha/pandas_exercises