Set the minimum percentage, among plotted cells, that a cell type must reach to be included. This keeps the plot from becoming overcrowded with labels and from including lines too thin to see:
min_ct_perc = 0.02
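As a toy illustration of how this threshold filters cell types (made-up labels and counts, not the atlas data):

```python
from collections import Counter

# Hypothetical annotation labels standing in for a cell-type column.
labels = ["T cell"] * 9900 + ["B cell"] * 99 + ["Mast cell"] * 1

min_ct_perc = 0.02  # same threshold as above

# Percentage of plotted cells per cell type.
percs = {ct: n / len(labels) * 100 for ct, n in Counter(labels).items()}

# Keep only cell types above the threshold.
kept = [ct for ct, perc in percs.items() if perc > min_ct_perc]
print(kept)  # "Mast cell" (0.01%) is dropped
```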
MIT
notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb
LungCellAtlas/HLCA_reproducibility
Now generate the two Sankey plots.
fig, ax = plt.subplots(figsize=(8, 8))
cts_ordered_left_lev1 = [
    ct for ct in color_prop_df.l1_label if ct in adata.obs.original_ann_level_1_clean.values
]
ct_to_color_lev1 = {
    ct: col for ct, col in zip(color_prop_df.l1_label, color_prop_df.l1_rgba)
}
# get level 1 anns:
y_lev1 = adata.obs.original_ann_level_1_clean
lev1_percs = {ct: n / len(y_lev1) * 100 for ct, n in Counter(y_lev1).items()}
lev1_ct_to_keep = [ct for ct, perc in lev1_percs.items() if perc > min_ct_perc]
# get level 2 anns, make "None" in level 2 compartment-specific,
# remove cell types that make up less than min_ct_perc of cells plotted
y_lev2 = adata.obs.original_ann_level_2_clean.cat.remove_unused_categories()
y_lev2 = [
    f"{ct} ({lev1ann})" if ct == "None" else ct
    for ct, lev1ann in zip(y_lev2, adata.obs.original_ann_level_1_clean)
]
lev2_percs = {ct: n / len(y_lev2) * 100 for ct, n in Counter(y_lev2).items()}
lev2_ct_to_keep = [ct for ct, perc in lev2_percs.items() if perc > min_ct_perc]
# plot sankey
sankey.sankey(
    x=[
        lev1
        for lev1, lev2 in zip(y_lev1, list(y_lev2))
        if lev1 in lev1_ct_to_keep and lev2 in lev2_ct_to_keep
    ],
    y=[
        lev2
        for lev1, lev2 in zip(y_lev1, list(y_lev2))
        if lev1 in lev1_ct_to_keep and lev2 in lev2_ct_to_keep
    ],
    title="Hierarchical cell type annotation",
    title_left="Level 1",
    title_right="Level 2",
    ax=ax,
    fontsize="x-small",
    left_order=cts_ordered_left_lev1,
    colors={
        ct: to_hex(ast.literal_eval(ct_to_color_lev1[ct]))
        for ct in cts_ordered_left_lev1
    },
    alpha=0.8,
)
plt.show()
plt.close()
FIGURES["2b_sankey_1_2"] = fig

fig, ax = plt.subplots(figsize=(8, 8))
# use order from earlier sankey plot
cts_ordered_left_lev2 = [
    ct
    for ct in [
        "Airway epithelium",
        "Alveolar epithelium",
        "Submucosal Gland",
        "None (Epithelial)",
        "Myeloid",
        "Lymphoid",
        "Megakaryocytic and erythroid",
        "Granulocytes",
        "Blood vessels",
        "Lymphatic EC",
        "None (Endothelial)",
        "Fibroblast lineage",
        "Smooth muscle",
        "None (Stroma)",
        "Mesothelium",
        "None (Proliferating cells)",
    ]
    if ct in lev2_ct_to_keep
]
# alternative ordering, kept for reference:
# cts_ordered_left_lev2 = [
#     ct for ct in color_prop_df.l2_label if ct in adata.obs.ann_level_2_clean.values
# ]
ct_to_color_lev2 = {
    ct: col for ct, col in zip(color_prop_df.l2_label, color_prop_df.l2_rgba)
}
# manually locate colors for "None" cell type annotations:
for none_ct in "Epithelial", "Endothelial", "Stroma", "Proliferating cells":
    ct_to_color_lev2[f"None ({none_ct})"] = color_prop_df.loc[
        color_prop_df.l1_label == none_ct, "l1_rgba"
    ].values[0]
y_lev3 = adata.obs.original_ann_level_3_clean
y_lev3 = [
    f"{ct} ({lev1ann})" if ct.startswith("None") else ct
    for ct, lev1ann in zip(y_lev3, adata.obs.original_ann_level_1_clean)
]
lev3_percs = {ct: n / len(y_lev3) * 100 for ct, n in Counter(y_lev3).items()}
lev3_ct_to_keep = [ct for ct, perc in lev3_percs.items() if perc > min_ct_perc]
sankey.sankey(
    x=[
        lev2
        for lev2, lev3 in zip(y_lev2, list(y_lev3))
        if lev2 in lev2_ct_to_keep and lev3 in lev3_ct_to_keep
    ],
    y=[
        lev3
        for lev2, lev3 in zip(y_lev2, list(y_lev3))
        if lev2 in lev2_ct_to_keep and lev3 in lev3_ct_to_keep
    ],
    title="Hierarchical cell type annotation",
    title_left="Level 2",
    title_right="Level 3",
    ax=ax,
    fontsize=5,  # "xx-small"
    left_order=cts_ordered_left_lev2,
    colors={
        ct: to_hex(ast.literal_eval(ct_to_color_lev2[ct]))
        for ct in cts_ordered_left_lev2
    },
    alpha=0.8,
)
plt.show()
plt.close()
FIGURES["2b_sankey_2_3"] = fig
2c Sample compositions: In the paper we use ann level 2 and group by sample:
ann_level_number = "2"
grouping_covariate = "sample"  # choose e.g. "dataset" or "subject_ID" or "sample"
Use the "clean" version, i.e. the one without forward-propagated labels: cells not annotated at the chosen level are left as "None" rather than inheriting a coarser label:
if ann_level_number == "1":
    ann_level = "original_ann_level_" + ann_level_number
else:
    ann_level = "original_ann_level_" + ann_level_number + "_clean"
Now plot:
FIGURES[
    "2c_sample_compositions"
] = celltype_composition_plotting.plot_celltype_composition_per_sample(
    adata,
    ann_level_number,
    color_prop_df,
    return_fig=True,
    title="original cell type annotations (level 2) per sample",
    ann_level_name_prefix="original_ann_level_",
)
Store figures
# for figname, fig in FIGURES.items():
#     print("Saving", figname)
#     fig.savefig(os.path.join(dir_figures, f"{figname}.png"), bbox_inches="tight", dpi=140)
**Reading**

zyBooks Ch 2 Variables and Expressions

**Reference**

[Python Basics Cheatsheet](https://www.pythoncheatsheet.org/Python-Basics)

**Practice**

In-class practice: zyBooks Lab 1.15 - No parking sign; zyBooks Ch 2 Practice (graded participation activity)

**Learning Outcomes**

Upon successful completion of this chapter, students will be familiar with and able to apply the following concepts in their programs:

* Variables and Assignments
* Identifiers
* Objects
* Numeric data types: Floating point
* Expressions
* Operators: Division and Modulo
* Import and call functions from the math module
* Text (string) data type
print(7 + 5)
print(7, 5, 7 + 5)
print()
print(7, 5, end=" ")
print("Seven plus five is", 12)

# Assignments
x = 2 ** 3
print(x)
y = 5
print(y)

# Variable values can be changed
y = 8
print(y)

# type() function tells the type of an object
print(type(4))
print(type(3.14))
print(type("Hello"))
print(type(True))

# id() function returns the memory address (identity) of an object
print(id(3 + 4))
print(id("Hello!"))

# Calculate the value of your coin change in dollars
quarters = int(input("Quarters: "))
dimes = int(input("Dimes: "))
nickels = int(input("Nickels: "))
pennies = int(input("Pennies: "))
dollars = quarters * 0.25 + dimes * 0.10 + nickels * 0.05 + pennies * 0.01
print()
print("You have", round(dollars, 2), "dollars.")

# math library
import math
print(math.pow(2, 3))
print(math.sqrt(144))

# Text
print("Welcome \nto \nIT 111")
s = "How are you?"
print(len(s))
print(s.upper())
Welcome 
to 
IT 111
12
HOW ARE YOU?
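The learning outcomes list division and modulo, which the cell above does not demonstrate; here is a short supplementary illustration (not part of the original lesson code):

```python
# Division always returns a float in Python 3; floor division and modulo pair up:
print(7 / 2)   # 3.5
print(7 // 2)  # 3
print(7 % 2)   # 1

# The quotient-remainder identity: a == (a // b) * b + (a % b)
print((7 // 2) * 2 + 7 % 2)  # 7

# A couple more math module functions:
import math
print(math.sqrt(144))    # 12.0
print(math.floor(3.99))  # 3
```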
Apache-2.0
w2_variables_expressions.ipynb
kelseypatterson817/su21-it161
OOP Syntax Exercise - Part 2

Now that you've had some practice instantiating objects, it's time to write your own class from scratch. This lesson has two parts.

In the first part, you'll write a Pants class. This class is similar to the Shirt class with a couple of changes. Then you'll practice instantiating Pants objects.

In the second part, you'll write another class called SalesPerson. You'll also instantiate objects for the SalesPerson.

For this exercise, you can do all of your work in this Jupyter notebook. You will not need to import the class because all of your code will be in this Jupyter notebook.

Answers are also provided. If you click on the Jupyter icon, you can open a folder called 2.OOP_syntax_pants_practice, which contains this Jupyter notebook ('exercise.ipynb') and a file called answer.py.

Pants class

Write a Pants class with the following characteristics:
* the class name should be Pants
* the class attributes should include
  * color
  * waist_size
  * length
  * price
* the class should have an init function that initializes all of the attributes
* the class should have two methods
  * change_price() a method to change the price attribute
  * discount() to calculate a discount
### TODO:
# - code a Pants class with the following attributes
# - color (string) eg 'red', 'yellow', 'orange'
# - waist_size (integer) eg 8, 9, 10, 32, 33, 34
# - length (integer) eg 27, 28, 29, 30, 31
# - price (float) eg 9.28

### TODO: Declare the Pants Class

### TODO: write an __init__ function to initialize the attributes

### TODO: write a change_price method:
#    Args:
#        new_price (float): the new price of the pants
#    Returns:
#        None

### TODO: write a discount method:
#    Args:
#        discount (float): a decimal value for the discount.
#            For example 0.05 for a 5% discount.
#    Returns:
#        float: the discounted price


class Pants:
    """The Pants class represents an article of clothing sold in a store"""

    def __init__(self, color, waist_size, length, price):
        """Method for initializing a Pants object

        Args:
            color (str)
            waist_size (int)
            length (int)
            price (float)

        Attributes:
            color (str): color of a pants object
            waist_size (int): waist size of a pants object
            length (int): length of a pants object
            price (float): price of a pants object
        """
        self.color = color
        self.waist_size = waist_size
        self.length = length
        self.price = price

    def change_price(self, new_price):
        """The change_price method changes the price attribute of a pants object

        Args:
            new_price (float): the new price of the pants object

        Returns:
            None
        """
        self.price = new_price

    def discount(self, percentage):
        """The discount method outputs a discounted price of a pants object

        Args:
            percentage (float): a decimal representing the amount to discount

        Returns:
            float: the discounted price
        """
        return self.price * (1 - percentage)


class SalesPerson:
    """The SalesPerson class represents an employee in the store"""

    def __init__(self, first_name, last_name, employee_id, salary):
        """Method for initializing a SalesPerson object

        Args:
            first_name (str)
            last_name (str)
            employee_id (int)
            salary (float)

        Attributes:
            first_name (str): first name of the employee
            last_name (str): last name of the employee
            employee_id (int): identification number of the employee
            salary (float): yearly salary of the employee
            pants_sold (list): a list of pants objects sold by the employee
            total_sales (float): sum of all sales made by the employee
        """
        self.first_name = first_name
        self.last_name = last_name
        self.employee_id = employee_id
        self.salary = salary
        self.pants_sold = []
        self.total_sales = 0

    def sell_pants(self, pants_object):
        """The sell_pants method appends a pants object to the pants_sold attribute

        Args:
            pants_object (obj): a pants object that was sold

        Returns:
            None
        """
        self.pants_sold.append(pants_object)

    def display_sales(self):
        """The display_sales method prints out all pants that have been sold

        Args:
            None

        Returns:
            None
        """
        for pants in self.pants_sold:
            print('color: {}, waist_size: {}, length: {}, price: {}'
                  .format(pants.color, pants.waist_size, pants.length, pants.price))

    def calculate_sales(self):
        """The calculate_sales method sums the total price of all pants sold

        Args:
            None

        Returns:
            float: sum of the price for all pants sold
        """
        total = 0
        for pants in self.pants_sold:
            total += pants.price
        self.total_sales = total
        return total

    def calculate_commission(self, percentage):
        """The calculate_commission method outputs the commission based on sales

        Args:
            percentage (float): the commission percentage as a decimal

        Returns:
            float: the commission due
        """
        sales_total = self.calculate_sales()
        return sales_total * percentage
MIT
notebooks/Object Oriented Programming/2.OOP_syntax_pants_practice/exercise.ipynb
jesussantana/AWS-Machine-Learning-Foundations
Run the code cell below to check results

If you run the next code cell and get an error, then revise your code until the code cell doesn't output anything.
def check_results():
    pants = Pants('red', 35, 36, 15.12)
    assert pants.color == 'red'
    assert pants.waist_size == 35
    assert pants.length == 36
    assert pants.price == 15.12

    pants.change_price(10)
    assert pants.price == 10

    assert pants.discount(.1) == 9

    print('You made it to the end of the check. Nice job!')

check_results()
You made it to the end of the check. Nice job!
SalesPerson class

The Pants class and Shirt class are quite similar. Here is an exercise to give you more practice writing a class. **This exercise is trickier than the previous exercises.**

Write a SalesPerson class with the following characteristics:
* the class name should be SalesPerson
* the class attributes should include
  * first_name
  * last_name
  * employee_id
  * salary
  * pants_sold
  * total_sales
* the class should have an init function that initializes all of the attributes
* the class should have four methods
  * sell_pants() a method to record a sold Pants object
  * calculate_sales() a method to calculate the sales
  * display_sales() a method to print out all the pants sold with nice formatting
  * calculate_commission() a method to calculate the salesperson commission based on total sales and a percentage
### TODO:
#   Code a SalesPerson class with the following attributes
#   - first_name (string), the first name of the salesperson
#   - last_name (string), the last name of the salesperson
#   - employee_id (int), the employee ID number like 5681923
#   - salary (float), the monthly salary of the employee
#   - pants_sold (list of Pants objects),
#            pants that the salesperson has sold
#   - total_sales (float), sum of sales of pants sold

### TODO: Declare the SalesPerson Class

### TODO: write an __init__ function to initialize the attributes
###    Input Args for the __init__ function:
#        first_name (str)
#        last_name (str)
#        employee_id (int)
#        salary (float)
#
#      You can initialize pants_sold as an empty list.
#      You can initialize total_sales to zero.

### TODO: write a sell_pants method:
#
#    This method receives a Pants object and appends
#    the object to the pants_sold attribute list
#
#    Args:
#        pants (Pants object): a pants object
#    Returns:
#        None

### TODO: write a display_sales method:
#
#    This method has no inputs or outputs. When this method
#    is called, the code iterates through the pants_sold list
#    and prints out the characteristics of each pair of pants
#    line by line. The printout should look something like this:
#
#   color: blue, waist_size: 34, length: 34, price: 10
#   color: red, waist_size: 36, length: 30, price: 14.15

### TODO: write a calculate_sales method:
#      This method calculates the total sales for the salesperson.
#      The method should iterate through the pants_sold attribute list
#      and sum the prices of the pants sold. The sum should be stored
#      in the total_sales attribute and then returned.
#
#      Args:
#        None
#      Returns:
#        float: total sales

### TODO: write a calculate_commission method:
#
#   The salesperson receives a commission based on the total
#   sales of pants. The method receives a percentage, then
#   calculates the total sales of pants based on the price,
#   and then returns the commission as (percentage * total sales)
#
#    Args:
#        percentage (float): commission percentage as a decimal
#
#    Returns:
#        float: total commission
Run the code cell below to check results

If you run the next code cell and get an error, then revise your code until the code cell doesn't output anything.
def check_results():
    pants_one = Pants('red', 35, 36, 15.12)
    pants_two = Pants('blue', 40, 38, 24.12)
    pants_three = Pants('tan', 28, 30, 8.12)

    salesperson = SalesPerson('Amy', 'Gonzalez', 2581923, 40000)

    assert salesperson.first_name == 'Amy'
    assert salesperson.last_name == 'Gonzalez'
    assert salesperson.employee_id == 2581923
    assert salesperson.salary == 40000
    assert salesperson.pants_sold == []
    assert salesperson.total_sales == 0

    salesperson.sell_pants(pants_one)
    assert salesperson.pants_sold[0] == pants_one

    salesperson.sell_pants(pants_two)
    salesperson.sell_pants(pants_three)

    assert len(salesperson.pants_sold) == 3
    assert round(salesperson.calculate_sales(), 2) == 47.36
    assert round(salesperson.calculate_commission(.1), 2) == 4.74

    print('Great job, you made it to the end of the code checks!')

check_results()
Great job, you made it to the end of the code checks!
Check display_sales() method

If you run the code cell below, you should get output similar to this:

```python
color: red, waist_size: 35, length: 36, price: 15.12
color: blue, waist_size: 40, length: 38, price: 24.12
color: tan, waist_size: 28, length: 30, price: 8.12
```
pants_one = Pants('red', 35, 36, 15.12)
pants_two = Pants('blue', 40, 38, 24.12)
pants_three = Pants('tan', 28, 30, 8.12)

salesperson = SalesPerson('Amy', 'Gonzalez', 2581923, 40000)

salesperson.sell_pants(pants_one)
salesperson.sell_pants(pants_two)
salesperson.sell_pants(pants_three)

salesperson.display_sales()
color: red, waist_size: 35, length: 36, price: 15.12
color: blue, waist_size: 40, length: 38, price: 24.12
color: tan, waist_size: 28, length: 30, price: 8.12
YBIGTA ML PROJECT / 염정운

Setting
import numpy as np
import pandas as pd
pd.set_option("max_columns", 999)
pd.set_option("max_rows", 999)

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

import seaborn as sns
import matplotlib.pyplot as plt
#sns.set(rc={'figure.figsize':(11.7,10)})
MIT
19_ybita_fraud-detection/identity_pre-processing.ipynb
ski02049/repos
Identity data

Variables in this table are identity information - network connection information (IP, ISP, Proxy, etc) and digital signature (UA/browser/os/version, etc) associated with transactions. They're collected by Vesta's fraud protection system and digital security partners. (The field names are masked and a pairwise dictionary will not be provided, for privacy protection and contract agreement.)

Categorical features: DeviceType, DeviceInfo, id12 - id38
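Since identity rows exist only for a subset of transactions, attaching them to the transaction table is a left merge on `TransactionID` (as done from saved CSVs in the next cell). A sketch on hypothetical miniature tables; `train_t` and `train_i` here are made-up stand-ins for the real competition files:

```python
import pandas as pd

# Made-up miniature transaction and identity tables (illustration only).
train_t = pd.DataFrame({"TransactionID": [1, 2, 3], "isFraud": [0, 1, 0]})
train_i = pd.DataFrame({"TransactionID": [1, 3], "DeviceType": ["mobile", "desktop"]})

# A left merge keeps every transaction; ones without identity rows get NaN.
merged = train_t.merge(train_i, how="left", on="TransactionID")
print(merged)
```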
# train_identity was inconvenient to work with, so I made a new DataFrame called i_merged
# that merges in isFraud and slightly reorders the columns. This is that code:
#i_merged = train_i.merge(train_t[['TransactionID', 'isFraud']], how = 'left', on = 'TransactionID')
#order_list =['TransactionID', 'isFraud', 'DeviceInfo', 'DeviceType', 'id_01', 'id_02', 'id_03', 'id_04', 'id_05', 'id_06', 'id_07', 'id_08',
#             'id_09', 'id_10', 'id_11', 'id_12', 'id_13', 'id_14', 'id_15', 'id_16', 'id_17', 'id_18', 'id_19', 'id_20', 'id_21',
#             'id_22', 'id_23', 'id_24', 'id_25', 'id_26', 'id_27', 'id_28', 'id_29', 'id_30', 'id_31', 'id_32', 'id_33', 'id_34',
#             'id_35', 'id_36', 'id_37', 'id_38']
#i_merged = i_merged[order_list]
#i_merged.head()
#i_merged.to_csv('identity_merged.csv', index = False)

save = pd.read_csv('identity_merged.csv')
i_merged = pd.read_csv('identity_merged.csv')
NaN ratio
nullrate = ((i_merged.isnull().sum() / len(i_merged)) * 100).sort_values(ascending = False)
nullrate.plot(kind='barh', figsize=(15, 9))
i_merged.head()
DeviceType

The isFraud rate increases in the order NaN (3.1%) < desktop (6.5%) < mobile (10.1%).

*For reference, isFraud = 1 makes up 7.8% of the entire dataset.
# DeviceType
i_merged.groupby(['DeviceType', 'isFraud']).size().unstack()
i_merged[i_merged.DeviceType.isnull()].groupby('isFraud').size()
Null count in row

No meaningful correlation was found between the number of missing values per row and isFraud.
i_merged = i_merged.assign(NaN_count = i_merged.isnull().sum(axis = 1))
print(i_merged.groupby('isFraud')['NaN_count'].mean(),
      i_merged.groupby('isFraud')['NaN_count'].std(),
      i_merged.groupby('isFraud')['NaN_count'].min(),
      i_merged.groupby('isFraud')['NaN_count'].max())

# isFraud = 1
i_merged[i_merged.isFraud == 1].hist('NaN_count')

# isFraud = 0
i_merged[i_merged.isFraud == 0].hist('NaN_count')

i_merged.head()
Per-variable EDA - Continuous
# Correlation matrix
rs = np.random.RandomState(0)
df = pd.DataFrame(rs.rand(10, 10))
corr = i_merged.corr()
corr.style.background_gradient(cmap='coolwarm')

# id_01: values are all <= 0 and the distribution is skewed; a log transform could handle this if needed.
i_merged.id_01.plot(kind='hist', bins=22, figsize=(12,6), title='id_01 dist.')
print(i_merged.groupby('isFraud')['id_01'].mean(),
      i_merged.groupby('isFraud')['id_01'].std(),
      i_merged.id_01.min(),
      i_merged.id_01.max(), sep = '\n')
Fraud = i_merged[i_merged.isFraud == 1]['id_01']
notFraud = i_merged[i_merged.isFraud == 0]['id_01']
plt.hist([Fraud, notFraud], bins = 5, label=['Fraud', 'notFraud'])
plt.legend(loc='upper left')
plt.show()

# id_02: minimum value is 1 and the distribution is skewed; a log transform is likewise possible.
i_merged.id_02.plot(kind='hist', bins=22, figsize=(12,6), title='id_02 dist.')
print(i_merged.groupby('isFraud')['id_02'].mean(),
      i_merged.groupby('isFraud')['id_02'].std(),
      i_merged.id_02.min(),
      i_merged.id_02.max(), sep = '\n')
Fraud = i_merged[i_merged.isFraud == 1]['id_02']
notFraud = i_merged[i_merged.isFraud == 0]['id_02']
plt.hist([Fraud, notFraud], bins = 5, label=['Fraud', 'notFraud'])
plt.legend(loc='upper left')
plt.show()

# id_05
i_merged.id_05.plot(kind='hist', bins=22, figsize=(9,6), title='id_05 dist.')
print(i_merged.groupby('isFraud')['id_05'].mean(),
      i_merged.groupby('isFraud')['id_05'].std())
Fraud = i_merged[i_merged.isFraud == 1]['id_05']
notFraud = i_merged[i_merged.isFraud == 0]['id_05']
plt.hist([Fraud, notFraud], bins = 10, label=['Fraud', 'notFraud'])
plt.legend(loc='upper left')
plt.show()

# id_06
i_merged.id_06.plot(kind='hist', bins=22, figsize=(12,6), title='id_06 dist.')
print(i_merged.groupby('isFraud')['id_06'].mean(),
      i_merged.groupby('isFraud')['id_06'].std())
Fraud = i_merged[i_merged.isFraud == 1]['id_06']
notFraud = i_merged[i_merged.isFraud == 0]['id_06']
plt.hist([Fraud, notFraud], bins = 20, label=['Fraud', 'notFraud'])
plt.legend(loc='upper left')
plt.show()

# id_11
i_merged.id_11.plot(kind='hist', bins=22, figsize=(12,6), title='id_11 dist.')
print(i_merged.groupby('isFraud')['id_11'].mean(),
      i_merged.groupby('isFraud')['id_11'].std())
Fraud = i_merged[i_merged.isFraud == 1]['id_11']
notFraud = i_merged[i_merged.isFraud == 0]['id_11']
plt.hist([Fraud, notFraud], bins = 20, label=['Fraud', 'notFraud'])
plt.legend(loc='upper left')
plt.show()
Per-variable EDA - Categorical
sns.jointplot(x = 'id_09', y = 'id_03', data = i_merged)
Feature Engineering

** For categorical features with many levels, encode 1 when information is present and 0 when it is NaN. This was set up to get a base model running; as the preprocessing is revised, these variables may need to be handled differently.

** Pair relationships exist: id03/04, id05/06, id07/08 and 21-26, id09/10 :: data is either present together (1) or NaN together (0). The categorical EDA suggests id03 and id09 are correlated, so no additional transformation was applied.

** Per-variable EDA visualizations follow https://www.kaggle.com/pablocanovas/exploratory-analysis-tidyverse; excluding NaN, levels are assigned 1, 2, ... starting from the lowest Fraud ratio.

Continuous features

id01:: no missing values; a log transform makes it positive and scales it. Since the values are multiples of 5, dividing by 5 would also be a reasonable scaling.
id02:: has missing values, but a log transform makes the distribution close to normal and scales down the very large units. The missing values were filled randomly, which is the riskiest imputation method, so caution is needed.

Categorical features

DeviceType:: {NaN: 0, 'desktop': 1, 'mobile': 2}
DeviceInfo:: {NaN: 0, info present: 1}
id12:: {0: 0, 'Found': 1, 'NotFound': 2}
id13:: {NaN: 0, info present: 1}
id14:: {NaN: 0, info present: 1}
id15:: {NaN: 0, 'New': 1, 'Unknown': 2, 'Found': 3} (15 and 16 appear related)
id16:: {NaN: 0, 'NotFound': 1, 'Found': 2}
id17:: {NaN: 0, info present: 1}
id18:: {NaN: 0, info present: 1} (relatively few levels)
id19:: {NaN: 0, info present: 1}
id20:: {NaN: 0, info present: 1} (id 17, 19, 20 are a pair)
id21
id22
id23:: {IP_PROXY:ANONYMOUS: 2, else: 1, NaN: 0} (id 7, 8, 21-26 are a pair; only Anonymous has a notably high Fraud ratio, so it is taken into account. For now the base model uses only id_23.)
id24
id25
id26
id27:: {NaN: 0, 'NotFound': 1, 'Found': 2}
id28:: {0: 0, 'New': 1, 'Found': 2}
id29:: {0: 0, 'NotFound': 1, 'Found': 2}
id30 (OS):: {NaN: 0, info present: 1}; encoded as present/absent, but if conditions such as a higher fraud rate for Safari Generic need to be considered, a different preprocessing would be required.
id31 (browser):: {NaN: 0, info present: 1}, same as id30
id32:: {NaN: 0, 24: 1, 32: 2, 16: 3, 0: 4}
id33 (resolution):: {NaN: 0, info present: 1}
id34:: {NaN: 0, match_status=-1: 1, match_status=0: 2, match_status=1: 3, match_status=2: 4}; when match_status is -1, the probability of fraud is very low.
id35:: {NaN: 0, 'T': 1, 'F': 2}
id36:: {NaN: 0, 'T': 1, 'F': 2}
id37:: {NaN: 0, 'T': 2, 'F': 1}
id38:: {NaN: 0, 'T': 1, 'F': 2}
# Continuous features
i_merged.id_01 = np.log(-i_merged.id_01 + 1)

i_merged.id_02 = np.log(i_merged.id_02)
medi = i_merged.id_02.median()
i_merged.id_02 = i_merged.id_02.fillna(medi)
i_merged.id_02.hist()

# Fill id_02's NaN values randomly (kept for reference):
#i_merged['id_02_filled'] = i_merged['id_02']
#temp = (i_merged['id_02'].dropna()
#        .sample(i_merged['id_02'].isnull().sum())
#        )
#temp.index = i_merged[lambda x: x.id_02.isnull()].index
#i_merged.loc[i_merged['id_02'].isnull(), 'id_02_filled'] = temp

# Categorical features
i_merged.DeviceType = i_merged.DeviceType.fillna(0).map({0:0, 'desktop': 1, 'mobile': 2})
i_merged.DeviceInfo = i_merged.DeviceInfo.notnull().astype(int)
i_merged.id_12 = i_merged.id_12.fillna(0).map({0:0, 'Found': 1, 'NotFound': 2})
i_merged.id_13 = i_merged.id_13.notnull().astype(int)
i_merged.id_14 = i_merged.id_14.notnull().astype(int)
i_merged.id_15 = i_merged.id_15.fillna(0).map({0:0, 'New':1, 'Unknown':2, 'Found':3})
i_merged.id_16 = i_merged.id_16.fillna(0).map({0:0, 'NotFound':1, 'Found':2})
i_merged.id_17 = i_merged.id_17.notnull().astype(int)
i_merged.id_18 = i_merged.id_18.notnull().astype(int)
i_merged.id_19 = i_merged.id_19.notnull().astype(int)
i_merged.id_20 = i_merged.id_20.notnull().astype(int)
i_merged.id_23 = i_merged.id_23.fillna('temp').map({'temp':0, 'IP_PROXY:ANONYMOUS':2}).fillna(1)
i_merged.id_27 = i_merged.id_27.fillna(0).map({0:0, 'NotFound':1, 'Found':2})
i_merged.id_28 = i_merged.id_28.fillna(0).map({0:0, 'New':1, 'Found':2})
i_merged.id_29 = i_merged.id_29.fillna(0).map({0:0, 'NotFound':1, 'Found':2})
i_merged.id_30 = i_merged.id_30.notnull().astype(int)
i_merged.id_31 = i_merged.id_31.notnull().astype(int)
i_merged.id_32 = i_merged.id_32.fillna('temp').map({'temp':0, 24:1, 32:2, 16:3, 0:4})
i_merged.id_33 = i_merged.id_33.notnull().astype(int)
i_merged.id_34 = i_merged.id_34.fillna('temp').map({'temp':0, 'match_status:-1':1, 'match_status:0':3,
                                                    'match_status:1':4, 'match_status:2':2})
i_merged.id_35 = i_merged.id_35.fillna(0).map({0:0, 'T':1, 'F':2})
i_merged.id_36 = i_merged.id_36.fillna(0).map({0:0, 'T':1, 'F':2})
i_merged.id_37 = i_merged.id_37.fillna(0).map({0:0, 'T':2, 'F':1})
i_merged.id_38 = i_merged.id_38.fillna(0).map({0:0, 'T':1, 'F':2})
Identity_Device FE
i_merged['Device_info_clean'] = i_merged['DeviceInfo']
i_merged['Device_info_clean'] = i_merged['Device_info_clean'].fillna('unknown')

def name_divide(name):
    if name == 'Windows':
        return 'Windows'
    elif name == 'iOS Device':
        return 'iOS Device'
    elif name == 'MacOS':
        return 'MacOS'
    elif name == 'Trident/7.0':
        return 'Trident/rv'
    elif "rv" in name:
        return 'Trident/rv'
    elif "SM" in name:
        return 'SM/moto/lg'
    elif name == 'SAMSUNG':
        return 'SM/moto/lg'
    elif 'LG' in name:
        return 'SM/moto/lg'
    elif 'Moto' in name:
        return 'SM/moto/lg'
    elif name == 'unknown':
        return 'unknown'
    else:
        return 'others'

i_merged['Device_info_clean'] = i_merged['Device_info_clean'].apply(name_divide)
i_merged['Device_info_clean'].value_counts()
Identity_feature engineered_dataset
i_merged.columns

selected = []
selected.extend(['TransactionID', 'isFraud', 'id_01', 'id_02', 'DeviceType', 'Device_info_clean'])
id_exist = i_merged[selected].assign(Exist = 1)
id_exist.DeviceType.fillna('unknown', inplace = True)
id_exist.to_csv('identity_first.csv', index = False)
Test: Decision Tree / Random Forest Test
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score, roc_auc_score

X = id_exist.drop(['isFraud'], axis = 1)
Y = id_exist['isFraud']
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)

tree_clf = DecisionTreeClassifier(max_depth=10)
tree_clf.fit(X_train, y_train)
pred = tree_clf.predict(X_test)
print('F1:{}'.format(f1_score(y_test, pred)))
F1:0.22078820004609356
--------------------------
param_grid = {
    'max_depth': list(range(10, 51, 10)),
    'n_estimators': [20, 20, 20]  # note: identical candidates; distinct values would make the search meaningful
}
rf = RandomForestClassifier()
gs = GridSearchCV(estimator = rf, param_grid = param_grid, cv = 5, n_jobs = -1, verbose = 2)
gs.fit(X_train, y_train)
best_rf = gs.best_estimator_
print('best parameter: \n', gs.best_params_)

y_pred = best_rf.predict(X_test)
print('Accuracy:{}'.format(accuracy_score(y_test, y_pred)),
      'Precision:{}'.format(precision_score(y_test, y_pred)),
      'Recall:{}'.format(recall_score(y_test, y_pred)),
      'F1:{}'.format(f1_score(y_test, y_pred)),
      'ROC_AUC:{}'.format(roc_auc_score(y_test, y_pred)),
      sep = '\n')
Accuracy:0.9298821354287035
Precision:0.6993006993006993
Recall:0.20390329158170697
F1:0.31574199368516015
ROC_AUC:0.5981737508690471
-----------------------

Transaction + ID merge
transaction_c = pd.read_csv('train_combined.csv')
id_c = pd.read_csv('identity_first.csv')
region = pd.read_csv('region.csv')
country = region[['TransactionID', 'Country_code']]
country.head()

f_draft = transaction_c.merge(id_c.drop(['isFraud'], axis = 1), how = 'left', on = 'TransactionID')
f_draft.drop('DeviceInfo', axis = 1, inplace = True)
f_draft = f_draft.merge(country, how = 'left', on = 'TransactionID')
f_draft.head()
f_draft.dtypes
Categorical: 'ProductCD', 'card4', 'card6', 'D15', 'DeviceType', 'Device_info_clean'
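The encoding step that follows relies on `Series.map`, which yields NaN for any value not present in the dictionary, so each map has to cover every category in its column. A minimal sketch (the column values and the incomplete map are illustrative):

```python
import pandas as pd

card = pd.Series(["visa", "mastercard", "visa", "amex"])
map_card = {"discover": 0, "mastercard": 1, "visa": 2, "american express": 3}

encoded = card.map(map_card)
# "amex" is missing from the dict, so .map() produces NaN for it;
# such gaps must be fixed before casting to a category/integer dtype.
print(encoded.isna().sum())  # 1
```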
print(
    f_draft.ProductCD.unique(),
    f_draft.card4.unique(),
    f_draft.card6.unique(),
    f_draft.D15.unique(),
    f_draft.DeviceType.unique(),
    f_draft.Device_info_clean.unique(),
)
print(map_ProductCD, map_card4, map_card6, map_D15, sep='\n')
{'W': 0, 'H': 1, 'C': 2, 'S': 3, 'R': 4} {'discover': 0, 'mastercard': 1, 'visa': 2, 'american express': 3} {'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3} {'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}
MIT
19_ybita_fraud-detection/identity_pre-processing.ipynb
ski02049/repos
map_ProductCD = {'W': 0, 'H': 1, 'C': 2, 'S': 3, 'R': 4}
map_card4 = {'discover': 0, 'mastercard': 1, 'visa': 2, 'american express': 3}
map_card6 = {'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}
map_D15 = {'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}
map_DeviceType = {'mobile': 2, 'desktop': 1, 'unknown': 0}
map_Device_info_clean = {'SM/moto/lg': 1, 'iOS Device': 2, 'Windows': 3, 'unknown': 0, 'MacOS': 4, 'others': 5, 'Trident/rv': 6}
f_draft.ProductCD = f_draft.ProductCD.map(map_ProductCD)
f_draft.card4 = f_draft.card4.map(map_card4)
f_draft.card6 = f_draft.card6.map(map_card6)
f_draft.D15 = f_draft.D15.map(map_D15)
f_draft.DeviceType = f_draft.DeviceType.map(map_DeviceType)
f_draft.Device_info_clean = f_draft.Device_info_clean.map(map_Device_info_clean)
f_draft.to_csv('transaction_id_combined(no_label_encoded).csv', index=False)

# cast all categorical columns in one pass
categorical_cols = ['ProductCD', 'card1', 'card2', 'card3', 'card4', 'card5', 'card6',
                    'D15', 'DeviceType', 'Device_info_clean', 'Country_code']
for col in categorical_cols:
    f_draft[col] = f_draft[col].astype('category')

f_draft.dtypes
f_draft.to_csv('transaction_id_combined.csv', index=False)
f_draft.head()
_____no_output_____
MIT
19_ybita_fraud-detection/identity_pre-processing.ipynb
ski02049/repos
README. This notebook extracts some information about the fitting. For each molecule, it creates a CSV file recording the Euclidean distance and the topological distance (the number of bonds separating each atom from the halogen).
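The topological distance mentioned above is a shortest-path length in the bond graph, which a breadth-first search computes directly. A minimal sketch on a hypothetical mini-molecule (the atom names and bonds are made up for illustration):

```python
from collections import deque

def bond_distance(start, target, bonds):
    """Number of bonds separating two atoms, via breadth-first search."""
    adjacency = {}
    for a, b in bonds:
        adjacency.setdefault(a, []).append(b)
        adjacency.setdefault(b, []).append(a)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        atom, dist = queue.popleft()
        if atom == target:
            return dist
        for nbr in adjacency.get(atom, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None  # the two atoms are not connected

# Hypothetical mini-molecule: C1-C2-C3-Cl1
bonds = [("C1", "C2"), ("C2", "C3"), ("C3", "Cl1")]
print(bond_distance("C1", "Cl1", bonds))  # 3
```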
import glob

import numpy as np
import pandas as pd


def parsePrepAc(prep_ac):
    # read file content
    with open(prep_ac) as stream:
        lines = stream.readlines()
    # browse file content
    atoms = {}
    bonds = []
    ref_at_name = None
    for line in lines:
        l_spl = line.split()
        # skip short lines
        if len(l_spl) == 0:
            continue
        # save atom
        if l_spl[0] == "ATOM":
            at_id = int(l_spl[1])
            at_name = l_spl[2]
            at_type = l_spl[-1]
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            atoms[at_name] = [at_id, at_type, np.array((x, y, z))]
            if "I" in at_name or "Cl" in at_name or "Br" in at_name:
                ref_at_name = at_name
            continue
        if l_spl[0] == "BOND":
            at_name1 = l_spl[-2]
            at_name2 = l_spl[-1]
            bonds.append([at_name1, at_name2])
    return atoms, bonds, ref_at_name


def getNBDistances(atoms, bonds, ref_at_name):
    distances = []
    for atom in atoms:
        distance = findShortestNBDistance(atom, bonds, ref_at_name)
        distances.append(distance)
    return distances


def findShortestNBDistance(atom, bonds, ref_atom):
    dist = 0
    starts = [atom]
    while True:
        ends = []
        for start in starts:
            if start == ref_atom:
                return dist
            for bond in bonds:
                if start in bond:
                    end = [i for i in bond if i != start][0]
                    ends.append(end)
        starts = ends
        dist += 1


def getEuclideanDistances(atoms, ref_at_name):
    distances = []
    coords_ref = atoms[ref_at_name][2]
    for at_name, at_values in atoms.items():
        at_id, at_type, coords = at_values
        distance = np.linalg.norm(coords_ref - coords)
        distances.append(distance)
    return distances


def getChargesFromPunch(punch, n_atoms, sigma=False):
    # initialize output container
    charges = []
    # read file content
    with open(punch) as stream:
        lines = stream.readlines()
    # define where to find atoms and charges
    lines_start = 11
    lines_end = lines_start + n_atoms
    if sigma:
        lines_end += 1
    # browse selected lines and save charges
    for line in lines[lines_start:lines_end]:
        l_spl = line.split()
        charge = float(l_spl[3])
        charges.append(charge)
    return charges


def sortAtoms(atoms):
    at_names = list(atoms.keys())
    at_ids = [i[0] for i in atoms.values()]
    at_types = [i[1] for i in atoms.values()]
    atoms_unsorted = list(zip(at_names, at_ids, at_types))
    atoms_sorted = sorted(atoms_unsorted, key=lambda x: x[1])
    at_names_sorted = [a[0] for a in atoms_sorted]
    at_types_sorted = [a[2] for a in atoms_sorted]
    return at_names_sorted, at_types_sorted


for halogen in "chlorine bromine iodine".split():
    mols = sorted(glob.glob(f"../{halogen}/ZINC*"))
    for mol in mols:
        # get info about atoms and bonds
        prep_ac = mol + "/antechamber/ANTECHAMBER_PREP.AC"
        atoms, bonds, ref_at_name = parsePrepAc(prep_ac)
        n_atoms = len(atoms)
        # number-of-bonds distance from the halogen
        nb_distances = getNBDistances(atoms, bonds, ref_at_name)
        # Euclidean distances from the halogen
        distances = getEuclideanDistances(atoms, ref_at_name)
        # standard RESP charges
        punch_std = mol + "/antechamber/punch"
        qs_std = getChargesFromPunch(punch_std, n_atoms)
        # modified RESP charges including the sigma-hole
        punch_mod = mol + "/mod2/punch"
        qs_mod = getChargesFromPunch(punch_mod, n_atoms, sigma=True)
        # correct sorting of atoms
        atom_names_sorted, atom_types_sorted = sortAtoms(atoms)
        # output dataframe
        df = pd.DataFrame({"name": atom_names_sorted + ["X"],
                           "type": atom_types_sorted + ["x"],
                           "nb_distance": nb_distances + [-1],
                           "distance": distances + [-1],
                           "q_std": qs_std + [0],
                           "q_mod": qs_mod})
        # save
        df.to_csv(mol + "/overview.csv", index=False)
"done"
df
_____no_output_____
MIT
scripts/.ipynb_checkpoints/getOverviews-checkpoint.ipynb
mhkoscience/leskourova-offcenter
Load Data
import pandas as pd
import numpy as np

isMergedDatasetAvailable = True
if not isMergedDatasetAvailable:
    train_bodies_df = pd.read_csv('train_bodies.csv')
    train_stance_df = pd.read_csv('train_stances.csv')
    test_bodies_df = pd.read_csv('competition_test_bodies.csv')
    test_stance_df = pd.read_csv('competition_test_stances.csv')
    # merge the training and test dataframes
    train_merged = pd.merge(train_stance_df, train_bodies_df, on='Body ID', how='outer')
    test_merged = pd.merge(test_stance_df, test_bodies_df, on='Body ID', how='outer')
else:
    train_merged = pd.read_csv('train_merged.csv', index_col=0)
    test_merged = pd.read_csv('test_merged.csv', index_col=0)

train_merged.head()
test_merged.head()
_____no_output_____
MIT
DataCleansingAndEda.ipynb
Pager07/A-Hackers-AI-Voice-Assistant
Data Cleaning
import re
import numpy as np
from sklearn import feature_extraction
from sklearn.feature_extraction.text import CountVectorizer
import nltk
from nltk.corpus import wordnet
from nltk.tokenize import word_tokenize

# downloads
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')

wnl = nltk.WordNetLemmatizer()

def normalize(word):
    '''
    Helper function for get_normalized_tokens().
    Takes a word and lemmatizes it, e.g. bats -> bat
    Args:
        word: str
    '''
    return wnl.lemmatize(word, wordnet.VERB).lower()

def get_normalized_tokens(seq):
    '''
    Takes a sentence and returns normalized tokens
    Args:
        seq: str, a sentence
    '''
    normalized_tokens = []
    for token in nltk.word_tokenize(seq):
        normalized_tokens.append(normalize(token))
    return normalized_tokens

def clean(seq):
    '''
    Takes a sentence and removes emojis and other non-alphanumeric characters
    Args:
        seq: str, a sentence
    '''
    valid = re.findall(r'\w+', seq, flags=re.UNICODE)
    seq = ' '.join(valid).lower()
    return seq

def remove_stopwords(token_list):
    '''
    Args:
        token_list: list containing tokens
    '''
    filtered_token_list = []
    for w in token_list:
        if w not in feature_extraction.text.ENGLISH_STOP_WORDS:
            filtered_token_list.append(w)
    return filtered_token_list

def preprocess(sentence):
    '''
    Takes a raw body sentence or title and returns the preprocessed sentence
    '''
    # remove non-alphanumeric characters, emojis, etc.
    sentence = clean(sentence)
    # normalization/lemmatization
    tokens = get_normalized_tokens(sentence)
    # remove any stopwords
    tokens = remove_stopwords(tokens)
    sentence = ' '.join(tokens)
    return sentence

train_merged['articleBody'] = train_merged['articleBody'].apply(preprocess)
test_merged['articleBody'] = test_merged['articleBody'].apply(preprocess)
train_merged['Headline'] = train_merged['Headline'].apply(preprocess)
test_merged['Headline'] = test_merged['Headline'].apply(preprocess)

train_merged.to_csv('train_merged.csv')
test_merged.to_csv('test_merged.csv')
_____no_output_____
MIT
DataCleansingAndEda.ipynb
Pager07/A-Hackers-AI-Voice-Assistant
EDA
import matplotlib.pyplot as plt
import seaborn as sns

def get_top_trigrams(corpus, n=10):
    vec = CountVectorizer(ngram_range=(3, 3)).fit(corpus)  # ngram_range=(3, 3) extracts trigrams
    bag_of_words = vec.transform(corpus)
    sum_words = bag_of_words.sum(axis=0)
    words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
    words_freq = sorted(words_freq, key=lambda x: x[1], reverse=True)
    return words_freq[:n]

# first let us check the trigrams of all the data
plt.figure(figsize=(10, 5))
top_tweet_trigrams = get_top_trigrams(train_merged['Headline'], n=20)
y, x = map(list, zip(*top_tweet_trigrams))
sns.barplot(x=x, y=y)
plt.title('Trigrams (Headline)')

plt.figure(figsize=(10, 5))
top_tweet_trigrams = get_top_trigrams(train_merged['articleBody'], n=20)
y, x = map(list, zip(*top_tweet_trigrams))
sns.barplot(x=x, y=y)
plt.title('Trigrams (articleBody)')

word = 'plays'
out = normalize(word)
assert out == 'play'

text = 'hello I #like to eatsfood 123'
out = get_normalized_tokens(text)
assert out == ['hello', 'i', '#', 'like', 'to', 'eatsfood', '123']

text = '. hello I #like to eatsfood 123 -+~@:%^&www.*๐Ÿ˜”๐Ÿ˜”'
out = clean(text); out
assert out == 'hello i like to eatsfood 123 www'

token_list = ['hello', 'i', '#', 'like', 'to', 'eatsfood', '123']
out = remove_stopwords(token_list)
assert out == ['hello', '#', 'like', 'eatsfood', '123']

text = '. hello bats,cats, alphakenny I am #like to eatsfood 123 -+~@:%^&www.*๐Ÿ˜”๐Ÿ˜”'
out = preprocess(text); out

# very imbalanced
train_merged['Stance'].hist()
test_merged['Stance'].hist()

# headline lengths vary a lot
lens = train_merged['Headline'].str.len()
lens.mean(), lens.std(), lens.max()
lens = test_merged['Headline'].str.len()
lens.mean(), lens.std(), lens.max()

# body lengths vary a lot as well
lens = train_merged['articleBody'].str.len()
lens.mean(), lens.std(), lens.max()
lens = test_merged['articleBody'].str.len()
lens.mean(), lens.std(), lens.max()
_____no_output_____
MIT
DataCleansingAndEda.ipynb
Pager07/A-Hackers-AI-Voice-Assistant
1.a tf-idf feature extraction
from sklearn.feature_extraction.text import TfidfVectorizer from scipy.sparse import hstack totaldata= (train_merged['articleBody'].tolist() + train_merged['Headline'].tolist()+test_merged['articleBody'].tolist()+test_merged['Headline'].tolist()) tfidf_vect = TfidfVectorizer(analyzer='word', token_pattern=r'\w{1,}', max_features=80, stop_words='english') tfidf_vect.fit(totaldata) print('===Starting train headline====') train_head_feature= tfidf_vect.transform(train_merged['Headline']) #(49972, 80) print('===Starting Train body====') train_body_feature= tfidf_vect.transform(train_merged['articleBody']) #(49972, 80) print('===Starting Test headline====') test_head_feature= tfidf_vect.transform(test_merged['Headline']) #(25413, 80) print('===Starting Test articleBody====') test_body_feature = tfidf_vect.transform(test_merged['articleBody']) #(25413, 80) def binary_labels(label): if label in ['discuss', 'agree', 'disagree']: return 'related' elif label in ['unrelated']: return label else: assert f'{label} not found!' train_merged_labels = train_merged['Stance'].apply(binary_labels) test_merged_labels = test_merged['Stance'].apply(binary_labels) print(train_merged_labels.unique(), test_merged_labels.unique()) X_train_tfidf,Y_train = hstack([train_head_feature,train_body_feature]).toarray(), train_merged_labels.values X_test_tfidf,Y_test = hstack([test_head_feature,test_body_feature]).toarray(), test_merged_labels.values
===Starting train headline==== ===Starting Train body==== ===Starting Test headline==== ===Starting Test articleBody==== ['unrelated' 'related'] ['unrelated' 'related']
MIT
DataCleansingAndEda.ipynb
Pager07/A-Hackers-AI-Voice-Assistant
Train with tf-idf features - Naive Bayes
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score, roc_curve, auc, f1_score
from sklearn.preprocessing import LabelEncoder

def binary_labels(label):
    if label in ['discuss', 'agree', 'disagree']:
        return 'related'
    elif label in ['unrelated']:
        return label
    else:
        raise ValueError(f'{label} not found!')

train_merged_labels = train_merged['Stance'].apply(binary_labels)
test_merged_labels = test_merged['Stance'].apply(binary_labels)
print(train_merged_labels.unique(), test_merged_labels.unique())

X_train_tfidf, Y_train = hstack([train_head_feature, train_body_feature]).toarray(), train_merged_labels.values
X_test_tfidf, Y_test = hstack([test_head_feature, test_body_feature]).toarray(), test_merged_labels.values
print(X_train_tfidf.shape, X_test_tfidf.shape)
train_merged['Stance'].unique()

net = MultinomialNB(alpha=0.39)
net.fit(X_train_tfidf, Y_train)
print("train score:", net.score(X_train_tfidf, Y_train))
print("validation score:", net.score(X_test_tfidf, Y_test))

import matplotlib.pyplot as plt
import seaborn as sn
plt.style.use('ggplot')

def plot_confussion_matrix(y_test, y_pred):
    '''
    Plot the confusion matrix for the target labels and predictions
    '''
    cm = confusion_matrix(y_test, y_pred)
    # create a dataframe with the confusion matrix values
    df_cm = pd.DataFrame(cm, range(cm.shape[0]), range(cm.shape[1]))
    # plot the confusion matrix
    sn.set(font_scale=1.4)  # label size
    sn.heatmap(df_cm, annot=True, fmt='.0f', cmap="YlGnBu", annot_kws={"size": 10})  # font size
    plt.show()

def plot_roc_curve(y_test, y_pred):
    '''
    Plot the ROC curve for the target labels and predictions
    '''
    enc = LabelEncoder()
    y_test = enc.fit_transform(y_test)
    y_pred = enc.fit_transform(y_pred)
    fpr, tpr, thresholds = roc_curve(y_test, y_pred, pos_label=1)
    roc_auc = auc(fpr, tpr)
    plt.figure(figsize=(12, 12))
    ax = plt.subplot(121)
    ax.set_aspect(1)
    plt.title('Receiver Operating Characteristic')
    plt.plot(fpr, tpr, 'b', label='AUC = %0.2f' % roc_auc)
    plt.legend(loc='lower right')
    plt.plot([0, 1], [0, 1], 'r--')
    plt.xlim([0, 1])
    plt.ylim([0, 1])
    plt.ylabel('True Positive Rate')
    plt.xlabel('False Positive Rate')
    plt.show()

# predicting the test set results
prediction = net.predict(X_test_tfidf)
# print the classification report to highlight the accuracy with f1-score, precision and recall
print(metrics.classification_report(prediction, Y_test))
plot_confussion_matrix(prediction, Y_test)
plot_roc_curve(prediction, Y_test)
_____no_output_____
MIT
DataCleansingAndEda.ipynb
Pager07/A-Hackers-AI-Voice-Assistant
TF-idf Binary classification with logistic regression

Steps:
- Create stratified k-fold dataloaders [x]
  - Use a Sampler to get more control over each batch
  - Write a function get_dataloaders() that returns a dict of shape fold_id x tuple, where the tuple contains the train and test dataloaders
- Train the model on all the splits [x]
  - How? Write a function that trains a single fold; it takes the train_loader and test_loader of that split, which can be accessed via get_dataloaders()
- Evaluate the model
  - Do we need to evaluate the model after each epoch? Yes
  - Print and track the stats, then use the tracked stats of shape (fold x stats) to generate global stats
  - What are we using to measure the performance? The class-wise and the macro-averaged F1 scores; these metrics are not affected by the large size of the majority class
    - Class-wise F1 score: the harmonic mean of precision and recall for each of the classes
    - F1m metric: the macro F1 score, i.e. compute the per-class scores and then average across the classes
    - How can we get this score? Use sklearn's classification_report with output_dict=1 and read out['macro avg']['f1-score'] (and the accuracy)
  - How will I know if the model is overfitting? Calculate the test loss
  - At the end, send the whole test set for classification, then plot the ROC curve and the confusion matrix
  - What about the class weights? Following the FNC-1 paper: a 0.25 reward for correctly classifying Related plus an extra penalty of 1 - 0.25 = 0.75 (total weight 1 + 0.75 = 1.75), and a 0.25 reward for correctly classifying Unrelated (weight 1)
- Train the model
  - Load the dataset: read the csv, build X_train/Y_train and X_test/Y_test, send them to the GPU, and train
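The macro-averaged F1 (F1m) referenced above is the unweighted mean of the per-class F1 scores, which is why it is insensitive to the size of the majority class; sklearn's classification_report with output_dict=1 exposes the same number as out['macro avg']['f1-score']. A minimal pure-Python sketch on toy binary labels:

```python
def per_class_f1(y_true, y_pred, cls):
    # Precision, recall, and F1 when `cls` is treated as the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1]
# macro F1 = unweighted mean of the per-class F1 scores
macro_f1 = sum(per_class_f1(y_true, y_pred, c) for c in (0, 1)) / 2
print(round(macro_f1, 2))  # 0.8
```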
from torch.utils.data import DataLoader,Dataset import torch from sklearn.model_selection import KFold from sklearn.model_selection import StratifiedKFold from torch.utils.data import ConcatDataset,SubsetRandomSampler from collections import defaultdict class TfidfBinaryStanceDataset(Dataset): def __init__(self, X,Y): ''' Args: X: (samples x Features) Y: (samples). containing binary class eg. [1,0,1,1,....] ''' super(TfidfBinaryStanceDataset, self).__init__() self.x = torch.tensor(X).float() self.y = torch.tensor(Y).long() def __len__(self): return len(self.x) def __getitem__(self,idx): return (self.x[idx] ,self.y[idx]) def get_dataloaders(x_train,y_train,x_test,y_test,bs=256,nfold=5): ''' Args: x_train: nd.array of shape (samples x features) y_train: nd.array of shape (labels ) x_test: nd.array of shape (samples x features) y_test: nd.array of shape (labels ) nfold: Scalar, number of total folds, It can't be greater than number of samples in each class Returns: loaders: Dict of shape (nfolds x 2), where the keys are fold ids and tuple containing train and test loader for that split ''' train_dataset = TfidfBinaryStanceDataset(x_train,y_train) test_dataset = TfidfBinaryStanceDataset(x_test,y_test) dataset = ConcatDataset([train_dataset,test_dataset]) #A big dataset kfold = StratifiedKFold(n_splits=nfold, shuffle=False) labels = [data[1] for data in dataset] loaders = defaultdict(tuple) for fold,(train_ids,test_ids) in enumerate(kfold.split(dataset,labels)): train_subsampler = SubsetRandomSampler(train_ids) test_subsampler = SubsetRandomSampler(test_ids) train_loader = torch.utils.data.DataLoader(dataset,batch_size=bs, sampler=train_subsampler) # test_loader = torch.utils.data.DataLoader(dataset,batch_size=bs, sampler=test_subsampler) loaders[fold] = (train_loader,test_loader) return loaders import torch import torch.nn as nn from torch.optim import Adam import numpy as np from collections import defaultdict from sklearn.preprocessing import LabelEncoder from 
sklearn.metrics import classification_report class LogisticRegression(nn.Module): def __init__(self, input_dim, output_dim): super(LogisticRegression,self).__init__() self.linear = nn.Linear(input_dim,output_dim) def forward(self,x): out = self.linear(x) return out def eval_one_epoch(net,dataloader,optim,lfn,triplet_lfn,margin): net.eval() losses = [] f1m = [] for batch_id, (x,y) in enumerate(dataloader): assert len(torch.unique(y)) != 1 x = x.to(device).float() y = y.to(device).long() hs = net(x) #(sampels x 2) #BCE-loss ce_loss = lfn(hs,y) #triplet-loss #generate the triplet probs = hs.softmax(dim=1) #(samples x 2) y_hat = probs.argmax(dim=1) anchors,positives, negatives = generate_triplets(hs,y_hat,y) #(misclassified_samples, d_model=2) anchors,positives, negatives = mine_hard_triplets(anchors,positives,negatives,margin) triplet_loss = triplet_lfn(anchors,positives,negatives) #total-loss loss = (ce_loss + triplet_loss)/2 losses += [loss.item()] target_names = ['unrelated','related'] f1m += [classification_report(y_hat.detach().cpu().numpy(), y.detach().cpu().numpy(), target_names=target_names,output_dict=1)['macro avg']['f1-score']] return np.mean(losses), np.mean(f1m) def train_one_epoch(net,dataloader,optim,lfn,triplet_lfn,margin): net.train() losses = [] for batch_id, (x_train,y_train) in enumerate(dataloader): x_train = x_train.to(device).float() y_train = y_train.to(device).long() hs = net(x_train) #(sampels x 2) #BCE-loss ce_loss = lfn(hs,y_train) #triplet-loss #generate the triplet probs = hs.softmax(dim=1) #(samples x 2) y_hat = probs.argmax(dim=1) anchors,positives, negatives = generate_triplets(hs,y_hat,y_train) #(misclassified_samples, d_model=2) anchors,positives, negatives = mine_hard_triplets(anchors,positives,negatives,margin) triplet_loss = triplet_lfn(anchors,positives,negatives) #total-loss loss = (ce_loss + triplet_loss)/2 loss.backward() optim.step() optim.zero_grad() losses += [loss.item()] return sum(losses)/len(losses) def 
mine_hard_triplets(anchors,positives,negatives,margin): ''' Args: anchor: Tensor of shape (missclassified_samples x 2 ) positive: Tensor of shape (missclassified_smaples_positive x 2) negative: Tensor of shape (missclassified_smaples_negative x 2) Returns: anchor: Tensor of shape (hard_missclassified_samples x 2 ) positive: Tensor of shape (hard_missclassified_smaples_positive x 2) negative: Tensor of shape (hard_missclassified_smaples_negative x 2) ''' #mine-semihar triplets l2_dist = nn.PairwiseDistance() d_p = l2_dist(anchors, positives) d_n = l2_dist(anchors, negatives) hard_triplets = torch.where((d_n - d_p < margin))[0] anchors = anchors[hard_triplets] positives = positives[hard_triplets] negatives = negatives[hard_triplets] return anchors,positives,negatives def generate_triplets(hs,y_hat,y): ''' Args: hs: (Samples x 2) y_hat: Tensor of shape (samples,), Containing predicted label eg. [1,0,1,1,1,1] y: Tensor of shape (samples,), Containing GT label eg. [1,0,1,1,1,1] Returns: anchor: Tensor of shape (missclassified_samples x 2 ) positive: Tensor of shape (missclassified_smaples_positive x 2) negative: Tensor of shape (missclassified_smaples_negative x 2) ''' mismatch_indices = torch.where(y_hat != y)[0] anchors = hs[mismatch_indices] #(miscalssfied_samples x 2) positives = get_positives(hs,mismatch_indices,y) #(miscalssfied_samples x 2) negatives = get_negatives(hs,mismatch_indices,y) return anchors,positives, negatives def get_positives(hs,misclassified_indicies,y): ''' For each misclassfied sample we, randomly pick 1 positive anchor Args: hs: (Samples x 2) mismatch_indices: A tensor of shape [misclassified], containing row indices relatie to hs y: Tensor of shape (samples,), Containing GT label eg. 
[1,0,1,1,1,1] Returns: positive: Tensor of shape [misclassified x 2] ''' positives_indices = [] negative_indices = [] for anchor_index in misclassified_indicies: anchor_class = y[anchor_index] possible_positives = torch.where(y == anchor_class)[0] positive_index = anchor_index while anchor_index == positive_index: positive_index = np.random.choice(possible_positives.detach().cpu().numpy()) positives_indices += [positive_index] positives = hs[positives_indices] return positives def get_negatives(hs,misclassified_indicies,y): ''' For each misclassfied sample we, randomly pick 1 negative anchor Args: hs: (Samples x 2) mismatch_indices: A tensor of shape [misclassified], containing row indices relatie to hs y: Tensor of shape (samples,), Containing GT label eg. [1,0,1,1,1,1] Returns: positive: Tensor of shape [misclassified x 2] ''' negative_indices = [] for anchor_index in misclassified_indicies: anchor_class = y[anchor_index] possible_negatives = torch.where(y != anchor_class)[0] negative_index = np.random.choice(possible_negatives.detach().cpu().numpy()) #possible_negatives are empty negative_indices += [negative_index] negatives = hs[negative_indices] return negatives def save_model(net,macro_fs,fs): if fs>=max(macro_fs): torch.save(net,'./net.pth') #TODO: Find the class wegihts device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") epoch = 20 margin= 0.5 lr = 4.33E-02 bs = 1024 nfolds = 5 enc = LabelEncoder() x_train = X_train_tfidf y_train =enc.fit_transform(Y_train) x_test = X_test_tfidf y_test = enc.fit_transform(Y_test) def train(): class_weights = torch.tensor([1.75,1]).to(device) lfn = nn.CrossEntropyLoss(weight=class_weights).to(device) triplet_lfn = nn.TripletMarginLoss(margin=margin).to(device) loaders = get_dataloaders(x_train,y_train,x_test,y_test,bs=bs, nfold=nfolds) #dict of shape (nfold x 2),2 because it consist of train_loader and test_loader macro_f1m = [] for fold in range(nfolds): fold_macro_f1m =[] print(f'Starting training for 
fold:{fold}') net = LogisticRegression(input_dim= x_train.shape[1], output_dim= 2).to(device) optim = Adam(net.parameters(), lr=lr) for e in range(epoch): train_loss = train_one_epoch(net, loaders[fold][0], optim, lfn, triplet_lfn, margin) eval_loss,f1m = eval_one_epoch(net, loaders[fold][1], optim, lfn, triplet_lfn, margin) macro_f1m += [f1m] fold_macro_f1m += [f1m] save_model(net,macro_f1m,f1m) if (e+1)%5==0: print(f'nfold:{fold},epoch:{e},train loss:{train_loss}, eval loss:{eval_loss}, fm1:{f1m}') print(f'Fold:{fold}, Average F1-Macro:{np.mean(fold_macro_f1m)}') print('=======================================') print(f'{nfolds}-Folds Average F1-Macro:{np.mean(macro_f1m)}') return np.mean(macro_f1m) #Use Cyclical Learning Rates for Training Neural Networks to roughly estimate good lr #!pip install torch_lr_finder from torch_lr_finder import LRFinder loaders = get_dataloaders(x_train,y_train,x_test,y_test,bs=256, nfold=nfolds) train_loader = loaders[0][0] model = LogisticRegression(160,2) criterion = nn.CrossEntropyLoss() optimizer = Adam(model.parameters(), lr=1e-7, weight_decay=1e-2) lr_finder = LRFinder(model, optimizer, criterion, device="cuda") lr_finder.range_test(train_loader, end_lr=100, num_iter=100) lr_finder.plot() # to inspect the loss-learning rate graph lr_finder.reset() # to reset the model and optimizer to their initial state train()
Starting training for fold:0 nfold:0,epoch:4,train loss:0.6024146110324536, eval loss:0.6200127760569255, fm1:0.41559849509023383 nfold:0,epoch:9,train loss:0.5978681485531694, eval loss:0.6050375978151957, fm1:0.4030031157333551 nfold:0,epoch:14,train loss:0.5998595609503278, eval loss:0.5948281049728393, fm1:0.407867282161874 nfold:0,epoch:19,train loss:0.5981861807532229, eval loss:0.6027443011601766, fm1:0.42075218038799256 Fold:0, Average F1-Macro:0.4127324674774284 Starting training for fold:1 nfold:1,epoch:4,train loss:0.6025787218142364, eval loss:0.6016569177309672, fm1:0.43250310569308986 nfold:1,epoch:9,train loss:0.5973691990820028, eval loss:0.6045343279838562, fm1:0.44679272166665146 nfold:1,epoch:14,train loss:0.6005635180715787, eval loss:0.5813541769981384, fm1:0.4268086770363271 nfold:1,epoch:19,train loss:0.6011431914264873, eval loss:0.5997809410095215, fm1:0.42668907904661124 Fold:1, Average F1-Macro:0.44108015002020895 Starting training for fold:2 nfold:2,epoch:4,train loss:0.5987946845717349, eval loss:0.5846388498942058, fm1:0.42370208601695203 nfold:2,epoch:9,train loss:0.5992476344108582, eval loss:0.6136406898498535, fm1:0.43747891095105446 nfold:2,epoch:14,train loss:0.5968704809576778, eval loss:0.6002291361490886, fm1:0.42699673979421504 nfold:2,epoch:19,train loss:0.6000207652479915, eval loss:0.5900104999542236, fm1:0.4247215177374668 Fold:2, Average F1-Macro:0.42944124669108985 Starting training for fold:3 nfold:3,epoch:4,train loss:0.5997520897348049, eval loss:0.590477712949117, fm1:0.40392972882751793 nfold:3,epoch:9,train loss:0.5979607580071788, eval loss:0.5996671398480733, fm1:0.4073626480036444 nfold:3,epoch:14,train loss:0.5993322851294178, eval loss:0.5894471764564514, fm1:0.4100946046554814 nfold:3,epoch:19,train loss:0.5980879502781367, eval loss:0.5922741095225016, fm1:0.41269239065377616 Fold:3, Average F1-Macro:0.40943418809767385 Starting training for fold:4 nfold:4,epoch:4,train loss:0.6021876820063187, eval 
loss:0.6027048508326213, fm1:0.41075418397788915 nfold:4,epoch:9,train loss:0.5996838737342317, eval loss:0.6106006304423014, fm1:0.4247308204855665 nfold:4,epoch:14,train loss:0.5994551697019803, eval loss:0.5860225121180217, fm1:0.4144711033184684 nfold:4,epoch:19,train loss:0.6028875496427891, eval loss:0.596651287873586, fm1:0.4246737953911698 Fold:4, Average F1-Macro:0.41547834566387065 ======================================= 5-Folds Average F1-Macro:0.42163327959005437
MIT
DataCleansingAndEda.ipynb
Pager07/A-Hackers-AI-Voice-Assistant
test
# net = torch.load('./net.pth')
net.eval()
x_test = torch.from_numpy(X_test_tfidf).to(device).float()
probs = net(x_test)
prediction = probs.argmax(dim=1).detach().cpu().numpy()
# print the classification report to highlight the accuracy with f1-score, precision and recall
prediction = ['unrelated' if p else 'related' for p in prediction]
print(metrics.classification_report(prediction, Y_test))
plot_confussion_matrix(prediction, Y_test)
plot_roc_curve(prediction, Y_test)

# test for get_positives
test_hs = torch.tensor([[0.8799, 0.0234],
                        [0.2341, 0.8839],
                        [0.8705, 0.1356],
                        [0.9723, 0.1930],
                        [0.7416, 0.4498]])
test_mi = torch.tensor([0, 1, 2])
y = torch.tensor([0, 0, 1, 1, 1])
out = get_positives(test_hs, test_mi, y)
assert out.shape == (3, 2)

# test for get_negatives
out = get_negatives(test_hs, test_mi, y)
assert out.shape == (3, 2)

# test for generate_triplets
y_hat = torch.tensor([1, 1, 1, 1, 1])
y = torch.tensor([1, 1, 1, 0, 0])
a, p, n = generate_triplets(test_hs, y_hat, y)
assert a.shape == (2, 2)
assert p.shape == (2, 2)
assert n.shape == (2, 2)

# test for mine_hard_triplets
a = torch.tensor([[0.8799, 0.0234],
                  [0.2341, 0.8839],
                  [0.7416, 0.4498]])
p = a.clone()
n = a.clone()
h_a, h_p, h_n = mine_hard_triplets(a, p, n, 0.5)
assert h_a.shape == (3, 2)
assert h_p.shape == (3, 2)
assert h_n.shape == (3, 2)

# test for get_dataloaders
x_train = torch.tensor([[0.8799, 0.0234],
                        [0.2341, 0.8839],
                        [0.7416, 0.4498]])
y_train = [1, 1, 0]
x_test = x_train.clone()
y_test = [1, 1, 0]
loader = get_dataloaders(x_train, y_train, x_test, y_test, bs=1, nfold=2)
assert len(loader) == 2
for k, (train_loader, test_loader) in loader.items():
    print(k)
    for x, y in train_loader:
        print(x.shape)
        print(y.shape)
train_loader, test_loader = loader[0]
_____no_output_____
MIT
DataCleansingAndEda.ipynb
Pager07/A-Hackers-AI-Voice-Assistant
Using Google Colab with GitHub [Google Colaboratory](http://colab.research.google.com) is designed to integrate cleanly with GitHub, allowing both loading notebooks from github and saving notebooks to github. Loading Public Notebooks Directly from GitHubColab can load public github notebooks directly, with no required authorization step.For example, consider the notebook at this address: https://github.com/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb.The direct colab link to this notebook is: https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb.To generate such links in one click, you can use the [Open in Colab](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo) Chrome extension. Browsing GitHub Repositories from ColabColab also supports special URLs that link directly to a GitHub browser for any user/organization, repository, or branch. For example:- http://colab.research.google.com/github will give you a general github browser, where you can search for any github organization or username.- http://colab.research.google.com/github/googlecolab/ will open the repository browser for the ``googlecolab`` organization. Replace ``googlecolab`` with any other github org or user to see their repositories.- http://colab.research.google.com/github/googlecolab/colabtools/ will let you browse the main branch of the ``colabtools`` repository within the ``googlecolab`` organization. Substitute any user/org and repository to see its contents.- http://colab.research.google.com/github/googlecolab/colabtools/blob/master will let you browse ``master`` branch of the ``colabtools`` repository within the ``googlecolab`` organization. (don't forget the ``blob`` here!) You can specify any valid branch for any valid repository. 
Loading Private NotebooksLoading a notebook from a private GitHub repository is possible, but requires an additional step to allow Colab to access your files.Do the following:1. Navigate to http://colab.research.google.com/github.2. Click the "Include Private Repos" checkbox.3. In the popup window, sign-in to your Github account and authorize Colab to read the private files.4. Your private repositories and notebooks will now be available via the github navigation pane. Saving Notebooks To GitHub or DriveAny time you open a GitHub hosted notebook in Colab, it opens a new editable view of the notebook. You can run and modify the notebook without worrying about overwriting the source.If you would like to save your changes from within Colab, you can use the File menu to save the modified notebook either to Google Drive or back to GitHub. Choose **File→Save a copy in Drive** or **File→Save a copy to GitHub** and follow the resulting prompts. To save a Colab notebook to GitHub requires giving Colab permission to push the commit to your repository. Open In Colab BadgeAnybody can open a copy of any github-hosted notebook within Colab. To make it easier to give people access to live views of GitHub-hosted notebooks, Colab provides a [shields.io](http://shields.io/)-style badge, which appears as follows:[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)The markdown for the above badge is the following:```markdown[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)```The HTML equivalent is:```HTML <a href="https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ```Remember to replace the notebook URL in this template with the notebook you want to link to.
_____no_output_____
MIT
colab-example-notebooks/colab_github_demo.ipynb
tuanavu/deep-learning-tutorials
AzMeta Resize Recommendations
import datetime
print("Report Date:", datetime.datetime.now().isoformat())
print("Total Annual Savings:", "${:,.2f}".format(local_data[('AzMeta', 'annual_savings')].sum()), "(Non-RI Pricing, SQL and Windows AHUB Licensing)")

# Present the dataset
import matplotlib as plt
import itertools
from matplotlib import colors

def background_limit_coloring(row):
    cmap = "coolwarm"
    text_color_threshold = 0.408
    limit_index = (row.index.get_level_values(0)[0], 'new_limit')
    smin = 0
    smax = row[limit_index]
    if pd.isna(smax):
        return [''] * len(row)
    norm = colors.Normalize(smin, smax)
    rgbas = plt.cm.get_cmap(cmap)(norm(row.to_numpy(dtype=float)))

    def relative_luminance(rgba):
        # sRGB relative luminance; note the exponent applies to the whole quotient
        r, g, b = (
            x / 12.92 if x <= 0.03928 else ((x + 0.055) / 1.055) ** 2.4
            for x in rgba[:3]
        )
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def css(rgba):
        dark = relative_luminance(rgba) < text_color_threshold
        text_color = "#f1f1f1" if dark else "#000000"
        return f"background-color: {colors.rgb2hex(rgba)};color: {text_color};"

    return [css(rgba) for rgba in rgbas[0:-1]] + ['']

def build_header_style(col_groups):
    start = 0
    styles = []
    palette = ['#f6f6f6', '#eae9e9', '#d4d7dd', '#f6f6f6', '#eae9e9', '#d4d7dd', '#f6f6f6', '#eae9e9', '#d4d7dd']
    for i, group in enumerate(itertools.groupby(col_groups, lambda c: c[0])):
        styles.append({'selector': f'.col_heading.level0.col{start}', 'props': [('background-color', palette[i])]})
        group_len = len(tuple(group[1]))
        for j in range(group_len):
            styles.append({'selector': f'.col_heading.level1.col{start + j}', 'props': [('background-color', palette[i])]})
        start += group_len
    return styles

data_group_names = [x for x in full_data.columns.get_level_values(0).unique() if x not in ('Resource', 'AzMeta', 'Advisor')]
num_mask = [x[0] in data_group_names for x in full_data.columns.to_flat_index()]
styler = full_data.style.hide_index() \
    .set_properties(**{'font-weight': 'bold'}, subset=[('Resource', 'resource_name')]) \
    .format('{:.1f}', subset=num_mask, na_rep='N/A') \
.format('${:.2f}', subset=[('AzMeta', 'annual_savings')], na_rep='N/A') \ .set_table_styles(build_header_style(full_data.columns)) for data_group in data_group_names: mask = [x == data_group for x in full_data.columns.get_level_values(0)] styler = styler.apply(background_limit_coloring, axis=1, subset=mask) styler
_____no_output_____
MIT
src/rightsize/rightsizereport.ipynb
wpbrown/azmeta-rightsize-iaas
Cowell's formulationFor cases where we only study the gravitational forces, solving Kepler's equation is enough to propagate the orbit forward in time. However, when we want to take perturbations that deviate from Keplerian forces into account, we need a more complex method to solve our initial value problem: one of them is **Cowell's formulation**.In this formulation we write the two body differential equation separating the Keplerian and the perturbation accelerations:$$\ddot{\boldsymbol{r}} = -\frac{\mu}{|\boldsymbol{r}|^3} \boldsymbol{r} + \boldsymbol{a}_d$$ For an in-depth exploration of this topic, still to be integrated in poliastro, check out https://github.com/Juanlu001/pfc-uc3m An earlier version of this notebook allowed for more flexibility and interactivity, but was considerably more complex. Future versions of poliastro and plotly might bring back part of that functionality, depending on user feedback. You can still download the older version here. First exampleLet's set up a very simple example with constant acceleration to visualize the effects on the orbit.
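Before bringing in poliastro, the splitting above can be sketched with a plain numerical integrator. Everything here (the toy state vector, the zero-perturbation placeholder) is illustrative; poliastro's `cowell` used later is the real implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 398600.4418  # Earth's gravitational parameter, km^3 / s^2

def a_perturbation(t, rv):
    # placeholder: replace with any non-Keplerian acceleration (km / s^2)
    return np.zeros(3)

def rhs(t, rv):
    # Cowell's formulation: Keplerian term plus perturbation term
    r, v = rv[:3], rv[3:]
    a_kepler = -mu * r / np.linalg.norm(r) ** 3
    return np.hstack([v, a_kepler + a_perturbation(t, rv)])

# a roughly circular low Earth orbit: r = 7000 km, v ~ sqrt(mu / r)
rv0 = np.array([7000.0, 0.0, 0.0, 0.0, 7.546, 0.0])
sol = solve_ivp(rhs, (0.0, 5400.0), rv0, rtol=1e-9, atol=1e-9)
```

With the perturbation set to zero this reduces to pure two-body motion, so the orbital radius should stay very close to 7000 km over the integration.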
import numpy as np from astropy import units as u from matplotlib import pyplot as plt plt.ion() from poliastro.bodies import Earth from poliastro.twobody import Orbit from poliastro.examples import iss from poliastro.twobody.propagation import cowell from poliastro.plotting import OrbitPlotter3D from poliastro.util import norm from plotly.offline import init_notebook_mode init_notebook_mode(connected=True)
_____no_output_____
MIT
docs/source/examples/Propagation using Cowell's formulation.ipynb
nikita-astronaut/poliastro
To provide an acceleration depending on an extra parameter, we can use **closures** like this one:
accel = 2e-5 def constant_accel_factory(accel): def constant_accel(t0, u, k): v = u[3:] norm_v = (v[0]**2 + v[1]**2 + v[2]**2)**.5 return accel * v / norm_v return constant_accel def custom_propagator(orbit, tof, rtol, accel=accel): # Workaround for https://github.com/poliastro/poliastro/issues/328 if tof == 0: return orbit.r.to(u.km).value, orbit.v.to(u.km / u.s).value else: # Use our custom perturbation acceleration return cowell(orbit, tof, rtol, ad=constant_accel_factory(accel)) times = np.linspace(0, 10 * iss.period, 500) times times, positions = iss.sample(times, method=custom_propagator)
_____no_output_____
MIT
docs/source/examples/Propagation using Cowell's formulation.ipynb
nikita-astronaut/poliastro
And we plot the results:
frame = OrbitPlotter3D() frame.set_attractor(Earth) frame.plot_trajectory(positions, label="ISS") frame.show()
_____no_output_____
MIT
docs/source/examples/Propagation using Cowell's formulation.ipynb
nikita-astronaut/poliastro
Error checking
def state_to_vector(ss): r, v = ss.rv() x, y, z = r.to(u.km).value vx, vy, vz = v.to(u.km / u.s).value return np.array([x, y, z, vx, vy, vz]) k = Earth.k.to(u.km**3 / u.s**2).value rtol = 1e-13 full_periods = 2 u0 = state_to_vector(iss) tf = ((2 * full_periods + 1) * iss.period / 2).to(u.s).value u0, tf iss_f_kep = iss.propagate(tf * u.s, rtol=1e-18) r, v = cowell(iss, tf, rtol=rtol) iss_f_num = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s, iss.epoch + tf * u.s) iss_f_num.r, iss_f_kep.r assert np.allclose(iss_f_num.r, iss_f_kep.r, rtol=rtol, atol=1e-08 * u.km) assert np.allclose(iss_f_num.v, iss_f_kep.v, rtol=rtol, atol=1e-08 * u.km / u.s) assert np.allclose(iss_f_num.a, iss_f_kep.a, rtol=rtol, atol=1e-08 * u.km) assert np.allclose(iss_f_num.ecc, iss_f_kep.ecc, rtol=rtol) assert np.allclose(iss_f_num.inc, iss_f_kep.inc, rtol=rtol, atol=1e-08 * u.rad) assert np.allclose(iss_f_num.raan, iss_f_kep.raan, rtol=rtol, atol=1e-08 * u.rad) assert np.allclose(iss_f_num.argp, iss_f_kep.argp, rtol=rtol, atol=1e-08 * u.rad) assert np.allclose(iss_f_num.nu, iss_f_kep.nu, rtol=rtol, atol=1e-08 * u.rad)
_____no_output_____
MIT
docs/source/examples/Propagation using Cowell's formulation.ipynb
nikita-astronaut/poliastro
Numerical validationAccording to [Edelbaum, 1961], a coplanar, semimajor axis change with tangent thrust is defined by:$$\frac{\operatorname{d}\!a}{a_0} = 2 \frac{F}{m V_0}\operatorname{d}\!t, \qquad \frac{\Delta{V}}{V_0} = \frac{1}{2} \frac{\Delta{a}}{a_0}$$So let's create a new circular orbit and perform the necessary checks, assuming constant mass and thrust (i.e. constant acceleration):
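As a sanity check on the expressions themselves, the predicted change can be evaluated by hand for the same setup used next (a 500 km circular orbit, constant tangential acceleration of 1e-7 km/s², 20 periods); these numbers are assumptions mirroring the cell below, not poliastro output:

```python
import numpy as np

mu = 398600.4418                      # km^3 / s^2
a0 = 6378.137 + 500.0                 # semimajor axis of a 500 km circular orbit, km
V0 = np.sqrt(mu / a0)                 # circular orbital speed, km / s
T = 2 * np.pi * np.sqrt(a0**3 / mu)   # orbital period, s

accel = 1e-7                          # constant tangential acceleration F/m, km / s^2
tof = 20 * T                          # thrust for 20 periods

dV = accel * tof                      # total delta-v delivered
da_over_a0 = 2 * dV / V0              # Edelbaum's predicted relative semimajor-axis change
print(da_over_a0, a0 * da_over_a0)
```

This predicts a semimajor-axis raise on the order of 20 km; the numerical propagation below should recover something close.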
ss = Orbit.circular(Earth, 500 * u.km) tof = 20 * ss.period ad = constant_accel_factory(1e-7) r, v = cowell(ss, tof.to(u.s).value, ad=ad) ss_final = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s, ss.epoch + tof) da_a0 = (ss_final.a - ss.a) / ss.a da_a0 dv_v0 = abs(norm(ss_final.v) - norm(ss.v)) / norm(ss.v) 2 * dv_v0 np.allclose(da_a0, 2 * dv_v0, rtol=1e-2)
_____no_output_____
MIT
docs/source/examples/Propagation using Cowell's formulation.ipynb
nikita-astronaut/poliastro
This means **we successfully validated the model against an extremely simple orbit transfer with approximate analytical solution**. Notice that the final eccentricity, as originally noticed by Edelbaum, is nonzero:
ss_final.ecc
_____no_output_____
MIT
docs/source/examples/Propagation using Cowell's formulation.ipynb
nikita-astronaut/poliastro
Print DependenciesDependences are fundamental to record the computational environment.
%load_ext watermark # python, ipython, packages, and machine characteristics %watermark -v -m -p pandas,keras,numpy,math,tensorflow,matplotlib,h5py # date print (" ") %watermark -u -n -t -z
Python implementation: CPython Python version : 3.8.8 IPython version : 7.22.0 pandas : 1.2.3 keras : 2.4.3 numpy : 1.19.5 math : unknown tensorflow: 2.4.1 matplotlib: 3.4.0 h5py : 2.10.0 Compiler : Clang 12.0.0 (clang-1200.0.32.29) OS : Darwin Release : 19.6.0 Machine : x86_64 Processor : i386 CPU cores : 8 Architecture: 64bit Last updated: Wed Apr 14 2021 11:56:48CEST
MIT
notebooks/.ipynb_checkpoints/Comp_Scatt_NeuralNetwork_Classification_noposition-Copy1-checkpoint.ipynb
chiarabadiali/comp_scatt_ML
Load of the data
from process import loaddata class_data0 = loaddata("../data/{}.csv".format('low_ene')) class_data0 = class_data0[class_data0[:,0] > 0.001] class_data0.shape y0 = class_data0[:,0] A0 = class_data0 A0[:,9] = A0[:,13] x0 = class_data0[:,1:10]
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Comp_Scatt_NeuralNetwork_Classification_noposition-Copy1-checkpoint.ipynb
chiarabadiali/comp_scatt_ML
Check to see if the data are balanced now
from matplotlib import pyplot y0 = np.array(y0) bins = np.linspace(0, 0.55, 50) n, edges, _ = pyplot.hist(y0, bins, color = 'indianred', alpha=0.5, label='Osiris') #pyplot.hist(y_pred, bins, color = 'mediumslateblue', alpha=0.5, label='NN') pyplot.legend(loc='upper right') pyplot.xlabel('Probability') pyplot.yscale('log') pyplot.title('Trained on ($p_e$, $p_{\gamma}$, $\omega_e$, $\omega_{\gamma}$, n)') pyplot.show() def balance_data(class_data, nbins): from matplotlib import pyplot as plt y = class_data[:,0] n, edges, _ = plt.hist(y, nbins, color = 'indianred', alpha=0.5, label='Osiris') n_max = n.max() data = [] for class_ in class_data: for i in range(len(n)): edges_min = edges[i] edges_max = edges[i+1] if class_[0] > edges_min and class_[0] < edges_max: for j in range(int(n_max/n[i])): data.append(class_) break return np.array(data) class_data = balance_data(class_data0, 100) np.random.shuffle(class_data) y = class_data[:,0] A = class_data print(A[0]) A[:,9] = A[:,13] print(A[0]) x = class_data[:,1:10] print(x[0]) print(x.shape) from matplotlib import pyplot y0 = np.array(y0) bins = np.linspace(0, 0.55, 100) pyplot.hist(y, bins, color = 'indianred', alpha=0.5, label='Osiris') #pyplot.hist(y_pred, bins, color = 'mediumslateblue', alpha=0.5, label='NN') pyplot.legend(loc='upper right') pyplot.xlabel('Probability') pyplot.yscale('log') pyplot.title('Trained on ($p_e$, $p_{\gamma}$, $\omega_e$, $\omega_{\gamma}$, n)') pyplot.show() train_split = 0.75 train_limit = int(len(y)*train_split) print("Training sample: {0} \nValuation sample: {1}".format(train_limit, len(y)-train_limit)) x_train = x[:train_limit] x_val = x[train_limit:] y_train = y[:train_limit] y_val = y[train_limit:]
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Comp_Scatt_NeuralNetwork_Classification_noposition-Copy1-checkpoint.ipynb
chiarabadiali/comp_scatt_ML
Model Build
from keras.models import Sequential from keras.layers.core import Dense import keras.backend as K from keras import optimizers from keras import models from keras import layers from keras.layers.normalization import BatchNormalization def build_model() : model = models.Sequential() model.add (BatchNormalization(input_dim = 9)) model.add (layers.Dense (12 , activation = "sigmoid")) model.add (layers.Dense (9 , activation = "relu")) model.add (layers.Dense (1 , activation = "sigmoid")) model.compile(optimizer = "adam" , loss = 'mae' , metrics = ["mape"]) return model model = build_model () history = model.fit ( x_train, y_train, epochs = 1000, batch_size = 10000 , validation_data = (x_val, y_val) ) model.save("../models/classifier/{}_noposition2.h5".format('probability')) model.summary() import matplotlib.pyplot as plt loss = history.history['loss'] val_loss = history.history['val_loss'] accuracy = history.history['mape'] val_accuracy = history.history['val_mape'] epochs = range(1, len(loss) + 1) fig, ax1 = plt.subplots() l1 = ax1.plot(epochs, loss, 'bo', label='Training loss') vl1 = ax1.plot(epochs, val_loss, 'b', label='Validation loss') ax1.set_title('Training and validation loss') ax1.set_xlabel('Epochs') ax1.set_ylabel('Loss (mae))') ax2 = ax1.twinx() ac2= ax2.plot(epochs, accuracy, 'o', c="red", label='Training acc') vac2= ax2.plot(epochs, val_accuracy, 'r', label='Validation acc') ax2.set_ylabel('mape') lns = l1 + vl1 + ac2 + vac2 labs = [l.get_label() for l in lns] ax2.legend(lns, labs, loc="center right") fig.tight_layout() #fig.savefig("acc+loss_drop.pdf") fig.show()
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Comp_Scatt_NeuralNetwork_Classification_noposition-Copy1-checkpoint.ipynb
chiarabadiali/comp_scatt_ML
Probability density distribution
y0 = class_data0[:,0] A0 = class_data0 A0[:,9] = A0[:,13] x0 = class_data0[:,1:10] y_pred = model.predict(x0) y_pred from matplotlib import pyplot y = np.array(y) bins = np.linspace(0, 0.8, 100) pyplot.hist(y0, bins, color = 'indianred', alpha=0.5, label='Osiris') pyplot.hist(y_pred, bins, color = 'mediumslateblue', alpha=0.5, label='NN') pyplot.legend(loc='upper right') pyplot.xlabel('Probability') pyplot.yscale('log') pyplot.title('Trained on ($p_e$, $p_{\gamma}$, $\omega_e$, $\omega_{\gamma}$, n)') pyplot.show()
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Comp_Scatt_NeuralNetwork_Classification_noposition-Copy1-checkpoint.ipynb
chiarabadiali/comp_scatt_ML
Recreation of Terry's Notebook with NgSpiceIn this experiment we are going to recreate Terry's notebook with NgSpice simulation backend. Step 1: Set up Python3 and NgSpice
%matplotlib inline import matplotlib.pyplot as plt # check if ngspice can be found from python from ctypes.util import find_library ngspice_lib_filename = find_library('libngspice') print(ngspice_lib_filename) ## if the result is none, make sure that libngspice is installed import PySpice.Logging.Logging as Logging logger = Logging.setup_logging() from PySpice.Spice.NgSpice.Shared import NgSpiceShared ngspice = NgSpiceShared.new_instance() print(ngspice.exec_command('version -f')) import nengo import numpy as np
/usr/local/lib/libngspice.dylib ****** ** ngspice-32 : Circuit level simulation program ** The U. C. Berkeley CAD Group ** Copyright 1985-1994, Regents of the University of California. ** Copyright 2001-2020, The ngspice team. ** Please get your ngspice manual from http://ngspice.sourceforge.net/docs.html ** Please file your bug-reports at http://ngspice.sourceforge.net/bugrep.html ** ** CIDER 1.b1 (CODECS simulator) included ** XSPICE extensions included ** Relevant compilation options (refer to user's manual): ** X11 interface not compiled into ngspice ** ******
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Step 2: Define a single neuron Let's start with the subcircuit of a single neuron. We are going to use voltage amplifier leaky-integrate and fire neurons discussed in Section 3.3 of Indiveri et al.(May 2011).
neuron_model = ''' .subckt my_neuron Vmem out cvar=100p vsupply=1.8 vtau=0.4 vthr=0.2 vb=1 V1 Vdd 0 {vsupply} V6 Vtau 0 {vtau} V2 Vthr 0 {vthr} V3 Vb1 0 {vb} C1 Vmem 0 {cvar} M5 N001 N001 Vdd Vdd pmos l=0.5 w=1.2 ad=1.2 as=1.2 pd=4.4 ps=4.4 M6 N002 N001 Vdd Vdd pmos l=0.5 w=1.2 ad=1.2 as=1.2 pd=4.4 ps=4.4 M8 N001 Vmem N004 N004 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2 M9 N002 Vthr N004 N004 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2 M10 N004 Vb1 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2 Mreset Vmem out 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2 M7 N003 N002 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2 M18 out N003 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2 M19 N003 N002 Vdd Vdd pmos l=0.5 w=1.2 ad=1.2 as=1.2 pd=4.4 ps=4.4 M20 out N003 Vdd Vdd pmos l=0.5 w=1.2 ad=1.2 as=1.2 pd=4.4 ps=4.4 Mleak Vmem Vtau 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2 .ends my_neuron '''
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Create the neuron's netlist
def create_neuron_netlist(N): # N is the number of neurons netlist = '' for i in range(N): netlist += 'x'+str(i)+' Vmem'+str(i)+' out'+str(i)+' my_neuron vsupply={vsource} cvar=150p vthr=0.25 \n' netlist += 'Rload'+str(i)+' out'+str(i)+ ' 0 100k\n' return netlist netlist_neurons = create_neuron_netlist(1)
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Step 3: Generate the input Now, let's generate some input and see what it does. We are going to use the WhiteSignal that Terry used; however, we are going to shrink the signal in amplitude (since this would be a current signal in the circuit) and also increase the frequency of the signal.
stim = nengo.processes.WhiteSignal(period=10, high=5, seed=1).run(1, dt=0.001) input_signal = [[i*1e-6, J[0]*10e-6] for i, J in enumerate(stim)] #scaling
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Let's convert this signal to a current source.
def pwl_conv(signal):
    # signal should be a list of lists, where each sublist has the form [time_value, current_value]
    pwl_string = ''
    for i in signal:
        pwl_string += str(i[0]) + ' ' + str(i[1]) + ' '
    return pwl_string
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Step 4: Generate remaining parts of the Spice Netlist
netlist_input = 'Iin0 Vdd Vmem0 PWL(' + pwl_conv(input_signal) +')\n' # Converting the input to a current source ## other setup parameters args= {} args['simulation_time'] = '1m' args['simulation_step'] = '1u' args['simulation_lib'] = '180nm.lib' netlist_top= '''*Sample SPICE file .include {simulation_lib} .option scale=1u .OPTIONS ABSTOL=1N VNTOL=1M. .options savecurrents .tran {simulation_step} {simulation_time} UIC '''.format(**args) netlist_bottom = ''' .end''' ## define the sources netlist_source = ''' .param vsource = 1.8 Vdd Vdd 0 {vsource} ''' netlist = netlist_top + netlist_source + neuron_model + netlist_input+ netlist_neurons+ netlist_bottom
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Step 5: Simulate the netlist
def simulate(circuit): ngspice.load_circuit(circuit) ngspice.run() print('Plots:', ngspice.plot_names) plot = ngspice.plot(simulation=None, plot_name=ngspice.last_plot) return plot out=simulate(netlist) plt.plot(out['time']._data,out['@rload0[i]']._data, label='output_current') plt.plot(out['time']._data,out['@iin0[current]']._data, label = 'input_current') plt.legend()
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Great! We have a system that exhibits some sort of nonlinearity. Now let's create a feedforward system with a bunch of neurons and see if it can be used to approximate a function. Step 6: Function approximation with a feedforward network
N = 50 # how many neurons there are E = np.random.normal(size=(N, 1)) B = np.random.normal(size=(N))*0.1 netlist_neurons = create_neuron_netlist(N)
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
*Now let's feed that same stimulus to all the neurons and see how they behave.*
def create_neuron_current_netlist(E,B,stim,N): # take the A matrix and the number of neurons # refactor netlist_input='\n' signal = np.zeros((len(stim), N)) for i, J in enumerate(stim): Js = np.dot(E, J) for k, JJ in enumerate(Js): signal[i][k] = JJ+B[k] for k in range(N): input_signal = [[i*1e-6, J*10e-6] for i, J in enumerate(signal[:,k])] netlist_input += 'Iin'+str(k)+' Vdd Vmem'+str(k)+' PWL(' + pwl_conv(input_signal) +')\n\n' return netlist_input netlist_inputs = create_neuron_current_netlist(E,B,stim,N) netlist = netlist_top + netlist_source + neuron_model + netlist_inputs+ netlist_neurons+ netlist_bottom out=simulate(netlist)
2020-09-08 17:06:15,916 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin49: no DC value, transient time 0 value used 2020-09-08 17:06:15,917 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin48: no DC value, transient time 0 value used 2020-09-08 17:06:15,919 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin47: no DC value, transient time 0 value used 2020-09-08 17:06:15,920 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin46: no DC value, transient time 0 value used 2020-09-08 17:06:15,921 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin45: no DC value, transient time 0 value used 2020-09-08 17:06:15,924 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin44: no DC value, transient time 0 value used 2020-09-08 17:06:15,926 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin43: no DC value, transient time 0 value used 2020-09-08 17:06:15,928 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin42: no DC value, transient time 0 value used 2020-09-08 17:06:15,929 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin41: no DC value, transient time 0 value used 2020-09-08 17:06:15,937 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin40: no DC value, transient time 0 value used 2020-09-08 17:06:15,938 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin39: no DC value, transient time 0 value used 2020-09-08 17:06:15,943 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin38: no DC value, transient time 0 value used 2020-09-08 17:06:15,949 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin37: no DC value, transient time 0 value used 2020-09-08 17:06:15,951 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin36: no DC 
value, transient time 0 value used 2020-09-08 17:06:15,952 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin35: no DC value, transient time 0 value used 2020-09-08 17:06:15,954 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin34: no DC value, transient time 0 value used 2020-09-08 17:06:15,959 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin33: no DC value, transient time 0 value used 2020-09-08 17:06:15,964 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin32: no DC value, transient time 0 value used 2020-09-08 17:06:15,969 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin31: no DC value, transient time 0 value used 2020-09-08 17:06:15,970 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin30: no DC value, transient time 0 value used 2020-09-08 17:06:15,976 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin29: no DC value, transient time 0 value used 2020-09-08 17:06:15,979 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin28: no DC value, transient time 0 value used 2020-09-08 17:06:15,983 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin27: no DC value, transient time 0 value used 2020-09-08 17:06:15,986 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin26: no DC value, transient time 0 value used 2020-09-08 17:06:15,987 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin25: no DC value, transient time 0 value used 2020-09-08 17:06:15,995 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin24: no DC value, transient time 0 value used 2020-09-08 17:06:15,997 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin23: no DC value, transient time 0 value used 2020-09-08 17:06:15,998 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - 
Shared.WARNING - Warning: iin22: no DC value, transient time 0 value used 2020-09-08 17:06:15,999 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin21: no DC value, transient time 0 value used 2020-09-08 17:06:16,001 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin20: no DC value, transient time 0 value used 2020-09-08 17:06:16,002 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin19: no DC value, transient time 0 value used 2020-09-08 17:06:16,004 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin18: no DC value, transient time 0 value used 2020-09-08 17:06:16,008 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin17: no DC value, transient time 0 value used 2020-09-08 17:06:16,014 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin16: no DC value, transient time 0 value used 2020-09-08 17:06:16,016 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin15: no DC value, transient time 0 value used 2020-09-08 17:06:16,017 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin14: no DC value, transient time 0 value used 2020-09-08 17:06:16,020 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin13: no DC value, transient time 0 value used 2020-09-08 17:06:16,023 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin12: no DC value, transient time 0 value used 2020-09-08 17:06:16,028 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin11: no DC value, transient time 0 value used 2020-09-08 17:06:16,031 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin10: no DC value, transient time 0 value used 2020-09-08 17:06:16,032 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin9: no DC value, transient time 0 value used 2020-09-08 17:06:16,034 - 
PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin8: no DC value, transient time 0 value used 2020-09-08 17:06:16,036 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin7: no DC value, transient time 0 value used 2020-09-08 17:06:16,037 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin6: no DC value, transient time 0 value used 2020-09-08 17:06:16,042 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin5: no DC value, transient time 0 value used 2020-09-08 17:06:16,045 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin4: no DC value, transient time 0 value used 2020-09-08 17:06:16,046 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin3: no DC value, transient time 0 value used 2020-09-08 17:06:16,048 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin2: no DC value, transient time 0 value used 2020-09-08 17:06:16,049 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin1: no DC value, transient time 0 value used 2020-09-08 17:06:16,051 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin0: no DC value, transient time 0 value used Plots: ['tran2', 'tran1', 'const']
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
So it seems we have some output from the ensemble. Let's convert this output to get the A matrix
def extract_A_matrix(result, N, stim):
    # resample each neuron's load current onto a common time grid,
    # then threshold at half the peak to get a binary activity matrix
    temp_time = result['time']._data
    t = np.linspace(min(temp_time), max(temp_time), len(stim))
    interpolated_result = np.zeros((len(stim), N))
    A = np.zeros((len(stim), N))
    for j in range(N):
        temp_str = '@rload' + str(j) + '[i]'
        temp_out = result[temp_str]._data
        interpolated_result[:, j] = np.interp(t, temp_time, temp_out)
        A[:, j] = interpolated_result[:, j] > max(interpolated_result[:, j]) / 2
    return A

A_from_spice = extract_A_matrix(out, N, stim)
plt.figure(figsize=(12,6))
plt.imshow(A_from_spice.T, aspect='auto', cmap='gray_r')
plt.show()
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Cool! This is similar to the A matrix we got from Terry's notebook. We can also calculate the D matrix from this output and approximate the y(t)=x(t) function.
target = stim D_from_spice, info = nengo.solvers.LstsqL2()(A_from_spice, target) plt.plot(A_from_spice.dot(D_from_spice), label='output') plt.plot(target, lw=3, label='target') plt.legend() plt.show() print('RMSE:', info['rmses'])
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
*With spiking neuron models, it's very common to have a low-pass filter (i.e. a synapse) after the spike. Let's see what our output looks like with a low-pass filter applied.*
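Nengo's `Lowpass` synapse is a first-order exponential filter. A minimal discrete-time sketch (the `tau` and `dt` values are arbitrary illustrative choices, and this simple exponential discretization may differ slightly from nengo's internal one):

```python
import numpy as np

def lowpass(x, tau=0.01, dt=0.001):
    # y[t] = (1 - a) * y[t-1] + a * x[t], with a = 1 - exp(-dt / tau)
    a = 1.0 - np.exp(-dt / tau)
    y = np.zeros(len(x))
    acc = 0.0
    for i, xi in enumerate(x):
        acc += a * (xi - acc)
        y[i] = acc
    return y

smoothed = lowpass(np.ones(100))   # a step input relaxes toward 1.0
```

Applied to a spike train, this smears each spike into a decaying exponential, which is why the decoded output below looks much less jagged.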
filt = nengo.synapses.Lowpass(0.01) #need to implement synapses in circuit plt.plot(filt.filt(A_from_spice.dot(D_from_spice)), label='output (filtered)') plt.plot(target, lw=3, label='target') plt.legend() plt.show()
_____no_output_____
MIT
Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb
JD-14/grill-spice
Python in Jupyter Fremont bicycle data demo Antero Kangas 3.6.2018
%matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn') from jupyterworkflow2.data import get_fremont_data data = get_fremont_data() data.head() data.resample("W").sum().plot() ax = data.resample("D").sum().rolling(365).sum().plot() ax.set_ylim(0, None) data.groupby(data.index.time).mean().plot() pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date) pivoted.iloc[:5, :7] pivoted.plot(legend=False, alpha=0.01) get_fremont_data??
_____no_output_____
MIT
JupyterWorkflow2.ipynb
Andycappfi/JupyterWorkflow2
Scale everything
scaler = StandardScaler() X_scale = scaler.fit_transform(X_res) X_scale_torch = torch.FloatTensor(X_scale) y_scale_torch = torch.FloatTensor(y_res) y_scale_torch from skorch import NeuralNetBinaryClassifier from classes import MyModule class toTensor(BaseEstimator, TransformerMixin): def fit(self, X, y=None): return self def transform(self, X): return torch.FloatTensor(X) class MyModule(nn.Module): def __init__(self, num_units=128, dropoutrate = 0.5): super(MyModule, self).__init__() self.dropoutrate = dropoutrate self.layer1 = nn.Linear(23, num_units) self.nonlin = nn.ReLU() self.dropout1 = nn.Dropout(self.dropoutrate) self.dropout2 = nn.Dropout(self.dropoutrate) self.layer2 = nn.Linear(num_units, num_units) self.output = nn.Linear(num_units,1) self.batchnorm1 = nn.BatchNorm1d(128) self.batchnorm2 = nn.BatchNorm1d(128) def forward(self, X, **kwargs): X = self.nonlin(self.layer1(X)) X = self.batchnorm1(X) X = self.dropout1(X) X = self.nonlin(self.layer2(X)) X = self.batchnorm2(X) X = self.dropout2(X) X = self.output(X) return X model = NeuralNetBinaryClassifier( MyModule(dropoutrate = 0.2), max_epochs=40, lr=0.01, batch_size=128, # Shuffle training data on each epoch iterator_train__shuffle=True, ) model.fit(X_scale_torch, y_scale_torch) val_loss = [] train_loss = [] epochs = range(1,41) for i in range(40): val_loss.append(model.history[i]['valid_loss']) train_loss.append(model.history[i]['train_loss']) dfloss = (pd.DataFrame({'epoch': epochs, 'val_loss': val_loss, 'train_loss': train_loss}, columns=['epoch', 'val_loss', 'train_loss']).set_index('epoch')) sns.lineplot(data=dfloss) from skorch.helper import SliceDataset from sklearn.model_selection import cross_val_score from sklearn.model_selection import cross_validate train_slice = SliceDataset(X_scale_torch) y_slice = SliceDataset(y_scale_torch) scores = cross_validate(model, X_scale_torch, y_scale_torch, scoring='accuracy', cv=4) import functools as f print('validation accuracy for each fold: 
{}'.format(scores)) #print('avg validation accuracy: {:.3f}'.format(scores.mean())) #loop through the dictionary for key,value in scores.items(): #use reduce to calculate the avg print(f"Average {key}", f.reduce(lambda x, y: x + y, scores[key]) / len(scores[key])) from sklearn.model_selection import GridSearchCV params = { 'lr': [0.01, 0.001], 'module__dropoutrate': [0.2, 0.5] } model.module gs = GridSearchCV(model, params, refit=False, cv=4, scoring='accuracy', verbose=2) gs_results = gs.fit(X_scale_torch,y_scale_torch ) for key in gs.cv_results_.keys(): print(key, gs.cv_results_[key]) import pickle with open('model1.pkl', 'wb') as f: pickle.dump(model, f) model.save_params( f_params='model.pkl', f_optimizer='opt.pkl', f_history='history.json') from sklearn.pipeline import Pipeline from sklearn.base import TransformerMixin, BaseEstimator from classes import toTensor pipeline = Pipeline([ ('scale', StandardScaler()), ('tensor',toTensor()), ('classification',model) ]) pipeline.fit(X_res, torch.FloatTensor(y_res)) import joblib with open('model1.pkl', 'wb') as f: joblib.dump(pipeline,f) jinput = X_res.iloc[15].to_json() jinput { "LIMIT_BAL": 20000, "SEX": 2, "EDUCATION": 2, "MARRIAGE": 1, "AGE": 24, "PAY_0": 2, "PAY_2": -1, "PAY_3": -1, "PAY_4": -1, "PAY_5": -2, "PAY_6": -2, "BILL_AMT1": 3913, "BILL_AMT2": 3102, "BILL_AMT3": 689, "BILL_AMT4": 0, "BILL_AMT5": 0, "BILL_AMT6": 0, "PAY_AMT1": 0, "PAY_AMT2": 689, "PAY_AMT3": 0, "PAY_AMT4": 0, "PAY_AMT5": 0, "PAY_AMT6": 0 } import requests bashCommand = f"""curl -X 'POST' 'http://127.0.0.1:8000/predict' -H 'accept: application/json' -H 'Content-Type: application/json' -d {jinput}""" headers = { } res = requests.post('http://127.0.0.1:8000/predict', data=jinput, headers=headers) res.text %%timeit res = requests.post('http://127.0.0.1:8000/predict', data=jinput, headers=headers)
10.4 ms ± 538 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
MIT
NeuralNet-Classification-REST/NeuralNet.ipynb
rphillip/Case-Studies
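The fold-averaging above uses `functools.reduce`; as a sanity check, the same average can be computed with the standard library's `statistics.fmean`. The fold scores below are made up for illustration, not taken from the notebook's run:

```python
import statistics
from functools import reduce

# Hypothetical fold scores standing in for the cross_validate output
scores = {"test_accuracy": [0.80, 0.82, 0.78, 0.80]}

for key, values in scores.items():
    # The notebook's reduce-based average ...
    avg_reduce = reduce(lambda x, y: x + y, values) / len(values)
    # ... is the same quantity as the stdlib mean
    avg_stdlib = statistics.fmean(values)
    print(f"Average {key}: {avg_stdlib:.3f}")
```

`statistics.fmean` also sidesteps the manual `len()` division and accumulates with `math.fsum`, so it is the more robust choice for longer score lists.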
Embeds
df.columns from sklearn.compose import ColumnTransformer from sklearn.preprocessing import StandardScaler lincolumns = (['LIMIT_BAL','AGE', 'BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6', 'PAY_AMT1', 'PAY_AMT2', 'PAY_AMT3', 'PAY_AMT4', 'PAY_AMT5', 'PAY_AMT6']) ct = ColumnTransformer([ ('scalethis', StandardScaler(), lincolumns) ], remainder='passthrough') ct2 = ct.fit_transform(df.iloc[:,:23]) dfct2 = pd.DataFrame(ct2) dfct2 df_numeric = dfct2.iloc[:,:14] df_cat = dfct2.iloc[:,14:] df_cat1 = df_cat.iloc[:,0] df_cat2 = df_cat.iloc[:,1] df_cat3 = df_cat.iloc[:,2] df_cat4 = df_cat.iloc[:,3:] df_cat4 def emb_sz_rule(n_cat): return min(600, round(1.6 * n_cat**0.56)) embed = nn.Embedding(2, emb_sz_rule(2)) embed(torch.tensor(df_cat1.values).to(torch.int64)) def emb_sz_rule(n_cat): return min(600, round(1.6 * n_cat**0.56)) class MyModule(nn.Module): def __init__(self, num_inputs=23, num_units_d1=128, num_units_d2=128): super(MyModule, self).__init__() self.dense0 = nn.Linear(14 + emb_sz_rule(2) + emb_sz_rule(7) + emb_sz_rule(4) + emb_sz_rule(11), num_units_d1) self.nonlin = nn.ReLU() self.dropout = nn.Dropout(0.5) self.dense1 = nn.Linear(num_units_d1, num_units_d2) self.output = nn.Linear(num_units_d2, 2) self.softmax = nn.Softmax(dim=-1) self.embed1 = nn.Embedding(2, emb_sz_rule(2)) self.embed2 = nn.Embedding(7, emb_sz_rule(7)) self.embed3 = nn.Embedding(4, emb_sz_rule(4)) self.embed4 = nn.Embedding(11, emb_sz_rule(11)) def forward(self, X, cat1, cat2, cat3, cat4): x1 = self.embed1(cat1) x2 = self.embed2(cat2) x3 = self.embed3(cat3) x4 = self.embed4(cat4) X = torch.cat((X,x1,x2,x3,x4), dim=1) X = self.nonlin(self.dense0(X)) X = self.dropout(X) X = self.nonlin(self.dense1(X)) X = self.softmax(self.output(X)) return X model = NeuralNetBinaryClassifier( MyModule, max_epochs=40, lr=0.001, # Shuffle training data on each epoch iterator_train__shuffle=True, ) EPOCHS = 50 BATCH_SIZE = 64 LEARNING_RATE = 0.001 class BinaryClassification(nn.Module): def __init__(self): super(BinaryClassification, self).__init__() self.layer_1 =
nn.Linear(23, 64) self.layer_2 = nn.Linear(64, 64) self.layer_out = nn.Linear(64, 1) self.relu = nn.ReLU() self.dropout = nn.Dropout(p=0.1) self.batchnorm1 = nn.BatchNorm1d(64) self.batchnorm2 = nn.BatchNorm1d(64) def forward(self, inputs): x = self.relu(self.layer_1(inputs)) x = self.batchnorm1(x) x = self.relu(self.layer_2(x)) x = self.batchnorm2(x) x = self.dropout(x) x = self.layer_out(x) return x class MyModule(nn.Module): def __init__(self, num_inputs=23, num_units_d1=128, num_units_d2=128): super(MyModule, self).__init__() self.dense0 = nn.Linear(num_inputs, num_units_d1) self.nonlin = nn.ReLU() self.dropout = nn.Dropout(0.5) self.dense1 = nn.Linear(num_units_d1, num_units_d2) self.output = nn.Linear(num_units_d2, 2) self.softmax = nn.Softmax(dim=-1) def forward(self, X, **kwargs): X = self.nonlin(self.dense0(X)) X = self.dropout(X) X = self.nonlin(self.dense1(X)) X = self.softmax(self.output(X)) return X model = NeuralNetBinaryClassifier( MyModule, max_epochs=40, lr=0.001, # Shuffle training data on each epoch iterator_train__shuffle=True, )
_____no_output_____
MIT
NeuralNet-Classification-REST/NeuralNet.ipynb
rphillip/Case-Studies
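The `emb_sz_rule` heuristic defined in the cell above can be sanity-checked on its own; the cardinalities 2, 7, 4 and 11 are the ones passed to `nn.Embedding` in that same cell:

```python
def emb_sz_rule(n_cat):
    # Embedding width grows sublinearly with cardinality, capped at 600
    return min(600, round(1.6 * n_cat ** 0.56))

# Cardinalities taken from the nn.Embedding calls above
sizes = {n: emb_sz_rule(n) for n in (2, 7, 4, 11)}
print(sizes)
```

The resulting widths (2, 5, 3 and 6) are what the four embedding layers above actually allocate, which is what the model's concatenated feature width depends on.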
PyTorch
## train data class TrainData(Dataset): def __init__(self, X_data, y_data): self.X_data = X_data self.y_data = y_data def __getitem__(self, index): return self.X_data[index], self.y_data[index] def __len__ (self): return len(self.X_data) train_data = TrainData(torch.FloatTensor(X_train), torch.FloatTensor(y_train.to_numpy(dtype=np.float64))) ## test data class TestData(Dataset): def __init__(self, X_data): self.X_data = X_data def __getitem__(self, index): return self.X_data[index] def __len__ (self): return len(self.X_data) test_data = TestData(torch.FloatTensor(X_test)) train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True) test_loader = DataLoader(dataset=test_data, batch_size=1) class BinaryClassification(nn.Module): def __init__(self): super(BinaryClassification, self).__init__() self.layer_1 = nn.Linear(23, 64) self.layer_2 = nn.Linear(64, 64) self.layer_out = nn.Linear(64, 1) self.relu = nn.ReLU() self.dropout = nn.Dropout(p=0.1) self.batchnorm1 = nn.BatchNorm1d(64) self.batchnorm2 = nn.BatchNorm1d(64) def forward(self, inputs): x = self.relu(self.layer_1(inputs)) x = self.batchnorm1(x) x = self.relu(self.layer_2(x)) x = self.batchnorm2(x) x = self.dropout(x) x = self.layer_out(x) return x device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) model = BinaryClassification() model.to(device) print(model) criterion = nn.BCEWithLogitsLoss() optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE) def binary_acc(y_pred, y_test): y_pred_tag = torch.round(torch.sigmoid(y_pred)) correct_results_sum = (y_pred_tag == y_test).sum().float() acc = correct_results_sum/y_test.shape[0] acc = torch.round(acc * 100) return acc model.train() for e in range(1, EPOCHS+1): epoch_loss = 0 epoch_acc = 0 for X_batch, y_batch in train_loader: X_batch, y_batch = X_batch.to(device), y_batch.to(device) optimizer.zero_grad() y_pred = model(X_batch) loss = criterion(y_pred, y_batch.unsqueeze(1)) acc = binary_acc(y_pred, 
y_batch.unsqueeze(1)) loss.backward() optimizer.step() epoch_loss += loss.item() epoch_acc += acc.item() print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}') y_pred_list = [] model.eval() with torch.no_grad(): for X_batch in test_loader: X_batch = X_batch.to(device) y_test_pred = model(X_batch) y_test_pred = torch.sigmoid(y_test_pred) y_pred_tag = torch.round(y_test_pred) y_pred_list.append(y_pred_tag.cpu().numpy()) y_pred_list = [a.squeeze().tolist() for a in y_pred_list] confusion_matrix(y_test, y_pred_list) print(classification_report(y_test, y_pred_list)) # use the original data scaler = StandardScaler() X_og = scaler.fit_transform(df.iloc[:,:23]) X_og og_data = TestData(torch.FloatTensor(X_og)) og_loader = DataLoader(dataset=og_data, batch_size=1) og_y_pred_list = [] model.eval() with torch.no_grad(): for X_batch in og_loader: X_batch = X_batch.to(device) y_test_pred = model(X_batch) y_test_pred = torch.sigmoid(y_test_pred) y_pred_tag = torch.round(y_test_pred) og_y_pred_list.append(y_pred_tag.cpu().numpy()) og_y_pred_list = [a.squeeze().tolist() for a in og_y_pred_list] confusion_matrix(df['default payment next month'].to_numpy(dtype=np.float64), og_y_pred_list) print(classification_report(df['default payment next month'].to_numpy(dtype=np.float64), og_y_pred_list)) torch.save(model.state_dict(), "model1.pt") https://towardsdatascience.com/pytorch-tabular-binary-classification-a0368da5bb89
_____no_output_____
MIT
NeuralNet-Classification-REST/NeuralNet.ipynb
rphillip/Case-Studies
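`binary_acc` in the training loop above rounds a sigmoid and counts matches; a dependency-free sketch of the same arithmetic (the logits and labels below are made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def binary_acc(logits, labels):
    # Mirror of the torch version: round the sigmoid, count matches, report %
    preds = [round(sigmoid(z)) for z in logits]
    correct = sum(p == y for p, y in zip(preds, labels))
    return round(correct / len(labels) * 100)

acc = binary_acc([2.0, -1.0, 0.5], [1, 0, 0])
print(acc)  # 2 of 3 predictions match, so 67
```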
For Loops (2) - Looping through the items in a sequence

In the last lesson we introduced the concept of a For loop and learnt how we can use them to repeat a section of code. We learnt how to write a For loop that repeats a piece of code a specific number of times using the range() function, and saw that we have to create a variable to keep track of our position in the loop (conventionally called i). We also found out how to implement if-else statements within our loop to change which code is run inside the loop.

As well as writing a loop which runs a specific number of times, we can also create a loop which acts upon each item in a sequence. In this lesson we'll learn how to implement this functionality and find out how to use this knowledge to help us make charts with Plotly.

Looping through each item in a sequence

Being able to access each item in turn in a sequence is a really useful ability and one which we'll use often in this course. The syntax is very similar to that which we use to loop through the numbers in a range:

```` python
for <variable> in <sequence>:
````

The difference here is that the variable which keeps track of our position in the loop does not increment by 1 each time the loop is run. Instead, the variable takes the value of each item in the sequence in turn:
list1 = ['a', 'b', 'c', 'd', 'e'] for item in list1: print(item)
a b c d e
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
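Looping over the items directly is equivalent to the index-based range() loop from the last lesson; both build the same list:

```python
list1 = ['a', 'b', 'c', 'd', 'e']

# Index-based loop, as in the previous lesson
by_index = []
for i in range(len(list1)):
    by_index.append(list1[i])

# Item loop, as in this lesson
by_item = []
for item in list1:
    by_item.append(item)

print(by_index == by_item)  # True
```

The item loop is usually preferred when you don't need the position itself, because there is no index bookkeeping to get wrong.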
It's not important what we call this variable:
for banana in list1: print(banana)
a b c d e
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
But it's probably a good idea to call the variable something meaningful:
data = [20, 50, 10, 67] for d in data: print(d)
20 50 10 67
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
Using these loops

We can use these loops in conjunction with other concepts we have already learnt. For example, imagine that you had a list of proportions stored as decimals, but that you needed to create a new list to store them as whole numbers. We can use list.append() with a for loop to create this new list. First, we have to create an empty list to which we'll append the percentages:
proportions = [0.3, 0.45, 0.99, 0.23, 0.46] percentages = []
_____no_output_____
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
Next, we'll loop through each item in proportions, multiply it by 100 and append it to percentages:
for prop in proportions: percentages.append(prop * 100) print(percentages)
[30.0, 45.0, 99.0, 23.0, 46.0]
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
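As an aside, Python also offers a one-line equivalent of this loop-and-append pattern, the list comprehension:

```python
proportions = [0.3, 0.45, 0.99, 0.23, 0.46]

# One-line equivalent of the loop-and-append pattern above
percentages = [prop * 100 for prop in proportions]
print(percentages)
```

Both forms produce the same list; the comprehension simply folds the empty-list creation, the loop and the append into a single expression.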
Using for loops with dictionaries

We've seen how to loop through each item in a list. We will also make great use of the ability to loop through the keys and values in a dictionary. If you remember from the dictionaries lessons, we can get the keys and values in a dictionary by using dict.items(). We can use this in conjunction with a for loop to manipulate each item in a dictionary. This is something which we'll use often; we'll often have data for several years stored in a dictionary, and looping through these items will let us plot the data really easily.

In the cell below, I've created a simple data structure which we'll access using a for loop. Imagine that this data contains sales figures for the 4 quarters in a year:
data = {2009 : [10,20,30,40], 2010 : [15,30,45,60], 2011 : [7,14,21,28], 2012 : [5,10,15,20]}
_____no_output_____
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
We can loop through the keys by using dict.keys():
for k in data.keys(): print(k)
2009 2010 2011 2012
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
And we can loop through the values (which are lists):
for v in data.values(): print(v)
[10, 20, 30, 40] [15, 30, 45, 60] [7, 14, 21, 28] [5, 10, 15, 20]
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
We can loop through them both together:
for k, v in data.items(): print(k, v)
2009 [10, 20, 30, 40] 2010 [15, 30, 45, 60] 2011 [7, 14, 21, 28] 2012 [5, 10, 15, 20]
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
Having the data available to compare each year is really handy, but it might also be helpful to store them as one long list so we can plot the data and see trends over time. First, we'll make a new list to store all of the data items:
allYears = []
_____no_output_____
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
And then we'll loop through the dictionary and concatenate each year's data to the allYears list:
for v in data.values(): allYears = allYears + v print(allYears)
[10, 20, 30, 40, 15, 30, 45, 60, 7, 14, 21, 28, 5, 10, 15, 20]
MIT
PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb
peternewman22/Python_Courses
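The same flattening can be written as a single comprehension over the dictionary's values (dictionaries preserve insertion order in Python 3.7+, so the years stay in order):

```python
data = {2009: [10, 20, 30, 40],
        2010: [15, 30, 45, 60],
        2011: [7, 14, 21, 28],
        2012: [5, 10, 15, 20]}

# Flatten every year's list into one long list, in year order
allYears = [value for year_values in data.values() for value in year_values]
print(allYears)
```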
Import modules
import numpy as np from scipy import interpolate import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import os import sys import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) sys.path.append(os.path.abspath(os.path.join('lib', 'xbeach-toolbox', 'scripts'))) from xbeachtools import xb_read_output plt.style.use(os.path.join('lib', 'xbeach-toolbox', 'scripts', 'xb.mplstyle'))
_____no_output_____
MIT
post-processing.ipynb
Sustainable-Science/northbay
Image data prep
catalog_eng= pd.read_csv("/kaggle/input/textphase1/data/catalog_english_taxonomy.tsv",sep="\t") X_train= pd.read_csv("/kaggle/input/textphase1/data/X_train.tsv",sep="\t") Y_train= pd.read_csv("/kaggle/input/textphase1/data/Y_train.tsv",sep="\t") X_test=pd.read_csv("/kaggle/input/textphase1/data/x_test_task1_phase1.tsv",sep="\t") dict_code_to_id = {} dict_id_to_code={} list_tags = list(Y_train['Prdtypecode'].unique()) for i,tag in enumerate(list_tags): dict_code_to_id[tag] = i dict_id_to_code[i]=tag Y_train['labels']=Y_train['Prdtypecode'].map(dict_code_to_id) train=pd.merge(left=X_train,right=Y_train, how='left',left_on=['Integer_id','Image_id','Product_id'], right_on=['Integer_id','Image_id','Product_id']) prod_map=pd.Series(catalog_eng['Top level category'].values,index=catalog_eng['Prdtypecode']).to_dict() train['product']=train['Prdtypecode'].map(prod_map) def get_img_path(img_id,prd_id,path): pattern = 'image'+'_'+str(img_id)+'_'+'product'+'_'+str(prd_id)+'.jpg' return path + pattern train_img = train[['Image_id','Product_id','labels','product']] train_img['image_path']=train_img.progress_apply(lambda x: get_img_path(x['Image_id'],x['Product_id'], path = '/kaggle/input/imagetrain/image_training/'),axis=1) X_test['image_path']=X_test.progress_apply(lambda x: get_img_path(x['Image_id'],x['Product_id'], path='/kaggle/input/imagetest/image_test/image_test_task1_phase1/'),axis=1) train_df, val_df, _, _ = train_test_split(train_img, train_img['labels'],random_state=2020, test_size = 0.1, stratify=train_img['labels']) input_size = 224 # for Resnt # Applying Transforms to the Data from torchvision import datasets, models, transforms image_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)), transforms.RandomRotation(degrees=15), transforms.RandomHorizontalFlip(), transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) 
]), 'valid': transforms.Compose([ transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) } from torch.utils.data import Dataset, DataLoader, Subset import cv2 from PIL import Image class FusionDataset(Dataset): def __init__(self,df,inputs_cam,masks_cam,inputs_flau,masks_flau,transform=None,mode='train'): self.df = df self.transform=transform self.mode=mode self.inputs_cam=inputs_cam self.masks_cam=masks_cam self.inputs_flau=inputs_flau self.masks_flau=masks_flau def __len__(self): return len(self.df) def __getitem__(self,idx): im_path = self.df.iloc[idx]['image_path'] img = cv2.imread(im_path) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img=Image.fromarray(img) if self.transform is not None: img = self.transform(img) img=img.cuda() input_id_cam=self.inputs_cam[idx].cuda() input_mask_cam=self.masks_cam[idx].cuda() input_id_flau=self.inputs_flau[idx].cuda() input_mask_flau=self.masks_flau[idx].cuda() if self.mode=='test': return img,input_id_cam,input_mask_cam,input_id_flau,input_mask_flau else: # labels = torch.tensor(self.df.iloc[idx]['labels']) labels = torch.tensor(self.df.iloc[idx]['labels']).cuda() return img,input_id_cam,input_mask_cam,input_id_flau,input_mask_flau,labels a1 = torch.randn(3,10,10) reduce_dim=nn.Conv1d(in_channels = 10 , out_channels = 1 , kernel_size= 1) reduce_dim(a1).view(3,10).shape class vector_fusion(nn.Module): def __init__(self): super(vector_fusion, self).__init__() self.img_model = SEResnext50_32x4d(pretrained=None) self.img_model.load_state_dict(torch.load('../input/seresnext2048/best_model.pt')) self.img_model.l0=Identity() for params in self.img_model.parameters(): params.requires_grad=False self.cam_model= 
vec_output_CamembertForSequenceClassification.from_pretrained( 'camembert-base', # Use the 12-layer BERT model, with an uncased vocab. num_labels = len(Preprocess.dict_code_to_id), # The number of output labels--2 for binary classification. # You can increase this for multi-class tasks. output_attentions = False, # Whether the model returns attentions weights. output_hidden_states = False,) # Whether the model returns all hidden-states. cam_model_path = '../input/camembert-vec-256m768-10ep/best_model.pt' checkpoint = torch.load(cam_model_path) # model = checkpoint['model'] self.cam_model.load_state_dict(checkpoint) for param in self.cam_model.parameters(): param.requires_grad=False self.cam_model.out_proj=Identity() self.flau_model=vec_output_FlaubertForSequenceClassification.from_pretrained( 'flaubert/flaubert_base_cased', num_labels = len(Preprocess.dict_code_to_id), output_attentions = False, output_hidden_states = False,) flau_model_path='../input/flaubert-8933/best_model.pt' checkpoint = torch.load(flau_model_path) self.flau_model.load_state_dict(checkpoint) for param in self.flau_model.parameters(): param.requires_grad=False self.flau_model.classifier=Identity() self.reduce_dim=nn.Conv1d(in_channels = 2048 , out_channels = 768 , kernel_size= 1) self.reduce_dim2=nn.Conv1d(in_channels = 768 , out_channels = 1 , kernel_size= 1) self.out=nn.Linear(768*3, 27) #gamma # self.w1 = nn.Parameter(torch.zeros(1)) # self.w2 = nn.Parameter(torch.zeros(1)) # self.w3 = nn.Parameter(torch.zeros(1)) def forward(self,img,input_id_cam,input_mask_cam,input_id_flau,input_mask_flau): cam_emb,vec1 =self.cam_model(input_id_cam, token_type_ids=None, attention_mask=input_mask_cam) flau_emb,vec2 =self.flau_model(input_id_flau, token_type_ids=None, attention_mask=input_mask_flau) #Projecting the image embedding to lower dimension img_emb=self.img_model(img) img_emb=img_emb.view(img_emb.shape[0],img_emb.shape[1],1) img_emb=self.reduce_dim(img_emb) 
img_emb=img_emb.view(img_emb.shape[0],img_emb.shape[1]) ###### bs * 768 #summing up the vectors #text_emb = cam_emb[0] + flau_emb[0] #Bilinear #text_emb = text_emb.view(text_emb.shape[0],1,text_emb.shape[1]) ##### bs * 1 * 768 #Bilinear Pooling #pool_emb = torch.bmm(img_emb,text_emb) ### bs * 768 * 768 #pool_emb = self.reduce_dim2(pool_emb).view(text_emb.shape[0],768) #### bs * 1 * 768 fuse= torch.cat([img_emb,cam_emb[0],flau_emb[0]],axis=1) logits=self.out(fuse) return logits model=vector_fusion() model.cuda() train_dataset=FusionDataset(train_df,tr_inputs_cam,tr_masks_cam,tr_inputs_flau,tr_masks_flau,transform=image_transforms['test']) val_dataset=FusionDataset(val_df,val_inputs_cam,val_masks_cam,val_inputs_flau,val_masks_flau,transform=image_transforms['test']) # test_dataset=FusionDataset(X_test,test_inputs,test_makss,transform=image_transforms['test'],mode='test') batch_size=64 train_dataloader=DataLoader(train_dataset,batch_size=batch_size,shuffle=True) validation_dataloader=DataLoader(val_dataset,batch_size=batch_size,shuffle=False) # test_data=DataLoader(test_dataset,batch_size=batch_size,shuffle=False) optimizer = AdamW(model.parameters(), lr = 2e-5, # args.learning_rate - default is 5e-5, our notebook had 2e-5 eps = 1e-8 # args.adam_epsilon - default is 1e-8. ) def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) count_parameters(model) from transformers import get_linear_schedule_with_warmup # Number of training epochs. The BERT authors recommend between 2 and 4. # We chose to run for 4, but we'll see later that this may be over-fitting the # training data. epochs = 3 # Total number of training steps is [number of batches] x [number of epochs]. # (Note that this is not the same as the number of training samples). total_steps = len(train_dataloader) * epochs # Create the learning rate scheduler. 
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, # Default value in run_glue.py num_training_steps = total_steps) import torch.nn as nn loss_criterion = nn.CrossEntropyLoss() def flat_accuracy(preds, labels): pred_flat = np.argmax(preds, axis=1).flatten() labels_flat = labels.flatten() return np.sum(pred_flat == labels_flat) / len(labels_flat) from sklearn.metrics import f1_score seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) # We'll store a number of quantities such as training and validation loss, # validation accuracy, and timings. training_stats = [] # Measure the total training time for the whole run. total_t0 = time.time() # For each epoch... for epoch_i in range(0, epochs): # ======================================== # Training # ======================================== # Perform one full pass over the training set. print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') #tr and val # vec_output_tr = [] # vec_output_val =[] # Measure how long the training epoch takes. t0 = time.time() # Reset the total loss for this epoch. total_train_loss = 0 # Put the model into training mode. Don't be mislead--the call to # `train` just changes the *mode*, it doesn't *perform* the training. # `dropout` and `batchnorm` layers behave differently during training # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch) best_f1 = 0 model.train() # For each batch of training data... for step, batch in tqdm(enumerate(train_dataloader)): # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using the # `to` method. 
# # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels # return img,input_id_cam,input_mask_cam,input_id_flau,input_mask_flau b_img=batch[0].to(device) b_input_id_cam = batch[1].to(device) b_input_mask_cam = batch[2].to(device) b_input_id_flau = batch[3].to(device) b_input_mask_flau = batch[4].to(device) b_labels = batch[5].to(device) model.zero_grad() logits = model(b_img,b_input_id_cam ,b_input_mask_cam,b_input_id_flau,b_input_mask_flau) #Defining the loss loss = loss_criterion(logits, b_labels) #saving the features_tr # vec = vec.detach().cpu().numpy() # vec_output_tr.extend(vec) # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. `loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. total_train_loss += loss.item() # Perform a backward pass to calculate the gradients. loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the "exploding gradients" problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the "update rule"--how the parameters are # modified based on their gradients, the learning rate, etc. optimizer.step() # Update the learning rate. scheduler.step() # Calculate the average loss over all of the batches. avg_train_loss = total_train_loss / len(train_dataloader) # Measure how long this epoch took. training_time = format_time(time.time() - t0) print("") print(" Average training loss: {0:.2f} ".format(avg_train_loss)) print(" Training epcoh took: {:} ".format(training_time)) # ======================================== # Validation # ======================================== # After the completion of each training epoch, measure our performance on # our validation set. 
print("") print("Running Validation...") t0 = time.time() # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() # Tracking variables total_eval_accuracy = 0 total_eval_loss = 0 nb_eval_steps = 0 predictions=[] true_labels=[] # Evaluate data for one epoch for batch in tqdm(validation_dataloader): # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using # the `to` method. # # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_img=batch[0].to(device) b_input_id_cam = batch[1].to(device) b_input_mask_cam = batch[2].to(device) b_input_id_flau = batch[3].to(device) b_input_mask_flau = batch[4].to(device) b_labels = batch[5].to(device) # Tell pytorch not to bother with constructing the compute graph during # the forward pass, since this is only needed for backprop (training). with torch.no_grad(): # Forward pass, calculate logit predictions. # token_type_ids is the same as the "segment ids", which # differentiates sentence 1 and 2 in 2-sentence tasks. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification # Get the "logits" output by the model. The "logits" are the output # values prior to applying an activation function like the softmax. logits = model(b_img,b_input_id_cam ,b_input_mask_cam,b_input_id_flau,b_input_mask_flau) #new #defining the val loss loss = loss_criterion(logits, b_labels) # Accumulate the validation loss. 
total_eval_loss += loss.item() # Move logits and labels to CPU logits = logits.detach().cpu().numpy() # Move logits and labels to CPU predicted_labels=np.argmax(logits,axis=1) predictions.extend(predicted_labels) label_ids = b_labels.to('cpu').numpy() true_labels.extend(label_ids) #saving the features_tr # vec = vec.detach().cpu().numpy() # vec_output_val.extend(vec) # Calculate the accuracy for this batch of test sentences, and # accumulate it over all batches. total_eval_accuracy += flat_accuracy(logits, label_ids) # Report the final accuracy for this validation run. avg_val_accuracy = total_eval_accuracy / len(validation_dataloader) print(" Accuracy: {0:.2f}".format(avg_val_accuracy)) # Calculate the average loss over all of the batches. avg_val_loss = total_eval_loss / len(validation_dataloader) # Measure how long the validation run took. validation_time = format_time(time.time() - t0) print(" Validation Loss: {0:.2f}".format(avg_val_loss)) print(" Validation took: {:}".format(validation_time)) print("Validation F1-Score: {}".format(f1_score(true_labels,predictions,average='macro'))) curr_f1=f1_score(true_labels,predictions,average='macro') if curr_f1 > best_f1: best_f1=curr_f1 torch.save(model.state_dict(), 'best_model.pt') # np.save('best_vec_train_model_train.npy',vec_output_tr) # np.save('best_vec_val.npy',vec_output_val) # Record all statistics from this epoch. # training_stats.append( # { # 'epoch': epoch_i + 1, # 'Training Loss': avg_train_loss, # 'Valid. Loss': avg_val_loss, # 'Valid. Accur.': avg_val_accuracy, # 'Training Time': training_time, # 'Validation Time': validation_time # } # ) print("") print("Training complete!") print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0))) from sklearn.metrics import f1_score print("Validation F1-Score: {}".format(f1_score(true_labels,predictions,average='macro')))
Validation F1-Score: 0.9093939540881769
MIT
multi-modal_concatenate_fusion.ipynb
depshad/Deep-Learning-Framework-for-Multi-modal-Product-Classification
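The fusion step in the model above is a plain concatenation of three 768-dimensional embeddings (image, CamemBERT, FlauBERT) feeding nn.Linear(768*3, 27); the shape arithmetic can be sketched with NumPy stand-ins for the three modality embeddings:

```python
import numpy as np

batch = 4
# Random stand-ins for the three modality embeddings in the model above
img_emb = np.random.randn(batch, 768)
cam_emb = np.random.randn(batch, 768)
flau_emb = np.random.randn(batch, 768)

# Late fusion: concatenate along the feature axis before the classifier head
fused = np.concatenate([img_emb, cam_emb, flau_emb], axis=1)
print(fused.shape)  # feature width is 768 * 3 = 2304, matching nn.Linear(768*3, 27)
```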
Optimization

Things to try:
- change the number of samples
- with and without bias
- with and without regularization
- changing the number of layers
- changing the amount of noise
- change number of degrees
- look at parameter values (high) in OLS
- train network for many epochs
from fastprogress.fastprogress import progress_bar import torch import matplotlib.pyplot as plt from jupyterthemes import jtplot jtplot.style(context="talk") def plot_regression_data(model=None, MSE=None, poly_deg=0): # Plot the noisy scatter points and the "true" function plt.scatter(x_train, y_train, label="Noisy Samples") plt.plot(x_true, y_true, "--", label="True Function") # Plot the model's learned regression function if model: x = x_true.unsqueeze(-1) x = x.pow(torch.arange(poly_deg + 1)) if poly_deg else x with torch.no_grad(): yhat = model(x) plt.plot(x_true, yhat, label="Learned Function") plt.xlim([min_x, max_x]) plt.ylim([-5, 5]) plt.legend() if MSE: plt.title(f"MSE = ${MSE}$")
_____no_output_____
CC0-1.0
lectures/l15-optimization-part1.ipynb
davidd-55/cs152fa21
Create Fake Training Data
def fake_y(x, add_noise=False): y = 10 * x ** 3 - 5 * x return y + torch.randn_like(y) * 0.5 if add_noise else y N = 20 min_x, max_x = -1, 1 x_true = torch.linspace(min_x, max_x, 100) y_true = fake_y(x_true) x_train = torch.rand(N) * (max_x - min_x) + min_x y_train = fake_y(x_train, add_noise=True) plot_regression_data()
_____no_output_____
CC0-1.0
lectures/l15-optimization-part1.ipynb
davidd-55/cs152fa21
Train A Simple Linear Model Using Batch GD
# Hyperparameters learning_rate = 0.1 num_epochs = 100 # Model parameters m = torch.randn(1, requires_grad=True) b = torch.zeros(1, requires_grad=True) params = (b, m) # Torch utils criterion = torch.nn.MSELoss() optimizer = torch.optim.SGD(params, lr=learning_rate) # Regression for epoch in range(num_epochs): # Model yhat = m * x_train + b # Update parameters optimizer.zero_grad() loss = criterion(yhat, y_train) loss.backward() optimizer.step() plot_regression_data(lambda x: m * x + b, MSE=loss.item())
_____no_output_____
CC0-1.0
lectures/l15-optimization-part1.ipynb
davidd-55/cs152fa21
Train Linear Regression Model Using Batch GD
# Hyperparameters
learning_rate = 0.1
num_epochs = 1000

# Model parameters
w2 = torch.randn(1, requires_grad=True)
w1 = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
params = (b, w1, w2)

# Torch utils
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(params, lr=learning_rate)

# Regression
for epoch in range(num_epochs):
    # Model
    yhat = b + w1 * x_train + w2 * x_train ** 2

    # Update parameters
    optimizer.zero_grad()
    loss = criterion(yhat, y_train)
    loss.backward()
    optimizer.step()

plot_regression_data(lambda x: b + w1 * x + w2 * x ** 2, MSE=loss.item())
_____no_output_____
CC0-1.0
lectures/l15-optimization-part1.ipynb
davidd-55/cs152fa21
Train Complex Linear Regression Model Using Batch GD
# Hyperparameters
learning_rate = 0.1
num_epochs = 1000

# Model parameters
degrees = 50  # 3, 4, 16, 32, 64, 128
powers = torch.arange(degrees + 1)
x_poly = x_train.unsqueeze(-1).pow(powers)
params = torch.randn(degrees + 1, requires_grad=True)

# Torch utils
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD([params], lr=learning_rate)

# Regression
for epoch in range(num_epochs):
    # Model
    yhat = x_poly @ params

    # Update parameters
    optimizer.zero_grad()
    loss = criterion(yhat, y_train)
    loss.backward()
    optimizer.step()

plot_regression_data(lambda x: x @ params, poly_deg=degrees, MSE=loss.item())

params
_____no_output_____
CC0-1.0
lectures/l15-optimization-part1.ipynb
davidd-55/cs152fa21
Compute Linear Regression Model Using Ordinary Least Squares
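For reference, the closed form used in the next cell is the ordinary least squares solution, obtained by setting the gradient of the squared error to zero:

$$\mathcal{L}(\theta) = \lVert X\theta - y \rVert^2, \qquad \nabla_\theta \mathcal{L} = 2X^\top(X\theta - y) = 0 \;\Rightarrow\; \hat{\theta} = (X^\top X)^{-1} X^\top y$$

Here $X$ is the `x_poly` design matrix and $\hat{\theta}$ corresponds to `params`.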
params = ((x_poly.T @ x_poly).inverse() @ x_poly.T) @ y_train
mse = torch.nn.functional.mse_loss(x_poly @ params, y_train)

plot_regression_data(lambda x: x @ params, poly_deg=degrees, MSE=mse)

# params
params
_____no_output_____
CC0-1.0
lectures/l15-optimization-part1.ipynb
davidd-55/cs152fa21
Train Neural Network Model Using Batch GD
# Hyperparameters
learning_rate = 0.01
num_epochs = 100000
regularization = 1e-2

# Model parameters
model = torch.nn.Sequential(
    torch.nn.Linear(1, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 1),
)

# Torch utils
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(
    model.parameters(), lr=learning_rate, weight_decay=regularization
)

# Training
for epoch in progress_bar(range(num_epochs)):
    # Model
    yhat = model(x_train.unsqueeze(-1))

    # Update parameters
    optimizer.zero_grad()
    loss = criterion(yhat.squeeze(), y_train)
    loss.backward()
    optimizer.step()

plot_regression_data(model, MSE=loss.item())

for param in model.parameters():
    print(param.mean())
tensor(-0.0212, grad_fn=<MeanBackward0>)
tensor(0.0280, grad_fn=<MeanBackward0>)
tensor(-0.0008, grad_fn=<MeanBackward0>)
tensor(-0.0142, grad_fn=<MeanBackward0>)
tensor(0.0008, grad_fn=<MeanBackward0>)
tensor(-0.0032, grad_fn=<MeanBackward0>)
tensor(0.0359, grad_fn=<MeanBackward0>)
tensor(-0.1043, grad_fn=<MeanBackward0>)
CC0-1.0
lectures/l15-optimization-part1.ipynb
davidd-55/cs152fa21
Hybrid Neural Net to Solve a Regression Problem

We use a neural net with a quantum layer to predict second-half betting lines, given the result of the first half and the opening line. The quantum layer uses 8 qubits, and the classical layers come from Keras.
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import pennylane as qml
import warnings

warnings.filterwarnings('ignore')
tf.keras.backend.set_floatx('float64')

### Predict the 2nd half line using the 1st half total and the opening line ###
df1 = pd.read_csv("nfl_odds.csv")
df1['1H'] = df1['1st'] + df1['2nd']
df2 = pd.read_csv('bet.csv')
df = df1.merge(df2, left_on='Team', right_on='Tm')
df = df[['1H', 'Open', 'TO%', 'PF', 'Yds', 'ML', '2H']]
df.head()

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 4
weight_shapes = {"weights": (n_layers, n_qubits)}
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

clayer_1 = tf.keras.layers.Dense(8, activation="relu")
clayer_2 = tf.keras.layers.Dense(2, activation="relu")
model = tf.keras.models.Sequential([clayer_1, qlayer, clayer_2])

opt = tf.keras.optimizers.SGD(learning_rate=0.2)
model.compile(opt, loss="mae", metrics=["mean_absolute_error"])

# Drop 'pk' (pick'em) rows so the lines can be cast to float
df = df[df.Open != 'pk']
df = df[df['2H'] != 'pk']
df['Open'] = df['Open'].astype(float)
df['2H'] = df['2H'].astype(float)

X = df[['1H', 'Open', 'TO%', 'PF', 'Yds', 'ML']]
y = df['2H']
X = np.asarray(X).astype(np.float32)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=0)

scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(X_train)
X_train = scaler.transform(X_train)

fitting = model.fit(X_train, y_train, epochs=10, batch_size=5, validation_split=0.15, verbose=2)

X_test = scaler.transform(X_test)
preds = model.predict(X_test)
pred = pd.DataFrame(preds, columns=['prediction1', 'prediction2'])
pred = pred[(pred.prediction1 > 0) &
            (pred.prediction1 < 30)]

y_test = y_test.reset_index()
y_test = y_test[y_test['2H'] > 6]

compare = pd.concat([pred, y_test], axis=1)
compare = compare.drop('index', axis=1)
compare.dropna()
_____no_output_____
MIT
Loo Boys/PythonNotebooks/betting.ipynb
ushnishray/Hackathon2021
Classical NN (Benchmarking)

The MAE is twice as large for the purely classical NN. The quantum layer is helping the solution converge more quickly! (As an aside, the quantum NN takes a lot longer to run.)
clayer_1 = tf.keras.layers.Dense(8, activation="relu")
clayer_2 = tf.keras.layers.Dense(2, activation="relu")
model = tf.keras.models.Sequential([clayer_1, clayer_2])

opt = tf.keras.optimizers.SGD(learning_rate=0.2)
model.compile(opt, loss="mae", metrics=["mean_absolute_error"])

df = df[df.Open != 'pk']
df = df[df['2H'] != 'pk']
df['Open'] = df['Open'].astype(float)
df['2H'] = df['2H'].astype(float)

X = df[['1H', 'Open', 'TO%', 'PF', 'Yds', 'ML']]
y = df['2H']
X = np.asarray(X).astype(np.float32)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=0)

scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(X_train)
X_train = scaler.transform(X_train)

fitting = model.fit(X_train, y_train, epochs=15, batch_size=10, validation_split=0.15, verbose=2)
Epoch 1/15
37/37 - 1s - loss: 10.3219 - mean_absolute_error: 10.3219 - val_loss: 9.8135 - val_mean_absolute_error: 9.8135
Epoch 2/15
37/37 - 0s - loss: 10.2575 - mean_absolute_error: 10.2575 - val_loss: 9.8118 - val_mean_absolute_error: 9.8118
Epoch 3/15
37/37 - 0s - loss: 10.2742 - mean_absolute_error: 10.2742 - val_loss: 9.8250 - val_mean_absolute_error: 9.8250
Epoch 4/15
37/37 - 0s - loss: 10.2460 - mean_absolute_error: 10.2460 - val_loss: 10.3656 - val_mean_absolute_error: 10.3656
Epoch 5/15
37/37 - 0s - loss: 10.1914 - mean_absolute_error: 10.1914 - val_loss: 9.8487 - val_mean_absolute_error: 9.8487
Epoch 6/15
37/37 - 0s - loss: 10.2714 - mean_absolute_error: 10.2714 - val_loss: 10.2363 - val_mean_absolute_error: 10.2363
Epoch 7/15
37/37 - 0s - loss: 10.3317 - mean_absolute_error: 10.3317 - val_loss: 10.0592 - val_mean_absolute_error: 10.0592
Epoch 8/15
37/37 - 0s - loss: 10.2152 - mean_absolute_error: 10.2152 - val_loss: 9.8159 - val_mean_absolute_error: 9.8159
Epoch 9/15
37/37 - 0s - loss: 10.2130 - mean_absolute_error: 10.2130 - val_loss: 9.8297 - val_mean_absolute_error: 9.8297
Epoch 10/15
37/37 - 0s - loss: 10.2410 - mean_absolute_error: 10.2410 - val_loss: 9.8285 - val_mean_absolute_error: 9.8285
Epoch 11/15
37/37 - 0s - loss: 10.2607 - mean_absolute_error: 10.2607 - val_loss: 9.8165 - val_mean_absolute_error: 9.8165
Epoch 12/15
37/37 - 0s - loss: 10.2595 - mean_absolute_error: 10.2595 - val_loss: 10.1155 - val_mean_absolute_error: 10.1155
Epoch 13/15
37/37 - 0s - loss: 10.2000 - mean_absolute_error: 10.2000 - val_loss: 9.9506 - val_mean_absolute_error: 9.9506
Epoch 14/15
37/37 - 0s - loss: 10.2299 - mean_absolute_error: 10.2299 - val_loss: 10.1072 - val_mean_absolute_error: 10.1072
Epoch 15/15
37/37 - 0s - loss: 10.1927 - mean_absolute_error: 10.1927 - val_loss: 9.8151 - val_mean_absolute_error: 9.8151
MIT
Loo Boys/PythonNotebooks/betting.ipynb
ushnishray/Hackathon2021
1-5.2 Python Intro

conditionals, type, and mathematics extended

- conditionals: `elif`
- casting
- **basic math operators**

-----

Student will be able to

- code more than two choices using `elif`
- gather numeric input using type casting
- **perform subtraction, multiplication and division operations in code**

&nbsp;

Concepts

Math basic operators

- `+` addition
- `-` subtraction
- `*` multiplication
- `/` division

[![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/5bc97f7e-3015-4178-ac20-371a5302def1/Unit1_Section5.2-Math-operators.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/5bc97f7e-3015-4178-ac20-371a5302def1/Unit1_Section5.2-Math-operators.vtt","srclang":"en","kind":"subtitles","label":"english"}])

&nbsp;

Examples
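As a quick supplementary note (not one of the lesson's tasks): in Python 3 the division operator `/` always produces a `float`, even when both operands are whole numbers:

```python
# the four basic math operators
print(3 + 5)       # addition
print(43 - 15)     # subtraction
print(5 * 5)       # multiplication
print(48 / 9)      # division always returns a float in Python 3
print(type(48 / 9).__name__)
```

This is why the tasks below sometimes show results like `3.0` rather than `3`.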
# [ ] review and run example
print("3 + 5 =", 3 + 5)
print("3 + 5 - 9 =", 3 + 5 - 9)
print("48/9 =", 48/9)
print("5*5 =", 5*5)
print("(14 - 8)*(19/4) =", (14 - 8)*(19/4))

# [ ] review and run example - 'million_maker'
def million_maker():
    make_big = input("enter a non-decimal number you wish were bigger: ")
    return int(make_big)*1000000

print("Now you have", million_maker())
enter a non-decimal number you wish were bigger: 5
Now you have 5000000
MIT
Python Absolute Beginner/Module_3_5_Absolute_Beginner.ipynb
sdavi187/pythonteachingcode
&nbsp; Task 1 use math operators to solve the set of tasks below
# [ ] print the result of subtracting 15 from 43
print(43 - 15)

# [ ] print the result of multiplying 15 and 43
print(15*43)

# [ ] print the result of dividing 156 by 12
print(156/12)

# [ ] print the result of dividing 21 by 0.5
print(21/0.5)

# [ ] print the result of adding 111 plus 84 and then subtracting 45
print(111 + 84 - 45)

# [ ] print the result of adding 21 and 4 and then multiplying that sum by 4
print((21 + 4)*4)
100
MIT
Python Absolute Beginner/Module_3_5_Absolute_Beginner.ipynb
sdavi187/pythonteachingcode
&nbsp; Task 2

Program: Multiplying Calculator Function

- define function **`multiply()`**, and within the function:
  - gets user input() of 2 *strings* made of whole numbers
  - cast the input to **`int()`**
  - multiply the integers and **return** the equation with result as a **`str()`**
- **return** example
```python
9 * 13 = 117
```
# [ ] create and test multiply() function
def multiply():
    num_1 = input("Enter a whole number:")
    num_2 = input("Enter a second whole number:")
    return str(int(num_1) * int(num_2))

print(multiply() + " is a string")
Enter a whole number:23
Enter a second whole number:34
782 is a string
MIT
Python Absolute Beginner/Module_3_5_Absolute_Beginner.ipynb
sdavi187/pythonteachingcode
&nbsp; Task 3 Project: Improved Multiplying Calculator Function putting together conditionals, input casting and math- update the multiply() function to multiply or divide - single parameter is **`operator`** with arguments of **`*`** or **`/`** operator - default operator is "*" (multiply) - **return** the result of multiplication or division - if operator other than **`"*"`** or **`"/"`** then **` return "Invalid Operator"`**
# [ ] create improved multiply() function and test with /, no argument, and an invalid operator ($)
def multiply(operator="*"):
    num_1 = input("Enter a whole number:")
    num_2 = input("Enter a second whole number:")
    if operator == "*":
        return str(int(num_1) * int(num_2))
    elif operator == "/":
        return str(int(num_1) / int(num_2))
    else:
        return "Invalid Operator"

ops = input("Would you like to multiply (m) or divide (d)?")
if ops == "m":
    print(multiply("*"))
elif ops == "d":
    print(multiply("/"))
else:
    print("Invalid operator")
Would you like to multiply (m) or divide (d)?d
Enter a whole number:3
Enter a second whole number:1
3.0
MIT
Python Absolute Beginner/Module_3_5_Absolute_Beginner.ipynb
sdavi187/pythonteachingcode
&nbsp; Task 4 Fix the Errors
# Review, run, fix
student_name = input("enter name: ").capitalize()
if student_name.startswith("F"):
    print(student_name, "Congratulations, names starting with 'F' get to go first today!")
elif student_name.startswith("G"):
    print(student_name, "Congratulations, names starting with 'G' get to go second today!")
else:
    print(student_name, "please wait for students with names starting with 'F' and 'G' to go first today.")
enter name: frank
Frank Congratulations, names starting with 'F' get to go first today!
MIT
Python Absolute Beginner/Module_3_5_Absolute_Beginner.ipynb
sdavi187/pythonteachingcode
1. Import packages
from keras.models import Sequential, Model
from keras.layers import *
from keras.layers.advanced_activations import LeakyReLU
from keras.activations import relu
from keras.initializers import RandomNormal
from keras.applications import *
import keras.backend as K
from tensorflow.contrib.distributions import Beta
import tensorflow as tf
from keras.optimizers import Adam

from image_augmentation import random_transform
from image_augmentation import random_warp
from utils import get_image_paths, load_images, stack_images
from pixel_shuffler import PixelShuffler

import time
import numpy as np
from PIL import Image
import cv2
import glob
from random import randint, shuffle
from IPython.display import clear_output
from IPython.display import display

import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
MIT
0-newbooks/faceswap-GAN/FaceSwap_GAN_v2_test_img.ipynb
gopala-kr/ds-notebooks
4. Config

mixup paper: https://arxiv.org/abs/1710.09412

Default training data directories: `./faceA/` and `./faceB/`
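For context, mixup blends pairs of training samples with a Beta-distributed coefficient. The sketch below is a minimal NumPy illustration of the idea only; the actual training code in this notebook uses `tf.contrib.distributions.Beta`, and the `alpha=0.2` default simply mirrors the `mixup_alpha` value in the config cell below.

```python
import numpy as np

def mixup(x1, x2, alpha=0.2):
    # Draw a mixing coefficient from Beta(alpha, alpha) and blend the
    # two batches (idea from https://arxiv.org/abs/1710.09412).
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2

batch_a = np.zeros((4, 64, 64, 3))
batch_b = np.ones((4, 64, 64, 3))
mixed = mixup(batch_a, batch_b)
# every pixel lies between the corresponding pixels of the two inputs
print(mixed.shape, mixed.min() >= 0.0, mixed.max() <= 1.0)
```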
K.set_learning_phase(1)

channel_axis = -1
channel_first = False

IMAGE_SHAPE = (64, 64, 3)
nc_in = 3  # number of input channels of generators
nc_D_inp = 6  # number of input channels of discriminators

use_perceptual_loss = False
use_lsgan = True
use_instancenorm = False
use_mixup = True
mixup_alpha = 0.2  # 0.2

batchSize = 32
lrD = 1e-4  # Discriminator learning rate
lrG = 1e-4  # Generator learning rate

# Path of training images
img_dirA = './faceA/*.*'
img_dirB = './faceB/*.*'
_____no_output_____
MIT
0-newbooks/faceswap-GAN/FaceSwap_GAN_v2_test_img.ipynb
gopala-kr/ds-notebooks
5. Define models
from model_GAN_v2 import *

encoder = Encoder()
decoder_A = Decoder_ps()
decoder_B = Decoder_ps()

x = Input(shape=IMAGE_SHAPE)

netGA = Model(x, decoder_A(encoder(x)))
netGB = Model(x, decoder_B(encoder(x)))

netDA = Discriminator(nc_D_inp)
netDB = Discriminator(nc_D_inp)
_____no_output_____
MIT
0-newbooks/faceswap-GAN/FaceSwap_GAN_v2_test_img.ipynb
gopala-kr/ds-notebooks
6. Load Models
try:
    encoder.load_weights("models/encoder.h5")
    decoder_A.load_weights("models/decoder_A.h5")
    decoder_B.load_weights("models/decoder_B.h5")
    #netDA.load_weights("models/netDA.h5")
    #netDB.load_weights("models/netDB.h5")
    print("model loaded.")
except:
    print("Weights file not found.")
    pass
model loaded.
MIT
0-newbooks/faceswap-GAN/FaceSwap_GAN_v2_test_img.ipynb
gopala-kr/ds-notebooks
7. Define Inputs/Outputs Variables

- distorted_A: A (batch_size, 64, 64, 3) tensor, input of generator_A (netGA).
- distorted_B: A (batch_size, 64, 64, 3) tensor, input of generator_B (netGB).
- fake_A: A (batch_size, 64, 64, 3) tensor, output of generator_A (netGA).
- fake_B: A (batch_size, 64, 64, 3) tensor, output of generator_B (netGB).
- mask_A: A (batch_size, 64, 64, 1) tensor, mask output of generator_A (netGA).
- mask_B: A (batch_size, 64, 64, 1) tensor, mask output of generator_B (netGB).
- path_A: A function that takes distorted_A as input and outputs fake_A.
- path_B: A function that takes distorted_B as input and outputs fake_B.
- path_mask_A: A function that takes distorted_A as input and outputs mask_A.
- path_mask_B: A function that takes distorted_B as input and outputs mask_B.
- path_abgr_A: A function that takes distorted_A as input and outputs concat([mask_A, fake_A]).
- path_abgr_B: A function that takes distorted_B as input and outputs concat([mask_B, fake_B]).
- real_A: A (batch_size, 64, 64, 3) tensor, target images for generator_A given input distorted_A.
- real_B: A (batch_size, 64, 64, 3) tensor, target images for generator_B given input distorted_B.
def cycle_variables(netG):
    distorted_input = netG.inputs[0]
    fake_output = netG.outputs[0]
    alpha = Lambda(lambda x: x[:,:,:, :1])(fake_output)
    rgb = Lambda(lambda x: x[:,:,:, 1:])(fake_output)

    masked_fake_output = alpha * rgb + (1-alpha) * distorted_input

    fn_generate = K.function([distorted_input], [masked_fake_output])
    fn_mask = K.function([distorted_input], [concatenate([alpha, alpha, alpha])])
    fn_abgr = K.function([distorted_input], [concatenate([alpha, rgb])])
    return distorted_input, fake_output, alpha, fn_generate, fn_mask, fn_abgr

distorted_A, fake_A, mask_A, path_A, path_mask_A, path_abgr_A = cycle_variables(netGA)
distorted_B, fake_B, mask_B, path_B, path_mask_B, path_abgr_B = cycle_variables(netGB)
real_A = Input(shape=IMAGE_SHAPE)
real_B = Input(shape=IMAGE_SHAPE)
_____no_output_____
MIT
0-newbooks/faceswap-GAN/FaceSwap_GAN_v2_test_img.ipynb
gopala-kr/ds-notebooks
11. Helper Function: face_swap()

This function is provided for those who don't have enough VRAM to run dlib's CNN and the GAN model at the same time.

INPUTS:
- img: An RGB face image of any size.
- path_func: a function that is either path_abgr_A or path_abgr_B.

OUTPUTS:
- result_img: An RGB swapped face image after masking.
- result_mask: A single-channel uint8 mask image.
def swap_face(img, path_func):
    input_size = img.shape
    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)  # generator expects BGR input
    ae_input = cv2.resize(img, (64, 64))/255. * 2 - 1

    result = np.squeeze(np.array([path_func([[ae_input]])]))
    result_a = result[:,:,0] * 255
    result_bgr = np.clip((result[:,:,1:] + 1) * 255 / 2, 0, 255)
    result_a = np.expand_dims(result_a, axis=2)
    result = (result_a/255 * result_bgr + (1 - result_a/255) * ((ae_input + 1) * 255 / 2)).astype('uint8')
    #result = np.clip( (result + 1) * 255 / 2, 0, 255 ).astype('uint8')

    result = cv2.cvtColor(result, cv2.COLOR_BGR2RGB)
    result = cv2.resize(result, (input_size[1], input_size[0]))
    result_a = np.expand_dims(cv2.resize(result_a, (input_size[1], input_size[0])), axis=2)
    return result, result_a

whom2whom = "BtoA"  # default: transforming faceB to faceA

if whom2whom == "AtoB":
    path_func = path_abgr_B
elif whom2whom == "BtoA":
    path_func = path_abgr_A
else:
    print("whom2whom should be either AtoB or BtoA")

input_img = plt.imread("./IMAGE_FILENAME.jpg")

plt.imshow(input_img)

result_img, result_mask = swap_face(input_img, path_func)

plt.imshow(result_img)

plt.imshow(result_mask[:, :, 0])  # cmap='gray'
_____no_output_____
MIT
0-newbooks/faceswap-GAN/FaceSwap_GAN_v2_test_img.ipynb
gopala-kr/ds-notebooks
In this notebook I'm going to generate simulated MIRI time series imaging data, to provide as a test set for ESA Datalabs. To install MIRISim, see [the public release webpage](http://miri.ster.kuleuven.be/bin/view/Public/MIRISim_Public).

The target for the mock observations is WASP-103, an exoplanet host star with the following properties from [the exoplanet encyclopaedia](http://exoplanet.eu/catalog/wasp-103_b/):
* spectral type F8V
* T_bb = 6110 K
* V = 12.0, K = 10.7

A K magnitude of 10.7 corresponds to a flux of 32.5 mJy, or 32.5e3 microJy.

Using the ETC, I calculated the following number of groups for a high-SNR but unsaturated image:
* FULL array: NGROUPS = 5
* SUB64 subarray: NGROUPS = 60

We want to simulate a medium-length exposure in both the FULL and SUB64 arrays. In total that's 2 simulations.

| Sim no | Array | NGroups | NInt | NExp | Exp time |
| -------|-------|---------|------|------|----------|
| 1      | FULL  | 5       | 200  | 1    | 0.77 hr  |
| 2      | SUB64 | 60      | 600  | 1    | 0.85 hr  |

Steps in setting up the simulation

This notebook will go through the following steps:
* Create the scene
* Set up the simulation
* Run the simulation

Each step has its own function. Steps 1 and 2 will each write out a .ini file, which will be used as input for the final step.
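The exposure times in the table can be sanity-checked with a back-of-the-envelope formula, t_exp ≈ NGroups × NInt × t_frame (resets and other overheads ignored). Note the frame-time values below are my assumption of the nominal MIRI imager values for these readout modes, not taken from the MIRISim configuration:

```python
# Rough exposure-time check: t_exp ~ ngroups * nints * t_frame (overheads ignored).
# Frame times are assumed nominal MIRI imager values, not read from MIRISim.
frame_time = {"FULL": 2.775, "SUB64": 0.085}  # seconds per frame

for array, ngroups, nints in [("FULL", 5, 200), ("SUB64", 60, 600)]:
    t_exp_hr = ngroups * nints * frame_time[array] / 3600.0
    print(f"{array}: {t_exp_hr:.2f} hr")
```

This reproduces the 0.77 hr and 0.85 hr entries in the table above.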
arr = ['FULL', 'SUB64']
ngrp = [5, 60]
#nints = [200, 600]
nints = [1, 1]
_____no_output_____
BSD-3-Clause
TSO-imaging-sims/datalabs-sim/MIRI_im_tso_datalabs.ipynb
STScI-MIRI/TSO-MIRI-simulations
Step 1: Creating the input scene (WASP-103)

Here we'll create the input scene for the simulations using the function wasp103_scene().

Arguments:
* scene_file: the filename for the .ini file
* write_cube: write the scene image out to a FITS file (optional; default=False)

The function returns a mirisim.skysim.scenes.CompositeSkyScene object.
scene_ini = wasp103_scene(scene_file='wasp103_scene.ini', write_cube=False)
print(scene_ini)
wasp103_scene.ini
BSD-3-Clause
TSO-imaging-sims/datalabs-sim/MIRI_im_tso_datalabs.ipynb
STScI-MIRI/TSO-MIRI-simulations
Step 2: Configuring the simulation

Now I'll set up the simulations and prepare to run them, looping through the 2 simulations. For this I wrote the function wasp103_sim_config(); check its docstring for descriptions and default values of the arguments. The function will write out another .ini file containing the simulation configuration, and it returns the output filename for further use.
#reload(tso_img_sims_setup)
#from tso_img_sims_setup import wasp103_sim_config

for (a, g, i) in zip(arr, ngrp, nints):
    sim_ini = wasp103_sim_config(mode='imaging', arr=a, ngrp=g, nint=i, nexp=1,
                                 filt='F770W', scene_file=scene_ini, out=True)
    print(sim_ini)
Found scene file wasp103_scene.ini
wasp103_FULL_5G1I1E_simconfig.ini exists, overwrite (y/[n])?y
wasp103_FULL_5G1I1E_simconfig.ini
Found scene file wasp103_scene.ini
wasp103_SUB64_60G1I1E_simconfig.ini exists, overwrite (y/[n])?y
wasp103_SUB64_60G1I1E_simconfig.ini
BSD-3-Clause
TSO-imaging-sims/datalabs-sim/MIRI_im_tso_datalabs.ipynb
STScI-MIRI/TSO-MIRI-simulations