# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''base'': conda)' # language: python # name: python3 # --- import pandas as pd import re df = pd.read_csv('GoodReads_100k_books.csv').dropna() regex = re.compile('[®™ÙŠØ©Ð§‡Œ¯ƒŸ]') def regex_filter(val): if val: mo = re.search(regex, val) if mo: return False else: return True else: return True df = df[df['desc'].apply(regex_filter)] df df = df[df['bookformat'].apply(regex_filter)] df['bookformat'] = df['bookformat'].str.title() df = df[df['bookformat'] != '³Ãÿック'] df
books/cleaning_data.ipynb
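A minimal, self-contained sketch of the filtering idea in the cleaning notebook above — the two-column frame and the reduced character class below are hypothetical stand-ins for the real CSV and the notebook's full mojibake regex:

```python
import re
import pandas as pd

# Hypothetical mini-frame standing in for GoodReads_100k_books.csv
df = pd.DataFrame({
    "desc": ["A readable description", "Garbled text ®™", None],
    "bookformat": ["Paperback", "Hardcover", "Ebook"],
})

# Subset of the notebook's mojibake character class
bad_chars = re.compile("[®™]")

def keep_row(val):
    # Keep non-strings (e.g. NaN) and strings free of garbled characters
    if not isinstance(val, str):
        return True
    return bad_chars.search(val) is None

clean = df[df["desc"].apply(keep_row)]
```

The same `keep_row` predicate can then be applied to the `bookformat` column, as the notebook does.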
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W2D1_DeepLearning/student/W2D1_Intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> &nbsp; <a href="https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D1_DeepLearning/student/W2D1_Intro.ipynb" target="_parent"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open in Kaggle"/></a> # + [markdown] pycharm={"name": "#%% md\n"} # # Intro # - # **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** # # <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p> # + [markdown] pycharm={"name": "#%% md\n"} # ## Overview # # This day introduces you to some of the applications of deep learning in neuroscience. In the intro, Aude Oliva covers the basics of convolutional neural networks trained to do image recognition and how to compare these artificial neural networks to neural activity in the brain. In the three tutorials, we apply deep learning principles in three key ways they are used in neuroscience: decoding models, encoding models, and representational similarity analysis. In each of the tutorials we use the same neural activity which was recorded from the visual cortex of awake mice while the mice were presented oriented grating stimuli. In tutorial 1, we start with even simpler neural networks consisting of fully connected linear layers. We introduce non-linear activation functions and how to optimize these deep networks using pytorch and back-propagation. 
We optimize the network to decode the presented visual stimulus from the recorded neural activity in visual cortex. Next, in tutorial 2, we introduce convolutional layers, the building blocks of networks for visual tasks. The bonus in that tutorial is to fit an encoding model from visual stimuli to neural activity. Finally, in tutorial 3, we optimize a convolutional neural network to perform an orientation discrimination task and compare the internal representations of the artificial neural networks to neural activity using a technique called representational similarity analysis. In the outro, the caveats of treating neural activity like a deep convolutional neural network are introduced and explored, including approaches to make deep networks more biologically plausible. In the second optional outro, deep learning is used to perform pose estimation of infants and used to make clinical judgments. # # There is a growing need for data analysis tools as neuroscientists gain the ability to record larger neural populations during more complex behaviors. Deep neural networks can approximate a wide range of non-linear functions and can be easily fit, allowing them to be flexible model architectures for building decoding and encoding models of large-scale data. Generalized linear models were used as decoding and encoding models in W1D4 Machine Learning. A model that decodes a variable from neural activity can tell us how much information a brain area contains about that variable. An encoding model is a model from an input variable, like visual stimulus, to neural activity. The encoding model is meant to approximate the same transformation that the brain performs on input variables and therefore help us understand how the brain represents information. # # The final application of deep neural networks in tutorial 3 is the most common one in neuroscience currently and involves comparing the activity of artificial neural networks to brain activity. 
Since deep convolutional neural networks are the only types of models that can perform at human accuracy on visual tasks like object recognition, it can often make sense to use them as starting points for comparison with neural data. This comparison can be done at a variety of scales, such as at the population level as in the tutorial, or at the level of single neurons and single units in the deep network. This type of research can help answer questions such as what types of datasets and tasks create neural networks that best approximate the brain (e.g. neural taskonomy), and what that means for the architecture and learning rules of the brain. There are other complex tasks that deep networks are trained for that involve learning how to explore environments and determine rewarding stimuli, stay tuned for more of this in W3D4 Reinforcement Learning. # + [markdown] pycharm={"name": "#%% md\n"} # ## Video # + cellView="form" pycharm={"name": "#%%\n"} # @markdown from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1sk4y1B7Ej", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"IZvcy0Myb3M", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) # + [markdown] pycharm={"name": "#%% md\n"} # ## Slides # + cellView="form" pycharm={"name": "#%%\n"} # @markdown from IPython.display import IFrame 
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/jw829/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
tutorials/W2D1_DeepLearning/student/W2D1_Intro.ipynb
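The decoding-model idea described in this intro — mapping recorded neural activity back to the presented stimulus — can be sketched with a plain linear decoder. The synthetic "recording" below (trial count, tuning weights, noise scale) is entirely invented for illustration and is not the tutorial's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a recording: 200 trials x 50 neurons, with activity
# driven by the stimulus orientation plus noise (all numbers invented)
n_trials, n_neurons = 200, 50
stimulus = rng.uniform(0, 180, n_trials)          # grating orientation, degrees
tuning = rng.normal(size=n_neurons)               # per-neuron stimulus weight
activity = np.outer(stimulus, tuning) + rng.normal(scale=5.0,
                                                   size=(n_trials, n_neurons))

# Linear decoder: least-squares map from activity back to the stimulus
X = np.column_stack([activity, np.ones(n_trials)])  # append a bias column
coef, *_ = np.linalg.lstsq(X, stimulus, rcond=None)
decoded = X @ coef

# Error of the decoded orientation, in degrees
rmse = np.sqrt(np.mean((decoded - stimulus) ** 2))
```

The tutorials build the same decoder/encoder distinction with deep networks in PyTorch; this sketch only shows the linear baseline that such models generalize.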
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import math import networkx as nx from nxpd import draw import matplotlib.pyplot as plt # ### Operations # 1. Insertion (to last element, then bubble up if necessary) # 2. Min (first element) # 3. Deletion Min (replace first with last, delete last, bubble down) # 4. Root must be smaller than its children # 5. Bubble up/down is just a switch a = [4,3,2,1,7,8] G = nx.DiGraph() G.graph['dpi'] = 120 G.add_edges_from([(1,2), (1,3), (2,4), (2,7), (3,8),]) draw(G, show='ipynb') ##result # [1,2,3,4,7,8] # ### Implementations class minHeap: def __init__(self): self.heap = [] def findRoot(self,i): if i%2 == 1: # odd root = i//2 elif i%2 == 0: # even root = i//2-1 return root def bubbledown(self): idx = len(self.heap)-1 while idx > 0: parentidx = self.findRoot(idx) if self.heap[idx] < self.heap[parentidx]: self.heap[idx],self.heap[parentidx]=\ self.heap[parentidx],self.heap[idx] idx = parentidx def insert(self,num): self.heap.append(num) # bubble up if necessary self.bubbledown() def findMinChild(self,i,heap=None): leftidx = i*2+1 rightidx = i*2+2 size = len(self.heap) if leftidx<size and rightidx<size: # print(self.heap[leftidx],self.heap[rightidx]) if self.heap[leftidx]<self.heap[rightidx]: return leftidx elif self.heap[leftidx]>=self.heap[rightidx]: return rightidx elif leftidx<size and rightidx>=size: return leftidx elif leftidx>=size and rightidx<size: return rightidx return None def bubbleup(self): idx = 0 while idx is not None: minchild = self.findMinChild(idx) # print(f,minchild) if minchild: if self.heap[minchild]<self.heap[idx]: self.heap[minchild],self.heap[idx]=\ self.heap[idx],self.heap[minchild] idx = minchild def extractMin(self): if len(self.heap) < 1: return None return self.heap[0] def deleteMin(self): 
self.heap[0],self.heap[-1] = self.heap[-1],self.heap[0] self.heap.pop() # bubble down if necessary self.bubbleup() def plotHeap(self): G = nx.DiGraph() for i in range(1,len(self.heap)): r = self.findRoot(i) G.add_edge(self.heap[r],self.heap[i]) draw(G) mH = minHeap() mH.insert(4) mH.insert(3) mH.insert(2) mH.insert(1) mH.insert(5) mH.insert(0) mH.insert(10) mH.insert(11) print(mH.heap) print(mH.extractMin()) mH.deleteMin() print(mH.heap) print(mH.plotHeap()) def findRoot(i): if i%2 == 1: # odd root = i//2 elif i%2 == 0: # even root = i//2-1 return root # ### Insertion G = nx.DiGraph() G.add_edges_from([(4,3)]) draw(G, show='ipynb') # [4.3]>[3,4] # 0,1 G = nx.DiGraph() G.add_edges_from([(3,4)]) draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(3,4),(3,2)]) draw(G, show='ipynb') # [3,4,2]>[2,4,3] # 0,2 G = nx.DiGraph() G.add_edges_from([(2,4),(2,3)]) draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(2,4),(2,3),(4,1)]) draw(G, show='ipynb') # [2,4,3,1]>[2,1,3,4]>[1,2,3,4] # 1,3;0,1 G = nx.DiGraph() G.add_edges_from([(2,1),(2,3),(1,4)]) draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(1,2),(1,3),(2,4)]) draw(G, show='ipynb') # Find root, 1>0, 3>1, 4>1, 5>2, 6>2, 7>3, 8>3, 9>4, 10>4 # + e = [] e.append(2) e.append(4) e.append(3) e.append(1) print(e) idx = len(e)-1 while idx > 0: parentidx = findRoot(idx) if e[idx] < e[parentidx]: e[idx],e[parentidx]=e[parentidx],e[idx] idx = parentidx print(e) # - # ### Deletion-min G = nx.DiGraph() G.add_edges_from([(1,2),(1,3),(2,4)]) draw(G, show='ipynb') # [1,2,3,4]>[4,2,3,1]>[4,2,3]>[2,4,3] G = nx.DiGraph() G.add_edges_from([(4,2),(4,3),(2,1)]) draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(4,2),(4,3)]) draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(2,4),(2,3)]) draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(1,2),(1,3),(2,4),(2,5)]) draw(G, show='ipynb') # [1,2,3,4,5]>[5,2,3,4,1]>[5,2,3,4]>[2,5,3,4]>[2,4,3,5] G = nx.DiGraph() G.add_edges_from([(5,2),(5,3),(2,4),(2,1)]) 
draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(5,2),(5,3),(2,4)]) draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(2,5),(2,3),(5,4)]) draw(G, show='ipynb') G = nx.DiGraph() G.add_edges_from([(2,4),(2,3),(4,5)]) draw(G, show='ipynb') f = [1,2,3,4,5] f[0],f[-1]=f[-1],f[0] f.pop() print(f) # 0>1,2; 1>3,4; 2>5,6 def findMinChild(i,heap): leftidx = i*2+1 rightidx = i*2+2 size = len(heap) if leftidx<size and rightidx<size: print(heap[leftidx],heap[rightidx]) if heap[leftidx]<heap[rightidx]: return leftidx elif heap[leftidx]>=heap[rightidx]: return rightidx elif leftidx<size and rightidx>=size: return leftidx elif leftidx>=size and rightidx<size: return rightidx return None findMinChild(0,[1,2,3,4,5]) findMinChild(1,[1,2,3,5,4]) print(findMinChild(2,[1,2,3,5,4])) print(findMinChild(1,[1,2,3,5])) None is None # + f = [0, 2, 1, 4, 5, 3] f[0],f[-1]=f[-1],f[0] f.pop() print(f) idx = 0 while idx is not None: minchild = findMinChild(idx,f) print(f,minchild) if minchild: if f[minchild]<f[idx]: f[minchild],f[idx]=f[idx],f[minchild] idx = minchild # - f # ### Plot distinct numbers G = nx.DiGraph() G.add_edges_from([(1,2),(1,3),(2,4)]) G.add_edge(2,5) draw(G, show='ipynb') # [1,2,3,4]>[4,2,3,1]>[4,2,3]>[2,4,3] g = [0, 2, 1, 4, 5, 3, 10, 11] G = nx.DiGraph() for i in range(1,len(g)): r = findRoot(i) G.add_edge(g[r],g[i]) draw(G, show='ipynb')
resources/algopy/minHeap.ipynb
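A note on the heap notebook above: its helper names are inverted relative to what they do — `bubbledown` is called on insertion but sifts the new element *up*, while `bubbleup` is called on deletion but sifts the root *down*. A corrected, self-contained sketch with conventional naming (the class and method names here are my own, not the notebook's):

```python
class MinHeap:
    """Array-backed min-heap; the parent of index i is (i - 1) // 2."""

    def __init__(self):
        self.heap = []

    def insert(self, num):
        # Append at the end, then sift UP while smaller than the parent
        self.heap.append(num)
        i = len(self.heap) - 1
        while i > 0:
            parent = (i - 1) // 2
            if self.heap[i] < self.heap[parent]:
                self.heap[i], self.heap[parent] = self.heap[parent], self.heap[i]
                i = parent
            else:
                break

    def extract_min(self):
        # Peek at the root without removing it
        return self.heap[0] if self.heap else None

    def delete_min(self):
        # Move the last element to the root, then sift DOWN
        if not self.heap:
            return
        self.heap[0] = self.heap[-1]
        self.heap.pop()
        i, n = 0, len(self.heap)
        while True:
            left, right = 2 * i + 1, 2 * i + 2
            smallest = i
            if left < n and self.heap[left] < self.heap[smallest]:
                smallest = left
            if right < n and self.heap[right] < self.heap[smallest]:
                smallest = right
            if smallest == i:
                break
            self.heap[i], self.heap[smallest] = self.heap[smallest], self.heap[i]
            i = smallest

# Same driver sequence as the notebook
h = MinHeap()
for v in [4, 3, 2, 1, 5, 0, 10, 11]:
    h.insert(v)
```

After these inserts `h.extract_min()` returns 0, and `delete_min` restores the heap property over the remaining seven elements.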
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # <font color="blue"><h3 align="center">Pandas Merge Tutorial</h3></font> # ## <font color='blue'>Basic Merge Using a Dataframe Column</font> import pandas as pd df1 = pd.DataFrame({ "city": ["new york","chicago","orlando"], "temperature": [21,14,35], }) df1 df2 = pd.DataFrame({ "city": ["chicago","new york","orlando"], "humidity": [65,68,75], }) df2 df3 = pd.merge(df1, df2, on="city") df3 help(pd.merge) # ## <font color='blue'>Type Of DataBase Joins</font> # + language="html" # <img src="db_joins.jpg" height="800", width="800"> # - df1 = pd.DataFrame({ "city": ["new york","chicago","orlando", "baltimore"], "temperature": [21,14,35, 38], }) df1 df2 = pd.DataFrame({ "city": ["chicago","new york","san diego"], "humidity": [65,68,71], }) df2 df3=pd.merge(df1,df2,on="city",how="inner") df3 df3=pd.merge(df1,df2,on="city",how="outer") df3 df3=pd.merge(df1,df2,on="city",how="left") df3 df3=pd.merge(df1,df2,on="city",how="right") df3 # ## <font color='blue'>indicator flag</font> df3=pd.merge(df1,df2,on="city",how="outer",indicator=True) df3 # ## <font color='blue'>suffixes</font> df1 = pd.DataFrame({ "city": ["new york","chicago","orlando", "baltimore"], "temperature": [21,14,35,38], "humidity": [65,68,71, 75] }) df1 df2 = pd.DataFrame({ "city": ["chicago","new york","san diego"], "temperature": [21,14,35], "humidity": [65,68,71] }) df2 df3= pd.merge(df1,df2,on="city",how="outer", suffixes=('_first','_second')) df3 # ## <font color='blue'>join</font> df1 = pd.DataFrame({ "city": ["new york","chicago","orlando"], "temperature": [21,14,35], }) df1.set_index('city',inplace=True) df1 df2 = pd.DataFrame({ "city": ["chicago","new york","orlando"], "humidity": [65,68,75], }) df2.set_index('city',inplace=True) df2 df1.join(df2,lsuffix='_l', rsuffix='_r') 
help(df1.join) df1.join(df2)
Pandas/09_merge/pandas_merge.ipynb
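The join variants walked through in the merge notebook can be condensed into one runnable sketch; the two small frames mirror the notebook's city examples:

```python
import pandas as pd

left = pd.DataFrame({"city": ["new york", "chicago", "baltimore"],
                     "temperature": [21, 14, 38]})
right = pd.DataFrame({"city": ["chicago", "new york", "san diego"],
                      "humidity": [65, 68, 71]})

# how="outer" keeps all cities; indicator=True records each row's origin
merged = pd.merge(left, right, on="city", how="outer", indicator=True)
```

The `_merge` column takes the values `both`, `left_only`, and `right_only`, which correspond directly to the inner/left/right variants shown in the notebook.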
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MATPLOTLIB # # CONTENT # ## 1)Introduction # ## 2)Controlling Line Properties # ## 3)Formatting the Plot # ## 4)Example Plot Types # ### 4.1)Bar Plot # ### 4.2)Scatter Plot # ### 4.3)Heatmap # ### 4.4)3D Plot # ## 5)Multiple Figures # ## 1)Introduction # Matplotlib is one of the most common visualization libraries for the Python language. It is a very powerful library that visualizes your data and helps the user get a better understanding of it. # # For more detailed information, you can visit its official page: https://matplotlib.org/ #as always, we import the libraries first import matplotlib.pyplot as plt #pyplot is a collection of functions that we are going to use. import numpy as np # %matplotlib notebook plt.plot([1,2,3,4]) #just a basic list to draw plt.ylabel("random numbers") #here we label the y-axis plt.xlabel("here lies the x") #and x-axis plt.show() # In the above example, we passed just a single list, which is interpreted as y-values. However, we can also create an X versus Y plot plt.plot([1,2,3,4],["a","b","c","d"]) plt.show() plt.plot([1,2,3,4],[10,20,30,40]) plt.show() #We can add a legend to the plot plt.plot([1,2,3,4],[10,20,30,40], label="Revenue") plt.xlabel("Revenue") plt.legend() plt.show() # ## 2)Controlling Line Properties # Matplotlib gives us the ability to change the properties of the line as well. # + plt.plot([1,2,3,4],[10,20,30,40], linewidth=3.0) #as you can see, with the linewidth parameter we increased the thickness of the line plt.show() # - # ## 3)Formatting the Plot # As you can see from the previous examples, by default our data is drawn as a blue line. The minimum and maximum values of the axes were also taken from our data. However, we can also change these formatting settings.
For example: plt.plot([1,2,3,4],[10,20,30,40], "ms") #here we added a format string "ms": m means magenta and s means square. #Because of this parameter, a plot with magenta squares is drawn. plt.show() # + #we can also make green squares by adding "gs" plt.plot([1,2,3,4],[10,20,30,40], "gs") plt.show() # - plt.plot([1,2,3,4],[10,20,30,40], "mo") #"mo" gives magenta circles; as stated previously, we can also set min/max values for the axes. plt.axis([0,10,0,100]) #order of this command is: xmin, xmax, ymin, ymax plt.show() # ## 4)Example Plot Types # ### 4.1)Bar Plot # A bar plot is a very common plot type for visualizing categorical values. For example, let's plot the student numbers of an arbitrary university based on their majors. majors=["mechanical eng.","civil eng.","electrical eng."] values=[213,178,256] plt.bar(majors, values) plt.xlabel("majors") plt.ylabel("#students") plt.show() # ### 4.2)Scatter Plot # Another common type is the scatter plot. notes=[1,2,3,4] student_numbers=[22,45,55,34] plt.scatter(notes, student_numbers) plt.ylabel("#students") plt.xlabel("notes") plt.axis([0,5,0,75]) plt.show() # And again, we can show it as a bar plot notes=[1,2,3,4] student_numbers=[22,45,55,34] plt.bar(notes, student_numbers) plt.ylabel("#students") plt.xlabel("notes") plt.axis([0,5,0,75]) plt.show() # There are also other types of visualization, e.g. heatmaps or contour plots. # # However, it can be said that visualization is almost a form of art. Because of that, it is very important which plot you choose. In the above example, as you can see, the bar plot looks more appropriate. # ### 4.3)Heatmap # It is often desirable to show data which depends on two independent variables as a color-coded image plot. This is often referred to as a heatmap. If the data is categorical, this would be called a categorical heatmap.
# # Source:https://matplotlib.org/3.1.1/gallery/images_contours_and_fields/image_annotated_heatmap.html # + vegetables = ["cucumber", "tomato", "lettuce", "asparagus", "potato", "wheat", "barley"] farmers = ["<NAME>", "Upland Bros.", "<NAME>", "Agrifun", "Organiculture", "BioGoods Ltd.", "Cornylee Corp."] harvest = np.array([[0.8, 2.4, 2.5, 3.9, 0.0, 4.0, 0.0], [2.4, 0.0, 4.0, 1.0, 2.7, 0.0, 0.0], [1.1, 2.4, 0.8, 4.3, 1.9, 4.4, 0.0], [0.6, 0.0, 0.3, 0.0, 3.1, 0.0, 0.0], [0.7, 1.7, 0.6, 2.6, 2.2, 6.2, 0.0], [1.3, 1.2, 0.0, 0.0, 0.0, 3.2, 5.1], [0.1, 2.0, 0.0, 1.4, 0.0, 1.9, 6.3]]) plt.rcParams["figure.figsize"] = (16,10) fig, ax = plt.subplots() im = ax.imshow(harvest) # We want to show all ticks... ax.set_xticks(np.arange(len(farmers))) ax.set_yticks(np.arange(len(vegetables))) # ... and label them with the respective list entries ax.set_xticklabels(farmers) ax.set_yticklabels(vegetables) # Rotate the tick labels and set their alignment. plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor") # Loop over data dimensions and create text annotations. for i in range(len(vegetables)): for j in range(len(farmers)): text = ax.text(j, i, harvest[i, j], ha="center", va="center", color="w") ax.set_title("Harvest of local farmers (in tons/year)") fig.tight_layout() plt.show() # - # ### 4.4)3D Plot # However, some datasets require to be visualized in 3D plot. In that case, matplotlib can also create 3D plots as well. # # Note: This example aims to show you matplotlib's capabilities. You need not to understand all the code. # # Source:https://matplotlib.org/3.1.0/gallery/mplot3d/surface3d.html # + from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import import matplotlib.pyplot as plt from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter import numpy as np plt.rcParams["figure.figsize"] = (16,10) fig = plt.figure() ax = fig.gca(projection='3d') # Make data. 
X = np.arange(-5, 5, 0.25) Y = np.arange(-5, 5, 0.25) X, Y = np.meshgrid(X, Y) R = np.sqrt(X**2 + Y**2) Z = np.sin(R) # Plot the surface. surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm, linewidth=0, antialiased=False) # Customize the z axis. ax.set_zlim(-1.01, 1.01) ax.zaxis.set_major_locator(LinearLocator(10)) ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f')) # Add a color bar which maps values to colors. fig.colorbar(surf, shrink=0.5, aspect=5) plt.show() # - from matplotlib import pyplot as plt plt.figure(figsize=(100,100)) x = [1,2,3] plt.plot(x, x) plt.show() # ## 5)Multiple Figures # As you can see, until now we always drew a single plot. However, this may not be the case always. We might need multiple plots in a single figure. # + plt.subplot(121) # the first subplot in the figure. #three numbers indicates position of the plot: nrows, ncolumns, index plt.plot([1, 2, 3]) plt.subplot(122) # the second subplot in the figure plt.plot([4, 5, 6],"r") plt.show()
Requirements/matplotlib_notebook.ipynb
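The `plt.subplot(121)`/`plt.subplot(122)` calls at the end of the matplotlib notebook can also be written with the object-oriented `plt.subplots` API, which returns the figure and axes explicitly — a small sketch (the Agg backend line is only so this runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for headless execution
import matplotlib.pyplot as plt

# One row, two columns: the object-oriented twin of subplot(121)/(122)
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2)
ax1.plot([1, 2, 3])
ax2.plot([4, 5, 6], "r")
fig.tight_layout()
```

Holding the `Axes` objects directly avoids the hidden "current axes" state that the pyplot-style calls rely on.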
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !conda env list # !pwd # + import gzip import numpy as np from scipy.special import softmax #average over the different augmentations def load_deterministic_labels(pred_folder): subfolder_names = [ pred_folder+"/xyflip-False_horizontalflip-False_verticalflip-False", pred_folder+"/xyflip-False_horizontalflip-False_verticalflip-True", pred_folder+"/xyflip-False_horizontalflip-True_verticalflip-False", pred_folder+"/xyflip-False_horizontalflip-True_verticalflip-True", pred_folder+"/xyflip-True_horizontalflip-False_verticalflip-False", pred_folder+"/xyflip-True_horizontalflip-False_verticalflip-True", pred_folder+"/xyflip-True_horizontalflip-True_verticalflip-False", pred_folder+"/xyflip-True_horizontalflip-True_verticalflip-True" ] softmax_logits = [] for subfolder in subfolder_names: softmax_logits.append( np.array([[float(y) for y in x.decode("utf-8").split("\t")[1:]] for x in gzip.open(subfolder+"/deterministic_preds.txt.gz", 'rb')])) softmax_logits = np.mean(softmax_logits, axis=0) return softmax_logits kaggle_labels = np.array([int(x.decode("utf-8").split("\t")[1]) for x in gzip.open("valid_labels.txt.gz", 'rb')]) kaggle_softmax_logits = load_deterministic_labels("kaggle_preds") kaggle_softmax_preds = softmax(kaggle_softmax_logits, axis=-1) from sklearn.metrics import roc_auc_score kaggle_binary_labels = 1.0*(kaggle_labels > 0.0) kaggle_binary_preds = 1-kaggle_softmax_preds[:,0] kaggle_binary_logits = (np.log(np.maximum(kaggle_binary_preds,1e-7)) -np.log(np.maximum(1-kaggle_binary_preds, 1e-7))) print(roc_auc_score(y_true=kaggle_binary_labels, y_score=kaggle_binary_preds)) messidor_labels = np.array([ int(x[1].decode("utf-8").split("\t")[2]) for x in enumerate(gzip.open("messidor_preds/messidor_labels_withcorrections.txt.gz", 'rb')) if x[0] > 
0]) messidor_softmax_logits = load_deterministic_labels("messidor_preds") messidor_softmax_preds = softmax(messidor_softmax_logits, axis=-1) from sklearn.metrics import roc_auc_score messidor_binary_labels = 1.0*(messidor_labels > 0.0) messidor_binary_preds = 1-messidor_softmax_preds[:,0] messidor_binary_logits = (np.log(np.maximum(messidor_binary_preds,1e-7)) -np.log(np.maximum(1-messidor_binary_preds,1e-7))) print(roc_auc_score(y_true=messidor_binary_labels, y_score=messidor_binary_preds)) # + # %matplotlib inline #apply calibration to the kaggle set import abstention from abstention.calibration import PlattScaling from abstention.label_shift import EMImbalanceAdapter from sklearn.calibration import calibration_curve from matplotlib import pyplot as plt def plot_calibration_curve(y_true, y_preds, **kwargs): prob_true, prob_pred = calibration_curve( y_true=y_true, y_prob=y_preds, **kwargs) plt.plot(prob_true, prob_pred) plt.plot([0,1],[0,1], color="black") plt.show() calibrator = PlattScaling()(valid_preacts=kaggle_binary_logits, valid_labels=kaggle_binary_labels) calibrated_kaggle_preds = calibrator(kaggle_binary_logits) calibrated_messidor_preds = calibrator(messidor_binary_logits) adaptation_func = EMImbalanceAdapter()( tofit_initial_posterior_probs=calibrated_messidor_preds, valid_posterior_probs=calibrated_kaggle_preds) adapted_calibrated_messidor_preds = adaptation_func(calibrated_messidor_preds) print("Kaggle before calibration") plot_calibration_curve(y_true=kaggle_binary_labels, y_preds=kaggle_binary_preds, n_bins=5) print("Kaggle after calibration") plot_calibration_curve(y_true=kaggle_binary_labels, y_preds=calibrated_kaggle_preds) print("Messidor before calibration") plot_calibration_curve(y_true=messidor_binary_labels, y_preds=messidor_binary_preds, n_bins=5) print("Messidor after calibration") plot_calibration_curve(y_true=messidor_binary_labels, y_preds=calibrated_messidor_preds, n_bins=5) print("Messidor after adaptation") 
plot_calibration_curve(y_true=messidor_binary_labels, y_preds=adapted_calibrated_messidor_preds, n_bins=5) # + # %matplotlib inline #investigate difference in positives/negatives for the two datasets from matplotlib import pyplot as plt import seaborn as sns sns.distplot(kaggle_binary_logits[kaggle_binary_labels==1.0], bins=20) sns.distplot(messidor_binary_logits[messidor_binary_labels==1.0], bins=20) plt.show() sns.distplot(kaggle_binary_logits[kaggle_binary_labels==0.0], bins=20) sns.distplot(messidor_binary_logits[messidor_binary_labels==0.0], bins=20) plt.show() # -
notebooks/DomainAdaptedMessidorPredictions.ipynb
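Platt scaling as used in the calibration notebook (via the `abstention` package) is, at its core, a one-dimensional logistic regression on the logits. A sketch of that core idea with scikit-learn on synthetic logits — this illustrates the technique only, and is not the notebook's actual calibration pipeline or data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic, overconfident binary logits (purely illustrative)
labels = rng.integers(0, 2, 500)
logits = 3.0 * (labels - 0.5) + rng.normal(size=500)

# Platt scaling: fit a 1-D logistic regression from logit to label,
# then read off calibrated probabilities
platt = LogisticRegression()
platt.fit(logits.reshape(-1, 1), labels)
calibrated = platt.predict_proba(logits.reshape(-1, 1))[:, 1]
```

Because the mapping is monotonic in the logit, ranking metrics such as ROC AUC are unchanged; only the probability scale is adjusted.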
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![example](exmp.jpg) from sklearn.linear_model import LinearRegression import numpy as np X_train = np.array([[5,3,2],[9,2,4],[8,6,3],[5,4,5]]) y_train = np.array([151022,183652,482466,202541]) linreg = LinearRegression() linreg.fit(X_train,y_train) print("7+2+5 = %d" % linreg.predict([[7,2,5]])) # + from sklearn.linear_model import LinearRegression from sklearn.svm import SVR from sklearn.neighbors import KNeighborsRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import GradientBoostingRegressor from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.cross_decomposition import PLSRegression from sklearn.ensemble import AdaBoostRegressor from sklearn.model_selection import GridSearchCV # - import numpy as np X_train = np.array([[5,3,2],[9,2,4],[8,6,3],[5,4,5]]) y_train = np.array([151022,183652,482466,202541]) models = [ [LinearRegression(), {"fit_intercept": [True, False]}], [SVR(), {"kernel": ["linear", "poly", "rbf", "sigmoid"]}], [KNeighborsRegressor(), {"n_neighbors": [1,2], "weights": ["uniform", "distance"]}], [DecisionTreeRegressor(), {"criterion": ["mse", "friedman_mse"], "splitter": ["best", "random"], "min_samples_split": [x for x in range(2,6)] # generates a list [2,3,4,5] }], [GradientBoostingRegressor(), {"loss": ["ls", "lad", "huber", "quantile"]}], [GaussianProcessRegressor(), {}], [PLSRegression(), {}], [AdaBoostRegressor(), {}] ] best_model_pred = [] for model in models: regressor = model[0] param_grid = model[1] model = GridSearchCV(regressor, param_grid) model.fit(X_train, y_train) accuracy = model.score(X_train, y_train) print('Model name: ', model) print('Accuracy: ',accuracy) print("7+2+5 = %d\n" % model.predict([[7,2,5]])) if accuracy == 1: best_model_pred.append("7+2+5 = 
%d\n" % model.predict([[7,2,5]])) best_model_pred
math-from-meme/math-meme-solve.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pickle import pandas as pd import time # + def get_gender_and_color_method_one(df): filename = 'models/carb_dataset_composite.sav' loaded_model = pickle.load(open(filename, 'rb')) gender_and_color = loaded_model.predict(df)[0] gender_and_color = gender_and_color.split("_") return "{} carb body color {} ".format(gender_and_color[0],gender_and_color[1]) def get_color(gender, df): # take df as a parameter instead of relying on the global if gender == "M": filename = 'models/carb_dataset_male_color.sav' else: filename = 'models/carb_dataset_female_color.sav' loaded_model = pickle.load(open(filename, 'rb')) color = loaded_model.predict(df)[0] return color def get_gender_and_color_method_two(df): filename = 'models/carb_dataset_male_female.sav' loaded_model = pickle.load(open(filename, 'rb')) gender = loaded_model.predict(df)[0] color = get_color(gender, df) return "{} carb body color {} ".format(gender,color) # + FL = 8.1 RW = 6.7 CL = 16.1 CW = 19.0 BD = 7.0 data = { "FL" : [FL], "RW" : [RW], "CL" : [CL], "CW" : [CW], "BD" : [BD] } df = pd.DataFrame.from_dict(data) # starting time start = time.time() print(get_gender_and_color_method_one(df)) # end time end = time.time() # total time taken print(f"Runtime of the method one is {end - start}") # starting time start = time.time() print(get_gender_and_color_method_two(df)) # end time end = time.time() # total time taken print(f"Runtime of the method two is {end - start}") # -
Carb classification .ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Melodic Expectation with Markov Models # # In this notebook we will look at Markov Chains for modelling musical expectation. # We have already seen a Markov Model in the class on key estimation with HMMs (Hidden Markov Models). import os import numpy as np import partitura # + from rnn import load_data # To filter out short melodies The minimum number of notes that a sequence should have min_seq_len = 10 sequences = load_data(min_seq_len) # - # ## Tasks 1; data loading & preparing: # 1. check out the content of the variable "sequences", if unclear have a look at the loading function. # 2. which musical texture do these sequences exhibit? (https://en.wikipedia.org/wiki/Texture_(music)) # 3. write a function to derive sequences of pitches from this data. # 4. write a function to derive sequences of durations from this data. Modify this to compute inter onset intervals (IOIs; the time between two consecutive onsets). Can you encode rests as well by comparing duration with IOI? s = list() for l in sequences: s+=list(l["pitch"]%12) # ## Tasks 2; data exploration: # # 1. compute and draw a histogram of pitches. Modify this to show pitch classes! # 2. compute and draw a histogram of IOIs. The input MIDI files are deadpan, i.e. the IOIs in seconds correspond to the notated duration exactly. Look through the IOIs and make an educated guess for some smallest float time unit that could serve as integer smallest time division. Encode the IOIs as multiples of this smallest integer. Which multiples make musical sense? import matplotlib.pyplot as plt # %matplotlib inline print(len(s)) plt.hist(s, bins=12, range=(0,12)) # ## Tasks 3; A Markov Chain: # # 1. 
choose a data type to model: pitch, pitch class, IOIs, or durations (including or without an encoding for rests). Concatenate all the sequences into one long data sequence. # # 2. You have now a sequence **X** of symbols from an alphabet **A** (set of possible symbols of your chosen data type): # # $$ \mathbf{X} = \{\mathbf{x_0}, \dots, \mathbf{x_n} \mid \mathbf{x}_{i} \in \mathbf{A} \forall i \in 0, \dots, n \}$$ # # Compute the empirical conditional probability of seeing any symbol after just having seen any other: # # $$ \mathbb{P}(\mathbf{x_i}\mid \mathbf{x_{i-1}}) $$ # # What is the dimensionality of this probability given $\lvert A \rvert = d $? Do you recall what this probability was called in the context of HMMs? # # 3. compute the entropy of the data (only your chosen type). Recall https://en.wikipedia.org/wiki/Entropy_(information_theory) # # + probs = np.zeros((12,12)) for (p1, p2) in zip(s[:-1],s[1:]): probs[p1, p2] += 1 probsum = np.sum(probs, axis = 1) print(probsum) normalized_distribution = (probs.T/probsum).T plt.imshow(normalized_distribution) plt.colorbar() print(np.sum(normalized_distribution, axis = 1)) # - # ## Tasks 4; Markov Chain Generation: # # 1. By computing the probability $ \mathbb{P}(\mathbf{x_i}\mid \mathbf{x_{i-1}}) $ in task 3 you have fully specified a discrete-time finite state space Markov Chain model (https://en.wikipedia.org/wiki/Discrete-time_Markov_chain)! Given an initial symbol "s_0", you can generate the subsequent symbols by sampling from the conditional probability distribution # # $$ \mathbb{P}(\mathbf{x_i}\mid \mathbf{x_{i-1}} = \mathbf{s_{0}}) $$ # # Write a function that samples from a finite state space given an input probability distribution. # # 2. Use the previously defined function and the Markov Chain to write a sequence generator based on an initial symbol. # 3. Start several "walkers", i.e. sampled/generated sequences. Compute the entropy of this generated data and compare it to the entropy in task 3. 
# # + def sample(distribution): cs = distribution.cumsum() samp = np.random.rand(1) return list(samp < cs).index(True) def generate(start = 0, length = 100): melody = [start] for k in range(length): melody.append(sample(normalized_distribution[melody[-1],:])) return melody print(generate()) # - # ## Tasks 5; n-gram Context Model: # # 1. The Markov Chains used until now have only very limited memory. In fact, they only ever know the last played pitch or duration. Longer memory models can be created by using the conditional probability of any new symbol based on an n-gram context of the symbol (https://en.wikipedia.org/wiki/N-gram): # $$ \mathbb{P}(\mathbf{x_i}\mid \mathbf{x_{i-1}}, \dots, \mathbf{x_{i-n}}) $$ # # This probability will generally not look like a matrix anymore, but we can easily encode it as a dictionary. Write a function that creates a 3-gram context model from the data sequence **X**! # # 2. The longer the context, the more data we need to get meaningful or even existing samples for all contexts (note that the number of different contexts grows exponentially with context length). What could we do to approximate the distribution for unseen contexts? 
# +
from collections import defaultdict
import copy

def create_context_model(sequence, n):
    # Laplace-style prior: unseen continuations keep a small uniform probability
    a_priori_probability = np.ones(12) / 12
    context_model = defaultdict(lambda: copy.copy(a_priori_probability))
    # use tuples as keys: string keys like "111" are ambiguous ((1,11) vs (11,1))
    for idx in range(len(sequence) - n):
        context = tuple(sequence[idx:idx + n])
        context_model[context][sequence[idx + n]] += 1
    for key in context_model.keys():
        prob_dist = context_model[key]
        context_model[key] = prob_dist / prob_dist.sum()
    return context_model

# n=3 gives the 3-gram context model asked for in task 5 and matches the
# context length used by the generator below
cm = create_context_model(s, 3)

# +
def generate_with_context_model(start=(0, 0, 0), length=100, context_model=cm):
    melody = list(start)  # copy, so the default argument is never mutated
    for k in range(length):
        key = tuple(melody[-3:])
        melody.append(sample(context_model[key]))
    return melody

print(generate_with_context_model())
# -

# ## Tasks 6; multi-type Markov Chains and back to music:
#
# 1. To generate a somewhat interesting melody, we want to get a sequence of both pitches and durations. If we encode rests too, we can generate any melody like this. So far our Markov Chains dealt with either pitch or duration/IOI. What could we do to combine them? Describe two approaches and why to choose which one.
#
# 2. Implement a simple melody generator with pitch and IOI/duration (simplest: modify task 4.2 to a generator of the other type and use the two to create independent sequences). Write some generated melodies to MIDI files!

# ## (Tasks 7); more stuff for music:
#
# 1. Keys are perceptual centers of gravity in the pitch space, so if we transpose all the input sequences to the same key we can compute empirical pitch distributions within a key!
#
# 2. One solution to task 5.2 is to use Prediction by Partial Matching. This is the basis of the most elaborate probabilistic model of symbolic music, the Information Dynamics of Music (IDyOM). See references here:
# https://researchcommons.waikato.ac.nz/bitstream/handle/10289/9913/uow-cs-wp-1993-12.pdf
# https://mtpearce.github.io/idyom/
#
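# One concrete answer to task 5.2 is back-off: when an n-gram context was never seen, fall back to progressively shorter suffixes of it — the core idea behind Prediction by Partial Matching mentioned above. A minimal sketch, independent of the model code above (plain count dictionaries, no smoothing):

```python
from collections import defaultdict

def train_ngram_counts(sequence, max_n):
    """Count continuations for every context of length 1..max_n."""
    counts = defaultdict(lambda: defaultdict(int))
    for n in range(1, max_n + 1):
        for i in range(len(sequence) - n):
            context = tuple(sequence[i:i + n])
            counts[context][sequence[i + n]] += 1
    return counts

def backoff_distribution(counts, context):
    """Longest seen context wins; unseen contexts back off to shorter ones."""
    context = tuple(context)
    while context:
        if context in counts:
            total = sum(counts[context].values())
            return {sym: c / total for sym, c in counts[context].items()}
        context = context[1:]  # drop the oldest symbol and retry
    return {}  # nothing seen at all: caller falls back to a uniform prior

counts = train_ngram_counts([0, 1, 2, 0, 1, 3], max_n=3)
# (7, 0, 1) was never observed, so the lookup backs off to the context (0, 1)
print(backoff_distribution(counts, (7, 0, 1)))  # → {2: 0.5, 3: 0.5}
```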
expectation/markov_models_melodic_expectation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] nbsphinx="hidden" # This notebook is part of the $\omega radlib$ documentation: http://wradlib.org/wradlib-docs. # # Copyright (c) 2018, $\omega radlib$ developers. # Distributed under the MIT License. See LICENSE.txt for more info. # - # # Converting Reflectivity to Rainfall # Reflectivity (Z) and precipitation rate (R) can be related in form of a power law $Z=a \cdot R^b$. The parameters ``a`` and ``b`` depend on the type of precipitation (i.e. drop size distribution and water temperature). $\omega radlib$ provides a couple of functions that could be useful in this context. # + nbsphinx="hidden" import wradlib as wrl import matplotlib.pyplot as pl import warnings warnings.filterwarnings('ignore') try: get_ipython().magic("matplotlib inline") except: pl.ion() import numpy as np # - # The following example demonstrates the steps to convert from the common unit *dBZ* (decibel of the reflectivity factor *Z*) to rainfall intensity (in the unit of mm/h). This is an array of typical reflectivity values (**unit: dBZ**) dBZ = np.array([20., 30., 40., 45., 50., 55.]) print(dBZ) # Convert to reflectivity factor Z (**unit**: $mm^6/m^3$): Z = wrl.trafo.idecibel(dBZ) print(Z) # Convert to rainfall intensity (**unit: mm/h**) using the Marshall-Palmer Z(R) parameters: R = wrl.zr.z2r(Z, a=200., b=1.6) print(np.round(R, 2)) # Convert to rainfall depth (**unit: mm**) assuming a rainfall duration of five minutes (i.e. 300 seconds) depth = wrl.trafo.r2depth(R, 300) print(np.round(depth, 2)) # ## An example with real radar data # The following example is based on observations of the DWD C-band radar on mount Feldberg (SW-Germany). 
# The figure shows a 15 minute accumulation of rainfall which was produced from three consecutive radar # scans at 5 minute intervals between 17:30 and 17:45 on June 8, 2008. # # The radar data are read using [wradlib.io.readDX](http://wradlib.org/wradlib-docs/latest/generated/wradlib.io.readDX.html) function which returns an array of dBZ values and a metadata dictionary (see also [Reading-DX-Data](../fileio/wradlib_reading_dx.ipynb#Reading-DX-Data)). The conversion is carried out the same way as in the example above. The plot is produced using # the function [wradlib.vis.plot_ppi](http://wradlib.org/wradlib-docs/latest/generated/wradlib.vis.plot_ppi.html). def read_data(dtimes): """Helper function to read raw data for a list of datetimes <dtimes> """ data = np.empty((len(dtimes),360,128)) for i, dtime in enumerate(dtimes): f = wrl.util.get_wradlib_data_file('dx/raa00-dx_10908-{0}-fbg---bin.gz'.format(dtime)) data[i], attrs = wrl.io.readDX(f) return data # Read data from radar Feldberg for three consecutive 5 minute intervals and compute the accumulated rainfall depth. # Read dtimes = ["0806021735","0806021740","0806021745"] dBZ = read_data(dtimes) # Convert to rainfall intensity (mm/h) Z = wrl.trafo.idecibel(dBZ) R = wrl.zr.z2r(Z, a=200., b=1.6) # Convert to rainfall depth (mm) depth = wrl.trafo.r2depth(R, 300) # Accumulate 15 minute rainfall depth over all three 5 minute intervals accum = np.sum(depth, axis=0) # Plot PPI of 15 minute rainfall depth pl.figure(figsize=(10,8)) ax, cf = wrl.vis.plot_ppi(accum, cmap="spectral") pl.xlabel("Easting from radar (km)") pl.ylabel("Northing from radar (km)") pl.title("Radar Feldberg\n15 min. rainfall depth, 2008-06-02 17:30-17:45 UTC") cb = pl.colorbar(cf, shrink=0.8) cb.set_label("mm") pl.xlim(-128,128) pl.ylim(-128,128) pl.grid(color="grey")
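# The dBZ → Z → R → depth chain above can also be written out by hand, which makes the unit conversions explicit. A minimal numpy-only sketch (no wradlib), using the same Marshall-Palmer parameters a=200, b=1.6 and inverting the power law as R = (Z/a)^(1/b):

```python
import numpy as np

def z_from_dbz(dbz):
    """Decibel value to linear reflectivity factor Z (mm^6/m^3)."""
    return 10.0 ** (np.asarray(dbz) / 10.0)

def rain_rate(z, a=200.0, b=1.6):
    """Invert the power law Z = a * R**b for the rain rate R (mm/h)."""
    return (z / a) ** (1.0 / b)

def depth_mm(rate, seconds):
    """Accumulated depth (mm) for a rain rate (mm/h) held over `seconds`."""
    return rate * seconds / 3600.0

dbz = np.array([20.0, 30.0, 40.0, 45.0, 50.0, 55.0])
r = rain_rate(z_from_dbz(dbz))
print(np.round(r, 2))               # same values as wrl.zr.z2r above
print(np.round(depth_mm(r, 300), 2))  # 5-minute accumulation
```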
notebooks/basics/wradlib_get_rainfall.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:packt] * # language: python # name: conda-env-packt-py # --- # # Assigning variables print('Hello World') pi = 3.14159265359 # decimal name = 'Philipp' # text age = 31 # integer sky_is_blue = True # boolean print(pi) # ## Packing multiple assignments on one line x, y = 10, 5 print(x) print(y) # ## Re-assigning (Updating the variable) pi = 'Philipp' print(pi) x, y = y, x x # ## Using non-defined variables will raise a NameError print(non_existent_variable) # ## Tips - other ways to define value ten_millions = 10_000_000 ten_millions small = .25 small sci_thousand = 10e2 sci_thousand # # Naming the variable # # 1. No keywords and operator symbols (class, def, for, ..., +, -, @, etc) # 2. No whitespace # 2. Cannot start with number counter = 0 pricey_car = 'Mercedes' income = 120_000 class = 'Mercedes' pricey@@car 1y_income = 120_000 # # Data types pi = 3.14159265359 # reassing it back type(pi) type(name) type(age) type(sky_is_blue) # ## Floats and Integers A = 6 B = 5 A + B A - B A / B A * B # **Exponent** 2**3 3**2 # **Integer division** 10 / 2 9 // 4 # way faster 10.0 // 4 2.0 # **Remainder** 10 % 3 # ## Self-assignment count = 0 count +=1 count count -=1 count count +=1 count *=2 count count **= 2 count count /=2 count count //= 2 count 10.0 // 4 # ## Order of operations (2 + 10) / 2 10 / (1 + 1) # # Strings text1 = 'This is a so-called “speakeasy”' text2 = "This is <NAME>" text3 = ''' This is <NAME>. "Welcome everyone!" - is written on the door. 
''' print('Hello\nWorld!') # ## Operators 'Hello ' + 'World' 'Hello' * 3 # ## Methods 'Hello World'.upper() 'Hello World'.lower() 'hello world'.title() 'Hello world'.replace('world', 'planet') # ## Formatting 'hello'.rjust(10, ' ') 'hello'.ljust(10, ' ') '999'.zfill(10) # **Format method** 'Hello {} world and our blue {}'.format('beautiful','planet') '{0} {2} {1} and our {2} {3}!'.format('Hello','World','Beautiful', 'Planet') 'Hello {adj} world!'.format(adj='beautiful') # **F-strings** adj = 'beautiful' f'Hello {adj} world!' name = 'pHILIPP' f'Hello, mr. {name.title()}' # **Legacy formatting (do not use)** name = 'David' print('Hello, mr. %s' % name) # **Formatting mini-language** pct = .345 value = 45500 f'Price grew by {pct:.1%} or {value:,}' # [Formatting mini-language link](https://docs.python.org/3/library/string.html#formatspec) # ## String as iterable "Hello World"[0] "Hello World"[0:5] "World" in "Hello World!" "Planet" in "Hello World!" # # Boolean 'World' == 'World' pi == pi "World" in "Hello World!" pi != pi # ## Logical operators # not (5 > 4) (5 > 4) | (6 < 5) (5 > 4) & (6 < 5) (5 > 4) ^ (5 < 6) # # Conversion float("2.5") int("45") int(4.521) float(5) int(True) float(False) str(True) bool(0) bool('Hello') bool(115.5) # **Can't convert** int("2.5") int("Hello") # # Practice # + CONST = 32 RATIO = 5/9 T_f = 100 # - T_c = (T_f - CONST) * RATIO T_c T_f2 = (T_c / RATIO) + CONST T_f2 T_f = 70 T_c = (T_f - CONST) * RATIO T_c
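# The practice block above repeats the same arithmetic each time the input changes; wrapping the formula in functions avoids re-typing it. A small sketch reusing the constants defined above:

```python
CONST = 32
RATIO = 5 / 9

def to_celsius(t_f):
    """Fahrenheit -> Celsius."""
    return (t_f - CONST) * RATIO

def to_fahrenheit(t_c):
    """Celsius -> Fahrenheit (inverse of to_celsius)."""
    return t_c / RATIO + CONST

print(to_celsius(212))                # boiling point, ~100.0 up to float rounding
print(to_fahrenheit(to_celsius(70)))  # round-trips back to ~70.0
```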
Chapter02/Variables.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/fpinell/hands_on_python_for_ds/blob/main/Lecture_3_Deep_Learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="ELbqlV_DC9Jw" # # Hands on Python for Data Science # # # ### Master II Livello - Data Science and Statistical Learning (MD2SL) 2020-2021 # # #### <NAME> # <a href="mailto:<EMAIL>"><EMAIL></a><br/> # IMT School for Advanced Studies Lucca<br/> # 2020/2021<br/> # June, 26 2021 # + [markdown] id="HUepE3bODBa2" # # Outline of today # # - Pytorch # - Neural Network (fully connected) # - CNN # - RNN # # + [markdown] id="tPNVyHGcD3mW" # # Pytorch # # ## Why ```pytorch```? # # # + colab={"base_uri": "https://localhost:8080/"} id="lipDnPDxEQ_R" outputId="dc12df69-e718-4956-83aa-f9dbb5d4d7a9" import torch # GPU available check # Go to Menu > Runtime > Change runtime. 
print('GPU available check {}'.format(torch.cuda.is_available()))
print(torch.rand(2,2))

# + [markdown] id="vk7_93nOEAQ8"
# ## Tensors
#
# - A tensor is both a container for **numbers** and for a set of rules that define transformations between tensors, producing a new tensor
#
# - **Essentially?** A multidimensional array
#   - Every tensor has a rank
#     - scalar --> rank 0
#     - array --> rank 1
#     - $n \times n$ matrix --> rank 2
#
# `torch.rand(2,2)` creates a rank-2 tensor filled with random values
#
#

# + colab={"base_uri": "https://localhost:8080/"} id="a_ATkkKhEX7D" outputId="842f6de6-9d70-402d-ce51-84e8a4209f1a"
# we can create a tensor from lists
x = torch.tensor([[0,0,1],[1,1,1],[0,0,0]])
print(x)

# + colab={"base_uri": "https://localhost:8080/"} id="Ac60DL6_HTLp" outputId="c4d06b72-19b5-4655-b073-cb182628cd8b"
# we can change an element in a tensor by using standard Python indexing
x[0][0] = 5
print(x)

# + id="HOVjXHSFHi4l"
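# The rank terminology is not torch-specific: the same cells can be mirrored in NumPy, shown here purely as an illustration (`ndim` is NumPy's name for the rank of an array):

```python
import numpy as np

scalar = np.array(3.0)         # rank 0
vector = np.array([0, 1, 2])   # rank 1
matrix = np.array([[0, 0, 1],
                   [1, 1, 1],
                   [0, 0, 0]])  # rank 2

print(scalar.ndim, vector.ndim, matrix.ndim)  # → 0 1 2

# element assignment works with the same indexing as the torch cell above
matrix[0][0] = 5
print(matrix[0, 0])  # → 5
```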
Lecture_3_Deep_Learning.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # <span style='color:darkred'> 3 Running an MD simulation </span> # # *** # # ## <span style='color:darkred'> 3.1 Using the Advanced Research Computing (ARC) </span> # # # ### <span style='color:darkred'> 3.1.1 Connect to the ARC </span> # # # *Note: if using Windows you will need to connect via MobaXterm as detailed in the setup instructions.* # # Open a terminal and remotely connect to ARCUS using ssh (secure shell), with your username: # # ``` # % ssh <EMAIL>@<EMAIL> # ``` # # When prompted, enter your password and hit Enter. # Now, you need to connect to the arcus-htc cluster, so in your command line type: # # ``` # % ssh arcus-htc # ``` # # You can now check the path to the directory you are at, by typing: # # ``` # % pwd # ``` # # It should be your home directory (/home/username). This is where you will work from. # # ### <span style='color:darkred'> 3.1.2 Obtain the required input files </span> # # Now, you need to clone the repository where all the required input files for the simulation are stored. # # ``` # % git clone https://github.com/bigginlab/OxCompBio-Datafiles.git # ``` # # Type: # # ``` # % ls # ``` # # You should see that a new directory has been generated, labelled `OxCompBio-Datafiles` # # Move to this directory: # # ``` # % cd OxCompBio-Datafiles # ``` # # In there, if you type `ls` again, you should see four more subdirectories: # # `data` Here, you will find the input files that are required to perform the MD simulation. # # `setup` Here, you can perform all the steps to setup the protein system, as will be explained below. # # `run` Here, you can perform the energy minimization and the MD production run. # # `prerun` In this directory, you will find the data from an identical simulation that has already been perform. 
You can use these files for your analysis, in case you haven't managed for some reason to perform your own simulation successfully.
#
# Now, go to the `setup` directory to start setting up the protein system:
#
# ```
# % cd setup
# ```
#
# ***
# ## <span style='color:darkred'> 3.2 The GROMACS Molecular Dynamics engine </span>
#
# [GROMACS](http://www.gromacs.org/) is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is one of many MD simulation packages; it is free to use and it is what we will be using for the purposes of this tutorial. Note that this tutorial will only scratch the surface of all the powerful things GROMACS can do and there are many functionalities that we will not cover today.
#
# ### <span style='color:darkred'> 3.2.1 A brief overview of GROMACS file types </span>
#
# GROMACS supports many different file types; here we give a brief overview of the ones we will encounter during this tutorial.
#
# - Coordinate files have the extension .gro and the default name is conf.gro.
# - The topology file (default name topol.top) contains all the information about which atoms are bonded to which, what force-field parameters are applied, etc.
# - The trajectory files have the extensions .xtc and .trr; the former does not contain velocity information and coordinates are held at a reduced precision, so it occupies less disk space. However, you will need velocities if you want to continue a simulation.
# - The .edr file contains the energy information from the trajectory.
# - The .mdp file contains the information that was used to set up the actual simulation: things related to temperature, pressure, how the electrostatics is calculated, etc.
# - The .tpr file is a binary file that contains all the information needed to perform the actual run (this allows GROMACS to do lots of self-consistency checks to minimize user errors).
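# To get a feel for the `.gro` coordinate files mentioned above, here is a minimal Python sketch that pulls the fields out of a single atom line. It assumes the standard GROMACS fixed-column layout (5-character residue number, residue name, atom name and atom number, then three 8.3-formatted positions in nm); the example line itself uses hypothetical values:

```python
def parse_gro_atom_line(line):
    """Split one fixed-width .gro atom line into its fields (positions in nm)."""
    return {
        "resid": int(line[0:5]),
        "resname": line[5:10].strip(),
        "atom": line[10:15].strip(),
        "index": int(line[15:20]),
        "xyz": (float(line[20:28]), float(line[28:36]), float(line[36:44])),
    }

# a hypothetical atom line in .gro fixed-width format
line = "    1PRO      N    1   0.129   0.235   0.310"
atom = parse_gro_atom_line(line)
print(atom["resname"], atom["atom"], atom["xyz"])
```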
# # ### <span style='color:darkred'> 3.2.2 Using GROMACS on ARCUS </span> # # In order to use GROMACS on ARCUS, you need to load the already available module: # # ``` # % module load gpu/gromacs/2020.1 # ``` # # By using the above command, the software becomes available for you to use, a bit like installing the software on the fly. # # Check that it was successfully loaded, by typing: # # ``` # % gmx help commands # ``` # # This will print basic information for every built-in GROMACS command. If you need more detailed information on a command in particular, you can type: # # ``` # % gmx [command] -h # ``` # # *** # ## <span style='color:darkred'> 3.3 Molecular Dynamics Simulation of Protein </span> # # ### <span style='color:darkred'> 3.3.1 Protein System Setup </span> # # # In this section, we will obtain our protein coordinates and perform some routine Molecular Dynamics calculations on them. # # As previously discussed in notebook 2, we will use the HIV-1 protease structure (1HSG). It is a homodimer with two chains of 99 residues each. Even though you have already obtained the .pdb file of the protein from the [protein data bank](https://www.rcsb.org/), for this step it is probably easier to just copy it from the data directory to your working directory. # # `% cp ../data/1hsg.pdb .` # # If you look at this file (using the graphical editors vi or nano for example, you should immediately see that it has two chains; A and B: # # `% vi 1hsg.pdb` # # or # # `% nano 1hsg.pdb` # # To quit the file, type: # # - `% :q`, if you used `vi` # # - `% Ctrl + x`, if you used `nano` # # # Now we need to prepare our protein for simulation. First of all we will extract only the protein coordinates from the pdb file into a new file called `protein.pdb`. To do this enter the following command: # # `% grep ATOM 1hsg.pdb > protein.pdb` # # Use your preferred text editor again to open the `protein.pdb` file and see how it differs from the `1hsg.pdb` file. 
# # We now need to generate the parameter/topology files we need. This process will also make sure that all the hydrogens are added to our protein. # # `% gmx pdb2gmx -f protein.pdb -ignh -o protein.gro` # # The `-ignh` flag will ignore any hydrogen atoms that might already be present in the `.pdb` file. # # The program should run and present a list of force-fields from which to select. Select the AMBER99SB-ILDN force field which should be option 6 in the list followed by 1 to select the recommended TIP3P water. If all goes well this should generate several files: # # 1. topol.top # 2. topol_Protein_chain_A.itp # 3. topol_Protein_chain_B.itp # 3. posre_Protein_chain_A.itp # 4. posre_Protein_chain_B.itp # 5. protein.gro # # Type: # # ``` # % ls # ``` # # to verify that all the above files have been created and are in your directory.\ # Note that the protein has a net charge of +4e. You should see a line that says "Total charge # in system 4.000 e". # # Before we can add water we need to define a box in which to put the protein and the water: # # `% gmx editconf -f protein.gro -box 7 7 7 -c -o boxed.gro` # # This puts the protein in the centre of the box that is 7 nm x 7 nm x 7 nm and creates the resulting file `boxed.gro`. As you may remember from the lecture, in this setup we will apply periodic boundary conditions, which means that the box is replicated indefinitely in all three directions. # # Next, we need to add water to the system. This can be done by using the `gmx solvate` command, which will fill the box with water by repeatedly overlaying a small box of water into the system (216 molecules). # # `% gmx solvate -cp boxed.gro -cs -o solvated.gro -p topol.top` # # You may have noticed in some of the output generated that the total system charge is +4. In order for us to use an Ewald method to calculate the electrostatic interactions we need to have a neutral system overall. 
Therefore we will add counterions (chloride ions, in this case) using the option `-neutral` and enough ions to make the solution up to 150 mM (`-conc 0.15`). This is done by replacing random water molecules (SOL) with NA+ or CL- ions. # # ``` # % gmx grompp -c solvated.gro -p topol.top -f ../data/genion.mdp -o genion.tpr # # % gmx genion -s genion.tpr -conc 0.15 -neutral -pname NA -nname CL -o system.gro -p topol.top # ``` # # When prompted, enter the group that corresponds to SOL (should be 13 or thereabouts). # # ### <span style='color:darkred'> 3.3.2 Energy Minimization </span> # # Before we can run the actual dynamics, we need to first minimize the energy of the system. This means that we need to allow the structure to relax so that the optimal geometry is obtained and there are no steric clashes between neighbouring atoms. Ideally you would minimize down until the forces were below a certain level (tolerance), but we will just give a quick burst of 200 steps here. Since we have finished setting up the system, we will now move to the `run` directory to perform our simulation from there: # # `% cd ../run` # # The `grompp` command will read the information of the system that we will provide (coordinates, topologies and simulation parameters) and will generate a .tpr run input file: # # You can first have a look at the molecular dynamics parameter file `em.mdp`. Open it with a text editor, such as `vi:` # # `% vi ../data/em.mdp` # # You should be able to see for example that we have selected the steepest descent algorithm for the minimization, we have defined 200 steps and that the minimization run will stop if the maximum force falls below 100 kJ/mol/nm. # # Quit the file, without changing anything! 
# # `% :q` # # If you accidentally edited the file, to quit without saving, type: # # `% :q!` # # The `grompp` command will read the information of the system that we will provide (coordinates, topologies and simulation parameters) and will generate a `.tpr` run input file: # # `% gmx grompp -c ../setup/system.gro -p ../setup/topol.top -f ../data/em.mdp -o em.tpr` # # It is time now to invoke the `mdrun` command, which is the main engine within GROMACS which performs the MD simulation. We cannot run the simulation on the login nodes of ARC; these can only be used to prepare the system for the simulation. Therefore instead of typing the `mdrun` command -that will initiate the energy minimisation- directly on the command line, we will submit a script that will submit the job to the job scheduler. Copy this script to your working directory: # # `% cp ../data/submit_em.sh .` # # You can explore its contents and see that the last line in the file contains the `mdrun` command by typing: # # `% cat submit_em.sh` # # The file contents will be printed on your terminal.\ # Now submit it to the cluster queue. # # `% sbatch submit_em.sh` # # It will take a few minutes to run, depending on the waiting times of the queue. Check on the status of the run by typing: # # ``` # % squeue -u $USER # ``` # # Remember to replace `username` with your own username! It should print something like this: # # ``` # JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON) # 1311337 htc EM bioc1550 PD 0:00 1 (Priority) # ``` # # The `PD` in the fifth column denotes that the job is in the queue and has not started running yet. 
It will change to `R` once it starts running: # # ``` # JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON) # 1311337 htc EM bioc1550 R 1:26 1 arcus-htc-node110 # ``` # # If the job has finished (or if it has failed), the above command will print nothing: # # ``` # JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)` # ``` # # If you made a mistake and need to cancel this job, type: # # ``` # % scancel JOBID # ``` # # `JOBID` should be replaced by the ID of this particular job (printed in the first column; see above). # # While the job is running, you can monitor its progress by typing: # # ``` # % tail -n 12 em.log # ``` # # This will show you what step the simulation is at or if it has finished.\ # Once the energy minimization has finished, we can examine it in terms of the potential energy versus the number of minimization steps: # # ``` # % gmx energy -s em.tpr -f em.edr -o em_potential_energy.xvg # ``` # # type in 10 when prompted (should correspond to potential energy from the list of options presented), then a zero to finish. # # The output will be an `.xvg` file that will contain the system potential energy as a function of the minimization step.\ # You can open the file with `vi` and observe whether the potential energy decreases during the energy minimization process, as would be expected: # # ``` # % vi em_potential_energy.xvg # ``` # # To quit the file, type: # # ``` # % :q # ``` # # ### <span style='color:darkred'> 3.3.3 Production Run </span> # # At this stage, we would normally run a short simulation where the protein atoms are restrained while the water molecules # and ions are allowed to freely move around and equilibrate around the protein. This step would also allow the system to equilibrate at the desired temperature and pressure. For this tutorial however, we will skip this bit due to # limited time. 
#
# Once again, you can open the `md.mdp` file to have a look at the parameters we have selected for this MD simulation:
#
# ```
# % vi ../data/md.mdp
# ```
#
# <span style='color:Blue'> **Questions** </span>
#
# * Can you find out how many steps we will run the simulation for, and what total simulation time that corresponds to?
#
# * Can you also see the selected thermostat and barostat and what the target temperature and the target pressure values are?
#
# Quit the file without editing it:
#
# ```
# % :q!
# ```
#
# Now finally let us perform some molecular dynamics:
#
# ```
# % gmx grompp -c em.gro -p ../setup/topol.top -f ../data/md.mdp -maxwarn 1 -o md.tpr
#
# % cp ../data/submit_md.sh .
#
# % sbatch submit_md.sh
# ```
#
# You can check the status of the job again as described previously.
#
# At the moment it is set up to run for 1000 ps. This will take several minutes to complete depending on the waiting time in the queue - time for lunch! You don't have to wait for it to finish completely, although now might be a good time for a break to allow at least some data to appear. The analysis can be done on the output files that will be generated, or you can always use the output data from the simulation that we have already performed, found in the directory `../prerun/run` (this is a 1000 ps simulation of the same system).
#
#
# ### <span style='color:darkred'> 3.3.4 MD trajectory modification </span>
#
# During the simulation, the protein was free to rotate and translate and it is possible that it has diffused out of the box (remember that due to the periodic boundary conditions, the protein could appear to exit from one side of the box while entering from the opposite side). It will be easier to perform certain types of analysis later on if we now put the protein back at the center of the box and then remove the rotational and translational motion. To this end, we need to modify the simulation trajectory `md.xtc` in three consecutive steps.
# # First, let's ensure that no atoms jump across the box and that the protein remains whole, using the `-pbc nojump` flag: # # ``` # % gmx trjconv -f md.xtc -s md.tpr -pbc nojump -o md_nojump.xtc # ``` # # When prompted, type `0` to include the entire system in the `md_nojump.xtc` trajectory. # # In the next command, we will ensure that the center of mass of the protein is placed at the center of a compact box: # # ``` # % gmx trjconv -f md_nojump.xtc -s md.tpr -pbc mol -ur compact -center -o md_center.xtc # ``` # # When prompted, type `1` to center the protein in the box, and when you are prompted again, type `0` to include the entire system in the output trajectory `md_center.xtc`. # # In the final step, we will remove the rotational and translation motion by fitting the protein to a reference structure, which in fact will be the starting structure of the simulation: # # ``` # % gmx trjconv -f md_center.xtc -s md.tpr -fit rot+trans -o md_fit.xtc # ``` # # When prompted, type `1` to select the protein for the least squares fit and then type `0` for the output. # # The `.xtc` files are large in size and since we are only going to need the original trajectory (`md.xtc`) (never delete this!) and the final trajectory (`md_fit.xtc`), we can remove the intermediate trajectories that we generated: # # ``` # % rm md_nojump.xtc md_center.xtc # ``` # # Always be very cautious with the `rm` command! If you accidentally delete a file you need, you will not be able to recover it. # # ### <span style='color:darkred'> 3.3.5 Obtain Temperature and Energy data </span> # # After the end of the production run, it would be useful to obtain some properties that will give us an insight into our protein system.\ # There are various so-called ensembles that are used for protein simulations - probably the most common is a system where the number of particles, the pressure and the temperature are held constant (NPT). 
This is usually achieved by employing algorithms that regulate the temperature and pressure, known as thermostats and barostats, respectively. Here, for the regulation of temperature, we use the velocity rescale thermostat, defined in the `md.mdp ` file, which couples the system to a "heat-bath" and ensures the reproduction of a correct kinetic ensemble. # # Nevertheless, it is usually a good idea to check these as a function of time through the trajectory just to make sure nothing unexpected happened. First let us check the temperature of our simulation. # # ``` # % gmx energy -f md.edr -s md.tpr -o 1hsg_temperature.xvg # ``` # # The program will then present you with a large table of all the values recorded in the energy (.edr) file. We want to examine temperature so type 15, press enter and then 0 and press enter again. The program will then analyse the temperature and present some statistics of the analysis at the end. # # Another set of properties that is quite useful to examine is the various energetic contributions to the energy. The total energy should be constant. but the various contributions can change and this can sometimes indicate something interesting or strange happening in your simulation. Let us look at some energetic properties of the simulation. # # ``` # % gmx energy -s md.tpr -f md.edr -o 1hsg_energies.xvg # ``` # # We shall select short-range lennard-jones (7), short range coulombic (9) and the potential energy (11). End your selection with a zero. # # We will plot and explore the temperature and the energetic components that we obtained in the next section of the tutorial. # # # ### <span style='color:darkred'> 3.3. File Transfer </span> # # As soon as the simulation is finished, you should go to your local terminal and transfer the files from the remote directory to your local directory. 
#
# *Note: If you are using Windows and MobaXterm, you will need to follow the instructions included in the `00_Setup` notebook.*
#
# First, in your local terminal, go to the `OxCompBio/tutorials/MD` directory, i.e. the directory where the Jupyter notebooks are located.
#
# Then use `scp` to transfer the remote directory to your local directory:
#
# ```
# % scp -r <EMAIL>:/home/username/OxCompBio-Datafiles/ .
# ```
#
# When prompted, enter your password. It might take a while for the files to be transferred. Once the transfer is finished, you can check that the files have indeed been copied to the `OxCompBio-Datafiles` directory:
#
# ```
# % ls OxCompBio-Datafiles/
# ```
#
# ***
# ## <span style='color:darkred'> Next Step </span>
#
# Now you are ready to perform some analysis of the simulation trajectory by working through `04_Trajectory_Analysis.ipynb`.
#
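# As a head start on the analysis, here is a minimal Python sketch for reading the `.xvg` files produced by `gmx energy` above into numpy. It assumes the usual xvg layout: `#` comment and `@` grace-directive lines followed by whitespace-separated numeric columns (the demo fabricates a tiny file rather than relying on your simulation output):

```python
import numpy as np

def read_xvg(path):
    """Load an .xvg file, skipping '#' comment and '@' directive lines."""
    rows = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("#", "@")):
                continue
            rows.append([float(x) for x in line.split()])
    return np.array(rows)

# demo on a tiny fabricated file with the same layout as 1hsg_temperature.xvg
with open("demo.xvg", "w") as fh:
    fh.write('# gmx energy output\n@ title "Temperature"\n0.0 298.1\n2.0 301.4\n')

data = read_xvg("demo.xvg")
print(data.shape)         # → (2, 2): one row per frame, time + value columns
print(data[:, 1].mean())  # average temperature over the frames
```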
# # 📝 Exercise M4.04
#
# In the previous notebook, we saw the effect of applying some regularization
# on the coefficients of a linear model.
#
# In this exercise, we will study the advantage of using some regularization
# when dealing with correlated features.
#
# We will first create a regression dataset. This dataset will contain 2,000
# samples and 5 features, of which only 2 are informative.

# +
from sklearn.datasets import make_regression

data, target, coef = make_regression(
    n_samples=2_000,
    n_features=5,
    n_informative=2,
    shuffle=False,
    coef=True,
    random_state=0,
    noise=30,
)
# -

# When creating the dataset, `make_regression` returns the true coefficients
# used to generate the dataset. Let's plot this information.

# +
import pandas as pd

feature_names = [f"Features {i}" for i in range(data.shape[1])]
coef = pd.Series(coef, index=feature_names)
coef.plot.barh()
coef
# -

# Create a `LinearRegression` regressor, fit it on the entire dataset, and
# check the values of the coefficients. Are the coefficients of the linear
# regressor close to the coefficients used to generate the dataset?

# +
from sklearn.linear_model import LinearRegression

linear_regression = LinearRegression()
linear_regression.fit(data, target)
linear_regression.coef_
# -

feature_names = [f"Features {i}" for i in range(data.shape[1])]
coef = pd.Series(linear_regression.coef_, index=feature_names)
_ = coef.plot.barh()

# We see that the coefficients are close to the coefficients used to generate
# the dataset. The dispersion is indeed caused by the noise injected during the
# dataset generation.

# Now, create a new dataset that will be the same as `data` with 4 additional
# columns that repeat features 0 and 1 twice each. This procedure will create
# perfectly correlated features.

# Write your code here.
import numpy as np

# Stack two extra copies of feature 0 and two of feature 1 after the
# original five columns.
new_data = np.concatenate([data, data[:, [0, 0, 1, 1]]], axis=1)
new_data.shape

# Fit again the linear regressor on this new dataset and check the
# coefficients. What do you observe?

# +
# Write your code here.
from sklearn.linear_model import LinearRegression

linear_regression = LinearRegression()
linear_regression.fit(new_data, target)
print(f'coefficients: {linear_regression.coef_}')

feature_names = [f"Features {i}" for i in range(new_data.shape[1])]
coef = pd.Series(linear_regression.coef_, index=feature_names)
_ = coef.plot.barh()
# -

# Create a ridge regressor and fit on the same dataset. Check the coefficients.
# What do you observe?

# Write your code here.
from sklearn.linear_model import RidgeCV

model = RidgeCV(alphas=[0.001, 0.1, 1, 10, 1000], store_cv_values=True)
model.fit(new_data, target)
print(model.alpha_)
print(f'coefficients: {model.coef_}')

feature_names = [f"Features {i}" for i in range(new_data.shape[1])]
coef = pd.Series(model.coef_, index=feature_names)
_ = coef.plot.barh()

# Can you find the relationship between the ridge coefficients and the original
# coefficients?

# +
# Write your code here.
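# One way to probe the relationship (a sketch on a hypothetical toy problem, using the closed-form ridge solution rather than scikit-learn): with perfectly duplicated columns, ridge spreads the weight evenly across the copies, so the duplicated coefficients should sum to roughly the coefficient the original single column would carry.

```python
import numpy as np

# Hypothetical toy data: one informative feature with slope 3, duplicated
# three times to create perfectly correlated columns.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y = 3.0 * x[:, 0] + rng.normal(scale=0.1, size=200)
X = np.hstack([x, x, x])

def ridge_coef(X, y, alpha):
    # closed-form ridge: w = (X^T X + alpha I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

w = ridge_coef(X, y, alpha=1.0)
print(w)        # the three coefficients come out identical
print(w.sum())  # and their sum is close to the original slope of 3
```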
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''base'': conda)' # name: python3 # --- # %pylab inline # # Notebook magic from IPython.core.magic import Magics, magics_class, line_cell_magic from IPython.core.magic import cell_magic, register_cell_magic, register_line_magic from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring import subprocess import os # + @magics_class class PyboardMagic(Magics): @cell_magic @magic_arguments() @argument('-skip') @argument('-unix') @argument('-pyboard') @argument('-file') @argument('-data') @argument('-time') @argument('-memory') def micropython(self, line='', cell=None): args = parse_argstring(self.micropython, line) if args.skip: # doesn't care about the cell's content print('skipped execution') return None # do not parse the rest if args.unix: # tests the code on the unix port. 
Note that this works on unix only
            with open('/dev/shm/micropython.py', 'w') as fout:
                fout.write(cell)
            proc = subprocess.Popen(["../../micropython/ports/unix/micropython", "/dev/shm/micropython.py"],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            print(proc.stdout.read().decode("utf-8"))
            print(proc.stderr.read().decode("utf-8"))
            return None

        if args.file: # can be used to copy the cell content onto the pyboard's flash
            spaces = "    "
            try:
                with open(args.file, 'w') as fout:
                    fout.write(cell.replace('\t', spaces))
                print('written cell to {}'.format(args.file))
            except:
                print('Failed to write to disk!')
            return None # do not parse the rest

        if args.data: # can be used to load data from the pyboard directly into kernel space
            message = pyb.exec(cell)
            if len(message) == 0:
                print('pyboard >>>')
            else:
                print(message.decode('utf-8'))
                # register new variable in user namespace
                self.shell.user_ns[args.data] = string_to_matrix(message.decode("utf-8"))

        if args.time: # measures the time of executions
            pyb.exec('import utime')
            message = pyb.exec('t = utime.ticks_us()\n' + cell +
                               '\ndelta = utime.ticks_diff(utime.ticks_us(), t)' +
                               "\nprint('execution time: {:d} us'.format(delta))")
            print(message.decode('utf-8'))

        if args.memory: # prints out memory information
            message = pyb.exec('from micropython import mem_info\nprint(mem_info())\n')
            print("memory before execution:\n========================\n", message.decode('utf-8'))
            message = pyb.exec(cell)
            print(">>> ", message.decode('utf-8'))
            message = pyb.exec('print(mem_info())')
            print("memory after execution:\n========================\n", message.decode('utf-8'))

        if args.pyboard:
            message = pyb.exec(cell)
            print(message.decode('utf-8'))

ip = get_ipython()
ip.register_magics(PyboardMagic)
# -

# ## pyboard

import pyboard
pyb = pyboard.Pyboard('/dev/ttyACM0')
pyb.enter_raw_repl()

pyb.exit_raw_repl()
pyb.close()

# +
# %%micropython -pyboard 1

import utime
import ulab as np

def timeit(n=1000):
    def wrapper(f, *args, **kwargs):
        func_name = str(f).split(' ')[1]
        def new_func(*args, **kwargs):
            run_times = np.zeros(n, dtype=np.uint16)
            for i in range(n):
                t = utime.ticks_us()
                result = f(*args, **kwargs)
                run_times[i] = utime.ticks_diff(utime.ticks_us(), t)
            print('{}() execution times based on {} cycles'.format(func_name, n))
            print('\tbest: %d us'%np.min(run_times))
            print('\tworst: %d us'%np.max(run_times))
            print('\taverage: %d us'%np.mean(run_times))
            print('\tdeviation: +/-%.3f us'%np.std(run_times))
            return result
        return new_func
    return wrapper

def timeit(f, *args, **kwargs):
    func_name = str(f).split(' ')[1]
    def new_func(*args, **kwargs):
        t = utime.ticks_us()
        result = f(*args, **kwargs)
        print('execution time: ', utime.ticks_diff(utime.ticks_us(), t), ' us')
        return result
    return new_func
# -

# __END_OF_DEFS__

# # ndarray, the base class
#
# The `ndarray` is the underlying container of numerical data. It can be thought of as micropython's own `array` object, but has a great number of extra features starting with how it can be initialised, which operations can be done on it, and which functions can accept it as an argument. One important property of an `ndarray` is that it is also a proper `micropython` iterable.
#
# The `ndarray` consists of a short header, and a pointer that holds the data. The pointer always points to a contiguous segment in memory (`numpy` is more flexible in this regard), and the header tells the interpreter, how the data from this segment is to be read out, and what the bytes mean. Some operations, e.g., `reshape`, are fast, because they do not operate on the data, they work on the header, and therefore, only a couple of bytes are manipulated, even if there are a million data entries. A more detailed exposition of how operators are implemented can be found in the section titled [Programming ulab](#Programming_ulab).
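# As a plain `numpy` aside (a sketch, not `ulab` code), the header-versus-data separation can be observed directly: `reshape` on a contiguous array hands back a view over the same buffer, so no data are copied.

```python
import numpy as np

a = np.arange(12, dtype=np.uint8)
b = a.reshape((3, 4))   # only the header changes; the buffer is shared

print(np.shares_memory(a, b))  # True
b[0, 0] = 255                  # writing through the view...
print(a[0])                    # ...is visible in the source: 255
```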
#
# Since the `ndarray` is a binary container, it is also compact, meaning that it takes only a couple of bytes of extra RAM in addition to what is required for storing the numbers themselves. `ndarray`s are also type-aware, i.e., one can save RAM by specifying a data type, and using the smallest reasonable one. Five such types are defined, namely `uint8`, `int8`, which occupy a single byte of memory per datum, `uint16`, and `int16`, which occupy two bytes per datum, and `float`, which occupies four or eight bytes per datum. The precision/size of the `float` type depends on the definition of `mp_float_t`. Some platforms, e.g., the PYBD, implement `double`s, but some, e.g., the pyboard.v.11, do not. You can find out, what type of float your particular platform implements by looking at the output of the [.itemsize](#.itemsize) class property, or looking at the exact `dtype`, when you print out an array.
#
# In addition to the five above-mentioned numerical types, it is also possible to define Boolean arrays, which can be used in the indexing of data. However, Boolean arrays are really nothing but arrays of type `uint8` with an extra flag.
#
# On the following pages, we will see how one can work with `ndarray`s. Those familiar with `numpy` should find that the nomenclature and naming conventions of `numpy` are adhered to as closely as possible. We will point out the few differences, where necessary.
#
# For the sake of comparison, in addition to the `ulab` code snippets, sometimes the equivalent `numpy` code is also presented. You can find out, where the snippet is supposed to run by looking at its first line, the header of the code block.

# ## The ndinfo function
#
# A concise summary of a couple of the properties of an `ndarray` can be printed out by calling the `ndinfo`
# function. In addition to finding out what the *shape* and *strides* of the array are, we also get the `itemsize`, as well as the type.
An interesting piece of information is the *data pointer*, which tells us, what the address of the data segment of the `ndarray` is. We will see the significance of this in the section [Slicing and indexing](#Slicing-and-indexing). # # Note that this function simply prints some information, but does not return anything. If you need to get a handle of the data contained in the printout, you should call the dedicated `shape`, `strides`, or `itemsize` functions directly. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(5), dtype=np.float) b = np.array(range(25), dtype=np.uint8).reshape((5, 5)) np.ndinfo(a) print('\n') np.ndinfo(b) # - # ## Initialising an array # # A new array can be created by passing either a standard micropython iterable, or another `ndarray` into the constructor. # ### Initialising by passing iterables # # If the iterable is one-dimensional, i.e., one whose elements are numbers, then a row vector will be created and returned. If the iterable is two-dimensional, i.e., one whose elements are again iterables, a matrix will be created. If the lengths of the iterables are not consistent, a `ValueError` will be raised. Iterables of different types can be mixed in the initialisation function. # # If the `dtype` keyword with the possible `uint8/int8/uint16/int16/float` values is supplied, the new `ndarray` will have that type, otherwise, it assumes `float` as default. # + # %%micropython -unix 1 from ulab import numpy as np a = [1, 2, 3, 4, 5, 6, 7, 8] b = np.array(a) print("a:\t", a) print("b:\t", b) # a two-dimensional array with mixed-type initialisers c = np.array([range(5), range(20, 25, 1), [44, 55, 66, 77, 88]], dtype=np.uint8) print("\nc:\t", c) # and now we throw an exception d = np.array([range(5), range(10), [44, 55, 66, 77, 88]], dtype=np.uint8) print("\nd:\t", d) # - # ### Initialising by passing arrays # # An `ndarray` can be initialised by supplying another array. 
This statement is almost trivial, since `ndarray`s are iterables themselves, though it should be pointed out that initialising through arrays is a bit faster. This statement is especially true, if the `dtype`s of the source and output arrays are the same, because then the contents can simply be copied without further ado. While type conversion is also possible, it will always be slower than straight copying.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = np.array(a)
c = np.array(b)
d = np.array(b, dtype=np.uint8)

print("a:\t", a)
print("\nb:\t", b)
print("\nc:\t", c)
print("\nd:\t", d)
# -

# Note that the default type of the `ndarray` is `float`. Hence, if the array is initialised from another array, type conversion will always take place, except, when the output type is specifically supplied. I.e.,

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array(range(5), dtype=np.uint8)
b = np.array(a)

print("a:\t", a)
print("\nb:\t", b)
# -

# will iterate over the elements in `a`, since in the assignment `b = np.array(a)`, no output type was given, therefore, `float` was assumed. On the other hand,

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array(range(5), dtype=np.uint8)
b = np.array(a, dtype=np.uint8)

print("a:\t", a)
print("\nb:\t", b)
# -

# will simply copy the content of `a` into `b` without any iteration, and will, therefore, be faster. Keep this in mind, whenever the output type, or performance is important.

# ## Array initialisation functions
#
# There are eleven functions that can be used for initialising an array.
#
# 1. [numpy.arange](#arange)
# 1. [numpy.concatenate](#concatenate)
# 1. [numpy.diag](#diag)
# 1. [numpy.empty](#empty)
# 1. [numpy.eye](#eye)
# 1. [numpy.frombuffer](#frombuffer)
# 1. [numpy.full](#full)
# 1. [numpy.linspace](#linspace)
# 1. [numpy.logspace](#logspace)
# 1. [numpy.ones](#ones)
# 1. 
[numpy.zeros](#zeros)

# ### arange
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.arange.html
#
# The function returns a one-dimensional array with evenly spaced values. Takes 3 positional arguments (two are optional), and the `dtype` keyword argument.

# +
# %%micropython -unix 1

from ulab import numpy as np

print(np.arange(10))
print(np.arange(2, 10))
print(np.arange(2, 10, 3))
print(np.arange(2, 10, 3, dtype=np.float))
# -

# ### concatenate
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html
#
# The function joins a sequence of arrays, if they are compatible in shape, i.e., if all shapes except the one along the joining axis are equal.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array(range(25), dtype=np.uint8).reshape((5, 5))
b = np.array(range(15), dtype=np.uint8).reshape((3, 5))
c = np.concatenate((a, b), axis=0)
print(c)
# -

# **WARNING**: `numpy` accepts arbitrary `dtype`s in the sequence of arrays, in `ulab` the `dtype`s must be identical. If you want to concatenate different types, you have to convert all arrays to the same type first. Here `b` is of `float` type, so it cannot directly be concatenated to `a`. However, if we cast the `dtype` of `b`, the concatenation works:

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array(range(25), dtype=np.uint8).reshape((5, 5))
b = np.array(range(15), dtype=np.float).reshape((5, 3))
d = np.array(b+1, dtype=np.uint8)
print('a: ', a)
print('='*20 + '\nd: ', d)
c = np.concatenate((d, a), axis=1)
print('='*20 + '\nc: ', c)
# -

# ### diag
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.diag.html
#
# Extract a diagonal, or construct a diagonal array.
#
# The function takes two arguments, an `ndarray`, and a shift. If the first argument is a two-dimensional array, the function returns a one-dimensional array containing the diagonal entries. The diagonal can be shifted by an amount given in the second argument.
#
# If the first argument is a one-dimensional array, the function returns a two-dimensional tensor with its diagonal elements given by the first argument.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3, 4])
print(np.diag(a))
# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array(range(16)).reshape((4, 4))
print('a: ', a)
print()
print('diagonal of a: ', np.diag(a))
# -

# ### empty
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.empty.html
#
# `empty` is simply an alias for `zeros`, i.e., as opposed to `numpy`, the entries of the tensor will be initialised to zero.

# ### eye
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.eye.html
#
# Another special array method is the `eye` function, whose call signature is
#
# ```python
# eye(N, M, k=0, dtype=float)
# ```
# where `N` (`M`) specify the dimensions of the matrix (if only `N` is supplied, then we get a square matrix, otherwise one with `N` rows, and `M` columns), and `k` is the shift of the ones (the main diagonal corresponds to `k=0`). Here are a couple of examples.

# #### With a single argument

# +
# %%micropython -unix 1

from ulab import numpy as np

print(np.eye(5))
# -

# #### Specifying the dimensions of the matrix

# +
# %%micropython -unix 1

from ulab import numpy as np

print(np.eye(4, M=6, k=-1, dtype=np.int16))
# +
# %%micropython -unix 1

from ulab import numpy as np

print(np.eye(4, M=6, dtype=np.int8))
# -

# ### frombuffer
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.frombuffer.html
#
# The function interprets a contiguous buffer as a one-dimensional array, and thus can be used for piping buffered data directly into an array. This method of analysing, e.g., ADC data is much more efficient than passing the ADC buffer into the `array` constructor, because `frombuffer` simply creates the `ndarray` header and blindly copies the memory segment, without inspecting the underlying data.
#
# The function takes a single positional argument, the buffer, and three keyword arguments. These are the `dtype` with a default value of `float`, the `offset`, with a default of 0, and the `count`, with a default of -1, meaning that all data are taken in.

# +
# %%micropython -unix 1

from ulab import numpy as np

buffer = b'\x01\x02\x03\x04\x05\x06\x07\x08'
print('buffer: ', buffer)

a = np.frombuffer(buffer, dtype=np.uint8)
print('a, all data read: ', a)

b = np.frombuffer(buffer, dtype=np.uint8, offset=2)
print('b, all data with an offset: ', b)

c = np.frombuffer(buffer, dtype=np.uint8, offset=2, count=3)
print('c, only 3 items with an offset: ', c)
# -

# ### full
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.full.html
#
# The function returns an array of arbitrary dimension, whose elements are all equal to the second positional argument. The first argument is a tuple describing the shape of the tensor. The `dtype` keyword argument with a default value of `float` can also be supplied.

# +
# %%micropython -unix 1

from ulab import numpy as np

# create an array with the default type
print(np.full((2, 4), 3))
print('\n' + '='*20 + '\n')

# the array type is uint8 now
print(np.full((2, 4), 3, dtype=np.uint8))
# -

# ### linspace
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html
#
# This function returns an array, whose elements are uniformly spaced between the `start`, and `stop` points. The number of points is determined by the `num` keyword argument, whose default value is 50. With the `endpoint` keyword argument (defaults to `True`) one can include `stop` in the sequence. In addition, the `dtype` keyword can be supplied to force type conversion of the output. The default is `float`. Note that, when `dtype` is of integer type, the sequence is not necessarily evenly spaced. This is not an error, rather a consequence of rounding. (This is also the `numpy` behaviour.)
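# The rounding remark can be verified in plain `numpy` (a sketch, not `ulab` code): with an integer `dtype`, the step sizes between consecutive elements come out unequal.

```python
import numpy as np

seq = np.linspace(0, 5, num=7, endpoint=False, dtype=np.uint8)
steps = np.diff(seq.astype(int))
print(seq)
print(steps)  # the steps are not all equal
```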
# +
# %%micropython -unix 1

from ulab import numpy as np

# generate a sequence with defaults
print('default sequence:\t', np.linspace(0, 10))

# num=5
print('num=5:\t\t\t', np.linspace(0, 10, num=5))

# num=5, endpoint=False
print('num=5:\t\t\t', np.linspace(0, 10, num=5, endpoint=False))

# num=7, endpoint=False, dtype=uint8
print('num=7:\t\t\t', np.linspace(0, 5, num=7, endpoint=False, dtype=np.uint8))
# -

# ### logspace
#
# `linspace`'s equivalent for logarithmically spaced data is `logspace`. This function produces a sequence of numbers, in which the quotient of consecutive numbers is constant. This is a geometric sequence.
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.logspace.html
#
# This function returns an array, whose elements are evenly spaced on a log scale between the points `base**start`, and `base**stop`. The number of points is determined by the `num` keyword argument, whose default value is 50. With the `endpoint` keyword argument (defaults to `True`) one can include `stop` in the sequence. In addition, the `dtype` keyword can be supplied to force type conversion of the output. The default is `float`. Note that, exactly as in `linspace`, when `dtype` is of integer type, the sequence is not necessarily evenly spaced in log space.
#
# In addition to the keyword arguments found in `linspace`, `logspace` also accepts the `base` argument. The default value is 10.
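# The geometric-sequence property can be checked in plain `numpy` (a sketch, not `ulab` code): the quotient of consecutive elements of a `logspace` sequence is constant.

```python
import numpy as np

seq = np.logspace(1, 10, num=5, base=2)
ratios = seq[1:] / seq[:-1]
print(ratios)  # every ratio equals 2**2.25
```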
# +
# %%micropython -unix 1

from ulab import numpy as np

# generate a sequence with defaults
print('default sequence:\t', np.logspace(0, 3))

# num=5
print('num=5:\t\t\t', np.logspace(1, 10, num=5))

# num=5, endpoint=False
print('num=5:\t\t\t', np.logspace(1, 10, num=5, endpoint=False))

# num=5, endpoint=False, base=2
print('num=5:\t\t\t', np.logspace(1, 10, num=5, endpoint=False, base=2))
# -

# ### ones, zeros
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ones.html
#
# A couple of special arrays and matrices can easily be initialised by calling one of the `ones`, or `zeros` functions. `ones` and `zeros` follow the same pattern, and have the call signature
#
# ```python
# ones(shape, dtype=float)
# zeros(shape, dtype=float)
# ```
# where shape is either an integer, or a tuple specifying the shape.

# +
# %%micropython -unix 1

from ulab import numpy as np

print(np.ones(6, dtype=np.uint8))
print(np.zeros((6, 4)))
# -

# When specifying the shape, make sure that the length of the tuple is not larger than the maximum dimension of your firmware.

# +
# %%micropython -unix 1

from ulab import numpy as np
import ulab

print('maximum number of dimensions: ', ulab.__version__)

print(np.zeros((2, 2, 2)))
# -

# ## Customising array printouts
#
# `ndarray`s are pretty-printed, i.e., if the number of entries along the last axis is larger than 10 (default value), then only the first and last three entries will be printed. Also note that, as opposed to `numpy`, the printout always contains the `dtype`.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array(range(200))
print("a:\t", a)
# -

# ### set_printoptions
#
# The default values can be overwritten by means of the `set_printoptions` function [numpy.set_printoptions](https://numpy.org/doc/1.18/reference/generated/numpy.set_printoptions.html), which accepts two keyword arguments, the `threshold`, and the `edgeitems`. 
The first of these arguments determines the length of the longest array that will be printed in full, while the second is the number of items that will be printed on the left and right hand side of the ellipsis, if the array is longer than `threshold`.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array(range(20))
print("a printed with defaults:\t", a)

np.set_printoptions(threshold=200)
print("\na printed in full:\t\t", a)

np.set_printoptions(threshold=10, edgeitems=2)
print("\na truncated with 2 edgeitems:\t", a)
# -

# ### get_printoptions
#
# The set value of the `threshold` and `edgeitems` can be retrieved by calling the `get_printoptions` function with no arguments. The function returns a *dictionary* with two keys.

# +
# %%micropython -unix 1

from ulab import numpy as np

np.set_printoptions(threshold=100, edgeitems=20)
print(np.get_printoptions())
# -

# ## Methods and properties of ndarrays
#
# Arrays have several *properties* that can be queried, and some methods that can be called. With the exception of the `flatten` and `transpose` operators, properties return an object that describes some feature of the array, while the methods return a new array-like object.
#
# 1. [.byteswap](#.byteswap)
# 1. [.copy](#.copy)
# 1. [.dtype](#.dtype)
# 1. [.flat](#.flat)
# 1. [.flatten](#.flatten)
# 1. [.itemsize](#.itemsize)
# 1. [.reshape](#.reshape)
# 1. [.shape](#.shape)
# 1. [.size](#.size)
# 1. [.T](#.transpose)
# 1. [.transpose](#.transpose)
# 1. [.sort](#.sort)

# ### .byteswap
#
# `numpy` https://numpy.org/doc/stable/reference/generated/numpy.char.chararray.byteswap.html
#
# The method takes a single keyword argument, `inplace`, with values `True` or `False`, and swaps the bytes in the array. If `inplace = False`, a new `ndarray` is returned, otherwise the original values are overwritten.
#
# The `frombuffer` function is a convenient way of receiving data from peripheral devices that work with buffers. 
However, it is not guaranteed that the byte order (in other words, the _endianness_) of the peripheral device matches that of the microcontroller. The `.byteswap` method makes it possible to change the endianness of the incoming data stream.
#
# Obviously, byteswapping makes sense only for those cases, when a datum occupies more than one byte, i.e., for the `uint16`, `int16`, and `float` `dtype`s. When `dtype` is either `uint8`, or `int8`, the method simply returns a view or copy of self, depending upon the value of `inplace`.

# +
# %%micropython -unix 1

from ulab import numpy as np

buffer = b'\x01\x02\x03\x04\x05\x06\x07\x08'
print('buffer: ', buffer)

a = np.frombuffer(buffer, dtype=np.uint16)
print('a: ', a)

b = a.byteswap()
print('b: ', b)
# -

# ### .copy
#
# The `.copy` method creates a new *deep copy* of an array, i.e., the entries of the source array are *copied* into the target array.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3, 4], dtype=np.int8)
b = a.copy()

print('a: ', a)
print('='*20)
print('b: ', b)
# -

# ### .dtype
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.dtype.html
#
# The `.dtype` property is the `dtype` of an array. This can then be used for initialising another array with the matching type. `ulab` implements two versions of `dtype`; one that is `numpy`-like, i.e., one, which returns a `dtype` object, and one that is significantly cheaper in terms of flash space, but does not define a `dtype` object, and holds a single character (number) instead.
# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3, 4], dtype=np.int8)
b = np.array([5, 6, 7], dtype=a.dtype)

print('a: ', a)
print('dtype of a: ', a.dtype)
print('\nb: ', b)
# -

# If the `ulab.h` header file sets the pre-processor constant `ULAB_HAS_DTYPE_OBJECT` to 0 as
#
# ```c
# #define ULAB_HAS_DTYPE_OBJECT (0)
# ```
# then the output of the previous snippet will be

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3, 4], dtype=np.int8)
b = np.array([5, 6, 7], dtype=a.dtype)

print('a: ', a)
print('dtype of a: ', a.dtype)
print('\nb: ', b)
# -

# Here 98 is nothing but the ASCII value of the character `b`, which is the type code for signed 8-bit integers. The object definition adds around 600 bytes to the firmware.

# ### .flat
#
# numpy: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flat.html
#
# `.flat` returns the array's flat iterator. For one-dimensional objects the flat iterator is equivalent to the standard iterator, while for higher dimensional tensors, it amounts to first flattening the array, and then iterating over it. Note, however, that the flat iterator does not consume RAM beyond what is required for holding the position of the iterator itself, while flattening produces a new copy.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3, 4], dtype=np.int8)
for _a in a:
    print(_a)

a = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=np.int8)
print('a:\n', a)

for _a in a:
    print(_a)

for _a in a.flat:
    print(_a)
# -

# ### .flatten
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flatten.html
#
# `.flatten` returns the flattened array. The array can be flattened in `C` style (i.e., moving along the last axis in the tensor), or in `fortran` style (i.e., moving along the first axis in the tensor).
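# For comparison, the two flattening orders in plain `numpy` (a sketch, not `ulab` code):

```python
import numpy as np

b = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int8)
print(b.flatten())            # C order, rows first: [1 2 3 4 5 6]
print(b.flatten(order='F'))   # Fortran order, columns first: [1 4 2 5 3 6]
```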
# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3, 4], dtype=np.int8)
print("a: \t\t", a)
print("a flattened: \t", a.flatten())

b = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int8)
print("\nb:", b)

print("b flattened (C): \t", b.flatten())
print("b flattened (F): \t", b.flatten(order='F'))
# -

# ### .itemsize
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.itemsize.html
#
# The `.itemsize` property is an integer with the size of elements in the array.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3], dtype=np.int8)
print("a:\n", a)
print("itemsize of a:", a.itemsize)

b = np.array([[1, 2], [3, 4]], dtype=np.float)
print("\nb:\n", b)
print("itemsize of b:", b.itemsize)
# -

# ### .reshape
#
# `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html
#
# `reshape` re-writes the shape properties of an `ndarray`, but the array will not be modified in any other way. The function takes a single 2-tuple with two integers as its argument. The 2-tuple should specify the desired number of rows and columns. If the new shape is not consistent with the old, a `ValueError` exception will be raised.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]], dtype=np.uint8)
print('a (4 by 4):', a)
print('a (2 by 8):', a.reshape((2, 8)))
print('a (1 by 16):', a.reshape((1, 16)))
# -

# Note that `ndarray.reshape()` can also be called by assigning to `ndarray.shape`.

# ### .shape
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.shape.html
#
# The `.shape` property is a tuple whose elements are the length of the array along each axis.
# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3, 4], dtype=np.int8)
print("a:\n", a)
print("shape of a:", a.shape)

b = np.array([[1, 2], [3, 4]], dtype=np.int8)
print("\nb:\n", b)
print("shape of b:", b.shape)
# -

# By assigning a tuple to the `.shape` property, the array can be `reshape`d:

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
print('a:\n', a)

a.shape = (3, 3)
print('\na:\n', a)
# -

# ### .size
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.size.html
#
# The `.size` property is an integer specifying the number of elements in the array.

# +
# %%micropython -unix 1

from ulab import numpy as np

a = np.array([1, 2, 3], dtype=np.int8)
print("a:\n", a)
print("size of a:", a.size)

b = np.array([[1, 2], [3, 4]], dtype=np.int8)
print("\nb:\n", b)
print("size of b:", b.size)
# -

# ### .T
#
# The `.T` property of the `ndarray` is equivalent to [.transpose](#.transpose).

# ### .tobytes
#
# `numpy`: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.tobytes.html
#
# The `.tobytes` method can be used for acquiring a handle of the underlying data pointer of an array, and it returns a new `bytearray` that can be fed into any method that can accept a `bytearray`, e.g., ADC data can be buffered into this `bytearray`, or the `bytearray` can be fed into a DAC. Since the `bytearray` is really nothing but the bare data container of the array, any manipulation on the `bytearray` automatically modifies the array itself.
#
# Note that the method raises a `ValueError` exception, if the array is not dense (i.e., it has already been sliced).
# + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(8), dtype=np.uint8) print('a: ', a) b = a.tobytes() print('b: ', b) # modify b b[0] = 13 print('='*20) print('b: ', b) print('a: ', a) # - # ### .transpose # # `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html # # Returns the transposed array. It is only defined for arrays with more than one dimension. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], dtype=np.uint8) print('a:\n', a) print('shape of a:', a.shape) a.transpose() print('\ntranspose of a:\n', a) print('shape of a:', a.shape) # - # The transpose of the array can also be obtained through the `T` property: # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.uint8) print('a:\n', a) print('\ntranspose of a:\n', a.T) # - # ### .sort # # `numpy`: https://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html # # In-place sorting of an `ndarray`. For a more detailed exposition, see [sort](#sort). # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([[1, 12, 3, 0], [5, 3, 4, 1], [9, 11, 1, 8], [7, 10, 0, 1]], dtype=np.uint8) print('\na:\n', a) a.sort(axis=0) print('\na sorted along vertical axis:\n', a) a = np.array([[1, 12, 3, 0], [5, 3, 4, 1], [9, 11, 1, 8], [7, 10, 0, 1]], dtype=np.uint8) a.sort(axis=1) print('\na sorted along horizontal axis:\n', a) a = np.array([[1, 12, 3, 0], [5, 3, 4, 1], [9, 11, 1, 8], [7, 10, 0, 1]], dtype=np.uint8) a.sort(axis=None) print('\nflattened a sorted:\n', a) # - # ## Unary operators # # With the exception of `len`, which returns a single number, all unary operators manipulate the underlying data element-wise. # ### len # # This operator takes a single argument, the array, and returns the length of the first axis.
# + # %%micropython -unix 1 from ulab import numpy as np a = np.array([1, 2, 3, 4, 5], dtype=np.uint8) b = np.array([range(5), range(5), range(5), range(5)], dtype=np.uint8) print("a:\t", a) print("length of a: ", len(a)) print("shape of a: ", a.shape) print("\nb:\t", b) print("length of b: ", len(b)) print("shape of b: ", b.shape) # - # The number returned by `len` is also the length of the iterations, when the array supplies the elements for an iteration (see later). # ### invert # # The function is defined for integer data types (`uint8`, `int8`, `uint16`, and `int16`) only, takes a single argument, and returns the element-by-element, bit-wise inverse of the array. If a `float` is supplied, the function raises a `ValueError` exception. # # With signed integers (`int8`, and `int16`), the results might be unexpected, as in the example below: # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([0, -1, -100], dtype=np.int8) print("a:\t\t", a) print("inverse of a:\t", ~a) a = np.array([0, 1, 254, 255], dtype=np.uint8) print("\na:\t\t", a) print("inverse of a:\t", ~a) # - # ### abs # # This function takes a single argument, and returns the element-by-element absolute value of the array. When the data type is unsigned (`uint8`, or `uint16`), a copy of the array will be returned immediately, and no calculation takes place. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([0, -1, -100], dtype=np.int8) print("a:\t\t\t ", a) print("absolute value of a:\t ", abs(a)) # - # ### neg # # This operator takes a single argument, and changes the sign of each element in the array. Unsigned values are wrapped. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([10, -1, 1], dtype=np.int8) print("a:\t\t", a) print("negative of a:\t", -a) b = np.array([0, 100, 200], dtype=np.uint8) print("\nb:\t\t", b) print("negative of b:\t", -b) # - # ### pos # # This function takes a single argument, and simply returns a copy of the array. 
# + # %%micropython -unix 1 from ulab import numpy as np a = np.array([10, -1, 1], dtype=np.int8) print("a:\t\t", a) print("positive of a:\t", +a) # - # ## Binary operators # # `ulab` implements the `+`, `-`, `*`, `/`, `**`, `<`, `>`, `<=`, `>=`, `==`, `!=`, `+=`, `-=`, `*=`, `/=`, `**=` binary operators that work element-wise. Broadcasting is available, meaning that the two operands do not even have to have the same shape. If the lengths along the respective axes are equal, or one of them is 1, or the axis is missing, the element-wise operation can still be carried out. # A thorough explanation of broadcasting can be found under https://numpy.org/doc/stable/user/basics.broadcasting.html. # # **WARNING**: note that relational operators (`<`, `>`, `<=`, `>=`, `==`, `!=`) should have the `ndarray` on their left hand side, when compared to scalars. This means that the following works # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([1, 2, 3]) print(a > 2) # - # while the equivalent statement, `2 < a`, will raise a `TypeError` exception: # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([1, 2, 3]) print(2 < a) # - # **WARNING:** `circuitpython` users should use the `equal`, and `not_equal` operators instead of `==`, and `!=`. See the section on [array comparison](#Comparison-of-arrays) for details. # ### Upcasting # # Binary operations require special attention, because two arrays with different typecodes can be the operands of an operation, in which case it is not trivial, what the typecode of the result is. This decision on the result's typecode is called upcasting. Since the number of typecodes in `ulab` is significantly smaller than in `numpy`, we have to define new upcasting rules. Where possible, I followed `numpy`'s conventions. # # `ulab` observes the following upcasting rules: # # 1. Operations on two `ndarray`s of the same `dtype` preserve their `dtype`, even when the results overflow. # # 2. 
If either of the operands is a `float`, the result is automatically a `float`. # # 3. When one of the operands is a scalar, it will internally be turned into a single-element `ndarray` with the *smallest* possible `dtype`. Thus, e.g., if the scalar is 123, it will be converted into an array of `dtype` `uint8`, while -1000 will be converted into `int16`. An `mp_obj_float` will always be promoted to `dtype` `float`. Other micropython types (e.g., lists, tuples, etc.) raise a `TypeError` exception. # # 4. Otherwise, the result's `dtype` follows the table below: # # | left hand side | right hand side | ulab result | numpy result | # |----------------|-----------------|-------------|--------------| # |`uint8` |`int8` |`int16` |`int16` | # |`uint8` |`int16` |`int16` |`int16` | # |`uint8` |`uint16` |`uint16` |`uint16` | # |`int8` |`int16` |`int16` |`int16` | # |`int8` |`uint16` |`uint16` |`int32` | # |`uint16` |`int16` |`float` |`int32` | # # Note that the last two operations are promoted to `int32` in `numpy`. # # **WARNING:** Due to the lower number of available data types, the upcasting rules of `ulab` are slightly different from those of `numpy`. Watch out for this when porting code! # # Upcasting can be seen in action in the following snippet: # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([1, 2, 3, 4], dtype=np.uint8) b = np.array([1, 2, 3, 4], dtype=np.int8) print("a:\t", a) print("b:\t", b) print("a+b:\t", a+b) c = np.array([1, 2, 3, 4], dtype=np.float) print("\na:\t", a) print("c:\t", c) print("a*c:\t", a*c) # - # ### Benchmarks # # The following snippet compares the performance of binary operations to a possible implementation in python.
For the time measurement, we will take the following snippet from the micropython manual: # + # %%micropython -pyboard 1 import utime def timeit(f, *args, **kwargs): func_name = str(f).split(' ')[1] def new_func(*args, **kwargs): t = utime.ticks_us() result = f(*args, **kwargs) print('execution time: ', utime.ticks_diff(utime.ticks_us(), t), ' us') return result return new_func # + # %%micropython -pyboard 1 from ulab import numpy as np @timeit def py_add(a, b): return [a[i]+b[i] for i in range(1000)] @timeit def py_multiply(a, b): return [a[i]*b[i] for i in range(1000)] @timeit def ulab_add(a, b): return a + b @timeit def ulab_multiply(a, b): return a * b a = [0.0]*1000 b = range(1000) print('python add:') py_add(a, b) print('\npython multiply:') py_multiply(a, b) a = np.linspace(0, 10, num=1000) b = np.ones(1000) print('\nulab add:') ulab_add(a, b) print('\nulab multiply:') ulab_multiply(a, b) # - # The python implementation above is not perfect, and certainly, there is much room for improvement. However, the factor of 50 difference in execution time is very spectacular. This is nothing but a consequence of the fact that the `ulab` functions run `C` code, with very little python overhead. The factor of 50 appears to be quite universal: the FFT routine obeys similar scaling (see [Speed of FFTs](#Speed-of-FFTs)), and this number came up with font rendering, too: [fast font rendering on graphical displays](https://forum.micropython.org/viewtopic.php?f=15&t=5815&p=33362&hilit=ufont#p33383). # ## Comparison operators # # The smaller than, greater than, smaller or equal, and greater or equal operators return a vector of Booleans indicating the positions (`True`), where the condition is satisfied. 
# + # %%micropython -unix 1 from ulab import numpy as np a = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.uint8) print(a < 5) # - # **WARNING**: at the moment, due to `micropython`'s implementation details, the `ndarray` must be on the left hand side of the relational operators. # # That is, while `a < 5` and `5 > a` have the same meaning, the following code will not work: # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.uint8) print(5 > a) # - # ## Iterating over arrays # # `ndarray`s are iterable, which means that their elements can also be accessed as can the elements of a list, tuple, etc. If the array is one-dimensional, the iterator returns scalars, otherwise a new reduced-dimensional *view* is created and returned. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array([1, 2, 3, 4, 5], dtype=np.uint8) b = np.array([range(5), range(10, 15, 1), range(20, 25, 1), range(30, 35, 1)], dtype=np.uint8) print("a:\t", a) for i, _a in enumerate(a): print("element %d in a:"%i, _a) print("\nb:\t", b) for i, _b in enumerate(b): print("element %d in b:"%i, _b) # - # ## Slicing and indexing # # # ### Views vs. copies # # `numpy` has a very important concept called *views*, which is a powerful extension of `python`'s own notion of slicing. Slices are special python objects of the form # # ```python # slice = start:stop:step # ``` # # where `start`, `stop`, and `step` are (not necessarily non-negative) integers. Not all of these three numbers must be specified in an index, in fact, all three of them can be missing. The interpreter takes care of filling in the missing values. (Note that slices cannot be defined on their own in this way; the notation is valid only where an index is expected.) For a good explanation on how slices work in python, you can read the stackoverflow question https://stackoverflow.com/questions/509211/understanding-slice-notation. # # In order to see what slicing does, let us take the string `string = '0123456789'`!
We can extract every second character by creating the slice `::2`, which is equivalent to `0:len(a):2`, i.e., increments the character pointer by 2 starting from 0, and traversing the string up to the very end. string = '0123456789' string[::2] # Now, we can do the same with numerical arrays. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(10), dtype=np.uint8) print('a:\t', a) print('a[::2]:\t', a[::2]) # - # This looks similar to `string` above, but there is a very important difference that is not so obvious. Namely, `string[::2]` produces a partial copy of `string`, while `a[::2]` only produces a *view* of `a`. What this means is that `a`, and `a[::2]` share their data, and the only difference between the two is, how the data are read out. In other words, internally, `a[::2]` has the same data pointer as `a`. We can easily convince ourselves that this is indeed the case by calling the [ndinfo](#The_ndinfo_function) function: the *data pointer* entry is the same in the two printouts. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(10), dtype=np.uint8) print('a: ', a, '\n') np.ndinfo(a) print('\n' + '='*20) print('a[::2]: ', a[::2], '\n') np.ndinfo(a[::2]) # - # If you are still a bit confused about the meaning of *views*, the section [Slicing and assigning to slices](#Slicing-and-assigning-to-slices) should clarify the issue. # ### Indexing # # The simplest form of indexing is specifying a single integer between the square brackets as in # + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(10), dtype=np.uint8) print("a: ", a) print("the first, and last element of a:\n", a[0], a[-1]) print("the second, and last but one element of a:\n", a[1], a[-2]) # - # Indexing can be applied to higher-dimensional tensors, too. When the length of the indexing sequences is smaller than the number of dimensions, a new *view* is returned, otherwise, we get a single number. 
# + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(9), dtype=np.uint8).reshape((3, 3)) print("a:\n", a) print("a[0]:\n", a[0]) print("a[1,1]: ", a[1,1]) # - # Indices can also be a list of Booleans. By using a Boolean list, we can select those elements of an array that satisfy a specific condition. At the moment, such indexing is defined for row vectors only; when the rank of the tensor is higher than 1, the function raises a `NotImplementedError` exception, though this will be rectified in a future version of `ulab`. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(9), dtype=np.float) print("a:\t", a) print("a < 5:\t", a[a < 5]) # - # Indexing with Boolean arrays can take more complicated expressions. This is a very concise way of comparing two vectors, e.g.: # + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(9), dtype=np.uint8) b = np.array([4, 4, 4, 3, 3, 3, 13, 13, 13], dtype=np.uint8) print("a:\t", a) print("\na**2:\t", a*a) print("\nb:\t", b) print("\n100*sin(b):\t", np.sin(b)*100.0) print("\na[a*a > np.sin(b)*100.0]:\t", a[a*a > np.sin(b)*100.0]) # - # Boolean indices can also be used in assignments, if the array is one-dimensional. The following example replaces the data in an array, wherever some condition is fulfilled. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(9), dtype=np.uint8) b = np.array(range(9)) + 12 print(a[b < 15]) a[b < 15] = 123 print(a) # - # On the right hand side of the assignment we can even have another array. # + # %%micropython -unix 1 from ulab import numpy as np a = np.array(range(9), dtype=np.uint8) b = np.array(range(9)) + 12 print(a[b < 15], b[b < 15]) a[b < 15] = b[b < 15] print(a) # - # ### Slicing and assigning to slices # # You can also generate sub-arrays by specifying slices as the index of an array. 
# + # %%micropython -unix 1 from ulab import numpy as np a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.uint8) print('a:\n', a) # the first row print('\na[0]:\n', a[0]) # the first two elements of the first row print('\na[0,:2]:\n', a[0,:2]) # the zeroth element in each row (also known as the zeroth column) print('\na[:,0]:\n', a[:,0]) # the last row print('\na[-1]:\n', a[-1]) # the last two rows backwards print('\na[-1:-3:-1]:\n', a[-1:-3:-1]) # - # Assignment to slices can be done for the whole slice, per row, and per column. A couple of examples should make these statements clearer: # + # %%micropython -unix 1 from ulab import numpy as np a = np.zeros((3, 3), dtype=np.uint8) print('a:\n', a) # assigning to the whole row a[0] = 1 print('\na[0] = 1\n', a) a = np.zeros((3, 3), dtype=np.uint8) # assigning to a column a[:,2] = 3 print('\na[:,2] = 3\n', a) # - # Now, you should notice that we re-set the array `a` after the first assignment. Do you care to see what happens, if we do not do that? Well, here are the results: # + # %%micropython -unix 1 from ulab import numpy as np a = np.zeros((3, 3), dtype=np.uint8) b = a[:,:] # assign 1 to the first row b[0] = 1 # assigning to the last column b[:,2] = 3 print('a: ', a) # - # Note that both assignments involved `b`, and not `a`, yet, when we print out `a`, its entries are updated. This proves our earlier statement about the behaviour of *views*: in the statement `b = a[:,:]` we simply created a *view* of `a`, and not a *deep* copy of it, meaning that whenever we modify `b`, we actually modify `a`, because the underlying data container of `a` and `b` is shared between the two objects. Having a single data container for two seemingly different objects provides an extremely powerful way of manipulating sub-sets of numerical data. # If you want to work on a *copy* of your data, you can use the `.copy` method of the `ndarray`.
The following snippet should drive the point home: # + # %%micropython -unix 1 from ulab import numpy as np a = np.zeros((3, 3), dtype=np.uint8) b = a.copy() # get the address of the underlying data pointer np.ndinfo(a) print() np.ndinfo(b) # assign 1 to the first row of b, and do not touch a b[0] = 1 print() print('a: ', a) print('='*20) print('b: ', b) # - # The `.copy` method can also be applied to views: below, `a[0]` is a *view* of `a`, out of which we create a *deep copy* called `b`. This is a row vector now. We can then do whatever we want to with `b`, and that leaves `a` unchanged. # + # %%micropython -unix 1 from ulab import numpy as np a = np.zeros((3, 3), dtype=np.uint8) b = a[0].copy() print('b: ', b) print('='*20) # assign 1 to the first entry of b, and do not touch a b[0] = 1 print('a: ', a) print('='*20) print('b: ', b) # - # The fact that the underlying data of a view is the same as that of the original array has another important consequence, namely, that the creation of a view is cheap. Both in terms of RAM, and execution time. A view is really nothing but a short header with a data array that already exists, and is filled up. Hence, creating the view requires only the creation of its header. This operation is fast, and uses virtually no RAM.
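# The view-versus-copy behaviour described in this section can also be double-checked on a PC with CPython's `numpy`, whose semantics `ulab` follows here. A minimal sketch (note that `np.shares_memory` is a CPython-only helper, not part of `ulab`):

```python
import numpy as np

a = np.zeros((3, 3), dtype=np.uint8)
view = a[:, :]     # a view: shares its data container with a
copy = a.copy()    # a deep copy: owns its own data

view[0] = 1        # writing through the view modifies a as well
copy[:, 2] = 3     # writing to the copy leaves a untouched

print(a)
print(np.shares_memory(a, view), np.shares_memory(a, copy))  # True False
```

# Running this confirms that only the assignment through the view shows up in `a`.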
docs/ulab-ndarray.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Objective # # _todo: introduction to objective_ # # In order to accomplish this objective, a connection between Python and R has been used. This choice is mainly driven by the need for code that is easy to understand and implement, which is why Python is the language used for the code developed in this paper. # # On the other hand, the library that returns the results to Python is the ECoL library, implemented in R. That is why there exists the need to connect both languages, which is done with connector libraries available in both Python and R. # # For this process, a server process is created in RStudio using `Rserve`. This allows Python to connect as a client using the library `pyRserve`. # def safe_connect(self, operation): connection = self.__connection try: metric = connection.r(operation) except Exception: metric = None print('Could not retrieve {}!'.format(operation)) return metric # This connection implements the aforementioned link: the metrics of a dataset are calculated in R and exported back to the Python environment, where their manipulation is smoother and simpler. # # The obtained metrics are not altered or tampered with at any point in the process, but a dictionary is used to store those values and associate them with their respective metric domain.
The structure uses a key-value organization, where the value is formed not only by the metrics, but also by a message associated with the metrics - which allows later representation of those values: # # ```python # metrics.update ( # {'<metric_name>': [message, metric_value]} # ) # ``` # # The code representing this process is: # def get_metrics(self, X=None, Y=None): if X is None and Y is None: if self.__metrics is not None: return self.__metrics else: # No data and no parameters: abort execution. error_message = ''' No metrics so far! Try giving the dataset matrix and target vector as parameters.\n ''' raise Exception(error_message) else: # Stores connection to R's RPC. connect = self.__connection # Sends the input matrix and the output vector to R. connect.r.X = X connect.r.y = Y # Convert the data to R data frames and load the ECoL library. connect.r('df_X <- as.data.frame(X)') connect.r('df_y <- as.data.frame(y)') connect.r('library("ECoL")') # Metrics: a dictionary provides fast access to its contents.
metrics = {} # Balance balance = self.safe_connect('balance(df_X, df_y)') message = '# Balance (C1, C2):\t' balance_dic_entry = {'balance': [message, balance]} metrics.update(balance_dic_entry) # Correlation correlation = self.safe_connect('correlation(df_X, df_y)') message = '# Correlation (C1, C2, C3, C4):\t' correlation_dic_entry = {'correlation': [message, correlation]} metrics.update(correlation_dic_entry) # Dimensionality dimensionality = self.safe_connect('dimensionality(df_X, df_y)') message = '# Dimensionality (T2, T3, T4):' dimensionality_dic_entry = {'dimensionality': [message, dimensionality]} metrics.update(dimensionality_dic_entry) # Linearity linearity = self.safe_connect('linearity(df_X, df_y)') message = '# Linearity (L1, L2, L3):\t' linearity_dic_entry = {'linearity': [message, linearity]} metrics.update(linearity_dic_entry) # Neighborhood neighborhood = self.safe_connect('neighborhood(df_X, df_y)') message = '# Neighborhood (N1, N2, N3, N4, T1, LSC):\t' neighborhood_dic_entry = {'neighborhood': [message, neighborhood]} metrics.update(neighborhood_dic_entry) # Network network = self.safe_connect('network(df_X, df_y)') message = '# Network (Density, ClsCoef, Hubs):\t' network_dic_entry = {'network': [message, network]} metrics.update(network_dic_entry) # Overlap overlap = self.safe_connect('overlapping(df_X, df_y)') message = '# Overlap (F1, F1v, F2, F3, F4):\t' overlap_dic_entry = {'overlap': [message, overlap]} metrics.update(overlap_dic_entry) # Smoothness smoothness = self.safe_connect('smoothness(df_X, df_y)') message = '# Smoothness (S1, S2, S3, S4):\t' smoothness_dic_entry = {'smoothness': [message, smoothness]} metrics.update(smoothness_dic_entry) self.__metrics = metrics return metrics # In order to test the code implementation, a dataset is needed on which to try the connection. The `iris` dataset has been chosen for this purpose.
# # That information is loaded and formatted so that passing it to R does not raise any exception. # + import csv import numpy # Global variables DATASET_PATH = '../dataset/iris.csv' DATASET = [] ''' Load Dataset data into an array ''' with open(DATASET_PATH, 'r') as csv_file: # Skip header next(csv_file) # Iterator object to read the CSV csv_reader = csv.reader( csv_file, delimiter=',', quoting=csv.QUOTE_ALL ) # Create array from CSV for row in csv_reader: DATASET.append(row) def parse_dataset(): ## Data # Input X = numpy.array(DATASET) # Transformed to numpy array to allow more X = X[:, 0 : -1] # operations on it. X = numpy.array( [ numpy.array(row).astype(float) for row in X ] ) # Target Y = numpy.array( [ row[-1] for row in DATASET ] ) return X, Y # - # Now the dataset can freely be used and passed to R. if __name__ == '__main__': # Data inputs, target = parse_dataset() # R does not take string values. So each class is translated into a # numerical value. for row in range(len(target)): if target[row] == 'setosa': target[row] = 1 elif target[row] == 'versicolor': target[row] = 2 elif target[row] == 'virginica': target[row] = 3 else: target[row] = 0 # Connect to R connector = r_connect() # Compute and print metrics for dataset connector.get_print_metrics(inputs, target) # The results returned should look like: # ```bash # === Printing metrics === # # # Balance (C1, C2): <TaggedList(C1=0.9999999999999998, C2=0.0)> # # Correlation (C1, C2, C3, C4): None # # Dimensionality (T2, T3, T4): [0.02666667 0.01333333 0.5 ] # # Linearity (L1, L2, L3): <TaggedList(L1=TaggedArray([0.00433569, 0.00750964], key=['mean', 'sd']), L2=TaggedArray([0.01333333, 0.02309401], key=['mean', 'sd']), L3=TaggedArray([0., 0.], key=['mean', 'sd']))> # # Neighborhood (N1, N2, N3, N4, T1, LSC): <TaggedList(N1=0.10666666666666667, N2=TaggedArray([0.19739445, 0.14762821], key=['mean', 'sd']), N3=TaggedArray([0.06 , 0.23828244], key=['mean', 'sd']), N4=TaggedArray([0.01333333, 0.11508192],
key=['mean', 'sd']), T1=TaggedArray([0.05555556, 0.09094996], key=['mean', 'sd']), LSC=0.8164)> # # Network (Density, ClsCoef, Hubs): <TaggedList(Density=0.8340044742729307, ClsCoef=0.2652736191628974, Hubs=TaggedArray([0.83805083, 0.27533194], key=['mean', 'sd']))> # # Overlap (F1, F1v, F2, F3, F4): <TaggedList(F1=TaggedArray([0.27981465, 0.26490069], key=['mean', 'sd']), F1v=TaggedArray([0.02677319, 0.03379179], key=['mean', 'sd']), F2=TaggedArray([0.00638177, 0.01105354], key=['mean', 'sd']), F3=TaggedArray([0.12333333, 0.2136196 ], key=['mean', 'sd']), F4=TaggedArray([0.04333333, 0.07505553], key=['mean', 'sd']))> # # Smoothness (S1, S2, S3, S4): None # ```
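# The `{name: [message, value]}` layout described earlier can be exercised without a live R connection; the metric names and placeholder values below are illustrative only, not real ECoL results:

```python
# Minimal stand-in for the metrics dictionary built by get_metrics.
metrics = {}

def add_metric(name, message, value):
    # Same {name: [message, value]} key-value layout as above.
    metrics[name] = [message, value]

add_metric('balance', '# Balance (C1, C2):\t', [0.99, 0.0])
add_metric('dimensionality', '# Dimensionality (T2, T3, T4):', [0.027, 0.013, 0.5])

# Later representation of the stored values:
for message, value in metrics.values():
    print(message, value)
```

# This keeps each metric's display message next to its raw value, so the printing step never has to know which domain produced the numbers.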
notebook/project_notebook.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Oceanbolt Python SDK Lesson 6: Port Distances at Scale # # For certain maritime analyses, you will require a distance parameter. E.g. to calculate voyage times or estimate bunker consumption. It is often a tedious process to access many port to port distances or calculating the distance from a current position of a vessel to a port. We have made this easy by exposing our proprietary Oceanbolt Port Distance calculator. # # In this lesson, we will demonstrate how. This is Lesson 6 in our Python learning series. You can find past lessons on our blog: https://www.oceanbolt.com/blog/. You can find the full documentation on our distance calculator here: https://python-sdk.oceanbolt.com/distance_v3/distance.html # + # Import the relevant libraries from oceanbolt.sdk.client import APIClient from oceanbolt.sdk.distance import DistanceCalculator import pandas as pd # - # Create the base API client using your token. Token can be created in the Oceanbolt App (app.oceanbolt.com) base_client = APIClient("<token>") # ### Port to Port Distance # You can use the API to get simple port to port distances. You also have the option to include relevant way points for bunker stops etc. # Connect to the relevant Oceanbolt data endpoints using the base client object, ie: Distance distance = DistanceCalculator(base_client).distance( locations=[ {"unlocode": "BRSSZ"}, {"unlocode": "CNQDG"}, ] ) print("There are",round(distance),"nautical miles from Santos to Qingdao") # ### Vessel to Port Distance # Say you wanted instead to find the remaining distance from a particular vessel to a port. # # As an example, you can find the distance from the vessel MAGIC VELA (IMO: 9473327) to Santos. 
distance = DistanceCalculator(base_client).distance( locations=[ {"imo": 9473327}, {"unlocode": "BRSSZ"}, ] ) print("There are",round(distance),"nautical miles from MAGIC VELA to Santos") # This is what the query in our dashboard would look like. # # <img src="santos_to_9473327.png" width="500" align="center"> # ### Multiple Ports to Ports # If you have ever had to calculate the distance from multiple ports to multiple ports, you know how tedious this process can be. With our SDK, the process is easy. Simply create a csv file containing all the pairs you want to calculate and process it with the SDK. You can find the steps below. The example shows distances from Santos to 8 different Chinese ports. # Load CSV into dataframe for easy manipulation df = pd.read_csv('.../Port_Pairs.csv') #update file path df # + # Loop to add distances into dataframe port_distances = [] for i in range(len(df['loadport'])): distance = DistanceCalculator(base_client).distance( locations=[ {"unlocode": df['loadport'][i]}, {"unlocode": df['disport'][i]}, ] ) port_distances.append(distance) port_distances # - # Appending new column to the DataFrame df['Distance'] = port_distances # Export to CSV df.to_csv(path_or_buf='<path_to_file>') #update file path
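# Without an API token, the loop above can be dry-run with a stubbed distance function; `fake_distance` and the port pairs below are purely illustrative stand-ins, not real SDK calls or real distances:

```python
import pandas as pd

# Hypothetical stand-in for Port_Pairs.csv
df = pd.DataFrame({
    'loadport': ['BRSSZ', 'BRSSZ'],
    'disport': ['CNQDG', 'CNSHA'],
})

def fake_distance(origin, destination):
    # Placeholder for DistanceCalculator(base_client).distance(...);
    # returns dummy nautical-mile figures.
    return {'CNQDG': 11000.0, 'CNSHA': 10500.0}[destination]

df['Distance'] = [fake_distance(lp, dp)
                  for lp, dp in zip(df['loadport'], df['disport'])]
print(df)
```

# Swapping `fake_distance` for the real calculator reproduces the workflow in the cell above.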
docs/examples/21_port_distances.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import glob import cv2 import matplotlib.pyplot as plt from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score import keras from keras.models import Sequential, load_model from keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, Lambda, Cropping2D from keras.utils import np_utils from keras import optimizers from keras.callbacks import EarlyStopping SEED = 2017 # + # Data can be downloaded at http://download.tensorflow.org/example_images/flower_photos.tgz # - # Specify data directory and extract all file names DATA_DIR = 'Data/' images = glob.glob(DATA_DIR + "flower_photos/*/*.jpg") # Extract labels from the directory names (the second-to-last path component) labels = [x.split('/')[-2] for x in images] unique_labels = set(labels) plt.figure(figsize=(15, 15)) i = 1 for label in unique_labels: image = images[labels.index(label)] img = cv2.imread(image) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.subplot(5, 5, i) plt.title("{0} ({1})".format(label, labels.count(label))) i += 1 _ = plt.imshow(img) plt.show() encoder = LabelBinarizer() encoder.fit(labels) y = encoder.transform(labels).astype(float) X_train, X_val, y_train, y_val = train_test_split(images, y, test_size=0.2, random_state=SEED) # + # Define architecture model = Sequential() model.add(Lambda(lambda x: (x / 255.)
- 0.5, input_shape=(100, 100, 3))) model.add(Conv2D(16, (5, 5), activation='relu', padding='same')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.75)) model.add(Conv2D(32, (5, 5), activation='relu', padding='same')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.75)) model.add(Conv2D(64, (3, 3), activation='relu', padding='same')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.75)) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.75)) model.add(Dense(5, activation='softmax')) # Define optimizer and compile opt = optimizers.Adam() model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) # + img_rows = img_cols = 100 img_channels = 3 def batchgen(x, y, batch_size, transform=False): # Create empty numpy arrays images = np.zeros((batch_size, img_rows, img_cols, img_channels)) class_id = np.zeros((batch_size, len(y[0]))) while 1: for n in range(batch_size): i = np.random.randint(len(x)) x_ = cv2.imread(x[i]) x_ = cv2.cvtColor(x_, cv2.COLOR_BGR2RGB) # The images have different sizes, we transform all to 100x100 pixels x_ = cv2.resize(x_, (100, 100)) images[n] = x_ class_id[n] = y[i] yield images, class_id # - callbacks = [EarlyStopping(monitor='val_acc', patience=5)] len(X_val) # + batch_size = 256 n_epochs = 100 steps_per_epoch = len(X_train) // batch_size val_steps = len(X_val) // batch_size train_generator = batchgen(X_train, y_train, batch_size, True) val_generator = batchgen(X_val, y_val, batch_size, True) history = model.fit_generator(train_generator, steps_per_epoch=steps_per_epoch, epochs=n_epochs, validation_data=val_generator, validation_steps=val_steps, callbacks=callbacks ) # - plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('Model accuracy') plt.ylabel('accuracy') plt.xlabel('epochs') plt.legend(['train', 'validation'], loc='lower right') plt.show() # + test_generator = batchgen(X_val, y_val, 1, False) preds = 
model.predict_generator(test_generator, steps=len(X_val)) y_val_ = [np.argmax(x) for x in y_val] y_preds = [np.argmax(x) for x in preds] accuracy_score(y_val_, y_preds) # - n_predictions = 5 plt.figure(figsize=(15, 15)) for i in range(n_predictions): plt.subplot(n_predictions, n_predictions, i+1) plt.title("{0} ({1})".format(list(set(labels))[np.argmax(preds[i])], list(set(labels))[np.argmax(y_val[i])])) img = cv2.imread(X_val[i]) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.axis('off') plt.imshow(img) plt.tight_layout() plt.show()
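One thing worth noting about the notebook above: `batchgen` accepts a `transform` flag, but the flag is never used inside the function, so passing `True` has no effect. A minimal sketch of how augmentation could be wired in is below; it operates on in-memory arrays instead of file paths, and the random horizontal flip is my own choice, not the book's code.

```python
import numpy as np

img_rows = img_cols = 8
img_channels = 3

def batchgen_augmented(x, y, batch_size, transform=False, seed=None):
    """Yield (images, labels) batches; optionally apply a random horizontal flip.

    `x` here is an array of already-loaded images (unlike the notebook's
    file-path version), so the sketch stays self-contained.
    """
    rng = np.random.default_rng(seed)
    while True:
        images = np.zeros((batch_size, img_rows, img_cols, img_channels))
        class_id = np.zeros((batch_size, y.shape[1]))
        for n in range(batch_size):
            i = rng.integers(len(x))
            img = x[i]
            if transform and rng.random() < 0.5:
                img = img[:, ::-1, :]  # flip left-right
            images[n] = img
            class_id[n] = y[i]
        yield images, class_id

# Demo with random data standing in for the flower photos
x_demo = np.random.rand(10, img_rows, img_cols, img_channels)
y_demo = np.eye(5)[np.random.randint(0, 5, size=10)]
batch_x, batch_y = next(batchgen_augmented(x_demo, y_demo, 4, transform=True, seed=0))
print(batch_x.shape, batch_y.shape)  # (4, 8, 8, 3) (4, 5)
```

In practice Keras's own `ImageDataGenerator` (used later in this book) covers the same ground with many more transforms.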
Chapter07/Chapter 7 - Classifying objects in images.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # Advent of Code Day 3: # # Part 1: # given a string of binary numbers, figure out which is the most[least] common in each column. Create a new binary number out of the most[least] common values and convert it to decimal import pandas as pd # + mywidths = [1,1,1,1,1] digits = pd.read_fwf("day3_sample.txt", widths=mywidths) digits.head() # - digits.info() jj = digits.sum() # + d = {'a': 0, 'b': 0, 'c': 0, 'd':0, 'e':0} gamma = pd.Series(data=d, index=['a', 'b', 'c', 'd', 'e']) epsilon = pd.Series(data=d, index=['a','b','c','d','e']) num_of_elements = len(digits) for ind in jj.index: if jj[ind] > num_of_elements / 2: gamma[ind] = 1 else: epsilon[ind]=1 gamma # - epsilon def to_decimal(series): #print(series['a']) my_number = series['a']*2**4 + series['b']*2**3 + series['c']*2**2 + series['d']*2**1 + series['e']*2**0 return my_number eps_number = to_decimal(epsilon) gamma_number = to_decimal(gamma) power_consumption = eps_number * gamma_number power_consumption # Final input has more columns. It should totally be possible to make the above flexible for the number of columns, but... It's only the Advent of Code -- so let's brute force it a bit! 
# # + mywidths = [1,1,1,1,1,1,1,1,1,1,1,1] digits = pd.read_fwf("day3_input.txt", widths=mywidths) digits.head() # - jj = digits.sum() # + d = {'a': 0, 'b': 0, 'c': 0, 'd':0, 'e':0,'f':0,'g':0,'h':0,'i':0,'j':0,'k':0,'l':0} gamma = pd.Series(data=d, index=['a', 'b', 'c', 'd', 'e','f','g','h','i','j','k','l']) epsilon = pd.Series(data=d, index=['a','b','c','d','e','f','g','h','i','j','k','l']) num_of_elements = len(digits) for ind in jj.index: if jj[ind] > num_of_elements / 2: gamma[ind] = 1 else: epsilon[ind]=1 gamma # - def to_decimal(series): #print(series['a']) my_number = series['a']*2**11 + series['b']*2**10 + series['c']*2**9 + series['d']*2**8 my_number = my_number + series['e']*2**7 + series['f']*2**6 + series['g']*2**5 + series['h']*2**4 my_number = my_number + series['i']*2**3 + series['j']*2**2 + series['k']*2**1 + series['l']*2**0 return my_number eps = to_decimal(epsilon) gam = to_decimal(gamma) power = eps * gam power # Part 2 is a bit more fussy... to get the oxygen rating, for each bit we have to identify which is the most common, and then only keep the entries that have that bit. And repeat until there is only one number left. To get the CO2 scrubber rating, we have to go through the same process -- only identifying the least common bit at every opportunity. # + #starting with the sample... 
mywidths = [1,1,1,1,1] digits = pd.read_fwf("day3_sample.txt", widths=mywidths) #now with the real file mywidths = [1,1,1,1,1,1,1,1,1,1,1,1] digits = pd.read_fwf("day3_input.txt", widths=mywidths) # + oxygen = digits.copy() num_of_elements = len(oxygen) jj = oxygen.sum() for ind in jj.index: if jj[ind] >= num_of_elements / 2: oxygen = oxygen[oxygen[ind]==1] else: oxygen = oxygen[oxygen[ind]==0] print(ind,len(oxygen)) if len(oxygen)== 1: break else: jj = oxygen.sum() num_of_elements = len(oxygen) print(oxygen) # + co2= digits.copy() num_of_elements = len(co2) jj = co2.sum() for ind in jj.index: if jj[ind] >= num_of_elements / 2: co2 = co2[co2[ind]==0] else: co2 = co2[co2[ind]==1] print(ind, len(co2)) if len(co2)== 1: break else: jj = co2.sum() num_of_elements = len(co2) print(co2) # + oxygen_dec = to_decimal(oxygen) co2_dec = to_decimal(co2) life_support = oxygen_dec.iloc[0] * co2_dec.iloc[0] life_support # -
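As the comment above admits, the hard-coded column letters could be generalized. A hedged sketch of a width-agnostic version (my own helper names, not the notebook's code) is:

```python
def series_to_decimal(bits):
    """Interpret a sequence of 0/1 values, most significant bit first,
    as a decimal number, for any number of columns."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

def gamma_epsilon(column_sums, num_rows):
    """Derive the most-common (gamma) and least-common (epsilon) bit per
    column from the per-column counts of 1s."""
    gamma_bits = [1 if s > num_rows / 2 else 0 for s in column_sums]
    epsilon_bits = [1 - b for b in gamma_bits]
    return gamma_bits, epsilon_bits

# Example with the 5-bit sample from the puzzle description (12 rows)
sums = [7, 4, 8, 7, 5]  # count of 1s in each column
gamma_bits, epsilon_bits = gamma_epsilon(sums, 12)
print(series_to_decimal(gamma_bits) * series_to_decimal(epsilon_bits))  # 198
```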
day3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Class 1: Introduction # Today we are going to review the most basic concepts of Python. We use Python 3, which has some differences from Python 2. # ### 1.1 Print # We use the `print` command to write messages to the console. Let's start by printing the message `Hello World!`. # My first program print('Hello World!') # Besides code, we use `#` to write a comment. This helps us understand the code better; we can also use it to disable a line of code that we don't want to delete. print('Jessica') print('Only Child') #print('Illinois, Chicago') # ### 1.2 Input # If we want to give information to the computer, we can use the `input` command. my_name = input('Please type your name: ') print('Hello',my_name,'!') # ### 1.3 Using variables # In the previous example we used the word `my_name` to store the name we typed. We call these containers variables. Variables store values, but we can change their contents at any time. x = 5 print(x) x = 3 print(x) # In the previous example we used the command `print(x)` twice but it produced different results. This is because we changed the value of `x` before writing the command again. # # We can create many variables, and we can create new values with mathematical operations. a = 5 b = 3 c = a+b print(c) # Let's do a couple more examples. a = 5 print(a) a = a - 1 print(a) a = a + 1 print(a) # ### 1.4 Mathematical operators # #### 1.4.1: Arithmetic operators: # In the previous examples we used the symbols `+` and `-` to do mathematical operations. These are arithmetic operators, but they are not the only ones: # # 1. Addition: `+` # 2. Subtraction: `-` # 3. Multiplication: `*` # 4. Division: `/` # 5. Power: `**` # 6. 
Remainder (modulus): `%` # + a = 5 b = 2 print("a + b = ",a+b) print("a - b = ",a-b) print("a * b = ",a*b) print("a / b = ",a/b) print("a ** b = ",a**b) print("a % b = ",a%b) # - # Let's study the remainder operator a bit more. In a division, if we don't compute the decimal part, we are left with a remainder. The `%` operator gives us this remainder. In the previous example we computed: # # `5 % 2` and the result was `1`. # # We can think of it as dividing `5 / 2`, but instead of computing the full division we find the closest multiple of 2 not greater than 5, that is, 4. We subtract 4 from 5 and get the remainder 1. # # Let's look at other examples. print('4 % 2 = ', 4 % 2) print('10 % 2 = ', 10 % 2) print('10 % 3 = ', 10 % 3) print('11 % 3 = ', 11 % 3) print('12 % 3 = ', 12 % 3) print('13 % 3 = ', 13 % 3) print('14 % 3 = ', 14 % 3) # In the first example we see that `4 % 2 = 0`; this is because 4 is divisible by 2 and there is no remainder. The same happens in the second example. # # In the third example the result is one. The closest previous number divisible by 3 is 9; we subtract 9 from 10 (`10-9`) and get 1. The same applies to the next example, but in that case, since we are dividing `11 / 3`, we subtract 9 from 11 (`11-9`). # # Something changes when we reach 12: since 12 is divisible by 3, the result is 0 again. When we divide 13, the closest previous number divisible by 3 is now 12 instead of 9, so we subtract 12 from 13 instead of 9. # # Do you notice a pattern? What do you think would happen if we compute `15 % 3` or `29 % 3`? # Extras: # 1. What do you think the `%` operator is useful for? # 1. The code in the previous examples looked a bit repetitive; do you know why it had to be written that way? If nothing comes to mind, try to spot the problem with the following code print('5 + 2 = ',5+1) print('5 + 2 = ',5-2) # #### 1.4.2 Logical operators # # Another important type of operators are the logical (comparison) operators: # # 1. 
Equal to: `==` # 1. Less than: `<` # 1. Greater than: `>` # 1. Less than or equal to: `<=` # 1. Greater than or equal to: `>=` # 1. Not equal to: `!=` # # The result of these operators is always `True` or `False`. # + a = 5 b = 4 print('a == b:',a==b) print('a < b:', a<b) print('a > b:', a>b) print('a <= b:', a<=b) print('a >= b:', a>=b) print('a != b:', a!=b) # - # The difference between `=` and `==` is very important. # # 1. `=` is for assigning a value # # `x = 5` assigns the value 5 to `x` # # # 1. `==` is for making a comparison # # `x == 5` asks whether `x` is equal to 5. Depending on the value of `x`, this returns `True` or `False`. # There are more types of operators, but we will study them later. # ### 1.5 Data types # At the beginning, we used a variable to store a name, and later we used variables to store numeric values. # # Data types are distinct kinds of variables used to store different things. Let's review some of the most common types: # # 1. `int`: Integer numbers # 1. `float`: Numbers with decimals # 1. `bool`: True or False # 1. `str`: String of characters # # We can use the `type` command so that Python tells us the type of variable we are using a = 1 print(type(a)) b = 2.5 print(type(b)) c = True print(type(c)) d = 'dog' print(type(d)) # A list stores the values of several variables, which can be of different types. We can access its values with square brackets `[]` and an index. Note that we start at `0`. We will study lists later. lista_1 = [1,'dog',5.2] print(lista_1[0]) print(lista_1[1]) print(lista_1[2]) # ### Exercise # # Use the concepts covered in class to write the following programs # 1. Write a program that asks for the diameter of a circle and computes its area and perimeter # + # Write your code here # - # 2. Write a program that asks for a person's name and age and prints a personalized greeting. # + # Write your code here
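One possible solution sketch for the first exercise (among many; it uses `math.pi` rather than a hand-typed constant, and replaces `input()` with a fixed value so it runs non-interactively):

```python
import math

def circle_from_diameter(diameter):
    """Return (area, perimeter) of a circle given its diameter."""
    radius = diameter / 2
    area = math.pi * radius ** 2
    perimeter = math.pi * diameter
    return area, perimeter

diameter = 10.0  # in a real program: diameter = float(input('Diameter: '))
area, perimeter = circle_from_diameter(diameter)
print(f"Area: {area:.2f}, Perimeter: {perimeter:.2f}")  # Area: 78.54, Perimeter: 31.42
```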
01 - Introduccion.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <table> # <tr><td align="right" style="background-color:#ffffff;"> # <img src="../images/logo.jpg" width="20%" align="right"> # </td></tr> # <tr><td align="right" style="color:#777777;background-color:#ffffff;font-size:12px;"> # <NAME> | April 04, 2019 (updated) # </td></tr> # <tr><td align="right" style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;"> # This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. # </td></tr> # </table> # $ \newcommand{\bra}[1]{\langle #1|} $ # $ \newcommand{\ket}[1]{|#1\rangle} $ # $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ # $ \newcommand{\dot}[2]{ #1 \cdot #2} $ # $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ # $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ # $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ # $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ # $ \newcommand{\mypar}[1]{\left( #1 \right)} $ # $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ # $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ # $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ # $ \newcommand{\onehalf}{\frac{1}{2}} $ # $ \newcommand{\donehalf}{\dfrac{1}{2}} $ # $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ # $ \newcommand{\vzero}{\myvector{1\\0}} $ # $ \newcommand{\vone}{\myvector{0\\1}} $ # $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ # $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ # $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ # $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ # $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ # $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ 
\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ # $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ # $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ # <h2>One Qubit</h2> # # A qubit (quantum bit) has two states: state 0 and state 1. # # They are denoted by ket-notation: # # $ \ket{0} $ and $ \ket{1} $. # # We can show them as vectors: # # $ \ket{0} = \myvector{1 \\ 0} $ and $ \ket{1} = \myvector{0\\ 1} $. # <h3> NOT operator </h3> # # NOT operator flips the value of a qubit. # # We use capital letter for the matrix form of the operators: # # $ X = \X$. # <div style="background-color:#f8f8f8;color:#555555;font-size:13px;"> # <b><i>A technical note: Why is operator NOT referred as x-gate?<i></b> # # In Bronze, we use only real numbers, but we should note that complex numbers are also used in quantum computing. # # When complex numbers are used, a qubit can be represented by a four dimensional real number valued vector, which is not possible to visualize. # # On the other hand, it is still possible to represent a qubit (with complex numbers) equivalently in three dimensions. # # This representation is called Bloch sphere. # # In three dimensions, we have axis: x, y, and z. # # X refers to the rotation with respect to x-axis. # # Similarly, we have the rotation with respect to y-axis and z-axis. # # In Bronze, we will also see the operator Z (z-gate). # # The operator Y is defined with complex numbers. # </div> # The action of $ X $ on the qubit: # # $ X \ket{0} = \ket{1} $. # # More explicitly, $ X \ket{0} = \X \vzero = \vone = \ket{1} $. # # Similarly, $ X \ket{1} = \ket{0} $. # # More explicitly, $ X \ket{1} = \X \vone = \vzero = \ket{0} $. 
# <h3> Hadamard operator</h3> # # Hadamard operator ($ H $ or h-gate) looks like a fair coin-flipping. # # $$ # H = \hadamard. # $$ # # But, there are certain dissimilarities: # <ul> # <li> we have a <u>negative entry</u>, and</li> # <li> instead of $ \frac{1}{2} $, we have <u>its square root</u> $ \mypar{ \frac{1}{\sqrt{2}} } $. </li> # </ul> # # <font color="blue"> Quantum systems can have negative transitions. </font> # # <font color="blue"> Can probabilistic system be extended with negative values?</font> # <h4> One-step Hadamard</h4> # # Let's start in $ \ket{0} $. # # After applying $ H $: # # $$ # H \ket{0} = \hadamard \vzero = \vhadamardzero. # $$ # # After measurement, we observe the states $ \ket{0} $ and $ \ket{1} $ with equal probability $ \frac{1}{2} $. # # How can this be possible when their values are $ \frac{1}{\sqrt{2}} $? # <img src="../images/photon3a.jpg" width="45%"> # Let's start in $ \ket{1} $. # # After applying $ H $: # # $$ # H \ket{1} = \hadamard \vone = \vhadamardone. # $$ # # After measurement, we observe the states $ \ket{0} $ and $ \ket{1} $ with equal probability $ \frac{1}{2} $. # # We obtain the same values even when one of the values is negative. # <img src="../images/photon3c.jpg" width="35%"> # <i>The absolute value of a negative value is positive.</i> # # <i>The square of a negative value is also positive.</i> # # As we have observed, the second fact fits better when reading the measurement results. # # <font color="blue"><b> When a quantum system is measured, the probability of observing one state is the square of its value.</b></font> # # The value of the system being in a state is called its <b>amplitude</b>. # # In the above example, the amplitudes of states $\ket{0}$ and $ \ket{1} $ are respectively $ \sqrttwo $ and $ -\sqrttwo $. # # The probabilities of observing them after a measurement are $ \onehalf $. 
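The two Hadamard calculations above can be checked numerically. This small sketch is not part of the original notebook; it only verifies the amplitudes and the square-of-amplitude probabilities:

```python
import numpy as np

# Hadamard matrix and the computational basis states
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

for name, state in [("H|0>", H @ ket0), ("H|1>", H @ ket1)]:
    # The amplitudes differ in sign, but measurement probabilities are the
    # squares of the amplitudes, so both states give 1/2 and 1/2.
    print(name, "amplitudes:", np.round(state, 4),
          "probabilities:", np.round(state ** 2, 4))
```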
# <h3> Task 1 </h3> # # What are the probabilities of observing the states $ \ket{0} $ and $ \ket{1} $ if the system is in $ \myvector{-\frac{3}{5} \\ - \frac{4}{5}} $ or $ \myvector{\frac{3}{5} \\ \frac{4}{5}} $ or $ \myrvector{\frac{1}{\sqrt{3}} \\ - \frac{\sqrt{2}}{\sqrt{3}}} $? # # your solution is here # # <h3> Quantum state </h3> # # <i>What do we know at this point?</i> # <ul> # <li> A quantum state can be represented by a vector, in which each entry can be zero, a positive value, or a negative value. </li> # <li> We can also say that the amplitude of any state can be zero, a positive value, or a negative value. </li> # <li> The probability of observing one state after measurement is the square of its amplitude. </li> # </ul> # # <i>What else can we say?</i> # # Can the entries of a quantum state be arbitrary? # # Do you remember the properties of a probabilistic state?
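For Task 1 the rule stated above is mechanical: square each amplitude. A sketch over the three state vectors listed in the task (the labels are just my own descriptions of those vectors):

```python
states = {
    "(-3/5, -4/5)": [-3 / 5, -4 / 5],
    "(3/5, 4/5)": [3 / 5, 4 / 5],
    "(1/sqrt(3), -sqrt(2)/sqrt(3))": [1 / 3 ** 0.5, -(2 ** 0.5) / (3 ** 0.5)],
}
for name, (a0, a1) in states.items():
    p0, p1 = a0 ** 2, a1 ** 2
    # valid quantum states: squared amplitudes sum to 1
    assert abs(p0 + p1 - 1) < 1e-9
    print(f"{name}: P(|0>) = {p0:.4f}, P(|1>) = {p1:.4f}")
```

Note that the first two vectors give identical probabilities even though all signs differ, which previews the question asked next about which vectors are valid quantum states.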
bronze/B26_One_Qubit.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="5HEkgJW62Zhq" # Copyright 2020 The TensorFlow Authors. # + cellView="form" colab_type="code" id="ZvnzHC7lmzWB" colab={} #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] colab_type="text" id="mxPxpHKHMAkl" # # Step **12**: Deploy a second model to Firebase ML # # This is the notebook for step **12** of the codelab [**Add Firebase to your TensorFlow Lite-powered app**](https://codelabs.developers.google.com/codelabs/digit-classifier-tflite/). # # In this notebook, we will train an improved version of the handwritten digit classification model using data augmentation. Then we will upload the model to Firebase using the [Firebase ML Model Management API](https://firebase.google.com/docs/ml-kit/manage-hosted-models). 
# # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/lite/codelabs/digit_classifier/ml/step7_improve_accuracy.ipynb"> # <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> # Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/lite/codelabs/digit_classifier/ml/step7_improve_accuracy.ipynb"> # <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> # View source on GitHub</a> # </td> # </table> # + [markdown] colab_type="text" id="p8bO0hupMdZM" # ## Train an improved TensorFlow Lite model # # Let's start by training the improved model. # # We will not go into details about the model training here but if you are interested to learn more about why we apply data augmentation to this model and other details, check out this [notebook](https://colab.sandbox.google.com/github/tensorflow/examples/blob/master/lite/codelabs/digit_classifier/ml/step7_improve_accuracy.ipynb). # + colab_type="code" id="nImr6z7TMBJQ" colab={} # Import dependencies import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import numpy as np import pandas as pd import random print("TensorFlow version:", tf.__version__) # Import MNIST dataset mnist = keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # Normalize the input image so that each pixel value is between 0 to 1. train_images = train_images / 255.0 test_images = test_images / 255.0 # Add a color dimension to the images in "train" and "validate" dataset to # leverage Keras's data augmentation utilities later. 
train_images = np.expand_dims(train_images, axis=3) test_images = np.expand_dims(test_images, axis=3) # Define data augmentation configs datagen = keras.preprocessing.image.ImageDataGenerator( rotation_range=30, width_shift_range=0.25, height_shift_range=0.25, shear_range=0.25, zoom_range=0.2 ) # Generate augmented data from MNIST dataset train_generator = datagen.flow(train_images, train_labels) test_generator = datagen.flow(test_images, test_labels) # Define and train the Keras model. model = keras.Sequential([ keras.layers.InputLayer(input_shape=(28, 28, 1)), keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation=tf.nn.relu), keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation=tf.nn.relu), keras.layers.MaxPooling2D(pool_size=(2, 2)), keras.layers.Dropout(0.25), keras.layers.Flatten(), keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(train_generator, epochs=5, validation_data=test_generator) # Convert Keras model to TF Lite format and quantize. converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] tflite_model = converter.convert() with open('mnist_v2.tflite', "wb") as f: f.write(tflite_model) # + [markdown] id="CpTQf5gHJz78" colab_type="text" # ## Publish model to Firebase ML # + [markdown] colab_type="text" id="CgCDMe0e6jlT" # Step 1. 
Upload the private key (json file) for your service account and Initialize Firebase Admin # + id="jALZvgQ2zwfm" colab_type="code" colab={} import os from google.colab import files import firebase_admin from firebase_admin import ml uploaded = files.upload() for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn]))) os.environ["GOOGLE_APPLICATION_CREDENTIALS"]='/content/' + fn projectID = fn.rsplit("-firebase")[0] firebase_admin.initialize_app( options={'projectId': projectID, 'storageBucket': projectID + '.appspot.com' }) # + [markdown] id="ULfDUSYjiNqk" colab_type="text" # Step 2. Upload the model file to Cloud Storage # + id="9fRsDDJyiWFR" colab_type="code" colab={} # This uploads it to your bucket as mnist_v2.tflite source = ml.TFLiteGCSModelSource.from_keras_model(model, 'mnist_v2.tflite') print (source.gcs_tflite_uri) # + [markdown] id="A1b6jw_wikQ0" colab_type="text" # Step 3. Deploy the model to Firebase # + id="IK-YsWjPik59" colab_type="code" colab={} # Create a Model Format model_format = ml.TFLiteFormat(model_source=source) # Create a Model object sdk_model_1 = ml.Model(display_name="mnist_v2", model_format=model_format) # Make the Create API call to create the model in Firebase firebase_model_1 = ml.create_model(sdk_model_1) print(firebase_model_1.as_dict()) # Publish the model model_id = firebase_model_1.model_id firebase_model_1 = ml.publish_model(model_id)
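In Step 1 above, the project ID is recovered from the uploaded key's filename by splitting on `-firebase`. A quick stdlib-only illustration of that parsing; the filename here is a made-up example following the usual `<project-id>-firebase-adminsdk-<suffix>.json` naming:

```python
def project_id_from_key_filename(fn):
    """Service-account key files are typically named
    '<project-id>-firebase-adminsdk-<id>-<hash>.json'; everything before
    the first '-firebase' is the project ID."""
    return fn.rsplit("-firebase")[0]

fn = "digit-classifier-demo-firebase-adminsdk-abc12-0123456789.json"
print(project_id_from_key_filename(fn))  # digit-classifier-demo
```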
improve_accuracy_and_upload.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set() # - train_data = pd.read_excel(r'D:\Kedar\Python\iNeuron\Flight Fare Prediction\Data_Train.xlsx') train_data.head() train_data.info() train_data.Duration.value_counts() train_data.shape train_data.dropna(inplace=True) train_data.shape train_data.isnull().sum() # # EDA train_data['Journey_Day'] = pd.to_datetime(train_data['Date_of_Journey'], format='%d/%m/%Y').dt.day train_data['Journey_Month'] = pd.to_datetime(train_data['Date_of_Journey'], format='%d/%m/%Y').dt.month train_data.head() train_data.drop(['Date_of_Journey'], axis=1, inplace=True) # + # Extracting hours train_data['Dep_hour'] = pd.to_datetime(train_data.Dep_Time).dt.hour # Extracting minutes train_data['Dep_minutes'] = pd.to_datetime(train_data.Dep_Time).dt.minute # Dropping Dep time column train_data.drop(['Dep_Time'], axis=1, inplace=True) # - train_data.head() # + # Extracting hours train_data['Arrival_hour'] = pd.to_datetime(train_data.Arrival_Time ).dt.hour # Extracting minutes train_data['Arrival_minutes'] = pd.to_datetime(train_data.Arrival_Time ).dt.minute # Dropping Arrival time column train_data.drop(['Arrival_Time'], axis=1, inplace=True) # - train_data.head() # + duration = list(train_data.Duration) for i in range(len(duration)): if len(duration[i].split()) != 2: # check if duration contains only hours or min if 'h' in duration[i]: duration[i] = duration[i].strip() + '0m' # Adds 0 minutes else: duration[i] = '0h' + duration[i] # Adds 0 hours duration_hours = [] duration_mins = [] for i in range(len(duration)): duration_hours.append(int(duration[i].split(sep='h')[0])) try: duration_mins.append(int(duration[i].split(sep='m')[0].split()[-1])) except ValueError: 
duration_mins.append(int(0)) # - train_data['Duration_hours'] = duration_hours train_data['Duration_mins'] = duration_mins train_data.head() train_data.drop(['Duration'], axis=1, inplace=True) train_data.head() # # Handling categorical data train_data.Airline.value_counts() # + # Airline vs Price sns.catplot(y= 'Price', x= 'Airline', data= train_data.sort_values('Price', ascending=False), kind='boxen', height=6, aspect=3) plt.show() # - Airline = train_data[['Airline']] Airline = pd.get_dummies(Airline, drop_first=True) Airline.head() train_data.Source.value_counts() # + # Source vs Price sns.catplot(y= 'Price', x= 'Source', data= train_data.sort_values('Price', ascending=False), kind='boxen', height=6, aspect=3) plt.show() # - Source = train_data[['Source']] Source = pd.get_dummies(Source, drop_first=True) Source.head() train_data.Destination.value_counts() Destination = train_data[['Destination']] Destination = pd.get_dummies(Destination, drop_first=True) Destination.head() train_data.Route train_data.drop(['Route', 'Additional_Info'], axis=1, inplace=True) train_data.Total_Stops.value_counts() train_data.replace({'non-stop': 0, '1 stop': 1, '2 stops': 2, '3 stops': 3, '4 stops': 4}, inplace=True) # + # DataFrame Concatenation train_data = pd.concat([train_data, Airline, Source, Destination], axis=1) # - train_data.head() train_data.drop(['Airline', 'Source', 'Destination'], axis=1, inplace=True) train_data.shape # # Test Data test_data = pd.read_excel(r'D:\Kedar\Python\iNeuron\Flight Fare Prediction\Test_set.xlsx') test_data.head() # + # Preprocessing print('Test data info') print('#'*50) print(test_data.info()) print('NULL values :') print('#'*50) test_data.dropna(inplace=True) print(test_data.isnull().sum()) # EDA test_data['Journey_Day'] = pd.to_datetime(test_data['Date_of_Journey'], format='%d/%m/%Y').dt.day test_data['Journey_Month'] = pd.to_datetime(test_data['Date_of_Journey'], format='%d/%m/%Y').dt.month test_data.drop(['Date_of_Journey'], axis=1, 
inplace=True) # Extracting hours test_data['Arrival_hour'] = pd.to_datetime(test_data.Arrival_Time).dt.hour # Extracting minutes test_data['Arrival_minutes'] = pd.to_datetime(test_data.Arrival_Time).dt.minute # Dropping Arrival time column test_data.drop(['Arrival_Time'], axis=1, inplace=True) duration = list(test_data.Duration) for i in range(len(duration)): if len(duration[i].split()) != 2: # check if duration contains only hours or min if 'h' in duration[i]: duration[i] = duration[i].strip() + '0m' # Adds 0 minutes else: duration[i] = '0h' + duration[i] # Adds 0 hours duration_hours = [] duration_mins = [] for i in range(len(duration)): duration_hours.append(int(duration[i].split(sep='h')[0])) try: duration_mins.append(int(duration[i].split(sep='m')[0].split()[-1])) except ValueError: duration_mins.append(int(0)) test_data['Duration_hours'] = duration_hours test_data['Duration_mins'] = duration_mins test_data.drop(['Duration'], axis=1, inplace=True) # Handling categorical data Airline = test_data[['Airline']] Airline = pd.get_dummies(Airline, drop_first=True) Source = test_data[['Source']] Source = pd.get_dummies(Source, drop_first=True) Destination = test_data[['Destination']] Destination = pd.get_dummies(Destination, drop_first=True) test_data.drop(['Route', 'Additional_Info'], axis=1, inplace=True) test_data.replace({'non-stop': 0, '1 stop': 1, '2 stops': 2, '3 stops': 3, '4 stops': 4}, inplace=True) test_data = pd.concat([test_data, Airline, Source, Destination], axis=1) test_data.drop(['Airline', 'Source', 'Destination'], axis=1, inplace=True) print(test_data.shape) # + # Extracting hours test_data['Dep_hour'] = pd.to_datetime(test_data.Dep_Time).dt.hour # Extracting minutes test_data['Dep_minutes'] = pd.to_datetime(test_data.Dep_Time).dt.minute # Dropping Dep time column test_data.drop(['Dep_Time'], axis=1, inplace=True) # - test_data.head() # # Feature Selection train_data.columns X = train_data.loc[:, ['Total_Stops', 'Journey_Day', 'Journey_Month', 
'Dep_hour', 'Dep_minutes', 'Arrival_hour', 'Arrival_minutes', 'Duration_hours', 'Duration_mins', 'Airline_Air India', 'Airline_GoAir', 'Airline_IndiGo', 'Airline_Jet Airways', 'Airline_Jet Airways Business', 'Airline_Multiple carriers', 'Airline_Multiple carriers Premium economy', 'Airline_SpiceJet', 'Airline_Trujet', 'Airline_Vistara', 'Airline_Vistara Premium economy', 'Source_Chennai', 'Source_Delhi', 'Source_Kolkata', 'Source_Mumbai', 'Destination_Cochin', 'Destination_Delhi', 'Destination_Hyderabad', 'Destination_Kolkata', 'Destination_New Delhi']] X.head() y = train_data.iloc[:, 1] y.head() # + # Heatmap plt.figure(figsize=(15, 15)) sns.heatmap(train_data.corr(), annot=True, cmap='RdYlGn') plt.show() # + # Important features using ExtraTreesRegressor from sklearn.ensemble import ExtraTreesRegressor selection = ExtraTreesRegressor() selection.fit(X, y) # - print(selection.feature_importances_) plt.figure(figsize=(12, 8)) feat = pd.Series(selection.feature_importances_, index=X.columns) feat.nlargest(20).plot(kind= 'barh') plt.show() # # Building Model using Random Forest from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2) from sklearn.ensemble import RandomForestRegressor reg_rf = RandomForestRegressor() reg_rf.fit(X_train, y_train) y_pred = reg_rf.predict(X_test) reg_rf.score(X_train, y_train) sns.distplot(y_test-y_pred) plt.show() plt.scatter(y_test, y_pred, alpha=0.5) plt.xlabel('y_test') plt.ylabel('y_pred') plt.show() from sklearn import metrics print('MAE:', metrics.mean_absolute_error(y_test, y_pred)) print('MSE:', metrics.mean_squared_error(y_test, y_pred)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print('R2:', metrics.r2_score(y_test, y_pred)) # + # HyperParameter Tuning # - from sklearn.model_selection import RandomizedSearchCV # Number of trees in RF n_estimators = [int(x) for x in np.linspace(start=100, stop=1200, num=12)] # Number of features to 
consider max_features = ['auto', 'sqrt'] # Max level in tree max_depth = [int(x) for x in np.linspace(5, 30, num=6)] # Min sample split min_samples_split = [2, 5, 10, 15, 100] # Min sample in leaf node min_samples_leaf = [1, 2, 5, 10] random_grid = {'n_estimators': n_estimators, 'max_features': max_features, 'max_depth': max_depth, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf} rf_random = RandomizedSearchCV(estimator=reg_rf, param_distributions=random_grid, scoring='neg_mean_squared_error', n_iter=10, cv=5, verbose=2, random_state=42, n_jobs=1) rf_random.fit(X_train, y_train) rf_random.best_params_ prediction = rf_random.predict(X_test) plt.figure(figsize=(8, 8)) sns.displot(y_test-prediction) plt.show() plt.figure(figsize=(8, 8)) plt.scatter(y_test, prediction, alpha=0.5) plt.xlabel('y_test') plt.ylabel('y_pred') plt.show() print('MAE:', metrics.mean_absolute_error(y_test, prediction)) print('MSE:', metrics.mean_squared_error(y_test, prediction)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, prediction))) print('R2:', metrics.r2_score(y_test, prediction)) # # Saving model for later use # + import pickle file = open('flightModel.pkl', 'wb') pickle.dump(rf_random, file) # - model = open('flightModel.pkl', 'rb') forest = pickle.load(model) y_prediction = forest.predict(X_test) metrics.r2_score(y_test, y_prediction)
Flight Price Prediction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="L-Hb2vV2A6EL" # # CreatingCalabiYauManifold # + [markdown] id="rstcm5gX3mEq" # ##### install surf2stl-python # + colab={"base_uri": "https://localhost:8080/"} id="_bV9h_nm2pq6" outputId="d7909638-6475-484c-c1fe-9dd346557da2" # !git clone https://github.com/asahidari/surf2stl-python # + colab={"base_uri": "https://localhost:8080/"} id="lCzgd8a-24RD" outputId="e4fe7d39-eef9-4bd2-ce8b-dea7f7ced843" # cd surf2stl-python # + [markdown] id="xfYdxXZz4blk" # ### How to draw the Calabi-Yau Manifold # # $$ # z^n_1 + z^n_2 = 1 # $$ # # # $$ # z_1= e^{iφ_1}[\cos(x + iy)]^\frac{2}{n} # $$ # $$ # z_2= e^{iφ_2}[\sin(x + iy)]^\frac{2}{n} # $$ # # $$ # φ_1= \frac{2πk_1}{n} \quad (0 ≦ k_1 < n) # $$ # # $$ # φ_2= \frac{2πk_2}{n} \quad (0 ≦ k_2 < n) # $$ # + [markdown] id="9QDtRe-18uYw" # # * The parameters k1 and k2 each take integer values from 0 to n - 1, so the manifold is built from $n \times n = n^2$ patches (one sweep over x, y per (k1, k2) pair) # # * To visualize the Calabi-Yau manifold we pick z1, z2 satisfying the equation $z^n_1 + z^n_2 = 1$, obtained by sweeping the parameters x, y and the integers k1, k2 # * $z_1, z_2$ span four real dimensions ($Re(z_1),Im(z_1),Re(z_2),Im(z_2)$), so we have to reduce the dimensionality before plotting # * Drop $Im(z_1),Im(z_2)$ as separate axes and plot the 3D projection $(Re(z_1),Re(z_2),Im(z_1)\cos(a)+Im(z_2)\sin(a))$ # + [markdown] id="oK1a4UlX3VLP" # #### Import libraries # + id="ZPIB69cn2d9g" import numpy as np import math, cmath # cmath: library for complex numbers import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import matplotlib.ticker as plticker from matplotlib import cm from scipy.spatial import Delaunay import surf2stl as s2s # + [markdown] id="wh7biYCw84mm" # ### Set parameters # + id="rC5qJpbI88B9" N = 9 # maximum exponent n a = 0.4 row, col = 30, 30 writeSTL = False # 
+ [markdown] id="7k8uaYxk_-i6" # #### Define functions for the calculation # + id="mqFcmoOB_9By" def calcZ1(x, y, k, n): return cmath.exp(1j*(2*cmath.pi*k/n)) * (cmath.cos(x+y*1j)**(2/n)) def calcZ2(x, y, k, n): return cmath.exp(1j*(2*cmath.pi*k/n)) * (cmath.sin(x+y*1j)**(2/n)) def calcZ1Real(x, y, k, n): return (calcZ1(x, y, k, n)).real def calcZ2Real(x, y, k, n): return (calcZ2(x, y, k, n)).real def calcZ(x, y, k1_, k2_, n, a_): z1 = calcZ1(x, y, k1_, n) z2 = calcZ2(x, y, k2_, n) return z1.imag * math.cos(a_) + z2.imag*math.sin(a_) # + [markdown] id="PBqWiIQlAj1J" # #### Draw # + colab={"base_uri": "https://localhost:8080/", "height": 466} id="qJt21J3B2VYY" outputId="bce4f10b-a7ec-4d87-feb2-16bf6f655097" # set param range x = np.linspace(0, math.pi/2, col) y = np.linspace(-math.pi/2, math.pi/2, row) x, y = np.meshgrid(x, y) # init graph fig = plt.figure(figsize=(18,8)) for n in range(2, N): ax = fig.add_subplot(2, 4, n - 1, projection='3d') ax.view_init(elev=15, azim=15) ax.set_title("n=%d" % n) ax.set_xlabel('X') ax.set_ylabel('Y') loc = plticker.MultipleLocator(base=1.0) # this locator puts ticks at regular intervals ax.xaxis.set_major_locator(loc) ax.yaxis.set_major_locator(loc) ax.zaxis.set_major_locator(loc) count = 0 for k1 in range(n): for k2 in range(n): # calc X, Y, Z values X = np.frompyfunc(calcZ1Real, 4, 1)(x, y, k1, n).astype('float32') Y = np.frompyfunc(calcZ2Real, 4, 1)(x, y, k2, n).astype('float32') Z = np.frompyfunc(calcZ, 6, 1)(x, y, k1, k2, n, a).astype('float32') ax.plot_surface(X, Y, Z, cmap=cm.ocean, linewidth=0) # + [markdown] id="EWW8v0SJ33zX" # Reference # * [Creating Calabi Yau Manifold in python](https://asahidari.hatenablog.com/entry/2020/06/08/194342) # * [Visualizing Calabi-Yau manifolds in the browser (Japanese)](https://sw1227.hatenablog.com/entry/2018/12/03/235105) # + id="ExkfgWEC4A93"
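Independently of the plots, the parametrization can be sanity-checked numerically: every (k1, k2) patch should satisfy $z_1^n + z_2^n = 1$, since $z_1^n = \cos^2(x+iy)$ and $z_2^n = \sin^2(x+iy)$. A small standalone check:

```python
import cmath

def calc_z1(x, y, k, n):
    return cmath.exp(1j * 2 * cmath.pi * k / n) * cmath.cos(x + 1j * y) ** (2 / n)

def calc_z2(x, y, k, n):
    return cmath.exp(1j * 2 * cmath.pi * k / n) * cmath.sin(x + 1j * y) ** (2 / n)

n = 5
for k1 in range(n):
    for k2 in range(n):
        z1 = calc_z1(0.3, -0.2, k1, n)
        z2 = calc_z2(0.3, -0.2, k2, n)
        # the phase factors drop out after raising to the n-th power
        residual = abs(z1 ** n + z2 ** n - 1)
        assert residual < 1e-9, residual
print("all patches satisfy z1^n + z2^n = 1")
```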
_notebooks/2021-10-27-CalabiYauManifold.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # [![trident icon](https://raw.githubusercontent.com/jingruoyu/mlflow/master/trident-badge.svg)](https://powerbi-wow-int3.analysis-df.windows.net/workloads/Data%20Science/de-ds/notebooks/external?trident=1&debug.useLocalManifests=1&source=github/jingruoyu/mlflow/blob/eb16b7a5c240d93c0397f2347c88397dc44923c0/examples/sklearn_elasticnet_wine/train.ipynb) # # # MLflow Training Tutorial # # This `train.ipynb` Jupyter notebook predicts the quality of wine using [sklearn.linear_model.ElasticNet](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html). # # > This is the Jupyter notebook version of the `train.py` example # # Attribution # * The data set used in this example is from http://archive.ics.uci.edu/ml/datasets/Wine+Quality # * <NAME>, <NAME>, <NAME>, <NAME> and <NAME>. # * Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009. 
# # Wine Quality Sample def train(in_alpha, in_l1_ratio): import os import warnings import sys import pandas as pd import numpy as np from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score from sklearn.model_selection import train_test_split from sklearn.linear_model import ElasticNet import mlflow import mlflow.sklearn import logging logging.basicConfig(level=logging.WARN) logger = logging.getLogger(__name__) def eval_metrics(actual, pred): rmse = np.sqrt(mean_squared_error(actual, pred)) mae = mean_absolute_error(actual, pred) r2 = r2_score(actual, pred) return rmse, mae, r2 warnings.filterwarnings("ignore") np.random.seed(40) # Read the wine-quality csv file from the URL csv_url =\ 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv' try: data = pd.read_csv(csv_url, sep=';') except Exception as e: logger.exception( "Unable to download training & test CSV, check your internet connection. Error: %s", e) # Split the data into training and test sets. (0.75, 0.25) split. 
train, test = train_test_split(data) # The predicted column is "quality" which is a scalar from [3, 9] train_x = train.drop(["quality"], axis=1) test_x = test.drop(["quality"], axis=1) train_y = train[["quality"]] test_y = test[["quality"]] # Set default values if no alpha is provided if in_alpha is None: alpha = 0.5 else: alpha = float(in_alpha) # Set default values if no l1_ratio is provided if in_l1_ratio is None: l1_ratio = 0.5 else: l1_ratio = float(in_l1_ratio) # Useful for multiple runs (only doing one run in this sample notebook) with mlflow.start_run(): # Execute ElasticNet lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42) lr.fit(train_x, train_y) # Evaluate Metrics predicted_qualities = lr.predict(test_x) (rmse, mae, r2) = eval_metrics(test_y, predicted_qualities) # Print out metrics print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio)) print(" RMSE: %s" % rmse) print(" MAE: %s" % mae) print(" R2: %s" % r2) # Log parameter, metrics, and model to MLflow mlflow.log_param("alpha", alpha) mlflow.log_param("l1_ratio", l1_ratio) mlflow.log_metric("rmse", rmse) mlflow.log_metric("r2", r2) mlflow.log_metric("mae", mae) mlflow.sklearn.log_model(lr, "model") train(0.5, 0.5) train(0.2, 0.2) train(0.1, 0.1)
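The default-value handling and the metric block inside `train()` can be exercised in isolation. This is a sketch on synthetic data only: `make_regression` stands in for the wine-quality CSV and no MLflow logging is performed:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

def resolve_param(value, default=0.5):
    # Same intent as the alpha/l1_ratio fallback inside train()
    return default if value is None else float(value)

# Synthetic stand-in for the wine-quality data
X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=40)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=40)

lr = ElasticNet(alpha=resolve_param(None), l1_ratio=resolve_param("0.5"), random_state=42)
lr.fit(X_tr, y_tr)
pred = lr.predict(X_te)

rmse = np.sqrt(mean_squared_error(y_te, pred))
mae = mean_absolute_error(y_te, pred)
print("RMSE=%.3f MAE=%.3f R2=%.3f" % (rmse, mae, r2_score(y_te, pred)))
```

Note that `resolve_param(None)` returns the default, while the original `float(in_alpha) is None` test would raise a `TypeError` on `None`.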
examples/sklearn_elasticnet_wine/train.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # $4\pi$ beam convolution # # TOAST provides an interface, `OpSimConviqt`, to the spherical harmonic convolution library, `libconviqt`. It was developed by <NAME> and <NAME> and described in # ``` # <NAME> and <NAME>: # Algorithm for the Evaluation of Reduced Wigner Matrices, # APJS 190 (2010) 267 # ``` # [arXiv:1002.1050](https://arxiv.org/abs/1002.1050). This particular implementation of the algorithm is available at https://github.com/hpc4cmb/libconviqt. # + # Load common tools for all lessons import sys sys.path.insert(0, "..") from lesson_tools import ( fake_focalplane ) # Capture C++ output in the jupyter cells # %reload_ext wurlitzer # - # ## Method # # `libconviqt` takes in spherical harmonic expansions of the beam and the sky and then synthesizes TOD samples at sample positions in the proper orientation. For efficiency, the sky is distributed as isolatitude rings and then each process gets the detector samples that fall on its rings. The calculation itself has two steps: first `conviqt` builds a 3D interpolator of the beam-convolved sky on a grid of $(\theta, \phi, \psi)$ and then the detector samples are interpolated from the grid. Finally the samples are communicated back to the processes that own them. # # Typically the interpolation step dominates but if there are few detector samples and the sky and beam expansion orders are high, it is possible that building the interpolator is more expensive. # ## Example # # In this section we create a TOAST data object with simulated signal and noise and process the data into hit maps, pixel noise matrices and signal maps. 
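For intuition, here is a plain-numpy illustration of the special case of an azimuthally symmetric Gaussian beam, where harmonic-space convolution reduces to multiplying each $a_{\ell m}$ by a window $b_\ell$ (so the spectrum picks up $b_\ell^2$); `conviqt` generalizes this to asymmetric beams and arbitrary detector orientations:

```python
import numpy as np

# Symmetric-beam special case: convolution = per-ell multiplication
lmax = 16
fwhm = np.radians(5.0)
sigma = fwhm / np.sqrt(8.0 * np.log(2.0))
ell = np.arange(lmax + 1)
bl = np.exp(-0.5 * ell * (ell + 1) * sigma ** 2)  # Gaussian beam window

cl_sky = 1.0 / (ell + 1.0) ** 2   # toy sky spectrum, illustrative only
cl_conv = cl_sky * bl ** 2        # spectrum of the beam-convolved sky
print(cl_conv[-1] / cl_sky[-1])   # high-ell power is suppressed
```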
# + import toast import toast.todmap import toast.pipeline_tools from toast.mpi import MPI import numpy as np import matplotlib.pyplot as plt mpiworld, procs, rank = toast.mpi.get_world() comm = toast.mpi.Comm(mpiworld) # A pipeline would create the args object with argparse class args: sample_rate = 10 # Hz hwp_rpm = None hwp_step_deg = None hwp_step_time_s = None spin_period_min = 1 # 10 spin_angle_deg = 20 # 30 prec_period_min = 100 # 50 prec_angle_deg = 30 # 65 coord = "E" nside = 64 nnz = 3 outdir = "maps" sky_file = "slm.fits" beam_file = "blm.fits" # Create a fake focalplane, we could also load one from file. # The Focalplane class interprets the focalplane dictionary # created by fake_focalplane() but it can also load the information # from file. focalplane = fake_focalplane(samplerate=args.sample_rate, fknee=0.1, alpha=2) detectors = sorted(focalplane.keys()) detquats = {} for d in detectors: detquats[d] = focalplane[d]["quat"] nsample = 100000 start_sample = 0 start_time = 0 iobs = 0 tod = toast.todmap.TODSatellite( comm.comm_group, detquats, nsample, coord=args.coord, firstsamp=start_sample, firsttime=start_time, rate=args.sample_rate, spinperiod=args.spin_period_min, spinangle=args.spin_angle_deg, precperiod=args.prec_period_min, precangle=args.prec_angle_deg, detranks=comm.group_size, hwprpm=args.hwp_rpm, hwpstep=args.hwp_step_deg, hwpsteptime=args.hwp_step_time_s, ) # Constantly slewing precession axis precquat = np.empty(4 * tod.local_samples[1], dtype=np.float64).reshape((-1, 4)) toast.todmap.slew_precession_axis( precquat, firstsamp=start_sample + tod.local_samples[0], samplerate=args.sample_rate, degday=360.0 / 365.25, ) tod.set_prec_axis(qprec=precquat) noise = toast.pipeline_tools.get_analytic_noise(args, comm, focalplane) obs = {} obs["name"] = "science_{:05d}".format(iobs) obs["tod"] = tod obs["intervals"] = None obs["baselines"] = None obs["noise"] = noise obs["id"] = iobs # Conviqt requires at least minimal focal plane information to be 
present in the observation obs["focalplane"] = toast.pipeline_tools.Focalplane(focalplane) """ for det in tod.local_dets: obs["focalplane"][det] = { "epsilon" : focalplane[det]["epsilon"], } if det.endswith("A"): obs["focalplane"][det]["psi_pol_deg"] = 0, elif det.endswith("B"): obs["focalplane"][det]["psi_pol_deg"] = 90, """ data = toast.Data(comm) data.obs.append(obs) # - # ### Create a high resolution point source map to convolve with the beam import healpy as hp import numpy as np nside_high = 1024 npix_high = 12 * nside_high ** 2 pointsource_map = np.zeros([3, npix_high]) coords = [] for lon in np.linspace(0, 360, 9, endpoint=False): for lat in np.linspace(-90, 90, 7): pix = hp.ang2pix(nside_high, lon, lat, lonlat=True) # Add a completely unpolarized source and see if beam asymmetries manufacture polarization pointsource_map[0, pix] = 1 coords.append((lon, lat)) coords = np.vstack(coords).T hp.mollview(np.zeros(12), title="Input signal", cmap="coolwarm") hp.projplot(np.pi/2 - np.radians(coords[1]), np.radians(coords[0]), 'o') lmax_high = nside_high * 2 cl, alm = hp.anafast(pointsource_map, lmax=lmax_high, iter=0, alm=True) hp.write_map("sim_sources_map.fits", hp.reorder(pointsource_map, r2n=True), nest=True, overwrite=True) hp.write_alm(args.sky_file, alm, overwrite=True) # ### Create asymmetric beam beam_map = np.zeros([3, npix_high]) x, y, z = hp.pix2vec(nside_high, np.arange(npix_high)) xvar = .01 yvar = 5 * xvar beam = np.exp(-(x ** 2 / xvar + y ** 2 / yvar)) beam[z < 0] = 0 hp.mollview(beam, cmap="coolwarm", rot=[0, 90]) beam_map = np.zeros([3, npix_high]) beam_map[0] = beam beam_map[1] = beam bl, blm = hp.anafast(beam_map, lmax=lmax_high, iter=0, alm=True) hp.write_alm(args.beam_file, blm, overwrite=True) # ### Now simulate sky signal # + import toast toast.todmap.OpPointingHpix(nside=args.nside, nest=True, mode="IQU").exec(data) # - npix = 12 * args.nside ** 2 hitmap = np.zeros(npix) tod = data.obs[0]["tod"] for det in tod.local_dets: pixels = 
tod.cache.reference("pixels_{}".format(det)) hitmap[pixels] = 1 hitmap[hitmap == 0] = hp.UNSEEN hp.mollview(hitmap, nest=True, title="all hit pixels", cbar=False) hp.graticule(22.5, verbose=False) # + name = "signal" toast.tod.OpCacheClear(name).exec(data) conviqt = toast.todmap.OpSimConviqt( comm.comm_rank, args.sky_file, args.beam_file, lmax=512, # Will use maximum from file beammmax=16, # Will use maximum from file pol=True, fwhm=0, order=13, calibrate=True, dxx=True, out=name, quat_name=None, flag_name=None, flag_mask=255, common_flag_name=None, common_flag_mask=255, apply_flags=False, remove_monopole=False, remove_dipole=False, normalize_beam=True, verbosity=1, ) conviqt.exec(data) # - # Destripe the signal and make a map. We use the nascent TOAST mapmaker because it can be run in serial mode without MPI. The TOAST mapmaker is still significantly slower so production runs should used `libMadam`. mapmaker = toast.todmap.OpMapMaker( nside=args.nside, nnz=3, name=name, outdir=args.outdir, outprefix="toast_test_", baseline_length=10, # maskfile=self.maskfile_binary, # weightmapfile=self.maskfile_smooth, # subharmonic_order=None, iter_max=100, use_noise_prior=False, # precond_width=30, ) mapmaker.exec(data) # Plot a segment of the timelines # + plt.figure(figsize=[12, 8]) hitmap = hp.read_map("maps/toast_test_hits.fits") hitmap[hitmap == 0] = hp.UNSEEN hp.mollview(hitmap, sub=[2, 2, 1], title="hits") binmap = hp.read_map("maps/toast_test_binned.fits") binmap[binmap == 0] = hp.UNSEEN hp.mollview(binmap, sub=[2, 2, 2], title="binned map", cmap="coolwarm") destriped = hp.read_map("maps/toast_test_destriped.fits") destriped[destriped == 0] = hp.UNSEEN hp.mollview(destriped, sub=[2, 2, 3], title="destriped map", cmap="coolwarm") inmap = hp.ud_grade(hp.read_map("sim_sources_map.fits"), args.nside) inmap[hitmap == hp.UNSEEN] = hp.UNSEEN hp.mollview(inmap, sub=[2, 2, 4], title="input map", cmap="coolwarm") # - # ## Exercises # - Plot the polarization of the simulated 
signal above # - Modify the scan strategy so that the beam elongation is more visible
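As a starting point for the first exercise, the polarization amplitude and angle follow directly from the Q and U maps. A numpy-only illustration on random stand-in arrays (with healpy you would read all three Stokes columns via `hp.read_map(..., field=(0, 1, 2))`):

```python
import numpy as np

# Random stand-ins for the Q/U output maps
rng = np.random.default_rng(1)
q_map = 0.01 * rng.normal(size=48)
u_map = 0.01 * rng.normal(size=48)

P = np.sqrt(q_map ** 2 + u_map ** 2)   # polarization amplitude
psi = 0.5 * np.arctan2(u_map, q_map)   # polarization angle (COSMO vs IAU sign conventions differ)
print(P.max())
```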
tutorial/04_Simulated_Instrument_Signal/conviqt.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error, r2_score from sklearn.metrics import mean_absolute_error from sklearn.model_selection import GridSearchCV from sklearn.model_selection import KFold from sklearn.model_selection import ShuffleSplit from sklearn.metrics import accuracy_score from keras.layers import Dense from keras.models import Sequential from keras.optimizers import SGD from matplotlib import pyplot as plt import matplotlib as mpl import seaborn as sns import numpy as np import pandas as pd import category_encoders as ce import os import pickle import gc from tqdm import tqdm import pickle from sklearn.svm import SVR from sklearn.linear_model import LinearRegression from sklearn import linear_model from sklearn.neighbors import KNeighborsRegressor from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import ExtraTreesRegressor from sklearn import ensemble import xgboost as xgb def encode_text_features(encode_decode, data_frame, encoder_isa=None, encoder_mem_type=None): # Implement Categorical OneHot encoding for ISA and mem-type if encode_decode == 'encode': encoder_isa = ce.one_hot.OneHotEncoder(cols=['isa']) encoder_mem_type = ce.one_hot.OneHotEncoder(cols=['mem-type']) encoder_isa.fit(data_frame, verbose=1) df_new1 = encoder_isa.transform(data_frame) encoder_mem_type.fit(df_new1, verbose=1) df_new = encoder_mem_type.transform(df_new1) encoded_data_frame = df_new else: df_new1 = encoder_isa.transform(data_frame) df_new = encoder_mem_type.transform(df_new1) 
encoded_data_frame = df_new return encoded_data_frame, encoder_isa, encoder_mem_type def absolute_percentage_error(Y_test, Y_pred): error = 0 for i in range(len(Y_test)): if(Y_test[i]!= 0 ): error = error + (abs(Y_test[i] - Y_pred[i]))/Y_test[i] error = error/ len(Y_test) return error def process_all(dataset_path, dataset_name, path_for_saving_data): ################## Data Preprocessing ###################### df = pd.read_csv(dataset_path) encoded_data_frame, encoder_isa, encoder_mem_type = encode_text_features('encode', df, encoder_isa = None, encoder_mem_type=None) # total_data = encoded_data_frame.drop(columns = ['arch', 'arch1']) total_data = encoded_data_frame.drop(columns = ['arch', 'sys','sysname','executable','PS']) total_data = total_data.fillna(0) X_columns = total_data.drop(columns = 'runtime').columns X = total_data.drop(columns = ['runtime']).to_numpy() Y = total_data['runtime'].to_numpy() # X_columns = total_data.drop(columns = 'PS').columns # X = total_data.drop(columns = ['runtime','PS']).to_numpy() # Y = total_data['runtime'].to_numpy() print('Data X and Y shape', X.shape, Y.shape) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42) print('Train Test Split:', X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.fit_transform(X_test) ################## Data Preprocessing ###################### # Put best models here using grid search # 1. SVR best_svr =SVR(C=1000, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.1, kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False) # 2. LR best_lr = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=True) # 3. RR best_rr = linear_model.Ridge(alpha=10, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, random_state=None, solver='svd', tol=0.001) # 4. 
KNN best_knn = KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=2, p=1, weights='distance') # 5. GPR best_gpr = GaussianProcessRegressor(alpha=0.01, copy_X_train=True, kernel=None, n_restarts_optimizer=0, normalize_y=True, optimizer='fmin_l_bfgs_b', random_state=None) # 6. Decision Tree best_dt = DecisionTreeRegressor(criterion='mse', max_depth=7, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') # 7. Random Forest best_rf = RandomForestRegressor(bootstrap=True, criterion='friedman_mse', max_depth=7, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False) # 8. Extra Trees Regressor best_etr = ExtraTreesRegressor(bootstrap=False, criterion='friedman_mse', max_depth=15, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=True) # 9. GBR best_gbr = ensemble.GradientBoostingRegressor(alpha=0.9, criterion='mae', init=None, learning_rate=0.1, loss='lad', max_depth=None, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_iter_no_change=None, presort='auto', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) # 10. 
XGB best_xgb = xgb.XGBRegressor(alpha=10, base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0, importance_type='gain', learning_rate=0.5, max_delta_step=0, max_depth=10, min_child_weight=1, missing=None, n_estimators=100, n_jobs=1, nthread=None, objective='reg:linear', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=1, validate_parameters=False, verbosity=1) best_models = [best_svr, best_lr, best_rr, best_knn, best_gpr, best_dt, best_rf, best_etr, best_gbr, best_xgb] best_models_name = ['best_svr', 'best_lr', 'best_rr', 'best_knn', 'best_gpr', 'best_dt', 'best_rf', 'best_etr' , 'best_gbr', 'best_xgb'] k = 0 df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) for model in best_models: print('Running model number:', k+1, 'with Model Name: ', best_models_name[k]) r2_scores = [] mse_scores = [] mape_scores = [] mae_scores = [] # cv = KFold(n_splits = 10, random_state = 42, shuffle = True) cv = ShuffleSplit(n_splits=10, random_state=0, test_size = 0.4) # print(cv) fold = 1 for train_index, test_index in cv.split(X): model_orig = model # print("Train Index: ", train_index, "\n") # print("Test Index: ", test_index) X_train_fold, X_test_fold, Y_train_fold, Y_test_fold = X[train_index], X[test_index], Y[train_index], Y[test_index] # print(X_train_fold.shape, X_test_fold.shape, Y_train_fold.shape, Y_test_fold.shape) model_orig.fit(X_train_fold, Y_train_fold) Y_pred_fold = model_orig.predict(X_test_fold) # save the folds to disk data = [X_train_fold, X_test_fold, Y_train_fold, Y_test_fold] filename = path_for_saving_data + '/folds_data/' + best_models_name[k] +'_'+ str(fold) + '.pickle' # pickle.dump(data, open(filename, 'wb')) # save the model to disk # filename = path_for_saving_data + '/models_data/' + best_models_name[k] + '_' + str(fold) + '.sav' fold = fold + 1 # pickle.dump(model_orig, open(filename, 'wb')) # some time later... 
''' # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) ''' # scores.append(best_svr.score(X_test, y_test)) ''' plt.figure() plt.plot(Y_test_fold, 'b') plt.plot(Y_pred_fold, 'r') ''' # print('Accuracy =',accuracy_score(Y_test, Y_pred)) r2_scores.append(r2_score(Y_test_fold, Y_pred_fold)) mse_scores.append(mean_squared_error(Y_test_fold, Y_pred_fold)) mape_scores.append(absolute_percentage_error(Y_test_fold, Y_pred_fold)) mae_scores.append(mean_absolute_error(Y_test_fold, Y_pred_fold)) df = df.append({'model_name': best_models_name[k], 'dataset_name': dataset_name , 'r2': r2_scores, 'mse': mse_scores, 'mape': mape_scores, 'mae': mae_scores }, ignore_index=True) k = k + 1 print(df.head()) df.to_csv(r'runtimes_final_npb_ep_60.csv') dataset_name = 'runtimes_final_npb_ep' dataset_path = 'C:\\Users\\Rajat\\Desktop\\DESKTOP_15_05_2020\\Evaluating-Machine-Learning-Models-for-Disparate-Computer-Systems-Performance-Prediction\\Dataset_CSV\\PhysicalSystems\\runtimes_final_npb_ep.csv' path_for_saving_data = 'data\\' + dataset_name process_all(dataset_path, dataset_name, path_for_saving_data) df = pd.DataFrame(columns = ['model_name', 'dataset_name', 'r2', 'mse', 'mape', 'mae' ]) df
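A vectorized alternative to `absolute_percentage_error` above. Note one deliberate difference: it averages over nonzero targets only, while the loop version divides the sum by the full length, so the two disagree whenever zero targets are present (and, like the original, it assumes positive targets such as runtimes):

```python
import numpy as np

def mape(y_true, y_pred):
    # Vectorized mean absolute percentage error, skipping zero targets
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = y_true != 0
    return float(np.mean(np.abs(y_true[mask] - y_pred[mask]) / y_true[mask]))

print(mape([100, 200], [110, 180]))  # → 0.1
```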
Codes/Results_For_PPT-Pareto/NPB_EP_physical_all_models_except_dnn.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Traditional Feedforward neural network to approximate a black box function # # This is just a toy example to test the basic functionality of Bokeh interactive plots! # + import torch import torch.nn as nn import torchvision.datasets as dsets import torchvision.transforms as transforms from torch.autograd import Variable import torch.optim as optim import numpy as np import matplotlib from matplotlib import pyplot as plt from bokeh.layouts import gridplot from bokeh.plotting import figure, show, output_notebook, ColumnDataSource from bokeh.layouts import column, row, widgetbox from bokeh.models import CustomJS, Slider, Select output_notebook() # %matplotlib inline # - def fx(x): return np.random.normal(0, 5) + np.log(x)*np.sin(x/2) data__x = np.arange(1,100,1) data__y = fx(data__x) # + TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select" p1 = figure(title="my ultimate function!", tools=TOOLS) p1.line(data__x, data__y, legend="random graph") p1.width = 1200 show(p1) # + # try to estimate using regular Feed forward network # + gpu_dtype = torch.cuda.FloatTensor print_every = 2500 # class ffNet(nn.Module): # def __init__(self): # super(ffNet, self).__init__() # self.fc1 = nn.Linear(1, 64) # self.relu = nn.ReLU() # self.fc2 = nn.Linear(64, 1) # def forward(self, x): # out = self.relu(self.fc1(x)) # out = self.fc2(out) # return out # faster way to define network ffNet = nn.Sequential( nn.Linear(1, 1024), nn.ReLU(inplace=True), nn.Linear(1024, 1024), nn.ReLU(inplace=True), nn.Linear(1024, 256), nn.ReLU(inplace=True), nn.Linear(256, 1) ) # - def train(data, model, loss_fn, optimizer, save_every_epoch=1000, num_epochs=2): model.train() history = {} xs = torch.from_numpy(data['xs']).unsqueeze(1) ys = torch.from_numpy(data['ys']).unsqueeze(1) N = 
len(ys) for epoch in range(num_epochs): x_var = Variable(xs.type(gpu_dtype)) y_var = Variable(ys.type(gpu_dtype)) scores = model(x_var) if (epoch + 1) % save_every_epoch == 0: history[str(epoch+1)] = scores loss = loss_fn(scores, y_var) if (epoch + 1) % print_every == 0: print('epoch = %d, loss = %.4f' % (epoch + 1, loss.item())) optimizer.zero_grad() loss.backward() optimizer.step() return history # + model = ffNet.type(gpu_dtype) loss_fn = nn.MSELoss().type(gpu_dtype) optimizer = optim.Adam(model.parameters(), lr=1e-4) xs = np.arange(1,100,0.1) ys = fx(xs) data = {} data['xs'] = xs data['ys'] = ys history_of_training = train(data, model, loss_fn, optimizer, num_epochs=150000) # + model.eval() xs = torch.from_numpy(data['xs']).unsqueeze(1) x_var = Variable(xs.type(gpu_dtype)) y_pred = model(x_var).data.cpu().numpy().squeeze() # - p1 = figure(title="my prediction of my ultimate function!", tools=TOOLS) p1.line(data['xs'], y_pred, line_color="red") p1.line(data['xs'], data['ys'], line_color="blue") p1.width = 1200 show(p1) # + history_of_training_numpy = {} history_of_training_numpy['x'] = xs.numpy() # for kee in history_of_training.keys(): # history_of_training_numpy[kee] = history_of_training[kee].data.cpu().numpy().squeeze() for i in range(1,16): history_of_training_numpy[str(i)] = history_of_training[str(i*10000)].data.cpu().numpy().squeeze() master = ColumnDataSource(data=history_of_training_numpy) source_final = ColumnDataSource(data=dict(x=history_of_training_numpy['x'], y=history_of_training_numpy['5'])) # + plot = figure(title="my prediction of my ultimate function over time!!", tools=TOOLS) plot.line('x', 'y', source=source_final, line_width=3, line_alpha=0.6) plot.width = 1200 plot.line(data['xs'], data['ys'], line_color="gray") callback = CustomJS(args={ 'source': source_final, 'master' : master}, code=""" var data = source.data; var mdata = master.data; var epoch = epoch_number.value; for (var e in data) delete data[e]; data['x'] = mdata['x']; data['y'] = mdata[epoch.toString()]; source.change.emit() """) epoch_number = Slider(start=1, end=15, value=5, 
step=1, title="epoch_number", callback=callback) callback.args["epoch_number"] = epoch_number layout = column( plot, widgetbox(epoch_number), ) show(layout)
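The same universal-approximation experiment fits in a dependency-free sketch: one hidden ReLU layer trained with hand-written gradient descent on a normalized input. The hyperparameters here are ad hoc and illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 100, 400)[:, None]
y = np.log(x) * np.sin(x / 2)            # the "ultimate function" (noise-free)
xn = (x - x.mean()) / x.std()            # normalize the input first

H = 64                                   # hidden units
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)
lr, n = 1e-2, len(xn)

loss0 = None
for step in range(2000):
    h = np.maximum(xn @ W1 + b1, 0.0)    # ReLU hidden layer
    pred = h @ W2 + b2
    err = pred - y
    loss = float(np.mean(err ** 2))
    if loss0 is None:
        loss0 = loss                     # remember the starting loss
    g_pred = 2.0 * err / n               # d(loss)/d(pred)
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_h[h <= 0.0] = 0.0                  # ReLU gradient mask
    g_W1 = xn.T @ g_h; g_b1 = g_h.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
print(loss0, loss)
```

With 64 ReLU kinks the piecewise-linear fit can track the oscillations, which is the same mechanism the larger PyTorch network exploits.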
old:misc/universal approximation/feedforward_1d.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Dependencies # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _kg_hide-input=true _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" import glob import numpy as np import pandas as pd from transformers import TFDistilBertModel from tokenizers import BertWordPieceTokenizer import tensorflow as tf from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, Concatenate # + _kg_hide-input=true # Auxiliary functions # Transformer inputs def preprocess_test(text, context, tokenizer, max_seq_len): context_encoded = tokenizer.encode(context) context_encoded = context_encoded.ids[1:-1] encoded = tokenizer.encode(text) encoded.pad(max_seq_len) encoded.truncate(max_seq_len) input_ids = encoded.ids attention_mask = encoded.attention_mask token_type_ids = ([0] * 3) + ([1] * (max_seq_len - 3)) input_ids = [101] + context_encoded + [102] + input_ids # update input ids and attentions masks size input_ids = input_ids[:-3] attention_mask = [1] * 3 + attention_mask[:-3] x = [np.asarray(input_ids, dtype=np.int32), np.asarray(attention_mask, dtype=np.int32), np.asarray(token_type_ids, dtype=np.int32)] return x def get_data_test(df, tokenizer, MAX_LEN): x_input_ids = [] x_attention_masks = [] x_token_type_ids = [] for row in df.itertuples(): x = preprocess_test(getattr(row, "text"), getattr(row, "sentiment"), tokenizer, MAX_LEN) x_input_ids.append(x[0]) x_attention_masks.append(x[1]) x_token_type_ids.append(x[2]) x_data = [np.asarray(x_input_ids), np.asarray(x_attention_masks), np.asarray(x_token_type_ids)] return x_data def decode(pred_start, pred_end, text, tokenizer): offset = tokenizer.encode(text).offsets if pred_end >= len(offset): pred_end = len(offset)-1 decoded_text = "" for i in 
range(pred_start, pred_end+1): decoded_text += text[offset[i][0]:offset[i][1]] if (i+1) < len(offset) and offset[i][1] < offset[i+1][0]: decoded_text += " " return decoded_text # - # # Load data # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _kg_hide-input=true _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv') print('Test samples: %s' % len(test)) display(test.head()) # - # # Model parameters # + _kg_hide-input=true MAX_LEN = 128 base_path = '/kaggle/input/qa-transformers/distilbert/' base_model_path = base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5' config_path = base_path + 'distilbert-base-uncased-distilled-squad-config.json' input_base_path = '/kaggle/input/4-tweet-train-distilbert-lower-softmax/' tokenizer_path = input_base_path + 'vocab.txt' model_path_list = glob.glob(input_base_path + '*.h5') model_path_list.sort() print('Models to predict:') print(*model_path_list, sep = "\n") # - # # Tokenizer tokenizer = BertWordPieceTokenizer(tokenizer_path , lowercase=True) # # Pre process # + test['text'].fillna('', inplace=True) test["text"] = test["text"].apply(lambda x: x.lower()) x_test = get_data_test(test, tokenizer, MAX_LEN) # - # # Model def model_fn(): input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask') token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids') base_model = TFDistilBertModel.from_pretrained(base_model_path, config=config_path, name="base_model") sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}) last_state = sequence_output[0] x = GlobalAveragePooling1D()(last_state) y_start = Dense(MAX_LEN, activation='softmax', name='y_start')(x) y_end = Dense(MAX_LEN, activation='softmax', name='y_end')(x) model = Model(inputs=[input_ids, attention_mask, token_type_ids], 
outputs=[y_start, y_end]) return model # # Make predictions # + _kg_hide-input=true NUM_TEST_SAMPLES = len(test) test_start_preds = np.zeros((NUM_TEST_SAMPLES, MAX_LEN)) test_end_preds = np.zeros((NUM_TEST_SAMPLES, MAX_LEN)) for model_path in model_path_list: print(model_path) model = model_fn() model.load_weights(model_path) test_preds = model.predict(x_test) test_start_preds += test_preds[0] / len(model_path_list) test_end_preds += test_preds[1] / len(model_path_list) # - # # Post process # + _kg_hide-input=true test['start'] = test_start_preds.argmax(axis=-1) test['end'] = test_end_preds.argmax(axis=-1) test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1) # - # # Test set predictions # + _kg_hide-input=true submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv') submission['selected_text'] = test["selected_text"] submission.to_csv('submission.csv', index=False) submission.head(10)
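# The prediction loop above builds a simple averaging ensemble: each checkpoint's softmax outputs are accumulated with weight `1/len(model_path_list)`, and the argmax is taken only after averaging. A minimal sketch of that pattern, with toy arrays standing in for real model outputs:

```python
# Averaging-ensemble sketch: mean the per-model probability distributions,
# then argmax. The two toy "models" below are stand-ins for real predictions.
import numpy as np

def ensemble_argmax(per_model_preds):
    """Average predictions across models, then take the argmax per sample."""
    avg = np.mean(per_model_preds, axis=0)  # shape: (n_samples, n_positions)
    return avg.argmax(axis=-1)

model_a = np.array([[0.7, 0.2, 0.1]])  # model A favours position 0
model_b = np.array([[0.1, 0.3, 0.6]])  # model B favours position 2
print(ensemble_argmax([model_a, model_b]))  # -> [0] (mean is [0.4, 0.25, 0.35])
```

# Averaging probabilities before the argmax is what lets a confident model outvote an uncertain one, which a majority vote over per-model argmaxes would not do.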
Model backlog/Inference/4-tweet-inference-distilbert-softmax.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + doc = """ This jupyter notebook is authored by ygg_anderson for the Token Engineering Commons. See appropriate licensing. 🐧 🐧 🐧 """ import param import panel as pn import pandas as pd import hvplot.pandas import holoviews as hv import numpy as np from scipy.stats.mstats import gmean import os pn.extension() yellow = '#DEFB48' blue = '#0F2EEE' pink = '#FD40FE' light_blue = '#03B3FF' purple = '#7622A8' black = '#0b0a15' TEC_COLOURS = [blue, black, yellow, pink, purple] APP_PATH = './' sheets = [ "Total Impact Hours so far", "IH Predictions", "#8 Jan 1", "#7 Dec 18", "#6 Dec 4", "#5 Nov 20", "#4 Nov 6", "#3 Oct 23", "#2 Oct 9", "#1 Sept 24", "#0 Sept 7 (historic)", ] + [f"#{i} IH Results" for i in range(9)] sheets = {i:sheet for i, sheet in enumerate(sheets)} def read_excel(sheet_name="Total Impact Hours so far", header=1, index_col=0, usecols=None) -> pd.DataFrame: data = pd.read_excel( os.path.join(APP_PATH, "data", "TEC Praise Quantification.xlsx"), sheet_name=sheet_name, engine='openpyxl', header=header, index_col=index_col, usecols=usecols ).reset_index().dropna(how='any') return data ## Tests impact_hour_data_1 = read_excel() impact_hour_data_2 = read_excel(sheet_name="IH Predictions", header=0, index_col=0, usecols='A:I').drop(index=19) pn.Row(impact_hour_data_1.hvplot.table(), impact_hour_data_2.hvplot.table()) # - # Load CSTK data cstk_data = pd.read_csv('CSTK_DATA.csv', header=None).reset_index().head(100) cstk_data.columns = ['CSTK Token Holders', 'CSTK Tokens'] cstk_data['CSTK Tokens Capped'] = cstk_data['CSTK Tokens'].apply(lambda x: min(x, cstk_data['CSTK Tokens'].sum()/10)) cstk_data import numpy as np class ImpactHoursFormula(param.Parameterized): """ Sem's Formula 🌱 🐝 🍯 This formula was a collaboration of Sem and Griff for the TEC 
hatch impact hours formula. https://forum.tecommons.org/t/impact-hour-rewards-deep-dive/90/5 """ # Impact Hour Data historic = pd.read_csv('data/IHPredictions.csv').query('Model=="Historic"') optimistic = pd.read_csv('data/IHPredictions.csv').query('Model=="Optimistic"') predicted_labour_rate = param.Number(0.5, bounds=(-.5,1.5), step=0.05) # Impact Hour Formula total_impact_hours = param.Integer(step=100) minimum_raise = param.Number(100, bounds=(10, 10000), step=100) expected_raise_per_impact_hour = param.Number(25, bounds=(0,200), step=1) maximum_impact_hour_rate = param.Number(100, bounds=(0,200), step=1) target_raise = param.Number() maximum_raise = param.Number() # Hatch params hatch_period_days = param.Integer(15, bounds=(5, 30), step=2) hatch_tribute = param.Number(0.05, bounds=(0,1)) # CSTK Ratio total_cstk_tokens = param.Number(cstk_data['CSTK Tokens Capped'].sum(), constant=True) hatch_oracle_ratio = param.Number(0.005, bounds=(0.005, 100), step=0.005) # Number of TESTTEC exchanged for 1 wxdai hatch_exchange_rate = param.Number(10000, bounds=(1,100000), step=1) def __init__(self, **params): super(ImpactHoursFormula, self).__init__(**params) # Initial Predicted Impact Hours historic = self.historic.set_index('Round') optimistic = self.optimistic[self.optimistic["Actual / Predicted"] == "Predicted"].set_index('Round') predicted = optimistic.copy() predicted['Total IH'] = self.predicted_labour_rate * historic[historic["Actual / Predicted"] == "Predicted"]['Total IH'] + (1 - self.predicted_labour_rate) * optimistic['Total IH'] predicted['Total Hours'] = self.predicted_labour_rate * historic[historic["Actual / Predicted"] == "Predicted"]['Total Hours'] + (1 - self.predicted_labour_rate) * optimistic['Total Hours'] self.total_impact_hours = int(predicted['Total IH'].max()) # Maximum Raise self.maximum_raise = self.total_impact_hours * self.expected_raise_per_impact_hour self.param['maximum_raise'].bounds = (self.maximum_raise / 10, self.maximum_raise * 10) 
self.param['maximum_raise'].step = self.maximum_raise / 10 # Target Raise self.target_raise = self.maximum_raise / 2 self.param['target_raise'].bounds = (self.minimum_raise, self.maximum_raise) self.param['target_raise'].step = self.maximum_raise / 10 def impact_hours_accumulation(self): x = 'End Date' historic = self.historic.set_index('Round') optimistic = self.optimistic[self.optimistic["Actual / Predicted"] == "Predicted"].set_index('Round') predicted = optimistic.copy() predicted['Total IH'] = self.predicted_labour_rate * historic[historic["Actual / Predicted"] == "Predicted"]['Total IH'] + (1 - self.predicted_labour_rate) * optimistic['Total IH'] predicted['Total Hours'] = self.predicted_labour_rate * historic[historic["Actual / Predicted"] == "Predicted"]['Total Hours'] + (1 - self.predicted_labour_rate) * optimistic['Total Hours'] historic_curve = historic.hvplot(x, 'Total IH', rot=45, title='Impact Hours Accumulation Curve 🛠️') historic_bar = historic.hvplot.bar(x, 'Total Hours', label='Historic') optimistic_curve = optimistic.hvplot(x, 'Total IH') optimistic_bar = optimistic.hvplot.bar(x, 'Total Hours', label='Optimistic') predicted_curve = predicted.hvplot(x, 'Total IH', rot=45, title='Impact Hours Accumulation Curve :)') predicted_bar = predicted.hvplot.bar(x, 'Total Hours', label='Predicted') self.total_impact_hours = int(predicted['Total IH'].max()) return pn.Column(historic_curve * historic_bar * predicted_curve * predicted_bar * optimistic_curve * optimistic_bar) def impact_hours_rewards(self): expected_raise = self.total_impact_hours * self.expected_raise_per_impact_hour if expected_raise > self.maximum_raise: expected_raise = self.maximum_raise self.param['maximum_raise'].bounds = (expected_raise, expected_raise * 10) self.param['maximum_raise'].step = expected_raise / 10 if self.target_raise > self.maximum_raise: self.target_raise = self.maximum_raise self.param['target_raise'].bounds = (self.minimum_raise, self.maximum_raise) 
self.param['target_raise'].step = self.maximum_raise / 100 x = np.linspace(self.minimum_raise, self.maximum_raise) R = self.maximum_impact_hour_rate m = self.expected_raise_per_impact_hour H = self.total_impact_hours y = [R* (x / (x + m*H)) for x in x] df = pd.DataFrame([x,y]).T df.columns = ['Total XDAI Raised','Impact Hour Rate'] try: expected_impact_hour_rate = df[df['Total XDAI Raised'] > expected_raise].iloc[0]['Impact Hour Rate'] except: expected_impact_hour_rate = df['Impact Hour Rate'].max() try: target_impact_hour_rate = df[df['Total XDAI Raised'] > self.target_raise].iloc[0]['Impact Hour Rate'] except: target_impact_hour_rate = df['Impact Hour Rate'].max() impact_hours_plot = df.hvplot.area(title='Total Raise and Impact Hour Rate 🎯', x='Total XDAI Raised', xformatter='%.0f', hover=True) height = impact_hours_plot.data["Impact Hour Rate"].max() - impact_hours_plot.data["Impact Hour Rate"].min() expected = hv.Spikes(([expected_raise], [height]), vdims="height", label="Expected Raise").opts(color='blue', line_width=2) * hv.HLine(expected_impact_hour_rate).opts(color='blue', line_width=2) target = hv.Spikes(([self.target_raise], [height]), vdims="height", label="Target Raise").opts(color='red', line_width=2) * hv.HLine(target_impact_hour_rate).opts(color='red', line_width=2) return (impact_hours_plot * target * expected).opts(legend_position='bottom_right') def funding_pools(self): x = np.linspace(self.minimum_raise, self.maximum_raise) R = self.maximum_impact_hour_rate m = self.expected_raise_per_impact_hour H = self.total_impact_hours y = [R* (x / (x + m*H)) for x in x] df = pd.DataFrame([x,y]).T df.columns = ['Total XDAI Raised','Impact Hour Rate'] # Minimum Results minimum_raise = self.minimum_raise minimum_rate = df[df['Total XDAI Raised'] > minimum_raise].iloc[0]['Impact Hour Rate'] minimum_cultural_tribute = self.total_impact_hours * minimum_rate # Expected Results expected_raise = self.total_impact_hours * self.expected_raise_per_impact_hour try: 
expected_rate = df[df['Total XDAI Raised'] > expected_raise].iloc[0]['Impact Hour Rate'] except: expected_rate = df['Impact Hour Rate'].max() expected_cultural_tribute = self.total_impact_hours * expected_rate # Target Results target_raise = self.target_raise try: target_rate = df[df['Total XDAI Raised'] > target_raise].iloc[0]['Impact Hour Rate'] except: target_rate = df['Impact Hour Rate'].max() target_cultural_tribute = self.total_impact_hours * target_rate # Funding Pools and Tribute funding = pd.DataFrame.from_dict({ 'Minimum': [minimum_cultural_tribute, minimum_raise-minimum_cultural_tribute], 'Expected': [expected_cultural_tribute, expected_raise-expected_cultural_tribute], 'Target': [target_cultural_tribute, target_raise-target_cultural_tribute]}, orient='index', columns=['Culture Tribute', 'Funding Pool']) funding_plot = funding.hvplot.bar(title="Funding Pool Outcomes 🔋", stacked=True, ylim=(0,self.maximum_raise), yformatter='%.0f').opts(color=hv.Cycle(TEC_COLOURS)) return funding_plot def hatch_raise_view(self): # Load CSTK data cstk_data = pd.read_csv('CSTK_DATA.csv', header=None).reset_index().head(100) cstk_data.columns = ['CSTK Token Holders', 'CSTK Tokens'] cstk_data['CSTK Tokens Capped'] = cstk_data['CSTK Tokens'].apply(lambda x: min(x, cstk_data['CSTK Tokens'].sum()/10)) cstk_data['Cap raise'] = cstk_data['CSTK Tokens Capped'] * self.hatch_oracle_ratio cap_plot = cstk_data.hvplot.area(title="Raise Targets Per Hatcher", x='CSTK Token Holders', y='Cap raise', yformatter='%.0f', label="Cap Raise", ylabel="XDAI Staked") cstk_data['max_goal'] = cstk_data['Cap raise'] * self.maximum_raise max_plot = cstk_data.hvplot.area(x='CSTK Token Holders', y='max_goal', yformatter='%.0f', label="Max Raise") cstk_data['min_goal'] = cstk_data['Cap raise'] * self.minimum_raise min_plot = cstk_data.hvplot.area(x='CSTK Token Holders', y='min_goal', yformatter='%.0f', label="Min Raise") cstk_data['target_goal'] = cstk_data['Cap raise'] * self.target_raise target_plot = 
cstk_data.hvplot.line(x='CSTK Token Holders', y='target_goal', yformatter='%.0f', label="Target Raise") raise_bars = cstk_data.iloc[:,3:].sum().sort_values(ascending=False).hvplot.bar(yformatter='%.0f', title="Total Raise Targets") stats = pd.DataFrame(cstk_data.iloc[:,3:].sum(), columns=['Total XDAI Raise']) stats['GMean XDAI Co-vested Per Hatcher'] = gmean(cstk_data.iloc[:,3:]) stats['XDAI Hatch Tribute'] = stats['Total XDAI Raise'] * self.hatch_tribute stats['Total TECH Tokens'] = stats['Total XDAI Raise'] * self.hatch_exchange_rate return pn.Column(cap_plot * max_plot * min_plot * target_plot, raise_bars, stats.sort_values('Total XDAI Raise',ascending=False).apply(round).reset_index().hvplot.table()) # + impact_hours_rewards = ImpactHoursFormula() pn.Row(impact_hours_rewards, pn.Column(impact_hours_rewards.impact_hours_accumulation, impact_hours_rewards.impact_hours_rewards, impact_hours_rewards.funding_pools), impact_hours_rewards.hatch_raise_view) # -
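# All three plotting methods in `ImpactHoursFormula` are built on the same rational curve, `R * x / (x + m * H)`. Pulled out on its own it is easy to sanity-check; the numbers below are illustrative placeholders, not actual TEC hatch parameters:

```python
# Impact-hour-rate curve from the class above: rate(x) = R * x / (x + m * H),
# where x is the total raise, R the maximum impact hour rate, m the expected
# raise per impact hour, and H the total impact hours. Placeholder numbers.
def impact_hour_rate(x, R, m, H):
    return R * x / (x + m * H)

R = 100.0   # maximum impact hour rate
m = 25.0    # expected raise per impact hour
H = 1000.0  # total impact hours
x = m * H   # raise exactly the "expected" amount

rate = impact_hour_rate(x, R, m, H)
print(rate)      # at x = m * H the curve gives exactly R / 2 -> 50.0
print(rate * H)  # culture tribute at that raise -> 50000.0
```

# The half-maximum property at `x = m * H` is a useful invariant: whatever parameters are chosen, hitting the expected raise always yields half the maximum impact hour rate.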
Lab7/culture_tribute2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Let's scrape some inmate data # # Our goal in this exercise is to scrape the [roster of inmates in the Hennepin County Jail](https://www4.co.hennepin.mn.us/webbooking/search.asp) into a CSV. # ### Step 1: Can we get everyone? # # What happens when we click the search box without entering a first or last name? We're directed to a page with the listing of the entire roster at a new URL. # # This is good news -- some forms are set up to require a minimum number of characters. Now we need to check whether you can just _go_ to that URL without visiting the landing page first and clicking through -- in other words, does that page depend on a [cookie](https://en.wikipedia.org/wiki/HTTP_cookie) being passed? # # To test this, I usually open another browser window in incognito mode and paste in the URL. Success! Going to [https://www4.co.hennepin.mn.us/webbooking/resultbyname.asp](https://www4.co.hennepin.mn.us/webbooking/resultbyname.asp) dumps out the entire list of inmates, so that's where we'll start. (You could also open your network tab and see what information is getting exchanged during the request. For more complex dynamically created pages that rely on cookies, we'd probably need the `requests` [Session object](http://docs.python-requests.org/en/master/user/advanced/#session-objects).) # ### Step 2: Check out the inmate detail page # # Let's click on an inmate link. We want to look at two things: # # - Does each inmate have a unique URL with a consistent pattern? (Yes) # - What information on the page do we want to collect? (Let's grab custody info, housing location, booking date/time and arresting agency) # # What's the pattern for an inmate URL? 
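# Written as a Python template string the pattern looks like the sketch below (the booking number here is made up; real ones come from the roster rows):

```python
# Each inmate detail URL is the webbooking base plus a v_booknum query
# parameter. The booking number below is a made-up placeholder.
url_template = 'https://www4.co.hennepin.mn.us/webbooking/chargedetail.asp?v_booknum={}'
print(url_template.format('2018012345'))
```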
# ### Step 3: Start scraping # # #### Import the libraries we'll need # + import csv from datetime import datetime import time import requests from bs4 import BeautifulSoup # - # #### Set introductory variables # + # base URL url_base = 'https://www4.co.hennepin.mn.us/webbooking/' # results page URL results_page = url_base + 'resultbyname.asp' # pattern for inmate detail URLs inmate_url_pattern = url_base + 'chargedetail.asp?v_booknum={}' # - # #### Fetch and parse the page contents # + # fetch the page r = requests.get(results_page) # parse it soup = BeautifulSoup(r.text, 'html.parser') # find the table we want table = soup.find_all('table')[6] # get the rows of the table, minus the header inmates = table.find_all('tr')[1:] # - # #### Write a couple of functions # # We need to pause here and write a couple of functions to help us extract the bits of data from the inmate's detail page: # # - A function that takes the URL for an inmate detail page, fetches and parses the contents, then returns the bits of data we're interested in # - A more specific function that takes the text of a label cell on a detail page ("Sheriff's Custody:", for instance) and returns the associated value in the next cell. 
This function will be called inside our other function -- it's not 100% necessary but it keeps us from repeating ourselves a million times # + def get_inmate_attr(soup, label): """Given a label and a soup'd detail page, return the associated value.""" return soup.find(string=label).parent.parent.next_sibling \ .next_sibling.text.strip() def inmate_details(url): """Fetch and parse an inmate detail page, return three bits of data.""" # fetch the page r = requests.get(url) # parse it into soup soup = BeautifulSoup(r.text, 'html.parser') # call the get_inmate_attr function to nab the cells we're interested in custody = get_inmate_attr(soup, "Sheriff's Custody:") housing = get_inmate_attr(soup, "Housing Location:") booking_date = get_inmate_attr(soup, "Received Date/Time:") # return a dict with this info # lose the " Address" string on the housing cell, where it exists # also, parse the booking date as a date to validate return { 'custody': custody, 'housing': housing.replace(' Address', ''), 'booking_date': datetime.strptime(booking_date, '%m/%d/%Y.. 
%H:%M') } # - # #### Loop over the inmate rows, write to file # open a file to write to with open('inmates.csv', 'w') as outfile: # define your headers -- they should match the keys in the dict # we're creating as we scrape headers = ['booking_num', 'url', 'last', 'rest', 'dob', 'custody', 'housing', 'booking_date'] # create a writer object writer = csv.DictWriter(outfile, fieldnames=headers) # write the header row writer.writeheader() # print some summary info print('') print('Writing data for {:,} inmates ...'.format(len(inmates))) print('') # loop over the rows of inmates from the search results page for row in inmates: # unpack the list of cells in the row booking_num, name, dob, status = row.find_all('td') # get the detail page link using the template string we defined up top detail_link = inmate_url_pattern.format(booking_num.string) # unpack the name into last/rest and print it last, rest = name.string.split(', ') print(rest, last) # reformat the dob, which, bonus, also validates it dob_parsed = datetime.strptime(dob.string, '%m/%d/%Y') # our dict of summary info summary_info = { 'booking_num': booking_num.string, 'url': detail_link, 'last': last, 'rest': rest, 'dob': dob_parsed.strftime('%Y-%m-%d') } # call the inmate_details function on the detail URL # remember: this returns a dictionary details = inmate_details(detail_link) # combine the summary and detail dicts # by unpacking them into a new dict # https://www.python.org/dev/peps/pep-0448/ combined_dict = { **summary_info, **details } # write the combined dict out to file writer.writerow(combined_dict) # pause for 2 seconds to give the server a break time.sleep(2) # ### _Extra credit_: Get charge details # # It's all well and good to get the basic inmate info, but we're probably also interested in _why_ they're in jail -- what are they charged with? # # For this exercise, add some parsing logic to the `inmate_details` scraping function to extract data about what each inmate has been charged with. 
Pulling them out as a list of dictionaries makes the most sense to me, but you can format it however you like. # # Because each inmate has a variable number of charges, you also need to think about how you want to represent the data in your CSV. Is each line one charge? One inmate? Picture how one row of data should look in your output file and structure your parsing to match.
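# One common answer to the "how should charges look in the CSV" question is one row per charge, repeating the inmate's summary fields on each row. A self-contained sketch with made-up inmates, charges, and field names:

```python
# One-row-per-charge CSV layout, sketched with hypothetical data. Inmate
# summary fields repeat on every charge row so each row stands alone.
import csv
import io

inmates = [
    {'booking_num': '2018000001', 'last': 'Smith',
     'charges': [{'description': 'Burglary', 'severity': 'Felony'},
                 {'description': 'Trespass', 'severity': 'Misdemeanor'}]},
    {'booking_num': '2018000002', 'last': 'Jones',
     'charges': [{'description': 'DWI', 'severity': 'Gross misdemeanor'}]},
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=['booking_num', 'last',
                                         'description', 'severity'])
writer.writeheader()
for inmate in inmates:
    for charge in inmate['charges']:
        # merge the inmate's summary fields with this charge's fields
        writer.writerow({'booking_num': inmate['booking_num'],
                         'last': inmate['last'], **charge})

print(out.getvalue())
```

# The alternative -- one row per inmate with all charges packed into a single delimited cell -- keeps the file shorter but is much harder to filter and aggregate later.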
completed/15. Web scraping (Part 5).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Example: Excel file processing # # Download an Excel file from the National Police's "estadística delictiva" (crime statistics) site: # https://www.policia.gov.co/grupo-informacion-criminalidad/estadistica-delictiva # # import numpy as np import pandas as pd from pandas_profiling import ProfileReport xls_path = r"C:\opt\work\igac\git\ejemplo_data_engineering\data\hurto_a_motocicletas_2020_0.xls" df = pd.read_excel(xls_path, sheet_name=0, skiprows = range(1, 9), header=1) df.head() df = pd.read_excel(xls_path, sheet_name=0, skiprows = range(1, 9), header=1, dtype={'CODIGO DANE': str}) df.head() df.shape df.columns # + # standardize column names df.columns = df.columns.str.lower() df.columns = df.columns.str.strip() df.columns = df.columns.str.replace(' ','_',regex=False) df.columns # + ## generate new columns df["categoria"] = "HURTO A MOTOCICLETAS" df["anio"] = 2020 df['codigo_departamento'] = df['codigo_dane'].str[:2] df['codigo_municipio'] = df['codigo_dane'].str[:5] df.head() # - df.tail() # + ## REMOVE INVALID DATA df = df[df['codigo_dane'].notna()] df.tail() # - # GENERATE PROFILING REPORT profile = ProfileReport(df) html_report = r"C:\opt\work\igac\git\ejemplo_data_engineering\reports\hurto_a_motocicletas_2020_0.xls.report.html" profile.to_file(html_report) # + ## EXPORT TO OTHER FORMATS df.to_json(r"C:\opt\work\igac\git\ejemplo_data_engineering\data\hurto_a_motocicletas_2020_0.json", orient='records', lines=True) df.to_csv(r"C:\opt\work\igac\git\ejemplo_data_engineering\data\hurto_a_motocicletas_2020_0.tsv", sep = '\t', index=False) # + ## export to a PostgreSQL database from sqlalchemy import create_engine engine_dw = create_engine('postgresql://user:password@server:5432/db_name') df.to_sql('ponal_delitos', 
con=engine_dw, if_exists='append', schema="public", index=False)
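# The three column-standardization steps above (lowercase, strip, replace spaces) are worth collecting into one helper so every new file gets identical treatment. A sketch on plain strings; with a DataFrame you would assign the result back to `df.columns`:

```python
# Reusable column-name standardizer: lowercase, trim whitespace, and
# replace interior spaces with underscores.
def standardize(name: str) -> str:
    return name.lower().strip().replace(' ', '_')

columns = ['CODIGO DANE', ' MUNICIPIO ', 'CANTIDAD ']
print([standardize(c) for c in columns])
# ['codigo_dane', 'municipio', 'cantidad']
```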
notebooks/ejemplo_excel.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <small><small><i> # All the IPython Notebooks in this lecture series by Dr. <NAME> are available @ **[GitHub](https://github.com/milaan9/02_Python_Datatypes/tree/main/003_Python_List_Methods)** # </i></small></small> # # Python del Statement # # In this class, you will learn to use the **`del`** keyword with the help of examples. # # **Syntax**: # # ```python # del obj_name # ``` # # Here, **`del`** is a Python keyword. And, **`obj_name`** can be variables, user-defined objects, lists, items within lists, dictionaries, etc. # + # Example 1: Delete a user-defined object class MyClass: a = 10 def func(self): print('Hello') print(MyClass) # Output: <class '__main__.MyClass'> # + del MyClass # deleting MyClass print(MyClass) # Error: MyClass is not defined # - # In the program, we have deleted **`MyClass`** using the **`del MyClass`** statement. # + # Example 2: Delete a variable, a tuple, and a dictionary my_var = 5 my_tuple = ('Arthur', 33) my_dict = {'name': 'Arthur', 'age': 33} del my_var del my_tuple del my_dict # + # Example 3: Remove items, slices from a list # The `del` statement can be used to delete an item at a given index. # Also, it can be used to remove slices from a list. 
my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9] # deleting the third item del my_list[2] # Output: [1, 2, 4, 5, 6, 7, 8, 9] print(my_list) # deleting items from 2nd to 4th del my_list[1:4] # Output: [1, 6, 7, 8, 9] print(my_list) # deleting all elements del my_list[:] # Output: [] print(my_list) # - # Error: my_var is not defined print(my_var) # Error: my_tuple is not defined print(my_tuple) # Error: my_dict is not defined print(my_dict) # + # Example 4: Remove a key:value pair from a dictionary person = { 'name': 'Arthur', 'age': 33, 'profession': 'Programmer' } del person['profession'] print(person) # Output: {'name': 'Arthur', 'age': 33} # - # You can't delete items of tuples and strings. That's because tuples and strings are immutable: objects that can't be changed after their creation. # + my_tuple = (1, 2, 3) del my_tuple[1] # Error: 'tuple' object doesn't support item deletion # - # However, you can delete an entire tuple or string. # + my_tuple = (1, 2, 3) del my_tuple # deleting tuple my_tuple # -
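# One nuance worth adding to the examples above: `del` removes a *name binding*, not necessarily the object itself. If another name still refers to the object, the object survives:

```python
a = [1, 2, 3]
b = a     # second name bound to the same list object
del a     # only the name `a` is removed
print(b)  # [1, 2, 3] -- the list still exists, reachable through `b`
```

# The object is only reclaimed once its last reference disappears, which is why `del` on one shared name never affects other holders of the same object.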
003_Python_List_Methods/Python_del_statement.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + deletable=false editable=false # Initialize Otter import otter grader = otter.Notebook("Assignment0.ipynb") # - # # CMPUT 200 Winter 2022 # # Assignment 0 # + [markdown] id="wZYSCsMz1QR4" # Each assignment will be distributed as a notebook such as this one. You will execute the questions in the notebook. The questions might ask for a short answer in text form or for you to write and execute a piece of code. # For short answer questions you must enter your answer in the provided space. For coding questions you must use the provided space. When you are done, you will submit your work from the notebook. Follow directions at the bottom of this notebook for submission. # + id="ZfrPg4x30DDJ" # Don't change this cell; just run it. # %pip install -r requirements.txt import numpy as np import pandas as pd from scipy.optimize import minimize # These lines do some fancy plotting magic. import matplotlib # This is a magic function that renders the figure in the notebook, instead of displaying a dump of the figure object. # %matplotlib inline import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') import warnings warnings.simplefilter('ignore', FutureWarning) import otter grader = otter.Notebook() # + [markdown] id="YKsuY7G77YXl" # ## Importing Data # + [markdown] id="4fxeBd-x7dlO" # ## Linear Regression # + id="8hDDnT5v784Q" # Just run this cell pten = pd.read_csv("pten.csv") pten.head(5) # + id="Whc8ZtqB8a4n" # Just run this cell pten.hist("Copy Number", bins = np.arange(-1, 1.5, 0.5)) # - # **Question 1** # # Looking at the histogram above, we want to check whether or not `Copy Number` is in standard units. 
For this question, compute the mean and the standard deviation of the values in `Copy Number` and assign these values to `copy_number_mean` and `copy_number_sd` respectively. After you calculate these values, assign `is_su` to either `True` if you think that `Copy Number` is in standard units or `False` if you think otherwise. # + tags=[] copy_number = pten['Copy Number'] copy_number_mean = ... copy_number_sd = ... is_su = ... print(f"Mean: {copy_number_mean}, SD: {copy_number_sd}, Is in standard units?: {is_su}") # + deletable=false editable=false grader.check("q1") # - # **Question 2** # # Create the function `standard_units` so that it converts the values in the array `arr` to standard units. We'll then use `standard_units` to create a new table, `pten_su`, that converts all the values in the table `pten` to standard units. # + tags=[] def standard_units(arr): ... # DON'T DELETE OR MODIFY ANY OF THE LINES OF CODE BELOW IN THIS CELL pten_su = pd.DataFrame({"Cell Line": pten["Cell Line"], "Copy Number SU": standard_units(pten["Copy Number"]), "mRNA Expression (Affy) SU": standard_units(pten["mRNA Expression (Affy)"]), "mRNA Expression (RNAseq) SU": standard_units(pten["mRNA Expression (RNAseq)"])}) print(pten_su.head(5)) # + deletable=false editable=false grader.check("q2") # - # You should always visually inspect your data before numerically analyzing any relationships in your dataset. Run the following cell in order to look at the relationship between the variables in our dataset. # Just run this cell pten_su.plot.scatter("Copy Number SU", "mRNA Expression (Affy) SU") pten_su.plot.scatter("Copy Number SU", "mRNA Expression (RNAseq) SU") pten_su.plot.scatter("mRNA Expression (Affy) SU", "mRNA Expression (RNAseq) SU") # **Question 3** # # Which of the following relationships do you think has the highest correlation (i.e. highest absolute value of `r`)? 
Assign `highest_correlation` to the number corresponding to the relationship you think has the highest correlation. # # 1. Copy Number vs. mRNA Expression (Affy) # 2. Copy Number vs. mRNA Expression (RNAseq) # 3. mRNA Expression (Affy) vs. mRNA Expression (RNAseq) # + tags=[] highest_correlation = ... # - # **Question 4** # # Now, using the `standard units` function, define the function `correlation` which computes the correlation between `arr1` and `arr2`. # + tags=[] def correlation(arr1, arr2): ... # This computes the correlation between the different variables in pten copy_affy = correlation(pten["Copy Number"], pten["mRNA Expression (Affy)"]) copy_rnaseq = correlation(pten["Copy Number"], pten["mRNA Expression (RNAseq)"]) affy_rnaseq = correlation(pten["mRNA Expression (Affy)"], pten["mRNA Expression (RNAseq)"]) print(f" \ Copy Number vs. mRNA Expression (Affy) Correlation: {copy_affy}, \n \ Copy Number vs. mRNA Expression (RNAseq) Correlation: {copy_rnaseq}, \n \ mRNA Expression (Affy) vs. mRNA Expression (RNAseq) Correlation: {affy_rnaseq}") # + deletable=false editable=false grader.check("q4") # - # **Question 5** # # If we switch what we input as arguments to `correlation`, i.e. found the correlation between `mRNA Expression (Affy)` vs. `Copy Number` instead of the other way around, would the correlation change? Assign `correlation_change` to either `True` if you think yes, or `False` if you think no. # + tags=[] correlation_change = ... # - # <!-- BEGIN QUESTION --> # # **Question 6** # # Looking at both the scatter plots after Question 2 and the correlations computed in Question 4, describe a pattern you see in the relationships between the variables. # _Type your answer here, replacing this text._ # <!-- END QUESTION --> # # **Question 7** # # Let's look at the relationship between mRNA Expression (Affy) vs. mRNA Expression (RNAseq) only. 
Define a function called `regression_parameters` that returns the parameters of the regression line as a two-item array containing the slope and intercept of the regression line as the first and second elements respectively. The function `regression_parameters` takes in two arguments, an array of `x` values, and an array of `y` values. # + tags=[] def regression_parameters(x, y): ... slope = ... intercept = ... return [slope, intercept] parameters = regression_parameters(pten["mRNA Expression (Affy)"], pten["mRNA Expression (RNAseq)"]) parameters # + deletable=false editable=false grader.check("q7") # - # **Question 8** # # If we switch what we input as arguments to `regression_parameters`, i.e. found the parameters for the regression line for `mRNA Expression (RNAseq)` vs. `mRNA Expression (Affy)` instead of the other way around, would the regression parameters change (would the slope and/or intercept change)? Assign `parameters_change` to either `True` if you think yes, or `False` if you think no. # + tags=[] parameters_change = ... # - # **Question 9** # # Now, let's look at what the regression parameters look like in standard units. Use the table `pten_su` and the function `regression_parameters`, and assign `parameters_su` to a two-item array containing the slope and the intercept of the regression line for mRNA Expression (Affy) in standard units vs. mRNA Expression (RNAseq) in standard units. # + tags=[] parameters_su = ... parameters_su # + deletable=false editable=false grader.check("q9") # - # <!-- BEGIN QUESTION --> # # **Question 10** # # Looking at the array `parameters_su`, what do you notice about the slope and intercept values specifically? Relate them to another value we already calculated in a previous question, as well as relate them to an equation. 
# _Type your answer here, replacing this text._ # <!-- END QUESTION --> # # **Question 11** # # The oldest and most commonly used cell line in Biology is the HeLa cell line, named after <NAME>, whose cervical cancer cells were taken without her consent in 1951 to create this cell line. The issue of data privacy and consent is very important to data science! You can read more about this topic [here](https://www.hopkinsmedicine.org/henriettalacks/). # # The HeLa cell line is missing from our dataset. If we know that the HeLa mRNA Expression (Affy) value is 8.2, what is the predicted mRNA Expression (RNAseq) value? Use the values in `parameters` that we derived in Question 7, and assign the result to `hela_rnaseq`. # + tags=[] hela_rnaseq = ... # + deletable=false editable=false grader.check("q11") # - # **Question 12** # # Compute the predicted mRNA Expression (RNAseq) values from the mRNA Expression (Affy) values in the `pten` table. Use the values in the `parameters` array from Question 7, and assign the result to `predicted_rnaseq`. We'll plot your computed regression line with the scatter plot from after question 2 of mRNA Expression (Affy) vs. mRNA Expression (RNAseq). # + tags=[] predicted_rnaseq = ... # DON'T CHANGE/DELETE ANY OF THE BELOW CODE IN THIS CELL pten["Predicted mRNA Expression (RNAseq)"] = predicted_rnaseq pten[["mRNA Expression (Affy)", "mRNA Expression (RNAseq)", "Predicted mRNA Expression (RNAseq)"]].plot.scatter("mRNA Expression (Affy)", "mRNA Expression (RNAseq)") plt.plot(pten["mRNA Expression (Affy)"], predicted_rnaseq) # - # ## Fitting a least-squares regression line # Recall that the least-square regression line is the unique straight line that minimizes root mean squared error (RMSE) among all possible fit lines. Using this property, we can find the equation of the regression line by finding the pair of slope and intercept values that minimize root mean squared error. # **Question 13** # # Define a function called `RMSE`. 
It takes in one argument, `params`, which is a two-item array. The items are:
#
# 1. the slope of a line (a number)
# 2. the intercept of a line (a number)
#
# It should return a number: the root mean squared error (RMSE) for the line defined by that slope and intercept when it is used to predict mRNA Expression (RNAseq) values from mRNA Expression (Affy) values for each row in the `pten` table.
#
# *Hint: Errors are defined as the difference between the actual `y` values and the predicted `y` values.*
#
# *Note: if you need a refresher on RMSE, here's the [link](https://www.inferentialthinking.com/chapters/15/3/Method_of_Least_Squares.html#Root-Mean-Squared-Error) from the textbook*

# + tags=[]
def RMSE(params):
    slope, intercept = params[0], params[1]
    affy = pten["mRNA Expression (Affy)"]
    rnaseq = pten["mRNA Expression (RNAseq)"]
    predicted_rnaseq = ...
    ...

# DON'T CHANGE THE FOLLOWING LINES BELOW IN THIS CELL
rmse_example = RMSE([0.5, 6])
rmse_example

# + deletable=false editable=false
grader.check("q13")
# -

# <!-- BEGIN QUESTION -->
#
# **Question 14**
#
# What is the RMSE of a line with slope 0 and intercept equal to the mean of `y`?
#
# *Hint 1: The line with slope 0 and intercept equal to the mean of `y` is just a horizontal line at the mean of `y`.*
#
# *Hint 2: What does the formula for RMSE become if we plug our predicted `y` values into it? Try writing it out on paper! It should be a familiar formula.*

# _Type your answer here, replacing this text._

# <!-- END QUESTION -->
#
# **Question 15**
#
# Find the parameters that minimize the RMSE of the regression line for mRNA Expression (Affy) vs. mRNA Expression (RNAseq). Assign the result to `minimized_parameters`.
#
# You will have to use the `minimize` [function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) from the scipy library.
#
# *Hint: Use the `RMSE` function from Question 13.*

# + tags=[]
minimized_parameters = ...
minimized_parameters.x # + deletable=false editable=false grader.check("q15") # - # <!-- BEGIN QUESTION --> # # **Question 16** # # The slope and intercept pair you found in Question 15 should be very similar to the values that you found in Question 7. Why were we able to minimize RMSE to find the same slope and intercept from the previous formulas? # _Type your answer here, replacing this text._ # <!-- END QUESTION --> # # **Question 17** # # If we had instead minimized mean squared error (MSE), would we have gotten the same slope and intercept of the minimized root mean squared error (RMSE) results? Assign `same_parameters` to either `True` if you think yes, or `False` if you think no. # _Type your answer here, replacing this text._ # + tags=[] same_parameters = ... same_parameters # + deletable=false editable=false grader.check("q17") # - # <!-- BEGIN QUESTION --> # # **Question 18** # # Using a linear regression model, would we be able to obtain accurate predictions for most of the points? Explain why or why not. # _Type your answer here, replacing this text._ # <!-- END QUESTION --> # # **Convert manually graded questions to pdf** # # Running the following cell will convert the manually graded questions to pdf. # ! otter export -e html "Assignment0.ipynb" --filtering # + [markdown] id="WKzO0BUS2ZgG" # ## SUBMISSION INSTRUCTIONS # This is the end of Assignment 0. Be sure to run the tests and verify that they all pass (just because the tests pass does not mean it's the right answer). For submission you need to submit *2 files* on eclass: # # 1- a zip file "CCID.zip", this zip file will only include this notebook # 2- pdf of the manually graded questions # + [markdown] id="_DF9nAJt3-J2" # This assignment contains altered snippets from the original [Berkeley data-8 course](http://data8.org/), which is licensed under the [Creative Commons license](https://creativecommons.org/licenses/by-nc/4.0/).
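# As a generic illustration of the ideas above — on synthetic data with hypothetical variable names, not the graded `pten` answers — the correlation-based slope/intercept formulas and `scipy.optimize.minimize` applied to RMSE should agree:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic example data (hypothetical, standing in for the expression columns)
rng = np.random.default_rng(0)
x = rng.normal(8, 1, 200)
y = 1.1 * x + 2 + rng.normal(0, 0.5, 200)

# Closed-form least-squares parameters: slope = r * sd_y / sd_x
r = np.corrcoef(x, y)[0, 1]
slope = r * np.std(y) / np.std(x)
intercept = np.mean(y) - slope * np.mean(x)

def rmse(params):
    # Root mean squared error of the line y = params[0] * x + params[1]
    predictions = params[0] * x + params[1]
    return np.sqrt(np.mean((y - predictions) ** 2))

# Numerical minimization starting from slope = 0, intercept = 0
fitted = minimize(rmse, x0=[0, 0]).x
```

# Because RMSE is convex in the slope and intercept, the numerical search lands on the same parameters as the closed-form formulas.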
assignments/assn0/Assignment0.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# Quantitative Seismic Interpretation
# ==================
#
# This notebook provides a step-by-step walkthrough of a reservoir characterization workflow based on an example dataset and problem set from Quantitative Seismic Interpretation (Avseth, Mukerji, Mavko, 2005).
#
# The dataset used is provided free of charge on the QSI website (link). It consists of well log data from five wells, with six distinct lithofacies identified in Well 2, chosen as the type well. Alongside the well log data is a seismic dataset containing one 2D section of NMO-corrected pre-stack CDP gathers and two 3D volumes - near and far offset partial stacks.
#
# Rock physics modeling
# ----------------------
#
# First off, we'll need to load the Well 2 data. The LAS file contains both P and S velocity, density, and gamma ray curves. We'll use the LASReader from SciPy recipes (link).

# +
# %matplotlib inline
from rppy import las
import rppy
from matplotlib import pyplot as plt
import numpy as np
from matplotlib.ticker import AutoMinorLocator

well2 = las.LASReader("data/well_2.las", null_subs=np.nan)
# -

# First things first, let's take a look at our well logs. Most of the code below isn't necessary just to get a quick look, but it makes the plots nice and pretty.
# +
plt.figure(1)
plt.suptitle("Well #2 Log Suite")

plt.subplot(1, 3, 1)
plt.plot(well2.data['GR'], well2.data['DEPT'], 'g')
plt.ylim(2000, 2600)
plt.title('Gamma')
plt.xlim(20, 120)
plt.gca().set_xticks([20, 120])
plt.gca().xaxis.grid(True, which="minor")
minorLoc = AutoMinorLocator(6)
plt.gca().xaxis.set_minor_locator(minorLoc)
plt.gca().invert_yaxis()
plt.ylabel("Depth [m]")
plt.gca().set_yticks([2100, 2200, 2300, 2400, 2500])

plt.subplot(1, 3, 2)
plt.plot(well2.data['RHOB'], well2.data['DEPT'], 'b')
plt.ylim(2000, 2600)
plt.title('Density')
plt.xlim(1.65, 2.65)
plt.gca().set_xticks([1.65, 2.65])
plt.gca().xaxis.grid(True, which="minor")
minorLoc = AutoMinorLocator(6)
plt.gca().xaxis.set_minor_locator(minorLoc)
plt.gca().invert_yaxis()
plt.gca().axes.get_yaxis().set_ticks([])

plt.subplot(1, 3, 3)
plt.plot(well2.data['Vp'], well2.data['DEPT'], 'b')
plt.plot(well2.data['Vs'], well2.data['DEPT'], 'r')
plt.ylim(2000, 2600)
plt.title('Velocity')
plt.xlim(0.5, 4.5)
plt.gca().set_xticks([0.5, 4.5])
plt.gca().xaxis.grid(True, which="minor")
minorLoc = AutoMinorLocator(6)
plt.gca().xaxis.set_minor_locator(minorLoc)
plt.gca().invert_yaxis()
plt.gca().axes.get_yaxis().set_ticks([])

plt.show()
# -

# Now, we'll derive a porosity curve from the bulk density, assuming a uniform grain density of 2.65 g/cm^3 and a fluid density of 1.05 g/cm^3.

phi = (well2.data['RHOB'] - 2.65)/(1.05 - 2.65)

# Let's take a look at our porosity curve and sanity-check it.
# + plt.figure(2) plt.subplot(1, 3, 1) plt.plot(phi, well2.data['DEPT'], 'k') plt.ylim(2000, 2600) plt.title('Porosity') plt.xlim(0, 0.6) plt.gca().set_xticks([0, 0.6]) plt.gca().xaxis.grid(True, which="minor") minorLoc = AutoMinorLocator(6) plt.gca().xaxis.set_minor_locator(minorLoc) plt.gca().invert_yaxis() plt.ylabel("Depth [m]") plt.gca().set_yticks([2100, 2200, 2300, 2400, 2500]) plt.show() # - # Looks plausible, no values above 0.6, average porosity hovering around or below 30%. Now let's start looking at log relationships. We'll create a crossplot of Vp vs. Porosity for well 2, and colour it by gamma as a quick-look lithology indicator. plt.figure(3) fig, ax = plt.subplots() im = plt.scatter(phi, well2.data["Vp"], s=20, c=well2.data["GR"], alpha=0.5) plt.xlabel('POROSITY') plt.ylabel('VP') plt.xlim([0, 1]) plt.ylim([0, 6]) plt.clim([50, 100]) cbar = fig.colorbar(im, ax=ax) plt.show() # We'll now compute the Hashin-Shtrikman upper and lower bounds, and add them to our crossplot. In order to do this, we'll need to make some assumptions about the composition of the rock. We'll assume a solid quartz matrix, with a bulk modulus K=37 GPa and a shear modulus u=44 GPa. For the fluid phase we'll begin by using water-filled porosity, by assuming a fluid bulk modulus K=2.25 GPa, shear modulus u=0 GPa. 
# + phi_2 = np.arange(0, 1, 0.001) Ku = np.empty(np.shape(phi_2)) Kl = np.empty(np.shape(phi_2)) uu = np.empty(np.shape(phi_2)) ul = np.empty(np.shape(phi_2)) Vpu = np.empty(np.shape(phi_2)) Vpl = np.empty(np.shape(phi_2)) K = np.array([37., 2.25]) u = np.array([44., 0.001]) for n in np.arange(0, len(phi_2)): Ku[n], Kl[n], uu[n], ul[n] = rppy.media.hashin_shtrikman(K, u, np.array([1-phi_2[n], phi_2[n]])) Vpu[n] = rppy.moduli.Vp(2.65, K=Ku[n], u=uu[n]) Vpl[n] = rppy.moduli.Vp(2.65, K=Kl[n], u=ul[n]) plt.figure(4) fig, ax = plt.subplots() im = plt.scatter(phi, well2.data["Vp"], s=20, c=well2.data["GR"], alpha=0.5) plt.xlabel('POROSITY') plt.ylabel('VP') plt.xlim([0, 1]) plt.ylim([0, 6]) plt.clim([50, 100]) cbar = fig.colorbar(im, ax=ax) plt.plot(phi_2, Vpu, 'k') plt.plot(phi_2, Vpl, 'k') plt.show() # - # Now we'll compute, and add to the mix, Han's empirical sandstone line, for a water-saturated sand at 40 MPa, with a clay content of 5%. # + Vphan, Vshan = rppy.media.han(phi_2, 0.5) plt.figure(5) fig, ax = plt.subplots() im = plt.scatter(phi, well2.data["Vp"], s=20, c=well2.data["GR"], alpha=0.5) plt.xlabel('POROSITY') plt.ylabel('VP') plt.xlim([0, 1]) plt.ylim([0, 6]) plt.clim([50, 100]) cbar = fig.colorbar(im, ax=ax) plt.plot(phi_2, Vpu, 'k') plt.plot(phi_2, Vpl, 'k') plt.plot(phi_2, Vphan, 'k') plt.show() # - # Now, we'll compute the modified Hashin-Shtrikman lower bound, using Hertz-Mindlin theory to define the moduli of the high-porosity end member. This approach is commonly referred to as the "soft", "unconsolidated", or "friable" sand model. It represents a physical model where intergranular cements are deposited away from, rather than at, the grain-to-grain contacts, and as such, models a 'sorting' trend with reduced porosity (rather than a 'diagenetic' trend where clays and cements are deposited at grain contacts, significantly stiffening the model).
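# The checkpoint ends before the soft-sand code itself. As a standalone NumPy sketch of the model just described — the standard Hertz-Mindlin / modified Hashin-Shtrikman lower-bound formulas, with assumed quartz moduli, critical porosity, coordination number, and pressure (not the `rppy` API):

```python
import numpy as np

def soft_sand_dry_moduli(phi, K0=37.0, G0=44.0, phi_c=0.40, C=8.6, P=0.04):
    """Dry-rock bulk and shear moduli (GPa) from the friable-sand model.

    phi   : porosity array, 0 <= phi <= phi_c
    K0,G0 : mineral (quartz) moduli in GPa -- assumed values
    phi_c : critical porosity -- assumed 0.40
    C     : coordination number at critical porosity -- assumed 8.6
    P     : effective pressure in GPa (here 40 MPa) -- assumed
    """
    nu = (3 * K0 - 2 * G0) / (2 * (3 * K0 + G0))  # mineral Poisson's ratio
    # Hertz-Mindlin moduli of the dry grain pack at critical porosity
    K_hm = (C**2 * (1 - phi_c)**2 * G0**2 * P / (18 * np.pi**2 * (1 - nu)**2)) ** (1 / 3)
    G_hm = ((5 - 4 * nu) / (5 * (2 - nu))) * (
        3 * C**2 * (1 - phi_c)**2 * G0**2 * P / (2 * np.pi**2 * (1 - nu)**2)) ** (1 / 3)
    # Modified Hashin-Shtrikman lower bound between the HM end member and the mineral
    f = phi / phi_c
    K_dry = (f / (K_hm + 4 / 3 * G_hm) + (1 - f) / (K0 + 4 / 3 * G_hm)) ** -1 - 4 / 3 * G_hm
    z = G_hm / 6 * (9 * K_hm + 8 * G_hm) / (K_hm + 2 * G_hm)
    G_dry = (f / (G_hm + z) + (1 - f) / (G0 + z)) ** -1 - z
    return K_dry, G_dry

phi = np.linspace(0, 0.40, 5)
K_dry, G_dry = soft_sand_dry_moduli(phi)
```

# At zero porosity the dry moduli recover the mineral values, and at critical porosity they fall to the Hertz-Mindlin pack moduli; saturated moduli would then follow from Gassmann substitution.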
notebooks/.ipynb_checkpoints/QSI Sample Workflow-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from SentenceParserPython3 import SentenceParser
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
import sys
import re


def printProgressBar(iteration, total, prefix='', suffix='', decimals=1, length=100, fill='='):
    percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total)))
    filledLength = int(length * iteration // total)
    bar = fill * filledLength + '.' * (length - filledLength)
    sys.stdout.write('\r%s |%s| %s%% %s' % (prefix, bar, percent, suffix))
    sys.stdout.flush()


SP = SentenceParser()
SP.readfile('./nvbugs_Description.csv', 'csv', header=0)
SP.data = SP.data.drop('Unnamed: 0', axis=1)


# +
def matchhtml(test_str):
    pattern = r'http[s]?://(?:[a-z]|[0-9]|[=#$-_@.&amp;+]|[!*\(\),]|(?:%[0-9a-f][0-9a-f]))+'
    return re.findall(pattern, test_str), re.sub(pattern, ' ', test_str)


def filtbracket(test_str):
    pattern = r'<.*?>'
    return re.sub(pattern, ' ', test_str)


def removequot(test_str):
    pattern = r'&.*?;'
    text = re.sub(pattern, ' ', test_str)
    text = re.sub(r'[\[\]/:?\\"|`]+', ' ', text)
    return text


def matchfile(test_str):
    '''
    Matches file-system paths such as:
    C:/users
    /home/app
    \\itappdev_ml
    '''
    pattern = r'(( [A-Za-z]:|\\|/|\./)[a-zA-Z0-9_\.\\/#&~!%]+)'
    templist = []
    for item in re.findall(pattern, test_str):
        templist.append(item[0])
    return templist, re.sub(pattern, ' ', test_str)


def deepclean(test_str):
    # Remove stray single-character tokens like ' a ', ' # ' and runs of dots/quotes
    pattern = r'( [^ ] |[.\']+)'
    return re.sub(pattern, ' ', test_str)


def overallreplacement(df, column, new_name):
    df[new_name] = df[column].str.replace(r'[\n\r\t\a\b]+', ' ')
    # Remove non-printable characters
    df[new_name] = df[new_name].str.replace(r'[^\x00-\x7f]+', '')
    # df[new_name] = df[column].str.replace(r'http[s]?://(?:[a-z]|[0-9]|[$-_@.&amp;+]|[!*\(\),]|(?:%[0-9a-f][0-9a-f]))+',' ')
    templist = df[new_name].tolist()
    htmls
= [] converted = [] filepaths = [] printProgressBar(0, df.shape[0], prefix = 'Progress', suffix = 'Completed',length = 50) for idx, row in enumerate(templist): # Determine if it is float('nan') if row == row: html, str_nohtml = matchhtml(row) htmls.append(' '.join(html)) str_nohtml = filtbracket(str_nohtml) filepath, str_nofilepath = matchfile(str_nohtml) filepaths.append(' '.join(filepath)) converted.append(BeautifulSoup(str_nofilepath,'html.parser').get_text()) else: converted.append(row) htmls.append('') filepaths.append('') # converted.append(row) printProgressBar(idx, df.shape[0], prefix = 'Progress', suffix = 'Completed',length = 50) df[new_name] = converted df['html'] = htmls df['filepath'] = filepaths # - text = ''' <pre>/home/scratch.msstegra_t194/USt194_auto_jun_01/hw/nvmobile/bin/mtb_run -P t194 -o testout -rtlarg &#39;+bug_200188886 +mpc_RandSyncLocalDisable +disable_mc_ooo_assert +cif_block_req_limit=80000 +pad_checker +pad_checker_glitch_bus +NV_MCH_COMMON_REQACUM_async_data_fifo_fifo_stall_probability=0 +NV_MCH_COMMON_REQACUM_async_ctrl_fifo_fifo_stall_probability=0 +asserts_are_warnings +skip_agent=csr_mpcorer +skip_agent=csw_mpcorew +use_uvm_bfm=1 +legacy_tests_with_uvm_bfm &#39; -allow_error_string &quot;ace_recm_w_r_hazard&quot; -allow_error_string &quot;ace_recm_w_w_hazard&quot; -allow_error_string &quot;ace_recm_r_w_hazard&quot; -allow_error_string &quot;ace_errs_dvm_tlb_inv&quot; -allow_error_string &quot;ace_errs_dvm_resvd_1&quot; -allow_error_string &quot;ASRT_MPCIF_UNGATE_RD_CLKEN&quot; -rtlarg &quot;+ntb_random_seed=39971 +seed=19686 +seed0=23209 +seed1=27190 +seed2=45848&quot; -memType denali_lpddr4_jedec_4CH_2R_8GB_1866 -rtlarg &quot;+e_br4 +emem_type=denali_lpddr4_jedec_4CH_2R_8GB_1866 +emem_adr_cfg=17826562 +emem_adr_cfg1=1049346 +lpddr4_denali_jedec_4CH_2R +isLpddr4=1 +num_chan=4 +ch_en=0000000000001111 +soma_path=`nvrun nvbuild_call_api get_source_dir ip/mss/mc/3.2/vmodels makePathsAbsolute`/soma/lpddr4/jedec_lpddr4_16gb_3733.spc 
+dram_x64=0 +x16_lp4_mode=1 +disable_checkNoControlAssertX32 +dram_board_cfg=11&quot; -emcClkPeriod 6.211 -rtlarg &quot;+tick_monitor_warn_only +mc2emc_mon_disable +mcmm_dl_disable_monitor=1&quot; -rtlarg &quot;+dyn_self_ref=0 &quot; -rtlarg &quot;+disable_bfm_bd_compare&quot; -allow_error_string &quot;RESET_NOT_LOW_AT_START|Address in Write|Address in Read&quot; -rtlarg &quot; +asserts_are_warnings&quot; -rtlarg &quot; +asserts_are_warnings&quot; -disable_legacy_checks 1 -rtlarg &quot;+ecc_en=1 +dramErrInject=1 +num_loops=8&quot; -rtlarg &quot;+num_xact=500 +test_parallel +csize_min=0 +csize_max=2 +xact_size_min=0 +xact_size_max=64&quot; -allow_error_string &quot;hit assert condition for bug 1681770&quot; -rtlarg &quot;+enable_eco_fix=1&quot; -rtlarg &quot;+reg_calc_str=MC_EMEM_ARB_CFG=0x40040001:MC_EMEM_ADR_CFG_BANK_MASK_0=0x400:MC_EMEM_ADR_CFG_BANK_MASK_1=0x800:MC_EMEM_ADR_CFG_BANK_MASK_2=0x1000:MC_EMEM_ARB_MISC1=0x00400b39:MC_EMEM_ARB_OVERRIDE=0x00100800 +gob_extremes=1&quot; ecc_stress -v mtb</pre> <strong>Triage Link:</strong> ''' print(removequot(text)) overallreplacement(SP.data, 'Description','Cleaned') SP.data df = SP.data print(df[df['BugId'] == 200315195].Description.tolist()[0]) df = SP.data print(df[df['BugId'] == 200315195].Cleaned.tolist()[0]) print(df[df['BugId'] == 200315195].filepath.tolist()) print(df[df['BugId'] == 200315195].html.tolist()) matchfile(mystr)
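# A minimal standalone sketch of the same cleaning steps — URL extraction, tag removal, entity removal — on a toy string. The regexes mirror `matchhtml`, `filtbracket`, and `removequot` in spirit but are simplified; the sample string is invented:

```python
import re

sample = '<pre>See https://example.com/bug?id=1 &quot;log&quot; at C:/temp/run.log</pre>'

# 1. pull URLs out, 2. strip <...> tags, 3. strip HTML entities like &quot;
urls = re.findall(r'http[s]?://\S+', sample)
text = re.sub(r'http[s]?://\S+', ' ', sample)
text = re.sub(r'<.*?>', ' ', text)
text = re.sub(r'&.*?;', ' ', text)
text = ' '.join(text.split())  # collapse runs of whitespace
```

# Note that the file path `C:/temp/run.log` survives here; in the pipeline above it would be pulled into its own `filepath` column by `matchfile` before the text is handed to BeautifulSoup.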
Description Verifier.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# # This notebook contains the creation and visualization of an HDF5 file

# ## Importing the Necessary Libraries

# +
import numpy as np
import h5py
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime, timedelta, date
from numpy import median, mean, var, std, amax, amin
import datetime as dt
import matplotlib.dates as mdates
# %matplotlib inline
# -

# ## Creating the HDF5 Groups and the datasets

# +
# Nigeria Stock Exchange
path_first_st_nigeria = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/stock_ng_first.csv'
read_csv_st_ng_fr = pd.read_csv(path_first_st_nigeria)
first_data_stock_nigeria = np.array(read_csv_st_ng_fr, dtype='a25')

path_second_st_nigeria = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/stock_ng_second.csv'
read_csv_st_ng_sd = pd.read_csv(path_second_st_nigeria)
second_data_stock_nigeria = np.array(read_csv_st_ng_sd, dtype='a25')

path_third_st_nigeria = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/stock_ng_third.csv'
read_csv_st_ng_th = pd.read_csv(path_third_st_nigeria)
third_data_stock_nigeria = np.array(read_csv_st_ng_th, dtype='a25')

path_fourth_st_nigeria = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/stock_ng_fourth.csv'
read_csv_st_ng_ft = pd.read_csv(path_fourth_st_nigeria)
fourth_data_stock_nigeria = np.array(read_csv_st_ng_ft, dtype='a25')

# Central Bank of Nigeria
path_first_cbn_nigeria = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/cbn_rate_first.csv'
read_csv_cbn_ng_fr = pd.read_csv(path_first_cbn_nigeria)
first_data_rate_nigeria = np.array(read_csv_cbn_ng_fr, dtype='a25')

path_second_cbn_nigeria =
'/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/cbn_rate_second.csv' read_csv_cbn_ng_sd = pd.read_csv(path_second_cbn_nigeria) second_data_rate_nigeria = np.array(read_csv_cbn_ng_sd, dtype='a25') path_third_cbn_nigeria = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/cbn_rate_third.csv' read_csv_rate_ng_th = pd.read_csv(path_third_cbn_nigeria) third_data_rate_nigeria = np.array(read_csv_rate_ng_th, dtype='a25') path_fourth_cbn_nigeria = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/cbn_rate_fourth.csv' read_csv_rate_ng_ft = pd.read_csv(path_fourth_cbn_nigeria) fourth_data_rate_nigeria = np.array(read_csv_rate_ng_ft, dtype='a25') #UAE Stock path_first_st_uae = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/stock_uae_first.csv' read_csv_st_uae_fr = pd.read_csv(path_first_st_uae) first_data_stock_uae = np.array(read_csv_st_uae_fr, dtype='a25') path_second_st_uae = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/stock_uae_second.csv' read_csv_st_uae_sd = pd.read_csv(path_second_st_uae) second_data_stock_uae = np.array(read_csv_st_uae_sd, dtype='a25') path_third_st_uae = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/stock_uae_third.csv' read_csv_st_uae_th = pd.read_csv(path_third_st_uae) third_data_stock_uae = np.array(read_csv_st_uae_th, dtype='a25') path_fourth_st_uae = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/stock_uae_fourth.csv' read_csv_st_uae_ft = pd.read_csv(path_fourth_st_uae) fourth_data_stock_uae = np.array(read_csv_st_uae_ft, dtype='a25') #UAE Rate path_first_rate_uae = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/rate_uae_first.csv' read_csv_rate_uae_fr = pd.read_csv(path_first_rate_uae) first_data_rate_uae = np.array(read_csv_rate_uae_fr, dtype='a25') path_second_rate_uae = 
'/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/rate_uae_second.csv' read_csv_rate_uae_sd = pd.read_csv(path_second_rate_uae) second_data_rate_uae = np.array(read_csv_rate_uae_sd, dtype='a25') path_third_rate_uae = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/rate_uae_third.csv' read_csv_rate_uae_th = pd.read_csv(path_third_rate_uae) third_data_rate_uae = np.array(read_csv_rate_uae_th, dtype='a25') path_fourth_rate_uae = '/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/spiders/rate_uae_fourth.csv' read_csv_rate_uae_ft = pd.read_csv(path_fourth_rate_uae) fourth_data_rate_uae = np.array(read_csv_rate_uae_ft, dtype='a25') with h5py.File('/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/scrape_data.h5', 'w') as hdf: UAECENTRALBANK = hdf.create_group('UAECentralBank') UAECENTRALBANK.create_dataset('ft_data_rate_uae', data=first_data_rate_uae) UAECENTRALBANK.create_dataset('sd_data_rate_uae', data=second_data_rate_uae) UAECENTRALBANK.create_dataset('td_data_rate_uae', data=third_data_rate_uae) UAECENTRALBANK.create_dataset('fth_data_rate_uae', data=fourth_data_rate_uae) UAECENTRALBANK.attrs['title'] = "The dataset in this group contain the exchange rate of UAE Central Bank" # UAE Stock Exchange UAESTOCK = hdf.create_group('UAEStockExchange') UAESTOCK.create_dataset('ft_data_stock_uae', data=first_data_stock_uae) UAESTOCK.create_dataset('sd_data_stock_uae', data=second_data_stock_uae) UAESTOCK.create_dataset('td_data_stock_uae', data=third_data_stock_uae) UAESTOCK.create_dataset('fth_data_stock_uae', data=fourth_data_stock_uae) UAESTOCK.attrs['title'] = "The dataset in this group contain the stock exchange of UAE" # Nigeria Exchange Rate CBNRATES = hdf.create_group('NigeriaExchangeRate') CBNRATES.create_dataset('ft_data_rate_cbn', data=first_data_rate_nigeria) CBNRATES.create_dataset('sd_data_rate_cbn', data=second_data_rate_nigeria) CBNRATES.create_dataset('td_data_rate_cbn', 
data=third_data_rate_nigeria)
        CBNRATES.create_dataset('fth_data_rate_cbn', data=fourth_data_rate_nigeria)
        CBNRATES.attrs['title'] = "The dataset in this group contain Nigerian Exchange rate"

        # Nigeria Stock Exchange
        STOCKNIGERIA = hdf.create_group('NigeriaStockExchange')
        STOCKNIGERIA.create_dataset('ft_stock_data_ng', data=first_data_stock_nigeria)
        STOCKNIGERIA.create_dataset('sd_stock_data_ng', data=second_data_stock_nigeria)
        STOCKNIGERIA.create_dataset('td_stock_data_ng', data=third_data_stock_nigeria)
        STOCKNIGERIA.create_dataset('fth_stock_data_ng', data=fourth_data_stock_nigeria)
        STOCKNIGERIA.attrs['title'] = "The dataset in this group contain Nigerian stock Exchange data"
# -

# ## Reading the HDF5 file

# +
with h5py.File('/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/scrape_data.h5', 'r') as hdf:
    ls = list(hdf.keys())
    print('List of Groups in this file:', ls)
    data = hdf.get('NigeriaExchangeRate')
    data_items = list(data.items())
    print('Items in the Group:', data_items)
    get_data = hdf.get('NigeriaExchangeRate/fth_data_rate_cbn')
    fourth_rate = np.array(get_data)
    print(fourth_rate)
# -

# ## Changing the data type from bytes to string

formated_data_cbn = fourth_rate.astype('U13')
formated_data_cbn

# ## Numpy arrays to Pandas DataFrame

cbn_rate = pd.DataFrame(formated_data_cbn, columns=['date', 'currency', 'buying', 'centralrate', 'sellingrate'])
cbn_rate.head(10)

# ## Dollar

# ## Slicing the dataset to get only US Dollar data

clean_rate_dollar = cbn_rate.loc[cbn_rate['currency'].isin(['US DOLLAR'])]
clean_rate_new_dollar = clean_rate_dollar[['date', 'currency', 'sellingrate']]
dollar_rate = clean_rate_new_dollar.astype({"date": np.datetime64, "currency": str, "sellingrate": float})
dollar_cbn_clean = dollar_rate[:35]
dollar_cbn_clean

# ## Visualization

# +
fig_dims = (16, 8)
fig, ax = plt.subplots(figsize=fig_dims)
dollar_rate = sns.lineplot(data=dollar_cbn_clean, x="date", y="sellingrate", hue="currency", ax=ax)
sns.despine()
ax.annotate('Naira appreciate with 2 Kobo', xy=(dollar_cbn_clean.date[355], dollar_cbn_clean.sellingrate[355]),
            xycoords='data', xytext=(0, 100), textcoords='offset points',
            arrowprops=dict(arrowstyle="->"), ha='center')
ax.annotate('4.42 Naira lost in Value', xy=(dollar_cbn_clean.date[0], dollar_cbn_clean.sellingrate[0]),
            xycoords='data', xytext=(30, -50), textcoords='offset points',
            arrowprops=dict(arrowstyle="->"), ha='center')
ax.annotate('Depreciate with almost 1.50 Naira', xy=(dollar_cbn_clean.date[213], dollar_cbn_clean.sellingrate[213]),
            xycoords='data', xytext=(0, 100), textcoords='offset points',
            arrowprops=dict(arrowstyle="->"), ha='center')
ax.annotate('Naira Depreciate', xy=(dollar_cbn_clean.date[237], dollar_cbn_clean.sellingrate[237]),
            xycoords='data', xytext=(-50, 100), textcoords='offset points',
            arrowprops=dict(arrowstyle="->"), ha='center')
plt.title("Movement of Dollar against Naira")
plt.xlabel("Dates", size=16)
plt.ylabel("Rates", size=16)
# -

# ## UAE Dirham

# ## Reading the HDF5 file

# +
with h5py.File('/Users/buhariabubakar/PycharmProjects/boka/web_mining/web_mining/scrape_data.h5', 'r') as hdf:
    ls = list(hdf.keys())
    print('List of Groups in this file:', ls)
    data = hdf.get('UAECentralBank')
    data_items = list(data.items())
    print('Items in the Group:', data_items)
    get_data_first = hdf.get('UAECentralBank/ft_data_rate_uae')
    uae_rate_first = np.array(get_data_first)
    get_data_second = hdf.get('UAECentralBank/sd_data_rate_uae')
    uae_rate_second = np.array(get_data_second)
    get_data_third = hdf.get('UAECentralBank/td_data_rate_uae')
    uae_rate_third = np.array(get_data_third)
    get_data_fourth = hdf.get('UAECentralBank/fth_data_rate_uae')
    uae_rate_fourth = np.array(get_data_fourth)
    print(uae_rate_first)
# -

# ## Slicing the dataset to get only US Dollar and Nigerian Naira against Dirham

# ## Changing the data type from bytes to string

# +
uae_1_data_cbn = uae_rate_first.astype('U13')
uae_2_data_cbn = uae_rate_second.astype('U13')
uae_3_data_cbn = uae_rate_third.astype('U13')
uae_4_data_cbn = uae_rate_fourth.astype('U13')
# -

# ## Numpy arrays to Pandas DataFrame

# +
uae_data_f = pd.DataFrame(uae_1_data_cbn, columns=['currency', 'rate'])
uae_data_s = pd.DataFrame(uae_2_data_cbn, columns=['currency', 'rate'])
uae_data_t = pd.DataFrame(uae_3_data_cbn, columns=['currency', 'rate'])
uae_data_ft = pd.DataFrame(uae_4_data_cbn, columns=['currency', 'rate'])
# -

# ## The date column is not provided in the datasets, so it needs to be added for analysis

# +
uae_data_f['date'] = pd.date_range(start='1/25/2022', end='1/25/2022', periods=len(uae_data_f))
uae_data_s['date'] = pd.date_range(start='1/26/2022', end='1/26/2022', periods=len(uae_data_s))
uae_data_t['date'] = pd.date_range(start='1/27/2022', end='1/27/2022', periods=len(uae_data_t))
uae_data_ft['date'] = pd.date_range(start='1/28/2022', end='1/28/2022', periods=len(uae_data_ft))
# -

uae_data_ft.head(5)

# ## Concatenate all the datasets into one merged dataset

uae_cbn_clean = pd.concat([uae_data_f, uae_data_s, uae_data_t, uae_data_ft])
result_dtypes = uae_cbn_clean.astype({"date": np.datetime64, "currency": str, "rate": float})
result_naira_new = result_dtypes.loc[result_dtypes['currency'].isin(['Nigerian Nair', 'US Dollar'])]
dirham_naira_rate = pd.DataFrame(result_naira_new, columns=['date', 'currency', 'rate'])
dirham_naira_rate

dirham_naira_rate.to_csv('/Users/buhariabubakar/Documents/IU/Semester 4/uae_rate.csv')
read_dirham = pd.read_csv('/Users/buhariabubakar/Documents/IU/Semester 4/uae_rate.csv')
cleaned_dubai_rate = read_dirham[['date', 'currency', 'rate']]
cleaned_dubai_rate

# ## Visualization

fig_dims = (15, 6)
fig, ax = plt.subplots(figsize=fig_dims)
euro_rate = sns.lineplot(data=cleaned_dubai_rate, x="date", y="rate", hue="currency", ax=ax)
sns.despine()
plt.title("Movement of Dirham against Naira and USD")
plt.xlabel("Date", size=18)
plt.ylabel("Rates", size=18)
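# A self-contained round trip of the pattern used above — writing byte-string rows into an HDF5 group with a `title` attribute and reading them back. The file path, group name, and rows here are toy values in a temporary directory, not the scraped CSVs:

```python
import os
import tempfile
import h5py
import numpy as np

# Toy byte-string rows, analogous to np.array(read_csv..., dtype='a25')
rows = np.array([["2022-01-25", "US DOLLAR", "415.8"],
                 ["2022-01-26", "US DOLLAR", "416.0"]], dtype="S25")

path = os.path.join(tempfile.mkdtemp(), "demo.h5")
with h5py.File(path, "w") as hdf:
    grp = hdf.create_group("NigeriaExchangeRate")
    grp.create_dataset("demo_rate", data=rows)
    grp.attrs["title"] = "toy exchange-rate rows"

with h5py.File(path, "r") as hdf:
    back = np.array(hdf["NigeriaExchangeRate/demo_rate"])
    title = hdf["NigeriaExchangeRate"].attrs["title"]

decoded = back.astype("U25")  # bytes -> str, as done above with astype('U13')
```

# Keeping the string width of the read-back `astype` at least as wide as the stored `S` dtype avoids the silent truncation that turns "Nigerian Naira" into "Nigerian Nair" under `U13`.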
HDF.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sys import numpy as np sys.path.insert(0, '/home/david/Institut/rydanalysis/') from rydanalysis import * import matplotlib.pyplot as plt import seaborn as sns import matplotlib as mpl from scipy import interpolate mpl.rc('image', cmap='afmhot') sns.set_style("whitegrid") from lmfit import Model,Parameters from scipy import ndimage seq = ExpSequence('/home/qd/Schreibtisch/Data/2019_IEI_new/July/20/DENSITYSCAN') #shot = SingleShot("/home/david/Institut/2019_IEI/July/02/scanblueresonance_FINAL_averaged_images/2019-06-05_00.fts") #seq.parameters['p' in seq.parameters.index] # + variables= seq.variables.copy() variables.insert(0,'fitted_amp',0) variables.insert(0,'pixel_sum',0) for i,shot in enumerate(seq.iter_single_shots()): image=shot.image absorb = calc_absorbtion_image(image) absorb = crop_image(absorb,xslice=slice(10,90),yslice=slice(10,380)) od = absorbtion_to_OD(absorb) shot.optical_density = od # + #od = ndimage.gaussian_filter(od, 4, order=0, output=None, mode='constant', cval=0.0, truncate=4.0) def fit2dGaussian(image): image = ndimage.gaussian_filter(image, 8, order=0, output=None, mode='constant', cval=0.0, truncate=4.0) fit = Fit2dGaussian(image) fit.params = fit.guess(image) #fit.params = restrict_to_init(fit.params,dev=0.2) #fit.params['amp'].max = fit.params['amp'].value*(1+0.5) #fit.params['amp'].min = fit.params['amp'].value*(1-0.5) #fit.params['cen_x'].max = fit.params['amp'].value*(1+0.1) #fit.params['cen_x'].min = fit.params['amp'].value*(1-0.1) #fit.params['cen_y'].max = fit.params['amp'].value*(1+0.1) #fit.params['cen_y'].min = fit.params['amp'].value*(1-0.1) #fit.params['offset'].max = fit.params['amp'].value*(1+2) fit.params['offset'].vary = False fit.params['offset'].value = 0 fit.fit_data() return fittoSeries(fit) 
def fit22dGaussian(image): #image = ndimage.gaussian_filter(image, 8, order=0, output=None, mode='constant', cval=0.0, truncate=4.0) fit = Fit2d2Gaussian(image) #fit.params = fit.guess(image) # cloud distribution params = fit.params params.add('amp1',value=0.0) params.add('cen_y1',value=160,min=140,max=180) params.add('cen_x1',value=45,min=30,max=60) params.add('sig_x1',value=5,min=30,max=200) params.add('sig_y1',value=5,min=30,max=200) params.add('theta1',value=0,min=0,max=np.pi) # EIT/Autler-Townes-dip params.add('amp2',value=0.00) params.add('cen_y2',value=171.2,min=165,max=175) params.add('cen_x2',value=48.1,min=45,max=55) params.add('sig_x2',value=4.5,min=3,max=15) params.add('sig_y2',expr='sig_x2') params.add('theta2',value=0,min=0,max=np.pi) # offset params.add('offset',value=0,vary=False) fit.params = params fit.fit_data() return fit def Series2Parameter(s): p = Parameters() keys = ['value','min','max','vary'] for l in s.groupby(level=0): print(l[1]) i = l[1:] name = i[0] kwargs = {k: i[1][name][k] for k in keys} p.add(name, **kwargs) return p def stderr_weighted_average(g): rel_err = g.amp.stderr/g.amp.value weights = 1/rel_err return (g.image_od * weights).sum()/weights.sum() data = seq.variables.copy() data['image_od'] = [shot.optical_density[0] for shot in seq.iter_single_shots()] data['image_light'] = [shot.image[3] for shot in seq.iter_single_shots()] data['image_atoms'] = [shot.image[1] for shot in seq.iter_single_shots()] data['image_bg'] = [shot.image[5] for shot in seq.iter_single_shots()] data['light'] = data.image_light-data.image_bg data['atoms'] = data.image_atoms-data.image_bg data['diff'] = data.light- data.atoms # + fit_res = data['image_od'].apply(fit2dGaussian) fit_res.to_csv('fit_res.csv') fit_res[data.columns] = data data = fit_res data['counts'] = data['diff'].apply(lambda x: x[20:30,120:130].mean()) sums = data.image_od.apply(np.mean) plt.figure(figsize=(8,5)) 
sns.lineplot(x=data.MWduration,y=data.counts,markers='O',hue_norm=(0,0.0015)) sns.scatterplot(x=data.MWduration,y=data.counts,markers='O',hue_norm=(0,0.0015)) # - data.groupby('MWduration')['counts'].std().plot() fig,ax=plt.subplots(nrows=4,ncols=4,figsize=(20,20)) for i,group in enumerate(data.groupby('MWduration')): sns.distplot(group[1].counts,bins=8,ax=ax.flatten()[i]) plt.figure(figsize=(8,5)) sns.scatterplot(x=data.MWduration,y=data.amp.value,hue = data.amp.stderr,markers='O',hue_norm=(0,0.0010)) sns.lineplot(x=data.MWduration,y=data.amp.value) fig,ax=plt.subplots(nrows=4,ncols=4,figsize=(20,20)) for i,group in enumerate(data.groupby('MWduration')): sns.distplot(group[1].amp.value,bins=7,ax=ax.flatten()[i]) fig,ax=plt.subplots(nrows=4,ncols=4,figsize=(20,20)) for i,group in enumerate(data.groupby('MWduration')): sns.distplot(group[1].cen_y.value,bins=7,ax=ax.flatten()[i]) # data['atoms'].apply(lambda x: x[] # Apply conditional filters to data set plt.errorbar(x=data.groupby('MWduration').mean().index, y=data.groupby('MWduration').mean().cen_y.value.values,yerr=data.groupby('MWduration').apply(np.std).cen_y.value.values) data.groupby('MWduration').apply(np.std).cen_y.value.values # ### Group by *MWduration* plt.figure(figsize=(10,10)) for i,group in enumerate(data.groupby('MWduration')): if i ==4: group[1].amp.value.plot(style='.-') # + mw_av = data.groupby('MWduration')['image_od'].apply(np.mean) fit = mw_av.apply(fit22dGaussian) fit_res = fit.apply(fittoSeries) fit_res['3lvl_center_od'] = fit.apply(lambda x: x.eval((x.params['cen_x2'].value,x.params['cen_y2'].value))) fit_res['2lvl_center_od'] = fit_res['3lvl_center_od']-fit_res.amp2.value fit_res['3vs2lvl_od'] = fit_res['3lvl_center_od']/fit_res['2lvl_center_od'] fit_res['2lvl_center_od'].plot(style='o') fit_res.amp1.value.plot(style='o') # - fit_res.to_hdf('fit_res.h5', key='df') test = pd.read_hdf('fit_res.h5') test fit_res.plot(y="3vs2lvl_od",x="2lvl_center_od",style='o-') # ### Group density bins # + 
results = pd.DataFrame() data['fitted_amp_binning'] = pd.cut(data.amp.value,bins = np.linspace(0.2,0.7,15)) grouping = data.groupby('fitted_amp_binning') results['averaged_od'] = grouping.apply(stderr_weighted_average) fit = results['averaged_od'].apply(fit22dGaussian) fit_res = fit.apply(fittoSeries) fit_res['3lvl_center_od'] = fit.apply(lambda x: x.eval((x.params['cen_x2'].value,x.params['cen_y2'].value))) #fit_res['3lvl_center_od_std'] = np.sqrt(np.square(fit_res.amp2.stderr) +np.square(fit_res.amp1.stderr**2)) fit_res['2lvl_center_od'] = fit_res['3lvl_center_od']-fit_res.amp2.value #fit_res['2lvl_center_od_std'] = fit_res.amp1.stderr fit_res['3vs2lvl_od'] = fit_res['3lvl_center_od']/fit_res['2lvl_center_od'] # - def three_vs_two_lvl(n,r0,n0): fbl = n0*n**(4./5) return (r0 + fbl)/(1+fbl) model = Model(three_vs_two_lvl) params = model.make_params() params['r0'].value = 0.66 params['n0'].value = 1 out = model.fit(fit_res['3vs2lvl_od'].values,n=fit_res['2lvl_center_od'],params=params,nan_policy='omit') fit_res.plot(y="3vs2lvl_od",x="2lvl_center_od",style='o-') x=np.arange(0.2,0.8,0.01) for f in fit: fig,ax = plt.subplots(figsize=(20,10)) f.plot(ax=ax,image_kwargs=dict(vmin=0,vmax=1.5)) for fit in results.fit: fit.fit_object fig,ax = plt.subplots(figsize=(20,10)) fit.plot(ax=ax,image_kwargs=dict(vmin=0.,vmax=0.8)) results=pd.DataFrame() results['averaged_od'] = data.groupby('MWduration')['image_od'].apply(np.mean) results['fit']=results['averaged_od'].apply(fit22dGaussian) #results['par'] = results['fit'].apply(lambda x: x.params) results['center_od_ratio'] = results['fit'].apply(center_od_ratio) results['fitted_amp1'] = results['fit'].apply(lambda x: x.params['amp1'].value) results['fitted_amp2'] = results['fit'].apply(lambda x: x.params['amp2'].value) results.center_od_ratio.plot(style='o') #results.fitted_amp2.plot(style = 'o') results.plot(y='center_od_ratio',x='fitted_amp1',style='o') for fit in results.fit: fit.fit_object fig,ax = plt.subplots(figsize=(20,10)) 
fit.plot(ax=ax,image_kwargs=dict(vmin=0.,vmax=0.8))
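The density-scaling model fitted above can be sanity-checked in isolation. A minimal sketch of `three_vs_two_lvl` with illustrative (not fitted) parameters: the OD ratio approaches `r0` at low density and saturates toward 1 as the blockaded fraction `n0*n**(4/5)` grows.

```python
def three_vs_two_lvl(n, r0, n0):
    # blockaded fraction grows as n^(4/5); the ratio interpolates r0 -> 1
    fbl = n0 * n ** (4.0 / 5.0)
    return (r0 + fbl) / (1 + fbl)

# limits with illustrative parameters (r0=0.66 matches the fit's initial guess)
low = three_vs_two_lvl(1e-6, r0=0.66, n0=1.0)   # -> ~0.66
high = three_vs_two_lvl(1e6, r0=0.66, n0=1.0)   # -> ~1.0
```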
EIT-vs-density-plus-statistics.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="Tu_O6JzzaDu5" colab_type="text" # This is where I got the dataset: https://ocw.mit.edu/courses/sloan-school-of-management/15-097-prediction-machine-learning-and-statistics-spring-2012/datasets/ # + id="zhz1z-gFbYfE" colab_type="code" colab={} import numpy as np import pandas as pd # + [markdown] id="Q76_8vbyaNR9" colab_type="text" # This is how we will organize the input data # + id="cIXToXqCZkR_" colab_type="code" outputId="75caa633-7467-4481-f125-2dc01d031753" colab={"base_uri": "https://localhost:8080/", "height": 51} # Changing center of randomness + learning rate improves network dataset = pd.read_csv("https://ocw.mit.edu/courses/sloan-school-of-management/15-097-prediction-machine-learning-and-statistics-spring-2012/datasets/digits.csv") datasetArray = np.array(dataset) xfData = datasetArray.T[0:-1].T amountOfData = 100 xData = xfData[0:amountOfData] # amount of data x 16 print(xData.shape) yfData = datasetArray.T[-1] yfData = yfData.T[0:xData.shape[0]] yData = np.zeros((yfData.shape[0], 10)) print(yData.shape) #amount of data x 10 for i in range(yfData.shape[0]): yData[i][yfData[i]] = 1 # + id="kJDWalNeaUcu" colab_type="code" colab={} def activate(x,deriv=False): if(deriv==True): return x*(1-x) return 1/(1+np.exp(-x)) # + id="eKiOEXfDa8UK" colab_type="code" colab={} np.random.seed(2) # + id="V5hvq7e7jaaM" colab_type="code" colab={} learning_rate = .01 hiddenNeurons = 20 input_hidden = 2 * np.random.rand(16, hiddenNeurons) - 1 # 16 x hiddenNeurons hidden_output = 2 * np.random.rand(hiddenNeurons, 10) - 1# hiddenNeurons x 10 # + id="pGkOfOGrm62w" colab_type="code" outputId="363b3515-09ad-4882-956d-c3c8bdd8f0ae" colab={"base_uri": "https://localhost:8080/", "height": 119} for i in range(50001): # Feedforward Pass hiddenLayer = np.dot(xData, 
input_hidden) # amount of data x hiddenNeurons activated_hiddenLayer = activate(hiddenLayer) # ^ z = np.dot(activated_hiddenLayer, hidden_output) # amount of data x 10 a = activate(z) # ^ # Error error = yData - a # ^ c = .5 *(error) ** 2 # squared error (summed below) ^ if(i % 10000 == 0): print("Error:", np.sum(c)) # Backward Pass dCdz = error * activate(a, True) # ^ dCdw = np.dot(activated_hiddenLayer.T, dCdz) # hiddenNeurons x 10 dCdz2 = np.dot(dCdz, hidden_output.T) * activate(activated_hiddenLayer, True) # amount of data x hiddenNeurons dCdw2 = np.dot(xData.T, dCdz2) # 16 x hiddenNeurons input_hidden += dCdw2 * learning_rate hidden_output += dCdw * learning_rate
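The training loop above can be exercised end-to-end on a tiny toy problem. A self-contained sketch using the same `activate` convention (the `+=` weight updates are correct because the error is defined as `target - output`, so the deltas already carry the descent sign); the OR-gate data and layer sizes here are illustrative stand-ins, not the notebook's digits data.

```python
import numpy as np

def activate(x, deriv=False):
    # logistic sigmoid; with deriv=True, x is assumed to already be sigmoid output
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

np.random.seed(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [1.]])      # OR gate, a toy stand-in
W1 = 2 * np.random.rand(2, 8) - 1           # input -> hidden
W2 = 2 * np.random.rand(8, 1) - 1           # hidden -> output
lr = 0.5

losses = []
for _ in range(5000):
    h = activate(X @ W1)                    # forward pass
    a = activate(h @ W2)
    err = Y - a
    losses.append(float(np.sum(0.5 * err ** 2)))
    dz2 = err * activate(a, True)           # backward pass
    dz1 = (dz2 @ W2.T) * activate(h, True)
    W2 += lr * (h.T @ dz2)
    W1 += lr * (X.T @ dz1)
```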
Neural_Nets_Application_Pen_Recognition_of_Handwritten_Digits.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import string import pandas as pd # + # NLP library imports import nltk from nltk.corpus import stopwords from nltk import word_tokenize from nltk.tokenize import sent_tokenize from nltk.tokenize import word_tokenize from nltk.stem import PorterStemmer from nltk.stem import LancasterStemmer from nltk.stem.snowball import SnowballStemmer from nltk.stem.wordnet import WordNetLemmatizer nltk.download('punkt') nltk.download('stopwords') nltk.download('wordnet') # - df = pd.read_json('don4_us.json') df.head() # + df1 = df.copy() df1 = df1.drop(["hotel_adress","hotel_type","image_url","locid","pid","price_range","review_id","reviewer_id","trip_type"], axis = 1) df1.head() # + df1.review = df1.review.apply(lambda x:x.replace(r"\u00e8","è")) df1.review = df1.review.apply(lambda x:x.replace(r'\u00e9', 'é')) df1.review = df1.review.apply(lambda x:x.replace(r"\u00ea","ê")) df1.review = df1.review.apply(lambda x:x.replace(r"\u00eb","ë")) df1.review = df1.review.apply(lambda x:x.replace(r"\u00fb","û")) df1.review = df1.review.apply(lambda x:x.replace(r"\u00f9","ù")) df1.review = df1.review.apply(lambda x:x.replace(r'\u00e0', 'à')) df1.review = df1.review.apply(lambda x:x.replace(r'\u00e2', 'â')) df1.review = df1.review.apply(lambda x:x.replace(r'\u00f4', 'ô')) df1.review = df1.review.apply(lambda x:x.replace(r'\u00ee', 'î')) df1.review = df1.review.apply(lambda x:x.replace(r'\u00ef', 'ï')) df1.review = df1.review.apply(lambda x:x.replace(r'\u2019', "'")) df1.review = df1.review.apply(lambda x:x.replace(r'\'', "'")) df1.review # + df1.title = df1.title.apply(lambda x:x.replace(r"\u00e8","è")) df1.title = df1.title.apply(lambda x:x.replace(r'\u00e9', 'é')) df1.title = df1.title.apply(lambda x:x.replace(r"\u00ea","ê")) df1.title = df1.title.apply(lambda 
x:x.replace(r"\u00eb","ë")) df1.title = df1.title.apply(lambda x:x.replace(r"\u00f9","ù")) df1.title = df1.title.apply(lambda x:x.replace(r'\u00ee', 'î')) df1.title = df1.title.apply(lambda x:x.replace(r'\u00ef', 'ï')) df1.title = df1.title.apply(lambda x:x.replace(r"\u00fb","û")) df1.title = df1.title.apply(lambda x:x.replace(r'\u00e0', 'à')) df1.title = df1.title.apply(lambda x:x.replace(r'\u00e2', 'â')) df1.title = df1.title.apply(lambda x:x.replace(r'\u00f4', 'ô')) df1.title = df1.title.apply(lambda x:x.replace(r'\u2019', "'")) df1.title = df1.title.apply(lambda x:x.replace(r'\'', "'")) df1.title # - df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r"\u00e8","è")) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r'\u00e9', 'é')) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r"\u00ea","ê")) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r"\u00eb","ë")) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r"\u00f9","ù")) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r'\u00ee', 'î')) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r'\u00ef', 'ï')) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r"\u00fb","û")) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r'\u00e0', 'à')) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r'\u00e2', 'â')) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r'\u00f4', 'ô')) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r'\u2019', "'")) df1.trip_date = df1.trip_date.apply(lambda x:x.replace(r'\'', "'")) df1 df1.to_csv('clean_data_gr4_en.csv', sep = ";", encoding = "utf-8") essai = pd.read_csv('clean_data_gr4_en.csv', sep = ";", encoding = "utf-8",index_col = None) essai
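The repeated per-escape `replace` chains above can be collapsed into one generic decoder for literal `\uXXXX` sequences; a sketch (the helper name is ours) that could be applied once per column, e.g. `df1.review.apply(decode_u_escapes)`.

```python
import re

def decode_u_escapes(text):
    # turn each literal backslash-uXXXX sequence into the character it encodes
    return re.sub(r'\\u([0-9a-fA-F]{4})',
                  lambda m: chr(int(m.group(1), 16)),
                  text)

cleaned = decode_u_escapes(r"h\u00f4tel tr\u00e8s agr\u00e9able")  # -> "hôtel très agréable"
```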
Day4/.ipynb_checkpoints/encoding_prob-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # DAT210x - Programming with Python for DS # ## Module4- Lab5 # + import pandas as pd from scipy import misc from mpl_toolkits.mplot3d import Axes3D import matplotlib import matplotlib.pyplot as plt # + # Look pretty... # matplotlib.style.use('ggplot') plt.style.use('ggplot') # - # Create a regular Python list (not NDArray) and name it `samples`: # + # .. your code here .. import os path = 'Datasets/ALOI/32/' samples = [f for f in os.listdir(path)] # - # Code up a for-loop that iterates over the images in the `Datasets/ALOI/32/` folder. Look in the folder first, so you know how the files are organized, and what file number they start from and end at. # # Load each `.png` file individually in your for-loop using the instructions provided in the Feature Representation reading. Once loaded, flatten the image into a single-dimensional NDArray and append it to your `samples` list. # # **Optional**: You can resample the image down by a factor of two if you have a slower computer. You can also scale the image from `0-255` to `0.0-1.0` if you'd like--doing so shouldn't have any effect on the algorithm's results. # + # .. your code here .. import numpy as np from scipy import misc dset = [] for f in samples: fp = os.path.join(path,f) img = misc.imread(fp) dset.append( (img[::2, ::2] / 255.0).reshape(-1) ) # - # Convert `samples` to a DataFrame named `df`: # .. your code here .. df = pd.DataFrame(dset) # Import any necessary libraries to perform Isomap here, reduce `df` down to three components and using `K=6` for your neighborhood size: # .. your code here .. import math, random import scipy.io # + # .. your code here .. 
from sklearn import manifold iso = manifold.Isomap(n_neighbors=6, n_components=3) iso.fit(df) manifold = iso.transform(df) # - # Create a 2D Scatter plot to graph your manifold. You can use either `'o'` or `'.'` as your marker. Graph the first two isomap components: # + # .. your code here .. matplotlib.style.use('ggplot') fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(manifold[:,0],manifold[:,1],marker='o',c=colors) plt.show() # - # Chart a 3D Scatter plot to graph your manifold. You can use either `'o'` or `'.'` as your marker: # + # .. your code here .. # %matplotlib notebook from sklearn import manifold from mpl_toolkits.mplot3d import Axes3D iso2 = manifold.Isomap(n_neighbors=6, n_components=3) iso2.fit(df) manifold = iso2.transform(df) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.set_title('3D Projection') ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Z') ax.scatter(manifold[:,0], manifold[:,1], manifold[:,2], c=colors, marker='o', alpha=0.75) # - # Answer the first two lab questions! # Create another for loop. This time it should iterate over all the images in the `Datasets/ALOI/32_i` directory. Just like last time, load up each image, process them the way you did previously, and append them into your existing `samples` list: # + # .. your code here .. path2 = 'Datasets/ALOI/32i/' samples2 = [f for f in os.listdir(path2)] for f in samples2: fp = os.path.join(path2,f) img = misc.imread(fp) dset.append( (img[::2, ::2] / 255.0).reshape(-1) ) # - # Convert `samples` to a DataFrame named `df`: # .. your code here .. df = pd.DataFrame(dset) # Import any necessary libraries to perform Isomap here, reduce `df` down to three components and using `K=6` for your neighborhood size: # + # .. your code here .. colors = [] for s in samples: colors.append('b') for s in samples2: colors.append('r') colors # - # Create a 2D Scatter plot to graph your manifold.
You can use either `'o'` or `'.'` as your marker. Graph the first two isomap components: # + # .. your code here .. # - # Chart a 3D Scatter plot to graph your manifold. You can use either `'o'` or `'.'` as your marker: # + # .. your code here .. # -
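The per-image preprocessing used in both loops above (downsample by two, scale `0-255` to `0.0-1.0`, flatten) can be illustrated on a synthetic array standing in for a loaded `.png`:

```python
import numpy as np

img = np.arange(16 * 16, dtype=np.uint8).reshape(16, 16)  # stand-in for misc.imread(...)
sample = (img[::2, ::2] / 255.0).reshape(-1)              # 16x16 -> 8x8 -> flat 64-vector
n_features = sample.shape[0]
min_val = float(sample.min())
max_val = float(sample.max())
```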
Module4/Module4 - Lab5.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- import pandas as pd import numpy as np from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score mortality=pd.read_table("data/Compressed Mortality, 1999-2016.txt") data=pd.read_csv("data/datafest2018.csv") mortality=mortality.drop("Notes", axis=1) mortality=mortality[0:51] US=data[data["country"]=="US"] US.stateProvince.unique() USHealth=US[US["industry"]=="HEALTH_CARE"] states=USHealth.groupby("stateProvince") jobsByState=pd.DataFrame(states.jobId.nunique()) jobsByState=jobsByState[jobsByState.index!="UNKNOWN"] jobsByState.to_csv("healthJobsByState.csv") pop = pd.read_csv("data/US_Pop.csv", header=1) pop=pop[5:57] pop["Geography"].values abbrev=["AL","AK","AZ","AR","CA","CO","CT","DE","DC","FL","GA", "HI","ID","IL", "IN","IA","KS","KY","LA","ME","MD","MA","MI","MN","MS", "MO", "MT","NE", "NV", "NH", "NJ", "NM", "NY", "NC","ND", "OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA","WA","WV","WI","WY", "PR"] pop=pop.reset_index() pop["abbrev"]=pd.Series(abbrev) jobsByState.index merge=pd.merge(jobsByState, pop, left_index=True, right_on="abbrev") merge["JobsToPop"]=merge["jobId"]/merge['Population Estimate (as of July 1) - 2017'] merge["JobsToPop"]=merge["JobsToPop"]*1000 merge.to_csv("jobsPerPop.csv") merge.head() mortality.head() merge=pd.merge(mortality, merge, left_on="State", right_on="Geography") merge[["Age Adjusted Rate", "JobsToPop"]].corr() merge.to_csv("mortality_healthJobs.csv") insurance=pd.read_csv("data/insurance.csv", header=2) insurance=insurance[1:52] mergeInsurance=pd.merge(merge, insurance, left_on="State", right_on="Location") mergeInsurance[["JobsToPop", "Insured", "Age Adjusted Rate"]].corr() # + lm = LinearRegression() X=mergeInsurance[["JobsToPop", "Insured"]] 
X_train = X[:-10] X_test = X[-10:] # Split the targets into training/testing sets Y_train = mergeInsurance["Age Adjusted Rate"][:-10] Y_test = mergeInsurance["Age Adjusted Rate"][-10:] lm.fit(X_train, Y_train) Y_pred = lm.predict(X_test) print(lm.coef_) r2_score(Y_test, Y_pred) # -
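`r2_score` as used above reduces to 1 − SS_res/SS_tot; a minimal sketch of the metric itself (note that the last-10-rows split above is not randomized, so the reported score depends on row order):

```python
def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

perfect = r2([1, 2, 3, 4], [1, 2, 3, 4])    # 1.0
baseline = r2([1, 2, 3, 4], [2.5] * 4)      # 0.0 (predicting the mean)
```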
Sam Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="_Dfkr6M8PTW3" # # Test bed for EM numerical algorithms # # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1xnftEDtCWRNzep3NTsGI1UcFvopYw75o#scrollTo=IBS1-c8nAbnL) # # TODO: https://en.wikipedia.org/wiki/Finite-difference_time-domain_method # # https://en.wikipedia.org/wiki/Computational_electromagnetics # # # + executionInfo={"elapsed": 397, "status": "ok", "timestamp": 1620586677395, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GicrV4hh4U3yCyLYtg_5UzNyQjvcIHKVqh2TSd9=s64", "userId": "09580664591438119936"}, "user_tz": 240} id="IBS1-c8nAbnL" # libraries import matplotlib.pyplot as plt import numpy as np import sympy as sp from sympy.abc import * # skip declaring symbols, eats up namespace though # #!pip install magpylib import magpylib as em import keras from keras.models import Sequential from keras.layers import Dense, Dropout from keras.wrappers.scikit_learn import KerasClassifier from keras.utils import np_utils # + id="TtWFIJRjVD6x" # data paths, it may not load these train_path = 'swarg_training_data.npz' test_path = 'swarg_eval_data.npz' # + colab={"base_uri": "https://localhost:8080/", "height": 54} executionInfo={"elapsed": 5222, "status": "ok", "timestamp": 1620572374127, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GicrV4hh4U3yCyLYtg_5UzNyQjvcIHKVqh2TSd9=s64", "userId": "09580664591438119936"}, "user_tz": 240} id="Th_NRy2bAw0b" outputId="f350fe3c-b932-429f-9da7-95b46e26cb33" # sympy test x = symbols('x') N = sp.integrate(sp.exp(-x**2),x) N # + colab={"base_uri": "https://localhost:8080/", "height": 282} executionInfo={"elapsed": 5212, "status": "ok", "timestamp": 
1620572374129, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GicrV4hh4U3yCyLYtg_5UzNyQjvcIHKVqh2TSd9=s64", "userId": "09580664591438119936"}, "user_tz": 240} id="-Pb7K4OkA0Av" outputId="bd7a629d-b397-4e43-f0de-47f596e7e3e1" # numpy test x = np.arange(0, np.pi,.01) y1 = np.sin(x) y2 = np.sin(np.pi*x) # matplotlib test plt.plot(x,y2) # + id="__u5t5NLCkvX" # "This debug account can edit" - debug1500 # + [markdown] id="lqJ5o_FZFM3w" # ## mark downn lang represent $\Omega$ # + id="AStg73ZTRWW7" # test neural network stuff def basemodel(): model = Sequential() #Adding 20% dropout model.add(Dropout(0.20)) #Add 1 layer with output 200 and relu function model.add(Dense(200,activation='relu')) #Adding 20% dropout here model.add(Dropout(0.20)) #Add 1 layer with output 1 and sigmoid function model.add(Dense(1,activation='sigmoid')) return model model = basemodel() # + colab={"base_uri": "https://localhost:8080/", "height": 265} executionInfo={"elapsed": 591, "status": "ok", "timestamp": 1617834057397, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GicrV4hh4U3yCyLYtg_5UzNyQjvcIHKVqh2TSd9=s64", "userId": "09580664591438119936"}, "user_tz": 240} id="etKfsFpeRkVI" outputId="697b7169-81c0-49c1-cb6b-0e79efadd47b" import itertools import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation def data_gen(): for cnt in itertools.count(): t = cnt / 10 yield t, np.sin(2*np.pi*t) * np.exp(-t/10.) 
def init(): ax.set_ylim(-1.1, 1.1) ax.set_xlim(0, 10) del xdata[:] del ydata[:] line.set_data(xdata, ydata) return line, fig, ax = plt.subplots() line, = ax.plot([], [], lw=2) ax.grid() xdata, ydata = [], [] def run(data): # update the data t, y = data xdata.append(t) ydata.append(y) xmin, xmax = ax.get_xlim() if t >= xmax: ax.set_xlim(xmin, 2*xmax) ax.figure.canvas.draw() line.set_data(xdata, ydata) return line ani = animation.FuncAnimation(fig, run, data_gen, interval=10, init_func=init) plt.show() # + id="WLvTFCaYal66"
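Toward the FDTD TODO at the top of this notebook, a minimal 1D Yee-style leapfrog update in normalized units (Courant number 0.5, crude reflective grid edges; source position, width, and step count are all arbitrary illustrative choices, not taken from the training data):

```python
import numpy as np

nx, nt = 200, 180
ez = np.zeros(nx)          # electric field
hy = np.zeros(nx)          # magnetic field
for t in range(nt):
    # leapfrog: update E from the spatial difference of H, then H from E
    ez[1:] += 0.5 * (hy[:-1] - hy[1:])
    ez[nx // 4] += np.exp(-0.5 * ((t - 30) / 8.0) ** 2)  # soft Gaussian source
    hy[:-1] += 0.5 * (ez[:-1] - ez[1:])

stable = bool(np.isfinite(ez).all())
peak = float(np.abs(ez).max())
```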
EM-sim-goes-brrrrrrrrrrrrrrrrrrrrrrrr.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Solution-1 # This tutorial shows how to find proteins for a specific organism, how to calculate protein-protein interactions, and visualize the results. from pyspark.sql import SparkSession from pyspark.sql.functions import substring_index from mmtfPyspark.datasets import pdbjMineDataset from mmtfPyspark.webfilters import PdbjMineSearch from mmtfPyspark.interactions import InteractionFilter, InteractionFingerprinter from mmtfPyspark.io import mmtfReader from ipywidgets import interact, IntSlider import py3Dmol # #### Configure Spark spark = SparkSession.builder.appName("Solution-1").getOrCreate() # ## Find protein structures for mouse # For our first task, we need to run a taxonomy query using SIFTS data. [See examples](https://github.com/sbl-sdsc/mmtf-pyspark/blob/master/demos/datasets/PDBMetaDataDemo.ipynb) and [SIFTS demo](https://github.com/sbl-sdsc/mmtf-pyspark/blob/master/demos/datasets/SiftsDataDemo.ipynb) # # To figure out how to query for taxonomy, the command below lists the first 10 entries for the SIFTS taxonomy table. As you can see, we can use the science_name field to query for a specific organism. 
taxonomy_query = "SELECT * FROM sifts.pdb_chain_taxonomy LIMIT 10" taxonomy = pdbjMineDataset.get_dataset(taxonomy_query) taxonomy.show() # ### TODO-1: specify a taxonomy query where the scientific name is 'Mus musculus' taxonomy_query = "SELECT * FROM sifts.pdb_chain_taxonomy WHERE scientific_name = 'Mus musculus'" taxonomy = pdbjMineDataset.get_dataset(taxonomy_query) taxonomy.show(10) # + path = "../resources/mmtf_full_sample/" pdb = mmtfReader.read_sequence_file(path, fraction=0.1) # - # ### TODO-2: Take the taxonomy from above and use it to filter the pdb structures pdb = pdb.filter(PdbjMineSearch(taxonomy_query)).cache() # ## Calculate polymer-polymer interactions for this subset of structures # Find protein-protein interactions with a 6 A distance cutoff # + distance_cutoff = 6.0 interactionFilter = InteractionFilter(distance_cutoff, minInteractions=10) interactions = InteractionFingerprinter.get_polymer_interactions(pdb, interactionFilter).cache() # - interactions = interactions.withColumn("structureId", substring_index(interactions.structureChainId, '.', 1)).cache() interactions.toPandas().head(10) # ## Visualize the protein-protein interactions # #### Extract id columns as lists (required for visualization) structure_ids = interactions.select("structureId").rdd.flatMap(lambda x: x).collect() query_chain_ids = interactions.select("queryChainID").rdd.flatMap(lambda x: x).collect() target_chain_ids = interactions.select("targetChainID").rdd.flatMap(lambda x: x).collect() target_groups = interactions.select("groupNumbers").rdd.flatMap(lambda x: x).collect() # Disable scrollbar for the visualization below # + # #%%javascript #IPython.OutputArea.prototype._should_scroll = function(lines) {return false;} # - # #### Show protein-protein interactions within cutoff distance (query = orange, target = blue) def view_protein_protein_interactions(structure_ids, query_chain_ids, target_chain_ids, target_groups, distance=4.5): def view3d(i=0): print(f"PDB: 
{structure_ids[i]}, query: {query_chain_ids[i]}, target: {target_chain_ids[i]}") target = {'chain': target_chain_ids[i], 'resi': target_groups[i]} viewer = py3Dmol.view(query='pdb:' + structure_ids[i], width=600, height=600) viewer.setStyle({}) viewer.setStyle({'chain': query_chain_ids[i]}, {'line': {'colorscheme': 'orangeCarbon'}}) viewer.setStyle({'chain' : query_chain_ids[i], 'within':{'distance' : distance, 'sel':{'chain': target_chain_ids[i]}}}, {'sphere': {'colorscheme': 'orangeCarbon'}}); viewer.setStyle({'chain': target_chain_ids[i]}, {'line': {'colorscheme': 'lightblueCarbon'}}) viewer.setStyle(target, {'stick': {'colorscheme': 'lightblueCarbon'}}) viewer.zoomTo(target) return viewer.show() s_widget = IntSlider(min=0, max=len(structure_ids)-1, description='Structure', continuous_update=False) return interact(view3d, i=s_widget) view_protein_protein_interactions(structure_ids, query_chain_ids, target_chain_ids, \ target_groups, distance=distance_cutoff); spark.stop()
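The `substring_index(interactions.structureChainId, '.', 1)` call above strips the chain suffix from ids of the form `<pdbId>.<chainId>`; the same parse in plain Python (the helper name is ours):

```python
def split_structure_chain_id(structure_chain_id):
    # "1OHR.A" -> ("1OHR", "A"): everything before the first dot is the PDB id
    structure_id, _, chain_id = structure_chain_id.partition('.')
    return structure_id, chain_id

sid, cid = split_structure_chain_id("1OHR.A")
```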
4-mmtf-pyspark-advanced/Solution-1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import ipywidgets as widgets from sidepanel import SidePanel import regulus from ipyregulus import BaseTreeView, TreeView # - gauss = regulus.load('gauss4') b = BaseTreeView(gauss) b # ### Define a constanct attribute generator and add it to the tree # + def fixed(t, n): return 0.8 gauss.add_attr(fixed) # - # #### Now select the new attribute b.attr = 'fixed' # ### Define attribute: relative node size # * attribute is a function or a lambda expression # * A name must be givenn if a lambda expression is provided # # FUNCTION: def relative_size(context, node): return node.data.size()/context['data_size'] gauss.tree.add_attr(relative_size) # LAMBDA gauss.tree.add_attr( lambda tree,node: node.data.size()/tree['data_size'], name='rel') b.attr = 'rel' b.attr = 'span' # ### Filters v = TreeView(gauss) v # ### When auto==True the first filter will refers to the current selected attribute v.auto = False # #### Change a filter's function f = v.add_filter('span') f.func = lambda node_value,threshold: node_value <= threshold # reverse the function f.func = lambda node_value,threshold: node_value >= threshold
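The filter mechanics demonstrated above (a threshold plus a swappable comparison function) can be sketched library-free; `ThresholdFilter` is our illustrative stand-in, not part of ipyregulus:

```python
class ThresholdFilter:
    # holds a threshold and a swappable predicate over a node's attribute value
    def __init__(self, threshold, func=None):
        self.threshold = threshold
        self.func = func or (lambda value, threshold: value <= threshold)

    def __call__(self, value):
        return self.func(value, self.threshold)

f = ThresholdFilter(0.5)
kept_low = [v for v in (0.1, 0.4, 0.9) if f(v)]        # default: keep values <= threshold
f.func = lambda value, threshold: value >= threshold   # reverse, as in the cell above
kept_high = [v for v in (0.1, 0.4, 0.9) if f(v)]
```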
examples/1.5-attrs and filters.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # run `pip install cvbase` first import cvbase as cvb import matplotlib.pyplot as plt # %matplotlib inline import numpy as np # to visualize a flow file import cv2 flow = cvb.read_flow('out.flo') flow2 = cvb.read_flow('out2.flo') im = cv2.imread('images/first.png') im2 = cv2.imread('../../../data/test_video/session_hands_1/0000000001.jpg') # to visualize a loaded flow map fn = np.random.rand(100, 100, 2).astype(np.float32) print(fn.shape) flow.shape # + def draw_flow(img, flow, step=16): h, w = img.shape[:2] y, x = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1).astype(int) fx, fy = flow[y,x].T lines = np.vstack([x, y, x+fx, y+fy]).T.reshape(-1, 2, 2) lines = np.int32(lines + 0.5) # vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) cv2.polylines(img, lines, 0, (0, 255, 0)) for (x1, y1), (_x2, _y2) in lines: cv2.circle(img, (x1, y1), 1, (0, 255, 0), -1) return img def draw_hsv(flow): h, w = flow.shape[:2] fx, fy = flow[:,:,0], flow[:,:,1] ang = np.arctan2(fy, fx) + np.pi v = np.sqrt(fx*fx+fy*fy) hsv = np.zeros((h, w, 3), np.uint8) hsv[...,0] = ang*(180/np.pi/2) hsv[...,1] = 255 hsv[...,2] = np.minimum(v*4, 255) bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR) return bgr # return hsv plt.imshow(draw_hsv(flow)) # - plt.imshow(draw_flow(im, flow)) plt.imshow(draw_hsv(flow2)) plt.imshow(draw_flow(im2, flow2)) # #### main.py # + import getopt import math import numpy as np import os import PIL import PIL.Image import sys import torch import torch.utils.serialization from network import Network, estimate try: from correlation import correlation # the custom cost volume layer except: sys.path.insert(0, './correlation'); import correlation # you should consider upgrading python assert(int(torch.__version__.replace('.', '')) >= 40) # requires at least pytorch 
version 0.4.0 torch.set_grad_enabled(False) # make sure to not compute gradients for computational performance torch.cuda.device(1) # change this if you have a multiple graphics cards and you want to utilize them torch.backends.cudnn.enabled = True # make sure to use cudnn for computational performance arguments_strModel = 'default' arguments_strFirst = './images/first.png' arguments_strSecond = './images/second.png' arguments_strOut = './out.flo' if __name__ == '__main__': tensorFirst = torch.FloatTensor(np.array(PIL.Image.open(arguments_strFirst))[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) * (1.0 / 255.0)) tensorSecond = torch.FloatTensor(np.array(PIL.Image.open(arguments_strSecond))[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) * (1.0 / 255.0)) moduleNetwork = Network().cuda().eval() tensorOutput = estimate(tensorFirst, tensorSecond, moduleNetwork) objectOutput = open(arguments_strOut, 'wb') np.array([80, 73, 69, 72], np.uint8).tofile(objectOutput) np.array([tensorOutput.size(2), tensorOutput.size(1) ], np.int32).tofile(objectOutput) np.array(tensorOutput.numpy().transpose(1, 2, 0), np.float32).tofile(objectOutput) objectOutput.close() import cvbase as cvb import matplotlib.pyplot as plt # %matplotlib inline import numpy as np # to visualize a flow file import cv2 flow = cvb.read_flow('out.flo') plt.imshow(draw_hsv(flow)) # + import getopt import math import numpy as np import os import PIL import PIL.Image import sys import torch import torch.utils.serialization from pathlib import Path from network import Network, estimate from correlation import correlation assert(int(torch.__version__.replace('.', '')) >= 40) def draw_flow(img, flow, step=16): h, w = img.shape[:2] y, x = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1).astype(int) fx, fy = flow[y,x].T lines = np.vstack([x, y, x+fx, y+fy]).T.reshape(-1, 2, 2) lines = np.int32(lines + 0.5) # vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) cv2.polylines(img, lines, 0, (0, 255, 0)) for (x1, y1), (_x2, 
_y2) in lines: cv2.circle(img, (x1, y1), 1, (0, 255, 0), -1) return img def draw_hsv(flow): h, w = flow.shape[:2] fx, fy = flow[:,:,0], flow[:,:,1] ang = np.arctan2(fy, fx) + np.pi v = np.sqrt(fx*fx+fy*fy) hsv = np.zeros((h, w, 3), np.uint8) hsv[...,0] = ang*(180/np.pi/2) hsv[...,1] = 255 hsv[...,2] = np.minimum(v*4, 255) bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR) return bgr # return hsv def compute_video_flow(frames, output_dir_name, save_images=False): """ expected ordered numpy """ flows = [] for i in range(frames.shape[0] - 1): flow = compute_flow(frames[i], frames[i+1]) flows.append(flow) flows = np.array(flows) # record video frame = flows[0] fourcc = cv2.VideoWriter_fourcc(*'XVID') video = cv2.VideoWriter(output_dir_name + '.avi', fourcc, 10.0, (frame.shape[1], frame.shape[0])) if save_images: if not os.path.exists(output_dir_name): os.makedirs(output_dir_name) count = 0 for frame in flows: hsv_frame = draw_hsv(frame) video.write(hsv_frame) if save_images: width = 10 im_name = str(count).zfill(width) + '.jpg' cv2.imwrite(str(output_dir_name / Path(im_name)), hsv_frame) count += 1 video.release() return flows def compute_flow(im1, im2): t1 = torch.FloatTensor(im1[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) * (1.0 / 255.0)) t2 = torch.FloatTensor(im2[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) * (1.0 / 255.0)) moduleNetwork = Network().cuda().eval() flow = estimate(t1, t2, moduleNetwork) flow = np.array(flow.numpy().transpose(1, 2, 0), np.float32) return flow def compute_flow_from_files(im1_dir, im2_dir): t1 = torch.FloatTensor(np.array(PIL.Image.open(im1_dir))[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) * (1.0 / 255.0)) t2 = torch.FloatTensor(np.array(PIL.Image.open(im2_dir))[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) * (1.0 / 255.0)) moduleNetwork = Network().cuda().eval() flow = estimate(t1, t2, moduleNetwork) flow = np.array(flow.numpy().transpose(1, 2, 0), np.float32) return flow def load_video_frames(input_file, output_dir=None, 
save=False): input_file = Path(input_file) if save: assert output_dir is not None output_dir = Path(output_dir) / input_file.name.split('.')[0] if not os.path.exists(output_dir): os.makedirs(output_dir, exist_ok=True) print('output dir: ', output_dir) frames = [] count = 0 vidcap = cv2.VideoCapture(str(input_file)) success, im = vidcap.read() while success: count += 1 frames.append(im) if save: width = 10 im_name = str(count).zfill(width) + '.jpg' cv2.imwrite(str(output_dir / Path(im_name)), im) # save frame as JPEG file print('Wrote a new frame as: ', str(output_dir / Path(im_name))) success, im = vidcap.read() return np.array(frames) idir = '../../../data/test_video/session_hands_1.avi' odir = '../../../data/test_video/session_hands_1_flow' frames = load_video_frames(idir, odir) flows = compute_video_flow(frames, odir) print(flows.shape) # - np.min(flows[1])
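`draw_hsv` above hinges on converting each flow vector to a direction (mapped to hue) and a magnitude (mapped to value); that core step in numpy alone, without the cv2 color-space conversion:

```python
import numpy as np

def flow_to_angle_mag(flow):
    # per-pixel direction in [0, 2*pi) and magnitude of an (h, w, 2) flow field
    fx, fy = flow[..., 0], flow[..., 1]
    ang = np.arctan2(fy, fx) + np.pi
    mag = np.sqrt(fx * fx + fy * fy)
    return ang, mag

flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = 3.0   # uniform motion: +3 px along x
flow[..., 1] = 4.0   # and +4 px along y
ang, mag = flow_to_angle_mag(flow)
mag0 = float(mag[0, 0])   # 3-4-5 triangle -> 5.0
ang0 = float(ang[0, 0])   # atan2(4, 3) + pi
```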
vis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:pyFIRS] # language: python # name: conda-env-pyFIRS-py # --- # In this notebook, we update the names of acquisitions from the DNR lidar portal to be consistent with the names that will come from the NOAA lidar server. import pandas as pd import os import glob import numpy as np dnr_clips = pd.DataFrame( glob.glob('../data/raw/lidar/dnr_portal/dnrplot_clips/hectare_clips_epsg2927/*.laz') + \ glob.glob('../data/raw/lidar/dnr_portal/dnrplot_clips/plot_clips_epsg2927/*.laz'), columns=['path']) dnr_clips['basename'] = dnr_clips['path'].apply(lambda x: os.path.basename(x)) dnr_clips['dnr_acq_name'] = dnr_clips['basename'].apply(lambda x: '_'.join(x.split('_')[1:]).split('.laz')[0]) dnr_clips.head() len(dnr_clips) dnr_noaa_lookup = pd.read_csv('../data/raw/lidar/dnr_noaa_crosswalk.csv').set_index('dnr_name') dnr_noaa_lookup.head() noaa_name_lookup = pd.read_csv('../data/raw/lidar/noaa_tiles/noaa_acq_name_lookup.csv').set_index('noaa_id') noaa_name_lookup.head() dnr_clips.loc[dnr_clips.dnr_acq_name.isin(dnr_noaa_lookup.index.values)].head(10) def rename(row): try: noaa_id = dnr_noaa_lookup.loc[row['dnr_acq_name']] new_name = noaa_name_lookup.loc[noaa_id].values[0][0] return new_name except KeyError: new_name = row['dnr_acq_name'] new_name = new_name[:-5] + '_' + new_name[-4:] return new_name dnr_clips['new_name'] = dnr_clips.apply(lambda x: rename(x), axis=1) dnr_clips['new_path'] = dnr_clips.apply(lambda x: x['path'].replace(x['dnr_acq_name'], x['new_name']), axis=1) print('Renaming {} files'.format(len(dnr_clips))) for i, row in dnr_clips.iterrows(): os.rename(row.path, row.new_path) print(i, end='.')
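The `rename` logic above, in isolation: try the two-step DNR-to-NOAA crosswalk, and on a `KeyError` fall back to normalizing the separator before the trailing four-digit year to an underscore. A toy sketch with stand-in dictionaries (the names and ids here are invented, not real acquisitions):

```python
dnr_noaa_lookup = {"quinault_2013": 4789}              # toy stand-in for the crosswalk CSV
noaa_name_lookup = {4789: "usgs_lidar_quinault_2013"}  # toy stand-in for the NOAA lookup

def rename(dnr_acq_name):
    try:
        return noaa_name_lookup[dnr_noaa_lookup[dnr_acq_name]]
    except KeyError:
        # replace the character before the 4-digit year with an underscore
        return dnr_acq_name[:-5] + '_' + dnr_acq_name[-4:]

mapped = rename("quinault_2013")   # found in the crosswalk
fallback = rename("hoh.2013")      # no crosswalk entry -> "hoh_2013"
```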
notebooks/03_Renaming_DNR_PlotClips.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [conda env:py27]
#     language: python
#     name: conda-env-py27-py
# ---

# +
from __future__ import print_function

import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline

from skimage.data import astronaut
from skimage.segmentation import felzenszwalb, slic, quickshift
from skimage.segmentation import mark_boundaries
from skimage.util import img_as_float

img = img_as_float(astronaut()[::2, ::2])

segments_fz = felzenszwalb(img, scale=100, sigma=0.5, min_size=50)
segments_slic = slic(img, n_segments=250, compactness=10, sigma=1)
segments_quick = quickshift(img, kernel_size=3, max_dist=6, ratio=0.5)

print("Felzenszwalb's number of segments: %d" % len(np.unique(segments_fz)))
print("SLIC number of segments: %d" % len(np.unique(segments_slic)))
print("Quickshift number of segments: %d" % len(np.unique(segments_quick)))

# 'box-forced' was removed in Matplotlib 3.0; 'box' works in both old and new versions
fig, ax = plt.subplots(1, 3, sharex=True, sharey=True, subplot_kw={'adjustable': 'box'})
fig.set_size_inches(8, 3, forward=True)
fig.tight_layout()

ax[0].imshow(mark_boundaries(img, segments_fz))
ax[0].set_title("Felzenszwalb's method")
ax[1].imshow(mark_boundaries(img, segments_slic))
ax[1].set_title("SLIC")
ax[2].imshow(mark_boundaries(img, segments_quick))
ax[2].set_title("Quickshift")
for a in ax:
    a.set_xticks(())
    a.set_yticks(())
plt.show()

# +
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

from skimage import data
from skimage.util.dtype import dtype_range
from skimage.util import img_as_ubyte
from skimage import exposure
from skimage.morphology import disk
from skimage.filters import rank

matplotlib.rcParams['font.size'] = 9


def plot_img_and_hist(img, axes, bins=256):
    """Plot an image along with its histogram and cumulative histogram.
""" ax_img, ax_hist = axes ax_cdf = ax_hist.twinx() # Display image ax_img.imshow(img, cmap=plt.cm.gray) ax_img.set_axis_off() # Display histogram ax_hist.hist(img.ravel(), bins=bins) ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0)) ax_hist.set_xlabel('Pixel intensity') xmin, xmax = dtype_range[img.dtype.type] ax_hist.set_xlim(xmin, xmax) # Display cumulative distribution img_cdf, bins = exposure.cumulative_distribution(img, bins) ax_cdf.plot(bins, img_cdf, 'r') return ax_img, ax_hist, ax_cdf # Load an example image img = img_as_ubyte(data.moon()) # Global equalize img_rescale = exposure.equalize_hist(img) # Equalization selem = disk(30) img_eq = rank.equalize(img, selem=selem) # Display results fig = plt.figure(figsize=(8, 5)) axes = np.zeros((2, 3), dtype=np.object) axes[0, 0] = plt.subplot(2, 3, 1, adjustable='box-forced') axes[0, 1] = plt.subplot(2, 3, 2, sharex=axes[0, 0], sharey=axes[0, 0], adjustable='box-forced') axes[0, 2] = plt.subplot(2, 3, 3, sharex=axes[0, 0], sharey=axes[0, 0], adjustable='box-forced') axes[1, 0] = plt.subplot(2, 3, 4) axes[1, 1] = plt.subplot(2, 3, 5) axes[1, 2] = plt.subplot(2, 3, 6) ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0]) ax_img.set_title('Low contrast image') ax_hist.set_ylabel('Number of pixels') ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1]) ax_img.set_title('Global equalise') ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2]) ax_img.set_title('Local equalize') ax_cdf.set_ylabel('Fraction of total intensity') # prevent overlap of y-axis labels fig.tight_layout() plt.show()
utils/dataset/plot_canny.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # A **practical** introduction to Astrostatistics

# ---

# that is, applied statistics with ...*applications* to Astronomy

# ## Where we need statistics

# (obvious)

# ### Bibliography

# 1. Statistics, data mining, and machine learning in astronomy: a practical Python guide for the analysis of survey data. (Princeton University Press, 2014).
#
# 2. <NAME>. & <NAME>. Practical statistics for astronomers. (Cambridge University Press, 2003).
#
# 3. Chattopadhyay, <NAME>. & <NAME>. Statistical Methods for Astronomical Data Analysis. 3, (Springer New York, 2014).

# ## Probabilities and basic distributions

# Below we will examine some basic distributions, along with a few practical applications.

# ### Distributions in Python via Scipy

# `scipy` provides all the distributions through the module `scipy.stats`.
# So, for example, we can access various properties/methods of the Poisson distribution through the instance `scipy.stats.poisson`.
# Note that what we casually call a distribution is in fact its density function. That is, the density function of the Poisson distribution, through the instance above, is obtained with `scipy.stats.poisson.pmf()`.
#
# For brevity we load the whole `stats` module, while also loading the standard libraries `numpy` and `matplotlib`

from scipy import stats
from scipy import integrate
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import rc,rcParams
rc('text', usetex=True)
rcParams['figure.figsize'] = (9, 5.5)
rcParams['ytick.labelsize'],rcParams['xtick.labelsize'] = 17.,17.
rcParams['axes.labelsize']=19.
rcParams['legend.fontsize']=17.
rcParams['axes.titlesize']=20.
import seaborn as sb
sb.despine()
sb.set_style('white', {'axes.linewidth': 0.5, 'axes.edgecolor':'black'})
sb.despine(left=True)

# ### Uniform Distribution
#
# ### Binomial Distribution
# The binomial distribution describes the number of successes $k$ in an experiment with two possible outcomes (Bernoulli trials), given the probability $p$ of success in a single trial and the total number $n$ of trials.
#
# The probability density is given by:
# $$
# Pr(k;n,p)=\binom{n}{k}p^k (1-p)^{n-k}
# $$
#
# The expected value of the distribution is:
# $$
# E[k]=np
# $$
# and the variance
# $$
# var[k]=np(1-p) =\sigma ^2
# $$

# #### Example 1
# What is the probability distribution of the number of tails if we toss a coin 10 times?

kk=np.arange(0,11)  # integer support 0..10
pr=stats.binom.pmf(kk,p=0.5,n=10)
plt.bar(kk,pr)

# #### Example 2
# Suppose we know from a sample of 100 galaxy clusters that 10 of them contain a dominant central galaxy. We intend to examine a second, different sample of 30 clusters, selected for example from an X-ray catalogue.
#
# * In how many of them do we expect the corresponding feature to be present?
#
# * What is the probability of observing it in more than 8?
#
# ---
#
# We assume that $p=0.1$, so
# $$
# Pr(k)=\binom{30}{k}0.1^k 0.9^{30-k}
# $$
# (The distribution is shown in the plot below.)
#
# * The expected value is $\mu= 3$ with standard deviation $\sigma = 1.6$
#
# * The probability of seeing more than 8 is:
# $$
# Pr(k>8)=\sum _{i=9}^{30} Pr(i) = 0.02
# $$
#
# What would you suggest in case we actually observe, say, 10 such clusters?
kk=np.arange(0,31)  # integer support 0..30
pr=stats.binom.pmf(kk,p=0.1,n=30)
plt.bar(kk,pr)

print('Pr(k>8)={:.3f}'.format(pr[kk>8].sum()))
print('Pr(k=10)={:.5f}'.format(pr[10]))

# ### Poisson Distribution
# The Poisson distribution is a discrete distribution of the number of events in a given time interval, given the mean number of events $\mu$ for that interval. The probability density function is:
# $$
# Pr(x;\mu)=\frac{\mu ^x e^{-\mu}}{x!}
# $$
#
# The expected value and the variance of the distribution are:
# \begin{align}
# E[x]=\mu && var[x]=\sigma ^2=\mu
# \end{align}

# #### Example
# In a football match $2.5$ goals are scored on average. What is the probability that $x$ goals are scored?

xx=np.linspace(0,8,9,dtype=int)
#pr=stats.poisson.pmf(xx,mu=2.5)
#plt.bar(xx,pr)
for mu in np.linspace(0.5,2.5,4):
    pr=stats.poisson(mu).pmf(xx)
    plt.plot(xx,pr,label='$\mu = {:.2f}$'.format(mu))
plt.legend()

# #### Example
# The number of photons arriving at a detector in a given time interval is, to a very good approximation (assuming that the emission probabilities of successive photons are independent), a Poisson process.
#
# If $t$ is the integration time of the detector and $\lambda$ the photon emission rate, then the mean number of photons the detector counts, and its error, are
# \begin{align}
# \mu = \lambda t && // && \sigma = \sqrt{\mu} = \sqrt{\lambda t}
# \end{align}
# so the signal we obtain (the mean values) relative to the noise will be
# $$
# S/N \propto \sqrt{t}
# $$
# > The calculation above does not account for the detector (readout) noise. The error is due entirely to the photons.

# ### Normal Distribution (Gaussian)
# The binomial (for large $N$) and the Poisson (for large $\mu$) distributions tend to the normal distribution (or Gaussian distribution).
Compared to those we saw before, the Gaussian is a continuous distribution with two independent parameters, the mean $\mu$ and the standard deviation $\sigma$, and its probability density function is:
#
# $$
# Pr(x;\mu,\sigma)=\frac{1}{\sigma \sqrt{2\pi}}\exp \Big( -\frac{(x-\mu)^2}{2\sigma^2} \Big)
# $$

xx=np.linspace(-4,4,100)
for s in np.linspace(0.5,2.5,4):
    pr=stats.norm(0,s).pdf(xx)
    plt.plot(xx,pr,label='$\sigma = {:.2f}$'.format(s))
plt.legend()

# The Gaussian is perhaps the most important distribution, since by the **Central Limit Theorem** the means of many random variables, regardless of the distributions that generated them, are distributed as one!
#
# This means that we expect the majority of our measurements to be distributed as normal distributions.
#
# ### The $\chi^2$ distribution
# If we draw a set $\{x_i\} ^N$ from a normal distribution $N(\mu,\sigma)$, we define the quantity
# $$
# Q = \sum _{i=1} ^N \Big( \frac{x_i-\mu}{\sigma} \Big)^2
# $$
#
# It can be shown that $Q$ follows the so-called $\chi^2$ distribution, whose single parameter is the number of degrees of freedom $N$.
# $$
# Pr(Q;N)=\frac{Q^{\frac{N}{2}-1}}{2^{N/2} \Gamma (N/2) } \exp \Big( \frac{-Q}{2} \Big)
# $$

xx=np.linspace(0,10)
for K in range(1,5):
    pr=stats.chi2(K).pdf(xx)
    plt.plot(xx,pr,label='K = {}'.format(K))
plt.legend()

# ### The t distribution (Student's t)
# Similarly, if we draw a set $\{x_i\} ^N$ from a normal distribution $N(\mu,\sigma)$, the quantity
# $$
# t = \frac{\bar{x}-\mu}{s/\sqrt{N}}
# $$
# where
# \begin{align}
# \bar{x} =\frac{1}{N}\sum _{i=1} ^N x_i && \text{and} && s=\sqrt{\frac{1}{N-1}\sum _{i=1} ^N (x_i-\bar{x})^2}
# \end{align}
# that is, the mean and the standard deviation as obtained from the data.
#
# The distribution of $t$ again depends only on the degrees of freedom $N$:
# $$
# Pr(t;N)= \frac{\Gamma (\frac{N+1}{2})}{\sqrt{\pi N}\Gamma (\frac{N}{2})} \Big( 1+\frac{t^2}{N} \Big)^{-\frac{N+1}{2}}
# $$

xx=np.linspace(-4,4,100)
for K in range(1,5):
    pr=stats.t(K).pdf(xx)
    plt.plot(xx,pr,label='K = {}'.format(K))
plt.legend()

# #### Example 3
# In the Facebook group of the Mathematics department a poll was held on the zodiac signs of the group members. Based on the data (given below), can we draw conclusions about the distribution of births within the year?

data=np.array([37,33,29,42,35,57,41,31,65,49,42,38])
data=np.array([40,35,35,46,39,60,47,34,71,54,45,42])  # the second assignment overrides the first with the updated counts
names=['Aqr','Psc','Ari','Tau','Gem','Cnc',
       'Leo','Vir','Lib','Sco','Sgr','Cap']
plt.bar(names,data,label='Data')

# We make the hypothesis (null hypothesis) that births are equally likely throughout the year. We therefore expect the number of people per sign to come from a uniform distribution.
#
# The probability of observing a specific number of people is given by a binomial distribution with probability $p=1/12$. The number of people is $n=548$ (27/3).
# So the expected value is $np=45.67$ with standard deviation $\sigma = 6.5$

# +
N=data.sum()
p=1/12
mu=N*p
var=((p)*(1-p))*N
sd=np.sqrt(var)
print('E[X] = {:.2f} / var[X] = {:.2f} (sigma = {:.2f}) for {} observations'.format(mu,var,sd,N))
plt.bar(names,data,width=0.65,label='Data')
plt.hlines(mu,0,11,linestyles='--',label='Expected Value',alpha=0.7)
plt.fill_between(range(12),(mu-sd)*np.ones(12),(mu+sd)*np.ones(12),alpha=0.65,label='Standard Deviation')
plt.legend(loc=2)
# -

# To test whether the hypothesis holds we will use Pearson's so-called $\chi^2$ test.
# Our data consist of $K=12$ discrete values $D_i$.
Taking the "hypothesized" data $D_{h_i}$, which in our case of a uniform distribution equal the expected value $np$,
#
# we define the $\chi^2$ statistic, to which we will also return later:
# $$
# \chi^2=\sum_{i=1} ^{K} \frac{(D_i-D_{h_i})^2}{D_{h_i}} = \sum_{i=1} ^{12} \frac{(D_i-np)^2}{np} = 30.06
# $$
#
# We then compare the measured value with the theoretical $\chi^2$ distribution for $K-1$ degrees of freedom (one degree of freedom less because of the model we chose).
#
# The $\chi^2$ distribution (we will call it $\text{chi2}(x)$) represents the probability that the deviations of the observations from the hypothesized data are random. The probability that the above observations are random is therefore given by:
# $$
# P=\int _{\chi^2} ^{\infty} \text{chi2}(x) dx = 0.0016
# $$
#
# So, according to the above data, the probability that the distribution of births per month is uniform over the year is 0.16%. Choosing a critical value, which is usually 5%, we conclude that our hypothesis is **rejected**.
#

# +
x2=np.sum((N*p-data)**2/(N*p))
print('x^2 = {:.2f} / x^2 (reduced) = {:.2f}'.format(x2,x2/11))
def chi2d(x):
    return stats.chi2.pdf(x,11)
xx=np.linspace(0,1.5*x2,100)
plt.plot(xx,chi2d(xx))
plt.vlines(x2,0,chi2d(xx).max()/4,linestyles='--',alpha=0.4)
plt.fill_between(xx[xx>x2],0,chi2d(xx)[xx>x2],alpha=0.25)
plt.xlabel('$\chi^2$')
P=integrate.quad(chi2d,x2,100)[0]
Pc=0.05
print('P value = {:.4f}'.format(P))
NH = 'Accepted' if P>Pc else 'Rejected'
print('Null Hypothesis is {}'.format(NH))
# -

# ### Example
# Suppose we observe a star in the sky and measure the photon flux, assuming the flux is constant in time and equal to $F_{\mathtt{true}}$.
#
# We take $N$ observations, measuring the flux $F_i$ and the error $e_i$.
#
# The detection of a photon is an independent event that follows a Poisson distribution.
From the variance of the Poisson distribution we compute the error $e_i=\sqrt{F_i}$

# +
N=200
F_true=1000.
F=np.random.poisson(F_true*np.ones(N))
e=np.sqrt(F)
plt.errorbar(np.arange(N),F,yerr=e, fmt='ok', ecolor='gray', alpha=0.5)
plt.hlines(np.mean(F),0,N,linestyles='--')
plt.hlines(F_true,0,N)
print('Mean = {:.2f} (diff {:.2f}) // std = {:.2f}'.format(np.mean(F),np.mean(F)-F_true,np.std(F)))
# -

ax=sb.histplot(F, stat='density', kde=True)  # sb.distplot(F) was removed in recent seaborn versions
xx=np.linspace(F.min(),F.max())
mu=F.mean()
s=F.std()
gaus=np.exp(-0.5*((xx-mu)/s)**2)/np.sqrt(2.*np.pi*s**2)
ax.plot(xx,gaus)

# In the plot above we observe that, although the values obey Poisson statistics, their distribution approaches a normal distribution, due to the Central Limit Theorem.
#
# ## Estimating the photon flux via Maximum Likelihood
#
# This time we are after the parameters of the model. To do this we must find where the likelihood function is maximized. That is, assuming a model behind the data, we compute the probability that exactly these data appear. This information is contained in the likelihood function.
#
# In the case of the star, our model is that it has a constant flux $\mu$, which is also the single parameter we are trying to estimate. The likelihood, that is, the probability of observing the measurement $D_i=(F_i,e_i)$ given the constant-flux model, is:
# $$
# P(D_i|\mu)=\frac{1}{\sqrt{2\pi e_i^2}} \exp \Big( -\frac{(F_i-\mu)^2}{2e_i^2} \Big)
# $$
#
# We define the likelihood function as the total probability of observing these specific measurements as a whole.
# $$
# L(D|\mu)=\prod _{i=1}^N P(D_i|\mu) = \prod _{i=1}^N \frac{1}{\sqrt{2\pi e_i^2}} \exp \Big( -\frac{(F_i-\mu)^2}{2e_i^2} \Big)
# $$
#
# Because the value of the likelihood function can become very small, it is easier to use its logarithm
# $$
# \log L = -\frac{1}{2} \sum _{i=1}^N \big[ \log(2\pi e_i^2) + \frac{(F_i-\mu)^2}{e_i^2} \big]
# $$
#
# We now look for where this is maximized. So,
# \begin{align}
# \frac{d }{d\mu} \big(\log L\big) = 0 \rightarrow \mu= \frac{\sum w_i F_i}{\sum w_i} && \text{where} && w_i=\frac{1}{e_i^2}
# \end{align}
# In the case where all the errors are the same (homoscedastic (**TODO**) errors) we get the expected result $\mu = \frac{1}{N}\sum F_i$, that is, the mean of the observations.
#
# ### Error Estimation
# To compute the error we construct the Covariance Matrix (**?**), which is defined from the second-order terms of the maximum condition. In general, then,
#
# $$
# \sigma _{jk} = \Big(-\frac{d^2 \ln L}{d\theta _j d\theta _k} \Big) ^{-\frac{1}{2}}
# $$
#
# In our case, with a single parameter, we have no matrix but just the value:
# $$
# \sigma _{\mu} = \Big( \sum w_i \Big) ^{-\frac{1}{2}}
# $$
#
#
# ---
# \* Although a Gaussian distribution is defined by two parameters $(\mu,\sigma)$, the parameter $\sigma$ is at the same time the error of the parameter $\mu$. Hence the degrees of freedom of the problem are $N-1$ (**TODO**: expand on this)

print('Star flux: {:.1f} +/- {:.2f}'.format(np.sum(F/e**2)/np.sum(1/e**2),np.sum(1/e**2)**(-0.5)))

# ## Random variables
# A random (or stochastic) variable is a variable that comes from a quantity subject to random fluctuations. It comes, that is, from a statistical distribution.
There are two kinds of random variables: discrete (coming, for example, from a binomial or Poisson distribution) and continuous (Gaussian, respectively)
#
# Two random variables $x,y$ are independent if and only if:
# $$p(x,y)=p(x)p(y)$$
# In the case where two random variables are not independent, then:
# $$ p(x,y)=p(x|y)p(y)=p(y|x)p(x) $$
# The marginal probability is given by
# $$
# p(x)= \int p(x,y)dy = \int p(x|y)p(y) dy
# $$
# The figure below makes apparent the actual distribution of two non-independent variables. From the joint distribution we take slices, which are shown on the right.
#
# It is not enough, however, to simply take the slice of the distribution. $p(x|1.5)$ is itself a distribution, so its total probability must be 1. We therefore have to normalize the distribution, that is, divide by the marginal distribution of $y$ at the specific slice, $$p(x|1.5)=\frac{p(x,y=1.5)}{p(y=1.5)}$$
#
# ![joint_probability](figures/joint_probability.png)
#
# Bayes' rule is built by generalizing the relations above:
# $$
# p(y|x)=\frac{p(x|y)p(y)}{p(x)}=\frac{p(x|y)p(y)}{\int p(x|y)p(y)dy}
# $$
#
# This simple rule, a mere algebraic rearrangement, opened new horizons in how we do statistics. Let us look at a simple example.
#
# Suppose we observe someone (call him Fotis) wearing an AEK scarf; what is the probability that he is an AEK fan?
# There are two possibilities
# * event A: Fotis is an AEK fan
# * event A': Fotis is not an AEK fan
#
# We are after the probability that, given that Fotis wears an AEK scarf, he is also an AEK fan. That is, the probability $P(\text{AEK}|\text{AEK scarf})$.
#
# Using Bayes' rule we have:
#
# $$
# P(\text{AEK}|\text{AEK scarf})=
# \frac{P(\text{AEK}) P(\text{AEK scarf}|\text{AEK})}{P(\text{AEK}) P(\text{AEK scarf}|\text{AEK})+
# P(\text{not AEK}) P(\text{AEK scarf}|\text{not AEK})}
# $$
#
#
# Let us study this relation step by step:
#
# The term $P(\text{AEK})$ is the probability of someone being an AEK fan in general. This probability is called the **prior**, and it is the innovation (for better or worse) that the Bayesian approach brings: we need some prior indication for our hypothesis. Considering that AEK is a well-known club, holding a share close to, but lower than, $1/3$ in Athens, we adopt a value of $1/5$.
#
# The term $P(\text{AEK scarf}|\text{AEK})$ is the probability that someone wearing an AEK scarf is not doing so by chance. This is the **likelihood**. Considering this to be a fairly rare event otherwise, we assign the value $0.9$. Note that without the Bayesian approach our estimate would probably have matched this number.
#
# The term in the denominator is essentially a normalization term. With our data it is easy to compute, since we have only two possibilities. With real data, where we want a distribution as the result, it is complicated to compute. This does not bother us, however, because we can easily normalize without computing it.
#
# \begin{align}
# P(\text{AEK}) = P_0 = 0.2 \\
# P(\text{not AEK}) = 1-P_0 = 0.8 \\
# P(\text{AEK scarf}|\text{AEK}) = 0.9 \\
# P(\text{AEK scarf}|\text{not AEK}) = 0.1 \\
# \end{align}
#
# The final probability, called the **posterior**, is then computed to be roughly $70\%$.

0.2*0.9/(0.2*0.9+0.8*0.1)

# ## Computing the flux via the Bayesian approach
# We will now show how the flux of the star from the previous example can be computed following the Bayesian approach.
In essence we want to compute the probability distribution of the parameter $\mu$ given the data $D$, in other words $P(\mu|D)$.
# From Bayes' rule:
#
# $$
# P(\mu|D)=\frac{P(D|\mu)P(\mu)}{P(D)}
# $$
# Let us examine the terms one by one
# * $P(\mu|D)$: the **posterior** distribution of the parameter values (in this particular case we have only one parameter). The term we are trying to compute.
# * $P(D|\mu)$: the **likelihood**, the term $L(D|\mu)=\prod _{i=1}^N P(D_i|\mu) = \prod _{i=1}^N \frac{1}{\sqrt{2\pi e_i^2}} \exp \Big( -\frac{(F_i-\mu)^2}{2e_i^2} \Big)$ that we also met in the classical approach.
# * $P(\mu)$: the **prior** distribution of the parameter values. Here we use any prior knowledge about the values of the parameters we want to estimate.
# * $P(D)$: the probability of the data, a term which in practice acts as a normalization term.

# +
def log_prior(theta):
    return 1 # flat prior

def log_likelihood(theta, F, e):
    return -0.5 * np.sum(np.log(2 * np.pi * e ** 2) + (F - theta[0]) ** 2 / e ** 2)

def log_posterior(theta, F, e):
    return log_prior(theta) + log_likelihood(theta, F, e)

# +
ndim = 1  # number of parameters in the model
nwalkers = 50  # number of MCMC walkers
nburn = 1000  # "burn-in" period to let chains stabilize
nsteps = 2000  # number of MCMC steps to take

# we'll start at random locations between 0 and 2000
starting_guesses = 2000 * np.random.rand(nwalkers, ndim)

import emcee
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)

sample = sampler.chain  # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].ravel()  # discard burn-in points

# +
# plot a histogram of the sample
plt.hist(sample, bins=50, histtype="stepfilled", alpha=0.3, density=True)  # 'normed' was removed in Matplotlib 3.1

# plot a best-fit Gaussian
F_fit = np.linspace(975, 1025)
pdf = stats.norm(np.mean(sample), np.std(sample)).pdf(F_fit)

plt.plot(F_fit, pdf, '-k')
plt.xlabel("F"); plt.ylabel("P(F)")
# -
print(""" F_true = {0} F_est = {1:.1f} +/- {2:.2f} (based on {3} measurements) """.format(F_true, np.mean(sample), np.std(sample), N)) # **Σχόλιο:** Η μέθοδος MCMC μας δίνει σαν αποτέλεσμα μια **αριθμητική κατανομή** -όχι αναλυτική-. Επομένως οι παραπάνω τιμές είναι -λιγο- λάθος. Δεν μας είπε κανείς ότι η τελική κατανομή είναι αναγκαστικά μια κατανομή gauss. Ενα μεγάλο ζήτημα λοιπόν όταν παίρνουμε το τελικό αποτέλεσμα από τον MCMC αλγόριθμο είναι η κατανόηση και εκτίμηση κάποιον μετρικών για την εκάστοτε παράμετρο, όπως πιθανότερη τιμή και σφάλμα. # # ## Περίπτωση ενός αστέρα με στοχαστική ροή. # Ας εξετάσουμε τώρα τη περίτπωση όπου η ροή ενός αστέρα είναι τυχαία εκ φύσεως σαν μια κανονική κατανομή. Δηλαδή # $$ # F_{\text{true}}=\frac{1}{\sqrt{2\pi \sigma^2}} \exp \Big( -\frac{(F-\mu)^2}{2\sigma^2} \Big) # $$ # + np.random.seed(42) # for reproducibility N = 100 # we'll use more samples for the more complicated model mu_true, sigma_true = 1000, 15 # stochastic flux model F_true = stats.norm(mu_true, sigma_true).rvs(N) # (unknown) true flux F = stats.poisson(F_true).rvs() # observed flux: true flux plus Poisson errors. e = np.sqrt(F) # root-N error, as above plt.errorbar(np.arange(N),F,yerr=e, fmt='ok', ecolor='gray', alpha=0.5) plt.hlines(mu_true,0,N,linestyles='--') plt.hlines(mu_true+sigma_true,0,N,linestyles='--',alpha=0.6) plt.hlines(mu_true-sigma_true,0,N,linestyles='--',alpha=0.6) # - # Η συνάρτηση πιθανοφάνειας βρίσκεται από τη συσχέτιση της κατανομής των σφαλμάτων με τη κατανομή της πηγής των μετρήσεων # $$ # L(D|\mu,\sigma)=\prod _{i=1}^N \frac{1}{\sqrt{2\pi (e_i^2+\sigma^2)}} \exp \Big( -\frac{(F_i-\mu)^2}{2(e_i^2+\sigma ^2)} \Big) # $$ # # Αντίστοιχα με πριν υπολογίζουμε: # \begin{align} # \mu=\frac{\sum w_i F_i}{\sum w_i} && \text{όπου} && w_i=\frac{1}{\sigma ^2+e_i ^2} # \end{align} # Και εδώ υπάρχει πρόβλημα. Καθώς η τιμή της παραμέτρου $\mu$ εξαρτάται από τη παράμετρο $\sigma$. 
#
# Since there is no analytic solution, we resort to a numerical computation of the maximum of $L$

# +
def log_likelihood(theta, F, e):
    return -0.5 * np.sum(np.log(2 * np.pi * (theta[1] ** 2 + e ** 2))
                         + (F - theta[0]) ** 2 / (theta[1] ** 2 + e ** 2))

# maximize likelihood <--> minimize negative likelihood
def neg_log_likelihood(theta, F, e):
    return -log_likelihood(theta, F, e)

from scipy import optimize
theta_guess = [900, 5]
theta_est = optimize.fmin(neg_log_likelihood, theta_guess, args=(F, e))
print("""
Maximum likelihood estimate for {0} data points:
    mu={theta[0]:.0f}, sigma={theta[1]:.0f}
""".format(N, theta=theta_est))
# -

# All good so far. In very few cases can we have an analytic solution anyway, so we committed no crime!
#
# What is not obvious, however, is how to compute the errors of the values $\mu,\sigma$. There are ways to achieve this, either through some $\chi^2$ test or by looking for a normal distribution around the maximum of $L$. Both approaches, though, rest on the assumption that the final distribution of the parameter values is normal.

# +
def log_prior(theta):
    # sigma needs to be positive.
    if theta[1] <= 0:
        return -np.inf
    else:
        return 0

def log_posterior(theta, F, e):
    return log_prior(theta) + log_likelihood(theta, F, e)

# same setup as above:
ndim, nwalkers = 2, 150
nsteps, nburn = 2000, 1000

starting_guesses = np.random.rand(nwalkers, ndim)
starting_guesses[:, 0] *= 2000  # start mu between 0 and 2000
starting_guesses[:, 1] *= 20    # start sigma between 0 and 20

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)

#sample = sampler.chain  # shape = (nwalkers, nsteps, ndim)
trace = sampler.chain[:, nburn:, :].reshape(-1, ndim)
# -

import corner
rcParams['figure.figsize'] = (15, 15)
co=corner.corner(sampler.flatchain, labels=['mu','sigma'],truths=[mu_true,sigma_true],smooth=0.25,bins=50,)
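# As the comment earlier notes, the MCMC trace is a numerical distribution, so summary statistics need not assume Gaussianity. A common choice is the median with the 16th/84th percentiles. A minimal sketch on synthetic samples (the array below is simulated, standing in for one column of `trace`):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for one column of an MCMC trace
samples = rng.normal(1000.0, 15.0, size=50_000)

# median and the 68% central interval; for a Gaussian the half-widths equal sigma
lo, med, hi = np.percentile(samples, [16, 50, 84])
print('mu = {:.1f} (+{:.1f} / -{:.1f})'.format(med, hi - med, med - lo))
```

Unlike a mean/std summary, this report stays meaningful even when the posterior is skewed or heavy-tailed.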
AstroStatistics1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # --- import pandas as pd import numpy as np df = pd.read_csv("Data.csv") df.head() df.isna().sum() # ### Multiple Linear Regression X = df.iloc[:, :-1] y = df.iloc[:, -1] # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # + from sklearn.linear_model import LinearRegression lm = LinearRegression() lm.fit(X_train, y_train) lm_predictions = lm.predict(X_test) # + from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error print(f"r2_score: {r2_score(y_test, lm_predictions)}") print(f"mean_absolute_error: {mean_absolute_error(y_test, lm_predictions)}") print(f"mean_squared_error: {mean_squared_error(y_test, lm_predictions)}") print(f"Root Mean Square Error: {np.sqrt(mean_squared_error(y_test, lm_predictions))}") # - # ### Polynomial Regression X = df.iloc[:, :-1] y = df.iloc[:, -1] # + from sklearn.preprocessing import PolynomialFeatures from sklearn.model_selection import train_test_split poly_feature = PolynomialFeatures(degree=4) X = poly_feature.fit_transform(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # + from sklearn.linear_model import LinearRegression poly_lm = LinearRegression() poly_lm.fit(X_train, y_train) poly_predictions = poly_lm.predict(X_test) # + from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error print(f"r2_score: {r2_score(y_test, poly_predictions)}") print(f"mean_absolute_error: {mean_absolute_error(y_test, poly_predictions)}") print(f"mean_squared_error: {mean_squared_error(y_test, poly_predictions)}") print(f"Root Mean Square Error: {np.sqrt(mean_squared_error(y_test, poly_predictions))}") # - # ### Support Vector Regression # ###### 1 X = 
df.iloc[:, :-1]
y = df.iloc[:, -1].to_frame()

# +
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# +
from sklearn.preprocessing import StandardScaler

sc_X = StandardScaler()
sc_y = StandardScaler()
X_train = sc_X.fit_transform(X_train)
y_train = sc_y.fit_transform(y_train)
X_test = sc_X.transform(X_test)
y_test = sc_y.transform(y_test)

# +
from sklearn.svm import SVR

svr = SVR()  # kernel = 'rbf'
svr.fit(X_train, y_train.ravel())  # SVR expects a 1-D target
svr_predictions = svr.predict(X_test)

# +
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

print(f"r2_score: {r2_score(y_test, svr_predictions)}")
print(f"mean_absolute_error: {mean_absolute_error(y_test, svr_predictions)}")
print(f"mean_squared_error: {mean_squared_error(y_test, svr_predictions)}")
print(f"Root Mean Square Error: {np.sqrt(mean_squared_error(y_test, svr_predictions))}")
# -

# ###### 2

X = df.iloc[:, :-1]
y = df.iloc[:, -1].to_frame()

# +
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# +
from sklearn.preprocessing import StandardScaler

sc_X = StandardScaler()
sc_y = StandardScaler()
X_train = sc_X.fit_transform(X_train)
y_train = sc_y.fit_transform(y_train)

# +
from sklearn.svm import SVR

svr = SVR()  # kernel = 'rbf'
svr.fit(X_train, y_train.ravel())  # SVR expects a 1-D target
# inverse_transform needs a 2-D array in recent scikit-learn versions
svr_predictions = svr.predict(sc_X.transform(X_test)).reshape(-1, 1)

# +
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

print(f"r2_score: {r2_score(y_test, sc_y.inverse_transform(svr_predictions))}")
print(f"mean_absolute_error: {mean_absolute_error(y_test, sc_y.inverse_transform(svr_predictions))}")
print(f"mean_squared_error: {mean_squared_error(y_test, sc_y.inverse_transform(svr_predictions))}")
print(f"Root Mean Square Error: {np.sqrt(mean_squared_error(y_test, sc_y.inverse_transform(svr_predictions)))}")
# -

# ### Decision Tree Regression

X = df.iloc[:, :-1]
y = df.iloc[:, -1] # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # + from sklearn.tree import DecisionTreeRegressor dtr = DecisionTreeRegressor(random_state=0) dtr.fit(X_train, y_train) dtr_predictions = dtr.predict(X_test) # + from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error print(f"r2_score: {r2_score(y_test, dtr_predictions)}") print(f"mean_absolute_error: {mean_absolute_error(y_test, dtr_predictions)}") print(f"mean_squared_error: {mean_squared_error(y_test, dtr_predictions)}") print(f"Root Mean Square Error: {np.sqrt(mean_squared_error(y_test, dtr_predictions))}") # - # ### Random Forest Regression X = df.iloc[:, :-1] y = df.iloc[:, -1] # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # - from sklearn.ensemble import RandomForestRegressor rfr = RandomForestRegressor(n_estimators=10, random_state=0) rfr.fit(X_train, y_train) rfr_predictions = rfr.predict(X_test) # + from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error print(f"r2_score: {r2_score(y_test, rfr_predictions)}") print(f"mean_absolute_error: {mean_absolute_error(y_test, rfr_predictions)}") print(f"mean_squared_error: {mean_squared_error(y_test, rfr_predictions)}") print(f"Root Mean Square Error: {np.sqrt(mean_squared_error(y_test, rfr_predictions))}") # - # ---
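# The same four metrics are recomputed after every model above; a small helper keeps the comparison DRY. A sketch using the plain NumPy formulas (not the sklearn implementations, although they agree on these definitions):

```python
import numpy as np

def report_metrics(y_true, y_pred):
    """r2, MAE, MSE and RMSE from their textbook definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    mse = np.mean(resid ** 2)
    mae = np.mean(np.abs(resid))
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - np.sum(resid ** 2) / ss_tot
    return {'r2': r2, 'mae': mae, 'mse': mse, 'rmse': float(np.sqrt(mse))}

print(report_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```

Calling `report_metrics(y_test, lm_predictions)` and so on would replace each four-line print block with a single call.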
Regression ML Comparison.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# <a href="https://githubtocolab.com/cn-ufpe/cn-ufpe.github.io/blob/master/material/03_zeros_funcoes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>

# + [markdown] id="xMxNci9rOhCd"
# # **Zeros of Functions**

# + [markdown] id="DhWSyf4VQJxf"
# **What are zeros of functions?**
#
# A real number ξ is a zero, or root, of the function f(x) if f(ξ) = 0
#

# + id="Sa9S9qOLQbOm"
import numpy as np
import matplotlib.pyplot as plt # library for plotting graphs

# + [markdown] id="mnq0oVTiTxuv"
# **Initial approximation**
#
#
# * ***(a) Graphical Study***
#
# In the graphical analysis of the function f(x) we can use one of the following approaches:
# * (i) Sketch the graph of the function f(x) and locate the abscissas of the points where the curve crosses the x axis.
# * (ii) From the equation f(x) = 0, obtain an equivalent equation g(x) = h(x) and locate the points x where the two curves intersect:
# * f(ξ) = 0 <=> g(ξ) = h(ξ)
#
# -

# ### Example 1: $f(x) = x^3 - 9*x + 3.$
# Using the graph of the function f(x)

# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="qEkkByDPRMmv" outputId="f9afb801-122d-4d14-86a3-81345bba6583"
# define the function f(x)
def f(x):
    return x**3 - 9*x + 3

x = np.linspace(-4,4) # x-axis limits
plt.plot(x, f(x))
plt.grid()
plt.show()

# + [markdown] id="pLd-09m3TatY"
# The roots lie in the intervals:
#
# * ξ1 => (-4, -3)
# * ξ2 => (0, 1)
# * ξ3 => (2, 3)
# -

# ### Example: $g(x) = x^3$ and $h(x) = 9*x - 3$
# From the equation f(x) = 0, obtain the equivalent equation g(x) = h(x)

# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="dqm8t9U2SIok" outputId="a71601c7-c8df-4a3d-cc0b-53e306fd7ee1"
def g(x):
    return x**3

def h(x):
    return 9*x - 3

x = np.linspace(-4,4) # x-axis limits
plt.plot(x, g(x), color='blue')
plt.plot(x, h(x), color='red')
plt.grid()
plt.show()

# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="SSNA4CaCTZuj" outputId="8d521f7f-91ed-4c81-b8ec-27eb901e2af6"
# Plotting the graphs: f(x), g(x) and h(x)
x = np.linspace(-4,4) # x-axis limits
plt.plot(x, g(x), color = 'blue')
plt.plot(x, h(x), color = 'orange')
plt.plot(x, f(x), color = 'green')
plt.grid()
plt.show()

# + [markdown] id="q_U1Xz3OZ8xE"
# **Initial approximation**
#
#
# * ***(b) Analytic Study***
#
# To be certain of the location of the root we use the **Bolzano Theorem**:
# * "If $f(x)$ is a continuous function on some interval $[a, b]$ and changes sign at the endpoints of this interval, that is, $f(a)*f(b) < 0$, then there is at least one real root of $f(x)$ in $[a, b]$."
#
# Moreover, "if $f'(x)$ exists and keeps its sign on the interval $[a, b]$, there will be a single real root of $f(x)$ in it."
# [a, b] will be the separation interval.
# -

# ### Exercise: for $f(x) = x^3 - 9*x + 3$, are there roots in the intervals (a) [-5, -3], (b) [1, 2] and (c) [2.5, 3]?
# We can apply the Bolzano Theorem, f(a)*f(b) < 0, on the interval [a, b]

# + colab={"base_uri": "https://localhost:8080/"} id="Szse68rscfeX" outputId="4ba880ec-c019-4ae2-82ac-01956f8f5a96"
#(a) f(-5)*f(-3) < 0 ??
def f(x):
    return x**3 - 9*x + 3

if f(-5)*f(-3) < 0:
    print("There is a root in the interval [-5, -3]!")
else:
    print("We cannot tell whether there is a root in the interval [-5, -3]!")

#(b) f(1)*f(2) < 0 ?
#(c) f(2.5)*f(3) < 0 ?

# + [markdown] id="AN8qPfkMoH10"
# **Initial approximation**
#
#
# * ***(b) Analytic Study***
#
# "if $f'(x)$ exists and keeps its sign on the interval $[a, b]$, there will be a single real root of $f(x)$ in the interval."
# [a, b] will be the separation interval.
#
# Consider the function $f(x) = sin(x) + ln(x)$,
#
# $f'(x) = cos(x) + 1/x$

# + colab={"base_uri": "https://localhost:8080/"} id="dsOXOPBToaDx" outputId="373de978-c874-4012-854d-61a63d75b007"
# If f(x) = sin(x) + ln(x)
def df(x):
    return np.cos(x) + 1/x

# + [markdown] id="J9gwVkwljXot"
# Exercise:
# If f(x) = sin(x) + ln(x), is the derivative f'(x) always positive or always negative on the interval [0.2, 0.8]?

# + id="PoFLvFPZG6cH"
# Solution:
x = np.linspace(0.2, 0.8)
y = df(x)
plt.plot(x, y)
plt.show()

# + [markdown] id="YvGdnZUnjoxQ"
# **Stopping Criteria**
# The main stopping criteria of iterative methods for solving equations:
#
# * Number of iterations
# * Absolute error
# * Value of the image, |f(x)|
#

# + [markdown] id="adu40abUPgw_"
# **Bisection Method**
#
# The Bisection method falls into the category of bracketing methods. These methods start from a separation interval of a root of a given function and "break" it into two subintervals. They discard the subinterval that does not contain the sought root, repeat the process on the subinterval where the root lies, and so on.
In the case of bisection, this continues until the amplitude (the distance between the endpoints a and b) of the subinterval is as small as desired.

# + [markdown] id="sPL96y4dsirL"
# Exercise 2: Determine, using the Bisection method, the approximate value of the root of the functions below (import the math library):
# * (a) f(x) = x**2 - 5; on the interval I=[a, b]=[2, 3], using as stopping criterion (b-a)/2 <= E (error), E=0.01.
# * (b) f(x) = sin(x) + ln(x); on the interval I=[a, b]=[0.2, 0.8], using as stopping criterion c = b - a <= l (final amplitude), l = 0.05.
# * (c) f(x) = sin(x) + ln(x); on the interval I=[a, b]=[0.2, 0.8], using as stopping criterion f(x0) <= P2, where x0 = (a+b)/2 and P2 is the precision related to the distance of the image of x0 from the x axis, P2 = 0.01.
#
#

# + [markdown] id="Y8XIrtR207iN"
# **ANSWER:**
#
# (a) $f(x) = x^2 - 5$; on the interval $I=[2, 3]$, using as stopping criterion $(b-a)/2 <= E$ (error), with $E=0.01$

# + id="xNkvcRrQbNyG"
# Import the math library
import math

# + id="vNu73cI2bkPH"
# Define an interval [a, b] and an error E
a = 2
b = 3
E = 0.01

# + id="e0LlC0dRbknL"
# Define a function (ITEM (a))
def f(x):
    return x**2 - 5

# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="IVD3j2RUdZwM" outputId="8649cab2-878f-4be5-a31c-73a33449fe91"
# Plot the graph of f(x) = x**2 - 5
x = np.linspace(-4,4) # x-axis limits
plt.plot(x, f(x))
plt.grid()
plt.show()

# + colab={"base_uri": "https://localhost:8080/"} id="2XIS0QYCb4Dz" outputId="c0c18716-0dc5-4c81-8623-45e8c5b107de"
# Bolzano Theorem and Bisection Method
def bissecao(f, a, b, E):
    # Bolzano check: the method needs a sign change on [a, b]
    if f(a) * f(b) > 0:
        return None
    # halve the interval until half of its width is within the error E
    while (b - a) / 2 > E:
        x0 = (a + b) / 2
        if f(a) * f(x0) < 0:
            b = x0  # the root is in the left half
        else:
            a = x0  # the root is in the right half
    return (a + b) / 2

# Define an interval [a, b] and an error E
def f(x):
    return x**2 - 5

a = 2
b = 3
E = 1e-10
xi = bissecao(f, a, b, E)
print('the value of the root is ', xi)
print('f(xi) = ', f(xi))

# + [markdown] id="sqgbMKJv1m0i"
# **ANSWER:**
#
# (b) f(x) = sin(x) + ln(x); on the interval I=[a, b]=[0.2, 0.8], using as stopping criterion c = b - a <= l (final amplitude),
l = 0.05.

# + id="kIou9Qnu41qp"


# + [markdown] id="-ennJGKs1oNc"
# **ANSWER:**
#
# (c) f(x) = sin(x) + ln(x); on the interval I=[a, b]=[0.2, 0.8], using as stopping criterion f(x0) <= P2, where x0 = (a+b)/2 and P2 is the precision related to the distance of the image of x0 from the x axis, P2 = 0.01.
#
#

# + id="VdzCrI6o42VM"
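# Answer (b) above is left blank; a hedged sketch of one possible solution, stopping once the
# interval width b - a is at most l (the helper name is ours):

```python
import math

def bisect_amplitude(f, a, b, l):
    # bisection that stops once the bracketing interval is narrower than l
    if f(a) * f(b) > 0:
        raise ValueError("no sign change on [a, b]")
    while b - a > l:
        c = (a + b) / 2
        if f(a) * f(c) < 0:
            b = c  # the root is in the left half
        else:
            a = c  # the root is in the right half
    return (a + b) / 2

root = bisect_amplitude(lambda x: math.sin(x) + math.log(x), 0.2, 0.8, 0.05)
print(root)  # 0.59375; sin(x) + ln(x) changes sign near x ≈ 0.58
```

# With l = 0.05 the loop runs four times before the interval [0.575, 0.6125] is narrow enough.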
material/03_zeros_funcoes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # <hr style="height:3px;border:none;color:#333;background-color:#333;" /> # <img style=" float:right; display:inline" src="http://opencloud.utsa.edu/wp-content/themes/utsa-oci/images/logo.png"/> # # ### **University of Texas at San Antonio** # <br/> # <br/> # <span style="color:#000; font-family: 'Bebas Neue'; font-size: 2.5em;"> **Open Cloud Institute** </span> # # <hr style="height:3px;border:none;color:#333;background-color:#333;" /> # ### Machine Learning/BigData EE-6973-001-Fall-2016 # # <br/> # <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **<NAME>, Ph.D.** </span> # # # <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **<NAME>, Research Fellow** </span> # # # <hr style="height:1.5px;border:none;color:#333;background-color:#333;" /> # <hr style="height:1.5px;border:none;color:#333;background-color:#333;" /> # <span style="color:#000; font-family: 'Bebas Neue'; font-size: 2em;"> **Real Time Image Classification using DNN through ROS for Drones and Ground Robot** </span> # <br/> # <br/> # <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.6em;"> <NAME>, <NAME> </span> # <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.4em;"> *Open Cloud Institute, University of Texas at San Antonio, San Antonio, Texas, USA* </span> # <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.4em;"> {dxq821, <EMAIL> </span> # <br/> # <br/> # <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Dataset:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;">The image data can be found in http://www.vision.caltech.edu/Image_Datasets/Caltech101/. 
The dataset contains pictures of objects belonging to 101 categories, and about 40 to 800 images per category. </span>
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Outcome:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> Adding another layer of intelligence to the Kobuki Turtlebot by giving it an image identification and classification capability. </span>

# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Project Definition:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> Using Convolutional Neural Networks, we train our network to recognize different classes of images. This is done with the provided training sets, which contain labelled images classifying objects into 101 classes. Other training sets are freely available online, which we may decide to use later on. We can design and configure this Convolutional Neural Network using TensorFlow, a current standard framework for machine learning.</span>
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> Once this intelligence is designed and trained, we can create a ROS package containing the algorithm. ROS (Robot Operating System) is an interface to communicate effectively with robots, and a ROS package can take in different inputs such as test images from external sources. This ROS package can be run on the Kobuki ground robot, which feeds it test images taken by its onboard camera, and we can analyze the results of the trained neural network.</span>
#
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> With the project defined above, we aim to add an extension to the visual intelligence of the ground robot.
Currently, the robot does not have in-built intelligence to identify images or classify them, and with this project, we aim to add the feature to the robot. </span>
project/Real Time Image Classification using DNN through ROS for Drones and Ground Robot.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # This is the in situ and SSS collocation code. # import os import numpy as np import matplotlib.pyplot as plt import datetime as dt import pandas as pd import xarray as xr import scipy from glob import glob import cartopy.crs as ccrs from pyresample.geometry import AreaDefinition from pyresample import image, geometry, load_area, save_quicklook, SwathDefinition, area_def2basemap from pyresample.kd_tree import resample_nearest from math import radians, cos, sin, asin, sqrt from scipy import spatial import os.path from os import path # # Define a function to read in insitu data # - Read in the Saildrone USV file either from a local disc or using OpenDAP. # - add room to write collocated data to in situ dataset # def read_usv(iusv): filename_usv_list = ['https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/insitu/L2/spurs2/saildrone/SPURS2_Saildrone1005.nc', 'https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/insitu/L2/spurs2/saildrone/SPURS2_Saildrone1006.nc', 'https://podaac-opendap.jpl.nasa.gov/opendap/allData/insitu/L2/saildrone/Baja/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc', 'F:/data/cruise_data/access/CTD_casts_ALL_NASA_update_010819.xlsx', 'F:/data/cruise_data/saildrone/noaa_arctic/saildrone_PMEL_Arctic_2015_126.nc', 'F:/data/cruise_data/saildrone/noaa_arctic/saildrone_PMEL_Arctic_2016_126.nc', 'F:/data/cruise_data/saildrone/noaa_arctic/saildrone_PMEL_Arctic_2016_128.nc', 'F:/data/cruise_data/saildrone/noaa_arctic/saildrone_PMEL_Arctic_2015_128.nc'] name_usv_list = ['SPURS2_1005','SPURS2_1006','baja','access', 'arctic2015_126', 'arctic2016_126', 'arctic2016_128', 'arctic2015_128'] filename_usv = filename_usv_list[iusv] if iusv==3: df = pd.read_excel(filename_usv, sheet_name='data') ds_usv = 
df.to_xarray() ds_usv = ds_usv.where(ds_usv.Depth==-2,drop=True) ds_usv = ds_usv.swap_dims({'index':'Date'}).rename({'Date':'time','Longitude':'lon','Latitude':'lat','Salinity':'salinity'}).sortby('time') elif iusv<3: ds_usv = xr.open_dataset(filename_usv) ds_usv.close() if iusv==2: ds_usv = ds_usv.isel(trajectory=0).swap_dims({'obs':'time'}).rename({'longitude':'lon','latitude':'lat','SAL_MEAN':'salinity'}) ds_usv = ds_usv.sel(time=slice('2018-04-12T02','2018-06-10T18')) #get rid of last part and first part where USV being towed else: ds_usv = ds_usv.rename({'longitude':'lon','latitude':'lat','sss':'salinity'}) elif iusv>3: ds_usv = xr.open_dataset(filename_usv) ds_usv.close() ds_usv = ds_usv.isel(trajectory=0).swap_dims({'obs':'time'}).rename({'longitude':'lon','latitude':'lat','sal_mean':'salinity'}) # ds_usv['lon'] = ds_usv.lon.interpolate_na(dim='time',method='linear') #there are 6 nan values # ds_usv['lat'] = ds_usv.lat.interpolate_na(dim='time',method='linear') #add room to write collocated data information ilen = ds_usv.time.shape[0] ds_usv['deltaT']=xr.DataArray(np.ones(ilen)*999999,coords={'time':ds_usv.time},dims=('time')) ds_usv['smap_SSS']=xr.DataArray(np.ones(ilen)*999999,coords={'time':ds_usv.time},dims=('time')) ds_usv['smap_name']=xr.DataArray(np.empty(ilen,dtype=str),coords={'time':ds_usv.time},dims=('time')) ds_usv['smap_dist']=xr.DataArray(np.ones(ilen)*999999,coords={'time':ds_usv.time},dims=('time')) ds_usv['smap_ydim']=xr.DataArray(np.ones(ilen)*999999,coords={'time':ds_usv.time},dims=('time')) ds_usv['smap_xdim']=xr.DataArray(np.ones(ilen)*999999,coords={'time':ds_usv.time},dims=('time')) ds_usv['smap_iqc_flag']=xr.DataArray(np.ones(ilen)*999999,coords={'time':ds_usv.time},dims=('time')) #subset data to SMAP observational period ds_usv = ds_usv.sel(time=slice('2015-05-10','2018-12-31')) return ds_usv,name_usv_list[iusv] # ## explore the in situ data and quickly plot using cartopy # 
#filename='F:/data/cruise_data/saildrone/west_coast/saildrone_west_coast_survey_2018_2506_7567_f05c.nc' #3filename='F:/data/cruise_data/saildrone/noaa_arctic/saildrone_PMEL_Arctic_2015_128.nc' #filename='https://podaac-opendap.jpl.nasa.gov:443/opendap/allData/insitu/L2/spurs2/saildrone/SPURS2_Saildrone1006.nc' #ds=xr.open_dataset(filename) #print(ds) #ds #plt.plot(ds.longitude,ds.latitude,'.') #print(ds) #print(ds.obs.min().data,ds.obs.max().data) #ds_usv = ds.swap_dims({'obs':'time'}) #ds_usv = ds_usv.sel(time=slice('2015-05-10','2018-12-31')) #plt.plot(ds.time[9009:9385]) #plt.plot(ds.obs[9000:9400]) #print(ds.time[9008:9011]) #print(ds.time[-10:].data) #import collections #print([item for item, count in collections.Counter(ds_usv.time.data).items() if count > 1]) for iusv in range(7): ds_usv,name_usv = read_usv(iusv) print(iusv,name_usv) # print(ds_usv.time.min().data,ds_usv.time.max().data) ds_usv = ds_usv.sel(time=slice('2015-05-10','2018-12-31')) print(ds_usv.time.min().data,ds_usv.time.max().data) #intialize grid for iusv in range(7): area_def = load_area('areas.cfg', 'pc_world') rlon=np.arange(-180,180,.1) rlat=np.arange(90,-90,-.1) for isat in range(0,2): ds_usv,name_usv = read_usv(iusv) if isat==0: sat_directory = 'F:/data/sat_data/smap/SSS/L2/RSS/V3/40km/' # sat_directory = 'Z:/SalinityDensity/smap/L2/RSS/V3/SCI/40KM/' fileout = 'F:/data/cruise_data/saildrone/sat_collocations/'+name_usv+'_rss40km_filesave2.nc' file_end = '/*.nc' if isat==1: sat_directory = 'F:/data/sat_data/smap/SSS/L2/JPL/V4.2/' # sat_directory = 'Z:/SalinityDensity/smap/L2/JPL/V4.2/' fileout = 'F:/data/cruise_data/saildrone/sat_collocations/'+name_usv+'_jplv4.2_filesave2.nc' file_end = '/*.h5' if path.exists(fileout): continue #init filelist file_save=[] #search usv data minday,maxday = ds_usv.time[0],ds_usv.time[-1] usv_day = minday print(minday.data,maxday.data) while usv_day<=maxday: # check_day = 
np.datetime64(str(usv_day.dt.year.data)+'-'+str(usv_day.dt.month.data).zfill(2)+'-'+str(usv_day.dt.day.data).zfill(2)) # usv_day1 = usv_day + np.timedelta64(1,'D') # check_day1 = np.datetime64(str(usv_day1.dt.year.data)+'-'+str(usv_day1.dt.month.data).zfill(2)+'-'+str(usv_day1.dt.day.data).zfill(2)) # ds_day = ds_usv.sel(time=slice(check_day,check_day1)) ds_day = ds_usv.sel(time=slice(usv_day-np.timedelta64(1,'D'),usv_day+np.timedelta64(1,'D'))) ilen = ds_day.time.size if ilen<1: #don't run on days without any data continue minlon,maxlon,minlat,maxlat = ds_day.lon.min().data,ds_day.lon.max().data,ds_day.lat.min().data,ds_day.lat.max().data #caluclate filelist filelist = glob(sat_directory+str(usv_day.dt.year.data)+'/'+str(usv_day.dt.dayofyear.data)+file_end) x,y,z = [],[],[] for file in filelist: file.replace('\\','/') ds = xr.open_dataset(file) ds.close() if isat==0: #change RSS data to conform with JPL definitions ds = ds.isel(look=0) ds = ds.rename({'cellon':'lon','cellat':'lat','sss_smap':'smap_sss'}) ds['lon']=np.mod(ds.lon+180,360)-180 x = ds.lon.fillna(-89).data y = ds.lat.fillna(-89).data z = ds.smap_sss.data lons,lats,data = x,y,z swath_def = SwathDefinition(lons, lats) result1 = resample_nearest(swath_def, data, area_def, radius_of_influence=20000, fill_value=None) da = xr.DataArray(result1,name='sss',coords={'lat':rlat,'lon':rlon},dims=('lat','lon')) subset = da.sel(lat = slice(maxlat,minlat),lon=slice(minlon,maxlon)) num_obs = np.isfinite(subset).sum() if num_obs>0: file_save = np.append(file_save,file) usv_day += np.timedelta64(1,'D') df = xr.DataArray(file_save,name='filenames') df.to_netcdf(fileout) # ## Now, loop through only the files that we know have some data in the region of interest. Use the fast search kdtree which is part of pyresample software, but I think maybe comes originally from sci-kit-learn. # # - read in the in situ data # - read in a single orbit of satellite data # - kdtree can't handle it when lat/lon are set to nan. 
I frankly have no idea why there is orbital data for both the JPL and RSS products that have nan for the geolocation. That isn't normal. But, okay, let's deal with it. # - stack the dataset scanline and cell positions into a new variable 'z' # - drop all variables from the dataset when the longitude is nan # - set up the tree # - loop through the orbital data # - only save a match if it is less than 0.25 deg distance AND time is less than any previous match # - save the satellite indices & some basic data onto the USV grid # for num_usv in range(8): for isat in range(2): ds_usv,usv_name = read_usv(num_usv) if isat==0: filelist = 'F:/data/cruise_data/saildrone/sat_collocations/'+usv_name+'rss40km_filesave2.nc' fileout = 'F:/data/cruise_data/saildrone/sat_collocations/'+usv_name+'rss40km_usv2.nc' if isat==1: filelist = 'F:/data/cruise_data/saildrone/sat_collocations/'+usv_name+'jplv4.2_filesave2.nc' fileout = 'F:/data/cruise_data/saildrone/sat_collocations/'+usv_name+'jplv42_usv2.nc' df = xr.open_dataset(filelist) print(isat) for file2 in df.filenames.data: file = file2 file.replace('\\','/') ds = xr.open_dataset(file) ds.close() if isat==0: #change RSS data to conform with JPL definitions ds = ds.isel(look=0) ds = ds.rename({'iqc_flag':'quality_flag','cellon':'lon','cellat':'lat','sss_smap':'smap_sss','ydim_grid':'phony_dim_0','xdim_grid':'phony_dim_1'}) ds['lon']=np.mod(ds.lon+180,360)-180 if isat==1: #change RSS data to conform with JPL definitions ds = ds.rename({'row_time':'time'}) #stack xarray dataset then drop lon == nan ds2 = ds.stack(z=('phony_dim_0', 'phony_dim_1')).reset_index('z') #drop nan ds_drop = ds2.where(np.isfinite(ds2.lon),drop=True) lats = ds_drop.lat.data lons = ds_drop.lon.data inputdata = list(zip(lons.ravel(), lats.ravel())) tree = spatial.KDTree(inputdata) orbit_time = ds.time.max().data-np.timedelta64(1,'D') orbit_time2 = ds.time.max().data+np.timedelta64(1,'D') usv_subset = ds_usv.sel(time=slice(orbit_time,orbit_time2)) ilen = 
ds_usv.time.size
            for iusv in range(ilen):
                if (ds_usv.time[iusv]<orbit_time) or (ds_usv.time[iusv]>orbit_time2):
                    continue
                pts = np.array([ds_usv.lon[iusv], ds_usv.lat[iusv]])
#                pts = np.array([ds_usv.lon[iusv]+360, ds_usv.lat[iusv]])
                rdist, i = tree.query(pts, k=1)  # distance to, and index of, the nearest satellite pixel
                #don't use matchups more than 0.25 deg (roughly 25 km) away
                if rdist>.25:
                    continue
                #use .where to find the original indices of the matched data point
                #find by matching sss and lat, just randomly chosen variables, you could use any
                result = np.where((ds.smap_sss == ds_drop.smap_sss[i].data) & (ds.lat == ds_drop.lat[i].data))
                listOfCoordinates = list(zip(result[0], result[1]))
                if len(listOfCoordinates)==0:
                    continue
                ii, jj = listOfCoordinates[0][0],listOfCoordinates[0][1]
                if isat==0:
                    deltaTa = ((ds_usv.time[iusv]-ds.time[ii,jj]).data)/ np.timedelta64(1,'m')
                if isat==1:
                    deltaTa = ((ds_usv.time[iusv]-ds.time[ii]).data)/ np.timedelta64(1,'m')
                if np.abs(deltaTa)<np.abs(ds_usv.deltaT[iusv].data):
                    ds_usv.deltaT[iusv]=deltaTa
                    ds_usv.smap_SSS[iusv]=ds.smap_sss[ii,jj]
                    ds_usv.smap_iqc_flag[iusv]=ds.quality_flag[ii,jj]
                    ds_usv.smap_name[iusv]=file2
                    ds_usv.smap_dist[iusv]=rdist
                    ds_usv.smap_ydim[iusv]=ii
                    ds_usv.smap_xdim[iusv]=jj
        ds_usv.to_netcdf(fileout)

# +
for num_usv in range(7):
    for isat in range(2):
        ds_usv,usv_name = read_usv(num_usv)
        if isat==0:
            file = 'F:/data/cruise_data/saildrone/sat_collocations/'+usv_name+'_rss40km_usv2.nc'
            fileout = 'F:/data/cruise_data/saildrone/sat_collocations/'+usv_name+'_rss40km_usv2_norepeats.nc'
        if isat==1:
            file = 'F:/data/cruise_data/saildrone/sat_collocations/'+usv_name+'_jplv42_usv2.nc'
            fileout = 'F:/data/cruise_data/saildrone/sat_collocations/'+usv_name+'_jplv42_usv2_norepeats.nc'
        ds_usv=xr.open_dataset(file)
        ds_usv.close()
        ds_usv = ds_usv.where(ds_usv.smap_SSS<10000,np.nan)
        ilen,index = ds_usv.dims['time'],0
        ds_tem = ds_usv.copy(deep=True)
        duu, duu2, duv1, duv2, dlat, dlon, dut = [],[],[],[],[],[],np.empty((),dtype='datetime64')
        index=0
        while index <= ilen-2:
            index += 1
            if np.isnan(ds_usv.smap_SSS[index]):
                continue
            if np.isnan(ds_usv.smap_xdim[index]):
                continue
            result = np.where((ds_usv.smap_xdim == ds_tem.smap_xdim[index].data) & (ds_usv.smap_ydim == ds_tem.smap_ydim[index].data))
            duu=np.append(duu,ds_usv.smap_SSS[result[0][0]].data)
            duu2=np.append(duu2,ds_usv.smap_iqc_flag[result[0][0]].data)
            duv1=np.append(duv1,ds_usv.salinity[result].mean().data)  #the collocation files store the USV data under the renamed variable 'salinity'
            dlat=np.append(dlat,ds_usv.lat[result].mean().data)
            dlon=np.append(dlon,ds_usv.lon[result].mean().data)
            dut=np.append(dut,ds_usv.time[result].mean().data)
            ds_usv.smap_SSS[result]=np.nan
        dut2 = dut[1:] #drop the first element, a leftover from how the array was initialized
        ds_new=xr.Dataset(data_vars={'smap_SSS': ('time',duu),'smap_iqc_flag': ('time',duu2), 'SAL_MEAN':('time',duv1), 'lon': ('time',dlon), 'lat': ('time',dlat)}, coords={'time':dut2})
        ds_new.to_netcdf(fileout)
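# The matchup rule used above (nearest pixel within 0.25 deg, kept only if closer in time than
# any earlier match) can be sketched in plain Python; the helper names are ours, with a
# brute-force search standing in for the KDTree query:

```python
def nearest(pt, candidates):
    # brute-force stand-in for tree.query: (distance, index) of the closest candidate
    return min((((lon - pt[0]) ** 2 + (lat - pt[1]) ** 2) ** 0.5, i)
               for i, (lon, lat) in enumerate(candidates))

def accept(dist, delta_t, best_delta_t, max_dist=0.25):
    # mirror of the two checks in the loop: close enough in space, nearer in time
    return dist <= max_dist and abs(delta_t) < abs(best_delta_t)

# a USV point and two satellite pixel locations (degrees)
dist, i = nearest((10.05, 5.05), [(10.0, 5.0), (12.0, 5.0)])
```

# Here `best_delta_t` starts at the 999999 fill value, so the first in-range pixel is always
# accepted and later passes only replace it when they are closer in time.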
.ipynb_checkpoints/just run files-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rgurve/yolo-v3-custom-object-detection/blob/main/yolo_v3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="DTjamaU7_pz4"
#
#
# # Object detection project using yolo-v3
# 1] Create a new folder in file explorer and name it.
#
# 2] We download photos with Fatkun, a Chrome extension (good resolution): 3 classes and 20 photos each. Put the photos in that folder.
#
# 3] For labelling and cropping photos we download the software https://tzutalin.github.io/labelImg/ . Click on the rectangle icon, crop the faces and label them.
#
# 4] There will be a text file next to the photos that will classify them.
#
# 5] There will be a text file at the end named classes.txt. Open that file in Notepad, rename it to classes.names and save it. After that, add that file to new.zip. Also rename all the photos in chronological order.
#
# 6] Create a folder named yolo_custom_model_training in Google Drive. Within it, create new folders named custom_data, darknet, backup and weights. Create a new archive named new.zip in file explorer, copy all the photos into it and upload it to yolo_custom_model_training in Drive.
#
# 7] https://github.com/jakkcoder/training_yolo_custom_object_detection_files Click on this link, click on creating-train-and-test-txt.py, open it with Notepad and change the path to custom_data. After that click on creating-files-data-and-name.py and change the path to custom_data. Now upload these two files to the custom_data folder of Google Drive.
#
#
#
#

# + colab={"base_uri": "https://localhost:8080/"} id="IYvGBHT48lV-" outputId="0db64151-d551-44b9-d582-0ff9b391de91"
from google.colab import drive
drive.mount('/content/drive')

# + [markdown] id="naYC2Q19NFHV"
# **After mounting**
#
# We unzip the file
#
#

# + colab={"base_uri": "https://localhost:8080/"} id="ZdfKeB9E9Evv" outputId="8438979a-950f-47ec-fceb-e7478331d2f2"
# !unzip '/content/drive/My Drive/yolo_custom_model_training/new.zip' -d '/content/drive/My Drive/yolo_custom_model_training/custom_data'

# + [markdown] id="G6k-XOu6Nol7"
# # Cloning darknet repository from github
#
# Darknet is used for the configuration files. YOLOv3 was created on Darknet, an open-source neural network framework, to train the detector.

# + colab={"base_uri": "https://localhost:8080/"} id="BYUU63Ru9pRl" outputId="d2a65d73-a314-4bf0-afb6-d18a14368648"
# !git clone 'https://github.com/AlexeyAB/darknet.git' '/content/drive/My Drive/yolo_custom_model_training/darknet'

# + [markdown] id="BdowgSVDOpJo"
# Using the cd command to change directory

# + colab={"base_uri": "https://localhost:8080/"} id="xdOHFtEL-DiG" outputId="ee7c45cc-0a3f-4902-a3c1-f8cba82357f2"
# %cd '/content/drive/My Drive/yolo_custom_model_training/darknet'

# + [markdown] id="vl0nFoTePLGv"
#
#
# !make is typically used to build executable programs and libraries from source code. Change GPU=1, CUDNN=1 and OPENCV=1 in the Makefile.

# + colab={"base_uri": "https://localhost:8080/"} id="jNVf7raW-JIM" outputId="eb8c5750-3eee-403c-a7a7-f2ea72bbcd4d"
# !make

# + [markdown] id="C59A8cG6Pd6I"
# Repeat to execute the changed file. Also, in order to apply transfer learning, in the cfg file we change batch=2, subdivisions=8, classes=3, max_batches=6000 (2000*classes), steps=5800,6200, and width=416 and height=416 (both divisible by 32). Using the formula (classes+5)*3, change the filters to 24 in the convolutional layer before each of the three [yolo] layers (approx line 620), along with classes=3.
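# The (classes + 5) * 3 filter rule above can be checked with a few lines of Python (the helper
# name is ours, not part of darknet):

```python
def yolo_filters(num_classes, boxes_per_cell=3, box_params=5):
    # each predicted box carries x, y, w, h and an objectness score (5 values),
    # plus one confidence per class; three boxes are predicted per grid cell
    return (num_classes + box_params) * boxes_per_cell

print(yolo_filters(3))   # 3 custom classes -> 24 filters
print(yolo_filters(80))  # the 80 COCO classes -> 255 filters
```

# This is why the stock cfg ships with filters=255: it was written for the 80 COCO classes.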
# + colab={"base_uri": "https://localhost:8080/"} id="zEiF8CWT_DSc" outputId="f9e4b28e-5b1d-4942-b97e-3965feca4b05" # !make # + colab={"base_uri": "https://localhost:8080/"} id="iauMi1gh_zoc" outputId="d1ddd985-24f0-46e3-fe23-6c72915a3d7e" # %cd /content/drive/My Drive/yolo_custom_model_training # + [markdown] id="hwQbpj60SSxP" # # Using github link to create files and changing path: # # https://github.com/jakkcoder/training_yolo_custom_object_detection_files # # Use the following link to create files using a text editor like notepad/wordpad,etc.Later store it. # # Execute the following two codes to label the data # + id="obzoXcCkCEZ1" # !python custom_data/creating-files-data-and-name.py # + [markdown] id="dgmi4wJmVMIE" # With the following command the data will get split into 85:15 # + id="6wYcOwJ_CRU8" # !python custom_data/creating-train-and-test-txt-files.py # + [markdown] id="Tl_m9K9xTdsX" # **Change directory** # # Download darknet pre-trained model # + colab={"base_uri": "https://localhost:8080/"} id="W6YrfMVVCcqn" outputId="9405da47-fe86-4af0-f856-e618e75911e4" # %cd /content/drive/My Drive/yolo_custom_model_training/weights # !wget https://pjreddie.com/media/files/darknet53.conv.74 # + colab={"base_uri": "https://localhost:8080/"} id="mT_EyiEVDC4o" outputId="003971dc-b039-4de7-928d-6727cc6f6d4c" # %cd /content/drive/My Drive/yolo_custom_model_training # + [markdown] id="gHx-5THcUks2" # To check whether the function will work, we use the code: # + colab={"base_uri": "https://localhost:8080/"} id="9XquFKXADkBI" outputId="65ec4dd9-aa5a-4f74-84af-d00336023eff" # !darknet/darknet # + [markdown] id="_fmzX_nOUo__" # # Use the command below to start training the model: # # It might possibly take 2-6 hours to train depending on how many classes we have. 
# + colab={"base_uri": "https://localhost:8080/"} id="N-d_8fr9DrvK" outputId="caf7e0b7-7072-4cf4-a204-7a7b33176fef" # !darknet/darknet detector train custom_data/labelled_data.data darknet/cfg/yolov3.cfg weights/darknet53.conv.74 -dont_show # + [markdown] id="wuySzywVVSdp" # # Final steps to end the training: # # Now the weights will be stored in the back up file in the form of steps for every 1000. We now download the final_weights and store it as a folder in our file explorer. The entire training model is contained in this folder. We will now use the jupyter notebook in order to run our object detection live. # # Note : Once the model is run in the jupyter notebook then the webcam of the laptop will get activated . We then present a photo/video in front of the webcam to check for accuracy. # + [markdown] id="jlGlxqRLXMIr" # # END # # + id="Rhhv_-eVXSrO"
yolo_v3.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Twitter Sentiment Analysis

import twitter
import pandas as pd
import numpy as np

### read csv input file:
df_source = pd.read_csv("/Users/adrientalbot/Desktop/Studies/Imperial/Capstone/capital_pilot/RefData.csv")

buzzwords = df_source['Buzzwords'].iloc[0]
buzzwords_list = buzzwords.split(",")
buzzwords_list

# ### Source
# https://towardsdatascience.com/creating-the-twitter-sentiment-analysis-program-in-python-with-naive-bayes-classification-672e5589a7ed

# ### Authenticating Twitter API

# +
# Authenticating our twitter API credentials
twitter_api = twitter.Api(consumer_key='f2ujCRaUnQJy4PoiZvhRQL4n4',
                          consumer_secret='<KEY>',
                          access_token_key='<KEY>',
                          access_token_secret='<KEY>')

# Test authentication to make sure it was successful
print(twitter_api.VerifyCredentials())
# -

# ### Building the Test Set

#We first build the test set, consisting of only 100 tweets for simplicity.
#Note that we can only download 180 tweets every 15min.
def buildTestSet(search_keyword):
    try:
        tweets_fetched = twitter_api.GetSearch(search_keyword, count = 100)
        print("Fetched " + str(len(tweets_fetched)) + " tweets for the term " + search_keyword)
        return [{"text":status.text, "label":None} for status in tweets_fetched]
    except:
        print("Unfortunately, something went wrong..")
        return None

# +
#Testing out fetching the test set. The function below prints out the first 5 tweets in our test set.
search_term = input("Enter a search keyword:")
testDataSet = buildTestSet(search_term)

print(testDataSet[0:5])
# -

testDataSet[0]

# +
#df = pd.DataFrame(list())
#df.to_csv('tweetDataFile.csv')
# -

# ### Building the Training Set
# We will be using a downloadable training set, consisting of 5,000 tweets. These tweets have already been labelled as positive/negative.
We use this training set to calculate the posterior probabilities of each word appearing and its respective sentiment. # + #As Twitter doesn't allow the storage of the tweets on personal drives, we have to create a function to download #the relevant tweets that will be matched to the Tweet IDs and their labels, which we have. def buildTrainingSet(corpusFile, tweetDataFile, size): import csv import time count = 0 corpus = [] with open(corpusFile,'r') as csvfile: lineReader = csv.reader(csvfile,delimiter=',', quotechar="\"") for row in lineReader: if count <= size: corpus.append({"tweet_id":row[2], "label":row[1], "topic":row[0]}) count += 1 else: break rate_limit = 180 sleep_time = 900/180 trainingDataSet = [] for tweet in corpus: try: status = twitter_api.GetStatus(tweet["tweet_id"]) print("Tweet fetched" + status.text) tweet["text"] = status.text trainingDataSet.append(tweet) time.sleep(sleep_time) except: continue # now we write them to the empty CSV file with open(tweetDataFile,'w') as csvfile: linewriter = csv.writer(csvfile,delimiter=',',quotechar="\"") for tweet in trainingDataSet: try: linewriter.writerow([tweet["tweet_id"], tweet["text"], tweet["label"], tweet["topic"]]) except Exception as e: print(e) return trainingDataSet # + #This function is used to download the actual tweets. It takes hours to run and we only need to run it once #in order to get all 5,000 training tweets. The 'size' parameter below is the number of tweets that we want to #download. If 5,000 => set size=5,000 ''' corpusFile = "corpus.csv" tweetDataFile = "tweetDataFile.csv" trainingData = buildTrainingSet(corpusFile, tweetDataFile, 5000) ''' #When this code stops running, we will have a tweetDataFile.csv full of the tweets that we downloaded. 
# - #This line counts the number of tweets and their labels in the Corpus.csv file that we originally downloaded corp = pd.read_csv("corpus.csv", header = 0 , names = ['topic','label', 'tweet_id'] ) corp['label'].value_counts() #As a check, we look at the first 5 lines in our new tweetDataFile.csv trainingData_copied = pd.read_csv("tweetDataFile.csv", header = None, names = ['tweet_id', 'text', 'label', 'topic']) trainingData_copied.head() len(trainingData_copied) #We check the number of tweets by each label in our training set trainingData_copied['label'].value_counts() # + df = trainingData_copied.copy() lst_labels = df['label'].unique() count_rows_keep = df['label'].value_counts().min() neutral_df = df[df['label'] == 'neutral'].sample(n= count_rows_keep , random_state = 2) irrelevant_df = df[df['label'] == 'irrelevant'].sample(n= count_rows_keep , random_state = 3) negative_df = df[df['label'] == 'negative'].sample(n= count_rows_keep , random_state = 1) positive_df = df[df['label'] == 'positive'].sample(n= count_rows_keep , random_state = 1) lst_df = [neutral_df, irrelevant_df, negative_df, positive_df] trainingData_copied = pd.concat(lst_df) trainingData_copied['label'].value_counts() # - ''' def oversample(df): lst_labels = df['label'].unique() for x in lst_labels: if len(df[df['label'] == x]) < df['label'].value_counts().max(): df=df.append(df[df['label'] == x]*((df['label'].value_counts().max())/ len(df[df['label'] == 'x']))) return df ''' ''' def undersample(df): lst_labels = df['label'].unique() for x in lst_labels: if len(df[df['label'] == 'x']) > df['label'].value_counts().min(): count_rows_keep = df['label'].value_counts().min() sample = df[df['label'] == 'x'].sample(n= count_rows_keep , random_state = 1) index_drop = pd.concat([df[df['label'] == 'x'], sample]).drop_duplicates(keep=False).index df = df.drop(index_drop) return df ''' trainingData_copied = trainingData_copied.to_dict('records') # ### Pre-processing # Here we use the NLTK library to filter 
for keywords and remove irrelevant words in tweets. We also remove punctuation and emojis, as they cannot be classified by this model.

# +
import re # a library that makes parsing strings and modifying them more efficient

from nltk.tokenize import word_tokenize
from string import punctuation
from nltk.corpus import stopwords
import nltk # Natural Language Toolkit that takes care of any processing that we need to perform on text
# to change its form or extract certain components from it.

# nltk.download('popular')
# We need this if certain nltk libraries are not installed.

class PreProcessTweets:
    def __init__(self):
        self._stopwords = set(stopwords.words('english') + list(punctuation) + ['AT_USER','URL'])

    def processTweets(self, list_of_tweets):
        processedTweets = []
        for tweet in list_of_tweets:
            processedTweets.append((self._processTweet(tweet["text"]), tweet["label"]))
        return processedTweets

    def _processTweet(self, tweet):
        tweet = tweet.lower() # convert text to lower-case
        tweet = re.sub(r'((www\.[^\s]+)|(https?://[^\s]+))', 'URL', tweet) # replace URLs with the placeholder URL
        tweet = re.sub(r'@[^\s]+', 'AT_USER', tweet) # replace usernames with the placeholder AT_USER
        tweet = re.sub(r'#([^\s]+)', r'\1', tweet) # remove the # in #hashtag
        tweet = word_tokenize(tweet) # split the tweet into a list of words (tokens)
        return [word for word in tweet if word not in self._stopwords]
# -

# Here we call the function to pre-process both our training and our test set.
tweetProcessor = PreProcessTweets()
preprocessedTrainingSet = tweetProcessor.processTweets(trainingData_copied)
preprocessedTestSet = tweetProcessor.processTweets(testDataSet)

# ### Building the Naive Bayes Classifier
# We apply a classifier based on Bayes' Theorem, hence the name. It lets us find the posterior probability of an event occurring (here, the sentiment of a tweet being positive, negative, neutral or irrelevant) given prior probabilities that we already know.
#
# The posterior probability is calculated as follows:
# $P(A|B) = \frac{P(B|A)\times P(A)}{P(B)}$
#
# The final sentiment is assigned based on the class with the highest posterior probability for the tweet.

# #### To read more about the Bayes Classifier in the context of classification:
# https://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html

# ### Build the vocabulary

# +
# Here we build a vocabulary (a list of words) of all words present in the training set.
import nltk

def buildVocabulary(preprocessedTrainingData):
    all_words = []

    for (words, sentiment) in preprocessedTrainingData:
        all_words.extend(words)

    wordlist = nltk.FreqDist(all_words)
    word_features = wordlist.keys()

    return word_features

# This function collects a list of all words (all_words) and then turns it into a frequency distribution (wordlist).
# word_features is the collection of distinct words, i.e. the keys of that frequency distribution.
# -

# ### Matching tweets against our vocabulary
# Here we go through all the words in the vocabulary (i.e. our word_features list), comparing every word against the tweet at hand, and associate a boolean with each word as follows:
#
# label 1 (true): if word in vocabulary occurs in tweet
#
# label 0 (false): if word in vocabulary does not occur in tweet

def extract_features(tweet):
    tweet_words = set(tweet)
    features = {}
    for word in word_features:
        features['contains(%s)' % word] = (word in tweet_words)
    return features

# ### Building our feature vector

word_features = buildVocabulary(preprocessedTrainingSet)
trainingFeatures = nltk.classify.apply_features(extract_features, preprocessedTrainingSet)

# The feature vector is a list of pairs: each pair holds a dictionary that compares the words in a specific tweet against the vocabulary of unique words, together with the label of that tweet.
#
# We feed the feature vectors into the Naive Bayes Classifier. For each label, the classifier combines the prior probability (the probability that a randomly chosen tweet has that label) with the likelihood of each word (the probability of that word appearing in a tweet given that label). Combining these over every word yields the posterior probability of each label, and the label with the highest posterior is chosen.

# ### Train the Naive Bayes Classifier

# This line trains our Bayes Classifier
NBayesClassifier = nltk.NaiveBayesClassifier.train(trainingFeatures)

NBResultLabels = [NBayesClassifier.classify(extract_features(tweet[0])) for tweet in preprocessedTestSet]

NBResultLabels.count('positive')

100*NBResultLabels.count('positive')/len(NBResultLabels)

twitter_results = {'positive count': NBResultLabels.count('positive'),
                   'negative count': NBResultLabels.count('negative'),
                   'neutral count': NBResultLabels.count('neutral'),
                   'irrelevant count': NBResultLabels.count('irrelevant')}
twitter_results

# ## Test Classifier

# +
# We now run the classifier on the 100 tweets previously downloaded into the test set for our specified keyword.
# NBResultLabels is the list of the labels for the tweets in the test set:
NBResultLabels = [NBayesClassifier.classify(extract_features(tweet[0])) for tweet in preprocessedTestSet]

# get the majority vote
if NBResultLabels.count('positive') > NBResultLabels.count('negative'):
    print("Overall Positive Sentiment")
    print("Positive Sentiment Percentage = " + str(100*NBResultLabels.count('positive')/len(NBResultLabels)) + "%")
else:
    print("Overall Negative Sentiment")
    print("Negative Sentiment Percentage = " + str(100*NBResultLabels.count('negative')/len(NBResultLabels)) + "%")
    print("Positive Sentiment Percentage = " + str(100*NBResultLabels.count('positive')/len(NBResultLabels)) + "%")

print("Number of negative comments = " + str(NBResultLabels.count('negative')))
print("Number of positive comments = " + str(NBResultLabels.count('positive')))
print("Number of neutral comments = " + str(NBResultLabels.count('neutral')))
print("Number of irrelevant comments = " + str(NBResultLabels.count('irrelevant')))
# -

len(preprocessedTestSet)

# +
import plotly.graph_objects as go

sentiment = ["Negative", "Positive", "Neutral", "Irrelevant"]

# Pass the label counts as integers so the bars are scaled correctly
fig = go.Figure([go.Bar(x=sentiment,
                        y=[NBResultLabels.count('negative'),
                           NBResultLabels.count('positive'),
                           NBResultLabels.count('neutral'),
                           NBResultLabels.count('irrelevant')])])

fig.update_layout(template = 'simple_white',
                  title_text='Twitter Sentiment Results for Specific Keyword',
                  yaxis=dict(
                      title='Number of tweets',
                      titlefont_size=16,
                      tickfont_size=14,)
                  ,
                  )
fig.show()
# -

# ### TBC:
# - Retrieve tweets about keyword, not from keyword (username)

'''
twitters_and_news = []
for buzzword in buzzwords:
    twitters_and_news.append(sp.sentiment_analysis(buzzword, tw['consumer_key'], tw['consumer_secret'],
                                                   tw['access_token_key'], tw['access_token_secret'],
                                                   news['secret_api']))
'''
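As a closing aside (not part of the original notebook), the Bayes rule used by the classifier can be illustrated with a tiny, self-contained calculation. All numbers below are invented for the sake of the example:

```python
# A minimal sketch of the posterior calculation P(A|B) = P(B|A) * P(A) / P(B)
# that the Naive Bayes classifier applies per word feature. Invented numbers.

def posterior(p_word_given_label, p_label, p_word):
    """P(label | word) = P(word | label) * P(label) / P(word)"""
    return p_word_given_label * p_label / p_word

# Suppose 60% of training tweets are positive, the word "great" appears in 20%
# of positive tweets, and in 14% of all tweets:
p = posterior(p_word_given_label=0.20, p_label=0.60, p_word=0.14)
print(round(p, 4))  # P(positive | "great") -> 0.8571
```

The real classifier repeats this per word and per label, then picks the label with the highest combined posterior.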
sentiment_analysis_twitter_comments.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # S&P 500 Historical Analysis

# ## Questions

# ### Q1. From 1927-12-29 to the Present (2021-02-01), what is the distribution of daily returns?

# ### A1.

# %pip install pandas
# %pip install matplotlib

# +
import pandas as pd

df = pd.read_csv("1927-12-29-to-2021-02-01-daily.csv", usecols=["Open", "Close"])

daily_returns = []
for i, row in df.iterrows():
    # Skip rows with a zero opening price (missing data)
    if row["Open"] == 0.0:
        continue
    daily_return = 100 * (row["Close"] - row["Open"]) / row["Open"]
    daily_returns.append(daily_return)

# Drop days with an exactly zero return
daily_returns = list(filter(lambda x: x != 0.0, daily_returns))
# -

ax = pd.DataFrame(daily_returns).plot.hist(bins=100, figsize=(10,10), grid=True, title="Daily Return % of the S&P 500 since December 29th 1927")
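As an aside (not part of the original answer), the same computation can be written without an explicit `iterrows` loop using vectorised pandas operations, which is shorter and much faster on large frames:

```python
# A vectorised sketch of the daily-return computation above.
import pandas as pd

def daily_return_pct(df: pd.DataFrame) -> pd.Series:
    """Percentage daily return, skipping rows with a zero opening price
    and dropping exactly-zero returns, matching the loop version."""
    valid = df[df["Open"] != 0.0]
    returns = 100 * (valid["Close"] - valid["Open"]) / valid["Open"]
    return returns[returns != 0.0]

# Example with a tiny hand-made frame:
sample = pd.DataFrame({"Open": [100.0, 0.0, 50.0], "Close": [101.0, 10.0, 50.0]})
print(daily_return_pct(sample).tolist())  # -> [1.0]
```

The zero-open row is excluded as missing data, and the flat day (50.0 to 50.0) is dropped by the final filter, exactly as in the loop.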
S&P-500/analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # <img src="img/full-colour-logo-UoB.png" alt="Drawing" style="width: 200px;"/> # # # Introduction to Programming for Engineers # # ## Python 3 # # # # + [markdown] slideshow={"slide_type": "slide"} # # 02 Data Structures and Libraries # ## CLASS MATERIAL # # <br> <a href='#DataStructures'>1. Data Structures</a> # <br> &emsp;&emsp; <a href='#list'>1.1 The `list`</a> # <br> &emsp;&emsp; <a href='#numpyarray'>1.2 The `numpy array`</a> # <br> <a href='#Libraries__'>2. Libraries</a> # <br> <a href='#ReviewExercises'>3. Review Exercises</a> # + [markdown] slideshow={"slide_type": "slide"} # <a id='Summary'></a> # # Supplementary Material Summary # For more information refer to the primer notebook for this class 02_DataStructures_Libraries__SupplementaryMaterial.ipynb # # #### Data Structures # - `list` : Can store mixed type data. Not suitable for elementwise operations. # - `numpy array` : Stores values of the *same data type*. Useful for elementwise and matrix operations. # + [markdown] slideshow={"slide_type": "slide"} # #### Libraries # - Python has an extensive __standard library__ of built-in functions. # - More specialised libraries of functions and constants are available. We call these __packages__. # - Packages are imported using the keyword `import` # - The function documentation tells is what it does and how to use it. # - When calling a library function it must be prefixed with a __namespace__ is used to show from which package it should be called. 
# + [markdown] slideshow={"slide_type": "slide"}
# #### Libraries
# - Python has an extensive __standard library__ of built-in functions.
# - More specialised libraries of functions and constants are available. We call these __packages__.
# - Packages are imported using the keyword `import`.
# - The function documentation tells us what a function does and how to use it.
# - When calling a library function, it must be prefixed with a __namespace__ to show from which package it should be called.
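As a quick illustration of the summary above (a sketch using the standard-library `math` module, which is not part of the class material below):

```python
# Import the package once, then call its functions with the namespace prefix.
import math

# The namespace "math" shows where sqrt comes from.
print(math.sqrt(16))  # -> 4.0
```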
# # Today we will learn to use two types of data structure: # - `list` # - `numpy array` # + [markdown] slideshow={"slide_type": "slide"} # <a id='list'></a> # ## 1.1 The `list` # # A list is a container with compartments in which we can store data # # A list can store data of any type: # <p align="center"> # <img src="img/ice_cube_tray.png" alt="Drawing" style="width: 500px;"/> # </p> # + slideshow={"slide_type": "slide"} lab_group0 = ["Yukari", "Sajid", "Hemma", "Ayako"] lab_group1 = ["Sara", "Mari", "Quang", "Sam", "Ryo", "Nao", "Takashi"] scores_0 = [91, 92, 66, 85] scores_1 = [66, 75, 23, 88, 97, 99, 100] print(lab_group0) print(lab_group1) # + [markdown] slideshow={"slide_type": "slide"} # We can perform operations on lists such as: # - checking its length (number of students in a lab group) # - sorting the names in the list into alphabetical order # - making a list of lists (*nested list*): # # + lab_groups = [lab_group0, lab_group1, scores_0, scores_1] print(lab_groups) # + [markdown] slideshow={"slide_type": "slide"} # <a id='ExampleChangePosition'></a> # ### Example: Representing Vectors using Lists # # __Vector:__ A quantity with magnitude and direction. # + [markdown] slideshow={"slide_type": "slide"} # A 2D position vector can be expressed using a horizontal (x) a vertical (y) component. 
# # <img src="img/schiffman_vectors.png" alt="Drawing" style="width: 400px;"/> # # [<NAME>, The Nature of Code] # # + [markdown] slideshow={"slide_type": "slide"} # It is convenient to express the vector ($\mathbf{u}$) in matrix form: # $$ # \mathbf{u} = [u_x, u_y] # $$ # # # __...which looks a lot like a Python list!__ # # + [markdown] slideshow={"slide_type": "slide"} # To add two position vectors, $\mathbf{u}$ and $\mathbf{v}$, we find: # # the sum of the $x$ components $u_x$ and $v_x$ # # the $y$ components $u_y$ and $v_y$ # # # # # # + [markdown] slideshow={"slide_type": "slide"} # \begin{align} # {\displaystyle {\begin{aligned}\ # \mathbf{w} # &=\mathbf{u} + \mathbf{v}\\ # &=[(u_x+v_x),\;\; (u_y+v_y)] \\ \end{aligned}}} # \end{align} # # <img src="img/schiffman_vector.png" alt="Drawing" style="width: 600px;"/> # *[<NAME>, The Nature of Code]* # # + [markdown] slideshow={"slide_type": "slide"} # We can represent each vector as a list: # + # Example : Vector addition u = [5, 2] v = [3, 4] # + [markdown] slideshow={"slide_type": "slide"} # We use an *index* to *address* an *element* of a list. # # Example: # <br>The first *element* (x component) of list/vector `u` is 5. # <br>To *address* the first element of `u`, we use the address `0`: # # u[0] # + slideshow={"slide_type": "slide"} # Example : Vector addition u = [5, 2] v = [3, 4] w = [u[0] + v[0], u[1] + v[1]] print(w) # + [markdown] slideshow={"slide_type": "slide"} # Arranging the code on seperate lines: # - makes the code more readable # - does not effect how the code works # # Line breaks can only be used within code that is enclosed by at elast one set of brackets (), []. 
# + [markdown] slideshow={"slide_type": "slide"} # <a id='ExampleDotProduct'></a> # ### Example: Loops and data structures # # A programmatically similar operation to vector addition is the __dot product__: # + [markdown] slideshow={"slide_type": "slide"} # __DOT PRODUCT__ # # The dot product of two $n$-length-vectors: # <br> $ \mathbf{A} = [A_1, A_2, ... A_n]$ # <br> $ \mathbf{B} = [B_1, B_2, ... B_n]$ # # \begin{align} # \mathbf{A} \cdot \mathbf{B} = \sum_{i=1}^n A_i B_i # \end{align} # # # + [markdown] slideshow={"slide_type": "slide"} # So the dot product of two 3D vectors: # <br> $ \mathbf{A} = [A_x, A_y, A_z]$ # <br> $ \mathbf{B} = [B_x, B_y, B_z]$ # # # \begin{align} # \mathbf{A} \cdot \mathbf{B} &= \sum_{i=1}^n A_i B_i \\ # &= A_x B_x + A_y B_y + A_z B_z # \end{align} # # # + [markdown] slideshow={"slide_type": "slide"} # __Example : Dot Product__ # # Let's write a program to solve this using a Python `for` loop. # # 1. We initailise a variable, `dot_product` with a value = 0.0. # # 1. With each iteration of the loop: # <br>`dot_product +=` the product of `a` and `b`. 
# # <p align="center"> # <img src="img/flow_diag_for_loop_dot_product.png" alt="Drawing" style="width: 400px;"/> # </p> # + [markdown] slideshow={"slide_type": "slide"} # In this example, we use the keyword `zip` to loop through more than one list: # + slideshow={"slide_type": "-"} # Example : Dot Product A = [1.0, 3.0, -5.0] B = [4.0, -2.0, -1.0] # Create a variable called dot_product with value, 0.0 dot_product = 0.0 # Update the value each time the code loops for a , b in zip(A, B): dot_product += a * b # Print the solution print(dot_product) # + [markdown] slideshow={"slide_type": "slide"} # __Check Your Solution:__ # # The dot product $\mathbf{A} \cdot \mathbf{B}$: # <br> $ \mathbf{A} = [1, 3, −5]$ # <br> $ \mathbf{B} = [4, −2, −1]$ # # # # \begin{align} # {\displaystyle {\begin{aligned}\ [1,3,-5]\cdot [4,-2,-1]&=(1)(4)+(3)(-2)+(-5)(-1)\\& = 4 \qquad - 6 \qquad + 5 \\&=3\end{aligned}}} # \end{align} # + [markdown] slideshow={"slide_type": "slide"} # <a id='numpyarray'></a> # ## 1.2 The `numpy array` # A `numpy array` is a grid of values, *all of the same type*. # # To work with a `numpy array` we must *import* the numpy package at the start of our code. # # # - import numpy as np # + [markdown] slideshow={"slide_type": "slide"} # ### Why do we need another data structure? # # Python lists hold 'arrays' of data. # # Lists are very flexible. e.g. holding mixed data type. # # There is a trade off between flexibility and performance e.g. speed. # + [markdown] slideshow={"slide_type": "slide"} # Science engineering and mathematics problems typically involve numerical calculations and often use large amounts of data. # # `numpy array`s make computational mathematics faster and easier. # + [markdown] slideshow={"slide_type": "slide"} # To create an array we use the Numpy `np.array()` function. # # We can create an array in a number of ways. # # For example we can convert a list to an array. 
# + slideshow={"slide_type": "-"}
c = [4.0, 5, 6.0]
d = np.array(c)

print(type(c))
print(type(d))
print(d.dtype)

# + [markdown] slideshow={"slide_type": "-"}
# The attribute `dtype` tells us the type of the data contained in the array.
#
#

# + [markdown] slideshow={"slide_type": "slide"}
# Or we can construct the array explicitly:

# +
# 1-dimensional array
a = np.array([1, 2, 3])

# 2-dimensional array
b = np.array([[1, 2, 3],
              [4, 5, 6]])

# + [markdown] slideshow={"slide_type": "slide"}
# Lists and arrays behave differently.
#
# For example, look what happens when we:
# - add two lists
# - add two arrays

# + slideshow={"slide_type": "-"}
c = [4.0, 5, 6.0]
d = np.array(c)

print(c + c)
print(d + d)
# -

# Notice that adding two `numpy array`s gives the vector sum of the two arrays, whereas adding two lists concatenates them.
#
# This is much faster than the method using lists that we studied earlier.

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='MultiDimensionalArrays'></a>
# ## 1.2.1 Multi-Dimensional Arrays
#
#
#
#
# Unlike the data types we have studied so far, arrays can have multiple dimensions.
#
# __`shape`:__ a *tuple* of *integers* giving the *size* of the array along each *dimension*.
#
# __`tuple`:__ A data structure from which you cannot add or remove elements without creating a new tuple (e.g. connecting two tuples).
# <br>You cannot change the value of a single tuple element e.g. by indexing.
# <br>A tuple is created by enclosing a set of numbers in () parentheses.
#
# We define the dimensions of an array using square brackets

# + slideshow={"slide_type": "slide"}
# 1-dimensional array
a = np.array([1, 2, 3])

# 2-dimensional array
b = np.array([[1, 2, 3],
              [4, 5, 6]])

print(a.shape)
print(b.shape)

# + slideshow={"slide_type": "subslide"}
# 2-dimensional array
c = np.array([[1, 2, 3]])

# 2-dimensional array
d = np.array([[1], [4]])

print(c.shape)
print(d.shape)

# + slideshow={"slide_type": "subslide"}
# 3-dimensional array
c = np.array(
    [[[1, 1], [1, 1]],
     [[1, 1], [1, 1]]])

print(c.shape)

c = np.array(
    [[[1, 1], [1, 1]],
     [[1, 1], [1, 1]],
     [[1, 1], [1, 1]]])

print(c.shape)

# + slideshow={"slide_type": "subslide"}
# 3-dimensional array
c = np.array(
    [[[1, 1], [1, 1]],
     [[1, 1], [1, 1]]])

# 4-dimensional array
d = np.array(
    [[[[1, 1], [1, 1]],
      [[1, 1], [1, 1]]],
     [[[1, 1], [1, 1]],
      [[1, 1], [1, 1]]]])

print(c.shape)
print(d.shape)

# + [markdown] slideshow={"slide_type": "slide"}
# As we add dimensions, the array gets more and more complicated to type.
#
# A faster and less error-prone method is to use the `reshape` function.

# + [markdown] slideshow={"slide_type": "slide"}
# Start with the total number of elements that you want to include in the array:

# -
#A = np.empty(32)
A = np.zeros(32)
print(A)

# + [markdown] slideshow={"slide_type": "slide"}
# Enter the number of elements in each dimension you want:

# -
A.reshape(2, 2, 2, 2, 2)

# + [markdown] slideshow={"slide_type": "slide"}
# <a name="CreatingNumpyArray"></a>
# ## 1.2.2 Creating a `numpy array`
#
#
#
# We don't always have to manually create the individual elements of an array.
#
# There are several other ways to do this.
#
# For example, if you don’t know what data you want to put in your array you can initialise it with placeholders and load the data you want to use later.
#

# + slideshow={"slide_type": "slide"}
# Create an array of all zeros
# The zeros() function argument is the shape.
# Shape: tuple of integers giving the size along each dimension.
a = np.zeros(5)
print(a)
print()

a = np.zeros((2,2))
print(a)

# + slideshow={"slide_type": "slide"}
# Create an array of all ones
b = np.ones(5)
print(b)
print()

b = np.ones((1, 4))
print(b)

# + slideshow={"slide_type": "subslide"}
# Create an array of elements with the same value
# The full() function arguments are
# 1) Shape: tuple of integers giving the size along each dimension.
# 2) The constant value
y = np.full((1,1), 3)
print(y)
print(y.shape)
print()

y = np.full((2,2), 4)
print(y)

# + slideshow={"slide_type": "subslide"}
# Create a 1D array of evenly spaced values
# The arange() function arguments are the same as the range() function:
# start, stop (exclusive) and, optionally, step.
z = np.arange(5, 10)
print(z)
print()

z = np.arange(5, 10, 2)
print(z)

# + slideshow={"slide_type": "subslide"}
# Create a 1D array of evenly spaced values
# The linspace() function arguments are
# The lower limit of the range of values
# The upper limit of the range of values (inclusive)
# The desired number of equally spaced values
z = np.linspace(-4, 4, 5)
print(z)

# + slideshow={"slide_type": "subslide"}
# Create an empty (uninitialised) matrix
# The empty() function argument is the shape.
# Shape: tuple of integers giving the size along each dimension.
import numpy as np
x = np.empty((4))
print(x)
print()

x = np.empty((4,4))
print(x)

# + slideshow={"slide_type": "subslide"}
# Create a constant array
# The second function argument is the constant value
c = np.full(6, 8)
print(c)
print()

c = np.full((2,2,2), 7)
print(c)

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='IndexingMultiDimensionalArrays'></a>
# ## 1.2.3 Indexing into Multi-Dimensional Arrays
#
#
#
# We can index into an array exactly the same way as the other data structures we have studied.
# + slideshow={"slide_type": "-"} x = np.array([1, 2, 3, 4, 5]) # Select a single element print(x[4]) # Select elements from 2 to the end print(x[2:]) # + [markdown] slideshow={"slide_type": "slide"} # For an n-dimensional (nD) matrix we need n index values to address an element or range of elements. # # Example: The index of a 2D array is specified with two values: # - first the row index # - then the column index. # # Note the order in which dimensions are addressed. # + slideshow={"slide_type": "slide"} # 2 dimensional array y = np.array([[1, 2, 3], [4, 5, 6]]) # Select a single element print(y[1,2]) # Select elements that are both in rows 1 to the end AND columns 0 to 2 print(y[1:, 0:2]) # + [markdown] slideshow={"slide_type": "slide"} # We can address elements by selecting a range with a step: # # For example the index: # # `z[0, 0:]` # # selects every element of row 0 in array, `z` # # The index: # # `z[0, 0::2]` # # selects every *other* element of row 0 in array, `z` # + slideshow={"slide_type": "subslide"} # 2 dimensional array z = np.zeros((4,8)) # Change every element of row 0 z[0, 0:] = 10 # Change every other element of row 1 z[1, 0::2] = 10 print(z) # + slideshow={"slide_type": "subslide"} z = np.zeros((4,8)) # Change the last 4 elements of row 2, in negative direction # You MUST include a step to count in the negative direction z[2, -1:-5:-1] = 10 # Change every other element of the last 6 elements of row 3 # in negative direction z[3, -2:-7:-2] = 10 print(z) # + slideshow={"slide_type": "subslide"} # 3-dimensional array c = np.array( [[[2, 1, 4], [2, 6, 8]], [[0, 1, 5], [7, 8, 9]]]) print(c[0, 1, 2]) # + [markdown] slideshow={"slide_type": "subslide"} # Where we want to select all elements in one dimension we can use : # # __Exception__: If it is the last element , we can omit it. # + slideshow={"slide_type": "subslide"} print(c[0, 1]) print(c[0, :, 1]) # + [markdown] slideshow={"slide_type": "slide"} # <a id='Libraries__'></a> # # 2. 
Libraries # # One of the most important concepts in good programming is to avoid repetition by reusing code. # # Python, like other modern programming languages, has an extensive *library* of built-in functions. # # These functions are designed, tested and optimised by Python developers. # # We can use these functions to make our code shorter, faster and more reliable. # # # + [markdown] slideshow={"slide_type": "slide"} # <a id='StandardLibrary'></a> # ## 2.1 The Standard Library # # <br> &emsp;&emsp; <a href='#StandardLibrary'>__2.1 The Standard Library__</a> # <br> &emsp;&emsp; <a href='#Packages'>__2.2 Packages__ </a> # <br> &emsp;&emsp; <a href='#FunctionDocumentation'>__2.3 Function Documentation__</a> # <br> &emsp;&emsp; <a href='#Namespaces'>__2.4 Namespaces__</a> # <br> &emsp;&emsp; <a href='#ImportingFunction'>__2.5 Importing a Function__</a> # <br> &emsp;&emsp; <a href='#Optimise'>__2.6 Using Package Functions to Optimise your Code__</a> # # # # + [markdown] slideshow={"slide_type": "slide"} # <a id='StandardLibrary'></a> # ## 2.1 The Standard Library # # Python has a large standard library. # # e.g. `print()` takes the __input__ in the parentheses and __outputs__ a visible representation. # # Standard functions are listed on the Python website: # https://docs.python.org/3/library/functions.html # + [markdown] slideshow={"slide_type": "slide"} # We could write our own code to find the minimum of a group of numbers # # # # + x0 = 1 x1 = 2 x2 = 4 x_min = x0 if x1 < x_min: x_min = x1 if x2 < x_min: x_min = x2 print(x_min) # + [markdown] slideshow={"slide_type": "slide"} # However, it is much faster to use the build in function: # - print(min(1,2,4)) # + [markdown] slideshow={"slide_type": "slide"} # The built-in functions can be found in (.py) files called 'modules'. # # The files are neatly arranged into a system of __sub-packages__ (sub-folders) and __modules__ (files). # # These files are stored on the computer you are using. 
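As a brief aside (not from the original slides), the built-in functions are more flexible than the example above suggests: `min()` and `max()` also accept a single iterable, and an optional `key` function that changes what is compared:

```python
# Built-in min/max on an iterable, and with a key function.
scores = [91, 92, 66, 85]
print(min(scores))  # -> 66
print(max(scores))  # -> 92

names = ["Yukari", "Sajid", "Hemma", "Ayako"]
# Compare by length instead of alphabetical order;
# ties are broken by the first occurrence.
print(min(names, key=len))  # -> "Sajid"
```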
# + [markdown] slideshow={"slide_type": "slide"}
# A quick google search for "python function to sum all the numbers in a list"...
#
# https://www.google.co.jp/search?q=python+function+to+sum+all+the+numbers+in+a+list&rlz=1C5CHFA_enJP751JP751&oq=python+function+to+sum+&aqs=chrome.0.0j69i57j0l4.7962j0j7&sourceid=chrome&ie=UTF-8
#
# ...returns the function `sum()`.

# + [markdown] slideshow={"slide_type": "slide"}
# `sum()` finds the sum of the values in a data structure.
#
#
#

# + slideshow={"slide_type": "slide"}
# list
print(sum([1,2,3,4,5]))

#tuple
print(sum((1,2,3,4,5)))

a = [1,2,3,4,5]
print(sum(a))

# + [markdown] slideshow={"slide_type": "slide"}
# The function `max()` finds the maximum value in a data structure.

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='Packages'></a>
# ## 2.2 Packages
#
# The standard library tools are available in any Python environment.
#
# More specialised libraries, called packages, are available for more specific tasks
# <br>e.g. evaluating trigonometric functions.
#
# Packages contain functions and constants.
#
# We install the packages to use them.
#
#

# + [markdown] slideshow={"slide_type": "slide"}
# Two widely used packages for mathematics, science and engineering are `numpy` and `scipy`.
#
# These are already installed as part of Anaconda.
#
# A package is a collection of Python modules:
# - a __module__ is a single Python file
# - a __package__ is a directory of Python modules.<br>(It contains an __init__.py file, to distinguish it from folders that are not libraries).

# + [markdown] slideshow={"slide_type": "slide"}
# The files that are stored on your computer when Pygame is installed:
# <br>https://github.com/pygame/pygame

# + [markdown] slideshow={"slide_type": "slide"}
# The `import` statement must appear before the use of the package in the code.
#
#     import numpy
#
# After this, any function in `numpy` can be called as:
#
# `numpy.function()`
#
# and, any constant in `numpy` can be called as:
#
# `numpy.constant`.
#
# There are many mathematical functions available. <br>
# https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.math.html

# + [markdown] slideshow={"slide_type": "slide"}
# We can change the name of a package e.g. to keep our code short and neat.
#
# Using the __`as`__ keyword:

# + slideshow={"slide_type": "-"}
import numpy as np

print(np.pi)

# + [markdown] slideshow={"slide_type": "slide"}
# We only need to import a package once, at the start of the program or notebook.

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='UsingPackageFunctions'></a>
# ## Using Package Functions.
#
# Let's learn to use `numpy` functions in our programs.
#
#
#

# + slideshow={"slide_type": "slide"}
# Some example Numpy functions with their definitions (as given in the documentation)
x = 1

# sine
print(np.sin(x))

# tangent
print(np.tan(x))

# inverse tangent
print(np.arctan(x))

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='FunctionDocumentation'></a>
# ## 2.3 Function Documentation
#
# Online documentation can be used to find out:
# - what to include in the () parentheses
# - allowable data types to use as arguments
# - the order in which arguments should be given
#

# + [markdown] slideshow={"slide_type": "slide"}
# A google search for 'numpy functions' returns:
#
# https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.math.html
#
# (this list is not exhaustive).

# + [markdown] slideshow={"slide_type": "slide"}
# ### Try it yourself:
# <br> Find a function in the Python Numpy documentation that matches the problem definition below, and use it to solve the problem:
#
# Find the hypotenuse of a right angle triangle if the lengths of the other two sides are 3 and 6.

# +
# The “legs” of a right angle triangle are 6 units and 3 units,
# Return its hypotenuse in units.
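If you get stuck, one possible solution is sketched below (try the documentation search yourself first). It uses `np.hypot`, which computes the hypotenuse directly from the two legs:

```python
import numpy as np

# np.hypot(a, b) computes sqrt(a**2 + b**2) element-wise.
hypotenuse = np.hypot(6, 3)
print(hypotenuse)  # sqrt(45), approximately 6.7082
```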
# + [markdown] slideshow={"slide_type": "slide"} # <a id='Examplenumpycos'></a> # ### Example : numpy.cos # Documentation : https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.math.html # # The documentation tells us the following information... # # ##### What the function does. # "Cosine element-wise." # # # + [markdown] slideshow={"slide_type": "slide"} # ##### All possible function arguments (parameters) # # <img src="img/numpy_cos.png" alt="Drawing" style="width: 500px;"/> # # >numpy.cos(<font color='blue'>x</font>, /, <font color='red'>out=None</font>, *, <font color='green'>where=True, casting='same_kind', order='K', dtype=None, subok=True</font> [, <font color='purple'>signature, extobj</font> ]) # # In the () parentheses following the function name are: # - <font color='blue'>*positional* arguments (required)</font> # - <font color='red'>*keyword* arguments (with a default value, optionally set). Listed after the `/` slash.</font> # - <font color='green'>arguments that must be explicitly named. Listed after the `*` star.</font> # <br><font color='purple'>(including arguments without a default value. Listed in `[]` brackets.)</font> # # # + [markdown] slideshow={"slide_type": "slide"} # ##### Function argument definitions and acceptable forms. # # <img src="img/numpy_cos_params.png" alt="Drawing" style="width: 500px;"/> # # x : array_like *(it can be an `int`, `float`, `list` or `tuple`)* # # out : ndarray, None, or tuple of ndarray and None, optional # # where : array_like, optional # # # + [markdown] slideshow={"slide_type": "slide"} # ##### What the function returns # __y__ : ndarray<br> # &nbsp; &nbsp; &nbsp; &nbsp; The corresponding cosine values. # + [markdown] slideshow={"slide_type": "slide"} # Let's look at the function numpy.degrees: # https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.degrees.html # # What does the function do? # # What __arguments__ does it take (and are there any default arguments)? 
#
# How would we __write__ the function when __calling__ it (accept defaults)?
#
# What __data type__ should our input be?

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='Namespaces'></a>
# ## 2.4 Namespaces
# <br>By prefixing `cos` with `np`, we are using a *namespace* (which in this case is `np`).
#

# + [markdown] slideshow={"slide_type": "slide"}
# The namespace shows we want to use the `cos` function from the Numpy package.
#
# If `cos` appears in more than one package we import, then there will be more than one `cos` function available.
#
# We must make it clear which `cos` we want to use.
#
#

# + [markdown] slideshow={"slide_type": "slide"}
# Functions with the *same name*, from *different packages*, often use different algorithms to perform the same or similar operation.
#
# They may vary in speed and accuracy.
#
# In some applications we might need an accurate method for computing the square root, for example, and the speed of the program may not be important.
#
# For other applications we might need speed with an allowable compromise on accuracy.
#

# + [markdown] slideshow={"slide_type": "slide"}
# Below are two functions, both named `sqrt`.
#
# Both functions compute the square root of the input.
#
# - `math.sqrt`, from the package, `math`, gives an error if the input is a negative number. It does not support complex numbers.
# - `cmath.sqrt`, from the package, `cmath`, supports complex numbers.
#

# + slideshow={"slide_type": "slide"}
import math
import cmath

print(math.sqrt(4))
#print(math.sqrt(-5))
print(cmath.sqrt(-5))

# + [markdown] slideshow={"slide_type": "slide"}
# Two developers collaborating on the same program might choose the same name for two functions that perform similar tasks.
#
# If these functions are in different modules, there will be no name clash since the module name provides a 'namespace'.
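The namespace idea can also be seen with `cos`, which exists in both `math` and `numpy` (a small sketch; the prefix makes clear which implementation is being called):

```python
import math
import numpy as np

# Same function name, two namespaces:
print(math.cos(0.0))           # standard library cos: scalar input only
print(np.cos([0.0, math.pi]))  # numpy cos: operates elementwise on a list
```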
# + [markdown] slideshow={"slide_type": "slide"}
# <a id='ImportingFunction'></a>
# ## 2.7 Importing a Function
# Single functions can be imported without importing the entire package e.g. use:
#
#     from numpy import cos
#
# instead of:
#
#     import numpy
#
# After this you call the function without the numpy prefix:

# + slideshow={"slide_type": "-"}
from numpy import cos
cos(x)

# + [markdown] slideshow={"slide_type": "slide"}
# Be careful when doing this as there can be only one definition of each function.
# In the case that a function name is already defined, it will be overwritten by a more recent definition.

# + slideshow={"slide_type": "-"}
from cmath import sqrt
print(sqrt(-1))

from math import sqrt
#print(sqrt(-1))

# + [markdown] slideshow={"slide_type": "slide"}
# A potential solution to this is to rename individual functions or constants when we import them:

# + slideshow={"slide_type": "-"}
from numpy import cos as cosine
cosine(x)
# -

from numpy import pi as pi
pi

# + [markdown] slideshow={"slide_type": "slide"}
# This can be useful when importing functions from different modules:

# +
from math import sqrt as sqrt
from cmath import sqrt as csqrt

print(sqrt(4))
print(csqrt(-1))

# + [markdown] slideshow={"slide_type": "slide"}
# Function names should be chosen wisely.
# - relevant
# - concise

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='Optimise'></a>
# ## 2.8 Using Package Functions to Optimise your Code
#
# Let's look at some examples of where Numpy functions can make your code shorter and neater.

# + [markdown] slideshow={"slide_type": "slide"}
# The mean of a group of numbers
# -

x_mean = (1 + 2 + 3)/3

# Using Numpy:
x_mean = np.mean([1, 2, 3])

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='DataStructuresFunctionArguments'></a>
# ## Data Structures as Function Arguments.
#
# Notice that the Numpy function `mean` takes a data structure as its argument.
# -
ls = [1, 2, 3]
x_mean = np.mean(ls)

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='ElementwiseFunctions'></a>
# ### Elementwise Functions
# In contrast, Numpy functions often operate *elementwise*.
# <br> This means if the argument is a list, they will perform the same function on each element of the list.
#
# For example, to find the square root of each number in a list, we can use:
# -

a = [9, 25, 36]
print(np.sqrt(a))

# + [markdown] slideshow={"slide_type": "slide"}
# Elementwise operations can be particularly important when performing basic mathematical operations:

# +
a = [1, 2, 3]
b = [4, 5, 6]

print(a + b)

# vector sum
print(np.add(a,b))

# dot product
print(np.dot(a,b))
# -

# Notice the operations we studied earlier (dot product and vector sum) are coded much faster by using `numpy` functions.

# + [markdown] slideshow={"slide_type": "slide"}
# <a id='ReviewExercises'></a>
# # 3. Review Exercises
#
# Complete the exercises below.
#
# Save your answers as .py files and email them to:
# <br><EMAIL>

# + [markdown] slideshow={"slide_type": "slide"}
# ## Review Exercise 1 : Combining Imported Functions
#
# The dot product of two vectors can be found as:
#
# \begin{align}
# \mathbf{A} \cdot \mathbf{B} = |\mathbf{A}| |\mathbf{B}| \cos(\theta)
# \end{align}
#
# Where:
#
# <br>$\theta$ is the angle between the two vectors
#
# $|\mathbf{A}|$ is the magnitude of vector $\mathbf{A}$.
#
# $|\mathbf{B}|$ is the magnitude of vector $\mathbf{B}$.
#

# + [markdown] slideshow={"slide_type": "slide"}
# The magnitude of an $n$-length vector $ \mathbf{A} = [A_1, ..., A_n]$ is:
#
# $|\mathbf{A}| = \sqrt{A_1^2 + ... + A_n^2}$
#

# + [markdown] slideshow={"slide_type": "slide"}
# <p align="center">
# <img src="img/dot-product-angle.gif" alt="Drawing" style="width: 300px;"/>
# </p>
#
# Find the angle between the vectors `a` and `b`.
#
# *Hint:*
#
# Use a numpy function from this class to find the dot product.
#
# Search online to find a numpy function that computes *magnitude*.
#
# Search online to find a numpy function for the *inverse cosine*.
# -

# Review Exercise 1 : Find the angle between a and b
a = [9, 2, 7]
b = [4, 8, 10]

# +
# Review Exercise 1 : Find the angle between a and b
# Example Solution
import numpy as np

a = [9, 2, 7]
b = [4, 8, 10]

ab = np.dot(a, b)
maga = np.linalg.norm(a)
magb = np.linalg.norm(b)

theta = np.arccos(ab / (maga * magb))
print(theta)

# + [markdown] slideshow={"slide_type": "slide"}
# ## Review Exercise 2 : Classifier
#
# The dot product also indicates if the angle between two vectors $\mathbf{A}$ and $\mathbf{B}$ is:
#
# - acute ($\mathbf{A} \cdot \mathbf{B}>0$)
# - obtuse ($\mathbf{A} \cdot \mathbf{B}<0$)
# - right angle ($\mathbf{A} \cdot \mathbf{B}==0$)
#
# Using `if`, `elif` and `else`, classify the angle between `a` and `b` as acute, obtuse or a right angle.

# +
# Review Exercise 2 : Classifier
a = [-1, 2, 6]
b = [4, 3, 3]

# +
# Review Exercise 2 : Classifier
# Example Solution
a = [-1, 2, 6]
b = [4, 3, 3]

ab = np.dot(a, b)

if ab > 0:
    print("theta is acute")
elif ab < 0:
    print("theta is obtuse")
else:
    print("theta is a right angle")

# + [markdown] slideshow={"slide_type": "slide"}
# ## Review Exercise 3: Numpy Package Functions.
# Find a function in the Python Numpy documentation that matches the function definition and use it to solve the problems below: # + [markdown] slideshow={"slide_type": "slide"} # __(A)__ Definition: *Calculates the exponential function, $y= e^x$ for all elements in the input array.* # # Print a list where each element is the exponential function of the corresponding element in list `a = [0.1, 0, 10]` # + # Review Exercise 3A # Print a list where each element is the exponential function of the corresponding element in list a # - # Review Exercise 3A # Example Solution a = [0.1, 0, 10] print(np.exp(a)) # + [markdown] slideshow={"slide_type": "slide"} # __(B)__ Definition: *Converts angles from degrees to radians.* # # Convert angle `theta`, expressed in degrees, to radians: # <br>`theta` = 47 # + # Review Exercise 3B # convert angle `theta`, expressed in degrees, to radians # + # Review Exercise 3B # Example Solution np.radians(47) # + [markdown] slideshow={"slide_type": "slide"} # __(C)__ Definition: *Return the positive square-root of an array, element-wise.* # # Generate an array where each element is the square root of the corresponding element in array `a = ([4, 16, 81])` # + # Review Exercise 3C # Print a list where each element is the square root of the corresponding element in list a # + # Review Exercise 3C # Example Solution a = ([4, 16, 81]) print(np.sqrt(a)) # - # ## Review Exercise 4: Using a single list with a `for` loop. # In the cell below, use a `for` loop to print the first letter of each month in the list. # # # + # Review Exercise 4 # Print the first letter of each month in the list months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] # -
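One possible solution to Review Exercise 4, in the style of the example solutions above (string indexing gives the first letter of each month):

```python
# Review Exercise 4
# Example Solution
months = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

for month in months:
    print(month[0])  # prints the first letter of each month
```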
02_DataStructures_Libraries__ClassMaterial.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
# ---

# + [markdown] origin_pos=0
# # Pretraining word2vec
# :label:`sec_word2vec_pretraining`
#
# We go on to implement the skip-gram model defined in :numref:`sec_word2vec`. Then we will pretrain word2vec using negative sampling on the PTB dataset. First of all, let us obtain the data iterator and the vocabulary for this dataset by calling the `d2l.load_data_ptb` function, which was described in :numref:`sec_word2vec_data`.
#

# + origin_pos=1 tab=["mxnet"]
import math
from mxnet import autograd, gluon, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l

npx.set_np()

batch_size, max_window_size, num_noise_words = 512, 5, 5
data_iter, vocab = d2l.load_data_ptb(batch_size, max_window_size,
                                     num_noise_words)

# + [markdown] origin_pos=3
# ## The Skip-Gram Model
#
# We implement the skip-gram model by using embedding layers and batch matrix multiplications. First, let us review how embedding layers work.
#
# ### Embedding Layer
#
# As described in :numref:`sec_seq2seq`, an embedding layer maps a token's index to its feature vector. The weight of this layer is a matrix whose number of rows equals the dictionary size (`input_dim`) and number of columns equals the vector dimension for each token (`output_dim`). After a word embedding model is trained, this weight is what we need.
#

# + origin_pos=4 tab=["mxnet"]
embed = nn.Embedding(input_dim=20, output_dim=4)
embed.initialize()
embed.weight

# + [markdown] origin_pos=6
# The input of an embedding layer is the index of a token (word). For any token index $i$, its vector representation can be obtained from the $i$th row of the weight matrix in the embedding layer. Since the vector dimension (`output_dim`) was set to 4, the embedding layer returns vectors with shape (2, 3, 4) for a minibatch of token indices with shape (2, 3).
#

# + origin_pos=7 tab=["mxnet"]
x = np.array([[1, 2, 3], [4, 5, 6]])
embed(x)

# + [markdown] origin_pos=8
# ### Defining the Forward Propagation
#
# In the forward propagation, the input of the skip-gram model includes the center word indices `center` of shape (batch size, 1) and the concatenated context and noise word indices `contexts_and_negatives` of shape (batch size, `max_len`), where `max_len` is defined in :numref:`subsec_word2vec-minibatch-loading`. These two variables are first transformed from token indices into vectors via the embedding layer, then their batch matrix multiplication (described in :numref:`subsec_batch_dot`) returns an output of shape (batch size, 1, `max_len`). Each element in the output is the dot product of a center word vector and a context or noise word vector.
#

# + origin_pos=9 tab=["mxnet"]
def skip_gram(center, contexts_and_negatives, embed_v, embed_u):
    v = embed_v(center)
    u = embed_u(contexts_and_negatives)
    pred = npx.batch_dot(v, u.swapaxes(1, 2))
    return pred

# + [markdown] origin_pos=11
# Let us print the output shape of this `skip_gram` function for some example inputs.
#

# + origin_pos=12 tab=["mxnet"]
skip_gram(np.ones((2, 1)), np.ones((2, 4)), embed, embed).shape

# + [markdown] origin_pos=14
# ## Training
#
# Before training the skip-gram model with negative sampling, let us first define its loss function.
#
# ### Binary Cross-Entropy Loss
#
# According to the definition of the loss function for negative sampling in :numref:`subsec_negative-sampling`, we will use the binary cross-entropy loss.
#

# + origin_pos=15 tab=["mxnet"]
loss = gluon.loss.SigmoidBCELoss()

# + [markdown] origin_pos=17
# Recall our descriptions of the mask variable and the label variable in :numref:`subsec_word2vec-minibatch-loading`. The following computes the binary cross-entropy loss for the given variables.
#

# + origin_pos=18 tab=["mxnet"]
pred = np.array([[1.1, -2.2, 3.3, -4.4]] * 2)
label = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
mask = np.array([[1, 1, 1, 1], [1, 1, 0, 0]])
loss(pred, label, mask) * mask.shape[1] / mask.sum(axis=1)

# + [markdown] origin_pos=19
# The following shows how the above results are calculated (in a less efficient way) using the sigmoid activation function in the binary cross-entropy loss. We can consider the two outputs as two normalized losses that are averaged over non-masked predictions.
#

# + origin_pos=20 tab=["mxnet"]
def sigmd(x):
    return -math.log(1 / (1 + math.exp(-x)))

print(f'{(sigmd(1.1) + sigmd(2.2) + sigmd(-3.3) + sigmd(4.4)) / 4:.4f}')
print(f'{(sigmd(-1.1) + sigmd(-2.2)) / 2:.4f}')

# + [markdown] origin_pos=21
# ### Initializing Model Parameters
#
# We define two embedding layers for all the words in the vocabulary when they are used as center words and context words, respectively. The word vector dimension `embed_size` is set to 100.
#

# + origin_pos=22 tab=["mxnet"]
embed_size = 100
net = nn.Sequential()
net.add(nn.Embedding(input_dim=len(vocab), output_dim=embed_size),
        nn.Embedding(input_dim=len(vocab), output_dim=embed_size))

# + [markdown] origin_pos=24
# ### Defining the Training Loop
#
# The training loop is defined below. Because of the existence of padding, the calculation of the loss function is slightly different compared with the previous training functions.
#

# + origin_pos=25 tab=["mxnet"]
def train(net, data_iter, lr, num_epochs, device=d2l.try_gpu()):
    net.initialize(ctx=device, force_reinit=True)
    trainer = gluon.Trainer(net.collect_params(), 'adam',
                            {'learning_rate': lr})
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs])
    # Sum of normalized losses, no. of normalized losses
    metric = d2l.Accumulator(2)
    for epoch in range(num_epochs):
        timer, num_batches = d2l.Timer(), len(data_iter)
        for i, batch in enumerate(data_iter):
            center, context_negative, mask, label = [
                data.as_in_ctx(device) for data in batch]
            with autograd.record():
                pred = skip_gram(center, context_negative, net[0], net[1])
                l = (loss(pred.reshape(label.shape), label, mask) *
                     mask.shape[1] / mask.sum(axis=1))
            l.backward()
            trainer.step(batch_size)
            metric.add(l.sum(), l.size)
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (metric[0] / metric[1],))
    print(f'loss {metric[0] / metric[1]:.3f}, '
          f'{metric[1] / timer.stop():.1f} tokens/sec on {str(device)}')

# + [markdown] origin_pos=27
# Now we can train a skip-gram model using negative sampling.
#

# + origin_pos=28 tab=["mxnet"]
lr, num_epochs = 0.002, 5
train(net, data_iter, lr, num_epochs)

# + [markdown] origin_pos=29
# ## Applying Word Embeddings
# :label:`subsec_apply-word-embed`
#
# After training the word2vec model, we can use the cosine similarity of word vectors from the trained model to find words from the vocabulary that are most semantically similar to an input word.
#

# + origin_pos=30 tab=["mxnet"]
def get_similar_tokens(query_token, k, embed):
    W = embed.weight.data()
    x = W[vocab[query_token]]
    # Compute the cosine similarity. Add 1e-9 for numerical stability
    cos = np.dot(W, x) / np.sqrt(np.sum(W * W, axis=1) * np.sum(x * x) + \
          1e-9)
    topk = npx.topk(cos, k=k+1, ret_typ='indices').asnumpy().astype('int32')
    for i in topk[1:]:  # Remove the input word
        print(f'cosine sim={float(cos[i]):.3f}: {vocab.to_tokens(i)}')

get_similar_tokens('chip', 3, net[0])

# + [markdown] origin_pos=32
# ## Summary
#
# * We can train a skip-gram model with negative sampling using embedding layers and the binary cross-entropy loss.
# * Applications of word embeddings include finding semantically similar words for a given word based on the cosine similarity of word vectors.
#
# ## Exercises
#
# 1. Using the trained model, find semantically similar words for other input words. Can you improve the results by tuning the hyperparameters?
# 1. When a training corpus is huge, we often sample context words and noise words for the center words in the current minibatch when updating model parameters. In other words, the same center word may have different context words or noise words in different training epochs. What are the benefits of this method? Try to implement this training method.
#

# + [markdown] origin_pos=33 tab=["mxnet"]
# [Discussions](https://discuss.d2l.ai/t/5739)
#
submodules/resource/d2l-zh/mxnet/chapter_natural-language-processing-pretraining/word2vec-pretraining.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Semantic Segmentation # # ## Analyzing Arterial Blood Pressure Data with FLUSS and FLOSS # # This example utilizes the main takeaways from the [Matrix Profile VIII](https://www.cs.ucr.edu/~eamonn/Segmentation_ICDM.pdf) research paper. For proper context, we highly recommend that you read the paper first but know that our implementations follow this paper closely. # # According to the aforementioned publication, "one of the most basic analyses one can perform on [increasing amounts of time series data being captured] is to segment it into homogeneous regions." In other words, wouldn't it be nice if you could take your long time series data and be able to segment or chop it up into `k` regions (where `k` is small) and with the ultimate goal of presenting only `k` short representative patterns to a human (or machine) annotator in order to produce labels for the entire dataset. These segmented regions are also known as "regimes". Additionally, as an exploratory tool, one might uncover new actionable insights in the data that was previously undiscovered. Fast low-cost unipotent semantic segmentation (FLUSS) is an algorithm that produces something called an "arc curve" which annotates the raw time series with information about the likelihood of a regime change. Fast low-cost online semantic segmentation (FLOSS) is a variation of FLUSS that, according to the original paper, is domain agnostic, offers streaming capabilities with potential for actionable real-time intervention, and is suitable for real world data (i.e., does not assume that every region of the data belongs to a well-defined semantic segment). 
#
# To demonstrate the API and underlying principles, we will be looking at arterial blood pressure (ABP) data from a healthy volunteer resting on a medical tilt table and will be seeing if we can detect when the table is tilted from a horizontal position to a vertical position. This is the same data that is presented throughout the original paper (above).

# ## Getting Started
#
# Let's import the packages that we'll need to load, analyze, and plot the data.

# +
# %matplotlib inline

import pandas as pd
import numpy as np
import stumpy
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
from matplotlib import animation
from IPython.display import HTML

plt.rcParams["figure.figsize"] = [20, 6]  # width, height
plt.rcParams['xtick.direction'] = 'out'
# -

# ## Retrieve the Data

df = pd.read_csv("https://zenodo.org/record/4276400/files/Semantic_Segmentation_TiltABP.csv?download=1")
df.head()

# ## Visualizing the Raw Data

plt.plot(df['time'], df['abp'])
rect = Rectangle((24000,2400),2000,6000,facecolor='lightgrey')
plt.gca().add_patch(rect)
plt.show()

# We can clearly see that there is a change around `time=25000` that corresponds to when the table was tilted upright.

# ## FLUSS
#
# Instead of using the full dataset, let's zoom in and analyze the 2,500 data points directly before and after `x=25000` (see Figure 5 in the paper).
# + start = 25000 - 2500 stop = 25000 + 2500 abp = df.iloc[start:stop, 1] plt.plot(range(abp.shape[0]), abp) plt.ylim(2800, 8500) plt.axvline(x=2373, linestyle="dashed") style="Simple, tail_width=0.5, head_width=6, head_length=8" kw = dict(arrowstyle=style, color="k") # regime 1 rect = Rectangle((55,2500), 225, 6000, facecolor='lightgrey') plt.gca().add_patch(rect) rect = Rectangle((470,2500), 225, 6000, facecolor='lightgrey') plt.gca().add_patch(rect) rect = Rectangle((880,2500), 225, 6000, facecolor='lightgrey') plt.gca().add_patch(rect) rect = Rectangle((1700,2500), 225, 6000, facecolor='lightgrey') plt.gca().add_patch(rect) arrow = FancyArrowPatch((75, 7000), (490, 7000), connectionstyle="arc3, rad=-.5", **kw) plt.gca().add_patch(arrow) arrow = FancyArrowPatch((495, 7000), (905, 7000), connectionstyle="arc3, rad=-.5", **kw) plt.gca().add_patch(arrow) arrow = FancyArrowPatch((905, 7000), (495, 7000), connectionstyle="arc3, rad=.5", **kw) plt.gca().add_patch(arrow) arrow = FancyArrowPatch((1735, 7100), (490, 7100), connectionstyle="arc3, rad=.5", **kw) plt.gca().add_patch(arrow) # regime 2 rect = Rectangle((2510,2500), 225, 6000, facecolor='moccasin') plt.gca().add_patch(rect) rect = Rectangle((2910,2500), 225, 6000, facecolor='moccasin') plt.gca().add_patch(rect) rect = Rectangle((3310,2500), 225, 6000, facecolor='moccasin') plt.gca().add_patch(rect) arrow = FancyArrowPatch((2540, 7000), (3340, 7000), connectionstyle="arc3, rad=-.5", **kw) plt.gca().add_patch(arrow) arrow = FancyArrowPatch((2960, 7000), (2540, 7000), connectionstyle="arc3, rad=.5", **kw) plt.gca().add_patch(arrow) arrow = FancyArrowPatch((3340, 7100), (3540, 7100), connectionstyle="arc3, rad=-.5", **kw) plt.gca().add_patch(arrow) plt.show() # - # Roughly, in the truncated plot above, we see that the segmentation between the two regimes occurs around `time=2373` (vertical dotted line) where the patterns from the first regime (grey) don't cross over to the second regime (orange) (see Figure 2 in 
the original paper). And so the "arc curve" is calculated by sliding along the time series and simply counting the number of times other patterns have "crossed over" that specific time point (i.e., "arcs"). Essentially, this information can be extracted by looking at the matrix profile indices (which tells you where along the time series your nearest neighbor is). And so, we'd expect the arc counts to be high where repeated patterns are near each other and low where there are no crossing arcs. # # Before we compute the "arc curve", we'll need to first compute the standard matrix profile and we can approximately see that the window size is about 210 data points (thanks to the knowledge of the subject matter/domain expert). m = 210 mp = stumpy.stump(abp, m=m) # Now, to compute the "arc curve" and determine the location of the regime change, we can directly call the `fluss` function. However, note that `fluss` requires the following inputs: # # 1. the matrix profile indices `mp[:, 1]` (not the matrix profile distances) # 2. an appropriate subsequence length, `L` (for convenience, we'll just choose it to be equal to the window size, `m=210`) # 3. the number of regimes, `n_regimes`, to search for (2 regions in this case) # 4. an exclusion factor, `excl_factor`, to nullify the beginning and end of the arc curve (anywhere between 1-5 is reasonable according to the paper) L = 210 cac, regime_locations = stumpy.fluss(mp[:, 1], L=L, n_regimes=2, excl_factor=1) # Notice that `fluss` actually returns something called the "corrected arc curve" (CAC), which normalizes the fact that there are typically less arcs crossing over a time point near the beginning and end of the time series and more potential for cross overs near the middle of the time series. Additionally, `fluss` returns the regimes or location(s) of the dotted line(s). Let's plot our original time series (top) along with the corrected arc curve (orange) and the single regime (vertical dotted line). 
fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0})
axs[0].plot(range(abp.shape[0]), abp)
axs[0].axvline(x=regime_locations[0], linestyle="dashed")
axs[1].plot(range(cac.shape[0]), cac, color='C1')
axs[1].axvline(x=regime_locations[0], linestyle="dashed")
plt.show()

# ## FLOSS
#
# Unlike FLUSS, FLOSS is concerned with streaming data, and so it calculates a modified version of the corrected arc curve (CAC) that is strictly one-directional (CAC_1D) rather than bidirectional. That is, instead of expecting cross overs to be equally likely from both directions, we expect more cross overs to point toward the future (and less to point toward the past). So, we can manually compute the `CAC_1D`

cac_1d = stumpy._cac(mp[:, 3], L, bidirectional=False, excl_factor=1)  # This is for demo purposes only. Use floss() below!

# and compare the `CAC_1D` (blue) with the bidirectional `CAC` (orange) and we see that the global minima are approximately in the same place (see Figure 10 in the original paper).

fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0})
axs[0].plot(np.arange(abp.shape[0]), abp)
axs[0].axvline(x=regime_locations[0], linestyle="dashed")
axs[1].plot(range(cac.shape[0]), cac, color='C1')
axs[1].axvline(x=regime_locations[0], linestyle="dashed")
axs[1].plot(range(cac_1d.shape[0]), cac_1d)
plt.show()

# ## Streaming Data with FLOSS
#
# Instead of manually computing `CAC_1D` like we did above on streaming data, we can actually call the `floss` function directly which instantiates a streaming object. To demonstrate the use of `floss`, let's take some `old_data` and compute the matrix profile indices for it like we did above:

# +
old_data = df.iloc[20000:20000+5000, 1].values  # This is well before the regime change has occurred

mp = stumpy.stump(old_data, m=m)
# -

# Now, we could do what we did earlier and compute the bidirectional corrected arc curve but we'd like to see how the arc curve changes as a result of adding new data points.
So, let's define some new data that is to be streamed in:

new_data = df.iloc[25000:25000+5000, 1].values

# Finally, we call the `floss` function to initialize a streaming object and pass in:
#
# 1. the matrix profile generated from the `old_data` (only the indices are used)
# 2. the old data used to generate the matrix profile in 1.
# 3. the matrix profile window size, `m=210`
# 4. the subsequence length, `L=210`
# 5. the exclusion factor

stream = stumpy.floss(mp, old_data, m=m, L=L, excl_factor=1)

# You can now update the `stream` with a new data point, `t`, via the `stream.update(t)` function and this will slide your window over by one data point and it will automatically update:
#
# 1. the `CAC_1D` (accessed via the `.cac_1d_` attribute)
# 2. the matrix profile (accessed via the `.P_` attribute)
# 3. the matrix profile indices (accessed via the `.I_` attribute)
# 4. the sliding window of data used to produce the `CAC_1D` (accessed via the `.T_` attribute - this should be the same size as the length of the `old_data`)
#
# Let's continuously update our `stream` with the `new_data` one value at a time and store them in a list (you'll see why in a second):

windows = []
for i, t in enumerate(new_data):
    stream.update(t)

    if i % 100 == 0:
        windows.append((stream.T_, stream.cac_1d_))

# Below, you can see an animation that changes as a result of updating the stream with new data. For reference, we've also plotted the `CAC_1D` (orange) that we manually generated from above for the stationary data. You'll see that halfway through the animation, the regime change occurs and the updated `CAC_1D` (blue) will be perfectly aligned with the orange curve.
# +
import os  # needed below to clean up the temp file left by the animation

fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0})
axs[0].set_xlim((0, mp.shape[0]))
axs[0].set_ylim((-0.1, max(np.max(old_data), np.max(new_data))))
axs[1].set_xlim((0, mp.shape[0]))
axs[1].set_ylim((-0.1, 1.1))

lines = []
for ax in axs:
    line, = ax.plot([], [], lw=2)
    lines.append(line)
line, = axs[1].plot([], [], lw=2)
lines.append(line)

def init():
    for line in lines:
        line.set_data([], [])
    return lines

def animate(window):
    data_out, cac_out = window
    for line, data in zip(lines, [data_out, cac_out, cac_1d]):
        line.set_data(np.arange(data.shape[0]), data)
    return lines

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=windows, interval=100,
                               blit=True)

anim_out = anim.to_jshtml()
plt.close()  # Prevents duplicate image from displaying
if os.path.exists("None0000000.png"):
    os.remove("None0000000.png")  # Delete rogue temp file
HTML(anim_out)
# anim.save('/tmp/semantic.mp4')
# -

# ## Summary
#
# And that's it! You've just learned the basics of how to identify segments within your data using the matrix profile indices and leveraging `fluss` and `floss`.
#
# ## Resources
#
# [Matrix Profile VIII](https://www.cs.ucr.edu/~eamonn/Segmentation_ICDM.pdf)
#
# [STUMPY Documentation](https://stumpy.readthedocs.io/en/latest/)
#
# [STUMPY Matrix Profile Github Code Repository](https://github.com/TDAmeritrade/stumpy)
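The "arc counting" idea behind the arc curve can be sketched directly from a vector of nearest-neighbor indices. This is a toy illustration only, not STUMPY's implementation (which also applies the idealized-arc correction that turns raw counts into the CAC):

```python
import numpy as np

def raw_arc_counts(indices):
    # indices[i] is the position of subsequence i's nearest neighbor.
    # An arc from i to indices[i] crosses every point strictly between them.
    n = len(indices)
    counts = np.zeros(n)
    for i, j in enumerate(indices):
        lo, hi = sorted((i, j))
        counts[lo + 1:hi] += 1  # increment every location the arc spans
    return counts

# Two well-separated regimes: every neighbor stays within its own half,
# so no arcs cross the midpoint and the count dips to 0 at the boundary.
idx = np.array([2, 3, 0, 1, 6, 7, 4, 5])
print(raw_arc_counts(idx))  # -> [0. 2. 2. 0. 0. 2. 2. 0.]
```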
notebooks/Tutorial_Semantic_Segmentation.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Coding Exercise #0807
# #### 1. Document classification with LSTM network (Binary):

import pandas as pd
import numpy as np
import warnings
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.datasets.imdb import load_data, get_word_index     # Movie review data.
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN, LSTM, Embedding
from keras.utils import to_categorical
from keras.preprocessing import sequence
from keras.optimizers import Adam, RMSprop, SGD
warnings.filterwarnings('ignore')                             # Turn the warnings off.
# %matplotlib inline

# #### 1.1. Read in the data:

n_words = 3000                                                # Size of the vocabulary.
(X_train, y_train), (X_test, y_test) = load_data(num_words = n_words)
n_train_size = X_train.shape[0]

# Check for the shapes.
print("-"*50)
print("Training data X shape: {}".format(X_train.shape))
print("Training data y shape: {}".format(y_train.shape))
print("-"*50)
print("Test data X shape: {}".format(X_test.shape))
print("Test data y shape: {}".format(y_test.shape))
print("-"*50)

# #### 1.2. Explore the data:

# Number of unique values of y = Number of categories of the reviews.
n_cat = pd.Series(y_train).nunique()
n_cat

# Print out an observation (document) contained in X.
# It is encoded as integers (indices).
print(X_train[0])

# Let's check the length of the first 100 documents.
# We notice that the length is not uniform.
print([len(a) for a in X_train[0:100]])

# Download the dictionary to translate the indices.
my_dict = get_word_index(path='imdb_word_index.json')

# +
# To view the dictionary.
# my_dict
# -

# Exchange the 'key' and 'value'.
my_dict_inv = {v:k for k,v in my_dict.items()}

# Translate each document.
i_review = 10                                       # Document number that can be changed at will.
review = list(pd.Series(X_train[i_review]).apply(lambda x: my_dict_inv[x]))
print(' '.join(review))

# #### 1.3. Data preprocessing:

# Padding: review lengths are uniformly matched to maxlen.
# Cut away if longer than maxlen and fill with 0s if shorter than maxlen.
X_train = sequence.pad_sequences(X_train, maxlen = 100)
X_test = sequence.pad_sequences(X_test, maxlen = 100)

# +
# y is already binary. Thus, there is no need to convert to the one-hot-encoding scheme.
# -

# #### 1.4. Define the model:

n_neurons = 50                      # Neurons within each memory cell.
n_input = 100                       # Dimension of the embedding space.

# LSTM network model.
my_model = Sequential()
my_model.add(Embedding(n_words, n_input))           # n_words = vocabulary size, n_input = dimension of the embedding space.
my_model.add(LSTM(units=n_neurons, return_sequences=False, input_shape=(None, n_input), activation='tanh'))
my_model.add(Dense(1, activation='sigmoid'))        # Binary output!!!

# View the summary.
my_model.summary()

# #### 1.5. Define the optimizer and compile:

n_epochs = 5                        # Number of epochs.
batch_size = 50                     # Size of each batch.
learn_rate = 0.002                  # learning rate.

# Optimizer and compilation.
my_optimizer=Adam(lr=learn_rate)
my_model.compile(loss = "binary_crossentropy", optimizer = my_optimizer, metrics=["accuracy"])

# #### 1.6. Train the model and visualize the history:

my_summary = my_model.fit(X_train, y_train, epochs=n_epochs, batch_size = batch_size, validation_split=0.2, verbose = 1)

plt.plot(my_summary.history['accuracy'], c="b")
plt.plot(my_summary.history['val_accuracy'], c="g")
plt.title('Training History')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='lower right')
plt.show()

# #### 1.7. Testing:

ACC = my_model.evaluate(X_test, y_test, verbose=0)[1]
print("Test Accuracy : {}".format(np.round(ACC,3)))
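The padding rule described in 1.3 (truncate long sequences, zero-fill short ones) can be illustrated with a tiny pure-Python stand-in. This sketch assumes the Keras defaults for `pad_sequences` (`padding='pre'`, `truncating='pre'`), i.e. keep the last `maxlen` items and left-pad with 0s:

```python
def pad_sequence(seq, maxlen):
    # Stand-in for keras pad_sequences on a single sequence, with default
    # settings: truncate from the front, pad with 0s at the front.
    if len(seq) >= maxlen:
        return seq[-maxlen:]
    return [0] * (maxlen - len(seq)) + seq

print(pad_sequence([1, 2, 3, 4, 5], 3))  # -> [3, 4, 5]
print(pad_sequence([1, 2], 4))           # -> [0, 0, 1, 2]
```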
Official Notes/Exercises/Chapter 9/ex_0807.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt X = np.linspace(-5,5,50) Y = 2.0*(X**3.0) + 5.0*(X**2.0) - 9.0*X - 2 plt.plot(X,Y) Y_noise = np.random.randint(-30,30,50) Y_new = Y + Y_noise plt.plot(X,Y,label='Dados reais') plt.scatter(X,Y_new,label='Dados com ruído',color='red') plt.xlabel('X') plt.ylabel('Y') plt.legend() from sklearn.model_selection import train_test_split X_treino, X_teste, Y_treino, Y_teste = train_test_split(X, Y_new, test_size=0.30, shuffle=False, random_state=42) plt.plot(X,Y,label='Valores reais') plt.scatter(X_treino,Y_treino,label='Treino',color='red') plt.scatter(X_teste,Y_teste,label='Teste',color='green') plt.xlabel('X') plt.ylabel('Y') plt.legend() from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures X_treino = X_treino.reshape(-1, 1) X_teste = X_teste.reshape(-1, 1) caracteristicas_1= PolynomialFeatures(degree=1) X_treino_1 = caracteristicas_1.fit_transform(X_treino) X_teste_1 = caracteristicas_1.transform(X_teste) modelo1 = LinearRegression() modelo1.fit(X_treino_1, Y_treino) Y_Polinomio_treino_1 = modelo1.predict(X_treino_1) Y_Polinomio_teste_1 = modelo1.predict(X_teste_1) plt.plot(X,Y,label='Real') plt.scatter(X,Y_new,label='Real+Ruido') plt.plot(X_treino,Y_Polinomio_treino_1,label='Treino') plt.plot(X_teste,Y_Polinomio_teste_1,label='Teste') plt.xlabel('X') plt.ylabel('Y') plt.title('Grau 1') plt.legend() caracteristicas_2= PolynomialFeatures(degree=2) X_treino_2 = caracteristicas_2.fit_transform(X_treino) X_teste_2 = caracteristicas_2.transform(X_teste) modelo2 = LinearRegression() modelo2.fit(X_treino_2, Y_treino) Y_Polinomio_treino_2 = modelo2.predict(X_treino_2) Y_Polinomio_teste_2 = modelo2.predict(X_teste_2) plt.plot(X,Y,label='Real') 
plt.scatter(X,Y_new,label='Real+Ruido') plt.scatter(X_treino,Y_Polinomio_treino_2,label='Treino') plt.scatter(X_teste,Y_Polinomio_teste_2,label='Teste') plt.xlabel('X') plt.ylabel('Y') plt.title('Grau 2') plt.legend() caracteristicas_3= PolynomialFeatures(degree=3) X_treino_3 = caracteristicas_3.fit_transform(X_treino) X_teste_3 = caracteristicas_3.transform(X_teste) modelo3 = LinearRegression() modelo3.fit(X_treino_3, Y_treino) Y_Polinomio_treino_3 = modelo3.predict(X_treino_3) Y_Polinomio_teste_3 = modelo3.predict(X_teste_3) plt.plot(X,Y,label='Real') plt.scatter(X,Y_new,label='Real+Ruido') plt.scatter(X_treino,Y_Polinomio_treino_3,label='Treino') plt.scatter(X_teste,Y_Polinomio_teste_3,label='Teste') plt.xlabel('X') plt.ylabel('Y') plt.title('Grau 3') plt.legend() caracteristicas_4= PolynomialFeatures(degree=4) X_treino_4 = caracteristicas_4.fit_transform(X_treino) X_teste_4 = caracteristicas_4.transform(X_teste) modelo4 = LinearRegression() modelo4.fit(X_treino_4, Y_treino) Y_Polinomio_treino_4 = modelo4.predict(X_treino_4) Y_Polinomio_teste_4 = modelo4.predict(X_teste_4) plt.plot(X,Y,label='Real') plt.scatter(X,Y_new,label='Real+Ruido') plt.scatter(X_treino,Y_Polinomio_treino_4,label='Treino') plt.scatter(X_teste,Y_Polinomio_teste_4,label='Teste') plt.xlabel('X') plt.ylabel('Y') plt.title('Grau 4') plt.legend() from sklearn.metrics import mean_squared_error MSE1_treino = mean_squared_error(Y_treino,Y_Polinomio_treino_1) MSE2_treino = mean_squared_error(Y_treino,Y_Polinomio_treino_2) MSE3_treino = mean_squared_error(Y_treino,Y_Polinomio_treino_3) MSE4_treino = mean_squared_error(Y_treino,Y_Polinomio_treino_4) MSE1_teste = mean_squared_error(Y_teste,Y_Polinomio_teste_1) MSE2_teste = mean_squared_error(Y_teste,Y_Polinomio_teste_2) MSE3_teste = mean_squared_error(Y_teste,Y_Polinomio_teste_3) MSE4_teste = mean_squared_error(Y_teste,Y_Polinomio_teste_4) grau = np.linspace(1,4,4) MSE_treino = [MSE1_treino,MSE2_treino,MSE3_treino,MSE4_treino] MSE_teste = 
[MSE1_teste,MSE2_teste,MSE3_teste,MSE4_teste] plt.plot(grau,MSE_treino,'-*',color='blue',label='Treino') plt.plot(grau,MSE_teste,'-*',color='red',label='Teste') plt.yscale('log') plt.xlabel('Grau') plt.ylabel('MSE') plt.legend(loc='best') plt.tight_layout() MSE_treino MSE_teste
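The two building blocks compared across degrees above — polynomial feature expansion and the mean squared error — can be sketched in plain Python; `poly_features` and `mse` are hypothetical helpers for a single input column, not the sklearn implementations:

```python
def poly_features(x, degree):
    """Expand a scalar x into [1, x, x^2, ..., x^degree],
    mirroring what PolynomialFeatures does for one input column."""
    return [x ** d for d in range(degree + 1)]

def mse(y_true, y_pred):
    """Mean squared error, the metric plotted against degree above."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(poly_features(2.0, 3))         # [1.0, 2.0, 4.0, 8.0]
print(mse([1.0, 2.0], [1.0, 4.0]))   # 2.0
```

Underfitting shows up as a high MSE on both train and test; overfitting as a low train MSE with a rising test MSE.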
Data_Science/Regressao/Underfitting_vs_Overfitting.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This IPython Notebook illustrates the use of the **`openmc.mgxs.Library`** class. The `Library` class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features: # # * Calculation of multi-energy-group and multi-delayed-group cross sections for a **fuel assembly** # * Automated creation, manipulation and storage of `MGXS` with **`openmc.mgxs.Library`** # * Steady-state pin-by-pin **delayed neutron fractions (beta)** for each delayed group. # * Generation of surface currents on the interfaces and surfaces of a Mesh. # ## Generate Input Files # + # %matplotlib inline import math import matplotlib.pyplot as plt import numpy as np import openmc import openmc.mgxs # - # First we need to define materials that will be used in the problem: fuel, water, and cladding. # + # 1.6 enriched fuel fuel = openmc.Material(name='1.6% Fuel') fuel.set_density('g/cm3', 10.31341) fuel.add_nuclide('U235', 3.7503e-4) fuel.add_nuclide('U238', 2.2625e-2) fuel.add_nuclide('O16', 4.6007e-2) # borated water water = openmc.Material(name='Borated Water') water.set_density('g/cm3', 0.740582) water.add_nuclide('H1', 4.9457e-2) water.add_nuclide('O16', 2.4732e-2) water.add_nuclide('B10', 8.0042e-6) # zircaloy zircaloy = openmc.Material(name='Zircaloy') zircaloy.set_density('g/cm3', 6.55) zircaloy.add_nuclide('Zr90', 7.2758e-3) # - # With our three materials, we can now create a `Materials` object that can be exported to an actual XML file. # Create a materials collection and export to XML materials = openmc.Materials((fuel, water, zircaloy)) materials.export_to_xml() # Now let's move on to the geometry. 
This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem. # + # Create cylinders for the fuel and clad fuel_outer_radius = openmc.ZCylinder(r=0.39218) clad_outer_radius = openmc.ZCylinder(r=0.45720) # Create boundary planes to surround the geometry min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective') max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective') min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective') max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective') min_z = openmc.ZPlane(z0=-10., boundary_type='reflective') max_z = openmc.ZPlane(z0=+10., boundary_type='reflective') # - # With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces. # + # Create a Universe to encapsulate a fuel pin fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin') # Create fuel Cell fuel_cell = openmc.Cell(name='1.6% Fuel') fuel_cell.fill = fuel fuel_cell.region = -fuel_outer_radius fuel_pin_universe.add_cell(fuel_cell) # Create a clad Cell clad_cell = openmc.Cell(name='1.6% Clad') clad_cell.fill = zircaloy clad_cell.region = +fuel_outer_radius & -clad_outer_radius fuel_pin_universe.add_cell(clad_cell) # Create a moderator Cell moderator_cell = openmc.Cell(name='1.6% Moderator') moderator_cell.fill = water moderator_cell.region = +clad_outer_radius fuel_pin_universe.add_cell(moderator_cell) # - # Likewise, we can construct a control rod guide tube with the same surfaces. 
# + # Create a Universe to encapsulate a control rod guide tube guide_tube_universe = openmc.Universe(name='Guide Tube') # Create guide tube Cell guide_tube_cell = openmc.Cell(name='Guide Tube Water') guide_tube_cell.fill = water guide_tube_cell.region = -fuel_outer_radius guide_tube_universe.add_cell(guide_tube_cell) # Create a clad Cell clad_cell = openmc.Cell(name='Guide Clad') clad_cell.fill = zircaloy clad_cell.region = +fuel_outer_radius & -clad_outer_radius guide_tube_universe.add_cell(clad_cell) # Create a moderator Cell moderator_cell = openmc.Cell(name='Guide Tube Moderator') moderator_cell.fill = water moderator_cell.region = +clad_outer_radius guide_tube_universe.add_cell(moderator_cell) # - # Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch. # Create fuel assembly Lattice assembly = openmc.RectLattice(name='1.6% Fuel Assembly') assembly.pitch = (1.26, 1.26) assembly.lower_left = [-1.26 * 17. / 2.0] * 2 # Next, we create a NumPy array of fuel pin and guide tube universes for the lattice. # + # Create array indices for guide tube locations in lattice template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8, 11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11]) template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8, 8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14]) # Create universes array with the fuel pin and guide tube universes universes = np.tile(fuel_pin_universe, (17,17)) universes[template_x, template_y] = guide_tube_universe # Store the array of universes in the lattice assembly.universes = universes # - # OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the assembly lattice and then assign it to the root universe.
# + # Create root Cell root_cell = openmc.Cell(name='root cell', fill=assembly) # Add boundary planes root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z # Create root Universe root_universe = openmc.Universe(universe_id=0, name='root universe') root_universe.add_cell(root_cell) # - # We now must create a geometry that is assigned a root universe and export it to XML. # Create Geometry and export to XML geometry = openmc.Geometry(root_universe) geometry.export_to_xml() # With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles. # + # OpenMC simulation parameters batches = 50 inactive = 10 particles = 2500 # Instantiate a Settings object settings = openmc.Settings() settings.batches = batches settings.inactive = inactive settings.particles = particles settings.output = {'tallies': False} # Create an initial uniform spatial source distribution over fissionable zones bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.] uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True) settings.source = openmc.Source(space=uniform_dist) # Export to "settings.xml" settings.export_to_xml() # - # Let us also create a plot to verify that our fuel assembly geometry was created successfully. # Plot our geometry plot = openmc.Plot.from_geometry(geometry) plot.pixels = (250, 250) plot.color_by = 'material' openmc.plot_inline(plot) # As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water! # ## Create an MGXS Library # Now we are ready to generate multi-group cross sections! First, let's define a 20-energy-group structure and a 1-energy-group structure.
# + # Instantiate a 20-group EnergyGroups object energy_groups = openmc.mgxs.EnergyGroups() energy_groups.group_edges = np.logspace(-3, 7.3, 21) # Instantiate a 1-group EnergyGroups object one_group = openmc.mgxs.EnergyGroups() one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]]) # - # Next, we will instantiate an `openmc.mgxs.Library` for the energy and delayed groups with the fuel assembly geometry. # + # Instantiate a tally mesh mesh = openmc.RegularMesh(mesh_id=1) mesh.dimension = [17, 17, 1] mesh.lower_left = [-10.71, -10.71, -10000.] mesh.width = [1.26, 1.26, 20000.] # Initialize a 20-energy-group and 6-delayed-group MGXS Library mgxs_lib = openmc.mgxs.Library(geometry) mgxs_lib.energy_groups = energy_groups mgxs_lib.num_delayed_groups = 6 # Specify multi-group cross section types to compute mgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt', 'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta'] # Specify a "mesh" domain type for the cross section tally filters mgxs_lib.domain_type = 'mesh' # Specify the mesh domain over which to compute multi-group cross sections mgxs_lib.domains = [mesh] # Construct all tallies needed for the multi-group cross section library mgxs_lib.build_library() # Create a "tallies.xml" file for the MGXS Library tallies_file = openmc.Tallies() mgxs_lib.add_to_tallies_file(tallies_file, merge=True) # Instantiate a current tally mesh_filter = openmc.MeshSurfaceFilter(mesh) current_tally = openmc.Tally(name='current tally') current_tally.scores = ['current'] current_tally.filters = [mesh_filter] # Add current tally to the tallies file tallies_file.append(current_tally) # Export to "tallies.xml" tallies_file.export_to_xml() # - # Now, we can run OpenMC to generate the cross sections. # Run OpenMC openmc.run() # ## Tally Data Processing # Our simulation ran successfully and created statepoint and summary output files.
We begin our analysis by instantiating a `StatePoint` object. # Load the last statepoint file sp = openmc.StatePoint('statepoint.50.h5') # The statepoint is now ready to be analyzed by the `Library`. We simply have to load the tallies from the statepoint into the `Library` and our `MGXS` objects will compute the cross sections for us under-the-hood. # + # Initialize MGXS Library with OpenMC statepoint data mgxs_lib.load_from_statepoint(sp) # Extract the current tally separately current_tally = sp.get_tally(name='current tally') # - # # Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations # Finally, we illustrate how one can leverage OpenMC's [tally arithmetic](https://mit-crpg.github.io/openmc/pythonapi/examples/tally-arithmetic.html) data processing feature with `MGXS` objects. The `openmc.mgxs` module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each `MGXS` object includes an `xs_tally` attribute which is a "derived" `Tally` based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to compute the delayed neutron precursor concentrations using the `Beta` and `DelayedNuFissionXS` objects.
The delayed neutron precursor concentrations are modeled using the following equations: # # $$\frac{\partial}{\partial t} C_{k,d} (t) = \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t) \nu_d \sigma_{f,x}(\mathbf{r},E',t)\Phi(\mathbf{r},E',t) - \lambda_{d} C_{k,d} (t) $$ # # $$C_{k,d} (t=0) = \frac{1}{\lambda_{d}} \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t=0) \nu_d \sigma_{f,x}(\mathbf{r},E',t=0)\Phi(\mathbf{r},E',t=0) $$ # + # Set the time constants for the delayed precursors (in seconds^-1) precursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195]) precursor_lambda = math.log(2.0) / precursor_halflife beta = mgxs_lib.get_mgxs(mesh, 'beta') # Create a tally object with only the delayed group filter for the time constants beta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter] lambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides) for f in beta_filters: lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. + 1. 
# Set the mean of the lambda tally and reshape to account for nuclides and scores lambda_tally._mean = precursor_lambda lambda_tally._mean.shape = lambda_tally.std_dev.shape # Set a total nuclide and lambda score lambda_tally.nuclides = [openmc.Nuclide(name='total')] lambda_tally.scores = ['lambda'] delayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission') # Use tally arithmetic to compute the precursor concentrations precursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \ delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally # The result is a derived tally which can generate Pandas DataFrames for inspection precursor_conc.get_pandas_dataframe().head(10) # - # Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe. current_tally.get_pandas_dataframe().head(10) # ## Cross Section Visualizations # In addition to inspecting the data in the tallies by getting the pandas dataframe, we can also plot the tally data on the domain mesh. Below is the delayed neutron fraction tallied in each mesh cell for each delayed group.
# + # Extract the energy-condensed delayed neutron fraction tally beta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True) beta_by_group.mean.shape = (17, 17, 6) beta_by_group.mean[beta_by_group.mean == 0] = np.nan # Plot the betas plt.figure(figsize=(18,9)) fig = plt.subplot(231) plt.imshow(beta_by_group.mean[:,:,0], interpolation='none', cmap='jet') plt.colorbar() plt.title('Beta - delayed group 1') fig = plt.subplot(232) plt.imshow(beta_by_group.mean[:,:,1], interpolation='none', cmap='jet') plt.colorbar() plt.title('Beta - delayed group 2') fig = plt.subplot(233) plt.imshow(beta_by_group.mean[:,:,2], interpolation='none', cmap='jet') plt.colorbar() plt.title('Beta - delayed group 3') fig = plt.subplot(234) plt.imshow(beta_by_group.mean[:,:,3], interpolation='none', cmap='jet') plt.colorbar() plt.title('Beta - delayed group 4') fig = plt.subplot(235) plt.imshow(beta_by_group.mean[:,:,4], interpolation='none', cmap='jet') plt.colorbar() plt.title('Beta - delayed group 5') fig = plt.subplot(236) plt.imshow(beta_by_group.mean[:,:,5], interpolation='none', cmap='jet') plt.colorbar() plt.title('Beta - delayed group 6')
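The decay constants used for the lambda tally above follow directly from the precursor half-lives via λ_d = ln 2 / T½, and the steady-state precursor relation reduces to C = β · (delayed ν-fission production rate) / λ. A standalone sketch of those two steps, with illustrative numbers only (the production rate below is made up, not taken from the simulation):

```python
import math

# Half-lives of the six delayed precursor groups (seconds), as above
halflives = [55.6, 24.5, 16.3, 2.37, 0.424, 0.195]
lambdas = [math.log(2.0) / t for t in halflives]  # decay constants (1/s)

def precursor_conc(beta, delayed_production, lam):
    """Steady-state C = beta * delayed nu-fission production rate / lambda."""
    return beta * delayed_production / lam

print(round(lambdas[0], 5))  # ~0.01247 for the longest-lived group
print(precursor_conc(2.5e-4, 1.0e3, lambdas[0]))  # illustrative values only
```

This is exactly the arithmetic the derived tally performs element-wise, with uncertainty propagation handled for free by the tally algebra.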
examples/jupyter/mdgxs-part-ii.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] tags=[] # TSG053 - ADS Provided Books must be saved before use # ==================================================== # # Description # ----------- # # Azure Data Studio comes with some “Provided Books”. These books must be # saved first; otherwise, some notebooks will not run correctly. # # NOTE: The usability in this area could be better; this is being tracked # under: # # - https://github.com/microsoft/azuredatastudio/issues/10500 # # ### Steps # # To save a provided book, click the “Save Book” icon (this icon is to the # right of the book title, at the top of the tree viewer in the “Provided # Books” section), and save the book to the local machine.
Big-Data-Clusters/CU14/public/content/repair/tsg053-save-book-first.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt import numpy as np import pandas as pd # %matplotlib inline muestras = pd.read_csv('./prueba1.CSV', names=['timestamp', 'Th', 'Tn', 'DX', 'DY'])
ipython_notebooks/05_experimentos_tolvas/.ipynb_checkpoints/prueba1-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/EnricoHuber/TensorFlow-Review/blob/main/Neural_Networks_with_TF_and_Fashion_MNIST.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="d9imuQolKCm0" # %tensorflow_version 2.x # + id="3yM6LvjLNHSB" import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt # + id="BZ3wBj6JNfQ7" fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # + colab={"base_uri": "https://localhost:8080/"} id="g4V49B-eNqng" outputId="bae8ba79-d787-44ed-df8a-2ca7088ab72a" train_images.shape # + colab={"base_uri": "https://localhost:8080/"} id="-BtWizwwNsB4" outputId="1ed9372d-5fdf-4a83-c64a-33dd61ae498b" train_images[0, 23, 23] # One pixel # + colab={"base_uri": "https://localhost:8080/"} id="zQhZ0DkOOnG5" outputId="bef5403e-98ad-4f8c-d0aa-170a9d9ae2ce" train_labels[:10] # + id="Mk6JdW-QOZFW" class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="yGXenCm4O2Gb" outputId="ebda9304-0b16-457a-ffdf-bc21558165f3" plt.figure() plt.imshow(train_images[1]) plt.colorbar() plt.grid(False) plt.show() # + id="JblG3g4ZO9vv" # Squeeze (scale) the data between 0 and 1 train_images = train_images / 255.0 test_images = test_images / 255.0 # + id="UrsB67nhPoSr" model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) # + colab={"base_uri": "https://localhost:8080/"} id="SnfopwsWQgFV" 
outputId="d1e53171-44ed-4dd3-fcd9-63392357976d" model # + id="8fl2ITCuQg3e" model.compile( optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics=['accuracy'] ) # + colab={"base_uri": "https://localhost:8080/"} id="rDCERDg3RQgd" outputId="59c8617b-2608-4802-b192-c6e409f3fe5d" model.fit(train_images, train_labels, epochs=10) # + colab={"base_uri": "https://localhost:8080/"} id="QUl1yCZiRtna" outputId="6dd0964f-e595-4623-ac06-3d3e690e51e9" test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=1) print('Test accuracy: ', test_acc) # + id="ko0intd1SruD" predictions = model.predict(test_images) # + colab={"base_uri": "https://localhost:8080/"} id="sAouNuuqSuaB" outputId="c9875086-5144-40b8-bcb0-7c088ca59e26" predictions # + colab={"base_uri": "https://localhost:8080/"} id="fhPW-v1sTGMv" outputId="d4f55bbc-212d-448e-c15c-54d50d600531" print(np.argmax(predictions[0])) # + colab={"base_uri": "https://localhost:8080/", "height": 306} id="2mmSYanFWaBg" outputId="43188982-db7a-4afb-d1d1-8fcdec143cc4" i = 2 # predictions were made on the test set, so compare against test labels/images print('Prediction:', class_names[np.argmax(predictions[i])]) print('Actual:', class_names[test_labels[i]]) plt.figure() plt.imshow(test_images[i]) plt.colorbar() plt.grid(False) plt.show()
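The final cell turns the model's softmax output into a class name via `np.argmax`; a dependency-free sketch of that step (`softmax` and `predict_label` are hypothetical helpers for illustration, not part of Keras):

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_label(logits, class_names):
    """Pick the class with the highest probability, like np.argmax above."""
    probs = softmax(logits)
    return class_names[probs.index(max(probs))]

names = ['T-shirt/top', 'Trouser', 'Pullover']
print(predict_label([0.1, 2.5, 0.3], names))  # Trouser
```

Because softmax is monotonic, taking the argmax of the raw scores gives the same label as taking it over the probabilities.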
Neural_Networks_with_TF_and_Fashion_MNIST.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import json import math import numpy as np import openrtdynamics2.lang as dy from vehicle_lib.vehicle_lib import * # - # load track data with open("track_data/simple_track.json", "r") as read_file: track_data = json.load(read_file) # + # # Demo: a vehicle controlled to follow a given path # # Implemented using the code generator openrtdynamics 2 - https://pypi.org/project/openrtdynamics2/ . # This generates c++ code for Web Assembly to be run within the browser. # system = dy.enter_system() velocity = dy.system_input( dy.DataTypeFloat64(1), name='velocity', default_value=23.75, value_range=[0, 25], title="vehicle velocity") k_p = dy.system_input( dy.DataTypeFloat64(1), name='k_p', default_value=2.0, value_range=[0, 10.0], title="controller gain") disturbance_amplitude = dy.system_input( dy.DataTypeFloat64(1), name='disturbance_amplitude', default_value=20.0, value_range=[-45, 45], title="disturbance amplitude") * dy.float64(math.pi / 180.0) sample_disturbance = dy.system_input( dy.DataTypeInt32(1), name='sample_disturbance', default_value=50, value_range=[0, 300], title="disturbance position") # parameters wheelbase = 3.0 # sampling time Ts = 0.01 # create storage for the reference path: path = import_path_data(track_data) # create placeholders for the plant output signals x = dy.signal() y = dy.signal() psi = dy.signal() # track the evolution of the closest point on the path to the vehicles position projection = track_projection_on_path(path, x, y) d_star = projection['d_star'] # the distance parameter of the path describing the closest point to the vehicle x_r = projection['x_r'] # (x_r, y_r) the projected vehicle position on the path y_r = projection['y_r'] psi_r = projection['psi_r'] # the orientation angle (tangent of the path) K_r = 
projection['K_r'] # the curvature of the path Delta_l = projection['Delta_l'] # the lateral distance between vehicle and path tracked_index = projection['tracked_index'] # the index describing the closest sample of the input path # reference for the lateral distance Delta_l_r = dy.float64(0.0) # zero in this example dy.append_output(Delta_l_r, 'Delta_l_r') # feedback control u = dy.PID_controller(r=Delta_l_r, y=Delta_l, Ts=0.01, kp=k_p) # path tracking # resulting lateral model u --> Delta_l : 1/s Delta_u = dy.asin( dy.saturate(u / velocity, -0.99, 0.99) ) delta_star = psi_r - psi steering = delta_star + Delta_u steering = dy.unwrap_angle(angle=steering, normalize_around_zero = True) dy.append_output(Delta_u, 'Delta_u') dy.append_output(delta_star, 'delta_star') # # The model of the vehicle including a disturbance # # model the disturbance disturbance_transient = np.concatenate(( cosra(50, 0, 1.0), co(10, 1.0), cosra(50, 1.0, 0) )) steering_disturbance, i = dy.play(disturbance_transient, start_trigger=dy.counter() == sample_disturbance, auto_start=False) # apply disturbance to the steering input disturbed_steering = steering + steering_disturbance * disturbance_amplitude # steering angle limit disturbed_steering = dy.saturate(u=disturbed_steering, lower_limit=-math.pi/2.0, upper_limit=math.pi/2.0) # the model of the vehicle x_, y_, psi_, x_dot, y_dot, psi_dot = discrete_time_bicycle_model(disturbed_steering, velocity, Ts, wheelbase) # close the feedback loops x << x_ y << y_ psi << psi_ # # outputs: these are available for visualization in the html set-up # dy.append_output(x, 'x') dy.append_output(y, 'y') dy.append_output(psi, 'psi') dy.append_output(steering, 'steering') dy.append_output(x_r, 'x_r') dy.append_output(y_r, 'y_r') dy.append_output(psi_r, 'psi_r') dy.append_output(Delta_l, 'Delta_l') dy.append_output(steering_disturbance, 'steering_disturbance') dy.append_output(disturbed_steering, 'disturbed_steering') dy.append_output(tracked_index, 
'tracked_index') # generate code for Web Assembly (wasm), requires emcc (emscripten) to build code_gen_results = dy.generate_code(template=dy.TargetWasm(enable_tracing=False), folder="generated/path_following_control", build=True) # dy.clear() # - import IPython IPython.display.IFrame(src='../vehicle_control_tutorial/path_following_control.html', width='100%', height=1000)
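The `dy.PID_controller` call above only sets `kp`, so the feedback law reduces to proportional control on the lateral error. A generic discrete PID sketch for reference — a hypothetical textbook implementation, not the openrtdynamics one:

```python
class DiscretePID:
    """Textbook discrete PID: u = kp*e + ki*Ts*sum(e) + kd*(e - e_prev)/Ts."""
    def __init__(self, kp, ki=0.0, kd=0.0, Ts=0.01):
        self.kp, self.ki, self.kd, self.Ts = kp, ki, kd, Ts
        self.integral = 0.0
        self.e_prev = 0.0

    def update(self, r, y):
        e = r - y                              # tracking error
        self.integral += e * self.Ts           # rectangular integration
        derivative = (e - self.e_prev) / self.Ts
        self.e_prev = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative

ctrl = DiscretePID(kp=2.0)
print(ctrl.update(r=0.0, y=-0.5))  # pure P action: 2.0 * 0.5 = 1.0
```

With `ki = kd = 0`, each call returns `kp * (r - y)`, matching how the controller gain `k_p` acts on `Delta_l_r - Delta_l` in the loop above.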
vehicle_control/path_following_control.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Feature scaling with sklearn - Exercise Solution # You are given a real estate dataset. # # Real estate is one of those examples that every regression course goes through as it is extremely easy to understand and there is a (almost always) certain causal relationship to be found. # # The data is located in the file: 'real_estate_price_size_year.csv'. # # You are expected to create a multiple linear regression (similar to the one in the lecture), using the new data. This exercise is very similar to a previous one. This time, however, **please standardize the data**. # # Apart from that, please: # - Display the intercept and coefficient(s) # - Find the R-squared and Adjusted R-squared # - Compare the R-squared and the Adjusted R-squared # - Compare the R-squared of this regression and the simple linear regression where only 'size' was used # - Using the model make a prediction about an apartment with size 750 sq.ft. from 2009 # - Find the univariate (or multivariate if you wish - see the article) p-values of the two variables. What can you say about them? # - Create a summary table with your findings # # In this exercise, the dependent variable is 'price', while the independent variables are 'size' and 'year'. # # Good luck! 
# ## Import the relevant libraries # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set() from sklearn.linear_model import LinearRegression # - # ## Load the data data = pd.read_csv('real_estate_price_size_year.csv') data.head() data.describe() # ## Create the regression # ### Declare the dependent and the independent variables x = data[['size','year']] y = data['price'] # ### Scale the inputs # + from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(x) x_scaled = scaler.transform(x) # - # ### Regression reg = LinearRegression() reg.fit(x_scaled,y) # ### Find the intercept reg.intercept_ # ### Find the coefficients reg.coef_ # ### Calculate the R-squared reg.score(x_scaled,y) # ### Calculate the Adjusted R-squared # Let's use the handy function we created def adj_r2(x,y): r2 = reg.score(x,y) n = x.shape[0] p = x.shape[1] adjusted_r2 = 1-(1-r2)*(n-1)/(n-p-1) return adjusted_r2 adj_r2(x_scaled,y) # ### Compare the R-squared and the Adjusted R-squared # It seems that the R-squared is only slightly larger than the Adjusted R-squared, implying that we were not penalized a lot for the inclusion of 2 independent variables. # ### Compare the Adjusted R-squared with the R-squared of the simple linear regression # Comparing the Adjusted R-squared with the R-squared of the simple linear regression (when only 'size' was used - a couple of lectures ago), we realize that 'Year' is not bringing too much value to the result. # ### Making predictions # # Find the predicted price of an apartment that has a size of 750 sq.ft. from 2009.
new_data = [[750,2009]] new_data_scaled = scaler.transform(new_data) reg.predict(new_data_scaled) # ### Calculate the univariate p-values of the variables from sklearn.feature_selection import f_regression f_regression(x_scaled,y) p_values = f_regression(x,y)[1] p_values p_values.round(3) # ### Create a summary table with your findings reg_summary = pd.DataFrame(data = x.columns.values, columns=['Features']) reg_summary['Coefficients'] = reg.coef_ reg_summary['p-values'] = p_values.round(3) reg_summary # It seems that 'Year' is not even significant; therefore, we should remove it from the model. # # Note that this dataset is extremely clean and probably artificially created, so standardization does not really bring any value to it.
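Under the hood, `StandardScaler` computes z = (x − mean) / std per column, using the population standard deviation (ddof = 0); a minimal pure-Python sketch for one column (`standardize` is a hypothetical helper, not the sklearn code):

```python
def standardize(column):
    """z-score a list of values: subtract the mean, divide by the std (ddof=0)."""
    n = len(column)
    mean = sum(column) / n
    var = sum((v - mean) ** 2 for v in column) / n   # population variance
    std = var ** 0.5
    return [(v - mean) / std for v in column]

z = standardize([750.0, 650.0, 700.0])
print([round(v, 3) for v in z])  # [1.225, -1.225, 0.0]
```

After this transform, every column has mean 0 and unit variance, which is why new observations must be passed through the same fitted scaler before `reg.predict`.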
sklearn - Feature Scaling Exercise Solution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Option chains # ======= # + from ib_insync import * util.startLoop() ib = IB() ib.connect('127.0.0.1', 7497, clientId=23) # - a = ib.positions() option_positions = [x for x in a if x.contract.secType== 'OPT'] option_positions_dict = {} for option_position in option_positions: option_positions_dict[option_position.contract] = option_position.position for (contract, position) in option_positions_dict.items(): print(contract.symbol,contract.lastTradeDateOrContractMonth, contract.strike,contract.right, 'SMART', contract.tradingClass) contract_new = Option(contract.symbol,contract.lastTradeDateOrContractMonth, contract.strike,contract.right, 'SMART', tradingClass = contract.tradingClass) # contract_new = Option('AMC', '20210618', '145', 'C', 'SMART', tradingClass='AMC') # ib.qualifyContracts(spx) ib.qualifyContracts(contract_new) [ticker] = ib.reqTickers(contract_new) print(ticker.modelGreeks.delta) # Suppose we want to find the options on the SPX, with the following conditions: # # * Use the next three monthly expiries; # * Use strike prices within ±20 dollars of the current SPX value; # * Use strike prices that are a multiple of 5 dollars. # To get the current market value, first create a contract for the underlyer (the S&P 500 index): spx = Option('AMC', '20210618', '145', 'C', 'SMART', tradingClass='AMC') ib.qualifyContracts(spx) # To avoid issues with market data permissions, we'll use delayed data: ib.reqMarketDataType(4) # Then get the ticker. Requesting a ticker can take up to 11 seconds.
[ticker] = ib.reqTickers(spx) ticker # Take the current market value of the ticker: spxValue = ticker.marketPrice() spxValue # The following request fetches a list of option chains: # + chains = ib.reqSecDefOptParams(spx.symbol, '', spx.secType, spx.conId) util.df(chains) # - # These are four option chains that differ in ``exchange`` and ``tradingClass``. The latter is 'SPX' for the monthly and 'SPXW' for the weekly options. Note that the weekly expiries are disjoint from the monthly ones, so when interested in the weekly options the monthly options can be added as well. # # In this case we're only interested in the monthly options trading on SMART: chain = next(c for c in chains if c.tradingClass == 'SPX' and c.exchange == 'SMART') chain # What we have here is the full matrix of expirations x strikes. From this we can build all the option contracts that meet our conditions: # + strikes = [strike for strike in chain.strikes if strike % 5 == 0 and spxValue - 20 < strike < spxValue + 20] expirations = sorted(exp for exp in chain.expirations)[:3] rights = ['P', 'C'] contracts = [Option('SPX', expiration, strike, right, 'SMART', tradingClass='SPX') for right in rights for expiration in expirations for strike in strikes] contracts = ib.qualifyContracts(*contracts) len(contracts) # - contracts[0] # Now to get the market data for all options in one go: # + tickers = ib.reqTickers(*contracts) tickers[0] # - a = tickers[0] a.bidSize a = contracts[0] a # The option greeks are available from the ``modelGreeks`` attribute, and if there is a bid, ask resp. last price available also from ``bidGreeks``, ``askGreeks`` and ``lastGreeks``. For streaming ticks the greek values will be kept up to date to the current market situation. ib.disconnect()
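# The strike-filtering condition described above (a multiple of 5 within +- 20 of the spot) can be isolated into a small helper, shown here on made-up numbers and independent of ib_insync:

```python
def filter_strikes(strikes, spot, width=20, step=5):
    """Keep strikes that are multiples of `step` within +- `width` of `spot`."""
    return [s for s in strikes
            if s % step == 0 and spot - width < s < spot + width]

# Hypothetical chain strikes around a spot price of 4321.5
chain_strikes = [4300, 4302.5, 4305, 4310, 4315, 4320, 4325, 4330, 4340, 4345]
print(filter_strikes(chain_strikes, 4321.5))
```

Keeping the filter as a pure function makes it easy to test without a TWS connection.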
notebooks/option_chain-Copy1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import warnings warnings.filterwarnings(action='ignore') from pyspark import SparkContext sc = SparkContext() nums= sc.parallelize([1,2,3,4]) nums.take(2) squared = nums.map(lambda x: x*x).collect() for num in squared: print('%i ' % (num)) # + from pyspark.sql import Row from pyspark.sql import SQLContext sqlContext = SQLContext(sc) # - lst = [('John',19),('Smith',29),('Adam',35),('Henry',50)] rdd = sc.parallelize(lst) rdd.map(lambda x: Row(name=x[0], age=int(x[1]))) #from pyspark.sql import SQLContext url = "https://raw.githubusercontent.com/guru99-edu/R-Programming/master/adult_data.csv" from pyspark import SparkFiles sc.addFile(url) sqlContext = SQLContext(sc) df = sqlContext.read.csv(SparkFiles.get("adult_data.csv"), header=True, inferSchema= True) df.printSchema() df.show(5, truncate = False) df.select('age','fnlwgt').show(5)
pyspark-test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Initialization # Load up everything needed. This is tricky because we now have to deal with non-structure-preserving operations to count the number of jets and turn that into a number associated with each entry. # %%time import pandas as pd import uproot import matplotlib.pyplot as plt plt.rc('font', size=14) import numpy as np # ## Load required data from a file # In this case we will load only the branches we need, as we are trying to be as efficient as we can. The call to keys() below will dump out all possible leaves that we could be loading in file = [r'file://C:\Users\gordo\Documents\GRIDDS\user.emma.mc15_13TeV.361023.Pythia8EvtGen_A14NNPDF23LO_jetjet_JZ3W.merge.DAOD_EXOT15.e3668_s2576_s2132_r7773_r7676_p2952.v201\copied\ntuples_QCD_JZ3__0_addFullEtaMLP.root'] reco_tree = uproot.open(file[0])["recoTree"] reco_tree.keys() # %%time jetinfo = reco_tree.arrays(['event_HTMiss', 'CalibJet_pT', 'CalibJet_eta']) # %%time jetinfo # ## Select # Figure out what the selection is for the events # %%time eta = jetinfo[b'CalibJet_eta'] good_eta = np.abs(eta.content) < 2.0 pt = jetinfo[b'CalibJet_pT'] good_pt = pt.content > 40.0 good_jet = uproot.interp.jagged.JaggedArray(good_eta & good_pt, eta.starts, eta.stops) # %%time good_event = [sum(l) > 2.0 for l in good_jet] # ## Plot # plot everything # + # %%time # First, figure out which jets have a good eta mht = jetinfo[b'event_HTMiss'] plt.hist(mht[good_event], bins=100) plt.xlabel('Missing $H_{T}$ [GeV]') plt.text(300, 60000, "Entries: {0}".format(len(mht[good_event]))) plt.show() # -
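# The jet selection above can be sketched with plain Python lists standing in for the jagged arrays — hypothetical jet values, no uproot required:

```python
# Each inner list holds the jets of one event (hypothetical values).
jet_pt = [[55.0, 42.0, 10.0], [80.0, 45.0, 50.0], [30.0]]
jet_eta = [[0.5, -1.2, 2.5], [1.9, -0.3, 0.1], [0.2]]

# A jet passes if |eta| < 2.0 and pT > 40 GeV, as in the cuts above.
good_jet = [[abs(e) < 2.0 and p > 40.0 for p, e in zip(pts, etas)]
            for pts, etas in zip(jet_pt, jet_eta)]

# An event is kept if it has more than two good jets,
# mirroring the `sum(l) > 2.0` list comprehension above.
good_event = [sum(flags) > 2 for flags in good_jet]
print(good_event)
```

The JaggedArray above does the same thing without materializing Python lists, which is why it is much faster on real ntuples.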
jupyter/04 - Plot MHT for events with 2 jets Jet Pt and eta cut.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''devenv'': venv)' # name: python3 # --- import cv2 import matplotlib.pylab as plt import numpy as np import os # + image = cv2.imread('LucyBackground.png') kernel = (21, 21) image_blurred = cv2.GaussianBlur(image, kernel, 0) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) mask = np.zeros(image.shape, np.uint8) contours, hierarchy = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) edge_width = 5 cv2.drawContours(mask, contours, -1, (255,255,255), edge_width) plt.imshow(mask) output = np.where(mask==np.array([255, 255, 255]), image_blurred, image) plt.imshow(output) gray.shape # - image = np.array( [[ 0.0, 0.0, 0.0, 0.0], [ 0.0, 0.0, 0.0, 0.0], [ 0.0, 0.1, 0.2, 0.0], [ 0.0, 0.0, 0.0, 0.0], [ 0.0, 0.0, 0.0, 0.0]] ) rgb = np.zeros((image.shape[0], image.shape[1], 3), 'uint8') rgb[:,:,0] = image*255 rgb[:,:,1] = image*255 rgb[:,:,2] = image*255 rgb
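# The `np.where` blend above can be verified on a tiny synthetic image — hand-made arrays, no OpenCV needed:

```python
import numpy as np

# 2x2 RGB "image" and a uniformly "blurred" copy (hypothetical values).
image = np.array([[[10, 10, 10], [20, 20, 20]],
                  [[30, 30, 30], [40, 40, 40]]], dtype=np.uint8)
blurred = np.full_like(image, 99)

# The mask marks contour pixels in white, as drawContours does above.
mask = np.zeros_like(image)
mask[0, 1] = [255, 255, 255]

# Where the mask is white, take the blurred pixel; elsewhere the original.
output = np.where(mask == np.array([255, 255, 255]), blurred, image)
print(output[0, 1], output[0, 0])
```

Only the masked pixel is replaced by its blurred counterpart; every other pixel keeps its original value.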
Solver/Jupyter/Silhouette.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .sh # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Bash # language: bash # name: bash # --- # ### Problem 1 # I did create the jupyter notebook answer file at the same root so I could easily get the txt file. # # ### Problem 2 # 1. Create a directory listing command that shows: # # the ownership of the file # it's file-size, in Megabytes (i.e. human readable) # # 2. Then: # # in words (in a Markdown box), describe the permissions on that file (read/write/execute) for users, groups, and "anyone" # # ##### Answer: # We use the command ls -l to find out who owns a file or what group it belongs to. The second and third columns hold the owner and the group name (in this case, both are osboxes). # All files are readable and writable for users and groups because they have -r and -w. "Anyone" can only read these files. However, "directorio" can be read, written, and executed because it has -r, -w, -x. # > -h: it means human readable. # mkdir directorio ls -hl # # ### Problem 3 # Create a command that outputs only the "header" line of Locus_Germplasm_Phenotype_20130122.txt # # #### Answer: # The "head -n" command (n: specifies the number of lines from the beginning to be displayed) displays the first part of a file. After the command we write the name of the file whose header line we want to output. # head -1 "Locus_Germplasm_Phenotype_20130122.txt" # ### Problem 4 # Create a command that outputs the total number of lines in Locus_Germplasm_Phenotype_20130122.txt # # #### Answer: # We use the "wc -l" command to count lines. To count words, the command is "wc -w"; to count characters, it is "wc -c". After the command we write the name of the file that we are interested in. wc -l Locus_Germplasm_Phenotype_20130122.txt # ### Problem 5 # - Create a command that writes ONLY the data lines (i.e. excludes the header!)
to a new file called "Data_Only.csv" # - prove that your output file has the expected number of lines # # #### Answer: # - We can use the "tail" command to output the last part of a file. By writing "-n +2", we output all lines excluding the header; by default, tail returns the last ten lines of each file it is given if we do not specify otherwise. # " > directorio/Data_Only.csv" with this, we create a new file in our directorio directory. # - "wc -l": count all lines of this file. tail -n +2 Locus_Germplasm_Phenotype_20130122.txt > directorio/Data_Only.csv wc -l directorio/Data_Only.csv # ### Problem 6 # Create a command that shows all of the lines that have a phenotype including the word "root". # # #### Answer: # "grep" is a command that we use to search text. After the command we write the text or word that we want to search for and then the specific file. # In this case, we search for the word "root" and words that contain this part. grep root Locus_Germplasm_Phenotype_20130122.txt # ### Problem 7 # Create a command that writes the AGI Locus Code for every line that has a phenotype including the word "root" to a file called: Root-associated-Loci.txt # # #### Answer: # In the first part, we search for all "root" words and words which contain "root". # "awk -F": we pipe the output to the awk command. # "\t" is for tab. # "{print $1}" we specify the first column. # # "> directorio/Root-associated-Loci.txt": we create a file with this name in the directorio directory. grep root Locus_Germplasm_Phenotype_20130122.txt | awk -F '\t' '{print $1}' grep root Locus_Germplasm_Phenotype_20130122.txt | awk -F '\t' '{print $1}' > directorio/Root-associated-Loci.txt cat directorio/Root-associated-Loci.txt # ### Problem 8 # Create a command that writes the PubMed ID for every line that has a phenotype including the word "root" to a file called: Root-associated-Publications.txt # # #### Answer: # We follow the same steps as in Problem 7, only changing some columns and file names.
grep root Locus_Germplasm_Phenotype_20130122.txt | awk -F '\t' '{print $4}' grep root Locus_Germplasm_Phenotype_20130122.txt | awk -F '\t' '{print $4}' > directorio/Root-associated-Publications.txt # ### Problem 9 # Control experiment: You would hypothesize that genes associated with roots should be found on all chromosomes. Find a way (one or more commands) to test this hypothesis. In this dataset, is the hypothesis true? # # '{print substr($1,3,1)}' | sort | uniq: # "sort": orders the lines # "uniq": keeps each unique line only once # '{print substr($1,3,1)}': from the first column, print the third character (the chromosome number), so each chromosome appears only once after sort | uniq. # # So, the hypothesis is true. All chromosomes have genes associated with roots. grep root Locus_Germplasm_Phenotype_20130122.txt | awk -F '\t' '{print substr($1,3,1)}' | sort | uniq # ### Problem 10 # # If your control experiment shows genes on every chromosome, then you can skip this question! (you answered Problem 9 correctly!) # # If your control experiment shows genes only on one or two chromosomes, then you have to explain why... what could the problem be? (I told you specifically to be careful about this problem!) # # ### Problem 11 # 'git commit' and 'git push' your answers to your GitHub, then give me your GitHub username before you leave the class. I will clone your repositories and grade your answers. #
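# The Problem 9 pipeline can be checked on a tiny made-up sample (hypothetical AGI locus codes, not the real dataset):

```shell
# Three hypothetical tab-separated lines standing in for the real file;
# the third character of each locus code is the chromosome number.
printf 'AT1G01010\tx\nAT2G02020\tx\nAT1G03030\tx\n' \
  | awk -F '\t' '{print substr($1,3,1)}' | sort | uniq
```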
Exam 1 Anwers.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: learn-env # language: python # name: learn-env # --- import pandas as pd import json from functions import remove_brackets, remove_quotations1, remove_quotations2 # + #with open('data/chamorro_dict2.json') as f: # data = json.loads("[" + # f.read().replace("}\n{", "},\n{") + # "]") # print(data) # - df_1 = pd.read_json('data/chamorro_dict.json', encoding="utf-8", lines=True) df_2 = pd.read_json('data/chamorro_dict2.json', encoding="utf-8", lines=True) df = pd.concat([df_1, df_2], ignore_index=True, sort=True) df = df[['word', 'definition']] df.head() df.info() df.isna().sum() df = df.dropna() df.info() df.head() df['word'] = df['word'].map(remove_brackets) df['definition'] = df['definition'].map(remove_brackets) df['word'] = df['word'].map(remove_quotations1) df['definition'] = df['definition'].map(remove_quotations1) df['word'] = df['word'].map(remove_quotations2) df['definition'] = df['definition'].map(remove_quotations2) df.head() # + #df.to_excel('chamorro_dict_cleaned.xlsx') #df.to_csv('data/complete_dict.csv', index=False) # -
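# The cleaning helpers imported from `functions` are not shown in this notebook. A plausible sketch of what they might look like (hypothetical implementations — the project's actual `remove_brackets`, `remove_quotations1` and `remove_quotations2` may differ):

```python
import re

def remove_brackets(text):
    # Strip leading/trailing square brackets left over from the raw JSON.
    return re.sub(r"^\[|\]$", "", text).strip()

def remove_quotations1(text):
    # Drop stray single quotes at either end of an entry.
    return text.strip("'").strip()

def remove_quotations2(text):
    # Drop stray double quotes at either end of an entry.
    return text.strip('"').strip()

cleaned = remove_quotations2(remove_quotations1(remove_brackets("['halom tano']")))
print(cleaned)
```

Applying them in the same order as the notebook (brackets first, then quotes) peels the wrapping off layer by layer.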
chamorro_dict_eda.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import cv2 import numpy as np from matplotlib import pyplot as plt img = cv2.imread('sudoku.png', 0) lap = cv2.Laplacian(img, cv2.CV_64F, ksize = 3) lap = np.uint8(np.absolute(lap)) sobelX = cv2.Sobel(img, cv2.CV_64F, 1, 0) sobelY = cv2.Sobel(img, cv2.CV_64F, 0, 1) sobelX = np.uint8(np.absolute(sobelX)) sobelY = np.uint8(np.absolute(sobelY)) sobelCombined = cv2.bitwise_or(sobelX, sobelY) titles = ['image', 'Laplacian', 'sobelX', 'sobelY', 'sobelCombined'] images = [img, lap, sobelX, sobelY, sobelCombined] for i in range(5): plt.subplot(1, 5, i+1), plt.imshow(images[i], 'gray') plt.title(titles[i]) plt.xticks([]), plt.yticks([]) plt.show() # -
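# What `cv2.Sobel` computes in the x direction can be sketched as a plain 3x3 correlation — a minimal NumPy version on a toy image (no border padding, unlike OpenCV's default reflected border):

```python
import numpy as np

# Standard 3x3 Sobel kernel for horizontal gradients.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)

def convolve_valid(img, kernel):
    """Correlate `kernel` over `img`, keeping only fully covered positions."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: a vertical step edge from 0 to 10.
img = np.array([[0, 0, 10, 10]] * 4, dtype=np.float64)
print(convolve_valid(img, sobel_x))
```

The response is large at the step edge, which is why taking the absolute value and casting to uint8 (as above) gives a visible edge map.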
opencv/image gradiant and edge detection.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.0.5 # language: julia # name: julia-1.0 # --- # # Analytical estimation using Analytical, CSV, DataFrames, BenchmarkTools, Plots # **Setting up model. Required to open SFS** Analytical.changeParameters(gam_neg=-83,gL=10,gH=500,alLow=0.2,alTot=0.2,theta_f=1e-3,theta_mid_neutral=1e-3,al=0.184,be=0.000402,B=0.999,bRange=append!(collect(0.2:0.05:0.95),0.999),pposL=0.001,pposH=0,N=1000,n=661,Lf=10^6,rho=0.001,TE=5.0,convoluteBinomial=true) # **Opening files** # + path= "/home/jmurga/mktest/data/";suffix="txt"; files = path .* filter(x -> occursin(suffix,x), readdir(path)) empiricalValues = Analytical.parseSfs(data=files[1],output="testData.tsv",sfsColumns=[3,5],divColumns=[6,7]) # + function summStats(iter,data) for i in 1:iter # for j in adap.bRange for j in [0.999] Analytical.changeParameters(gam_neg=-rand(80:200),gL=rand(10:20),gH=rand(100:500),alLow=rand(collect(0.1:0.1:0.4)),alTot=rand(collect(0.1:0.1:0.4)),theta_f=1e-3,theta_mid_neutral=1e-3,al=0.184,be=0.000402,B=j,bRange=append!(collect(0.2:0.05:0.95),0.999),pposL=0.001,pposH=0,N=1000,n=661,Lf=10^6,rho=0.001,TE=5.0,convoluteBinomial=false) Analytical.set_theta_f() theta_f = adap.theta_f adap.B = 0.999 Analytical.set_theta_f() Analytical.setPpos() adap.theta_f = theta_f adap.B = j x,y,z= Analytical.alphaByFrequencies(adap.gL,adap.gH,adap.pposL,adap.pposH,data,"both") CSV.write("/home/jmurga/prior.csv", DataFrame(z), delim='\t', append=true) end end end # - @btime summStats(1,empiricalValues) x,y,z= Analytical.alphaByFrequencies(adap.gL,adap.gH,adap.pposL,adap.pposH,empiricalValues,"both") using LaTeXStrings # + Plots.gr() Plots.theme(:wong2) Plots.plot(collect(1:size(x,1)),hcat(x,y), legend = :topright, xlabel = "Derived Alleles Counts", ylabel = L"$\alpha$", label = [L"$\alpha$ accounting positive alleles" L"$\alpha$ non accounting
positive alleles"], lw = 0.5) # - # # ABC using CSV, GZip, StatsPlots, Plots using Plots.PlotMeasures files = filter(x -> occursin("vipOut",x), readdir("/home/jmurga/")) openFiles(f) = convert(Matrix,CSV.read(GZip.open("/home/jmurga/"*"/"*f),header=false)) posteriors = files .|> openFiles using LaTeXStrings # + Plots.gr() Plots.theme(:wong2) p1 = StatsPlots.density(posteriors[1][:,[5,6,7]],legend = :topright, fill=(0, 0.3),xlabel = L"\alpha",label = [L"\alpha_S" L"\alpha_W" L"\alpha"],ylabel = "Posterior density", lw = 0.5,fmt = :svg,bottom_margin=10mm,left_margin=10mm,size=(800,600));p1 # -
scripts/notebooks/results.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + pycharm={"name": "#%%\n"} from get_data import get_commits, get_tags import pandas as pd import os # + pycharm={"name": "#%%\n"} def process_generator(func, year): try: token = os.environ["PRIVATE_TOKEN"] except KeyError: token = input("Private Token: ") gen = func(year, token) data = [] for i in gen: data.append(i) return data # + pycharm={"name": "#%%\n"} commits = process_generator(get_commits, 2021) tags = process_generator(get_tags, 2021) # + pycharm={"name": "#%%\n"} df = pd.json_normalize(commits) df_tags = pd.json_normalize(tags) # + pycharm={"name": "#%%\n"} df["created_at"] = pd.to_datetime(df['created_at'], utc=True) df_tags["commit.created_at"] = pd.to_datetime(df_tags["commit.created_at"], utc=True) # + pycharm={"name": "#%%\n"} df_tags = df_tags[df_tags.name.str.contains("testing").apply(lambda x: not x)] # + pycharm={"name": "#%%\n"} new_df = df.resample("W", on="created_at").count()["id"].to_frame() # + pycharm={"name": "#%%\n"} import matplotlib.pyplot as plt import matplotlib.dates as mdates import numpy as np plt.style.use('seaborn-dark') fig, ax = plt.subplots(figsize=(15, 9)) length = len(df_tags) color = plt.cm.twilight(np.linspace(0, 1, length)) ax.plot(new_df, linewidth=2, color="#3182bd") dates = df_tags["commit.created_at"] names = df_tags["name"] top_offset = 0.2 * max(new_df.id) timeline_baseline_y = max(new_df.id) + top_offset levels = np.tile([timeline_baseline_y for i in dates] + np.resize( [-top_offset, top_offset, -0.15 * max(new_df.id), 0.15 * max(new_df.id), -0.1 * max(new_df.id), 0.1 * max(new_df.id)], len(dates)), int(np.ceil(len(dates) / 6)))[:len(dates)] # v-lines from timeline ax.vlines(dates, timeline_baseline_y, levels, linestyle='-', color='#A40C04', linewidth=2, alpha=0.5) # Baseline and markers on 
it. ax.axhline(y=timeline_baseline_y, color="black", alpha=0.8) ax.plot(dates, [timeline_baseline_y for i in dates], "o", color="k", markerfacecolor="w") # annotate lines for d, l, r in zip(dates, levels, names): ax.annotate(r, xy=(d, l), xytext=(-3, np.sign(l) * 3), textcoords="offset points", horizontalalignment="right", verticalalignment="bottom" if l > 0 else "top") x1, x2, y1, y2 = plt.axis() # add space on y-ax for longest offset upwards plt.axis((x1, x2, y1, y2 + top_offset)) # add dotted v-lines for all dates ax.vlines(dates, 0, [timeline_baseline_y for i in dates], linestyle='--', color='black', linewidth=1, alpha=0.15) ax.set_title("Repository Activity") ax.set_ylabel('Sum of Weekly Commits') ax.set_xlabel('Date') ax.grid(axis='y', linestyle='dashed', color='black', alpha=0.10) ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1)) ax.tick_params(direction="inout", length=4) plt.yticks(np.arange(0, max(new_df.id) + 5, 5)) plt.savefig('images/reposority_activity.png') # + pycharm={"name": "#%%\n"} df_test = df["author_name"].value_counts().value_counts().sort_values( ascending=False).to_frame().reset_index().sort_values("index") df_test.index = df_test["index"] df_test = df_test["index"] * df_test["author_name"] df_test = df_test.to_frame() df_test.columns = ["data"] # + pycharm={"name": "#%%\n"} larger = df_test["data"][df_test.index >= 23].sum() smaller = df_test["data"][df_test.index < 23].sum() # + pycharm={"name": "#%%\n"} fig, ax = plt.subplots(figsize=(15, 9), subplot_kw=dict(aspect="equal")) ax.axis('equal') width = 0.3 w, l = plt.pie([smaller, larger], wedgeprops=dict(width=0.5)) plt.setp(w, width=width, edgecolor='white', radius=1) w2, l = plt.pie(df_test["data"], wedgeprops=dict(width=0.5), labels=df_test.index, labeldistance=0.65) for t in l: t.set_horizontalalignment('center') plt.setp(w2, width=0.6 - width, edgecolor='white',radius=1-width) bbox_props = dict(boxstyle="square,pad=0.4", fc="w", ec="k", lw=0.92) kw = 
dict(arrowprops=dict(arrowstyle="-"), bbox=bbox_props, zorder=0, va="center") for i, p in enumerate(w): percentage = [smaller, larger][i] / df_test.data.sum() percentage = str(round(percentage * 100, 2)) + "%" if df_test.index[i] == 1: text = f'people with less than 23 commits made {percentage} of overall commits' else: text = f'people with 23 or more commits made {percentage} of overall commits' ang = (p.theta2 - p.theta1) / 2. + p.theta1 y = np.sin(np.deg2rad(ang)) x = np.cos(np.deg2rad(ang)) horizontalalignment = {-1: "right", 1: "left"}[int(np.sign(x))] connectionstyle = "angle,angleA=0,angleB={}".format(ang) kw["arrowprops"].update({"connectionstyle": connectionstyle}) ax.annotate(text, xy=(x, y), xytext=(1 * np.sign(x), 1.2 * y), horizontalalignment=horizontalalignment, **kw) ax.set_title("Stake of Overall Commits for People Sharing a Unique Amount of Commits") plt.savefig('images/stake_of_commits.png')
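# The weekly commit count built with `resample("W")` above can be illustrated on a toy frame with hypothetical dates:

```python
import pandas as pd

# Hypothetical commit timestamps with an id per commit.
df = pd.DataFrame({
    "id": range(5),
    "created_at": pd.to_datetime([
        "2021-01-04", "2021-01-05", "2021-01-06",  # same ISO week
        "2021-01-12", "2021-01-13",                # next week
    ], utc=True),
})

# Count commits per week; bins are labeled by the Sunday that ends them,
# just like the resample("W", on="created_at") call above.
weekly = df.resample("W", on="created_at").count()["id"].to_frame()
print(weekly)
```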
make_graphs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nbgrader={"grade": false, "locked": false, "solution": false} # <font size="4" style="color:red;"> **IMPORTANT: ** When submitting this homework notebook, please modify only the cells that start with:</font> # # ```python # # modify this cell # ``` # # + [markdown] nbgrader={"grade": false, "locked": false, "solution": false} # # Different Dice # # # So far we mostly considered standard 6-faced dice. The following problems explore dice with different number of faces. # + nbgrader={"grade": false, "locked": false, "solution": false} import numpy as np # %pylab inline # + [markdown] nbgrader={"grade": false, "locked": false, "solution": false} # ## Problem 1 # + [markdown] nbgrader={"grade": false, "locked": false, "solution": false} # Suppose that a 6-sided die is rolled $n$ times. Let $X_i$ be the value of the top face at the $i$th roll, # and let $X\triangleq\max_{1\le i\le n} X_i$ be the highest value observed. For example, if $n=3$ and the three # rolls are 4, 1, and 4, then $X_1=4, X_2=1, X_3=4$ and $X=4$. # # To find the distribution of $X$, observe first that $X\le x$ iff $X_i\le x$ for all $1\le i\le n$, # hence $P(X\le x)=(x/6)^n$. It follows that $P(X=x)=P(X\le x)-P(X\le x-1)=(x/6)^n-((x-1)/6)^n$. # For example, $P(X=1)=(1/6)^n$, and $P(X=2)=(1/3)^n-(1/6)^n$. # # In this problem we assume that each of the $n$ dice has a potentially different number of faces, denoted $f_i$, # and ask you to write a function **largest_face** that determines the probability $P(x)$ that the highest top face observed is $x$. **largest_face** takes a vector $\boldsymbol f$ of positive integers, interpreted as the number of faces of each of the dice, and a value $x$ and returns $P(x)$. 
For example, if $\boldsymbol f=[2, 5, 7]$, then three dice are rolled, and $P(1)=(1/2)\cdot(1/5)\cdot(1/7)$ as all dice must be 1, while $P(7)=1/7$ as the third die must turn up 7. # # <font style="color:blue">* **Sample run** *</font> # ```python # print largest_face([2,5,8],8) # print largest_face([2], 1) # largest_face( [3,4], 2) # print largest_face([2, 5, 7, 3], 3) # ``` # # # <font style="color:magenta">* **Expected Output** *</font> # ``` # 0.125 # 0.5 # 0.25 # 0.180952380952 # ``` # - def largest_face(f, x_max): # inputs: f is a list of integers (faces per die) and x_max is an integer # output: a variable of type 'float' p_x=1.0 p_xl=1.0 x=x_max xl=x_max-1 for i in f: if x<i: p_x=p_x*x/i if xl<i: p_xl=p_xl*xl/i return p_x-p_xl # + nbgrader={"grade": true, "grade_id": "ex_1", "locked": true, "points": "5", "solution": false} # Check Function assert abs( largest_face([5],3) - 0.19999999999999996 ) < 10**-5 assert abs( largest_face( [11,5,4], 5) - 0.16363636363636364 ) < 10**-5 assert abs( largest_face(range(1,10), 3) - 0.011348104056437391 ) < 10**-5 # # AUTOGRADER TEST - DO NOT REMOVE # # + [markdown] nbgrader={"grade": false, "locked": false, "solution": false} # ## Problem 2 # + [markdown] nbgrader={"grade": false, "locked": false, "solution": false} # Write a function **face_sum** that takes a vector $\boldsymbol f$ that as in the previous problem represents the number of faces of each die, and a positive integer $s$, and returns the probability that the sum of the top faces observed is $s$. For example, if $\boldsymbol f=[3, 4, 5]$ and $s\le 2$ or $s\ge 13$, **face_sum** returns 0, and if $s=3$ or $s=12$, it returns $(1/3)\cdot(1/4)\cdot(1/5)=1/60$. # # Hint: The **constrained-composition** function you wrote for an earlier problem may prove handy.
# # <font style="color:blue"> * **Sample run** *</font> # ```python # print face_sum([3, 4, 5], 13) # print face_sum([2,2],3) # print face_sum([3, 4, 5], 7) # ``` # # # <font style="color:magenta"> * **Expected Output** *</font> # ``` # 0.0 # 0.5 # 0.18333333 # ``` # + [markdown] nbgrader={"grade": false, "locked": false, "solution": false} # ### Helper Code # # Below is a correct implementation of **constrained_composition**. Call this function in your implementation of **face_sum**. # + nbgrader={"grade": false, "locked": false, "solution": false} def constrained_compositions(n, m): # inputs: n is of type 'int' and m is a list of integers # output: a set of tuples k = len(m) parts = set() if k == n: if 1 <= min(m): parts.add((1,)*n) if k == 1: if n <= m[0]: parts.add((n,)) else: for x in range(1, min(n-k+2,m[0]+1)): for y in constrained_compositions(n-x, m[1:]): parts.add((x,)+y) return parts # + [markdown] nbgrader={"grade": false, "locked": false, "solution": false} # ### exercise: # + def face_sum(m, s): # inputs: m is list of integers and s is an integer # output: a variable of type 'float' # # YOUR CODE HERE # n=len(constrained_compositions(s,m)) t=1 for i in m: t=t*i return n/t # + nbgrader={"grade": true, "grade_id": "ex_2", "locked": true, "points": "5", "solution": false} # Check Function assert abs( face_sum([2,2],2) - 0.25 ) < 10**-5 assert abs( face_sum([2,2],10) - 0.0 ) < 10**-5 assert abs( face_sum(range(1,10),20) - 0.03037092151675485 ) < 10**-5 # # AUTOGRADER TEST - DO NOT REMOVE # # + nbgrader={"grade": false, "locked": false, "solution": false}
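# Both largest_face and face_sum can be cross-checked by brute-force enumeration over all outcomes — a simple independent verification of the closed-form answers:

```python
from itertools import product

def largest_face_bf(f, x):
    """P(max top face == x), by enumerating every outcome."""
    outcomes = list(product(*[range(1, faces + 1) for faces in f]))
    return sum(1 for r in outcomes if max(r) == x) / len(outcomes)

def face_sum_bf(f, s):
    """P(sum of top faces == s), by enumerating every outcome."""
    outcomes = list(product(*[range(1, faces + 1) for faces in f]))
    return sum(1 for r in outcomes if sum(r) == s) / len(outcomes)

print(largest_face_bf([2, 5, 8], 8))  # matches 0.125 above
print(face_sum_bf([3, 4, 5], 7))
```

Enumeration is exponential in the number of dice, so it is only a sanity check, not a replacement for the analytic formulas.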
Assignment/dice_HW.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import pandas as pd """ What does this do? After downloading issue annotations from GRC (types: gaps, unknown, variation), clean up annotation and normalize output format for downstream processing (compute assembly coverage in issue regions etc.) Input/output Input is manual download, file should be placed under "annotation/grch38/issues" Output is then clean BED file "20200723_GRCh38_p13_unresolved-issues.bed" """ path = '/home/local/work/code/github/project-diploid-assembly/annotation/grch38/issues' names = ['grch38_p13_gaps.tsv', 'grch38_p13_unknown.tsv', 'grch38_p13_variation.tsv'] header = [ 'Issue_ID', 'Issue_Type', 'Issue_Location', 'Issue_TotalPlacements', 'Issue_Status', 'Issue_FixVersion', 'Issue_GenomeBrowsers', 'Issue_Summary' ] def parse_coordinates(entry): chrom, coords = entry.split(':') start, end = coords.split('-') start = int(start.replace(',', '')) end = int(end.replace(',', '')) return chrom, start, end all_issues = [] for n in names: file_path = os.path.join(path, n) df = pd.read_csv(file_path, sep='\t', names=header) df.drop('Issue_GenomeBrowsers', axis=1, inplace=True) # drop everything that is not located in hg38 df = df.loc[~df['Issue_Location'].isna(), :].copy() # drop everything where a assembly version with a fix # is already indicated df = df.loc[df['Issue_FixVersion'].isna(), :].copy() df.reset_index(drop=True, inplace=True) df['Issue_FixVersion'] = 'not_indicated' df['Issue_Status'] = df['Issue_Status'].str.replace(' ', '_') df['Issue_Summary'] = '[' + df['Issue_Summary'] + ']' coords = df['Issue_Location'].map(parse_coordinates) coords = pd.DataFrame.from_records(coords, columns=['chrom', 'start', 'end']) coords.reset_index(drop=True, inplace=True) df = pd.concat([df, coords], axis=1, join='outer') 
df['Issue_TotalPlacements'] = df['Issue_TotalPlacements'].astype('int64') df.drop('Issue_Location', axis=1, inplace=True) df['start'] = df['start'].astype('int64') # convert to 0-based for BED output df['end'] = df['end'].astype('int64') + 1 df['name'] = df['Issue_Type'] + '_' + df['Issue_ID'] df['score'] = 1000 df['strand'] = '.' all_issues.append(df) all_issues = pd.concat(all_issues, axis=0) all_issues.sort_values(['chrom', 'start', 'end'], inplace=True) sort_order = [ 'chrom', 'start', 'end', 'name', 'score', 'strand', 'Issue_ID', 'Issue_Type', 'Issue_Status', 'Issue_FixVersion', 'Issue_TotalPlacements', 'Issue_Summary' ] all_issues = all_issues[sort_order] out_path = '/home/local/work/code/github/project-diploid-assembly/annotation/grch38' # dump as BED file for intersect operations outfile = os.path.join(out_path, '20200723_GRCh38_p13_unresolved-issues.bed') with open(outfile, 'w') as dump: _ = dump.write('#') all_issues.to_csv(dump, sep='\t', index=False, header=True)
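# The `parse_coordinates` helper above can be exercised in isolation on a made-up GRC-style location string:

```python
def parse_coordinates(entry):
    # Split "chrom:start-end" and drop thousands separators,
    # as in the helper defined above.
    chrom, coords = entry.split(':')
    start, end = coords.split('-')
    start = int(start.replace(',', ''))
    end = int(end.replace(',', ''))
    return chrom, start, end

print(parse_coordinates('chr1:1,234,567-1,240,000'))
```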
notebooks/2020_project/processing/clean_issue_annotation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="svIH6khYKoOe" outputId="20d3184c-6763-4a66-df31-afab74ab037a" # Download the YOLOF code # !git clone https://github.com/tiaoteek/YOLOFtest # + id="yqrQqUorkXyN" colab={"base_uri": "https://localhost:8080/"} outputId="08446685-18fc-4d30-ec11-85e28f6e8756" # Install the required dependency modules # !pip install lvis # + id="zRlMTZQkKnkO" colab={"base_uri": "https://localhost:8080/"} outputId="909aa9bd-58f9-4ff4-861d-3578cd466f40" # Install YOLOF # !pwd # %cd /content/YOLOFtest # !python setup.py develop # + colab={"base_uri": "https://localhost:8080/", "height": 49, "referenced_widgets": ["c6a5fbc3e74e478ba4ac76e016d21b36", "b34576bfa5994cd6a27479924420be7d", "f4d47260b5e747cb99eafe7be31f9d10", "90f0e75a4d8d4354b7e8139b5fdda4f4", "7f2b087ca84a48fb84d2e8a9b1057adf", "1604e0df3fc944a0be9b57bc44674eca", "5a16060001064678bd2fe3a597c768fa", "abe9c68bcf9a4e5e8a6125ab8142d727", "eef6858d320347b7aaca60e12d6d81ec", "<KEY>", "a472280e9ab4447f81cf9a6432d22e3b"]} id="MKv6GkSgnOo7" outputId="7090b739-91b5-454b-e421-428eaa1817fa" # Download COCO val # Download the COCO dataset import torch torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017val.zip', 'tmp.zip') # !unzip -q tmp.zip -d ../datasets && rm tmp.zip # + id="Eag2U7B9poZX" colab={"base_uri": "https://localhost:8080/"} outputId="109123cb-42cb-4986-c4cb-5cf5b64d983b" # Optional: install mish-cuda # !git clone https://github.com/thomasbrandon/mish-cuda # %cd mish-cuda # !python setup.py build install # %cd ..
# + id="_2lT64Xm0Uct" colab={"base_uri": "https://localhost:8080/"} outputId="530703de-bb7e-4db6-a8e6-b2f12d27084c" # Install the required detectron2 # !pwd # !python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.10/index.html # + id="ZtjvcFbwqXu6" colab={"base_uri": "https://localhost:8080/"} outputId="63421631-1b0b-459b-f8f3-9b9a1a9fd8f0" # Create the data folder # then link the dataset path to the dataset # !mkdir datasets # %cd datasets/ # !ln -s /content/datasets/coco coco # + id="876oE5HrNMfr" colab={"base_uri": "https://localhost:8080/"} outputId="1a5efc3c-6c44-45a6-d3d1-f33384c9f114" # Download the model weights # !mkdir pretrained_models # !mkdir out_weights # download the `cspdarknet53.pth` to the `pretrained_models` directory # !wget -O cspdarknet53.pth https://uowleq.bn.files.1drv.com/y4myGUHHnw61kQH3ua8umkOOIXc_qcciBDsuy-EVrGpA6mtu0TWPdCgUOFCqxAlbS5fjsGl9uVcB9wvfHJGCzo78aqzg4eS2ipSQzvelApzbrbuBPVZd5EIKgkcLWQSiivwo7AdGshvNOOqRSrKC-eo8nyUuVBokTcIPRpGHEzARWylc-pOPY4eGuidZiOZq4eSfCTaOKHZnJkxFnDC3nwXOA # !wget -O YOLOF_CSP_D_53_DC5_9x.pth https://upzqqa.bn.files.1drv.com/y4mwk1QklsZktMCS8X9bmTcxB8cqyMbouGOKV_8fjN0-<KEY> # !mv /content/YOLOFtest/YOLOF_CSP_D_53_DC5_9x.pth /content/YOLOFtest/out_weights/ # !mv /content/YOLOFtest/cspdarknet53.pth /content/YOLOFtest/pretrained_models/ # + colab={"base_uri": "https://localhost:8080/"} id="VH2eXO1rfgZw" outputId="3e8ce4eb-0dfa-4c58-e056-1956306dc677" # %cd /content/YOLOFtest # !python ./tools/train_net.py --num-gpus 1 --config-file ./configs/yolof_CSP_D_53_DC5_9x.yaml --eval-only MODEL.WEIGHTS /content/YOLOFtest/out_weights/YOLOF_CSP_D_53_DC5_9x.pth # + id="Cp7SbbZpctLe" outputId="47f5d55d-e0d7-40e1-f69d-06052df70982" colab={"base_uri": "https://localhost:8080/"} # %cd /content/YOLOFtest # !zip -r output.zip /content/YOLOFtest/output
YOLOF.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import torch import torch.nn as nn import torch.optim as optim import torchvision import torchvision.transforms as transforms from tqdm.notebook import trange, tqdm # - # ## model class LeNet(nn.Module): def __init__(self, in_channels = 1, out_channels = 10): super(LeNet, self).__init__() self.model = nn.Sequential( nn.Conv2d(in_channels, 6, 5), nn.MaxPool2d(2,2), nn.ReLU(), nn.Conv2d(6, 16, 5), nn.MaxPool2d(2,2), nn.ReLU(), nn.Conv2d(16, 120, 5), nn.ReLU(), nn.Flatten(), nn.Linear(120, out_channels) ) def forward(self, x): return self.model(x) transform = transforms.Compose([ transforms.Resize((32,32)), transforms.ToTensor() ]) dataset_train = torchvision.datasets.MNIST(root = '.', train = True, download = True, transform = transform) dataset_test = torchvision.datasets.MNIST(root = '.', train = False, download = True, transform = transform) dataloader_train = torch.utils.data.DataLoader(dataset_train, batch_size = 256, shuffle = True, num_workers = 8) dataloader_test = torch.utils.data.DataLoader(dataset_test, batch_size = 256, shuffle = True, num_workers = 8) model = LeNet(1, 10) optimizer = optim.Adam(model.parameters(), lr = 1e-3) loss_func = nn.CrossEntropyLoss() def train(model, epoch_num, dataloader_train, dataloader_test, optimizer, loss_func, use_cuda = True): device = torch.device('cuda:0' if use_cuda else 'cpu') model.to(device) epoch_iter = trange(epoch_num) for epoch in epoch_iter: data_iter = tqdm(dataloader_train) model.train() loss_sum = 0 for x,y in data_iter: x = x.to(device) y = y.to(device) y_ = model(x) loss = loss_func(y_,y) optimizer.zero_grad() loss.backward() optimizer.step() loss_sum += float(loss) data_iter.set_postfix(loss = float(loss_sum)) model.eval() test_iter = iter(dataloader_test) acc = 0 total = 0 for x,y in 
test_iter: x = x.to(device) y = y.to(device) y_ = model(x) acc += int((y_.argmax(-1) == y).sum()) total += len(y) print('Accuracy: {acc:.2f}%'.format(acc = acc/total*100)) return model # + tags=[] model = train(model, 30, dataloader_train, dataloader_test, optimizer, loss_func) # -
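The accuracy bookkeeping in the evaluation loop above reduces to comparing arg-max predictions against labels. A minimal NumPy sketch of the same computation (the arrays below are made-up stand-ins for a batch of `y_` and `y`):

```python
import numpy as np

def batch_accuracy(logits, labels):
    """Fraction of rows whose arg-max class matches the label."""
    preds = logits.argmax(axis=-1)
    return float((preds == labels).mean())

# Made-up stand-ins for model outputs and targets
logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
labels = np.array([1, 0, 0, 0])
print(batch_accuracy(logits, labels))  # 0.75
```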
ArtificialIntelligence/LeNet/LeNet.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from selenium import webdriver import re import time import requests from datetime import datetime from datetime import timedelta options = webdriver.ChromeOptions() options.add_argument('--ignore-certificate-errors') options.add_argument('--incognito') options.add_argument('--headless') # your location of chromedriver.exe # the version of chromedriver should be same as of browser driver = webdriver.Chrome(r"C:\Users\Lenovo\Downloads\A\chromedriver.exe", options=options) url1 = 'https://www.dream11.com/leagues' driver.get(url1) # Take a second to load javascript time.sleep(2) content = driver.page_source # - # #### Name of tournament tournament_pattern = r'(matchCardHeaderTitle_c5373 matchCardHeaderTitleDesktop_a2024">)(\w*-*\'*\s*\w*-*\'*\s*\w*-*\'*\s*\w*-*\'*\s*\w*-*\'*\s*\w*-*\w*\'*\s*\w*)' tournament_data = re.findall(tournament_pattern,content) # #### Match Name lteam_pattern = r'(<div class="squadShortName_a116b squadShortNameLeft_db179">)([A-Z]*-*[A-Z])' lteam_data = re.findall(lteam_pattern,content) rteam_pattern = r'(<div class="squadShortName_a116b squadShortNameRight_42ab0">)([A-Z]*-*[A-Z*])' rteam_data = re.findall(rteam_pattern,content) match_name = list() for i in range(len(tournament_data)): match_name.append(lteam_data[i][1] + ' vs ' + rteam_data[i][1]) # #### Match Time Remaining time_pattern = r'(matchCardTimerDesktop_48a55"><div>)(\w*\s*\w*\s*\w*)' time_data = re.findall(time_pattern,content) # #### Match Start Time days = 0 hours = 0 minutes = 0 dates = list() for i in range(len(tournament_data)): if (time_data[i][1].rfind('d')) != -1: days = time_data[i][1][0] start_date = datetime.now() + timedelta(days=int(days)) dates.append(start_date) elif (time_data[i][1].rfind('h')) != -1: hours = int(time_data[i][1].split('h')[0]) 
start_date = datetime.now() + timedelta(hours=int(hours)) if (time_data[i][1].rfind('m')) != -1: minutes = int(time_data[i][1].split('m')[0].split(' ')[1]) start_date = start_date + timedelta(minutes=int(minutes)) dates.append(start_date) elif (time_data[i][1].rfind('m')) != -1: minutes = int(time_data[i][1].split('m')[0]) start_date = datetime.now() + timedelta(minutes=int(minutes)) dates.append(start_date) # #### Match Status match_status = list() for i in range(len(tournament_data)): if datetime.now() > dates[i]: match_status.append("Already Started") else: match_status.append("Not Started") # #### Building Final Data # + data = list() data_dict = dict() for i in range(len(tournament_data)): data.append({'tournament_name':tournament_data[i][1], 'match_name':match_name[i], 'match_status':match_status[i], 'start_time':str(dates[i])}) data_dict = {'tournament_data':data} # - # #### Parsing to json import json import csv dream11_json_data = json.dumps(data_dict) data = json.loads(dream11_json_data) dream11_data = data['tournament_data'] # #### Saving it as CSV file # + # newline='' avoids blank lines in the CSV on Windows data_file = open('data_file.csv', 'w', newline='') csv_writer = csv.writer(data_file) # write the header: the field names of one match record header = dream11_data[0].keys() csv_writer.writerow(header) for match in dream11_data: # Writing data rows of the CSV file csv_writer.writerow(match.values()) data_file.close() # - # #### Displaying csv file with open('data_file.csv') as csvfile: spamreader = csv.reader(csvfile) for row in spamreader: print(', '.join(row))
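The branching above that turns countdown strings such as `'2d'`, `'5h 30m'`, or `'45m'` into an absolute start time can be condensed with a single regex. A sketch (the string format is inferred from the cases handled above):

```python
import re
from datetime import datetime, timedelta

# Maps the unit letters used on the site to timedelta keyword names.
_UNITS = {'d': 'days', 'h': 'hours', 'm': 'minutes'}

def parse_countdown(text, now=None):
    """Convert a countdown like '2d', '5h 30m' or '45m' into an absolute start time."""
    now = now or datetime.now()
    delta = timedelta()
    for value, unit in re.findall(r'(\d+)\s*([dhm])', text):
        delta += timedelta(**{_UNITS[unit]: int(value)})
    return now + delta
```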
Dream11 crawler.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # cphonon function # ### Authors # Written by <NAME>, that smiling guy from Latvia in *2019*. Thanks for the theoretical support by <NAME>\ # Much inspired by a matlab code which was written by <NAME> in *1999* and then improved by <NAME>, *2000* # * The first part is the *actual calculation part*, the second one is the *user interface* part. In the first part the eigenfrequencies *omega*, k vectors (multiplied with the lattice separation *a*) *Ka* and eigenvectors *V* are calculated and plotted. It is defined through functions, each of which does a specific part, and afterwards all of them are continuously executed in the GUI part. # # * If you only want to check what is here, go step by step, read the written explanations and execute cell by cell *ctrl+enter*. Feel free to uncomment the cells containing only outputs, e.g. the cell containing only *#A*, and check how the output looks. # ### So, we begin with importing some important libraries # + # %matplotlib inline # %matplotlib notebook # ^^ Will display figures in the same cell as the code is ^^ from matplotlib import pyplot as plt from scipy.sparse import diags import numpy as np from scipy.linalg import eig import ipywidgets as widgets from ipywidgets import interact, interactive from IPython.display import display #Java - not there anymore for running cells from widgets from math import log # pretty print all of a cell's output and not just the last one from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" from IPython.utils.capture import capture_output #To suppress the output at some points # - # ### Define some initial values # # * bc - boundary conditions: if bc=0 - periodic b.c.; if bc=1 - fixed ends; if bc=2 - free ends.
# * N - number of atoms; M1/ M2 - masses of the first/second atom in the diatomic chain. When M1=M2, the program considers a monatomic chain. # * gamma - force constant # * imp_enabled - if 1 then enabled, if 0, then not enabled # * Nimp, Mimp - atom number at which the impurity sits, mass of the impurity\ # You shouldn't begin with N=2 and bc=1. Otherwise at some points it will give errors because *ka* is going to be a null vector, which I haven't dealt with. N = 10 M1 = 30 M2 = 30 gamma = 35 imp_enabled = 0 Nimp = 5 Mimp = 25 bc = 0 # Create force matrix, with 2 on the diagonal and -1 on each side # def CreateForceMatrix(N): tmp1 = [-1]*(N-1) tmp2 = [2]*N diagonals = [tmp1, tmp2, tmp1] # the diagonals of the force matrix A = diags(diagonals, [-1, 0, 1]).todense() return(A) A = CreateForceMatrix(N) # + #A # - # Update the force matrix considering all boundary conditions def ForceMatrixBoundaryCond(A, bc, N): if bc == 2: # free ends A[0, 0] = 1 A[N-1, N-1] = 1 elif bc == 1: # fixed ends A[0, 0] = - 2 # this will generate negative eigenvalues, to be removed below A[N-1, N-1] = - 2 A[1, 0] = 0 A[0, 1] = 0 A[N - 2, N-1] = 0 A[N-1, N - 2] = 0 else: # periodic boundary cond. A[0, N-1] += - 1 # The corner elements are always -1, except when N=2. Then the atom A[N-1, 0] += - 1 # is connected to the other one in two ways. Thus the matrix element is =-2 return(A) A = ForceMatrixBoundaryCond(A, bc, N) # + #A # - # Create mass matrix, take care of impurities def CreateMassMatrix(N, M1, M2, Nimp, Mimp, imp_enabled): Nimp = Nimp-1 # Python counts from 0 B = np.zeros(shape=(N, N)) # diags([1]*N,0).todense() for i in range(N): if i % 2 == 0: # if even B[i, i] = 1 / M1 else: B[i, i] = 1 / M2 if imp_enabled: B[Nimp, Nimp] = 1 / Mimp return(B) B = CreateMassMatrix(N, M1, M2, Nimp, Mimp, imp_enabled) # + # B # - # ### Solve the eigenvalue equation # We get *omega* - eigenfrequencies and V - the vector of all displacements *u* # # Set the number of modes *Nmodes* for fixed b.c.;
this number will be reduced by 2 as we remove negative eigenvalues (two of the atoms don't move) def EigenEq(A, B, gamma, bc, N): A1 = np.dot(gamma*B, A) D, V = eig(A1) D = np.real(D) Nmodes = N # Find negative eigenvalues and remove corresponding eigenvectors/-values for fixed b.c if bc == 1: for k in range(2): neg = -1 for i in range(Nmodes): if ((D[i] < -1e-06) & (neg == -1)): neg = i D = np.concatenate([D[:(neg)], D[(neg+1):]]) V = np.concatenate([V[:, :(neg)], V[:, (neg+1):]], axis=1) Nmodes = Nmodes - 1 omega = np.sqrt(np.abs(D)) # Abs because D can be slightly negative ex. -6.10622664e-16 # sort the eigenvalues and eigenmodes according to the eigenvalues ind = np.argsort(omega) omega = np.sort(omega) V = V[:, ind] return(omega, V) omega, V = EigenEq(A, B, gamma, bc, N) # + #omega #V # - # ### Discrete Fourier transform # We use the fast Fourier transform algorithm (the *fft* function) to implement the discrete Fourier transform. # * For the case of *fixed ends* boundary conditions the smallest wave possible is a wave with the wavelength _2*(N-1)_ - our full system is half the wavelength; the wave can be antisymmetric with the wave outside. This is why, when searching for the Fourier coefficients, we extend the wave with its mirror image outside of our region, but at the end we cut the K interval because for *K>N+1* K is displaying the second Brillouin zone. # * For the case of *free ends* - the real periodicity is _2*N_. The cell goes from 0.5a to N+0.5a. At the end points the force is 0 => the derivative of the wave is zero. # * For periodic boundary conditions, the periodicity is *N* but searching for k vectors we cut only up to *floor(N/2)*. Don't really get why it happens. But each frequency except zero is doubly degenerate with +K and -K solutions (moving in opposite directions), so when we look for K vectors, they will be in our normalised units 0, -1, 1, -2, 2, 3... What we will get from the Fourier analysis will only be the absolute values of these.
So the K values will go from 0 to floor(N/2) and afterwards the peaks in the Fourier spectrum will start to repeat, showing the next Brillouin zone. For other boundary conditions each K will be different in absolute value; the maximal value will correspond to Nmodes # * Karg is the phase of the obtained K. I basically use it only for the periodic case when distributing the K values # + def FourierTransform(bc, V, N,omega): if bc == 2: # free ended #wavemax = Nmodes Vplus=list(reversed(V[:,:])) #Add the mirror image - the other part of the system VFull=np.vstack((V,Vplus)) Kk = np.fft.fft(VFull, 2*N, axis=0) Kk = Kk[:(N + 1), :N] #Cut the first Brillouin zone #Ksq=np.imag(Kk)*np.imag(Kk) #Kargs = np.angle(Kk) elif bc == 1: # for fixed ends the imaginary part turns out to work better. Still don't get why #wavemax = Nmodes Vplus=list(reversed(V[:-1,:]*-1)) #Add the mirror image with the minus sign - the other part of the system VFull=np.vstack((V,Vplus)) Kk = np.fft.fft(VFull, 2*(N - 1), axis=0) Kk = Kk[:(N ), :(N-2)] #Kargs=np.angle(Kk) else: # periodic omega1=np.append(omega[1:],0) #To check whether two consecutive frequencies are the same oddDeg=np.abs(omega1-omega)<1e-06 evenDeg=np.append(False,oddDeg[:-1]) V=V.astype(complex) #Make V matrix a complex matrix Vnew=np.zeros(shape=(N, N)).astype(complex) Vnew[:,oddDeg]=np.sqrt(1/2)*(V[:,oddDeg]+V[:,evenDeg]*1j) V[:,evenDeg]=np.sqrt(1/2)*(V[:,oddDeg]-V[:,evenDeg]*1j) V[:,oddDeg]=Vnew[:,oddDeg] Kk = np.fft.fft(V, N, axis=0) if imp_enabled==1: # Kk=Kk[:N//2+1,:] # wavemax = np.floor(N / 2) # Kk = np.fft.fft(V, N, axis=0) # Kk = Kk[:int(wavemax) + 1, :N] # Ksq = np.real(Kk*np.conj(Kk)) # can be a bit negative # Kargs = np.angle(Kk) #Previously Ksq = np.imag(Kk)*np.imag(Kk) was taken. #If we don't extend, then the eigenmode is only positive and the Fourier transform gives K=0, which is not physical. #The previous matlab code solved this by taking the imag parts.
I feel it is more reasonable just to fix this manually. #Check the commented-out code below Ksq = np.real(Kk*np.conj(Kk)) Ka = np.argmax(Ksq, axis=0) Karg=[0]*len(Ka) if bc==0: #Put those K values which are above the first Brillouin zone on the left branch index=Ka>np.floor(N/2) Ka[index]=Ka[index]-N #for (k, i) in zip(Ka, range(len(Ka))): # Karg[i]=Kargs[k,i] # mx=np.max(Ksq,axis=0)#this one we don't need return(Ka, V) # maybe Kk, Ka, V = FourierTransform(bc, V, N,omega) # + # Vplus=list(reversed(V[:-1,:]*-1)) #Add the mirror image with the minus sign - the other part of the system # VFull=np.vstack((V,Vplus)) # Kk = np.fft.fft(VFull, 2*(N - 1), axis=0) # Kk = Kk[:(N + 1), :(N-2)] # Ksq = np.real(Kk*np.conj(Kk)) # + # plt.plot(Ksq[]) # + # Ksq1=np.imag(Kk)*np.imag(Kk) # Ksq2=np.real(Kk*np.conj(Kk)) # plt.plot(Ksq1[:,0]) # plt.plot(Ksq2[:,0]) # #plt.plot(np.imag(Ksq1[:,1])) #plt.plot(np.imag(Ksq2[:,1])) #plt.plot(np.real(Ksq1[:,1])) #plt.plot(np.real(Ksq2[:,1])) # - Ka omega # + # if bc == 1: # Ka = Ka*np.pi / (N - 1) # elif bc==0: # Ka = Ka*2*np.pi / N #*2 Remembering the cut we did at the Fourier transform # elif bc==2: # Ka = Ka*np.pi / N # - Ka # + def CorrectOmega(Ka, omega, bc, M1, M2, N): # set correct magnitude of Ka if bc == 1: Ka = Ka*np.pi / (N - 1) elif bc==0: Ka = Ka*2*np.pi / N #*2 Remembering the cut we did at the Fourier transform elif bc==2: Ka = Ka*np.pi / N #Give the correct magnitude for the omega in THz. Now the omega is the real frequency, not the angular one. #We keep the name omega. omega=omega*3.9057 # For M1 != M2, fold the high Ka values back into the reduced zone if (M1 != M2): indx=np.abs(Ka) >= np.pi/2 Ka[indx] = np.sign(Ka[indx])*(np.abs(Ka[indx]) - np.pi) # correct sign: if the last Ka is on the right boundary, move it to the left side (we define our interval of Ka: [-pi/a, pi/a) ) if np.abs(Ka[-1] - np.pi) < 1e-06: Ka[-1] = - Ka[-1] return(omega, Ka) omega, Ka = CorrectOmega(Ka, omega, bc, M1, M2, N) # - # ### Bunch of boring but important manipulations with the results.
# * For periodic boundary conditions we have doubly degenerate levels. For the degenerate levels: give the eigenmode with the smallest phase angle a plus sign and the other a minus sign. Afterwards we make the eigenmodes orthogonal. This will make them move into opposite directions (I guess) # * *Ka* values from the fft are only positive, the absolute values. For periodic b.c. we distribute them along the positive and negative branch. We do it also for the other two boundary conditions, even though there each wave is actually a combination of both *Ka* and *-Ka* (waves moving in the opposite directions), thus a standing wave # * Give the correct amplitude for *Ka*. So far they have been in values 1,2,..., but we want them to be up to $Ka=k \cdot a=\frac{\pi}{2}$, where $a$ is the atomic distance # # + # # set Ka to the index of the biggest squared coefficient # def CorrectOmega(Ka, Karg, V, omega, bc, M1, M2, N): # if len(Ka)==0: # because in the case when bc=1 and N=2 gives an error # maxi = 0 # else: # maxi = np.argmax(Ka) # if bc == 0: # for j in range(N-1): # if np.abs(omega[j] - omega[j + 1]) < 1e-06: # if both omegas "almost" equal # diff = Karg[j + 1] - Karg[j] # if diff < - np.pi: # diff = diff + 2*np.pi # elif diff > np.pi: # diff = diff - 2*np.pi # # those where maxi+1 is even Ka will be -Ka, see below in this sec. # if (diff > 0 != ( (maxi+j) % 2) ): #This means do only if both diff>0 and maxi+1 is even or both are not. # # those where maxi+1 is even Ka will be -Ka, see below in this sec. # V[:, j] = - V[:, j] # # If the difference was pi/2 then changing the sign of one does not change the orthogonality # # Otherwise make them both orthonormal (assumes normalized vectors) # if np.abs(np.abs(diff) - np.pi / 2) > 1e-06: # V[:, j + 1] = V[:, j + 1] - V[:, j] * \ # np.dot(V[:, j + 1], V[:, j]) # V[:, j + 1] = V[:, j + 1] / \ # np.sqrt(np.dot(V[:, j + 1], V[:, j + 1])) # # Change sign of every second Ka, depending on which is the maximum Ka.
For periodic this will distribute the dispersion # #on the right and left branch. For other boundary conditions, it would be enough to plot only one branch, # #but this is a pretty way of drawing it. # Ka[(maxi) % 2::2] = -Ka[(maxi) % 2::2] # # set correct magnitude of Ka # if bc == 1: # Ka = Ka*np.pi / (N - 1) # elif bc==0: # Ka = Ka*2*np.pi / N #*2 Remembering the cut we did at the Fourier transform # elif bc==2: # Ka = Ka*np.pi / N # #Give the correct magnitude for the omega in THz. Now the omega is the real frequency, not the angular one. # #We keep the name omega. # omega=omega*3.9057 # # The high Ka values () belong to # if M1 != M2: # Ka = Ka*2 # for i in range(len(Ka)): # if np.abs(Ka[i]) > np.pi: # Ka[i] = np.sign(Ka[i])*(np.abs(Ka[i]) - 2*np.pi) # # correct sign if the last Ka is on the right boundary to the left side (we define our interval of Ka: [-pi/a, pi/a) ) # for i in range(len(Ka)): # if np.abs(Ka[i] - np.pi) < 1e-06: # Ka[i] = - Ka[i] # return(V, omega, Ka) # V, omega, Ka = CorrectOmega(Ka, Karg, V, omega, bc, M1, M2, N) # + # Vdiff = V[:(N-1), :]-V[1:N, :] # if bc == 0: # Vdiff = np.vstack((Vdiff[:(N-1), :], V[(N-1), :] - V[0, :])) # if bc == 0: #Maybe if bc!=1, to include free ends # Vdiff = np.vstack((Vdiff[:(N-1), :], V[(N-1), :] - V[0, :])) # Vdiff = np.diag(np.dot(np.transpose(Vdiff), Vdiff)).copy() # if len(Ka) != 0: # Otherwise it gives an error, because there is no such element # if Vdiff[0] < 1e-06: # Vdiff[0] = 1 # Ch = 4*np.sqrt(2*omega / (gamma*Vdiff)) # V = np.dot(V, diags(Ch, 0).todense()) # - # ### set amplitude proportional to a classical amplitude of one. # Inspired by the previous matlab code. Feels very arbitrary. Basically we normalize so that the distances between the atoms are around 1, for the sake of a pretty animation.
# + def CorrectAmplitude(V, omega, gamma, N): Vdiff = V[:(N-1), :]-V[1:N, :] if bc == 0: #Maybe if bc!=1, to include free ends Vdiff = np.vstack((Vdiff[:(N-1), :], V[(N-1), :] - V[0, :])) Vdiff = np.diag(np.dot(np.transpose(Vdiff), Vdiff)).copy() if len(Ka) != 0: # Otherwise it gives an error, because there is no such element if Vdiff[0] < 1e-06: Vdiff[0] = 1 Ch = 8*np.sqrt(2*omega / (gamma*Vdiff)) #4*np.sqrt(2*omega / (gamma*Vdiff)) V = np.dot(V, diags(Ch, 0).todense()) return(V) V = CorrectAmplitude(V, omega, gamma, N) # - # ### Plots # ##### The dispersion relation + the analytic solution # The guy below (the function) plots the dispersion from both the data and the theoretical calculations # + def PlotDisp(gamma, M1, M2, omega, Ka, ax1,ModeNr): #global ax1 ax1.cla() ax1.plot(Ka, omega, 'bo', label='simulation') #Emphasise the chosen point to plot the eigenmode SizeChosenP=16-N/20 #If there are many points on the plot then the marker is too big try: ax1.plot(Ka[ModeNr-1],omega[ModeNr-1],'rx',markersize=SizeChosenP) ax1.plot(Ka[ModeNr-1],omega[ModeNr-1],'ro',fillstyle='none',markersize=SizeChosenP) except: pass if M1 == M2: ka = np.linspace(-np.pi, np.pi, 100) analytic = np.sqrt(4*gamma/M1)*np.abs(np.sin(ka/2))*3.9057 ax1.plot(ka, analytic, label='analytic') else: ka = np.linspace(-np.pi/2, np.pi/2, 100) MM = (M1+M2)/(M1*M2) analytic1 = np.sqrt( gamma*MM * (1 + np.sqrt(1-2/MM/(M1+M2)*(1-np.cos(ka*2)))))*3.9057 #ka*2 because ka*d=ka*2*a analytic2 = np.sqrt( gamma*MM * (1 - np.sqrt(1-2/MM/(M1+M2)*(1-np.cos(ka*2)))))*3.9057 ax1.plot(ka, analytic1, label='analytic acoustic') ax1.plot(ka, analytic2, label='analytic optical') ax1.legend() ax1.set(xlabel='k*a',ylabel='frequency,$\omega/(2\pi)$ THz', title='dispersion relation/allowed vibrational frequencies') #fig, ax1 = plt.subplots() #PlotDisp(gamma,M1,M2, omega, Ka,ax1,1) # - # This guy plots displacements at one specific given
eigenmode def PlotEigenmode(V, ModeNr, M1, M2, ax2, imp_enabled, Mimp, Nimp): #fig, ax = plt.subplots(); V=np.real(V) Nmodes=len(V[:, 0]) ax2.cla() if M1 > M2: mark1 = 11 mark2 = 6 elif M1 == M2: mark1 = mark2 = 6 else: mark1 = 6 mark2 = 11 marktype = 'bo' if M1 == M2 else 'go' oddatoms = range(1, Nmodes+1, 2) evenatoms = range(2, Nmodes+1, 2) allatoms = range(1, Nmodes+1) ax2.set(xlabel='x/a, atomic spacings (a $\sim$ 3 $\AA$)',ylabel='displacement, u(t=0) (arb.u)', title='Instantaneous positions at one eigenmode') if ModeNr==1: ax2.set_title('Inst. positions at one eigenmode',horizontalalignment='left') ax2.plot(oddatoms, V[::2, ModeNr-1], 'bo', markersize=mark1) ax2.plot(evenatoms, V[1::2, ModeNr-1], marktype, markersize=mark2) ax2.plot(allatoms, V[:, ModeNr-1], '-y') if imp_enabled == 1: ax2.plot(Nimp, V[Nimp-1, ModeNr-1], 'wo', markersize=11) ax2.plot(Nimp, V[Nimp-1, ModeNr-1], 'ro', markersize=log(Mimp*2/(M1+M1)+4,5)*8) # It was easiest to deal with the difficult case when N=2 and fixed b.c. by a separate function where we draw the plot manually def PlotEigenmodeFixedN2(V, M1, M2, ax2, imp_enabled, Mimp, Nimp): oddatoms = 1 evenatoms = 2 ax2.cla() if M1 > M2: mark1 = 11 mark2 = 6 elif M1 == M2: mark1 = mark2 = 6 else: mark1 = 6 mark2 = 11 marktype = 'bo' if M1 == M2 else 'go' allatoms = [1, 2] ax2.set(xlabel='x/a, atomic spacings (a $\sim$ 3 $\AA$)', ylabel='displacement, u(t=0) (arb.u)', title='Instantaneous positions at one eigenmode') ax2.plot(oddatoms, 0, 'bo', markersize=mark1) ax2.plot(evenatoms, 0, marktype, markersize=mark2) ax2.plot(allatoms, [0, 0], '-y') if imp_enabled == 1: ax2.plot(Nimp, 0, 'wo', markersize=11) # very arbitrary value of marker sizes that works ax2.plot(Nimp, 0, 'ro', markersize=log(Mimp*2/(M1+M1)+4, 5)*8) # ## Here we begin the user interface (UI) part using *IPython widgets* :) from ipywidgets import Label, HBox, Layout, Box, VBox # Before the definition of the function we also define plots which will be called later.
For some buggy reason, the *matplotlib* has to be imported and "*%matplotlib notebook*" has to be called again # + # %%capture from matplotlib import pyplot as plt # %matplotlib notebook # calling it a second time may prevent some graphics errors # %matplotlib notebook fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2) fig.set_size_inches(9.5, 3.5) #fig2, ax3 = plt.subplots(nrows=1,ncols=1) # fig2.set_size_inches(10.0, 0.7) PlotDisp(gamma, M1, M2, omega, Ka, ax1, 2) PlotEigenmode(**{'V': V, 'ModeNr': 2, 'M1': M1, 'M2': M2, "ax2": ax2, 'imp_enabled': imp_enabled, 'Mimp': Mimp, 'Nimp': Nimp}) fig.subplots_adjust(wspace=0.4, bottom=0.2) # - # Define a function which will calculate $\omega$ and $k \cdot a$ (do everything above) every time some of the main input widgets will be changed. Function *interactive* defines most of these widgets. # + #Debug=widgets.HTMLMath(value='Start',placeholder='Some HTML') def update(N=N, gamma=gamma, bc=bc, M1=M1, M2=M2, imp_enabled=imp_enabled, Nimp=Nimp, Mimp=Mimp, ModeNr=2): # ,Debug=1 A = CreateForceMatrix(N) A = ForceMatrixBoundaryCond(A, bc, N) B = CreateMassMatrix(N, M1, M2, Nimp, Mimp, imp_enabled) omega, V = EigenEq(A, B, gamma, bc, N) # ax.plot(Ka,omega,'bo',label=['simulation']); # fig if (len(omega) == 0): # Debug.value='5' omega = float('nan') V = float('nan') Ka = float('nan') PlotDisp(gamma, M1, M2, omega, Ka, ax1, ModeNr) PlotEigenmodeFixedN2(V, M1, M2, ax2, imp_enabled, Mimp, Nimp) else: Ka, V = FourierTransform(bc, V, N,omega) omega, Ka = CorrectOmega(Ka, omega, bc, M1, M2, N) #Debug.value='I got printed_4_{}'.format(omega) V = CorrectAmplitude(V, omega, gamma, N) PlotDisp(gamma, M1, M2, omega, Ka, ax1, ModeNr) PlotEigenmode(V, ModeNr, M1, M2, ax2, imp_enabled, Mimp, Nimp) try: len(omega) # Will work if omega is not 'nan' return Ka[ModeNr-1], omega[ModeNr-1], V[:, ModeNr-1] except: return Ka, omega, V # MyInteraction = interactive(update, N=(2, 100, 1), gamma=(5, 200, 5), bc=[('periodic', 0), ('fixed ends', 1), ('free 
ends', 2)], M1=(1, 100, 1), M2=(1, 100, 1), imp_enabled=[('impurity disabled', 0), ('impurity enabled', 1)], Nimp=(1, N, 1), Mimp=(1, 100, 1), ModeNr=(1, N, 1)) # - # I agree that the below is not the prettiest way of doing it, but I am not really a programmer and just learning about widgets. :D So, first we get access to all widgets in MyInteraction so that it would be more straightforward accessing them. Also, delete the names (*.description*) of each widget because otherwise they will appear when we print it using *HBox* later. The output looks nicer if the name is defined at *HBox*. # + for widg in MyInteraction.children[:-1]: widg.description = "" widg.continuous_update = False NInter, gammaInter, bcInter, M1Inter, M2Inter, imp_enabledInter, NimpInter, MimpInter, ModeNrInter = [ MyInteraction.children[i] for i in range(9)] # Change sizes of the two boxes otherwise it is too large bcInter.layout = Layout(width='100px') imp_enabledInter.layout = Layout(width='130px') # - # #### Defining additional widgets and functions # First we define a button that will make the two masses equal when we press it # + MassEqualBtn = widgets.Button(description='Equalize Masses') def equalize_Masses(btn_object): M2Inter.value = M1Inter.value MassEqualBtn.on_click(equalize_Masses) MassEqualBtn.layout = Layout(width='120px') # - # Here we define a widget that will print out the chosen value of $K_a$ and the corresponding $\omega$ # + OmegaPrint = r"&emsp; Frequency \( \frac{{\omega}}{{2 \pi}} \) is <b>{}</b> (THz)" Kprint = r"<br> &emsp; Wave vector \( k \cdot a \) is <b>{}</b> " PrintValue = widgets.HTMLMath( value=OmegaPrint.format(np.round( omega[ModeNrInter.value], 3))+Kprint.format(np.round(Ka[ModeNrInter.value], 2)), placeholder='Some HTML', ) def updateHTML(*args): try: PrintValue.value = OmegaPrint.format(np.round( MyInteraction.result[1], 3))+Kprint.format(np.round(MyInteraction.result[0], 2)) except: PrintValue.value = OmegaPrint.format( 'No Value')+Kprint.format('No Value') for widg in
MyInteraction.children: widg.observe(updateHTML, 'value') # - # Take care that when we have fixed ends, the mode number is N-2\ # Afterwards, when we have different masses and periodic boundary conditions, N must be even # + def updateMaxValues(*args): if bcInter.value == 1: if NInter.value == 2: # otherwise gives an error that the max value is less than the min value ModeNrInter.min = ModeNrInter.max = 0 else: ModeNrInter.max = NInter.value-2 ModeNrInter.min = 1 else: ModeNrInter.max = NInter.value ModeNrInter.min = 1 NimpInter.max = NInter.value NInter.observe(updateMaxValues, 'value') bcInter.observe(updateMaxValues, 'value') def updateNstep(*args): if (M1Inter.value != M2Inter.value) & (bcInter.value == 0): NInter.step = 2 NInter.value = NInter.value+1 if NInter.value % 2 else NInter.value else: NInter.step = 1 M1Inter.observe(updateNstep, 'value') M2Inter.observe(updateNstep, 'value') bcInter.observe(updateNstep, 'value') # - # Animation # # %%capture from matplotlib import animation, rc from IPython.display import HTML # Button which would create an animation if pressed # + # # %matplotlib notebook # # %matplotlib notebook CreateAnim = widgets.Button(description=r'Create animation') outp = widgets.Output(layout={'border': '1px solid black'}) def AnimateOnClick(*args): # %run -p Animation.ipynb outp.clear_output() with outp: display(anim) CreateAnim.on_click(AnimateOnClick) CreateAnim.layout = Layout(width='350px', height='50px') # - # It is difficult to save the animation if everything is in the matplotlib notebook environment. Basically I save the variables to disk, then I execute the animation function which saves the animation, but does not interact with the current notebook environment, and then remove the saved variables.
# If the saving environment mixes together with the matplotlib notebook environment, then one of them stops working # + SaveAnimToFile = widgets.Button( description="Save animation to mp4 with the current parameters") VariablesForSaveAnim = [0]*10 def SaveAnimation(*args): with capture_output() as captured: global VariablesForSaveAnim VariablesForSaveAnim = [NInter.value, gammaInter.value, bcInter.value, M1Inter.value, M2Inter.value, imp_enabledInter.value, NimpInter.value, MimpInter.value, ModeNrInter.value, MyInteraction.result] # %store VariablesForSaveAnim # !jupyter nbconvert --to notebook --execute AnimationSave.ipynb # %store -z VariablesForSaveAnim SaveAnimToFile.on_click(SaveAnimation) SaveAnimToFile.layout = Layout(width='350px', height='50px') # - # Define all of the outputs FirstBoxLayout = Layout(display='flex', flex_flow='row', align_items='stretch', width='100%') # + # MassEqualBtn=Layout FirstBox = widgets.HBox([Label(r'N of atoms'), NInter, Label( r'force constant $\gamma$ ($\frac{N}{m}$)'), gammaInter]) Masses = widgets.Box([Label(r'Mass 1 $M_1$ ($u$)'), M1Inter, Label( r'Mass 2 $M_2$ ($u$)'), M2Inter, MassEqualBtn]) ImpurityBox = widgets.HBox([imp_enabledInter, Label( r'Mass of imp. (u)'), MimpInter, Label(r'Atom nr. of imp.'), NimpInter]) Impurity = widgets.Accordion(children=[Masses, ImpurityBox]) Impurity.set_title(0, 'Masses') Impurity.set_title(1, 'Impurity') ModeNrInterBox = widgets.Box([Label('Mode number'), ModeNrInter, Label( 'boundary conditions'), bcInter, PrintValue], layout=FirstBoxLayout) AnimationBox = widgets.HBox([CreateAnim, SaveAnimToFile]) OutWidg = VBox([FirstBox, Impurity, ModeNrInterBox, AnimationBox]) # + # fig # display(OutWidg) # outp # -
cphonon.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + tags=[] # %load_ext autoreload # %autoreload 2 # - # ## This is a tutorial on the `curve` mode of the `Plot` class. # + tags=[] # Necessary imports import sys import numpy as np sys.path.insert(0, '../..') from batchflow import plot # + tags=[] # Generate sample data size = 1000 y0 = np.cumsum(np.random.normal(size=size)) # - # Specify `mode='curve'` to display a 1-d array. # + tags=[] plot( data=y0, mode='curve' ) # - # To specify the curve domain, provide `x` and `y` wrapped in a single tuple. x0_max = 20_000 x0 = np.arange(0, x0_max, x0_max // size) # + tags=[] plot( data=(x0, y0), mode='curve' ) # - # To label the curve, use the `label` parameter. # + tags=[] plot( data=(x0, y0), mode='curve', label='Random process' ) # - # To modify curve parameters, provide them with the `'curve_'` prefix. # # Some common parameters, however, are accepted even without the prefix: `color`, `linestyle`, `alpha`, `label`. # + tags=[] plot( data=(x0, y0), mode='curve', label='Random process', color='plum', linestyle='dashed', curve_linewidth=1 ) # - # Several curves can be displayed simultaneously. # + tags=[] x1 = np.arange(5000, 15000, 10) y1 = np.cumsum(np.random.laplace(size=size)) # - # Provide them as a list of tuples to display them on a single subplot. # + tags=[] plot( data=[(x0, y0), (x1, y1)], mode='curve', label=['Normal random process', 'Laplace random process'] ) # - # Specify `combine='separate'` to display the provided data on separate subplots. # + tags=[] plot( data=[(x0, y0), (x1, y1)], combine='separate', mode='curve', label=['Normal random process', 'Laplace random process'] ) # - # To keep the domain limits the same, use the `xlim` parameter.
# + tags=[] plot( data=[(x0, y0), (x1, y1)], combine='separate', mode='curve', label=['Normal random process', 'Laplace random process'], xlim=(min(x0[0], x1[0]), max(x0[-1], x1[-1])) ) # - # To pass separate domain limits to the corresponding plots, use `xlim=[lims_0, lims_1]`. # + tags=[] plot( data=[(x0, y0), (x1, y1)], combine='separate', mode='curve', label=['Normal random process', 'Laplace random process'], xlim=[(x1[0], None), (None, x0[-1])] )
examples/plot/03_curve.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Quantile VAR (QVAR) Model # ## Theory # # References: # 1. VAR for VaR: Measuring tail dependence using multivariate regression quantiles # 2. Quantile vector autoregressive distributed lag models and impulse response analysis # ## Implementation with the statsmodels library # # ### QVAR model estimation # + import pandas as pd import statsmodels.regression.quantile_regression as qr import statsmodels.api as sm data = pd.read_excel('../数据/上证指数与沪深300.xlsx') Y = data['hs300'] X = data['sz'] def lag_list(Y, X, p=1, q=1, Yname='', Xname='', exogenous=[]): ''' Equation to estimate: y = c + y(-1) + ... + y(-p) + x(-1) + ... + x(-q) + exogenous Build the design matrices for an autoregressive distributed lag (ADL) model Parameters ---------- Y : dependent variable X : explanatory variable p : lag order of Y in the ADL model, scalar, default 1 q : lag order of X in the ADL model, scalar, default 1 Yname : name of the response variable Xname : name of the explanatory variable exogenous : exogenous variables Returns ------- ADLy : dependent variable of the ADL model ADLx : regressors of the ADL model ''' if not Yname: Yname = 'y' if not Xname: Xname = 'x' ADLx = pd.DataFrame() T = len(Y) ADLy = pd.Series(list(Y[max(p, q):T]), name=Yname) for i in range(1, p+1): name = f'{Yname}_{i}' ADLx[name] = list(Y[max(p, q)-i:T-i]) for i in range(1, q+1): name = f'{Xname}_{i}' ADLx[name] = list(X[max(p, q)-i:T-i]) # add control variables if type(exogenous) == pd.Series: ADLx[exogenous.name] = list(exogenous[:0-max(p, q)]) elif type(exogenous) == pd.DataFrame: for name in exogenous.columns: ADLx[name] = list(exogenous[name][:0-max(p, q)]) # add a constant term ADLx = sm.add_constant(ADLx) return ADLy, ADLx def qvar(Y, X, P, Q, Yname='', Xname='', exogenous=[]): ''' Equations to estimate: y = c + y(-1) + ... + y(-p) + x(-1) + ... + x(-q) + exogenous x = c + y(-1) + ... + y(-p) + x(-1) + ... + x(-q) + exogenous Estimate the QVAR model Parameters ---------- Y : dependent variable X : explanatory variable P : lag order of the QVAR model Q : quantile Yname : name of the response variable Xname : name of the explanatory variable exogenous : exogenous variables Returns ------- res1, res2 : estimation results of the two equations ''' ADLy, ADLx = lag_list(Y, X ,P, P, Yname, Xname, exogenous) mod = qr.QuantReg(ADLy, ADLx) res1 = mod.fit(Q) ADLy, ADLx = lag_list(X, Y ,P, P, Xname, Yname, exogenous) mod = qr.QuantReg(ADLy, ADLx) res2 = mod.fit(Q) return res1, res2 res1, res2 = qvar(Y, X, 2, 0.3, 'hs300', 'sz') beta1 = res1.params beta2 = res2.params pd.DataFrame([beta1, beta2], ['hs300', 'sz']) # - # ### Impulse response analysis # + import numpy as np def OIRF(res1, res2, p): ''' Estimate the impulse response function Parameters ---------- res1, res2 : estimation results of the two equations p : lag order Returns ------- OIRF ''' pass p = 2 resid = pd.DataFrame([res1.resid, res2.resid]).T # residual series # orthogonal decomposition: estimate the P2 matrix a = resid.T @ resid a = a / (len(resid) - 2*p - 1) P2 = np.linalg.cholesky(a) # - # ## matlab implementation # See the matlab code: [QVAR model](https://github.com/lei940324/econometrics/tree/master/matlab代码/分位数回归/VAR模型)
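# The stubbed `OIRF` function above could be completed along these lines — a minimal sketch, assuming a two-variable QVAR with the coefficient ordering produced by `lag_list` (constant, then the equation's own lags, then the other variable's lags) and Cholesky orthogonalization of the residual covariance; it illustrates the standard VAR moving-average recursion, not the exact estimator of the referenced papers.

```python
import numpy as np

def oirf_sketch(res1, res2, p, horizon=10):
    """Orthogonalized impulse responses for a 2-variable QVAR(p).

    Assumes res1/res2 expose .params ordered [const, own lags 1..p,
    cross lags 1..p] (as built by lag_list) and .resid arrays.
    """
    b1, b2 = np.asarray(res1.params), np.asarray(res2.params)
    # Lag coefficient matrices A_1..A_p; row 1 is the res1 equation.
    A = [np.array([[b1[1 + i], b1[1 + p + i]],
                   [b2[1 + p + i], b2[1 + i]]]) for i in range(p)]
    # Residual covariance and its Cholesky factor (the orthogonalization).
    resid = np.column_stack([res1.resid, res2.resid])
    sigma = resid.T @ resid / (len(resid) - 2 * p - 1)
    P2 = np.linalg.cholesky(sigma)
    # MA coefficients Phi_h via the recursion Phi_h = sum_i A_i Phi_{h-i}.
    Phi = [np.eye(2)]
    for h in range(1, horizon + 1):
        Phi.append(sum(A[i] @ Phi[h - 1 - i] for i in range(min(h, p))))
    # Orthogonalized responses at each horizon.
    return [ph @ P2 for ph in Phi]
```

# At horizon 0 the response is the Cholesky factor itself; later horizons follow the usual recursion applied to the quantile-regression coefficients.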
分位数回归/分位数VAR模型.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # + import sys sys.path.append("../") # - from IPython.core.display import HTML # + import os import numpy as np import torch import torch.nn.functional as F from olm.engine import Engine, weight_of_evidence, difference_of_log_probabilities, calculate_correlation from olm import InputInstance, Config from olm.visualization import visualize_relevances from olm.occlusion.explainer import GradxInputExplainer, IntegrateGradExplainer # + #import spacy from tqdm import tqdm from collections import defaultdict from segtok.tokenizer import web_tokenizer, space_tokenizer from transformers import RobertaTokenizer, RobertaForSequenceClassification #, glue_convert_examples_to_features # + CUDA_DEVICE = 0 # or -1 if no GPU is available MODEL_NAME = "../models/CoLA/" # - tokenizer = RobertaTokenizer.from_pretrained(MODEL_NAME) model = RobertaForSequenceClassification.from_pretrained(MODEL_NAME).to(CUDA_DEVICE) COLA_DATASET_PATH = "../data/glue_data/CoLA/" def byte_pair_offsets(input_ids, tokenizer): def get_offsets(tokens, start_offset): offsets = [start_offset] for t_idx, token in enumerate(tokens, start_offset): if not token.startswith(" "): continue offsets.append(t_idx) offsets.append(start_offset + len(tokens)) return offsets tokens = [tokenizer.convert_tokens_to_string(t) for t in tokenizer.convert_ids_to_tokens(input_ids, skip_special_tokens=False)] tokens = [token for token in tokens if token != "<pad>"] tokens = tokens[1:-1] offsets = get_offsets(tokens, start_offset=1) return offsets # + from typing import List, Tuple def read_cola_dataset(path: str) -> List[Tuple[str, str]]: dataset = [] with open(path) as fin: fin.readline() for index, line in enumerate(fin): tokens = line.strip().split('\t') sent, target
= tokens[3], tokens[1] dataset.append((sent, target)) return dataset def dataset_to_input_instances(dataset: List[Tuple[str, str]]) -> List[InputInstance]: input_instances = [] for idx, (sent, _) in enumerate(dataset): instance = InputInstance(id_=idx, sent=web_tokenizer(sent)) input_instances.append(instance) return input_instances def get_labels(dataset: List[Tuple[str, str]]) -> List[int]: return [int(label) for _, label in dataset] # - def collate_tokens(values, pad_idx, eos_idx=None, left_pad=False, move_eos_to_beginning=False): """Convert a list of 1d tensors into a padded 2d tensor.""" size = max(v.size(0) for v in values) res = values[0].new(len(values), size).fill_(pad_idx) def copy_tensor(src, dst): assert dst.numel() == src.numel() if move_eos_to_beginning: assert src[-1] == eos_idx dst[0] = eos_idx dst[1:] = src[:-1] else: dst.copy_(src) for i, v in enumerate(values): copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)]) return res def encode_instance(input_instance): return tokenizer.encode(text=" ".join(input_instance.sent.tokens), add_special_tokens=True, return_tensors="pt")[0] # + #def predict(input_instance, model, tokenizer, cuda_device): # input_ids = tokenizer.encode(text=input_instance.sent1.tokens, # text_pair=input_instance.sent2.tokens, # add_special_tokens=True, # return_tensors="pt").to(cuda_device) # # logits = model(input_ids)[0] # return F.softmax(logits, dim=-1) def predict(input_instances, model, tokenizer, cuda_device): if isinstance(input_instances, InputInstance): input_instances = [input_instances] input_ids = [encode_instance(instance) for instance in input_instances] attention_mask = [torch.ones_like(t) for t in input_ids] input_ids = collate_tokens(input_ids, pad_idx=1).to(cuda_device) attention_mask = collate_tokens(attention_mask, pad_idx=0).to(cuda_device) logits = model(input_ids=input_ids, attention_mask=attention_mask)[0] return F.softmax(logits, dim=-1) # - dataset =
read_cola_dataset(os.path.join(COLA_DATASET_PATH, "dev.tsv")) input_instances = dataset_to_input_instances(dataset) labels = get_labels(dataset) # + batch_size = 100 ncorrect, nsamples = 0, 0 for i in tqdm(range(0, len(input_instances), batch_size), total=len(input_instances) // batch_size): batch_instances = input_instances[i: i + batch_size] with torch.no_grad(): probs = predict(batch_instances, model, tokenizer, CUDA_DEVICE) #print(probs) predictions = probs.argmax(dim=-1).cpu().numpy().tolist() #print(predictions) for batch_idx, instance in enumerate(batch_instances): # the instance id is also the position in the list of labels idx = instance.id true_label = labels[idx] pred_label = predictions[batch_idx] ncorrect += int(true_label == pred_label) nsamples += 1 print('| Accuracy: ', float(ncorrect)/float(nsamples)) # + def batcher(batch_instances): true_label_indices = [] probabilities = [] with torch.no_grad(): probs = predict(batch_instances, model, tokenizer, CUDA_DEVICE).cpu().numpy().tolist() for batch_idx, instance in enumerate(batch_instances): # the instance id is also the position in the list of labels idx = instance.id true_label_idx = labels[idx] true_label_indices.append(true_label_idx) probabilities.append(probs[batch_idx][true_label_idx]) return probabilities def batcher_gradient(batch_instances): input_ids = [encode_instance(instance) for instance in batch_instances] attention_mask = [torch.ones_like(t) for t in input_ids] input_ids = collate_tokens(input_ids, pad_idx=1).to(CUDA_DEVICE) attention_mask = collate_tokens(attention_mask, pad_idx=0).to(CUDA_DEVICE) inputs_embeds = model.roberta.embeddings(input_ids=input_ids).detach() true_label_idx_list = [labels[instance.id] for instance in batch_instances] true_label_idx_tensor = torch.tensor(true_label_idx_list, dtype=torch.long, device=CUDA_DEVICE) # output_getter extracts the first entry of the return tuple and also applies a softmax to the # log probabilities explainer = 
IntegrateGradExplainer(model=model, input_key="inputs_embeds", output_getter=lambda x: F.softmax(x[0], dim=-1)) inputs_embeds.requires_grad = True expl = explainer.explain(inp={"inputs_embeds": inputs_embeds, "attention_mask": attention_mask}, ind=true_label_idx_tensor) input_ids_np = input_ids.cpu().numpy() expl_np = expl.cpu().numpy() relevances = [] for b_idx in range(input_ids_np.shape[0]): offsets = byte_pair_offsets(input_ids_np[b_idx].tolist(), tokenizer) relevance_dict = defaultdict(float) for token_idx, (token_start, token_end) in enumerate(zip(offsets, offsets[1:])): relevance = expl_np[b_idx][token_start: token_end].sum() relevance_dict[("sent", token_idx)] = relevance relevances.append(relevance_dict) return relevances config_unk = Config.from_dict({ "strategy": "unk_replacement", "batch_size": 128, "unk_token": "<unk>" }) config_gradient = Config.from_dict({ "strategy": "gradient", "batch_size": 128 }) config_resample = Config.from_dict({ "strategy": "bert_lm_sampling", "cuda_device": 0, "bert_model": "bert-base-uncased", "batch_size": 256, "n_samples": 100, "verbose": False }) unknown_engine = Engine(config_unk, batcher) resample_engine = Engine(config_resample, batcher) gradient_engine = Engine(config_gradient, batcher_gradient) # + instance_idx = 0 n = 20 unk_candidate_instances, unk_candidate_results = unknown_engine.run(input_instances[instance_idx: instance_idx+n]) res_candidate_instances, res_candidate_results = resample_engine.run(input_instances[instance_idx: instance_idx+n]) grad_candidate_instances, grad_candidate_results = gradient_engine.run(input_instances[instance_idx: instance_idx+n]) # - unk_relevances = unknown_engine.relevances(unk_candidate_instances, unk_candidate_results) res_relevances = resample_engine.relevances(res_candidate_instances, res_candidate_results) grad_relevances = gradient_engine.relevances(grad_candidate_instances, grad_candidate_results) labels_true = labels[instance_idx: instance_idx+n] labels_pred = 
[predict(instance, model, tokenizer, CUDA_DEVICE)[0].argmax().item() for instance in input_instances[instance_idx: instance_idx+n]] HTML(visualize_relevances(input_instances[instance_idx: instance_idx+n], unk_relevances, labels_true, labels_pred)) HTML(visualize_relevances(input_instances[instance_idx: instance_idx+n], res_relevances, labels_true, labels_pred)) HTML(visualize_relevances(input_instances[instance_idx: instance_idx+n], grad_relevances, labels_true, labels_pred)) print(calculate_correlation(unk_relevances, res_relevances)) print(calculate_correlation(unk_relevances, grad_relevances)) print(calculate_correlation(res_relevances, grad_relevances))
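# The byte-pair bookkeeping above — `byte_pair_offsets` finding word boundaries and `batcher_gradient` summing sub-token relevances between consecutive offsets — can be illustrated with toy values (the tokens and relevance numbers below are made up):

```python
# BPE pieces: a leading space marks the start of a new word (as in byte_pair_offsets).
tokens = ["Hel", "lo", " wor", "ld"]
relevances = [0.1, 0.2, 0.3, 0.4]   # one hypothetical relevance per piece

# Offsets of word starts, plus a final sentinel at the end.
offsets = [0]
for i, tok in enumerate(tokens):
    if i > 0 and tok.startswith(" "):
        offsets.append(i)
offsets.append(len(tokens))

# Each word's relevance is the sum of its sub-token relevances.
word_relevances = [sum(relevances[a:b]) for a, b in zip(offsets, offsets[1:])]
```

# Here the two words "Hello" and "world" each collect the relevance mass of their two sub-tokens, mirroring what `batcher_gradient` does per batch row.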
notebooks/relevance-COLA.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import numpy as np import pandas as pd import seaborn as sns from scipy.stats import zscore from sklearn.neighbors import LocalOutlierFactor import matplotlib.pyplot as plt from sklearn.preprocessing import normalize from sklearn.preprocessing import MinMaxScaler pd.set_option('display.max_colwidth', 355) pd.set_option('display.max_rows', 100) # - df = pd.read_csv('datasets/b3_stocks_1994_2020.csv') #df = pd.read_csv('../b3data/b3_stocks_1994_2020.csv') # ## IBOV stocks - Initial exploration df.head() df.shape # ## Data cleaning # Convert string to datetime df['datetime'] = pd.to_datetime(df['datetime']) # + # Remove unused column # df = df.drop(['volume'], axis=1) # - # ### Creating new features # Daily variation as a percentage df['delta_open'] = df.apply(lambda x: (abs(x['open'] - x['close']) / x['open']), axis=1) # + # Indicates whether the variation was positive, negative or zero def calc_delta_side(row): delta = row['open'] - row['close'] if delta > 0: return 1 elif delta == 0: return 0 else: return -1 df['delta_side'] = df.apply(lambda row: calc_delta_side(row), axis=1) # - # **Would a delta_high column (high - low) make sense???** # ### Removing data that make no sense # e.g. rows where the maximum price is lower than the opening price # Many rows where low and high vary a lot but close and open are similar df.loc[df['delta_open'] > 10].head(10) # #### Could these two records be noise?
(high = 1 / close = 33) # + df = df.loc[(df['high'] >= df['open']) & (df['high'] >= df['close'])] df = df.loc[(df['open'] >= df['low']) & (df['close'] >= df['low'])] df = df.loc[(df['high'] >= df['low'])] df.shape # - # ### Plotting distributions df.describe().apply(lambda s: s.apply('{0:.2f}'.format)) sns.boxplot(x=df['delta_open']) # Are these points possible outliers??? # Cutting part of the data to see the chart zoomed in sns.relplot(x="open", y="delta_open", data=df) # ### Normalizing values # # **Normalize before or after creating the features?** # + # Normalizing the values with MinMaxScaler # df[['open', 'close', 'low', 'high', 'delta_open']] = MinMaxScaler().fit_transform(df[['open', 'close', 'low', 'high', 'delta_open']]) # + # Normalizing the values using log normalization # df[['open', 'close', 'low', 'high', 'delta_open']] = np.log(df[['open', 'close', 'low', 'high', 'delta_open']]) # - # ### Cutting part of the data to see the chart zoomed in sns.relplot(x="open", y="delta_open", data=df.loc[(df['delta_open'] < 1000) & (df['open'] < 1000)]) # ## We will use data after '95 because of the Plano Real # We have no information on how the currency conversion was performed, and 1994 has many discrepant observations df = df.loc[df['datetime'] > '1995-01-01'] # ## We will only use data from this century # When looking up many of the stocks listed in the dataset we found no information about them df = df.loc[df['datetime'] > '2000-01-01'] # #### Understanding Amazon shares dropping sharply sns.lineplot(x='datetime', y='close', data=df.loc[(df['ticker'] == 'AMZO34') & (df['datetime'] > '2020-11-01') & (df['datetime'] > '2020-11-01')]) # Inspecting some rows to understand the reason for the sharp variation df.loc[(df['ticker'] == 'AMZO34') & (df['datetime'] > '2020-11-01') & (df['datetime'] > '2020-11-01')] # #### Probably a re-indexing (1 share became 10 at 1/10 of the price) # ### LOF def get_LOF_scores(df, n_neighbors=10, contamination=0.05):
np.random.seed(42) # fit the model for outlier detection (default) clf = LocalOutlierFactor(n_neighbors=n_neighbors, contamination=contamination) # use fit_predict to compute the predicted labels of the training samples # (when LOF is used for outlier detection, the estimator has no predict, # decision_function and score_samples methods). y_pred = clf.fit_predict(df) X_scores = clf.negative_outlier_factor_ output_df = df.copy() output_df['LOF_score'] = X_scores output_df['LOF_predictions'] = y_pred return output_df def show_2D_outliers(df, x, y, scores, title = ''): normalized = (df[scores].max() - df[scores]) / (df[scores].max() - df[scores].min()) t = "Outlier Scores" if title: t=t+": "+title fig, ax = plt.subplots(figsize=(8, 6)) plt.title(t) plt.scatter(x=x, y=y, color='k', s=3., label='Data points', data=df) # plot circles with radius proportional to the outlier scores plt.scatter(x=x, y=y, s=1000 * normalized, edgecolors='r', facecolors='none', label='Outlier scores', data=df) plt.axis('tight') # plt.xlim((-5, 5)) # plt.ylim((-5, 5)) # plt.xlabel("prediction errors: %d" % (n_errors)) legend = plt.legend(loc='upper right') legend.legendHandles[0]._sizes = [10] legend.legendHandles[1]._sizes = [20] plt.show() # ### Applying the algorithm to a very dense range of the dataset # + df_low = df[['open', 'delta_open']] df_low = df_low.loc[(df_low['open'] < 10) & (df_low['delta_open'] < 0.5)] scores_low = get_LOF_scores(df_low, n_neighbors=300, contamination=0.5) show_2D_outliers(scores_low, x = 'open', y = 'delta_open', scores = 'LOF_score', title = 'Delta open low') # - # #### We believe the visualization is poor in this case because some circles become large enough to fall outside the chart # ### Applying the algorithm to a sparse range of the dataset # + df_high = df[['open', 'delta_open']] df_high = df_high.loc[(df_high['open'] > 500) & (df_high['delta_open'] > 0.1)] scores_high = get_LOF_scores(df_high, n_neighbors=100, contamination=0.3)
show_2D_outliers(scores_high, x = 'open', y = 'delta_open', scores = 'LOF_score', title = 'Delta open high') # - # **We believe the visualization is more effective in this case, but we don't know which transformations of the dataset would lead to a coherent visualization** # ### Results # # - Biggest problem identified: disparity in stock prices (some cost cents, others thousands of reais). How to handle it? - Normalization on a logarithmic scale??? # # - No satisfactory results # - The visualization is not good in this case # - Which other transformations could be applied or visualizations explored? # - How to handle the disparity between stock prices? # # In search of the Circuit Breakers # ### Creating new columns # Goals: # - how many stocks went down # - how many stocks went up # - mean of the variation # - mean of the variation of the top 10 # - standard deviation of the variation # - mean of the volume # - day # Getting the most important ibovespa stocks ibov_composition = pd.read_csv('datasets/IBOVDia_13-10-21.csv', sep=';', encoding='utf-8') ibov_composition.reset_index(inplace=True) ibov_composition['Theoretical Quantity'] = ibov_composition['Theoretical Quantity'].str.replace(',','.').astype(float) ibov_composition.sort_values('Theoretical Quantity', ascending=False, inplace=True) top_15 = ibov_composition[1:16]['index'].to_list() # What percentage of the IBOVESPA index the 15 largest stocks represent ibov_composition[1:16]['Theoretical Quantity'].sum() # Creating the daily aggregation # + def how_many_went_up(series): return series[series == 1].shape[0] / series.shape[0] def how_many_went_down(series): return series[series == -1].shape[0] / series.shape[0] # - df_daily = df.groupby('datetime').agg( { 'delta_open':['mean','std'], 'volume': 'mean', 'delta_side': [how_many_went_up, how_many_went_down]} ) # Renaming the columns df_daily.columns = df_daily.columns.to_flat_index() df_daily.rename( columns={ ('delta_open', 'mean'): 'variation_mean', ('delta_open', 'std'):
'variation_std', ('volume', 'mean'): 'volume_mean', ('delta_side', 'how_many_went_up'): 'up_count', ('delta_side', 'how_many_went_down'): 'down_count', }, inplace = True ) # Getting the mean and std variation of the 15 most important stocks df_top_15 = df.loc[df['ticker'].isin(top_15)] daily_df_top_15 = df_top_15.groupby('datetime').agg({'delta_open': ['mean', 'std']}) daily_df_top_15.columns = daily_df_top_15.columns.to_flat_index() daily_df_top_15.rename( columns= { ('delta_open', 'mean'): 'top_15_variation_mean', ('delta_open', 'std'): 'top_15_variation_std', }, inplace = True) # joining with the df_daily dataset df_daily = pd.merge(df_daily, daily_df_top_15, on='datetime') df_daily.reset_index(inplace=True) df_daily # Normalizing volume mean df_daily['volume_mean'] = MinMaxScaler().fit_transform(df_daily[['volume_mean']]) sns.relplot(x="datetime", y="volume_mean", data=df_daily.loc[df_daily['datetime'] > '2020']) sns.relplot(x="datetime", y="variation_mean", data=df_daily.loc[df_daily['datetime'] > '2020']) sns.lineplot(x="datetime", y="variation_mean", data=df_daily.loc[(df_daily['datetime'] > '2008') & (df_daily['datetime'] < '2009')]) # ### Outlier Detection # + df_2020 = df_daily.loc[df_daily['datetime'] > '2020'].set_index('datetime') df_2020_lof = get_LOF_scores(df_2020, n_neighbors=40, contamination=0.5) # To plot the chart, datetime cannot be the index of the dataframe df_2020_lof.reset_index(inplace=True) show_2D_outliers(df_2020_lof, x = 'datetime', y = 'variation_mean', scores = 'LOF_score', title = 'Delta open low') # - # **Removing the volume data** # + df_2020_lof = get_LOF_scores(df_2020.drop(columns=['volume_mean']), n_neighbors=40, contamination=0.5) df_2020_lof.reset_index(inplace=True) show_2D_outliers(df_2020_lof, x = 'datetime', y = 'variation_mean', scores = 'LOF_score', title = 'Variation Mean') # - # **Plotting for 2008** # + # df_2020 = df_daily.loc[(df_daily['datetime'] > '2008') & (df_daily['datetime'] <
'2009')].set_index('datetime') # scores_low = get_LOF_scores(df_2020, n_neighbors=100, contamination=0.5) # # To plot the chart, datetime cannot be the index of the dataframe # scores_low.reset_index(inplace=True) # show_2D_outliers(scores_low, x = 'datetime', y = 'variation_mean', scores = 'LOF_score', title = 'Delta open low') # - # ## The dataset used will be the 2020 one, without the volume data df_2020_lof.mean() # **Row with the highest LOF_score that was not considered an outlier** df_2020_lof.sort_values(by=['LOF_score']).head(1) df_2020_lof.sort_values(by=['LOF_score']).head(5) # **z-score for all columns and cut the 10+** # + df_top_10_zscore = df_2020_lof.sort_values(by=['LOF_score']) df_top_10_zscore = df_top_10_zscore.drop(columns=['LOF_score', 'LOF_predictions']) df_top_10_zscore = df_top_10_zscore.set_index('datetime').apply(zscore).head(5) df_top_10_zscore = df_top_10_zscore.abs() df_top_10_zscore # - sns.heatmap(df_top_10_zscore, annot=True) # We started analyzing the data in search of the reasons why specific days were flagged as anomalous, # ordering them by the outlier coefficient returned by LOF. With the dataset organized, we converted the absolute values of each column into the corresponding standard score (z-score). The idea is to understand how much each record deviates from the mean found in the dataset. # # With this information in hand, the five days with the highest outlier coefficients were chosen for a deeper analysis. It is important to note that this kind of comparison can bring some explainability to the anomaly detection, but it is naive, mainly because of two aspects that possibly drive the high coefficient of the observations flagged as unexplainable.
They are: we will be analyzing only isolated dimensions, so larger subspaces are not considered; and the LOF algorithm used takes temporally close values into account when computing the coefficient of each observation, information that was also not considered in these analyses. # # June 22, 2020 was identified as the most discrepant day among the more than five thousand samples in the dataset. On this day a huge deviation from the mean of the dataset was observed in the variation_std dimension. This column holds the standard deviation found across all stocks traded on the day. The variation_mean column, the mean variation of the day, shows a slight deviation. The other columns remained stable. Thus, the anomaly found on this day seems to be that a small group of small stocks, not among the 15 most important for the IBOV index, varied many times more than usual. # # # On June 22 a group of stocks outside the top 15 deviated a lot from the mean of the daily deviation. # Probably none of these stocks is in the top 15 because of the mean variation among them # # # on March 25 a large share of the stocks varied considerably, but especially the 15 most important ones # # on November 17 either a very small group varied a lot, or there were complementary variations that cancelled out with respect to the mean # --- # # the remaining days are highly inconclusive (40% of the top 5 and 50% of the top 10) -> a justification for using the algorithm # df_top_10_zscore.sum(axis=1).sort_values() pd.set_option('display.float_format', lambda x: '%.3f' % x) df_top_10_zscore.sum().sort_values() df_2020_lof.set_index('datetime') df_2020_lof.to_csv('datasets/df_outliers.csv') df_2020_lof.shape # df_2020.to_csv('datasets/df_outliers_since2000.csv', index=False)
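# The LOF-then-z-score explanation described above can be sketched end to end on toy data — the feature values below are made up for illustration and are not the notebook's dataset:

```python
import numpy as np
import pandas as pd
from scipy.stats import zscore
from sklearn.neighbors import LocalOutlierFactor

# Toy data: 200 ordinary days plus 2 anomalous ones (hypothetical values).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "variation_mean": rng.normal(0.02, 0.005, 200),
    "variation_std": rng.normal(0.01, 0.002, 200),
})
df.loc[200] = [0.10, 0.01]   # anomalous mean variation
df.loc[201] = [0.02, 0.05]   # anomalous variation spread

# Rank days by LOF (more negative negative_outlier_factor_ = more outlying).
lof = LocalOutlierFactor(n_neighbors=20)
lof.fit_predict(df)
df["LOF_score"] = lof.negative_outlier_factor_
top = df.sort_values("LOF_score").head(2)

# Per-dimension |z-score| of the top outliers hints at which feature drove them.
z = df.drop(columns="LOF_score").apply(zscore).abs()
drivers = z.loc[top.index].idxmax(axis=1)
```

# For the injected rows, the largest absolute z-score points at the feature that drove the anomaly — exactly the kind of naive per-dimension explanation discussed above, with the same caveat that multi-dimensional subspaces are ignored.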
data_exploration.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd from pandas.api.types import CategoricalDtype df = pd.read_csv('../bike-sharing/data.csv.gz') dfc = df categorical_columns = [ "holiday", "season", "mnth", "dteday" ] for column in categorical_columns: dfc[column] = dfc[column].astype(CategoricalDtype(ordered=True)) dfc[column] = dfc[column].cat.codes # - df.columns from algoneer.dataset import DataSet from algoneer.dataschema import DataSchema ds = DataSchema({ 'name' : 'Bike Rental Data', 'url' : 'https://archive.ics.uci.edu/ml/datasets/bike+sharing+dataset', 'description' : 'This dataset contains the hourly and daily count of rental' ' bikes between years 2011 and 2012 in Capital bikeshare system' ' with the corresponding weather and seasonal information.', 'columns' : { 'instant' : { 'type' : 'numerical', }, 'dteday' : { 'type' : 'integer', }, 'season' : { 'type' : 'categorical', 'category_type' : 'integer', 'categories' : [0, 1, 2, 3], }, 'yr' : { 'type' : 'integer', 'format' : 'year', }, 'mnth' : { 'type' : 'integer', 'format' : 'month', }, 'holiday' : { 'type' : 'boolean', 'true' : 0, 'false' : 1, }, 'weekday' : { 'type' : 'integer', 'format' : 'weekday', }, 'workingday' : { 'type' : 'boolean', 'true' : 0, 'false' : 1, }, 'weathersit' : { 'type' : 'categorical', 'category_type' : 'integer', 'categories' : [2, 1, 3] }, 'temp' : { 'type' : 'numerical', }, 'atemp' : { 'type' : 'numerical' }, 'hum' : { 'type' : 'numerical', }, 'windspeed' : { 'type' : 'numerical', }, 'casual' : { 'type' : 'integer', }, 'registered' : { 'type' : 'integer', }, 'cnt' : { 'type' : 'integer', }, } }) df.cnt.unique()
examples/bike-sharing/data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Political Organizations Activity on Twitter aggregated by Party # # The parameters in the cell below can be adjusted to explore other political parties and time frames. # # ### How to explore other political parties? # The ***organization*** parameter can be used to aggregate organizations by their party. The column `subcategory` in [this other notebook](../political_organizations.ipynb?autorun=true) shows the organizations that belong to each party. # # ***Alternatively***, you can directly use the [organizations API](http://mediamonitoring.gesis.org/api/organizations/swagger/), or access it with the [SMM Wrapper](https://pypi.org/project/smm-wrapper/). # # ## A. Set Up parameters # Parameters: organization = 'csu' from_date = '2017-09-01' to_date = '2018-12-31' aggregation = 'week' # ## B. Using the SMM Organization API # + import pandas as pd # create an instance of the smm wrapper from smm_wrapper import SMMOrganizations smm = SMMOrganizations() # using the api to get the data df = smm.dv.get_organizations() df = df[(df['category']=='political')] # filter the accounts by party, and valid ones (the ones that contain tw_ids) party_df = df[(df['subcategory']==organization) & (df['tw_ids'].notnull())] # query the Social Media Monitoring API tweets_by = pd.concat(smm.dv.tweets_by(_id=organization_id, from_date=from_date, to_date=to_date, aggregate_by=aggregation) for organization_id in party_df.index) replies_to = pd.concat(smm.dv.replies_to(_id=organization_id, from_date=from_date, to_date=to_date, aggregate_by=aggregation) for organization_id in party_df.index) # aggregate the tweets and replies total_tweets_by = tweets_by.groupby('date').agg({'tweets': 'sum'}) total_replies_to = replies_to.groupby('date').agg({'replies': 'sum'}) # - # ## C. 
Plotting # + #plotting data import plotly from plotly import graph_objs as go plotly.offline.init_notebook_mode(connected=True) plotly.offline.iplot({ "data": [go.Scatter(x=total_tweets_by.index, y=total_tweets_by['tweets'], name='Tweets', line_shape='spline'), go.Scatter(x=total_replies_to.index, y=total_replies_to['replies'], name='Replies', line_shape='spline')], "layout": go.Layout( title='Twitter (Tweets and Replies)', yaxis=dict(title='N')) })
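# The aggregation pattern used above — per-organization series concatenated with `pd.concat` and then summed per date — can be illustrated with toy numbers (the dates and counts below are made up):

```python
import pandas as pd

# Two hypothetical organizations' weekly tweet counts.
org_a = pd.DataFrame({"date": ["2018-01-01", "2018-01-08"], "tweets": [3, 5]})
org_b = pd.DataFrame({"date": ["2018-01-01", "2018-01-08"], "tweets": [2, 1]})

# Same shape as the tweets_by aggregation above: concatenate, then sum per date.
combined = pd.concat([org_a, org_b])
total_tweets = combined.groupby("date").agg({"tweets": "sum"})
```

# Each date's total is the sum over all organizations in the party, which is what the groupby-sum over the concatenated API results computes.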
python/polorg/twitter.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # Description: # # I downloaded dataset and saved as csv files # # I imported all 5 of the csv files into jupyter notebook and finally merged 3 files data, item and user # # I then used this merged file: DataFrame 'df_full' for most of the exploration # # I had in mind to use a loop to find the average ratings per movie by occupation but also decided to try pandas melt function as I had gotten so much help with the indexing part of the loop that I couldn't really justify leaving it in isolation and the melt function is actually quite impressive # # The boxplots and violin plots are done using seaborn library - I also used sns for heatmaps - with just a few matplotlib graphs for data display # # I've included the references to material used as they occur and this really made more sense. 
# # I used excel to add missing headings to columns, to clean data re spaces and other 'ticks' and for changing the F/M gender to 1/0 as needed that for Logistic Regression # import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.pyplot import* from matplotlib import colors import seaborn as sns import statsmodels.api as sm import pylab as pl from sklearn.linear_model import LinearRegression from sklearn import cross_validation as cv # %matplotlib inline df_data = pd.read_csv('data.csv') df_item = pd.read_csv('item.csv') df_genre = pd.read_csv('genre.csv') df_occupation = pd.read_csv('occupation.csv') df_user = pd.read_csv('user.csv') pd.set_option('display.max_row', 25) df_data.head(2) df_item.head(2) df_item.columns df_genre.head(19) df_genre1 = df_genre.T df_genre1 df_occupation.head(2) # # # ____________________________________________________________________________________________________________________________________________________________________________________________ # User Dataframe information # Key for Gender - changed in Excel # Female :1 # Male :0 # # _____________________________________________________________________________________________ # _____________________________________________________________________________________________ # + df_user.head(2) # - df_user.T occupation_counts = df_user.occupation.value_counts() occupation_counts occupation_counts.plot.bar(color='green') df_userdata = pd.merge(df_user, df_data, left_on='user_id', right_on='user_id') df_userdata # + #df_userdata.to_excel('userdata.xlsx', sheet_name='Sheet1') # - df_full= pd.merge(df_userdata, df_item, left_on='item_id', right_on='movie_id') df_full # + #excel wont accept as more that 65,530 rows #df_full.to_excel('full.xlsx', sheet_name='Sheet1') # - totals = df_full.groupby(['occupation','age','rating'])['unknown', 'action', 'adventure', 'animation', 'childrens', 'comedy', 'crime', 'documentary', 'drama', 'fantasy', 'film_noir', 
'horror', 'musical', 'mystery', 'romance', 'sci_fi', 'thriller', 'war', 'western'].sum() totals totals_occupation_genre = df_full.groupby(['occupation','rating'])['unknown', 'action', 'adventure', 'animation', 'childrens', 'comedy', 'crime', 'documentary', 'drama', 'fantasy', 'film_noir', 'horror', 'musical', 'mystery', 'romance', 'sci_fi', 'thriller', 'war', 'western'].sum() totals_occupation_genre totals_occupation = df_full.groupby(['occupation'])['unknown', 'action', 'adventure', 'animation', 'childrens', 'comedy', 'crime', 'documentary', 'drama', 'fantasy', 'film_noir', 'horror', 'musical', 'mystery', 'romance', 'sci_fi', 'thriller', 'war', 'western'].sum() totals_occupation # + #Notes: #http://stackoverflow.com/questions/29941384/how-can-i-use-melt-to-reshape-a-pandas-dataframe-to-a-list-creating-an-index #http://connor-johnson.com/2014/08/28/tidyr-and-pandas-gather-and-melt/ # melt works best here: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html #pandas.melt(frame, id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None) #“Unpivots” a DataFrame from wide format to long format, optionally leaving identifier variables set. 
#genres: ['unknown', 'action', 'adventure', 'animation', 'childrens', 'comedy', 'crime', 'documentary', 'drama', 'fantasy', 'film_noir', 'horror', 'musical', 'mystery', 'romance', 'sci_fi', 'thriller', 'war', 'western']

mean_scores_prep_a = pd.melt(df_full, id_vars=['occupation','rating','user_id','age', 'gender','zip_code','item_id','timestamp','movie_id','movie_title','release_date',' video release date ',' IMDb URL '], var_name='movie_genre', value_name='value')

mean_scores_prep_b = mean_scores_prep_a[mean_scores_prep_a['value']==1][['movie_genre','occupation','rating','user_id','age', 'gender','zip_code','item_id','timestamp','movie_id','movie_title','release_date',' video release date ',' IMDb URL ']]

mean_scores = pd.pivot_table(mean_scores_prep_b, aggfunc=np.mean, index=['occupation'], columns=['movie_genre'], values=['rating'])

#mean_scores_prep_a#[1896730:1896780] # last record is 1896769 (total here 1896770 rows)
#mean_scores_prep_b #[212180:212230] #last record is 1896585 (total now 212215 rows)

mean_scores # 21 rows

# +
# I left both versions in: the melt/pivot_table approach above is more my own investigation,
# while the loop below is the way I prefer, though I had a lot of help with it -
# specifically on how to index into the DataFrame correctly

def average_rate(rating):
    if rating.sum() != 0:
        part1 = ((float(rating[0]) * 1) + (rating[1] * 2) + (rating[2] * 3) + (rating[3] * 4) + (rating[4] * 5)) / rating.sum()
        return part1
    else:
        return 0

for i in totals_occupation.index:
    for col in totals_occupation:
        rating_group = df_full[(df_full.occupation == i)].groupby(['rating'])[col].sum()
        rating_values = rating_group.values
        totals_occupation.loc[i, col] = average_rate(rating_values)

# df_full.rating.sum()
totals_occupation
# -

# _____________________________________________________________________________________________
# _____________________________________________________________________________________________
#
# The two heat maps
are the same except second one has movie genre in alphabetical order # # I left in the two just for illustration having used the two data prep methods above. # # Just to note the heatmaps are in reverse order ie the one directly below is for the nested method directly above here. # # Annoyingly I couldn't get rid of the 'None' in front of 'movie_genre' on the x label in heatmap two - there must be some way but I tried several things in vain such as label = False which works for the . # # _____________________________________________________________________________________________ # _____________________________________________________________________________________________ # https://stanford.edu/~mwaskom/software/seaborn/tutorial/aesthetics.html plt.figure(figsize=(10, 8)) sns.plt.title ('HEATMAP: Average Rating by Occupation per Genre') sns.heatmap(totals_occupation, vmin =3, vmax =5, cmap='YlGnBu', annot = True) # + #I tried to get rid of the 'None' in front of movie_genre but can't find a way - a small but annoying thing plt.figure(figsize=(10, 8)) sns.plt.title ('HEATMAP: Average Rating by Occupation per Genre') sns.heatmap(mean_scores, annot=True, vmin =2, vmax =5,cmap='spring_r' ) #cbar_kws={"orientation": "horizontal"} # - exec_documentary = df_full[(df_full.occupation == 'executive')&(df_full.documentary)] exec_documentary.head(2) art_documentary = df_full[(df_full.occupation == 'artist')&(df_full.documentary)] art_documentary.head(2) lib_drama = df_full[(df_full.occupation == 'librarian')&(df_full.drama)] lib_drama.head(2) sales_drama = df_full[(df_full.occupation == 'salesman')&(df_full.drama)] sales_drama.head(2) # + #sns notes: whis= 1.5 etc, 'notch' for CI for median #help(sns.factorplot) #Included violin plots here two because I think they give a good look into boxplot blueprint # - plt.figure(figsize=(8, 6)) sns.set_style('darkgrid')# or 'ticks' for no hoizontal lines sns.plt.title('BOXPLOT Dramas: Librarian vs Salesman') #full = 
sns.load_dataset('df_full') ax = sns.boxplot(x='occupation', y='rating' ,data = lib_drama, palette ='PRGn',order=['librarian','salesman'],linewidth = 1, whis =0.5, notch = True) ax = sns.boxplot(x='occupation', y='rating' ,data = sales_drama, palette ='PRGn',order=['librarian','salesman'],linewidth = 1, whis=0.5, notch = True) #sns.despine(offset=1, trim=True) plt.figure(figsize=(8, 6)) sns.set_style('whitegrid')# or 'ticks' for no hoizontal lines sns.plt.title('VIOLINPLOT Dramas: Librarian vs Salesman') #full = sns.load_dataset('df_full') ax = sns.violinplot(x='occupation', y='rating' ,data = lib_drama, palette ='Set3',order=['librarian','salesman'],linewidth = 1, whis =0.5, notch = True) ax = sns.violinplot(x='occupation', y='rating' ,data = sales_drama, palette ='Set3',order=['librarian','salesman'],linewidth = 1, whis=0.5, notch = True) #sns.despine(offset=1, trim=True) # + plt.figure(figsize=(8, 6)) sns.set_style('darkgrid') sns.plt.title ('BOXPLOT Documentaries: Executive vs Artist') #full = sns.load_dataset('df_full') ax = sns.boxplot(x='occupation', y='rating' ,data = exec_documentary, palette ='PRGn',order=['executive','artist'],linewidth = 1, notch = True) ax = sns.boxplot(x='occupation', y='rating' ,data = art_documentary, palette ='PRGn',order=['executive','artist'],linewidth = 1, notch = True) #sns.despine(offset=1, trim=True) # - plt.figure(figsize=(8, 6)) sns.set_style('whitegrid') sns.plt.title ('VIOLINPLOT Documentaries: Executive vs Artist') #full = sns.load_dataset('df_full') ax = sns.violinplot(x='occupation', y='rating' ,data = exec_documentary, palette ='Set3',order=['executive','artist'],linewidth = 1, notch = True) ax = sns.violinplot(x='occupation', y='rating' ,data = art_documentary, palette ='Set3',order=['executive','artist'],linewidth = 1, notch = True) #sns.despine(offset=1, trim=True) # + # Logistic regression starts here # - df_full.age.describe() df_full.age.hist(bins =7) df_full.rating.describe() df_full.rating.hist(bins = 9) 
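# The average-rating loop earlier turns a vector of rating counts (how many 1-star to 5-star votes a group gave) into a weighted mean. A minimal stdlib sketch of that calculation, independent of the DataFrame indexing (names here are illustrative, not the notebook's):

```python
def average_rating(counts):
    """Weighted mean of star ratings, where counts[i] is the number of
    ratings with value i + 1 (one entry per star, 1..5)."""
    total = sum(counts)
    if total == 0:
        return 0  # mirror the notebook's helper: empty groups score 0
    return sum((star + 1) * n for star, n in enumerate(counts)) / float(total)

# ten 4-star and ten 5-star votes average to 4.5
print(average_rating([0, 0, 0, 10, 10]))  # -> 4.5
```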
df_full.head(2) # + from sklearn import cross_validation as cv from sklearn.linear_model import LogisticRegression #I omitted 'unknown' genre here def test_train_logit(x): train_set, test_set = cv.train_test_split(df_full[['gender','action','adventure','animation','childrens','comedy','crime','documentary','drama','fantasy','film_noir','horror','musical','mystery','romance','sci_fi','thriller','war','western']], test_size=x) y = train_set['gender'] x = train_set.drop(train_set.columns[[0]], axis=1) test_y = test_set['gender'] test_x = test_set.drop(test_set.columns[[0]], axis=1) logistic = LogisticRegression() logistic.fit(x,y) logistic.predict(test_x) return (logistic.score(test_x, test_y), logistic.coef_) print " \nRegression Coefficients order is: 'action','adventure','animation','childrens','comedy','crime','documentary','drama','fantasy','film_noir','horror','musical','mystery','romance','sci_fi','thriller','war','western'\n" test_split = [.4,.35,.3,.25,.2] # try different test:train ratios for split in test_split: accuracy = [] coefficients = [] #np.mean((logistic.predict(test_x)-(test_y)**2) for i in range (3): # doing 3 here for shortness - in reality would run many more per ratio split and average logit = test_train_logit(split) accuracy.append(logit[0]) coefficients.append(logit[1]) print ' Test:Train split now is ', split,':',1-split print ' Averaged accuracy of the model: ',sum(accuracy)/3,'\n Coefficients: ',coefficients # - # ___________________________________________________________________________________________________________________ # ___________________________________________________________________________________________________________________ # # For this logistic regression - these are results of three loops (within the loop to test different ratio models) per new test/train split and the average of three is the accuracy. Of course in reality would do more loops just thought it would look very unwieldy here. 
# # Female:1 Male:0 # # Regression Coefficients order is: # # 1.'action',2.'adventure',3.'animation',4.'childrens',5.'comedy',6.'crime',7.'documentary',8.'drama', # 9.'fantasy',10.'film_noir',11.'horror',12.'musical',13.'mystery',14.'romance',15.'sci_fi', # 16.'thriller',17.'war',18.'western' # # # Genre most predictive of Gender # # instead of the 40/60 split figures below - here the figures was for the best ratio model from section above , the 25/75 split: # # Best 3 coefficients here are: 4.'childrens':0.313228 , 14.'romance' :0.267925766 18: 'western' -0.294950819 so there genre would appear to be most predictive of gender. # # # Accuracy of the logistic model # The accuracy hovers at just over 74 with the best for 25/75 split average accuracy of: 0.742178369895 and highest return of 0.74611840 # # ____________________________________________________________________________________________________________________ # # Sample output from above with 40/60 split: # # [0.7455173795452269, 0.74110988680757284, 0.7428628668736853] # # # Test:Train split now is 0.4 : 0.6 # # # Average accuracy of the model: 0.743163377742 # # # Coefficients: # # # childrens # -0.19938576, 0.01593005, -0.12708934, 0.32795457, 0.04648229, # # -0.14487156, -0.03059875, 0.12733968, -0.06158994, -0.14579716, # # romance # -0.10297987, 0.02571883, 0.05703292, 0.26350223, -0.14346511, # # western # -0.01280939, -0.13380743, -0.27554455 # # # # -0.17703922, -0.02123059, -0.1617448 , 0.32643474, 0.0087088 , # -0.18397035, -0.0823896 , 0.09735925, -0.07936723, -0.05454171, # -0.10359703, 0.03133368, 0.00948583, 0.25626898, -0.16878137, # -0.05404595, -0.16710839, -0.26043405 # # -0.19492109, -0.011701 , -0.18314428, 0.3575479 , -0.00268421, # -0.18052961, -0.04276943, 0.11329138, -0.05233104, -0.13742102, # -0.10613916, -0.01092815, 0.06552888, 0.25209888, -0.18129162, # -0.03052027, -0.14127435, -0.24808462 # # # 
____________________________________________________________________________________________________________________
# ____________________________________________________________________________________________________________________

# +
# Rerun with just the three best coefficients - accuracy is more or less the same, circa 74%

def test_train_logit(test_fraction):
    train_set, test_set = cv.train_test_split(df_full[['gender','childrens','romance','western']], test_size=test_fraction)
    y = train_set['gender']
    x = train_set.drop(train_set.columns[[0]], axis=1)
    test_y = test_set['gender']
    test_x = test_set.drop(test_set.columns[[0]], axis=1)
    logistic = LogisticRegression()
    logistic.fit(x, y)
    logistic.predict(test_x)
    return (logistic.score(test_x, test_y), logistic.coef_)

print " \nRegression Coefficients order is: 'childrens','romance','western'\n"

test_split = [.4, .35, .3, .25, .2]  # try different test:train ratios
for split in test_split:
    accuracy = []
    coefficients = []
    for i in range(3):  # doing 3 here for brevity - in reality would run many more per ratio split and average
        logit = test_train_logit(split)
        accuracy.append(logit[0])
        coefficients.append(logit[1])
    print ' Test:Train split now is ', split, ':', 1 - split
    print ' Averaged accuracy of the model: ', sum(accuracy) / 3, '\n Coefficients: ', coefficients
# -
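# The cells above estimate accuracy by refitting over several random train/test splits and averaging the scores. That resampling pattern can be sketched with the stdlib only (the majority-class "model" here is a stand-in for illustration, not the notebook's logistic regression):

```python
import random

def majority_class(labels):
    # stand-in "model": always predict the most common training label
    return max(set(labels), key=labels.count)

def averaged_split_accuracy(data, test_fraction, n_repeats, seed=0):
    """data is a list of (features, label) pairs; repeat a random
    train/test split n_repeats times and average the test accuracy."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_repeats):
        shuffled = data[:]
        rng.shuffle(shuffled)
        n_test = int(len(shuffled) * test_fraction)
        test, train = shuffled[:n_test], shuffled[n_test:]
        pred = majority_class([y for _, y in train])
        scores.append(sum(1 for _, y in test if y == pred) / float(n_test))
    return sum(scores) / len(scores)

# 70/30 class balance: the majority-class baseline sits near 0.7
data = [(i, 0) for i in range(70)] + [(i, 1) for i in range(30)]
print(averaged_split_accuracy(data, test_fraction=0.25, n_repeats=3))
```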
Movieset assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import forge from puzzle.puzzlepedia import puzzlepedia puzzle = puzzlepedia.parse(""" name in {Beth, Charles, David, Frank, Jessica, Karen, Taylor} city in {Abu_Dhabi, Beijing, Chennai, Manila, Seoul, Singapore, Tokyo} occupation in {historian, investigator, journalist, knitter, linguist, magician, numismatist} weeks_ago in {3, 3, 2, 2, 2, 1, 1} crime in {innocent*6, guilty} #1: Setup: Occupations, 1 thief. #2: Setup: Locations. #3: {Charles, Frank, Karen} == {linguist, magician, numismatist} all(Charles[w] == Frank[w] == Karen[w] for w in [1, 2, 3]) #4: Chennai != knitter #5: magician != Beijing numismatist != Beijing #6: {Manila} == {knitter, linguist, magician} investigator != Seoul investigator == innocent all(investigator[w] == guilty[w] for w in [1, 2, 3]) #7: Beth.weeks_ago > knitter.weeks_ago Jessica.weeks_ago > historian.weeks_ago #8: all(Chennai[w] == Tokyo[w] for w in [1, 2, 3]) #9: if Beth.innocent: Charles == Abu_Dhabi or Charles == Beijing #10: if Charles.innocent: David == Beijing or David == Chennai #10: if David.innocent: Frank == Chennai or Frank == Manila #10: if Frank.innocent: Jessica == Manila or Jessica == Seoul #10: if Jessica.innocent: Karen == Seoul or Karen == Singapore #10: if Karen.innocent: Taylor == Singapore or Taylor == Tokyo #10: if Taylor.innocent: Beth == Tokyo or Beth == Abu_Dhabi """) # -
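# The solver DSL above compiles clues like #9/#10 into constraints over assignments. The underlying search can be sketched as a stdlib brute force over permutations - a toy three-person version with made-up clues, not the actual puzzle:

```python
from itertools import permutations

people = ('Beth', 'Charles', 'Frank')
cities = ('Manila', 'Seoul', 'Tokyo')

def consistent(who):
    # toy clues (not the real puzzle's): Beth is not in Tokyo,
    # Charles is in neither Manila nor Tokyo, Frank is not in Seoul
    return (who['Beth'] != 'Tokyo'
            and who['Charles'] not in ('Manila', 'Tokyo')
            and who['Frank'] != 'Seoul')

solutions = [dict(zip(people, perm))
             for perm in permutations(cities)
             if consistent(dict(zip(people, perm)))]
print(solutions)  # -> [{'Beth': 'Manila', 'Charles': 'Seoul', 'Frank': 'Tokyo'}]
```

With only 3! = 6 assignments the brute force is instant; the real solver scales by propagating constraints instead of enumerating.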
src/puzzle/examples/mim/p5_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PySpark 2 # language: python # name: pyspark2 # --- # # Adapted from Hail documentation here: # ### https://hail.is/docs/stable/tutorials/hail-overview.html from hail import * hc = HailContext(sc) import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.patches as mpatches from collections import Counter from math import log, isnan from pprint import pprint # %matplotlib inline # optional import seaborn import os if os.path.isdir('data/1kg.vds') and os.path.isfile('data/1kg_annotations.txt'): print('All files are present and accounted for!') else: import sys sys.stderr.write('Downloading data (~50M) from Google Storage...\n') import urllib import tarfile urllib.urlretrieve('https://storage.googleapis.com/hail-1kg/tutorial_data.tar', 'tutorial_data.tar') sys.stderr.write('Download finished!\n') sys.stderr.write('Extracting...\n') tarfile.open('tutorial_data.tar').extractall() if not (os.path.isdir('data/1kg.vds') and os.path.isfile('data/1kg_annotations.txt')): raise RuntimeError('Something went wrong!') else: sys.stderr.write('Done!\n') # ! /usr/lib/hadoop/bin/hadoop fs -mkdir /data # ! /usr/lib/hadoop/bin/hadoop fs -put /home/jupyter-user/data/* /data/ vds = hc.read('hdfs:///data/1kg.vds') vds.summarize().report() vds.query_variants('variants.take(5)') vds.query_samples('samples.take(5)') vds.sample_ids[:5] vds.query_genotypes('gs.take(5)') # ! 
head data/1kg_annotations.txt | column -t table = hc.import_table('hdfs:///data/1kg_annotations.txt', impute=True).key_by('Sample') print(table.schema) pprint(table.schema) table.to_dataframe().show(10) pprint(vds.sample_schema) vds = vds.annotate_samples_table(table, root='sa') pprint(vds.sample_schema) pprint(table.query('SuperPopulation.counter()')) pprint(table.query('CaffeineConsumption.stats()')) table.count() vds.num_samples vds.query_samples('samples.map(s => sa.SuperPopulation).counter()') pprint(vds.query_samples('samples.map(s => sa.CaffeineConsumption).stats()')) snp_counts = vds.query_variants('variants.map(v => v.altAllele()).filter(aa => aa.isSNP()).counter()') pprint(Counter(snp_counts).most_common()) dp_hist = vds.query_genotypes('gs.map(g => g.dp).hist(0, 30, 30)') plt.xlim(0, 31) plt.bar(dp_hist.binEdges[1:], dp_hist.binFrequencies) plt.show() pprint(vds.sample_schema) vds = vds.sample_qc() pprint(vds.sample_schema) df = vds.samples_table().to_pandas() df.head() # + plt.clf() plt.subplot(1, 2, 1) plt.hist(df["sa.qc.callRate"], bins=np.arange(.75, 1.01, .01)) plt.xlabel("Call Rate") plt.ylabel("Frequency") plt.xlim(.75, 1) plt.subplot(1, 2, 2) plt.hist(df["sa.qc.gqMean"], bins = np.arange(0, 105, 5)) plt.xlabel("Mean Sample GQ") plt.ylabel("Frequency") plt.xlim(0, 105) plt.tight_layout() plt.show() # - plt.scatter(df["sa.qc.dpMean"], df["sa.qc.callRate"], alpha=0.1) plt.xlabel('Mean DP') plt.ylabel('Call Rate') plt.xlim(0, 20) plt.show() plt.scatter(df["sa.qc.dpMean"], df["sa.qc.callRate"], alpha=0.1) plt.xlabel('Mean DP') plt.ylabel('Call Rate') plt.xlim(0, 20) plt.axhline(0.97, c='k') plt.axvline(4, c='k') plt.show() vds = vds.filter_samples_expr('sa.qc.dpMean >= 4 && sa.qc.callRate >= 0.97') print('After filter, %d/1000 samples remain.' 
% vds.num_samples) filter_condition_ab = '''let ab = g.ad[1] / g.ad.sum() in ((g.isHomRef && ab <= 0.1) || (g.isHet && ab >= 0.25 && ab <= 0.75) || (g.isHomVar && ab >= 0.9))''' vds = vds.filter_genotypes(filter_condition_ab) post_qc_call_rate = vds.query_genotypes('gs.fraction(g => g.isCalled)') print('post QC call rate is %.3f' % post_qc_call_rate) pprint(vds.variant_schema) vds = vds.variant_qc().cache() pprint(vds.variant_schema) # + variant_df = vds.variants_table().to_pandas() plt.clf() plt.subplot(2, 2, 1) variantgq_means = variant_df["va.qc.gqMean"] plt.hist(variantgq_means, bins = np.arange(0, 84, 2)) plt.xlabel("Variant Mean GQ") plt.ylabel("Frequency") plt.xlim(0, 80) plt.subplot(2, 2, 2) variant_mleaf = variant_df["va.qc.AF"] plt.hist(variant_mleaf, bins = np.arange(0, 1.05, .025)) plt.xlabel("Minor Allele Frequency") plt.ylabel("Frequency") plt.xlim(0, 1) plt.subplot(2, 2, 3) plt.hist(variant_df['va.qc.callRate'], bins = np.arange(0, 1.05, .01)) plt.xlabel("Variant Call Rate") plt.ylabel("Frequency") plt.xlim(.5, 1) plt.subplot(2, 2, 4) plt.hist(variant_df['va.qc.pHWE'], bins = np.arange(0, 1.05, .025)) plt.xlabel("Hardy-Weinberg Equilibrium p-value") plt.ylabel("Frequency") plt.xlim(0, 1) plt.tight_layout() plt.show() # - common_vds = (vds .filter_variants_expr('va.qc.AF > 0.01') .ld_prune(memory_per_core=256, num_cores=4)) common_vds.count() gwas = common_vds.linreg('sa.CaffeineConsumption') pprint(gwas.variant_schema) def qqplot(pvals, xMax, yMax): spvals = sorted(filter(lambda x: x and not(isnan(x)), pvals)) exp = [-log(float(i) / len(spvals), 10) for i in np.arange(1, len(spvals) + 1, 1)] obs = [-log(p, 10) for p in spvals] plt.clf() plt.scatter(exp, obs) plt.plot(np.arange(0, max(xMax, yMax)), c="red") plt.xlabel("Expected p-value (-log10 scale)") plt.ylabel("Observed p-value (-log10 scale)") plt.xlim(0, xMax) plt.ylim(0, yMax) plt.show() qqplot(gwas.query_variants('variants.map(v => va.linreg.pval).collect()'), 5, 6) pca = 
common_vds.pca('sa.pca', k=5, eigenvalues='global.eigen') pprint(pca.globals) pprint(pca.sample_schema) pca_table = pca.samples_table().to_pandas() colors = {'AFR': 'green', 'AMR': 'red', 'EAS': 'black', 'EUR': 'blue', 'SAS': 'cyan'} plt.scatter(pca_table["sa.pca.PC1"], pca_table["sa.pca.PC2"], c = pca_table["sa.SuperPopulation"].map(colors), alpha = .5) plt.xlim(-0.6, 0.6) plt.xlabel("PC1") plt.ylabel("PC2") legend_entries = [mpatches.Patch(color=c, label=pheno) for pheno, c in colors.items()] plt.legend(handles=legend_entries, loc=2) plt.show() pvals = (common_vds .annotate_samples_table(pca.samples_table(), expr='sa.pca = table.pca') .linreg('sa.CaffeineConsumption', covariates=['sa.pca.PC1', 'sa.pca.PC2', 'sa.pca.PC3', 'sa.isFemale']) .query_variants('variants.map(v => va.linreg.pval).collect()')) qqplot(pvals, 5, 6) pvals = (common_vds .annotate_samples_table(pca.samples_table(), expr='sa.pca = table.pca') .linreg('sa.CaffeineConsumption', covariates=['sa.pca.PC1', 'sa.pca.PC2', 'sa.pca.PC3', 'sa.isFemale'], use_dosages=True) .query_variants('variants.map(v => va.linreg.pval).collect()')) qqplot(pvals, 5, 6) kt = (vds.genotypes_table() .aggregate_by_key(key_expr=['pop = sa.SuperPopulation', 'chromosome = v.contig'], agg_expr=['n_het = g.filter(g => g.isHet()).count()'])) kt.to_dataframe().show() kt2 = (vds.genotypes_table() .aggregate_by_key(key_expr=['''maf_bin = if (va.qc.AF < 0.01) "< 1%" else if (va.qc.AF < 0.05) "1%-5%" else "> 5%" ''', 'purple_hair = sa.PurpleHair'], agg_expr=['mean_gq = g.map(g => g.gq).stats().mean', 'mean_dp = g.map(g => g.dp).stats().mean'])) kt2.to_dataframe().show()
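# The `qqplot` helper above plots observed -log10 p-values against the quantiles expected under a uniform null, approximating the i-th smallest p-value by i / n. Those expected values can be computed and checked in isolation (stdlib only):

```python
from math import log10

def expected_neglog10(n):
    """Expected -log10(p) quantiles for n tests under the uniform null,
    approximating the i-th order statistic by i / n (as qqplot does)."""
    return [-log10(i / float(n)) for i in range(1, n + 1)]

print([round(e, 3) for e in expected_neglog10(4)[:3]])  # -> [0.602, 0.301, 0.125]
```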
example-notebooks/Hail_Tutorial.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Augmenting the training data

# +
import os
import glob
import numpy as np
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img


def draw_images(generator, x, dir_name, index):
    # Settings for the output files
    save_name = 'extended-' + str(index)
    g = generator.flow(x, batch_size=1, save_to_dir=dir_name, save_prefix=save_name, save_format='jpg')

    # Each call to g.next() writes one augmented image,
    # so this controls how many images are generated per input image
    for i in range(200):
        batch = g.next()


if __name__ == '__main__':
    # Augmentation settings
    generator = ImageDataGenerator(
        rotation_range=90,  # rotate randomly by up to 90 degrees
        width_shift_range=0.0,  # random horizontal shift
        height_shift_range=0.0,  # random vertical shift
        channel_shift_range=0.0,  # random colour (channel) shift
        shear_range=0.00,  # shear (up to pi/8)
        horizontal_flip=True,  # random horizontal flip
        vertical_flip=True,  # random vertical flip
        zoom_range=0.2
    )

    # Apple test data
    # Output directory
    output_dir = "data/test/exapple"
    if not(os.path.exists(output_dir)):
        os.mkdir(output_dir)

    # Load the images to augment
    images = glob.glob(os.path.join('data/test/apple/', "*.jpeg"))

    # Augment each loaded image in turn
    for i in range(len(images)):
        img = load_img(images[i])
        # Convert the image to an array and add a batch dimension
        x = img_to_array(img)
        x = np.expand_dims(x, axis=0)
        # Augment
        draw_images(generator, x, output_dir, i)

    # Orange test data
    # Output directory
    output_dir = "data/test/exorange"
    if not(os.path.exists(output_dir)):
        os.mkdir(output_dir)

    # Load the images to augment
    images = glob.glob(os.path.join('data/test/orange/', "*.jpeg"))

    # Augment each loaded image in turn
    for i in range(len(images)):
        img = load_img(images[i])
        # Convert the image to an array and add a batch dimension
        x = img_to_array(img)
        x = np.expand_dims(x, axis=0)
        # Augment
        draw_images(generator, x, output_dir, i)

    # Apple training data
    # Output directory
    output_dir = "data/train/exapple"
    if not(os.path.exists(output_dir)):
        os.mkdir(output_dir)

    # Load the images to augment
    images = glob.glob(os.path.join('data/train/apple/', "*.jpeg"))

    # Augment each loaded image in turn
    for i in range(len(images)):
        img = load_img(images[i])
        # Convert the image to an array and add a batch dimension
        x = img_to_array(img)
        x = np.expand_dims(x, axis=0)
        # Augment
        draw_images(generator, x, output_dir, i)

    # Orange training data
    # Output directory
    output_dir = "data/train/exorange"
    if not(os.path.exists(output_dir)):
        os.mkdir(output_dir)

    # Load the images to augment
    images = glob.glob(os.path.join('data/train/orange/', "*.jpeg"))

    # Augment each loaded image in turn
    for i in range(len(images)):
        img = load_img(images[i])
        # Convert the image to an array and add a batch dimension
        x = img_to_array(img)
        x = np.expand_dims(x, axis=0)
        # Augment
        draw_images(generator, x, output_dir, i)

# +
import keras
from keras.models import Sequential
from keras.layers import Activation, Dense, Dropout
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense, Dropout, Flatten
from keras.utils.np_utils import to_categorical
from keras.optimizers import Adagrad
from keras.optimizers import Adam
import numpy as np
from PIL import Image
import os

# Build the training data.
image_list = []
label_list = []

# Load the images from the orange/apple (and augmented) directories under ./data/train.
for dir in os.listdir("data/train"):
    if dir == ".DS_Store":
        continue

    dir1 = "data/train/" + dir
    label = 0

    if (dir == "apple"):  # apple is label 0
        label = 0
    elif (dir == "exapple"):
        label = 0
    elif (dir == "orange"):  # orange is label 1
        label = 1
    elif (dir == "exorange"):
        label = 1
    else:
        continue

    for file in os.listdir(dir1):
        if file != ".DS_Store":
            # Append the ground-truth label to label_list (apple: 0, orange: 1)
            label_list.append(label)
            filepath = dir1 + "/" + file
            image = np.array(Image.open(filepath).resize((25, 25)))
            image_list.append(image / 255.)
# Convert to a numpy array to pass to Keras.
image_list = np.array(image_list)

# Convert the label list to one-hot label rows:
# 0 -> [1, 0], 1 -> [0, 1]
Y = to_categorical(label_list)

# Build the model
model = Sequential()
# model.add(BatchNormalization())
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(25, 25, 3)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))

## Configure the model for training
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

# Train; 10% of the data is held out for validation.
model.fit(image_list, Y, epochs=20, batch_size=100, validation_split=0.1)

# Evaluation on a separate test set (not used here)
# score = model.evaluate(x_test, y_test, verbose=0)
# print('Test loss:', score[0])
# print('Test accuracy:', score[1])

# +
# Check against the images in the test directory (./data/test/) and print the accuracy.
total = 0.
ok_count = 0.

for dir in os.listdir("data/train"):
    if dir == ".DS_Store":
        continue

    dir1 = "data/test/" + dir
    label = 0

    if (dir == "apple"):  # apple is label 0
        label = 0
    elif (dir == "exapple"):
        label = 0
    elif (dir == "orange"):  # orange is label 1
        label = 1
    elif (dir == "exorange"):
        label = 1
    else:
        continue

    for file in os.listdir(dir1):
        if file != ".DS_Store":
            label_list.append(label)
            filepath = dir1 + "/" + file
            image = np.array(Image.open(filepath).resize((25, 25)))
            print(filepath)
            # image = image.transpose(2, 0, 1)
            # image = image.reshape(1, image.shape[0] * image.shape[1] * image.shape[2]).astype("float32")[0]
            result = model.predict_classes(np.array([image / 255.]))
            print("label:", label, "result:", result[0])

            total += 1.
            if label == result[0]:
                ok_count += 1.

print("accuracy: ", ok_count / total * 100, "%")
# -

# Save the model; it is written to the current directory
model.save("./model.h5")
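# `to_categorical` above maps the integer labels to one-hot rows (0 -> [1, 0], 1 -> [0, 1]). A dependency-free sketch of that transform:

```python
def to_one_hot(labels, num_classes=None):
    """Map integer class labels to one-hot rows, like keras' to_categorical."""
    if num_classes is None:
        num_classes = max(labels) + 1
    return [[1 if i == label else 0 for i in range(num_classes)]
            for label in labels]

print(to_one_hot([0, 1, 1, 0]))  # -> [[1, 0], [0, 1], [0, 1], [1, 0]]
```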
05_trainmyimage/trainmyimage.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
#--Dependencies--#

#-- Data Cleaning Libraries:
import pandas as pd
import numpy as np

#-- Data Visualization Libraries:
from matplotlib import pyplot as plt
import seaborn as sns  #--just in case we need it, probably won't

#-- Web Scraping Libraries:
import os
import time
import requests
from bs4 import BeautifulSoup as bs
from splinter import Browser

# +
# Settings for accessing the website
executable_path = {"executable_path": "chromedriver.exe"}
browser = Browser('chrome', **executable_path, headless=False)

# +
# Pre-defining variables for the NBA scrape

# Empty DataFrame for NBA
NBA_draftdata_df = pd.DataFrame()

# Reference to names
NBA_refer = "basketball-reference"

# URL
NBA_url = "https://www.basketball-reference.com/play-index/tiny.fcgi?id=BRvVj#stats::none"

# List of draft years we want to scrape
years = ["1990", "1991", "1992", "1993", "1994", "1995",
         "1996", "1997", "1998", "1999", "2000", "2001",
         "2002", "2003", "2004", "2005", "2006", "2007",
         "2008", "2009", "2010", "2011", "2012", "2013",
         "2014", "2015", "2016", "2017", "2018", "2019"]

offset = 0

# +
# Define our function for scraping NBA draft data ---> ** need to come back to this and finish it off
def scrape_nba_draft_data(page_url):
    # URLs for both NBA and NCAA
    # Reference global variable: NBA_draftdata_df
    global NBA_draftdata_df

    for i in years:
        # Set the rest of the url
        url = f"https://www.basketball-reference.com/play-index/draft_finder.cgi?request=1&year_min=1990&year_max=\
&round_min=&round_max=&pick_overall_min=&pick_overall_max=&franch_id=&college_id=0&is_active=&is_hof=\
&pos_is_g=Y&pos_is_gf=Y&pos_is_f=Y&pos_is_fg=Y&pos_is_fc=Y&pos_is_c=Y&pos_is_cf=Y&c1stat=&c1comp=&c1val=\
&c2stat=&c2comp=&c2val=&c3stat=&c3comp=&c3val=&c4stat=&c4comp=&c4val=&order_by=year_id&order_by_asc=\
&offset={offset}"

        # Visit the NBA web page
        browser.visit(url)

        # Retrieve the html for the web page
        html = browser.html

        # Use pandas to read the html tables
        year_html = pd.read_html(html, header=0)

        # Get the second table, which holds the data
        year_html = year_html[1]

        # Convert into a DataFrame
        year_df = pd.DataFrame(year_html)

        # Rename the ranking column "Rk" to "Year"
        year_df = year_df.rename(columns=({"Rk": "Year"}))

        # Apply the year to the Year column for each row
        year_df["Year"] = year_df["Year"].apply(lambda x: i)

        # Append to the main DataFrame: NBA_draftdata_df
        if NBA_draftdata_df.empty:
            NBA_draftdata_df = year_df
        else:
            NBA_draftdata_df = NBA_draftdata_df.append(year_df, ignore_index=True)

        # hold on, wait a second, let me put some sleep in it
        time.sleep(1)
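# The draft finder pages its results with an `offset` query parameter, which is why the scrape keeps an `offset` variable. A stdlib sketch of building offset-paginated URLs (the base URL and the 100-rows-per-page step are illustrative assumptions, not the site's confirmed values):

```python
def page_urls(base_url, page_size, n_pages):
    # page_size is an assumption; check the site's actual rows per page
    return ['{}&offset={}'.format(base_url, page * page_size)
            for page in range(n_pages)]

urls = page_urls('https://example.com/draft_finder.cgi?request=1', 100, 3)
print(urls[-1])  # -> https://example.com/draft_finder.cgi?request=1&offset=200
```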
iPython_Notebooks/Untitled.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install -U pip # !pip install japanize-matplotlib # + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import japanize_matplotlib import pandas_profiling # %matplotlib inline # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" # !cat ../input/data_description.txt # - import lightgbm as lgb from sklearn.metrics import mean_squared_error train = pd.read_csv('../input/train.csv') test = pd.read_csv('../input/test.csv') sample_submission = pd.read_csv('../input/sample_submission.csv') sample_submission.head() pandas_profiling.ProfileReport(train) pandas_profiling.ProfileReport(test) # + submission = test.loc[:, ['Id']] X_train = train.drop(['SalePrice', 'Id'], axis=1) y_train = train.loc[:, ['SalePrice']] X_train = X_train.select_dtypes(include=np.number) X_test = test.loc[:, X_train.columns] # - lgb_train = lgb.Dataset(X_train, y_train) lgb_test = lgb.Dataset(test) params = { 'boosting_type': 'gbdt', 'objective': 'regression', 'metric': {'l2', 'l1'}, 'num_leaves': 31, 'learning_rate': 0.05, 'feature_fraction': 0.9, 'bagging_fraction': 0.8, 'bagging_freq': 5, 'verbose': 0 } # + gbm = lgb.train(params, lgb_train, num_boost_round=20 #valid_sets=lgb_eval, #early_stopping_rounds=5 ) print('Saving model...') # save model to file gbm.save_model('model.txt') print('Starting predicting...') # predict y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration) # eval #print('The rmse of prediction is:', mean_squared_error(y_test, y_pred) ** 0.5) # - submission['SalePrice'] = y_pred submission.to_csv('submission_lightgbm_not_using_sklearn_IF.csv', index=False)
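# The commented-out evaluation above takes `mean_squared_error(...) ** 0.5`, i.e. RMSE. A stdlib sketch of RMSE, plus RMSE on log prices - which, if memory serves, is how this competition is actually scored:

```python
from math import sqrt, log

def rmse(y_true, y_pred):
    """Root mean squared error over paired observations."""
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def rmsle_prices(y_true, y_pred):
    # RMSE computed on log values, so expensive houses don't dominate the score
    return rmse([log(t) for t in y_true], [log(p) for p in y_pred])

print(round(rmse([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0]), 6))  # -> 0.612372
```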
example_projects/kaggle-house-prices-advanced-regression-techniques/notebook/eda/submission_lightgbm_not_using_sklearn_IF.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Setting up LTI authentication federation -- Moodle
#
# ---
#
# Configure authentication federation between CoursewareHub and an LMS (Learning Management System) using LTI 1.3, with Moodle as the LMS.

# ## Introduction

# + [markdown] heading_collapsed=true
# ### Overview
#
# This notebook sets up authentication federation between CoursewareHub and an LMS (Learning Management System) via [LTI 1.3](http://www.imsglobal.org/spec/lti/v1p3/). Users authenticate through the LMS, and access to CoursewareHub is controlled based on the authorization information obtained from the LMS. Moodle is used as the LMS in this notebook.

# + [markdown] hidden=true
# #### Basic policy

# + [markdown] hidden=true
# * LTI federation can be configured alongside CoursewareHub's existing authentication mechanisms: local-user authentication and GakuNin federated authentication
# * Users authenticated by the LMS are mapped to JupyterHub users by converting the email address obtained from the LMS according to a fixed rule
#   - the same conversion rule is used as for GakuNin federated authentication
#   - it is assumed that the LMS manages its users so that email addresses correspond properly to users
# * Instructor and student privileges in CoursewareHub are controlled through the presence or absence of the JupyterHub admin privilege
#   - the LTI federation function itself performs no privilege-related control

# + [markdown] hidden=true
# #### System configuration
#
# The figure below shows the components of the auth-proxy container related to LTI federation. Local-user authentication and GakuNin authentication are omitted.

# + [markdown] hidden=true
# ![LTI configuration](images/lti-001.png)
# -

# ### Prerequisites
#
# The prerequisites for running this notebook are:
#
# * You can log in as an administrator to the Moodle instance that will federate with CoursewareHub via LTI
# * The CoursewareHub being configured and the Moodle instance can reach each other over the network

# ### UnitGroup name
#
# Specify the name of the VCP UnitGroup in which this CoursewareHub was built.

# To check the value specified when the VC nodes were created, list the file names under `group_vars`.

# !ls -1 group_vars/

# Referring to the output of the cell above, set the UnitGroup name in the next cell.

# +
# (example)
# ugroup_name = 'CoursewareHub'

ugroup_name =

# + [markdown] heading_collapsed=true
# ### Checks
#
# Confirm that the target VC nodes can be operated through Ansible.

# + [markdown] hidden=true
# Set the location of the Ansible configuration file in an environment variable.

# + hidden=true
from pathlib import Path
import os

cfg_ansible = Path('ansible.cfg')
if cfg_ansible.exists():
    os.environ['ANSIBLE_CONFIG'] = str(cfg_ansible.resolve())

# + [markdown] hidden=true
# Confirm that each target VC node is reachable.

# + hidden=true
target_hub = f'{ugroup_name}_manager'

#
!ansible {target_hub} -m ping # - # ## Configure the external tool in Moodle # # On the Moodle instance that will federate with CoursewareHub, configure an external tool. # ### Add a preconfigured tool # # Register CoursewareHub as a preconfigured tool in Moodle. # First log in to Moodle as an administrator, then navigate to [Site administration]--[Plugins]. The "Activity modules" section of the resulting page shows content like the figure below. # # ![Moodle site administration page](images/moodle-201.png) # Selecting the "Manage tools" link circled in red above opens the external tool management page shown below. # # ![Moodle tool management page 1](images/moodle-202.png) # Select the "configure a tool manually" link circled in red above. The external tool configuration page shown below appears. On this page, enter the information needed to federate with CoursewareHub. # # ![Moodle tool configuration page 1](images/moodle-203.png) # First fill in the "Tool settings" section. The fields outlined in blue above are required. The required fields are: # # * Tool name # * Tool URL # * LTI version # * Public key type # * Initiate login URL # * Redirection URI(s) # * Default launch container # # For "Tool name", enter a name that identifies the tool. Teachers see this name when adding the external tool inside a course. # # For "Tool URL", enter the URL of the CoursewareHub. Running the next cell prints the CoursewareHub URL. # !ansible {target_hub} -c local -a 'echo https://{{{{master_fqdn}}}}' # For "LTI version", select `LTI 1.3`. # # For "Public key type", select `Keyset URL`. # # For "Initiate login URL", specify CoursewareHub's OpenID Connect/Initialization endpoint. Enter the URL printed by the next cell. # !ansible {target_hub} -c local -a 'echo https://{{{{master_fqdn}}}}/php/lti/login.php' # For "Redirection URI(s)", specify CoursewareHub's redirect endpoint. Enter the URL printed by the next cell. # !ansible {target_hub} -c local -a 'echo https://{{{{master_fqdn}}}}/php/lti/service.php' # For "Default launch container", select either `New window` or `Existing window`. # Leave "Custom parameters" empty here; it is specified later when creating an LTI link. # Next fill in the "Privacy" section. The field outlined in blue below is required. # # ![Moodle tool configuration page 2](images/moodle-204.png) # The required field is: # # * Share launcher's email with tool # # For "Share launcher's email with tool", select "Always". # Finally, select "Save changes" to complete the tool registration. The registered tool appears on the tool management page as shown below. # # ![Moodle tool management page 2](images/moodle-205.png) # ### Check the tool configuration details # # Check the information needed to configure LTI federation on the CoursewareHub side. # The "Manage tools" page shown at the end of the previous section lists the registered tools in its "Tools" section. Each registered tool has a "View configuration details" link (circled in red in the figure). Selecting this link shows a page like the one below. #
# ![Moodle tool configuration details](images/moodle-206.png) # The information shown here is needed in the next chapter when configuring LTI federation on the CoursewareHub. # ## Configure LTI federation on CoursewareHub # ### Specify parameters # Of the information shown on Moodle's tool configuration details page, specify the following items. # # * Platform ID # * Client ID # * Deployment ID # Enter the Platform ID value in the next cell. # + # (example) # platform_id = 'https://moodle.example.org' platform_id = # - # Enter the Client ID value in the next cell. # + # (example) # client_id = 'xxxxxxxxxxxxxxx' client_id = # - # Enter the Deployment ID value in the next cell. # + # (example) # deployment_id = '1' deployment_id = # - # ### Save the parameters # # Save the parameters specified above as Ansible variables in the `group_vars` file. # + import yaml from pathlib import Path gvars_path = Path(f'group_vars/{ugroup_name}') with gvars_path.open() as f: gvars = yaml.safe_load(f) lti = gvars.get('lti', []) lti.append({ 'platform_id': platform_id, 'client_id': client_id, 'auth_login_url': f'{platform_id}/mod/lti/auth.php', 'auth_token_url': f'{platform_id}/mod/lti/token.php', 'key_set_url': f'{platform_id}/mod/lti/certs.php', 'deployment_id': deployment_id, }) gvars['lti'] = lti with gvars_path.open(mode='w') as f: yaml.safe_dump(gvars, stream=f) # - # ### Deploy the LTI federation configuration files to CoursewareHub # # Based on the information entered above, generate the LTI configuration file `lti.json` and place it in the CoursewareHub environment. # Run the Ansible playbook that generates and deploys the LTI-related configuration files. The playbook performs the following steps. # # * Deploys the LTI files corresponding to the specified parameters # * Generates an RSA key pair if the private key file `private.key` does not yet exist # First run Ansible in dry-run (check) mode before actually changing any configuration. # !ansible-playbook -CDv -l {target_hub} playbooks/setup-lti.yml || true # Now actually deploy the configuration files. # !ansible-playbook -Dv -l {target_hub} playbooks/setup-lti.yml # Check the contents of the deployed configuration file `lti.json`. # !ansible {target_hub} -a 'cat {{{{auth_proxy_dir}}}}/lti/lti.json' # Check the public key of the generated key pair. # # > This public key is reserved for future LMS-integration features; it is not used for authentication federation. # !ansible {target_hub} -m shell -a \ # 'openssl rsa -pubout -in {{{{auth_proxy_dir}}}}/lti/private.key 2>/dev/null' # ### Apply the LTI federation settings to the auth-proxy container # # So that the LTI configuration files `lti.json` and `private.key` created and deployed above can be referenced from the auth-proxy container, add bind-mount settings to the container. #
Deploy a `docker-compose.yml` with the LTI bind-mount settings added to the CoursewareHub environment. # # First run in check mode and review the diff. # !ansible {target_hub} -m template -CDv -a \ # 'src=template/docker-compose.yml dest={{{{base_dir}}}} backup=yes' # Now actually deploy `docker-compose.yml`. # !ansible {target_hub} -m template -Dv -a \ # 'src=template/docker-compose.yml dest={{{{base_dir}}}} backup=yes' # Apply the deployed `docker-compose.yml` to the Docker Swarm cluster. # !ansible {target_hub} -a 'chdir={{{{base_dir}}}} \ # docker stack deploy -c docker-compose.yml {{{{ugroup_name}}}}' # Check the container status. Confirm that the `auth-proxy` container is in the `Running` state and has been newly started. # !ansible {target_hub} -a 'docker stack ps {{{{ugroup_name}}}}' # If `docker-compose.yml` is unchanged, the `auth-proxy` container may keep running as before. In that case the `auth-proxy` service must be updated forcibly; uncomment and run the next cell. # + # # !ansible {target_hub} -a \ # # 'docker service update --force {{{{ugroup_name}}}}_auth-proxy' # - # If you updated the service, check its state after the update. # !ansible {target_hub} -a 'docker stack ps {{{{ugroup_name}}}}' # ## Create an LTI link in Moodle # # Create an LTI link on a Moodle site or course page. # ### Create a link to CoursewareHub # # # Turn on editing mode for the course. Select the link circled in red in the figure below. # # > The steps here create an LTI link in a course; creating a link on the site follows the same procedure. # # ![Course page 1](images/moodle-401.png) # In the editing-mode view, select the "Add an activity or resource" link. It appears in the places circled in red below, among others. # # ![Course page 2](images/moodle-402.png) # In the activity/resource picker, select "External tool". # # ![Course page 3](images/moodle-403.png) # A configuration page like the one below appears. Enter a name for the link in "Activity name"; the value specified here becomes the display name of the created link. For "Preconfigured tool", select the name of the registered external tool. # # ![Course page 4](images/moodle-404.png) # Selecting the "Save and return to course" button at the bottom of the page creates the LTI link. # # > If you select the "Save and display" button, the tool is shown in an iframe, but CoursewareHub does not support embedding, so authentication and authorization will not work correctly. # # # ![Course page 5](images/moodle-405.png) # The registered link is displayed as shown below. # # ![Course page 6](images/moodle-406.png) # ### Create a link to a CoursewareHub notebook # # To create a link to a notebook file in CoursewareHub, specify "Custom parameters" on the "Adding a new External tool" page. #
The procedure is the same as in the previous section up to selecting "External tool" in the activity/resource picker. On the "Adding a new External tool" page, enter "Activity name" and "Preconfigured tool", then select the "Show more..." link circled in red below. # # ![Course page 7](images/moodle-407.png) # Input fields such as "Custom parameters" appear, as shown below. # # ![Course page 8](images/moodle-408.png) # Specify content like the following in "Custom parameters". # # ``` # notebook={path of the notebook to link} # ``` # # For example, to create a link to `textbook/001-test.ipynb` on the CoursewareHub, specify: # # ``` # notebook=textbook/001-test.ipynb # ``` # # The specified path is relative to each CoursewareHub user's home directory. Because every user has a different home directory, a copy of the notebook must exist at the same path for all users. If the notebook is not found, a 404 error page is displayed.
CoursewareHub/notebooks/411-LTI認証連携の設定を行う.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # Pandas-style slicing of Triangle # # # This example demonstrates the familiarity of the pandas API applied to a # :class:`Triangle` instance. # # # # + import chainladder as cl import seaborn as sns import matplotlib.pyplot as plt sns.set_style('whitegrid') # The base Triangle Class: cl.Triangle # Load data clrd = cl.load_dataset('clrd') # pandas-style Aggregations clrd = clrd.groupby('LOB').sum() # pandas-style value/column slicing clrd = clrd['CumPaidLoss'] # pandas loc-style index slicing clrd = clrd.loc['medmal'] # Convert link ratios to dataframe link_ratios = clrd.link_ratio.to_frame().unstack().reset_index() link_ratios.columns = ['Age', 'Accident Year', 'Link Ratio'] # Plot sns.pointplot(hue='Age', y='Link Ratio', x='Accident Year', data=link_ratios, markers='.') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) g = plt.title('Medical Malpractice Link Ratios')
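As a cross-check of what `link_ratio` computes: an age-to-age link ratio is just cumulative losses at the next development age divided by cumulative losses at the current age. A minimal pure-pandas sketch on a made-up two-year triangle (the numbers and layout below are illustrative, not taken from the `clrd` dataset):

```python
import pandas as pd

# Hypothetical cumulative-loss triangle: rows are accident years,
# columns are development ages in months.
triangle = pd.DataFrame(
    {12: [100.0, 110.0], 24: [150.0, 165.0], 36: [180.0, None]},
    index=[2000, 2001],
)

# Age-to-age link ratio: next-age cumulative losses over current-age losses.
# shift(-1, axis=1) aligns each column with the values of the following age.
link_ratios = (triangle.shift(-1, axis=1) / triangle).dropna(axis=1, how="all")

print(link_ratios)
```

For the 2000 accident year this gives 150/100 = 1.5 at age 12 and 180/150 = 1.2 at age 24, which is the quantity `Triangle.link_ratio` exposes per origin period.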
docs/auto_examples/plot_triangle_slicing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import argparse import pandas as pd import numpy as np import datetime as dt from dateutil.parser import parse def changeColumns(x): choice = x['Choice'] correct_answer = x['Correct Answer'] correct_answers = x['Correct Answers'] selections = x['Selections'] input = x['Input'] step_name = x['Step Name'] student_response_type = x['Student Response Type'] tutor_response_type = x['Tutor Response Type'] outcome = x['Outcome'] if selections is not None and selections: input = input.split(",") if (choice in input and correct_answer == 1) or (choice not in input and correct_answer == 0): x['Outcome'] = 'CORRECT' else: x['Outcome'] = 'INCORRECT' x['Step Name'] = step_name + "_" + choice elif (student_response_type and student_response_type == 'ATTEMPT' and tutor_response_type and tutor_response_type == 'RESULT' and step_name and outcome): x['Event Type'] = 'assess_instruct' elif (student_response_type and student_response_type == 'HINT_REQUEST' and tutor_response_type and tutor_response_type == 'HINT_MSG' and step_name): x['Event Type'] = 'instruct' return x map_file_name = "" data_file_name = "" command_line_exe = False #test command #Python non_instruction_step_converter.py -dataFile "ds2846_tx_test.txt" -mapFile "ds2846_non_instructional_steps_map.txt" if command_line_exe: parser = argparse.ArgumentParser(description='Convert multi-selection steps into multiple steps and adjust scoring') parser.add_argument('-dataFile', type=str, help='data file containing multi-selection steps', required=True) parser.add_argument('-mapFile', type=str, help='map file containing mapping information ') args, option_file_index_args = parser.parse_known_args() data_file_name = args.dataFile map_file_name = args.mapFile else: map_file_name = 
'ds2846_non_instructional_steps_map.txt' #data_file_name = 'ds2846_tx_test.txt' #data_file_name = 'new_aggr_sp_no_data_in_event_type_results/ds2846_tx_test_converted_with_event_type_no_data.txt' data_file_name = 'ds2846_tx_All_Data_4741_2019_0904_111928_opened_in_excel.txt' df_map = pd.read_csv(map_file_name, dtype=str, sep="\t", encoding='ISO-8859-1') new_columns = df_map.columns.tolist() new_columns.extend(['Choice', 'Correct Answer', 'Event Type']) df_map_new = pd.DataFrame(columns=new_columns) for i in range(len(df_map.index)): #this_row = pd.Series(df_map.iloc[i, :]) this_row = df_map.iloc[i, :] selections = this_row['Selections'].split(",") answers = this_row['Correct Answers'].split(",") selections_count = len(selections) for j, selection in enumerate(selections): new_row = this_row.copy() new_row['Choice'] = selection if selection in answers: new_row['Correct Answer'] = 1 else: new_row['Correct Answer'] = 0 if j < len(selections) - 1: new_row['Event Type'] = "assess" else: new_row['Event Type'] = "assess_instruct" new_row = new_row.to_frame().transpose() if df_map_new.empty: df_map_new = new_row else: df_map_new = df_map_new.append(new_row) df = pd.read_csv(data_file_name, dtype=str, sep="\t", encoding = "ISO-8859-1") #save the first line headers bc Python adds number to duplicate column names infile = open(data_file_name, 'r') original_headers = infile.readline().strip() if "Event Type" in original_headers: #delete the Event Type column and so new one can be made df.drop("Event Type", axis=1, inplace=True) original_headers = original_headers + "\n" else: original_headers = original_headers + "\t" + "Event Type" + "\n" infile.close() # + #find the columns that has Level() in names for mapFile. 
Assuming mapFile and dataFile have the same names level_column_names = [] for col in df_map_new.columns: if 'Level (' in col: level_column_names.append(col) level_column_names.append('Problem Name') level_column_names.append('Step Name') df_combined = pd.merge( df, df_map_new, left_on=level_column_names, right_on=level_column_names, how='left') df_combined['Selections'] = df_combined['Selections'].fillna(value='') df_combined['Input'] = df_combined['Input'].fillna(value='') df_combined['Step Name'] = df_combined['Step Name'].fillna(value='') df_combined['Student Response Type'] = df_combined['Student Response Type'].fillna(value='') df_combined['Tutor Response Type'] = df_combined['Tutor Response Type'].fillna(value='') df_combined['Outcome'] = df_combined['Outcome'].fillna(value='') df_combined['Input'] = df_combined['Input'].astype(str) #apply returns the modified rows; assign the result back or the changes are lost df_combined = df_combined.apply(changeColumns, axis=1) df_combined.drop(['Selections', 'Correct Answers', 'Choice', 'Correct Answer'], axis=1, inplace=True) #make new output file name out_file_name = os.path.splitext(os.path.basename(data_file_name))[0] + "_converted" + os.path.splitext(os.path.basename(data_file_name))[1] #write the header out_file = open(out_file_name, "w") out_file.write(original_headers) out_file.close() with open(out_file_name, 'a', newline='') as f: df_combined.to_csv(f, sep='\t', index=False, header=False)
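The row-by-row expansion loop over the map file can also be expressed with pandas `explode`. A sketch on a toy one-row map frame (the data and the minimal column set here are made up for illustration; the real map file carries the Level columns as well):

```python
import pandas as pd

# Toy stand-in for the map file: one row per multi-selection step.
df_map = pd.DataFrame({
    "Step Name": ["q1"],
    "Selections": ["A,B,C"],
    "Correct Answers": ["A,C"],
})

# One row per choice, mirroring the expansion loop above.
expanded = df_map.assign(Choice=df_map["Selections"].str.split(",")).explode("Choice")

# A choice is correct when it appears in the step's Correct Answers list.
answers = expanded["Correct Answers"].str.split(",")
expanded["Correct Answer"] = [int(c in a) for c, a in zip(expanded["Choice"], answers)]

# Only the last choice of each step carries the tutor feedback.
is_last = ~expanded.duplicated("Step Name", keep="last")
expanded["Event Type"] = is_last.map({True: "assess_instruct", False: "assess"})

print(expanded[["Choice", "Correct Answer", "Event Type"]])
```

This avoids the per-row `DataFrame.append` calls, which scale poorly and have since been removed from pandas in favor of `pd.concat`.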
MultiselectionConverter/program/non_instruction_step_converter.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Examples using the technical analysis libraries (ta and talib_wrapper) # ## tia package (pure python) import tia.analysis.ta as ta import tia.analysis.model as model secs = model.load_yahoo_stock(['MSFT', 'INTC'], start='1/1/2000') pxs = secs.frame pxs.head() close = pxs.swaplevel(1, 0, axis=1).close close.head() # moving average calculation ta.SMA(close, 10).tail() # macd ta.MACD(close).tail() # rsi ta.RSI(close, n=14).tail() # determine when 50 crosses the 200 cross = ta.cross_signal(ta.sma(close.MSFT, 50), ta.sma(close.MSFT, 200)) cross.tail() # ## ta-lib wrapper # Provide ability to call ta-lib with DataFrame or Series which may contain NaN values. import tia.analysis.talib_wrapper as talib talib.MACD(close).tail() # average true range talib.ATR(pxs).tail() talib.RSI(close).tail()
examples/ta.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:venv] # language: python # name: conda-env-venv-py # --- # + # download literature from LitCovid # https://www.ncbi.nlm.nih.gov/research/coronavirus/ # https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/ # - # !pip install bioc # !wget https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/litcovid2pubtator.json.gz -P ../data/LitCovid/ # !gunzip ../data/LitCovid/litcovid2pubtator.json.gz # %load_ext autoreload # %autoreload 2 import parse_data covid_f = '../data/LitCovid/litcovid2pubtator.json' # covid_f = '../data/LitCovid/litcovid2BioCJSON' output_d = '../data/LitCovid/' import json with open(covid_f, encoding='utf-8') as f: data = json.load(f) # + t = parse_data.parse_doc(data[1][7414])[2] print(t) t.replace("\u2019", "'") # t.encode('latin1', 'replace') # .decode("utf-8") t.encode('utf8') # - parse_data.parse_doc(data[1][7414]) tar_text = '' import importlib importlib.reload(parse_data) parse_data.parse_doc(t) t = {'_id': '32644403|None', 'id': '32644403', 'infons': {}, 'passages': [{'infons': {'journal': '; 2020 01 ', 'year': '2020', 'type': 'title', 'authors': '<NAME>, <NAME>, ', 'section': 'Title', 'section_type': 'TITLE'}, 'offset': 0, 'text': b'Severe Acute Respiratory Syndrome (SARS)', 'annotations': [{'id': '2', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'Severe Acute Respiratory Syndrome', 'locations': [{'offset': 0, 'length': 33}]}, {'id': '3', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 35, 'length': 4}]}]}, {'infons': {'type': 'abstract', 'section': 'Abstract'}, 'offset': 41, 'text': 'A new and rapidly progressive respiratory syndrome termed severe acute respiratory syndrome (SARS) was identified by the World Health Organization (WHO) in the Guangdong Province of China as a global threat in March 
of 2003. SARS went on to spread globally over the following months to over 30 countries and became the 1st pandemic of the 21st century.\xa0It showed that the dissemination of an infectious microbe could be drastically increased in the era of globalization and increased international travel. The decade preceding the SARS outbreak featured the emergence of multiple novel pathogens, including H5N1 influenza, Hantavirus, Nipah virus, and Avian flu. However, SARS was unique among these as it had the ability for efficient person-to-person transmission.[1]\xa0By the end of the outbreak in July 2003, 8096 cases were reported leading to 774 deaths with a case fatality rate of over 9.6%.[2][3] SARS\xa0showed a unique predilection for healthcare workers,\xa0with 21% of cases occurring in these individuals.[4] The WHO, along with its international partners, including the Centers for Disease Control and Prevention (CDC), was able to identify within 2 weeks the etiologic agent.[5][6] The agent was a novel coronavirus and was given the name SARS coronavirus (SARS-CoV). It was isolated in a number of SARS patients and suspected as the causative agent before ultimately being sequenced and fulfilling Koch’s postulates confirming it as the cause.[7]\xa0 The number of secondary cases produced by one SARS patient is thought to be in the range of two to four though a few patients, including the original index case, were\xa0suspected to be “super-spreaders” spreading to up to hundreds of others. The mode of transmission for the virus was largely through respiratory inhalation of droplets. Treatment was primarily supportive, and no specific anti-virals were effective.\xa0Since mid-2004, no new cases of SARS have been reported. 
Until the recent COVID-19 pandemic, the global reach of SARS had been matched only by the 2009 H1N1 Influenza pandemic.[8] Lessons learned from the SARS pandemic are currently used as a blueprint to fight the pandemic of COVID19.', 'annotations': [{'id': '28', 'infons': {'identifier': 'MESH:D012120', 'type': 'Disease'}, 'text': 'respiratory syndrome', 'locations': [{'offset': 71, 'length': 20}]}, {'id': '29', 'infons': {'identifier': 'MESH:D012120', 'type': 'Disease'}, 'text': 'acute respiratory syndrome', 'locations': [{'offset': 106, 'length': 26}]}, {'id': '30', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 134, 'length': 4}]}, {'id': '31', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 266, 'length': 4}]}, {'id': '32', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 572, 'length': 4}]}, {'id': '33', 'infons': {'identifier': '102793', 'type': 'Species'}, 'text': 'H5N1', 'locations': [{'offset': 648, 'length': 4}]}, {'id': '34', 'infons': {'identifier': '121791', 'type': 'Species'}, 'text': 'Nipah virus', 'locations': [{'offset': 676, 'length': 11}]}, {'id': '35', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 713, 'length': 4}]}, {'id': '36', 'infons': {'identifier': 'MESH:D003643', 'type': 'Disease'}, 'text': 'deaths', 'locations': [{'offset': 892, 'length': 6}]}, {'id': '37', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 945, 'length': 4}]}, {'id': '38', 'infons': {'identifier': '2697049', 'type': 'Species'}, 'text': 'novel coronavirus', 'locations': [{'offset': 1247, 'length': 17}]}, {'id': '39', 'infons': {'identifier': '694009', 'type': 'Species'}, 'text': 'SARS coronavirus', 'locations': [{'offset': 1288, 'length': 16}]}, {'id': '40', 'infons': {'identifier': '694009', 'type': 
'Species'}, 'text': 'SARS-CoV', 'locations': [{'offset': 1306, 'length': 8}]}, {'id': '41', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 1348, 'length': 4}]}, {'id': '42', 'infons': {'identifier': '9606', 'type': 'Species'}, 'text': 'patients', 'locations': [{'offset': 1353, 'length': 8}]}, {'id': '43', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 1544, 'length': 4}]}, {'id': '44', 'infons': {'identifier': '9606', 'type': 'Species'}, 'text': 'patient', 'locations': [{'offset': 1549, 'length': 7}]}, {'id': '45', 'infons': {'identifier': '9606', 'type': 'Species'}, 'text': 'patients', 'locations': [{'offset': 1615, 'length': 8}]}, {'id': '46', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 1945, 'length': 4}]}, {'id': '47', 'infons': {'identifier': 'MESH:C000657245', 'type': 'Disease'}, 'text': 'COVID-19', 'locations': [{'offset': 1987, 'length': 8}]}, {'id': '48', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 2026, 'length': 4}]}, {'id': '49', 'infons': {'identifier': '114727', 'type': 'Species'}, 'text': 'H1N1', 'locations': [{'offset': 2065, 'length': 4}]}, {'id': '50', 'infons': {'identifier': 'MESH:D045169', 'type': 'Disease'}, 'text': 'SARS', 'locations': [{'offset': 2118, 'length': 4}]}, {'id': '51', 'infons': {'identifier': 'MESH:C000657245', 'type': 'Disease'}, 'text': 'COVID19', 'locations': [{'offset': 2191, 'length': 7}]}]}], 'pmid': 32644403, 'pmcid': None, 'created': {'$date': 1600357501642}, 'accessions': ['disease@MESH:C000657245', 'species@694009', 'disease@MESH:D003643', 'species@114727', 'species@121791', 'disease@MESH:D012120', 'species@2697049', 'species@9606', 'species@102793', 'disease@MESH:D045169'], 'journal': '', 'year': 2020, 'authors': ['<NAME>', '<NAME>'], 'tags': ['LitCovid']} data[1][0]['passages'][0]['infons'] 
parse_data.parse_data_json_f_hd(covid_f, output_d) # !python evaluate_renet2_ft.py --raw_data_dir ../data/LitCovid/ --model_dir ../models/ft_models/ --gda_fn_d ../data/LitCovid/ --models_number 10 --batch_size 60
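The one-off `replace("\u2019", "'")` experiment above generalizes to a small translation table applied once per passage. A sketch with a hand-picked character set (the characters chosen here are an assumption based on what appears in the pasted record, not an exhaustive list):

```python
# Map typographic characters that show up in the PubTator passages
# (curly quotes, non-breaking spaces) to plain ASCII equivalents.
TRANSLATION = str.maketrans({
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
    "\u00a0": " ",   # non-breaking space
})

def clean_text(text: str) -> str:
    """Normalize a passage string from the LitCovid JSON."""
    return text.translate(TRANSLATION)

print(clean_text("Koch\u2019s postulates\u00a0confirmed"))
```

Note that annotation offsets in the JSON are computed against the original text, so a one-to-one character mapping like this is safe, while any normalization that changes string length would invalidate the `locations` fields.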
src/nb_scripts/parse_COVID_data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Improving the DQN algorithm using Double Q-Learning # > Notes on improving the DQN algorithm using Double Q-learning. # # - branch: 2020-04-11-double-dqn # - badges: true # - image: images/double-dqn-figure-2.png # - comments: true # - author: <NAME> # - categories: [pytorch, deep-reinforcement-learning, deep-q-networks] # I am continuing to work my way through the [Udacity](https://www.udacity.com/) [*Deep Reinforcement Learning Nanodegree*](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893). In this blog post I discuss and implement the Double DQN algorithm from [*Deep Reinforcement Learning with Double Q-Learning*](https://arxiv.org/abs/1509.06461) (Van Hasselt et al 2015). The Double DQN algorithm is a minor, but important, modification of the original DQN algorithm that I covered in a [previous post](https://davidrpugh.github.io/stochastic-expatriate-descent/pytorch/deep-reinforcement-learning/deep-q-networks/2020/04/03/deep-q-networks.html). # # The Van Hasselt et al 2015 paper makes several important contributions. # # 1. Demonstration of how Q-learning can be overly optimistic in large-scale, even # deterministic, problems due to the inherent estimation errors of learning. # 2. Demonstration that overestimations are more common and severe in practice than previously # acknowledged. # 3. Implementation of Double Q-learning called Double DQN that extends, with minor # modifications, the popular DQN algorithm and that can be used at scale to successfully # reduce overestimations with the result being more stable and reliable learning. # 4. Demonstration that Double DQN finds better policies by obtaining new state-of-the-art # results on the Atari 2600 domain.
# # ## Q-learning overestimates Q-values # # No matter what type of function approximation scheme is used to approximate the action-value function $Q$ there will always be approximation error. The presence of the max operator in the Bellman equation used to compute the $Q$-values means that the approximate $Q$-values will almost always be strictly greater than the corresponding $Q$ values from the true action-value function (i.e., the approximation errors will almost always be positive). This potentially significant source of bias can impede learning and is often exacerbated by the use of flexible, non-linear function approximators such as neural networks. # # Double Q-learning addresses these issues by explicitly separating action selection from action evaluation, which allows each step to use a different function approximator, resulting in a better overall approximation of the action-value function. Figure 2 (with caption) below, which is taken from Van Hasselt et al 2015, summarizes these ideas. See the [paper](https://arxiv.org/pdf/1509.06461.pdf) for more details. # # ![](my_icons/double-dqn-figure-2.png) # # ## Implementing the Double DQN algorithm # # The key idea behind Double Q-learning is to reduce overestimations of Q-values by separating the selection of actions from the evaluation of those actions so that a different Q-network can be used in each step. When applying Double Q-learning to extend the DQN algorithm one can use the online Q-network, $Q(S, a; \theta)$, to select the actions and then the target Q-network, $Q(S, a; \theta^{-})$, to evaluate the selected actions. # # Before implementing the Double DQN algorithm, I am going to re-implement the Q-learning update from the DQN algorithm in a way that explicitly separates action selection from action evaluation. Once I have implemented this new version of Q-learning, implementing the Double DQN algorithm will be much easier.
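The upward bias described above is easy to reproduce numerically: when every action has the same true value, the max over unbiased noisy estimates is biased high, while a double estimator that selects with one noise sample and evaluates with an independent one is not. A small numpy illustration (toy numbers, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_actions = 10_000, 10
true_q = np.zeros(n_actions)  # all actions equally good: the true max is 0

# Unbiased but noisy Q-value estimates.
estimates = true_q + rng.normal(0.0, 1.0, size=(n_trials, n_actions))

# Q-learning style target: max over a single set of noisy estimates.
single_max = estimates.max(axis=1).mean()

# Double estimator: select the action with one noisy set,
# evaluate it with an independent noisy set.
estimates2 = true_q + rng.normal(0.0, 1.0, size=(n_trials, n_actions))
selected = estimates.argmax(axis=1)
double_est = estimates2[np.arange(n_trials), selected].mean()

print(f"max-operator estimate: {single_max:.2f}")  # well above the true value 0
print(f"double estimate:       {double_est:.2f}")  # close to 0
```

Even though each individual estimate is unbiased, the max-operator estimate lands around 1.5 here while the double estimate stays near zero, which is exactly the effect Double DQN exploits.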
Formally separating action selection from action evaluation involves re-writing the Q-learning Bellman equation as follows. # # $$ Y_t^{DQN} = R_{t+1} + \gamma Q\big(S_{t+1}, \underset{a}{\mathrm{argmax}}\ Q(S_{t+1}, a; \theta_t); \theta_t\big) $$ # # In Python this can be implemented as three separate functions. # + import torch from torch import nn def select_greedy_actions(states: torch.Tensor, q_network: nn.Module) -> torch.Tensor: """Select the greedy action for the current state given some Q-network.""" _, actions = q_network(states).max(dim=1, keepdim=True) return actions def evaluate_selected_actions(states: torch.Tensor, actions: torch.Tensor, rewards: torch.Tensor, dones: torch.Tensor, gamma: float, q_network: nn.Module) -> torch.Tensor: """Compute the Q-values by evaluating the actions given the current states and Q-network.""" next_q_values = q_network(states).gather(dim=1, index=actions) q_values = rewards + (gamma * next_q_values * (1 - dones)) return q_values def q_learning_update(states: torch.Tensor, rewards: torch.Tensor, dones: torch.Tensor, gamma: float, q_network: nn.Module) -> torch.Tensor: """Q-Learning update with explicitly decoupled action selection and evaluation steps.""" actions = select_greedy_actions(states, q_network) q_values = evaluate_selected_actions(states, actions, rewards, dones, gamma, q_network) return q_values # - # From here it is straightforward to implement the Double DQN algorithm. All I need is a second action-value function. The target network in the DQN architecture provides a natural candidate for the second action-value function. Van Hasselt et al 2015 suggest using the online Q-network to select the greedy policy actions before using the target Q-network to estimate the value of the selected actions. Once again here are the maths... # # $$ Y_t^{DoubleDQN} = R_{t+1} + \gamma Q\big(S_{t+1}, \underset{a}{\mathrm{argmax}}\ Q(S_{t+1}, a; \theta_t); \theta_t^{-}\big) $$ # # ...and here is the Python implementation.
def double_q_learning_update(states: torch.Tensor, rewards: torch.Tensor, dones: torch.Tensor, gamma: float, q_network_1: nn.Module, q_network_2: nn.Module) -> torch.Tensor: """Double Q-Learning uses Q-network 1 to select actions and Q-network 2 to evaluate the selected actions.""" actions = select_greedy_actions(states, q_network_1) q_values = evaluate_selected_actions(states, actions, rewards, dones, gamma, q_network_2) return q_values # Note that the function `double_q_learning_update` is almost identical to the `q_learning_update` function above: all that is needed is to introduce a second Q-network parameter, `q_network_2`, to the function. This second Q-network will be used to evaluate the actions chosen using the original Q-network parameter, now called `q_network_1`. # ### Experience Replay # # Just like the DQN algorithm, the Double DQN algorithm uses an `ExperienceReplayBuffer` to stabilize the learning process. # + import collections import typing import numpy as np _field_names = [ "state", "action", "reward", "next_state", "done" ] Experience = collections.namedtuple("Experience", field_names=_field_names) class ExperienceReplayBuffer: """Fixed-size buffer to store Experience tuples.""" def __init__(self, batch_size: int, buffer_size: int = None, random_state: np.random.RandomState = None) -> None: """ Initialize an ExperienceReplayBuffer object. Parameters: ----------- batch_size (int): size of each training batch buffer_size (int): maximum size of buffer random_state (np.random.RandomState): random number generator.
""" self._batch_size = batch_size self._buffer_size = buffer_size self._buffer = collections.deque(maxlen=buffer_size) self._random_state = np.random.RandomState() if random_state is None else random_state def __len__(self) -> int: return len(self._buffer) @property def batch_size(self) -> int: """Number of experience samples per training batch.""" return self._batch_size @property def buffer_size(self) -> int: """Total number of experience samples stored in memory.""" return self._buffer_size def append(self, experience: Experience) -> None: """Add a new experience to memory.""" self._buffer.append(experience) def sample(self) -> typing.List[Experience]: """Randomly sample a batch of experiences from memory.""" idxs = self._random_state.randint(len(self._buffer), size=self._batch_size) experiences = [self._buffer[idx] for idx in idxs] return experiences # - # ### Refactoring the `DeepQAgent` class # # Now that I have an implementation of the Double Q-learning algorithm I can refactor the `DeepQAgent` class from my [previous post](https://davidrpugh.github.io/stochastic-expatriate-descent/pytorch/deep-reinforcement-learning/deep-q-networks/2020/04/03/deep-q-networks.html) to incorporate the functionality above. The functions defined above can be added to the `DeepQAgent` as either static methods or simply included as module-level functions, depending on preference. I tend to prefer module-level functions instead of static methods as module-level functions can be imported independently of class definitions, which makes them a bit more reusable.
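As a quick sanity check of the replay-buffer mechanics (a bounded deque plus sampling with replacement via `randint`), here is a minimal stand-in re-declared so the snippet runs on its own rather than reusing the class above:

```python
import collections
import numpy as np

Experience = collections.namedtuple(
    "Experience", ["state", "action", "reward", "next_state", "done"])

buffer = collections.deque(maxlen=100)  # fixed-size memory, oldest entries evicted
rng = np.random.RandomState(42)

# Fill the buffer with a few dummy transitions.
for t in range(5):
    buffer.append(Experience(state=t, action=0, reward=1.0, next_state=t + 1, done=False))

# Sample a mini-batch with replacement, as ExperienceReplayBuffer.sample does.
idxs = rng.randint(len(buffer), size=3)
batch = [buffer[i] for i in idxs]

print(len(buffer), [e.state for e in batch])
```

Sampling by index with replacement is simpler than `random.sample` and, for buffers much larger than the batch size, behaves almost identically in practice.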
# + import typing import numpy as np import torch from torch import nn, optim from torch.nn import functional as F class Agent: def choose_action(self, state: np.array) -> int: """Rule for choosing an action given the current state of the environment.""" raise NotImplementedError def learn(self, experiences: typing.List[Experience]) -> None: """Update the agent's state based on a collection of recent experiences.""" raise NotImplementedError def save(self, filepath) -> None: """Save any important agent state to a file.""" raise NotImplementedError def step(self, state: np.array, action: int, reward: float, next_state: np.array, done: bool) -> None: """Update agent's state after observing the effect of its action on the environment.""" raise NotImplementedError class DeepQAgent(Agent): def __init__(self, state_size: int, action_size: int, number_hidden_units: int, optimizer_fn: typing.Callable[[typing.Iterable[nn.Parameter]], optim.Optimizer], batch_size: int, buffer_size: int, epsilon_decay_schedule: typing.Callable[[int], float], alpha: float, gamma: float, update_frequency: int, double_dqn: bool = False, seed: int = None) -> None: """ Initialize a DeepQAgent. Parameters: ----------- state_size (int): the size of the state space. action_size (int): the size of the action space. number_hidden_units (int): number of units in the hidden layers. optimizer_fn (callable): function that takes Q-network parameters and returns an optimizer. batch_size (int): number of experience tuples in each mini-batch. buffer_size (int): maximum number of experience tuples stored in the replay buffer. epsilon_decay_schedule (callable): function that takes episode number and returns epsilon. alpha (float): rate at which the target q-network parameters are updated. gamma (float): controls how much the agent discounts future rewards (0 < gamma <= 1). update_frequency (int): frequency (measured in time steps) with which q-network parameters are updated.
double_dqn (bool): whether to use vanilla DQN algorithm or use the Double DQN algorithm. seed (int): random seed """ self._state_size = state_size self._action_size = action_size self._device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # set seeds for reproducibility self._random_state = np.random.RandomState() if seed is None else np.random.RandomState(seed) if seed is not None: torch.manual_seed(seed) if torch.cuda.is_available(): torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False # initialize agent hyperparameters _replay_buffer_kwargs = { "batch_size": batch_size, "buffer_size": buffer_size, "random_state": self._random_state } self._memory = ExperienceReplayBuffer(**_replay_buffer_kwargs) self._epsilon_decay_schedule = epsilon_decay_schedule self._alpha = alpha self._gamma = gamma self._double_dqn = double_dqn # initialize Q-Networks self._update_frequency = update_frequency self._online_q_network = self._initialize_q_network(number_hidden_units) self._target_q_network = self._initialize_q_network(number_hidden_units) self._synchronize_q_networks(self._target_q_network, self._online_q_network) self._online_q_network.to(self._device) self._target_q_network.to(self._device) # initialize the optimizer self._optimizer = optimizer_fn(self._online_q_network.parameters()) # initialize some counters self._number_episodes = 0 self._number_timesteps = 0 def _initialize_q_network(self, number_hidden_units: int) -> nn.Module: """Create a neural network for approximating the action-value function.""" q_network = nn.Sequential( nn.Linear(in_features=self._state_size, out_features=number_hidden_units), nn.ReLU(), nn.Linear(in_features=number_hidden_units, out_features=number_hidden_units), nn.ReLU(), nn.Linear(in_features=number_hidden_units, out_features=self._action_size) ) return q_network @staticmethod def _soft_update_q_network_parameters(q_network_1: nn.Module, q_network_2: nn.Module, alpha: float) -> None: """In-place, 
soft-update of q_network_1 parameters with parameters from q_network_2.""" for p1, p2 in zip(q_network_1.parameters(), q_network_2.parameters()): p1.data.copy_(alpha * p2.data + (1 - alpha) * p1.data) @staticmethod def _synchronize_q_networks(q_network_1: nn.Module, q_network_2: nn.Module) -> None: """In place, synchronization of q_network_1 and q_network_2.""" _ = q_network_1.load_state_dict(q_network_2.state_dict()) def _uniform_random_policy(self, state: torch.Tensor) -> int: """Choose an action uniformly at random.""" return self._random_state.randint(self._action_size) def _greedy_policy(self, state: torch.Tensor) -> int: """Choose an action that maximizes the action_values given the current state.""" action = (self._online_q_network(state) .argmax() .cpu() # action_values might reside on the GPU! .item()) return action def _epsilon_greedy_policy(self, state: torch.Tensor, epsilon: float) -> int: """With probability epsilon explore randomly; otherwise exploit knowledge optimally.""" if self._random_state.random() < epsilon: action = self._uniform_random_policy(state) else: action = self._greedy_policy(state) return action def choose_action(self, state: np.array) -> int: """ Return the action for given state as per current policy. Parameters: ----------- state (np.array): current state of the environment. Return: -------- action (int): an integer representing the chosen action. 
""" # need to reshape state array and convert to tensor state_tensor = (torch.from_numpy(state) .unsqueeze(dim=0) .to(self._device)) # choose uniform at random if agent has insufficient experience if not self.has_sufficient_experience(): action = self._uniform_random_policy(state_tensor) else: epsilon = self._epsilon_decay_schedule(self._number_episodes) action = self._epsilon_greedy_policy(state_tensor, epsilon) return action def learn(self, experiences: typing.List[Experience]) -> None: """Update the agent's state based on a collection of recent experiences.""" states, actions, rewards, next_states, dones = (torch.Tensor(vs).to(self._device) for vs in zip(*experiences)) # need to add second dimension to some tensors actions = (actions.long() .unsqueeze(dim=1)) rewards = rewards.unsqueeze(dim=1) dones = dones.unsqueeze(dim=1) if self._double_dqn: target_q_values = double_q_learning_update(next_states, rewards, dones, self._gamma, self._online_q_network, self._target_q_network) else: target_q_values = q_learning_update(next_states, rewards, dones, self._gamma, self._target_q_network) online_q_values = (self._online_q_network(states) .gather(dim=1, index=actions)) # compute the mean squared loss loss = F.mse_loss(online_q_values, target_q_values) # updates the parameters of the online network self._optimizer.zero_grad() loss.backward() self._optimizer.step() self._soft_update_q_network_parameters(self._target_q_network, self._online_q_network, self._alpha) def has_sufficient_experience(self) -> bool: """True if agent has enough experience to train on a batch of samples; False otherwise.""" return len(self._memory) >= self._memory.batch_size def save(self, filepath: str) -> None: """ Saves the state of the DeepQAgent. Parameters: ----------- filepath (str): filepath where the serialized state should be saved. Notes: ------ The method uses `torch.save` to serialize the state of the q-network, the optimizer, as well as the dictionary of agent hyperparameters. 
""" checkpoint = { "q-network-state": self._online_q_network.state_dict(), "optimizer-state": self._optimizer.state_dict(), "agent-hyperparameters": { "alpha": self._alpha, "batch_size": self._memory.batch_size, "buffer_size": self._memory.buffer_size, "gamma": self._gamma, "update_frequency": self._update_frequency } } torch.save(checkpoint, filepath) def step(self, state: np.array, action: int, reward: float, next_state: np.array, done: bool) -> None: """ Updates the agent's state based on feedback received from the environment. Parameters: ----------- state (np.array): the previous state of the environment. action (int): the action taken by the agent in the previous state. reward (float): the reward received from the environment. next_state (np.array): the resulting state of the environment following the action. done (bool): True is the training episode is finised; false otherwise. """ experience = Experience(state, action, reward, next_state, done) self._memory.append(experience) if done: self._number_episodes += 1 else: self._number_timesteps += 1 # every so often the agent should learn from experiences if self._number_timesteps % self._update_frequency == 0 and self.has_sufficient_experience(): experiences = self._memory.sample() self.learn(experiences) # - # The code for the training loop remains unchanged from the previous post. 
# +
import collections
import typing

import gym


def _train_for_at_most(agent: Agent, env: gym.Env, max_timesteps: int) -> int:
    """Train agent for a maximum number of timesteps."""
    state = env.reset()
    score = 0
    for t in range(max_timesteps):
        action = agent.choose_action(state)
        next_state, reward, done, _ = env.step(action)
        agent.step(state, action, reward, next_state, done)
        state = next_state
        score += reward
        if done:
            break
    return score


def _train_until_done(agent: Agent, env: gym.Env) -> float:
    """Train the agent until the current episode is complete."""
    state = env.reset()
    score = 0
    done = False
    while not done:
        action = agent.choose_action(state)
        next_state, reward, done, _ = env.step(action)
        agent.step(state, action, reward, next_state, done)
        state = next_state
        score += reward
    return score


def train(agent: Agent,
          env: gym.Env,
          checkpoint_filepath: str,
          target_score: float,
          number_episodes: int,
          maximum_timesteps=None) -> typing.List[float]:
    """
    Reinforcement learning training loop.

    Parameters:
    -----------
    agent (Agent): an agent to train.
    env (gym.Env): an environment in which to train the agent.
    checkpoint_filepath (str): filepath used to save the state of the trained agent.
    target_score (float): average score at which the environment is considered solved.
    number_episodes (int): maximum number of training episodes.
    maximum_timesteps (int): maximum number of timesteps per episode.

    Returns:
    --------
    scores (list): collection of episode scores from training.

    """
    scores = []
    most_recent_scores = collections.deque(maxlen=100)
    for i in range(number_episodes):
        if maximum_timesteps is None:
            score = _train_until_done(agent, env)
        else:
            score = _train_for_at_most(agent, env, maximum_timesteps)
        scores.append(score)
        most_recent_scores.append(score)

        average_score = sum(most_recent_scores) / len(most_recent_scores)
        if average_score >= target_score:
            print(f"\nEnvironment solved in {i:d} episodes!\tAverage Score: {average_score:.2f}")
            agent.save(checkpoint_filepath)
            break
        if (i + 1) % 100 == 0:
            print(f"\rEpisode {i + 1}\tAverage Score: {average_score:.2f}")

    return scores
# -

# ## Solving the `LunarLander-v2` environment
#
# In the rest of this blog post I will use the Double DQN algorithm to train an agent to solve the [LunarLander-v2](https://gym.openai.com/envs/LunarLander-v2/) environment from [OpenAI](https://openai.com/) and then compare the results to those obtained using the vanilla DQN algorithm.
#
# In this environment the landing pad is always at coordinates (0, 0). The reward for moving the lander from the top of the screen to the landing pad and arriving at zero speed is typically between 100 and 140 points. Firing the main engine costs 0.3 points each frame (so the lander is incentivized to fire the engine as few times as possible). If the lander moves away from the landing pad it loses reward (so the lander is incentivized to land in the designated landing area). The lander is also incentivized to land "gracefully" (and not crash in the landing area!).
#
# A training episode finishes if the lander crashes (-100 points) or comes to rest (+100 points). Each leg with ground contact receives an additional +10 points. The task is considered "solved" if the lander is able to achieve 200 points (I will actually be more stringent and define "solved" as achieving over 200 points on average over the most recent 100 training episodes).
#
# ### Action Space
#
# There are four discrete actions available:
#
# 0. Do nothing.
# 1. Fire the left orientation engine.
# 2. Fire the main engine.
# 3. Fire the right orientation engine.

# ### Colab specific environment setup
#
# If you are playing around with this notebook on Google Colab, then you will need to run the following cell in order to install the required OpenAI dependencies into the environment.

# !pip install gym[box2d]==0.17.*

# +
import gym

env = gym.make('LunarLander-v2')
_ = env.seed(42)
# -

# ### Creating a `DeepQAgent`
#
# Before creating an instance of the `DeepQAgent` I need to define an $\epsilon$-decay schedule and choose an optimizer.
#
# #### Epsilon decay schedule
#
# As was the case with the DQN algorithm, when using the Double DQN algorithm the agent chooses its action using an $\epsilon$-greedy policy. When using an $\epsilon$-greedy policy, with probability $\epsilon$, the agent explores the state space by choosing an action uniformly at random from the set of feasible actions; with probability $1-\epsilon$, the agent exploits its current knowledge by choosing the optimal action given the current state.
#
# As the agent learns and acquires additional knowledge about its environment it makes sense to *decrease* exploration and *increase* exploitation by decreasing $\epsilon$. In practice, it isn't a good idea to decrease $\epsilon$ to zero; instead one typically decreases $\epsilon$ over time according to some schedule until it reaches some minimum value.
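# To get a feel for how fast a power-law schedule of this kind decays, the following stdlib-only sketch (using the same decay factor 0.99 and floor 0.01 as the schedule used for training) computes the episode at which $\epsilon$ first reaches its floor: the smallest $n$ with $0.99^n < 0.01$, i.e. $\lceil \log(0.01) / \log(0.99) \rceil$.

```python
import math

def power_decay(n: int, decay_factor: float = 0.99, minimum_epsilon: float = 0.01) -> float:
    """Epsilon after n episodes under a power decay schedule with a floor."""
    return max(decay_factor ** n, minimum_epsilon)

# smallest n with decay_factor**n < minimum_epsilon
n_floor = math.ceil(math.log(0.01) / math.log(0.99))
print(n_floor)            # 459
print(power_decay(0))     # 1.0 -- pure exploration at the start
print(power_decay(1000))  # 0.01 -- 1% exploration forever after
```

# So with these settings the agent spends roughly the first 460 episodes annealing from pure exploration down to 1% exploration, and explores at that fixed low rate thereafter.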
# +
def power_decay_schedule(episode_number: int,
                         decay_factor: float,
                         minimum_epsilon: float) -> float:
    """Power decay schedule found in other practical applications."""
    return max(decay_factor**episode_number, minimum_epsilon)


_epsilon_decay_schedule_kwargs = {
    "decay_factor": 0.99,
    "minimum_epsilon": 1e-2,
}
epsilon_decay_schedule = lambda n: power_decay_schedule(n, **_epsilon_decay_schedule_kwargs)
# -

# #### Choosing an optimizer
#
# As is the case in training any neural network, the choice of optimizer and the tuning of its hyper-parameters (in particular the learning rate) is important. Here I am going to use the [Adam](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) optimizer. In my previous post on the DQN algorithm I used [RMSProp](https://pytorch.org/docs/stable/optim.html#torch.optim.RMSProp). In my experiments I found that the Adam optimizer significantly improved the efficiency and stability of both the Double DQN and DQN algorithms compared with RMSProp (on this task at least!). In fact, the improvements in efficiency and stability from using the Adam optimizer instead of the RMSProp optimizer seemed more important than any gains from using Double DQN instead of DQN.

# +
from torch import optim

_optimizer_kwargs = {
    "lr": 1e-3,
    "betas": (0.9, 0.999),
    "eps": 1e-08,
    "weight_decay": 0,
    "amsgrad": False,
}
optimizer_fn = lambda parameters: optim.Adam(parameters, **_optimizer_kwargs)
# -

# ### Training the `DeepQAgent` using Double DQN
#
# Now I am finally ready to train the `double_dqn_agent`. The target score for the `LunarLander-v2` environment is 200 points on average for at least 100 consecutive episodes. If the agent is able to "solve" the environment, then training will terminate early.
# +
_agent_kwargs = {
    "state_size": env.observation_space.shape[0],
    "action_size": env.action_space.n,
    "number_hidden_units": 64,
    "optimizer_fn": optimizer_fn,
    "epsilon_decay_schedule": epsilon_decay_schedule,
    "batch_size": 64,
    "buffer_size": 100000,
    "alpha": 1e-3,
    "gamma": 0.99,
    "update_frequency": 4,
    "double_dqn": True,  # True uses Double DQN; False uses DQN
    "seed": None,
}

double_dqn_agent = DeepQAgent(**_agent_kwargs)

double_dqn_scores = train(double_dqn_agent,
                          env,
                          "double-dqn-checkpoint.pth",
                          number_episodes=2000,
                          target_score=200)
# -

# ### Training the `DeepQAgent` using DQN
#
# Next I will create another `DeepQAgent` and train it using the original DQN algorithm for comparison.

# +
_agent_kwargs = {
    "state_size": env.observation_space.shape[0],
    "action_size": env.action_space.n,
    "number_hidden_units": 64,
    "optimizer_fn": optimizer_fn,
    "epsilon_decay_schedule": epsilon_decay_schedule,
    "batch_size": 64,
    "buffer_size": 100000,
    "alpha": 1e-3,
    "gamma": 0.99,
    "update_frequency": 4,
    "double_dqn": False,  # True uses Double DQN; False uses DQN
    "seed": None,
}

dqn_agent = DeepQAgent(**_agent_kwargs)

dqn_scores = train(dqn_agent,
                   env,
                   "dqn-checkpoint.pth",
                   number_episodes=2000,
                   target_score=200)
# -

# ### Comparing DQN and Double DQN
#
# To make it a bit easier to compare the overall performance of the two algorithms I will now re-train both agents for the same number of episodes (rather than training for the minimum number of episodes required to achieve a target score).
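# Before looking at the results, it is worth restating exactly what the `double_dqn` flag changes. The sketch below uses made-up action values on plain Python lists (not the agent's tensor code): vanilla DQN lets the *target* network both select and evaluate the greedy next action, while Double DQN selects with the *online* network and evaluates with the target network, which damps the overestimation bias introduced by taking a max over noisy estimates.

```python
# toy action values for a single next state s'
q_online = [1.0, 3.0, 2.0]   # online network's estimates
q_target = [2.5, 0.5, 4.0]   # target network's estimates
reward, gamma = 1.0, 0.9

# vanilla DQN: the target network both selects and evaluates
vanilla_target = reward + gamma * max(q_target)

# Double DQN: the online network selects, the target network evaluates
best_action = max(range(len(q_online)), key=q_online.__getitem__)
double_target = reward + gamma * q_target[best_action]

print(vanilla_target)  # 4.6
print(double_target)   # 1.45
```

# When the two networks disagree about which action is best, as in this deliberately exaggerated example, the Double DQN target can be much smaller than the vanilla target; when they agree, the two targets coincide.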
# +
_agent_kwargs = {
    "state_size": env.observation_space.shape[0],
    "action_size": env.action_space.n,
    "number_hidden_units": 64,
    "optimizer_fn": optimizer_fn,
    "epsilon_decay_schedule": epsilon_decay_schedule,
    "batch_size": 64,
    "buffer_size": 100000,
    "alpha": 1e-3,
    "gamma": 0.99,
    "update_frequency": 4,
    "double_dqn": True,
    "seed": None,
}

double_dqn_agent = DeepQAgent(**_agent_kwargs)

double_dqn_scores = train(double_dqn_agent,
                          env,
                          "double-dqn-checkpoint.pth",
                          number_episodes=2000,
                          target_score=float("inf"),  # hack to ensure that training never terminates early
                          )

# +
_agent_kwargs = {
    "state_size": env.observation_space.shape[0],
    "action_size": env.action_space.n,
    "number_hidden_units": 64,
    "optimizer_fn": optimizer_fn,
    "epsilon_decay_schedule": epsilon_decay_schedule,
    "batch_size": 64,
    "buffer_size": 100000,
    "alpha": 1e-3,
    "gamma": 0.99,
    "update_frequency": 4,
    "double_dqn": False,
    "seed": None,
}

dqn_agent = DeepQAgent(**_agent_kwargs)

dqn_scores = train(dqn_agent,
                   env,
                   "dqn-checkpoint.pth",
                   number_episodes=2000,
                   target_score=float("inf"))
# -

# #### Plotting the time series of scores
#
# I can use [Pandas](https://pandas.pydata.org/) to quickly plot the time series of scores along with a 100 episode moving average. Note that training stops as soon as the rolling average crosses the target score.
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline

dqn_scores = pd.Series(dqn_scores, name="scores")
double_dqn_scores = pd.Series(double_dqn_scores, name="scores")

# +
fig, axes = plt.subplots(2, 1, figsize=(10, 6), sharex=True, sharey=True)

_ = dqn_scores.plot(ax=axes[0], label="DQN Scores")
_ = (dqn_scores.rolling(window=100)
               .mean()
               .rename("Rolling Average")
               .plot(ax=axes[0]))
_ = axes[0].legend()
_ = axes[0].set_ylabel("Score")

_ = double_dqn_scores.plot(ax=axes[1], label="Double DQN Scores")
_ = (double_dqn_scores.rolling(window=100)
                      .mean()
                      .rename("Rolling Average")
                      .plot(ax=axes[1]))
_ = axes[1].legend()
_ = axes[1].set_ylabel("Score")
_ = axes[1].set_xlabel("Episode Number")
# -

# #### Kernel density plot of the scores
#
# In general, the kernel density plot will be bimodal with one mode less than -100 and a second mode greater than 200. The negative mode corresponds to those training episodes where the agent crash landed and thus scored at most -100; the positive mode corresponds to those training episodes where the agent "solved" the task. The kernel density of scores typically exhibits negative skewness (i.e., a fat left tail): there are lots of ways in which landing the lander can go horribly wrong (resulting in the agent getting a very low score) and only relatively few paths to a gentle landing (and a high score).
#
# Depending on the training run, you may see that the distribution of scores for Double DQN has a significantly higher positive mode and lower negative mode when compared to the distribution for DQN, which indicates that the agent trained with Double DQN solved the task *more* frequently and crashed and burned *less* frequently than the agent trained with DQN.

fig, ax = plt.subplots(1, 1)
_ = dqn_scores.plot(kind="kde", ax=ax, label="DQN")
_ = double_dqn_scores.plot(kind="kde", ax=ax, label="Double DQN")
_ = ax.set_xlabel("Score")
_ = ax.legend()

# ## Where to go from here?
#
# In a future post I plan to cover [Prioritized Experience Replay](https://arxiv.org/abs/1511.05952), which improves the sampling scheme used by the `ExperienceReplayBuffer` so that important transitions are replayed more frequently, which should lead to more efficient learning.