# ---
# title: "Timer decorator for how long a function took to run"
# date: 2020-04-12T14:41:32+02:00
# author: "<NAME>"
# type: technical_note
# draft: false
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ### Timer decorator

# +
import time


def timer(func):
    """A decorator that prints how long a function took to run."""
    # Define the wrapper function to return
    def wrapper(*args, **kwargs):
        # When wrapper() is called, record the current time
        t_start = time.time()
        # Call the decorated function and store the result
        result = func(*args, **kwargs)
        # Compute the total run time and print it
        t_total = time.time() - t_start
        print(f'{func.__name__} took {t_total}s')
        return result
    return wrapper
# -


@timer
def multiply(a, b):
    return a * b


multiply(1, 2)


@timer
def sleep_n_seconds(n):
    time.sleep(n)


sleep_n_seconds(5)

sleep_n_seconds(10)
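One caveat worth noting (not covered in the note above): a plain wrapper replaces the decorated function's `__name__` and docstring with the wrapper's own, which is why the printed name works but introspection breaks. The standard library's `functools.wraps` fixes this; a minimal sketch of the same decorator with metadata preserved:

```python
import functools
import time


def timer(func):
    """Variant of the decorator above that preserves func's metadata."""
    @functools.wraps(func)  # copies __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        t_start = time.time()
        result = func(*args, **kwargs)
        print(f'{func.__name__} took {time.time() - t_start}s')
        return result
    return wrapper


@timer
def multiply(a, b):
    """Multiply two numbers."""
    return a * b


print(multiply.__name__)  # prints "multiply", not "wrapper"
```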
courses/datacamp/notes/python/basics/timer_decorator.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## ENGR 213 Lecture Notes
#
# ### Lecture 13
# #### Stress Transformation:
#
# Over the last weeks we have considered the internal stresses in a beam under load. Before that we explored the behavior of materials through their elastic region to yielding and failure when experiencing normal or shear stresses. What we need to consider now is the more complex question of failure of materials when they are subject to a mix of planar stresses. Generally this will be expressed as two potential normal stresses at right angles to each other and a single shear stress. From this information we can determine the direction along which the material of the beam will feel the maximum normal and shear stresses. We can also determine the normal and shear stresses along an arbitrary direction that may be important due to a weld seam or the grain of the wood.
#
# This process of taking the known stresses and determining their effect along different directions is called stress transformation. In its most direct form it is much like doing a freebody diagram and setting the net forces equal to zero. Before the advent of computers engineers often used Mohr's Circle to do these calculations. We will explore Mohr's Circle in a number of ways.

# ### Planar Stress Cube:
#
# For the purposes of this class we are addressing stresses that occur in a plane rather than the full generality of 3D stresses. Planar stress settings are much more common in general engineering practice than complex 3D stresses. I would expect aerospace engineering to be an exception to this statement.
#
# The image below illustrates the stresses on a cube of the material under load.
# The left-hand cube is subject to the most general stress state, which includes three normal stresses (one along each axis) and three shear stresses (two on each face, each equal to a matching shear stress on another face).
#
# When we limit the stress to occur in a plane we get the cube (in cross section) illustrated on the right. In this example there are two normal stresses (remember Poisson's ratio?) and a shear stress which is the same along each pair of faces. Remember that shear strain is described by the deformation angle, which will be the same along the two directions if you consider the geometry.
#
# <img src="https://raw.githubusercontent.com/smithrockmaker/ENGR213/main/images/stressGeneral3D.gif" width="500"/>
#
# As we often do in science and engineering, let's take a simplified case to explore the effect of a single normal stress on the stresses along a different direction in the cube.

# ### 'Simple' Example: Pure Normal Stress - 1 axis
#
# This is taken from Example 8.1 in Vable. We are asked to consider the normal stress along the bottom edge of a bridge beam. A basic cube of the material along the edge has ONLY a normal stress in the x direction. In the setting for this example the bridge beam has a welded joint along a line that is $35^{\circ}$ above the horizontal. The underlying question is what the stresses (normal and shear) along the weld seam are. The illustration below shows the normal stress on the cube as well as the orientation of the weld joint.
#
# <img src="https://raw.githubusercontent.com/smithrockmaker/ENGR213/main/images/stressCube.png" width="300"/>
#
# If we now consider the portion of the cube 'below' the weld joint we first need to define a clear coordinate system. Vable labels the external coordinate system of the bridge beam the **global coordinate system** and the coordinate system within the cube along the weld joint the **local coordinate system**.
# In other texts these are referred to as the $(x, y)$ - global - and $(x^{'}, y^{'})$ - local - coordinate systems.
#
# Notice that $\hat{n}$ is easy to define as perpendicular to the plane of the weld joint. The direction of $\hat{t}$ (tangential) is chosen so that $\hat{n} \times \hat{t} = \hat{z}$. These define our (+) directions.
#
# <img src="https://raw.githubusercontent.com/smithrockmaker/ENGR213/main/images/stressWedge.png" width="700"/>

# Because this wedge of material must be in equilibrium the net forces must be zero. This is where we have to be a little careful because our initial information is in the form of stresses, which are force/area. The areas on the faces of the wedge are different and must be accounted for.
#
# #### Area:
#
# Since we are interested in the stresses along the weld joint we will use that surface as the reference. The area of the surface defined by the weld joint is given by...
#
# $$ \large \Delta A = \Delta t \Delta z$$
#
# What we will find in the end is that this area occurs in every term and ultimately cancels out. For now it is helpful to be clear about how this area affects the determination of the forces on the wedge that are in equilibrium.
#
# #### Freebody Diagram:
#
# The force wedge represents the conversion of the stresses into forces. Given that our interest is the stress along the $\hat{n}$ and $\hat{t}$ directions we take the components of $F_x$ along those axes as shown. Setting up the equilibrium equations we find that...
#
# $$\large \sigma_{nn} \Delta t \Delta z = (150 \Delta t\: \sin(35^{\circ})\: \Delta z)\:\sin(35^{\circ})$$
#
# ...which we could write as...
#
# $$\large \sigma_{nn} \Delta A = \sigma_{xx} \sin^2(\theta) \Delta A \implies \sigma_{nn} = \sigma_{xx} \sin^2(\theta)$$
#
# ...where $\theta$ is the angle of the plane of interest.
#
# Similarly we can determine the shear stress...
#
# $$\large \tau_{nt} \Delta t \Delta z = (150 \Delta t\: \sin(35^{\circ})\: \Delta z)\:\cos(35^{\circ})$$
#
# ...which can be written more generally as...
#
# $$ \large \tau_{nt} = \sigma_{xx}\: \sin(\theta)\:\cos(\theta)$$

# ### The Take-Aways:
#
# * External stresses on a material lead to different normal and shear stresses along different planes.
# * A good freebody diagram allows us to determine the normal and shear stresses along a plane from basic Newtonian mechanics.
# * Ultimately the area term cancels out, but it is conceptually helpful to remember its presence.
#
# If we look at the entire process from a cross-sectional view it would look like this. In this visualization the width and depth that contribute to the area are hidden, which is why I wanted to take the time to explore the previous approach in detail.
#
# <img src="https://raw.githubusercontent.com/smithrockmaker/ENGR213/main/images/stressSteps.png" width="800"/>

# ### More Generally:
#
# Consider now a more general situation where we have all possible plane stresses. In this example (Vable Example 8.2) the plane of interest is at $30^{\circ}$ to the horizontal. The steps towards a solution are illustrated below (in cross section), starting with the stresses on the cube and then the stresses on the wedge with the normal and tangential directions clearly indicated.
#
# **Important to Note:** The shear stress of 50 MPa occurs equally along all 4 surfaces. This is because the shear strain is the same vertically and horizontally, and like the normal stresses the shear stresses must occur in pairs on opposite surfaces to cancel and maintain equilibrium.
#
# <img src="https://raw.githubusercontent.com/smithrockmaker/ENGR213/main/images/stressComplex1.png" width="600"/>
#
# We then convert the stresses to forces, redrawing as a freebody diagram. The net external forces on the x and y surfaces are indicated on the last drawing.
# The last step is to mathematically set them equal to the normal and shear forces on the plane of interest.
#
# <img src="https://raw.githubusercontent.com/smithrockmaker/ENGR213/main/images/stressComplex2.png" width="600"/>
#
# Like so many complex problems, the steps are relatively straightforward and depend entirely on the initial representation of the stresses on the cube and a lot of careful algebra.

# ### The Formal Math:
#
# You can surely tell that we must be able to develop a formal mathematical way to make these stress transformations. To use them effectively one has to be VERY careful about signs and labels, so consider yourself warned:) Here are the labels that define the (+) directions for the equations. Note that $\theta$ defines two planes of interest.
#
# <img src="https://raw.githubusercontent.com/smithrockmaker/ENGR213/main/images/stressConventions.png" width="600"/>
#
# $$\large \sigma_{nn} = \sigma_{xx}\cos^2\theta + \sigma_{yy}\sin^2\theta + 2\tau_{xy}\sin\theta\cos\theta$$
#
# $$\large \tau_{nt} = -\sigma_{xx}\cos\theta\sin\theta + \sigma_{yy}\sin\theta\cos\theta + \tau_{xy}(\cos^2\theta - \sin^2\theta)$$
#
# The transformed shear stress is the same along either plane of interest. If we wish to know the normal stress along the perpendicular plane of interest (the one we explored in the wedge example) the expression becomes...
#
# $$\large \sigma_{nn\perp} = \sigma_{tt} = \sigma_{xx}\sin^2\theta + \sigma_{yy}\cos^2\theta - 2\tau_{xy}\cos\theta\sin\theta$$

# ### Alternate Formulation:
#
# Using some of those clever trigonometric identities that you felt abused by years ago, we can restate the above equations in an alternative form.
#
# $$\large \sigma_{nn} = \frac{\sigma_{xx} + \sigma_{yy}}{2} + \frac{\sigma_{xx} - \sigma_{yy}}{2}\cos 2\theta + \tau_{xy}\: \sin 2\theta$$
#
# $$\large \tau_{nt} = -\frac{\sigma_{xx} - \sigma_{yy}}{2}\sin 2\theta + \tau_{xy}\: \cos 2\theta$$

# ### Maximum Plane Stress: Principal Axes
#
# Not surprisingly, it would be lovely to know which direction ($\theta$) leads to the maximum shear or normal stress. That is the topic of our last discussion.
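These equations are easy to check numerically. The sketch below is not part of the original notes (the function name is mine); it implements the double-angle form and verifies the invariant that the normal stresses on two perpendicular planes sum to $\sigma_{xx} + \sigma_{yy}$, which follows directly from the expressions above.

```python
import math


def transformed_stresses(sxx, syy, txy, theta_deg):
    """Plane-stress transformation in the double-angle form above.

    Returns (sigma_nn, tau_nt) on the plane whose normal is at
    theta_deg from the x axis, using the sign conventions in the notes.
    """
    th = math.radians(theta_deg)
    avg = (sxx + syy) / 2
    diff = (sxx - syy) / 2
    snn = avg + diff * math.cos(2 * th) + txy * math.sin(2 * th)
    tnt = -diff * math.sin(2 * th) + txy * math.cos(2 * th)
    return snn, tnt


# Illustrative values (MPa): sxx = 80, syy = -30, txy = 50, theta = 30 deg
snn, tnt = transformed_stresses(80, -30, 50, 30)
snn_perp, _ = transformed_stresses(80, -30, 50, 120)

# Normal stresses on perpendicular planes sum to sxx + syy
print(round(snn + snn_perp, 6), 80 + (-30))  # prints 50.0 50
```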
Week9BeamDeflection/ENGR213Lec13.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# # Investigate grids and sub-grids for generating a cumulative vtk file of all relevant quantities

# Creating grids for each quantity individually leads to non-overlapping gridpoints due to floating point roundoffs.
# 1. Create a global grid at the maximum extent of all quantities.
# 2. Create subgrids by using np.argwhere(np.logical_and.reduce()) with a list of spatial limits.
# 3. Use the return from argwhere as the interpolation gridpoints for griddata.
# 4. Then use swapaxes and reshape to make a vtk grid and use it to subindex and fill a zeros array of the shape of the global grid.

# +
import numpy as np
from scipy.interpolate import griddata
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
sns.set_style('whitegrid')
import sys
# -

from pyvisfile.vtk import write_structured_grid
from pytools.obj_array import make_obj_array

# +
sys.path.append('../../read_from_sql/')
import read_from_sql
sys.path.append('/Users/vonderlinden2/rsx_analysis/mach_probe_analysis')
sys.path.append('/Users/vonderlinden2/rsx_analysis/time_alignment/source/')
import ion_current_to_mach_number as ic_to_mach
reload(ic_to_mach)
sys.path.append('/Users/vonderlinden2/rsx_analysis/time_alignment/source')
import absolute_times as at
import structured_3d_vtk as struc_3d
reload(struc_3d)
# -

# +
spatial_increment = 0.001
x_min, x_max = -0.027, 0.024
y_min, y_max = -0.021, 0.03
z_min, z_max = 0.249, 0.416
full_grid_bounds = ((x_min, x_max), (y_min, y_max), (z_min, z_max))
full_grid, sizes = struc_3d.bounded_grid(full_grid_bounds, spatial_increment)
full_vtk_grid = struc_3d.prepare_mesh(full_grid, sizes)
# -

tp_vtk_grid_indices = np.where(np.logical_and.reduce([full_vtk_grid[0] >= -0.022, full_vtk_grid[0] <= 0.018,
                                                      full_vtk_grid[1] >= -0.021, full_vtk_grid[1] <= 0.0255,
                                                      full_vtk_grid[2] >= 0.249, full_vtk_grid[2] <= 0.416]))

full_vtk_grid[0][tp_vtk_grid_indices[0], tp_vtk_grid_indices[1], tp_vtk_grid_indices[2]]

tp_vtk_grid_indices5 = np.argwhere(np.logical_and.reduce([full_vtk_grid[0] >= -0.022, full_vtk_grid[0] <= 0.018,
                                                          full_vtk_grid[1] >= -0.021, full_vtk_grid[1] <= 0.0255,
                                                          full_vtk_grid[2] >= 0.249, full_vtk_grid[2] <= 0.416]))

full_vtk_grid[0][tp_vtk_grid_indices5[:, 0], tp_vtk_grid_indices5[:, 1], tp_vtk_grid_indices5[:, 2]]

full_vtk_grid[0][tp_vtk_grid_indices]

full_vtk_grid.shape

51*51*167

tp_vtk_grid_indices[2].shape

print tp_vtk_grid_indices

np.unique(tp_vtk_grid_indices[2]).size

np.unique(tp_vtk_grid_indices[0]).size

np.unique(tp_vtk_grid_indices[1]).size

167*40*46

tp_0 = np.reshape(tp_vtk_grid_indices[0], (40, 46, 167))
tp_1 = np.reshape(tp_vtk_grid_indices[1], (40, 46, 167))
tp_2 = np.reshape(tp_vtk_grid_indices[2], (40, 46, 167))

full_vtk_grid[0][tp_0, tp_1, tp_2].shape

full_vtk_grid[0][tp_0, tp_1, tp_2]

tp_vtk_grid_indices2 = np.argwhere(np.logical_and.reduce([full_vtk_grid[0] >= -0.022, full_vtk_grid[0] <= 0.018,
                                                          full_vtk_grid[1] >= -0.021, full_vtk_grid[1] <= 0.0255,
                                                          full_vtk_grid[2] >= 0.249, full_vtk_grid[2] <= 0.416]))

tp_vtk_grid_indices2.shape

# +
tp_vtk_grid_indices3 = np.swapaxes(tp_vtk_grid_indices2, 0, 1)
tp_all = np.reshape(tp_vtk_grid_indices3, (3, 40, 46, 167))
tp_0 = np.reshape(tp_vtk_grid_indices3[0], (40, 46, 167))
tp_1 = np.reshape(tp_vtk_grid_indices3[1], (40, 46, 167))
tp_2 = np.reshape(tp_vtk_grid_indices3[2], (40, 46, 167))
# -

full_vtk_grid[0][tp_all[0], tp_all[1], tp_all[2]]

np.sum(np.invert(full_vtk_grid[0][tp_0, tp_1, tp_2] == full_vtk_grid[0][tp_all[0], tp_all[1], tp_all[2]]))
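The indexing pattern explored in the cells above can be exercised on a small synthetic grid, independent of the experiment data (array names and sizes below are illustrative only): `argwhere` returns an `(N, 3)` index array, and indexing with its columns is equivalent to indexing with the rows of its `swapaxes` transpose.

```python
import numpy as np

# Small stand-in for full_vtk_grid[0]: a 3D array of x coordinates
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
grid_x = np.broadcast_to(x[:, None, None], (5, 6, 7)).copy()

# argwhere returns an (N, 3) array of indices where the condition holds
idx = np.argwhere(np.logical_and.reduce([grid_x >= -1.0, grid_x <= 1.0]))

# column indexing and swapaxes-row indexing select the same values
vals_cols = grid_x[idx[:, 0], idx[:, 1], idx[:, 2]]
idx_t = np.swapaxes(idx, 0, 1)  # shape (3, N)
vals_swap = grid_x[idx_t[0], idx_t[1], idx_t[2]]
print(np.array_equal(vals_cols, vals_swap))  # prints True
```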
write_to_vtk/experiment_with_subgrids.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/vamshi-kb/Covid19-Detection-Using-Chest-X-Ray/blob/master/Tensorflow_introduction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + id="Fkk5HSiKpTPR"
import tensorflow as tf
import numpy as np
from termcolor import colored

# + id="VfRsPjHRpV4r" outputId="82a7947f-d21f-41ab-a7a8-7497e2639a25" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(('Your TensorFlow version: {0}').format(tf.__version__))

# + id="G1zZ6DBOp9Lm" outputId="dd405b36-b1d6-4bb3-f497-f364e4ae5943" colab={"base_uri": "https://localhost:8080/", "height": 187}
import time

cpu_slot = 0
gpu_slot = 0

# Using the CPU at slot 0
with tf.device('/CPU:' + str(cpu_slot)):
    # Starting timer
    start = time.time()
    # Doing operations on the CPU
    A = tf.constant([[3, 2], [5, 2]])
    B = tf.constant([[3, 2], [5, 2]])
    print(A + B)
    # Printing how long it took with the CPU
    end = time.time() - start
    print("using CPU", end)
    print("\n")

# Using the GPU at slot 0
with tf.device('/GPU:' + str(gpu_slot)):
    # Starting timer
    start = time.time()
    # Doing operations on the GPU
    A = tf.constant([[3, 2], [5, 2]])
    B = tf.constant([[3, 2], [5, 2]])
    print(A + B)
    # Printing how long it took with the GPU
    end = time.time() - start
    print("using GPU", end)

# + [markdown] id="OkujkpnJsGYU"
# TensorFlow programs use a data structure called tensor to represent all the data. Any type of data you plan to use for your model can be stored in Tensors.
# Simply put, a Tensor is a multi-dimensional array (0-D tensor: scalar, 1-D tensor: vector, 2-D tensor: matrix, and so on).
# Hence, TensorFlow is simply referring to the flow of the Tensors in the computational graph.
# # Tensor is a multi-dimensional array
#
# A scalar is a rank_0_tensor
#
# A vector is a rank_1_tensor
#
# A matrix is a rank_2_tensor

# + [markdown] id="TEKJhSSusZ0x"
# **Scalar**

# + id="QmFJeZVoshEM" outputId="c33aa4a5-7778-46ea-aac1-c46dedd3a835" colab={"base_uri": "https://localhost:8080/", "height": 34}
x = tf.constant(3.0)
print(x, x.dtype)

# + [markdown] id="afLgtxVTsviT"
# Vector

# + id="NcMLeHbSsiOw" outputId="3f36c019-e76a-4630-8e28-089942089ef8" colab={"base_uri": "https://localhost:8080/", "height": 34}
x = tf.constant([3, 5, 7])
print(x)

# + [markdown] id="GsnFM0nDuXjn"
# Matrix

# + id="uk-bjaKksuU1" outputId="b8ab91fd-f554-469e-9600-fdb1d39358da" colab={"base_uri": "https://localhost:8080/", "height": 68}
x = tf.constant([[3, 5, 7], [3, 5, 7]])
print(x)

# + [markdown] id="6-StrXYDuQ7n"
# Tensor

# + id="EcsyXPY9uR6h" outputId="b9f00a84-5810-4832-b2e7-ec7dcaa0ec3c" colab={"base_uri": "https://localhost:8080/", "height": 85}
x = tf.constant([[3, 5, 7], [3, 5, 7], [3, 5, 7]])
print(x)

# + [markdown] id="T2gXVCYbuSQY"
# Stacking

# + id="wbM_0lhnuSh-" outputId="dec163d3-70d1-4ee0-a6cf-83ded65197a6" colab={"base_uri": "https://localhost:8080/", "height": 34}
a = tf.constant(3)
print(a)

# + id="qK993LeSui3V" outputId="575f8dd9-931f-419a-9240-000c257e30a4" colab={"base_uri": "https://localhost:8080/", "height": 34}
b = tf.stack([a, a])
print(b)

# + id="kjSELjLrulEr" outputId="ec09fd2d-47fa-4b20-b143-943c2cd0c177" colab={"base_uri": "https://localhost:8080/", "height": 68}
c = tf.stack([b, b])
print(c)

# + [markdown] id="4N9Hg_Ahu11O"
# Slicing

# + id="G8p0L_TNuqL3" outputId="cb5d5534-4c35-453e-85e6-dc5071e75714" colab={"base_uri": "https://localhost:8080/", "height": 34}
a = tf.constant([[1, 2, 3], [4, 5, 6]])
# print(a)
sliced_a = a[:, 1]
print(sliced_a)

# + id="HblpQp9Vu62k" outputId="1c0e6f79-790c-4e3c-880a-ca4019a60b99" colab={"base_uri": "https://localhost:8080/", "height": 34}
sliced_b = a[0, :]
print(sliced_b)

# + [markdown] id="VuS5lUqavgHP"
# Reshaping

# + id="tV2kXfDavXI2" outputId="895a9ffe-d9cc-4fe8-cdd7-583b6506b0cf" colab={"base_uri": "https://localhost:8080/", "height": 85}
reshaped_a = tf.reshape(a, [3, 2])
print(reshaped_a)

# + [markdown] id="NQoN6DJ4vulO"
# Variable

# + id="9Qrh0ZQovqfU" outputId="5d3b09b3-41ef-46aa-9768-2d0972775465" colab={"base_uri": "https://localhost:8080/", "height": 34}
var = tf.Variable([2, 3], dtype=tf.int32, name='Var')
print(var)

# + id="ZDlcz6fUwEKn" outputId="eac9cadd-c295-4050-d870-c552d762fb4e" colab={"base_uri": "https://localhost:8080/", "height": 68}
print("Shape:", var.shape)
print("Dtype:", var.dtype)
print("As Numpy", var.numpy())  # numpy() is a method; calling it returns the values

# + id="pbQtySC3wVqP" outputId="1eb9ae95-8ad1-451d-d546-5988b574e43c" colab={"base_uri": "https://localhost:8080/", "height": 34}
var_add = var.assign_add([3, 4])
print(var_add)

# + id="DHq3p1iRwotL" outputId="92e1edf0-f23b-4453-b0cd-0da01c788e08" colab={"base_uri": "https://localhost:8080/", "height": 34}
var_sub = var_add.assign_sub([1, 1])
print(var_sub)

# + [markdown] id="20E4DzVCw8-E"
# Operations

# + id="idbVU25Qw7Cf" outputId="ea293ab2-abd0-4fce-8d6f-6ccfb7177e59" colab={"base_uri": "https://localhost:8080/", "height": 323}
a = tf.constant([[1, 2], [3, 4]])
print(a)
b = tf.constant([[1, 2], [3, 4]])
print(b)
print(colored("Addition\n", "blue"), a + b)  # all other operations
print(colored("element wise multiplication\n", "blue"), a * b)  # element-wise multiplication
print(colored("matrix multiplication\n", "blue"), a @ b)  # matrix multiplication

# + [markdown] id="DYZeK7hJxnrT"
# Other useful Operations

# + id="1R-DlEuGxDlr" outputId="bdfbb5b9-da54-44a7-cc28-2681d6e187a0" colab={"base_uri": "https://localhost:8080/", "height": 102}
c = tf.constant([[1.0, 2.0], [3.0, 5.0]])
# Find the largest value
print(tf.reduce_max(c))
# Find the index of the largest value
print(tf.argmax(c))
# Compute the softmax
print(tf.nn.softmax(c))

# + [markdown] id="g1loGYHVyj9j"
# Zeros

# + id="ZWRxRgAoxumP" outputId="2f69caf8-8b13-4d30-9a65-c835b21742b6" colab={"base_uri": "https://localhost:8080/", "height": 85}
tf.zeros(shape=[3, 3], dtype=tf.int32)

# + [markdown] id="GflxmAV7yoD9"
# Ones

# + id="dl7HtP_5ymNi" outputId="fb79ee52-b692-4e9a-94dc-8a8f9f26a32c" colab={"base_uri": "https://localhost:8080/", "height": 85}
tf.ones(shape=[3, 3], dtype=tf.float32)

# + [markdown] id="lKdehQtfy52U"
# Cast types

# + id="ThWFFQkDy8Ev" outputId="267f0f31-1661-4390-ffdb-6b2a571bc159" colab={"base_uri": "https://localhost:8080/", "height": 102}
tensor = tf.constant([[3.1, 2.8], [5.2, 2.3], [9.7, 5.5], [1.1, 3.4]], dtype=tf.float32)
tensor_as_int = tf.cast(tensor, tf.int32)
print(tensor_as_int)

# + [markdown] id="StihP1Hky3Cc"
# Function

# + id="2sPhGBSzyvmb" outputId="dc75ef17-7110-4df0-d5cf-b2aa91db157c" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Define a Python function
def function_to_get_faster(x, y, b):
    x = tf.matmul(x, y)
    x = x + b
    return x

# Create a `Function` object that contains a graph
a_function_that_uses_a_graph = tf.function(function_to_get_faster)

# Make some tensors
x1 = tf.constant([[1.0, 2.0]])
y1 = tf.constant([[2.0], [3.0]])
b1 = tf.constant(4.0)

# It just works!
a_function_that_uses_a_graph(x1, y1, b1)

# + id="nJZPetW9zkQi" outputId="2ec3d506-b078-4cf2-866e-df71720f6fdc" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(x1)

# + id="Ank_cYnqzxUn" outputId="dd20acfb-4b27-4349-bf85-4f403e5915fd" colab={"base_uri": "https://localhost:8080/", "height": 68}
print(y1)

# + id="UUAINVQCzzGy" outputId="3b01b9c8-380c-4a00-e19b-bb995eacb185" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(b1)

# + [markdown] id="iZWScjhh0YoI"
# Graph
#
# Graphs are data structures that contain a set of tf.Operation objects, which represent units of computation; and tf.Tensor objects, which represent the units of data that flow between operations.
#
# The way you create a graph in TensorFlow is to use tf.function, either as a direct call or as a decorator.
# + id="uB-BEIx10V-h" outputId="ed3994d8-236c-404a-9eff-870185b8f0ee" colab={"base_uri": "https://localhost:8080/", "height": 255} # Don't read the output too carefully. print(tf.autograph.to_code(function_to_get_faster)) # + [markdown] id="l8PgGVuF19lB" # Feed Forward # + id="ThqlME2q0rz8" n_input=4 n_hidden1=3 n_hidden2=3 n_output=2 weights = { 'w1': tf.Variable(tf.random.truncated_normal([n_input, n_hidden1], stddev=0.1)), 'w2': tf.Variable(tf.random.truncated_normal([n_hidden1, n_hidden2], stddev=0.1)), 'out': tf.Variable(tf.random.truncated_normal([n_hidden2, n_output], stddev=0.1)), } biases = { 'b1': tf.Variable(tf.constant(0.1, shape=[n_hidden1])), 'b2': tf.Variable(tf.constant(0.1, shape=[n_hidden2])), 'out': tf.Variable(tf.constant(0.1, shape=[n_output])) } # + id="EVNWKl9m2DN0" outputId="6c0cd8b3-df69-42d3-b107-6e87d6732e39" colab={"base_uri": "https://localhost:8080/", "height": 34} weights['w1'].shape # + id="r902T06G2JFv" outputId="06662bf6-1519-45cd-96a3-ebee373d2140" colab={"base_uri": "https://localhost:8080/", "height": 34} weights['w2'].shape # + id="XVQLeVkJ2Kwl" outputId="0a8a5415-a345-47ed-da22-77b248f01cbc" colab={"base_uri": "https://localhost:8080/", "height": 34} biases['out'].shape # + id="hHX1EfVy2MoJ" outputId="e4aec23f-d7a2-4e23-c69f-692d7445dfc6" colab={"base_uri": "https://localhost:8080/", "height": 34} weights['out'].shape # + id="zdUaNRp82jJU" x=tf.Variable([[1.0, 2.0, 3.0,4.0], [4.0, 5.0, 6.0,7.0],[7.0,8.0,9.0,10.0]]) output=tf.Variable([[1.0, 2.0], [4.0, 5.0],[6.0,5.0]]) # + id="WTRcJeVk2q68" outputId="a277bf1c-78c6-47a2-8e21-e7a760b73928" colab={"base_uri": "https://localhost:8080/", "height": 34} output.shape # + id="Jvlw2HoA2rsX" hidden_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['w1']),biases['b1'])) # hidden value calc hidden_2 = tf.nn.sigmoid(tf.add(tf.matmul(hidden_1, weights['w2']),biases['b2'])) y_hat = tf.tanh(tf.add(tf.matmul(hidden_2,weights['out']),biases['out'])) # + id="x-IG8gtb3sH0" 
outputId="5cf88f9d-7218-4924-a556-f5fc6325d7dd" colab={"base_uri": "https://localhost:8080/", "height": 85} y_hat # + id="5pAnVhJT3t2x" cce = tf.keras.losses.CategoricalCrossentropy() # + id="qj-0FZ4432jc" outputId="1aefce2f-0acf-4e98-905c-47bff23f9aa4" colab={"base_uri": "https://localhost:8080/", "height": 34} cce(y_hat, output).numpy() # + id="3eaFSV6838ht" mse = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.SUM) # + id="-i4oZpoV4smt" outputId="e6e18de3-f2dc-432d-da9c-acb8a067025a" colab={"base_uri": "https://localhost:8080/", "height": 34} mse(output, y_hat).numpy() # + id="M0H9rH8h4uP3" outputId="b480720f-4abe-4962-ee81-cd8fafdf8d9d" colab={"base_uri": "https://localhost:8080/", "height": 34} tf.nn.sigmoid((1*0.3) + (1*0.4) +(0*0.5) +(0*0.5)+0.5) # + id="2_YgHt_c4xhW" output=tf.Variable([[1.0,0.0]]) # + id="ygfgFUZz45El" y_hat=tf.Variable([[0.8037,0.1097]]) # + id="ZMyuw63M46za" outputId="54d393ae-5605-40c4-e208-d209e719496b" colab={"base_uri": "https://localhost:8080/", "height": 34} mse = tf.keras.losses.MeanSquaredError() mse(output, y_hat).numpy() # + id="DrsgmFmaAvmy"
Tensorflow_introduction.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import pandas as pd
import numpy as np

# # Preparation of validation data

# This notebook prepares input data from the dataset "[Mobilität in Deutschland 2017](http://www.mobilitaet-in-deutschland.de/) B2" (MiD2017) for model validation. The German Federal Ministry of Transport and Digital Infrastructure is copyright holder of this dataset and does not allow its publication. However, aggregate shares are already published on [Mobilität in Tabellen](https://mobilitaet-in-tabellen.dlr.de/mit/) and will be reproduced here.

input_path = '../input/transport_demand/'

trips = pd.read_csv(input_path + 'calibration_inter-cellular_trips_MiD2017.csv')
print(trips.shape)

trips = trips.groupby(['mode_model', 'purpose_vp']).size().unstack('purpose_vp').replace({np.nan: 0})

trips.plot.pie(subplots=True, figsize=(16, 4), legend=None)

trips.T.sum().plot.pie()

trips.to_csv('../input_static/mid2017_validation_normalised.csv')
quetzal_germany-master/notebooks/cal1x_validation_data.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# # Object classification

# [apoc](https://github.com/haesleinhuepf/apoc) allows classifying labeled objects according to properties such as size, shape and intensity in a corresponding image.

# +
from skimage.io import imread
import pyclesperanto_prototype as cle
import pandas as pd
import numpy as np
import apoc

cle.select_device('RTX')
# -

image = cle.push(imread('blobs.tif'))
labels = cle.push(imread('labels.tif'))
annotation = cle.push(imread('label_annotation.tif'))

cle.imshow(image)

cle.imshow(labels, labels=True)

cle.imshow(annotation, labels=True)

# ## Training

# +
features = 'area,mean_max_distance_to_centroid_ratio,standard_deviation_intensity'
cl_filename = "object_classifier.cl"

# Create an object classifier
apoc.erase_classifier(cl_filename)  # delete it if it was existing before
classifier = apoc.ObjectClassifier(cl_filename)

# train it
classifier.train(features, labels, annotation, image)
# -

# ## Prediction

# +
# determine object classification
result = classifier.predict(labels, image)
print(result.max())
cle.imshow(result, labels=True)

# +
# now, we reload the classifier from disc:
classifier = apoc.ObjectClassifier(cl_filename)
result = classifier.predict(labels, image)
print(result.max())
cle.imshow(result, labels=True)
# -
demo/demo_object_classification.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
class Date(object):
    def __init__(self, day=0, month=0, year=0):
        self.day = day
        self.month = month
        self.year = year

    @classmethod
    def from_string(cls, date_as_string):
        day, month, year = map(int, date_as_string.split('-'))
        date1 = cls(day, month, year)
        return date1

    @staticmethod
    def is_date_valid(date_as_string):
        day, month, year = map(int, date_as_string.split('-'))
        return day <= 31 and month <= 12 and year <= 3999


date2 = Date.from_string('11-09-2012')
is_date = Date.is_date_valid('11-09-2012')
# -
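A brief illustration (not in the original notebook) of why `from_string` takes `cls` rather than hard-coding `Date`: a classmethod constructs instances of whatever class it is called on, so subclasses inherit the alternative constructor for free. The `DateTime` subclass below is hypothetical, added only for demonstration.

```python
class Date(object):
    def __init__(self, day=0, month=0, year=0):
        self.day = day
        self.month = month
        self.year = year

    @classmethod
    def from_string(cls, date_as_string):
        # cls is Date when called as Date.from_string,
        # and the subclass when called on a subclass
        day, month, year = map(int, date_as_string.split('-'))
        return cls(day, month, year)


class DateTime(Date):
    """Hypothetical subclass; adds nothing, just demonstrates inheritance."""


d = Date.from_string('11-09-2012')
dt = DateTime.from_string('11-09-2012')
print(type(d).__name__, type(dt).__name__)  # prints "Date DateTime"
```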
Jupyter/first-python-notebook/Class_practice.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Haberman Breast Cancer Survival Dataset # + from collections import Counter from sklearn.preprocessing import LabelEncoder from sklearn.metrics import accuracy_score from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import train_test_split from tensorflow import keras import matplotlib.pyplot as plt import numpy as np import pandas as pd import tensorflow as tf # - # load the haberman dataset and summarize the shape # define the location of the dataset url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/haberman.csv' # load the dataset df = pd.read_csv(url, header=None) # summarize shape df.shape # show summary statistics and plots of the haberman dataset # show summary statistics df.describe() # plot histograms df.hist() plt.show() # summarize the class ratio of the haberman dataset # define the dataset column names columns = ['age', 'year', 'nodes', 'class'] df.columns = columns # + # summarize the class distribution target = df['class'].values counter = Counter(target) for k, v in counter.items(): per = v / len(target) * 100 print('Class=%d. 
Count=%d, Percentage=%.3f%%' % (k, v, per)) # - # ### Neural Network Learning Dynamics # split into input and output columns X = df.iloc[:, 0:3] y = df.iloc[:, -1] # ensure all data are floating point values X = X.astype('float32') # encode strings to integer y = LabelEncoder().fit_transform(y) # split into train and test datasets X_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.5, stratify=y, random_state=3) # determine the number of input features n_features = X.shape[1] # define model model = keras.Sequential() model.add(keras.layers.Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,))) model.add(keras.layers.Dense(1, activation='sigmoid')) # compile the model model.compile(optimizer='adam', loss='binary_crossentropy') # fit the model history = model.fit(X_train, y_train, epochs=200, batch_size=16, verbose=0, validation_data=(x_test, y_test)) # predict test set yhat = model.predict_classes(x_test) # evaluate predictions score = accuracy_score(y_test, yhat) print('Accuracy: %.3f' % score) # plot learning curves plt.title('Learning curves') plt.xlabel('Epochs') plt.ylabel('Cross entropy') plt.plot(history.history['loss'], label='train') plt.plot(history.history['val_loss'], label='val') plt.legend() plt.show() # ### Robust Model Evaluation # load the dataset path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/haberman.csv' df = pd.read_csv(path, header=None) # split into input and output columns X, y = df.values[:, :-1], df.values[:, -1] # ensure all data are floating point values X = X.astype('float32') # encode strings to integer y = LabelEncoder().fit_transform(y) # prepare cross validation kfold = StratifiedKFold(10) # enumerate splits scores = list() for train_ix, test_ix in kfold.split(X, y): # split data X_train, X_test, y_train, y_test = X[train_ix], X[test_ix], y[train_ix], y[test_ix] # determine the number of input features n_features = X.shape[1] # define model model = 
keras.Sequential() model.add(keras.layers.Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,))) model.add(keras.layers.Dense(1, activation='sigmoid')) # compile the model model.compile(optimizer='adam', loss='binary_crossentropy') # fit the model model.fit(X_train, y_train, epochs=200, batch_size=16, verbose=0) # predict crisp class labels on the test set (predict_classes was removed in recent TensorFlow, so threshold the probabilities) yhat = (model.predict(X_test) > 0.5).astype(int) # evaluate predictions score = accuracy_score(y_test, yhat) print('>%.3f' % score) scores.append(score) # summarize all scores print('Mean Accuracy: %.3f (%.3f)' % (np.mean(scores), np.std(scores)))
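# The k-fold loop above uses `StratifiedKFold` so that the skewed class ratio of the Haberman data survives each split. A quick sketch on synthetic labels (an illustrative assumption, not the Haberman data itself) confirms that every fold keeps the minority fraction intact:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Synthetic imbalanced labels: 90 negatives, 10 positives (illustrative only)
y = np.array([0] * 90 + [1] * 10)
X = np.zeros((100, 3))  # features are irrelevant for the split itself

kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
ratios = []
for _, test_ix in kfold.split(X, y):
    ratios.append(y[test_ix].mean())  # fraction of positives in each fold

print(ratios)
```

# With 100 samples and 10 folds, every fold receives exactly one positive out of ten samples, so each entry in `ratios` is 0.1 — a plain (non-stratified) `KFold` would give folds with zero or several positives.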
neural_networks/02 Develop a Neural Network for Cancer Survival Dataset/Develop a Neural Network for Cancer Survival Dataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MAT281 - Lab N°08 # # <a id='p1'></a> # ## I.- Problem 01 # # # <img src="https://www.cardrates.com/wp-content/uploads/2020/08/shutterstock_576998230.jpg?1" width="480" height="360" align="center"/> # # # The dataset is called `creditcard.csv` and consists of several columns with information about credit card fraud, where the **Class** column is 0 if the record is not a fraud and 1 if it is. # # This exercise works through the problem of imbalanced classes. Let us look at the first five rows of the dataset: # + import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix,accuracy_score,recall_score,precision_score,f1_score from sklearn.dummy import DummyClassifier from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier # %matplotlib inline sns.set_palette("deep", desat=.6) sns.set(rc={'figure.figsize':(11.7,8.27)}) # - # load data df = pd.read_csv(os.path.join("data","creditcard.csv"), sep=";") df.head() # Let us compare the total number of frauds against the non-fraud cases: # # + # compute proportions df_count = pd.DataFrame() df_count["fraude"] =["no","si"] df_count["total"] = df["Class"].value_counts() df_count["porcentaje"] = 100*df_count["total"]/df_count["total"].sum() df_count # - # Less than 1% of the records are fraudulent. The questions that arise are: # # * What should the training and test sets look like? # * Which models should we use? # * Which metrics should we use? 
# # For example, let us fit a logistic regression model and apply the standard procedure: # + # data y = df.Class X = df.drop('Class', axis=1) # split dataset X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=27) # create the model lr = LogisticRegression(solver='liblinear').fit(X_train, y_train) # predict lr_pred = lr.predict(X_test) # compute accuracy accuracy_score(y_test, lr_pred) # - # Overall the model has an **accuracy** of 99.9%, so one might assume it predicts almost perfectly, but that is far from true. To see why, it is necessary to follow these steps: # ### 1. Change the performance metric # # The first step is to compare several metrics; for that we use the 4 classic metrics covered in the course: # * accuracy # * precision # * recall # * f-score # # At this point you must report the corresponding metrics and comment on your results. # + # metrics y_true = list(y_test) y_pred = list(lr.predict(X_test)) print('\nConfusion matrix:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetrics:\n ') print('accuracy: ',accuracy_score(y_test, lr_pred)) print('recall: ',recall_score(y_test, lr_pred)) print('precision: ',precision_score(y_test, lr_pred)) print('f-score: ',f1_score(y_test, lr_pred)) print("") # - # We see a very high accuracy, but this is because the classes are heavily imbalanced: the model could answer "not fraud" every time and still reach an extremely high accuracy, so we cannot conclude that the model is good. # ### 2. Change the algorithm # # The second step is to compare different models; keep in mind that the model you use must solve the supervised classification problem. # # At this point you must fit a **random forest** model, apply the metrics and compare against the logistic regression model. 
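# The imports at the top of the lab include `DummyClassifier`, which is never actually used; a hedged sketch on synthetic labels (an illustrative 99/1 imbalance, not the creditcard data) makes the point about misleading accuracy explicit — a classifier that always answers "not fraud" scores 99% accuracy while catching zero frauds:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

# Synthetic 99%/1% imbalance, mimicking the fraud ratio (illustrative only)
rng = np.random.RandomState(27)
X_toy = rng.randn(1000, 5)
y_toy = np.zeros(1000, dtype=int)
y_toy[:10] = 1  # 1% "fraud"

# Always predicts the majority class, learning nothing from the features
dummy = DummyClassifier(strategy='most_frequent').fit(X_toy, y_toy)
pred = dummy.predict(X_toy)

print('accuracy:', accuracy_score(y_toy, pred))  # 0.99 without learning anything
print('recall:  ', recall_score(y_toy, pred))    # 0.0 -- every fraud is missed
```

# This baseline is the yardstick any real model must beat on recall, not on accuracy.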
# + # train model rfc = RandomForestClassifier(n_estimators=10).fit(X_train, y_train) # random forest algorithm # + # metrics y_true = list(y_test) y_pred = list(rfc.predict(X_test)) # random forest predictions print('\nConfusion matrix:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetrics:\n ') print('accuracy: ',accuracy_score(y_true,y_pred)) print('recall: ',recall_score(y_true,y_pred)) print('precision: ',precision_score(y_true,y_pred)) print('f-score: ',f1_score(y_true,y_pred)) print("") # - # With 10 trees this model approximates better than the previous one, but given the class imbalance it is still hard to claim that the model is good. # ### 3. Resampling techniques: oversampling the minority class # # The third step is to use resampling techniques, here on the minority class. This means that through resampling we try to bring the number of minority-class examples up to the size of the majority class. # + from sklearn.utils import resample # concatenate the training set X = pd.concat([X_train, y_train], axis=1) # separate the classes not_fraud = X[X.Class==0] fraud = X[X.Class==1] # resample the minority class fraud_upsampled = resample(fraud, replace=True, # sample with replacement n_samples=len(not_fraud), # match number in majority class random_state=27) # reproducible results # recombine the results upsampled = pd.concat([not_fraud, fraud_upsampled]) # check the number of elements per class upsampled.Class.value_counts() # - # oversampled training data y_train = upsampled.Class X_train = upsampled.drop('Class', axis=1) # Using these new training sets, fit the logistic regression model again and compute the corresponding metrics. Also, justify the advantages and disadvantages of this procedure. 
# + upsampled = LogisticRegression().fit(X_train, y_train) # logistic regression algorithm # metrics y_true = list(y_test) y_pred = list(upsampled.predict(X_test)) print('\nConfusion matrix:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetrics:\n ') print('accuracy: ',accuracy_score(y_true,y_pred)) print('recall: ',recall_score(y_true,y_pred)) print('precision: ',precision_score(y_true,y_pred)) print('f-score: ',f1_score(y_true,y_pred)) print("") # - # Precision dropped sharply, which supports the hypothesis that the model now flags most records as positive. This model is clearly not good either: it was trained on a class balance very different from the test data, so it trades precision away for recall. # ### 4. Resampling techniques - undersampling the majority class # # The fourth step is to use resampling techniques, this time on the majority class. This means that through resampling we try to bring the number of majority-class examples down to the size of the minority class. # + # resample the majority class not_fraud_downsampled = resample(not_fraud, replace = False, # sample without replacement n_samples = len(fraud), # match minority n random_state = 27) # reproducible results # recombine the results downsampled = pd.concat([not_fraud_downsampled, fraud]) # check the number of elements per class downsampled.Class.value_counts() # + # undersampled training data y_train = downsampled.Class X_train = downsampled.drop('Class', axis=1) # - # Using these new training sets, fit the logistic regression model again and compute the corresponding metrics. Also, justify the advantages and disadvantages of this procedure. 
# + undersampled = LogisticRegression().fit(X_train, y_train) # logistic regression model # metrics y_true = list(y_test) y_pred = list(undersampled.predict(X_test)) print('\nConfusion matrix:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetrics:\n ') print('accuracy: ',accuracy_score(y_true,y_pred)) print('recall: ',recall_score(y_true,y_pred)) print('precision: ',precision_score(y_true,y_pred)) print('f-score: ',f1_score(y_true,y_pred)) print("") # - # Again the model performs poorly: precision is very low because it flags many non-fraud records as fraud. # # Rebalancing the data like this is very useful for checking that a model works beyond one specific data distribution; the drawback is the extra work and time needed to resample and refit several times. In my opinion this process should be repeated with many more resamplings to be reliable: the three experiments could agree and the model could still be poor. # ### 5. Conclusions # # To finish the lab, you must carry out a comparative analysis of the different results obtained in steps 1-4, and draw your own conclusions about the case. # We can conclude that even when a metric on one dataset says a model is good, the work does not end there: we must verify that its accuracy, precision and recall do not change much when the data distribution changes. Rebalancing the data is a good sanity check, but it is a long and sometimes costly process, and ideally it should be repeated many times over different datasets before we can claim the model is good.
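# As a hedged aside beyond the lab statement: instead of resampling by hand, scikit-learn's `class_weight='balanced'` option reweights the loss so that the minority class counts more, which often recovers recall without touching the data. A sketch on synthetic data (illustrative only, not the creditcard set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# Synthetic imbalanced problem: 500 negatives, 25 shifted positives (illustrative)
rng = np.random.RandomState(27)
X_neg = rng.randn(500, 2)
X_pos = rng.randn(25, 2) + 2.0  # minority class shifted away from the origin
X_toy = np.vstack([X_neg, X_pos])
y_toy = np.array([0] * 500 + [1] * 25)

plain = LogisticRegression().fit(X_toy, y_toy)
balanced = LogisticRegression(class_weight='balanced').fit(X_toy, y_toy)

# Reweighting typically raises minority-class recall (at some cost in precision)
print('plain recall:   ', recall_score(y_toy, plain.predict(X_toy)))
print('balanced recall:', recall_score(y_toy, balanced.predict(X_toy)))
```

# Under the hood, `class_weight='balanced'` multiplies each sample's loss by `n_samples / (n_classes * count(class))`, which is equivalent in expectation to the oversampling done above but cheaper and deterministic.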
labs/lab_08.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # <img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;"> # # # HugeCTR training with HDFS example # ## Overview: # In version v3.4, we introduced support for HDFS. Users can now move their data and model files from HDFS to the local filesystem through our API to do HugeCTR training. After training, users can choose to dump the trained parameters and optimizer states into HDFS. In this example notebook, we demonstrate the end-to-end procedure of training with HDFS. # ## Table of contents: # - [Installation](#1) # * [Get HugeCTR from NGC](#11) # * [Build HugeCTR from Source Code](#12) # - [Hadoop Preparation](#2) # * [Download and Install Hadoop](#21) # * [HDFS Configuration](#22) # * [Launch HDFS cluster](#23) # - [Wide&Deep Demo](#3) # <a id="1"></a> # ## 1. Installation # # <a id="11"></a> # ### 1.1 Get HugeCTR from NGC # The HugeCTR Python module is preinstalled in the [Merlin Training Container](https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-training): `nvcr.io/nvidia/merlin/merlin-training:22.04`. # # You can check the existence of the required libraries by running the following Python code after launching this container. # ```bash # $ python3 -c "import hugectr" # ``` # <a id="12"></a> # ### 1.2 Build HugeCTR from Source Code # # If you want to build HugeCTR from the source code instead of using the NGC container, please refer to [How to Start Your Development](https://nvidia-merlin.github.io/HugeCTR/master/hugectr_contributor_guide.html#how-to-start-your-development). # <a id="2"></a> # ## 2. 
Hadoop Preparation # # <a id="21"></a> # ### 2.1 Download and Install Hadoop # # Download JDK first: # ```bash # wget https://download.java.net/java/GA/jdk16.0.2/d4a915d82b4c4fbb9bde534da945d746/7/GPL/openjdk-16.0.2_linux-x64_bin.tar.gz # tar -zxvf openjdk-16.0.2_linux-x64_bin.tar.gz # # mv jdk-16.0.2 /usr/local # ``` # # Set the Java environment variables: # ```bash # export JAVA_HOME=/usr/local/jdk-16.0.2 # export JRE_HOME=${JAVA_HOME}/jre # export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib # export PATH=.:${JAVA_HOME}/bin:$PATH # ``` # # Download and install Hadoop: # ```bash # wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz # tar -zxvf hadoop-3.3.1.tar.gz # # mv hadoop-3.3.1 /usr/local # ``` # <a id="22"></a> # ### 2.2 HDFS Configuration # Set the Hadoop environment variables: # ```bash # export HDFS_NAMENODE_USER=root # export HDFS_DATANODE_USER=root # export HDFS_SECONDARYNAMENODE_USER=root # export YARN_RESOURCEMANAGER_USER=root # export YARN_NODEMANAGER_USER=root # # echo 'export JAVA_HOME=/usr/local/jdk-16.0.2' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh # ``` # # `core-site.xml` config: # # ```xml # <property> # <name>fs.defaultFS</name> # <value>hdfs://namenode:9000</value> # </property> # <property> # <name>hadoop.tmp.dir</name> # <value>/usr/local/hadoop/tmp</value> # </property> # ``` # # `hdfs-site.xml` for the name node: # ```xml # <property> # <name>dfs.replication</name> # <value>4</value> # </property> # <property> # <name>dfs.namenode.name.dir</name> # <value>file:/usr/local/hadoop/hdfs/name</value> # </property> # <property> # <name>dfs.client.block.write.replace-datanode-on-failure.enable</name> # <value>true</value> # </property> # <property> # <name>dfs.client.block.write.replace-datanode-on-failure.policy</name> # <value>NEVER</value> # </property> # ``` # # `hdfs-site.xml` for the data nodes: # ```xml # <property> # <name>dfs.replication</name> # <value>4</value> # </property> # <property> # 
<name>dfs.datanode.data.dir</name> # <value>file:/usr/local/hadoop/hdfs/data</value> # </property> # <property> # <name>dfs.client.block.write.replace-datanode-on-failure.enable</name> # <value>true</value> # </property> # <property> # <name>dfs.client.block.write.replace-datanode-on-failure.policy</name> # <value>NEVER</value> # </property> # ``` # # `workers` file on every node: # ```bash # worker1 # worker2 # worker3 # worker4 # ``` # <a id="23"></a> # ### 2.3 Launch HDFS Cluster # # Enable ssh connections: # ```bash # ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa # # cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # /etc/init.d/ssh start # ``` # # On the name node: # ```bash # /usr/local/hadoop/bin/hdfs namenode -format # ``` # # On the data nodes: # ```bash # /usr/local/hadoop/bin/hdfs datanode -format # ``` # # Inside the name node: # ```bash # /usr/local/hadoop/sbin/start-dfs.sh # ``` # <a id="3"></a> # ## 3. Wide&Deep Demo # In your `nvcr.io/nvidia/merlin/merlin-training:22.04` docker: # Make sure you have installed Hadoop and set the proper environment variables as instructed in the last section. # # When compiling HugeCTR, make sure you set `DENABLE_HDFS` to `ON`. # # Then run `export CLASSPATH=$(hadoop classpath --glob)` first to link the required JAR files. # Then make sure the model files are on your Hadoop cluster and provide the correct paths to them. # # Now you can run the sample below. 
# + # %%writefile train_with_hdfs.py import hugectr from mpi4py import MPI from hugectr.data import DataSource, DataSourceParams data_source_params = DataSourceParams( use_hdfs = True, #whether use HDFS to save model files namenode = 'localhost', #HDFS namenode IP port = 9000, #HDFS port ) solver = hugectr.CreateSolver(max_eval_batches = 1280, batchsize_eval = 1024, batchsize = 1024, lr = 0.001, vvgpu = [[0]], repeat_dataset = True, data_source_params = data_source_params) reader = hugectr.DataReaderParams(data_reader_type = hugectr.DataReaderType_t.Norm, source = ['./wdl_norm/file_list.txt'], eval_source = './wdl_norm/file_list_test.txt', check_type = hugectr.Check_t.Sum) optimizer = hugectr.CreateOptimizer(optimizer_type = hugectr.Optimizer_t.Adam, update_type = hugectr.Update_t.Global, beta1 = 0.9, beta2 = 0.999, epsilon = 0.0000001) model = hugectr.Model(solver, reader, optimizer) model.add(hugectr.Input(label_dim = 1, label_name = "label", dense_dim = 13, dense_name = "dense", data_reader_sparse_param_array = # the total number of slots should be equal to data_generator_params.num_slot [hugectr.DataReaderSparseParam("wide_data", 2, True, 1), hugectr.DataReaderSparseParam("deep_data", 1, True, 26)])) model.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash, workspace_size_per_gpu_in_mb = 69, embedding_vec_size = 1, combiner = "sum", sparse_embedding_name = "sparse_embedding2", bottom_name = "wide_data", optimizer = optimizer)) model.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash, workspace_size_per_gpu_in_mb = 1074, embedding_vec_size = 16, combiner = "sum", sparse_embedding_name = "sparse_embedding1", bottom_name = "deep_data", optimizer = optimizer)) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape, bottom_names = ["sparse_embedding1"], top_names = ["reshape1"], leading_dim=416)) model.add(hugectr.DenseLayer(layer_type = 
hugectr.Layer_t.Reshape, bottom_names = ["sparse_embedding2"], top_names = ["reshape2"], leading_dim=1)) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Concat, bottom_names = ["reshape1", "dense"], top_names = ["concat1"])) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct, bottom_names = ["concat1"], top_names = ["fc1"], num_output=1024)) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU, bottom_names = ["fc1"], top_names = ["relu1"])) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Dropout, bottom_names = ["relu1"], top_names = ["dropout1"], dropout_rate=0.5)) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct, bottom_names = ["dropout1"], top_names = ["fc2"], num_output=1024)) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU, bottom_names = ["fc2"], top_names = ["relu2"])) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Dropout, bottom_names = ["relu2"], top_names = ["dropout2"], dropout_rate=0.5)) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct, bottom_names = ["dropout2"], top_names = ["fc3"], num_output=1)) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Add, bottom_names = ["fc3", "reshape2"], top_names = ["add1"])) model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.BinaryCrossEntropyLoss, bottom_names = ["add1", "label"], top_names = ["loss"])) model.compile() model.summary() model.load_dense_weights('/model/wdl/_dense_1000.model') model.load_dense_optimizer_states('/model/wdl/_opt_dense_1000.model') model.load_sparse_weights(['/model/wdl/0_sparse_1000.model', '/model/wdl/1_sparse_1000.model']) model.load_sparse_optimizer_states(['/model/wdl/0_opt_sparse_1000.model', '/model/wdl/1_opt_sparse_1000.model']) model.fit(max_iter = 1020, display = 200, eval_interval = 500, snapshot = 1000, snapshot_prefix = "/model/wdl/") # - # !python train_with_hdfs.py
notebooks/training_with_hdfs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.10 64-bit # language: python # name: python3 # --- import pandas as pd import numpy as np df = pd.read_csv('/home/rahul/Desktop/DATABASE/NBcombined.csv' , names=['Description' , 'Date' , 'Time' , "Open" , "High" , "Low" , "Close" , "Volume" , "OI"]) df2 = pd.read_csv("/home/rahul/Desktop/DATABASE/Ocombined.csv" , names=['Description' , 'Date' , 'Time' , "Open" , "High" , "Low" , "Close" , "Volume" , "OI"]) df.head() df2.head() df['Description'].value_counts() wk = df[df['Description'].str.contains('WK')] wk.head() wk.shape wk.to_csv('/home/rahul/Desktop/DATABASE/niftywk.csv' , index=False) other = df[~df['Description'].str.contains('WK')] nifty_strike_ot = other[other['Description'].str.contains('NIFTY[0-9][0-9][0-9][0-9][0-9]')] nifty_strike_ot.shape nifty_strike_ot.to_csv('/home/rahul/Desktop/DATABASE/niftyStrikeOT.csv' , index=False) other.head() nifty_date_strike = other[other['Description'].str.contains('^NIFTY[0-9][0-9][A-Z][A-Z][A-Z]')] nifty_date_strike.shape nifty_date_strike.to_csv('/home/rahul/Desktop/DATABASE/niftyDateStrike.csv' , index=False) banknifty_date_strike_with_year = df[df['Description'].str.contains('BANKNIFTY[0-9][0-9][A-Z][A-Z][A-Z][0-9][0-9][0-9][0-9][0-9][0-9][0-9]')] banknifty_date_strike_with_year.shape banknifty_date_strike_with_year.to_csv('/home/rahul/Desktop/DATABASE/bankniftyDateStrikeWithYear.csv' , index=False) banknifty_date_strike_without_year = df[df['Description'].str.contains('BANKNIFTY[0-9][0-9][A-Z][A-Z][A-Z][0-9][0-9][0-9][0-9][0-9][A-Z]')] banknifty_date_strike_without_year.shape banknifty_date_strike_without_year.to_csv('/home/rahul/Desktop/DATABASE/bankniftyDateStrikeWithoutYear.csv' , index=False)
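# The filters above hinge on `str.contains` with regular expressions; a minimal sketch on a toy frame (hypothetical contract names, not the real database) shows how the weekly/`WK` split and the fixed-width `^NIFTY[0-9][0-9][A-Z][A-Z][A-Z]` pattern behave:

```python
import pandas as pd

# Hypothetical contract descriptions (illustrative only)
toy = pd.DataFrame({'Description': ['NIFTYWK12000CE', 'NIFTY21JANFUT',
                                    'BANKNIFTY21JAN35000CE', 'NIFTY15000PE']})

# Weekly contracts contain the literal substring 'WK'
weekly = toy[toy['Description'].str.contains('WK')]
other = toy[~toy['Description'].str.contains('WK')]

# Character classes match fixed-width digit/letter runs, as in the notebook;
# the ^ anchor keeps BANKNIFTY symbols out of the NIFTY bucket
date_strike = other[other['Description'].str.contains('^NIFTY[0-9][0-9][A-Z][A-Z][A-Z]')]
print(len(weekly), len(other), len(date_strike))
```

# Here only 'NIFTYWK12000CE' is weekly, and of the remaining three only 'NIFTY21JANFUT' matches the date-style pattern ('BANKNIFTY…' fails the `^NIFTY` anchor, 'NIFTY15000PE' has digits where letters are required).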
HistoricalData/database.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Handwritten digits (MNIST) recognition with a neural network # Based on a book by <NAME>, <a href="http://neuralnetworksanddeeplearning.com">"Neural Networks and Deep Learning"</a>, Determination Press, 2015 # # #### Step 2: # import numpy as np # definition of sigmoid function def sigmoid(x): return 1.0/(1.0 + np.exp(-x)) class Network(object): def __init__(self, sizes): self.num_layers = len(sizes) self.sizes = sizes # np.random.randn(num_rows, num_cols) # for [2, 3, 1] we will have 3 biases between 2 input neurons and 3 neurons in the hidden layer # and 1 bias between 3 neurons in the hidden layer and one output neuron # bias is for the neuron in a given layer (excluding input layer) self.biases = [np.random.randn(y, 1) for y in sizes[1:]] # for weights and the same list [2, 3, 1] we will have: # 3 rows and 2 columns for weights between 2 input layer neurons (columns) and 3 neurons in the hidden layer # 1 row and 3 columns for weights between 3 neurons in the hidden layer and 1 neuron in the output layer self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])] def feedforward(self, a): '''return the output of the network for given input a''' for b, w in zip(self.biases, self.weights): a = sigmoid(np.dot(w, a) + b) # each layer's activation feeds the next layer return a # definition of a network with 2 neurons in input layer, 3 neurons in hidden layer, 1 output neuron net = Network([2, 3, 1]) # Let's look at the feedforward function # input layer a = np.array([[0.8], [0.2]]) print(a) print(a.shape)
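# A standalone sketch of the same forward pass (fixed seed and the same layer sizes as `Network([2, 3, 1])`, so the numbers are illustrative) shows each layer's activation feeding the next, with the sigmoid keeping every output in (0, 1):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.RandomState(0)
# Same shapes as Network([2, 3, 1]): biases per non-input layer, weights per layer pair
biases = [rng.randn(3, 1), rng.randn(1, 1)]
weights = [rng.randn(3, 2), rng.randn(1, 3)]

a = np.array([[0.8], [0.2]])  # input layer activation, shape (2, 1)
for b, w in zip(biases, weights):
    a = sigmoid(np.dot(w, a) + b)  # (3,1) after the hidden layer, (1,1) after the output

print(a.shape)  # one activation per output neuron
```

# The key point is that `a` is overwritten on every iteration: the hidden layer's (3, 1) activation becomes the input of the output layer, ending with a single (1, 1) value strictly between 0 and 1.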
03_digit_recognition_step02.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The pros and cons of inheritance # > (We) promoted inheritance in the hope that novices could smoothly use frameworks that only experts could design. # > ——<NAME>, "The Early History of Smalltalk" # # This chapter discusses inheritance and subclassing, focusing on two details that are especially important for Python: # * the drawbacks of subclassing built-in types # * multiple inheritance and the method resolution order (MRO) # ## Drawbacks of subclassing built-in types # Essentially, the methods of built-in types do not call methods overridden by subclasses, so avoid subclassing built-in types whenever you can. # If you need classes that behave like `list`, `dict`, etc., the `collections` module provides `UserDict`, `UserList` and `UserString`, which are specially designed for users to inherit from and are therefore easy to extend. # + # drawbacks of subclassing a built-in type class DoppelDict(dict): def __setitem__(self, key, value): super().__setitem__(key, [value] * 2) # neither the constructor nor update calls the subclass's __setitem__ dd = DoppelDict(one=1) dd['two'] = 2 print(dd) dd.update(three=3) print(dd) from collections import UserDict class DoppelDict2(UserDict): def __setitem__(self, key, value): super().__setitem__(key, [value] * 2) # with UserDict, both the constructor and update go through the subclass's __setitem__ dd = DoppelDict2(one=1) dd['two'] = 2 print(dd) dd.update(three=3) print(dd) # - # ## Method Resolution Order # The authoritative treatment: [The Python 2.3 Method Resolution Order](https://www.python.org/download/releases/2.3/mro/) # > Moreover, unless you make strong use of multiple inheritance and you have non-trivial hierarchies, you don't need to understand the C3 algorithm, and you can easily skip this paper. # # Emmm… # # OK, two quick points: # 1. To inspect a class's method resolution order, read its `__mro__` attribute; # 2. To bypass the MRO and reach a particular ancestor's method, call the unbound method on that class directly. # + class A: def f(self): print('A') class B(A): def f(self): print('B') class C(A): def f(self): print('C') class D(B, C): def f(self): print('D') def b_f(self): "D -> B" super().f() def c_f(self): "B -> C" super(B, self).f() # C.f(self) def a_f(self): "C -> A" super(C, self).f() print(D.__mro__) d = D() d.f() d.b_f() d.c_f() d.a_f() # - # ## Coping with multiple inheritance # The book lists some recommendations for handling multiple inheritance, to avoid confusing and brittle inheritance designs: # 1. Distinguish interface inheritance from implementation inheritance # If inheritance reuses implementation details, you can usually switch to composition and delegation instead. # 2. Make interfaces explicit with abstract base classes # If the purpose of a base class is to define an interface, make it an abstract base class. # 3. 
Reuse code through mixins # If the purpose of a class is to provide method implementations for several unrelated subclasses — enabling reuse without implying an "is-a" relationship — define it explicitly as a **mixin class**. On mixins, see the Python3-Cookbook section ["Extending Classes with Mixins"](https://python3-cookbook.readthedocs.io/zh_CN/latest/c08/p18_extending_classes_with_mixins.html). # 4. Make mixins explicit by naming # Use an `XXMixin` name to make clear that the class is a mixin. # 5. An abstract base class may be a mixin; the reverse is not true. # 6. Don't subclass from more than one concrete class # When designing a subclass, don't inherit from several **concrete base classes**. A subclass should have at most one concrete base class; any remaining bases should preferably be mixins that provide enhancements. # 7. Provide aggregate classes to users # If some combination of ABCs or mixins is particularly useful to client code, provide a class that brings them together through multiple inheritance, so users can inherit directly from your aggregate class without wiring in the mixins themselves. # 8. "Favor object composition over class inheritance" # Composition and delegation can replace mixins in providing behavior to different classes, but they cannot replace interface inheritance for **defining a hierarchy of types**. # Two real-world examples: # * The very old `tkinter` serves as a cautionary tale, from a time when people had not yet fully recognized the drawbacks of multiple inheritance; # * Modern `Django` makes good use of inheritance and mixins: it provides a great many `View` classes and encourages users to reuse them to avoid piles of boilerplate code.
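# A minimal sketch of recommendations 3, 4 and 6 (the class names here are hypothetical): one mixin supplies a single reusable behavior, and the aggregate subclass combines it with exactly one concrete base:

```python
import json

class JsonExportMixin:
    """Mixin: adds JSON export to any class whose state lives in __dict__ (an assumption)."""
    def to_json(self):
        return json.dumps(self.__dict__, sort_keys=True)

class Point:
    """The single concrete base class."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

# One concrete base + one mixin; the mixin never stands alone and defines no __init__
class JsonPoint(JsonExportMixin, Point):
    pass

p = JsonPoint(1, 2)
print(p.to_json())  # {"x": 1, "y": 2}
```

# The mixin carries no state of its own and makes no sense to instantiate directly — exactly the "reuse without is-a" relationship described above.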
12-inheritance.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <h1 align="center">Segmentation Evaluation</h1> # # **Summary:** # # 1. SimpleITK supports two ways of combining expert segmentations to obtain a reference segmentation. # 2. A variety of criteria used for evaluating a segmentation result are readily available or implemented in SimpleITK. # # <u>Reference Segmentation</u> # # Evaluating segmentation algorithms is most often done using reference data to which you compare your results. In the medical domain reference data is commonly obtained via manual segmentation by an expert (don't forget to thank your clinical colleagues for their hard work). When you are resource limited, the reference data may be defined by a single expert. This is less than ideal. When multiple experts provide you with their input then you can potentially combine them to obtain reference data that is closer to the ever elusive "ground truth". In this notebook we show two approaches to combining input from multiple observers, majority vote and the Simultaneous Truth and Performance Level # Estimation [(STAPLE)](https://www.ncbi.nlm.nih.gov/pubmed/15250643) algorithm. # # <u>Segmentation Evaluation</u> # # Once we have a reference, we compare the algorithm's performance using multiple criteria, as usually there is no single evaluation measure that conveys all of the relevant information. 
In this notebook we illustrate the use of the following evaluation criteria: # * Overlap measures: # * Jaccard and Dice coefficients # * false negative and false positive errors # * Surface distance measures: # * Hausdorff distance (symmetric) # * mean, median, max and standard deviation between surfaces # * Volume measures: # * volume similarity $ \frac{2*(v1-v2)}{v1+v2}$ # # The relevant criteria are task dependent, so you need to ask yourself whether you are interested in detecting spurious errors or not (mean or max surface distance), whether over/under segmentation should be differentiated (volume similarity and Dice or just Dice), and what is the ratio between acceptable errors and the size of the segmented object (Dice coefficient may be too sensitive to small errors when the segmented object is small and not sensitive enough to large errors when the segmented object is large). # # In the context of segmentation challenges, algorithm rankings are often based on a weighted combination of these criteria. These ranking schemes are not necessarily robust, as discussed in "[Why rankings of biomedical image analysis competitions should be interpreted with care](https://www.nature.com/articles/s41467-018-07619-7)", <NAME> et al. # # The data we use in the notebook is a set of manually segmented liver tumors from a single clinical CT scan. A larger dataset (four scans) is freely available from this [MIDAS repository](http://www.insight-journal.org/midas/collection/view/38). The relevant publication is: <NAME> et al., "Tumor Volume Measurement and Volume Measurement Comparison Plug-ins for VolView Using ITK", SPIE Medical Imaging: Visualization, Image-Guided Procedures, and Display, 2006. 
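# Before turning to SimpleITK's filters, the overlap and volume measures listed above can be sketched in plain NumPy on two toy binary masks (illustrative values only):

```python
import numpy as np

# Two toy binary masks (1 = foreground); purely illustrative
a = np.array([[1, 1, 0],
              [1, 0, 0]])
b = np.array([[1, 0, 0],
              [1, 1, 0]])

intersection = np.logical_and(a, b).sum()      # voxels labeled in both masks
union = np.logical_or(a, b).sum()              # voxels labeled in either mask
jaccard = intersection / union
dice = 2 * intersection / (a.sum() + b.sum())
v1, v2 = a.sum(), b.sum()
volume_similarity = 2 * (v1 - v2) / (v1 + v2)  # 0 when the volumes are equal

print(jaccard, dice, volume_similarity)
```

# With these masks, intersection = 2 and union = 4, giving Jaccard 0.5 and Dice 2/3; both volumes are 3 voxels, so volume similarity is 0 even though the masks disagree — which is why volume similarity alone cannot replace the overlap measures.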
# # + import SimpleITK as sitk import numpy as np from downloaddata import fetch_data as fdata # %matplotlib inline import matplotlib.pyplot as plt import gui from ipywidgets import interact, fixed # - # ## Utility method for display # + code_folding=[] def display_with_overlay(segmentation_number, slice_number, image, segs, window_min, window_max): """ Display a CT slice with segmented contours overlaid onto it. The contours are the edges of the labeled regions. """ img = image[:,:,slice_number] msk = segs[segmentation_number][:,:,slice_number] overlay_img = sitk.LabelMapContourOverlay(sitk.Cast(msk, sitk.sitkLabelUInt8), sitk.Cast(sitk.IntensityWindowing(img, windowMinimum=window_min, windowMaximum=window_max), sitk.sitkUInt8), opacity = 1, contourThickness=[2,2]) #We assume the original slice is isotropic, otherwise the display would be distorted plt.imshow(sitk.GetArrayViewFromImage(overlay_img)) plt.axis('off') plt.show() # - # ## Fetch the data # # Retrieve a single CT scan and three manual delineations of a liver tumor. Visual inspection of the data highlights the variability between experts. # + image = sitk.ReadImage(fdata("liverTumorSegmentations/Patient01Homo.mha")) segmentation_file_names = ["liverTumorSegmentations/Patient01Homo_Rad01.mha", "liverTumorSegmentations/Patient01Homo_Rad02.mha", "liverTumorSegmentations/Patient01Homo_Rad03.mha"] segmentations = [sitk.ReadImage(fdata(file_name), sitk.sitkUInt8) for file_name in segmentation_file_names] interact(display_with_overlay, segmentation_number=(0,len(segmentations)-1), slice_number = (0, image.GetSize()[2]-1), image = fixed(image), segs = fixed(segmentations), window_min = fixed(-1024), window_max=fixed(976)); # - # ## Derive a reference # # There are a variety of ways to derive a reference segmentation from multiple expert inputs ("[A comparison of ground truth estimation methods](https://www.ncbi.nlm.nih.gov/pubmed/20033494)", <NAME>, <NAME>, <NAME>). 
# # Two methods that are available in SimpleITK are [majority vote](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1LabelVotingImageFilter.html) and the STAPLE algorithm ([single label](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1STAPLEImageFilter.html) or [multi label](https://itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1MultiLabelSTAPLEImageFilter.html)). # + # Use the STAPLE algorithm to obtain the reference segmentation. This implementation of the original algorithm # combines a single label from multiple segmentations, the label is user specified. The result of the # filter is the voxel's probability of belonging to the foreground. We then have to threshold the result to obtain # a reference binary segmentation. foregroundValue = 1 threshold = 0.95 reference_segmentation_STAPLE_probabilities = sitk.STAPLE(segmentations, foregroundValue) # We use the overloaded operator to perform thresholding, another option is to use the BinaryThreshold function. reference_segmentation = reference_segmentation_STAPLE_probabilities > threshold manual_plus_staple = list(segmentations) # Append the reference segmentation to the list of manual segmentations manual_plus_staple.append(reference_segmentation) interact(display_with_overlay, segmentation_number=(0,len(manual_plus_staple)-1), slice_number = (0, image.GetSize()[2]-1), image = fixed(image), segs = fixed(manual_plus_staple), window_min = fixed(-1024), window_max=fixed(976)); # - # ## Evaluate segmentations using the reference # # Once we derive a reference from our experts' input we can compare segmentation results to it. # # Note that in this notebook we compare the expert segmentations to the reference derived from them. This is not relevant for algorithm evaluation, but it can potentially be used to rank your experts. # # In this specific implementation we take advantage of the fact that we have a binary segmentation with 1 for foreground and 0 for background. 
# + from enum import Enum # Use enumerations to represent the various evaluation measures class OverlapMeasures(Enum): jaccard, dice, volume_similarity, false_negative, false_positive = range(5) class SurfaceDistanceMeasures(Enum): hausdorff_distance, mean_surface_distance, median_surface_distance, std_surface_distance, max_surface_distance = range(5) # Empty numpy arrays to hold the results overlap_results = np.zeros((len(segmentations),len(OverlapMeasures.__members__.items()))) surface_distance_results = np.zeros((len(segmentations),len(SurfaceDistanceMeasures.__members__.items()))) # Compute the evaluation criteria # Note that for the overlap measures filter, because we are dealing with a single label we # use the combined, all labels, evaluation measures without passing a specific label to the methods. overlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter() hausdorff_distance_filter = sitk.HausdorffDistanceImageFilter() # Use the absolute values of the distance map to compute the surface distances (distance map sign, outside or inside # relationship, is irrelevant) label = 1 reference_distance_map = sitk.Abs(sitk.SignedMaurerDistanceMap(reference_segmentation, squaredDistance=False, useImageSpacing=True)) reference_surface = sitk.LabelContour(reference_segmentation) statistics_image_filter = sitk.StatisticsImageFilter() # Get the number of pixels in the reference surface by counting all pixels that are 1. 
statistics_image_filter.Execute(reference_surface)
num_reference_surface_pixels = int(statistics_image_filter.GetSum())

for i, seg in enumerate(segmentations):
    # Overlap measures
    overlap_measures_filter.Execute(reference_segmentation, seg)
    overlap_results[i,OverlapMeasures.jaccard.value] = overlap_measures_filter.GetJaccardCoefficient()
    overlap_results[i,OverlapMeasures.dice.value] = overlap_measures_filter.GetDiceCoefficient()
    overlap_results[i,OverlapMeasures.volume_similarity.value] = overlap_measures_filter.GetVolumeSimilarity()
    overlap_results[i,OverlapMeasures.false_negative.value] = overlap_measures_filter.GetFalseNegativeError()
    overlap_results[i,OverlapMeasures.false_positive.value] = overlap_measures_filter.GetFalsePositiveError()
    # Hausdorff distance
    hausdorff_distance_filter.Execute(reference_segmentation, seg)
    surface_distance_results[i,SurfaceDistanceMeasures.hausdorff_distance.value] = hausdorff_distance_filter.GetHausdorffDistance()
    # Symmetric surface distance measures
    segmented_distance_map = sitk.Abs(sitk.SignedMaurerDistanceMap(seg, squaredDistance=False, useImageSpacing=True))
    segmented_surface = sitk.LabelContour(seg)

    # Multiply the binary surface segmentations with the distance maps. The resulting distance
    # maps contain non-zero values only on the surface (they can also contain zero on the surface)
    seg2ref_distance_map = reference_distance_map*sitk.Cast(segmented_surface, sitk.sitkFloat32)
    ref2seg_distance_map = segmented_distance_map*sitk.Cast(reference_surface, sitk.sitkFloat32)

    # Get the number of pixels in the segmented surface by counting all pixels that are 1.
    statistics_image_filter.Execute(segmented_surface)
    num_segmented_surface_pixels = int(statistics_image_filter.GetSum())

    # Get all non-zero distances and then add zero distances if required.
seg2ref_distance_map_arr = sitk.GetArrayViewFromImage(seg2ref_distance_map) seg2ref_distances = list(seg2ref_distance_map_arr[seg2ref_distance_map_arr!=0]) seg2ref_distances = seg2ref_distances + \ list(np.zeros(num_segmented_surface_pixels - len(seg2ref_distances))) ref2seg_distance_map_arr = sitk.GetArrayViewFromImage(ref2seg_distance_map) ref2seg_distances = list(ref2seg_distance_map_arr[ref2seg_distance_map_arr!=0]) ref2seg_distances = ref2seg_distances + \ list(np.zeros(num_reference_surface_pixels - len(ref2seg_distances))) all_surface_distances = seg2ref_distances + ref2seg_distances # The maximum of the symmetric surface distances is the Hausdorff distance between the surfaces. In # general, it is not equal to the Hausdorff distance between all voxel/pixel points of the two # segmentations, though in our case it is. More on this below. surface_distance_results[i,SurfaceDistanceMeasures.mean_surface_distance.value] = np.mean(all_surface_distances) surface_distance_results[i,SurfaceDistanceMeasures.median_surface_distance.value] = np.median(all_surface_distances) surface_distance_results[i,SurfaceDistanceMeasures.std_surface_distance.value] = np.std(all_surface_distances) surface_distance_results[i,SurfaceDistanceMeasures.max_surface_distance.value] = np.max(all_surface_distances) # Print the matrices np.set_printoptions(precision=3) print(overlap_results) print(surface_distance_results) # - # ## Improved output # # Using the [pandas](http://pandas.pydata.org/) package we can easily produce high quality output. 
# + import pandas as pd from IPython.display import display, HTML # Graft our results matrix into pandas data frames overlap_results_df = pd.DataFrame(data=overlap_results, index = list(range(len(segmentations))), columns=[name for name, _ in OverlapMeasures.__members__.items()]) surface_distance_results_df = pd.DataFrame(data=surface_distance_results, index = list(range(len(segmentations))), columns=[name for name, _ in SurfaceDistanceMeasures.__members__.items()]) # Display the data as HTML tables and graphs display(HTML(overlap_results_df.to_html(float_format=lambda x: '%.3f' % x))) display(HTML(surface_distance_results_df.to_html(float_format=lambda x: '%.3f' % x))) overlap_results_df.plot(kind='bar').legend(bbox_to_anchor=(1.6,0.9)) surface_distance_results_df.plot(kind='bar').legend(bbox_to_anchor=(1.6,0.9)) # - # You can also export the data as a table for your LaTeX manuscript using the [to_latex](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_latex.html) function. # <b>Note</b>: You will need to add the \usepackage{booktabs} to your LaTeX document's preamble. # # To create the minimal LaTeX document which will allow you to see the difference between the tables below, copy paste: # # \documentclass{article} # # \usepackage{booktabs} # # \begin{document} # # paste the tables here # # \end{document} # # # + # The formatting of the table using the default settings is less than ideal print(overlap_results_df.to_latex()) # We can improve on this by specifying the table's column format and the float format print(overlap_results_df.to_latex(column_format='ccccccc', float_format=lambda x: '%.3f' % x)) # - # ## Visual Diff # # It is always nice to have a figure with a visual display of the difference between the segmentation and ground truth. 
# + simpleitk_error_allowed="Exception thrown in SimpleITK Show:" # Use the first segmentation segmentation = segmentations[0] # Save ink, the differences will be in black and background is white segmentation_diff = (segmentation==reference_segmentation)*255 # Flatten for 2D presentation, create a montage from the volume num_slices = segmentation_diff.GetDepth() tile_w = int(np.sqrt(num_slices)) tile_h = int(np.ceil(num_slices/tile_w)) default_background_color = 255 tile_image = sitk.Tile([segmentation_diff[:,:,i] for i in range(num_slices)], (tile_w, tile_h), default_background_color) sitk.Show(tile_image) # - # <a href="09_results_visualization.ipynb"><h2 align=right>Next &raquo;</h2></a>
08_segmentation_evaluation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #imports import wisps import splat import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns from tqdm import tqdm # %matplotlib inline # + #constants sns.set_palette('husl') fltrswfc3= ['WFC3_{}'.format(k) for k in ['F110W', 'F140W', 'F160W']] #load spectra bonafide_data=pd.read_pickle(wisps.OUTPUT_FILES+'/validated_spectra.pkl') # + #load spectra #bonafide_spectra # + #functions def measure_all_mags(sp): mags={} for k in fltrswfc3: mags.update( measure_mag(sp, k)) return mags def measure_mag(sp, flt): #flux calibrate #sp.trim([1.1, 1.7]) plt.plot(sp.wave, sp.flux) mag, mag_unc = splat.filterMag(sp, flt, ab=True, nsamples=1000) return {flt: [mag, mag_unc], 'spectra':sp } def scale_spectrum(sp): med_flux=np.nanmedian(sp.flux.value[np.logical_and(sp.wave.value >=1.2, sp.wave.value<=1.3 )]) #print (med_flux) sp.scale(REF_FLUX/med_flux) return sp def make_wisp_spectra(sp): try: #print (sp.noise.value) return wisps.Spectrum(wave=sp.wave.value, flux=sp.flux.value, noise=sp.noise.value, \ contam=np.ones(len(sp.flux.value))) except: return # - spectral_types=np.vstack(bonafide_data)[:,0] bonafide_spectra=np.vstack(bonafide_data)[:,1] # + #spectral_types # - ydwrf=bonafide_spectra[spectral_types>38.0][0] # + #splat.filterMag(ydwrf, 'WFC3_F110W') # + #reference mdwarf REF_MDWARF=bonafide_spectra[spectral_types==17.][0] #wavelength range REF_FLUX= np.nanmedian(REF_MDWARF.flux[np.logical_and(REF_MDWARF.wave.value >=1.2, REF_MDWARF.wave.value<=1.3 )]) REF_MAG={} for k in fltrswfc3: REF_MAG.update(measure_mag(REF_MDWARF, k)) # - REF_MAG scaled_spectra= [scale_spectrum(x) for x in bonafide_spectra] scale_mags=[measure_all_mags(xi) for xi in tqdm(scaled_spectra) ] # + for x in scaled_spectra: plt.plot(x.wave, x.flux, alpha=0.1, 
c='#0074D9') plt.plot(REF_MDWARF.wave, REF_MDWARF.flux, c='#111111') plt.xlim([1.1, 1.7]) plt.ylim([0.0, 0.5e-14]) plt.xlabel('Wavelength') plt.ylabel('Flux (normalized)') # + # #splat.filterMag? # + #splat.filterMag(ydwrf, '2MASS H', ab=True, nsamples=1000) # - df=pd.DataFrame() df['wispsp']=[make_wisp_spectra(x) for x in bonafide_spectra] nones=df.wispsp.isna().values snr_df=pd.DataFrame.from_records(df['wispsp'][~nones].apply(lambda x: x.snr).values) # + #snr.snr1.plot(kind='hist') # - mag_df=pd.DataFrame.from_records(scale_mags)[~nones] #mag_df['wispectra']=bonafide_spectra['wispsp'] mag_df['snr1']=snr_df.snr1 mag_df['snr3']=snr_df.snr3 mag_df['medsnr']=snr_df.snr4 mag_df['spt']=spectral_types[~nones] #mag_df['spt_er']=[x.spectral_type[1] for x in bonafide_spectra] for k in ['F110W', 'F140W', 'F160W']: mgkey='WFC3_{}'.format(k) #compute offsets and uncertainties using standard error propagation mag_df['rel'+k]= np.vstack(mag_df[mgkey].values)[:,0]-REF_MAG[mgkey][0] mag_df['rel'+k+'_er']=((np.vstack(mag_df[mgkey].values)[:,1])**2+(REF_MAG[mgkey][1])**2)**0.5 #delete outliers mag_df=(mag_df[mag_df.snr1 >1.]).reset_index(drop=True) mag_df.columns mag_df['dF110W']= 100**(mag_df['relF110W']/5) YET_ANOTHER_POLYNOMIAL={} #plt.plot(mag_df.spt, mag_df.relF140W, '^') mag_df=mag_df.dropna()#.spt.max() def tick_function(mags): return ['{:.1f} pc'.format(wisps.get_distance(0, x)) for x in mags] # + #mag_df # - fig, ax=plt.subplots(ncols=3, figsize=(12, 4), sharey=False) for idx, flt in enumerate(['rel'+k for k in ['F110W', 'F140W', 'F160W']]): mask, pol=wisps.fit_with_nsigma_clipping(mag_df.spt.values.astype(float), mag_df[flt].values.astype(float), mag_df[flt+'_er'].values.astype(float), n=6, sigma=5) ax[idx].errorbar( mag_df.spt, mag_df[flt], yerr=mag_df[flt+'_er'], fmt='o', mec='#111111', mfc='#0074D9', alpha=1.) 
#c=ax[idx].scatter( mag_df.spt, mag_df[flt], c=mag_df.snr1.apply(np.log10), cmap='coolwarm', s=40) xvl=mag_df.spt.values yvl=mag_df[flt].values YET_ANOTHER_POLYNOMIAL.update({flt.replace('rel', ""):(pol, abs(yvl[mask]-pol(xvl[mask])).mean())}) ax[idx].set_xticks([15, 20, 25, 30, 35, 40]) ax[idx].set_xticklabels(['M5', 'L0', 'L5', 'T0', 'T5', 'Y0']) ax[idx].set_ylabel('{} Correction'.format(flt.replace('rel', ""))) #ax2 = ax[idx].twinx() # instantiate a second axes that shares the same x-axis #ax2.set_ylabel('Distance Correction', color='#FF851B') # we already handled the x-label with ax1 #ax2.plot( mag_df.spt, mag_df['dF110W'], fmt='o', mec='#111111', mfc='#FF851B', alpha=0.) #ax2.plot( mag_df.spt, mag_df['dF110W'], '^', color='#FF851B', alpha=0.1) #ax2.set_ylabel('Distance Correction') #ax2.tick_params(axis='y', color='#FF851B') #ax2.set_ylim(ax[idx].get_ylim()) #new_tick_locations = np.array([0.0, 0.5, 1.0, 1.5, 2.0]) #ax2.set_yticks(new_tick_locations) #ax2.set_yticklabels(tick_function(new_tick_locations)) #ax2.tick_params(colors='#0074D9', which='both') #color = #ax[idx].minorticks_on() ax[idx].plot(np.arange(15, 42), pol(np.arange(15, 42)), c='#111111') ax[idx].axhline(0.0, linestyle='--') plt.tight_layout() plt.savefig(wisps.OUTPUT_FIGURES+'/mag_limits_corrections.pdf', rasterized=True, bbox_inches='tight') #plt.ylim([-1, 4]) #plt.xlabel('Spectral Type') #plt.ylabel('Magnitude offset from standard M7') #plt.colorbar(c) np.shape(scaled_spectra) rels=wisps.POLYNOMIAL_RELATIONS rels.update({'mag_limit_corrections': YET_ANOTHER_POLYNOMIAL}) # + #jklasad # - #/polynomial_relations.pkl' import pickle output = open(wisps.OUTPUT_FILES+'/polynomial_relations.pkl', 'wb') pickle.dump(rels, output) output.close() # + #all_spectra # -
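# Two conventions used in this notebook, checked with made-up numbers: the uncertainty of a
# magnitude difference (the `rel*_er` columns) adds in quadrature, and the
# `dF110W = 100**(relF110W/5)` column converts a magnitude offset into a flux ratio via
# $\Delta m = 2.5\,\log_{10}(F_1/F_2)$:

```python
import math

# Standard error propagation for the difference m1 - m2: sigma = sqrt(sigma1^2 + sigma2^2).
sigma = (0.03**2 + 0.04**2) ** 0.5
assert math.isclose(sigma, 0.05)

# A 5-magnitude offset corresponds to a factor-100 flux ratio.
delta_m = 5.0
flux_ratio = 100 ** (delta_m / 5)  # equivalent to 10**(0.4 * delta_m)
assert math.isclose(flux_ratio, 100.0)
```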
notebooks/Magnitude and J-SNR.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# default_exp core
# -

#hide
# %load_ext autoreload
# %autoreload 2

# # Maths

#hide
import os
import math
import numpy as np
import pandas as pd
import scipy.io
from pathlib import Path
from mayavi import mlab
import quaternion as quat
from sklearn.decomposition import PCA


#export
def mag(v):
    """ Finds the magnitude of a vector
    v (np.array): vector"""
    return math.sqrt(np.dot(v, v))


# ### Example from this [thread](https://uk.mathworks.com/matlabcentral/answers/101590-how-can-i-determine-the-angle-between-two-vectors-in-matlab)
# Using atan2 is more robust for very small angles:

# start with a very small angle
a = 1e-10

# arbitrary non-unit vector in X direction
u = 4*np.array([1,0,0])

# vector different from u by small angle
v = np.array([math.cos(a), math.sin(a), 0])*5

# acos formulation does not recover the small angle
math.acos(np.dot(u,v)/(np.linalg.norm(u)*np.linalg.norm(v)))

# atan2 formulation does recover the small angle
math.atan2(np.linalg.norm(np.cross(u,v)),np.dot(u,v))


#export
def angle(v1, v2):
    """ Finds the angle between 2 vectors
    returns: ang, v1"""
    try:
        ang = math.atan2(np.linalg.norm(np.cross(v1,v2)),np.dot(v1,v2))
        if ang > math.pi/2:
            v1 = -v1
            ang = math.atan2(np.linalg.norm(np.cross(v1,v2)),np.dot(v1,v2))
            print(f'{ang} PC inverted')
        else:
            print(f'{ang} no invert')
    except Exception:
        print(f'ERROR: vectors v1= {v1}, v2= {v2}')
        ang = 'ERROR'
    return ang, v1

# +
#hide
# from nbdev.export import notebook2script; notebook2script()
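# The fold-over behaviour of `angle` matters when the sign of a PCA axis is arbitrary: an angle
# above $\pi/2$ means `v1` should be inverted. A small self-contained check (logic restated from
# `angle` above, with the prints omitted):

```python
import math
import numpy as np

def folded_angle(v1, v2):
    # Same logic as angle() above: fold angles above pi/2 by inverting v1.
    ang = math.atan2(np.linalg.norm(np.cross(v1, v2)), np.dot(v1, v2))
    if ang > math.pi / 2:
        v1 = -v1
        ang = math.atan2(np.linalg.norm(np.cross(v1, v2)), np.dot(v1, v2))
    return ang, v1

# Anti-parallel vectors fold to an angle of 0, with v1 flipped to match v2's direction.
ang, v1 = folded_angle(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
assert math.isclose(ang, 0.0, abs_tol=1e-12)
assert float(v1[0]) == 1.0
```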
01_maths.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="1xQ6KAvMoqRR" # ## Imports and Initialization # # Import required modules # + colab={} colab_type="code" id="_JJYqHW8XGiH" import numpy as np import json import pprint from annotater import Annotater # + [markdown] colab_type="text" id="dvv7UoZbtfaY" # ## Annotated BBox Data # # Used this tool - http://www.robots.ox.ac.uk/~vgg/software/via/via_demo.html to annotate 50 images of dogs. # # dog_images: '**data/annotations/dogs**' # # dog_annotations: '**data/annotations/annotations_dogs.json**' # + colab={"base_uri": "https://localhost:8080/", "height": 323} colab_type="code" id="1jlUINpFt1U0" outputId="ea4d2ec0-2e01-4bfb-f1ee-af0283567af1" in_path = "/content/data/annotations/annotations_dogs.json" with open(in_path, 'r') as f: annotations = json.load(f) print(annotations.keys(), "\n") pprint.pprint(annotations["images"][0]) print() pprint.pprint(annotations["annotations"][0]) print() pprint.pprint(annotations["categories"][0]) # + [markdown] colab_type="text" id="-aY-2mH4wD27" # The downloaded annotations are COCO format # # * **images.file_name**: image file name # * **images.height**: height of the image # * **images.width**: width of the image # * **annotations.bbox**: the dimensions of the bounding box in the image in the order: x, y, w, h # * **x**: x co-ordinate of top left corner of bbox assuming the origin is at the top left corner of the image # * **y**: y co-ordinate of top left corner of bbox assuming the origin is at the top left corner of the image # * **w**: width of the bbox # * **h**: height of the bbox # * **categories.name**: the annotated class for the object in the bbox # + [markdown] colab_type="text" id="rOVH8Ypto2k8" # ## Anchor Boxes # # Initialize the annotater. 
Parse the image annotations and generate the scaled bounding boxes.

# + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" id="sQ7O4kjTX07o" outputId="b9dfa786-c779-471a-d0d5-5381398bf1b7"
in_path = "/content/data/annotations/annotations_dogs.json"
out_path = "/content/data/annotations/bboxes_dogs.csv"

ant = Annotater(in_path, out_path)

# + [markdown] colab_type="text" id="WnjppjVYqVej"
# ### Visualise Bounding Boxes
#
# Visualise the scaled bounding boxes of all the images. We use the log-scaled data for clustering as it gives compact and better clusters.

# + colab={"base_uri": "https://localhost:8080/", "height": 308} colab_type="code" id="YjLBVyzTqSxm" outputId="9b1f070a-be80-46ae-8c4b-8ec1c7a175cd"
ant.show_bboxes()

# + [markdown] colab_type="text" id="eI6UZ8jaqgoq"
# ### Determine optimal number of clusters
#
# Determine the optimal number of clusters using the elbow method. We train multiple models with different numbers of clusters and store the value of the `inertia_` property (WCSS) every time.

# + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" id="0fi4dySGdZV-" outputId="73a17b7d-7847-4b92-d2d1-683f834831e0"
ant.try_cluster()

# + [markdown] colab_type="text" id="JJPhHFo-rNtX"
# ### Generate Template Bounding Boxes
#
# Get the optimal number of clusters from the elbow method and cluster the data.

# + colab={"base_uri": "https://localhost:8080/", "height": 281} colab_type="code" id="FWjhQTioe6He" outputId="96e5f496-480c-48c7-d8b4-2a22fb4ec54c"
ant.fit(5)

# + [markdown] colab_type="text" id="8hVDg2y0riGM"
# The centroids are the template bounding boxes. Since we clustered the data on a log scale, we convert it back to the (0,1) range.

# + colab={"base_uri": "https://localhost:8080/", "height": 102} colab_type="code" id="MVqTx8hanrNO" outputId="1b0454a5-928c-4693-f705-3613eca65a47"
np.exp(ant.centroids)
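# Why `np.exp` recovers sensible template boxes: a k-means centroid of log-scaled points is the
# arithmetic mean in log space, so exponentiating it gives the geometric mean of the original
# widths/heights. A quick sketch with made-up box dimensions (not the dog annotations):

```python
import numpy as np

# Two (w, h) pairs in the (0, 1) range, clustered in log space.
w_h = np.array([[0.5, 0.5],
                [0.2, 0.8]])
centroid_log = np.log(w_h).mean(axis=0)  # what k-means averages
centroid = np.exp(centroid_log)          # back in (0, 1): the geometric mean

assert np.allclose(centroid, np.sqrt(w_h.prod(axis=0)))
assert ((centroid > 0) & (centroid < 1)).all()
```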
S12/EVA4_S12_B.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.11 ('probml') # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/flow_spline_mnist_jax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="gJIYeg2Y7oZv" # # Spline Flow using JAX, Haiku, Optax and Distrax # # In this notebook we will implement Spline flow to fit a distribution to MNIST dataset. We will be using the RationalQuadraticSpline, a piecewise rational quadratic spline, and Masked Couplings as explained in paper [Neural Spline Flows](https://arxiv.org/abs/1906.04032) by <NAME>, <NAME>, <NAME>, <NAME>. # # This notebook replicates the [original distrax code ](https://github.com/deepmind/distrax/blob/master/examples/flow.py) with suitable minor modifications. # # # # For implementing the Quadratic Splines with Coupling flows, We will be using following libraries: # - JAX - NumPy on GPU, and TPU with automatic differentiation. # - Haiku - JAX based Neural Network Library. # - Optax - gradient processing and optimization library for JAX. # - Distrax - a lightweight library of probability distributions and bijectors. 
#
# ### Installing required libraries in Colab

# + colab={"base_uri": "https://localhost:8080/"} id="l0K_D4QX74gV" outputId="bcdb6f99-11d9-4aa8-d1c6-fad24b9caf60"
# !pip install -U optax distrax dm-haiku

# + [markdown] id="i8vGb5NE-sWj"
# ### Importing all required libraries and packages

# + id="T6QrYSdV7oZ3"
from typing import Any, Iterator, Mapping, Optional, Sequence, Tuple

import distrax
import haiku as hk
import jax
import jax.numpy as jnp
import numpy as np
import optax
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt

Array = jnp.ndarray
PRNGKey = Array
Batch = Mapping[str, np.ndarray]
OptState = Any

MNIST_IMAGE_SHAPE = (28, 28, 1)
batch_size = 128

# + [markdown] id="7sF08dbZ7oZ7"
# ### Conditioner
#
# Let $u \in \mathbb{R}^D$ be the input. The input is split into two equal subspaces $(u^A, u^B)$, each of size $\mathbb{R}^{d}$ such that $d = D/2$.
#
# Let us assume we have a bijection $\hat{f}(\cdot;\theta): \mathbb{R}^d \to \mathbb{R}^d$ parameterized by $\theta$.
#
# We define a single coupling layer as a function $ f: \mathbb{R}^D \to \mathbb{R}^D $ given by $x = f(u)$ as below:
#
# $ x^A = \hat{f}(u^A; \Theta(u^B))$
#
# $ x^B = u^B $
#
# $ x = (x^A, x^B)$
#
# In other words, the input $u$ is split into $(u^A, u^B)$ and the output $(x^A, x^B)$ is combined back into $x$ using a binary mask $b$. Therefore, the single coupling layer $ f: \mathbb{R}^D \to \mathbb{R}^D $ given by $x = f(u)$ is defined in a single equation as below:
#
# $x = f(u) = b \odot u + (1-b) \odot \hat{f}(u; \Theta(b \odot u))$
#
# We will implement the full flow by chaining multiple coupling layers. The mask $b$ will be flipped between each layer to ensure we capture dependencies in a more expressive way.
#
# The function **${\Theta}$** is called the **Conditioner**, which we implement with a set of Linear layers and ReLU activation functions.
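# The coupling equation above can be sanity-checked with a minimal numpy sketch that swaps the
# spline for an affine $\hat{f}$ (scale-and-shift) and uses a hypothetical fixed random matrix as
# the conditioner $\Theta$ — illustrative only, not the Haiku/distrax model built below. Because
# $\Theta$ only ever sees the masked part, which the layer leaves unchanged, the inverse can
# recompute exactly the same parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6
b = (np.arange(D) % 2).astype(float)        # alternating binary mask
W = rng.normal(scale=0.1, size=(2 * D, D))  # hypothetical fixed conditioner weights

def conditioner(masked_u):
    params = W @ masked_u
    return params[:D], params[D:]           # (log_scale, shift)

def forward(u):
    log_s, t = conditioner(b * u)           # params depend only on the masked part
    return b * u + (1 - b) * (u * np.exp(log_s) + t)

def inverse(x):
    log_s, t = conditioner(b * x)           # masked part is unchanged, so params agree
    return b * x + (1 - b) * ((x - t) * np.exp(-log_s))

u = rng.normal(size=D)
x = forward(u)
assert np.allclose(inverse(x), u)           # the coupling layer is exactly invertible
assert np.allclose(x * b, u * b)            # the masked part passes through untouched
```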
# + id="kE3YTziR7oZ8"
def make_conditioner(
    event_shape: Sequence[int], hidden_sizes: Sequence[int], num_bijector_params: int
) -> hk.Sequential:
    """Creates an MLP conditioner for each layer of the flow."""
    return hk.Sequential(
        [
            hk.Flatten(preserve_dims=-len(event_shape)),
            hk.nets.MLP(hidden_sizes, activate_final=True),
            # We initialize this linear layer to zero so that the flow is initialized
            # to the identity function.
            hk.Linear(np.prod(event_shape) * num_bijector_params, w_init=jnp.zeros, b_init=jnp.zeros),
            hk.Reshape(tuple(event_shape) + (num_bijector_params,), preserve_dims=-1),
        ]
    )


# + [markdown] id="_b4hkTxK7oZ-"
# ### Flow Model
#
# Next we implement the **Bijector** $\hat{f}$ using `distrax.RationalQuadraticSpline` and the **Masked Coupling** $f$ using `distrax.MaskedCoupling`.
#
# We join together sequentially a number of masked coupling layers to define the complete Spline Flow.
#
# We define the base distribution of our flow as a **Uniform distribution**.

# + id="FGkKBR8n7oZ-"
def make_flow_model(
    event_shape: Sequence[int], num_layers: int, hidden_sizes: Sequence[int], num_bins: int
) -> distrax.Transformed:
    """Creates the flow model."""
    # Alternating binary mask.
    mask = jnp.arange(0, np.prod(event_shape)) % 2
    mask = jnp.reshape(mask, event_shape)
    mask = mask.astype(bool)

    def bijector_fn(params: Array):
        return distrax.RationalQuadraticSpline(params, range_min=0.0, range_max=1.0)

    # Number of parameters for the rational-quadratic spline:
    # - `num_bins` bin widths
    # - `num_bins` bin heights
    # - `num_bins + 1` knot slopes
    # for a total of `3 * num_bins + 1` parameters.
    num_bijector_params = 3 * num_bins + 1

    layers = []
    for _ in range(num_layers):
        layer = distrax.MaskedCoupling(
            mask=mask,
            bijector=bijector_fn,
            conditioner=make_conditioner(event_shape, hidden_sizes, num_bijector_params),
        )
        layers.append(layer)
        # Flip the mask after each layer.
        mask = jnp.logical_not(mask)

    # We invert the flow so that the `forward` method is called with `log_prob`.
    flow = distrax.Inverse(distrax.Chain(layers))
    base_distribution = distrax.Independent(
        distrax.Uniform(low=jnp.zeros(event_shape), high=jnp.ones(event_shape)),
        reinterpreted_batch_ndims=len(event_shape),
    )

    return distrax.Transformed(base_distribution, flow)


# + [markdown] id="M4LcgwQ47oaA"
# ### Data Loading and preparation
# In this cell, we define a function to load the MNIST dataset using the TFDS (TensorFlow Datasets) package.
#
# We also have a function `prepare_data` to:
# 1. dequantize the data, i.e. convert the integer pixel values from `{0,1,...,255}` to real number values in `[0,256)` by adding random uniform noise in `[0,1)`; and
#
# 2. normalize the pixel values from `[0,256)` to `[0,1)`.
#
# The dequantization of data is done only at training time.

# + id="XGqQqhvt7oaA"
def load_dataset(split: tfds.Split, batch_size: int) -> Iterator[Batch]:
    ds = tfds.load("mnist", split=split, shuffle_files=True)
    ds = ds.shuffle(buffer_size=10 * batch_size)
    ds = ds.batch(batch_size)
    ds = ds.prefetch(buffer_size=5)
    ds = ds.repeat()
    return iter(tfds.as_numpy(ds))


def prepare_data(batch: Batch, prng_key: Optional[PRNGKey] = None) -> Array:
    data = batch["image"].astype(np.float32)
    if prng_key is not None:
        # Dequantize pixel values {0, 1, ..., 255} with uniform noise [0, 1).
        data += jax.random.uniform(prng_key, data.shape)
    return data / 256.0  # Normalize pixel values from [0, 256) to [0, 1).


# + [markdown] id="zC1jKZXH7oaB"
# ### Log Probability, Sample and Training Loss Functions
# Next we define `log_prob`, `model_sample` and `loss_fn`. `log_prob` calculates the log-probability of the data, which we want to maximize for MNIST data inside `loss_fn`.
#
# `model_sample` allows us to sample new data points after the model has been trained on MNIST. For a well-trained model, these samples will look like synthetically generated MNIST digits.
# + id="izlGzJdH7oaD"
flow_num_layers = 8
mlp_num_layers = 2
hidden_size = 500
num_bins = 4
learning_rate = 1e-4

# Using 100,000 steps can take a long time (about 2 hours) but will give better results.
# You can try 10,000 steps to run it fast, but the results may not be as good.
training_steps = 10000
eval_frequency = 1000

# + id="JZ2DXvH87oaD"
@hk.without_apply_rng
@hk.transform
def log_prob(data: Array) -> Array:
    model = make_flow_model(
        event_shape=MNIST_IMAGE_SHAPE,
        num_layers=flow_num_layers,
        hidden_sizes=[hidden_size] * mlp_num_layers,
        num_bins=num_bins,
    )
    return model.log_prob(data)


@hk.without_apply_rng
@hk.transform
def model_sample(key: PRNGKey, num_samples: int) -> Array:
    model = make_flow_model(
        event_shape=MNIST_IMAGE_SHAPE,
        num_layers=flow_num_layers,
        hidden_sizes=[hidden_size] * mlp_num_layers,
        num_bins=num_bins,
    )
    return model.sample(seed=key, sample_shape=[num_samples])


def loss_fn(params: hk.Params, prng_key: PRNGKey, batch: Batch) -> Array:
    data = prepare_data(batch, prng_key)
    # Loss is average negative log likelihood.
    loss = -jnp.mean(log_prob.apply(params, data))
    return loss


@jax.jit
def eval_fn(params: hk.Params, batch: Batch) -> Array:
    data = prepare_data(batch)  # We don't dequantize during evaluation.
    loss = -jnp.mean(log_prob.apply(params, data))
    return loss


# + [markdown] id="ldAqassc7oaE"
# ### Training
# Next we define the `update` function for the gradient update. We use `jax.grad` to calculate the gradient of the loss with respect to the model parameters.

# + id="GisFglnr7oaF"
optimizer = optax.adam(learning_rate)


@jax.jit
def update(params: hk.Params, prng_key: PRNGKey, opt_state: OptState, batch: Batch) -> Tuple[hk.Params, OptState]:
    """Single SGD update step."""
    grads = jax.grad(loss_fn)(params, prng_key, batch)
    updates, new_opt_state = optimizer.update(grads, opt_state)
    new_params = optax.apply_updates(params, updates)
    return new_params, new_opt_state


# + [markdown] id="IvWrGKTgNaSD"
# Now we carry out the training of the model.
# + colab={"base_uri": "https://localhost:8080/"} id="kBvaapIO7oaF" outputId="12cf22ea-4155-4c46-dc5e-94f2b3345736"
prng_seq = hk.PRNGSequence(42)
params = log_prob.init(next(prng_seq), np.zeros((1, *MNIST_IMAGE_SHAPE)))
opt_state = optimizer.init(params)

train_ds = load_dataset(tfds.Split.TRAIN, batch_size)
valid_ds = load_dataset(tfds.Split.TEST, batch_size)

for step in range(training_steps):
    params, opt_state = update(params, next(prng_seq), opt_state, next(train_ds))

    if step % eval_frequency == 0:
        val_loss = eval_fn(params, next(valid_ds))
        print(f"STEP: {step:5d}; Validation loss: {val_loss:.3f}")

# + [markdown] id="dlA6fiYn7oaH"
# ### Sampling from the Trained Flow Model

# + [markdown] id="aczbVzmAOlH4"
# ### Plot new samples
# After the model has been trained on MNIST, we draw new samples and plot them. Once the model has been trained enough, these should look like MNIST dataset digits.

# + colab={"base_uri": "https://localhost:8080/", "height": 246} id="Qt59aIVl7oaI" outputId="62ef5147-c177-465c-82a8-471a784a4f6b"
def plot_batch(batch: Batch) -> None:
    """Plots a batch of MNIST digits."""
    images = batch.reshape((-1,) + MNIST_IMAGE_SHAPE)
    plt.figure(figsize=(10, 4))
    for i in range(10):
        plt.subplot(2, 5, i + 1)
        plt.imshow(np.squeeze(images[i]), cmap="gray")
        plt.axis("off")
    plt.show()


sample = model_sample.apply(params, next(prng_seq), num_samples=10)
plot_batch(sample)
notebooks/misc/flow_spline_mnist_jax.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import torch from torch.utils.data import DataLoader import time import pickle import os import logging from process_data import process_data from model_fn import CBOWHierSoftmax from input_fn import CBOWBibleDataset from utils import set_logger print(torch.__version__) # + from sklearn.neighbors import NearestNeighbors from sklearn.metrics.pairwise import cosine_similarity import numpy as np # + # WARNING! trashy code inside: skip it to the next cells try: huffman_corpus, node_inxs, turns_inxs, leaves_hash_inversed, \ vocab_size, extended_vocab_size = \ pickle.load(open('/tmp/bible_dataset.pkl', 'rb')) except FileNotFoundError: out = process_data('../', 5) # print(out) pickle.dump(out, open('/tmp/bible_dataset.pkl', 'wb')) huffman_corpus, node_inxs, turns_inxs, leaves_hash_inversed, \ vocab_size, extended_vocab_size = out nodes_count = extended_vocab_size # with torch.cuda.device(device): # with torch. 
device = torch.device("cpu") device = torch.device("cuda:0") # device = None batch_size = 128 # 1024*8 log_freq = 100*8*2 lr = 0.1 st = time.time() # with torch.cuda.device(device): cbow_dataset = CBOWBibleDataset(huffman_corpus, node_inxs, turns_inxs, vocab_size=nodes_count, window_size=10, device=None) data_len = cbow_dataset.__len__() n_steps = (data_len - 1) // batch_size cbow_loader = DataLoader(cbow_dataset, batch_size=batch_size, shuffle=False, num_workers=12) # loss = torch.mean(-1 * torch.log(cbow_out)) losses = [] model = CBOWHierSoftmax(nodes_count, 200) model.cuda(0) path = '/home/d3/study-projects/really_new/doc2vec/pytorch-word2vec/ckpt-lambda-scheduler/e199-lr0.001-loss5.236-w2vec-bible.ckpt.tar' path1 = 'ckpt/e199-lr0.004-loss5.187-w2vec-bible.ckpt.tar' torch_loaded = torch.load(path1) print(torch_loaded['model_state_dict']) model.load_state_dict(torch_loaded['model_state_dict']) # model.load_state_dict(torch_loaded['model_state_dict']['embeddings.weight']) print(model.embeddings.weight[0]) # - leaves_hash = {v:k for k, v in leaves_hash_inversed.items()} leaves_hash['jesus'] embedding_weights = model.embeddings.weight.detach().cpu().numpy() embedding_weights.shape word2vec = embedding_weights[:vocab_size] word2vec.shape # # Helpers # + def get_embeddings_index_of_word(word, leaves_hash=leaves_hash): huffman_inx = leaves_hash[word] return huffman_inx def get_embedding_by_word(word, word2vec=word2vec): return word2vec[get_embeddings_index_of_word(word)] # - # # Nearest Neighbours nbrs = NearestNeighbors(n_neighbors=25, algorithm='auto', metric='minkowski', p=2).fit(word2vec) # %time distances, indices = nbrs.kneighbors(word2vec) # distances, indices = nbrs.kneighbors() def print_nn_neighbours(w, indices=indices, distances=distances): print(f'word: {w}\nneighbours: ') inx = get_embeddings_index_of_word(w) for w_inx, dist in zip(indices[inx], distances[inx]): print('{} - {}'.format(get_word_by_embedding_index(w_inx), round(dist, 2))) 
print_nn_neighbours('jesus') # # Cos sim cos_sim_matrix = cosine_similarity(word2vec, word2vec) def get_word_by_embedding_index(inx, leaves_hash_inversed=leaves_hash_inversed): word = leaves_hash_inversed[inx] return word def print_neighbours_cosine(w, cos_sim_matrix=cos_sim_matrix, topn=10): print(f'word: {w}\nneighbours: ') inx = get_embeddings_index_of_word(w) word_row_dists = cos_sim_matrix[inx] neighbours = np.argsort(-1 * word_row_dists)[:topn] for n in neighbours: print('{} - {:<3}'.format(get_word_by_embedding_index(n), round(word_row_dists[n], 2))) print_neighbours_cosine('jesus', topn=25) print_neighbours_cosine('god', topn=25) from gensim.models import word2vec # + # word2vec.Word2Vec? # -
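# The nearest-neighbour and cosine-similarity helpers above boil down to a few lines of plain NumPy. A minimal self-contained sketch of the same lookup — the toy vectors and the `top_k_cosine` name are illustrative, not part of the notebook:

```python
import numpy as np

def top_k_cosine(matrix, query_inx, k=3):
    """Return indices of the k rows most cosine-similar to row query_inx."""
    norms = np.linalg.norm(matrix, axis=1, keepdims=True)
    unit = matrix / norms               # normalise every row to unit length
    sims = unit @ unit[query_inx]       # cosine similarity of each row to the query
    return np.argsort(-sims)[:k]        # highest similarity first

# toy "embeddings": rows 0 and 1 point the same way, row 2 is orthogonal
vecs = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
print(top_k_cosine(vecs, 0, k=2))  # row 0 itself, then row 1
```

The notebook's `print_neighbours_cosine` does the same thing via a precomputed `cosine_similarity` matrix; the on-the-fly version here avoids materialising the full vocab-by-vocab matrix.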
notebooks/test_word2vec_hier_softmax.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: PyCharm (ML-Deep-Practices)
#     language: python
#     name: pycharm-f0b42db8
# ---

# + [markdown] pycharm={"is_executing": false, "name": "#%% md\n"}
# ## Index, Slice and Reshape Numpy Arrays
# ### From List to Arrays
# ### One dimensional list to array:

# + pycharm={"name": "#%%\n", "is_executing": false}
import numpy as np

data = [11, 22, 33, 44, 55, 66]
data = np.array(data)
print(f"data: {data}\n")
print(f"type of data: {type(data)}")

# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# ### Two dimensional list of lists to array:

# + pycharm={"name": "#%%\n", "is_executing": false}
data1 = [[11, 22], [33, 44], [55, 66]]
data1 = np.array(data1)
print(f"data1: \n{data1}\n")
print(f"type of data1: {type(data1)}")

# + [markdown] pycharm={"name": "#%% md\n"}
# ## Array Indexing
# ### One dimensional indexing:

# + pycharm={"name": "#%%\n", "is_executing": false}
print(f"data[0]: {data[0]}")
print(f"data[3]: {data[3]}")
print(f"data[-1]: {data[-1]}")
print(f"data[-3]: {data[-3]}")

# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# ### Two dimensional indexing:

# + pycharm={"name": "#%%\n", "is_executing": false}
print(f"data1[0, 0]: {data1[0, 0]}")

# + [markdown] pycharm={"name": "#%% md\n"}
# **All items in the first row:**

# + pycharm={"name": "#%%\n", "is_executing": false}
print(f"data1[0, ]: {data1[0, ]}")

# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# ## Array Slicing
#
# **The slice extends from the from index and ends one item before the to index:**
#
# **data[from:to]** selects the items at indices from up to to - 1.
# ### One dimensional slicing:
# Access all data in an array dimension by specifying the slice
# ':' with no indexes:

# + pycharm={"name": "#%%\n", "is_executing": false}
print(f"data[:]: {data[:]}")

# + [markdown] pycharm={"name": "#%% md\n"}
# The first item of the array can be sliced by specifying a slice that
# starts at index 0 and ends at index 1 (one item before the to index):

# + pycharm={"name": "#%%\n", "is_executing": false}
print(f"data[0:1]: {data[0:1]}")

# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# ### Split input and output features:
# It is common to split your loaded data into input variables (X) and the
# output variable (y). We can do this by slicing all rows and all columns
# up to, but before, the last column, then separately indexing the last
# column. For the input features, we can select all rows and all columns
# except the last one by specifying : for the rows index and :-1 for
# the columns index.
#
# X = data[:, :-1]
#
# For the output column, we can select all rows again using : and index just
# the last column by specifying the -1 index.
#
# y = data[:, -1]

# + pycharm={"name": "#%%\n", "is_executing": false}
data2 = np.array([[11, 22, 33],
                  [44, 55, 66],
                  [77, 88, 99]])
X, y = data2[:, :-1], data2[:, -1]
print(f"X:\n{X}\n")
print(f"y:\n{y}")

# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# ### Split train and test rows:
# It is common to split a loaded dataset into separate train and test sets.
# This is a splitting of rows, where some portion will be used to train the
# model and the remaining portion will be used to estimate the skill of the
# trained model. This would involve slicing all columns by specifying : in
# the second dimension index. The training dataset would be all rows from
# the beginning to the split point.
#
# train = data[:split, :]
#
# The test dataset would be all rows starting from the split point to
# the end of the dimension.
#
# test = data[split:, :]
#
# Putting all of this together, we can split the dataset at the contrived
# split point of 2.

# + pycharm={"name": "#%%\n", "is_executing": false}
split = 2
train, test = data2[:split, :], data2[split:, :]
print(f"train: \n{train}\n")
print(f"test: \n{test}")

# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# ## Array reshaping
# ### Data Shape:

# + pycharm={"name": "#%%\n", "is_executing": false}
print(f"shape of data: {data.shape}")
print(f"shape of data2: {data2.shape}")
print(f"Rows: {data2.shape[0]}")
print(f"Columns: {data2.shape[1]}")

# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# ### Reshape 1D to 2D Array:

# + pycharm={"name": "#%%\n", "is_executing": false}
data_2d = data.reshape((data.shape[0], 1))
print(f"data_2d: \n{data_2d}\n")
print(f"shape of data_2d: {data_2d.shape}")

# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# ### Reshape 2D to 3D Array:

# + pycharm={"name": "#%%\n", "is_executing": false}
data_3d = data2.reshape(data2.shape[0], data2.shape[1], 1)
print(f"data_3d: \n{data_3d}\n")
print(f"shape of data_3d: {data_3d.shape}")
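# The X/y and train/test slicing recipes above combine naturally into one helper. A small sketch — the `split_dataset` function name is ours, not from the text:

```python
import numpy as np

def split_dataset(data, split):
    """Split a 2D array into train/test rows, then into X/y columns."""
    train, test = data[:split, :], data[split:, :]
    X_train, y_train = train[:, :-1], train[:, -1]
    X_test, y_test = test[:, :-1], test[:, -1]
    return X_train, y_train, X_test, y_test

data2 = np.array([[11, 22, 33], [44, 55, 66], [77, 88, 99]])
X_train, y_train, X_test, y_test = split_dataset(data2, split=2)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)  # (2, 2) (2,) (1, 2) (1,)
```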
src/Number_02.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.5 ('deep-rl') # language: python # name: python3 # --- import numpy as np import pandas as pd import collections import datetime import pprint import gym import gym_anytrading import matplotlib.pyplot as plt from lib import data, environ from helpers import validation, environ from typing import List, Optional, Tuple, Any from tensorforce import Agent, Environment from tensorforce.agents import ConstantAgent from tensorforce.core.networks import AutoNetwork from tensorforce.execution import Runner from tensorforce.core.layers import Dense, Gru # %load_ext blackcellmagic # # Prepare environments # ## Load prices # ## Create environments # + env = environ.TradingEnv( window_size=10, commission_perc=0.001, random_ofs_on_reset=True, date_range=('2018-01-01', '2019-12-31') ) env_val = environ.TradingEnv( window_size=10, commission_perc=0.001, random_ofs_on_reset=False, date_range=('2020-01-01', '2020-12-31') ) obs = env.reset() obs, reward, done, info = env.step(0) print(f'Observation: {obs}') print(f"Reward: {reward}") print(f"Done: {done}") print(f"Info: {info}") print(f'Number of trading days in data: {len(env.prices)}') # - # ## Create tensorforce environments environment = Environment.create(environment=env, max_episode_timesteps=100) environment_val = Environment.create(environment=env_val, max_episode_timesteps=1000) print(f'Action space: {environment.actions()}') print(f'State space: {environment.states()}') print(f'Initial state: {environment.reset()}') print(f'Initial state (validation): {environment_val.reset()}') # # Create agent # Instantiate a Tensorforce agent if False: agent_ = Agent.create( agent="dueling_dqn", max_episode_timesteps=1000, environment=environment, memory=100000, # update=dict(unit="timesteps", batch_size=32), batch_size=32, # optimizer=dict(type="adam", 
learning_rate=3e-4), # policy=dict(network="auto"), # objective="policy_gradient", start_updating=1e4, # network=dict(network=[Gru(size=5, horizon=5, name='GRU_1'), Dense(size=3, name='Dense_1')]), # network=dict(network=[dict(type='dense', size=32), dict(type='dense', size=3)]), network='auto', # reward_estimation=dict(horizon=4, discount=0.99), discount=0.99, target_sync_frequency=1e3, config=dict(name="agent_007"), summarizer=dict( directory="runs/summaries", # list of labels, or 'all' summaries=["entropy", "kl-divergence", "loss", "reward", "update-norm"], ), ) # + # Instantiate a Tensorforce agent agent = Agent.create( agent="a2c", environment=environment, network=[ dict( type="gru", size=64, activation="tanh", horizon=1, # dropout=0.1, l2_regularization=0.01, ), # dict( # type="lstm", # size=64, # activation="tanh", # horizon=1, # dropout=0.1, # l2_regularization=0.01, # ), dict(type="dense", size=16, activation="tanh"), ], critic=[ dict( type="gru", size=64, activation="tanh", horizon=1, # dropout=0.1, l2_regularization=0.01, ), # dict( # type="lstm", # size=64, # activation="tanh", # horizon=1, # dropout=0.1, # l2_regularization=0.01, # ), dict(type="dense", size=32, activation="tanh"), ], # update=dict(unit="timesteps", batch_size=32), batch_size=32, # objective="policy_gradient", # reward_estimation=dict(horizon=5), # optimizer=dict(optimizer="rmsprop", learning_rate=1e-3), # memory=10000, # Replay memory capacity config=dict(name="agent_001", device="gpu"), summarizer=dict( directory="runs/summaries", # list of labels, or 'all' # summaries=["entropy", "kl-divergence", "loss", "reward", "update-norm"], summaries="all", ), ) # pprint.PrettyPrinter(indent=2).pprint(agent.get_specification()) # - # # Run Training and Validation runner = Runner( agent=agent, environment=environment, ) # ## Run training latest_run_name = f'save_{datetime.datetime.now().isoformat(timespec="seconds").replace(":", "_")}' print(latest_run_name) runner.run(num_episodes=400, 
evaluation=True) agent.save('./saves', filename=f'{latest_run_name}', format='checkpoint') runner.close() print(f'{latest_run_name}') print('Finished') latest_run_name = 'save_2022-03-22T15_37_15' agent = Agent.load('./saves', filename=f'{latest_run_name}', environment=environment_val) # ## Run validation validator = validation.Validator(env=environment_val, agent=agent, commission=None, num_episodes=1) res = validator.run() for key, val in res.items(): print(f'{key}: {val}') environment_val.environment.render_all()
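# The environment internals (`lib`/`helpers.environ`) are not shown here, but the effect of `commission_perc` on a round-trip trade can be illustrated in plain Python. A sketch under the assumption that commission is charged on both legs — the environment's actual reward formula may differ:

```python
def trade_reward(entry_price, exit_price, commission_perc):
    """Relative profit of a long round-trip trade, minus commission on both
    the entry and the exit leg. Illustrative only: this mirrors the idea of
    commission_perc in TradingEnv above, not its exact implementation."""
    gross = (exit_price - entry_price) / entry_price
    return gross - 2 * commission_perc

# a 1% move barely survives a 0.1% commission per leg
print(trade_reward(100.0, 101.0, commission_perc=0.001))  # ≈ 0.008
```

This also shows why `commission_perc=0.001` matters for validation: any strategy whose average per-trade edge is below twice the commission loses money.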
algorithmic_trading/experiments.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Gomoku (五子棋) ver 0.0
# ### To do:
# 2. future feature:
#     * add: QL_player

# +
import numpy as np

class GoBang():
    def __init__(self, width=9, height=9):  # set the initial parameters (the board)
        self.width = width
        self.height = height
        self.board = [[' '] * self.width for h in range(self.height)]
        self.player = np.random.choice(["P1", "P2"])
        self.moveRecord = []

    def restart(self):  # restart the game
        self.board = [[' '] * self.width for h in range(self.height)]
        self.moveRecord = []

    def isLegal(self, x, y):  # check whether this move is legal
        if self.board[x][y] == " ":
            return True
        else:
            return False

    def makeMove(self, token):  # place a stone (token = "O" or "X")
        try:
            draw_location = input(f"Player {self.player}, enter move coordinates (example: 0,0): ")
            x, y = int(draw_location.split(",")[0]), int(draw_location.split(",")[1])
            if self.isLegal(x, y):
                self.board[x][y] = token
                self.moveRecord.append((self.player, x, y))
            else:
                print("illegal move")
                self.makeMove(token)
        except:
            print("Please enter the coordinates as in the example")
            self.makeMove(token)

    def drawBoard(self):  # draw the board
        HLine = " " * 3 + "+---" * self.width + "+"
        axis = " " * 5 + "0"
        for i in range(1, self.width):
            axis += " " * 3 + str(i)
        print(axis)
        print(HLine)
        for y in range(self.height):
            print(y, end=" ")  # end=" " suppresses the newline
            for x in range(self.width):
                print(f"| {self.board[y][x]}", end=" ")
            print("|")
            print(HLine)

    def isOver(self, token):  # decide whether the game is over
        goal = 5
        # horizontal lines
        for iy in range(self.height):
            for ix in range(self.width - 4):  # extend four cells to the right
                if (self.board[iy][ix:ix + 5] == [token] * goal):
                    print(f"Player {self.player} wins!!!")
                    return True
        # vertical lines (after transposing, this becomes the horizontal check)
        Tboard = np.array(self.board)
        Tboard = Tboard.T
        for ix in range(self.width):
            for iy in range(self.height - 4):
                if (list(Tboard[ix][iy:iy + 5]) == [token] * goal):
                    print(f"Player {self.player} wins!!!")
                    return True
        # diagonal lines
        longSide = np.max([self.width, self.height])
        for delta_x in range(self.width - 4):
            for delta_y in range(self.height - 4):
                diag = [self.board[ix + delta_y][ix + delta_x] for ix in range(goal)]
                diagAnti = [self.board[ix + delta_y][longSide - 1 - ix - delta_x] for ix in range(goal)]
                if (diag == [token] * goal) or (diagAnti == [token] * goal):
                    print(f"Player {self.player} wins!!!")
                    return True
        # tie: no empty cell left anywhere on the board
        line = []
        for row in self.board:
            line += row
        if " " not in line:
            print("Tie!!!")
            return True
        return False

    def play2PlayerGame(self):
        print("Welcome to the two-player arena")
        while True:
            self.restart()  # reset the board
            P1_token, P2_token = "○", "●"
            print("P1 moves first" if self.player == "P1" else "P2 moves first")
            while True:
                token = P1_token if self.player == "P1" else P2_token
                self.makeMove(token)
                self.drawBoard()
                if self.isOver(token):
                    print("Game Over")
                    break
                self.player = "P2" if self.player == "P1" else "P1"  # switch player
            ans = input("Enter 'yes' for another game, anything else to quit: ")
            if ans == "yes":
                continue
            else:
                print("Come play again next time")
                break

    def play1PlayerGame(self, AI):
        print("Welcome to the human-vs-AI arena")
        while True:
            self.restart()  # reset the board
            P1_token, P2_token = "○", "●"
            print("P1 moves first" if self.player == "P1" else "P2 (AI) moves first")
            while True:
                token = P1_token if self.player == "P1" else P2_token
                if self.player == "P1":
                    self.makeMove(token)
                else:
                    x, y = AI.getAI_Move(self.board)
                    print(f"AI plays at coordinates: {(x, y)}")
                    self.board[x][y] = token
                    self.moveRecord.append(("AI", x, y))
                self.drawBoard()
                if self.isOver(token):
                    print("Game Over")
                    break
                self.player = "P2" if self.player == "P1" else "P1"  # switch player
            ans = input("Enter 'y' for another game, 'n' to quit: ")
            if ans == "y":
                continue
            else:
                print("Come play again next time")
                break


class AI_Player(GoBang):
    def __init__(self, width=9, height=9):  # inherit the game board
        super().__init__(width=width, height=height)

    def possibleMoves(self, remainedBoard):  # coordinates of the empty cells still playable
        possibleMovesList = []
        for iy in range(self.height):
            for ix in range(self.width):
                if remainedBoard[iy][ix] == " ":
                    possibleMovesList.append((iy, ix))
        return possibleMovesList

    def getAI_Move(self, remainedBoard, mode="random"):  # the AI plays a random strategy
        if mode == "random":
            possibleMovesList = self.possibleMoves(remainedBoard)
            listLength = len(possibleMovesList)
            moveIndex = np.random.choice(listLength)
            return possibleMovesList[moveIndex]
# -

Game = GoBang()
AI = AI_Player()  # board size defaults to 9x9, matching Game
Game.play1PlayerGame(AI)
moveRecord = Game.moveRecord

Game.play2PlayerGame()
moveRecord = Game.moveRecord

# Scratch cells: a quick check of the row-matching logic, and of list aliasing
# (b = a binds b to the same list object, so b[5] = "A" also mutates a).
a = ["O", "O", "X", "X", "X", "X", "X", "O", "X", "O", "X"]
for ix in range(9):
    if (a[ix:ix + 5] == ["X"] * 5):
        print("win")

b = a
b[5] = "A"

# +
a
# -

a.index("A")
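# The three separate scans in `isOver` (rows, transposed rows, diagonals) can be unified with direction vectors. A compact sketch of the same five-in-a-row test — an alternative formulation, not the notebook's code:

```python
def five_in_a_row(board, token, goal=5):
    """True if `token` occupies `goal` consecutive cells in any direction."""
    h, w = len(board), len(board[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, both diagonals
    for y in range(h):
        for x in range(w):
            for dy, dx in directions:
                coords = [(y + i * dy, x + i * dx) for i in range(goal)]
                if all(0 <= yy < h and 0 <= xx < w for yy, xx in coords):
                    if all(board[yy][xx] == token for yy, xx in coords):
                        return True
    return False

empty = [[" "] * 9 for _ in range(9)]
row_win = [r[:] for r in empty]
for i in range(5):
    row_win[3][2 + i] = "○"
print(five_in_a_row(row_win, "○"))  # True
```

One loop over four direction vectors replaces the row/column/diagonal special cases, and it generalises to any board size or `goal`.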
GoBang.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Analysis of Quantitative Ratings # + [markdown] slideshow={"slide_type": "subslide"} # #### The RE-Pract Survey Research Questions # # 1. What is the relevance of RE research to practitioners in the industry? # 2. What are the most highly rated research ideas? # 3. What research problems do practitioners think are most important to be focused on by the RE research community? # 4. Do papers with explicit ties to industry have higher practical relevance than other papers? # 5. Do practitioners’ perceptions and views differ in dependence on their roles? # + [markdown] slideshow={"slide_type": "subslide"} # #### Our Data Sources # # * 435 Paper Summaries with Metadata and Tags # # * 154 Respondents with Metadata # # * 2164 Paper Ratings # + [markdown] slideshow={"slide_type": "slide"} # # Setup and Definitions # + slideshow={"slide_type": "fragment"} # %run setup.py # %matplotlib inline # - # ## Scores to be used for evaluation # + def e_score(x): eScore = np.count_nonzero(x == 'Essential') / x.size return eScore def ew_score(x): ewScore = (np.count_nonzero(x == 'Essential')+ np.count_nonzero(x == 'Worthwhile'))/ x.size return ewScore def u_score(x): uScore = np.count_nonzero(x == 'Unwise') / x.size return uScore # + [markdown] slideshow={"slide_type": "slide"} # # Demographics # + [markdown] slideshow={"slide_type": "slide"} # ### The Respondents # + slideshow={"slide_type": "fragment"} dfdict['truth_metadata'].head(3) # - # ## Country # + slideshow={"slide_type": "subslide"} plot_sample(dfdict['truth_metadata'],'v_124', 'Respondent Country') # - # ## Industry Sector # + slideshow={"slide_type": "subslide"} plot_sample(dfdict['truth_metadata'],'v_19_coded', 'Respondent Industry Sector') # - # ## Role # + slideshow={"slide_type": "subslide"} 
plot_sample(dfdict['truth_metadata'],'v_5_6_integrated', 'Respondent Role') dfdict['truth_metadata'].to_excel(r"../data/dataexports/participants.xlsx") # + [markdown] slideshow={"slide_type": "slide"} # # Overall Ratings # + slideshow={"slide_type": "subslide"} print('Overall number of ratings: ',len(ratings_with_respondent_meta.index)) ratings_with_respondent_meta.head(3) # + slideshow={"slide_type": "subslide"} x = ratings_with_respondent_meta.groupby('rating').count( )['lfdn'].sort_index(ascending=False) x.to_excel(r"../data/dataexports/overall.xlsx") ratings_with_respondent_meta.to_excel(r"../data/dataexports/ratings.xlsx") x.plot.barh(stacked=False,color='k', alpha=0.7) plt.ylabel('Rating') plt.xlabel('Number of Ratings') plt.tight_layout() plt.rcParams.update({'font.size': 14}) plt.rcParams['figure.figsize'] = 9, 6 plt.savefig("../plots/overallPerception.pdf") plt.savefig("../plots/overallPerception.png") x # - # ## Number of Ratings per Summary ratings_per_summary = ratings_with_respondent_meta.groupby('PaperID').count()['lfdn'] plt.ylabel('Number of Ratings') plt.xlabel('Paper') plt.boxplot(ratings_per_summary) plt.show() print ("Minimal number of ratings:",ratings_per_summary.min()) print ("Maximum number of ratings:",ratings_per_summary.max()) print ("Median number of ratings:",ratings_per_summary.median()) # ## Number of Responses and Ratings per Respondent ratings_per_respondent = ratings_with_respondent_meta.groupby('lfdn').count()['rating'] plt.ylabel('Number of Ratings') plt.xlabel('Respondent') plt.boxplot(ratings_per_respondent) plt.show() print ("Minimal number of ratings:",ratings_per_respondent.min()) print ("Maximum number of ratings:",ratings_per_respondent.max()) print ("Median number of ratings:",ratings_per_respondent.median()) print ("Distribribution of number of ratings per respondent:") ratings_per_respondent.value_counts() # ## Distribution of ratings per respondent # + # #%load_ext rpy2.ipython # + # #%%R -i 
ratings_with_respondent_meta # some Jupyter notebook magic is needed touse calcualtion of Fleiss in R within Python # import ratings_with_respondent_meta from global environment #install.packages("irr", quiet=TRUE) #library(irr) #kappam.fleiss(ratings_with_respondent_meta) # + [markdown] slideshow={"slide_type": "slide"} # # Ratings Grouped by Respondent Metadata # + [markdown] slideshow={"slide_type": "slide"} # ## Respondent Role # - ratings_with_respondent_meta.head(3) # + slideshow={"slide_type": "subslide"} plot_df(ratings_with_respondent_meta, 'v_5_6_integrated', 'Respondent Role', absolute=True) # + slideshow={"slide_type": "subslide"} plot_df(ratings_with_respondent_meta, 'v_5_6_integrated', 'Respondent Role', absolute=False) # - # ## Respondent Country plot_df(ratings_with_respondent_meta, 'v_124', 'Country', absolute=True) # + slideshow={"slide_type": "subslide"} plot_df(ratings_with_respondent_meta, 'v_124', 'Country', absolute=False) # + [markdown] slideshow={"slide_type": "slide"} # # Ratings Grouped by Paper Metadata # + slideshow={"slide_type": "subslide"} ratings_meta = dfdict['paper_metadata'].merge(dfdict['truth_ratings'], on='PaperID') ratings_meta.head(3) # - # ## Venue ratings_meta['Venue'] = ratings_meta['Venue'].replace("FSE","ESEC/FSE") plot_df(ratings_meta, 'Venue', 'Venue', absolute=False) plt.savefig("../plots/venues.pdf") scores_venue=ratings_meta.groupby(['Venue'])['rating'].agg([np.size,e_score,ew_score,u_score]) scores_venue # Aggregate scores by RE specific venues (RE and REFSQ) and rest df = ratings_meta.set_index('Venue') mapping = {'ESEC/FSE': 'Other', 'ESEM': 'Other','ICSE': 'Other','RE': 'RE-Venue','REFSQ': 'RE-Venue'} score_REvenues=df.groupby([mapping])['rating'].agg([np.size,e_score,ew_score,u_score]) score_REvenues # ## Year plot_df(ratings_meta, 'Year', 'Year', absolute=False) plt.savefig("../plots/years.pdf") scores_year=ratings_meta.groupby(['Year'])['rating'].agg([np.size,e_score,ew_score,u_score]) scores_year # + 
[markdown] slideshow={"slide_type": "slide"} # ## Author Affiliations and Conference Tracks # + slideshow={"slide_type": "subslide"} plot_df(ratings_meta, 'AcadVsInd', 'Affiliation', absolute=False) # + slideshow={"slide_type": "subslide"} plot_df(ratings_meta, 'IndTrack', 'Industry Track', absolute=False) # + slideshow={"slide_type": "subslide"} plot_df(ratings_meta, ['AcadVsInd','IndTrack'], '(Affiliation, Industry Track)', absolute=False) plt.savefig("../plots/ind_vs_academic.pdf") # + [markdown] slideshow={"slide_type": "slide"} # # Ratings Grouped by Paper Tags # + slideshow={"slide_type": "fragment"} ratings_methodtags = papertags_method.merge(dfdict['truth_ratings'], on='PaperID') ratings_methodtags.head(1) # + slideshow={"slide_type": "fragment"} ratings_contenttags = papertags_content.merge(dfdict['truth_ratings'], on='PaperID') ratings_contenttags.head(1) # + [markdown] slideshow={"slide_type": "slide"} # ## Ratings Grouped by Research Method # + slideshow={"slide_type": "subslide"} plot_tag_ratings( tag_stats(ratings_methodtags, all_levels[:-1], totals=True, rel=False).loc['how'], "Research Method", sort_values=True, rel=False, step=50) # + slideshow={"slide_type": "subslide"} plot_tag_ratings( tag_stats(ratings_methodtags,all_levels[:-1], totals=True, rel=True).loc['how'], "Research Method", sort_values=True, rel=True) # - # ## Ratings Grouped by Evaluation Subjects # + slideshow={"slide_type": "subslide"} plot_tag_ratings( tag_stats(ratings_methodtags,all_levels[:-1], totals=True, rel=False).loc['withwhom'], "Subjects", sort_values=True, rel=False) # + slideshow={"slide_type": "subslide"} # %run setup.py plot_tag_ratings( tag_stats(ratings_methodtags, all_levels[:-1], totals=True, rel=True).loc['withwhom'], "Subjects", sort_values=True, rel=True) plt.savefig("../plots/students_vs_professionals.pdf") # - # ## Ratings Grouped by Documentation Style data = dfdict['papertags_what-manual_final'] data.rename(columns={data.columns[0]:'PaperID'}, inplace=True) 
data.head(3) # + reshaped = \ (data.set_index(data.columns.drop('documentation',1).tolist()) .documentation.str.split(',', expand=True) .stack() .reset_index() .rename(columns={0:'documentation'}) .loc[:, data.columns] ) reshaped.head(5) # + # %run setup.py ratings_with_tags = dfdict['truth_ratings'].merge(reshaped, how='inner') plot_df(ratings_with_tags, 'documentation', 'Documentation', absolute=False) plt.rcParams['figure.figsize'] = 9, 9 plt.savefig("../plots/content.pdf") #Calculate scores scores_doc=ratings_with_tags.groupby(['documentation'])['rating'].agg([np.size,e_score,ew_score,u_score]) scores_doc["category"]="documentation" ratings_with_tags.groupby('rating').count() # + papercount_doc=ratings_with_tags.groupby(['documentation'])['PaperID'].nunique() #join both tables stats_doc=scores_doc.merge(pd.DataFrame(papercount_doc), left_index=True, right_index=True).rename(columns={'PaperID':'nr_Papers'}) stats_doc # - # ## Ratings based on People aspects # + data = dfdict['papertags_what-manual_final'] data.rename(columns={data.columns[0]:'PaperID'}, inplace=True) reshaped = \ (data.set_index(data.columns.drop('people',1).tolist()) .people.str.split(',', expand=True) .stack() .reset_index() .rename(columns={0:'people'}) .loc[:, data.columns] ) reshaped.head(5) # + ratings_with_tags = dfdict['truth_ratings'].merge(reshaped, how='inner') plot_df(ratings_with_tags, 'people', 'People', absolute=False) plt.rcParams['figure.figsize'] = 9, 9 plt.savefig("../plots/content_people.pdf") #Calculate scores scores_people=ratings_with_tags.groupby(['people'])['rating'].agg([np.size,e_score,ew_score,u_score]) scores_people["category"]="people" ratings_with_tags.groupby('rating').count() # + papercount_people=ratings_with_tags.groupby(['people'])['PaperID'].nunique() #join both tables stats_people=scores_people.merge(pd.DataFrame(papercount_people), left_index=True, right_index=True).rename(columns={'PaperID':'nr_Papers'}) stats_people # - # ## Ratings based on Quality # + 
data = dfdict['papertags_what-manual_final'] data.rename(columns={data.columns[0]:'PaperID'}, inplace=True) reshaped = \ (data.set_index(data.columns.drop('quality',1).tolist()) .quality.str.split(',', expand=True) .stack() .reset_index() .rename(columns={0:'quality'}) .loc[:, data.columns] ) reshaped.head(5) # + ratings_with_tags = dfdict['truth_ratings'].merge(reshaped, how='inner') plot_df(ratings_with_tags, 'quality', 'Quality', absolute=False) plt.rcParams['figure.figsize'] = 9, 9 plt.savefig("../plots/content_quality.pdf") #Calculate scores scores_quality=ratings_with_tags.groupby(['quality'])['rating'].agg([np.size,e_score,ew_score,u_score]) scores_quality["category"]="quality" # + papercount_quality=ratings_with_tags.groupby(['quality'])['PaperID'].nunique() #join both tables stats_quality=scores_quality.merge(pd.DataFrame(papercount_quality), left_index=True, right_index=True).rename(columns={'PaperID':'nr_Papers'}) stats_quality # - # ## Rating based on Phase # + data = dfdict['papertags_what-manual_final'] data.rename(columns={data.columns[0]:'PaperID'}, inplace=True) reshaped = \ (data.set_index(data.columns.drop('phase',1).tolist()) .phase.str.split(',', expand=True) .stack() .reset_index() .rename(columns={0:'phase'}) .loc[:, data.columns] ) reshaped.head(5) # + ratings_with_tags = dfdict['truth_ratings'].merge(reshaped, how='inner') plot_df(ratings_with_tags, 'phase', 'Phase', absolute=False) plt.rcParams['figure.figsize'] = 9, 9 plt.savefig("../plots/content_phase.pdf") #Calculate scores scores_phase=ratings_with_tags.groupby(['phase'])['rating'].agg([np.size,e_score,ew_score,u_score]) scores_phase["category"]="phase" # + papercount_phase=ratings_with_tags.groupby(['phase'])['PaperID'].nunique() #join both tables stats_phase=scores_phase.merge(pd.DataFrame(papercount_phase), left_index=True, right_index=True).rename(columns={'PaperID':'nr_Papers'}) stats_phase # - # ## Ratings based on Process # + data = dfdict['papertags_what-manual_final'] 
data.rename(columns={data.columns[0]:'PaperID'}, inplace=True) reshaped = \ (data.set_index(data.columns.drop('process',1).tolist()) .process.str.split(',', expand=True) .stack() .reset_index() .rename(columns={0:'process'}) .loc[:, data.columns] ) reshaped.head(5) # + ratings_with_tags = dfdict['truth_ratings'].merge(reshaped, how='inner') plot_df(ratings_with_tags, 'process', 'Process', absolute=False) plt.rcParams['figure.figsize'] = 9, 9 plt.savefig("../plots/content_process.pdf") #Calculate scores scores_process=ratings_with_tags.groupby(['process'])['rating'].agg([np.size,e_score,ew_score,u_score]) scores_process["category"]="process" scores_process # + papercount_process=ratings_with_tags.groupby(['process'])['PaperID'].nunique() papercount_process #join both tables stats_process=scores_process.merge(pd.DataFrame(papercount_process), left_index=True, right_index=True).rename(columns={'PaperID':'nr_Papers'}) stats_process # - # ## Summary of important topics scores = stats_doc.append(stats_people).append(stats_quality).append(stats_phase).append(stats_process).sort_values(by=['e_score','ew_score'], ascending=False) unwise_scores = scores.sort_values(by=['u_score'],ascending=False) display(scores.head(10),unwise_scores.head(5))
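# A tiny worked example of the `e_score` / `ew_score` / `u_score` definitions from the setup section, on hand-made ratings (the five toy labels below are ours):

```python
import numpy as np

ratings = np.array(["Essential", "Worthwhile", "Essential", "Unwise", "Unimportant"])

e_score = np.count_nonzero(ratings == "Essential") / ratings.size           # 2/5
ew_score = (np.count_nonzero(ratings == "Essential")
            + np.count_nonzero(ratings == "Worthwhile")) / ratings.size     # 3/5
u_score = np.count_nonzero(ratings == "Unwise") / ratings.size              # 1/5
print(e_score, ew_score, u_score)  # 0.4 0.6 0.2
```

These are the three aggregates applied per group by every `groupby(...)["rating"].agg([np.size, e_score, ew_score, u_score])` call above.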
notebooks/10_quantitative_analysis.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Sample callback
#
# This notebook demonstrates the usage of the callback attribute in `pm.sample`. A callback is a function which gets called for every sample from the trace of a chain. The function is called with the trace and the current draw as arguments and will contain all samples for a single trace.
#
# The sampling process can be interrupted by throwing a `KeyboardInterrupt` from inside the callback.
#
# Use-cases for this callback include:
#
# - Stopping sampling when a number of effective samples is reached
# - Stopping sampling when there are too many divergences
# - Logging metrics to external tools (such as TensorBoard)
#
# We'll start with defining a simple model

# +
import numpy as np
import pymc3 as pm

X = np.array([1, 2, 3, 4, 5])
y = X * 2 + np.random.randn(len(X))

with pm.Model() as model:
    intercept = pm.Normal('intercept', 0, 10)
    slope = pm.Normal('slope', 0, 10)
    mean = intercept + slope * X
    error = pm.HalfCauchy('error', 1)
    obs = pm.Normal('obs', mean, error, observed=y)
# -

# We can then, for example, add a callback that stops sampling whenever 100 samples are made, regardless of the number of draws set in `pm.sample`.

# +
def my_callback(trace, draw):
    if len(trace) >= 100:
        raise KeyboardInterrupt()

with model:
    trace = pm.sample(tune=0, draws=500, callback=my_callback, chains=1)

print(len(trace))
# -

# Something to note, though, is that the trace we get passed in the callback only corresponds to a single chain. That means that if we want to do calculations over multiple chains at once, we'll need a bit of machinery to make this possible.

# +
def my_callback(trace, draw):
    if len(trace) % 100 == 0:
        print(len(trace))

with model:
    trace = pm.sample(tune=0, draws=500, callback=my_callback, chains=2, cores=2)
# -

# We can use the `draw.chain` attribute to figure out which chain the current draw and trace belong to. Combined with some kind of convergence statistic like r_hat, we can stop when we have converged, regardless of the amount of specified draws.

# +
import arviz as az

class MyCallback:
    def __init__(self, every=1000, max_rhat=1.05):
        self.every = every
        self.max_rhat = max_rhat
        self.traces = {}

    def __call__(self, trace, draw):
        if draw.tuning:
            return
        self.traces[draw.chain] = trace
        if len(trace) % self.every == 0:
            multitrace = pm.backends.base.MultiTrace(list(self.traces.values()))
            if pm.stats.rhat(multitrace).to_array().max() < self.max_rhat:
                raise KeyboardInterrupt

with model:
    trace = pm.sample(tune=1000, draws=100000, callback=MyCallback(), chains=2, cores=2)
# -

# %load_ext watermark
# %watermark -n -u -v -iv -w
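# The `KeyboardInterrupt` convention itself is independent of PyMC3 and easy to emulate. A stripped-down stand-in for the sampling loop, showing only the control flow (no actual sampling happens here):

```python
def run_sampler(n_draws, callback):
    """Minimal stand-in for a sampling loop that honours pm.sample's
    callback convention: a KeyboardInterrupt raised by the callback
    means "stop sampling cleanly", not "error out"."""
    trace = []
    try:
        for i in range(n_draws):
            trace.append(i)          # a real sampler would append a draw here
            callback(trace, i)
    except KeyboardInterrupt:
        pass                         # treated as an early-stop signal
    return trace

def stop_at_100(trace, draw):
    if len(trace) >= 100:
        raise KeyboardInterrupt

print(len(run_sampler(500, stop_at_100)))  # 100
```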
docs/source/notebooks/sampling_callback.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: p37 # language: python # name: p37 # --- # # 04.01 - DATA EXPLORATION # !wget --no-cache -O init.py -q https://raw.githubusercontent.com/rramosp/ai4eng.v1.20211.udea/main/content/init.py import init; init.init(force_download=False); init.get_weblink() # # ## Based on [Kaggle House Pricing Prediction Competition](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/) # # - Inspect and learn from the competition [Notebooks](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/notebooks) # - You must make available to this notebook the `train.csv` file from the competition [data](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data) section. If running this notebook in Google Colab you must upload it in the notebook files section in Colab. ## KEEPOUTPUT import pandas as pd import seaborn as sns import numpy as np import matplotlib.pyplot as plt # %matplotlib inline d = pd.read_csv("train.csv") d.head() # data size ## KEEPOUTPUT print (d.shape) # Missing values in columns ## KEEPOUTPUT k = d.isna().sum() k[k!=0] # ## Inspect the target variable ## KEEPOUTPUT sns.distplot(d['SalePrice']); # ## Discover data types ## KEEPOUTPUT d.columns ## KEEPOUTPUT for c in d.columns: print ("%20s"%c, d[c].dtype) # ## Inspect numeric columns ## KEEPOUTPUT d._get_numeric_data().describe().T ## KEEPOUTPUT cols = ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt', 'SalePrice'] #cols = np.unique(list(np.random.permutation(d._get_numeric_data().columns)[:5])+['SalePrice']) sns.set() sns.pairplot(d[cols]) # ### correlations ## KEEPOUTPUT #correlation matrix corrmat = d.corr() f, ax = plt.subplots(figsize=(12, 9)) sns.heatmap(corrmat, vmax=.8, square=True); # ## Inspect categorical variables ## KEEPOUTPUT ccols = [i for i in d.columns if not 
i in d._get_numeric_data()] print (ccols) ## KEEPOUTPUT for c in ccols: print ("%10s"%c, np.unique(d[c].dropna())) ## KEEPOUTPUT c="GarageType" d[c].value_counts() ## KEEPOUTPUT plt.figure(figsize=(20,8)) for i,c in enumerate(["ExterQual", "HouseStyle", "LandSlope", "Alley"]): plt.subplot(2,4,i+1) k=d[[c,"SalePrice"]].dropna() for v in d[c].dropna().unique(): sns.distplot(k.SalePrice[k[c]==v], label=v); plt.title(c) plt.yticks([]) plt.legend() plt.subplot(2,4,i+5) vc = k[c].value_counts() sns.barplot(vc.index, vc.values) plt.xticks(range(len(vc)), vc.index, rotation="vertical") # ## Overview of missing values # Missing values in columns ## KEEPOUTPUT k = d.isna().sum() k[k!=0] ## KEEPOUTPUT ax = plt.figure(figsize=(30,15)).add_subplot(111) ax.imshow(d.isna().values.T) ax.set_aspect(12) plt.yticks(range(d.shape[1]), d.columns);
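A quick complement to the correlation heatmap above: the correlation of each numeric column with the target can also be ranked directly. Below is a self-contained sketch on invented toy data; in the notebook itself you would call `d.corr()["SalePrice"]` on the real dataframe.

```python
import pandas as pd

# Toy data standing in for train.csv (hypothetical values, for illustration only)
toy = pd.DataFrame({
    "GrLivArea": [1000, 1500, 2000, 2500],   # larger living area, higher price
    "YearBuilt": [1990, 1980, 1970, 1960],   # older house, lower price
    "SalePrice": [100, 150, 200, 250],
})

# Rank features by their correlation with the target
corr = toy.corr()["SalePrice"].drop("SalePrice").sort_values(ascending=False)
print(corr)
```

Sorting the correlation column this way often surfaces the same candidate features that the pairplot above was chosen from.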
content/NOTES 04.01 - DATA EXPLORATION.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + # %load_ext autoreload # %autoreload 2 import pandas as pd import numpy as np from src.easycoref.coref import CorefModel # - # ## **Using class CorefModel()** # # # The class CorefModel() implements two coreference detection models: NeuralCoref and e2eCoref. It allows the user to choose one of these two models and use it to detect and extract all coreference chains from a list of texts stored in a dataset. # # The user must provide a dataset with one or several columns of texts on which coreference detection should be performed. The CorefModel() tool then detects and extracts coreference chains for each text, and places the resulting clusters in a new column of the dataset. The user can also use a visualisation tool to highlight the coreference chains found by a model in a given text. # # In this tutorial, we present an example of coreference detection and extraction on a dataset of interest, using each model and each function of the CorefModel() class. This tutorial illustrates the standard use of the class. # # ### 1. Dataset # # To use the class, the user must provide either a CSV or a JSON dataset, with at least one column containing texts (as strings). # # The dataset df_tutorial used in this example is a CSV. It has 4 rows and 2 columns of interest: 'text1' and 'text2', which are the columns of texts for which we want to detect coreference chains. # Texts are in English and extracted from a press corpus. # # # + datapath = './src/easycoref/data/df_tutorial.csv' df_tutorial = pd.read_csv(datapath).drop('Unnamed: 0', axis=1) df_tutorial.head() # - # # ## **Standard steps of CorefModel():** # # # **The steps must be done in order.** # # ## 1.
Calling the class coref_model = CorefModel() # ## 2. Preprocessing # # **Import the dataset** using its path and the names of the columns of interest (colnames). # # colnames can be a string if there is only one column of interest, or a list of strings if there are several. coref_model.import_dataset(datapath, colnames=['text1','text2']) # **Clean the dataset:** this can only be done after importation. # The columns of interest (given by colnames) are cleaned: string formats are checked, typos are corrected and line breaks are removed. coref_model.clean() # ## 3. Choosing the model for the following steps # # # Once a model is chosen, the inference and visualisation steps must be done successively, one model after the other. Be careful: if inference is run for both models and then visualisation for both, the visualisation function will only use the last resulting dataframes (*df_results*, *df_standardized_results*), so it will only work for the last model. # ## **When choosing: NeuralCoref** # # ### **4. Inference** # # We use the function **inference**, which takes the model as argument and requires the dataset to have already been imported and cleaned. # # NeuralCoref detects and extracts coreference chains. # # **Steps of the inference function:** # # - **Transforming the dataset format** # # The dataset must be in the right format for NeuralCoref. # # For NeuralCoref we only need the columns of interest in the right format, which is already the case after preprocessing. # The transformation step creates *df_eval*, used for evaluation (which is directly the dataframe *df_tutorial* after preprocessing). # # - **Detecting coreference chains** # # Coreference chains are detected and extracted for each text and column of the dataframe, and the results are presented in a new dataframe called df_results with the columns of interest *col* and the columns of predicted clusters *cluster_col*.
Each line of a column is a list of detected clusters, each cluster being a list of spans (a NeuralCoref-specific class): a span is the interval of text corresponding to the mention. # # **The inference function returns the dataframe *df_results* with the original columns of text and the columns of detected coreference chains.** coref_model.inference(model='neuralcoref') # ### **5. Visualisation** # # The inference function is useful to see the predicted coreference clusters as lists of text, but the dataframe it returns can be complex to read and not very visual. To see the coreference chains of a specific text of the dataframe highlighted, we can print the output of the **visualisation** function. # # This requires the dataset of interest to have already been imported, cleaned and passed through inference. # # The **visualisation** function takes as arguments the model (which must be the same as the one chosen for inference) and the position of the text of interest: column col and line i. print(coref_model.visualisation(model='neuralcoref', col='text1', i=1)) # # **When choosing: e2eCoref** # ### **4. Inference** # # We use the function **inference**, which takes the model as argument and requires the dataset to have already been imported and cleaned. # # e2eCoref detects and extracts coreference chains. # # **Steps of the inference function:** # # - **Transforming the dataset format** # # The dataset must be in the right format for e2eCoref. # # For e2eCoref we need to create a specific JSON file in the right format for each column of interest. # The transformation step creates that file for each column *col*, called *df_coref_col*. # # - **Detecting coreference chains** # # Coreference chains are detected and extracted for each text and column of the dataframe, and the results are presented in a new dataframe called *df_results* with the columns of interest *col* and the columns of predicted clusters *cluster_col*.
Each line of a column is a list of detected clusters, each cluster being a list of strings. # # - **Creating a dataframe useful for further use** # # In parallel to the coreference chain detection, we create *df_useful*, which stores, for each column *col*, the columns *text_list_col* - the text in list format - and *predicted_clusters_col* - a list of clusters, each cluster being a list of coreference mention positions in list format. "List format" means that the interval [a,b] given as positions corresponds to the mention returned when selecting that interval of the text in list format: text_list_col[a,b]. # # # **The inference function returns the dataframe *df_results* with the original columns of text and the columns of detected coreference chains.** # # # # # coref_model.inference(model='e2ecoref') # ### **5. Visualisation** # # **Steps:** # # - **Standardized results** # # We use df_useful to get the positions of clusters and convert them to "span positions", producing for each column *col* a *span_positions_col* column. # # - **Visualisation** # # Those columns are then used for visualisation to rewrite the text while highlighting mentions of the same coreference chain in the same colour. print(coref_model.visualisation(model='e2ecoref',col='text1',i=0))
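The "list format" position convention described above can be illustrated in plain Python. This is only a sketch of the convention, not part of easycoref's API; the function and data names are hypothetical, and it assumes inclusive end positions for the [a,b] intervals.

```python
def clusters_to_mentions(tokens, clusters):
    """Map clusters of [a, b] token positions (inclusive ends assumed)
    back to the mention strings they point at."""
    return [[" ".join(tokens[a:b + 1]) for a, b in cluster] for cluster in clusters]


# Hypothetical text in list format and its predicted clusters
tokens = "Anna met Bob and she greeted him".split()
clusters = [[[0, 0], [4, 4]],   # Anna / she
            [[2, 2], [6, 6]]]   # Bob / him

print(clusters_to_mentions(tokens, clusters))
# → [['Anna', 'she'], ['Bob', 'him']]
```

This is essentially the lookup the visualisation step performs when it converts cluster positions back into highlighted spans of text.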
tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="KkxnLSGarD3m" # # Describing Data # # What does "describing data" mean? # # Frictionless is a project based on the [Frictionless Data Specifications](https://specs.frictionlessdata.io/). It's a set of patterns for creating metadata, including Data Package (for datasets), Data Resource (for files), and Table Schema (for tables). # # In other words, "describing data" means creating metadata for your data files. The reason for having metadata is simple: usually, data files themselves are not capable of providing enough information. For example, if you have a data table in a CSV format, it misses a few critical pieces of information: # - meaning of the fields e.g., what the `size` field means; is it clothes size or file size # - data types information e.g., is this field a string or an integer # - data constraints e.g., the minimum temperature for your measurements # - data relations e.g., identifiers connection # - and others # # # + id="NZZ48jzUrM-O" # ! pip install frictionless # + [markdown] id="bxX922q1_B_O" # # For a dataset, there is even more information that can be provided, such as the general dataset purpose, information about data sources, a list of authors, and more. Of course, when there are many tabular files, relational rules can be very important. Usually, there are foreign keys ensuring the integrity of the dataset; for example, there is some reference table containing country names and other tables using it as a reference. Data in this form is called "normalized data" and it occurs very often in scientific and other kinds of research.
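The foreign-key idea can be sketched in a few lines of plain Python (toy data, not the Frictionless API): a reference table of countries, and a check that every `country_id` in a second table actually points to an existing row.

```python
# Toy "normalized" dataset: a reference table and a table pointing into it
countries = [{"id": 1, "name": "Britain"}, {"id": 2, "name": "France"}]
cities = [
    {"id": 1, "name": "London", "country_id": 1},
    {"id": 2, "name": "Paris", "country_id": 2},
    {"id": 3, "name": "Atlantis", "country_id": 9},  # broken reference
]

# A foreign key guarantees every country_id exists in the reference table
valid_ids = {row["id"] for row in countries}
violations = [row for row in cities if row["country_id"] not in valid_ids]
print(violations)  # the Atlantis row breaks referential integrity
```

Declaring such a rule in metadata lets tools run exactly this kind of check automatically instead of relying on ad-hoc scripts.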
# # Having a general understanding of what "describing data" is, we can now articulate why it's important: # - **data validation**: metadata helps to reveal problems in your data in the early stages of your workflow # - **data publication**: metadata provides additional information that your data can't include # # These are not the only two benefits of having metadata, but they are the two most important. Please continue reading to learn how Frictionless helps you achieve these advantages when describing your data. # + [markdown] id="LhW-Sq2cuFTW" # ## Describe Functions # # The `describe` functions are the main tool for data describing. In many cases, this high-level interface is enough for data exploration and other needs. # # The frictionless framework provides 4 different `describe` functions in Python: # - `describe`: it will detect the source type and return Data Resource or Data Package metadata # - `describe_schema`: it will always return Table Schema metadata # - `describe_resource`: it will always return Data Resource metadata # - `describe_package`: it will always return Data Package metadata # # In command-line, there is only 1 command but there is a flag to adjust the behavior: # # ```bash # $ frictionless describe # $ frictionless describe --source-type schema # $ frictionless describe --source-type resource # $ frictionless describe --source-type package # ``` # # For example, if we want a Data Package descriptor for a single file: # # + id="Ib55AQdAAJuI" executionInfo={"status": "ok", "timestamp": 1601546777499, "user_tz": -180, "elapsed": 954, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="b34c65bc-8845-49b9-ceae-cfc6c68504a4" colab={"base_uri": "https://localhost:8080/", "height": 67} # ! wget -q -O table.csv https://raw.githubusercontent.com/frictionlessdata/frictionless-py/master/data/table.csv # !
cat table.csv # + id="sUm4daRpxqcH" executionInfo={"status": "ok", "timestamp": 1601546781606, "user_tz": -180, "elapsed": 2121, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="0e322318-c9b7-4ade-f55e-96bcc6a125d0" colab={"base_uri": "https://localhost:8080/", "height": 487} # ! frictionless describe table.csv --source-type package # + [markdown] id="8iibTiI1rG9C" # ### Describing Schema # # Table Schema is a specification for providing a "schema" (similar to a database schema) for tabular data. This information includes the expected type of each value in a column ("string", "number", "date", etc.), constraints on the value ("this string can only be at most 10 characters long"), and the expected format of the data ("this field should only contain strings that look like email addresses"). Table Schema can also specify relations between tables. # # We're going to use this file for this section's examples. For this guide, we use solely CSV files because they are easy to demonstrate, but in general Frictionless can handle Excel, JSON, SQL, and many other formats: # # + id="4SXsPQ2IrVER" executionInfo={"status": "ok", "timestamp": 1601546819094, "user_tz": -180, "elapsed": 924, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="e438a0c9-e66e-4c81-e2f5-74ea0e7ad49a" colab={"base_uri": "https://localhost:8080/", "height": 118} # ! wget -q -O country-1.csv https://raw.githubusercontent.com/frictionlessdata/frictionless-py/master/data/country-1.csv # !
cat country-1.csv # + [markdown] id="Cr7FvgTTrdzO" # Let's get the Table Schema using the Frictionless framework: # # + id="OBIVmbLVrf0J" from frictionless import describe_schema schema = describe_schema("country-1.csv") schema.to_yaml("country.schema-simple.yaml") # + [markdown] id="tRla4DIvrlJ2" # The high-level functions of Frictionless operate on the dataset and resource levels, so we have to use a little Python programming to get schema information. Below we will show how to use the command-line interface for similar tasks. # # + id="tq4zuEtZrnAQ" executionInfo={"status": "ok", "timestamp": 1596633422057, "user_tz": -180, "elapsed": 2003, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="11f4342f-ce5a-4380-b931-d51bf51ceeba" colab={"base_uri": "https://localhost:8080/", "height": 168} # ! cat country.schema-simple.yaml # + [markdown] id="u9xxi2j-rq65" # As we can see, we were able to infer basic metadata for our data file, but describing data doesn't end here: we can provide the additional information discussed earlier: # # + id="RTspMyNmrswr" executionInfo={"status": "ok", "timestamp": 1601546837137, "user_tz": -180, "elapsed": 658, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} from frictionless import describe_schema schema = describe_schema("country-1.csv") schema.get_field("id").title = "Identifier" schema.get_field("neighbor_id").title = "Identifier of the neighbor" schema.get_field("name").title = "Name of the country" schema.get_field("population").title = "Population" schema.get_field("population").description = "According to the year 2020's data" schema.get_field("population").constraints["minimum"] = 0 schema.foreign_keys.append( {"fields": ["neighbor_id"], "reference": {"resource": "", "fields": ["id"]}} ) schema.to_yaml("country.schema.yaml") # + [markdown] id="jMXLIu5Ur6Oc" # # Let's break it down: # - we added a title for all the fields # - we added a description to
the "Population" field; the year information can be critical to interpret the data # - we set a constraint to the "Population" field because it can't be less than 0 # - we added a foreign key saying that "Identifier of the neighbor" should present in the "Identifier" field # # + id="rQ43A6N-r7yp" executionInfo={"status": "ok", "timestamp": 1596633454217, "user_tz": -180, "elapsed": 2012, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="616a0dec-a11a-41c8-c17c-5f32c4fca060" colab={"base_uri": "https://localhost:8080/", "height": 403} # ! cat country.schema.yaml # + [markdown] id="TiazRDTJsBPL" # Later we're going to show how to use the schema we created to ensure the validity of your data; in the next few sections, we will focus on Data Resource and Data Package metadata. # # To continue learning about table schemas please read: # - [Table Schema Spec](https://specs.frictionlessdata.io/table-schema/) # - API Reference: Schema # # + [markdown] id="6cMy-O--sEAs" # ### Describing Resource # # The Data Resource format describes a data resource such as an individual file or table. # The essence of a Data Resource is a locator for the data it describes. # A range of other properties can be declared to provide a richer set of metadata. # # For this section, we will use the file that is slightly more complex to handle. For some reason, cells are separated by the ";" char and there is a comment on the top: # # + id="FlZEe0Obsg5I" executionInfo={"status": "ok", "timestamp": 1601546795791, "user_tz": -180, "elapsed": 980, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="dee7ade1-e280-45ce-f287-b8edb13e6d5c" colab={"base_uri": "https://localhost:8080/", "height": 134} # ! wget -q -O country-2.csv https://raw.githubusercontent.com/frictionlessdata/frictionless-py/master/data/country-2.csv # ! 
cat country-2.csv # + [markdown] id="g5Vu61Ooso2a" # Let's describe it this time using the command-line interface: # # + id="AjvZdPHOsr6k" executionInfo={"status": "ok", "timestamp": 1601546799682, "user_tz": -180, "elapsed": 1753, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="1d5bb6d2-2e6f-42d3-9af5-387281f31e2b" colab={"base_uri": "https://localhost:8080/", "height": 420} # ! frictionless describe country-2.csv # + [markdown] id="zLehxfx4swSb" # OK, that's clearly wrong. As we have seen in the "Introductory Guide", Frictionless is capable of inferring metadata for some complicated cases, but our table is too weird for it. We need to program it: # # + id="6PUztOFjsyRW" executionInfo={"status": "ok", "timestamp": 1601546841577, "user_tz": -180, "elapsed": 680, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} from frictionless import describe_resource, Schema resource = describe_resource("country-2.csv") resource.dialect.header_rows = [2] resource.dialect.delimiter = ";" resource.schema = Schema("country.schema.yaml") resource.to_yaml("country.resource.yaml") # + [markdown] id="2pG0GugPs7ct" # So what we are doing here: # - we set header rows to be row number 2; as humans, we can easily see it # - we set the CSV delimiter to be ";"; this file is not a usual CSV for some reason # - we reuse the schema we created earlier as the data has the same structure and meaning # # + id="gH_o68Tps-kv" executionInfo={"status": "ok", "timestamp": 1601546846417, "user_tz": -180, "elapsed": 911, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="df3f1bb9-b8a4-49da-97c7-5eaeb4f054bc" colab={"base_uri": "https://localhost:8080/", "height": 773} # !
cat country.resource.yaml # + [markdown] id="KXg7PkyJtDhb" # Our resource metadata includes the schema metadata we created earlier, but it also has: # - general information about the file's schema, format, and compression # - information about the CSV dialect helping software understand how to read it # - checksum information such as hash, bytes, and rows # # But the most important difference is that resource metadata contains the `path` property. It conceptually distinguishes the Data Resource specification from the Table Schema specification: while a Table Schema descriptor can describe a class of data files, a Data Resource descriptor describes one exact data file, `data/country-2.csv` in our case. # # Using programming terminology we could say that: # - a Table Schema descriptor is abstract (for a class of files) # - a Data Resource descriptor is concrete (for an individual file) # # We will show the practical difference in the "Using Metadata" section, but in the next section we will overview the Data Package specification. # # To continue learning about data resources please read: # - [Data Resource Spec](https://specs.frictionlessdata.io/data-resource/) # - API Reference: Resource # # + [markdown] id="gYkduTp6tGQC" # ### Describing Package # # A Data Package consists of: # - Metadata that describes the structure and contents of the package # - Resources such as data files that form the contents of the package # The Data Package metadata is stored in a "descriptor". This descriptor is what makes a collection of data a Data Package. The structure of this descriptor is the main content of the specification below. # # In addition to this descriptor, a data package will include other resources such as data files. The Data Package specification does NOT impose any requirements on their form or structure and can, therefore, be used for packaging any kind of data.
# # The data included in the package may be provided as: # - Files bundled locally with the package descriptor # - Remote resources, referenced by URL # - "Inline" data (see below) which is included directly in the descriptor # # For this section, we will use the following files: # # + id="d_lhFoI4tK6b" executionInfo={"status": "ok", "timestamp": 1601546856493, "user_tz": -180, "elapsed": 1001, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="d6d737db-3093-4a9a-884f-e96506545b90" colab={"base_uri": "https://localhost:8080/", "height": 118} # ! wget -q -O country-3.csv https://raw.githubusercontent.com/frictionlessdata/frictionless-py/master/data/country-3.csv # ! cat country-3.csv # + id="LrBih29vtcT1" executionInfo={"status": "ok", "timestamp": 1601546858327, "user_tz": -180, "elapsed": 968, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="199c5ec1-2858-4780-af7f-35055cd08735" colab={"base_uri": "https://localhost:8080/", "height": 118} # ! wget -q -O capital-3.csv https://raw.githubusercontent.com/frictionlessdata/frictionless-py/master/data/capital-3.csv # ! cat capital-3.csv # + [markdown] id="VKNDU957tpRJ" # First of all, let's describe our package using the command-line interface. We did it before for a resource but now we're going to use a glob pattern to indicate that there are multiple files: # # + id="zYYXTnRdtrig" executionInfo={"status": "ok", "timestamp": 1601546862480, "user_tz": -180, "elapsed": 1771, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="7ebfba5e-8d5f-4baf-e12e-725ec41f4065" colab={"base_uri": "https://localhost:8080/", "height": 958} # ! frictionless describe *-3.csv # + [markdown] id="LIR2ybwntzj8" # We have already learned about many concepts that are reflected in this metadata. We can see resources, schemas, fields, and other familiar entities. 
The difference is that this descriptor has information about multiple files, which is the most popular way of sharing data - in datasets. Very often you have not only one data file but several, plus some textual documents, e.g. PDFs, and others. To package all of these files with the corresponding metadata we use data packages. # # Following the pattern already familiar to the reader of this guide, we add some additional metadata: # # + id="lDHMlcdct24V" executionInfo={"status": "ok", "timestamp": 1601546867329, "user_tz": -180, "elapsed": 654, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} from frictionless import describe_package package = describe_package("*-3.csv") package.title = "Countries and their capitals" package.description = "The data was collected as a research project" package.get_resource("country-3").name = "country" package.get_resource("capital-3").name = "capital" package.get_resource("country").schema.foreign_keys.append( {"fields": ["capital_id"], "reference": {"resource": "capital", "fields": ["id"]}} ) package.to_yaml("country.package.yaml") # + [markdown] id="OtVFkMsSt6gd" # In this case, we add a relation between different files connecting `id` and `capital_id`. Also, we provide dataset-level metadata to document the purpose of this dataset. We haven't added individual field titles and descriptions, but that can be done as shown in the "Table Schema" section. # # + id="pDTh--lxt9Ol" executionInfo={"status": "ok", "timestamp": 1601546870118, "user_tz": -180, "elapsed": 626, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="0ce828f6-c05c-4100-f3b5-ea1f6eb8ab79" colab={"base_uri": "https://localhost:8080/", "height": 1000} # !
cat country.package.yaml # + [markdown] id="0J4WuKRJuCX0" # The main role of the Data Package descriptor is describing a dataset; as we can see, it includes previously shown descriptors such as `schema`, `dialect`, and `resource`. But it's a mistake to think that Data Package is therefore the least important specification; actually, it completes the Frictionless Data suite, making it possible to share and validate not only individual files but complete datasets. # # To continue learning about data packages please read: # - [Data Package Spec](https://specs.frictionlessdata.io/data-package/) # - API Reference: Package # # + [markdown] id="PUaRUDwm_ZVs" # ## Description Options # # The `describe` functions above share only one common argument: # - `expand`: whether to expand output metadata or not (see "Expanding Metadata") # # + [markdown] id="QAVM-J2r_2vg" # **Package** # # The `describe_package` function doesn't accept any additional options. # + [markdown] id="-BlMFUoe_w53" # **Resource** # # With the `describe_resource` function you can use as options: # - File Details (see "Extracting Data") # - File Control (see "Extracting Data") # - Table Dialect (see "Extracting Data") # - Table Query (see "Extracting Data") # - Header Options (see "Extracting Data") # - Infer Options # + [markdown] id="42kA6Fhex0YA" # ## Metadata Purpose # # This documentation contains a great deal of information on how to use metadata and why it's vital for your data. In this article, we're going to provide a quick example based on the "Data Resource" section, but please read other documents to get the full picture. # # Let's get back to this exotic data table: # # + id="K1bSnYSTx4w0" executionInfo={"status": "ok", "timestamp": 1596113530672, "user_tz": -180, "elapsed": 2063, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="49b05b6f-bb0f-4b8a-93a6-425c2d950b7b" colab={"base_uri": "https://localhost:8080/", "height": 134} # !
cat country-2.csv # + [markdown] id="7LQ49fVCx9nP" # As we tried before, by default Frictionless can't properly describe this file, so we got something like: # # + id="K1chD5w0yAJY" executionInfo={"status": "ok", "timestamp": 1601546884494, "user_tz": -180, "elapsed": 1753, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="6bfd23e9-fcc3-4e78-f15c-650f924b4349" colab={"base_uri": "https://localhost:8080/", "height": 420} # ! frictionless describe country-2.csv # + [markdown] id="QbdLRQ_myFl5" # Trying to extract the data will fail the same way: # # + id="LULksbnMyHTy" executionInfo={"status": "ok", "timestamp": 1596113597210, "user_tz": -180, "elapsed": 2793, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="127d0b9c-a62c-46e5-e535-597b2f895ee6" colab={"base_uri": "https://localhost:8080/", "height": 185} # ! frictionless extract country-2.csv # + [markdown] id="0TnObnx2ycyK" # Basically, that's a really important idea - without metadata much software will not even be able to read this data file; furthermore, without metadata people cannot understand the purpose of this data. Let's now use the `country.resource.yaml` file we created in the "Data Resource" section: # # + id="2eUcS3Q1ycLY" executionInfo={"status": "ok", "timestamp": 1596113685765, "user_tz": -180, "elapsed": 2758, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="c8705ec0-27d2-4ce8-9d17-dfba12fd7783" colab={"base_uri": "https://localhost:8080/", "height": 168} # ! frictionless extract country.resource.yaml # + [markdown] id="cK_PNcQRyik2" # As we can see, it's now fixed. The metadata we saved earlier saved the day. If we explore this data in Python we can discover that it also has correct data types, e.g. `id` is a Python integer, not a string. This fact will allow exporting and sharing this data without any fear.
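What the dialect metadata (header on row 2, ";" delimiter) buys us can be sketched with the standard csv module. The data below is invented, loosely modeled on the country-2.csv description; the point is only that once the dialect is recorded, any parser can apply it mechanically:

```python
import csv
import io

# A file like country-2.csv: a comment line, then a ';'-separated table
raw = "# clean this data\nid;neighbor_id;name;population\n1;;Britain;67\n2;3;France;67\n"

# Dialect metadata, in the spirit of what country.resource.yaml records
dialect = {"header_rows": [2], "delimiter": ";"}

# Skip to the declared header row, then parse with the declared delimiter
lines = raw.splitlines()[dialect["header_rows"][0] - 1:]
rows = list(csv.reader(io.StringIO("\n".join(lines)), delimiter=dialect["delimiter"]))
header, data = rows[0], rows[1:]
print(header)  # ['id', 'neighbor_id', 'name', 'population']
print(data)
```

Without the two dialect values, a default CSV reader would treat the comment as data and each whole row as a single comma-less field.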
# # + [markdown] id="xLslK8JyryXu" # ## Metadata Classes # # Frictionless has many classes that are derived from the `Metadata` class. It means that all of them can be treated as metadata objects with getters and setters, `to_json` and `to_yaml` functions, and the rest of the Metadata API. See "API Reference" for more information about these classes: # # - Package # - Resource # - Schema # - Field # - Control # - Dialect # - Query # - Report # - Pipeline # - Error # - etc # # # + [markdown] id="SIxwZBmPyp2L" # ## Inferring Metadata # # Many Frictionless functions infer metadata under the hood, such as `describe`, `extract`, and many more. On a lower level, it's possible to control this process. Let's create a `Resource`. # # + id="F8nkhhCizHvT" executionInfo={"status": "ok", "timestamp": 1601546899579, "user_tz": -180, "elapsed": 659, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="68051f78-7bd5-4355-9704-d38825f67ddb" colab={"base_uri": "https://localhost:8080/", "height": 34} from pprint import pprint from frictionless import Resource resource = Resource(path="country-1.csv") pprint(resource) # + [markdown] id="11meE9w8zLOe" # Frictionless always tries to be as explicit as possible. We didn't provide any metadata except for `path`, so we got the expected result. But now, we'd like to `infer` additional metadata: # + id="EMT1R6kZzOud" executionInfo={"status": "ok", "timestamp": 1601546902025, "user_tz": -180, "elapsed": 663, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="59911d43-693d-4010-b07e-602fcb611ac8" colab={"base_uri": "https://localhost:8080/", "height": 353} resource.infer() pprint(resource) # + [markdown] id="fOxBNThKzSUQ" # The result is really familiar to us already. We have seen it a lot as an output of the `describe` function or command. Basically, that's what this high-level function does under the hood: create a resource and then infer additional metadata.
# # All main `Metadata` classes have this method with different available options but with the same conceptual purpose: # - `package.infer` # - `resource.infer` # - `schema.infer` # + [markdown] id="sE0zZv2qzVt7" # ## Expanding Metadata # # By default, Frictionless never adds default values to metadata, for example: # # + id="0Rbb-hVVzXzw" executionInfo={"status": "ok", "timestamp": 1596113917749, "user_tz": -180, "elapsed": 684, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="3a484295-a91c-4fbf-fa46-6bc04a92e497" colab={"base_uri": "https://localhost:8080/", "height": 84} from pprint import pprint from frictionless import describe resource = describe("country-1.csv") pprint(resource.schema) # + [markdown] id="EUKs1pE6zbZO" # Under the hood, for example, it still treats an empty string as a missing value because that's the specs' default. We can reveal implicit metadata by expanding it: # # + id="_7w8fk3azc0a" executionInfo={"status": "ok", "timestamp": 1596113934541, "user_tz": -180, "elapsed": 796, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="3bbcc38b-46f5-42a4-f2ed-b43c728e3a32" colab={"base_uri": "https://localhost:8080/", "height": 252} resource.schema.expand() pprint(resource.schema) # + [markdown] id="8Hluowkazgod" # ## Transforming Metadata # # We have seen it before but let's re-iterate; it's possible to transform core metadata properties using the Python interface: # # + id="4VEto-FnzjL3" executionInfo={"status": "ok", "timestamp": 1601546914067, "user_tz": -180, "elapsed": 669, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} from frictionless import Resource resource = Resource("country.resource.yaml") resource.title = "Countries" resource.description = "It's a research project" resource.dialect.header_rows = [2] resource.dialect.delimiter = ";" resource.to_yaml("country.resource.yaml") # + [markdown] id="B35UJzShz1FV" # But
not only the Python interface is available. Thanks to the flexibility of the Frictionless Specs, we can add arbitrary metadata to our descriptor. We use dictionary operations for it: # # + id="aPMiVRh1z2vI" executionInfo={"status": "ok", "timestamp": 1601546916789, "user_tz": -180, "elapsed": 632, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} from frictionless import Resource resource = Resource("country.resource.yaml") resource["customKey1"] = "Value1" resource["customKey2"] = "Value2" resource.to_yaml("country.resource.yaml") # + [markdown] id="0LU_IeDAz5w3" # Let's check it out: # + id="b_JqJ6VZz7LP" executionInfo={"status": "ok", "timestamp": 1601546919421, "user_tz": -180, "elapsed": 840, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12160649330696878788"}} outputId="932852d4-91d5-40b2-e5c1-490976f58750" colab={"base_uri": "https://localhost:8080/", "height": 840} # ! cat country.resource.yaml # + [markdown] id="9z4sm4OO0BB_" # ## Validating Metadata # # Metadata validity is an important topic so it's recommended to validate your metadata before publishing. 
For example, let's make it invalid:

# + id="z9ljLGjB0I8r"
from frictionless import Resource

resource = Resource("country.resource.yaml")
resource["title"] = 1
print(resource.metadata_valid)
print(resource.metadata_errors)

# + [markdown] id="yfQPIZo30OND"
# Let's fix our resource metadata:

# + id="o9qtXgy50QS0"
from frictionless import Resource

resource = Resource("country.resource.yaml")
resource["title"] = 'Countries'
print(resource.metadata_valid)

# + [markdown] id="UOJLhhJw0TEZ"
# You need to check `metadata.metadata_valid` only if you change metadata by hand; the available high-level functions like `validate` do it on their own.

# + [markdown] id="S0oQZ8eJ0XsS"
# ## Mastering Metadata
#
# The `Metadata` class is under the hood of many of Frictionless's classes. Let's overview the main `Metadata` features; for a full reference, please read "API Reference". Let's take a look at the `Metadata` class, which is a `dict` subclass:

# + [markdown] id="xTeR5PBXRVRv"
# ```text
# Metadata(dict)
#   metadata_attach
#   metadata_extract
#   metadata_process
#   metadata_validate
#   ---
#   metadata_valid
#   metadata_errors
#   ---
#   to_json
#   to_yaml
# ```

# + [markdown] id="ltR13kX5Rwpl"
# This class exists for subclassing, and here are the important points that will help you work with metadata objects and design and write new metadata classes:
# - to bind default values to a property, it's possible to use `metadata_attach` (see e.g. the `Schema` class)
# - during initialization, a descriptor is processed by `metadata_extract`
# - metadata detects any shallow update and calls `metadata_process`
# - checking for validity or errors will trigger `metadata_validate`
# - functions exporting to JSON and YAML are available by default
# - `metadata_profile` can be set to a JSON Schema
# - `metadata_Error` can be set to an Error class

# + [markdown] id="7M_-na--0c53"
# ## Infer Options
#
# Let's explore some handy options to customize the infer process. All of them are available in some form for all the functions above and for different invocation types: in Python, in the CLI, or for a REST server.

# + [markdown] id="0AIOmMgH0xYx"
# **Infer Type**
#
# This option allows manually setting all the field types to a given type. It's useful when you need to skip data casting (setting the `any` type) or have everything as a string (setting the `string` type):

# + id="v2YCKm8s02bQ"
# ! frictionless describe country-1.csv --infer-type string

# + [markdown] id="N_LfrV261CNe"
# **Infer Names**
#
# Sometimes you don't want to use the existing header row to compose field names.
It's possible to provide custom names:

# + id="4cHIGy1n1EIC"
from frictionless import describe

resource = describe("country-1.csv", infer_names=["f1", "f2", "f3", "f4"])
print(resource.schema.field_names)

# + [markdown] id="Lx8094711HVr"
# **Infer Volume**
#
# By default, Frictionless will use the first 100 rows to detect field types. This can be customized. The following code will be slower, but the result can be more accurate:

# + id="UhJPE73F1JZH"
from frictionless import describe

resource = describe("country-1.csv", infer_volume=1000)

# + [markdown] id="YH3-nkU_1Q0j"
# **Infer Confidence**
#
# By default, Frictionless uses a 0.9 (90%) confidence level for data type detection. It means that if a field contains 9 integers and one string, it will be inferred as an integer field. If you want a guarantee that an inferred schema will conform to the data, you can set it to 1 (100%):

# + id="J1-cBbje1TS2"
from frictionless import describe

resource = describe("country-1.csv", infer_confidence=1)

# + [markdown] id="EE9Mxk5v1YtF"
# **Infer Missing Values**
#
# Missing values are an important concept in data description. They provide information about what cell values should be considered as nulls.
We can customize the defaults:

# + id="mlW-IMUH1b_I"
from pprint import pprint
from frictionless import describe

resource = describe("country-1.csv", infer_missing_values=["", "67"])
pprint(resource.schema.missing_values)
pprint(resource.read_rows())

# + [markdown] id="k5sFaqKt1h2N"
# As we can see, textual values equal to "67" are now treated as nulls. This is handy when you have data with values like '-', 'n/a', and similar.
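# + [markdown]
# The confidence mechanism described above can be illustrated with a small, self-contained sketch. This is NOT Frictionless's actual implementation — just a conceptual model of the rule "a candidate type is accepted when the share of sample cells that parse as that type meets the confidence level":

```python
# Conceptual sketch of confidence-based type inference (not the real
# Frictionless internals): accept 'integer' only if enough sample values
# parse as integers, otherwise fall back to 'string'.
def infer_field_type(sample, confidence=0.9):
    def is_integer(value):
        try:
            int(value)
            return True
        except ValueError:
            return False

    matches = sum(is_integer(value) for value in sample)
    return "integer" if matches / len(sample) >= confidence else "string"

sample = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "x"]
print(infer_field_type(sample))                # integer (9/10 >= 0.9)
print(infer_field_type(sample, confidence=1))  # string  (9/10 < 1.0)
```

# With the default 0.9 level, the single stray string is ignored; at 1.0 the field falls back to `string`, mirroring the `infer_confidence=1` behavior shown earlier.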
docs/target/describing-data/README.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf from tensorflow.contrib import seq2seq import numpy as np class Model: def __init__( self, vocabulary_size, maxlen = 50, output_size = 512, learning_rate = 1e-3, embedding_size = 256, batch_size = 16, max_grad_norm = 10, **kwargs ): word_embeddings = tf.Variable( tf.random_uniform( [vocabulary_size, embedding_size], -np.sqrt(3), np.sqrt(3) ) ) self.output_size = output_size self.maxlen = maxlen self.embeddings = word_embeddings self.output_layer = tf.layers.Dense(vocabulary_size) self.output_layer.build(output_size) self.BEFORE = tf.placeholder(tf.int32, [None, maxlen]) self.INPUT = tf.placeholder(tf.int32, [None, maxlen]) self.AFTER = tf.placeholder(tf.int32, [None, maxlen]) self.batch_size = tf.shape(self.INPUT)[0] self.get_thought = self.thought(self.INPUT) self.attention = tf.matmul( self.get_thought, tf.transpose(self.embeddings), name = 'attention' ) self.fw_logits = self.decoder(self.get_thought, self.AFTER) self.bw_logits = self.decoder(self.get_thought, self.BEFORE) self.loss = self.calculate_loss( self.fw_logits, self.AFTER ) + self.calculate_loss(self.bw_logits, self.BEFORE) tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm( tf.gradients(self.loss, tvars), max_grad_norm ) self.optimizer = tf.train.AdamOptimizer(learning_rate).apply_gradients( zip(grads, tvars) ) def get_embedding(self, inputs): return tf.nn.embedding_lookup(self.embeddings, inputs) def thought(self, inputs): encoder_in = self.get_embedding(inputs) fw_cell = tf.nn.rnn_cell.GRUCell(self.output_size) bw_cell = tf.nn.rnn_cell.GRUCell(self.output_size) sequence_length = tf.reduce_sum(tf.sign(inputs), axis = 1) with tf.variable_scope( 'thought_scope', reuse = False ): rnn_output = tf.nn.bidirectional_dynamic_rnn( fw_cell, bw_cell, 
encoder_in, sequence_length = sequence_length, dtype = tf.float32, )[1] return sum(rnn_output) def decoder(self, thought, labels): main = tf.strided_slice(labels, [0, 0], [self.batch_size, -1], [1, 1]) shifted_labels = tf.concat([tf.fill([self.batch_size, 1], 2), main], 1) decoder_in = self.get_embedding(shifted_labels) cell = tf.nn.rnn_cell.GRUCell(self.output_size) max_seq_lengths = tf.fill([self.batch_size], self.maxlen) helper = seq2seq.TrainingHelper( decoder_in, max_seq_lengths, time_major = False ) decoder = seq2seq.BasicDecoder(cell, helper, thought) decoder_out = seq2seq.dynamic_decode(decoder)[0].rnn_output return decoder_out def calculate_loss(self, outputs, labels): mask = tf.cast(tf.sign(labels), tf.float32) logits = self.output_layer(outputs) return seq2seq.sequence_loss(logits, labels, mask) import json with open('skip-news-dict.json') as fopen: dictionary = json.load(fopen) len(dictionary) def rename(checkpoint_dir, replace_from, replace_to, add_prefix, dry_run=False): checkpoint = tf.train.get_checkpoint_state(checkpoint_dir) with tf.Session() as sess: for var_name, _ in tf.contrib.framework.list_variables(checkpoint_dir): var = tf.contrib.framework.load_variable(checkpoint_dir, var_name) new_name = var_name if None not in [replace_from, replace_to]: new_name = new_name.replace(replace_from, replace_to) if add_prefix: new_name = add_prefix + new_name if dry_run: print('%s would be renamed to %s.' % (var_name, new_name)) else: print('Renaming %s to %s.' 
% (var_name, new_name)) # Rename the variable var = tf.Variable(var, name=new_name) if not dry_run: # Save the variables saver = tf.train.Saver() sess.run(tf.global_variables_initializer()) saver.save(sess, 'skip-rename/model.ckpt') # + # rename('skip/model.ckpt','thought_scope_e1d42da4-5ae4-4898-b0f1-f52f687a4e28', # 'thought_scope',None) # - tf.reset_default_graph() sess = tf.InteractiveSession() model = Model(len(dictionary), embedding_size = 128, output_size = 128, batch_size=16,maxlen=100) sess.run(tf.global_variables_initializer()) [i.name for i in tf.global_variables()] saver=tf.train.Saver(tf.global_variables()) saver.restore(sess, 'skip/model.ckpt') # + import random def sequence(s, w2v_model, maxlen, vocabulary_size): words = s.split() np_array = np.zeros((maxlen),dtype=np.int32) current_no = 0 for no, word in enumerate(words[:maxlen - 2]): id_to_append = 1 if word in w2v_model: word_id = w2v_model[word] if word_id < vocabulary_size: id_to_append = word_id np_array[no] = id_to_append current_no = no np_array[current_no + 1] = 3 return np_array def generate_batch(sentences,batch_size,w2v_model,maxlen,vocabulary_size): window_size = batch_size + 2 first_index = 1000 batch_sentences = sentences[first_index:first_index+window_size] print(batch_sentences) batch_sequences = np.array([sequence(sentence,w2v_model,maxlen,vocabulary_size) for sentence in batch_sentences]) window_shape = [] for i in range(batch_size): window_shape.append(batch_sequences[i:i+3]) window_shape = np.array(window_shape) return window_shape[:,0], window_shape[:,1], window_shape[:,2] # - import json with open('news-bm.json','r') as fopen: sentences = json.loads(fopen.read()) bw_input, current_input, fw_input = generate_batch(sentences,1,dictionary,100,len(dictionary)) encoded = sess.run(model.get_thought,feed_dict={model.INPUT:fw_input}) encoded strings = ','.join( [ n.name for n in tf.get_default_graph().as_graph_def().node if ( 'Variable' in n.op or n.name.find('Placeholder') >= 0 or 
'add_1' in n.name or 'attention' in n.name ) and 'Adam' not in n.name ] ) strings.split(',') def freeze_graph(model_dir, output_node_names): if not tf.gfile.Exists(model_dir): raise AssertionError( "Export directory doesn't exists. Please specify an export " "directory: %s" % model_dir) checkpoint = tf.train.get_checkpoint_state(model_dir) input_checkpoint = checkpoint.model_checkpoint_path absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1]) output_graph = absolute_model_dir + "/frozen_model.pb" clear_devices = True with tf.Session(graph=tf.Graph()) as sess: saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices) saver.restore(sess, input_checkpoint) output_graph_def = tf.graph_util.convert_variables_to_constants( sess, tf.get_default_graph().as_graph_def(), output_node_names.split(",") ) with tf.gfile.GFile(output_graph, "wb") as f: f.write(output_graph_def.SerializeToString()) print("%d ops in the final graph." % len(output_graph_def.node)) freeze_graph('skip', strings) def load_graph(frozen_graph_filename): with tf.gfile.GFile(frozen_graph_filename, "rb") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def) return graph g=load_graph('skip/frozen_model.pb') x = g.get_tensor_by_name('import/Placeholder_1:0') logits = g.get_tensor_by_name('import/thought_scope/add_1:0') attention = g.get_tensor_by_name('import/attention:0') test_sess = tf.InteractiveSession(graph=g) out, att = test_sess.run([logits,attention], feed_dict={x:fw_input}) att.shape rev_dict = {v: k for k, v in dictionary.items()} for i in att[0].argsort()[-10:][::-1]: print(i) print(rev_dict[i])
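# The final loop relies on the numpy idiom `att[0].argsort()[-10:][::-1]` to pick the indices of the ten largest attention scores. The same top-k selection can be sketched in pure Python, which makes the idiom easier to read:

```python
# Pure-Python equivalent of numpy's `scores.argsort()[-k:][::-1]`:
# return the indices of the k largest values, largest first.
def top_k_indices(scores, k):
    order = sorted(range(len(scores)), key=lambda i: scores[i])  # ascending argsort
    return order[-k:][::-1]  # last k entries are the largest; reverse to descending

print(top_k_indices([0.1, 0.7, 0.3, 0.9, 0.2], 3))  # [3, 1, 2]
```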
session/summary/skip-thought-freeze.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np df = pd.read_csv('../DATA/cancer_classification.csv') df.info() df.describe().transpose() # ## EDA import seaborn as sns import matplotlib.pyplot as plt sns.countplot(x='benign_0__mal_1',data=df) sns.heatmap(df.corr()) df.corr()['benign_0__mal_1'].sort_values() df.corr()['benign_0__mal_1'].sort_values().plot(kind='bar') df.corr()['benign_0__mal_1'][:-1].sort_values().plot(kind='bar') # ## Train Test Split X = df.drop('benign_0__mal_1',axis=1).values y = df['benign_0__mal_1'].values from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.25,random_state=101) # # ## Scaling Data from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) # ## Creating the Model # # # For a binary classification problem # model.compile(optimizer='rmsprop', # loss='binary_crossentropy', # metrics=['accuracy']) # # import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation,Dropout X_train.shape # + model = Sequential() # https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw model.add(Dense(units=30,activation='relu')) model.add(Dense(units=15,activation='relu')) model.add(Dense(units=1,activation='sigmoid')) # For a binary classification problem model.compile(loss='binary_crossentropy', optimizer='adam') # - # ## Training the Model # # ### Example One: Choosing too many epochs and overfitting! 
# + # https://stats.stackexchange.com/questions/164876/tradeoff-batch-size-vs-number-of-iterations-to-train-a-neural-network # https://datascience.stackexchange.com/questions/18414/are-there-any-rules-for-choosing-the-size-of-a-mini-batch model.fit(x=X_train, y=y_train, epochs=600, validation_data=(X_test, y_test), verbose=1 ) # + # model.history.history # - model_loss = pd.DataFrame(model.history.history) # + # model_loss # - model_loss.plot() # ## Example Two: Early Stopping # # We obviously trained too much! Let's use early stopping to track the val_loss and stop training once it begins increasing too much! model = Sequential() model.add(Dense(units=30,activation='relu')) model.add(Dense(units=15,activation='relu')) model.add(Dense(units=1,activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam') from tensorflow.keras.callbacks import EarlyStopping # Stop training when a monitored quantity has stopped improving. # # Arguments: # monitor: Quantity to be monitored. # min_delta: Minimum change in the monitored quantity # to qualify as an improvement, i.e. an absolute # change of less than min_delta, will count as no # improvement. # patience: Number of epochs with no improvement # after which training will be stopped. # verbose: verbosity mode. # mode: One of `{"auto", "min", "max"}`. In `min` mode, # training will stop when the quantity # monitored has stopped decreasing; in `max` # mode it will stop when the quantity # monitored has stopped increasing; in `auto` # mode, the direction is automatically inferred # from the name of the monitored quantity. 
early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=25) model.fit(x=X_train, y=y_train, epochs=600, validation_data=(X_test, y_test), verbose=1, callbacks=[early_stop] ) model_loss = pd.DataFrame(model.history.history) model_loss.plot() # ## Example Three: Adding in DropOut Layers from tensorflow.keras.layers import Dropout # + model = Sequential() model.add(Dense(units=30,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(units=15,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(units=1,activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam') # - model.fit(x=X_train, y=y_train, epochs=600, validation_data=(X_test, y_test), verbose=1, callbacks=[early_stop] ) model_loss = pd.DataFrame(model.history.history) model_loss.plot() # # Model Evaluation predictions = model.predict_classes(X_test) from sklearn.metrics import classification_report,confusion_matrix # https://en.wikipedia.org/wiki/Precision_and_recall print(classification_report(y_test,predictions)) print(confusion_matrix(y_test,predictions))
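# The classification report printed above is derived from the confusion matrix (see the linked Wikipedia article). As a sanity check, precision and recall for the positive class can be recomputed by hand from the matrix counts; the counts below are illustrative, not the actual output of this run:

```python
# Precision and recall for the positive class, computed directly from
# confusion-matrix counts. The tp/fp/fn values here are made up for illustration.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)  # of all predicted positives, how many were correct
    recall = tp / (tp + fn)     # of all actual positives, how many were found
    return precision, recall

p, r = precision_recall(tp=80, fp=20, fn=10)
print(round(p, 3))  # 0.8
print(round(r, 3))  # 0.889
```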
Classification/02-Keras-Classification.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Learning Jupyter Lab - Hands-on exercise
#
# + **Course "Informatique pour l'ingénieur de l'environnement"** - EPFL - ENG-270
# + **Course "Programmation Matlab"** - EPFL - ENG-274
#
# Prepared by:
#
# - [<NAME>](https://enacit.epfl.ch/personnes/bonjour.shtml) (v. 2019-09-10) © [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/deed.fr)
# - [<NAME>](https://people.epfl.ch/Samuel.Bancal) (v. 2020-09-16) © [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/deed.fr)
#
# -----
#
# _Welcome to Jupyter Notebook! First of all, **congratulations** on managing to launch Jupyter Lab and open this notebook!_
#
# _The goal of this exercise is to let you quickly discover and explore the **basic features** of **Jupyter notebooks**, as well as the interactive **IPython** interpreter in notebook mode._
#
# -----

# ## ▶ Editing and running a code cell
#
# A Jupyter notebook consists of individual **cells** that can contain **code** (Python, GNU Octave, or any other language supported by Jupyter), **text** (with Markdown, HTML and LaTeX formatting) or **graphics**.
#
# Let's start with Code cells.
#
# **1)** Place the cursor in the first **[ ]:** cell below, then press `<enter>`. What happens? Nothing but the insertion of a line break... That's expected, because in a "Code" cell (i.e. one prefixed with **[ ]:**) you are in **code editing** mode by default!
#
# To **run the code** of a cell, you must press `<shift-enter>` (or click the `[Run]` button (right-pointing triangle), or use the menu `Run>Run Selected Cells...`). Try it in the first cell below!

15 + 24

'Hello tout le monde !'
# A **[ ]:** line should have appeared below the first expression, displaying its result! The cursor then automatically moved to the cell below... which you can run in turn!
#
# _Note that you can also press `<ctrl-enter>` (same as `Cell>Run Selected Cells and don't advance`). In that case, the cursor **stays in the cell** after the code runs._

# **2)** Now run the code in the cell below.

a, b = 5, 10  # simultaneous assignment of 2 variables
print('La somme de ces 2 nombres vaut :', a + b)

# You can see that the result is displayed, but not in a cell prefixed with **[ ]** as before. What generates a **[ ]** cell is the result of the **last expression** in the cell, when it is not assigned to a variable (i.e. the value Python temporarily stores in the underscore variable `"_"`). The behavior is thus exactly the same as in the Python and IPython interpreters.

# **3.1)** Can you **modify the code** of a cell and then **re-run** it? Of course! \
# Try it in the cell above by changing the values of the variables a and b, then press `<shift-enter>` again.
#
# **3.2)** To **clear the output** of a cell, use `Edit>Clear Outputs`. \
# Try it!

# _Note that if several cells are selected (which you can do in command mode with `<shift-downArrow>` or `<shift-upArrow>`), the `<ctrl-enter>` and `<shift-enter>` commands run all the selected cells!_

# ## ▶ Edit mode vs. command mode

# When you are on a cell, you can be either:
# + in **edit** mode, when the cursor is blinking and you can type text into the cell
# + in **command** mode, when no cursor is visible.
#
# To switch from one mode to the other:
# + **edit -> command**: press the `<escape>` key
# + **command -> edit**: press the `<enter>` key
#
# **Command** mode is used to perform operations on the selected cell(s) (such as copying/cutting/deleting them, or adding a cell before/after, ...). From here on, whenever a single-letter keyboard shortcut is mentioned, it is assumed that you are in **command** mode!

# ## ▶ Creating code cells

# **4)** How can I create **my own code cells**? Quite simply, as follows:
#
# - first select the cell just above the spot where you want to insert your own cell (or respectively just below)
# - then click the `[+]` button, or use the keyboard shortcut `B` to add a cell below (respectively `A` for above)
#
# Practice by creating a new cell just below this one. Insert the code:
#
#     print('Ouaouh... c'est ma première cellule !')
#
# then run the code of that cell.

# + [markdown] slideshow={"slide_type": "subslide"}
# **5)** We set a trap for you: if you copied this `print` statement as-is, running it must have raised an error, because the apostrophe in the string part _...c'est..._ was not prefixed with a backslash character. No matter: fix your code and re-run it, which nicely illustrates the point of Jupyter Notebook!
# -

# _Note that there is also `<alt-enter>` (or `Run>Run Selected Cells and Insert Below`), which runs the code of the selected cells and automatically inserts a new cell below._

# ## ▶ Deleting, cutting, copying and pasting cells

# **6.1)** To delete, cut, copy and then paste, do the following after selecting the chosen cell(s):
# * **delete**: `Edit>Delete Cells` (or keyboard shortcut `D, D`)
# * **cut**: the scissors button `[Cut the Selected Cells]` (or keyboard shortcut `X`, or `Edit>Cut Cells`)
# * **copy**: the copy button `[Copy the Selected Cells]` (or keyboard shortcut `C`, or `Edit>Copy Cells`)
# * **paste**: the paste button `[Paste cells from the clipboard]` (or keyboard shortcut `V`, or `Edit>Paste Cells Below`)
#
# Practice by deleting the cell below.

A cell to cut, copy, paste...

# **6.2)** You can undo the last operation with `Edit>Undo Cell Operation` (or keyboard shortcut `Z`)

# ## ▶ Moving, splitting and merging cells

# **7)** Below you will find 2 text cells forming a bulleted list (we will see in a moment how to create such cells).
#
# Start by **splitting** the first cell: to do so, first press `<enter>` (or _double-click_) in the cell to edit it, then place the cursor between the 2 lines, and finally use `Edit>Split Cell` (keyboard shortcut `ctrl-shift -`).

# - This is the **third cell** of text, blah blah blah blah blah blah blah blah blah blah blah blah blah blah.
#
# - This should become the **second cell** of text, blah blah blah blah blah blah blah blah blah blah blah blah.
# - This will be the **first cell** of text, blah blah blah blah blah blah blah blah blah blah blah blah blah.

# **8)** You should now have 3 text cells above.
**Move them** into the right order using `Edit>Move Cells Up` and `Edit>Move Cells Down`.
# Naturally, you can also do this with the cut/paste commands seen above.

# **9)** Then try **merging** the 3 cells into one. To do so, select them, then `Edit>Merge Selected Cells` (keyboard shortcut `shift-M`)

# ## ▶ Creating text cells
#
# There are 2 types of text cells, depending on the choice made with the **drop-down menu** on the right:
#
# - **Markdown**: cells in which the text is formatted according to the Markdown syntax; any HTML tags and LaTeX code are also interpreted
# - **Raw NBConvert** (text): text cells without any formatting, directly editable without having to enter **edit** mode
#
# _Note that you can also change a cell's type in command mode with the following keyboard **shortcuts**: `C` = Code, `M` = Markdown, `R` = Raw, `1` = Heading1, `2` = Heading2, `3` = Heading3, etc..._

# **10)** Just below, create a cell of type **Raw NBConvert** (text). To do so, create a cell the standard way (it will thus be of type Code by default), then change its type using the drop-down menu or the keyboard shortcut.

# **11)** Below, create a **Heading 4** title (i.e. a 4th-level heading). Then modify the title text, which requires pressing `<enter>` or _double-clicking_ in the cell to switch to edit mode.

# **12)** _As for text cells in **Markdown** format, they assume some notions of the Markdown markup language. We refer you to the documentation "[Élaboration et conversion de documents avec Markdown et Pandoc](https://enacit.epfl.ch/cours/markdown-pandoc/)" that we wrote.
But note that, under Jupyter Notebook, you can only use the basic Markdown syntax (the Pandoc extensions are not recognized by Jupyter)._
#
# _To give you an idea of how simple Markdown markup is, you can double-click the text cells of this notebook, which are all in Markdown format._

# _And as you can see below, it is even possible to insert **LaTeX code** (placed between two `$` characters) in a Markdown cell:_
#
# $$ D_{KL}(P||Q) = \sum\limits_{i} \ln \left( \frac{P(i)}{Q(i)} \right) P(i)$$

# ## ▶ Search and replace
#
# **13)** Within a cell, you can perform string **searches/replacements** with `Edit>Find...` (keyboard shortcut `ctrl-F`), then `Edit>Find next` (keyboard shortcut `ctrl-G`).
#
# Replace the "zzz" below with "www"!
#
# Abc def zzz ghi 123 zzz end.

# ## ▶ Saving and printing a notebook

# **14)** When you have finished working in a notebook, **don't forget to save it** with the `[Save and Checkpoint]` button (or the keyboard shortcut `ctrl-S`, or the menu `File>Save Notebook`).

# **15)** If you want to **print a Jupyter notebook**, start with `File>Print...` (keyboard shortcut `ctrl-P`).
#
# It is also possible to export the notebook to various formats with `File>Export Notebook as>...`.

# ## ▶ Graphics in a notebook
#
# **16)** If you start a Jupyter notebook containing Python code with the `--pylab=inline` option, or if you run the magic function `%pylab inline`, the **Numpy** and **Matplotlib** packages are imported, and it becomes possible to draw graphics "inline" (embedded in the notebook). The code example below will then be runnable.

# +
# %pylab inline

# Uses numpy (linspace and pi)
x = linspace(0, 3*pi, 500)

# Uses matplotlib (plot and title)
plot(x, sin(x**2))
title('Graphique sin(x**2)')
# -

# ## ▶ And more...
#
# Let us mention a few other useful features:
#
# - To **create your own notebooks**, use the `[New]` button in the first Jupyter window, or use `File>New Notebook` from any notebook window. Then don't forget to give your notebook a file name by clicking the "Untitled" field at the top of the new notebook's window
# - If Jupyter Notebook seems stuck, it is most likely **running indefinitely**, for example because of a badly controlled loop (consuming 100% of one CPU core of your machine). You should see the "Kernel busy" message at the top of the screen. To **interrupt** it, click the black square button `[Interrupt the kernel]` (or the menu `Kernel>Interrupt`). If that really doesn't work, you will have to kill the web server with `<ctrl-C>` from the console window behind the web page.
# - To **clear all the outputs** of a notebook, you can use `Edit>Clear All Outputs`
# - Under `Help>Keyboard Shortcuts` you will find the list of the many **keyboard shortcuts** that greatly improve the efficiency of working with Jupyter Notebook!
# - Under `Help` you will find various **documentation** resources to explore **Jupyter**, **Python** and essential libraries such as **Numpy**, **MatplotLib**, **Pandas**, ... even further

# ## ▶ Quitting a Jupyter notebook
#
# **17)** To **quit** a Jupyter notebook, use `File>Close and Shutdown Notebook`. Do that now. Goodbye!
jupyter-notebook.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # livelossplot example: PyTorch Ignite # # Last update: `livelossplot 0.5.0`. For code and documentation, see [livelossplot GitHub repository](https://github.com/stared/livelossplot). # # <a href="https://colab.research.google.com/github/stared/livelossplot/blob/master/examples/pytorch-ignite.ipynb" target="_parent"> # <img src="https://colab.research.google.com/assets/colab-badge.svg"/> # </a> # # ## Note # # The syntax changed, and from `0.5+` on it is): # # ```{python} # from livelossplot import PlotLossesIgnite # ``` # + pycharm={"name": "#%%\n"} # !pip install pytorch-ignite livelossplot --quiet # + pycharm={"is_executing": true} # %matplotlib inline from torch import nn import torch.optim import torch.nn.functional as F from torch.utils.data import DataLoader from torchvision.transforms import Compose, ToTensor, Normalize from torchvision.datasets import MNIST from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator from ignite.metrics import Accuracy, Loss from ignite.contrib.handlers import ProgressBar from livelossplot import PlotLossesIgnite # + pycharm={"is_executing": true} TRAIN_BATCH_SIZE = 100 TEST_BATCH_SIZE = 100 # + pycharm={"is_executing": true} class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x, dim=-1) # + pycharm={"is_executing": true} def 
get_data_loaders(train_batch_size, val_batch_size): data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))]) train_loader = DataLoader( MNIST(download=True, root=".", transform=data_transform, train=True), batch_size=train_batch_size, shuffle=True ) val_loader = DataLoader( MNIST(download=False, root=".", transform=data_transform, train=False), batch_size=val_batch_size, shuffle=False ) return train_loader, val_loader # + pycharm={"is_executing": true} model = Net() train_loader, val_loader = get_data_loaders(TRAIN_BATCH_SIZE, TEST_BATCH_SIZE) optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8) loss = torch.nn.NLLLoss() # + pycharm={"is_executing": true} trainer = create_supervised_trainer(model, optimizer, loss) metrics = { 'val_acc': Accuracy(), 'val_loss': Loss(loss) } p_bar = ProgressBar() p_bar.attach(trainer) evaluator = create_supervised_evaluator(model, metrics=metrics) # + pycharm={"is_executing": true} @trainer.on(Events.EPOCH_COMPLETED) def validate(trainer): evaluator.run(val_loader) # + pycharm={"is_executing": true} callback = PlotLossesIgnite() callback.attach(evaluator) # + pycharm={"is_executing": true, "name": "#%%\n"} trainer.run(train_loader, max_epochs=100)
examples/pytorch-ignite.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # + #using xarray for data read import xarray as xa import numpy as np #using Cartopy for mapping import matplotlib.pyplot as plt import cmocean # - # ***If you are on a mac and use conda-forge and homebrew, cartopy will not work if you have gdal/geos installed via homebrew*** Oct 5 2017 # + import cartopy.crs as ccrs import cartopy.feature as cfeature from cartopy.io import shapereader from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER def make_map(projection=ccrs.PlateCarree()): fig, ax = plt.subplots(figsize=(13, 8), subplot_kw=dict(projection=projection)) gl = ax.gridlines(draw_labels=True) gl.xlabels_top = gl.ylabels_right = False gl.xformatter = LONGITUDE_FORMATTER gl.yformatter = LATITUDE_FORMATTER return fig, ax land_50m = cfeature.NaturalEarthFeature('physical', 'land', '50m', edgecolor='face', facecolor='1.0') # - #greater bering region extent = [-180, -135, 45, 75] # + #avhrr only data is whats available at esrl threddspath='https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/noaa.oisst.v2.highres/sst.day.err.2010.v2.nc' cmap = cmocean.cm.amp with xa.open_dataset(threddspath) as xadf: fig,ax = make_map(projection=ccrs.PlateCarree(-160)) (xadf['err'].sel(time='2010-06-12')).isel(lat=slice(-180,-45),lon=slice(-750,-500)).plot(x='lon', y='lat', robust=True,ax=ax,vmin=0,vmax=0.5, transform=ccrs.PlateCarree(), cmap=cmap) ax.add_feature(land_50m) ax.coastlines(resolution='50m') ax.set_extent(extent) ax.plot([195.95,185.311],[56.869,62.19], color='white', linewidth=0, marker='o', transform=ccrs.PlateCarree(),zorder=3 ) # + #avhrr+amsr data is available for 2002-2011 from ncei erddap_path="http://coastwatch.pfeg.noaa.gov/erddap/griddap/ncdcOisst2AmsrAgg" cmap = cmocean.cm.amp with 
xa.open_dataset(erddap_path) as xadf: fig,ax = make_map(projection=ccrs.PlateCarree(-160)) (xadf['err'].sel(time='2010-06-12')).isel(latitude=slice(-180,-45),longitude=slice(-750,-500)).plot(x='longitude', y='latitude', robust=True,ax=ax,vmin=0,vmax=0.5, transform=ccrs.PlateCarree(), cmap=cmap) ax.add_feature(land_50m) ax.coastlines(resolution='50m') ax.set_extent(extent) ax.plot([195.95,185.311],[56.869,62.19], color='white', linewidth=0, marker='o', transform=ccrs.PlateCarree(),zorder=3 ) # - # ## Compare timeseries of two SST retrieval algorithms at M2 (56.869N 164.05W or 195.95E) erdf = xa.open_dataset(erddap_path) sst_amsrpavhrr = erdf['sst'].sel(time=slice('2010-06-01','2010-09-20'), latitude=slice(56.63,57),longitude=slice(195.8,196)) err_amsrpavhrr = erdf['err'].sel(time=slice('2010-06-01','2010-09-20'), latitude=slice(56.63,57),longitude=slice(195.8,196)) threddspath_sst='https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/noaa.oisst.v2.highres/sst.day.mean.2010.v2.nc' thdf = xa.open_dataset(threddspath_sst) sst_avhrr = thdf['sst'].sel(time=slice('2010-06-01','2010-09-20'), lat=slice(56.63,57),lon=slice(195.8,196)) thedf = xa.open_dataset(threddspath) err_avhrr = thedf['err'].sel(time=slice('2010-06-01','2010-09-20'), lat=slice(56.63,57),lon=slice(195.8,196)) # + plt.figure(1, figsize=(18, 6), facecolor='w', edgecolor='w') plt.subplot(3,1,1) sst_avhrr.plot() #blue sst_amsrpavhrr.plot() #orange plt.subplot(3,1,2) plt.plot(sst_amsrpavhrr.time,sst_amsrpavhrr.data[:,0,0,0]-sst_avhrr.data[:,0,0]) plt.subplot(3,1,3) err_avhrr.plot() #blue err_amsrpavhrr.plot() # - output = [None,None,None] output[0] = sst_amsrpavhrr.mean() output[1] = sst_avhrr.mean() output[2] = (sst_amsrpavhrr.data[:,0,0,0]-sst_avhrr.data[:,0,0]).mean() print(output) # ## Compare timeseries of two SST retrieval algorithms at M8 (62.19N 174.689W - 185.311E) sst_amsrpavhrr = erdf['sst'].sel(time=slice('2010-06-01','2010-09-20'), latitude=slice(62,62.2),longitude=slice(185.25,185.5)) 
err_amsrpavhrr = erdf['err'].sel(time=slice('2010-06-01','2010-09-20'), latitude=slice(62,62.2),longitude=slice(185.25,185.5)) sst_avhrr = thdf['sst'].sel(time=slice('2010-06-01','2010-09-20'), lat=slice(62,62.2),lon=slice(185.25,185.5)) err_avhrr = thedf['err'].sel(time=slice('2010-06-01','2010-09-20'), lat=slice(62,62.2),lon=slice(185.25,185.5)) # + plt.figure(1, figsize=(18, 6), facecolor='w', edgecolor='w') plt.subplot(3,1,1) sst_avhrr.plot() #blue sst_amsrpavhrr.plot() #orange plt.subplot(3,1,2) plt.plot(sst_amsrpavhrr.time,sst_amsrpavhrr.data[:,0,0,0]-sst_avhrr.data[:,0,0]) plt.subplot(3,1,3) err_avhrr.plot() #blue err_amsrpavhrr.plot() # - output = [None,None,None] output[0] = sst_amsrpavhrr.mean() output[1] = sst_avhrr.mean() output[2] = (sst_amsrpavhrr.data[:,0,0,0]-sst_avhrr.data[:,0,0]).mean() print(output) # + slideshow={"slide_type": "-"}
2020/NBS_SST_Analysis/prior_research/Bering SST Exploration - M8.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="R2mXdNS5awaX" # ! pip install -q kaggle from google.colab import files files.upload() # ! mkdir ~/.kaggle # ! cp kaggle.json ~/.kaggle/ # ! chmod 600 ~/.kaggle/kaggle.json # ! kaggle datasets list # + colab={"base_uri": "https://localhost:8080/"} id="lhNKSLeGa1K_" outputId="b85f73bc-5641-435f-f806-8a4ebf64f50b" # !kaggle datasets download -d puneet6060/intel-image-classification # + id="bpY83MVpa14f" # !unzip /content/intel-image-classification.zip # + id="0kwz1cnwa10W" import cv2 import os import matplotlib.pyplot as plt from keras import Sequential from keras.preprocessing.image import ImageDataGenerator from keras.layers import Dense, Conv2D, Dropout, MaxPool2D, Flatten # + colab={"base_uri": "https://localhost:8080/", "height": 183} id="UbzQ503Ja1xx" outputId="c27b6dec-c4b5-4e03-b0d6-f785306fea2d" w = 10 h = 10 fig = plt.figure(figsize=(15,10)) columns = 6 rows = 1 fielName = "/content/seg_train/seg_train" for i in range(0, columns*rows ): folderName = os.path.join((fielName), os.listdir(fielName)[i]) img = cv2.imread(folderName+'/'+(os.listdir(os.path.join((fielName), os.listdir(fielName)[i]))[i])) fig.add_subplot(rows, columns, i+1) plt.imshow(img) plt.title(os.path.basename(folderName)) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="TNIVtBRZa1vk" outputId="003aa48a-75ae-4dbb-a921-9b5fc3acd368" batch_size = 32 resize = (227, 227) train_datagen = ImageDataGenerator( rescale=1./255,) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( '/content/seg_train/seg_train', target_size=resize, batch_size=batch_size, class_mode='categorical',) validation_generator = test_datagen.flow_from_directory( '/content/seg_test/seg_test', target_size=resize, batch_size=batch_size, class_mode='categorical') 
# + id="EJa6U8I4a-z5" model = Sequential() model.add(Conv2D(96, (11,11), (4,4), input_shape=(227,227,3), activation="relu")) model.add(MaxPool2D((3,3), (2,2))) model.add(Conv2D(256, (5,5), padding="same", activation="relu")) model.add(MaxPool2D((3,3), (2,2))) model.add(Conv2D(384, (3,3), padding="same", activation="relu")) model.add(Conv2D(384, (3,3), padding="same", activation="relu")) model.add(Conv2D(256, (3,3), padding="same", activation="relu")) model.add(MaxPool2D((3,3), (2,2))) model.add(Flatten()) model.add(Dense(4096, activation="relu")) model.add(Dropout(0.5)) model.add(Dense(4096, activation="relu")) model.add(Dense(6, activation="softmax")) # + colab={"base_uri": "https://localhost:8080/"} id="iR4zYPFia-mD" outputId="d170c979-f214-41ac-d6f6-b9e3fa6e1167" model.summary() # + id="fefgltdJa-j1" model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=["accuracy"]) # + colab={"base_uri": "https://localhost:8080/"} id="lcwsvNIjbrmw" outputId="b1e78680-b099-4d3d-b151-f734452d92bd" model.fit( train_generator, validation_data=validation_generator, epochs=20) # + id="a_hVspkHoukn"
1. AlexNet/alexnet.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 ('MSCS-basic') # language: python # name: python3 # --- # + def fast_exp(base, exponent, modulus): result = 1 while exponent != 0: if exponent % 2 == 1: result = (result * base) % modulus base = base**2 % modulus exponent = exponent // 2 return result def extended_euclidean(m, n): if n == 0: return 1, 0, m x, y, g = extended_euclidean(n, m % n) return y, x - (m // n)*y, g def inv_mod(a, modulus): s, _, g = extended_euclidean(a, modulus) assert g == 1, ValueError('the modular inverse does not exist') return s % modulus def crt(remainders, moduli): assert len(remainders) == len(moduli), ValueError('the lists remainders and moduli are not the same length') assert len(remainders) > 0, ValueError('the list lengths must be greater than zero') first_modulus = moduli[0] first_remainder = remainders[0] if len(remainders) == 1: return first_remainder % first_modulus, first_modulus consecutive_remainder, consecutive_modulus = crt(remainders[1:], moduli[1:]) u, v, g = extended_euclidean(consecutive_modulus, first_modulus) assert g == 1, ValueError('the moduli are not relatively prime') return (first_remainder*consecutive_modulus*u + consecutive_remainder*first_modulus*v) % (first_modulus*consecutive_modulus), first_modulus*consecutive_modulus def eulers_totient(p, q): return (p-1)*(q-1) def polynomial_congruence(e, m_to_the_e, totient_n, n): d = inv_mod(e, totient_n) m = fast_exp(m_to_the_e, d, n) return m, d def rsa_decrypt(e, m_to_the_e, p, q): return polynomial_congruence(e, m_to_the_e, eulers_totient(p, q), p*q) # - fast_exp(314159, 65537, 6154877) rsa_decrypt(65537, 377707, 2459, 2503)
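As a quick sanity check of the helpers above (the cell below re-states `fast_exp`, `extended_euclidean`, and `inv_mod` so it is self-contained), `fast_exp` should agree with Python's built-in three-argument `pow`, and a modular inverse multiplied by its argument should give 1:

```python
def fast_exp(base, exponent, modulus):
    # square-and-multiply, same algorithm as above
    result = 1
    while exponent != 0:
        if exponent % 2 == 1:
            result = (result * base) % modulus
        base = base ** 2 % modulus
        exponent //= 2
    return result

def extended_euclidean(m, n):
    if n == 0:
        return 1, 0, m
    x, y, g = extended_euclidean(n, m % n)
    return y, x - (m // n) * y, g

def inv_mod(a, modulus):
    s, _, g = extended_euclidean(a, modulus)
    assert g == 1, 'the modular inverse does not exist'
    return s % modulus

# fast_exp matches the built-in modular pow
assert fast_exp(314159, 65537, 6154877) == pow(314159, 65537, 6154877)

# a modular inverse really inverts: (a * a^-1) mod m == 1
a, m = 65537, 2458 * 2502  # Euler's totient of n = 2459 * 2503
assert (a * inv_mod(a, m)) % m == 1
```

This is how `rsa_decrypt` recovers the plaintext: the decryption exponent `d` is the inverse of `e` modulo the totient, and the final exponentiation is done with `fast_exp`.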
Class/22-02-14/22-02-14.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Node Classification on a Paper Citation Network # # In this tutorial, we show how GraphScope combines graph analytics, graph queries, and graph learning to tackle a node classification task on a paper citation network. # # # This example uses the [ogbn-mag](https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag) dataset, a heterogeneous graph built from a subset of the Microsoft Academic Graph. The graph contains four types of entities (papers, authors, institutions, and fields of study) and four types of directed relations connecting pairs of entities. # # Given the heterogeneous ogbn-mag data, the task is to predict the class of each paper in the graph. This is a node classification task that categorizes papers into fields, directions, or research groups by combining the paper attributes with the structural information of the citation graph. In this dataset, each paper node carries a 128-dimensional word2vec vector extracted from its title and abstract as its representation, obtained in advance by pre-training; the structural features are computed on the fly. # # This tutorial proceeds in the following steps: # - create a session and load the graph; # - query the graph interactively with Gremlin; # - run graph algorithms for analytics; # - run a machine learning task on the graph data; # - close the session. # First, we create a session and load the ogbn_mag dataset. # + import os import graphscope from graphscope.dataset import load_ogbn_mag k8s_volumes = { "data": { "type": "hostPath", "field": { "path": "/testingdata", "type": "Directory" }, "mounts": { "mountPath": "/home/jovyan/datasets", "readOnly": True } } } graphscope.set_option(show_log=True) sess = graphscope.session(k8s_volumes=k8s_volumes) graph = load_ogbn_mag(sess, "/home/jovyan/datasets/ogbn_mag_small/") # - # ## Interactive query with gremlin # # In this example, we launch an interactive query engine and use graph traversal to count the number of papers co-authored by two given authors. To simplify the query, we assume the two authors are uniquely identified by IDs 2 and 4307. # + # get the entrypoint for submitting Gremlin queries on graph g. interactive = sess.gremlin(graph) # count the number of papers two authors (with id 2 and 4307) have co-authored. papers = interactive.execute("g.V().has('author', 'id', 2).out('writes').where(__.in('writes').has('id', 4307)).count()").one() print("result", papers) # - # ## Graph analytics with analytical engine # # Continuing our example, we now run graph analytics on the data to generate structural features for the nodes. We first derive a subgraph by extracting the papers within a given time span from the whole graph (using Gremlin!), then run k-core decomposition and triangle counting to generate structural features for each paper node. # + # extract a subgraph of publications within a time range.
sub_graph = interactive.subgraph( "g.V().has('year', inside(2014, 2020)).outE('cites')" ) # project the subgraph to simple graph by selecting papers and their citations. simple_g = sub_graph.project(vertices={"paper": []}, edges={"cites": []}) # compute the kcore and triangle-counting. kc_result = graphscope.k_core(simple_g, k=5) tc_result = graphscope.triangles(simple_g) # add the results as new columns to the citation graph. sub_graph = sub_graph.add_column(kc_result, {"kcore": "r"}) sub_graph = sub_graph.add_column(tc_result, {"tc": "r"}) # - # ## Graph neural networks (GNNs) # # Next, we use the generated structural features together with the original features to train a model with GraphScope's learning engine. # # In this example, we train a GCN model to classify the nodes (papers) into 349 categories, each representing a venue (e.g., a preprint server or a conference). # + # define the features for learning, # we chose original 128-dimension feature and k-core, triangle count result as new features. paper_features = [] for i in range(128): paper_features.append("feat_" + str(i)) paper_features.append("kcore") paper_features.append("tc") # launch a learning engine. here we split the dataset, 75% as train, 10% as validation and 15% as test. lg = sess.learning(sub_graph, nodes=[("paper", paper_features)], edges=[("paper", "cites", "paper")], gen_labels=[ ("train", "paper", 100, (0, 75)), ("val", "paper", 100, (75, 85)), ("test", "paper", 100, (85, 100)) ]) # Then we define the training process, using the built-in GCN model.
from graphscope.learning.examples import GCN from graphscope.learning.graphlearn.python.model.tf.trainer import LocalTFTrainer from graphscope.learning.graphlearn.python.model.tf.optimizer import get_tf_optimizer def train(config, graph): def model_fn(): return GCN(graph, config["class_num"], config["features_num"], config["batch_size"], val_batch_size=config["val_batch_size"], test_batch_size=config["test_batch_size"], categorical_attrs_desc=config["categorical_attrs_desc"], hidden_dim=config["hidden_dim"], in_drop_rate=config["in_drop_rate"], neighs_num=config["neighs_num"], hops_num=config["hops_num"], node_type=config["node_type"], edge_type=config["edge_type"], full_graph_mode=config["full_graph_mode"]) trainer = LocalTFTrainer(model_fn, epoch=config["epoch"], optimizer=get_tf_optimizer( config["learning_algo"], config["learning_rate"], config["weight_decay"])) trainer.train_and_evaluate() # hyperparameters config. config = {"class_num": 349, # output dimension "features_num": 130, # 128 dimension + kcore + triangle count "batch_size": 500, "val_batch_size": 100, "test_batch_size": 100, "categorical_attrs_desc": "", "hidden_dim": 256, "in_drop_rate": 0.5, "hops_num": 2, "neighs_num": [5, 10], "full_graph_mode": False, "agg_type": "gcn", # mean, sum "learning_algo": "adam", "learning_rate": 0.01, "weight_decay": 0.0005, "epoch": 5, "node_type": "paper", "edge_type": "cites"} # start training and evaluating train(config, lg) # - # Finally, when all the computation is done, we close the current session. # close the session. sess.close()
tutorials/zh/10_node_classification_on_citation_network.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook was prepared by [<NAME>](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). # # Challenge Notebook # ## Problem: Check if a binary tree is balanced. # # * [Constraints](#Constraints) # * [Test Cases](#Test-Cases) # * [Algorithm](#Algorithm) # * [Code](#Code) # * [Unit Test](#Unit-Test) # * [Solution Notebook](#Solution-Notebook) # ## Constraints # # * Is a balanced tree one where the heights of the two subtrees of any node don't differ by more than 1? # * Yes # * If this is called on a None input, should we raise an exception? # * Yes # * Can we assume we already have a Node class with an insert method? # * Yes # * Can we assume this fits in memory? # * Yes # ## Test Cases # # * None -> No # * 1 -> Yes # * 5, 3, 8, 1, 4 -> Yes # * 5, 3, 8, 9, 10 -> No # ## Algorithm # # Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/check_balance/check_balance_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
# ## Code # %run ../bst/bst.py # %load ../bst/bst.py class BstBalance(Bst): def check_balance(self): if self.root == None: raise TypeError("Empty tree") def check(node): if not node: return 0 l = check(node.left) r = check(node.right) if l == -1 or r == -1 or abs(l-r) > 1: return -1 return 1 + max(l,r) return check(self.root) != -1 # ## Unit Test # **The following unit test is expected to fail until you solve the challenge.** # + # # %load test_check_balance.py import unittest class TestCheckBalance(unittest.TestCase): def test_check_balance_empty(self): bst = BstBalance(None) bst.check_balance() def test_check_balance(self): bst = BstBalance(Node(5)) self.assertEqual(bst.check_balance(), True) bst.insert(3) bst.insert(8) bst.insert(1) bst.insert(4) self.assertEqual(bst.check_balance(), True) bst = BstBalance(Node(5)) bst.insert(3) bst.insert(8) bst.insert(9) bst.insert(10) self.assertEqual(bst.check_balance(), False) bst = BstBalance(Node(3)) bst.insert(2) bst.insert(1) bst.insert(5) bst.insert(4) bst.insert(6) bst.insert(7) self.assertEqual(bst.check_balance(), True) print('Success: test_check_balance') def main(): test = TestCheckBalance() test.assertRaises(TypeError, test.test_check_balance_empty) test.test_check_balance() if __name__ == '__main__': main() # - # ## Solution Notebook # # Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/check_balance/check_balance_solution.ipynb) for a discussion on algorithms and code solutions.
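The recursive height check used above can also be sketched standalone, without the `Bst` machinery — a minimal illustration with a hypothetical `Node` class (not the one from `bst.py`): each call returns the subtree height, or -1 as soon as an imbalance is detected, so the tree is walked only once.

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def is_balanced(root):
    # Returns True if no node's subtree heights differ by more than 1.
    def height(node):
        if node is None:
            return 0
        left = height(node.left)
        right = height(node.right)
        # propagate -1 upward the moment any subtree is unbalanced
        if left == -1 or right == -1 or abs(left - right) > 1:
            return -1
        return 1 + max(left, right)
    return height(root) != -1

#       5                 5
#      / \                 \
#     3   8                 8
#    / \                     \
#   1   4                     9
#                              \
#                               10
balanced = Node(5, Node(3, Node(1), Node(4)), Node(8))
skewed = Node(5, None, Node(8, None, Node(9, None, Node(10))))
assert is_balanced(balanced)
assert not is_balanced(skewed)
```

This is O(n) time and O(h) space for tree height h, versus the O(n log n) of recomputing heights at every node.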
graphs_trees/check_balance/check_balance_challenge.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Default sample: Stay Inside the Two Borders in Time Trials # This is the second sample used in DeepRacer. # # The car will try to stay inside the two borders of the track. # def reward_function(params): ''' Example of rewarding the agent to stay inside the two borders of the track ''' # Read input parameters all_wheels_on_track = params['all_wheels_on_track'] distance_from_center = params['distance_from_center'] track_width = params['track_width'] # Give a very low reward by default reward = 1e-3 # Give a high reward if no wheels go off the track and # the car is somewhere in between the track borders if all_wheels_on_track and (0.5*track_width - distance_from_center) >= 0.05: reward = 1.0 # Always return a float value return reward # To test the reward function, you need to set up testing parameters. # # Besides "track_width" and "distance_from_center", the parameter "all_wheels_on_track" is used, so let's set up the testing parameters. testing_params = dict() testing_params['track_width'] = 1 testing_params['distance_from_center'] = 0.1 testing_params['all_wheels_on_track'] = True # Then, let's try the reward function: reward_function(testing_params) # And you can modify testing_params to test different cases: testing_params['distance_from_center'] = 100 testing_params['all_wheels_on_track'] = False reward_function(testing_params) # Now you can try to modify the reward function with customized settings. # # The following is a copy of the default first cell; you can edit the code without worrying about making things wrong.
If you make a mistake, just delete the code you wrote and copy the code from the first cell again: def reward_function(params): ''' Example of rewarding the agent to stay inside the two borders of the track ''' # Read input parameters all_wheels_on_track = params['all_wheels_on_track'] distance_from_center = params['distance_from_center'] track_width = params['track_width'] # Give a very low reward by default reward = 1e-3 # Give a high reward if no wheels go off the track and # the car is somewhere in between the track borders if all_wheels_on_track and (0.5*track_width - distance_from_center) >= 0.05: reward = 1.0 # Always return a float value return reward # After you modify the reward function, you can test it with the testing parameters again: testing_params['distance_from_center'] = 100 testing_params['all_wheels_on_track'] = False reward_function(testing_params) # ## Python Tips: if statement # # The if statement is fundamental in programming: it gives your code the ability to make decisions. # # The following are some samples: # + a = 10 if (a > 5): print ("a is larger than 5") else: print ("a is smaller or equal to 5") # - # And you can use `elif` when you have multiple branches to choose from: # + a = 7 if (a == 5): print ("a is 5") elif (a == 6): print ("a is 6") elif (a == 7): print ("a is 7") else: print ("a is other value") # - # Please notice that `==` is used to check whether two values are equal. If you use `=`, it will set the left variable to the right value.
# # In the following code, the value of a is set to 1: # # a = 1 # # Operations used to compare values include: # # == equal # >= larger or equal # <= smaller or equal # > larger # < smaller # ## Python Tips: constants True and False # # Some parameters and variables hold a boolean value, which has only two possible values: True or False, like "all_wheels_on_track". # # A boolean value can be used directly in an `if` statement. # + love_coding = True if (love_coding): print ("I love coding") else: print ("er....., I am not sure about that") # + love_coding = False if (love_coding): print ("I love coding") else: print ("er....., I am not sure about that") # - # ## Python Tips: Multiple conditions # # If you want to combine multiple conditions, you can use `or`, `and`, `not`. # + love_coding = False love_working = True if (love_coding and love_working): print ('Good programmer') else: print ('You don\'t love coding and working at the same time') # - if (love_coding or love_working): print ('you love coding or you love working') else: print ('You don\'t love coding , neither do you love working') # ## Configs and result # # If your model shows good potential but the training stopped because of the time limit, you may want to train it again. In this case, you don't need to train the model from the beginning: you can clone the model and start the new training from the best checkpoint of your old one. # # While training the `stay in two borders` model, we got the following result in the first hour. The result is clearly not good enough, but the trend is promising enough to make us believe there would be a better model if we had more time to train it. # # <img src="./images/02_result.png" width = "300" alt="result" /> # # Then we cloned the first model and trained it again.
In the next hour, we got the following metrics, which reached 100% progress in evaluation: # # <img src="./images/02_result_clone.png" width = "300" alt="result" /> # # And the evaluation results are listed below. # # The model finished two evaluations without going off the track, and the time needed to finish one lap is about 11 to 12 seconds, which is much better than the first model. # # By submitting the model to a community race, we got a result of 36 seconds for 3 laps. # # <img src="./images/02_evaluation.png" width = "300" alt="evaluation" /> # # # Action space and other hyperparameters are listed below. # # Please note that the speed was set to 1 to 2 meters per second. It is one of the key settings that lets the model reach 12 seconds per lap. # # <img src="./images/02_action_space.png" width = "300" alt="action space" /> # # <img src="./images/02_hyper_parameters.png" width = "300" alt="hyper parameters" /> # #
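As a quick sanity check of the reward function (restated below so the cell is self-contained), sweeping `distance_from_center` on a track of width 1 shows the reward dropping from 1.0 to the 1e-3 default once the car strays too far from the center:

```python
def reward_function(params):
    '''Reward the agent for staying inside the two borders of the track.'''
    all_wheels_on_track = params['all_wheels_on_track']
    distance_from_center = params['distance_from_center']
    track_width = params['track_width']
    reward = 1e-3  # very low reward by default
    if all_wheels_on_track and (0.5 * track_width - distance_from_center) >= 0.05:
        reward = 1.0
    return reward

# Sweep distances on a track of width 1; the reward stays 1.0 near the
# center and falls to 1e-3 once the car is too close to a border.
for d in [0.0, 0.2, 0.4, 0.46, 0.6]:
    params = {'track_width': 1.0,
              'distance_from_center': d,
              'all_wheels_on_track': True}
    print(d, reward_function(params))
```

A sweep like this is a cheap way to spot off-by-one mistakes in the threshold before spending training hours on the model.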
02_stay_in_two_borders.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # <div class="contentcontainer med left" style="margin-left: -50px;"> # <dl class="dl-horizontal"> # <dt>Title</dt> <dd> TriMesh Element</dd> # <dt>Dependencies</dt> <dd>Bokeh</dd> # <dt>Backends</dt> <dd><a href='./TriMesh.ipynb'>Matplotlib</a></dd> <dd><a href='../bokeh/TriMesh.ipynb'>Bokeh</a></dd> # </dl> # </div> # + import numpy as np import holoviews as hv from holoviews import opts from scipy.spatial import Delaunay hv.extension('matplotlib') # - # A ``TriMesh`` represents a mesh of triangles in terms of its simplices and vertices. The simplices are indices into the vertex data, three per triangle. The mesh therefore follows a data structure very similar to a graph, with the abstract connectivity between nodes stored on the ``TriMesh`` element itself, the node or vertex positions stored on a ``Nodes`` element, and the concrete ``EdgePaths`` making up each triangle generated when required by accessing the edgepaths attribute. # # Unlike a Graph, each simplex is represented by the node indices of the three corners of each triangle rather than the usual source and target node. # # We will begin with a simple random mesh, generated by sampling some random integers and then applying Delaunay triangulation, which is available in SciPy. We can then construct the ``TriMesh`` by passing it the **simplices** and the **vertices** (or **nodes**).
# + n_verts = 100 pts = np.random.randint(1, n_verts, (n_verts, 2)) tris = Delaunay(pts) trimesh = hv.TriMesh((tris.simplices, pts)) trimesh # - # To make this easier TriMesh also provides a convenient ``from_vertices`` method, which will apply the Delaunay triangulation and construct the ``TriMesh`` for us: hv.TriMesh.from_vertices(np.random.randn(100, 2)) # Just like the ``Graph`` element we can access the ``Nodes`` and ``EdgePaths`` via the ``.nodes`` and ``.edgepaths`` attributes respectively. trimesh.nodes + trimesh.edgepaths # Now let's make a slightly more interesting example by generating a more complex geometry. Here we will compute a geometry, then apply Delaunay triangulation again and finally apply a mask to drop nodes in the center. # + # First create the x and y coordinates of the points. n_angles = 36 n_radii = 8 min_radius = 0.25 radii = np.linspace(min_radius, 0.95, n_radii) angles = np.linspace(0, 2*np.pi, n_angles, endpoint=False) angles = np.repeat(angles[..., np.newaxis], n_radii, axis=1) angles[:, 1::2] += np.pi/n_angles x = (radii*np.cos(angles)).flatten() y = (radii*np.sin(angles)).flatten() z = (np.cos(radii)*np.cos(angles*3.0)).flatten() nodes = np.column_stack([x, y, z]) # Apply Delaunay triangulation delaunay = Delaunay(np.column_stack([x, y])) # Mask off unwanted triangles. xmid = x[delaunay.simplices].mean(axis=1) ymid = y[delaunay.simplices].mean(axis=1) mask = np.where(xmid*xmid + ymid*ymid < min_radius*min_radius, 1, 0) simplices = delaunay.simplices[np.logical_not(mask)] # - # Once again we can simply supply the simplices and nodes to the ``TriMesh``. hv.TriMesh((simplices, nodes)) # We can also do something more interesting, e.g. 
by adding a value dimension to the vertices and coloring the edges by the vertex averaged value using the ``edge_color`` plot option: hv.TriMesh((simplices, hv.Points(nodes, vdims='z'))).opts( opts.TriMesh(cmap='viridis', edge_color='z', filled=True, fig_size=200)) # For full documentation and the available style and plot options, use ``hv.help(hv.TriMesh).``
examples/reference/elements/matplotlib/TriMesh.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # SVG Tooltip # # # This example shows how to create a tooltip that will show up when # hovering over a matplotlib patch. # # Although it is possible to create the tooltip from CSS or javascript, # here we create it in matplotlib and simply toggle its visibility on # when hovering over the patch. This approach provides total control over # the tooltip placement and appearance, at the expense of more code up # front. # # The alternative approach would be to put the tooltip content in ``title`` # attributes of SVG objects. Then, using an existing js/CSS library, it # would be relatively straightforward to create the tooltip in the # browser. The content would be dictated by the ``title`` attribute, and # the appearance by the CSS. # # # :author: <NAME> # # + import matplotlib.pyplot as plt import xml.etree.ElementTree as ET from io import BytesIO ET.register_namespace("", "http://www.w3.org/2000/svg") fig, ax = plt.subplots() # Create patches to which tooltips will be assigned. 
rect1 = plt.Rectangle((10, -20), 10, 5, fc='blue') rect2 = plt.Rectangle((-20, 15), 10, 5, fc='green') shapes = [rect1, rect2] labels = ['This is a blue rectangle.', 'This is a green rectangle'] for i, (item, label) in enumerate(zip(shapes, labels)): patch = ax.add_patch(item) annotate = ax.annotate(labels[i], xy=item.get_xy(), xytext=(0, 0), textcoords='offset points', color='w', ha='center', fontsize=8, bbox=dict(boxstyle='round, pad=.5', fc=(.1, .1, .1, .92), ec=(1., 1., 1.), lw=1, zorder=1)) ax.add_patch(patch) patch.set_gid('mypatch_{:03d}'.format(i)) annotate.set_gid('mytooltip_{:03d}'.format(i)) # Save the figure in a fake file object ax.set_xlim(-30, 30) ax.set_ylim(-30, 30) ax.set_aspect('equal') f = BytesIO() plt.savefig(f, format="svg") # --- Add interactivity --- # Create XML tree from the SVG file. tree, xmlid = ET.XMLID(f.getvalue()) tree.set('onload', 'init(evt)') for i in shapes: # Get the index of the shape index = shapes.index(i) # Hide the tooltips tooltip = xmlid['mytooltip_{:03d}'.format(index)] tooltip.set('visibility', 'hidden') # Assign onmouseover and onmouseout callbacks to patches. mypatch = xmlid['mypatch_{:03d}'.format(index)] mypatch.set('onmouseover', "ShowTooltip(this)") mypatch.set('onmouseout', "HideTooltip(this)") # This is the script defining the ShowTooltip and HideTooltip functions. script = """ <script type="text/ecmascript"> <![CDATA[ function init(evt) { if ( window.svgDocument == null ) { svgDocument = evt.target.ownerDocument; } } function ShowTooltip(obj) { var cur = obj.id.split("_")[1]; var tip = svgDocument.getElementById('mytooltip_' + cur); tip.setAttribute('visibility',"visible") } function HideTooltip(obj) { var cur = obj.id.split("_")[1]; var tip = svgDocument.getElementById('mytooltip_' + cur); tip.setAttribute('visibility',"hidden") } ]]> </script> """ # Insert the script at the top of the file and save it. tree.insert(0, ET.XML(script)) ET.ElementTree(tree).write('svg_tooltip.svg')
matplotlib/gallery_jupyter/user_interfaces/svg_tooltip_sgskip.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 2A.ml - Machine Learning et données cryptées # # Comment faire du machine learning avec des données cryptées ? Ce notebook propose d'en montrer un principe exposé dans [CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy](http://proceedings.mlr.press/v48/gilad-bachrach16.pdf). # %matplotlib inline from jyquickhelper import add_notebook_menu add_notebook_menu() # ## Principe # # Le machine learning sur des données cryptées repose sur un algorithme de [chiffrement_homomorphe](https://fr.wikipedia.org/wiki/Chiffrement_homomorphe) ou [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption). Ce concept a été inventé par <NAME> (lire [Fully Homomorphic Encryption Using Ideal Lattices](https://www.cs.cmu.edu/~odonnell/hits09/gentry-homomorphic-encryption.pdf), [Fully Homomorphic Encryption over the Integers](https://eprint.iacr.org/2009/616.pdf)). On note $x \rightarrow \varepsilon(x)$ une fonction de chiffrement complètement homomorphe. Il vérifie : # # $$\begin{array}{ll}\varepsilon(x+y) = \varepsilon(x) + \varepsilon(y) \\ \varepsilon(x*y) = \varepsilon(x) * \varepsilon(y)\end{array}$$. Dans l'exemple qui suit, nous avons besoin que le système de cryptage soit [partiellement homomorphe](https://fr.wikipedia.org/wiki/Chiffrement_homomorphe#Syst.C3.A8mes_partiellement_homomorphes) : seule l'addition est stable une fois l'entier crypté. # # Un exemple : $\varepsilon:\mathbb{N} \rightarrow \mathbb{Z}/n\mathbb{Z}$ et $\varepsilon(x) = (x * a) \mod n$. Cela veut dire que l'on peut crypter des données, faire des calculs avec et décrypter un résultat qui serait presque le même que si les calculs avaient été fait sur les données non cryptées. 
# ## Exercise 1: write the two functions, encryption and decryption # # $n$ and $a$ must be chosen carefully to implement the encryption function: # $\varepsilon:\mathbb{N} \rightarrow \mathbb{Z}/n\mathbb{Z}$ with $\varepsilon(x) = (x * a) \mod n$. Then check that it preserves addition up to modulo $n$. # ## Exercise 2: train a linear regression from sklearn.datasets import load_diabetes data = load_diabetes() X = data.data Y = data.target # ## Exercise 3: rewrite the prediction function of a linear regression # # ## Exercise 4: put it all together # # Take an observation, encrypt it, predict, decrypt, and compare with the unencrypted version. Some ingenuity will probably be needed, since the encryption function applies to integers while the prediction model works on real numbers. # ## Questions # # * Under what condition can a model also be trained on encrypted data? # * What about decision trees?
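# One possible shape of the final assembly (exercise 4), sketched with a hand-made integer linear model rather than the diabetes regression: `coef` and `obs` below are hypothetical values, and a model trained on real-valued data would first need its coefficients and inputs scaled and rounded to integers. The key point is that multiplying a ciphertext by a *plaintext* integer and summing stays inside the scheme, so a dot product can be computed on encrypted data.

```python
import numpy as np

# Same toy scheme as exercise 1: epsilon(x) = (x * a) mod n, n prime.
n = 2147483647
a = 387420489
a_inv = pow(a, -1, n)

def encrypt(x):
    return (x * a) % n

def decrypt(c):
    return (c * a_inv) % n

# Hypothetical integer linear model and observation (illustrative values).
coef = np.array([3, 5, 2])
obs = np.array([10, 20, 30])

# The dot product is a sum of plaintext-integer multiples of ciphertexts,
# so it can be evaluated entirely on encrypted data.
encrypted_pred = int(coef.dot(encrypt(obs))) % n
pred = decrypt(encrypted_pred)

assert pred == coef.dot(obs)  # 190, identical to the unencrypted prediction
```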
_doc/notebooks/td2a/ml_crypted_data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) # # Visualization for class predictions over learning time # + import sys sys.path.insert(0, '/home/nathan/caffe-segnet-crf/python') import caffe caffe.set_mode_gpu() sys.path.insert(0, '/home/nathan/histo-seg/v2/core') import colorNormalization as cnorm import os, glob, shutil, cv2 import numpy as np # %matplotlib inline from matplotlib import pyplot as plt # - from sklearn.manifold import TSNE, MDS, Isomap from sklearn.decomposition import PCA def img2caffe(img): img = img.transpose((2,0,1)) img = np.expand_dims(img, 0) return img # ### Set up a way to parse a mask into one of our classes. We'll use 5 classes: # - Stroma only # - Grade 3 + Stroma # - Grade 4/5 + Stroma # - Benign + Stroma # - Grade 4/5 only def mask2classes(mask): u = np.unique(mask) if all(u == 3): # ST return 0 # return -1 elif 0 in u and 1 not in u and 2 not in u: # 3 return 1 # return -1 elif 0 not in u and 1 in u and 2 not in u: # 4 return 2 # return -1 elif 0 not in u and 1 not in u and 2 in u: # Benign return 3 # return -1 elif all(u == 1): # all 4 # return 4 return -1 else: return -1 image_types = { 0: ['ST','.', 'r'], 1: ['G3','o', 'b'], 2: ['G4','1', 'm'], 3: ['BN','^', 'g'], 4: ['44','v', 'k'] } # + weights_list = ['/home/nathan/histo-seg/semantic-pca/weights/whole_set_512/batchnorm_segnet_pca_90000.caffemodel'] deploy_proto = '/home/nathan/histo-seg/semantic-pca/code/segnet_basic_deploy_10X.prototxt' ## All of them image_list = sorted(glob.glob('/home/nathan/histo-seg/semantic-pca/data/_data_origin/jpg/*jpg')) masks_list = sorted(glob.glob('/home/nathan/histo-seg/semantic-pca/data/_data_origin/mask/*png')) weights = weights_list[0] net = 
caffe.Net(deploy_proto, weights, caffe.TEST) # + # Test our function use_masks_list = [mask for mask in masks_list if mask2classes(cv2.imread(mask,-1)) >= 0] use_image_list = [img for mask,img in zip(masks_list, image_list) if mask2classes(cv2.imread(mask,-1)) >= 0] use_masks_label = [mask2classes(cv2.imread(mask,-1)) for mask in use_masks_list] use_masks_label = np.asarray(use_masks_label) print np.unique(use_masks_label) print len(use_masks_list) # - # ### Fill up a matrix # + test_layer = 'pool4' x_vectors = [] for image in use_image_list: img_data = cv2.imread(image) img_data = cv2.resize(img_data, dsize=(600,600)) img_data = cnorm.normalize(img_data) img_caffe = img2caffe(img_data) obs = net.forward(data=img_caffe) x_vectors.append(net.blobs[test_layer].data[0,0:4,:,:].ravel()) print len(x_vectors) x = np.vstack(x_vectors) print x.shape x = x[:, (np.var(x, axis=0) > 0)] print x.shape transform = MDS(n_components=2) print 'Fitting' x_manifold = transform.fit_transform(x) print 'Plotting' fig, ax = plt.subplots(1,1,figsize=(9,9)) for label in image_types.iterkeys(): print label ax.scatter(x_manifold[use_masks_label==label, 0], x_manifold[use_masks_label==label, 1], label=image_types[label][0], marker=image_types[label][1], c=image_types[label][2], alpha=0.5, s = 180) ax.legend(scatterpoints=1) print 'Done' # -
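# The `img2caffe` helper above reshapes an OpenCV image from (H, W, C) into Caffe's (N, C, H, W) blob layout. A standalone NumPy-only check of that layout transform (local names, no Caffe required):

```python
import numpy as np

# HWC -> NCHW, as img2caffe does above (standalone sketch)
def to_nchw(img):
    img = img.transpose((2, 0, 1))   # (H, W, C) -> (C, H, W)
    return np.expand_dims(img, 0)    # add batch axis -> (1, C, H, W)

img = np.zeros((600, 600, 3), dtype=np.uint8)  # a cv2-style BGR image
blob = to_nchw(img)
assert blob.shape == (1, 3, 600, 600)
```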
notebooks/learning_evolution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np # Collect data: any method works def loadDataSet(): """ Create the dataset :return: list of word lists postingList, class labels classVec """ postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'], ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'], ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'], ['stop', 'posting', 'stupid', 'worthless', 'garbage'], ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'], ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']] classVec = [0, 1, 0, 1, 0, 1] return postingList, classVec # + # Prepare data: build word vectors from the text def createVocabList(dataSet): """ Get the set of all words :param dataSet: dataset :return: list of all words, with no duplicates """ vocabSet = set() # create empty set for document in dataSet: # the | operator computes the union of two sets vocabSet = vocabSet | set(document) # union of the two sets return list(vocabSet) def setOfWords2Vec(vocabList, inputSet): """ Check each word and set its entry to 1 if it occurs in the document :param vocabList: list of all words :param inputSet: input document :return: match list [0, 1, 0, 1...], where 1/0 marks whether each vocabulary word occurs in the input """ # Create a vector as long as the vocabulary, with all elements set to 0 returnVec = [0] * len(vocabList) # [0, 0...] # Walk through all words in the document; if a vocabulary word appears, set the corresponding entry to 1 for word in inputSet: if word in vocabList: returnVec[vocabList.index(word)] = 1 else: print("the word: %s is not in my Vocabulary!" % word) return returnVec # + # Analyze data: inspect the vocabulary to make sure parsing is correct # Check how the functions behave; the vocabulary should contain no duplicate words (sort it if needed). listOPosts, listClasses = loadDataSet() myVocabList = createVocabList(listOPosts) myVocabList # Check that the functions work. For example: which word is at index 2 of myVocabList? It should be 'I'. # That word appears in the 3rd document; now check whether it appears in the 4th one. setOfWords2Vec(myVocabList, listOPosts[0]) setOfWords2Vec(myVocabList, listOPosts[3]) # - # Train the algorithm: compute probabilities from the word vectors # Naive Bayes classifier training function def trainNB0(trainMatrix, trainCategory): """ Training, original version :param trainMatrix: document-word matrix [[1, 0, 1, 1, 1...], [], []...] :param trainCategory: class label of each document [0, 1, 1, 0...]; the list is as long as the word matrix has rows :return: """ # number of documents numTrainDocs = len(trainMatrix) # number of words numWords = len(trainMatrix[0]) # Probability of an abusive document: the number of 1s in trainCategory is the number of # abusive documents; dividing by the total number of documents gives this probability pAbusive = sum(trainCategory) / float(numTrainDocs) # Build word-count vectors p0Num = np.zeros(numWords) p1Num = np.zeros(numWords) # Total word counts over the whole dataset p0Denom = 0.0 p1Denom = 0.0 for i in range(numTrainDocs): # Is this an abusive document? if trainCategory[i] == 1: # If so, accumulate its word vector p1Num += trainMatrix[i] # Sum all entries of the vector, i.e. count the total number of words in abusive documents p1Denom += sum(trainMatrix[i]) else: p0Num += trainMatrix[i] p0Denom += sum(trainMatrix[i]) # Class 1 (abusive): the list [P(F1|C1), P(F2|C1), ...], # i.e. the probability of each word given class 1 p1Vect = p1Num / p1Denom # Probability of each word given class 0 p0Vect = p0Num / p0Denom return p0Vect, p1Vect, pAbusive # Test the algorithm: adapt the classifier to real-world conditions def trainNB1(trainMatrix, trainCategory): """ Training, improved version :param trainMatrix: document-word matrix :param trainCategory: class label of each document :return: """ # total number of documents numTrainDocs = len(trainMatrix) # total number of words numWords = len(trainMatrix[0]) # probability of an abusive document pAbusive = sum(trainCategory) / float(numTrainDocs) # Build word-count vectors # p0Num: counts for the normal class # p1Num: counts for the abusive class p0Num = np.ones(numWords) # [0, 0, ...] --> [1, 1, ...] p1Num = np.ones(numWords) # Total word counts over the whole dataset, initialised to 2.0 # The denominators are adjusted to the data (2 mainly avoids zero denominators; the value can be tuned) # p0Denom: count for the normal class # p1Denom: count for the abusive class p0Denom = 2.0 p1Denom = 2.0 for i in range(numTrainDocs): # Is this an abusive document? if trainCategory[i] == 1: # If so, accumulate its word vector p1Num += trainMatrix[i] # Sum all entries of the vector, i.e. count the total number of words in abusive documents p1Denom += sum(trainMatrix[i]) else: p0Num += trainMatrix[i] p0Denom += sum(trainMatrix[i]) # Class 1 (abusive): the list [log(P(F1|C1)), log(P(F2|C1)), ...], # i.e. the probability of each word given class 1 p1Vect = np.log(p1Num / p1Denom) # Probability of each word given class 0 p0Vect = np.log(p0Num / p0Denom) return p0Vect, p1Vect, pAbusive # + # Apply the algorithm: classify posts on a community message board # Naive Bayes classification function def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1): """ Apply the algorithm: # turn the multiplication into an addition Multiplication: P(C|F1F2...Fn) = P(F1F2...Fn|C)P(C)/P(F1F2...Fn) Addition: P(F1|C)*P(F2|C)...P(Fn|C)P(C) --> log(P(F1|C))+log(P(F2|C))+...+log(P(Fn|C))+log(P(C)) :param vec2Classify: data to classify [0, 1, 1, 1, 1, ...], i.e. the vector to classify :param p0Vec: class 0, i.e. the list [log(P(F1|C0)), log(P(F2|C0)), ...] for normal documents :param p1Vec: class 1, i.e. the list [log(P(F1|C1)), log(P(F2|C1)), ...] for abusive documents :param pClass1: class 1, i.e. the prior probability of an abusive document :return: class 1 or 0 """ # Compute log(P(F1|C))+log(P(F2|C))+...+log(P(Fn|C))+log(P(C)) p1 = sum(vec2Classify * p1Vec) + np.log(pClass1) # i.e. the numerator of Bayes' rule p0 = sum(vec2Classify * p0Vec) + np.log(1.0 - pClass1) if p1 > p0: return 1 else: return 0 def testingNB(): """ Test the naive Bayes algorithm """ # 1. load the dataset listOPosts, listClasses = loadDataSet() # 2. build the vocabulary myVocabList = createVocabList(listOPosts) # 3. mark word occurrences and build the data matrix trainMat = [] for postinDoc in listOPosts: # yields an m*len(myVocabList) matrix holding only 0/1 entries trainMat.append(setOfWords2Vec(myVocabList, postinDoc)) # 4. train p0V, p1V, pAb = trainNB1(np.array(trainMat), np.array(listClasses)) # 5. test testEntry = ['love', 'my', 'dalmation'] thisDoc = np.array(setOfWords2Vec(myVocabList, testEntry)) print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb)) testEntry = ['stupid', 'garbage'] thisDoc = np.array(setOfWords2Vec(myVocabList, testEntry)) print(testEntry, 'classified as: ', classifyNB(thisDoc, p0V, p1V, pAb)) # - testingNB()
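# A quick numeric illustration (toy values, unrelated to the dataset above) of why classifyNB sums logarithms instead of multiplying probabilities: the product of many small conditional probabilities underflows float64 to exactly 0.0, while the equivalent sum of logs stays finite.

```python
import numpy as np

# 400 hypothetical conditional probabilities P(Fi|C), each equal to 0.01
probs = np.full(400, 0.01)

product = np.prod(probs)         # 0.01**400 = 1e-800 underflows float64 to 0.0
log_sum = np.sum(np.log(probs))  # 400 * log(0.01), perfectly representable

assert product == 0.0
assert np.isfinite(log_sum)
```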
naive bayes/naive_bayes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: .venv # language: python # name: .venv # --- # ## Channels & legend # # Besides the x-axis and the y-axis, there are five more channels in Vizzu you can use to visualize your data. Similarly to the axes, you can put any number of dimensions and/or one measure on a channel. In the following example the four most commonly used channels are shown. The fifth, the noop channel, is introduced later in the Without coordinates & noop channel chapter. # # Data on the label channel will be written on the markers on the chart. Vizzu automatically determines the best way to position these labels, but you can set them differently with the style object introduced in the Color palette & fonts chapter. # Note: The data used in this tutorial is available [here](https://github.com/vizzuhq/ipyvizzu/blob/gh-pages/docs/tutorial/music_data.json). You can read more about the available types of data in the [Adding data](./data.ipynb) chapter. # + from ipyvizzu import Chart, Data, Config chart = Chart() data = Data.from_json("./music_data.json") chart.animate(data) chart.animate(Config({ "channels": { "y": { "set": "Popularity" }, "x": { "set": "Genres" } }, "title": "Label" })) chart.animate(Config({ "channels": { "label": { "attach": "Popularity" } } })) snapshot1 = chart.store() # - # The lightness channel sets the lightness of the markers. In this case the same measure (Popularity) that is on the y-axis is added to it, meaning that the columns' height and lightness represent the same values. The legend for the lightness channel is turned on using the legend property. # # Note: This is an example where we explicitly instruct Vizzu to show the legend. By default Vizzu automatically shows/hides the legend when necessary. You can also turn it off with the legend : null command or set it back to automatic mode with legend : 'auto'.
# + chart.animate(snapshot1) chart.animate(Config({"title": "Lightness - legend on"})) chart.animate(Config({ "channels": { "lightness": { "attach": "Popularity" } }, "legend": "lightness" })) snapshot2 = chart.store() # - # The color channel sets the color of the markers. The same dimension (Genres) is put on it that is on the x-axis resulting in each bar having a different color. If a measure is put on the color channel, a color range will be used. # # Note: The value on the lightness channel is removed in this step as it doesn’t make sense to use it together with the color channel in this case. # + chart.animate(snapshot2) chart.animate(Config({"title": "Color"})) chart.animate(Config({ "channels": { "color": { "attach": "Genres" } }, "legend": "color" })) snapshot3 = chart.store() # - # The size channel sets the size of the markers if the geometry is circle - where size sets the radius of the circles - or line - where size determines line width. It is ignored when using rectangle or area geometry. This is why we change the geometry to circle in the example. # + chart.animate(snapshot3) chart.animate(Config({"title": "Size - change of geometry required"})) chart.animate(Config({ "channels": { "size": { "set": "Popularity" } }, "geometry": "circle" })) # - # Next chapter: [Group/stack](./group.ipynb) ----- Previous chapter: [Geometry](./geometry.ipynb) ----- Back to the [Table of contents](../doc.ipynb#tutorial)
docs/tutorial/channels.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Implementing PCA with gradient ascent import numpy as np import matplotlib.pyplot as plt X = np.empty((100, 2)) X[:,0] = np.random.uniform(0., 100., size=100) X[:,1] = 0.75 * X[:,0] + 3. + np.random.normal(0, 10., size=100) plt.scatter(X[:,0], X[:,1]) plt.show() # ### demean def demean(X): return X - np.mean(X, axis=0) X_demean = demean(X) plt.scatter(X_demean[:,0], X_demean[:,1]) plt.show() np.mean(X_demean[:,0]) np.mean(X_demean[:,1]) # ### Gradient ascent # + def f(w, X): return np.sum((X.dot(w)**2)) / len(X) def df_math(w, X): return X.T.dot(X.dot(w)) * 2. / len(X) def df_debug(w, X, epsilon=0.0001): res = np.empty(len(w)) for i in range(len(w)): w_1 = w.copy() w_1[i] += epsilon w_2 = w.copy() w_2[i] -= epsilon res[i] = (f(w_1, X) - f(w_2, X)) / (2 * epsilon) return res def direction(w): return w / np.linalg.norm(w) def gradient_ascent(df, X, initial_w, eta, n_iters = 1e4, epsilon=1e-8): w = direction(initial_w) cur_iter = 0 while cur_iter < n_iters: gradient = df(w, X) last_w = w w = w + eta * gradient w = direction(w) # note 1: renormalise to a unit direction at every step if(abs(f(w, X) - f(last_w, X)) < epsilon): break cur_iter += 1 return w # - initial_w = np.random.random(X.shape[1]) # note 2: do not start from the zero vector initial_w eta = 0.001 # note 3: do not standardise the data with StandardScaler gradient_ascent(df_debug, X_demean, initial_w, eta) gradient_ascent(df_math, X_demean, initial_w, eta) w = gradient_ascent(df_math, X_demean, initial_w, eta) plt.scatter(X_demean[:,0], X_demean[:,1]) plt.plot([0, w[0]*30], [0, w[1]*30], color='r') plt.show() # ### Testing on an extreme (noise-free) dataset X2 = np.empty((100, 2)) X2[:,0] = np.random.uniform(0., 100., size=100) X2[:,1] = 0.75 * X2[:,0] + 3. 
plt.scatter(X2[:,0], X2[:,1]) plt.show() X2_demean = demean(X2) w2 = gradient_ascent(df_math, X2_demean, initial_w, eta) plt.scatter(X2_demean[:,0], X2_demean[:,1]) plt.plot([0, w2[0]*30], [0, w2[1]*30], color='r') plt.show() # As an exercise, try implementing stochastic gradient and mini-batch gradient versions yourself :)
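# A self-contained sanity check of the approach above (all names here are local to this sketch, assuming only NumPy): gradient ascent on $f(w) = \|Xw\|^2 / m$ over unit vectors should converge to the top eigenvector of $X^T X / m$, i.e. the first principal component, up to sign.

```python
import numpy as np

# Rebuild a small demeaned dataset like the one above
rng = np.random.RandomState(0)
X = np.empty((100, 2))
X[:, 0] = rng.uniform(0., 100., size=100)
X[:, 1] = 0.75 * X[:, 0] + 3. + rng.normal(0, 10., size=100)
X = X - X.mean(axis=0)              # demean

# Gradient ascent with renormalisation at every step
w = rng.random_sample(2)
w = w / np.linalg.norm(w)
for _ in range(1000):
    gradient = X.T.dot(X.dot(w)) * 2. / len(X)
    w = w + 0.001 * gradient
    w = w / np.linalg.norm(w)       # keep w a unit vector

# Compare against the eigenvector of the largest eigenvalue of X^T X / m
eig_vals, eig_vecs = np.linalg.eigh(X.T.dot(X) / len(X))
top = eig_vecs[:, np.argmax(eig_vals)]
assert abs(w.dot(top)) > 0.999      # same direction, up to sign
```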
07-PCA-and-Gradient-Ascent/03-Implement-PCA-in-BGA/03-Implement-PCA-in-BGA.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Typing time series transformations # # Contributors: @mloning, @fkiraly # # ## Problem statement # Time series data comes in many different forms: univariate or multivariate, a single instance or multiple instances. # # Correspondingly, there are many different forms of transformations that can be applied to time series data. # # This is an attempt to organise these transformations into categories with common APIs. # # 1. Find a taxonomy of time series transformations using scientific types based on input and output types # 2. Develop transformer class structure/hierarchies in Python based on scitype taxonomy from sktime.forecasting.all import * from sktime.forecasting.model_selection import temporal_train_test_split from sklearn.preprocessing import FunctionTransformer import typing from typing import Union, NewType import pandas as pd import numpy as np # ## Data types # # ### Primitives # * single or multiple primitives # * primitive types or 1d array # single or multiple primitives Primitive = Union[np.integer, int, np.floating, float, str] Primitives = np.ndarray # ### Tabular # * scikit-learn-like data # * 2d arrays: rows represent instances, columns represent variables/features, cells contain Primitives Tabular = Union[pd.DataFrame, np.ndarray] # 2d arrays # ### Series # * univariate or multivariate # * 1d or 2d arrays: rows represent time points, columns represent variables, cells represent observations # * observations are Primitives # * observations are ordered (usually in time) # * for multiple series, for now we only support time-homogeneous series (all series are observed at the same time points) # univariate or multivariate UnivariateSeries = Union[pd.Series, np.ndarray] MultivariateSeries = Union[pd.DataFrame, np.ndarray] Series = 
Union[UnivariateSeries, MultivariateSeries] def make_series(n_timepoints=10, n_variables=1): assert n_timepoints > 1 assert n_variables > 0 index = pd.date_range("2020-01-01", periods=n_timepoints, freq="D") if n_variables == 1: return pd.Series(np.random.lognormal(size=n_timepoints), index=index, name="Data") else: return pd.DataFrame(np.random.lognormal(size=(n_timepoints, n_variables)), index=index) y = make_series(n_timepoints=10) Y = make_series(n_timepoints=10, n_variables=3) y_train, y_test = temporal_train_test_split(make_series(n_timepoints=20)) # ### Series-as-features # * panel/longitudinal data with multiple instances of univariate/multivariate Series (and potentially Primitives) # * nested/3d arrays, potentially also awkward array: rows represent instances, columns represent variables/features, 3rd dimension represents time points SeriesAsFeatures = Union[pd.DataFrame, np.ndarray] # ## Transformation types # ### Base abstract transformer type # * implements default and empty fit method # * implements default fit_transform method # * inverse-transform is optional, not implemented as an abstract method and only implemented by concrete transformers if available # + from sktime.base import BaseEstimator class BaseTransformer(BaseEstimator): def fit(self, X, y=None): self._is_fitted = True return self def transform(self, X, y=None): raise NotImplementedError("abstract method") def fit_transform(self, X, y=None): return self.fit(X, y).transform(X, y) # - # ### 1. Series -> Primitives # * encapsulated in functions - not classes - because not fittable, so no parameters have to be stored (not stateful) # * may have tuneable arguments (e.g. 
which quantile to extract), which can be exposed in compositions via FunctionTransformer # * optionally, wrapped in classes for convenience # # #### Examples: # * mean # * quantile def series_to_primitive(x: Series, **kwargs) -> Primitives: # non-fittable pass # example def quantile(x, q=0.25): # non-fittable return np.quantile(x, q, axis=0) quantile(y) quantile(Y) # turn functions into classes for model composition from sklearn.preprocessing import FunctionTransformer transformer = FunctionTransformer(quantile) transformer.fit_transform(y) # alternatively, base class with __call__ class _SeriesToPrimitivesTransformer(BaseTransformer): def transform(self, X: Series, y=None) -> Primitives: raise NotImplementedError("abstract method") def inverse_transform(self, X: Primitives, y=None) -> Series: # any examples of inverse transform? # all series -> primitive transforms seem lossy raise NotImplementedError() # example class QuantileTransformer(_SeriesToPrimitivesTransformer): def __init__(self, q=0.5): self.q = q def transform(self, X, y=None): return np.quantile(X, self.q, axis=0) transformer = QuantileTransformer(q=0.25) transformer.fit_transform(y) # ### 2. Series -> Series # * may be fittable (e.g. extract AR coefficients from series) # * if not fittable, encapsulated as functions, composable via FunctionTransformer, optionally wrapped in classes for convenience # * if fittable, encapsulated in classes # * input and output of transform may have different number of time points and different index # * input and output may have different domain (no longer on the same time line) # * in a forecasting pipeline, fit_transform is called before calling fit of the final forecaster, and inverse_transform is called after calling predict of the final forecaster # # #### Examples # * Box-Cox transform # * logarithm # * smoothing, filters # * detrender # * deseasonalisation # * Change point annotator (e.g. 
returns sequence of change point waiting times) def series_to_series(x: Series, **kwargs) -> Series: # non-fittable pass def fourier_transform(x): return np.fft.fft(x, axis=0) fourier_transform(y).shape fourier_transform(Y).shape # scikit-learn FunctionTransformer transformer = FunctionTransformer(fourier_transform) transformer.fit_transform(y).shape # bespoke class implementations class _SeriesToSeriesTransformer(BaseTransformer): def transform(self, X: Series, y=None) -> Series: raise NotImplementedError("abstract method") def inverse_transform(self, X: Series, y=None) -> Series: raise NotImplementedError("abstract method") # example class LogTransformer(_SeriesToSeriesTransformer): def transform(self, X, y=None): return np.log(X) def inverse_transform(self, X, y=None): return np.exp(X) transformer = LogTransformer() transformer.fit_transform(Y) # ### Special case: Featurizer # In a forecasting pipeline, fit_transform is called before calling fit of the final forecaster and transform is called *before* calling predict of the final forecaster. So, Featurizers need to be aware of the forecasting horizon and need to be able to return transformed data even when y (forecasted values) is not available yet. # # #### Examples: # * Adding holiday dummies # * Adding Fourier terms # # #### Option 1: Keep transformer API but change inner workings, adapt pipeline classes # In this option, we treat Featurizers as series-to-series transformers, but change how input data is handled in transform, as transform has to work on the time index only, with no data. Additionally, this option requires adjusting the pipeline class. 
# + # examples from pmdarima.preprocessing.exog.fourier import _fourier_terms def _get_index(X=None, y=None): assert X is not None or y is not None if X is None: return y.index else: return X.index class HolidayFeaturizer(_SeriesToSeriesTransformer): def __init__(self, calendar=None): self.calendar = calendar def transform(self, X, y=None): index = _get_index(X, y) start, end = index[0], index[-1] holidays = self.calendar.holidays(start, end, return_name=True) if holidays.empty: return X Xt = pd.get_dummies(holidays.reindex(index)) return pd.concat([X, Xt], axis=1) class FourierFeaturizer(_SeriesToSeriesTransformer): def __init__(self, sp=None, k=None): self.sp = sp self.k = k def fit(self, X, y=None): index = _get_index(X, y) assert self.sp is not None if self.k is None: k = self.sp // 2 else: k = self.k if 2 * k > self.sp or k < 1: raise ValueError("k must be a positive integer not greater " "than sp//2") # Compute the periods of all Fourier terms. Since R allows multiple # seasonality and we do not, we can do this much more simply. p = ((np.arange(k) + 1) / self.sp).astype(np.float64) # 1:K / m # If sinpi is 0... maybe blow up? 
# if abs(2 * p - round(2 * p)) < np.finfo(y.dtype).eps: # min eps self.p_ = p self.k_ = k self.n_ = index.shape[0] self._start = index[0] self._cutoff = index[-1] return self def transform(self, X, y=None): index = _get_index(X, y) # extrapolate fourier terms fh = ForecastingHorizon(index, is_relative=False) start = self._start cutoff = self._cutoff times = fh.to_absolute_int(start, cutoff).to_numpy(dtype=np.float64) + 1 Xt = pd.DataFrame(_fourier_terms(self.p_, times), index=index) return pd.concat([X, Xt], axis=1) # - # fitting from pandas.tseries.holiday import USFederalHolidayCalendar y_train = make_series(100) transformer = HolidayFeaturizer(calendar=USFederalHolidayCalendar()) transformer.fit_transform(None, y_train).head(3) # fitting X_train = None t = FourierFeaturizer(sp=12, k=2) Xt = t.fit_transform(X_train, y=y_train) Xt.tail(3) plot_series(*[Xt.iloc[:, i] for i in range(Xt.shape[1])]) # + # predicting X_test = None fh = ForecastingHorizon(y_test.index, is_relative=False) # pre-initialised empty y_pred with fh as index if X_test is None: X_test = pd.DataFrame(index=fh.to_absolute(t._cutoff).to_pandas(), dtype=np.float) X_test = t.transform(X_test) X_test.head(5) # - # #### Option 2: Add new Featurizer type and composition class # In this option, we treat Featurizers as their own distinct estimator type. This allows us to align their API with their scientific type, taking in data in fit and predicting values for a given time index (or forecasting horizon). In contrast to forecasters, they do not predict values in the same domain as the training data. 
# + from sktime.forecasting.base._sktime import OptionalForecastingHorizonMixin from sktime.forecasting.base._sktime import BaseSktimeForecaster class _Featurizer(OptionalForecastingHorizonMixin, BaseSktimeForecaster): def fit(self, y, fh=None, X=None): self._set_fh(fh) self._set_y_X(y, X) self._is_fitted = True return self def predict(self, fh=None, X=None) -> Series: raise NotImplementedError("abstract method") def fit_predict(self, y, fh=None, X=None) -> Series: return self.fit(y, fh, X).predict(fh, X) # - class HolidayFeaturizer(_Featurizer): def __init__(self, calendar=None): self.calendar = calendar self._features = None super(HolidayFeaturizer, self).__init__() def fit(self, y, fh=None, X=None): super(HolidayFeaturizer, self).fit(y, fh, X) assert self.calendar is not None assert isinstance(y.index, pd.DatetimeIndex) return self def predict(self, fh=None, X=None): self.check_is_fitted() self._set_fh(fh) fh = self.fh.to_absolute(self.cutoff).to_pandas() holidays = self.calendar.holidays(fh[0], fh[-1], return_name=True) return pd.get_dummies(holidays.reindex(fh, axis=0)).reindex(self._features, axis=1, fill_value=0) def fit_predict(self, y, fh=None, X=None): # additionally keeps track of holiday names, so that we generate the same features in fit and predict in pipeline self.fit(y, fh, X) X_pred = self.predict(fh, X) self._features = X_pred.columns return X_pred from pandas.tseries.holiday import USFederalHolidayCalendar y_train = make_series(100) featurizer = HolidayFeaturizer(calendar=USFederalHolidayCalendar()) featurizer.fit(y_train) fh = ForecastingHorizon(y_train.index, is_relative=False) X_pred = featurizer.predict(fh) X_pred.head() # may return empty data frame # + from sklearn.base import clone def _clone_estimators(estimators): return [(name, clone(estimator)) for name, estimator in estimators] class FeaturizedForecaster(OptionalForecastingHorizonMixin, BaseSktimeForecaster): """Composition class for forecasting with featurizers""" def __init__(self, estimators): 
self.estimators = estimators super(FeaturizedForecaster, self).__init__() def fit(self, y, fh=None, X=None): self._set_y_X(y, X) _check_estimators(self.estimators) # check and clone estimators self.estimators_ = _clone_estimators(self.estimators) xs = [] # use in-sample forecasting horizon h = ForecastingHorizon(self._y.index, is_relative=False) for i, (name, featurizer) in self.featurizers_(): # 1. fit featurizers x = featurizer.fit_predict(y, h, X) xs.append(x) self.estimators_[i] = (name, featurizer) # 2. concatenate generated features with X X = pd.concat([X, *xs], axis=1) # 3. fit final forecaster name, forecaster = self.forecaster_ forecaster.fit(y, fh, X) self.estimators_[-1] = (name, forecaster) self._is_fitted = True return self def predict(self, fh=None, X=None): self.check_is_fitted() xs = [] for _, (_, featurizer) in self.featurizers_(): # 1. generate features x = featurizer.predict(fh, X=X) xs.append(x) # 2. concatenate generated features with X X = pd.concat([X, *xs], axis=1) # 3. predict with final forecaster _, forecaster = self.forecaster_ y_pred = forecaster.predict(fh, X) return y_pred @property def forecaster_(self): return self.estimators_[-1] def featurizers_(self): yield from enumerate(self.estimators_[:-1]) def _get_name(estimator): return estimator.__class__.__name__ def _check_estimators(estimators): assert isinstance(estimators, list) assert len(estimators) > 0 for estimator in estimators: assert isinstance(estimator, tuple) names = [name for name, _ in estimators] assert len(set(names)) == len(names) return estimators def make_featurized_forecaster(*estimators): """Convenience function for creating a FeaturizedForecaster""" return FeaturizedForecaster([(_get_name(estimator), estimator) for estimator in estimators]) # - y = make_series(n_timepoints=365) y_train, y_test = temporal_train_test_split(y, test_size=10) forecaster = make_featurized_forecaster( HolidayFeaturizer(USFederalHolidayCalendar()), AutoARIMA(suppress_warnings=True) ) forecaster.fit(y_train) fh = ForecastingHorizon(y_test.index, is_relative=False) y_pred = forecaster.predict(fh) y_pred.head() # 
check if features have been passed to final forecaster _X = forecaster.forecaster_[-1]._X assert _X.shape[0] == y_train.shape[0] assert _X.shape[1] > 1 _X.head() # ### 3. Series-as-features -> Tabular # * usually fittable and encapsulated in classes # * input of fit and transform always needs to have the same number of columns and time points, but may have a different number of instances # * input and output of transform always have the same number of instances, but may have different numbers of columns and time points # # #### Examples # * Shapelet transform (returns minimum distance of each instance to found shapelets as tabular matrix) # * Feature extractors # * Tabularizer/TimeBinner # * composite transformers, e.g. applying series-to-primitives transforms iteratively over instances/rows class _SeriesAsFeaturesToTabularTransformer(BaseTransformer): def transform(self, X: SeriesAsFeatures, y=None) -> Tabular: raise NotImplementedError("abstract method") def inverse_transform(self, X: Tabular, y=None) -> SeriesAsFeatures: raise NotImplementedError() # example from sktime.transformers.series_as_features.summarize import RandomIntervalFeatureExtractor from sktime.utils._testing import make_classification_problem X, y = make_classification_problem() X.columns = ["var"] transformer = RandomIntervalFeatureExtractor() Xt = transformer.fit_transform(X) Xt.head() # ### 4. Series-as-features -> Series-as-features # * usually fittable and encapsulated in classes # * input of fit and transform always needs to have the same number of columns and time points, but may have a different number of instances # * input and output of transform always have the same number of instances, but may have different numbers of columns and time points (including going from time-homogeneous data to time-heterogeneous data) # # #### Examples # * dictionary-based transforms # * time series segmentation, e.g. 
splitting time series columns into multiple ones, see IntervalSegmenter # * time series concatenation, e.g. merging multiple time series columns into a single one, see ColumnConcatenator # * composite transformers, e.g. applying series-to-series or series-to-primitive transforms iteratively over instances/rows class _SeriesAsFeaturesToSeriesAsFeaturesTransformer(BaseTransformer): def transform(self, X: SeriesAsFeatures, y=None) -> SeriesAsFeatures: raise NotImplementedError("abstract method") def inverse_transform(self, X: SeriesAsFeatures, y=None) -> SeriesAsFeatures: raise NotImplementedError() # example from sktime.transformers.series_as_features.segment import RandomIntervalSegmenter transformer = RandomIntervalSegmenter(n_intervals=3) Xt = transformer.fit_transform(X) Xt.head() # ### 5. Meta-transformers # * Concatenators/chaining, e.g. pipelines # * Appliers, e.g. InstanceTransformer (or RowTransformer) # * Unions/Selectors, e.g. FeatureUnion, ColumnTransformer # * Adaptors, e.g. 
applying scikit-learn like transformer to single series by making the series into a column vector # + # example def make_instance_transformer(transformer, **kwargs): """Factory function for creating InstanceTransformer based on transform type""" if isinstance(transformer, _SeriesToSeriesTransformer): return SeriesToSeriesInstanceTransformer(transformer, **kwargs) elif isinstance(transformer, _SeriesToPrimitivesTransformer): return SeriesToPrimitivesInstanceTransformer(transformer, **kwargs) else: raise TypeError("transformer type not supported") from sklearn.base import clone from sktime.utils.data_container import from_2d_array_to_nested def _from_nested_to_series(x): """Helper function to un-nest series""" if x.shape[0] == 1: return x.iloc[0] else: return pd.DataFrame(x.tolist()).T def _make_column_names(columns, prefix): """Helper function to generate column names""" if len(prefix) > 0: prefix = prefix + "_" return [f"{prefix}{column}" for column in columns] class _InstanceTransformer: """Base class for InstanceTransformer""" def __init__(self, transformer, prefix="", check_transformer=True): self.transformer = transformer self.check_transformer = check_transformer self.prefix = prefix def _check_transformer(self): """Check transformer type compatibility""" assert hasattr(self, "_supported_transformer_type") if self.check_transformer and not isinstance(self.transformer, self._supported_transformer_type): raise TypeError(f"transformer must be a {self._supported_transformer_type.__class__.__name__}") def fit(self, X, y=None): self._check_transformer() transformer = clone(self.transformer) self.transformer_ = transformer.fit(X, y) return self class SeriesToPrimitivesInstanceTransformer(_InstanceTransformer, _SeriesAsFeaturesToTabularTransformer): _supported_transformer_type = _SeriesToPrimitivesTransformer def transform(self, X, y=None): Xt = np.zeros(X.shape) for i, (_, x) in enumerate(X.iterrows()): x = _from_nested_to_series(x) Xt[i] = 
self.transformer_.transform(x) return pd.DataFrame(Xt, columns=_make_column_names(X.columns, self.prefix)) class SeriesToSeriesInstanceTransformer(_InstanceTransformer, _SeriesAsFeaturesToSeriesAsFeaturesTransformer): _supported_transformer_type = _SeriesToSeriesTransformer def transform(self, X, y=None): Xt = None for i, (_, x) in enumerate(X.iterrows()): x = _from_nested_to_series(x) xt = self.transformer_.transform(x) Xt = pd.concat([Xt, from_2d_array_to_nested(xt.T)], axis=1) Xt = Xt.T Xt.columns = _make_column_names(X.columns, self.prefix) return Xt # - from sktime.datasets import load_basic_motions X, _ = load_basic_motions(return_X_y=True) transformer = make_instance_transformer(QuantileTransformer(q=.25), prefix="q25") print(type(transformer)) Xt = transformer.fit_transform(X) Xt.head() transformer = make_instance_transformer(LogTransformer(), prefix="log") print(type(transformer)) Xt = transformer.fit_transform(X) Xt.head() # ## Pipelining # # ### Forecasting # * pipeline for exogenous transformations with Featurizer as in option 1 # * separate pipeline class for transforming the target series from sktime.forecasting.base._base import BaseForecaster from sktime.forecasting.base._sktime import BaseSktimeForecaster from sktime.forecasting.base._sktime import OptionalForecastingHorizonMixin # + class ForecastingPipeline(OptionalForecastingHorizonMixin, BaseSktimeForecaster): def __init__(self, estimators): self.estimators = estimators super(ForecastingPipeline, self).__init__() def fit(self, y, X=None, fh=None): # even though this is a meta-estimator, we need to keep track of fh # and the cutoff to pre-initialise X in predict if no X is given self._set_y_X(y, X) self._set_fh(fh) _check_estimators(self.estimators) # check and clone estimators self.estimators_ = [(name, clone(estimator)) for name, estimator in self.estimators] # 1. 
transformations
        for i, name, transformer in self.transformers_():
            X = transformer.fit_transform(X, y)
            self.estimators_[i] = (name, transformer)

        # 2. fit final forecaster
        name, forecaster = self.forecaster_
        forecaster.fit(y, fh=fh, X_train=X)
        self.estimators_[-1] = (name, forecaster)

        self._is_fitted = True
        return self

    def predict(self, fh=None, X=None):
        self.check_is_fitted()
        self._set_fh(fh)

        # if no exogenous variables are given, we need to pre-initialise
        # y_pred to carry the forecasting index
        y_pred = None
        if X is None:
            y_pred = pd.Series(index=self.fh.to_absolute(self.cutoff).to_pandas(), dtype=float)

        # 1. exogenous transformations
        for _, _, transformer in self.transformers_():
            X = transformer.transform(X, y_pred)

        # 2. prediction
        _, forecaster = self.forecaster_
        y_pred = forecaster.predict(self.fh, X)
        return y_pred

    def transform(self, y=None, X=None):
        self.check_is_fitted()
        for i, name, transformer in self.transformers_():
            X = transformer.transform(X, y)
        return X

    @property
    def forecaster_(self):
        return self.estimators_[-1]

    def transformers_(self):
        transformers = self.estimators_[:-1]
        for i, (name, transformer) in enumerate(transformers):
            yield i, name, transformer


def make_pipeline(*estimators):
    estimators = [(_get_name(estimator), estimator) for estimator in estimators]
    return ForecastingPipeline(estimators)

# +
y = load_airline()
k = 6
y_train, y_test = temporal_train_test_split(y, test_size=10)
fh = ForecastingHorizon(y_test.index, is_relative=False)

forecaster = make_pipeline(FourierFeaturizer(sp=12, k=k),
                           AutoARIMA(seasonal=False, max_p=2, max_q=2, max_d=1, suppress_warnings=True))
forecaster.fit(y_train)

# tests
_data = forecaster.forecaster_[-1]._forecaster.model_.arima_res_.data
_X = _data.exog
assert _X.shape == (y_train.shape[0], k * 2), _X.shape
_y = _data.endog
np.testing.assert_array_equal(_y, y_train)

y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
# -

class TransformedTargetForecaster(BaseSktimeForecaster):

    def __init__(self, 
estimators):
        self.estimators = estimators
        super(TransformedTargetForecaster, self).__init__()

    def fit(self, y, fh=None, X=None):
        _check_estimators(self.estimators)
        self.estimators_ = [(name, clone(estimator)) for name, estimator in self.estimators]

        # 1. transform
        for i, name, transformer in self.transformers_():
            y = transformer.fit_transform(y)
            self.estimators_[i] = (name, transformer)

        # 2. fit final forecaster
        name, forecaster = self.forecaster_
        forecaster.fit(y, fh, X)
        self.estimators_[-1] = (name, forecaster)

        self._is_fitted = True
        return self

    def predict(self, fh=None, X=None):
        self.check_is_fitted()

        # 1. predict
        _, forecaster = self.forecaster_
        y_pred = forecaster.predict(fh, X)

        # 2. inverse-transform
        for _, _, transformer in self.transformers_(reverse=True):
            y_pred = transformer.inverse_transform(y_pred)
        return y_pred

    def transformers_(self, reverse=False):
        transformers = self.estimators_[:-1]
        if reverse:
            transformers = reversed(transformers)
        for i, (name, transformer) in enumerate(transformers):
            yield i, name, transformer

    @property
    def forecaster_(self):
        return self.estimators_[-1]

# +
pipe = TransformedTargetForecaster([("log", LogTransformer()), ("forecaster", forecaster)])
pipe.fit(y_train)

# tests
_data = pipe.forecaster_[-1].forecaster_[-1]._forecaster.model_.arima_res_.data
_X = _data.exog
assert _X.shape == (y_train.shape[0], k * 2), _X.shape
_y = _data.endog
np.testing.assert_array_equal(_y, np.log(y_train))

y_pred = pipe.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
# -

# ### Series-as-features setting
# * use scikit-learn pipeline

from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification_problem(n_instances=500, n_columns=2)
X += 100
X_train, X_test, y_train, y_test = train_test_split(X, y)

classifier = make_pipeline(
    make_instance_transformer(LogTransformer()), 
make_instance_transformer(QuantileTransformer()), DecisionTreeClassifier()) classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) confusion_matrix(y_test, y_pred)
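# The series-as-features transformers and pipelines above all operate on sktime's nested pandas DataFrame representation. A minimal sketch of that data structure (the variable names and random data here are illustrative, not part of sktime):

```python
import numpy as np
import pandas as pd

# Nested ("series-as-features") representation: rows are instances,
# columns are variables, and each cell holds an entire pd.Series.
n_instances, n_timepoints = 3, 5
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "var": [pd.Series(rng.standard_normal(n_timepoints)) for _ in range(n_instances)]
})

print(X.shape)             # (3, 1): one column, but each cell is a series
print(X.iloc[0, 0].shape)  # (5,): the series nested in the first cell
```

# This is why the transform types above can change the number of time points per instance without changing the outer DataFrame shape.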
steps/03_transformer_api/transform_types.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import glob from shutil import copyfile starting_index = 19191 number_digits = 5 extension = ".jpg" root_source = "/home/maxim/semantic_keypoints_dataset_GT_step5/2a4/dragonfly2/1/image_raw" abs_file_path_list = glob.glob(root_source+"/*"+extension) abs_file_path_list.sort() print(str(len(abs_file_path_list))+" files in directory") #print("\n".join(abs_file_path_list)) #os.rename(r'file path\OLD file name.file type',r'file path\NEW file name.file type') root_destination = root_source+"_copy" os.mkdir(root_destination) for abs_file_path in abs_file_path_list: copyfile(abs_file_path, root_destination+"/"+"0"*(number_digits-len(str(starting_index)))+str(starting_index)+extension) starting_index = starting_index+1 # -
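# The manual zero-padding above can be written more compactly with `str.zfill` or an f-string width specifier; a small equivalence check (the sample indices are illustrative):

```python
number_digits = 5

for index in (7, 42, 19191):
    manual = "0" * (number_digits - len(str(index))) + str(index)
    idiomatic = str(index).zfill(number_digits)
    # all three spellings produce the same zero-padded name stem
    assert manual == idiomatic == f"{index:0{number_digits}d}"
    print(idiomatic)
# 00007
# 00042
# 19191
```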
rename_files.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <div align="center"> # <h1>Homework 4</h1> # <p> # <div align="center"> # <h2><NAME> <EMAIL></h2> # </div> # </p> # </div> # ## 3.6 # a) According to the graph below, the optimal value for the objective function is $-\frac{11}{3}$, which is attained at $(\frac{1}{3},\frac{4}{3})$. import matplotlib.pyplot as plt import numpy as np x = np.linspace(0,1.2,100) fig, ax = plt.subplots() ax.plot(x, 1 + x, 'b-', markersize=2, label="x2=1+x1") ax.plot(x, 2-(2/3)*x, 'b-', markersize=2, label="x2=2-2/3*x1") ax.plot(x, np.zeros_like(x)+4/3, 'b-', markersize=2, label="x2=4/3") ax.plot(x, -(3/2)*x - (1/2) * (-11/3), "g-.", label="optimal") plt.plot(3/5,8/5,'ro',markersize=4, label='(3/5,8/5)') plt.plot(1/3,4/3,'ko',markersize=4, label='(1/3,4/3)') plt.plot(1,4/3,'co',markersize=4, label='(1,4/3)') ax.grid(True, which='both') ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') plt.legend() plt.xlabel('x1') plt.ylabel('x2') plt.show() # b) # # Rewrite the problem in the standard form, # # $$ # \begin{align} # & \max \quad -3x_1 - 2x_2\\ # & s.t \quad -x_1 + x_2 + x_3 = 1\\ # & \qquad 2x_1 + 3x_2 + x_4 = 6\\ # & \qquad x_2 -x_5 = \frac{4}{3} \\ # & \qquad x_1,...,x_5\geq 0 # \end{align} # $$ # # Then we partition the matrix $A$ as # # $$ # A= # \begin{bmatrix} # -1 & 1 & 1 & || & 0 & 0 \\ # 2 & 3 & 0 & || & 1 & 0\\ # 0 & 1 & 0 & || & 0 & -1 # \end{bmatrix}, # $$ # # So $B= # \begin{bmatrix} # -1 & 1 & 1 \\ # 2 & 3 & 0 \\ # 0 & 1 & 0 \\ # \end{bmatrix}$, # $N= # \begin{bmatrix} # 0 & 0 \\ # 1 & 0\\ # 0 & -1 # \end{bmatrix}$, $C_B^T=(-3,-2,0)$, $C_N^T=(0,0)$, $b^T=(1,6,4/3)$. # # So the **basic feasible solution** is $x_0^T=[(B^{-1}b)^T,0^T]=(1, 4/3, 2/3, 0, 0)$. # And the tableau form is given below. 
# # | | $z$ | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $x_5$ | RHS | # | --- | --- | --- | --- | --- | --- | --- | --- | # | $z$ | -1 | 0 | 0 | 0 |1.5 | 2.5 | 17/3| # | $x_1$| 0| 1 | 0 | 0 | 0.5 | 1.5 | 1| # | $x_2$| 0| 0 | 1 | 0 | 0 | -1 | 4/3 | # | $x_3$| 0| 0 | 0 | 1 | 0.5 | 2.5 | 2/3| # B = np.array([[-1,1,1],[2,3,0],[0,1,0]]) N = np.array([[0,0],[1,0],[0,-1]]) CB = np.array([[-3],[-2],[0]]) CN = np.array([[0],[0]]) b = np.array([[1],[6],[4/3]]) print("Current Cost:{}".format(np.dot(CB.T,np.linalg.inv(B).dot(b)))) print("Current BFS:{}".format(np.linalg.inv(B).dot(b).T)) print("Cost Reduction:{}".format(CN.T-np.dot(CB.T,np.linalg.inv(B)).dot(N))) print("B.inv * N:\n{}".format(np.linalg.inv(B).dot(N))) # c) Choose $x_4$ as the pivot column. # $\delta=\min\{1/0.5, (2/3)/(0.5)\}=4/3$, then the tableau becomes to # # | | $z$ | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $x_5$ | RHS | # | --- | --- | --- | --- | --- | --- | --- | --- | # | $z$ | -1 | 0 | 0 | -3 |0 | -5 | 11/3| # | $x_1$| 0| 1 | 0 | -1 | 0 | -1 | 1/3| # | $x_2$| 0| 0 | 1 | 0 | 0 | -1 | 4/3 | # | $x_4$| 0| 0 | 0 | 2 | 1 | 5 | 4/3| # # So the current solution is $(1/3, 4/3, 0, 4/3, 0)$ and the base matrix is # $B= # \begin{bmatrix} # -1 & 1 & 0 \\ # 2 & 3 & 1 \\ # 0 & 1 & 0 # \end{bmatrix}$. # # Since all the cost is negative, the solution is optimal for the maximization problem. # d) # In order to draw the requirements space, we need to reformulate this problem. # # $$ # \begin{align} # & \max \quad -3x_1 - 2x_2' - \frac{8}{3}\\ # & s.t \quad -x_1 + x_2' + x_3 = -\frac{1}{3}\\ # & \qquad 2x_1 + 3x_2' + x_4 = 2\\ # & \qquad x_1,x_2',x_3,x_4\geq 0 # \end{align} # $$ # # i) According to the discussion with the TA, the possible bases are defined as 2 by 2 matrix. 
# # They are # # $$ # \begin{align} # & B_1 = # \begin{bmatrix} # -1 & 1 \\ # 2 & 3 # \end{bmatrix}, # B_2 = # \begin{bmatrix} # -1 & 1 \\ # 2 & 0 # \end{bmatrix}, # B_3 = # \begin{bmatrix} # -1 & 0 \\ # 2 & 1 # \end{bmatrix}\\ # & B_4 = # \begin{bmatrix} # 1 & 1\\ # 3 & 0 # \end{bmatrix}, # B_5 = # \begin{bmatrix} # 1 & 0\\ # 3 & 1 # \end{bmatrix}, # B_6 = # \begin{bmatrix} # 1 & 0\\ # 0 & 1 # \end{bmatrix} # \end{align} # $$ # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline f, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3, figsize=(9,6)) ax1.plot([-1,0], [2,0]) ax1.plot([1,0], [3,0]) ax1.set_title('B1') ax2.plot([-1,0], [2,0]) ax2.plot([1,0], [0,0]) ax2.set_title('B2') ax3.plot([-1,0], [2,0]) ax3.plot([0,0], [1,0]) ax3.set_title('B3') ax4.plot([1,0], [3,0]) ax4.plot([1,0], [0,0]) ax4.set_title('B4') ax5.plot([1,0], [3,0]) ax5.plot([0,0], [1,0]) ax5.set_title('B5') ax6.plot([1,0], [0,0]) ax6.plot([0,0], [1,0]) ax6.set_title('B6') plt.show() # - # ii) # # Possible feasible bases are given as # # $$ # B_1 = # \begin{bmatrix} # -1 & 1 \\ # 2 & 3 # \end{bmatrix}, # B_2 = # \begin{bmatrix} # -1 & 1 \\ # 2 & 0 # \end{bmatrix}, # B_3= # \begin{bmatrix} # -1 & 0 \\ # 2 & 1 # \end{bmatrix}. # $$ import numpy as np A = np.array([[-1,1,1,0],[2,3,0,1]]) b = np.array([[-1/3], [2]]) for i in range(3): for j in range(i+1,4): a = A[:,[i,j]] #print(i,j, np.linalg.inv(a).dot(b)) # e) # # * The basic matrix $B_1$ corresponds to $(3/5, 8/5)$. # * The basic matrix $B_2$ corresponds to $(1, 4/3)$. # * The basic matrix $B_3$ correspond to $(1/3, 4/3)$. # ## 3.9 # By adding a slack variable $x_7$, we can change the maximization problem to the standard form. $p_1,...,p_7$ are 7 basic feasible solutions. 
# # # | | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $x_5$ | $x_6$ | $x_7$ | # | --- | --- | --- | --- | --- | --- | --- | --- | # | $p_1$ | 20 | 0 | 0 | 0 | 0 | 0 | 0 | # | $p_2$ | 0 | 10 | 0 | 0 | 0 | 0 | 0 | # | $p_3$ | 0 | 0 | 20 | 0 | 0 | 0 | 0 | # | $p_4$ | 0 | 0 | 0 | 20 | 0 | 0 | 0 | # | $p_5$ | 0 | 0 | 0 | 0 | 20 | 0 | 0 | # | $p_6$ | 0 | 0 | 0 | 0 | 0 | 15 | 0 | # | $p_7$ | 0 | 0 | 0 | 0 | 0 | 0 | 60 | # # Clearly, the optimal value is 100, which attains at the point $p_5$. # # **General Methodology:** # # Find $j^*\in \arg\max \{\frac{c_j}{a_j}\}$. Then the $j^*$(s) indicate(s) the optimal solution, where we set the $j^*$-th coordinate to $b/a_j$ while keep the rest coordinates 0. # # If $c_j <0 \text{ or } c_j > 0, a_j\leq 0$, then just skip calculating $\frac{c_j}{a_j}$, since those variables won't become basic variables. # ## 3.13 # a) The objective value will increase by $3\times 7 = 21$. # # b) ~~**No**. If have some $j$ such that $c_j-z_j > 0$, then we can let $x_j$ enter the basic variables and kick out one from the basic variables, while still keep increasing the objective function value.~~ # # **Yes.** If for some $j$, $x_j$ is not a basic variable and the current BFS is degenerated, then $c_j-z_j > 0$ can be possible. # # c) ~~**Yes**. Suppose $x_0$ is a BFS, then $c^T(x_0+ \lambda d) \geq c^Tx_0,\forall \lambda \geq 0$. Notice that $x_0+ \lambda d$ is still in the feasible set, so the problem is unbounded.~~ # # **No**.If $X$ is an empty, then it is not necessarily true. # # d) **No**. If all the BFS of $X$ are non-degenerated, then the statement is true. # # e) **No**. When the optimal solution is degenerated, we stay at the same BFS after one iteration. # # f) **False**. A point can lead to multiple different bases. So $B_1$ and $B_2$ are not necessarily adjacent. # # g) **True**. Consider the case that cost $c$ is $0$, then every feasible solution is optimal. 
# # h) The tight upper bound on the feasible bases is $m+1$ while the tight upper bound on the BFS is $m+1$. # # i) **False**. Consider a convex cone defined by $n+1$ hyper-planes, then it can have $n+1$ extreme directions. For example, the figure below has more than 3 extreme directions. (Source:Google Image.) # # ![(a)](./figs/cone.png) # # j) ~~**True**. Since the degree of the degeneracy is $1$, we know that there are $r=n-m+1$ planes from $x\geq 0$ are binding and there are $q=m-(r-(n-m))=m-1$ basic variables are positive. Each possible choice of a basis $B$ that includes the columns of these $q$ positive variables represents $\bar x$(#). So there are $C_{n-(m-1)}^{m-(m-1)}=n-m+1$ bases associate with this $\bar x$.~~ # # ~~Note, to see the statement (#), notice that $B \bar x=b$, so $b$ is the linear combination of columns in $B$ that corresponds to positive basic variables.~~ # # This is false. Above statement is flawed in *Each possible choice of a basis $B$ that includes the columns of these $q$ positive variables represents $\bar x$*. This is incorrect, as columns of these $q$ positive variables can identical to one the rest, which makes $B$ not full rank. # # Consider # # $$ # \begin{cases} # & x_1 -x_3 +x_4 =1\\ # & x_2 -x_3 = 0\\ # & x_1,x_2,x_3,x_4 \geq 0 # \end{cases} # $$ # # $\bar x = (1, 0, 0, 0)$ only corresponds to two bases. 
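# The counterexample in j) can be verified numerically: enumerate all 2×2 column subsets of the constraint matrix and keep those invertible bases whose basic solution equals $\bar x=(1,0,0,0)$. A small sketch (not part of the original homework):

```python
import itertools

import numpy as np

# System from j): x1 - x3 + x4 = 1, x2 - x3 = 0, x >= 0.
# The degenerate BFS x_bar = (1, 0, 0, 0) corresponds to only two bases,
# not the n - m + 1 = 3 predicted by the flawed argument, because the
# columns of x1 and x4 are identical.
A = np.array([[1.0, 0.0, -1.0, 1.0],
              [0.0, 1.0, -1.0, 0.0]])
b = np.array([1.0, 0.0])
x_bar = np.array([1.0, 0.0, 0.0, 0.0])

bases = []
for cols in itertools.combinations(range(4), 2):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue  # not a basis: columns are linearly dependent
    x_B = np.linalg.solve(B, b)
    x = np.zeros(4)
    x[list(cols)] = x_B
    if np.all(x_B >= -1e-12) and np.allclose(x, x_bar):
        bases.append(cols)

print(bases)  # [(0, 1), (0, 2)]: the pair (0, 3) gives a singular B
```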
# # ## 3.42 # We can reformulate the problem as following # # $$ # \begin{align} # & \min \quad z\\ # & s.t \quad -2x_1 - x_2 + x_3 \leq z\\ # & \qquad 2x_1 + x_2 + 4x_3 \leq 6\\ # & \qquad x_1 + 4x_2 - x_3 \leq 4\\ # & \qquad x_1,x_2,x_3\geq 0 # \end{align} # $$ # # * Step 1: Eliminate $x_1$ # # $$ # \begin{cases} # & x_1 \geq -\frac{1}{2}z -\frac{1}{2}x_2 + \frac{1}{2}x_3 \\ # & x_1 \leq 3 -\frac{1}{2}x_2 -2x_3\\ # & x_1 \leq 4 - 4x_2 + x_3\\ # & x_1,x_2,x_3 \geq 0 # \end{cases} # \Rightarrow # \begin{cases} # & 3 -\frac{1}{2}x_2 -2x_3 \geq x_1 \geq -\frac{1}{2}z -\frac{1}{2}x_2 + \frac{1}{2}x_3 \\ # & 4 - 4x_2 + x_3\geq x_1 \geq -\frac{1}{2}z -\frac{1}{2}x_2 + \frac{1}{2}x_3 \\ # & 3 -\frac{1}{2}x_2 -2x_3 \geq x_1 \geq 0 \\ # & 4 - 4x_2 + x_3\geq x_1 \geq 0\\ # & x_2,x_3 \geq 0 # \end{cases} # \Rightarrow # \begin{cases} # & x_2 \leq 6 - 4x_3 \\ # & x_2 \leq 1 + \frac{1}{4}x_3\\ # & x_2 \leq \frac{8}{7} + \frac{1}{7}z + \frac{1}{7}x_3\\ # & x_3 \leq \frac{1}{5}z + \frac{6}{5}\\ # & x_2,x_3 \geq 0 # \end{cases} # $$ # # * Step 2: Eliminate $x_2$ # # $$ # \begin{cases} # & x_2 \leq 6 - 4x_3 \\ # & x_2 \leq 1 + \frac{1}{4}x_3\\ # & x_2 \leq \frac{8}{7} + \frac{1}{7}z + \frac{1}{7}x_3\\ # & x_3 \leq \frac{1}{5}z + \frac{6}{5}\\ # & x_2,x_3 \geq 0 # \end{cases} # \Rightarrow # \begin{cases} # & 0 \leq x_2 \leq 6 - 4x_3 \\ # & 0 \leq x_2 \leq 1 + \frac{1}{4}x_3\\ # & 0 \leq x_2 \leq \frac{8}{7} + \frac{1}{7}z + \frac{1}{7}x_3\\ # & x_3 \leq \frac{1}{5}z + \frac{6}{5}\\ # & x_3 \geq 0 # \end{cases} # \Rightarrow # \begin{cases} # & x_3 \leq \frac{3}{2} \\ # & x_3 \geq -8 - z\\ # & x_3 \leq \frac{1}{5}z + \frac{6}{5}\\ # & x_3 \geq 0 # \end{cases} # $$ # # * Step 3: Eliminate $x_3$ # # $$ # \begin{cases} # & x_3 \leq \frac{3}{2} \\ # & x_3 \geq -8 - z\\ # & x_3 \leq \frac{1}{5}z + \frac{6}{5}\\ # & x_3 \geq 0 # \end{cases} # \Rightarrow # \begin{cases} # & \frac{1}{5}z + \frac{6}{5} \geq x_3 \geq -8 - z\\ # & \frac{1}{5}z + \frac{6}{5} \geq x_3 \geq 0 \\ # & \frac{3}{2} 
\geq x_3 \geq -8 - z\\
# & \frac{3}{2}\geq x_3 \geq 0 \\
# \end{cases}
# \Rightarrow
# z\geq -6
# $$
#
# So the optimal value for the original LP is **z=-6**.

# ## Exercise 3.18
#
# a) **False**. The cost of every consecutive BFS visited by the simplex method is strictly less than the cost of the previous one; otherwise the algorithm would have terminated. To see this, note that in every iteration the algorithm moves along a direction $d_j$ by a positive distance given by the minimal ratio test, where $j$ satisfies $(c_j-z_j)< 0, j \in N$.
#
# b) **True**. Assume $x_r$ is leaving the basic variables and $x_k$ enters. Then the previous $c_r-z_r$ changes from zero to $-\frac{c_r-z_r}{y_{k,r}}$, which is positive. So $x_r$ can no longer enter the basic variables.
#
# c) **False**. Look at the example on p.114 of Chapter 3 of BT's book. The problem is,
#
# $$
# \begin{align}
# & \min x_5 + x_6 + x_7 + x_8 \\
# & s.t \quad x_1 + 2x_2+ 3x_3 +x_5 = 3\\
# & \qquad -x_1 + 2x_2 + 6x_3 + x_6= 2 \\
# & \qquad 4x_2 + 9x_3+ x_7 = 5\\
# & \qquad 3x_3 + x_4 + x_8 = 1\\
# & \qquad x_1,...,x_8 \geq 0
# \end{align}
# $$
#
# Let's begin with $(0, 0, 0, 0, 3, 2, 5, 1)$.
#
# * In the first iteration, $x_4$ enters and the BFS is $(0, 0, 0, 1, 3, 2, 5, 0 )$.
# * In the very next iteration, $x_4$ leaves and the BFS is $(0, 0, 1/3, 0, 2, 0, 2, 0)$.
#
# d) **False**. When the optimal solution is not unique, the statement is false. Consider the following problem
#
# $$
# \begin{align}
# & \min x_1\\
# & s.t \quad x_2 - x_3 = 1\\
# & \qquad x_2 + x_4 = 2\\
# & \qquad x_1,x_2,x_3,x_4\geq 0
# \end{align}
# $$
#
# Consider
# $B_1 =
# \begin{bmatrix}
# 1 & -1 \\
# 1 & 0
# \end{bmatrix}$, the BFS is $(0, 2, 1, 0)$ and non-degenerate. Also, consider
# $B_2 =
# \begin{bmatrix}
# 1 & 0 \\
# 1 & 1
# \end{bmatrix}$, the BFS is $(0, 1, 0, 1)$ and non-degenerate.
#
# e) ~~**False**. Consider the cost $c=0$, then any feasible solution is optimal.~~
#
# In general this is true. 
But the simplex method can only explore BFSs, which have at most $m$ positive components.

# ## Exercise 3.19

# Since the problem only asks for some parameter values, the answers below are by no means comprehensive; they are just some feasible choices.
#
# a) In order to guarantee optimality, $\beta=0$. To keep the case simple, we can also assume $\delta>0$ and $\delta+\frac{2}{3}\gamma=0$ to guarantee the optimality of the BFS after one iteration of the simplex method. In sum, a possible solution would be
#
# $$
# (\alpha, \beta, \gamma, \delta, \eta) = (1, 0, -3, 2, 1).
# $$
#
# b) For the LP to be unbounded, we need to set $\delta,\alpha,\gamma$ to be negative. A possible combination would be
#
# $$
# (\alpha, \beta, \gamma, \delta, \eta) = (-1, 1, -1, -1, 1).
# $$
#
# c) As long as we can move the current BFS by a positive distance, we are done. This requires $\beta, \delta>0$. A possible combination would be
#
# $$
# (\alpha, \beta, \gamma, \delta, \eta) = (1, 3, -3, 3, 2).
# $$
#
#
# ## Exercise 3.20
#
# a) $\beta \geq 0$, $\alpha,\gamma,\delta,\eta,\zeta\in R$.
#
# b) $\beta < 0$, $\alpha\geq 0$, $\gamma,\delta,\eta,\zeta\in R$. In this case, no direction will increase $x_2$ to a non-negative number.
#
# c) We need $\beta \geq 0$ to guarantee feasibility. Then as long as at least one of $\gamma,\delta,\zeta$ is negative, the current basis is not optimal. Also, set $\eta \in R$.
#
# d) We need $\beta \geq 0$ to guarantee feasibility. To make the problem unbounded, we need some $y_j\leq 0$ after only one iteration.
#
# After trying different pivots, I found that the following ranges work: $\alpha <0,\gamma<0$, $\beta, \delta,\zeta \geq 0$, $\eta > \frac{4}{3}$, and $2\frac{\gamma}{\eta} + \delta <0$.
#
# The last constraint guarantees a negative reduced cost for $x_4$ after one iteration of the simplex method.
#
# e) To make the statement hold, $\gamma <0$, $\eta > 0$ and $\frac{2}{\eta}<\frac{3}{2}$. 
In sum, $\beta\geq 0, \gamma < 0, \eta > \frac{4}{3}$, $\alpha, \delta,\zeta \in R$.
#
# f) This happens when there is degeneracy, that is to say, $\beta=0$. In sum, $\beta=0, \zeta < 0$, $\alpha, \gamma, \eta, \delta \in R$.

# ## Exercise 3.27
#
# (a) Notice that, if $x\geq 0$ and $x\neq 0$, there exists an $\alpha>0$ such that $\alpha x=y + z$. To see this, define $\alpha=\frac{1}{\min_i \{x_i|x_i>0\}}$ and $y^T=(I(x_1>0), ..., I(x_n>0))$, where $I(.)$ is an indicator function.
# Then $\alpha x=y + z$, where $z\geq 0$. Since $Ax=0$, we have $A(y+z)=0$. To conclude, the problem can be formulated as
#
# $$
# \begin{align}
# & \max\qquad 1_n^T y\\
# & s.t.\quad A(y+z)=0\\
# & \qquad y_i \leq 1, i=1,2,...,n\\
# & \qquad y, z \geq 0
# \end{align}
# $$
#
# (b)
#
# $$
# \begin{align}
# & \max\qquad 1_n^T y\\
# & s.t.\quad A(y+z)=\alpha b\\
# & \qquad y_i \leq 1, i=1,2,...,n\\
# & \qquad y, z \geq 0 \\
# & \qquad\alpha \geq 1
# \end{align}
# $$
#
# **Remark**: The reason I set $\alpha\geq 1$ instead of $\alpha\geq 0$ is only to scale up the coordinates of $x$ that are smaller than 1 but greater than 0.

# ## Exercise 3.31

# a) As shown below, $(b_1,b_2)$ must be enclosed by the four boundaries for $P(b_1,b_2)$ to be non-empty.

import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x, y = np.array([2,3,1,1]), np.array([1,2,1,3])
for i in range(4):
    ax.plot(x[i],y[i], 'bo', markersize=5)
    ax.annotate("A{}".format(i+1), (x[i]+0.01,y[i]+0.01))
ax.plot(x[[3, 2]],y[[3, 2]], 'r-', markersize=3)
ax.plot(x[[2, 0]],y[[2, 0]], 'r-', markersize=3)
ax.plot(x[[0, 1]],y[[0, 1]], 'r-', markersize=3)
ax.plot(x[[1, 3]],y[[1, 3]], 'r-', markersize=3)
plt.show()

# b) To have degeneracy, we must set at least one basic variable to zero. So $(b_1,b_2)$ has to lie on the boundary of the set, namely on the four segments whose endpoints are ($A_i,A_j$) respectively. 
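# The condition in a) can also be checked numerically: $P(b_1,b_2)$ is non-empty exactly when $(b_1,b_2)$ is a convex combination of the four points. A sketch using Carathéodory's theorem to reduce the test to triangles (the helper name `in_P` is illustrative):

```python
import itertools

import numpy as np

# P(b1, b2) is non-empty iff (b1, b2) lies in the quadrilateral spanned by
# A1..A4; by Caratheodory's theorem it then lies in a triangle spanned by
# three of the points, found by solving for barycentric coordinates.
points = np.array([[2, 1], [3, 2], [1, 1], [1, 3]], dtype=float)

def in_P(b):
    for cols in itertools.combinations(range(4), 3):
        # barycentric coordinates lam with lam @ points[cols] == b, sum(lam) == 1
        M = np.vstack([points[list(cols)].T, np.ones(3)])
        try:
            lam = np.linalg.solve(M, np.array([b[0], b[1], 1.0]))
        except np.linalg.LinAlgError:
            continue  # three collinear points: skip this triangle
        if np.all(lam >= -1e-12):
            return True
    return False

print(in_P((9/5, 7/5)))  # True: the point used in e) and f) below
print(in_P((0.5, 0.5)))  # False: outside the quadrilateral
```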
#
# c)
# Denote $\hat A_1=(2,1,1)^T,\hat A_2=(3,2,1)^T,\hat A_3=(1,1,1)^T,\hat A_4=(1,3,1)^T$ and $B_1=(\hat A_1, \hat A_2,\hat A_3)$, $B_2=(\hat A_1, \hat A_2,\hat A_4)$, $B_3=(\hat A_1, \hat A_3,\hat A_4)$, $B_4=(\hat A_2, \hat A_3,\hat A_4)$.
#
# * If $(b_1,b_2)$ lies on ($A_3,A_1$), then the bases are $B_1, B_3$.
#
# * If $(b_1,b_2)$ lies on ($A_1,A_2$), then the bases are $B_1, B_2$.
#
# * If $(b_1,b_2)$ lies on ($A_2,A_4$), then the bases are $B_2, B_4$.
#
# * If $(b_1,b_2)$ lies on ($A_4,A_3$), then the bases are $B_3, B_4$.
#
# d)
#
# * See below for $S_1 = \{(b_1,b_2) | B_1 \text{ is optimal }\}$
#
# * See below for $S_2 = \{(b_1,b_2) | B_2 \text{ is optimal }\}$
#
# * See below for $S_3 = \{(b_1,b_2) | B_3 \text{ is optimal }\}$
#
# * See below for $S_4 = \{(b_1,b_2) | B_4 \text{ is optimal }\} = \emptyset$

# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline

f, ((ax0, ax1), (ax2, ax3)) = plt.subplots(2, 2, figsize=(6,6))
ax0.plot([2,3],[1,2], 'r-', markersize=3)
ax0.plot([2,9/5],[1, 7/5], 'r-', markersize=3)
ax0.plot([9/5, 3],[7/5, 2], 'r-', markersize=3)
ax0.set_title('S_1')
ax1.plot([1,3],[3,2], 'r-', markersize=3)
ax1.plot([3,9/5],[2, 7/5], 'r-', markersize=3)
ax1.plot([9/5, 1],[7/5, 3], 'r-', markersize=3)
ax1.set_title('S_2')
ax2.plot(x[[3, 2]],y[[3, 2]], 'r-', markersize=3)
ax2.plot([1,9/5],[1, 7/5], 'r-', markersize=3)
ax2.plot([9/5, 1],[7/5, 3], 'r-', markersize=3)
ax2.set_title('S_3')
ax3.set_title('S_4')
plt.show()
# -

# e) According to the second graph below, if $(b_1, b_2)=(2,1)$, then the BFS is $(1,0,0,0)$, which is optimal.

# f) Based on the first figure below, we know that $(b_1, b_2)$ can be represented by any three columns of $A$.
#
# In the first iteration, we pivot on $x_2$; we then end up with the basic variables $x_1,x_3,x_4$. From the second figure below, we know this is the optimal solution. 
import matplotlib.pyplot as plt import numpy as np from mpl_toolkits import mplot3d fig, ax = plt.subplots() x, y = np.array([2,3,1,1]), np.array([1,2,1,3]) for i in range(4): ax.plot(x[i],y[i], 'bo', markersize=5) ax.annotate("A{}".format(i+1), (x[i]+0.01,y[i]+0.01)) ax.plot(x[[3, 2]],y[[3, 2]], 'r-', markersize=3) ax.plot(x[[2, 0]],y[[2, 0]], 'r-', markersize=3) ax.plot(x[[0, 1]],y[[0, 1]], 'r-', markersize=3) ax.plot(x[[1, 3]],y[[1, 3]], 'r-', markersize=3) ax.plot(x[[1, 3]],y[[1, 2]], 'r-.', markersize=3) ax.plot(x[[0, 3]],y[[0, 3]], 'r-.', markersize=3) ax.plot(9/5, 7/5, 'ko', markersize=5) plt.show() import matplotlib.pyplot as plt import numpy as np from mpl_toolkits import mplot3d fig = plt.figure() ax = plt.axes(projection='3d') x, y, z = np.array([2,3,1,1]), np.array([1,2,1,3]), np.array([1,3,2,2]) z0 = np.array([0, 0, 0, 0]) ax.scatter3D(x, y, z, c="blue", s=30) for i in range(4): #ax.plot([x[i],x[i]],[y[i], y[i]],[z[i],z0[i]], "r-") for j in range(i+1, 4): ax.plot(x[[i,j]], y[[i,j]], z[[i,j]],"k-") ax.plot([9/5, 9/5], [7/5, 7/5], [0, 3],"r-") ax.set_xlim(0,3), ax.set_ylim(0,3), ax.set_zlim(0,3) ax.set_xlabel('X'), ax.set_ylabel('Y'), ax.set_zlabel('Z') plt.show() # + # c = [1, 3, 2, 2] # A = [[2,3,1,1], [1, 2,1,3], [1,1,1,1]] # b = [9/5, 7/5, 1] # x1_bounds = (0, None) # x2_bounds = (0, None) # x3_bounds = (0, None) # x4_bounds = (0, None) # from scipy.optimize import linprog # res = linprog(c, A_eq=A, b_eq=b, bounds=(x1_bounds, x2_bounds,x3_bounds,x4_bounds), options={"disp": True}) # - import numpy as np A = np.array([[2,3,1,1], [1, 2,1,3], [1,1,1,1]]) B = A [:,[0,2,3]] np.linalg.inv(B)
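# Completing the check in f) numerically: with basic variables $x_1, x_3, x_4$, the basic solution $B^{-1}b$ is non-negative (and degenerate), so it is a feasible BFS (a sketch, not part of the original homework):

```python
import numpy as np

# With basic variables x1, x3, x4 the basic solution is B^{-1} b; check
# that it is non-negative (hence a BFS) and degenerate (x3 = 0), and
# evaluate the objective value at that point.
A = np.array([[2, 3, 1, 1], [1, 2, 1, 3], [1, 1, 1, 1]], dtype=float)
b = np.array([9/5, 7/5, 1.0])
c = np.array([1, 3, 2, 2], dtype=float)

B = A[:, [0, 2, 3]]
x_B = np.linalg.solve(B, b)  # values of (x1, x3, x4)
x = np.zeros(4)
x[[0, 2, 3]] = x_B

print(x_B)    # (0.8, 0, 0.2): x3 = 0, so the BFS is degenerate
print(c @ x)  # objective value 6/5
```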
Python_IE411/hw4.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %load_ext autoreload
# %autoreload 2

import molsysmt as msm

# # How to convert one form into another
#
# The meaning of a molecular system 'form', in the context of MolSysMT, has been described previously in section XXX. MolSysMT provides a method to convert one form into another: `molsysmt.convert()`. This method is the keystone of the library, the hinge on which all other methods and tools in MolSysMT turn. It is also the joining piece connecting the pipes of your workflow when using different Python libraries.
#
# The method `molsysmt.convert()` requires at least two input arguments: the original pre-existing item, in any form accepted by MolSysMT (see XXX), and the name of the output form:

molecular_system = msm.convert('1TCD', 'molsysmt.MolSys')

# The id code `1TCD` from the Protein Data Bank is converted into a native `molsysmt.MolSys` Python object. At this point, you may think that this operation can also be done with the method `molsysmt.load()`. And you are right. Actually, `molsysmt.load()` is nothing but an alias of `molsysmt.convert()`. Although redundant, a loading method was included in MolSysMT for the sake of intuitive usability, but it could be removed from the library since `molsysmt.convert()` has the same functionality. 
#
# The following cells illustrate some conversions you can do with `molsysmt.convert()`:

msm.get_form('1sux.pdb')

msm.convert('1SUX', '1sux.pdb') # fetching a pdb file to save it locally

msm.convert('1SUZ', '1sux.mmtf') # fetching an mmtf file to save it locally

pdb_file = msm.demo_systems.files['1tcd.pdb']
molecular_system = msm.convert(pdb_file, 'mdtraj.Trajectory') # loading a pdb file as an mdtraj.Trajectory object

seq_aa1 = msm.convert(molecular_system, 'string:aminoacids1') # converting an mdtraj.Trajectory into a sequence form

# ## How to convert just a selection

# The conversion can be done over the entire system or over a part of it. The input argument `selection` works with most MolSysMT methods, including `molsysmt.convert()`. To learn more about how to perform selections, there is a section in this documentation entitled "XXX". For now, let's look at some simple selections to see how it operates:

pdb_file = msm.demo_systems.files['1tcd.pdb']
whole_molecular_system = msm.convert(pdb_file, to_form='openmm.Topology')

msm.info(whole_molecular_system)

aa = msm.convert(pdb_file, to_form='string:pdb')
msm.get_form(aa)

molecular_system = msm.convert(pdb_file, to_form='openmm.Topology', selection='molecule_type=="protein"')

msm.info(molecular_system)

# ## How to combine multiple forms into one

# Sometimes the molecular system comes from the combination of more than one form. For example, we can have two files with topology and coordinates to be converted into a single molecular form:

prmtop_file = msm.demo_systems.files['pentalanine.prmtop']
inpcrd_file = msm.demo_systems.files['pentalanine.inpcrd']
molecular_system = msm.convert([prmtop_file, inpcrd_file], to_form='molsysmt.MolSys')

msm.info(molecular_system)

# ## How to convert a form into multiple ones at once

# The previous section showed how to convert multiple forms into one. 
Let's now see how to produce more than one output form in a single line: h5_file = msm.demo_systems.files['pentalanine.h5'] topology, trajectory = msm.convert(h5_file, to_form=['molsysmt.Topology','molsysmt.Trajectory']) msm.info(topology) msm.info(trajectory) msm.info([topology, trajectory]) # Let's now combine both forms into one to check that they were properly converted: pdb_string = msm.convert([topology, trajectory], to_form='string:pdb', frame_indices=0) print(pdb_string) # ## Some examples with files PDB_file = msm.demo_systems.files['1tcd.pdb'] system_pdbfixer = msm.convert(PDB_file, to_form='pdbfixer.PDBFixer') system_parmed = msm.convert(PDB_file, to_form='parmed.Structure') MOL2_file = msm.demo_systems.files['caffeine.mol2'] system_openmm = msm.convert(MOL2_file, to_form='openmm.Modeller') system_mdtraj = msm.convert(MOL2_file, to_form='mdtraj.Trajectory') MMTF_file = msm.demo_systems.files['1tcd.mmtf'] system_aminoacids1_seq = msm.convert(MMTF_file, to_form='string:aminoacids1') system_molsys = msm.convert(MMTF_file) print('Form of object system_pdbfixer: ', msm.get_form(system_pdbfixer)) print('Form of object system_parmed: ', msm.get_form(system_parmed)) print('Form of object system_openmm: ', msm.get_form(system_openmm)) print('Form of object system_mdtraj: ', msm.get_form(system_mdtraj)) print('Form of object system_aminoacids1_seq: ', msm.get_form(system_aminoacids1_seq)) print('Form of object system_molsys: ', msm.get_form(system_molsys)) # A single file can be converted into more than one form in a single line: h5_file = msm.demo_systems.files['pentalanine.h5'] topology, trajectory = msm.convert(h5_file, to_form=['molsysmt.Topology','molsysmt.Trajectory']) # When the output file path is only a dot followed by the file extension, the output is a string instead of a written file.
Let's see how this works when two forms are combined into a pdb string: pdb_string = msm.convert([topology,trajectory], to_form='string:pdb', frame_indices=0) msm.get_form(pdb_string) # ## Some examples with IDs molecular_system = msm.convert('1SUX', to_form='mdtraj.Trajectory') # ## Conversions implemented in MolSysMT msm.info_convert(from_form='mdtraj.Trajectory', to_form_type='string') msm.info_convert(from_form='mdtraj.Trajectory', to_form_type='file', as_rows='to') from_list=['pytraj.Trajectory','mdanalysis.Universe'] to_list=['mdtraj.Trajectory', 'openmm.Topology'] msm.info_convert(from_form=from_list, to_form=to_list)
docs/contents/Convert.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.6 64-bit # language: python # name: python3 # --- # + from widgets.data import Data import numpy as np from functools import reduce import matplotlib.pyplot as plt import numpy as np import scipy.interpolate as si def get_indices_range(x, start_value, end_value): start_index = np.argmin(np.absolute(x - start_value)) end_index = np.argmin(np.absolute(x - end_value)) return np.r_[start_index:end_index] def get_indices_to_fit(x, ranges_to_ignore): union = reduce(np.union1d, (get_indices_range(x, *i) for i in ranges_to_ignore)) to_fit = np.in1d(np.arange(x.shape[0]), union, invert=True) return to_fit # - # # Normalization on water spectrum # + from sklearn import cluster cell = "Cryptomonas" # "Bigelowiella" "Cryptomonas" "Penium" "a" data = Data(f"./data/{cell}.mat") n_comp = 2 clf = cluster.MiniBatchKMeans(n_clusters=n_comp, random_state=2, max_iter=100) # cluster based on C-H band flattened_data = np.reshape(data.data, (-1, data.data.shape[-1]))[:,get_indices_range(data.x_axis, 2750, 3050)] clf.fit(flattened_data) result = clf.predict(flattened_data) comp_im = np.reshape(result, data.data.shape[:2]) water_component = int(np.round(np.mean(np.concatenate((comp_im[:,0], comp_im[:, -1], comp_im[-1, :], comp_im[0, :]))))) # let the water component be 0 if water_component == 1: comp_im = np.ones(comp_im.shape) - comp_im plt.imshow(comp_im.T, interpolation='nearest', zorder=1) plt.axis('off') plt.show() no_water_rows = np.argwhere(np.max(comp_im, axis=0) > 0) no_water_cols = np.argwhere(np.max(comp_im, axis=1) > 0) inner_points = comp_im[no_water_cols[0][0]:no_water_cols[-1][0] + 1, no_water_rows[0][0]:no_water_rows[-1][0] + 1] comp_im[no_water_cols[0][0]:no_water_cols[-1][0] + 1, no_water_rows[0][0]:no_water_rows[-1][0] + 1] = 1 outer_points = np.vstack(np.where(comp_im == 0)) 
print(outer_points) print() #plt.imshow(comp_im.T, interpolation='nearest', zorder=1) plt.scatter(*outer_points, color="r", zorder=2, s=1) plt.axis('off') plt.show() plt.imshow(comp_im.T, interpolation='nearest', zorder=1) plt.axis('off') plt.show()
30-04-norm.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Spatial and temporal characteristics of a movement pattern # # > <NAME> # > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) # > Federal University of ABC, Brazil # The measurement of spatial and temporal characteristics of a movement pattern is an important resource for the characterization of an observed movement. Such variables are typically the first description performed in gait analysis (Whittle, 2007) and also used in the study of other movements and biological systems. The determination of such variables is also an excellent and valuable example of the application of relatively simple concepts of kinematics. # # Gait is the pattern of movement with the limbs by animals during terrestrial locomotion and for humans, gait consists of walking or running. Gait is typically a repetitive task where the overall pattern of movement repeats after a certain period or cycle. In the context of human gait, the movement of interest, walking or running, can be defined by steps or strides performed with the limbs. A step is the movement of one foot in front of the other and a stride is two consecutive steps with alternation of the limbs, as illustrated next. # <br> # <div class='center-align'><figure><img src="./../images/gaitstepstride.png"alt="Gait step and stride"/><figcaption><center><i>Step and stride in human gait.</i></center></figcaption></figure></div> # <br /> # <div class='center-align'><figure><img src="./../images/gaitcycle.png" width=720 alt="Gait cycle"/><figcaption><center><i>Figure. The gait cycle of walking and its subphases ([http://www.gla.ac.uk/t4/~fbls/files/fab/](http://www.gla.ac.uk/t4/~fbls/files/fab/)). HS: heel strike. 
TO: toe off.</i></center></figcaption></figure></div> # ## Common measurements of spatial and temporal characteristics # # The most commonly investigated spatial and temporal characteristics in the context of human gait (walking or running) analysis are: # # - Step length: distance in the direction of progression between two consecutive similar events with the different limbs. For instance, the distance between two heel strikes, with the right and left limbs. # - Stride length: distance in the direction of progression between two consecutive similar events with the same limb. For instance, the distance between two heel strikes with the right limb. # - Step duration: time duration of the step. # - Stride duration: time duration of the stride. # - Cadence: number of steps per unit of time. # - Velocity: traveled distance divided by the spent time. # - Stance duration: time duration which one limb is in contact with the ground. # - Swing duration: time duration which one limb is not in contact with the ground. # - Single support duration: time duration which only one limb is in contact with the ground. # - Double support duration: time duration which the two limbs are in contact with the ground. # - Base of support width: distance in the frontal plane between the two feet when they were in contact with the ground (in different instants of time). # # Some of these variables can be normalized by a parameter to take into account individual characteristics or to simply make them dimensionless, for instance: # # - Dimensionless stride length: stride length divided by lower limb length. # - Dimensionless speed: the [Froude number](http://en.wikipedia.org/wiki/Froude_number), $ v/\sqrt{gL} $, where g is the gravitational acceleration and L is the lower limb length. # - Dimensionless stride frequency: stride frequency multiplied by $ \sqrt{L/g} $. # - Duty factor: period of contact of one limb with the ground divided by the stride period. 
# - Stance, swing, single support and double support durations can be expressed as a fraction of the stride duration and multiplied by 100 for percentage values. # ## Examples of use of spatial and temporal characteristics # # - The article [Bipedal animals, and their differences from humans](http://onlinelibrary.wiley.com/doi/10.1111/j.0021-8782.2004.00289.x/abstract) by Alexander describes an interesting comparison of the spatial and temporal characteristics, as well as other biomechanical and physiological variables, of gaits by humans and other animals. Alexander found many similarities across animals, particularly concerning the spatial and temporal characteristics, but at the same time he concluded that no animal walks or runs as we do. # # - With aging, a decrease in gait speed, an increase in double stance time, and an increase in step width are typically observed, among other changes. See the website [Gait Disorders in the Elderly](http://www.merckmanuals.com/professional/geriatrics/gait_disorders_in_the_elderly/gait_disorders_in_the_elderly.html) for more details. # # - A study involving 26,802 individuals 60 years and older from 17 different countries found that the simple measurement of gait speed, combined with a kind of memory test, can be used to identify seniors at high risk of developing dementia ([Verghese et al., 2014](http://www.neurology.org/content/83/8/718)). # ### Example of a clinical gait analysis # # <a href="http://demotu.org/wp-content/uploads/2016/08/walkAnimation.gif"><img src="http://demotu.org/wp-content/uploads/2016/08/walkAnimation.gif" alt="walking" width="480" height="480" /></a> from IPython.display import IFrame IFrame('http://demotu.org/wp-content/uploads/2016/08/SampleReportWalking.pdf', width='100%', height=450) # ## Problems # # 1. Propose different instruments to measure spatial and temporal characteristics of the human gait. # 2.
Design and perform a simple experiment to measure as many spatial and temporal characteristics as possible during walking at different speeds (slow, normal, and fast) using only a chronometer and a known distance of about 10 m. Compare your results with the data from [Spatial and Temporal Descriptors](https://clinicalgate.com/kinesiology-of-walking/#s0020). # 3. Use a video camera (or propose and use another method) to measure the characteristics you couldn't measure in the previous item. # ## References # # - <NAME> (2004) [Bipedal animals, and their differences from humans](http://onlinelibrary.wiley.com/doi/10.1111/j.0021-8782.2004.00289.x/abstract). Journal of Anatomy, 204, 5, 321-330. # - [Gait Disorders in the Elderly](http://www.merckmanuals.com/professional/geriatrics/gait_disorders_in_the_elderly/gait_disorders_in_the_elderly.html). # - <NAME> al. (2014) [Motoric cognitive risk syndrome](http://www.neurology.org/content/83/8/718). Neurology, doi: 10.1212/WNL.0000000000000717. # - <NAME> (2007) [Gait Analysis: An Introduction](http://books.google.com.br/books?id=HtNqAAAAMAAJ). Butterworth-Heinemann.
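The dimensionless measures defined earlier (Froude number, dimensionless stride length, dimensionless stride frequency) can be computed with a few lines of code. A minimal sketch, using made-up values for speed, stride length, and lower-limb length (the numbers below are illustrative, not measured data):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(v, L, g=G):
    """Dimensionless speed: v / sqrt(g * L)."""
    return v / math.sqrt(g * L)

def dimensionless_stride_length(stride_length, L):
    """Stride length divided by lower-limb length."""
    return stride_length / L

def dimensionless_stride_frequency(stride_freq, L, g=G):
    """Stride frequency multiplied by sqrt(L / g)."""
    return stride_freq * math.sqrt(L / g)

# Hypothetical values for an adult walking at a comfortable speed
v = 1.4              # speed, m/s
L = 0.9              # lower-limb length, m
stride_length = 1.4  # m
stride_freq = v / stride_length  # strides per second

print(f'Froude number: {froude_number(v, L):.2f}')
print(f'Dimensionless stride length: {dimensionless_stride_length(stride_length, L):.2f}')
print(f'Dimensionless stride frequency: {dimensionless_stride_frequency(stride_freq, L):.2f}')
```

As a point of reference, modeling walking as an inverted pendulum gives a theoretical upper limit of Froude number equal to 1 (the centripetal acceleration of the body over the stance foot cannot exceed gravity), and humans typically switch from walking to running at Froude numbers around 0.5.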
notebooks/SpatialTemporalCharacteristcs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Bayesian Linear Regression # > A programming introduction to Bayesian Linear Regression. # # - toc: true # - badges: true # - comments: true # - image: images/blr-map.png # - author: <NAME> # - categories: [ML] import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline x = np.linspace(-1, 1, 50).reshape(-1, 1) y = 5*x + 4 noise = (np.abs(x.flatten())*np.random.randn(len(x))).reshape(-1,1) y = y + noise plt.scatter(x, y) plt.plot(x, 5*x + 4, 'k') from scipy.stats import multivariate_normal from matplotlib import cm cov = np.array([[ 1 , 0], [0, 1]]) var = multivariate_normal(mean=[0,0], cov=cov) x_grid, y_grid = np.mgrid[-1:1:.01, -1:1:.01] pos = np.dstack((x_grid, y_grid)) z = var.pdf(pos) plt.contourf(x_grid, y_grid, z) plt.gca().set_aspect('equal') plt.xlabel(r"$\theta_0$") plt.ylabel(r"$\theta_1$") plt.title(r"Prior distribution of $\theta = f(\mu, \Sigma)$") plt.colorbar() # $$ # \prod_{i=1}^{n} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(y_{i}-\hat{y}_{i})^{2}}{2 \sigma^{2}}} # $$ # ### Sample from prior n_samples = 20 for n in range(n_samples): theta_0_s, theta_1_s = var.rvs() plt.plot(x, theta_1_s*x + theta_0_s, color='k',alpha=0.2) plt.scatter(x, y) # ### Likelihood of theta def likelihood(theta_0, theta_1, x, y, sigma): s = 0 x_plus_1 = np.hstack((np.ones_like(x), x)) for i in range(len(x)): y_i_hat = x_plus_1[i, :]@np.array([theta_0, theta_1]) s += (y[i,:]-y_i_hat)**2 return np.exp(-s/(2*sigma*sigma))/np.sqrt(2*np.pi*sigma*sigma) likelihood(-1, 1, x, y, 4) # + x_grid_2, y_grid_2 = np.mgrid[0:8:.1, 0:8:.1] li = np.zeros_like(x_grid_2) for i in range(x_grid_2.shape[0]): for j in range(x_grid_2.shape[1]): li[i, j] = likelihood(x_grid_2[i, j], y_grid_2[i, j], x, y, 4) # - plt.contourf(x_grid_2, y_grid_2, 
li) plt.gca().set_aspect('equal') plt.xlabel(r"$\theta_0$") plt.ylabel(r"$\theta_1$") plt.colorbar() plt.scatter(4, 5, s=200, marker='*', color='r') plt.title(r"Likelihood as a function of ($\theta_0, \theta_1$)") # ### Likelihood of $\sigma^2$ # + x_plus_1 = np.hstack((np.ones_like(x), x)) theta_mle = np.linalg.inv(x_plus_1.T@x_plus_1)@(x_plus_1.T@y) sigma_2_mle = np.linalg.norm(y - x_plus_1@theta_mle)**2 sigma_mle = np.sqrt(sigma_2_mle) sigma_mle # - # ### Posterior # $$ # \begin{aligned} # p(\boldsymbol{\theta} | \mathcal{X}, \mathcal{Y}) &=\mathcal{N}\left(\boldsymbol{\theta} | \boldsymbol{m}_{N}, \boldsymbol{S}_{N}\right) \\ # \boldsymbol{S}_{N} &=\left(\boldsymbol{S}_{0}^{-1}+\sigma^{-2} \boldsymbol{\Phi}^{\top} \boldsymbol{\Phi}\right)^{-1} \\ # \boldsymbol{m}_{N} &=\boldsymbol{S}_{N}\left(\boldsymbol{S}_{0}^{-1} \boldsymbol{m}_{0}+\sigma^{-2} \boldsymbol{\Phi}^{\top} \boldsymbol{y}\right) # \end{aligned} # $$ # + S0 = np.array([[ 1 , 0], [0, 1]]) M0 = np.array([0, 0]) SN = np.linalg.inv(np.linalg.inv(S0) + (sigma_mle**-2)*x_plus_1.T@x_plus_1) MN = SN@(np.linalg.inv(S0)@M0 + (sigma_mle**-2)*(x_plus_1.T@y).squeeze()) # - MN, SN # + from scipy.stats import multivariate_normal from matplotlib import cm cov = np.array([[ 1 , 0], [0, 1]]) var_pos = multivariate_normal(mean=MN, cov=SN) x_grid, y_grid = np.mgrid[0:8:.1, 0:8:.1] pos = np.dstack((x_grid, y_grid)) z = var_pos.pdf(pos) plt.contourf(x_grid, y_grid, z) plt.gca().set_aspect('equal') plt.xlabel(r"$\theta_0$") plt.ylabel(r"$\theta_1$") plt.title(r"Posterior distribution of $\theta = f(\mu, \Sigma)$") plt.scatter(4, 5, s=200, marker='*', color='r', label='MLE') plt.scatter(MN[0], MN[1], s=100, marker='^', color='black', label='MAP') plt.colorbar() plt.legend() plt.savefig("../images/blr-map.png") # - # Sample from posterior # + n_samples = 20 for n in range(n_samples): theta_0_s, theta_1_s = var_pos.rvs() plt.plot(x, theta_1_s*x + theta_0_s, color='k',alpha=0.2) plt.scatter(x, y) # - # ### Posterior 
predictions # $$ # \begin{aligned} # p\left(y_{*} | \mathcal{X}, \mathcal{Y}, \boldsymbol{x}_{*}\right) &=\int p\left(y_{*} | \boldsymbol{x}_{*}, \boldsymbol{\theta}\right) p(\boldsymbol{\theta} | \mathcal{X}, \mathcal{Y}) \mathrm{d} \boldsymbol{\theta} \\ # &=\int \mathcal{N}\left(y_{*} | \boldsymbol{\phi}^{\top}\left(\boldsymbol{x}_{*}\right) \boldsymbol{\theta}, \sigma^{2}\right) \mathcal{N}\left(\boldsymbol{\theta} | \boldsymbol{m}_{N}, \boldsymbol{S}_{N}\right) \mathrm{d} \boldsymbol{\theta} \\ # &=\mathcal{N}\left(y_{*} | \boldsymbol{\phi}^{\top}\left(\boldsymbol{x}_{*}\right) \boldsymbol{m}_{N}, \boldsymbol{\phi}^{\top}\left(\boldsymbol{x}_{*}\right) \boldsymbol{S}_{N} \boldsymbol{\phi}\left(\boldsymbol{x}_{*}\right)+\sigma^{2}\right) # \end{aligned} # $$ # For a point $x*$ # Predictive mean = $X^Tm_N$ # # Predictive variance = $X^TS_NX + \sigma^2$ x_plus_1.T.shape, SN.shape, x_plus_1.shape pred_var = x_plus_1@SN@x_plus_1.T pred_var.shape ## Marginal individual_var = pred_var.diagonal() # + y_hat_map = x_plus_1@MN plt.plot(x, y_hat_map, color='black') plt.fill_between(x.flatten(), y_hat_map-individual_var, y_hat_map+individual_var, alpha=0.2, color='black') plt.scatter(x, y)
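For a single test point $x_*$, the posterior predictive above reduces to a mean $\phi(x_*)^\top m_N$ and a variance $\phi(x_*)^\top S_N \phi(x_*) + \sigma^2$. A minimal numerical sketch with stand-in values for $m_N$, $S_N$, and $\sigma$ (illustrative, not the values computed in this notebook):

```python
import numpy as np

# Stand-in posterior over theta = (intercept, slope); illustrative values only
MN = np.array([4.0, 5.0])           # posterior mean m_N
SN = np.array([[0.02, 0.00],
               [0.00, 0.05]])       # posterior covariance S_N
sigma = 0.5                         # observation noise standard deviation

def predictive(x_star, MN, SN, sigma):
    """Posterior predictive N(phi^T m_N, phi^T S_N phi + sigma^2) with phi = [1, x]."""
    phi = np.array([1.0, x_star])
    mean = phi @ MN
    var = phi @ SN @ phi + sigma ** 2
    return mean, var

mean, var = predictive(0.5, MN, SN, sigma)
print(mean, var)  # 6.5 and 0.02 + 0.05 * 0.25 + 0.25 = 0.2825
```

Note that the slope-uncertainty term $\phi^\top S_N \phi$ grows quadratically with $x_*$, which is why the predictive band widens away from the center of the data.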
_notebooks/2020-02-20-bayesian-linear-regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Monday, May 11, 2020 # ### leetCode : Longest Substring Without Repeating Characters # ### Problem : https://leetcode.com/problems/longest-substring-without-repeating-characters/ # ### Blog : https://somjang.tistory.com/entry/leetCode-3-Longest-Substring-Without-Repeating-Characters-Python # ### First attempt class Solution: def lengthOfLongestSubstring(self, my_str: str) -> int: answer = 0 substrings = [] my_str_list = list(my_str) set_list = set(my_str_list) if len(set_list) == 0: answer = 0 elif len(set_list) == 1: answer = 1 else: for i in range(len(my_str_list)): sub = [] for j in range(i, len(my_str_list)): if my_str_list[j] in sub: substring = ''.join(sub) substrings.append(len(substring)) break sub.append(my_str_list[j]) if j == len(my_str_list) - 1: substring = ''.join(sub) substrings.append(len(substring)) answer = max(substrings) return answer
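The nested loops in the attempt above make it O(n²). A common O(n) alternative is a sliding window that remembers the last index of each character; this is a standard technique, not part of the original post:

```python
def length_of_longest_substring(s: str) -> int:
    last_seen = {}  # char -> most recent index where it appeared
    start = 0       # left edge of the current repeat-free window
    best = 0
    for i, ch in enumerate(s):
        # If ch repeats inside the window, slide the left edge past its last occurrence
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best

print(length_of_longest_substring("abcabcbb"))  # 3 ("abc")
print(length_of_longest_substring("bbbbb"))     # 1 ("b")
print(length_of_longest_substring("pwwkew"))    # 3 ("wke")
```

The `last_seen[ch] >= start` check matters: an earlier occurrence of `ch` that already fell outside the window must not move the left edge backwards.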
DAY 001 ~ 100/DAY095_[leetCode] Longest Substring Without Repeating Characters (Python).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sys from IPython.display import HTML, display import hatchet as ht # - # # Basic Mock Example # # This basic example uses a mock GraphFrame taken from the testing directory in the Hatchet repo. # ## Generate and Visualize Mock GraphFrame # Copied from hatchet/hatchet/tests/conftest.py def mock_graph_literal(): graph_dict = [ { "frame": {"name": "foo", "type": "function"}, "metrics": {"time (inc)": 130.0, "time": 0.0}, "children": [ { "frame": {"name": "bar", "type": "function"}, "metrics": {"time (inc)": 20.0, "time": 5.0}, "children": [ { "frame": {"name": "baz", "type": "function"}, "metrics": {"time (inc)": 5.0, "time": 5.0} }, { "frame": {"name": "grault", "type": "function"}, "metrics": {"time (inc)": 10.0, "time": 10.0}, }, ], }, { "frame": {"name": "qux", "type": "function"}, "metrics": {"time (inc)": 60.0, "time": 0.0}, "children": [ { "frame": {"name": "quux", "type": "function"}, "metrics": {"time (inc)": 60.0, "time": 5.0}, "children": [ { "frame": {"name": "corge", "type": "function"}, "metrics": {"time (inc)": 55.0, "time": 10.0}, "children": [ { "frame": {"name": "bar", "type": "function"}, "metrics": { "time (inc)": 20.0, "time": 5.0, }, "children": [ { "frame": {"name": "baz", "type": "function"}, "metrics": { "time (inc)": 5.0, "time": 5.0, }, }, { "frame": {"name": "grault", "type": "function"}, "metrics": { "time (inc)": 10.0, "time": 10.0, }, }, ], }, { "frame": {"name": "grault", "type": "function"}, "metrics": { "time (inc)": 10.0, "time": 10.0, }, }, { "frame": {"name": "garply", "type": "function"}, "metrics": { "time (inc)": 15.0, "time": 15.0, }, }, ], } ], } ], }, { "frame": {"name": "waldo", "type": "function"}, "metrics": {"time (inc)": 50.0, "time": 0.0}, "children": [ { "frame": {"name": "fred", "type": 
"function"}, "metrics": {"time (inc)": 35.0, "time": 5.0}, "children": [ { "frame": {"name": "plugh", "type": "function"}, "metrics": {"time (inc)": 5.0, "time": 5.0}, }, { "frame": {"name": "xyzzy", "type": "function"}, "metrics": {"time (inc)": 25.0, "time": 5.0}, "children": [ { "frame": {"name": "thud", "type": "function"}, "metrics": { "time (inc)": 25.0, "time": 5.0, }, "children": [ { "frame": {"name": "baz", "type": "function"}, "metrics": { "time (inc)": 5.0, "time": 5.0, }, }, { "frame": {"name": "garply", "type": "function"}, "metrics": { "time (inc)": 15.0, "time": 15.0, }, }, ], } ], }, ], }, { "frame": {"name": "garply", "type": "function"}, "metrics": {"time (inc)": 15.0, "time": 15.0}, }, ], }, ], }, { "frame": {"name": "waldo", "type": "function"}, "metrics": {"time (inc)": 30.0, "time": 10.0}, "children": [ { "frame": {"name": "bar", "type": "function"}, "metrics": {"time (inc)": 20.0, "time": 5.0}, "children": [ { "frame": {"name": "baz", "type": "function"}, "metrics": {"time (inc)": 5.0, "time": 5.0} }, { "frame": {"name": "grault", "type": "function"}, "metrics": {"time (inc)": 10.0, "time": 10.0}, }, ], } ], }, ] return graph_dict gf = ht.GraphFrame.from_literal(mock_graph_literal()) print(gf.tree(metric_column="time (inc)")) display(HTML(gf.dataframe.to_html())) # ## Query Example #1 # This query matches the following: # 1. A single node with name "qux" # 2. 0 or more nodes with inclusive time greater than 10 # 3. A single node with name starting with "gr" and inclusive time less than or equal to 10 query = [ {"name": "qux"}, ("*", {"time (inc)": "> 10"}), {"name": "gr[a-z]+", "time (inc)": "<= 10"} ] sgf = gf.filter(query) print(sgf.tree(metric_column="time (inc)")) display(HTML(sgf.dataframe.to_html())) # ## Query Example #2 # This query matches the following: # 1. A single node with name "bar" # 2. 0 or more nodes with inclusive time greater than 10 # 3. 
A single node with name starting with "gr" and inclusive time less than or equal to 10 query = [ {"name": "bar"}, ("*", {"time (inc)": "> 50"}), {"name": "gr[a-z]+", "time (inc)": "<= 10"} ] sgf = gf.filter(query) print(sgf.tree(metric_column="time (inc)")) display(HTML(sgf.dataframe.to_html())) # ## Query Example #3 # # This query matches the following: # 1. A single node with name "waldo" # 2. 1 or more of any node # 3. A single node with an inclusive time >= 20 # 4. 1 or more of any node # 5. A single node with an exclusive and inclusive time equal to 5 query = [ {"name": "waldo"}, "+", {"time (inc)": ">= 20.0"}, "+", {"time (inc)": 5.0, "time": 5.0} ] sgf = gf.filter(query, squash=True) print(sgf.tree(metric_column="time (inc)")) display(HTML(sgf.dataframe.to_html())) # ## Query Example #4 # # This query matches the following: # 1. A single node with name "waldo" # 2. 1 or more of any node # 3. A single node with an inclusive time >= 20 # 4. 1 or more of any node # 5. A single node with an exclusive and inclusive time equal to 7.5 # # This query does not match any node. It should raise an `EmptyFilter` exception. query = [ {"name": "waldo"}, "+", {"time (inc)": ">= 20.0"}, "+", {"time (inc)": 7.5, "time": 7.5} ] sgf = gf.filter(query) # # "Real-Life" Example # # This example uses an HPCToolkit database in `hpctoolkit-cpi-database/` in the `tests/` directory. gf = ht.GraphFrame.from_hpctoolkit("../../..//hatchet/tests/data/hpctoolkit-cpi-database") print(gf.tree(metric_column="time (inc)")) display(HTML(gf.dataframe.to_html())) gf.drop_index_levels() display(HTML(gf.dataframe.to_html())) query = [ "*", {"name": "PMPI.*"}, "*" ] sgf = gf.filter(query) print(sgf.tree(metric_column="time (inc)")) display(HTML(sgf.dataframe.to_html()))
docs/examples/tutorial/hatchet_query_examples.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (My py38cu11 Kernel) # language: python # name: py38cu11 # --- # + active="" # # // *********************************************************************** # # // # # // V2W-BERT: A Python library for vulnerability classification # # // <NAME> (<EMAIL>) : Purdue University # # // <NAME> (<EMAIL>): Pacific Northwest National Laboratory # # // # # // *********************************************************************** # # # # Copyright © 2022, Battelle Memorial Institute # # All rights reserved. # # # # # Redistribution and use in source and binary forms, with or without # # modification, are permitted provided that the following conditions are met: # # # # 1. Redistributions of source code must retain the above copyright notice, this # # # list of conditions and the following disclaimer. # # # # # 2. Redistributions in binary form must reproduce the above copyright notice, # # # this list of conditions and the following disclaimer in the documentation # # # and/or other materials provided with the distribution. # # # # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" # # # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # # # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE # # # DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE # # # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # # # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR # # # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER # # # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, # # # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # # # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # - # # Plot accuracy and Loss # + import matplotlib.pyplot as plt import numpy as np def save_plot(train_data, valid_data, name='Loss',yname=''): """Plot one figure: accuracy/loss vs. epoch """ n = len(train_data) xs = np.arange(n) # plot train and test accuracies plt.clf() fig, ax = plt.subplots() ax.plot(xs, train_data, '--', linewidth=2, label='train') ax.plot(xs, valid_data, '-', linewidth=2, label='valid') ax.set_xlabel("Epoch") ax.set_ylabel(yname) ax.legend(loc='lower right') plt.savefig(name+'-train-valid.png') plt.show() # - def save_loss(valid_data, name=''): n = len(valid_data) xs = np.arange(n) # plot train and test accuracies plt.clf() fig, ax = plt.subplots() ax.plot(xs, valid_data, '-', linewidth=2, label='loss') ax.set_xlabel("Epoch") ax.set_ylabel(name) ax.legend(loc='lower right') plt.savefig(name+'.png') plt.show() # + # as it turned out, interactive shells (like Jupyter) cannot handle CPU multiprocessing well, so check which medium the code is running in # we will write code in Jupyter for understanding purposes but final execution will be in shell def isnotebook(): try: shell = get_ipython().__class__.__name__ if shell == 'ZMQInteractiveShell': return True # Jupyter notebook or qtconsole elif shell == 'TerminalInteractiveShell': return False # Terminal running IPython else: return False # Other type (?)
except NameError: return False # Probably standard Python interpreter # - if __name__ == '__main__': y_true=[0,1,2,3,4,5,6,7,8,9,10] y_pred=[0,1,2,3,4,5,6,7,8,9,10] save_plot(y_true,y_pred)
lib/Utils.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import torch torch.set_printoptions(edgeitems=2, threshold=50) # + import imageio img_arr = imageio.imread('../data/p1ch4/image-dog/bobby.jpg') img_arr.shape # - img = torch.from_numpy(img_arr) out = img.permute(2, 0, 1) batch_size = 3 batch = torch.zeros(batch_size, 3, 256, 256, dtype=torch.uint8) # + import os data_dir = '../data/p1ch4/image-cats/' filenames = [name for name in os.listdir(data_dir) if os.path.splitext(name)[-1] == '.png'] for i, filename in enumerate(filenames): img_arr = imageio.imread(os.path.join(data_dir, filename)) img_t = torch.from_numpy(img_arr) img_t = img_t.permute(2, 0, 1) img_t = img_t[:3] # <1> batch[i] = img_t # - batch = batch.float() batch /= 255.0 n_channels = batch.shape[1] for c in range(n_channels): mean = torch.mean(batch[:, c]) std = torch.std(batch[:, c]) batch[:, c] = (batch[:, c] - mean) / std batch.shape len(filenames)
src/dl-pytorch-book/p1ch4/1_image_dog.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import plotly.offline as pyoff import plotly.graph_objs as go import pkg_resources libs = ['pandas', 'matplotlib', 'numpy', 'plotly'] for lib in libs: print(lib+'=='+pkg_resources.get_distribution(lib).version) pyoff.init_notebook_mode() tx_data = pd.read_csv('data/OnlineRetail.csv',encoding = "ISO-8859-1") tx_data.head(10) tx_data.info() tx_data.shape tx_data['InvoiceDate'] = pd.to_datetime(tx_data['InvoiceDate']) tx_data['InvoiceDate'].describe() tx_data['InvoiceYearMonth'] = tx_data['InvoiceDate'].map(lambda date: 100*date.year + date.month) tx_data.head(10) tx_data['Revenue'] = tx_data['UnitPrice'] * tx_data['Quantity'] tx_data.groupby('InvoiceYearMonth')['Revenue'].sum() tx_revenue = tx_data.groupby(['InvoiceYearMonth'])['Revenue'].sum().reset_index() tx_revenue # + plot_data = [ go.Scatter( x=tx_revenue['InvoiceYearMonth'], y=tx_revenue['Revenue'], ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='Monthly Revenue' ) # - fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) tx_revenue['MonthlyGrowth'] = tx_revenue['Revenue'].pct_change() tx_revenue.head() # + plot_data = [ go.Scatter( x=tx_revenue.query("InvoiceYearMonth < 201112")['InvoiceYearMonth'], y=tx_revenue.query("InvoiceYearMonth < 201112")['MonthlyGrowth'], ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='Monthly Growth Rate' ) fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) # - tx_data.groupby('Country')['Revenue'].sum().sort_values(ascending=False).astype(int) tx_uk = tx_data.query("Country=='United Kingdom'").reset_index(drop=True) tx_uk.head() tx_monthly_active =
tx_uk.groupby('InvoiceYearMonth')['CustomerID'].nunique().reset_index() tx_monthly_active # + plot_data = [ go.Bar( x=tx_monthly_active['InvoiceYearMonth'], y=tx_monthly_active['CustomerID'], ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='Monthly Active Customers' ) # - fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) tx_monthly_active['CustomerID'].mean() tx_monthly_sales = tx_uk.groupby('InvoiceYearMonth')['Quantity'].sum().reset_index() tx_monthly_sales # + plot_data = [ go.Bar( x=tx_monthly_sales['InvoiceYearMonth'], y=tx_monthly_sales['Quantity'], ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='Monthly Total # of Order' ) # - fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) tx_monthly_sales['Quantity'].mean() tx_monthly_order_avg = tx_uk.groupby('InvoiceYearMonth')['Revenue'].mean().reset_index() tx_monthly_order_avg # + plot_data = [ go.Bar( x=tx_monthly_order_avg['InvoiceYearMonth'], y=tx_monthly_order_avg['Revenue'], ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='Monthly Order Average' ) fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) # - tx_monthly_order_avg.Revenue.mean() tx_uk.info() # # New & Existing Users tx_min_purchase = tx_uk.groupby('CustomerID').InvoiceDate.min().reset_index() tx_min_purchase.columns = ['CustomerID','MinPurchaseDate'] # + tx_min_purchase['MinPurchaseYearMonth'] = tx_min_purchase['MinPurchaseDate'].map(lambda date: 100*date.year + date.month) # - tx_min_purchase tx_uk = pd.merge(tx_uk, tx_min_purchase, on='CustomerID') tx_uk.head() tx_uk['UserType'] = 'New' tx_uk.loc[tx_uk['InvoiceYearMonth']>tx_uk['MinPurchaseYearMonth'],'UserType'] = 'Existing' tx_uk.UserType.value_counts() tx_uk.head() tx_user_type_revenue = tx_uk.groupby(['InvoiceYearMonth','UserType'])['Revenue'].sum().reset_index() tx_user_type_revenue.query("InvoiceYearMonth != 201012 and InvoiceYearMonth != 201112") tx_user_type_revenue = 
tx_user_type_revenue.query("InvoiceYearMonth != 201012 and InvoiceYearMonth != 201112") # + plot_data = [ go.Scatter( x=tx_user_type_revenue.query("UserType == 'Existing'")['InvoiceYearMonth'], y=tx_user_type_revenue.query("UserType == 'Existing'")['Revenue'], name = 'Existing' ), go.Scatter( x=tx_user_type_revenue.query("UserType == 'New'")['InvoiceYearMonth'], y=tx_user_type_revenue.query("UserType == 'New'")['Revenue'], name = 'New' ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='New vs Existing' ) fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) # - tx_user_ratio = tx_uk.query("UserType == 'New'").groupby(['InvoiceYearMonth'])['CustomerID'].nunique()/tx_uk.query("UserType == 'Existing'").groupby(['InvoiceYearMonth'])['CustomerID'].nunique() tx_user_ratio = tx_user_ratio.reset_index() tx_user_ratio = tx_user_ratio.dropna() tx_uk.query("UserType == 'New'").groupby(['InvoiceYearMonth'])['CustomerID'].nunique() tx_uk.query("UserType == 'Existing'").groupby(['InvoiceYearMonth'])['CustomerID'].nunique() # + plot_data = [ go.Bar( x=tx_user_ratio.query("InvoiceYearMonth>201101 and InvoiceYearMonth<201112")['InvoiceYearMonth'], y=tx_user_ratio.query("InvoiceYearMonth>201101 and InvoiceYearMonth<201112")['CustomerID'], ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='New Customer Ratio' ) fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) # - # # Create Signup Data tx_min_purchase.head() unq_month_year = tx_min_purchase.MinPurchaseYearMonth.unique() unq_month_year def generate_signup_date(year_month): signup_date = [el for el in unq_month_year if year_month >= el] return np.random.choice(signup_date) # + tx_min_purchase['SignupYearMonth'] = tx_min_purchase.apply(lambda row: generate_signup_date(row['MinPurchaseYearMonth']),axis=1) # - tx_min_purchase['InstallYearMonth'] = tx_min_purchase.apply(lambda row: generate_signup_date(row['SignupYearMonth']),axis=1) tx_min_purchase.head() channels = 
['organic','inorganic','referral'] tx_min_purchase['AcqChannel'] = tx_min_purchase.apply(lambda x: np.random.choice(channels),axis=1) # # Activation Rate tx_activation = tx_min_purchase[tx_min_purchase['MinPurchaseYearMonth'] == tx_min_purchase['SignupYearMonth']].groupby('SignupYearMonth').CustomerID.count()/tx_min_purchase.groupby('SignupYearMonth').CustomerID.count() tx_activation = tx_activation.reset_index() # + plot_data = [ go.Bar( x=tx_activation.query("SignupYearMonth>201101 and SignupYearMonth<201109")['SignupYearMonth'], y=tx_activation.query("SignupYearMonth>201101 and SignupYearMonth<201109")['CustomerID'], ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='Monthly Activation Rate' ) fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) # - tx_activation_ch = tx_min_purchase[tx_min_purchase['MinPurchaseYearMonth'] == tx_min_purchase['SignupYearMonth']].groupby(['SignupYearMonth','AcqChannel']).CustomerID.count()/tx_min_purchase.groupby(['SignupYearMonth','AcqChannel']).CustomerID.count() tx_activation_ch = tx_activation_ch.reset_index() # + plot_data = [ go.Scatter( x=tx_activation_ch.query("SignupYearMonth>201101 and SignupYearMonth<201108 and AcqChannel == 'organic'")['SignupYearMonth'], y=tx_activation_ch.query("SignupYearMonth>201101 and SignupYearMonth<201108 and AcqChannel == 'organic'")['CustomerID'], name="organic" ), go.Scatter( x=tx_activation_ch.query("SignupYearMonth>201101 and SignupYearMonth<201108 and AcqChannel == 'inorganic'")['SignupYearMonth'], y=tx_activation_ch.query("SignupYearMonth>201101 and SignupYearMonth<201108 and AcqChannel == 'inorganic'")['CustomerID'], name="inorganic" ), go.Scatter( x=tx_activation_ch.query("SignupYearMonth>201101 and SignupYearMonth<201108 and AcqChannel == 'referral'")['SignupYearMonth'], y=tx_activation_ch.query("SignupYearMonth>201101 and SignupYearMonth<201108 and AcqChannel == 'referral'")['CustomerID'], name="referral" ) ] plot_layout = go.Layout( xaxis={"type": 
"category"}, title='Monthly Activation Rate - Channel Based' ) fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) # - # # Monthly Retention Rate tx_uk.head() df_monthly_active = tx_uk.groupby('InvoiceYearMonth')['CustomerID'].nunique().reset_index() tx_user_purchase = tx_uk.groupby(['CustomerID','InvoiceYearMonth'])['Revenue'].sum().astype(int).reset_index() tx_user_purchase tx_user_purchase.Revenue.sum() tx_retention = pd.crosstab(tx_user_purchase['CustomerID'], tx_user_purchase['InvoiceYearMonth']).reset_index() tx_retention.head() months = tx_retention.columns[2:] months retention_array = [] for i in range(len(months)-1): retention_data = {} selected_month = months[i+1] prev_month = months[i] retention_data['InvoiceYearMonth'] = int(selected_month) retention_data['TotalUserCount'] = tx_retention[selected_month].sum() retention_data['RetainedUserCount'] = tx_retention[(tx_retention[selected_month]>0) & (tx_retention[prev_month]>0)][selected_month].sum() retention_array.append(retention_data) tx_retention = pd.DataFrame(retention_array) tx_retention.head() tx_retention['RetentionRate'] = tx_retention['RetainedUserCount']/tx_retention['TotalUserCount'] tx_retention # + plot_data = [ go.Scatter( x=tx_retention.query("InvoiceYearMonth<201112")['InvoiceYearMonth'], y=tx_retention.query("InvoiceYearMonth<201112")['RetentionRate'], name="organic" ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='Monthly Retention Rate' ) fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) # - # # Churn Rate tx_retention['ChurnRate'] = 1- tx_retention['RetentionRate'] # + plot_data = [ go.Scatter( x=tx_retention.query("InvoiceYearMonth<201112")['InvoiceYearMonth'], y=tx_retention.query("InvoiceYearMonth<201112")['ChurnRate'], name="organic" ) ] plot_layout = go.Layout( xaxis={"type": "category"}, title='Monthly Churn Rate' ) fig = go.Figure(data=plot_data, layout=plot_layout) pyoff.iplot(fig) # - # # Cohort Base Retention 
tx_user_purchase.head() tx_min_purchase.head() tx_retention = pd.crosstab(tx_user_purchase['CustomerID'], tx_user_purchase['InvoiceYearMonth']).reset_index() tx_retention = pd.merge(tx_retention,tx_min_purchase[['CustomerID','MinPurchaseYearMonth']],on='CustomerID') tx_retention.head() tx_retention.columns new_column_names = [ 'm_' + str(column) for column in tx_retention.columns[:-1]] new_column_names.append('MinPurchaseYearMonth') tx_retention.columns = new_column_names tx_retention months # + retention_array = [] for i in range(len(months)): retention_data = {} selected_month = months[i] prev_months = months[:i] next_months = months[i+1:] for prev_month in prev_months: retention_data[prev_month] = np.nan total_user_count = tx_retention[tx_retention.MinPurchaseYearMonth == selected_month].MinPurchaseYearMonth.count() retention_data['TotalUserCount'] = total_user_count retention_data[selected_month] = 1 query = "MinPurchaseYearMonth == {}".format(selected_month) for next_month in next_months: new_query = query + " and {} > 0".format(str('m_' + str(next_month))) retention_data[next_month] = np.round(tx_retention.query(new_query)['m_' + str(next_month)].sum()/total_user_count,2) retention_array.append(retention_data) # - tx_retention = pd.DataFrame(retention_array) len(months) tx_retention.index = months tx_retention
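The month-over-month retention loop above can be distilled into a minimal, self-contained sketch. The customer IDs and months below are hypothetical stand-ins for the OnlineRetail data; the rule is the same: a customer counts as retained in month `m` when they purchased in both `m` and the month before.

```python
import pandas as pd

# Toy purchase log (hypothetical customers/months, not the real data set)
purchases = pd.DataFrame({
    'CustomerID': [1, 1, 2, 2, 3, 3, 1],
    'InvoiceYearMonth': [201101, 201102, 201101, 201103, 201102, 201103, 201103],
})

# Crosstab of customer vs. month, as in the notebook above
ct = pd.crosstab(purchases['CustomerID'], purchases['InvoiceYearMonth'])

months = ct.columns
rows = []
for prev, cur in zip(months[:-1], months[1:]):
    active = (ct[cur] > 0).sum()                      # purchased this month
    retained = ((ct[cur] > 0) & (ct[prev] > 0)).sum() # ...and last month too
    rows.append({'Month': cur, 'RetentionRate': retained / active})

retention = pd.DataFrame(rows)
print(retention)
```

With this toy log, 1 of the 2 customers active in 201102 also bought in 201101 (rate 0.5), and 2 of the 3 active in 201103 also bought in 201102.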
1_know_your_metrics.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/LuisHVSilva/Boostrap-4/blob/main/Curso_Processamento_Imagens.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="LVLhRMSYPsKl"
# # Basics

# + id="yD_jhIn8gTO5"
import torch

# + [markdown] id="PNa3Ow-sPvYD"
# ## Basic structures

# + id="dxgOR14qOUBZ"
# Basically an array optimized for neural networks
T = torch.tensor([[1, 2], [3, 4]])

# Basic structures
T_zeros = torch.zeros(100, 100)
T_ones = torch.ones(100, 100)
T_random = torch.rand(10, 10)  # 0 <= T <= 1

# dtype sets the data type
T_int = torch.tensor([[1, 2], [3, 4]], dtype=torch.int)
T_double = torch.tensor([3, 5, 7], dtype=torch.double)  # torch.float64

print(T.dtype)
print(T.size())

# + [markdown] id="MZ6SJRqRPx6S"
# ## Operations

# + id="_OvwvKXLPz7C"
# (element-wise operations)
x = torch.randn(3, 3, dtype=torch.float64)
y = torch.randn(3, 3, dtype=torch.float64)

# Addition and subtraction
z = x + y
z = torch.add(x, y)
w = x - y
w = torch.sub(x, y)

# Multiplication and division
u = x * y
u = torch.mul(x, y)
v = x / y
v = torch.div(x, y)

y.add_(x)
x.add_(y)

# + [markdown] id="U7tJPwv8QtKc"
# ## Indexing

# + id="nJuHHSz9QueX"
x = torch.randn(6, 6, dtype=torch.float64)
y = torch.randn(6, 6, dtype=torch.float64)

# Indexing and slicing
x_row = x[3, :]       # Selecting row 3
x_cln = x[:, 3]       # Selecting column 3
x_e = x[0, 0].item()  # Selecting the first item

# Reshape
z = y.view(36)      # 36x1
w = y.view(-1, 9)   # 4x9 tensor (-1 = the missing dimension, inferred)

# + [markdown] id="NgRWYwpsR_3c"
# ## Interop with NumPy

# + id="RXMi3zAvSCAg"
import numpy as np

# + id="1kmHKjJvSFf6"
x = torch.rand(10)
y = x.numpy()

a = np.ones(5)
b = torch.from_numpy(a)  # same thing
b = torch.tensor(a)      # same thing

# + [markdown] id="qWtvj7kJT2UJ"
# ## How to use the GPU

# + id="SO4O0bkhU1Wy"
# All operations must be performed on the same device
#
# Devices are cuda = GPU and cpu = CPU, where cpu is the default
device = torch.device("cuda")
#x = torch.randn(5,5, device=device)  # Put the data on the GPU

device2 = torch.device("cuda:2")
#x = torch.randn(5,5, device=device2)  # Put the data on the GPU

y = torch.ones(5, 5)
y.to(device)

# + [markdown] id="vY9mlQukVARr"
# ## Autograd

# + id="AX6Xv5zTVBpr"
# Ways to build gradients automatically
x = torch.randn(3, requires_grad=True)   # Track the gradients of this tensor
y = torch.randn(3, requires_grad=True)   # Track the gradients of this tensor
z = torch.randn(3, requires_grad=False)  # Do not track the gradients of this tensor

# All the operations below will have a gradient function
p = x + y     # Addition
g = p * z     # Product
g = g.mean()  # Mean

g.backward()  # Differentiating g with respect to x
print(x.grad)

y.detach()  # Remove the gradient tracking set above

with torch.no_grad():  # Setting up an environment without gradients
    pass

# + [markdown] id="m0EbJHJ-Y3pP"
# ## torch.nn

# + id="i7VG8dSTY9dA"
'''
The part of the library with neural-network layers, activation
functions and loss functions

Some members of 'nn'
   -> Layers: Linear, Conv2d, LSTM
   -> Activation functions: ReLU, Sigmoid, Dropout
   -> Core classes: Module, Sequential
   -> Loss functions: Softmax, CrossEntropyLoss, MSELoss, Transformer
'''

# Creating a linear layer
L = torch.nn.Linear(10, 25)  # Maps 10 neurons to 25

x = torch.randn(3, 10)  # 3 examples of size 10
y = L(x)                # Produces 3 outputs of size 25
print(y.size())

model = torch.nn.Sequential(L,
                            torch.nn.ReLU(),        # ReLU activation function
                            torch.nn.Linear(25, 5)  # One more linear layer, from 25 to 5
                            )  # Composing layers

y = model(x)
print(y.size())
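As a sanity check on the Autograd section: for `g = mean((x + y) * z)` as above, the gradient `g.backward()` stores in `x.grad` is analytically `z / n`. A NumPy-only finite-difference check (no torch required) confirms that formula:

```python
import numpy as np

# What does g.backward() compute for g = mean((x + y) * z)?
# Analytically, dg/dx_i = z_i / n.  Verify by central finite differences.
rng = np.random.default_rng(0)
x, y, z = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

def g(x):
    return np.mean((x + y) * z)

analytic = z / len(x)  # the gradient autograd would return in x.grad
eps = 1e-6
numeric = np.array([(g(x + eps * e) - g(x - eps * e)) / (2 * eps)
                    for e in np.eye(len(x))])
print(np.allclose(analytic, numeric, atol=1e-6))  # True
```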
Curso_Processamento_Imagens.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Extracting a pore network using PoreSpy and loading into OpenPNM
#

import warnings
import scipy as sp
import numpy as np
import porespy as ps
import openpnm as op
# %config InlineBackend.figure_formats = ['svg']
import matplotlib.pyplot as plt
ws = op.Workspace()
ws.settings["loglevel"] = 40
warnings.filterwarnings('ignore')
# %matplotlib inline

np.random.seed(10)
im = ps.generators.overlapping_spheres(shape=[200, 200, 200], radius=10, porosity=0.5, iter_max=0)
plt.imshow(im[:, :, 50]);

# Let's check out the porosity of the generated image!

eps = ps.metrics.porosity(im)
print(f"Porosity: {eps*100:.1f}%")

# Let's visualize the image using `porespy`'s 3D visualizer (this might take several seconds):

im_3d = ps.visualization.show_3D(im)
plt.imshow(im_3d, cmap=plt.cm.magma);

snow = ps.networks.snow(im=im, boundary_faces=['right'])

# OpenPNM has an IO class specifically for importing the output from PoreSpy. The ``import_data`` method can either accept a handle to a dictionary (as output from the ``snow`` algorithm above), or it can accept a filename of a saved dictionary (saved using Python's ``pickle`` library). All IO methods in OpenPNM return a ``project``, which is a ``list``, in this case containing a network and a geometry object.

proj = op.io.PoreSpy.import_data(snow)
print(proj)

# We can unpack the network and geometry objects from the ``project`` using the indices in the list as follows:

net = proj[0]
geo = proj[1]
print(net)

# It is important to note that the ``net`` object only has topological information and labels. The ``geo`` object was created by the ``openpnm.io.PoreSpy`` import class to extract all geometric information from the supplied ``snow`` dict and put it on a geometry object. We can print ``geo`` to confirm:

print(geo)

# Now let's plot things to see what we have:

# NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=(8, 8))
op.topotools.plot_connections(network=net, alpha=0.8, color='grey', ax=ax)
op.topotools.plot_coordinates(network=net, ax=ax, color='b', markersize=50)
op.topotools.plot_coordinates(network=net, pores=net.pores('right'), ax=ax, color='r', markersize=50)
fig.tight_layout()

# This looks pretty good, but it only has boundary pores on the right face, indicated by the red dots. When we ran the ``snow`` algorithm we specifically told it to only put boundary pores on the ``"right"``. We could have added them to all faces during the extraction, but for the sake of demonstration we can add them after the fact, although the result is slightly different, as you'll see.
#
# We'll use the ``find_surface_pores`` function in the ``topotools`` module. This function applies a Delaunay tessellation between the interior pores and some fictitious "marker" nodes. Only pores that are on the surface will be connected to these marker nodes. To get the best result from the ``find_surface_pores`` function, it's a good idea to supply your own markers, so let's make a 2D plane of points, positioned outside the left face of the domain:

m = np.meshgrid(range(50, 195, 10), range(50, 195, 10))
m = np.vstack([-10 * np.ones_like(m[0].flatten()), m[0].flatten(), m[1].flatten()]).T

# Now we pass these points in as markers to the ``find_surface_pores`` function:

op.topotools.find_surface_pores(network=net, markers=m, label='left')

# Lastly we want to "clone" these pores and translate them to the domain edge:

op.topotools.clone_pores(network=net, pores=net.pores('left'), labels='left_boundary')
net['pore.coords'][net.pores('left_boundary')] *= [0, 1, 1]

# Now let's inspect the result using the quick plotting tools in the ``topotools`` module. First we'll add a new label called ``'right_boundary'`` to match the ``'left_boundary'`` we added above, then we'll plot the throats that connect to the ``'right_boundary'`` or ``'left_boundary'``:

# +
Ps = net.pores('right')
net.set_label('right_boundary', pores=Ps)
Ts = net.find_neighbor_throats(pores=net.pores('right_boundary'), mode='or')
net.set_label('right_boundary', throats=Ts)

fig, ax = plt.subplots(figsize=(8, 8))
op.topotools.plot_coordinates(network=net, color='w', ax=ax)
op.topotools.plot_connections(network=net, throats=net.throats('right_boundary'), color='r', ax=ax)
op.topotools.plot_connections(network=net, throats=net.throats('left_boundary'), color='b', ax=ax)
# -

# This result shows that the boundary pores added during the ``snow`` extraction (red) are randomly oriented, while those added by the ``clone_pores`` method are aligned with their internal counterparts. It also seems like longer connections are made with the ``clone_pores`` method, which may be the result of the Delaunay tessellation identifying pores that are too deep into the domain.
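The "clone and translate" step above boils down to zeroing one coordinate: multiplying pore coordinates by `[0, 1, 1]` projects the cloned pores onto the x = 0 face while preserving y and z. A tiny NumPy sketch with hypothetical coordinates (no OpenPNM required) shows the effect:

```python
import numpy as np

# Three hypothetical pore coordinates near the left face
coords = np.array([[12.3, 40.0, 55.0],
                   [ 7.1, 80.0, 10.0],
                   [ 3.9, 20.0, 95.0]])

# Same trick as net['pore.coords'][...] *= [0, 1, 1] above:
boundary = coords * [0, 1, 1]  # flatten onto the x = 0 face, keep y and z

print(boundary[:, 0])  # all zeros -> the clones sit exactly on the face
```

Because only the x component changes, each cloned pore stays directly aligned with its internal counterpart, which is why the blue throats in the plot above are perpendicular to the face.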
examples/notebooks/networks/extraction/working_with_extracted_networks.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true # # Data Aggregation and Group Operations # + deletable=true editable=true import numpy as np import pandas as pd PREVIOUS_MAX_ROWS = pd.options.display.max_rows pd.options.display.max_rows = 20 np.random.seed(12345) import matplotlib.pyplot as plt plt.rc('figure', figsize=(10, 6)) np.set_printoptions(precision=4, suppress=True) # + [markdown] deletable=true editable=true # ## GroupBy Mechanics # + deletable=true editable=true df = pd.DataFrame({'key1' : ['a', 'a', 'b', 'b', 'a'], 'key2' : ['one', 'two', 'one', 'two', 'one'], 'data1' : np.random.randn(5), 'data2' : np.random.randn(5)}) df # + deletable=true editable=true grouped = df['data1'].groupby(df['key1']) grouped # + deletable=true editable=true grouped.mean() # + deletable=true editable=true means = df['data1'].groupby([df['key1'], df['key2']]).mean() means # + deletable=true editable=true means.unstack() # + deletable=true editable=true states = np.array(['Ohio', 'California', 'California', 'Ohio', 'Ohio']) years = np.array([2005, 2005, 2006, 2005, 2006]) df['data1'].groupby([states, years]).mean() # + deletable=true editable=true df.groupby('key1').mean() df.groupby(['key1', 'key2']).mean() # + deletable=true editable=true df.groupby(['key1', 'key2']).size() # + [markdown] deletable=true editable=true # ### Iterating Over Groups # + deletable=true editable=true for name, group in df.groupby('key1'): print(name) print(group) # + deletable=true editable=true for (k1, k2), group in df.groupby(['key1', 'key2']): print((k1, k2)) print(group) # + deletable=true editable=true pieces = dict(list(df.groupby('key1'))) pieces['b'] # + deletable=true editable=true df.dtypes grouped = df.groupby(df.dtypes, axis=1) # + deletable=true editable=true for dtype, 
group in grouped: print(dtype) print(group) # + [markdown] deletable=true editable=true # ### Selecting a Column or Subset of Columns # + [markdown] deletable=true editable=true # df.groupby('key1')['data1'] # df.groupby('key1')[['data2']] # + [markdown] deletable=true editable=true # df['data1'].groupby(df['key1']) # df[['data2']].groupby(df['key1']) # + deletable=true editable=true df.groupby(['key1', 'key2'])[['data2']].mean() # + deletable=true editable=true s_grouped = df.groupby(['key1', 'key2'])['data2'] s_grouped s_grouped.mean() # + [markdown] deletable=true editable=true # ### Grouping with Dicts and Series # + deletable=true editable=true people = pd.DataFrame(np.random.randn(5, 5), columns=['a', 'b', 'c', 'd', 'e'], index=['Joe', 'Steve', 'Wes', 'Jim', 'Travis']) people.iloc[2:3, [1, 2]] = np.nan # Add a few NA values people # + deletable=true editable=true mapping = {'a': 'red', 'b': 'red', 'c': 'blue', 'd': 'blue', 'e': 'red', 'f' : 'orange'} # + deletable=true editable=true by_column = people.groupby(mapping, axis=1) by_column.sum() # + deletable=true editable=true map_series = pd.Series(mapping) map_series people.groupby(map_series, axis=1).count() # + [markdown] deletable=true editable=true # ### Grouping with Functions # + deletable=true editable=true people.groupby(len).sum() # + deletable=true editable=true key_list = ['one', 'one', 'one', 'two', 'two'] people.groupby([len, key_list]).min() # + [markdown] deletable=true editable=true # ### Grouping by Index Levels # + deletable=true editable=true columns = pd.MultiIndex.from_arrays([['US', 'US', 'US', 'JP', 'JP'], [1, 3, 5, 1, 3]], names=['cty', 'tenor']) hier_df = pd.DataFrame(np.random.randn(4, 5), columns=columns) hier_df # + deletable=true editable=true hier_df.groupby(level='cty', axis=1).count() # + [markdown] deletable=true editable=true # ## Data Aggregation # + deletable=true editable=true df grouped = df.groupby('key1') grouped['data1'].quantile(0.9) # + deletable=true editable=true 
def peak_to_peak(arr):
    return arr.max() - arr.min()
grouped.agg(peak_to_peak)

# + deletable=true editable=true
grouped.describe()

# + [markdown] deletable=true editable=true
# ### Column-Wise and Multiple Function Application

# + deletable=true editable=true
tips = pd.read_csv('examples/tips.csv')
# Add tip percentage of total bill
tips['tip_pct'] = tips['tip'] / tips['total_bill']
tips[:6]

# + deletable=true editable=true
grouped = tips.groupby(['day', 'smoker'])

# + deletable=true editable=true
grouped_pct = grouped['tip_pct']
grouped_pct.agg('mean')

# + deletable=true editable=true
grouped_pct.agg(['mean', 'std', peak_to_peak])

# + deletable=true editable=true
grouped_pct.agg([('foo', 'mean'), ('bar', np.std)])

# + deletable=true editable=true
functions = ['count', 'mean', 'max']
result = grouped['tip_pct', 'total_bill'].agg(functions)
result

# + deletable=true editable=true
result['tip_pct']

# + deletable=true editable=true
ftuples = [('Durchschnitt', 'mean'), ('Abweichung', np.var)]
grouped['tip_pct', 'total_bill'].agg(ftuples)

# + deletable=true editable=true
grouped.agg({'tip' : np.max, 'size' : 'sum'})
grouped.agg({'tip_pct' : ['min', 'max', 'mean', 'std'],
             'size' : 'sum'})

# + [markdown] deletable=true editable=true
# ### Returning Aggregated Data Without Row Indexes

# + deletable=true editable=true
tips.groupby(['day', 'smoker'], as_index=False).mean()

# + [markdown] deletable=true editable=true
# ## Apply: General split-apply-combine

# + deletable=true editable=true
def top(df, n=5, column='tip_pct'):
    # Return the n rows with the largest values in `column`
    return df.sort_values(by=column)[-n:]
top(tips, n=6)

# + deletable=true editable=true
tips.groupby('smoker').apply(top)

# + deletable=true editable=true
tips.groupby(['smoker', 'day']).apply(top, n=1, column='total_bill')

# + deletable=true editable=true
result = tips.groupby('smoker')['tip_pct'].describe()
result
result.unstack('smoker')

# + [markdown] deletable=true editable=true
# f = lambda x: x.describe()
# grouped.apply(f)

# + [markdown]
deletable=true editable=true # ### Suppressing the Group Keys # + deletable=true editable=true tips.groupby('smoker', group_keys=False).apply(top) tips.groupby('smoker', group_keys=True).apply(top) # + [markdown] deletable=true editable=true # ### Quantile and Bucket Analysis # + deletable=true editable=true frame = pd.DataFrame({'data1': np.random.randn(1000), 'data2': np.random.randn(1000)}) frame[:7] quartiles = pd.cut(frame.data1, 4) quartiles[:10] # + deletable=true editable=true def get_stats(group): return {'min': group.min(), 'max': group.max(), 'count': group.count(), 'mean': group.mean()} grouped = frame.data2.groupby(quartiles) grouped.apply(get_stats).unstack() # + deletable=true editable=true # Return quantile numbers grouping = pd.qcut(frame.data1, 10, labels=False) grouped = frame.data2.groupby(grouping) grouped.apply(get_stats).unstack() # + [markdown] deletable=true editable=true # ### Example: Filling Missing Values with Group-Specific Values # + deletable=true editable=true s = pd.Series(np.random.randn(6)) s[::2] = np.nan s s.fillna(s.mean()) # + deletable=true editable=true states = ['Ohio', 'New York', 'Vermont', 'Florida', 'Oregon', 'Nevada', 'California', 'Idaho'] group_key = ['East'] * 4 + ['West'] * 4 data = pd.Series(np.random.randn(8), index=states) data # + deletable=true editable=true data[['Vermont', 'Nevada', 'Idaho']] = np.nan data data.groupby(group_key).mean() # + deletable=true editable=true fill_mean = lambda g: g.fillna(g.mean()) data.groupby(group_key).apply(fill_mean) # + deletable=true editable=true fill_values = {'East': 0.5, 'West': -1} fill_func = lambda g: g.fillna(fill_values[g.name]) data.groupby(group_key).apply(fill_func) # + [markdown] deletable=true editable=true # ### Example: Random Sampling and Permutation # + deletable=true editable=true # Hearts, Spades, Clubs, Diamonds suits = ['H', 'S', 'C', 'D'] card_val = (list(range(1, 11)) + [10] * 3) * 4 base_names = ['A'] + list(range(2, 11)) + ['J', 'K', 'Q'] cards = 
[] for suit in ['H', 'S', 'C', 'D']: cards.extend(str(num) + suit for num in base_names) deck = pd.Series(card_val, index=cards) # + deletable=true editable=true deck[:13] # + deletable=true editable=true def draw(deck, n=5): return deck.sample(n) draw(deck) # + deletable=true editable=true get_suit = lambda card: card[-1] # last letter is suit deck.groupby(get_suit).apply(draw, n=2) # + deletable=true editable=true deck.groupby(get_suit, group_keys=False).apply(draw, n=2) # + [markdown] deletable=true editable=true # ### Example: Group Weighted Average and Correlation # + deletable=true editable=true df = pd.DataFrame({'category': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], 'data': np.random.randn(8), 'weights': np.random.rand(8)}) df # + deletable=true editable=true grouped = df.groupby('category') get_wavg = lambda g: np.average(g['data'], weights=g['weights']) grouped.apply(get_wavg) # + deletable=true editable=true close_px = pd.read_csv('examples/stock_px_2.csv', parse_dates=True, index_col=0) close_px.info() close_px[-4:] # + deletable=true editable=true spx_corr = lambda x: x.corrwith(x['SPX']) # + deletable=true editable=true rets = close_px.pct_change().dropna() # + deletable=true editable=true get_year = lambda x: x.year by_year = rets.groupby(get_year) by_year.apply(spx_corr) # + deletable=true editable=true by_year.apply(lambda g: g['AAPL'].corr(g['MSFT'])) # + [markdown] deletable=true editable=true # ### Example: Group-Wise Linear Regression # + deletable=true editable=true import statsmodels.api as sm def regress(data, yvar, xvars): Y = data[yvar] X = data[xvars] X['intercept'] = 1. 
result = sm.OLS(Y, X).fit() return result.params # + deletable=true editable=true by_year.apply(regress, 'AAPL', ['SPX']) # + [markdown] deletable=true editable=true # ## Pivot Tables and Cross-Tabulation # + deletable=true editable=true tips.pivot_table(index=['day', 'smoker']) # + deletable=true editable=true tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'], columns='smoker') # + deletable=true editable=true tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'], columns='smoker', margins=True) # + deletable=true editable=true tips.pivot_table('tip_pct', index=['time', 'smoker'], columns='day', aggfunc=len, margins=True) # + deletable=true editable=true tips.pivot_table('tip_pct', index=['time', 'size', 'smoker'], columns='day', aggfunc='mean', fill_value=0) # + [markdown] deletable=true editable=true # ### Cross-Tabulations: Crosstab # + deletable=true editable=true from io import StringIO data = """\ Sample Nationality Handedness 1 USA Right-handed 2 Japan Left-handed 3 USA Right-handed 4 Japan Right-handed 5 Japan Left-handed 6 Japan Right-handed 7 USA Right-handed 8 USA Left-handed 9 Japan Right-handed 10 USA Right-handed""" data = pd.read_csv(StringIO(data), sep='\s+') # + deletable=true editable=true data # + deletable=true editable=true pd.crosstab(data.Nationality, data.Handedness, margins=True) # + deletable=true editable=true pd.crosstab([tips.time, tips.day], tips.smoker, margins=True) # + deletable=true editable=true pd.options.display.max_rows = PREVIOUS_MAX_ROWS # + [markdown] deletable=true editable=true # ## Conclusion # -
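A compact way to see the relationship between `pivot_table` and `groupby` covered in this chapter: `pivot_table` with the default aggregation is `groupby` + `mean` + `unstack`. A small hypothetical frame stands in for the tips data here:

```python
import pandas as pd

# Tiny stand-in for the tips data used above
df = pd.DataFrame({'day': ['Sun', 'Sun', 'Sat', 'Sat'],
                   'smoker': ['No', 'Yes', 'No', 'Yes'],
                   'tip_pct': [0.16, 0.18, 0.15, 0.20]})

# pivot_table's default aggfunc is 'mean'...
pt = df.pivot_table('tip_pct', index='day', columns='smoker')

# ...so it is equivalent to split-apply-combine followed by unstack
gb = df.groupby(['day', 'smoker'])['tip_pct'].mean().unstack()

print(pt.equals(gb))  # True
```

The `aggfunc` argument generalizes this: `aggfunc=len` mirrors `groupby(...).size().unstack()`, which is exactly what `crosstab` computes.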
08_DataAggregation_GroupOperations.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# # XOR with TensorFlow

#importing tensorflow
import tensorflow as tf
#import the time module
import time
#importing numpy
import numpy as np
#importing debug library
from tensorflow.python import debug as tf_debug

#creating a session object which creates an environment where we can execute Operations and evaluate Tensors
sess = tf.Session()

# ## Debugger
#
# ### Uncomment the below line and execute the code to run the debugger.
#
# ### Go to the link once you start execution http://localhost:6006/

# +
#Uncomment the below line to run the debugger
# sess = tf_debug.TensorBoardDebugWrapperSession(sess, "localhost:6064")

# +
#Inserting a placeholder for a tensor equal to size of data
X = tf.placeholder(tf.float32, shape=[4,2], name = 'X')
#Inserting a placeholder for a tensor equal to size of labels of the data
Y = tf.placeholder(tf.float32, shape=[4,1], name = 'Y')

# +
#declaring a variable which will retain its state through multiple runs with random values from normal distribution
W = tf.Variable(tf.truncated_normal([2,2]), name = "W")
#declaring a variable which will retain its state through multiple runs with random values from normal distribution
w = tf.Variable(tf.truncated_normal([2,1]), name = "w")

# +
#declaring a variable which will retain its state through multiple runs with zeros, shape = 4 x 2
c = tf.Variable(tf.zeros([4,2]), name = "c")
#declaring a variable which will retain its state through multiple runs with zeros, shape = 4 x 1
b = tf.Variable(tf.zeros([4,1]), name = "b")
# -

#define a python operation for the hidden layer
with tf.name_scope("hidden_layer") as scope:
    #the operation of the hidden layer: matrix multiplication, addition and relu activation function
    h = tf.nn.relu(tf.add(tf.matmul(X, W),c))

#define a python operation for the output layer
with tf.name_scope("output") as scope:
    #the operation at the output layer: matrix multiplication, addition and sigmoid activation
    y_estimated = tf.sigmoid(tf.add(tf.matmul(h,w),b))

#define a python operation for the loss function
with tf.name_scope("loss") as scope:
    #the operation that calculates the loss for our model, here it's the squared loss
    loss = tf.reduce_mean(tf.squared_difference(y_estimated, Y))

#define a python operation for training our model
with tf.name_scope("train") as scope:
    #the train step with gradient descent optimizer to minimize the loss
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# +
#input data
INPUT_XOR = [[0,0],[0,1],[1,0],[1,1]]
#expected output/labels for the data
OUTPUT_XOR = [[0],[1],[1],[0]]
#python operation to initialize the global variables
init = tf.global_variables_initializer()
#write the summary protocol buffers to event files
writer = tf.summary.FileWriter("./logs/xor_logs", sess.graph)
#run the graph fragment to execute the operation (initialize global vars)
sess.run(init)

# +
#start the clock to record the execution time
t_start = time.clock()
#run the model for multiple epochs
for epoch in range(100001):
    #run the graph fragment to execute the operation (training)
    #and evaluate each tensor using data from feed_dict
    sess.run(train_step, feed_dict={X: INPUT_XOR, Y: OUTPUT_XOR})
    #check if the step is a multiple of 10000
    if epoch % 10000 == 0:
        #print the char 80 times, forms a separator
        print("_"*80)
        #print the epoch number
        print('Epoch: ', epoch)
        #print y_estimated
        print('   y_estimated: ')
        #run the graph fragment to execute the operation (y_estimated)
        #and evaluate each tensor using data from feed_dict
        for element in sess.run(y_estimated, feed_dict={X: INPUT_XOR, Y: OUTPUT_XOR}):
            #print each value of y_estimated
            print('    ',element)
        #print W (theta1)
        print('   W: ')
        #run the graph fragment to execute the operation (W)
        for element in sess.run(W):
            #print each value from W
            print('    ',element)
        #print c (bias1)
        print('   c: ')
        #run the graph fragment to execute the operation (c)
        for element in sess.run(c):
            #print each value from c
            print('    ',element)
        #print w (theta2)
        print('   w: ')
        #run the graph fragment to execute the operation (w)
        for element in sess.run(w):
            #print each value from w
            print('    ',element)
        #print b (bias2)
        print('   b: ')
        #run the graph fragment to execute the operation (b)
        for element in sess.run(b):
            #print each value from b
            print('    ',element)
        #run the graph fragment to execute the operation (loss)
        #and evaluate each tensor using data from feed_dict, print the loss
        print('   loss: ', sess.run(loss, feed_dict={X: INPUT_XOR, Y: OUTPUT_XOR}))
#end the clock recording the execution time
t_end = time.clock()
#print the char 80 times, forms a separator
print("_"*80)
#print the execution time
print('Elapsed time ', t_end - t_start)
# -
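The graph above has to learn its weights by gradient descent, but it is worth seeing that the 2-2-1 ReLU/sigmoid architecture can represent XOR at all. With hand-picked weights (illustrative values, not the ones training converges to), a plain NumPy forward pass solves it exactly:

```python
import numpy as np

# Same architecture as above, weights chosen by hand instead of learned
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
W = np.array([[1.0, 1.0], [1.0, 1.0]])  # hidden-layer weights (theta1)
c = np.array([0.0, -1.0])               # hidden-layer biases  (bias1)
w = np.array([[1.0], [-2.0]])           # output weights       (theta2)
b = np.array([-0.5])                    # output bias          (bias2)

h = np.maximum(X @ W + c, 0)            # ReLU hidden layer
y = 1 / (1 + np.exp(-(h @ w + b)))      # sigmoid output

print((y > 0.5).astype(int).ravel())    # [0 1 1 0]
```

The second hidden unit fires only for input (1, 1), and its −2 output weight cancels the first unit's response there, which is exactly the nonlinearity a single-layer perceptron cannot express.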
ieee_ml/.ipynb_checkpoints/xor_combined-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import os,sys,glob
from seisgo.noise import compute_fft,correlate
from seisgo import utils,downloaders

# +
rootdir='.'
respdir='.'
sacfiles = sorted(glob.glob(os.path.join(rootdir,'*.SAC')))

rm_resp='RESP' #for removing responses.
freqmin=0.01
freqmax=100
tr,inv=downloaders.read_data(sacfiles,rm_resp=rm_resp,freqmin=freqmin,freqmax=freqmax,stainv=True)
tr1,tr2=tr;inv1,inv2=inv

#trimming is needed for this data set, in which there is a one-sample difference in the starting times.
cstart=max([tr1.stats.starttime,tr2.stats.starttime])
cend=min([tr1.stats.endtime,tr2.stats.endtime])
tr1.trim(starttime=cstart,endtime=cend,nearest_sample=True)
tr2.trim(starttime=cstart,endtime=cend,nearest_sample=True)

# +
print('cross-correlation ...')
cc_len = 3600    # basic unit of data length for fft (sec)
cc_step = 900    # overlapping between each cc_len (sec)
maxlag = 100     # lags of cross-correlation to save (sec)
freq_norm='rma'
time_norm='rma'

#for whitening
freqmin=0.02
freqmax=2

#get FFT
fftdata1=compute_fft(tr1,cc_len,cc_step,stainv=inv1,
                     freq_norm=freq_norm,freqmin=freqmin,freqmax=freqmax,
                     time_norm=time_norm,smooth=500)
fftdata2=compute_fft(tr2,cc_len,cc_step,stainv=inv2,
                     freq_norm=freq_norm,freqmin=freqmin,freqmax=freqmax,
                     time_norm=time_norm,smooth=500)

#do correlation
corrdata=correlate(fftdata1,fftdata2,maxlag,substack=True)
# -

#plot xcorr result
corrdata.plot(freqmin=1,freqmax=2,lag=50,stack_method='robust',save=False)
notebooks/seisgo_xcorr_sac.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'  # default is 'last'

import pandas as pd
import numpy
from sklearn import tree
from sklearn import neighbors
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier

train_set = pd.read_csv("./train_set.csv")
train_set = pd.DataFrame(train_set)
train_set = train_set.drop(columns="ID")
# test_set = pd.DataFrame(train_set)
train_set.head()
# test_set.head()

train_set.shape

vec = DictVectorizer(sparse=False)
train_set = vec.fit_transform(train_set.to_dict(orient="records"))
train_set = pd.DataFrame(train_set)
train_set.columns = vec.feature_names_
train_set

X = train_set.iloc[:,:-1]
Y = train_set.iloc[:,-1]
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.25,random_state=33)
x_train
# y_train.shape

# the base estimator passed in does not need to be trained beforehand
AB = AdaBoostClassifier(tree.DecisionTreeClassifier(max_depth=3),n_estimators=10)
AB.fit(x_train,y_train)
AB.score(x_test,y_test)

test_set = pd.read_csv("./test_set.csv")
ID = test_set['ID']
test_set = test_set.drop(columns="ID")
test_set

# note: refitting a new DictVectorizer on the test set assumes it yields the same
# feature columns as the training set; normally one would reuse the fitted vectorizer
# and call transform() only
vec = DictVectorizer(sparse=False)
data = vec.fit_transform(test_set.to_dict(orient="records"))
data = pd.DataFrame(data)
data.columns = vec.feature_names_
data

predict = AB.predict_proba(data)
result = pd.DataFrame(predict).iloc[:,1]
ID = pd.DataFrame(ID)
ID.insert(1,"pred",result)
ID.to_csv("result.csv",index=False)
数据挖掘与数据仓库/大作业/.ipynb_checkpoints/SVM-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import random import matplotlib.pyplot as plt import numpy as np # - # %matplotlib inline import mr from mrcnn import model as modellib from mrcnn import visualize def get_ax(rows=1, cols=1, size=8): fig, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows)) return fig, ax work_path = os.path.join("E:", os.sep, "RCNNSmoke256Train") os.chdir(work_path) dataset = mr.MRDataset() dataset.load(work_path) dataset.prepare() # + class InferenceConfig(mr.TrainConfig): GPU_COUNT = 1 IMAGES_PER_GPU = 1 DETECTION_MIN_CONFIDENCE = 0.8 DETECTION_NMS_THRESHOLD = 0.2 inference_config = InferenceConfig() # Recreate the model in inference mode model = modellib.MaskRCNN(mode="inference", config=inference_config, model_dir="logs") find_last = True if find_last: model_path = model.find_last() print("Loading weights from ", model_path) else: model_path = "mask_rcnn_mr_0500.h5" model.load_weights(model_path, by_name=True) # - inference_config.display() fig_id = 1 # + image_id = random.choice(dataset.image_ids) original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(dataset, inference_config, image_id, use_mini_mask=False) results = model.detect([original_image], verbose=0) r = results[0] fig, (ax1, ax2) = get_ax(1,2) visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id, dataset.class_names, ax=ax1) visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'], dataset.class_names, r['scores'], ax=ax2) # - fig.savefig(f"pipes-train-{fig_id:03d}.png") fig_id += 1
Inference.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
plt.style.use('classic')
# %matplotlib inline

# # Class 10: Stochastic Time Series Processes
#
# Most models of the business cycle are *stochastic* time series models. That is, the models incorporate randomness as a determinant of equilibrium. Randomness is necessary for two reasons. First, from a practical point of view, any model will be incomplete and will not fit the data perfectly, and so randomness in a macroeconomic model is analogous to an error term in an OLS model. Second, from a philosophical perspective, randomness in a macroeconomic model reflects economists' consensus that fundamentally unpredictable forces cause the business cycle.
#
#
# ## Simulating Normal Random Variables with Numpy
#
# Recall that the `numpy.random` module has functions for generating (pseudo) random variables. Learn more about the module by reading the documentation: https://docs.scipy.org/doc/numpy/reference/routines.random.html
#
# We use the `numpy.random.normal()` function to create arrays of random draws from the normal distribution. The function takes three arguments:
# * `loc`: the mean of the distribution (default=0)
# * `scale`: the standard deviation of the distribution (default=1)
# * `size`: how many numbers to draw (default = `None`)
#
# The default is to draw numbers from the *standard normal* distribution.

# ### Example
#
# Draw 500 values each from the $\mathcal{N}(0,1)$ and $\mathcal{N}(0,2^2)$ distributions. Plot.
# +
# Set the seed for the random number generator to 126
np.random.seed(126)

# Create two arrays:
#     x: 500 draws from the normal(0,1) distribution
#     y: 500 draws from the normal(0,4) distribution
x = np.random.normal(loc=0,scale=1,size=500)
y = np.random.normal(loc=0,scale=2,size=500)

# Plot x and y
plt.plot(x,lw=2,alpha = 0.6,label='$\sigma=1$')
plt.plot(y,lw=2,alpha = 0.6,label='$\sigma=2$')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.grid()
# -

# ## The White Noise Process
#
# In the previous example, we created two variables that stored draws from normal distributions with means of zero but with different standard deviations. Both of the variables were simulations of *white noise processes*. A white noise process is a random variable $\epsilon_t$ with constant mean and constant variance. We are concerned only with the zero-mean white noise process, and we'll denote that a variable is a zero-mean white noise process with the following shorthand notation:
#
# \begin{align}
# \epsilon_t & \sim \text{WN}(0,\sigma^2),
# \end{align}
#
# where $\sigma^2$ is the variance of the process. Strictly speaking, a white noise process can follow any distribution as long as the mean and variance are constant, but we'll concentrate exclusively on white noise processes drawn from the normal distribution.
#
# ## The AR(1) Process
#
# A random variable $y_t$ is an *autoregressive process of order 1* or AR(1) process if it can be written in the following form:
#
# \begin{align}
# y_t & = \rho y_{t-1} + \epsilon_t,
# \end{align}
#
# where $\rho$ is a constant and $\epsilon \sim \text{WN}(0,\sigma^2)$. The AR(1) process is the stochastic analog of the first-order difference equation, where the random variable $\epsilon_t$ replaces the exogenous variable $w_t$.
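# For reference (an addition to these notes): when $|\rho|<1$, the AR(1) process is stationary with variance $\sigma^2/(1-\rho^2)$, a formula we can verify with a quick simulation:

```python
# Check the stationary variance of a stable AR(1) process: sigma^2 / (1 - rho^2)
import numpy as np

np.random.seed(126)
rho, sigma, T = 0.5, 1.0, 100_000

y = np.zeros(T)
eps = np.random.normal(loc=0, scale=sigma, size=T)
for t in range(T-1):
    y[t+1] = rho*y[t] + eps[t+1]

print(y.var())                 # sample variance of the simulated series
print(sigma**2/(1 - rho**2))   # theoretical value: 4/3
```

# With 100,000 periods, the sample variance lands very close to the theoretical value.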
#
# ### Example
#
# Simulate an AR(1) process for 101 periods ($t = 0,\ldots, 100$) using the following parameter values:
#
# \begin{align}
# \rho & = 0.5\\
# \sigma & = 1\\
# y_0 & = 0
# \end{align}
#
# Plot the simulated values for $y$.

# +
# Set the seed for the random number generator to 126
np.random.seed(126)

# Initialize values for T, y0, rho, and sigma
T = 101
y0 = 0
rho = 0.5
sigma = 1

# Initialize an array of zeros for y
y = np.zeros(T)

# Set the first value of y equal to y0
y[0] = y0

# Create a variable called 'epsilon' equal to an array containing T draws from the normal(0,sigma^2) process
eps = np.random.normal(loc=0,scale=sigma,size=T)

# Iterate over t in range(T-1) to compute y
for t in range(T-1):
    y[t+1] = rho*y[t] + eps[t+1]

# Plot y
plt.plot(y,lw=2)
plt.grid(linestyle=':')
# -

# The AR(1) process with $\rho = 0.5$ seems to fluctuate around the value $y=0$, and so the process appears to be stable.

# ### Example
#
# Simulate an AR(1) process for 101 periods ($t = 0,\ldots, 100$) using the following parameter values:
#
# \begin{align}
# \rho & = 1.5\\
# \sigma & = 1\\
# y_0 & = 0
# \end{align}
#
# Plot the simulated values for $y$.

# +
# Set the seed for the random number generator to 126
np.random.seed(126)

# Initialize values for T, y0, rho, and sigma
T = 101
y0 = 0
rho = 1.5
sigma = 1

# Initialize an array of zeros for y
y = np.zeros(T)

# Set the first value of y equal to y0
y[0] = y0

# Create a variable called 'epsilon' equal to an array containing T draws from the normal(0,sigma^2) process
eps = np.random.normal(loc=0,scale=sigma,size=T)

# Iterate over t in range(T-1) to compute y
for t in range(T-1):
    y[t+1] = rho*y[t] + eps[t+1]

# Plot y
plt.plot(y,lw=2)
plt.grid(linestyle=':')
# -

# The AR(1) process with $\rho = 1.5$ seems to be approaching infinity in magnitude, and so the process appears to be explosive.
#
# In general, like the first-order difference equation, if $|\rho| < 1$, the AR(1) process is stable. If $|\rho|>1$ then the process is explosive. A special case is the *random walk process*, which is an AR(1) process with $\rho = 1$. The random walk process has many important applications including asset pricing theory.
#
# The function in the next cell wraps the simulation steps above into a reusable AR(1) simulator.

# Define a function for simulating an AR(1) process. CELL PROVIDED
def ar1_sim(rho=0,sigma=1,y0=0,T=25):
    '''Function for simulating an AR(1) process for T periods

    Args:
        rho (float):   autoregressive parameter
        sigma (float): standard deviation of the white noise process
        y0 (float):    initial value of the process
        T (int):       number of periods to simulate

    Returns:
        numpy array
    '''

    # initialize y array
    y = np.zeros(T)
    y[0] = y0

    # draw random numbers for white noise process
    eps = np.random.normal(loc=0,scale=sigma,size=T-1)

    for t in range(T-1):
        y[t+1] = rho*y[t] + eps[t]

    return y

# ### Example:
#
# Use the `ar1_sim()` function to simulate an AR(1) process for 201 periods ($t = 0,\ldots, 200$) using the following parameter values:
#
# \begin{align}
# \rho & = -0.99\\
# \sigma & = 0.5\\
# y_0 & = 0
# \end{align}
#
# Set the seed for the NumPy random number generator to 126. Plot the simulated values for $y$.

# +
# Set the seed for the random number generator to 126
np.random.seed(126)

# Simulate y and plot
plt.plot(ar1_sim(rho=-0.99,sigma=0.5,y0=0,T=201))
plt.grid()
# -

# ## Application: Estimate TFP as an AR(1) process
#
# It is routine to model quarterly fluctuations in TFP as an AR(1) process and we will encounter this in our business cycle models. Here we will fit the following AR(1) model to US data:
#
# \begin{align}
# \log\left(A_t/A^{trend}_t\right) & = \rho \log\left(A_{t-1}/A^{trend}_{t-1}\right) + \epsilon_t
# \end{align}
#
# where $\log\left(A_t/A^{trend}_t\right) = \log A_t - \log A^{trend}$ is the log-deviation of TFP from its trend and $\epsilon_t$ is a white noise process with mean 0 and variance $\sigma^2$.
#
# ### The Data
# The file `rbc_data_actual_trend.csv`, available at https://github.com/letsgoexploring/econ126/raw/master/Data/Csv/rbc_data_actual_trend.csv, contains actual and trend data for real GDP per capita, real consumption per capita, real investment per capita, real physical capital per capita, TFP, and hours per capita at quarterly frequency. The GDP, consumption, investment, and capital data are in terms of 2012 dollars. Hours is measured as an index with the value in October 2012 set to 100. All of the data are *real* quantities. That is, there are no *nominal* quantities like money or inflation or a nominal interest rate. The reason is that the first theory that we will encounter is called *real business cycle* or RBC theory and, in that theory, there is no place for nominal quantities. RBC theory seeks to explain fluctuations in real quantities as being primarily due to TFP shocks; i.e., shocks to the production function.
#
# ### Objectives
#
# 1. Import TFP data (trend and actual) and compute cyclical component as log-deviation from trend.
# 2. Construct a scatter plot of log-deviation of TFP from trend against *lagged* log-deviation of TFP from trend.
# 3. Estimate the AR(1) model of log-deviation of TFP from trend to obtain estimates of $\rho$ and $\sigma$.

# Read rbc_data_actual_trend.csv into a Pandas DataFrame called 'df' with the first column set as the index and parse_dates=True
df = pd.read_csv('https://github.com/letsgoexploring/econ126/raw/master/Data/Csv/rbc_data_actual_trend.csv',index_col=0,parse_dates=True)

# Construct a plot of TFP with its trend with:
# 1. Actual line: blue with lw=2, alpha=0.6, label = 'Actual'
# 2. Trend line: red with lw=2, alpha=0.6, label = 'Trend'
plt.plot(df['tfp'],lw=2,alpha=0.6,label='Actual')
plt.plot(df['tfp_trend'],'r',lw=2,alpha=0.6,label='Trend')
plt.title('TFP for US from '+df.index[0].strftime('%B %Y')+' to '+df.index[-1].strftime('%B %Y'))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.grid()

# Now we need to compute the log-deviation of TFP from its trend. Note that since $\log(a/b) = \log(a) - \log(b)$, we can write:
#
# \begin{align}
# \log A_t - \log A^{trend} & = \log\left(A_t/A^{trend}_t\right)
# \end{align}
#
# The term on the right will make the AR(1) model easier to read.
#
# \begin{align}
# \log\left(A_t/A^{trend}_t\right) & = \rho \log\left(A_{t-1}/A^{trend}_{t-1}\right) + \epsilon_t
# \end{align}

# Create a new column in df called tfp_cycle equal to the log difference between actual TFP and its trend:
df['tfp_cycle'] = np.log(df['tfp']/df['tfp_trend'])

# Plot the log deviation of TFP from its trend
plt.plot(df['tfp_cycle'],lw=2,alpha=0.6)
plt.title('TFP for US from '+df.index[0].strftime('%B %Y')+' to '+df.index[-1].strftime('%B %Y'))
plt.grid()

# +
# Create a new column in df called tfp_cycle_lag that, for each date, contains values
# in 'tfp_cycle' at the previous date
df['tfp_cycle_lag'] = df['tfp_cycle'].shift()

# Print the first five rows of only the 'tfp_cycle' and 'tfp_cycle_lag' columns. PROVIDED
print(df[['tfp_cycle','tfp_cycle_lag']].head())
# -

# Notice that there is a missing value in the `tfp_cycle_lag` column for the first date in the index because there is no prior observation in the `tfp_cycle` column. To proceed with the estimation, we need to get rid of the row with the missing value.

# +
# Use the dropna() method on df to remove the row with the missing value
df = df.dropna()

# Print the first five rows of only the 'tfp_cycle' and 'tfp_cycle_lag' columns. PROVIDED
print(df[['tfp_cycle','tfp_cycle_lag']].head())
# -

# Next, we should make a scatter plot of log-deviation of TFP from trend against the one-period lag of log-deviation of TFP from trend to see if our AR(1) model is a good idea.

# Construct a scatter plot of log-deviation of TFP from trend against the one-period lag of log-deviation
# of TFP from trend with:
# 1. scatter points sized at least 50, opacity (alpha) no greater than 0.25
# 2. x- and y-axis limits: [-0.04,0.04]
plt.scatter(df['tfp_cycle_lag'],df['tfp_cycle'],s=50,alpha=0.25)
plt.xlabel('Lag log-deviation from trend')
plt.ylabel('Current log-deviation from trend')
plt.title('TFP for US from '+df.index[0].strftime('%B %Y')+' to '+df.index[-1].strftime('%B %Y'))
plt.xlim([-0.04,0.04])
plt.ylim([-0.04,0.04])
plt.grid()

# Now we use StatsModels to fit the model. First we estimate the AR(1) model to obtain an estimate of $\rho$. Then we find the standard deviation of the residuals of the regression to estimate $\sigma$.

# +
# Create variable 'X' to be the independent variable of the OLS model. Do not add a constant to X
X = df['tfp_cycle_lag']

# Create variable 'Y' to be the dependent variable of the OLS model.
Y = df['tfp_cycle']

# Create a variable called 'model' that initializes the OLS model
model = sm.OLS(Y,X)

# Create a variable called 'results' that stores the results of the estimated OLS model
results = model.fit()

# Print output of the summary2() method of results
print(results.summary2())
# -

# Estimated coefficients are stored as a Pandas `Series` in the `params` attribute of `results`. Index values correspond to the names of columns in the independent variable `X`.

# Print the contents of the params attribute of results
print(results.params)

# Create a variable called 'rho' that equals the value of the estimated coefficient on df['tfp_cycle_lag']
rho = results.params['tfp_cycle_lag']

# The residuals from the regression are stored in an attribute of `results` called `resid`.

# +
# Create a variable called 'sigma' that equals the standard deviation of the residuals of the regression
sigma = results.resid.std()

# Print the value of sigma
print(sigma)
# -

# And that's how you estimate the parameters of an AR(1) model of the cyclical component of TFP for the US.

# ## Application: A Stochastic Solow Growth Model (Optional)
#
# Consider the Solow growth model written in "per worker" terms:
#
# \begin{align}
# y_t & = A_tk_t^{\alpha}\\
# k_{t+1} & = i_t + (1-\delta)k_{t}\\
# y_t & = c_t + i_t\\
# c_t & = (1-s)y_t,
# \end{align}
#
# where $y_t$ is output per worker, $k_t$ is capital per worker, $c_t$ is consumption per worker, $i_t$ is investment per worker, and $A_t$ is time-varying TFP. Assume:
#
# \begin{align}
# \log A_{t+1} & = \rho \log A_{t} + \epsilon_{t+1}
# \end{align}
#
# where $\epsilon_t$ is a white noise process with mean 0 and variance $\sigma^2$. Since capital and TFP are the only two variables that depend on past values of themselves, they are the two state variables of the model. Therefore, we can simulate the model in two steps. First, simulate $k_t$ and $A_t$ using the following two laws of motion:
#
# \begin{align}
# k_{t+1} & = sA_tk_t^{\alpha} + (1-\delta)k_{t} \label{eqn:capital_solution}\\
# \log A_{t+1} & = \rho \log A_{t} + \epsilon_{t+1}. \label{eqn:tfp_solution}
# \end{align}
#
# Second, compute simulated values for $y_t$, $c_t$, and $i_t$ using the following static relationships:
#
# \begin{align}
# y_t & = A_tk_t^{\alpha}\\
# i_t & = sA_tk_t^{\alpha}\\
# c_t & = (1-s)A_tk_t^{\alpha}.
# \end{align}
#
# In the previous example, we estimated $\rho$ and $\sigma$ for the US so let's use those values for this simulation. For the other parameters, use the following values for the simulation:
#
# | $A_0$ | $k_0$ | $s$ | $\alpha$ | $\delta $ | $T$  |
# |-------|-------|-----|----------|-----------|------|
# | 1     | 8.43  | 0.1 | 0.35     | 0.025     | 201  |
#
# Where $T$ is the total number of simulation periods (i.e., $t$ will range from $0$ to $200$).
#
# ### Function
#
# The function in the next cell simulates a stochastic Solow growth model. It returns a `DataFrame` with columns equal to the log-deviations of the simulated variables relative to the trend (i.e., nonstochastic steady state) implied by the model.

# Define a function that returns a DataFrame of simulated values from the Solow model with exogenous labor and TFP growth. CELL PROVIDED
def solow_stochastic(s,alpha,delta,k0,A0,rho,sigma,T):
    '''Function for computing a simulation of the Solow growth model with exogenous labor and TFP growth.
        y[t] = A[t]*k[t]^alpha
        k[t+1] = i[t] + (1-delta)*k[t]
        y[t] = c[t] + i[t]
        c[t] = (1-s)*y[t]
        A[t+1] = exp(rho*logA[t] + epsilon[t+1])

    Args:
        s (float):      Saving rate
        alpha (float):  Capital share in Cobb-Douglas production function
        delta (float):  Capital depreciation rate
        T (int):        Number of periods to simulate
        k0 (float):     Initial value of capital per worker
        A0 (float):     Initial TFP
        rho (float):    AR coefficient for log A[t]
        sigma (float):  Standard deviation of shock to log A[t]

    Returns:
        Pandas DataFrame
    '''

    # Create epsilon values
    epsilon = np.random.normal(scale=sigma,size=T)

    # Initialize capital values
    capital = np.zeros(T)

    # Set first value of capital equal to k0
    capital[0] = k0

    # Initialize TFP values
    tfp = np.zeros(T)

    # Set first value of TFP equal to A0
    tfp[0] = A0

    # Iterate over t in range(T-1) to update subsequent values in the capital and tfp arrays
    for t in range(T-1):
        capital[t+1] = s*tfp[t]*capital[t]**alpha + (1-delta)*capital[t]
        tfp[t+1] = np.exp(rho*np.log(tfp[t]) + epsilon[t])

    # Compute the values of the other aggregate variables
    output = tfp*capital**alpha
    consumption = (1-s)*output
    investment = s*output

    # Compute steady state (or trend) of endogenous vars
    capital_ss = (s/delta)**(1/(1-alpha))
    tfp_ss = 1
    output_ss = tfp_ss*capital_ss**alpha
    consumption_ss = (1-s)*output_ss
    investment_ss = s*output_ss

    # Put simulated data into a DataFrame
    df = pd.DataFrame({'output_log_dev':np.log(output/output_ss),
                       'consumption_log_dev':np.log(consumption/consumption_ss),
                       'investment_log_dev':np.log(investment/investment_ss),
                       'capital_log_dev':np.log(capital/capital_ss),
                       'tfp_log_dev':np.log(tfp/tfp_ss)})

    # Return the simulated data
    return df

# ### Simulate the Stochastic Solow Model

# +
# CELL PROVIDED
# Set parameters for simulation
alpha=0.35
s = 0.1
delta = 0.025
k0 = 8.43
A0=1
T = 201

np.random.seed(126)

# Simulate the model and store output in a variable called 'solow_df'
solow_df = solow_stochastic(s,alpha,delta,k0,A0,rho,sigma,T)

# Plot
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
solow_df.plot(ax=ax,grid=True)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
# -

# ### Simulation Results
#
# Compute the standard deviation of the simulated data and the correlation coefficients of the simulated data.

# Standard deviations. CELL PROVIDED
solow_df.std()*100

# Correlation coefficients. CELL PROVIDED
solow_df.corr()

# It looks like the stochastic Solow model does a good job replicating the volatility (standard deviation) of output and consumption relative to the data. However, the simulated investment is too small by about a factor of 6. The stochastic Solow model captures the correlation between output, consumption, and investment, but it implies perfect correlation, which is too much.

# ## The Random Walk Process (Optional)
#
# The *random walk process* is an AR(1) process with $\rho=1$:
#
# \begin{align}
# y_t = y_{t-1} + \epsilon_t
# \end{align}
#
# The random walk process has an important place in finance since the evidence suggests that stock prices follow a random walk process.
#
# ### Example
#
# Simulate 7 random walk processes for 501 periods. Set $\sigma = 1$. Plot all 7 simulated processes on the same axes.

# +
# CELL PROVIDED
np.random.seed(126)

for i in range(7):
    plt.plot(ar1_sim(rho=1,T=501))

plt.title('Seven random walk processes')
plt.grid()
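# A defining feature of the random walk is that its variance grows linearly with time: $\text{Var}(y_t) = t\sigma^2$, so the simulated paths fan out as $t$ increases. A quick Monte Carlo check of this property (an addition to these notes, not part of the original lecture):

```python
# Variance of a random walk grows linearly: Var(y_t) = t * sigma^2
import numpy as np

np.random.seed(126)
n_walks, T, sigma = 2000, 500, 1.0

# each row is one random walk: the cumulative sum of white noise draws
walks = np.cumsum(np.random.normal(scale=sigma, size=(n_walks, T)), axis=1)

# cross-sectional variance at the final date should be close to T * sigma^2 = 500
print(walks[:, -1].var())
```

# This linear growth in variance is why a random walk, unlike a stable AR(1) process, never settles into a stationary distribution.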
Lecture Notebooks/Econ126_Class_10.ipynb
# -*- coding: utf-8 -*-
# # Sensitivity indices for Sobol's G function
#
# <!-- dom:AUTHOR: <NAME> -->
# <!-- Author: -->
# **<NAME>**, [<EMAIL>](mailto:<EMAIL>)
#
# In this notebook we
# illustrate how sensitivity indices may be computed
# for Sobol's G function
# [Archer et al. 1997](https://www.tandfonline.com/doi/abs/10.1080/00949659708811825). We will demonstrate how both
# Monte Carlo methods and
# polynomial chaos expansions may be used to
# estimate both first order indices and
# total indices.
#
# The G function was chosen as an example for two reasons:
#
# * The sensitivity indices have analytical solutions [Saltelli et al. 2010](https://www.sciencedirect.com/science/article/pii/S0010465509003087).
#
# * The G function can be used to generate test cases over a wide spectrum of difficulties.
#
# The notebook has four sections; in the first section the G
# function is
# presented along with the analytical expressions for the sensitivity
# indices. The second and third sections demonstrate how polynomial
# chaos
# expansions and Monte Carlo methods may be used to approximate
# the sensitivity
# indices. The final section is devoted to a comparison
# of these two numerical
# approaches.
#
# For all sections we make use of the interactive features of the
# notebooks, which allow you to experiment with how the values of the G
# function
# parameters and sample size influence the sensitivity indices
# for the G function.
# The intention is to let you gain understanding,
# experience and intuition in an
# efficient and convenient manner.
#
# Run the first cell to initialise plotting and
# printing modules for
# later use (and system settings).
# + attributes={"classes": [], "id": "", "n": "1"} # ipython magic # %matplotlib notebook # %load_ext autoreload # %autoreload 2 import os, sys, inspect # Use this if you want to include modules from a subfolder cmd_subfolder = os.path.realpath(os.path.abspath(os.path.join(os.path.split(inspect.getfile( inspect.currentframe() ))[0],"python_source"))) if cmd_subfolder not in sys.path: sys.path.insert(0, cmd_subfolder) # %run matplotlib_header import matplotlib.pyplot as plt from present_output import print_vectors_relerror, print_3vectors_relerror # - # ## Analytical computation of sensitivity indices for Sobol's G function # # <div # id="sec:G_functions"></div> # # A function which has proved to be useful as a test # function with analytical solutions for the sensitivity indices is Sobol's G # function which is defined as: # # <!-- Equation labels as ordinary links --> # <div id="eq:1"></div> # # $$ # \begin{equation} # Y=G(X) = G(X_1, X_2,\ldots,X_k,a_1, a_2,\ldots,a_k) = # \prod_{i=1}^{k} g_i \label{eq:1} \tag{1} # \end{equation} # $$ # # where # # <!-- Equation labels as ordinary links --> # <div id="eq:2"></div> # # $$ # \begin{equation} # g_i = \frac{|{4 \, X_i}-2|+{a}_i}{1+{a}_i} \label{eq:2} \tag{2} # \end{equation} # $$ # # The input factors $X_i$ are assumed to be uniformly # distributed in the # interval $[0,1]$ with positive real-number coefficients $a_i$ $(a_i \geq 0).$ The number of # factors *k* can be varied as the reader # pleases, although the minimum # number to produce a meaningful inference is set at # three. # # As you will explore below, the sensitivity $S_i$ of $G(X)$ in # ([1](#eq:1)) with respect to a specific input factor $X_i$, will depend # on the # value of the corresponding coefficient $a_i$; small values of # $a_i$ (e.g. # $a_i=0$) will yield a high corresponding $S_i$, meaning # that $X_i$ is an # important/influential variable on the variance or # uncertainty of $G(X)$. 
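# The inverse relationship between $a_i$ and the importance of $X_i$ can be made concrete with the conditional-variance formula $V_i = (1/3)/(1+a_i)^2$ derived in the next subsection. A small numerical illustration (an addition to the notebook):

```python
# Conditional variance V_i = (1/3) / (1 + a_i)^2 for a few values of a_i.
# The larger a_i is, the smaller the contribution of X_i to the variance of G.
def Vi(ai):
    return 1.0 / (3.0 * (1.0 + ai)**2)

for ai in [0.0, 1.0, 4.5, 9.0, 99.0]:
    print(ai, Vi(ai))
```

# $a_i = 0$ gives the largest possible value $V_i = 1/3$, while $a_i = 9$ already makes $V_i$ a factor of 100 smaller.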
# # We have # implemented Sobol's G function in ([1](#eq:1)) and ([2](#eq:2)) # in the code # snippet below: # + attributes={"classes": [], "id": "", "n": "2"} # model function import numpy as np from numba import jit @jit def g(Xj,aj): return (np.abs(4*Xj-2.)+aj)/(1+aj) @jit def G(X,a): G_vector=np.ones(X.shape[0]) for j, aj in enumerate(a): np.multiply(G_vector,g(X[:,j],aj),G_vector) return G_vector # - # For computational efficiency we make use of `just in time` compilation # from # `numba`. If you have not installed `numba`, you # may comment out # the lines with `@jit` - the cells will run anyway, # albeit probably somewhat slower. # # The sensitivity indices $S_i$ and $S_{Ti}$ for $Y=G(X)$ in # eq. # ([1](#eq:1)) may be derived as outlined in [Saltelli et al. 2010](https://www.sciencedirect.com/science/article/pii/S0010465509003087). # The conditional variance $V_i$ is: # # <!-- Equation labels as ordinary links --> # <div id="eq:Vi"></div> # # $$ # \begin{equation} # V_i = V_{X_i} \left (E_{X_{\sim i}} (Y \;| \;X_{i}) \right) = # \frac{1/3}{(1+a_i)^2} \label{eq:Vi} \tag{3} # \end{equation} # $$ # # <!-- while the conditional variance $V\left (E(Y \; | \; X_{i_1}, X_{i_1}, # X_{i_1}, \ldots, X_{i_s}) \right)$ is given by: --> # # <!-- !bt --> # <!-- # \begin{equation} --> # <!-- V\left (E(Y \; | \; X_{i_1}, X_{i_1}, X_{i_1}, \ldots, # X_{i_s}) \right) = \prod_{j=1}^{s} \left (1 + V_j \right) -1 <div # id="eq:Vscond"></div> --> # <!-- \end{equation} --> # <!-- !et --> # # while the # $V_{T_I}$ and the total variance $V$ are given by: # # <!-- Equation labels as ordinary links --> # <div id="eq:4"></div> # # $$ # \begin{equation} # V_{T_i} = V_i \; \prod_{j\neq i} (1+V_j) \qquad \text{and} # \qquad V = \prod_{i=1}^k (1+V_i) -1 # \label{eq:4} \tag{4} # \end{equation} # $$ # # Consequently the first order sensitivity indices $S_i$ of $Y=G(X)$, are given by # # <!-- Equation labels as ordinary links --> # <div id="_auto1"></div> # # $$ # \begin{equation} # 
S_i=\frac{V_i}{V}
# \label{_auto1} \tag{5}
# \end{equation}
# $$
#
# <!-- The expressions for the variance obtained when keeping one parameter -->
# <!-- fixed and varying all the others can be found below along with the -->
# <!-- expression for the total variance. The Sensitivity indices -->
# <!-- expressions can be easily retrieved from these. -->
#
# In the code snippet below one can interactively experiment with how the
# values of $a_i$ affect the corresponding $S_i$, i.e. the
# sensitivity of $G$ with respect to $X_i$.

# + attributes={"classes": [], "id": "", "n": "3"}
# Analytical computations
f, ax = plt.subplots(1,1)
f.suptitle('G function variable a coefficients')

# import modules
import numpy as np

def Vi(ai):
    return 1/(3*(1+ai)**2)

def V(a_prms):
    D=1
    for a in a_prms:
        D*=(1+Vi(a))
    return D-1

def S_i(ai,a):
    return Vi(ai)/V(a)

def S_T(ai,a):
    Dtot=V(a)
    return (Dtot+1)/(Vi(ai)+1)*Vi(ai)/Dtot

def update_Sobol(**kwargs):
    ax.clear()
    for key, value in kwargs.items():
        #find indx and value for a_prms
        pre,post = key.split("a")
        assert pre==""
        a_prms[int(post)] = value

    width=0.4
    x_tick_list=np.arange(len(a_prms))+1
    ax.set_xticks(x_tick_list+width/2)
    x_labels=['x'+str(i) for i in np.arange(len(a_prms))]
    ax.set_xticklabels(x_labels)
    ax.set_ylim(0,1)

    for i, a in enumerate(a_prms):
        Si[i]=S_i(a,a_prms)
        ST[i]=S_T(a,a_prms)

    ax.bar(x_tick_list,Si,width,color='blue')
    ax.bar(x_tick_list+width,ST,width,color='red')
    ax.legend(['First order indices','Total indices'])

# Analytical sliders
k=4 #number of prms
a_lbls=['a'+str(i) for i in np.arange(k)]
Si=np.zeros(k)
ST=np.zeros(k)
a_prms=np.zeros(k)

import ipywidgets as widgets
my_sliders=[]
for i in range(k):
    my_sliders.append(widgets.FloatSlider(min=0, max=15, value=6.52, description=a_lbls[i]))

slider_dict = {slider.description:slider for slider in my_sliders}
ui_left = widgets.VBox(my_sliders[0::2])
ui_right = widgets.VBox(my_sliders[1::2])
ui=widgets.HBox([ui_left,ui_right])

out=widgets.interactive_output(update_Sobol, slider_dict)
display(ui,out)
# -

# Do you observe the effect stated above, that small
# values of
# $a_i$ (e.g. $a_i=0$) will yield high corresponding $S_i$? You may
# change all the parameters simultaneously or one at a time.
#
# You may also
# change the number of parameters *k* directly in the
# python code, however note
# that this will affect the
# computing time. In particular, the
# computing time for the numerical
# approximations with *chaospy* will be sensitive
# to *k*.
#
# If more than one factor has a low $a_i$, high order interactions among factors will
# be tangible.
#
# * The extreme case is setting all $a_i$'s to zero. In this circumstance, all factors will interact and will be of equal importance -> check it out!
#
# * How would you assess a setting in which only some $a_i$'s are zero and all others are large (e.g. $a_i \geq 9$)?
#
# Note that the G function has a singularity in each of its $k$ dimensions corresponding to the points $X_i = 1/2$.
#
# ## Approximation of the sensitivity indices for Sobol's G function with spectral expansions
#
# In this section we show how the spectral expansion module [chaospy](https://github.com/jonathf/chaospy) may
# be used to compute the Sobol indices for Sobol's G function. A more in-depth treatment of
# `chaospy` and its usage is provided in the separate notebook [A
# practical introduction to polynomial chaos with the `chaospy`
# package](introduction_gpc.ipynb). Furthermore, you may find our previous "A
# Guide to Uncertainty Quantification and Sensitivity Analysis for
# Cardiovascular
# Applications" [Eck et al. 2015](https://onlinelibrary.wiley.com/doi/full/10.1002/cnm.2755) a useful
# introduction to how polynomial chaos expansions may be used for
# UQ&S. We are therefore focusing on
# the application of the spectral
# expansions and how this benchmarks against the
# analytical solutions for the
# indices, rather than presenting the spectral
# expansion theory.
# + attributes={"classes": [], "id": "", "n": "4"}
# Si with chaospy for G-function
import chaospy as cp
cp.seed(0)
jpdf = cp.Iid(cp.Uniform(),k)
polynomial_order = 4
poly = cp.orth_ttr(polynomial_order, jpdf)
#Ns=2*len(poly)
Ns=500
print('Number of samples for chaospy: ', Ns)
X=jpdf.sample(Ns)
G_sample=G(X.transpose(),a_prms)
approx = cp.fit_regression(poly, X, G_sample)

exp_pc = cp.E(approx, jpdf)
std_pc = cp.Std(approx, jpdf)
print("Statistics polynomial chaos\n")
print('\n E(Y) | std(Y) \n')
print('pc : {:2.5f} | {:2.5f}'.format(float(exp_pc), std_pc))

S_pc = cp.Sens_m(approx, jpdf)  #Si from chaospy
S_tpc = cp.Sens_t(approx, jpdf)  #Total effect sensitivity index from chaospy

row_labels= ['S_'+str(idx) for idx in range(k)]
col_labels=['Chaospy','Analytical','Error (%)']

print("\nFirst Order Indices")
print_vectors_relerror(S_pc,Si,col_labels,row_labels,[3,3,0])

print("\n\nTotal Effect Indices")
row_labels= ['St_'+str(idx) for idx in range(k)]
print_vectors_relerror(S_tpc,ST,col_labels,row_labels,[3,3,0])
# -

# In the code-snippet above we compare both the first order indices `S_pc`
# and the total indices `S_tpc` computed with chaospy, and print them in
# columns alongside the analytical indices and the relative errors. You may
# experiment with how the error is affected by the number of samples `Ns`.
#
# ### Spectral expansions for computation of Sobol's sensitivity indices
#
# To better facilitate and encourage your experimentation with the impact of
# changes in the coefficients $a_i$, the number of samples, and the polynomial
# order for the spectral expansions in the chaospy module, we make use of
# interactive widgets with sliders for all these coefficients and variables.
#
# Run the code snippet below, and you will see sliders for $a_i$, the number
# of samples `NS` and the polynomial order. Once you change one of the slider
# values, the chaospy approximations of the sensitivity indices are
# recomputed and the new results will be presented. Bear in mind that the
# computational time is dependent on the number of samples and the _cpu_
# capacity of the machine you are running this notebook on.

# + attributes={"classes": [], "id": "", "n": "5"}
# chaospy G-function with sliders
import chaospy as cp

if 'jpdf' not in globals():
    jpdf = cp.Iid(cp.Uniform(),k)  #the joint pdf
    print('Create the joint pdf')

def update_chaospy_G(**kwargs):
    NS=kwargs['NS']
    del kwargs['NS']
    polynomial_order=kwargs['polynomial_order']
    del kwargs['polynomial_order']
    for key, value in kwargs.items():  #find indx and value for a_prms
        pre,post = key.split("a")
        assert pre==""
        a_prms[int(post)] = value

    X=jpdf.sample(NS)
    print('Number of samples: ',NS)
    G_sample=G(X.transpose(),a_prms)
    poly = cp.orth_ttr(polynomial_order, jpdf)
    approx = cp.fit_regression(poly, X, G_sample)

    exp_pc = cp.E(approx, jpdf)
    std_pc = cp.Std(approx, jpdf)
    print("Statistics polynomial chaos\n")
    print('\n E(Y) | std(Y) \n')
    print('pc : {:2.5f} | {:2.5f}'.format(float(exp_pc), std_pc))

    S_pc = cp.Sens_m(approx, jpdf)  #Si from chaospy
    S_tpc = cp.Sens_t(approx, jpdf)  #Total effect sensitivity index from chaospy

    row_labels= ['S_'+str(idx) for idx in range(len(a_prms))]
    col_labels=['Chaospy','Analytical','Error (%)']

    print("\nFirst Order Indices")
    print_vectors_relerror(S_pc,Si,col_labels,row_labels,[3,3,0])

    print("\n\nTotal Effect Indices")
    row_labels= ['St_'+str(idx) for idx in range(k)]
    print_vectors_relerror(S_tpc,ST,col_labels,row_labels,[3,3,0])

if (len(my_sliders)==len(a_prms)):  #add sliders if not added before
    my_sliders.append(widgets.IntSlider(min=500,max=5100,step=250,value=500,description='NS'))  #add slider for samples
    my_sliders.append(widgets.IntSlider(description='polynomial_order', min=1,max=6,value=4))  #add slider for polynomial order

slider_dict = {slider.description:slider for slider in my_sliders}  #add the sliders in the dictionary
ui_left = widgets.VBox(my_sliders[0::2])
ui_right = widgets.VBox(my_sliders[1::2])
ui=widgets.HBox([ui_left,ui_right])
out=widgets.interactive_output(update_chaospy_G, slider_dict)
display(ui,out)
# end chaospy G-function with sliders
# -

# ### Monte Carlo simulations for computation of sensitivity indices
#
# The snippet of code below allows one to evaluate the Sobol sensitivity indices for the same $G$ function in a Monte Carlo simulation. In analogy with the previous example, one can again vary the $a_i$ coefficients along with the number of runs of the simulation.

# + attributes={"classes": [], "id": "", "n": "6"}
# Si with monte carlo for G-function
import monte_carlo as mc

a_prms=np.ones(k)

if 'jpdf' not in globals():
    cp.seed(0)
    jpdf = cp.Iid(cp.Uniform(),k)  #the joint pdf
    print('Create the joint pdf')

def update_mc_G(**kwargs):
    Ns=kwargs['NS']
    del kwargs['NS']
    for key, value in kwargs.items():  #find indx and value for a_prms
        pre,post = key.split("a")
        assert pre==""
        a_prms[int(post)] = value

    print('Number of samples for Monte Carlo: ', Ns)
    X=jpdf.sample(Ns)
    A, B, C = mc.generate_sample_matrices_mc(Ns, k, jpdf, sample_method='R')  #A, B, C already transposed
    G_A_sample = G(A, a_prms)
    G_B_sample = G(B, a_prms)
    G_C_sample_list = np.array([G(C_i, a_prms) for C_i in C]).T

    exp_mc = np.mean(G_A_sample)
    std_mc = np.std(G_A_sample)
    print("Statistics Monte Carlo\n")
    print('\n E(Y) | std(Y) \n')
    print('mc : {:2.5f} | {:2.5f}'.format(float(exp_mc), std_mc))

    S_mc, S_tmc = mc.calculate_sensitivity_indices_mc(G_A_sample, G_B_sample, G_C_sample_list)

    row_labels= ['S_'+str(idx) for idx in range(k)]
    col_labels=['Monte Carlo','Analytical','Error (%)']

    print("\nFirst Order Indices")
    import analytical_g_function as agf
    Si=np.zeros(k)
    ST=np.zeros(k)
    for i, a in enumerate(a_prms):
        Si[i]=agf.S_i(a,a_prms)
        ST[i]=agf.S_T(a,a_prms)
    print_vectors_relerror(S_mc, Si, col_labels, row_labels, [3,3,0])

    print("\n\nTotal Effect Indices")
    row_labels= ['St_'+str(idx) for idx in range(k)]
    print_vectors_relerror(S_tmc, ST, col_labels, row_labels, [3,3,0])

## Set up the sliders
mc_sliders=[]
for i in range(k):
mc_sliders.append(widgets.FloatSlider(min=0, max=15, value=6.52, description=a_lbls[i])) mc_sliders.append(widgets.IntSlider(min=500,max=25000,step=250,value=500,description='NS')) #add slider for samples slider_dict = {slider.description:slider for slider in mc_sliders} #add the sliders in the dictionary ui_left = widgets.VBox(mc_sliders[0::2]) ui_right = widgets.VBox(mc_sliders[1::2]) ui=widgets.HBox([ui_left,ui_right]) out=widgets.interactive_output(update_mc_G, slider_dict) display(ui,out) # - # ### Comparison of MC and PC for sensitivity indices computations # # Finally, the performance of the two approaches can be compared by benchmarking against the analytical values of the indices. Which approach performs better? Under which combination of $a_i$ coefficients? How many runs are required in order to get an error below a given threshold (e.g. 5%, 1%)? # + attributes={"classes": [], "id": "", "n": "7"} # Si comparison of mc and pc for G-function import monte_carlo as mc a_prms=np.ones(k) if not 'jpdf' in globals(): cp.seed(0) jpdf = cp.Iid(cp.Uniform(),k) #the joint pdf print('Create the joint pdf') def update_cmp(**kwargs): NsMC=kwargs['NsMC'] del kwargs['NsMC'] NsPC=kwargs['NsPC'] del kwargs['NsPC'] for key, value in kwargs.items(): #find indx and value for a_prms pre,post = key.split("a") assert pre=="" a_prms[int(post)] = value ## Monte Carlo update print('Number of samples for Monte Carlo: ', NsMC) X_mc=jpdf.sample(NsMC) A, B, C = mc.generate_sample_matrices_mc(NsMC, k, jpdf, sample_method='R') #A, B, C already transposed G_A_sample = G(A, a_prms) G_B_sample = G(B, a_prms) G_C_sample_list = np.array([G(C_i, a_prms) for C_i in C]).T exp_mc = np.mean(G_A_sample) std_mc = np.std(G_A_sample) print("Statistics Monte Carlo\n") print('\n E(Y) | std(Y) \n') print('mc : {:2.5f} | {:2.5f}'.format(float(exp_mc), std_mc)) S_mc, S_tmc = mc.calculate_sensitivity_indices_mc(G_A_sample, G_B_sample, G_C_sample_list) ## PC update Xpc=jpdf.sample(NsPC) 
G_sample=G(Xpc.transpose(),a_prms) approx = cp.fit_regression(poly, Xpc, G_sample) exp_pc = cp.E(approx, jpdf) std_pc = cp.Std(approx, jpdf) print("Statistics polynomial chaos\n") print('\n E(Y) | std(Y) \n') print('pc : {:2.5f} | {:2.5f}'.format(float(exp_pc), std_pc)) S_pc = cp.Sens_m(approx, jpdf) #Si from chaospy S_tpc = cp.Sens_t(approx, jpdf) #Total effect sensitivity index from chaospy import analytical_g_function as agf Si=np.zeros(k) ST=np.zeros(k) for i, a in enumerate(a_prms): Si[i]=agf.S_i(a,a_prms) ST[i]=agf.S_T(a,a_prms) row_labels= ['S_'+str(idx) for idx in range(k)] col_labels=['Monte Carlo','Err (%)','PolyChaos','Err (%)'] print("\nFirst Order Indices") print_3vectors_relerror(S_mc,S_pc, Si, col_labels, row_labels, [3,0,3,0]) print("\n\nTotal Effect Indices") row_labels= ['St_'+str(idx) for idx in range(k)] print_3vectors_relerror(S_tmc,S_tpc, ST, col_labels, row_labels, [3,0,3,0]) ## Set up the sliders cmp_sliders=[] for i in range(k): cmp_sliders.append(widgets.FloatSlider(min=0, max=15, value=6.52, description=a_lbls[i])) cmp_sliders.append(widgets.IntSlider(min=500,max=100000,step=250,value=500,description='NsMC')) #slider for MC samples cmp_sliders.append(widgets.IntSlider(min=500,max=2000,step=250,value=500,description='NsPC')) #slider for PC samples slider_dict = {slider.description:slider for slider in cmp_sliders} #add the sliders in the dictionary ui_left = widgets.VBox(cmp_sliders[0::2]) ui_right = widgets.VBox(cmp_sliders[1::2]) ui=widgets.HBox([ui_left,ui_right]) out=widgets.interactive_output(update_cmp, slider_dict) display(ui,out)
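# The sensitivity computations above rely on the external `monte_carlo`
# helper module (`generate_sample_matrices_mc`,
# `calculate_sensitivity_indices_mc`), whose source is not listed in this
# notebook. A rough, self-contained sketch of this kind of estimator (the
# Saltelli/Jansen pick-and-freeze form, which is an assumption and not
# necessarily the module's exact implementation) could look like:

```python
import numpy as np

def G(X, a):
    # Sobol's G function, vectorized over the sample axis
    return np.prod((np.abs(4 * X - 2) + a) / (1 + a), axis=1)

def saltelli_indices(a, N=2**14, seed=0):
    # First-order (Saltelli) and total (Jansen) Sobol index estimators
    rng = np.random.default_rng(seed)
    k = len(a)
    A = rng.random((N, k))
    B = rng.random((N, k))
    fA, fB = G(A, a), G(B, a)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # A with column i taken from B
        fABi = G(ABi, a)
        S[i] = np.mean(fB * (fABi - fA)) / var
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var
    return S, ST

# smaller a_i means a more influential factor, so S should decrease here
S, ST = saltelli_indices(np.array([0.0, 1.0, 4.5, 9.0]))
print(S.round(3), ST.round(3))
```

# For $a=(0,1,4.5,9)$ the analytical first-order indices are approximately
# $(0.72, 0.18, 0.02, 0.01)$, which the estimates should reproduce to within
# Monte Carlo noise.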
interactive_g_function.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: conda_pytorch_p36
#     language: python
#     name: conda_pytorch_p36
# ---

# # BERT text classification on SageMaker using PyTorch
#
# This lab uses the [dbpedia dataset](https://wiki.dbpedia.org/services-resources/dbpedia-data-set-2014#2) together with a BERT model to perform multi-class text classification. The data format is a csv file, with rows like the following:
#
# ```text
# 1,"E. D. Abbott Ltd"," Abbott of Farnham E D Abbott Limited was a British coachbuilding business based in Farnham Surrey trading under that name from 1929. A major part of their output was under sub-contract to motor vehicle manufacturers. Their business closed in 1972."
# 1,"Schwan-Stabilo"," Schwan-STABILO is a German maker of pens for writing colouring and cosmetics as well as markers and highlighters for office use. It is the world's largest manufacturer of highlighter pens Stabilo Boss."
# 1,"Q-workshop"," Q-workshop is a Polish company located in Poznań that specializes in designand production of polyhedral dice and dice accessories for use in various games (role-playing gamesboard games and tabletop wargames). They also run an online retail store and maintainan active forum community.Q-workshop was established in 2001 by <NAME> – a student from Poznań. Initiallythe company sold its products via online auction services but in 2005 a website and online store wereestablished."
# ```

# +
import sys, os
import logging

sys.path.append("src")
logging.basicConfig(level="INFO", handlers=[logging.StreamHandler(sys.stdout)],
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# -

# ### AWS bucket and role setup

import sagemaker
from sagemaker import get_execution_role

sm_session = sagemaker.session.Session()
role = get_execution_role()

# +
### Create the S3 bucket paths used to store the train/validation/test datasets and checkpoints

# +
data_bucket = sm_session.default_bucket()
data_bucket_prefix = "bert-demo-<yourname>"  ## replace <yourname> with your own name

# Paths for the full dataset (train and validation)
s3_uri_data = "s3://{}/{}/data".format(data_bucket, data_bucket_prefix)
s3_uri_train = "{}/{}".format(s3_uri_data, "train.csv")
s3_uri_val = "{}/{}".format(s3_uri_data, "val.csv")

# Paths for the mini dataset (train and validation)
s3_uri_mini_data = "s3://{}/{}/minidata".format(data_bucket, data_bucket_prefix)
s3_uri_mini_train = "{}/{}".format(s3_uri_mini_data, "train.csv")
s3_uri_mini_val = "{}/{}".format(s3_uri_mini_data, "val.csv")

# Path for the class labels file
s3_uri_classes = "{}/{}".format(s3_uri_data, "classes.txt")

# Path for the test dataset
s3_uri_test = "{}/{}".format(s3_uri_data, "test.csv")

s3_output_path = "s3://{}/{}/output".format(data_bucket, data_bucket_prefix)
s3_code_path = "s3://{}/{}/code".format(data_bucket, data_bucket_prefix)
s3_checkpoint = "s3://{}/{}/checkpoint".format(data_bucket, data_bucket_prefix)
# -

prepare_dataset = True

# ## Prepare the dataset

tmp = "tmp"

# + magic_args="-s \"$prepare_dataset\" \"$s3_uri_test\" \"$s3_uri_classes\" \"$tmp\"" language="bash"
#
# prepare_dataset=$1
# s3_test=$2
# s3_classes=$3
# tmp=$4
#
# if [ "$prepare_dataset" == "True" ]
# then
#     echo "Downloading data.."
#     wget https://github.com/saurabh3949/Text-Classification-Datasets/raw/master/dbpedia_csv.tar.gz -P ${tmp}
#     tar -xzvf ${tmp}/dbpedia_csv.tar.gz
#     mv dbpedia_csv ${tmp}
#
#     ls -l ${tmp}/dbpedia_csv/
#     cat ${tmp}/dbpedia_csv/classes.txt
#     head -3 ${tmp}/dbpedia_csv/train.csv
#
#     echo aws s3 cp ${tmp}/dbpedia_csv/test.csv ${s3_test}
#     aws s3 cp ${tmp}/dbpedia_csv/test.csv ${s3_test}
#
#     aws s3 cp ${tmp}/dbpedia_csv/classes.txt ${s3_classes}
#
# fi
# -

# #### Train/validation dataset split

# +
from sklearn.model_selection import train_test_split

def train_val_split(data_file, train_file_name = None, val_file_name = None, val_ratio = .30, train_ratio = .70):
    with open(data_file, "r") as f:
        lines = f.readlines()

    train, val = train_test_split(lines, test_size=val_ratio, train_size=train_ratio, random_state=42)

    train_file_name = train_file_name or os.path.join(os.path.dirname(data_file), "train.csv")
    val_file_name = val_file_name or os.path.join(os.path.dirname(data_file), "val.csv")

    with open(train_file_name, "w") as f:
        f.writelines(train)
    print("Wrote {} records to train".format(len(train)))

    with open(val_file_name, "w") as f:
        f.writelines(val)
    print("Wrote {} records to validation".format(len(val)))

    return train_file_name, val_file_name

# +
if prepare_dataset:
    from s3_util import S3Util

    s3util = S3Util()

    l_data_file = os.path.join(tmp, "dbpedia_csv", "train.csv")
    l_train, l_val = train_val_split(l_data_file)
    s3util.upload_file(l_train, s3_uri_train)
    s3util.upload_file(l_val, s3_uri_val)

    l_mini_train = os.path.join(os.path.dirname(l_data_file), "mini_train.csv")
    l_mini_val = os.path.join(os.path.dirname(l_data_file), "mini_val.csv")
    l_train, l_val = train_val_split(l_data_file, l_mini_train, l_mini_val, val_ratio=0.001, train_ratio=0.01)
    s3util.upload_file(l_mini_train, s3_uri_mini_train)
    s3util.upload_file(l_mini_val, s3_uri_mini_val)
# -

# Remove temporary files
# !rm -rf $tmp

# ## Model training
#
# Train the model on SageMaker spot instances

# +
inputs_full = { "train" : s3_uri_train,
                "val" : s3_uri_val,
                "class" : s3_uri_classes }

inputs_sample = { "train" : s3_uri_mini_train,
                  "val" : s3_uri_mini_val,
                  "class" : s3_uri_classes }

# Training on the full dataset takes roughly 4-5 hours; to validate the model quickly, use the inputs_sample dataset instead
inputs = inputs_sample
# -

## Define the checkpoint directory
sm_localcheckpoint_dir="/opt/ml/checkpoints/"

## Define the spot instance type
instance_type = "ml.p3.8xlarge"
instance_type_gpu_map = {"ml.p3.8xlarge":4, "ml.p3.2xlarge": 1, "ml.p3.16xlarge":8}

# +
## Define the hyperparameters
hp = {
    "epochs" : 10,
    "earlystoppingpatience" : 3,
    "batch" : 8 * instance_type_gpu_map[instance_type],
    "trainfile" : s3_uri_train.split("/")[-1],
    "valfile" : s3_uri_val.split("/")[-1],
    "classfile": s3_uri_classes.split("/")[-1],
    "gradaccumulation" : 4,
    "log-level": "INFO",
    "maxseqlen" : 512,
    "lr": 0.00001,
    "finetune": 0,
    "checkpointdir" : sm_localcheckpoint_dir,
    "checkpointfreq": 2
}
# -

## Define the metrics
metric_definitions = [{"Name": "TrainLoss",
                       "Regex": "###score: train_loss### (\d*[.]?\d*)"}
                     ,{"Name": "ValidationLoss",
                       "Regex": "###score: val_loss### (\d*[.]?\d*)"}
                     ,{"Name": "TrainScore",
                       "Regex": "###score: train_score### (\d*[.]?\d*)"}
                     ,{"Name": "ValidationScore",
                       "Regex": "###score: val_score### (\d*[.]?\d*)"}
                     ]

# +
# If training on spot instances, run this step and skip the next one
use_spot = True
train_max_run_secs = 2*24 * 60 * 60
spot_wait_sec = 5 * 60
max_wait_time_secs = train_max_run_secs + spot_wait_sec

if not use_spot:
    max_wait_time_secs = None
# -

# If training in local mode, run this step and skip the previous one
if instance_type == 'local':
    use_spot = False
    max_wait_time_secs = 0
    wait = True
    # Use smaller dataset to run locally
    inputs = inputs_sample

job_type = "bert-classification"
base_name = "{}".format(job_type)

# +
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    #entry_point='main_train_k_fold.py',
    entry_point='main.py',
    source_dir = 'src',
    role=role,
    framework_version ="1.4.0",
    py_version='py3',
    train_instance_count=1,
    train_instance_type=instance_type,
    hyperparameters = hp,
    output_path=s3_output_path,
    metric_definitions=metric_definitions,
    train_volume_size=30,
    code_location=s3_code_path,
    debugger_hook_config=False,
    base_job_name=base_name,
    train_use_spot_instances = use_spot,
    train_max_run = train_max_run_secs,
    train_max_wait = max_wait_time_secs,
    checkpoint_s3_uri=s3_checkpoint,
    checkpoint_local_path=sm_localcheckpoint_dir)

estimator.fit(inputs, wait=True)
# -

# ## Deploy the BERT model

# #### Inference container

# +
from sagemaker.pytorch import PyTorchModel
from sagemaker import get_execution_role

role = get_execution_role()
model_uri = estimator.model_data

model = PyTorchModel(model_data=model_uri,
                     role=role,
                     py_version="py3",
                     framework_version='1.4.0',
                     entry_point='serve.py',
                     source_dir='src')

predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
# -

# ### API invocation

# +
import json

class TextSerDes:

    def serialize(self, x):
        # join the list of texts into newline-separated utf-8 bytes
        data_bytes = "\n".join(x).encode("utf-8")
        return data_bytes

    def deserialize(self, x, content_type):
        return json.loads(x.read().decode("utf-8"))

# +
# Sample request payload, taken from the dataset example at the top of this notebook
data = ["Abbott of Farnham E D Abbott Limited was a British coachbuilding business based in Farnham Surrey trading under that name from 1929."]

predictor.serializer = TextSerDes().serialize
predictor.deserializer = TextSerDes().deserialize

response = predictor.predict(data,
                             initial_args={ "Accept": "text/json",
                                            "ContentType" : "text/csv" }
                             )
response
# -

# ## Delete the SageMaker endpoint

predictor.delete_endpoint()
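# The custom serializer/deserializer pair above can be exercised locally,
# without a live endpoint, by standing in for the HTTP response body with an
# in-memory byte stream. A minimal round-trip sketch (the fake JSON response
# below is illustrative only, not the actual output format of `serve.py`):

```python
import io
import json

def serialize(texts):
    # newline-joined utf-8 bytes, matching the text/csv request body built above
    return "\n".join(texts).encode("utf-8")

def deserialize(stream):
    # the endpoint is assumed to return a JSON body
    return json.loads(stream.read().decode("utf-8"))

payload = serialize(["first document", "second document"])
fake_response = io.BytesIO(json.dumps([{"label": 1}, {"label": 2}]).encode("utf-8"))
print(payload)
print(deserialize(fake_response))
```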
BertTextClassification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import go_board from obsidian.canvas import Canvas from ipywidgets import interact, IntSlider, FloatSlider, Checkbox def make_interactive_board(): size = IntSlider(min=5, max=50, value=15, description="Font size:", continuous_update=False) w = IntSlider(min=200, max=1000, value=550, description="Width:", continuous_update=False) h = IntSlider(min=200, max=1000, value=550, description="Height:", continuous_update=False) inset = IntSlider(min=20, max=100, value=34, description="Inset:", continuous_update=False) rows = IntSlider(min=7, max=50, value=19, description="Rows:", continuous_update=False) cols = IntSlider(min=7, max=50, value=19, description="Cols:", continuous_update=False) rc_lock = Checkbox(value=False, description="Lock cols to rows?") def render_board(font_size, w, h, inset, rows, cols, rc_lock): marked_move = (8, 9) args = (go_board.POSITION, w, h, inset, rows, rows if rc_lock else cols, marked_move, font_size) group = go_board.make_board_group(*args) canvas = Canvas(group, w, h) canvas.render() return canvas.rendered interact(render_board, font_size=size, w=w, h=h, inset=inset, rows=rows, cols=cols, rc_lock=rc_lock) make_interactive_board() # + # I've put some padding down here to keep the page from scrolling up as the image reloads. # -
examples/go_board_interactive.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.6.10 64-bit (''PythonData'': conda)'
#     name: python3610jvsc74a57bd0878543922dfa0db5b921533c15bb17828ca230de8c8fd7e418f81e793bfa1eab
# ---

import os
import pandas as pd
from bs4 import BeautifulSoup as bs
import requests
from splinter import Browser
from webdriver_manager.chrome import ChromeDriverManager
import pymongo

#Setup Splinter
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)

mars = {}

url = 'https://redplanetscience.com/'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')

# +
# Create news title and paragraph variables
news_title = soup.find('div', class_='content_title').text
news_paragraph = soup.find('div', class_='article_teaser_body').text

mars['news_title'] = news_title
mars['news_paragraph'] = news_paragraph
# -

#Display latest title and paragraph
print(f'Latest Article: {news_title} \nDescription: {news_paragraph}')

#URL for featured image
url = 'https://spaceimages-mars.com'
browser.visit(url)
html = browser.html
soup = bs(html, 'html.parser')

# +
#Scrape featured image
image_url = soup.find('img', class_='headerimage fade-in')['src']
featured_image_url = f'{url}/{image_url}'
featured_image_url

mars['featured_image_url'] = featured_image_url

# +
#Scraping Mars Facts
#URL
url = 'https://galaxyfacts-mars.com'
all_about_mars = pd.read_html(url)
all_about_mars
# -

type(all_about_mars)

mars_facts = all_about_mars[0]
mars_facts

mars_facts = mars_facts.rename(columns=mars_facts.iloc[0]).drop(mars_facts.index[0])
mars_facts

mars_facts_table = mars_facts.rename(columns={'Mars - Earth Comparison': ''})
mars_facts_table

mars_table = mars_facts_table[['','Mars','Earth']].reset_index(drop=True)

mars_t = mars_table.set_index('')
mars_t

# +
mars_t_html = mars_t.to_html() mars_t_html_class_t = mars_t_html.replace('<table border="1" class="dataframe">\n','<table class="table table-dark table-striped">\n') mars_t_html_class_tr = mars_t_html_class_t.replace('<tr style="text-align: right;">','<tr>') mars_t_html_class_row = mars_t_html_class_tr.replace('<th>','<th scope="row">') mars['mars_t_html'] = mars_t_html_class_row # - # Scraping Mars Hemisphere Images and Titles # + #Cerberus Hemisphere url = 'https://marshemispheres.com/' browser.visit(url) browser.links.find_by_partial_text('Cerberus Hemisphere Enhanced').click() html = browser.html soup = bs(html, 'html.parser') mars_h1_title = soup.find('h2', class_='title').text mars_h1 = soup.find('img', class_='wide-image')['src'] mars_h1_img = f'{url}/{mars_h1}' # + #Schiaparelli Hemisphere url = 'https://marshemispheres.com/' browser.visit(url) browser.links.find_by_partial_text('Schiaparelli Hemisphere Enhanced').click() html = browser.html soup = bs(html, 'html.parser') mars_h2_title = soup.find('h2', class_='title').text mars_h2 = soup.find('img', class_='wide-image')['src'] mars_h2_img = f'{url}/{mars_h2}' # + #Syrtis Major Hemisphere url = 'https://marshemispheres.com/' browser.visit(url) browser.links.find_by_partial_text('Syrtis Major Hemisphere Enhanced').click() html = browser.html soup = bs(html, 'html.parser') mars_h3_title = soup.find('h2', class_='title').text mars_h3 = soup.find('img', class_='wide-image')['src'] mars_h3_img = f'{url}/{mars_h3}' # + #Valles Marineris Hemisphere url = 'https://marshemispheres.com/' browser.visit(url) browser.links.find_by_partial_text('Valles Marineris Hemisphere').click() html = browser.html soup = bs(html, 'html.parser') mars_h4_title = soup.find('h2', class_='title').text mars_h4 = soup.find('img', class_='wide-image')['src'] mars_h4_img = f'{url}/{mars_h4}' # + mars_hemispheres_imgs = [ {"title1": mars_h1_title, "link1": mars_h1_img}, {"title2": mars_h2_title, "link2": mars_h2_img}, {"title3": 
mars_h3_title, "link3": mars_h3_img}, {"title4": mars_h4_title, "link4": mars_h4_img}] mars_hemispheres_imgs mars['hemispheres'] = mars_hemispheres_imgs # - #Close Browser browser.quit() mars
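# The four hemisphere cells above repeat the same scrape-and-collect pattern,
# and the numbered keys (title1/link1 ... title4/link4) make the result
# awkward to loop over in a template. A small, pure helper (no browser
# required; the src values below are placeholders, not real image paths)
# could normalize the scraped pairs into a uniform schema:

```python
def build_hemisphere_records(pairs, base_url='https://marshemispheres.com/'):
    # Turn scraped (title, relative image src) pairs into uniform dicts
    return [{'title': title, 'img_url': f'{base_url}{src}'} for title, src in pairs]

hemisphere_imgs = build_hemisphere_records([
    ('Cerberus Hemisphere Enhanced', 'images/cerberus.jpg'),
    ('Schiaparelli Hemisphere Enhanced', 'images/schiaparelli.jpg'),
])
print(hemisphere_imgs)
```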
Missions_to_Mars/missions_to_mars.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.8.10 64-bit
#     name: python3
# ---

# +
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# -

# # Getting Started Outbrain: ETL with NVTabular

# ## Overview

# In this notebook we will do preprocessing and feature engineering using the [Kaggle Outbrain dataset](https://www.kaggle.com/c/outbrain-click-prediction).
# # **Learning objectives**
#
# In this notebook, we learn how to
#
# - Use LambdaOp for custom row-wise dataframe manipulations with NVTabular
# - Preprocess single-hot categorical input features with NVTabular
# - Apply TargetEncoding to categorical features
# - Create a custom operator to create time features
# - Apply ColumnSimilarity to calculate the similarity between two columns using the tf-idf metric

# +
import os
import glob

import cupy

# Get dataframe library - cudf or pandas
from nvtabular.dispatch import get_lib

df_lib = get_lib()

import nvtabular as nvt
from nvtabular.io import Shuffle
from nvtabular.ops import (
    FillMedian,
    Categorify,
    LogOp,
    TargetEncoding,
    Rename,
)
from nvtabular.ops.column_similarity import ColumnSimilarity
from nvtabular import ColumnGroup
# -

# First, we set where the dataset should be saved once processed (OUTPUT_BUCKET_FOLDER), as well as where the dataset originally resides (DATA_BUCKET_FOLDER).

DATA_BUCKET_FOLDER = os.environ.get("INPUT_DATA_DIR", "~/nvt-examples/outbrain/data")
OUTPUT_BUCKET_FOLDER = os.environ.get("OUTPUT_DATA_DIR", "./outbrain-preprocessed/")

# Let's read our saved train and valid datasets.

train_filename = os.path.join(OUTPUT_BUCKET_FOLDER, "train_gdf.parquet")
valid_filename = os.path.join(OUTPUT_BUCKET_FOLDER, "valid_gdf.parquet")

# ## Preparing documents metadata

# Let's create the output directories to store the preprocessed parquet files.

output_train_dir = os.path.join(OUTPUT_BUCKET_FOLDER, "train/")
output_valid_dir = os.path.join(OUTPUT_BUCKET_FOLDER, "valid/")
# ! mkdir -p $output_train_dir
# ! mkdir -p $output_valid_dir

# We read in three more cudf data frames, <i>documents categories</i>, <i>topics</i>, and <i>entities</i>, and use them to create sparse matrices in cupy. We will use these later to calculate the cosine similarity between the event document (landing page context) and ad document profile vectors (TF-IDF), i.e., how close in profile an ad is to the page on which it is being displayed.
# +
# Alias for read_csv
read_csv = df_lib.read_csv

documents_categories_cudf = read_csv(DATA_BUCKET_FOLDER + "documents_categories.csv")
documents_topics_cudf = read_csv(DATA_BUCKET_FOLDER + "documents_topics.csv")
documents_entities_cudf = read_csv(DATA_BUCKET_FOLDER + "documents_entities.csv")

# read in document categories/topics/entities as cupy sparse matrices
def df_to_coo(df, row="document_id", col=None, data="confidence_level"):
    return cupy.sparse.coo_matrix((df[data].values, (df[row].values, df[col].values)))

categories = df_to_coo(documents_categories_cudf, col="category_id")
topics = df_to_coo(documents_topics_cudf, col="topic_id")
documents_entities_cudf["entity_id"] = (
    documents_entities_cudf["entity_id"].astype("category").cat.codes
)
entities = df_to_coo(documents_entities_cudf, col="entity_id")

documents_categories_cudf = documents_topics_cudf = documents_entities_cudf = None
# -

# ## Initiate NVTabular Workflow

# Now that our datasets, sparse matrices and helper function are created, we can begin laying the groundwork for NVTabular. NVTabular requires input features to be defined as groups of columns, so we define our ColumnGroup features at this step. Note that feature engineering and preprocessing often happen to sets of columns, so we adopt that method and require the user to specify continuous and categorical features, along with the target, as lists within ColumnGroup.

# At this point, our data still isn’t in a form that’s ideal for consumption by the W&D model that we will train in the next notebook. There are missing values, and our categorical variables are still represented by random, discrete identifiers and need to be transformed into contiguous indices for embedding lookups. The distributions of our continuous variables are uncentered. We also would like to create new features that will help to increase the model accuracy.
# Let's begin to create and process features using NVTabular ops:
# * <i>geo_location_state</i> and <i>geo_location_country</i> are created by stripping geo_location using the `LambdaOp`
# * <i>publish_time_days_since_published</i> and <i>publish_time_promo_days_since_published</i> features are created using the `calculate_delta` function in a `LambdaOp`
# * Missing values are filled with the per-feature median using the `FillMedian()` op
# * Continuous features are log transformed with the `LogOp()`.
#
# The `Categorify` op is used for categorification, i.e. encoding of categorical features. The Categorify op takes a param called `freq_threshold` which is used for frequency capping. This handy functionality will map all categories which occur in the dataset with some threshold level of infrequency to the _same_ index, keeping the model from overfitting to sparse signals. We don't apply frequency thresholds in this example, but one can easily create a frequency threshold dictionary, assign a custom threshold value for each categorical feature, and feed that dictionary into the `Categorify` op as the `freq_threshold` param.

# One of the important parts of building recommender systems is feature engineering. As a very promising feature engineering technique, `Target Encoding` processes the categorical features and makes them more easily accessible to the model during training and validation. *Target Encoding (TE)* has emerged as being both effective and efficient in many data science projects. For example, it is the major component of the Nvidia Kaggle Grandmasters team's [winning solution](https://medium.com/rapids-ai/winning-solution-of-recsys2020-challenge-gpu-accelerated-feature-engineering-and-training-for-cd67c5a87b1f) of the [Recsys Challenge 2020](http://www.recsyschallenge.com/2020/). TE calculates the statistics from a target variable grouped by the unique values of one or more categorical features. For example, in a binary classification problem, it calculates the probability that the target is true for each category value - a simple mean. In other words, for each distinct element in feature <b>$x$</b> we are going to compute the average of the corresponding values in target <i>y</i>. Then we are going to replace each $x_{i}$ with the corresponding mean value. For more details on TargetEncoding please visit [here](https://medium.com/rapids-ai/target-encoding-with-rapids-cuml-do-more-with-your-categorical-data-8c762c79e784) and [here](https://github.com/rapidsai/deeplearning/blob/main/RecSys2020Tutorial/03_3_TargetEncoding.ipynb).
#
# Here, we apply Target Encoding to certain categorical features with a *kfold* of 5 and *smoothing* of 20 to avoid overfitting, using the [TargetEncoding op](https://github.com/NVIDIA/NVTabular/blob/a0141d0a710698470160bc2cbc42b18ce2d49133/nvtabular/ops/target_encoding.py).

# ## Feature Engineering

# Below, we create a custom operator that calculates the time difference between a specified time column (either publish_time or publish_time_promo) and timestamp. This is used to calculate the <i>time elapsed since publication</i> between the landing page and the ad.

# +
# To save disk space, the timestamps in the entire dataset are relative to the first time in the dataset.
# To recover the actual epoch time of the visit, we add 1465876799998 to the timestamp.
TIMESTAMP_DELTA = 1465876799998 from nvtabular.ops import Operator class DaysSincePublished(Operator): def transform(self, columns, gdf): for column in columns.names: col = gdf[column] col.loc[col == ""] = None col = col.astype("datetime64[ns]") timestamp = (gdf["timestamp"] + TIMESTAMP_DELTA).astype("datetime64[ms]") delta = (timestamp - col).dt.days gdf[column + "_since_published"] = delta * (delta >= 0) * (delta <= 10 * 365) return gdf def output_column_names(self, columns): return nvt.ColumnSelector([column + "_since_published" for column in columns.names]) def dependencies(self): return ["timestamp"] # + # geo processing: apply two different lambda operators to the ‘geo_location’ column, and # extract the country/state from the geo_location value. The geo_location column # looks something like "US>CA>12345", so we're using string slicing to pull out the country # and the country+state then geo_location = ColumnGroup(["geo_location"]) country = geo_location >> (lambda col: col.str.slice(0, 2)) >> Rename(postfix="_country") state = geo_location >> (lambda col: col.str.slice(0, 5)) >> Rename(postfix="_state") geo_features = geo_location + country + state # categoricals processing: categorify certain input columns as well as the geo features cats = ColumnGroup( [ "ad_id", "document_id", "platform", "document_id_promo", "campaign_id", "advertiser_id", "source_id", "publisher_id", "source_id_promo", "publisher_id_promo", ] ) cat_features = geo_features + cats >> Categorify() # Apply TargetEncoding to certain categoricals with kfold of 5 and smoothing of 20 te_features = cats >> TargetEncoding("clicked", kfold=5, p_smooth=20) # process dates using the ‘DaysSincePublished’ custom operator dates = ["publish_time", "publish_time_promo"] date_features = dates >> DaysSincePublished() >> FillMedian() >> LogOp() # - # Let's visualize our calculation graph with the column groups we used and created so far. 
features = date_features + cat_features + te_features + "clicked"
features.graph

# A user might sometimes want to continue reading about the topics of the current page. Computing the similarity between the textual content of the current page and that of the pages linked to the displayed ads can therefore be a relevant feature for a model that predicts which ad the user will click next. A simple, yet effective way to compute the similarity between documents is to generate the TF-IDF vectors for each of them, which capture their most relevant terms, and then compute the cosine similarity between those vectors.
#
# Below, we calculate <i>doc_event_doc_ad_sim_categories</i>, <i>topics</i>, and <i>entities</i> using the `ColumnSimilarity` op, which utilizes the sparse categories, topics, and entities matrices that were created above to calculate landing-page similarity for categories, topics, and entities. We calculate the cosine similarity between the event doc (landing page) and the ad doc aspect vectors (TF-IDF). Creating these extra features helps to improve model accuracy and predictability.

# Note that we rename the columns to avoid duplicated column names.

sim_features_categ = (
    [["document_id", "document_id_promo"]]
    >> ColumnSimilarity(categories, metric="tfidf", on_device=False)
    >> Rename(postfix="_categories")
)
sim_features_topics = (
    [["document_id", "document_id_promo"]]
    >> ColumnSimilarity(topics, metric="tfidf", on_device=False)
    >> Rename(postfix="_topics")
)
sim_features_entities = (
    [["document_id", "document_id_promo"]]
    >> ColumnSimilarity(entities, metric="tfidf", on_device=False)
    >> Rename(postfix="_entities")
)

sim_features = sim_features_categ + sim_features_topics + sim_features_entities

# The workflow is created with the output node of the graph.
workflow = nvt.Workflow(features + sim_features)

# We then create NVTabular Dataset objects for both the train and validation sets. We calculate statistics for this workflow on the input dataset, i.e.
# on our training set, using the `workflow.fit()` method, so that our <i>Workflow</i> can use these stats to transform any given input. When our <i>Workflow</i> transforms our datasets, we also save the results out to parquet files for fast reading at train time.

# +
train_dataset = nvt.Dataset(train_filename)
valid_dataset = nvt.Dataset(valid_filename)

# Calculate statistics on the training set
workflow.fit(train_dataset)
# -

# use the calculated statistics to transform the train/valid datasets
# and write out each as parquet
workflow.transform(train_dataset).to_parquet(
    output_path=output_train_dir, shuffle=Shuffle.PER_PARTITION, out_files_per_proc=5
)
workflow.transform(valid_dataset).to_parquet(output_path=output_valid_dir)

# We can save the stats from the workflow and load them at any time, so we can run training without redoing the preprocessing.

# In the next notebooks, we will train a deep learning model. Our training pipeline requires information about the data schema to define the neural network architecture. We will save the NVTabular workflow to disk so that we can restore it in the next notebooks.

workflow.save(os.path.join(OUTPUT_BUCKET_FOLDER, "workflow"))

# ## Reviewing processed data

TRAIN_PATHS = sorted(glob.glob(os.path.join(OUTPUT_BUCKET_FOLDER, "train/*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(OUTPUT_BUCKET_FOLDER, "valid/*.parquet")))
TRAIN_PATHS, VALID_PATHS

df = df_lib.read_parquet(TRAIN_PATHS[0])
df.head()
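For intuition about the target encoding applied earlier in this notebook, here is a minimal pandas sketch of the idea on toy data. The column names and the smoothing formula below are illustrative; NVTabular's `TargetEncoding` op (with its k-fold handling) is the real implementation.

```python
import pandas as pd

# Toy clicks data: a categorical feature and a binary target
df = pd.DataFrame({
    "ad_id":   ["a", "a", "a", "b", "b", "c"],
    "clicked": [1, 1, 0, 0, 0, 1],
})

prior = df["clicked"].mean()  # global click rate
stats = df.groupby("ad_id")["clicked"].agg(["mean", "count"])

# Smoothed target encoding: shrink rare categories toward the global prior
p_smooth = 20
encoding = (stats["count"] * stats["mean"] + p_smooth * prior) / (stats["count"] + p_smooth)
df["ad_id_te"] = df["ad_id"].map(encoding)
print(df)
```

With a large `p_smooth`, categories seen only a few times stay close to the prior, which is what protects against overfitting on rare category values.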
examples/advanced-ops-outbrain/02-ETL-with-NVTabular.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Neural graph learning on CiteSeer dataset

# ## Introduction to the dataset
#
# The [CiteSeer dataset](https://linqs.soe.ucsc.edu/data) consists of two parts: 3312 scientific publications and a graph structure.
# Each publication has an id, a one-hot representation of its word attributes, and a label indicating the type of publication. The publications are classified into one of six categories. There is a total of 3703 unique words in the vocabulary. Stopword removal, stemming, and removal of infrequent words were already applied to the dataset.
# In the graph data, each node is a publication, and a link is formed between two nodes when one has cited the other.
#
# In our experiment we consider the graph to be bidirectional, as the graph is treated as a measure of similarity between the papers and the direction of a link does not offer additional insight.
#
# To read more about other papers that have used the same dataset, see [<NAME>, and <NAME>. "Link-based classification." ICML, 2003.](https://linqspub.soe.ucsc.edu/basilic/web/Publications/2003/lu:icml03/) and [<NAME>, et al. "Collective classification in network data." AI Magazine, 2008.](https://linqspub.soe.ucsc.edu/basilic/web/Publications/2008/sen:aimag08/).
#
# No additional preprocessing was done to the dataset. The original dataset can be downloaded [here](https://linqs-data.soe.ucsc.edu/public/lbc/citeseer.tgz).
#
# Dataset summary:
# * 3703 unique word attributes
# * 3312 scientific publications
# * Labels for classifying the papers are:
#     * Agents
#     * AI
#     * DB
#     * IR
#     * ML
#     * HCI
# * Each publication can have only one label.
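The one-hot word representation described above can be sketched with a toy vocabulary (the real CiteSeer feature vectors have length 3703, one slot per vocabulary word):

```python
import numpy as np

vocab = ["neural", "graph", "learning", "database", "retrieval"]  # toy vocabulary
word_index = {w: i for i, w in enumerate(vocab)}

def encode(words):
    """Binary word-presence vector, one entry per vocabulary word."""
    vec = np.zeros(len(vocab), dtype=np.int64)
    for w in words:
        vec[word_index[w]] = 1
    return vec

paper = encode(["graph", "learning"])
print(paper)  # [0 1 1 0 0]
```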
#
# ## Experiment
#
# The goal is to correctly classify the publication category and examine the performance difference between the neural-graph-regularized model and a base model.
#
# To properly examine the performance difference between the models, each model is trained with training sizes from 0.1 to 0.85 in 0.05 increments. Each model is run 5 times at each training size, after which the average results are presented in a graph.
#
# ## References
#
# Large parts of the code for preprocessing, loading train, test and validation data, evaluation, and generation of Keras functional models are modified from TensorFlow's tutorials and resources introducing Neural Structured Learning. The original code can be found here: [TensorFlow's GitHub](https://github.com/tensorflow/neural-structured-learning) and [Guide and Tutorials](https://www.tensorflow.org/neural_structured_learning/framework).
# Furthermore, additional information about the API can be found [here](https://www.tensorflow.org/neural_structured_learning/api_docs/python/nsl).
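Before the code, the core idea behind graph regularization can be sketched in plain numpy: the training loss is the supervised loss plus a weighted penalty on the distance between a sample's representation and its neighbors' representations. This is a conceptual sketch only, not the NSL implementation.

```python
import numpy as np

def graph_regularized_loss(ce_loss, embedding, nbr_embedding, nbr_weight, multiplier=0.1):
    """Supervised loss plus a weighted L2 distance to the neighbor's embedding."""
    graph_loss = nbr_weight * np.sum((embedding - nbr_embedding) ** 2)
    return ce_loss + multiplier * graph_loss

# Identical sample/neighbor embeddings add no penalty ...
same = graph_regularized_loss(0.7, np.array([1.0, 2.0]), np.array([1.0, 2.0]), 1.0)
# ... while distant neighbors are penalized.
far = graph_regularized_loss(0.7, np.array([1.0, 2.0]), np.array([3.0, 2.0]), 1.0)
print(same, far)  # same == 0.7; far is approximately 1.1
```

The `multiplier` plays the role of `graph_regularization_multiplier` in the hyperparameters below, and the neighbor weight is why non-existent neighbors (weight 0.0) contribute nothing.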
# # Experiment # ### Importing needed libraries from __future__ import absolute_import, division, print_function, unicode_literals import neural_structured_learning as nsl import tensorflow as tf import csv import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.set_option('display.expand_frame_repr', False) import warnings warnings.filterwarnings('ignore') # ### Getting the data # !tar -C /tmp -xvzf /Users/johanweisshansen/Documents/DTU/3.semester/advanced_project/oticon_project/terror_data/nsl/dataset_test_6/citeseer/citeseer.tgz # ### Defining hyperparameters # + class HParams(object): """Hyperparameters used for training.""" def __init__(self): ### dataset parameters self.num_classes = 6 self.max_seq_length = 3703 # distinct features ### neural graph learning parameters self.distance_type = nsl.configs.DistanceType.L2 self.graph_regularization_multiplier = 0.1 self.num_neighbors = 1 ### model architecture self.num_fc_units = [50,50] ### training parameters self.train_epochs = 150 self.batch_size = 150 self.dropout_rate = 0.5 ### eval parameters self.eval_steps = None # All instances in the test set are evaluated. HPARAMS = HParams() # - # ### Load train and test data # + def parse_example(example_proto): feature_spec = { 'words': tf.io.FixedLenFeature([HPARAMS.max_seq_length], tf.int64, default_value=tf.constant( 0, dtype=tf.int64, shape=[HPARAMS.max_seq_length])), 'label': tf.io.FixedLenFeature((), tf.int64, default_value=-1), } # We also extract corresponding neighbor features in a similar manner to # the features above. 
for i in range(HPARAMS.num_neighbors): nbr_feature_key = '{}{}_{}'.format('NL_nbr_', i, 'words') nbr_weight_key = '{}{}{}'.format('NL_nbr_', i, '_weight') feature_spec[nbr_feature_key] = tf.io.FixedLenFeature( [HPARAMS.max_seq_length], tf.int64, default_value=tf.constant( 0, dtype=tf.int64, shape=[HPARAMS.max_seq_length])) # We assign a default value of 0.0 for the neighbor weight so that # graph regularization is done on samples based on their exact number # of neighbors. In other words, non-existent neighbors are discounted. feature_spec[nbr_weight_key] = tf.io.FixedLenFeature( [1], tf.float32, default_value=tf.constant([0.0])) features = tf.io.parse_single_example(example_proto, feature_spec) labels = features.pop('label') return features, labels def make_dataset(file_path, training=False): #Creates a `tf.data.TFRecordDataset`. dataset = tf.data.TFRecordDataset([file_path]) if training: dataset = dataset.shuffle(10000) dataset = dataset.map(parse_example) dataset = dataset.batch(HPARAMS.batch_size) return dataset # - # ### Functional base model from Keras API def functional_model(hparams): """Creates a functional API-based multi-layer perceptron model.""" inputs = tf.keras.Input(shape=(hparams.max_seq_length,), dtype='int64', name='words') # casting one hot to floating point format. cur_layer = tf.keras.layers.Lambda( lambda x: tf.keras.backend.cast(x, tf.float32))( inputs) for num_units in hparams.num_fc_units: cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer) cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer) cur_layer = tf.keras.layers.BatchNormalization()(cur_layer) outputs = tf.keras.layers.Dense( hparams.num_classes, activation='softmax')( cur_layer) model = tf.keras.Model(inputs, outputs=outputs) return model # ### Function to evaluate models # Helper function to print evaluation metrics. 
def print_metrics(model_desc, eval_metrics):
  print('\n')
  print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
  print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
  if 'graph_loss' in eval_metrics:
    print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])

# ### Function for training base model

def training_base_model(train_dataset):
    base_model = functional_model(HPARAMS)

    base_model.compile(
        # The learning rate and gradient clipping belong on the optimizer,
        # not on compile(), so they are set on the Adam instance directly.
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.01, clipnorm=1.),
        # sparse_categorical_crossentropy / categorical_crossentropy
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])

    base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=0)

    return base_model

# ### Function for training graph model

def training_graph_model(train_dataset):
    # Build a new base MLP model.
    base_reg_model = functional_model(HPARAMS)

    # Wrap the base MLP model with graph regularization.
    graph_reg_config = nsl.configs.make_graph_reg_config(
        max_neighbors=HPARAMS.num_neighbors,
        multiplier=HPARAMS.graph_regularization_multiplier,
        distance_type=HPARAMS.distance_type,
        sum_over_axis=-1)
    graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
                                                    graph_reg_config)
    graph_reg_model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])

    graph_reg_history = graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=0)

    return graph_reg_model

def generateTrainingData(train_percent):
    # running the data through the preprocessing script
    # !python preprocessing_citeseer_dataset.py \
    # --input_content=/tmp/citeseer/citeseer.content \
    # --input_graph=/tmp/citeseer/citeseer.cites \
    # --max_nbrs=3 \
    # --train_percentage=$train_percent\
    # --output_train_data=/tmp/citeseer/train_merged_examples.tfr \
    # --output_test_data=/tmp/citeseer/test_examples.tfr

    # generating train and test data
    train_dataset = make_dataset('/tmp/citeseer/train_merged_examples.tfr', training=True)
    test_dataset = make_dataset('/tmp/citeseer/test_examples.tfr')

    return train_dataset, test_dataset

# ### Iterating over training size

# +
# defining the training sizes we need to iterate over
train_percentage = []
train_percentage.append(0.01)  # starting the list at 1% of training data

for i in np.arange(0.05, 0.9, 0.05):
    train_percentage.append(round(i, 2))

# +
# lists for holding results
graph_accuracy_by_training_size_avg = []
base_accuracy_by_training_size_avg = []

for j in range(5):
    print("----------------------------- iteration: ", j+1, "------------------------")
    base_model_results_list = []
    graph_model_results_list = []

    for i in range(len(train_percentage)):
        print("---------------------training at percentage ", train_percentage[i], "--------------------------------")

        # creating test and training data
        train_dataset, test_dataset = generateTrainingData(train_percentage[i])

        # creating and training the base model
        base_model = training_base_model(train_dataset)

        # evaluate base model
        eval_results = dict(
            zip(base_model.metrics_names,
                base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
        print_metrics('Base MLP model', eval_results)

        # adding results to a list
        base_model_results_list.append(eval_results)

        # creating and training the graph model
        graph_model = training_graph_model(train_dataset)

        # evaluating the model
        eval_results_graph_regulated_model = dict(
            zip(graph_model.metrics_names,
                graph_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
        print_metrics('MLP + graph regularization', eval_results_graph_regulated_model)

        # adding the graph results to a list
        graph_model_results_list.append(eval_results_graph_regulated_model)

    graph_accuracy_by_training_size_avg.append(graph_model_results_list)
    base_accuracy_by_training_size_avg.append(base_model_results_list)
# -

# # Results
#
# To get a better idea of the difference in learning at different training sizes, we subtract the two models' performances from each other.
#
# A positive value indicates a gain for the graph-based model over the base model.
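The per-size averaging and the model difference computed in the next cells can be cross-checked with a compact numpy version. The numbers below are toy stand-ins, not results from this experiment:

```python
import numpy as np

# Hypothetical accuracies: rows = repetitions, columns = training sizes
base = np.array([[0.60, 0.70, 0.80],
                 [0.64, 0.66, 0.82]])
graph = np.array([[0.63, 0.71, 0.80],
                  [0.65, 0.69, 0.84]])

base_avg = base.mean(axis=0)    # average over repetitions, per training size
graph_avg = graph.mean(axis=0)
diff = graph_avg - base_avg     # positive => gain for the graph model
print(diff)
```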
# + graph_avg_list = [] base_avg_list = [] for i in range(0,len(graph_accuracy_by_training_size_avg[0])): tmp_avg_value = 0 for j in range(0,len(graph_accuracy_by_training_size_avg)): tmp_avg_value += graph_accuracy_by_training_size_avg[j][i]['accuracy'] graph_avg_list.append(tmp_avg_value/5) for i in range(0,len(base_accuracy_by_training_size_avg[0])): tmp_avg_value = 0 for j in range(0,len(base_accuracy_by_training_size_avg)): tmp_avg_value += base_accuracy_by_training_size_avg[j][i]['accuracy'] base_avg_list.append(tmp_avg_value/5) # + diff_graph_and_basemodel = [] for i in range(len(base_avg_list)): diff_graph_and_basemodel.append(graph_avg_list[i]- base_avg_list[i]) # - collected_list = [] collected_list.append(base_avg_list) collected_list.append(graph_avg_list) collected_list.append(diff_graph_and_basemodel) # + import numpy as np import matplotlib.pyplot as plt plt.figure(figsize=(12, 6)) columns = train_percentage rows = ['base model', 'graph model', 'pref. diff.'] # Get some pastel shades for the colors n_rows = len(collected_list) # Initialize the vertical-offset for the stacked bar chart. 
y_offset = np.zeros(len(train_percentage)) # Plot bars and create text labels for the table cell_text = [] for row in range(n_rows): y_offset = collected_list[row] cell_text.append(['%1.3f' % x for x in y_offset]) # Add a table at the bottom of the axes table = plt.table(cellText=cell_text, rowLabels=rows, colLabels=columns, rowColours=['#5b9ac4', '#fd8e39', '#ffffff'], loc='bottom') table.set_fontsize(14) # table.auto_set_font_size(False) table.set_fontsize(14) table.scale(1, 1.7) # Adjust layout to make room for the table: plt.subplots_adjust(left=0.2, bottom=0.2) # plt.plot(graph_collected) plt.plot(base_avg_list) plt.plot(graph_avg_list) # plt.ylabel("Loss in ${0}'s".format(value_increment)) # plt.yticks(values * value_increment, ['%d' % val for val in values]) plt.xticks([]) plt.title('Comparison between base and graph model accuracy') plt.ylabel('accuracy') # plt.xlabel('training size') plt.annotate('training size', xy=(1,0), xytext=(-669, -3), ha='left', va='top', xycoords='axes fraction', textcoords='offset points') plt.legend(['base model', 'graph reg model'], loc='upper left') plt.savefig('plots/citseer_accuracy_graph.png', bbox_inches='tight', pad_inches=0.1) plt.show()
dataset_3_citeseer/citeceer_dataset_model_and_test.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.9.1 64-bit
#     name: python3
# ---

# + [markdown] id="wSFbIMb87cHu"
# # **Computational Drug Discovery [Part 1] Download Bioactivity Data**
#
# <NAME>
#
# [*'Data Professor' YouTube channel*](http://youtube.com/dataprofessor)
#
# In this Jupyter notebook, we will be building a real-life **data science project** that you can include in your **data science portfolio**. In particular, we will be building a machine learning model using ChEMBL bioactivity data.
#
# ---

# + [markdown] id="3iQiERxumDor"
# ## **ChEMBL Database**
#
# The [*ChEMBL Database*](https://www.ebi.ac.uk/chembl/) contains curated bioactivity data on more than 2 million compounds. It is compiled from more than 76,000 documents and 1.2 million assays, and the data spans 13,000 targets, 1,800 cells, and 33,000 indications.
# [Data as of March 25, 2020; ChEMBL version 26].

# + [markdown] id="iryGAwAIQ4yf"
# ## **Installing libraries**

# + [markdown] id="toGT1U_B7F2i"
# Install the ChEMBL web service package so that we can retrieve bioactivity data from the ChEMBL Database.

# + id="cJGExHQBfLh7" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583636239, "user_tz": -180, "elapsed": 3512, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="9b9a541e-083d-471f-bce4-4e4264387dd7"
# !pip install chembl_webresource_client

# + [markdown] id="J0kJjL8gb5nX"
# ## **Importing libraries**

# + id="RXoCvMPPfNrv" executionInfo={"status": "ok", "timestamp": 1629583637303, "user_tz": -180, "elapsed": 1066, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
# Import necessary libraries
import pandas as pd
from chembl_webresource_client.new_client import new_client

# + [markdown] id="1FgUai1bfigC"
# ## **Search for Target protein**

# + [markdown] id="7lBsDrD0gAqH"
# ### **Target search for coronavirus**

# + id="Vxtp79so4ZjF" colab={"base_uri": "https://localhost:8080/", "height": 640} executionInfo={"status": "ok", "timestamp": 1629583638330, "user_tz": -180, "elapsed": 1028, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="c807fd44-9950-436b-92c3-c5bf18878477"
# Target search for coronavirus
target = new_client.target
target_query = target.search('coronavirus')
targets = pd.DataFrame.from_dict(target_query)
targets

# + [markdown] id="Y5OPfEALjAfZ"
# ### **Select and retrieve bioactivity data for *SARS coronavirus 3C-like proteinase* (fifth entry)**

# + [markdown] id="gSQ3aroOgML7"
# We will assign the fifth entry (which corresponds to the target protein, *coronavirus 3C-like proteinase*) to the ***selected_target*** variable.

# + id="StrcHMVLha7u" colab={"base_uri": "https://localhost:8080/", "height": 35} executionInfo={"status": "ok", "timestamp": 1629583638338, "user_tz": -180, "elapsed": 12, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="255a441b-b9c1-48d9-91ac-d3b839d23ae7"
selected_target = targets.target_chembl_id[4]
selected_target

# + [markdown] id="GWd2DRalgjzB"
# Here, we will retrieve only bioactivity data for *coronavirus 3C-like proteinase* (CHEMBL3927) that are reported as IC$_{50}$ values in nM (nanomolar) units.
# + id="LeFbV_CsSP8D" executionInfo={"status": "ok", "timestamp": 1629583638339, "user_tz": -180, "elapsed": 12, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
activity = new_client.activity
res = activity.filter(target_chembl_id=selected_target).filter(standard_type="IC50")

# + id="RC4T-NEmSWV-" executionInfo={"status": "ok", "timestamp": 1629583650144, "user_tz": -180, "elapsed": 11816, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
df = pd.DataFrame.from_dict(res)

# + id="s9iUAXFdSkoM" colab={"base_uri": "https://localhost:8080/", "height": 264} executionInfo={"status": "ok", "timestamp": 1629583650155, "user_tz": -180, "elapsed": 22, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="332a7e67-d1fc-418c-8419-dddfb4cc43c2"
df.head(3)

# + id="oNtBv36dYhxy" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583650156, "user_tz": -180, "elapsed": 22, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="3e3368a8-b89e-4c5f-f465-156313bab321"
df.standard_type.unique()

# + [markdown] id="fQ78N26Fg15T"
# Finally, we will save the resulting bioactivity data to a CSV file, **bioactivity_data.csv**.

# + id="ZvUUEIVxTOH1" executionInfo={"status": "ok", "timestamp": 1629583650156, "user_tz": -180, "elapsed": 19, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
df.to_csv('bioactivity_data.csv', index=False)

# + [markdown] id="BOrSrTGjOWU7"
# ## **Copying files to Google Drive**

# + [markdown] id="PRputWaI7ZW7"
# First, we need to mount Google Drive in Colab so that we can access our Google Drive from within Colab.
# + colab={"base_uri": "https://localhost:8080/"} id="8QKGNN6ouLRu" executionInfo={"status": "ok", "timestamp": 1629583653319, "user_tz": -180, "elapsed": 3181, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="f353bcf7-fd20-4ea1-e6dd-70114e9b570f" # !pip install drive # + id="6RBX658q65A5" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583654544, "user_tz": -180, "elapsed": 1227, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="04513336-171e-4c5f-97f6-da1d2d33396f" from google.colab import drive drive.mount('/content/gdrive/', force_remount=True) # + [markdown] id="CMlY0xudN1mL" # Next, we create a **data** folder in our **Colab Notebooks** folder on Google Drive. # + id="tew-UtUWIS__" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583654544, "user_tz": -180, "elapsed": 12, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="8a846c45-d359-4d93-baa2-5a387ca28666" # ! mkdir "/content/gdrive/My Drive/Colab Notebooks/data2" # + id="YDMBpK2XJ_rJ" executionInfo={"status": "ok", "timestamp": 1629583654545, "user_tz": -180, "elapsed": 4, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} # ! cp bioactivity_data.csv "/content/gdrive/My Drive/Colab Notebooks/data" # + id="iRIr1QiEJtuw" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583654927, "user_tz": -180, "elapsed": 385, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="d9f3935e-78c6-48aa-f5b6-05082748263d" # ! ls -l "/content/gdrive/My Drive/Colab Notebooks/data" # + [markdown] id="z9NwrYJni8CH" # Let's see the CSV files that we have so far. 
# + id="FO3cZC5vnCht" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583654928, "user_tz": -180, "elapsed": 19, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="13008eca-955b-4628-c66d-6e2ecbe4d80a"
# ! ls

# + [markdown] id="7UAasSu5jAeB"
# Taking a glimpse of the **bioactivity_data.csv** file that we've just created.

# + id="jwEJjx5b5gAn" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583654928, "user_tz": -180, "elapsed": 11, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="8ffaa00f-e164-443c-c278-e62f14b4ceae"
# ! head bioactivity_data.csv

# + [markdown] id="_GXMpFNUOn_8"
# ## **Handling missing data**
# If any compound has a missing value for the **standard_value** column, we drop it.

# + id="hkVOdk6ZR396" colab={"base_uri": "https://localhost:8080/", "height": 779} executionInfo={"status": "ok", "timestamp": 1629583655424, "user_tz": -180, "elapsed": 499, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="d01f2a2c-feb0-48cf-c99e-4dc66109f93b"
df2 = df[df.standard_value.notna()]
df2

# + [markdown] id="Y-qNsUlmjS25"
# Apparently, for this dataset there is no missing data. But we can use the above code cell for the bioactivity data of other target proteins.

# + [markdown] id="5H4sSFAWhV9B"
# ## **Data pre-processing of the bioactivity data**

# + [markdown] id="tO22XVlzhkXR"
# ### **Labeling compounds as either being active, inactive or intermediate**
# The bioactivity data is in the IC50 unit. Compounds having values of less than 1,000 nM will be considered **active**, while those greater than 10,000 nM will be considered **inactive**. Values between 1,000 and 10,000 nM will be referred to as **intermediate**.
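As an aside, the same 1,000/10,000 nM rule can also be expressed in a vectorized way with `np.select` instead of an explicit loop; a sketch with toy IC50 values (matching the loop's boundary handling: exactly 10,000 nM is inactive, exactly 1,000 nM is active):

```python
import numpy as np
import pandas as pd

ic50 = pd.Series([500.0, 1000.0, 5000.0, 10000.0, 20000.0])  # nM, toy values

# Conditions are checked in order, mirroring the if/elif/else thresholds
labels = np.select(
    [ic50 >= 10000, ic50 <= 1000],
    ["inactive", "active"],
    default="intermediate",
)
print(list(labels))  # ['active', 'active', 'intermediate', 'inactive', 'inactive']
```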
# + id="1E8rz7oMOd-5" executionInfo={"status": "ok", "timestamp": 1629583655425, "user_tz": -180, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
bioactivity_class = []
for i in df2.standard_value:
  if float(i) >= 10000:
    bioactivity_class.append("inactive")
  elif float(i) <= 1000:
    bioactivity_class.append("active")
  else:
    bioactivity_class.append("intermediate")

# + [markdown] id="PFsmb2N9hnTB"
# ### **Iterate the *molecule_chembl_id* to a list**

# + id="DMJng9xnVnMM" executionInfo={"status": "ok", "timestamp": 1629583655425, "user_tz": -180, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
molecule_chembl_id = []
for i in df2.molecule_chembl_id:
  molecule_chembl_id.append(i)

# + [markdown] id="YRieJc9dhuVZ"
# ### **Iterate *canonical_smiles* to a list**

# + id="AT8qUBk1eVmj" executionInfo={"status": "ok", "timestamp": 1629583655425, "user_tz": -180, "elapsed": 22, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
canonical_smiles = []
for i in df2.canonical_smiles:
  canonical_smiles.append(i)

# + [markdown] id="DZFugUXxhwjE"
# ### **Iterate *standard_value* to a list**

# + id="ZaPt-FjEZNBe" executionInfo={"status": "ok", "timestamp": 1629583655426, "user_tz": -180, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
standard_value = []
for i in df2.standard_value:
  standard_value.append(i)

# + [markdown] id="Nv2dzid_hzKd"
# ### **Combine the 4 lists into a dataframe**

# + id="TWlYO4I3Wrh-" executionInfo={"status": "ok", "timestamp": 1629583655426, "user_tz": -180, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
data_tuples = list(zip(molecule_chembl_id, canonical_smiles, bioactivity_class, standard_value))
df3 = pd.DataFrame(data_tuples, columns=['molecule_chembl_id', 'canonical_smiles', 'bioactivity_class', 'standard_value'])

# + id="Li64nUiZQ-y2" colab={"base_uri": "https://localhost:8080/", "height": 473} executionInfo={"status": "ok", "timestamp": 1629583655426, "user_tz": -180, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="fcc567b9-fac6-43e8-c1d2-4675a7a4a51c"
df3

# + [markdown] id="vE0Vvo6ic3MI"
# ### **Alternative method**

# + id="VICiiCtqc2ne" colab={"base_uri": "https://localhost:8080/", "height": 419} executionInfo={"status": "ok", "timestamp": 1629583655427, "user_tz": -180, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="4a7e1056-1f7a-41d2-8eef-57c9b1efa9dd"
selection = ['molecule_chembl_id', 'canonical_smiles', 'standard_value']
df3 = df2[selection]
df3

# + id="d8nV77oWdbq1" colab={"base_uri": "https://localhost:8080/", "height": 436} executionInfo={"status": "ok", "timestamp": 1629583655427, "user_tz": -180, "elapsed": 21, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="1a9b2484-4e21-460f-b4a0-f10dcfa3427b"
pd.concat([df3, pd.Series(bioactivity_class)], axis=1)

# + [markdown] id="9tlgyexWh7YJ"
# Save the dataframe to a CSV file.

# + id="nSNia7suXstR" executionInfo={"status": "ok", "timestamp": 1629583655427, "user_tz": -180, "elapsed": 20, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
df3.to_csv('bioactivity_preprocessed_data.csv', index=False)

# + id="UuZf5-MEd-H5" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583655428, "user_tz": -180, "elapsed": 21, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="0069dbae-f1c1-4e2a-9330-f0a772fea5ac"
# ! ls -l

# + [markdown] id="_C7rqJKTePhV"
# Let's copy it to Google Drive.

# + id="ZfyvJcENeHDB" executionInfo={"status": "ok", "timestamp": 1629583655913, "user_tz": -180, "elapsed": 495, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}}
# ! cp bioactivity_preprocessed_data.csv "/content/gdrive/My Drive/Colab Notebooks/data"

# + id="7PU7yU9leLV5" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629583655914, "user_tz": -180, "elapsed": 15, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "12454893848479578606"}} outputId="e786f5a6-52ca-4335-9865-1250da047173"
# ! ls "/content/gdrive/My Drive/Colab Notebooks/data"

# + [markdown] id="ZywB5K_Dlawb"
# ---
Drug Discovery Using Machine Learning and Data Analysis/CDD_ML_Part_1_bioactivity_data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Analyzing the MSTIS simulation # # Included in this notebook: # # * Opening files for analysis # * Rates, fluxes, total crossing probabilities, and condition transition probabilities # * Per-ensemble properties such as path length distributions and interface crossing probabilities # * Move scheme analysis # * Replica exchange analysis # * Replica move history tree visualization # * Replaying the simulation # * MORE TO COME! Like free energy projections, path density plots, and more from __future__ import print_function # %matplotlib inline import matplotlib.pyplot as plt import openpathsampling as paths import numpy as np # The optimum way to use storage depends on whether you're doing production or analysis. For analysis, you should open the file as an `AnalysisStorage` object. This makes the analysis much faster. # %%time storage = paths.AnalysisStorage("ala_mstis_production.nc") print("PathMovers:", len(storage.pathmovers)) print("Engines:", len(storage.engines)) print("Samples:", len(storage.samples)) print("Ensembles:", len(storage.ensembles)) print("SampleSets:", len(storage.samplesets)) print("Snapshots:", len(storage.snapshots)) print("Trajectories:", len(storage.trajectories)) print("Networks:", len(storage.networks)) # %%time mstis = storage.networks[0] # %%time for cv in storage.cvs: print(cv.name, cv._store_dict) # ## Reaction rates # # TIS methods are especially good at determining reaction rates, and OPS makes it extremely easy to obtain the rate from a TIS network. # # Note that, although you can get the rate directly, it is very important to look at other results of the sampling (illustrated in this notebook and in notebooks referred to herein) in order to check the validity of the rates you obtain. 
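As a numeric illustration of how a TIS rate estimate combines a flux with crossing probabilities, here is a minimal sketch with hypothetical values (not values from this simulation):

```python
import numpy as np

flux_A0 = 0.02                    # flux through state A's innermost interface (1/time)
crossing_probs = [0.5, 0.4, 0.3]  # P(lambda_{i+1} | lambda_i) for successive interfaces
ctp_B = 0.25                      # probability that a path crossing the outermost
                                  # interface ends in state B

# Rate = flux * total crossing probability * conditional transition probability
k_AB = flux_A0 * np.prod(crossing_probs) * ctp_B
print(k_AB)
```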
# By default, the built-in analysis calculates histograms of the maximum value of some order parameter and of the path length for every sampled ensemble. You can add other things to this list as well, but you must always specify histogram parameters for these two. The path length is in units of frames.

mstis.hist_args['max_lambda'] = {'bin_width': 2, 'bin_range': (0.0, 90)}
mstis.hist_args['pathlength'] = {'bin_width': 5, 'bin_range': (0, 100)}

# %%time
mstis.rate_matrix(storage.steps, force=True)

# The self-rates (the rate of returning to the initial state) are undefined, and return not-a-number.
#
# The rate is calculated according to the formula:
#
# $$k_{AB} = \phi_{A,0} P(B|\lambda_m) \prod_{i=0}^{m-1} P(\lambda_{i+1} | \lambda_i)$$
#
# where $\phi_{A,0}$ is the flux from state A through its innermost interface, $P(B|\lambda_m)$ is the conditional transition probability (the probability that a path which crosses the interface at $\lambda_m$ ends in state B), and $\prod_{i=0}^{m-1} P(\lambda_{i+1} | \lambda_i)$ is the total crossing probability. We can look at each of these terms individually.

# ### Total crossing probability

stateA = storage.volumes["A"]
stateB = storage.volumes["B"]
stateC = storage.volumes["C"]

# +
tcp_AB = mstis.transitions[(stateA, stateB)].tcp
tcp_AC = mstis.transitions[(stateA, stateC)].tcp
tcp_BC = mstis.transitions[(stateB, stateC)].tcp
tcp_BA = mstis.transitions[(stateB, stateA)].tcp
tcp_CA = mstis.transitions[(stateC, stateA)].tcp
tcp_CB = mstis.transitions[(stateC, stateB)].tcp

plt.plot(tcp_AB.x, tcp_AB)
plt.plot(tcp_CA.x, tcp_CA)
plt.plot(tcp_BC.x, tcp_BC)
plt.plot(tcp_AC.x, tcp_AC)  # same as tcp_AB in MSTIS
# -

# We normally look at these on a log scale:

plt.plot(tcp_AB.x, np.log(tcp_AB))
plt.plot(tcp_CA.x, np.log(tcp_CA))
plt.plot(tcp_BC.x, np.log(tcp_BC))

# ### Flux
#
# Here we also calculate the flux contribution to each transition.
The flux is calculated based on # + import pandas as pd flux_matrix = pd.DataFrame(columns=mstis.states, index=mstis.states) for state_pair in mstis.transitions: transition = mstis.transitions[state_pair] flux_matrix.set_value(state_pair[0], state_pair[1], transition._flux) flux_matrix # - # ### Conditional transition probability # + outer_ctp_matrix = pd.DataFrame(columns=mstis.states, index=mstis.states) for state_pair in mstis.transitions: transition = mstis.transitions[state_pair] outer_ctp_matrix.set_value(state_pair[0], state_pair[1], transition.ctp[transition.ensembles[-1]]) outer_ctp_matrix # - # ## Path ensemble properties hists_A = mstis.transitions[(stateA, stateB)].histograms hists_B = mstis.transitions[(stateB, stateC)].histograms hists_C = mstis.transitions[(stateC, stateB)].histograms # ### Interface crossing probabilities # # We obtain the total crossing probability, shown above, by combining the individual crossing probabilities of for hist in [hists_A, hists_B, hists_C]: for ens in hist['max_lambda']: normalized = hist['max_lambda'][ens].normalized() plt.plot(normalized.x, normalized) # + # add visualization of the sum # - for hist in [hists_A, hists_B, hists_C]: for ens in hist['max_lambda']: reverse_cumulative = hist['max_lambda'][ens].reverse_cumulative() plt.plot(reverse_cumulative.x, reverse_cumulative) for hist in [hists_A, hists_B, hists_C]: for ens in hist['max_lambda']: reverse_cumulative = hist['max_lambda'][ens].reverse_cumulative() plt.plot(reverse_cumulative.x, np.log(reverse_cumulative)) # ### Path length histograms for hist in [hists_A, hists_B, hists_C]: for ens in hist['pathlength']: normalized = hist['pathlength'][ens].normalized() plt.plot(normalized.x, normalized) for ens in hists_A['pathlength']: normalized = hists_A['pathlength'][ens].normalized() plt.plot(normalized.x, normalized) # ## Sampling properties # # The properties we illustrated above were properties of the path ensembles. 
If your path ensembles are sufficiently well-sampled, these will never depend on how you sample them. # # But to figure out whether you've done a good job of sampling, you often want to look at properties related to the sampling process. OPS also makes these very easy. # ### Move scheme analysis scheme = storage.schemes[0] scheme.move_summary(storage.steps) scheme.move_summary(storage.steps, 'shooting') scheme.move_summary(storage.steps, 'minus') scheme.move_summary(storage.steps, 'repex') scheme.move_summary(storage.steps, 'pathreversal') # ### Replica exchange sampling # # See the notebook `repex_networks.ipynb` for more details on tools to study the convergence of replica exchange. However, a few simple examples are shown here. All of these are analyzed with a separate object, `ReplicaNetwork`. repx_net = paths.ReplicaNetwork(scheme, storage.steps) # #### Replica exchange mixing matrix repx_net.mixing_matrix() # #### Replica exchange graph # # The mixing matrix tells a story of how well various interfaces are connected to other interfaces. The replica exchange graph is essentially a visualization of the mixing matrix (actually, of the transition matrix -- the mixing matrix is a symmetrized version of the transition matrix). # # Note: We're still developing better layout tools to visualize these. repxG = paths.ReplicaNetworkGraph(repx_net) repxG.draw('spring') # #### Replica exchange flow # # Replica flow is defined as ***TODO*** # # Flow is designed for calculations where the replica exchange graph is linear, which ours clearly is not. However, we can define the flow over a subset of the interfaces. 
# ### Replica move history tree import openpathsampling.visualize as vis #reload(vis) from IPython.display import SVG # + tree = vis.PathTree( [step for step in storage.steps if not isinstance(step.change, paths.EmptyMoveChange)], vis.ReplicaEvolution(replica=3, accepted=False) ) tree.options.css['width'] = 'inherit' SVG(tree.svg()) # - decorrelated = tree.generator.decorrelated print ("We have " + str(len(decorrelated)) + " decorrelated trajectories.") # ### Visualizing trajectories # ## Histogramming data (TODO)
examples/alanine_dipeptide_mstis/AD_mstis_4_analysis.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernel_info:
#     name: python3-azureml
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Language Understanding
#
# Increasingly, we expect computers to be able to use AI to understand spoken or typed commands in natural language. For example, you might implement a home automation system that enables you to control devices in your home using voice commands such as "turn the light on" or "turn the fan on", with an AI-powered device that understands the command and takes the appropriate action.
#
#
# ![A robot listening](./images/language_understanding.jpg)
#
# ## Create authoring and prediction resources
#
# Microsoft Cognitive Services includes the Language Understanding service, which enables you to define *intents* that are applied to *entities* based on *utterances*. You can use either a **Language Understanding** resource or a **Cognitive Services** resource to *publish* a Language Understanding app, but you must create a separate **Language Understanding** resource for *authoring*.
#
# 1. Open a new browser tab, go to the Azure portal (https://portal.azure.com), and sign in with your Microsoft account.
# 2. Click **+ Create a resource** and search for *Language Understanding*.
# 3. In the list of services, click **Language Understanding**.
# 4. Select **Language Understanding** and click **Create**.
# 5. On the **Create** page, enter the following and click **Create**:
#     - **Create option**: Both
#     - **Name**: *a unique service name*
#     - **Subscription**: *select your Azure subscription*
#     - **Resource group**: *select an existing resource group or create a new one*
#     - **Authoring location**: *select any available location*
#     - **Authoring pricing tier**: F0
#     - **Prediction location**: *the same location as the authoring location*
#     - **Prediction pricing tier**: F0
# 6. Wait for the resources to be created; two Language Understanding resources will be provisioned, one for authoring and one for prediction. You can view both by navigating to the resource group where you created them.
#
# ### Create a Language Understanding app
#
# To implement natural language understanding with Language Understanding, you create an app and then add entities, intents, and utterances to define the commands the app should understand.
#
# 1. In a new browser tab, open the Language Understanding portal at [https://www.luis.ai](https://www.luis.ai) and sign in with the Microsoft account associated with your Azure subscription. If this is the first time you have signed into the Language Understanding portal, you may need to grant the app permission to access your account details. Complete the *Welcome* steps by selecting the Language Understanding authoring resource you created in your Azure subscription.
# 2. Open the **My Apps** page, select your subscription and your Language Understanding authoring resource, and then create a new app for Conversation with the following settings:
#     - **Name**: Home Automation
#     - **Culture**: English
#     - **Description**: Simple home automation system
#     - **Prediction resource**: *the Language Understanding prediction resource you created earlier*
# 3. If a panel with tips for creating an effective Language Understanding app appears, close it.
#
# ### Create entities
#
# An *entity* is something the language model can identify and that an action can be performed on. In this case, the Language Understanding app will be used to control various *devices* in an office, such as a light or a fan, so you will create a *device* entity containing a list of the device types the app should work with. For each device type, you create a sublist that identifies the name of the device (for example, *light*) and any synonyms that can be used to refer to that device type (for example, *lamp*).
#
# 1. On the Language Understanding page for your app, click **Entities** on the left. Then click **Create**, enter **device** as the name of the new entity, select the **List** type, and click **Create**.
# 2. On the **List items** page, under **Normalized Values**, type **light** and press ENTER.
# 3. After the **light** value has been added, type **lamp** under **Synonyms** and press ENTER.
# 4. Add **fan** as a second list item, with **AC** as its synonym.
#
# ### Create intents
#
# An *intent* is an action to perform on one or more entities, for example, turning on a light or turning off a fan. In this case, you will define two intents: one to switch a device on, and one to switch a device off. For each intent, you specify sample *utterances* that indicate the kind of language used to express the intent.
#
# 1. Select **Intents** on the left side of the page, click **Create**, and add an intent with the name **switch_on**. Then click **Done**.
# 2. Under **Examples** and **Example user input**, type the utterance ***turn the light on*** and press **Enter** to add it to the list.
# 3. In the *turn the light on* utterance, click the word "light" and assign it the **light** value of the **device** entity.
# 4. Add ***turn the fan on*** as a second utterance for the **switch_on** intent. Then click the word "fan" and assign it the **fan** value of the **device** entity.
# 5. Select **Intents** on the left side of the page, click **Create**, and add a second intent with the name **switch_off**.
# 6. On the page for the **switch_off** intent, add the utterance ***turn the light off*** and assign the word "light" the **light** value of the **device** entity.
# 7. Add ***turn the fan off*** as a second utterance for the **switch_off** intent, and assign the word "fan" the **fan** value of the **device** entity.
#
# ### Train and test the language model
#
# Now you can train the app's language model using the data you have provided in the form of entities, intents, and utterances.
#
# 1. On the Language Understanding page for your app, click **Train** to train the language model.
# 2. When the model has been trained, click **Test** and enter the following phrases to review the predicted intents on the Test page:
#     * *switch the light on*
#     * *turn off the fan*
#     * *turn the lamp off*
#     * *switch on the AC*
# 3. Close the Test pane.
#
# ### Publish the model and configure the endpoint
#
# To use the trained model in a client application, you must publish it as an endpoint to which client applications can send new utterances; the intents and entities will then be predicted from those utterances.
#
# 1. On the Language Understanding page for your app, click **Publish**. Then select **Production slot** and click **Done**.
# 2. After the model has been published, click **Manage** at the top of the Language Understanding page for your app. Then, on the **Application Information** tab, note the **App ID** of your app. Copy it and paste it over **YOUR_LU_APP_ID** in the cell below.
# 3. On the **Azure Resources** tab, note the **Primary key** and **Endpoint URL** of the prediction resource. Copy them and paste them over **YOUR_LU_KEY** and **YOUR_LU_ENDPOINT** in the code below.
# 4. Run the cell below by clicking its **Run cell** (&#9655;) button (on the left). When the input box appears, type *turn the light on*. The text will be interpreted by your Language Understanding model and an appropriate image will be displayed.

# + gather={"logged": 1599696381331} tags=[]
from python_code import luis
import matplotlib.pyplot as plt
from PIL import Image
import os
# %matplotlib inline

try:
    # Set up the API configuration
    luis_app_id = 'YOUR_LU_APP_ID'
    luis_key = 'YOUR_LU_KEY'
    luis_endpoint = 'YOUR_LU_ENDPOINT'

    # Prompt for a command
    command = input('Please enter a command: \n')

    # Get the predicted intent and entities (the code is in python_code.home_auto.py)
    action = luis.get_intent(luis_app_id, luis_key, luis_endpoint, command)

    # Show the appropriate image
    img_name = action + '.jpg'
    img = Image.open(os.path.join("data", "luis", img_name))
    plt.axis('off')
    plt.imshow(img)
except Exception as ex:
    print(ex)
# -

# Try running the cell above again with phrases such as:
#
# * *turn on the light*
# * *put the lamp off*
# * *switch the fan on*
# * *switch the light on*
# * *switch off the light*
# * *turn off the fan*
# * *switch the AC on*
#
# > **Note**: If you are curious about how the Language Understanding app gets the intents and entities, look at the **luis.py** file in the **python_code** folder.

# ## Add voice control
#
# So far we have seen how to analyze text, but increasingly AI systems enable humans to communicate with software services through speech recognition. To support this, the **Speech** cognitive service provides a simple way to transcribe speech into text.
#
# ### Create a Cognitive Services resource
#
# If you don't already have one, you can create a **Cognitive Services** resource in your Azure subscription with the following steps:
#
# 1. Open a new browser tab, go to the Azure portal (https://portal.azure.com), and sign in with your Microsoft account.
# 2. Click the **&#65291;Create a resource** button, search for *Cognitive Services*, and create a **Cognitive Services** resource with the following settings:
#     - **Name**: *enter a unique name (preferably letters and digits)*
#     - **Subscription**: *select your Azure subscription*
#     - **Location**: *any available location (Korea Central recommended)*
#     - **Pricing tier**: Standard S0
#     - **Resource group**: *any suitable name (preferably letters and digits)*
# 3. Wait until the deployment is complete. Then go to your Cognitive Services resource and, on the **Overview** page, click the link to manage the keys for the service. You will need the endpoint and a key to connect to the Cognitive Services resource from client applications.
#
#
# ### Get the key and endpoint of your Cognitive Services resource
#
# To use your Cognitive Services resource, client applications need its endpoint and an authentication key.
#
# 1. In the Azure portal, select your Cognitive Services resource, open its **Keys and Endpoint** page, copy **Key 1**, and paste it over **YOUR_COG_KEY** below.
# 2. Copy the **Endpoint** of the resource and paste it over **YOUR_COG_ENDPOINT** below.
# 3. Copy the **Location** of the resource and paste it over **YOUR_COG_REGION** below.
# 4. Select the cell below and run the code by clicking the **Run cell** (&#9655;) button on the left of the cell.
# + gather={"logged": 1599696409914} tags=[]
cog_key = 'YOUR_COG_KEY'
cog_endpoint = 'YOUR_COG_ENDPOINT'
cog_region = 'YOUR_COG_REGION'

print('Ready to use cognitive services in {} using key {}'.format(cog_region, cog_key))

# + [markdown] nteract={"transient": {"deleting": false}}
# To use the Speech service in your Cognitive Services resource, you need to install the Azure Cognitive Services Speech SDK.

# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# !pip install azure.cognitiveservices.speech
# -

# Run the cell below to transcribe speech from an audio file into text and use it as a command for the Language Understanding app.

# + gather={"logged": 1599696420498} tags=[]
from python_code import luis
import os
import IPython
from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer, AudioConfig

try:
    # Get the spoken command from an audio file
    file_name = 'light-on.wav'
    audio_file = os.path.join('data', 'luis', file_name)

    # Configure the speech recognizer
    speech_config = SpeechConfig(cog_key, cog_region)
    audio_config = AudioConfig(filename=audio_file)  # Use a file instead of the default (microphone)
    speech_recognizer = SpeechRecognizer(speech_config, audio_config)

    # Use a one-shot, synchronous call to transcribe the speech
    speech = speech_recognizer.recognize_once()

    # Get the predicted intent and entities (the code is in python_code.luis.py)
    action = luis.get_intent(luis_app_id, luis_key, luis_endpoint, speech.text)

    # Get the appropriate image
    img_name = action + '.jpg'

    # Play the audio and show the image
    IPython.display.display(IPython.display.Audio(audio_file, autoplay=True),
                            IPython.display.Image(data=os.path.join("data", "luis", img_name)))
except Exception as ex:
    print(ex)
# -

# Try running the cell above again with the audio file changed to **light-off.wav**.
#
# ## Learn more
#
# For more information about Language Understanding, see the [service documentation](https://docs.microsoft.com/azure/cognitive-services/luis/)
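For reference, the helper used above lives in **python_code/luis.py**. A minimal sketch of how such a helper might assemble the prediction request is shown below; `build_prediction_request` is an illustrative name, not part of the lab code, and the URL shape follows the public LUIS V3 REST prediction API:

```python
def build_prediction_request(endpoint, app_id, key, query):
    """Build the URL and query parameters for a LUIS V3 prediction GET request.

    This is a sketch of the request shape only; a real helper would send it
    (e.g. with the requests library) and read prediction/topIntent from the
    JSON response.
    """
    url = "{}/luis/prediction/v3.0/apps/{}/slots/production/predict".format(
        endpoint.rstrip("/"), app_id)
    params = {"subscription-key": key, "query": query, "verbose": True}
    return url, params

# hypothetical endpoint, app ID, and key
url, params = build_prediction_request(
    "https://example.cognitiveservices.azure.com/", "my-app-id", "my-key",
    "turn the light on")
# The top intent would then be read from the JSON response, e.g.
# response["prediction"]["topIntent"] -> "switch_on"
```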
02d - Language Understanding.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # References # - https://kontext.tech/column/code-snippets/402/pandas-dataframe-plot-pie-chart # - https://medium.com/dunder-data/selecting-subsets-of-data-in-pandas-6fcd0170be9c # - https://data36.com/pandas-tutorial-2-aggregation-and-grouping/ import pandas as pd df = pd.read_csv("students.csv") df # # Groupby division students=df by_division = students.groupby('division').count() by_division type(by_division) howard = students[students['boss'].str.contains('Howard')] # # Who works for <NAME> - I knew a <NAME> howard science_folks = students[students['division'].str.contains('Science')] science_folks len(science_folks) by_division # # Plot the classes of students using boss boss_by_division = by_division['boss'] boss_by_division boss_by_division.plot.pie(autopct='%1.1f%%', shadow = True) boss_by_division.plot.bar() # # Improve the colors from itertools import islice from itertools import cycle my_colors = list(islice(cycle(['b', 'r', 'g', 'y', 'k']),None,len(boss_by_division))) boss_by_division.plot(title='boss by division', kind='bar', stacked=True, color=my_colors) sayler = students[students['boss'].str.contains('Sayler')] sayler print(students['lastname']) lastname = students['lastname'] lastname lastname_no_indices = lastname.to_string(index=False) print(lastname_no_indices) type(lastname_no_indices) for i in lastname_no_indices.split('\n'): #print(i) i = i.lower() i = i.strip() print(i)
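The `to_string(index=False)` / `split('\n')` loop above works, but the same cleaning can be done directly with pandas' vectorized string methods; a small sketch on toy data (the column name matches the `lastname` column used above):

```python
import pandas as pd

# toy frame standing in for students.csv
students = pd.DataFrame({"lastname": ["  Smith", "JONES ", "Sayler"]})

# strip whitespace and lowercase in one chained, vectorized expression
cleaned = students["lastname"].str.strip().str.lower().tolist()
print(cleaned)  # ['smith', 'jones', 'sayler']
```

This avoids round-tripping through a printed string representation entirely.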
class/06-Instructor/01-Students.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # PACS details domains = ['photo', 'art_painting', 'cartoon', 'sketch'] classes = ['dog', 'elephant', 'giraffe', 'guitar', 'horse','house', 'person'] # # Set parameters # + import torch from torch import nn, optim from torch.utils.data import DataLoader from torch.optim.swa_utils import AveragedModel, SWALR import torchvision.datasets as Datasets from torchvision import models from torchvision.datasets import ImageFolder, DatasetFolder from utils import * import os import numpy as np import matplotlib.pyplot as plt from model.resnet18_selfreg import resnet18 # + ############################## # Training Setting ############################## # Select model to train # resnet18(pytorch official):'resnet18_classic' # SelfReg : 'resnet18' used_model = 'resnet18' save_name = 'SelfReg_official_test' # save_dir name # save_path : resnet_18/pacs/{save_name}/ dataset ='pacs' pacs_ver = 'pacs_official_split' number_of_tests = 20 gpu_num = 0 n_workers = 6 ############################## # Basic Hyper-parameters ############################## # Training Setting # classic : classic training # IDCL : classic + Inter-domain curriculum learning training_setting = 'IDCL' is_self_reg = True # Using SefReg Flag epochs = 30 batch_size = 128 is_pretrained = True # Use ImageNet pretrain weight ? 
used_optimizer = 'SGD' # 'Adam' or 'SGD'

# Learning rate
lr = 4e-3
lr_decay_epoch = [100]
lr_decay_gamma = 0.1

train_tf, test_tf = get_tf(augment=True)

# +
device = torch.device('cpu')
use_gpu = torch.cuda.is_available()
if use_gpu:
    print("Using CUDA")
    device = torch.device("cuda:{}".format(gpu_num))
print(device)

# Save the model settings
model_settings={
    "used_model" : used_model,
    "dataset" : dataset,
    "save_name" : save_name,
    "pacs_ver" : pacs_ver,
    "number_of_tests" : number_of_tests,
    "training_setting" : training_setting,
    "epochs" : epochs,
    "batch_size" : batch_size,
    "is_pretrained" : is_pretrained,
    "lr" : lr,
    "lr_decay_epoch" : lr_decay_epoch,
    "lr_decay_gamma" : lr_decay_gamma,
    "gpu_num" : gpu_num
}

criterion = nn.CrossEntropyLoss().to(device)
# -

# # Functions

# + code_folding=[63]
def classic_setting(test_domain_idx, domains, batch_size, is_pretrained,
                    train_tf, test_tf, used_model, pacs_ver, used_optimizer):

    train_set1 = ImageFolder(root=os.path.join('{}/train'.format(pacs_ver), domains[(test_domain_idx+1)%len(domains)]), transform = train_tf)
    train_set2 = ImageFolder(root=os.path.join('{}/train'.format(pacs_ver), domains[(test_domain_idx+2)%len(domains)]), transform = train_tf)
    train_set3 = ImageFolder(root=os.path.join('{}/train'.format(pacs_ver), domains[(test_domain_idx+3)%len(domains)]), transform = train_tf)

    val_set1 = ImageFolder(root=os.path.join('{}/val'.format(pacs_ver), domains[(test_domain_idx+1)%len(domains)]), transform = test_tf)
    val_set2 = ImageFolder(root=os.path.join('{}/val'.format(pacs_ver), domains[(test_domain_idx+2)%len(domains)]), transform = test_tf)
    val_set3 = ImageFolder(root=os.path.join('{}/val'.format(pacs_ver), domains[(test_domain_idx+3)%len(domains)]), transform = test_tf)

    train_set = train_set1+train_set2+train_set3
    val_set = val_set1+val_set2+val_set3
    test_set = ImageFolder(root=os.path.join('{}/test'.format(pacs_ver),domains[test_domain_idx]), transform = test_tf)

    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True,
num_workers=n_workers) val_loader = DataLoader(val_set, batch_size=batch_size, shuffle=False, num_workers=n_workers) test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False, num_workers=n_workers) if used_model=='vgg16': model = models.vgg16(pretrained=is_pretrained).cuda() model.classifier[6].out_features=len(classes) elif used_model=='inceptionv3': model = models.inception_v3(pretrained=is_pretrained).cuda() model.AuxLogits.fc.out_features = len(classes) model.fc.out_features=len(classes) elif used_model=='resnet18': # load weights pretrained on ImageNet model = resnet18(pretrained=is_pretrained) num_ftrs = model.fc.in_features model.fc = nn.Linear(num_ftrs,len(classes)) model = model.to(device) elif used_model=='resnet18_classic': model = models.resnet18(pretrained=is_pretrained) num_ftrs = model.fc.in_features model.fc = nn.Linear(num_ftrs,len(classes)) model = model.to(device) else: raise NotImplementedError if used_optimizer=="Adam": optimizer = optim.Adam(model.parameters(), lr=lr) elif used_optimizer=="SGD": optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9) else: raise NotImplementedError swa_model = AveragedModel(model).to(device) swa_scr = SWALR(optimizer, swa_lr=0.004, anneal_epochs=1) scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=lr_decay_epoch, gamma= lr_decay_gamma) return train_loader, val_loader, test_loader, optimizer, model, scheduler, swa_model, swa_scr def IDCL_setting(test_domain_idx, domains, batch_size, is_pretrained, train_tf, test_tf, used_model,pacs_ver, used_optimizer): check = 1 train_set = 0 val_set = 0 check_limit = 3 for i in range(4): if check > check_limit: break if i==test_domain_idx: continue temp = ImageFolder(root=os.path.join('{}/train'.format(pacs_ver),domains[i]), transform = train_tf) temp_val = ImageFolder(root=os.path.join('{}/val'.format(pacs_ver),domains[i]), transform = test_tf) if check==1: train_set = temp val_set = temp_val else: train_set += temp val_set += temp_val if 
check==1: train_set_stage1 = train_set val_set_stage1 = val_set elif check==2: train_set_stage2 = train_set val_set_stage2 = val_set elif check==3: train_set_stage3 = train_set val_set_stage3 = val_set check += 1 test_set = ImageFolder(root=os.path.join('{}/test'.format(pacs_ver),domains[test_domain_idx]), transform = test_tf) print('stage1 (train,val):',len(train_set_stage1),len(val_set_stage1)) print('stage2 (train,val):',len(train_set_stage2),len(val_set_stage2)) print('stage3 (train,val):',len(train_set_stage3),len(val_set_stage3)) print('test :',len(test_set)) t_loader1 = DataLoader(train_set_stage1, batch_size=batch_size, shuffle=True, num_workers=6) v_loader1 = DataLoader(val_set_stage1, batch_size=batch_size, shuffle=True, num_workers=6) t_loader2 = DataLoader(train_set_stage2, batch_size=batch_size, shuffle=True, num_workers=6) v_loader2 = DataLoader(val_set_stage2, batch_size=batch_size, shuffle=True, num_workers=6) t_loader3 = DataLoader(train_set_stage3, batch_size=batch_size, shuffle=True, num_workers=6) v_loader3 = DataLoader(val_set_stage3, batch_size=batch_size, shuffle=True, num_workers=6) test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False, num_workers=6) if used_model=='vgg16': print('vgg16') model = models.vgg16(pretrained=is_pretrained).cuda() model.classifier[6].out_features=7 elif used_model=='inceptionv3': model = models.inception_v3(pretrained=is_pretrained).cuda() model.AuxLogits.fc.out_features = 7 model.fc.out_features=7 elif used_model=='resnet18': model = resnet18(pretrained=is_pretrained) num_ftrs = model.fc.in_features model.fc = nn.Linear(num_ftrs,7) model = model.to(device) elif used_model=='resnet18_classic': model = models.resnet18(pretrained=is_pretrained) num_ftrs = model.fc.in_features model.fc = nn.Linear(num_ftrs,7) model = model.to(device) else: raise NotImplementedError if used_optimizer=="Adam": optimizer = optim.Adam(model.parameters(), lr=lr) elif used_optimizer=="SGD": optimizer = 
optim.SGD(model.parameters(), lr=lr, momentum=0.9) else: raise NotImplementedError lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=lr_decay_epoch, gamma= lr_decay_gamma) swa_model = AveragedModel(model).to(device) swa_scr = SWALR(optimizer, swa_lr=0.004, anneal_epochs=1) scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=lr_decay_epoch, gamma= lr_decay_gamma) train_loaders = [t_loader1,t_loader2, t_loader3] val_loaders = [v_loader1, v_loader2, v_loader3] return train_loaders, val_loaders, test_loader, optimizer, model, lr_scheduler, swa_model, swa_scr # - # # Automation # + code_folding=[] save_model_setting(model_settings,used_model,domains,dataset,save_name) for i in range(1,number_of_tests+1): try_check = i for test_idx in [3,2,1,0]: ########################## #### Training Setting #### ########################## if training_setting=='classic': train_loader, val_loader, test_loader, optimizer, model, lr_scheduler, swa_model, swa_scr = classic_setting( test_idx, domains, batch_size, is_pretrained, train_tf, test_tf, used_model, pacs_ver,used_optimizer ) elif training_setting=='IDCL': train_loaders, val_loaders, test_loader, optimizer, model, lr_scheduler,swa_model, swa_scr = IDCL_setting( test_idx, domains, batch_size, is_pretrained, train_tf, test_tf, used_model, pacs_ver, used_optimizer ) else: raise NotImplementedError save_dir = save_route(test_idx, domains, dataset, save_name, used_model) try: if not os.path.exists(save_dir): os.makedirs(save_dir) except: print('Error : Creating directory. 
'+ save_dir) ########################## #### Training #### ########################## if training_setting=='classic': model, losses, accuracies = classic_training( device, epochs, model,optimizer, criterion, train_loader, val_loader, lr_scheduler,is_self_reg=is_self_reg, swa_model=swa_model, swa_scr=swa_scr ) test_accuracy,_,__ = classic_test(device, model,criterion, test_loader,used_model, save_dir, try_check) elif training_setting=='IDCL': model, losses, accuracies = IDCL_training( device, epochs, model,optimizer, criterion, train_loaders, val_loaders, lr_scheduler, is_self_reg=is_self_reg, swa_model=swa_model, swa_scr=swa_scr ) test_accuracy,_,__ = classic_test(device, model,criterion, test_loader,used_model, save_dir, try_check) else: raise NotImplementedError total_result_text_path=os.path.join(save_dir,"test_total_result.txt") with open(total_result_text_path,"a") as f: print(test_accuracy) f.write(str(test_accuracy)+"\n") plotting(losses, accuracies, used_model, save_dir, is_pretrained, try_check) # save_model(model, used_model, save_dir, is_pretrained, try_check) # -
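The `MultiStepLR` scheduler configured in both setup functions (base `lr`, `milestones=lr_decay_epoch`, `gamma=lr_decay_gamma`) multiplies the learning rate by `gamma` once per milestone reached; its arithmetic can be sketched in plain Python (a sketch of the schedule's rule, not the torch implementation):

```python
def lr_at_epoch(base_lr, milestones, gamma, epoch):
    """MultiStepLR rule: the lr is multiplied by gamma at each milestone epoch."""
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * (gamma ** passed)

# With the notebook's settings (lr=4e-3, lr_decay_epoch=[100], lr_decay_gamma=0.1)
# and epochs=30, the decay never fires during training:
print(lr_at_epoch(4e-3, [100], 0.1, 29))   # 0.004 (no milestone passed)
early = lr_at_epoch(4e-3, [100], 0.1, 150)  # decayed by gamma once, ~4e-4
```

So with `epochs = 30` the `[100]` milestone is effectively a constant-lr setting, aside from the separate SWA scheduler.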
codes/train.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import numpy as np import pandas as pd import netCDF4 as nc # + DATA_FILE_DIR = "./nasa/" START_YEAR, END_YEAR = 2010, 2020 NUM_OF_YEARS = END_YEAR - START_YEAR NUM_OF_MONTHS = 12 NUM_OF_DAYS = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30, 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31} # + file = nc.Dataset(DATA_FILE_DIR+'20110101.nc4') lat = file.variables['lat'][:].filled() lon = file.variables['lon'][:].filled() mask = file.variables['AvgSurfT_tavg'][0].mask LON = len(lon) LAT = len(lat) file.close() # - def get_tmp(filepath): assert os.path.isfile(filepath), '{} does not exist!'.format(filepath) file = nc.Dataset(filepath) temperature = file.variables['AvgSurfT_tavg'][0] file.close() return temperature.filled(np.nan) # %%time each_year_HDD = np.ndarray(shape=(NUM_OF_YEARS, LAT, LON)) each_year_MPID = np.ndarray(shape=(NUM_OF_YEARS, LAT, LON)) for year in range(START_YEAR, END_YEAR): print(year) yearly_temp = np.ndarray(shape=(365, LAT, LON)) i = 0 for month in range(1, NUM_OF_MONTHS+1): for day in range(1, NUM_OF_DAYS[month]+1): date = "{}{:02d}{:02d}".format(year, month, day) filepath = DATA_FILE_DIR + date + '.nc4' yearly_temp[i] = get_tmp(filepath) i += 1 date_HDD = np.where(yearly_temp<291.15, 291.15-yearly_temp, 0) #291.15 K = 18 oC date_MPID = np.where(yearly_temp<253.15, 1, 0) # 253.15 K = -20 oC each_year_HDD[year-START_YEAR] = date_HDD.sum(axis=0) each_year_MPID[year-START_YEAR] = date_MPID.sum(axis=0) avg_HDD = each_year_HDD.mean(axis=0) avg_MPID = each_year_MPID.mean(axis=0) pos_HDD = np.argmax(avg_HDD) print(pos_HDD // LON, pos_HDD % LON) pos_MPID = np.argmax(avg_MPID) print(pos_MPID // LON, pos_MPID % LON) avg_MPID[564, 563] avg_MPID[556, 556] avg_HDD[564, 563] avg_HDD[556, 556]
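The heating-degree-day rule in the loop above (`np.where(yearly_temp<291.15, 291.15-yearly_temp, 0)`, summed over the year) can be illustrated on a toy daily series in plain Python; 291.15 K is the 18 °C base used above:

```python
HDD_BASE_K = 291.15  # 18 degrees C, the base temperature used in the notebook

def heating_degree_days(daily_temps_k, base=HDD_BASE_K):
    """Sum of (base - T) over days colder than the base; warmer days add 0."""
    return sum(max(0.0, base - t) for t in daily_temps_k)

# toy series: one day 10 K below base, one day 1.15 K below, one warm day
temps = [281.15, 290.0, 295.0]
total = heating_degree_days(temps)  # about 11.15 kelvin-days
```

The MPID counter works the same way but counts days below 253.15 K (−20 °C) instead of accumulating the temperature deficit.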
archive/NASA_data/archive/worst_place_search.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # + [markdown] origin_pos=0 # # 批量规范化 # :label:`sec_batch_norm` # # 训练深层神经网络是十分困难的,特别是在较短的时间内使他们收敛更加棘手。 # 在本节中,我们将介绍*批量规范化*(batch normalization) :cite:`Ioffe.Szegedy.2015`,这是一种流行且有效的技术,可持续加速深层网络的收敛速度。 # 再结合在 :numref:`sec_resnet`中将介绍的残差块,批量规范化使得研究人员能够训练100层以上的网络。 # # ## 训练深层网络 # # 为什么需要批量规范化层呢?让我们来回顾一下训练神经网络时出现的一些实际挑战。 # # 首先,数据预处理的方式通常会对最终结果产生巨大影响。 # 回想一下我们应用多层感知机来预测房价的例子( :numref:`sec_kaggle_house`)。 # 使用真实数据时,我们的第一步是标准化输入特征,使其平均值为0,方差为1。 # 直观地说,这种标准化可以很好地与我们的优化器配合使用,因为它可以将参数的量级进行统一。 # # 第二,对于典型的多层感知机或卷积神经网络。当我们训练时,中间层中的变量(例如,多层感知机中的仿射变换输出)可能具有更广的变化范围:不论是沿着从输入到输出的层,跨同一层中的单元,或是随着时间的推移,模型参数的随着训练更新变幻莫测。 # 批量规范化的发明者非正式地假设,这些变量分布中的这种偏移可能会阻碍网络的收敛。 # 直观地说,我们可能会猜想,如果一个层的可变值是另一层的100倍,这可能需要对学习率进行补偿调整。 # # 第三,更深层的网络很复杂,容易过拟合。 # 这意味着正则化变得更加重要。 # # 批量规范化应用于单个可选层(也可以应用到所有层),其原理如下:在每次训练迭代中,我们首先规范化输入,即通过减去其均值并除以其标准差,其中两者均基于当前小批量处理。 # 接下来,我们应用比例系数和比例偏移。 # 正是由于这个基于*批量*统计的*标准化*,才有了*批量规范化*的名称。 # # 请注意,如果我们尝试使用大小为1的小批量应用批量规范化,我们将无法学到任何东西。 # 这是因为在减去均值之后,每个隐藏单元将为0。 # 所以,只有使用足够大的小批量,批量规范化这种方法才是有效且稳定的。 # 请注意,在应用批量规范化时,批量大小的选择可能比没有批量规范化时更重要。 # # 从形式上来说,用$\mathbf{x} \in \mathcal{B}$表示一个来自小批量$\mathcal{B}$的输入,批量规范化$\mathrm{BN}$根据以下表达式转换$\mathbf{x}$: # # $$\mathrm{BN}(\mathbf{x}) = \boldsymbol{\gamma} \odot \frac{\mathbf{x} - \hat{\boldsymbol{\mu}}_\mathcal{B}}{\hat{\boldsymbol{\sigma}}_\mathcal{B}} + \boldsymbol{\beta}.$$ # :eqlabel:`eq_batchnorm` # # 在 :eqref:`eq_batchnorm`中,$\hat{\boldsymbol{\mu}}_\mathcal{B}$是小批量$\mathcal{B}$的样本均值,$\hat{\boldsymbol{\sigma}}_\mathcal{B}$是小批量$\mathcal{B}$的样本标准差。 # 应用标准化后,生成的小批量的平均值为0和单位方差为1。 # 由于单位方差(与其他一些魔法数)是一个主观的选择,因此我们通常包含 # *拉伸参数*(scale)$\boldsymbol{\gamma}$和*偏移参数*(shift)$\boldsymbol{\beta}$,它们的形状与$\mathbf{x}$相同。 # 请注意,$\boldsymbol{\gamma}$和$\boldsymbol{\beta}$是需要与其他模型参数一起学习的参数。 # # 
由于在训练过程中,中间层的变化幅度不能过于剧烈,而批量规范化将每一层主动居中,并将它们重新调整为给定的平均值和大小(通过$\hat{\boldsymbol{\mu}}_\mathcal{B}$和${\hat{\boldsymbol{\sigma}}_\mathcal{B}}$)。 # # 从形式上来看,我们计算出 :eqref:`eq_batchnorm`中的$\hat{\boldsymbol{\mu}}_\mathcal{B}$和${\hat{\boldsymbol{\sigma}}_\mathcal{B}}$,如下所示: # # $$\begin{aligned} \hat{\boldsymbol{\mu}}_\mathcal{B} &= \frac{1}{|\mathcal{B}|} \sum_{\mathbf{x} \in \mathcal{B}} \mathbf{x},\\ # \hat{\boldsymbol{\sigma}}_\mathcal{B}^2 &= \frac{1}{|\mathcal{B}|} \sum_{\mathbf{x} \in \mathcal{B}} (\mathbf{x} - \hat{\boldsymbol{\mu}}_{\mathcal{B}})^2 + \epsilon.\end{aligned}$$ # # 请注意,我们在方差估计值中添加一个小的常量$\epsilon > 0$,以确保我们永远不会尝试除以零,即使在经验方差估计值可能消失的情况下也是如此。估计值$\hat{\boldsymbol{\mu}}_\mathcal{B}$和${\hat{\boldsymbol{\sigma}}_\mathcal{B}}$通过使用平均值和方差的噪声(noise)估计来抵消缩放问题。 # 你可能会认为这种噪声是一个问题,而事实上它是有益的。 # # 事实证明,这是深度学习中一个反复出现的主题。 # 由于尚未在理论上明确的原因,优化中的各种噪声源通常会导致更快的训练和较少的过拟合:这种变化似乎是正则化的一种形式。 # 在一些初步研究中, :cite:`Teye.Azizpour.Smith.2018`和 :cite:`Luo.Wang.Shao.ea.2018`分别将批量规范化的性质与贝叶斯先验相关联。 # 这些理论揭示了为什么批量规范化最适应$50 \sim 100$范围中的中等批量大小的难题。 # # 另外,批量规范化层在”训练模式“(通过小批量统计数据规范化)和“预测模式”(通过数据集统计规范化)中的功能不同。 # 在训练过程中,我们无法得知使用整个数据集来估计平均值和方差,所以只能根据每个小批次的平均值和方差不断训练模型。 # 而在预测模式下,可以根据整个数据集精确计算批量规范化所需的平均值和方差。 # # 现在,我们了解一下批量规范化在实践中是如何工作的。 # # ## 批量规范化层 # # 回想一下,批量规范化和其他层之间的一个关键区别是,由于批量规范化在完整的小批量上运行,因此我们不能像以前在引入其他层时那样忽略批量大小。 # 我们在下面讨论这两种情况:全连接层和卷积层,他们的批量规范化实现略有不同。 # # ### 全连接层 # # 通常,我们将批量规范化层置于全连接层中的仿射变换和激活函数之间。 # 设全连接层的输入为u,权重参数和偏置参数分别为$\mathbf{W}$和$\mathbf{b}$,激活函数为$\phi$,批量规范化的运算符为$\mathrm{BN}$。 # 那么,使用批量规范化的全连接层的输出的计算详情如下: # # $$\mathbf{h} = \phi(\mathrm{BN}(\mathbf{W}\mathbf{x} + \mathbf{b}) ).$$ # # 回想一下,均值和方差是在应用变换的"相同"小批量上计算的。 # # ### 卷积层 # # 同样,对于卷积层,我们可以在卷积层之后和非线性激活函数之前应用批量规范化。 # 当卷积有多个输出通道时,我们需要对这些通道的“每个”输出执行批量规范化,每个通道都有自己的拉伸(scale)和偏移(shift)参数,这两个参数都是标量。 # 假设我们的小批量包含$m$个样本,并且对于每个通道,卷积的输出具有高度$p$和宽度$q$。 # 那么对于卷积层,我们在每个输出通道的$m \cdot p \cdot q$个元素上同时执行每个批量规范化。 # 因此,在计算平均值和方差时,我们会收集所有空间位置的值,然后在给定通道内应用相同的均值和方差,以便在每个空间位置对值进行规范化。 # # ### 预测过程中的批量规范化 # # 正如我们前面提到的,批量规范化在训练模式和预测模式下的行为通常不同。 # 
首先,将训练好的模型用于预测时,我们不再需要样本均值中的噪声以及在微批次上估计每个小批次产生的样本方差了。 # 其次,例如,我们可能需要使用我们的模型对逐个样本进行预测。 # 一种常用的方法是通过移动平均估算整个训练数据集的样本均值和方差,并在预测时使用它们得到确定的输出。 # 可见,和暂退法一样,批量规范化层在训练模式和预测模式下的计算结果也是不一样的。 # # ## (**从零实现**) # # 下面,我们从头开始实现一个具有张量的批量规范化层。 # # + origin_pos=1 tab=["mxnet"] from mxnet import autograd, init, np, npx from mxnet.gluon import nn from d2l import mxnet as d2l npx.set_np() def batch_norm(X, gamma, beta, moving_mean, moving_var, eps, momentum): # 通过autograd来判断当前模式是训练模式还是预测模式 if not autograd.is_training(): # 如果是在预测模式下,直接使用传入的移动平均所得的均值和方差 X_hat = (X - moving_mean) / np.sqrt(moving_var + eps) else: assert len(X.shape) in (2, 4) if len(X.shape) == 2: # 使用全连接层的情况,计算特征维上的均值和方差 mean = X.mean(axis=0) var = ((X - mean) ** 2).mean(axis=0) else: # 使用二维卷积层的情况,计算通道维上(axis=1)的均值和方差。 # 这里我们需要保持X的形状以便后面可以做广播运算 mean = X.mean(axis=(0, 2, 3), keepdims=True) var = ((X - mean) ** 2).mean(axis=(0, 2, 3), keepdims=True) # 训练模式下,用当前的均值和方差做标准化 X_hat = (X - mean) / np.sqrt(var + eps) # 更新移动平均的均值和方差 moving_mean = momentum * moving_mean + (1.0 - momentum) * mean moving_var = momentum * moving_var + (1.0 - momentum) * var Y = gamma * X_hat + beta # 缩放和移位 return Y, moving_mean, moving_var # + [markdown] origin_pos=4 # 我们现在可以[**创建一个正确的`BatchNorm`层**]。 # 这个层将保持适当的参数:拉伸`gamma`和偏移`beta`,这两个参数将在训练过程中更新。 # 此外,我们的层将保存均值和方差的移动平均值,以便在模型预测期间随后使用。 # # 撇开算法细节,注意我们实现层的基础设计模式。 # 通常情况下,我们用一个单独的函数定义其数学原理,比如说`batch_norm`。 # 然后,我们将此功能集成到一个自定义层中,其代码主要处理数据移动到训练设备(如GPU)、分配和初始化任何必需的变量、跟踪移动平均线(此处为均值和方差)等问题。 # 为了方便起见,我们并不担心在这里自动推断输入形状,因此我们需要指定整个特征的数量。 # 不用担心,深度学习框架中的批量规范化API将为我们解决上述问题,我们稍后将展示这一点。 # # + origin_pos=5 tab=["mxnet"] class BatchNorm(nn.Block): # num_features:完全连接层的输出数量或卷积层的输出通道数。 # num_dims:2表示完全连接层,4表示卷积层 def __init__(self, num_features, num_dims, **kwargs): super().__init__(**kwargs) if num_dims == 2: shape = (1, num_features) else: shape = (1, num_features, 1, 1) # 参与求梯度和迭代的拉伸和偏移参数,分别初始化成1和0 self.gamma = self.params.get('gamma', shape=shape, init=init.One()) self.beta = self.params.get('beta', shape=shape, 
                                    init=init.Zero())
        # Variables that are not model parameters, initialized to 0 and 1
        self.moving_mean = np.zeros(shape)
        self.moving_var = np.ones(shape)

    def forward(self, X):
        # If X is not in main memory, copy moving_mean and moving_var to the
        # device memory where X lives
        if self.moving_mean.ctx != X.ctx:
            self.moving_mean = self.moving_mean.copyto(X.ctx)
            self.moving_var = self.moving_var.copyto(X.ctx)
        # Save the updated moving_mean and moving_var
        Y, self.moving_mean, self.moving_var = batch_norm(
            X, self.gamma.data(), self.beta.data(), self.moving_mean,
            self.moving_var, eps=1e-12, momentum=0.9)
        return Y

# + [markdown] origin_pos=8
# ## Applying Batch Normalization in LeNet
#
# To see how to [**apply `BatchNorm`**] in context, below we apply it (**to the LeNet model**) (:numref:`sec_lenet`).
# Recall that batch normalization is applied after the convolutional or fully connected layers but before the corresponding activation functions.

# + origin_pos=9 tab=["mxnet"]
net = nn.Sequential()
net.add(nn.Conv2D(6, kernel_size=5),
        BatchNorm(6, num_dims=4),
        nn.Activation('sigmoid'),
        nn.AvgPool2D(pool_size=2, strides=2),
        nn.Conv2D(16, kernel_size=5),
        BatchNorm(16, num_dims=4),
        nn.Activation('sigmoid'),
        nn.AvgPool2D(pool_size=2, strides=2),
        nn.Dense(120),
        BatchNorm(120, num_dims=2),
        nn.Activation('sigmoid'),
        nn.Dense(84),
        BatchNorm(84, num_dims=2),
        nn.Activation('sigmoid'),
        nn.Dense(10))

# + [markdown] origin_pos=12
# As before, we will [**train our network on the Fashion-MNIST dataset**].
# This code is virtually identical to the one we used when first training LeNet (:numref:`sec_lenet`); the main difference is the considerably larger learning rate.

# + origin_pos=13 tab=["mxnet"]
lr, num_epochs, batch_size = 1.0, 10, 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())

# + [markdown] origin_pos=15
# Let us have a look at the [**scale parameter `gamma` and the shift parameter `beta`**] learned in the first batch normalization layer.

# + origin_pos=16 tab=["mxnet"]
net[1].gamma.data().reshape(-1,), net[1].beta.data().reshape(-1,)

# + [markdown] origin_pos=19
# ## [**Concise Implementation**]
#
# Instead of the `BatchNorm` class that we just defined ourselves, we can use the `BatchNorm` class defined by the deep learning framework directly.
# The code looks virtually identical to our implementation above.

# + origin_pos=20 tab=["mxnet"]
net = nn.Sequential()
net.add(nn.Conv2D(6, kernel_size=5),
        nn.BatchNorm(),
        nn.Activation('sigmoid'),
        nn.AvgPool2D(pool_size=2, strides=2),
        nn.Conv2D(16, kernel_size=5),
        nn.BatchNorm(),
        nn.Activation('sigmoid'),
        nn.AvgPool2D(pool_size=2, strides=2),
        nn.Dense(120),
        nn.BatchNorm(),
        nn.Activation('sigmoid'),
        nn.Dense(84),
        nn.BatchNorm(),
        nn.Activation('sigmoid'),
        nn.Dense(10))

# + [markdown] origin_pos=23
# Below, we [**train our model using the same hyperparameters as before**].
# Note that, as usual, the high-level API variant runs much faster because its code has been compiled to C++ or CUDA, whereas our custom implementation must be interpreted by Python.

# + origin_pos=24 tab=["mxnet"]
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())

# + [markdown] origin_pos=25
# ## Controversy
#
# Intuitively, batch normalization is thought to make the optimization landscape smoother.
# However, we must be careful to distinguish between speculative intuitions and true explanations for the phenomena that we observe.
# Recall that we do not even know why simpler deep neural networks (multilayer perceptrons and conventional convolutional neural networks) generalize so well in the first place.
# Even with dropout and weight decay, they remain so flexible that their ability to generalize to unseen data cannot be explained via conventional learning-theoretic generalization guarantees.
#
# In the paper proposing batch normalization, the authors, in addition to introducing its application, explained why it works: by reducing *internal covariate shift*.
# Presumably, by internal covariate shift they meant something like the speculative intuition expressed above, namely that the distribution of variable values changes over the course of training.
# However, there are two problems with this explanation:
# 1. This drift is very different from *covariate shift* in the strict sense, rendering the name a misnomer.
# 2. The explanation offers only an underspecified intuition and leaves open a question for later investigation: why exactly is this technique so effective?
# Throughout this book, we aim to convey the intuitions that practitioners use to guide their development of deep neural networks.
# However, it is important to separate these guiding intuitions from established scientific fact.
# Eventually, when you master this material and start writing your own research papers, you will want to be clear in delineating between technical claims and hunches.
#
# Following the success of batch normalization, the explanation via internal covariate shift has repeatedly surfaced in debates in the technical literature and in the broader discourse about how machine learning research should be presented.
# In a memorable speech while accepting a Test of Time Award at the 2017 NeurIPS conference, <NAME> used internal covariate shift as a focal point in an argument likening the modern practice of deep learning to alchemy.
# The example was later revisited in detail :cite:`Lipton.Steinhardt.2018`, in a review outlining troubling trends in machine learning.
# Moreover, other authors have offered an alternative explanation for the success of batch normalization: in some ways, its behavior is the opposite of that claimed in the original paper :cite:`Santurkar.Tsipras.Ilyas.ea.2018`.
#
# However, internal covariate shift is no more worthy of criticism than thousands of similarly vague claims made in the machine learning literature.
# Likely, its resonance as a focal point of these debates owes to its broad recognizability among the target audience.
# Batch normalization has proven to be an indispensable method. It is applied in nearly all image classifiers and has earned tens of thousands of citations in the academic literature.
#
# ## Summary
#
# * During model training, batch normalization continuously adjusts the intermediate outputs of the network by using the mean and standard deviation of the minibatch, making the intermediate outputs throughout the network more stable.
# * Batch normalization is used slightly differently for fully connected layers and convolutional layers.
# * Like a dropout layer, a batch normalization layer computes differently in training mode and prediction mode.
# * Batch normalization has many beneficial side effects, primarily that of regularization. On the other hand, the original motivation of "reducing internal covariate shift" does not seem to be a valid explanation.
#
# ## Exercises
#
# 1. Can we remove the bias parameter from the fully connected layer or the convolutional layer before batch normalization? Why?
# 1. Compare the learning rates for LeNet with and without batch normalization.
#     1. Plot the increase in training and test accuracy.
#     1. How high can you make the learning rate?
# 1. Do we need batch normalization in every layer? Experiment with it.
# 1. Can you replace dropout with batch normalization? How does the behavior change?
# 1. Pin down the parameters `beta` and `gamma`, then observe and analyze the results.
# 1. Review the online documentation for `BatchNorm` in the high-level API to see other applications of batch normalization.
# 1. Research idea: think of other "normalization" transforms you could apply. Can you apply the probability integral transform? What about a full-rank covariance estimate?
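# The minibatch statistics and the prediction-mode moving averages described above can be checked with a small NumPy sketch (illustrative only; this is not part of the chapter's MXNet code):

```python
import numpy as np

# Toy minibatch for a fully connected layer: 4 examples, 3 features
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0],
              [3.0, 4.0, 5.0],
              [4.0, 5.0, 6.0]])
eps = 1e-5

# Minibatch statistics, as in the equations for mu_B and sigma_B above
mu = X.mean(axis=0)
var = X.var(axis=0)

# Training-mode normalization: roughly zero mean, unit variance per feature
X_hat = (X - mu) / np.sqrt(var + eps)

# Prediction-mode statistics are tracked as exponential moving averages;
# repeatedly seeing the same batch drives the running mean toward mu
momentum = 0.9
moving_mean = np.zeros(3)
for _ in range(100):
    moving_mean = momentum * moving_mean + (1 - momentum) * mu

print(X_hat.mean(axis=0))  # close to 0 for every feature
print(moving_mean)         # close to mu
```

# With momentum 0.9, the running mean is a geometric average of past batch means, which is why it converges to the dataset-level statistics that prediction mode requires.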
# # + [markdown] origin_pos=26 tab=["mxnet"] # [Discussions](https://discuss.d2l.ai/t/1876) #
submodules/resource/d2l-zh/mxnet/chapter_convolutional-modern/batch-norm.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np from lapy import TetMesh, TetIO, FuncIO from lapy.Plot import plot_tet_mesh import plotly plotly.offline.init_notebook_mode(connected=True) # - T = TetIO.import_vtk('../data/cubeTetra.vtk') #T.is_oriented() T.orient_() # + from lapy import Solver fem = Solver(T,lump=True) evals, evec = fem.eigs(10) # - # also get A,B (lumped), and inverse of B (easy as it is diagonal) A, B = fem.stiffness, fem.mass Bi = B.copy() Bi.data **= -1 evnum=1 cutting = ['x<0.5'] # also here we comment all plots to reduce file size # uncomment and take a look #plot_tet_mesh(T,vfunc=evals[evnum]*evec[:,evnum],plot_edges=None,plot_levels=False,cutting=cutting,edge_color='rgb(50,50,50)',html_output=False,flatshading=True) from lapy.DiffGeo import compute_gradient from lapy.DiffGeo import compute_divergence grad = compute_gradient(T,evec[:,evnum]) divx = -compute_divergence(T,grad) vfunc = Bi*divx cutting = ['x<0.5'] #plot_tet_mesh(T,vfunc=vfunc,plot_edges=None,plot_levels=False,cutting=cutting,edge_color='rgb(50,50,50)',html_output=False,flatshading=True) np.max(np.abs(vfunc-(evals[evnum]*evec[:,evnum]))) dd = np.abs(vfunc-(evals[evnum]*evec[:,evnum])) max(dd) # + from lapy import Heat tria = T.boundary_tria() bvert = np.unique(tria.t) u = Heat.diffusion(T,bvert,m=1) cutting = ['x<0.5'] #plot_tet_mesh(T,vfunc=u,plot_edges=None,plot_levels=True,cutting=cutting,edge_color='rgb(50,50,50)',html_output=False,flatshading=True) # + # compute gradient of heat diffusion, normalize it, and compute the divergence of normalized gradient tfunc = compute_gradient(T, u) # flip and normalize it X = -tfunc / np.sqrt((tfunc ** 2).sum(1))[:, np.newaxis] X = np.nan_to_num(X) # compute divergence divx = compute_divergence(T, X) # + # compute distance from scipy.sparse.linalg 
import splu useCholmod = True try: from sksparse.cholmod import cholesky except ImportError: useCholmod = False A, B = fem.stiffness, fem.mass # computed above when creating Solver H=A b0=-divx # solve H x = b0 print("Matrix Format now: "+H.getformat()) if useCholmod: print("Solver: cholesky decomp - performance optimal ...") chol = cholesky(H) x = chol(b0) else: print("Solver: spsolve (LU decomp) - performance not optimal ...") lu = splu(H) x = lu.solve(b0) x = x - np.min(x) # - cutting = ['x<0.5'] #plot_tet_mesh(T,vfunc=x,plot_edges=None,plot_levels=True,cutting=cutting,edge_color='rgb(50,50,50)',html_output=False,flatshading=True) max(x), 0.5*np.sqrt(3.0) # + #debug gradient v1func = T.v[:,0]* T.v[:,0] + T.v[:,1]* T.v[:,1] + T.v[:,2]* T.v[:,2]# x coord #v1func = (v4[:,1]-0.5) * (v4[:,1]-0.5) + (v4[:,0]-0.5)* (v4[:,0]-0.5) # xcoord grad = compute_gradient(T,v1func) glength = np.sqrt(np.sum(grad * grad, axis=1)) fcols=glength fcols=grad[:,2] cutting = ['x<0.5'] cutting = None #plot_tet_mesh(T,vfunc=None,tfunc=fcols,plot_edges=None,plot_levels=False,cutting=cutting,edge_color='rgb(50,50,50)',html_output=False) # + divx = compute_divergence(T, grad) divx2 = Bi * divx cutting = ['z<0.5'] #plot_tet_mesh(T,vfunc=divx2,plot_edges=True,plot_levels=False,cutting=cutting,edge_color='rgb(50,50,50)',html_output=False,flatshading=True) # - divx2[5000:5010] T.avg_edge_length() # + from lapy.DiffGeo import compute_geodesic_f from lapy import Heat tria = T.boundary_tria() bvert=np.unique(tria.t) # get heat diffusion u = Heat.diffusion(T,bvert, m=1) gu = compute_geodesic_f(T,u) cutting = ['x<0.5'] #plot_tet_mesh(T,vfunc=gu,plot_edges=None,plot_levels=True,cutting=cutting,edge_color='rgb(50,50,50)',html_output=False,flatshading=True)
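# The Cholesky-with-LU-fallback pattern used above generalizes to any sparse symmetric positive definite system. A minimal sketch on a toy 1-D Laplacian (only `scipy` is required; `sksparse` is optional, exactly as in the cell above):

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

# Toy SPD system: 1-D Dirichlet Laplacian H, solve H x = b
n = 50
H = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

try:
    # sparse Cholesky decomposition when scikit-sparse is installed
    from sksparse.cholmod import cholesky
    x = cholesky(H)(b)
except ImportError:
    # LU decomposition fallback, as in the geodesic solve above
    x = splu(H).solve(b)

# The residual should be near machine precision either way
print(np.max(np.abs(H @ x - b)))
```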
examples/Test_TetMesh_Geodesics.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from numba import cuda cuda.close() df = pd.read_csv('../data/minda-corp.csv') df = df[::-1] df.head() close_price = df["Close Price"] time = range(len(close_price)) split_time = 900 price_train = close_price[:split_time] price_valid = close_price[split_time:] time_train = time[:split_time] time_valid = time[split_time:] def window_dataset(series,window_size,batch_size,shuffle_buffer): data = tf.data.Dataset.from_tensor_slices(series) data = data.window(window_size+1,shift=1,drop_remainder=True) data = data.flat_map(lambda window : window.batch(window_size+1)) data = data.shuffle(shuffle_buffer).map(lambda window: (window[:-1],window[-1])) data = data.batch(batch_size).prefetch(1) return data tf.keras.backend.clear_session() # ### Simple RNN model = tf.keras.Sequential([ tf.keras.layers.Lambda(lambda x: tf.expand_dims(x,axis=-1),input_shape=[None]), tf.keras.layers.SimpleRNN(40,return_sequences=True), tf.keras.layers.SimpleRNN(40), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x : x*100) ]) lr_sch = tf.keras.callbacks.LearningRateScheduler(lambda epoch : 1e-8 * 10**(epoch/20)) optimizer = tf.keras.optimizers.SGD(lr=1e-8,momentum=0.9) model.compile(optimizer = optimizer, loss=tf.keras.losses.Huber(),metrics = ['mae']) # + tf.random.set_seed(101) np.random.seed(101) dataset = window_dataset(price_train, 20, batch_size=32, shuffle_buffer=1000) # - history = model.fit(dataset,epochs =100, callbacks=[lr_sch]) plt.figure() plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-8, 1e-4, 0, 30]) history.history["loss"].index(min(history.history["loss"])) history.history["lr"][33] optimizer = 
tf.keras.optimizers.SGD(lr=4.466836e-07,momentum=0.9) model.compile(optimizer = optimizer, loss=tf.keras.losses.Huber(),metrics = ['mae']) history = model.fit(dataset,epochs =400) def plot_series(time, series, format="-", start=0, end=None): plt.plot(time[start:end], series[start:end], format) plt.xlabel("Time") plt.ylabel("Value") plt.grid(True) # + forecast = [] for time in range(len(close_price) - 20): forecast.append(model.predict(tf.expand_dims(close_price[time:time+20],axis=-1))) forecast = forecast[split_time-20:] results = np.array(forecast)[:,0,0] # + plot_series(time_valid, price_valid) plot_series(time_valid, results) # - print(tf.keras.metrics.mean_absolute_error(price_valid,results).numpy()) # ### Bidirectional LSTM # + tf.random.set_seed(101) np.random.seed(101) dataset = window_dataset(price_train, 20, batch_size=8, shuffle_buffer=1000) # - model = tf.keras.Sequential([ tf.keras.layers.Lambda(lambda x: tf.expand_dims(x,axis=-1),input_shape=[None]), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x : x*100) ]) optimizer = tf.keras.optimizers.SGD(lr=1e-6,momentum=0.9) model.compile(optimizer = optimizer, loss=tf.keras.losses.Huber(),metrics = ['mae']) history = model.fit(dataset,epochs =100) # + forecast = [] for time in range(len(close_price) - 20): forecast.append(model.predict(tf.expand_dims(close_price[time:time+20],axis=-1))) forecast = forecast[split_time-20:] results = np.array(forecast)[:,0,0] # - plot_series(time_valid, price_valid) plot_series(time_valid, results*3) plt.legend(["Valid","Pred"]) print(tf.keras.metrics.mean_absolute_error(price_valid,results*3).numpy())
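# A useful sanity check alongside both models is a naive last-value baseline (predict tomorrow's close as today's close); a trained model should beat its MAE. A sketch on a synthetic random-walk series standing in for the `Close Price` column:

```python
import numpy as np

# Synthetic stand-in for a daily closing-price series
rng = np.random.default_rng(0)
series = 100 + np.cumsum(rng.normal(0.0, 1.0, 1000))

split_time = 900
valid = series[split_time:]

# Naive forecast: each day's prediction is the previous day's close
naive = series[split_time - 1:-1]

mae = np.mean(np.abs(valid - naive))
print(mae)
```

# On a pure random walk the naive baseline is hard to beat, so a model MAE far above this number suggests the network has not learned anything useful.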
Course-4/exercise-3-RNN-MindaCorp.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# # Open Boundary Conditions: A Survey
#
# An introduction to lateral boundary conditions using a simple shallow water equation model.
# Closed, periodic, passive and active boundaries are considered.

# +
'''
some basic imports, numpy to do math and the rest for plotting
and import our helper module with "everything but" the boundary conditions
I include def and return lines for those functions here, you can find
the whole functions in swe_module.py
'''
import matplotlib.pyplot as plt
import numpy as np

import swe_module as sw
# %matplotlib inline
# -

# Our model system is a simple flat-bottomed box with a single layer of fluid: the shallow water equation system. We will neglect any explicit viscosity and the nonlinear advection terms, but include the Coriolis force. If the scales you study are generally too small to feel the Coriolis force, set your box size small and rotation will have a negligible impact. We will force the fluid with a Gaussian addition (rain or runoff) offset into the southwest quadrant of the domain.
#
# So the governing equations are
# \begin{equation}
# \frac{\partial u}{\partial t} - f v = - g \frac{\partial \eta}{\partial x}
# \end{equation}
# \begin{equation}
# \frac{\partial v}{\partial t} + f u = - g \frac{\partial \eta}{\partial y}
# \end{equation}
# \begin{equation}
# \frac{\partial \eta}{\partial t} + H \left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) = 0
# \end{equation}
# where $(u, v)$ is the horizontal velocity in $(x, y)$ directions, $f$ is the Coriolis parameter, $g = 9.8$ m s$^{-2}$ is the acceleration due to gravity and $H$ is the depth of the fluid.
#
# The box has width and length $length$ and depth $H$.
# Nice small system, keep npts small
npts = 32
# Standard Parameters
g = 9.8  # m/s2
# Choose your latitude
lat = 40  # degrees N
f = 2 * 7.27e-5 * np.sin(np.deg2rad(lat))
# Choose your Box Size
length = 2000e3  # m
H = 200  # m

# The shallow water equations allow shallow water surface waves. Without rotation these waves have wavespeed $c = (gH)^{1/2}$. With rotation two types of waves are possible, Poincare waves and coastally trapped Kelvin waves. Kelvin waves have the same wave speed. Poincare waves are slower but approach $c$ as wavelengths decrease.

# We will solve the system on an Arakawa C-grid with the u-velocity staggered 1/2 grid point in x from the $\eta$ point and the v-velocity staggered 1/2 grid point in y. Like this:
# v
# |
# |
# $\eta$ $-$ $-$ u

# We will use an explicit leap-frog scheme and so must choose our timestep, $dt$, small enough to meet the CFL condition. The leap-frog scheme requires two previous steps, so we need two sets of arrays. For one set of boundary conditions we require yet another previous time step, so we will use three sets of arrays.

# Derived Quantities
dx = length/npts
wavespeed = np.sqrt(g*H)
dt = 0.25*dx/wavespeed

# ## Initialization
# *Set up our arrays*
#
# ```python
# def set_arrays(npts):
#     return eta_now, u_now, v_now, eta_prev,
#            u_prev, v_prev, eta_next, u_next, v_next
# ```

# ## Internal Forcing
# Our forcing will be an initial surface height increase of Gaussian shape, offset so it's in the southwest quadrant of the box.

# *Initialize Gaussian*
# ```python
# def initialize(magnitude, gausshalfwidth, eta):
#     return eta_init
# ```
#
# *A 2-dimensional Gaussian if you want it*
# ```python
# def initialize_simple(magnitude, gausshalfwidth, eta):
#     return eta_init
# ```

# ## External Forcing ##
# The external forcing is a long-wavelength wave coming in from the west.
# ### Large scale wave ###
# $\eta = \eta_o \sin(kx - \omega t)$
#
# $k = 2\pi/(10 L)$
#
# $c = \sqrt{gH}$
#
# $\omega = c k$
#
# $\left( \frac {\partial^2}{\partial t^2} + f^2\right) u = - g \left(
# \frac {\partial^2 \eta}{\partial t \partial x} + f \frac{\partial \eta}{\partial y}\right)$
#
# $\left( \frac {\partial^2}{\partial t^2} + f^2\right) v = - g \left(
# \frac {\partial^2 \eta}{\partial t \partial y} - f \frac{\partial \eta}{\partial x}\right)$
#
#
# *Incoming Large Scale Wave*
# ```python
# def incoming_wave(t, npts, dx):
#     return eta_wave, u_wave, v_wave
# ```

# ## Time Stepping
# To start the code, we do a predictor-corrector, which requires an Euler step

# *Euler Step*
# ```python
# def euler(eta, u, v, idt, f, g, H, dx):
#     return eta_next, u_next, v_next
# ```

# *Leap-frog Step*
# ```python
# def leapfrog(eta, u, v, etap, up, vp, idt, f, g, H, dx):
#     return eta_next, u_next, v_next
# ```

# ## Boundary Conditions ##
# The equations above neglect lateral viscosity and advection. These approximations, plus the choice of the C-grid, greatly reduce the number of boundary conditions we need. We only need to set the normal velocity at each of the four edges of the box.
#
# The figure below shows the western wall of the box
#
# $u_o$ is a real boundary point. $v_o$ and $\eta_o$ are ghost points. Neither is needed in the "stencil" to find interior points, so we just don't use them or set them.
filename = 'grid_lateral.png' from IPython.display import Image Image(filename=filename) def boundary_conditions_periodic(u, v, **kwargs): '''Periodic boundary conditions wrap the solution, copying the first internal point on the eastern boundary to the boundary point on the western boundary''' u[0] = u[-2] u[-1] = u[1] v[:, 0] = v[:, -2] v[:, -1] = v[:, 1] return u, v def boundary_conditions_freeslip(u, v, **kwargs): '''These conditions impose wall boundaries with zero flow through the walls''' u[0] = 0 u[-1] = 0 v[:, 0] = 0 v[:, -1] = 0 return u, v def boundary_conditions_zerogradient(u, v, **kwargs): '''A simple, not particularly good, open boundary type where the value outside the boundary is the same as inside''' u[0] = u[1] u[-1] = u[-2] v[:, 0] = v[:, 1] v[:, -1] = v[:, -2] return u, v def boundary_conditions_cnstgradient(u, v, **kwargs): '''A simple open boundary type where the value outside the boundary is different from the value inside the boundary by the same amount as the value inside is from the next value''' u[0] = 2 * u[1] - u[2] u[-1] = 2 * u[-2] - u[-3] v[:, 0] = 2 * v[:, 1] - v[:, 2] v[:, -1] = 2 * v[:, -2] - v[:, -3] return u, v def boundary_conditions_sommerfield(u_next, v_next, u_now, v_now, u_prev, v_prev, dx, dt, wavespeed, **kwargs): '''A wave radiation boundary condition. Signals are moved out at the assumed wavespeed''' # cn is c*dt/dx or normalized wavespeed cn = -wavespeed * dt / dx u_next[0] = (1 + cn) / (1 - cn) * u_prev[0] - 2 * cn / (1 - cn) * u_now[1] u_next[-1] = (1 + cn) / (1 - cn) * u_prev[-1] - 2 * cn / (1 - cn ) * u_now[-2] v_next[:, 0] = (1 + cn) / (1 - cn) * v_prev[:, 0] - 2 * cn / ( 1 - cn) * v_now[:, 1] v_next[:, -1] = (1 + cn) / (1 - cn) * v_prev[:, -1] - 2 * cn / ( 1 - cn) * v_now[:, -2] return u_next, v_next def boundary_conditions_orlanski(u_next, v_next, u_now, v_now, u_prev, v_prev, dx, dt, **kwargs): '''The classic wave radiation boundary condition. A wave speed is estimated. 
    If that wavespeed is toward the boundary, signals are moved out
    at that wavespeed. There is a limit on the wavespeed.'''
    epsilon = 1e-12
    # cn here is c*dt/dx, the normalized wavespeed
    cn = (u_next[1] - u_prev[1]) / (
        u_next[1] + u_prev[1] - 2 * u_now[2] + epsilon)
    for j in range(u_next.shape[1]):
        if cn[j] > 0:
            cn[j] = 0
        elif cn[j] < -1:
            cn[j] = -1
    u_next[0] = (1 + cn) / (1 - cn) * u_prev[0] - 2 * cn / (1 - cn) * u_now[1]
    cn = (u_next[-2] - u_prev[-2]) / (
        u_next[-2] + u_prev[-2] - 2 * u_now[-3] + epsilon)
    for j in range(u_next.shape[1]):
        if cn[j] > 0:
            cn[j] = 0
        elif cn[j] < -1:
            cn[j] = -1
    u_next[-1] = (1 + cn) / (1 - cn) * u_prev[-1] - 2 * cn / (1 - cn) * u_now[-2]
    cn = (v_next[:, 1] - v_prev[:, 1]) / (
        v_next[:, 1] + v_prev[:, 1] - 2 * v_now[:, 2] + epsilon)
    for i in range(v_next.shape[0]):
        if cn[i] > 0:
            cn[i] = 0
        elif cn[i] < -1:
            cn[i] = -1
    v_next[:, 0] = (1 + cn) / (1 - cn) * v_prev[:, 0] - 2 * cn / (1 - cn
                                                                  ) * v_now[:, 1]
    cn = (v_next[:, -2] - v_prev[:, -2]) / (
        v_next[:, -2] + v_prev[:, -2] - 2 * v_now[:, -3] + epsilon)
    for i in range(v_next.shape[0]):
        if cn[i] > 0:
            cn[i] = 0
        elif cn[i] < -1:
            cn[i] = -1
    v_next[:, -1] = (1 + cn) / (1 - cn) * v_prev[:, -1] - 2 * cn / (
        1 - cn) * v_now[:, -2]
    return u_next, v_next

def boundary_conditions_frs_sommerfield(u_next, v_next, u_now, v_now, u_prev,
                                        v_prev, dx, dt, wavespeed, eta_next,
                                        rim, t, **kwargs):
    '''Combined boundary conditions.
    Sommerfield radiation and then relaxation to an applied external field'''
    cn = -wavespeed * dt / dx
    u_next[-1] = (1 + cn) / (1 - cn) * u_prev[-1] - 2 * cn / (1 - cn
                                                              ) * u_now[-2]
    v_next[:, 0] = (1 + cn) / (1 - cn) * v_prev[:, 0] - 2 * cn / (
        1 - cn) * v_now[:, 1]
    v_next[:, -1] = (1 + cn) / (1 - cn) * v_prev[:, -1] - 2 * cn / (
        1 - cn) * v_now[:, -2]
    weight = np.array([np.tanh((np.arange(rim)) * 0.5), ] *
                      u_next.shape[0]).transpose()
    eta_wave, u_wave, v_wave = sw.incoming_wave(t, u_next.shape[0], dx, f, g,
                                                wavespeed)
    u_next[:rim] = u_wave[:rim] * (1 - weight) + u_next[:rim] * weight
    v_next[:rim] = v_wave[:rim] * (1 - weight) + v_next[:rim] * weight
    eta_next[:rim] = eta_wave[:rim] * (1 - weight) + eta_next[:rim] * weight
    return eta_next, u_next, v_next

def boundary_conditions_flather_sommerfield(
        u_next, v_next, u_now, v_now, u_prev, v_prev, dx, dt, eta_next,
        wavespeed, rim, t, eta_now):
    '''Combined boundary conditions.
    Sommerfield radiation and Flather conditions for an applied external
    barotropic surface height field'''
    cn = -wavespeed * dt / dx
    u_next[-1] = (1 + cn) / (1 - cn) * u_prev[-1] - 2 * cn / (1 - cn
                                                              ) * u_now[-2]
    v_next[:, 0] = (1 + cn) / (1 - cn) * v_prev[:, 0] - 2 * cn / (
        1 - cn) * v_now[:, 1]
    v_next[:, -1] = (1 + cn) / (1 - cn) * v_prev[:, -1] - 2 * cn / (
        1 - cn) * v_now[:, -2]
    eta_wave, u_wave, v_wave = sw.incoming_wave(t, u_next.shape[0], dx, f, g,
                                                wavespeed)
    u_next[0] = u_wave[0] - np.sqrt(g / H) * (eta_now[1] - eta_wave[0])
    return eta_next, u_next, v_next

def boundary_conditions(kind, u_next, v_next, u_now, v_now, u_prev, v_prev,
                        dx, dt, wavespeed, eta_next, rim, t, eta_now):
    '''Generalized driver for boundary conditions'''
    bc_funcs = {
        'periodic': boundary_conditions_periodic,
        'freeslip': boundary_conditions_freeslip,
        'zerogradient': boundary_conditions_zerogradient,
        'cnstgradient': boundary_conditions_cnstgradient,
        'sommerfield': boundary_conditions_sommerfield,
        'orlanski': boundary_conditions_orlanski,
        'frs_sommerfield':
            boundary_conditions_frs_sommerfield,
        'flather_sommerfield': boundary_conditions_flather_sommerfield,
    }
    with_eta = ['frs_sommerfield', 'flather_sommerfield']
    if kind in with_eta:
        eta_next, u_next, v_next = bc_funcs[kind](u_next, v_next,
                                                  u_now=u_now, v_now=v_now,
                                                  u_prev=u_prev, v_prev=v_prev,
                                                  dx=dx, dt=dt,
                                                  wavespeed=wavespeed,
                                                  eta_next=eta_next, rim=rim,
                                                  t=t, eta_now=eta_now)
    else:
        u_next, v_next = bc_funcs[kind](u_next, v_next,
                                        u_now=u_now, v_now=v_now,
                                        u_prev=u_prev, v_prev=v_prev,
                                        dx=dx, dt=dt, wavespeed=wavespeed,
                                        eta_next=eta_next, rim=rim, t=t,
                                        eta_now=eta_now)
    return eta_next, u_next, v_next

# # Main Program
#

# +
# Choose how many time steps
ntime = 10
# Choose boundary conditions
boundary_type = 'periodic'
rim = 3

# initialize array
eta_now, u_now, v_now, eta_prev, u_prev, v_prev, eta_next, u_next, v_next = sw.set_arrays(
    npts)
# add our internal forcing
eta_prev = sw.initialize(magnitude=1.0, gausshalfwidth=3.5, eta=eta_now)

# take an Euler step from 0 to dt
eta_now, u_now, v_now = sw.euler(eta_prev, u_prev, v_prev, dt, f, g, H, dx)
# and boundary conditions
eta_now, u_now, v_now = boundary_conditions(
    boundary_type, u_now, v_now, u_now, v_now, u_prev, v_prev, dx, dt / 2,
    wavespeed, eta_now, rim, dt, eta_prev)
# Average to give us the 1/2 dt value in now
for var_now, var_prev in zip([eta_now, u_now, v_now],
                             [eta_prev, u_prev, v_prev]):
    var_now = 0.5 * (var_now + var_prev)

# Take a first leap-frog step
eta_next, u_next, v_next = sw.leapfrog(eta_now, u_now, v_now, eta_prev,
                                       u_prev, v_prev, dt, f, g, H, dx)
# Increment Time
t = dt
# Boundary Conditions
eta_next, u_next, v_next = boundary_conditions(
    boundary_type, u_next, v_next, u_now, v_now, u_prev, v_prev, dx, dt / 2,
    wavespeed, eta_next, rim, t, eta_now)

# and now we are started we can loop
for i in range(ntime):
    eta_prev, u_prev, v_prev = eta_now, u_now, v_now
    eta_now, u_now, v_now = eta_next, u_next, v_next
    eta_next, u_next, v_next = sw.leapfrog(eta_now, u_now, v_now, eta_prev,
                                           u_prev, v_prev, 2 * dt, f,
                                           g, H, dx)
    t = t + dt
    eta_next, u_next, v_next = boundary_conditions(
        boundary_type, u_next, v_next, u_now, v_now, u_prev, v_prev, dx, dt,
        wavespeed, eta_next, rim, t, eta_now)

    # every 101 time steps average and restart to remove the leap-frog
    # computational mode
    if (np.mod(i, 101) == 0):
        for var_now, var_next in zip([eta_now, u_now, v_now],
                                     [eta_next, u_next, v_next]):
            var_next = 0.5 * (var_now + var_next)
        eta_next, u_next, v_next = sw.leapfrog(
            eta_next, u_next, v_next, eta_now, u_now, v_now, dt, f, g, H, dx)
        eta_next, u_next, v_next = boundary_conditions(
            boundary_type, u_next, v_next, u_next, v_next, u_now, v_now, dx,
            dt, wavespeed, eta_next, rim, t, eta_next)
# -

# # Plotting
#

sw.make_plot(npts, dx, eta_next, u_next, v_next)

sw.make_line_plots(eta_next, u_next, v_next, 'eastwest', ii=7)

# # The Questions
#
# ## A. Simple Boundary Conditions ##
# I have included a suite of simple boundary conditions:
# * periodic
# * freeslip
# * zerogradient
# * cnstgradient
#
# For each of these boundary conditions, open a cell below. Choose markdown and answer the following questions (the first one **before** you run the code). You may wish to change how long you run for by adjusting `ntime`.
# * What do you expect to see at the boundary?
# * What do you see at the boundary when you run the code?
# * What would you use these boundary conditions for?

# ## B. Radiation Boundary Conditions ##
# I have included two radiation type boundary conditions:
# * sommerfield, which uses the known wavespeed
# * orlanski, which calculates the wavespeed
#
# Open a cell below and answer these questions:
# * What should radiation boundary conditions do?
# * Which of the two methods works better? In what way is it better?
# * Under which conditions would you use the other method?

# # C. Bringing Information In
#
# I have included two types of boundary conditions that bring external information in.
# * frs
# * flather
#
# In each case, I am bringing in a large wave from the west.
Note that you still have your internal forcing.
#
# Open a cell below and answer these questions:
# * Ideally, what should happen?
# * Do you get what you expect?

# # D. Toward the Real World
#
# The major simplification that matters here is that this system has only one wave speed.
#
# * Why does adding multiple wave speeds make open boundaries more difficult?
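# The radiation idea from question B can be isolated in one dimension. The sketch below is a toy example of my own, using a first-order upwind update rather than the leap-frog C-grid scheme above: a Gaussian pulse is advected at speed $c$, and the last grid point discretizes the same radiation condition $u_t + c\,u_x = 0$, so nearly all of the initial energy leaves through the right wall.

```python
import numpy as np

# 1-D advection u_t + c u_x = 0: a pulse travels toward the right wall
n, c = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c                      # CFL-stable time step
x = np.linspace(0.0, 1.0, n)
u = np.exp(-((x - 0.5) / 0.05) ** 2)   # Gaussian pulse
e0 = np.sum(u ** 2)                    # initial "energy"

cn = c * dt / dx
for _ in range(2000):
    un = u.copy()
    # upwind update for every point except the inflow boundary; at the
    # last point this is exactly the discrete radiation condition
    # u_t + c u_x = 0, letting the signal exit at speed c
    u[1:] = un[1:] - cn * (un[1:] - un[:-1])
    u[0] = 0.0                         # western inflow: nothing comes in

print(np.sum(u ** 2) / e0)             # residual energy fraction, near 0
```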
OpenBoundaryConditions.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Genesis 20: From Exegetical Question to TF Query

# Before we get started, make sure you run the following command in the Anaconda prompt:
#
# `pip install --upgrade text-fabric`
#
#
# ## Getting TF ready

# %load_ext autoreload
# %autoreload 2

# +
# First, I have to load different modules that I use for analyzing the data and for plotting:
import sys, os, collections
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt; plt.rcdefaults()
from matplotlib.pyplot import figure
from collections import Counter

# Second, I have to load the Text Fabric app
from tf.fabric import Fabric
from tf.app import use
# -

A = use('bhsa', hoist=globals())

# ## the *valence* of לקח: rape or wedding?

# As we saw in the previous notebook, the use of the word LQX[ in Gen 20:2 is rather ambiguous. Let's search for all those cases where LQX[ has a female (`gn=f`) proper name (`sp=nmpr`) as its direct object.

LQXvalence1='''
clause
    phrase function=Pred
        word lex=LQX[
    phrase function=Objc
        word sp=nmpr gn=f nu=sg
'''
LQXvalence1 = A.search(LQXvalence1)
A.table(LQXvalence1, start=1, end=5, condensed=False, colorMap={3:'pink', 4:'blue'})

# +
LQXvalence2='''
clause
    phrase
        word lex=LQX[
    phrase function=Objc
        word gn=f nu=sg
    phrase function=Cmpl
        word lex=>CH/
'''
LQXvalence2 = A.search(LQXvalence2)
A.table(LQXvalence2, start=1, end=5, condensed=False, colorMap={3:'pink', 4:'blue', 6:'red'})

#SHEBANQ query results are here: https://shebanq.ancient-data.org/hebrew/query?version=2017&id=493
# -

# The findings suggest that Gen 20:2 seeks to play with the reader's bias and imitates for the reader the bias of Abraham: Abimelech probably raped Sara, and he is just another Sodomite who has no moral boundaries.
Well, it turns out that this is not the case. See the next query...

# ## Finding *background* clauses in the narrative section

# Narrative chains are sometimes interrupted by the narrator in order to provide background information that is supposed to change the interpretative course of the reader and challenge the reader's bias.

# We can find these cases by searching for clauses that belong to the narrative domain (`domain=N`) while excluding clauses in the foreground tense Way. We exclude those tenses with `#` (`typ#WayX|Way0`).

NarrativeBackground = '''
verse book=Genesis chapter=20
    clause domain=N typ#WayX|Way0 rela=NA
'''
NarrativeBackground = A.search(NarrativeBackground)
A.table(NarrativeBackground, start=1, end=40, condensed=False, colorMap={2: 'blue'})

# What does this tell us?
#
# Well, with the first finding in Gen 20:4, the text tries to shake up the bias of the reader (cf. "Abimelek probably raped Sara"). No! He did not! He is a gentleman, and you might want to know that before you continue reading/watching the story!

# ## finding syntactical incongruences in Genesis 20

# Let's see whether we find cases in which the subject in a clause has a different number than the predicate.

SyntaxIncongruency1 = '''
verse book=Genesis chapter=20
    clause
        phrase function=Pred
            w1:word
        phrase function=Subj
            w2:word sp=subs
    w1 .nu#nu. w2
'''
SyntaxIncongruency1 = A.search(SyntaxIncongruency1)
A.table(SyntaxIncongruency1, start=1, end=40, condensed=False, colorMap={4:'pink', 6:'lime'})

# Well, that was not surprising. Didn't we all expect that the morphological plural of >LHJM does not say anything about the semantic (i.e. the real) number of God? God is one!
[Deut 6:4](https://shebanq.ancient-data.org/hebrew/text?book=Deuteronomium&chapter=6&verse=4&version=2017), [Deut 4:35](https://shebanq.ancient-data.org/hebrew/text?book=Deuteronomium&chapter=4&verse=35&version=2017), [Deut 4:39](https://shebanq.ancient-data.org/hebrew/text?book=Deuteronomium&chapter=4&verse=39&version=2017).
#
# But let's double-check: do we find cases where the number of >LHJM and its predicate match? You might want to execute the command `S.relationsLegend()` in order to see which operator you would have to replace `#` with in order to have matching values for the `nu` feature.

# Task 1
# check out the relations operators by running the following code:
S.relationsLegend()

# You can study how these relational operators are used by looking at the examples in this notebook: https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/bhsa/searchRelations.ipynb

# +
# Task 2
# search for clauses in Gen 20 in which we find >LHJM/ as subject
# with a predicate, where both words (subject and predicate) match in number.

# Look at your final result!! That is surprising. What does this tell us about Abraham's theology?
# -

# ## Text-grammatical rarities

# Isn't the text-syntactical construction in Gen 20:9-10 weird? Why should the speaker and addressee be introduced for a second time when the same speaker is still speaking to the same addressee?

TwoSpeachesSameSpeaker1 = '''
chapter
    c1:clause domain=N
        phrase function=Pred
            word lex=DBR[|QR>[|>MR[
        phrase function=Subj
            speakerA:word sp=subs
        phrase function=Cmpl
            addresseeA:word
    c2:clause domain=Q
    c3:clause domain=N
        phrase function=Pred
            word lex=DBR[|QR>[|>MR[
        phrase function=Subj
            speakerB:word lex*
        phrase function=Cmpl
            addresseeB:word lex*
    c1 <1: c2
    c2 <50: c3
    c1 < c3
    speakerA .lex=lex. speakerB
    addresseeA .lex=lex.
addresseeB ''' TwoSpeachesSameSpeaker1 = A.search(TwoSpeachesSameSpeaker1) A.table(TwoSpeachesSameSpeaker1, start=1, end=50, condensed=True) # + # Task 3 # change the code and find all cases of this phenomenon in Gen 16 # (we could search the entire OT but that would take to long for this time) # What observation do you make? And what does that mean for our case in Gen 20? # - # I have formulated the relevant query in SHEBANQ with the following results: # https://shebanq.ancient-data.org/hebrew/query?version=2017&id=491 # + # Task 4: Look at the results. What do you learn for our case in Gen 20:9-10 and the general text-linguistic phenomenon? # - # According to the query results one can indeed argue that the addressee in these narrations remains silent, unable to answer the question they are asked. # # Genesis 4:1 # Lets investigate the case in Gen 4:1 and build a query that informs us about how to treat the issue and how to respond to the scholarly debate! # I did some queries in SHEBANQ that should inform your own query building: # # # https://shebanq.ancient-data.org/hebrew/query?version=2017&id=946 # # https://shebanq.ancient-data.org/hebrew/query?version=2017&id=947 # # https://shebanq.ancient-data.org/hebrew/query?version=2017&id=948 # + # Task 5 # Study the SHEBANQ queries and rebuild all queries as TF queries. # - # ## General search for appositions within object phrases # Let us get acqainted with the phenomenon of appositions within object phrases in order to get a better understanding how the Hebrew language normally builds these constructions. # # # Lets have a look at the example in Gen 4:2: # # ![Annotation%202019-06-12%20181040.png](attachment:Annotation%202019-06-12%20181040.png) # # # Searching for this phenomenon is done in SHEBANQ here: https://shebanq.ancient-data.org/hebrew/query?version=2017&id=946 # # Lets do it in TF. 
# +
# https://shebanq.ancient-data.org/hebrew/query?version=2017&id=946

AppInObject1 = '''
clause
    phrase function=Objc
        phrase_atom rela=Appo
'''
AppInObject1 = A.search(AppInObject1)
A.show(AppInObject1, start=1, end=2, condensed=False, colorMap={1: 'lightgreen', 2: 'yellow', 3: 'magenta', 4: 'magenta'})
# -

# ## Searching for appositions in object phrases that are governed by the >T object marker

# After the general query we are interested in all those cases where the apposition is governed by an >T preposition (object marker).

# +
# https://shebanq.ancient-data.org/hebrew/query?version=2017&id=947

AppInObject2 = '''
clause
    phrase function=Objc
        =: phrase_atom rela=NA
        <: phrase_atom rela=Appo
            =: word lex~>T.
'''
AppInObject2 = A.search(AppInObject2)
A.show(AppInObject2, start=1, end=2, condensed=True)
# -

# We do not find any results. But, one could argue, the results are biased since the database has made a decision on the matter already. Therefore, you cannot find exceptions to the rule, as the database would automatically exclude them through its interpretation. So let's build a query that is undogmatic about the database interpretation and will thus also find our case in Gen 4:1.

AppInObject3 = '''
clause
    phrase function=Objc
    <: phrase_atom
        =: word lex~>T.
'''
AppInObject3 = A.search(AppInObject3)
A.show(AppInObject3, start=1, end=2, condensed=True)

# These 76 cases are now interesting to study, as they all catch the phenomenon we are looking for. Let's return to this query further below.

# ## Two times אֶת (>T) - one in the object phrase, one in the apposition

# Since we want to remain undogmatic for now, we continue searching for the consonants `>T` instead, so that we can catch all homographs. The database has many different word features that are available for undogmatic searches.
# Here are some of them:
#
# - `g_cons` / `g_cons_utf8`
# - `g_lex` / `g_lex_utf8`
# - `g_word` / `g_word_utf8`
# - `lex` / `lex_utf8`
# - `voc_lex` / `voc_lex_utf8`
#
# Let's see what they stand for:

WordFeatures='''
book book=Genesis
    chapter chapter=2
        verse verse=4
            word g_cons* g_cons_utf8* g_lex* g_lex_utf8* g_word* g_word_utf8* lex* lex_utf8* voc_lex* voc_lex_utf8*
'''
WordFeatures = A.search(WordFeatures)
A.show(WordFeatures, start=1, end=5, condensed=False, colorMap={3:'pink', 4:'yellow'})

# For our purpose it's enough to use the `lex` feature: with a regular expression (activated by `~`) we can find all lexemes that contain the consonants >T (both `>T` as object marker and `>T==` ("with")).

# +
# https://shebanq.ancient-data.org/hebrew/query?version=2017&id=948

AppInObject4 = '''
clause
    phrase function=Objc
        phrase_atom rela=NA
            =: word lex~>T.*
        <: phrase_atom rela=Appo
            =: word lex~>T.*
'''
AppInObject4 = A.search(AppInObject4)
A.show(AppInObject4, start=1, end=2, condensed=True, colorMap={1: 'lightgreen', 2: 'cyan', 3: 'yellow', 5: 'magenta'})
# -

# The search results confirm the default construction for appositional phrases: when the first phrase atom has the >T object marker, the second, appositional phrase will have the >T object marker as well.

# ## Searching object phrases governed *not* by >T BUT followed by an >T apposition

AppInObject5 = '''
clause
    phrase function=Objc
        phrase_atom rela=NA
            =: word lex#>T
        phrase_atom rela=Appo
            =: word lex=>T
'''
AppInObject5 = A.search(AppInObject5)
A.show(AppInObject5, start=1, end=10, condensed=True, colorMap={4:'pink', 6:'lime'})

# As the search results show, in the rare case of an >T apposition following a non->T object phrase_atom, the object phrase atom has definiteness. This is, however, not the case in Gen 4:1.

# ## >T governing specifiers - in support of Doukhan!

# +
# This shows results that could actually be used to support Doukhan's claim.
AppInObject7 = '''
clause
    p1:phrase function=Objc
        pa1:phrase_atom rela=NA
            =: word lex#>T
        <: pa2:phrase_atom
            =: word lex=>T
'''
AppInObject7 = A.search(AppInObject7)
A.table(AppInObject7, start=1, end=5, condensed=False, colorMap={4:'pink', 6:'lime'})
# -

# **The results of this query can be used to support the claim that Gen 4:1 uses >T YHWH as a specification (not apposition) to the object phrase "QJN"!**
#
# **Look at the results. What do you learn for our case in Gen 20:9-10 and the general text-linguistic phenomenon?**

# Let's return to our previous query where we searched undogmatically for ~>T governed phrase atoms that are not part of a preceding object phrase. We also want to exclude the case that the object phrase is governed by the >T object marker, in order to mimic the construction of Gen 4:1 by bypassing the potential database bias. We do this by placing the pa2 phrase outside of the object phrase.

AppInObject5 = '''
clause
    p1:phrase function=Objc
        pa1:phrase_atom rela=NA
            =: word lex#>T
    <: pa2:phrase_atom
        =: word lex~>T.*
'''
AppInObject5 = A.search(AppInObject5)
A.table(AppInObject5, start=1, end=5, condensed=False, colorMap={4:'pink', 6:'lime'})

# ## Searching for an object phrase followed by a separate >T== ("with") phrase

# Let's now double-check whether our preliminary conclusions are supported when we look at those cases that <NAME> regards as very rare and therefore awkward: an object phrase followed by a prepositional phrase governed by >T==.

AppInObject5 = '''
clause
    phrase function=Objc
    <: phrase
        =: word lex=>T==
'''
AppInObject5 = A.search(AppInObject5)
A.show(AppInObject5, start=1, end=5, condensed=False, colorMap={2:'red', 4:'lime'})

# As the search results show, it is quite a normal phenomenon to have an >T== ("with") phrase follow an object phrase. Thus Gen 4:1 cannot be regarded as special but rather as a normal construction when rendered as "I purchased Kain with YHWH".
# But let's look further into the phenomenon by searching for exactly the construction we find in Gen 4:1.

AppInObject5 = '''
clause
    phrase function=Objc
        =: word lex#>T
    < phrase
        =: word lex=>T==
'''
AppInObject5 = A.search(AppInObject5)
A.show(AppInObject5, start=1, end=10, condensed=False, colorMap={3:'pink', 5:'lime'})

# What we have assumed is now confirmed by the above query results. Gen 4:1 is a normal construction and should be rendered "I purchased Kain with YHWH".

# +
# Task 6
# Look at the GT (stands for Greek Text = Septuagint) and check how it is rendered there.
# How do the GT translators treat our case?
# -

# The GT renders the construction the following way:
#
# Αδαμ δὲ ἔγνω Ευαν τὴν γυναῖκα αὐτοῦ, καὶ συλλαβοῦσα ἔτεκεν τὸν Καιν καὶ εἶπεν Ἐκτησάμην ἄνθρωπον **διὰ τοῦ θεοῦ**.†
#
# Thus אֶת־יְהוָֽה is not rendered as an accusative (i.e. Eve gave birth to YHWH) but as a genitive construction with διὰ rendering "through the God".

# +
# Task 7
# Write up a short conclusion. What is your decision on the matter? How should one translate the phrase in question,
# and what is right/wrong about the argumentation that you find in the new SDA commentary on Genesis?
# -

# **Original version**: Let me first say that I believe that Eve had a messianic hope when she gave birth to Kain. However, I would disagree with your rendering of Gen 4:1 when you suggest “I have gotten a man, the Lord”. If you take אֶת־יְהוָֽה as an apposition to אִ֖ישׁ, you would have a formulation that disagrees with the grammatical rules for appositions. Appositions always agree in their determination with the entity they re-render. Since יְהוָה is determined, אִישׁ must be determined, too, if an appositional construction is to be created. See the following queries:
# 1. https://shebanq.ancient-data.org/hebrew/query?version=2017&id=946: This query finds all cases in which an object phrase contains an apposition. Only the apposition is highlighted.
# The results show that when the 1st sub-phrase is determined, the apposition sub-phrase is determined as well. Therefore, if the sub-phrases are determined, they are each generally preceded by the nota accusativi אֶת.
# 2. https://shebanq.ancient-data.org/hebrew/query?version=2017&id=947: This query finds all cases in which an object phrase contains an apposition which itself is part of a prepositional construction with אֵת. Only the apposition is highlighted. Of the 65 cases, 61 show that the preceding phrase atom is governed by the אֵת preposition as well.
# 3. https://shebanq.ancient-data.org/hebrew/query?version=2017&id=948: This query finds all cases in which an object phrase contains an apposition which itself is part of a prepositional construction with אֵת. The preceding phrase_atom is not allowed to be an apposition but should be governed by the אֵת preposition. Both phrase atoms are highlighted. In comparison with https://shebanq.ancient-data.org/hebrew/query?version=2017&id=947 we see that a large majority (61 of 65 cases) shows that both the 1st and 2nd sub-phrase are governed by the אֵת preposition.
#
# The same syntactical construction that we find in Gen 4:1 can be found e.g. in Isa 28:15, Jer 34:13, Prov 16:19.
#
# Finally, supporting the apposition rule of Biblical Hebrew, the LXX renders אֶת־יְהוָֽה with διὰ τοῦ θεοῦ/through the Lord.
#
# This should not be the final word. I am more than happy if there are reasons that explain why the rule of grammar is broken here and the LXX was misled by following the default Hebrew grammar rule.

# **Revised version** (based on the query 2.5 ">T governing specifiers...")
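# As an aside to the regex-based queries above: the `~` operator is assumed here to apply an (unanchored) Python regular expression to the feature value, which is why `lex~>T.*` catches both the object marker `>T` and `>T==` ("with"). A minimal plain-Python sketch of that matching logic — the lexeme list below is made up for illustration:

```python
import re

# Hypothetical sample of lexeme values (stand-ins for the `lex` feature)
lexemes = [">T", ">T==", ">JC/", "JHWH/", "QJN/", "BR>["]

# `lex~>T.*` is assumed to behave like re.search with the pattern >T.*
pattern = re.compile(">T.*")
matches = [lx for lx in lexemes if pattern.search(lx)]
print(matches)  # → ['>T', '>T==']
```

# Note that an unanchored search would also match a lexeme that merely *contains* `>T`, which is exactly why the queries above pair the regex with positional operators like `=:`.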
exegesis-course/0100_ETCBC-TF_BHS_0002_Gen20-Gen4_o.glanz_stud-edition.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
# ---

# ## Collect additional LAD data for journalism analysis.
#
# This includes:
#
# * Population data
# * Economic activity (eg unemployment)
# * Health
# * Education
# * Crime
#
# ### Preamble

# %run ../notebook_preamble.ipy

# ### Load data

# #### Population
#
# We use the NOMIS API from [here](https://www.nomisweb.co.uk/datasets/pestsyoala)

lad_pop = pd.read_csv('https://www.nomisweb.co.uk/api/v01/dataset/NM_2002_1.data.csv?geography=1820327937...1820328318&date=latestMINUS1-latest&gender=0&c_age=200,209&measures=20100')

lad_pop.head()

lad_pop.columns = [x.lower() for x in lad_pop.columns]

lad_pop.columns

# +
# We are interested in the date name, the geography name, the age name and the observed value

# +
my_vars = ['date_name','geography_name','c_age_name','obs_value','geography_code']

# +
distr = lad_pop[my_vars].loc[lad_pop['date_name']==2017].pivot(index='geography_name',columns='c_age_name',values='obs_value')

distr.columns = ['age_over_65','age_all']

distr['age_over_65_share'] = distr['age_over_65']/distr['age_all']
# -

# #### Also download longitudinal population data to measure the number of journalists per capita

# +
lad_pop_long_url = 'https://www.nomisweb.co.uk/api/v01/dataset/NM_2002_1.data.csv?geography=1820327937...1820328318&date=latestMINUS9-latest&gender=0&c_age=200&measures=20100'

pop_long = pd.read_csv(lad_pop_long_url)

pop_long.columns = [x.lower() for x in pop_long.columns]

# +
pop_long_selected = pop_long[['date_name','geography_name','geography_code','obs_value']]

pop_long_selected.to_csv(f'../../data/processed/{today_str}_lad_pop_longitudinal.csv',index_label=False)
# -

# #### Economic activity and education
#
# We obtain this from [here]()

econ = pd.read_csv('https://www.nomisweb.co.uk/api/v01/dataset/NM_17_5.data.csv?geography=1946157057...1946157436&date=latestMINUS5&variable=18,45,83,111,1487,290,344&measures=20599,21001,21002,21003')

econ.columns = [x.lower() for x in econ.columns]

econ.columns

econ['variable_name'].value_counts()

# +
econ_vars = ['date_name','geography_name','geography_code','variable_name','measures_name','obs_value']

# Focus on the variable instead of the numerator / denominator / confidence interval
econ_val = econ.loc[econ['measures_name']=='Variable'][econ_vars].reset_index(drop=True)
# -

econ_wide = econ_val.pivot_table(index='geography_name',columns='variable_name',values='obs_value')

econ_wide.columns = ['inactive_want_job_pc','inactive_pc','education_tertiary_pc', 'education_no_qual_pc',
                     'activity_rate_pc','employment_rate_pc','unemployment_rate_pc']

# #### Health
#
# We obtain this from [here](https://fingertips.phe.org.uk/profile/health-profiles/data#page/9/gid/1938132696/pat/6/par/E12000004/ati/101/are/E07000032)

health = pd.read_csv('../../data/external/indicators-DistrictUApre419.data.csv')

health.shape

health.columns = [re.sub(' ','_',x.lower()) for x in health.columns]

health.columns

health.area_type.value_counts()

health.indicator_name.value_counts()

# There is variation in the periods for which the data is available.
# We will select some variables of interest and get appropriate years for them.

vars_of_interest = [
    #'Life expectancy at birth',
    'Under 75 mortality rate: all causes',
    'Suicide rate',
    #'Inequality in life expectancy at birth',
    'Infant mortality',
    'Violent crime (including sexual violence) - violence offences per 1,000 population',
    'Average Attainment 8 score',
    'Deprivation score (IMD 2015)',
    'Statutory homelessness - Eligible homeless people not in priority need']

years_of_interest = [
    #'2015 - 17',
    '2015 - 17',
    '2015 - 17',
    #'2015 - 17',
    '2015 - 17','2016/17','2015/16','2015','2016/17']

# +
health_container = []

for n,v in enumerate(vars_of_interest):
    out = health.loc[(health['indicator_name']==v)&(health['time_period']==years_of_interest[n]) &
                     (health['area_type']=='District & UA (pre 4/19)') & ((health['sex']=='Persons'))].set_index('area_name')
    out_rel = out['value']
    out_rel.name = v
    health_container.append(out_rel)
    #health_container.append(out[['indicator_name','value']])
# -

# LADs affected by boundary changes (this cell was left unfinished in the
# original and is unused below; kept as a list so the notebook runs)
clean = ['Bournemouth', 'Christchurch', 'East Dorset', 'Forest Heath', 'North Dorset', 'Poole',
         'Purbeck', 'St Edmundsbury', 'Suffolk Coastal', 'Taunton Deane', 'Waveney',
         'West Dorset', 'West Somerset', 'Weymouth and Portland']

# +
health_df = pd.concat(health_container,axis=1)

clean_health_lad_lookup = {'Bristol':'Bristol, City of',
                           'Folkestone & Hythe':'Folkestone and Hythe',
                           'Herefordshire':'Herefordshire, County of',
                           'Kingston upon Hull':'Kingston upon Hull, City of',
                           'St. Edmundsbury':'St Edmundsbury'
                          }

health_df.index = [clean_health_lad_lookup[x] if x in clean_health_lad_lookup.keys() else x for x in health_df.index]
# -

health_df.columns = ['mortality_rate','suicide_rate','infant_mortality','violent_crime_per_1000',
                     'average_attainment','deprivation_score','statutory_homelessness']

# ### Brexit data
#
# Accessed from [here](https://www.electoralcommission.org.uk/who-we-are-and-what-we-do/elections-and-referendums/past-elections-and-referendums/eu-referendum/results-and-turnout-eu-referendum)

brex = pd.read_csv('https://www.electoralcommission.org.uk/sites/default/files/2019-07/EU-referendum-result-data.csv')

brex.columns = [re.sub(' ','_',x.lower()) for x in brex.columns]

brex['leave_share'] = brex['leave']/brex['votes_cast']

brex = brex[['area','leave_share']].set_index('area')

# ### Output

# Check potential issues with indices

from itertools import combinations,permutations

# +
def missing_indices(dict_of_dfs):
    '''
    Returns disjoint indices between the dfs. Useful when merging
    '''
    combs = list(combinations(dict_of_dfs.keys(),2))

    for c in combs:
        print(f'{c[0]} and {c[1]}')
        print('====')
        print('\n')

        print(f'Disjoint {c[0]}-{c[1]}')
        print('---')
        disj = set(dict_of_dfs[c[0]].index)-set(dict_of_dfs[c[1]].index)
        print(sorted(list(disj)))
        print('\n')

        print(f'Disjoint {c[1]}-{c[0]}')
        print('---')
        disj = set(dict_of_dfs[c[1]].index)-set(dict_of_dfs[c[0]].index)
        print(sorted(list(disj)))
        print('\n')
# -

dict_of_dfs = {'pop':distr,'econ':econ_wide,'health':health_df,'brex':brex}

missing_indices(dict_of_dfs)

# Some of the gaps here seem to be due to changes in the boundaries of LADs, e.g. check Bournemouth & Poole

# +
output = pd.concat([distr,econ_wide,health_df,brex],axis=1)

output
# -

output.to_csv(f'../../data/processed/{today_str}_secondary_data.csv')

import seaborn as sns

sns.clustermap(output.corr(),cmap='bwr')
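# The pairwise comparison inside `missing_indices` boils down to set differences of the DataFrame indices. A minimal sketch without pandas — the LAD names below are made up for illustration:

```python
from itertools import combinations

# Toy stand-ins for the DataFrame indices (names are illustrative only)
indices = {
    'pop': {'Bournemouth', 'Poole', 'Bristol, City of'},
    'health': {'Bournemouth', 'Bristol, City of', 'Christchurch'},
}

# The same pairwise set difference that missing_indices prints
for a, b in combinations(indices, 2):
    print(f'Disjoint {a}-{b}:', sorted(indices[a] - indices[b]))   # → ['Poole']
    print(f'Disjoint {b}-{a}:', sorted(indices[b] - indices[a]))   # → ['Christchurch']
```

# Names that appear in only one of the sets are exactly the rows that `pd.concat(..., axis=1)` would fill with NaN, which is why checking them before merging is useful.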
notebooks/dev/02_jmg_lad_secondary.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + id="V0Owt_7jDyDk" colab_type="code" colab={}
# install the dependencies
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
# !wget -q https://archive.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz
# !tar xf spark-2.4.4-bin-hadoop2.7.tgz
# !pip install -q findspark

# set the environment variables
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.4-bin-hadoop2.7"

# make pyspark "importable"
import findspark
findspark.init('spark-2.4.4-bin-hadoop2.7')

from pyspark.sql import SparkSession
sc = SparkSession.builder.master('local[*]').getOrCreate()
from pyspark.sql import *

# + id="rD0f20-jD1wd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="4e83bfe5-3fe7-4855-dcd5-d563c663f525"
# download over http to local files
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00000.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00001.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00002.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00003.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00004.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00005.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00006.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00007.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00008.json.gz
# !wget --quiet --show-progress https://d3l36jjwr70u5l.cloudfront.net/data-engineer-test/part-00009.json.gz

# + id="8wRLRYRxD8BW" colab_type="code" colab={}
from pyspark.sql import functions as f
from pyspark.sql import types as t

# loading the data from the json files into dataframes
df_spark_json_0 = sc.read.json("./part-00000.json.gz")
df_spark_json_1 = sc.read.json("./part-00001.json.gz")
df_spark_json_2 = sc.read.json("./part-00002.json.gz")
df_spark_json_3 = sc.read.json("./part-00003.json.gz")
df_spark_json_4 = sc.read.json("./part-00004.json.gz")
df_spark_json_5 = sc.read.json("./part-00005.json.gz")
df_spark_json_6 = sc.read.json("./part-00006.json.gz")
df_spark_json_7 = sc.read.json("./part-00007.json.gz")
df_spark_json_8 = sc.read.json("./part-00008.json.gz")
df_spark_json_9 = sc.read.json("./part-00009.json.gz")

# + id="5METsk6cEN4x" colab_type="code" colab={}
# here I convert the epoch format of the device_sent_timestamp column into a readable timestamp format
df_spark_jdf_spark_json_0 = df_spark_json_0.withColumn('device_sent_timestamp',(df_spark_json_0.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_1 = df_spark_json_1.withColumn('device_sent_timestamp',(df_spark_json_1.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_2 = df_spark_json_2.withColumn('device_sent_timestamp',(df_spark_json_2.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_3 = df_spark_json_3.withColumn('device_sent_timestamp',(df_spark_json_3.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_4 = df_spark_json_4.withColumn('device_sent_timestamp',(df_spark_json_4.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_5 = df_spark_json_5.withColumn('device_sent_timestamp',(df_spark_json_5.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_6 = df_spark_json_6.withColumn('device_sent_timestamp',(df_spark_json_6.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_7 = df_spark_json_7.withColumn('device_sent_timestamp',(df_spark_json_7.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_8 = df_spark_json_8.withColumn('device_sent_timestamp',(df_spark_json_8.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))
df_spark_jdf_spark_json_9 = df_spark_json_9.withColumn('device_sent_timestamp',(df_spark_json_9.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))

# + id="NqxnQHI2t7tn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 468} outputId="3c6babe9-4f7a-4c9b-c00a-963337b2ea30"
# here I union all the partitioned files
df = df_spark_json_0.union(df_spark_json_1).union(df_spark_json_2)\
    .union(df_spark_json_3).union(df_spark_json_4).union(df_spark_json_5)\
    .union(df_spark_json_6).union(df_spark_json_7).union(df_spark_json_8).union(df_spark_json_9)

df = df.withColumn('device_sent_timestamp',(df.device_sent_timestamp/1000).cast("timestamp").alias("timestamp"))

df.show()

# + id="lIv0bDsrtjMq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 55} outputId="f051d438-0fde-4b67-cba5-3ff6641a9462"
df_spark_jdf_spark_json_9

# + id="qHDV0jsOjsQa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 468} outputId="be41e780-3d08-4391-b36b-6394c2c3bd75"
df_spark_jdf_spark_json_9.show()

# + [markdown] id="QLZt1bsTcXWV" colab_type="text"
# By grouping user ids, I identify here whether there is any user id that repeats - across all dataframes

# + id="FuWnwbGZ0WOz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 468} outputId="188e3cfe-81ca-4b7e-dcf5-8e6921a35e68"
df_spark_jdf_spark_json_9.groupBy('anonymous_id').count().show()

# + [markdown] id="7tnngQkGdHoJ" colab_type="text"
# # **Step 1 (Etapa 1)**
# ### Noticing that there are no repetitions in the user column ('anonymous_id') in any of the partitioned files, and that there is no start and end time for the same user that would let me measure the length of a session, I assume that each row of the 'anonymous_id' column is a different user session. Therefore, a count on this column will give me the total number of unique sessions

# + id="tRiSQhVCrZCL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="7e6e81da-1cd2-4d2a-bd59-271a1297333d"
dict = {}
# count = df_spark_json_0.select(df_spark_json_0.anonymous_id).count()
dict.update({'part-00000.json.gz': df_spark_json_0.select('anonymous_id').count()})
dict.update({'part-00001.json.gz': df_spark_json_1.select('anonymous_id').count()})
dict.update({'part-00002.json.gz': df_spark_json_2.select('anonymous_id').count()})
dict.update({'part-00003.json.gz': df_spark_json_3.select('anonymous_id').count()})
dict.update({'part-00004.json.gz': df_spark_json_4.select('anonymous_id').count()})
dict.update({'part-00005.json.gz': df_spark_json_5.select('anonymous_id').count()})
dict.update({'part-00006.json.gz': df_spark_json_6.select('anonymous_id').count()})
dict.update({'part-00007.json.gz': df_spark_json_7.select('anonymous_id').count()})
dict.update({'part-00008.json.gz': df_spark_json_8.select('anonymous_id').count()})
dict.update({'part-00009.json.gz': df_spark_json_9.select('anonymous_id').count()})
dict

# + [markdown] id="m1SxI7YhVXw5" colab_type="text"
# # **Step 2 (Etapa 2)**

# + id="Ym09oruRDgz3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 468} outputId="88cba35b-9066-476d-f268-2ab44946f7cc"
# here I union all the partitioned files
df = df_spark_json_0.union(df_spark_json_1).union(df_spark_json_2)\
    .union(df_spark_json_3).union(df_spark_json_4).union(df_spark_json_5)\
    .union(df_spark_json_6).union(df_spark_json_7).union(df_spark_json_8).union(df_spark_json_9)

df.show()

# + id="9mlBO8g_MZNa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="cebd109c-06c3-4ad7-d4b1-4db315f381bd"
# check whether the row count of this new dataframe matches the summed row counts of all the partitioned files -
# each file has approximately 10 million rows - the count matches what was expected!
df.count()

# + id="6ywj0AhRVeqz" colab_type="code" colab={}
df_br = df.groupBy('browser_family').count().toPandas()

# + id="d_oaJTWunD9R" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="4869392b-5168-462d-f04a-ebc25d1a32c5"
df_br

# + id="AcyeGM-tnDvN" colab_type="code" colab={}
list_browser = df_br.browser_family.values
list_browser_values = df_br['count'].values

# + id="bM7JNIxjnnS8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 693} outputId="820a30f0-6566-457f-e5b9-76128c2d744c"
dict_browser = {}
for i in range(len(list_browser)):
    dict_browser.update({list_browser[i]: list_browser_values[i]})
dict_browser

# + id="3sCbRI65Vf6K" colab_type="code" colab={}
df_device = df.groupBy('device_family').count().toPandas()

# + id="t54Yb_5QkSED" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 415} outputId="b6f11415-8ecc-41b8-9183-151076addefc"
df_device

# + id="DIizz7RUkYjH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c88317df-671a-492a-82ec-787869b4ee46"
list_devices = df_device.device_family.values
list_devices_values = df_device['count'].values

# + id="fkUQxm0Nk0qr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="fa765ef5-3b15-492d-fd21-a25e0ed0c6f5"
dict_devices = {}
for i in range(len(list_devices)):
    dict_devices.update({list_devices[i]:list_devices_values[i]})
dict_devices

# + id="j1BxWxzTVfFK" colab_type="code" colab={}
df_os = df.groupBy('os_family').count().toPandas()

# + id="-ASL5JH7fKR4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 570} outputId="870ca172-d586-4611-f82a-801b03cbc85f"
df_os

# + id="gm-YbMJzW94b" colab_type="code" colab={}
import pandas as pd

list_os = df_os.os_family.values
list_os_values = df_os['count'].values

# + id="-xFEOpWSfjA3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="9b5b26a7-4740-4789-bcb9-3a2748dbc146"
dict_os = {}
for i in range(len(list_os)):
    dict_os.update({list_os[i]: list_os_values[i]})
dict_os

# + [markdown] id="35PKpqBXq81m" colab_type="text"
# # **Step 2 results (Resultado Etapa 2)**
# Here I condense the number of unique sessions that occurred on each browser, operating system and device within the whole dataset, in the requested format

# + id="CNJIr8gqW-RH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="20cc5acb-2691-4d68-bd32-3edb7e93fe81"
dict_etapa2 = {}
dict_etapa2.update({"browser_family": dict_browser,"os_family": dict_os, "device_family": dict_devices})
dict_etapa2

# + id="ShWNsslPW9tq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e1b8012b-3dbb-48e3-87f9-dbe78f2fb566"
dict_etapa2.keys()
# dict_etapa2.values()
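# The three label/count loops above all rebuild the same mapping by index. A minimal sketch of the equivalent `dict(zip(...))` one-liner — the browser names and counts below are made up for illustration:

```python
# Illustrative stand-ins for df_br.browser_family.values and df_br['count'].values
list_browser = ['Chrome Mobile', 'Mobile Safari', 'Firefox']
list_browser_values = [120, 45, 7]

# Same result as the index-based update loop used above
dict_browser = dict(zip(list_browser, list_browser_values))
print(dict_browser)  # → {'Chrome Mobile': 120, 'Mobile Safari': 45, 'Firefox': 7}
```

# `zip` pairs the two arrays positionally, so this works for the device and OS counts as well, as long as both arrays come from the same DataFrame.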
case_Escale_Ate_atapa2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # EXC RB simulation # # <NAME>, <NAME> # + # ~~~ # This file is part of the paper: # # "A NON-CONFORMING DUAL APPROACH FOR ADAPTIVE TRUST-REGION REDUCED BASIS # APPROXIMATION OF PDE-CONSTRAINED OPTIMIZATION" # # https://github.com/TiKeil/NCD-corrected-TR-RB-approach-for-pde-opt # # Copyright 2019-2020 all developers. All rights reserved. # License: Licensed as BSD 2-Clause License (http://opensource.org/licenses/BSD-2-Clause) # ~~~ # - # # Preparations # ## details # + import sys path = '../../' sys.path.append(path) import numpy as np from matplotlib import pyplot as plt from pymor.basic import * set_log_levels({'pymor': 'WARN'}) # + from pymor.core.logger import set_log_levels, getLogger set_log_levels({'pymor': 'ERROR', 'distributed_adaptive_discretizations': 'DEBUG', 'notebook': 'INFO'}) logger = getLogger('notebook.notebook') import matplotlib as mpl mpl.rcParams['figure.figsize'] = (12.0, 8.0) mpl.rcParams['font.size'] = 12 mpl.rcParams['savefig.dpi'] = 300 data_path = '../../../EXC_data' # domain of interest bounding_box = [[0,0],[2,1]] domain_of_interest = BitmapFunction('{}/Domain_of_interest.png'.format(data_path), range=[1,0], bounding_box=bounding_box) # - # ## problem definition # + from pdeopt.problems import EXC_problem, set_input_dict from pdeopt.discretizer import discretize_quadratic_pdeopt_stationary_cg parametric_quantities = {'walls': [1,4,9], 'windows': [], 'doors': [], 'heaters': [1,3,5,6,7,8,9]} inactive_quantities = {'removed_walls': [], 'open_windows': [], 'open_doors': [1,2,3,4,5,6,7,10], 'active_heaters': []} summed_quantities = {'walls': [[1,2,3,8],[4,5,6,7]], 'windows': [], 'doors': [], 'heaters': [[1,2],[3,4],[9,10,11,12]]} coefficient_expressions = None parameters_in_q = True input_dict = 
set_input_dict(parametric_quantities, inactive_quantities, coefficient_expressions, summed_quantities, parameters_in_q, ac=0.5, owc=[0.025,0.1], iwc= [0.025,0.1], idc=[0.005], wc=[0.0005], ht=[0,100], owc_c=0.001, iwc_c= 0.025, idc_c=0.01, wc_c=0.025, ht_c=80) parameter_scaling = False u_out = 5 problem, parameter_scales = EXC_problem(input_dict, summed_quantities, outside_temperature=u_out, #, q_inverse=0.0001 data_path = data_path,parameters_in_q=parameters_in_q, parameter_scaling=parameter_scaling, coefficient_expressions=coefficient_expressions) u_d = 18 mu_d = None print('desired_parameter: ', mu_d) sigma_d = 100 weights = {'walls': [0.5,0.25,0.05], 'doors': 1, 'heaters': [0.002,0.002,0.001,0.001,0.001,0.001,0.004], 'windows': 1, 'state': sigma_d} diameter = np.sqrt(2)/200. opt_fom, data, mu_bar = discretize_quadratic_pdeopt_stationary_cg(problem, diameter, weights, parameter_scales, domain_of_interest, desired_temperature=u_d, mu_for_u_d=mu_d, mu_for_tikhonov=mu_d, parameters_in_q=parameters_in_q, product='fixed_energy', use_corrected_gradient= True) # + print('information on the grid:') print(data['grid']) N = 48 validation_set_size = 100 tau_global_RB_J = 5e-4 tau_global_RB_DJ = 1e-1 # - # # Classical Model Order Reduction # ### BFGS-Greedy for non-corrected functional # We now construct a simple RB basis for primal, dual and all sensitivities. 
For this, we start with an empty basis # + params = [] from pdeopt.model import build_initial_basis RBbasis, dual_RBbasis = build_initial_basis(opt_fom, params, build_sensitivities=False) from pdeopt.reductor import QuadraticPdeoptStationaryCoerciveReductor from pymor.parameters.functionals import MinThetaParameterFunctional ce = MinThetaParameterFunctional(opt_fom.primal_model.operator.coefficients, mu_bar) opt_fom = opt_fom.with_(use_corrected_functional=False) opt_fom = opt_fom.with_(use_corrected_gradient=False) opt_fom = opt_fom.with_(adjoint_approach=False) pdeopt_reductor = QuadraticPdeoptStationaryCoerciveReductor(opt_fom, RBbasis, dual_RBbasis, opt_product=opt_fom.opt_product, coercivity_estimator=ce, prepare_for_gradient_estimate=True, mu_bar=mu_bar) # - # We start a greedy for the whole domain # + set_log_levels({'pymor': 'WARN'}) # <-- set this to 'INFO' if you want to have further details from pdeopt.greedy import pdeopt_adaptive_greedy result_J, result_DJ = pdeopt_adaptive_greedy(opt_fom, pdeopt_reductor, opt_fom.parameter_space, validation_mus=-100, max_extensions=N, J_atol=tau_global_RB_J, DJ_atol=tau_global_RB_DJ) opt_rom = result_DJ['rom'] tictoc = result_DJ['time'] + result_J['time'] print('Greedy took {}'.format(tictoc)) picked_mus = result_J['max_err_mus'][:-1] picked_mus.extend(result_DJ['max_err_mus']) # - print('training set sizes: ', result_J['training_set_sizes'], result_DJ['training_set_sizes']) print('Before the {}th extension of J goal, the max error was {}'.format(result_J['extensions'],result_J['max_errs'][-2])) print('Before the {}th extension of DJ goal, the max error was {}'.format(result_DJ['extensions'],result_DJ['max_errs'][-1])) params = picked_mus RBbasis, dual_RBbasis, RBPrimalSens, RBDualSens = build_initial_basis(opt_fom, params, build_sensitivities=True) # ### Model for 1a # + ce = MinThetaParameterFunctional(opt_fom.primal_model.operator.coefficients, mu_bar) opt_fom = opt_fom.with_(use_corrected_functional=False) opt_fom 
= opt_fom.with_(use_corrected_gradient=False) opt_fom = opt_fom.with_(adjoint_approach=False) pdeopt_reductor = QuadraticPdeoptStationaryCoerciveReductor(opt_fom, RBbasis.copy(), dual_RBbasis.copy(), opt_product=opt_fom.opt_product, coercivity_estimator=ce, prepare_for_gradient_estimate=True, mu_bar=mu_bar) opt_rom_1a = pdeopt_reductor.reduce() # - # ### Model for 2a # + opt_fom = opt_fom.with_(use_corrected_functional=True) opt_fom = opt_fom.with_(use_corrected_gradient=False) opt_fom = opt_fom.with_(adjoint_approach=False) pdeopt_reductor_2a = QuadraticPdeoptStationaryCoerciveReductor(opt_fom, RBbasis.copy(), dual_RBbasis.copy(), opt_product=opt_fom.opt_product, coercivity_estimator=ce, mu_bar=mu_bar, prepare_for_gradient_estimate=True, true_lagrange=True) opt_rom_2a = pdeopt_reductor_2a.reduce() # - # ### Model for 3a # + opt_fom = opt_fom.with_(use_corrected_functional=True) opt_fom = opt_fom.with_(use_corrected_gradient=False) opt_fom = opt_fom.with_(adjoint_approach=True) pdeopt_reductor_3a = QuadraticPdeoptStationaryCoerciveReductor(opt_fom, RBbasis.copy(), dual_RBbasis.copy(), opt_product=opt_fom.opt_product, coercivity_estimator=ce, mu_bar=mu_bar, adjoint_estimate=True, prepare_for_gradient_estimate=True, true_lagrange=True) opt_rom_3a = pdeopt_reductor_3a.reduce() # - # ### Model for 4a # + opt_fom = opt_fom.with_(use_corrected_functional=True) opt_fom = opt_fom.with_(use_corrected_gradient=False) opt_fom = opt_fom.with_(adjoint_approach=True) pdeopt_reductor_4a = QuadraticPdeoptStationaryCoerciveReductor(opt_fom, RBbasis.copy(), dual_RBbasis.copy(), opt_product=opt_fom.opt_product, coercivity_estimator=ce, mu_bar=mu_bar, prepare_for_gradient_estimate=True, prepare_for_sensitivity_estimate=True, true_lagrange=True) opt_rom_4a = pdeopt_reductor_4a.reduce() # - # ### Model for 5a # + opt_fom = opt_fom.with_(use_corrected_functional=True) opt_fom = opt_fom.with_(use_corrected_gradient=True) opt_fom = opt_fom.with_(adjoint_approach=False) pdeopt_reductor_5a = 
QuadraticPdeoptStationaryCoerciveReductor(opt_fom, RBbasis.copy(), dual_RBbasis.copy(), RBPrimalSens.copy(), RBDualSens.copy(), opt_product=opt_fom.opt_product, coercivity_estimator=ce, mu_bar=mu_bar, prepare_for_gradient_estimate=True, prepare_for_sensitivity_estimate=True, true_lagrange=True) opt_rom_5a = pdeopt_reductor_5a.reduce() # - # ## compute true errors and estimators validation_set = opt_fom.parameter_space.sample_randomly(validation_set_size, seed=0) # + code_folding=[] from pdeopt.tools import compute_all_errors_and_estimators_for_all_ROMS J_errors_1a, DJ_errors_1a, rel_J_errors_1a, rel_DJ_errors_1a, J_estimators_1a, DJ_estimators_1a, effectivities_J_1a, effectivities_DJ_1a, \ J_errors_2a, DJ_errors_2a, rel_J_errors_2a, rel_DJ_errors_2a, J_estimators_2a, DJ_estimators_2a, effectivities_J_2a, effectivities_DJ_2a, \ J_errors_3a, DJ_errors_3a, rel_J_errors_3a, rel_DJ_errors_3a, J_estimators_3a, DJ_estimators_3a, effectivities_J_3a, effectivities_DJ_3a, \ J_errors_4a, DJ_errors_4a, rel_J_errors_4a, rel_DJ_errors_4a, J_estimators_4a, DJ_estimators_4a, effectivities_J_4a, effectivities_DJ_4a, \ J_errors_5a, DJ_errors_5a, rel_J_errors_5a, rel_DJ_errors_5a, J_estimators_5a, DJ_estimators_5a, effectivities_J_5a, effectivities_DJ_5a, \ J, DJ, \ u_mu_errors_4a, rel_u_mu_errors_4a, u_mu_estimators_4a, effectivities_u_mu_4a, \ u_mu_errors_5a, rel_u_mu_errors_5a, u_mu_estimators_5a, effectivities_u_mu_5a, \ p_mu_errors_4a, rel_p_mu_errors_4a, p_mu_estimators_4a, effectivities_p_mu_4a, \ p_mu_errors_5a, rel_p_mu_errors_5a, p_mu_estimators_5a, effectivities_p_mu_5a \ = compute_all_errors_and_estimators_for_all_ROMS( validation_set, opt_fom, opt_rom_1a, opt_rom_2a, opt_rom_3a, opt_rom_4a, opt_rom_5a, pdeopt_reductor_4a, pdeopt_reductor_5a) # + #J max_J_error_1a = max(J_errors_1a) min_J_error_1a = min(J_errors_1a) max_J_error_2a = max(J_errors_2a) min_J_error_2a = min(J_errors_2a) max_J_error_3a = max(J_errors_3a) min_J_error_3a = min(J_errors_3a) max_J_error_4a = 
max(J_errors_4a) min_J_error_4a = min(J_errors_4a) max_J_error_5a = max(J_errors_5a) min_J_error_5a = min(J_errors_5a) #DJ max_DJ_error_1a = max(DJ_errors_1a) min_DJ_error_1a = min(DJ_errors_1a) max_DJ_error_2a = max(DJ_errors_2a) min_DJ_error_2a = min(DJ_errors_2a) max_DJ_error_3a = max(DJ_errors_3a) min_DJ_error_3a = min(DJ_errors_3a) max_DJ_error_4a = max(DJ_errors_4a) min_DJ_error_4a = min(DJ_errors_4a) max_DJ_error_5a = max(DJ_errors_5a) min_DJ_error_5a = min(DJ_errors_5a) #J estimator max_J_estimators_1a = max(J_estimators_1a) min_J_estimators_1a = min(J_estimators_1a) max_J_estimators_2a = max(J_estimators_2a) min_J_estimators_2a = min(J_estimators_2a) max_J_estimators_3a = max(J_estimators_3a) min_J_estimators_3a = min(J_estimators_3a) max_J_estimators_4a = max(J_estimators_4a) min_J_estimators_4a = min(J_estimators_4a) max_J_estimators_5a = max(J_estimators_5a) min_J_estimators_5a = min(J_estimators_5a) #DJ estimator max_DJ_estimators_1a = max(DJ_estimators_1a) min_DJ_estimators_1a = min(DJ_estimators_1a) max_DJ_estimators_2a = max(DJ_estimators_2a) min_DJ_estimators_2a = min(DJ_estimators_2a) max_DJ_estimators_3a = max(DJ_estimators_3a) min_DJ_estimators_3a = min(DJ_estimators_3a) max_DJ_estimators_4a = max(DJ_estimators_4a) min_DJ_estimators_4a = min(DJ_estimators_4a) max_DJ_estimators_5a = max(DJ_estimators_5a) min_DJ_estimators_5a = min(DJ_estimators_5a) median_effectivities_J_1a = np.sum(effectivities_J_1a)/len(effectivities_J_1a) median_effectivities_J_2a = np.sum(effectivities_J_2a)/len(effectivities_J_1a) median_effectivities_J_3a = np.sum(effectivities_J_3a)/len(effectivities_J_1a) median_effectivities_J_4a = np.sum(effectivities_J_4a)/len(effectivities_J_1a) median_effectivities_J_5a = np.sum(effectivities_J_5a)/len(effectivities_J_1a) median_effectivities_DJ_1a = np.sum(effectivities_DJ_1a)/len(effectivities_J_1a) median_effectivities_DJ_2a = np.sum(effectivities_DJ_2a)/len(effectivities_J_1a) median_effectivities_DJ_3a = 
np.sum(effectivities_DJ_3a)/len(effectivities_J_1a) median_effectivities_DJ_4a = np.sum(effectivities_DJ_4a)/len(effectivities_J_1a) median_effectivities_DJ_5a = np.sum(effectivities_DJ_5a)/len(effectivities_J_1a) median_errors_J_1a = np.sum(J_errors_1a)/len(effectivities_J_1a) median_errors_J_2a = np.sum(J_errors_2a)/len(effectivities_J_1a) median_errors_J_3a = np.sum(J_errors_3a)/len(effectivities_J_1a) median_errors_J_4a = np.sum(J_errors_4a)/len(effectivities_J_1a) median_errors_J_5a = np.sum(J_errors_5a)/len(effectivities_J_1a) median_estimators_J_1a = np.sum(J_estimators_1a)/len(effectivities_J_1a) median_estimators_J_2a = np.sum(J_estimators_2a)/len(effectivities_J_1a) median_estimators_J_3a = np.sum(J_estimators_3a)/len(effectivities_J_1a) median_estimators_J_4a = np.sum(J_estimators_4a)/len(effectivities_J_1a) median_estimators_J_5a = np.sum(J_estimators_5a)/len(effectivities_J_1a) median_errors_DJ_1a = np.sum(DJ_errors_1a)/len(effectivities_J_1a) median_errors_DJ_2a = np.sum(DJ_errors_2a)/len(effectivities_J_1a) median_errors_DJ_3a = np.sum(DJ_errors_3a)/len(effectivities_J_1a) median_errors_DJ_4a = np.sum(DJ_errors_4a)/len(effectivities_J_1a) median_errors_DJ_5a = np.sum(DJ_errors_5a)/len(effectivities_J_1a) median_estimators_DJ_1a = np.sum(DJ_estimators_1a)/len(effectivities_J_1a) median_estimators_DJ_2a = np.sum(DJ_estimators_2a)/len(effectivities_J_1a) median_estimators_DJ_3a = np.sum(DJ_estimators_3a)/len(effectivities_J_1a) median_estimators_DJ_4a = np.sum(DJ_estimators_4a)/len(effectivities_J_1a) median_estimators_DJ_5a = np.sum(DJ_estimators_5a)/len(effectivities_J_1a) # - # ## print tables # + from tabulate import tabulate # tabulate for the output functional print('J estimator comparison') print() headers = ['Method', 'max J error', 'min J error', 'max J estimate', 'min J estimate', 'average effectivity'] table = [ ['1a not-corr ', max_J_error_1a, min_J_error_1a, max_J_estimators_1a, min_J_estimators_1a, median_effectivities_J_1a], ['2a 
semi-corr', max_J_error_2a, min_J_error_2a, max_J_estimators_2a, min_J_estimators_2a, median_effectivities_J_2a], ['3a AA A-Est ', max_J_error_3a, min_J_error_3a, max_J_estimators_3a, min_J_estimators_3a, median_effectivities_J_3a], ['4a AA S-Est ', max_J_error_4a, min_J_error_4a, max_J_estimators_4a, min_J_estimators_4a, median_effectivities_J_4a], ['5a Corr SA ', max_J_error_5a, min_J_error_5a, max_J_estimators_5a, min_J_estimators_5a, median_effectivities_J_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f')) # + print() # tabulate for the gradient of the output functional print('DJ estimator comparison') print() headers = ['Method', 'max DJ error', 'min DJ error', 'max DJ estimate', 'min DJ estimate', 'average effectivity'] table = [ ['1a not-corr ', max_DJ_error_1a, min_DJ_error_1a, max_DJ_estimators_1a, min_DJ_estimators_1a, median_effectivities_DJ_1a], ['2a semi-corr', max_DJ_error_2a, min_DJ_error_2a, max_DJ_estimators_2a, min_DJ_estimators_2a, median_effectivities_DJ_2a], ['3a AA A-Est ', max_DJ_error_3a, min_DJ_error_3a, max_DJ_estimators_3a, min_DJ_estimators_3a, median_effectivities_DJ_3a], ['4a AA S-Est ', max_DJ_error_4a, min_DJ_error_4a, max_DJ_estimators_4a, min_DJ_estimators_4a, median_effectivities_DJ_4a], ['5a Corr SA ', max_DJ_error_5a, min_DJ_error_5a, max_DJ_estimators_5a, min_DJ_estimators_5a, median_effectivities_DJ_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f')) # + from tabulate import tabulate # tabulate for the output functional print('J estimator comparison') print() headers = ['Method', 'median J error', 'median J estimate', 'average effectivity'] table = [ ['1a not-corr ', median_errors_J_1a, median_estimators_J_1a, median_effectivities_J_1a], ['2a semi-corr', median_errors_J_2a, median_estimators_J_2a, median_effectivities_J_2a], ['3a AA A-Est ', median_errors_J_3a, median_estimators_J_3a, median_effectivities_J_3a], ['4a AA S-Est ', median_errors_J_4a, 
median_estimators_J_4a, median_effectivities_J_4a], ['5a Corr SA ', median_errors_J_5a, median_estimators_J_5a, median_effectivities_J_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f')) # + print() # tabulate for the gradient of the output functional print('DJ estimator comparison') print() headers = ['Method', 'median DJ error', 'median DJ estimate', 'average effectivity'] table = [ ['1a not-corr ', median_errors_DJ_1a, median_estimators_DJ_1a, median_effectivities_DJ_1a], ['2a semi-corr', median_errors_DJ_2a, median_estimators_DJ_2a, median_effectivities_DJ_2a], ['3a AA A-Est ', median_errors_DJ_3a, median_estimators_DJ_3a, median_effectivities_DJ_3a], ['4a AA S-Est ', median_errors_DJ_4a, median_estimators_DJ_4a, median_effectivities_DJ_4a], ['5a Corr SA ', median_errors_DJ_5a, median_estimators_DJ_5a, median_effectivities_DJ_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f')) # - # ## primal sensitivities # + max_u_error_4a = max(u_mu_errors_4a) min_u_error_4a = min(u_mu_errors_4a) max_u_error_5a = max(u_mu_errors_5a) min_u_error_5a = min(u_mu_errors_5a) max_u_estimators_4a = max(u_mu_estimators_4a) min_u_estimators_4a = min(u_mu_estimators_4a) max_u_estimators_5a = max(u_mu_estimators_5a) min_u_estimators_5a = min(u_mu_estimators_5a) median_effectivities_u_4a = np.sum(effectivities_u_mu_4a)/len(effectivities_J_1a) median_effectivities_u_5a = np.sum(effectivities_u_mu_5a)/len(effectivities_J_1a) median_errors_u_4a = np.sum(u_mu_errors_4a)/len(effectivities_J_1a) median_errors_u_5a = np.sum(u_mu_errors_5a)/len(effectivities_J_1a) median_estimators_u_4a = np.sum(u_mu_estimators_4a)/len(effectivities_J_1a) median_estimators_u_5a = np.sum(u_mu_estimators_5a)/len(effectivities_J_1a) # + # tabulate for the output functional print('u estimator comparison') print() headers = ['Method', 'max error', 'min error', 'max estimator', 'min estimator'] table = [ ['4a AA S-Est ', max_u_error_4a, min_u_error_4a, 
max_u_estimators_4a, min_u_estimators_4a], ['5a Corr SA ', max_u_error_5a, min_u_error_5a, max_u_estimators_5a, min_u_estimators_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f')) print() # + headers = ['Method', 'average error', 'average estimate', 'average effectivity'] table = [ ['4a AA S-Est ', median_errors_u_4a, median_estimators_u_4a, median_effectivities_u_4a], ['5a Corr SA ', median_errors_u_5a, median_estimators_u_5a, median_effectivities_u_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f')) print() # - # ## dual sensitivities # + max_p_error_4a = max(p_mu_errors_4a) min_p_error_4a = min(p_mu_errors_4a) max_p_error_5a = max(p_mu_errors_5a) min_p_error_5a = min(p_mu_errors_5a) #J estimator max_p_estimators_4a = max(p_mu_estimators_4a) min_p_estimators_4a = min(p_mu_estimators_4a) max_p_estimators_5a = max(p_mu_estimators_5a) min_p_estimators_5a = min(p_mu_estimators_5a) median_effectivities_p_4a = np.sum(effectivities_p_mu_4a)/len(effectivities_J_1a) median_effectivities_p_5a = np.sum(effectivities_p_mu_5a)/len(effectivities_J_1a) median_errors_p_4a = np.sum(p_mu_errors_4a)/len(effectivities_J_1a) median_errors_p_5a = np.sum(p_mu_errors_5a)/len(effectivities_J_1a) median_estimators_p_4a = np.sum(p_mu_estimators_4a)/len(effectivities_J_1a) median_estimators_p_5a = np.sum(p_mu_estimators_5a)/len(effectivities_J_1a) # + # tabulate for the output functional print('p estimator comparison') print() headers = ['Method', 'max error', 'min error', 'max estimator', 'min estimator'] table = [ ['4a AA S-Est ', max_p_error_4a, min_p_error_4a, max_p_estimators_4a, min_p_estimators_4a], ['5a Corr SA ', max_p_error_5a, min_p_error_5a, max_p_estimators_5a, min_p_estimators_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f')) print() # + headers = ['Method', 'average error', 'average estimate', 'average effectivity'] table = [ ['4a AA S-Est ', median_errors_p_4a, median_estimators_p_4a, 
median_effectivities_p_4a], ['5a Corr SA ', median_errors_p_5a, median_estimators_p_5a, median_effectivities_p_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f')) print() # - # ## Effectivities # + max_eff_J_1a = max(effectivities_J_1a) max_eff_J_2a = max(effectivities_J_2a) max_eff_J_3a = max(effectivities_J_3a) max_eff_J_4a = max(effectivities_J_4a) max_eff_J_5a = max(effectivities_J_5a) max_eff_DJ_1a = max(effectivities_DJ_1a) max_eff_DJ_2a = max(effectivities_DJ_2a) max_eff_DJ_3a = max(effectivities_DJ_3a) max_eff_DJ_4a = max(effectivities_DJ_4a) max_eff_DJ_5a = max(effectivities_DJ_5a) max_eff_u_4a = max(effectivities_u_mu_4a) max_eff_u_5a = max(effectivities_u_mu_5a) max_eff_p_4a = max(effectivities_p_mu_4a) max_eff_p_5a = max(effectivities_p_mu_5a) # + print('max effectivities') headers = ['Method', 'max eff for J', 'max eff for DJ', 'max eff for u_mu', 'max eff for p_mu'] table = [ ['1a not-corr ', max_eff_J_1a, max_eff_DJ_1a, '-', '-'], ['2a semi-corr', max_eff_J_2a, max_eff_DJ_2a, '-', '-'], ['3a AA A-Est ', max_eff_J_3a, max_eff_DJ_3a, '-', '-'], ['4a AA S-Est ', max_eff_J_4a, max_eff_DJ_4a, max_eff_u_4a, max_eff_p_4a], ['5a Corr SA ', max_eff_J_5a, max_eff_DJ_5a, max_eff_u_5a, max_eff_p_5a]] print(tabulate(table, headers=headers, tablefmt='github', floatfmt='.7f'))
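# -

# An aside on the bookkeeping above: every table reduces lists of errors and estimators to max/min values and an average effectivity, where the effectivity of an estimator at a parameter is estimator/error (a reliable estimator stays at or just above 1). A minimal generic sketch of that pattern — the helper name `summarize` and the sample numbers are hypothetical, not part of `pdeopt`:

```python
# Summarize an error/estimator study: an estimator is reliable when the
# effectivity eta = estimator / error stays close to (and above) 1.
def summarize(errors, estimators):
    effectivities = [est / err for err, est in zip(errors, estimators)]
    n = len(effectivities)
    return {
        'max_error': max(errors), 'min_error': min(errors),
        'max_estimate': max(estimators), 'min_estimate': min(estimators),
        'avg_effectivity': sum(effectivities) / n,
        'max_effectivity': max(effectivities),
    }

# Hypothetical sample data for two validation parameters
stats = summarize(errors=[1.0, 2.0], estimators=[3.0, 4.0])
print(stats['avg_effectivity'])  # 2.5
print(stats['max_effectivity'])  # 3.0
```

The notebook's `median_*` variables compute exactly this arithmetic mean, which is why the printed table headers call it "average effectivity".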
notebooks/Paper1_simulations/Model_Problem_2_Estimator_study/EXC10-estimator_study_with_basis_size_48.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# <a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/17_add_colorbar_to_gif.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
#
# Uncomment the following line to install [geemap](https://geemap.org) if needed.

# +
# # !pip install geemap
# -

import geemap
import os

# ## Update the geemap package
#
# If you run into errors with this notebook, please uncomment the line below to update the [geemap](https://github.com/giswqs/geemap#installation) package to the latest version from GitHub.
# Restart the Kernel (Menu -> Kernel -> Restart) to take effect.

# +
# geemap.update_package()
# -

# ### Download a GIF

from geemap import *

url = 'https://i.imgur.com/MSde1om.gif'
out_dir = os.path.join(os.path.expanduser('~'), 'Downloads')
if not os.path.exists(out_dir):
    os.makedirs(out_dir)
download_from_url(url, out_file_name='temp.gif', out_dir=out_dir)

in_gif = os.path.join(out_dir, 'temp.gif')
show_image(in_gif)

# ### Get image URLs

noaa_logo = 'https://bit.ly/3ahJoMq'
ee_logo = 'https://i.imgur.com/Qbvacvm.png'

# ### Set output GIF path

out_gif = os.path.join(out_dir, 'output.gif')

# ### Add images to GIF

add_image_to_gif(in_gif, out_gif, in_image=noaa_logo, xy = ('2%', '80%'), image_size=(80, 80))
add_image_to_gif(out_gif, out_gif, in_image=ee_logo, xy = ('13%', '79%'), image_size=(85, 85))

# ### Display output GIF

show_image(out_gif)

# ### Create a colorbar

width = 250
height = 30
palette = ['blue', 'purple', 'cyan', 'green', 'yellow', 'red']
labels = [-40, 35]
colorbar = create_colorbar(width=width, height=height, palette=palette,
                           vertical=False, add_labels=True, font_size=20,
                           labels=labels)
show_image(colorbar)

# ### Add colorbar to GIF

add_image_to_gif(out_gif, out_gif, in_image=colorbar, xy = ('69%', '89%'), image_size=(250, 250))
show_image(out_gif)
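# For intuition, what `add_image_to_gif` does per frame is essentially an alpha-masked paste. A minimal Pillow sketch of that idea — an assumption-laden toy, not geemap's actual implementation (which also handles percentage coordinates, resizing, and GIF re-encoding); the helper name `overlay_on_frames` is made up:

```python
from PIL import Image

def overlay_on_frames(frames, overlay, xy):
    """Paste `overlay` at pixel position `xy` on every frame."""
    out = []
    for frame in frames:
        frame = frame.convert('RGBA')
        frame.paste(overlay, xy, overlay)  # overlay's alpha channel is the mask
        out.append(frame)
    return out

# Two dummy 100x60 black frames and a 20x20 opaque red "logo"
frames = [Image.new('RGBA', (100, 60), (0, 0, 0, 255)) for _ in range(2)]
logo = Image.new('RGBA', (20, 20), (255, 0, 0, 255))
result = overlay_on_frames(frames, logo, xy=(5, 5))
print(result[0].getpixel((10, 10)))  # (255, 0, 0, 255): inside the pasted logo
print(result[0].getpixel((90, 50)))  # (0, 0, 0, 255): untouched background
# result[0].save('out.gif', save_all=True, append_images=result[1:])
```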
examples/notebooks/17_add_colorbar_to_gif.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
# ---

# # Designing Custom Layers
#
# Part of the appeal of neural networks is the sheer variety of layers — fully connected, convolutional, recurrent, activation — and the many ways of wiring them together. We have already seen how to build networks (`nn.Block`) out of the layers Gluon provides. But although Gluon ships a large collection of [layer definitions](https://mxnet.incubator.apache.org/versions/master/api/python/gluon/gluon.html#neural-network-layers), we still run into cases where the existing layers are not enough.
#
# A natural thought at this point: didn't we already learn how to implement all kinds of models using nothing but the basic numerical package `NDArray`? Its large set of [low-level compute functions](https://mxnet.incubator.apache.org/versions/master/api/python/ndarray/ndarray.html) is enough to implement, if not 100%, then certainly 95% of neural networks.
#
# But writing everything from scratch every time gets old fast. In practice, even in pure research, genuinely new building blocks are rare; most of the time we make incremental improvements to existing models. So most of a model can usually be reused, and only a small part needs a custom implementation.
#
# In this tutorial we show how to implement a `Gluon` layer using the low-level `NDArray` interface, so that it can be reused later on.
#
# ## Defining a simple layer
#
# Let's first see how to define a simple layer that maintains no model parameters. This is really no different from what we covered earlier about using `nn.Block`. The code below defines a layer that subtracts the mean from its input.

# + attributes={"classes": [], "id": "", "n": "1"}
from mxnet import nd
from mxnet.gluon import nn

class CenteredLayer(nn.Block):
    def __init__(self, **kwargs):
        super(CenteredLayer, self).__init__(**kwargs)

    def forward(self, x):
        return x - x.mean()
# -

# We can instantiate this layer and use it right away.

# + attributes={"classes": [], "id": "", "n": "2"}
layer = CenteredLayer()
layer(nd.array([1,2,3,4,5]))
# -

# We can also use it to build more complex networks:

# + attributes={"classes": [], "id": "", "n": "3"}
net = nn.Sequential()
with net.name_scope():
    net.add(nn.Dense(128))
    net.add(nn.Dense(10))
    net.add(CenteredLayer())
# -

# Let's confirm that the output mean really is 0:

# + attributes={"classes": [], "id": "", "n": "4"}
net.initialize()
y = net(nd.random.uniform(shape=(4, 8)))
y.mean()
# -

# In most cases you will not see a literal 0 but a very small number, e.g. `5.82076609e-11`. This is because MXNet uses 32-bit floats by default, which introduces some floating-point rounding error.
#
# ## Custom layers with model parameters
#
# `CenteredLayer` shows roughly what implementing a custom layer looks like, but it is missing an important piece: it has no learnable model parameters.
#
# Recall that we previously accessed the weights of `Dense` via `dense.weight.data()`, where `weight` has type `Parameter`. We can construct such a parameter explicitly.

# + attributes={"classes": [], "id": "", "n": "5"}
from mxnet import gluon
my_param = gluon.Parameter("exciting_parameter_yay", shape=(3,3))
# -

# Here we create a $3\times3$ parameter named "exciting_parameter_yay", then initialize it with the default method and print the result.

# + attributes={"classes": [], "id": "", "n": "6"}
my_param.initialize()
(my_param.data(), my_param.grad())
# -

# Usually, when defining a custom layer, we do not create `Parameter` objects directly but go through `params`, a member variable of `Block` of type `ParameterDict` — as the name suggests, a dictionary mapping string names to `Parameter`s.

# + attributes={"classes": [], "id": "", "n": "7"}
pd = gluon.ParameterDict(prefix="block1_")
pd.get("exciting_parameter_yay", shape=(3,3))
pd
# -

# Now let's implement a layer with the same functionality as `Dense`. The main conceptual difference from the earlier `CenteredLayer` is that we create the parameters through `params` in the constructor:

# + attributes={"classes": [], "id": "", "n": "19"}
class MyDense(nn.Block):
    def __init__(self, units, in_units, **kwargs):
        super(MyDense, self).__init__(**kwargs)
        with self.name_scope():
            self.weight = self.params.get(
                'weight', shape=(in_units, units))
            self.bias = self.params.get('bias', shape=(units,))

    def forward(self, x):
        linear = nd.dot(x, self.weight.data()) + self.bias.data()
        return nd.relu(linear)
# -

# Let's instantiate an object and look at its parameters. Here we deliberately pass a `prefix`, a built-in argument of the `nn.Block` constructor.

dense = MyDense(5, in_units=10, prefix='o_my_dense_')
dense.params

# Using it is no different from before:

# + attributes={"classes": [], "id": "", "n": "20"}
dense.initialize()
dense(nd.random.uniform(shape=(2,10)))
# -

# And a layer we built ourselves composes with Gluon's own layers just as well:

# + attributes={"classes": [], "id": "", "n": "19"}
net = nn.Sequential()
with net.name_scope():
    net.add(MyDense(32, in_units=64))
    net.add(MyDense(2, in_units=32))
net.initialize()
net(nd.random.uniform(shape=(2,64)))
# -

# The careful reader will have noticed that we had to specify the input size here, while Gluon's built-in `Dense` does not require it. We already saw in an earlier section how to use this deferred initialization; implementing it in a layer of our own will have to wait until after hybridize has been introduced.
#
# ## Summary
#
# We now know how to wrap the layers we hand-wrote earlier as Blocks that Gluon can use — next time we need them, off we go!
#
# ## Exercises
#
# 1. How would you change the default initialization of the parameters inside a custom layer?
# 1. (Harder) Type `nn.Dense??` in a code cell and look at how it is implemented. Why can it support deferred initialization?
#
# For comments and discussion, head over [here](https://discuss.gluon.ai/t/topic/1256).
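# The float32 residue mentioned above (numbers on the order of `1e-11` instead of an exact 0) is easy to reproduce without MXNet — the same thing happens in any 32-bit floating-point computation. A small NumPy sketch, with NumPy standing in for `NDArray`:

```python
import numpy as np

# Center a float32 array: the mean of the result is tiny but, because of
# 32-bit rounding, usually not exactly 0 -- the same effect the
# CenteredLayer example can show in MXNet.
x = np.random.uniform(size=(4, 8)).astype(np.float32)
centered = x - x.mean()
residual = abs(float(centered.mean()))
print(residual)  # something tiny, typically well below 1e-6
assert residual < 1e-5  # negligible relative to the data scale
```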
chapter_gluon-basics/custom-layer.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Create data for pooled analysis # # ### Content # # + [1. Notebook description](#1.-Notebook-Description) # + [2. Pooled Data](#2.-pooled-data) # + [3. Individual Subjects](#3.-individual-subjects) # # --- # # 1. Notebook Description # # To run the pooled analysis we have to transform the data for each subject according to some model and then combine it. # We load a model config file, transform the data according to the config object and concatenate all samples together in a single dataframe. We can do this, because the row index is unique over all sessions, trials, presentations *and* subjects. # # # --- # # **Imports:** # + from digits.data import matimport, select, utils from digits.utils import getoutname, dotdict from digits.transform.dimreduction import SubsampleTransform, FFTransform import yaml from os import path # - # Specify the subject IDs and which models (transformation schemes to use): # + subjects = [3130, 3131, 3132, 3134, 3135, 3136, 3138, 3146, 3147, 3149, 3154, 3156, 3157, 3158, 3159, 3161, 3162, 3233, 3237, 3239, 3240, 3241, 3242, 3243, 3245, 3248, 3250, 3251, 3252, 3253, 3255, 3260] configs = ['short_lda_1.yaml', 'short_lda_4.yaml', 'short_nofft20.yaml'] dataroot = '../../../data/thomas/artcorr/' # - # # 2. 
pooled data for config_file in configs: with open('../../../jobs/configs/'+config_file, 'r') as f: config = dotdict(yaml.load(f)['config']) for subject in [str(x) for x in subjects]: outfile = config_file+'.h5' if path.exists(path.join(dataroot, 'transformed', outfile)): print('skipping: '+outfile) continue imp = matimport.Importer(dataroot=dataroot) imp.open(subject+'.h5') samples = imp.store.samples targets = imp.store.targets samples = select.fromtimerange(samples, config.t0, config.t1) samples, targets = select.fromsessionblacklist(samples, targets, ['01']) samples = select.fromchannelblacklist(samples, ['LHEOG', 'RHEOG', 'IOL']) samples = SubsampleTransform(width=config.subsample_width, verbose=True).transform(samples) if config.fft: samples = FFTransform(verbose=True, bins=config.size, fmin=config.fmin, fmax=config.fmax, power=config.power, rate=config.subsample_width/1000.).transform(samples) if 'samples_all' in locals(): samples_all = samples_all.append(samples, verify_integrity=True) targets_all = targets_all.append(targets, verify_integrity=True) else: samples_all = samples targets_all = targets samples_all = utils.remove_duplicate_columns(samples_all, factor=2) pool = matimport.Importer(dataroot=dataroot) pool.ds = dotdict({ 'samples': samples_all, 'targets': targets_all, }) pool.save(outfile) del samples_all, targets_all # # 3. individual subjects # # Do this again for each individual subject, without pooling, in case we need the data. 
for config_file in configs: with open('../../../jobs/configs/'+config_file, 'r') as f: config = dotdict(yaml.load(f)['config']) for subject in [str(x) for x in subjects]: outfile = subject+'_'+config_file+'.h5' if path.exists(path.join(dataroot, 'transformed', outfile)): print('skipping: '+outfile) continue imp = matimport.Importer(dataroot=dataroot) imp.open(subject+'.h5') samples = imp.store.samples targets = imp.store.targets samples = select.fromtimerange(samples, config.t0, config.t1) samples, targets = select.fromsessionblacklist(samples, targets, ['01']) samples = select.fromchannelblacklist(samples, ['LHEOG', 'RHEOG', 'IOL']) samples = SubsampleTransform(width=config.subsample_width, verbose=True).transform(samples) if config.fft: samples = FFTransform(verbose=True, bins=config.size, fmin=config.fmin, fmax=config.fmax, power=config.power, rate=config.subsample_width/1000.).transform(samples) samples = utils.remove_duplicate_columns(samples, factor=2) pool = matimport.Importer(dataroot=dataroot) pool.ds = dotdict({ 'samples': samples, 'targets': targets, }) pool.save(outfile) print('saved: '+outfile)
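# The pooling above only works because the row index stays unique across all sessions, trials, presentations and subjects; passing `verify_integrity=True` turns an accidental index clash into a hard error instead of silently duplicating rows. A small pandas sketch with made-up toy frames (`pd.concat` here; the notebook's `DataFrame.append` call checks integrity the same way):

```python
import pandas as pd

# Two per-subject frames with disjoint indices pool cleanly.
a = pd.DataFrame({'v': [1, 2]}, index=['s01_t1', 's01_t2'])
b = pd.DataFrame({'v': [3]}, index=['s02_t1'])
pooled = pd.concat([a, b], verify_integrity=True)
print(len(pooled))  # 3

# An overlapping index, however, is rejected immediately.
clash = pd.DataFrame({'v': [9]}, index=['s01_t1'])
try:
    pd.concat([pooled, clash], verify_integrity=True)
    clash_rejected = False
except ValueError:
    clash_rejected = True
print(clash_rejected)  # True
```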
analysis/pool/create_data.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## How to work with pandas
# [Pandas Tutorial](https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python#gs.1njeIsg)

# ### Import Classes and Functions

import numpy as np
import pandas as pd

# ### Load the dataset

df = pd.read_csv('../datasets/pima-indians-diabetes.csv')

# ### Print out the first 5 rows

# Returns the first n rows (n = 5 is the default)
# DataFrame.head(n=5)
df.head()

# ### Print out the last 5 rows

# Returns the last n rows (n = 5 is the default)
# DataFrame.tail(n=5)
df.tail()

# ### Generate descriptive statistics
#
# Generates descriptive statistics that summarize the central tendency, dispersion and shape of a dataset's distribution, excluding NaN values.
# Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types.
# The output will vary depending on what is provided.

# DataFrame.describe(percentiles=None, include=None, exclude=None)
df.describe()

# ### Concise summary of the DataFrame

# DataFrame.info(verbose=None, buf=None, max_cols=None, memory_usage=None, null_counts=None)
df.info()
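# The cells above run against the Pima dataset, but the same calls work on any DataFrame, so a tiny synthetic frame is enough to see what each one returns (the column names below are made up for illustration):

```python
import pandas as pd

df_demo = pd.DataFrame({'glucose': [85, 168, 183, 89, 137],
                        'outcome': [0, 1, 1, 0, 1]})

print(df_demo.head(2))                # first 2 rows
print(df_demo.tail(2))                # last 2 rows

stats = df_demo.describe()            # count/mean/std/min/quartiles/max per column
print(stats.loc['count', 'glucose'])  # 5.0
print(stats.loc['max', 'glucose'])    # 183.0

df_demo.info()                        # dtypes, non-null counts, memory usage
```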
notebooks/0-basics/0-pandas.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import json # %run ../Python_files/util.py tmc_ref_speed_dict =zload('../temp_files/tmc_ref_speed_dict_ext.pkz') data_folder = '/home/jzh/INRIX/All_INRIX_2012_filtered_ext/' def aggregate_speed_data(month): tmc_day_speed_AM_dict = {} tmc_day_speed_MD_dict = {} tmc_day_speed_PM_dict = {} tmc_day_speed_NT_dict = {} # Reading JSON data input_file_AM = data_folder + 'filtered_month-%s_AM_dict_ext.json' %(month) with open(input_file_AM, 'r') as json_file_AM: filtered_month_AM_dict = json.load(json_file_AM) input_file_MD = data_folder + 'filtered_month-%s_MD_dict_ext.json' %(month) with open(input_file_MD, 'r') as json_file_MD: filtered_month_MD_dict = json.load(json_file_MD) input_file_PM = data_folder + 'filtered_month-%s_PM_dict_ext.json' %(month) with open(input_file_PM, 'r') as json_file_PM: filtered_month_PM_dict = json.load(json_file_PM) input_file_NT = data_folder + 'filtered_month-%s_NT_dict_ext.json' %(month) with open(input_file_NT, 'r') as json_file_NT: filtered_month_NT_dict = json.load(json_file_NT) for tmc in tmc_ref_speed_dict.keys(): days_ = days(month) for day in range(days_)[1:]: speed_AM = [] travel_time_AM = [] speed_MD = [] travel_time_MD = [] speed_PM = [] travel_time_PM = [] speed_NT = [] travel_time_NT = [] for hour in [7, 8]: for minute in range(60): key = str(tmc) + '_' + str(month) + '_' + str(day) + '_' + str(hour) + '_' + str(minute) # dealing with missing data if filtered_month_AM_dict[key] == '_': filtered_month_AM_dict[key] = '30_0.02' speed_AM.append(float(filtered_month_AM_dict[key].split('_')[0])) travel_time_AM.append(float(filtered_month_AM_dict[key].split('_')[1])) for hour in [11, 12]: for minute in range(60): key = str(tmc) + '_' + str(month) + '_' + str(day) + '_' + str(hour) + '_' + str(minute) # dealing 
with missing data if filtered_month_MD_dict[key] == '_': filtered_month_MD_dict[key] = '30_0.02' speed_MD.append(float(filtered_month_MD_dict[key].split('_')[0])) travel_time_MD.append(float(filtered_month_MD_dict[key].split('_')[1])) for hour in [17, 18]: for minute in range(60): key = str(tmc) + '_' + str(month) + '_' + str(day) + '_' + str(hour) + '_' + str(minute) # dealing with missing data if filtered_month_PM_dict[key] == '_': filtered_month_PM_dict[key] = '30_0.02' speed_PM.append(float(filtered_month_PM_dict[key].split('_')[0])) travel_time_PM.append(float(filtered_month_PM_dict[key].split('_')[1])) for hour in [21, 22]: for minute in range(60): key = str(tmc) + '_' + str(month) + '_' + str(day) + '_' + str(hour) + '_' + str(minute) # dealing with missing data if filtered_month_NT_dict[key] == '_': filtered_month_NT_dict[key] = '30_0.02' speed_NT.append(float(filtered_month_NT_dict[key].split('_')[0])) travel_time_NT.append(float(filtered_month_NT_dict[key].split('_')[1])) tmc_day_speed_AM = TMC_Day_Speed(tmc, day, speed_AM, travel_time_AM) tmc_day_speed_AM_dict[tmc + str(day)] = tmc_day_speed_AM tmc_day_speed_MD = TMC_Day_Speed(tmc, day, speed_MD, travel_time_MD) tmc_day_speed_MD_dict[tmc + str(day)] = tmc_day_speed_MD tmc_day_speed_PM = TMC_Day_Speed(tmc, day, speed_PM, travel_time_PM) tmc_day_speed_PM_dict[tmc + str(day)] = tmc_day_speed_PM tmc_day_speed_NT = TMC_Day_Speed(tmc, day, speed_NT, travel_time_NT) tmc_day_speed_NT_dict[tmc + str(day)] = tmc_day_speed_NT zdump(tmc_day_speed_AM_dict, '../temp_files/%s_AM/tmc_day_speed_dict_ext.pkz' %(month_to_str(month))) zdump(tmc_day_speed_MD_dict, '../temp_files/%s_MD/tmc_day_speed_dict_ext.pkz' %(month_to_str(month))) zdump(tmc_day_speed_PM_dict, '../temp_files/%s_PM/tmc_day_speed_dict_ext.pkz' %(month_to_str(month))) zdump(tmc_day_speed_NT_dict, '../temp_files/%s_NT/tmc_day_speed_dict_ext.pkz' %(month_to_str(month))) aggregate_speed_data(1) aggregate_speed_data(4) aggregate_speed_data(7) 
aggregate_speed_data(10) # key, filtered_month_AM_dict[key], len(speed), len(travel_time) # tmc_day_speed_dict['129N0442420'].ave_speed() # -
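The cell above leans on several helpers pulled in from `util.py` via `%run` (`zload`, `zdump`, `TMC_Day_Speed`, `days`, `month_to_str`) whose implementations are not shown. As a hedged sketch only, plausible stand-ins for the two small ones might look like this (the signatures and the 2012 default year are assumptions, not the real code):

```python
import calendar

# Hypothetical stand-ins for helpers imported from util.py via %run --
# the real implementations are not shown in this notebook.
def days(month, year=2012):
    """Number of days in the given month; the INRIX data here is from 2012."""
    return calendar.monthrange(year, month)[1]

def month_to_str(month):
    """Month number -> folder-name abbreviation, e.g. 4 -> 'Apr'."""
    return calendar.month_abbr[month]
```

Note that if `days` really returns the day count like this, the loop `for day in range(days_)[1:]` covers days 1 through `days_ - 1` and skips the last day of each month; the actual helper may return one more than the day count.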
01_INRIX_data_preprocessing_ifac17/INRIX_data_preprocessing_08_aggregate_speed_data_ext.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Forecasting for the volume of ISSRs in the entire 3-D grid # # Dataset: overall_sumamry.csv import numpy as np import pandas as pd # + def LR_Model(issr_volume, hw_length, pw_length): ''' issr_volume: an np.array with only issr volumes at each timestamp hw_length: history window length pw_length: prediction window length ''' hw_start = [] hw_end = [] pw_start = [] pw_end = [] all_rmse = [] all_best_hp = [] result = {'hw_start':hw_start, 'hw_end':hw_end, 'pw_start':pw_start, 'pw_end':pw_end, 'rmse': all_rmse, 'best_hyperparameter':all_best_hp} result = pd.DataFrame.from_dict(result) result['hw_length'] = hw_length result['hw_length'] = result['hw_length'].astype(object) result['pw_length'] = pw_length result['pw_length'] = result['pw_length'].astype(object) result['model'] = 'LR' return result # - def MLP_Model(issr_volume, hw_length, pw_length): ''' issr_volume: an np.array with only issr volumes at each timestamp hw_length: history window length pw_length: prediction window length ''' hw_start = [] hw_end = [] pw_start = [] pw_end = [] all_rmse = [] all_best_hp = [] result = {'hw_start':hw_start, 'hw_end':hw_end, 'pw_start':pw_start, 'pw_end':pw_end, 'rmse': all_rmse, 'best_hyperparameter':all_best_hp} result = pd.DataFrame.from_dict(result) result['hw_length'] = hw_length result['hw_length'] = result['hw_length'].astype(object) result['pw_length'] = pw_length result['pw_length'] = result['pw_length'].astype(object) result['model'] = 'MLP' return result def LSTM_Model(issr_volume, hw_length, pw_length): ''' issr_volume: an np.array with only issr volumes at each timestamp hw_length: history window length pw_length: prediction window length ''' hw_start = [] hw_end = [] pw_start = [] pw_end = [] all_rmse = [] all_best_hp = [] result = 
{'hw_start':hw_start, 'hw_end':hw_end, 'pw_start':pw_start, 'pw_end':pw_end, 'rmse': all_rmse, 'best_hyperparameter':all_best_hp} result = pd.DataFrame.from_dict(result) result['hw_length'] = hw_length result['hw_length'] = result['hw_length'].astype(object) result['pw_length'] = pw_length result['pw_length'] = result['pw_length'].astype(object) result['model'] = 'LSTM' return result
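The three functions above are still skeletons: the window bookkeeping lists are never filled and no model is actually fit. As a hedged sketch of the sliding-window setup they imply (the window slicing and the toy `issr_volume` series are illustrative assumptions, with plain least squares standing in for the tuned LR/MLP/LSTM models):

```python
import numpy as np

def make_windows(series, hw_length, pw_length):
    """Slice a 1-D series into (history, future) training pairs."""
    X, y = [], []
    for start in range(len(series) - hw_length - pw_length + 1):
        X.append(series[start:start + hw_length])
        y.append(series[start + hw_length:start + hw_length + pw_length])
    return np.array(X), np.array(y)

# toy stand-in for the ISSR volume column
issr_volume = np.arange(20.0)
X, y = make_windows(issr_volume, hw_length=4, pw_length=2)

# ordinary least squares with a bias column; one coefficient set per horizon step
Xb = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
rmse = np.sqrt(np.mean((Xb @ coef - y) ** 2))
```

For this perfectly linear toy series the least-squares fit is exact, so the RMSE is essentially zero; the real notebook would sweep `hw_length`/`pw_length` and record the per-window RMSE in the result frame.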
ISSR_Prediction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Rotate Array import sys; sys.path.append('../..') from puzzles import leet_puzzle leet_puzzle('rotate-array') n, k = 7, 4 example_array = list(range(n)) example_array # ## Simple Solution # # The simplest solution splits the array at the point to rotate, and constructs a new array using the two parts. # + def rotate_simple(input_array, order): order %= len(input_array) return input_array[order:] + input_array[:order] rotate_simple(example_array, k) # - # %%timeit rotate_simple(example_array, k) # This solution has both time and space complexity of O(n). # ## Bubble Rotate # However if we want to rotate a large array in place (without creating a new array), the solution above is inefficient. The time complexity is O(n), dependent only on the size of the input array, but the space complexity is also O(n) since we need to create a new array with the rotated elements. By applying an algorithm similar to bubble sort we can perform the rotation in place: each inner pass bubbles the first element to the end, rotating the array left by one. # + def rotate_bubble_inplace(input_array, order): order %= len(input_array) for i in range(order): for j in range(len(input_array) - 1): input_array[j], input_array[j + 1] = input_array[j + 1], input_array[j] return input_array example_array2 = list(example_array) rotate_bubble_inplace(example_array2, k) # - # %%timeit rotate_bubble_inplace(example_array, k) # However, although the space complexity is now O(1), the time complexity is O(n * k). It would be good to find a solution that has O(1) space complexity and O(n) time complexity. # ## Reverse Rotate # Another way of rotating the array is to split the array into two sub arrays at the point of rotation. Each subarray is reversed, before reversing the entire array. This solution achieves O(1) space complexity and O(n) time complexity.
# + def rotate_reverse_inplace(input_array, order): length = len(input_array) order = -order % length split_location = length - order input_array[:split_location] = reversed(input_array[:split_location]) input_array[split_location:] = reversed(input_array[split_location:]) input_array.reverse() return input_array example_array2 = list(example_array) rotate_reverse_inplace(example_array2, k) # - # %%timeit rotate_reverse_inplace(example_array, k)
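For comparison (this variant is not in the original notebook, just a sketch), another classic O(n)-time, O(1)-space approach is the "juggling" rotation, which moves each element directly to its final position along gcd(n, k) index cycles. It uses Python 3's `math.gcd`; on the notebook's Python 2 kernel `fractions.gcd` would be needed instead:

```python
from math import gcd

def rotate_juggling(input_array, order):
    """Left-rotate in place by walking gcd(n, order) index cycles."""
    n = len(input_array)
    if n == 0:
        return input_array
    order %= n
    if order == 0:
        return input_array
    for start in range(gcd(n, order)):
        # pull each element of this cycle back by `order` positions
        temp = input_array[start]
        current = start
        while True:
            nxt = (current + order) % n
            if nxt == start:
                break
            input_array[current] = input_array[nxt]
            current = nxt
        input_array[current] = temp
    return input_array

rotate_juggling(list(range(7)), 4)  # → [4, 5, 6, 0, 1, 2, 3]
```

Every element is written exactly once, so the total work is n writes regardless of the rotation amount — the same asymptotics as the reverse solution, with a different constant factor.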
Array/Rotate Array.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 17 - Decision Trees # # A decision tree uses a tree structure to represent a number of possible decision paths and an outcome for each path. They are very transparent and can provide a strong prediction using all types of data, but are computationally hard to create. It's very easy to overfit to your data. We focus on classification trees (rather than regression trees) and work through the ID3 algorithm for learning a decision tree. # ## Entropy # # "How much information do we need to get from each question?" # + from collections import Counter, defaultdict from functools import partial import math def entropy(class_probabilities): return sum(-p * math.log(p, 2) for p in class_probabilities if p) # for data consisting of pairs of inputs and labels, we'll need to compute the class probabilities ourselves def class_probabilities(labels): total_count = len(labels) return [count / total_count for count in Counter(labels).values()] def data_entropy(labeled_data): labels = [label for _, label in labeled_data] probabilities = class_probabilities(labels) return entropy(probabilities) # getting at the entropy of our partitions def partition_entropy(subsets): total_count = sum(len(subset) for subset in subsets) return sum(data_entropy(subset) * len(subset) / total_count for subset in subsets) # one drawback is that partitioning on an attribute with many distinct values can cause overfitting. Imagine something that relies on a # SSN.
It would separate people into categories of one and not generalize at all # - # ## Creating a Decision Tree # + inputs = [ ({'level':'Senior','lang':'Java','tweets':'no','phd':'no'}, False), ({'level':'Senior','lang':'Java','tweets':'no','phd':'yes'}, False), ({'level':'Mid','lang':'Python','tweets':'no','phd':'no'}, True), ({'level':'Junior','lang':'Python','tweets':'no','phd':'no'}, True), ({'level':'Junior','lang':'R','tweets':'yes','phd':'no'}, True), ({'level':'Junior','lang':'R','tweets':'yes','phd':'yes'}, False), ({'level':'Mid','lang':'R','tweets':'yes','phd':'yes'}, True), ({'level':'Senior','lang':'Python','tweets':'no','phd':'no'}, False), ({'level':'Senior','lang':'R','tweets':'yes','phd':'no'}, True), ({'level':'Junior','lang':'Python','tweets':'yes','phd':'no'}, True), ({'level':'Senior','lang':'Python','tweets':'yes','phd':'yes'},True), ({'level':'Mid','lang':'Python','tweets':'no','phd':'yes'}, True), ({'level':'Mid','lang':'Java','tweets':'yes','phd':'no'}, True), ({'level':'Junior','lang':'Python','tweets':'no','phd':'yes'},False)] # from Joel's github. # Our tree consists of decision nodes (which ask a question and partition) and leaf nodes (which provide an answer) # created by the ID3 method. It is as follows: # - # 1. If all the data have the same label, create a leaf node that predicts that label, then stop # 2. If the list of attributes is empty (no more questions to ask), then create a leaf node that predicts the most common # label, then stop. # 3. Otherwise, try partitioning the data by each of the attributes # 4. Choose the partition with the lowest partition entropy # 5. Add a decision node based on the chosen attribute # 6. Recur on each partitioned subset using the remaining attributes # # This is known as a greedy algorithm because it chooses the best immediate option at each step. There are algorithms that can backtrack and improve the result even if a worse move is made in the beginning, but this is a good first step.
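Before wiring entropy into the partition step, a quick sanity check of the formula (restated here so the snippet runs on its own): a fair coin flip carries exactly one bit of entropy, a certain outcome carries none, and a uniform four-way split carries two bits.

```python
import math

def entropy(class_probabilities):
    # H = -sum(p * log2(p)); zero-probability classes contribute nothing
    return sum(-p * math.log(p, 2) for p in class_probabilities if p)

entropy([0.5, 0.5])                 # 1.0 bit -- maximally uncertain coin flip
entropy([1.0])                      # 0.0 -- no uncertainty at all
entropy([0.25, 0.25, 0.25, 0.25])   # 2.0 bits -- uniform over four classes
```

Lower partition entropy means a question whose answers leave us more certain about the label, which is exactly what step 4 of ID3 selects for.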
# + # function for partitioning def group_by(items, key_fn): """returns a defaultdict(list), where each input item is in the list whose key is key_fn(item)""" groups = defaultdict(list) for item in items: key = key_fn(item) groups[key].append(item) return groups def partition_by(inputs, attribute): """returns a dict of inputs partitioned by the attribute each input is a pair (attribute_dict, label)""" return group_by(inputs, lambda x: x[0][attribute]) # function for computing entropy def partition_entropy_by(inputs, attribute): partitions = partition_by(inputs, attribute) return partition_entropy(partitions.values()) # + # find the minimum-entropy partition for the whole set for key in ['level', 'lang', 'tweets', 'phd']: print(key, partition_entropy_by(inputs, key)) # + # lowest entropy comes from splitting on level, so we make a subtree for each possible level value. senior_inputs = [(input, label) for input, label in inputs if input['level'] == 'Senior'] for key in ['lang', 'tweets', 'phd']: print(key, partition_entropy_by(senior_inputs, key)) # + # yes tweets always result in True while no tweets always result in False, so it is a zero entropy partition. Woo! # Mid candidates are all True, so they partition themselves here. Finally, Junior candidates. We end up splitting on # phd, after which we see that no PhD always results in True and a PhD always results in False. See book for actual tree. # Sometimes we are missing a label or haven't seen it before, so we handle that with a default case here.
Finally, let's generalize # to make our nodes (True, False, or tuple(attribute, subtree_dict)) def classify(tree, input): if tree in [True, False]: return tree attribute, subtree_dict = tree subtree_key = input.get(attribute) if subtree_key not in subtree_dict: subtree_key = None subtree = subtree_dict[subtree_key] return classify(subtree, input) # now just build the tree from the training data # - def build_tree_id3(inputs, split_candidates=None): if split_candidates is None: split_candidates = inputs[0][0].keys() num_inputs = len(inputs) num_trues = len([label for item, label in inputs if label]) num_falses = num_inputs - num_trues if num_trues == 0: return False # No trues? Return false leaf if num_falses == 0: return True # No Falses? Return true leaf if not split_candidates: return num_trues >= num_falses # return the majority leaf best_attribute = min(split_candidates, key=partial(partition_entropy_by, inputs)) partitions = partition_by(inputs, best_attribute) new_candidates = [a for a in split_candidates if a != best_attribute] # recursively build the subtrees subtrees = { attribute_value : build_tree_id3(subset, new_candidates) for attribute_value, subset in partitions.items()} subtrees[None] = num_trues > num_falses # default case return (best_attribute, subtrees) # + tree = build_tree_id3(inputs) classify(tree, { "level" : "Junior", "lang" : "Java", "tweets" : "yes", "phd" : "no"}) # - # ## Random Forests # # We can build multiple decision trees and then let thom vote on how to classify inputs # + def forest_classify(trees, input): votes = [classify(tree, input) for tree in trees] vote_counts = Counter(votes) return vote_counts.most_common(1)[0][0] # We can build "random" trees by training each on bootstrapped data of the inputs. This is called "Bootstrap Aggregating" # or bagging. 
We can also randomly choose the next attribute to split on from a subset rather than all of the remaining attributes — a fragment that would live inside the tree-building step of a random-tree class: if len(split_candidates) <= self.num_split_candidates: sampled_split_candidates = split_candidates else: sampled_split_candidates = random.sample(split_candidates, self.num_split_candidates) best_attribute = min(sampled_split_candidates, key=partial(partition_entropy_by, inputs)) partitions = partition_by(inputs, best_attribute) # - # ## This concludes Chapter 17 # # For further exploration, scikit-learn has decision tree models. It also has an ensemble module that includes a RandomForestClassifier as well as other ensemble methods. # # Wikipedia (https://en.wikipedia.org/wiki/Decision_tree) is a good place to learn more about decision tree algorithms.
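The bootstrap step itself is left as prose above. A minimal sketch of bagging around a tree builder might look like the following — `build_tree` is any function from a labeled dataset to a tree (the chapter's `build_tree_id3` would fit; the majority-label stub below is just a stand-in so the snippet runs on its own):

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Resample len(data) items with replacement -- the 'B' in bagging."""
    return [rng.choice(data) for _ in data]

def build_forest(inputs, build_tree, num_trees, seed=0):
    """Train one tree per bootstrap resample of the training data."""
    rng = random.Random(seed)
    return [build_tree(bootstrap_sample(inputs, rng)) for _ in range(num_trees)]

def forest_classify(trees, classify_fn, item):
    """Majority vote across the trees."""
    votes = [classify_fn(tree, item) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

# stand-in "tree": just the majority label of its (bootstrapped) training data
majority_label = lambda subset: Counter(label for _, label in subset).most_common(1)[0][0]

data = [({'level': 'Mid'}, True), ({'level': 'Junior'}, True), ({'level': 'Senior'}, False)]
trees = build_forest(data, majority_label, num_trees=5)
forest_classify(trees, lambda tree, _item: tree, {'level': 'Mid'})
```

Swapping `majority_label` for `build_tree_id3` and the stub classifier for `classify` gives the forest the chapter describes; adding the random attribute subsampling from the fragment above would make it a random forest proper.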
DSFS Chapter 17 - Decision Trees.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: ML # language: python # name: ml # --- # # Embeddings With Sentence-Transformers # # We've worked through creating our embeddings using the `transformers` library - and at times it can be quite involved. Now, it's important to understand the steps, but we can make life easier by using the `sentence-transformers` library. # # We'll work through the same process - but using `sentence-transformers` instead. # + sentences = [ "Three years later, the coffin was still full of Jello.", "The fish dreamed of escaping the fishbowl and into the toilet where he saw his friend go.", "The person box was packed with jelly many dozens of months later.", "Standing on one's head at job interviews forms a lasting impression.", "It took him a month to finish the meal.", "He found a leprechaun in his walnut shell." ] # thanks to https://randomwordgenerator.com/sentence.php # - # Initialize our model: # + from sentence_transformers import SentenceTransformer model = SentenceTransformer('bert-base-nli-mean-tokens') # - # Encode the sentences: sentence_embeddings = model.encode(sentences) sentence_embeddings.shape sentence_embeddings # And now we have our sentence embeddings - a much quicker approach. We then compare just as we did before using cosine similarity: from sklearn.metrics.pairwise import cosine_similarity # Let's calculate cosine similarity for sentence `0`: cosine_similarity( [sentence_embeddings[0]], sentence_embeddings[1:] ) # These similarities translate to almost the exact same values as we calculated before: # # | Index | Sentence | Similarity (before) | New similarity | # | --- | --- | --- | --- | # | 1 | "The fish dreamed of escaping the fishbowl and into the toilet where he saw his friend go." | 0.3309 | 0.3309 | # | 2 | "The person box was packed with jelly many dozens of months later."
| 0.7219 | 0.7219 | # | 3 | "Standing on one's head at job interviews forms a lasting impression." | 0.1748 | 0.174**7** | # | 4 | "It took him a month to finish the meal." | 0.4471 | 0.447**2** | # | 5 | "He found a leprechaun in his walnut shell." | 0.5548 | 0.554**7** | # # So, using `sentence-transformers` can make life much easier. But either option produces the same outcome.
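The sklearn call above hides very little: cosine similarity is just a normalized dot product. A standalone sketch of the same computation (the toy vectors here are illustrative, not real sentence embeddings):

```python
import numpy as np

def cosine_sim(a, b):
    # cos(theta) = a . b / (|a| * |b|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 0.0])
cosine_sim(a, b)  # ≈ 0.5 -- dot product is 1, each norm is sqrt(2)
```

Because BERT-style embeddings can have very different magnitudes, this angle-only measure is the standard choice for comparing them.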
course/similarity/04_sentence_transformers.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Python version import sys print('Python: {}'.format(sys.version)) # scipy import scipy print('scipy: {}'.format(scipy.__version__)) # numpy import numpy print('numpy: {}'.format(numpy.__version__)) # matplotlib import matplotlib print('matplotlib: {}'.format(matplotlib.__version__)) # pandas import pandas print('pandas: {}'.format(pandas.__version__)) import sklearn print('sklearn: {}'.format(sklearn.__version__)) # + from pandas import read_csv from pandas.plotting import scatter_matrix from matplotlib import pyplot from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_val_score from sklearn.model_selection import StratifiedKFold from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC # - url = "iris.csv" names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class'] dataset = read_csv(url, names=names) # dimensions print(dataset.shape) # look at first 20 lines of dataset print(dataset.head(20)) # stats summary print(dataset.describe()) # class distribution print(dataset.groupby('class').size()) # Box and whisker plots dataset.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False) pyplot.show() # Histograms dataset.hist() pyplot.show() # Scatter plot matrix scatter_matrix(dataset) pyplot.show() array = dataset.values X = array[:,0:4] y = array[:,4] X_train, X_validation, Y_train, Y_validation = 
train_test_split(X, y, test_size=0.20, random_state=1) # + models = [] # Logistic Regression (LR) models.append(('LR', LogisticRegression(solver='liblinear', multi_class='ovr'))) results = [] names = [] for name, model in models: kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True) cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy') results.append(cv_results) names.append(name) print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std())) # - # Linear Discriminant Analysis (LDA) models.append(('LDA', LinearDiscriminantAnalysis())) results = [] names = [] for name, model in models: kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True) cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy') results.append(cv_results) names.append(name) print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std())) # K-Nearest Neighbors (KNN) models.append(('KNN', KNeighborsClassifier())) results = [] names = [] for name, model in models: kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True) cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy') results.append(cv_results) names.append(name) print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std())) # Classification and Regression Trees (CART) models.append(('CART', DecisionTreeClassifier())) results = [] names = [] for name, model in models: kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True) cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy') results.append(cv_results) names.append(name) print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std())) #Gaussian Naive Bayes (NB) models.append(('NB', GaussianNB())) results = [] names = [] for name, model in models: kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True) cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy') results.append(cv_results) 
names.append(name) print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std())) #Support Vector Machines (SVM) models.append(('SVM', SVC(gamma='auto'))) results = [] names = [] for name, model in models: kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True) cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy') results.append(cv_results) names.append(name) print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std())) # Compare Algorithms pyplot.boxplot(results, labels=names) pyplot.title('Algorithm Comparison') pyplot.show() model = SVC(gamma='auto') model.fit(X_train, Y_train) predictions = model.predict(X_validation) print(accuracy_score(Y_validation, predictions)) print(confusion_matrix(Y_validation, predictions)) print(classification_report(Y_validation, predictions))
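The six per-model cells above repeat the same evaluation loop, resetting `results` each time so earlier models are re-scored on every run. They could be collapsed into a single cell — a sketch, using `load_iris()` in place of reading `iris.csv` so it runs on its own, and a plain `LogisticRegression(max_iter=200)` since the notebook's `solver`/`multi_class` arguments are version-sensitive:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in for read_csv('iris.csv') so the cell is self-contained
X, y = load_iris(return_X_y=True)
X_train, X_validation, Y_train, Y_validation = train_test_split(
    X, y, test_size=0.20, random_state=1)

models = [
    ('LR', LogisticRegression(max_iter=200)),
    ('LDA', LinearDiscriminantAnalysis()),
    ('KNN', KNeighborsClassifier()),
    ('CART', DecisionTreeClassifier()),
    ('NB', GaussianNB()),
    ('SVM', SVC(gamma='auto')),
]
results, names = [], []
for name, model in models:
    kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy')
    results.append(cv_results)
    names.append(name)
    print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std()))
```

Each model is then fit and scored exactly once, and `results`/`names` end up in the same shape the boxplot cell expects.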
Python_MachineLearning.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os DATA_DIR = os.getenv('HYPERNET_DATA_DIR', os.path.join('..', '..', 'hypernet-data')) RESULTS_DIR = os.path.join(os.getenv('HYPERNET_RESULTS_DIR', os.path.join('..', '..', 'hypernet-data', 'results')), 'conv_3d_pso') # + from python_research.experiments.sota_models.conv_3D.pso_train_conv3D import Arguments, PsoRunner arguments = Arguments( run_idx=1, dtype='torch.cuda.FloatTensor', cont=None, epochs=999, data_path=os.path.join(DATA_DIR, 'PaviaU_corrected.npy'), data_name='paviaU_full', min_neighborhood_size=7, max_neighborhood_size=9, labels_path=os.path.join(DATA_DIR, 'PaviaU_gt.npy'), batch=64, patience=5, dest_path=RESULTS_DIR, classes=9, test_size=0.1, val_size=0.1, min_channels=[16, 16, 16], max_channels=[32, 32, 32], channels_step=[4, 4, 4], input_depth=103, swarm_size=5 ) pso_runner = PsoRunner(arguments) pso_runner.run() # -
jupyters/conv_3D_pso.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="Lvo0t7XVIkWZ" # ### Parameters # + colab={} colab_type="code" id="cCpkS9C_H7Tl" BATCH_SIZE = 64 EPOCHS = 10 training_images_file = 'gs://mnist-public/train-images-idx3-ubyte' training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte' validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte' validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte' # + [markdown] colab_type="text" id="qpiJj8ym0v0-" # ### Imports # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="AoilhmYe1b5t" outputId="1c6f12f9-c269-41b7-f90b-ba600bf45e8b" import os, re, math, json, shutil, pprint import PIL.Image, PIL.ImageFont, PIL.ImageDraw import IPython.display as display import numpy as np import tensorflow as tf from matplotlib import pyplot as plt print("Tensorflow version " + tf.__version__) # + cellView="form" colab={} colab_type="code" id="qhdz68Xm3Z4Z" #@title visualization utilities [RUN ME] """ This cell contains helper functions used for visualization and downloads only. You can skip reading it. There is very little useful Keras/Tensorflow code here. """ # Matplotlib config plt.ioff() plt.rc('image', cmap='gray_r') plt.rc('grid', linewidth=1) plt.rc('xtick', top=False, bottom=False, labelsize='large') plt.rc('ytick', left=False, right=False, labelsize='large') plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white') plt.rc('text', color='a8151a') plt.rc('figure', facecolor='F0F0F0', figsize=(16,9)) # Matplotlib fonts MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf") # pull a batch from the datasets. 
This code is not very nice, it gets much better in eager mode (TODO) def dataset_to_numpy_util(training_dataset, validation_dataset, N): # get one batch from each: 10000 validation digits, N training digits batch_train_ds = training_dataset.unbatch().batch(N) # eager execution: loop through datasets normally if tf.executing_eagerly(): for validation_digits, validation_labels in validation_dataset: validation_digits = validation_digits.numpy() validation_labels = validation_labels.numpy() break for training_digits, training_labels in batch_train_ds: training_digits = training_digits.numpy() training_labels = training_labels.numpy() break else: v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next() t_images, t_labels = batch_train_ds.make_one_shot_iterator().get_next() # Run once, get one batch. Session.run returns numpy results with tf.Session() as ses: (validation_digits, validation_labels, training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels]) # these were one-hot encoded in the dataset validation_labels = np.argmax(validation_labels, axis=1) training_labels = np.argmax(training_labels, axis=1) return (training_digits, training_labels, validation_digits, validation_labels) # create digits from local fonts for testing def create_digits_from_local_fonts(n): font_labels = [] img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1 font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25) font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25) d = PIL.ImageDraw.Draw(img) for i in range(n): font_labels.append(i%10) d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2) font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded) font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 
28*n]), n, axis=1), axis=0), [n, 28*28]) return font_digits, font_labels # utility to display a row of digits with their predictions def display_digits(digits, predictions, labels, title, n): fig = plt.figure(figsize=(13,3)) digits = np.reshape(digits, [n, 28, 28]) digits = np.swapaxes(digits, 0, 1) digits = np.reshape(digits, [28, 28*n]) plt.yticks([]) plt.xticks([28*x+14 for x in range(n)], predictions) plt.grid(b=None) for i,t in enumerate(plt.gca().xaxis.get_ticklabels()): if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red plt.imshow(digits) plt.grid(None) plt.title(title) display.display(fig) # utility to display multiple rows of digits, sorted by unrecognized/recognized status def display_top_unrecognized(digits, predictions, labels, n, lines): idx = np.argsort(predictions==labels) # sort order: unrecognized first for i in range(lines): display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n], "{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n) def plot_learning_rate(lr_func, epochs): xx = np.arange(epochs+1, dtype=np.float) y = [lr_func(x) for x in xx] fig, ax = plt.subplots(figsize=(9, 6)) ax.set_xlabel('epochs') ax.set_title('Learning rate\ndecays from {:0.3g} to {:0.3g}'.format(y[0], y[-2])) ax.minorticks_on() ax.grid(True, which='major', axis='both', linestyle='-', linewidth=1) ax.grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5) ax.step(xx,y, linewidth=3, where='post') display.display(fig) class PlotTraining(tf.keras.callbacks.Callback): def __init__(self, sample_rate=1, zoom=1): self.sample_rate = sample_rate self.step = 0 self.zoom = zoom self.steps_per_epoch = 60000//BATCH_SIZE def on_train_begin(self, logs={}): self.batch_history = {} self.batch_step = [] self.epoch_history = {} self.epoch_step = [] self.fig, self.axes = plt.subplots(1, 2, figsize=(16, 7)) plt.ioff() def 
on_batch_end(self, batch, logs={}): if (batch % self.sample_rate) == 0: self.batch_step.append(self.step) for k,v in logs.items(): # do not log "batch" and "size" metrics that do not change # do not log training accuracy "acc" if k=='batch' or k=='size':# or k=='acc': continue self.batch_history.setdefault(k, []).append(v) self.step += 1 def on_epoch_end(self, epoch, logs={}): plt.close(self.fig) self.axes[0].cla() self.axes[1].cla() self.axes[0].set_ylim(0, 1.2/self.zoom) self.axes[1].set_ylim(1-1/self.zoom/2, 1+0.1/self.zoom/2) self.epoch_step.append(self.step) for k,v in logs.items(): # only log validation metrics if not k.startswith('val_'): continue self.epoch_history.setdefault(k, []).append(v) display.clear_output(wait=True) for k,v in self.batch_history.items(): self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.batch_step) / self.steps_per_epoch, v, label=k) for k,v in self.epoch_history.items(): self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.epoch_step) / self.steps_per_epoch, v, label=k, linewidth=3) self.axes[0].legend() self.axes[1].legend() self.axes[0].set_xlabel('epochs') self.axes[1].set_xlabel('epochs') self.axes[0].minorticks_on() self.axes[0].grid(True, which='major', axis='both', linestyle='-', linewidth=1) self.axes[0].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5) self.axes[1].minorticks_on() self.axes[1].grid(True, which='major', axis='both', linestyle='-', linewidth=1) self.axes[1].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5) display.display(self.fig) # + [markdown] colab_type="text" id="Lz1Zknfk4qCx" # ### tf.data.Dataset: parse files and prepare training and validation datasets # Please read the [best practices for building](https://www.tensorflow.org/guide/performance/datasets) input pipelines with tf.data.Dataset # + colab={} colab_type="code" id="ZE8dgyPC1_6m" AUTO = tf.data.experimental.AUTOTUNE def read_label(tf_bytestring): label = tf.io.decode_raw(tf_bytestring, 
tf.uint8) label = tf.reshape(label, []) label = tf.one_hot(label, 10) return label def read_image(tf_bytestring): image = tf.io.decode_raw(tf_bytestring, tf.uint8) image = tf.cast(image, tf.float32)/256.0 image = tf.reshape(image, [28*28]) return image def load_dataset(image_file, label_file): imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16) imagedataset = imagedataset.map(read_image, num_parallel_calls=16) labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8) labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16) dataset = tf.data.Dataset.zip((imagedataset, labelsdataset)) return dataset def get_training_dataset(image_file, label_file, batch_size): dataset = load_dataset(image_file, label_file) dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset dataset = dataset.shuffle(5000, reshuffle_each_iteration=True) dataset = dataset.repeat() # Mandatory for Keras for now dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed dataset = dataset.prefetch(AUTO) # fetch next batches while training on the current one (-1: autotune prefetch buffer size) return dataset def get_validation_dataset(image_file, label_file): dataset = load_dataset(image_file, label_file) dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch dataset = dataset.repeat() # Mandatory for Keras for now return dataset # instantiate the datasets training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE) validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file) # For TPU, we will need a function that 
returns the dataset training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE) validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file) # + [markdown] colab_type="text" id="_fXo6GuvL3EB" # ### Let's have a look at the data # + colab={"base_uri": "https://localhost:8080/", "height": 177} colab_type="code" id="yZ4tjPKvL2eh" outputId="05c66331-7997-4b82-b5aa-ca83ef527701" N = 24 (training_digits, training_labels, validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N) display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N) display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N) font_digits, font_labels = create_digits_from_local_fonts(N) # + [markdown] colab_type="text" id="KIc0oqiD40HC" # ### Keras model # If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: [Tensorflow and deep learning without a PhD](https://github.com/GoogleCloudPlatform/tensorflow-without-a-phd/#featured-code-sample) # + colab={"base_uri": "https://localhost:8080/", "height": 697} colab_type="code" id="56y8UNFQIVwj" outputId="fdb49885-bb2c-4cac-bfe6-d182fa28a448" model = tf.keras.Sequential( [ tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)), tf.keras.layers.Conv2D(kernel_size=3, filters=12, use_bias=False, padding='same'), tf.keras.layers.BatchNormalization(center=True, scale=False), tf.keras.layers.Activation('relu'), tf.keras.layers.Conv2D(kernel_size=6, filters=24, use_bias=False, padding='same', strides=2), tf.keras.layers.BatchNormalization(center=True, scale=False), tf.keras.layers.Activation('relu'), tf.keras.layers.Conv2D(kernel_size=6, filters=32, use_bias=False, padding='same', strides=2), tf.keras.layers.BatchNormalization(center=True, scale=False), 
tf.keras.layers.Activation('relu'), tf.keras.layers.Flatten(), tf.keras.layers.Dense(200, use_bias=False), tf.keras.layers.BatchNormalization(center=True, scale=False), tf.keras.layers.Activation('relu'), tf.keras.layers.Dropout(0.3), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.01), loss='categorical_crossentropy', metrics=['accuracy']) # print model layers model.summary() # utility callback that displays training curves plot_training = PlotTraining(sample_rate=10, zoom=16) # + [markdown] colab_type="text" id="E0A7CSbW67EY" # ### Learning Rate schedule # + colab={"base_uri": "https://localhost:8080/", "height": 476} colab_type="code" id="2qz_k44U6_-f" outputId="7c7ff914-6cf0-473a-9b77-77df977baa47" # lr decay function def lr_decay(epoch): return 0.01 * math.pow(0.666, epoch) # lr schedule callback lr_decay_callback = tf.keras.callbacks.LearningRateScheduler(lr_decay, verbose=True) # important to see what you are doing plot_learning_rate(lr_decay, EPOCHS) # + [markdown] colab_type="text" id="CuhDh8ao8VyB" # ### Train and validate the model # + colab={"base_uri": "https://localhost:8080/", "height": 466} colab_type="code" id="TTwH_P-ZJ_xx" outputId="23d7b887-192b-4802-f558-60053193a35b" steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset print("Steps per epoch: ", steps_per_epoch) history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS, validation_data=validation_dataset, validation_steps=1, callbacks=[plot_training, lr_decay_callback]) # + [markdown] colab_type="text" id="9jFVovcUUVs1" # ### Visualize predictions # + colab={"base_uri": "https://localhost:8080/", "height": 639} colab_type="code" id="w12OId8Mz7dF" outputId="5be2f10d-42ad-4ea3-95c7-10fb240800f9" # recognize digits from local fonts probabilities = model.predict(font_digits, steps=1) predicted_labels = np.argmax(probabilities, axis=1) display_digits(font_digits, predicted_labels, font_labels, 
"predictions from local fonts (bad predictions in red)", N) # recognize validation digits probabilities = model.predict(validation_digits, steps=1) predicted_labels = np.argmax(probabilities, axis=1) display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7) # + [markdown] colab_type="text" id="SVY1pBg5ydH-" # ## License # + [markdown] colab_type="text" id="hleIN5-pcr0N" # # # --- # # # author: <NAME><br> # twitter: @martin_gorner # # # --- # # # Copyright 2019 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # --- # # # This is not an official Google product but sample code provided for an educational purpose #
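The `lr_decay` schedule used for training above is a plain exponential decay; it can be reproduced standalone to inspect the first few per-epoch values (same constants as in the notebook, nothing else assumed):

```python
import math

def lr_decay(epoch):
    # same constants as the notebook's schedule: 0.01 * 0.666^epoch
    return 0.01 * math.pow(0.666, epoch)

# learning rates for the first five epochs
schedule = [lr_decay(e) for e in range(5)]
print(schedule)
```

Each epoch multiplies the rate by 0.666, so the rate roughly halves every two epochs.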
tensorflow-mnist-tutorial/keras_05_mnist_batch_norm.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .r
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: R
#     language: R
#     name: ir
# ---

# ### Initiate h2o cluster to run Kmeans

library(h2o)
h2o.init(nthreads = -1, max_mem_size = '90G')

# #### Data Munging

# +
predictors <- h2o.importFile(path = "https://s3-us-west-2.amazonaws.com/data516project/data/allDataCleaned.csv"
                             , destination_frame = "predictors")
predictors$C1 <- NULL
# -

## Get feature lists
lasso_features1 <- read.csv('lassoFeatures1_categorized.csv')
lasso_features2 <- read.csv('lassoFeatures2_categorized.csv')

## Subset lasso_features1 for clustering and split training and validation frames
full <- predictors[, c(as.character(lasso_features1$Variable))]

dim(full)

# #### PCA transformation on the groups of variables

table(lasso_features1$Category)

pca_features1 <- lasso_features1[lasso_features1$Category == "Credit and Delinquency", ]$Variable
pca_features2 <- lasso_features1[lasso_features1$Category == "Finance", ]$Variable
pca_features3 <- lasso_features1[lasso_features1$Category == "Housing", ]$Variable
pca_features4 <- lasso_features1[grep("Neighborhood", lasso_features1$Category), ]$Variable
pca_features5 <- lasso_features1[lasso_features1$Category == "Trade", ]$Variable

pca_group_variables <- function(features, k){
    training_frame = predictors[, c(as.character(features))]
    pca_fit = h2o.prcomp(training_frame, transform = "STANDARDIZE",
                         pca_method = "GramSVD", seed = 1234, k = k,
                         impute_missing = TRUE)
    return(pca_fit)
}

# #### Train PCA for each feature group

# +
# Credit and Delinquency
pca_fit1 <- pca_group_variables(pca_features1, 18)
# Finance
pca_fit2 <- pca_group_variables(pca_features2, 9)
# Housing
pca_fit3 <- pca_group_variables(pca_features3, 8)
# Neighborhood
pca_fit4 <- pca_group_variables(pca_features4, 13)
# Trade
pca_fit5 <- pca_group_variables(pca_features5, 21)
# -

pca_fit1 # 10 PCs to reach 80% variance explained
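The per-group choice of k above keeps just enough principal components to reach 80% cumulative variance explained. For reference, the same selection rule can be sketched in Python with scikit-learn (toy correlated data standing in for one feature group; not the project's dataset):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1234)
# toy correlated features: 500 rows x 18 columns, mixed through a random matrix
X = rng.normal(size=(500, 18)) @ rng.normal(size=(18, 18))

pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
# smallest number of components whose cumulative variance explained is >= 80%
k80 = int(np.searchsorted(cum, 0.80)) + 1
print(k80, cum[k80 - 1])
```

h2o's `h2o.prcomp` reports the same cumulative-proportion table in its model summary; here the rule is applied to scikit-learn's `explained_variance_ratio_` instead.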
pca_fit2 # 4 PCs to reach 80% variance explained

pca_fit3 # 6 PCs to reach 80% variance explained

pca_fit4 # 7 PCs to reach 80% variance explained

pca_fit5 # 9 PCs to reach 80% variance explained

# #### Represent the original dataset with principal components

features_pca1 <- h2o.predict(pca_fit1, predictors[, c(as.character(pca_features1))])
features_pca2 <- h2o.predict(pca_fit2, predictors[, c(as.character(pca_features2))])
#features_pca3 <- h2o.predict(pca_fit3, predictors[, c(as.character(pca_features3))])
# Housing variables are not PCA transformed because 6 out of 8 dimensions are needed to
# represent 80% of the variance.
features_pca4 <- h2o.predict(pca_fit4, predictors[, c(as.character(pca_features4))])
features_pca5 <- h2o.predict(pca_fit5, predictors[, c(as.character(pca_features5))])

names(features_pca1) = paste0("credit_", names(features_pca1))
names(features_pca2) = paste0("finance_", names(features_pca2))
names(features_pca4) = paste0("neighborhood_", names(features_pca4))
names(features_pca5) = paste0("trade_", names(features_pca5))

names(features_pca1)

full <- predictors[, c(as.character(lasso_features1$Variable))]
full = h2o.cbind(full, features_pca1[, 1:10])
full = h2o.cbind(full, features_pca2[, 1:4])
full = h2o.cbind(full, features_pca4[, 1:7])
full = h2o.cbind(full, features_pca5[, 1:9])
dim(full)

names(full)

pca_reduced_LASSO = full[, !(names(full) %in% c(as.character(pca_features1),
                                                as.character(pca_features2),
                                                as.character(pca_features4),
                                                as.character(pca_features5)))]

class(pca_reduced_LASSO)

h2o.exportFile(pca_reduced_LASSO, 'pca_reduced_LASSO.csv')

# #### Train Kmeans model on the PCA-reduced, LASSO-selected feature set

# +
pca_reduced_predictors <- h2o.importFile(path = normalizePath("/mnt/UW/outputDataset/pca_reduced_LASSO.csv")
                                         , destination_frame = "pca_reduced_predictors")
pca_reduced_predictors$C1 <- NULL
# -

split <- h2o.splitFrame(pca_reduced_predictors, ratios = c(0.5, 0.2), seed = -1)
training =
split[[1]]
validation = split[[2]]
dim(training)

x = names(training)

# #### Optimal K

## For h2o kmeans, missing values are automatically imputed by the column mean of the training data
fit_kmeans <- function(training_frame, k, init){
    kmeans_fit <- h2o.kmeans(training_frame = training_frame,
                             x, validation_frame = validation,
                             nfolds = 8, standardize = TRUE,
                             seed = 1234, k = k, init = init)
    return(kmeans_fit)
}

# +
var_explained_plusplus = c()
i = 1
for (k in 3:13){
    model <- fit_kmeans(training, k, "PlusPlus")
    var_explained_plusplus[i] = (1 - model@model$model_summary$within_cluster_sum_of_squares / model@model$model_summary$total_sum_of_squares)
    i = i + 1
}
# -

var_explained_plusplus

### PlusPlus initialization, elbow curve
plot(c(3:13), var_explained_plusplus)

# +
var_explained_random = c()
i = 1
for (k in 3:13){
    model <- fit_kmeans(training, k, "Random")
    var_explained_random[i] = (1 - model@model$model_summary$within_cluster_sum_of_squares / model@model$model_summary$total_sum_of_squares)
    i = i + 1
}
# -

var_explained_random

# +
var_explained_furthest = c()
i = 1
for (k in 3:13){
    model <- fit_kmeans(training, k, "Furthest")
    var_explained_furthest[i] = (1 - model@model$model_summary$within_cluster_sum_of_squares / model@model$model_summary$total_sum_of_squares)
    i = i + 1
}

var_explained_furthest
# -

# Save elbow curve data of the three initialization methods
df_Kmeans_LASSO_Cleaned <- data.frame(k = c(3:13),
                                      plusplus = c(var_explained_plusplus),
                                      random = c(var_explained_random),
                                      furthest = c(var_explained_furthest))
saveRDS(df_Kmeans_LASSO_Cleaned, "Kmeans_LASSO_Cleaned.rds")

ls()
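The elbow curves above compute variance explained as 1 − WSS/TSS from the h2o model summary. For readers without an h2o cluster, the same quantity can be sketched in Python with scikit-learn, where the within-cluster sum of squares is `KMeans.inertia_` (toy data, not the project's PCA-reduced frame):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1234)
# toy stand-in for the PCA-reduced feature matrix
X = rng.normal(size=(300, 5))

# total sum of squares around the overall mean
total_ss = ((X - X.mean(axis=0)) ** 2).sum()

var_explained = []
for k in range(3, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=1234).fit(X)
    # within-cluster sum of squares is the fitted model's inertia_
    var_explained.append(1 - km.inertia_ / total_ss)

print(var_explained)
```

Plotting `var_explained` against k reproduces the elbow curve; the "elbow" is where additional clusters stop buying much extra variance explained.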
featureEngineering/Kmeans_LASSOSelectedFeatures.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# https://www.cvxpy.org/examples/basic/least_squares.html
#
# https://www.desmos.com/calculator/31dywqitez?lang=ko
#
# https://jcboyd.github.io/assets/ma2823_2017/Lab+2+2017-10-06++Convex+optimization+in+Python.html
#
# https://towardsdatascience.com/convex-and-non-convex-optimisation-899174802b60
#
# https://www.coursera.org/lecture/operations-research-theory/5-2-convex-sets-and-functions-0GgiA
#
# http://www.stat.cmu.edu/~ryantibs/convexopt-F16/#assignments
#
# https://github.com/icme/cme252-optimization.git

# What is an optimization problem?
# 1. Find the optimal value, or an approximation of it.
# 2. Maximize or minimize a cost function.
#
# Bounded above/below :
#
# a = sup S => the supremum or least upper bound; need not be attained, e.g. for (-inf, 1) it is 1 (the best upper approximation)
#
# a = inf S => the infimum or greatest lower bound, u <= x for all x in S (the best lower approximation)

# Affine Function : the composition of a linear function followed by a translation. ax is linear ; (x+b)∘(ax) is affine
#
# *https://math.stackexchange.com/questions/275310/what-is-the-difference-between-linear-and-affine-function
#
# Feasible Solution : a solution that satisfies all the constraints
# - ![image.png](attachment:image.png)

# Convex Sets Definition
# https://medium.com/swlh/visualizing-convex-sets-638ce373dd89
#
# Convex Function
# https://scipy-lectures.org/advanced/mathematical_optimization/auto_examples/plot_convex.html
#
# A local minimum of a convex function is always a global minimum.
# https://www.geeksforgeeks.org/local-and-global-optimum-in-uni-variate-optimization/
#

# +
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 2)

plt.figure(1, figsize=(3, 2.5))
plt.clf()

# A convex function
plt.plot(x, x**2, linewidth=2)
plt.text(-.7, -.6**2, '$f$', size=20)

# The tangent in one point
plt.plot(x, 2*x - 1)
plt.plot(1, 1, 'k+')
plt.text(.3, -.75, "Tangent to $f$", size=15)
plt.text(1, 1 - .5, 'C', size=15)

# Convexity as barycenter
plt.plot([.35, 1.85], [.35**2, 1.85**2])
plt.plot([.35, 1.85], [.35**2, 1.85**2], 'k+')
plt.text(.35 - .2, .35**2 + .1, 'A', size=15)
plt.text(1.85 - .2, 1.85**2, 'B', size=15)

plt.ylim(bottom=-1)
plt.axis('off')
plt.tight_layout()

# Convexity as barycenter
plt.figure(2, figsize=(3, 2.5))
plt.clf()
plt.plot(x, x**2 + np.exp(-5*(x - .5)**2), linewidth=2)
plt.text(-.7, -.6**2, '$f$', size=20)
plt.ylim(bottom=-1)
plt.axis('off')
plt.tight_layout()
plt.show()

# +
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html
from scipy.spatial import ConvexHull, convex_hull_plot_2d
import numpy as np

points = np.random.rand(30, 2)   # 30 random points in 2-D
hull = ConvexHull(points)

import matplotlib.pyplot as plt
plt.plot(points[:,0], points[:,1], 'o')
for simplex in hull.simplices:
    plt.plot(points[simplex, 0], points[simplex, 1], 'k-')
plt.plot(points[hull.vertices,0], points[hull.vertices,1], 'r--', lw=2)
plt.plot(points[hull.vertices[0],0], points[hull.vertices[0],1], 'ro')
plt.show()

# +
import cvxpy as cp # pip install numpy==1.20.3
import numpy as np

# solving problem
# objective function : f(x) is Sum(Square(A*x - b))
# x is in the domain; f(x) is in the codomain
# constraint function : 0 <= x <= 1

# Problem data.
m = 30
n = 20
np.random.seed(1)
A = np.random.randn(m, n)
b = np.random.randn(m)

# Construct the problem.
x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A*x - b))
constraints = [0 <= x, x <= 1]
prob = cp.Problem(objective, constraints)

# The optimal objective value is returned by `prob.solve()`.
result = prob.solve()
# The optimal value for x is stored in `x.value`.
print(x.value)
# The optimal Lagrange multiplier for a constraint is stored in
# `constraint.dual_value`.
print(constraints[0].dual_value)

# +
# http://scipy-lectures.org/advanced/image_processing/auto_examples/plot_face_tv_denoise.html
import numpy as np
import scipy
import scipy.misc
import matplotlib.pyplot as plt

try:
    from skimage.restoration import denoise_tv_chambolle
except ImportError:
    # skimage < 0.12
    from skimage.filters import denoise_tv_chambolle

f = scipy.misc.face(gray=True)
f = f[230:290, 220:320]

noisy = f + 0.4*f.std()*np.random.random(f.shape)

tv_denoised = denoise_tv_chambolle(noisy, weight=10)

plt.figure(figsize=(12, 2.8))

plt.subplot(131)
plt.imshow(noisy, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('noisy', fontsize=20)
plt.subplot(132)
plt.imshow(tv_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('TV denoising', fontsize=20)

tv_denoised = denoise_tv_chambolle(noisy, weight=50)
plt.subplot(133)
plt.imshow(tv_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('(more) TV denoising', fontsize=20)

plt.subplots_adjust(wspace=0.02, hspace=0.02, top=0.9, bottom=0, left=0, right=1)
plt.show()
# -

# https://github.com/icme/cme252-optimization.git
# https://github.com/mfopt/mf_cvxpy
#
# Affine Set
# Affine Hull
# subspace : translating the vector set preserves the affine-set property
#
# Convex Hull
# https://towardsdatascience.com/the-concave-hull-c649795c0f0f
# https://learnopencv.com/convex-hull-using-opencv-in-python-and-c/
#
#
# Convex Cone
#
# Hyper Space
#
# Hyper Plane
#
# Ellipsoids
#

# +
## - Positive Semidefinite

## Minimum and Minimal
# q1 : why is this used??
# https://strutive07.github.io/2020/02/08/Lecture.2-Convex-set.html
# https://math.stackexchange.com/questions/2142643/what-is-meant-by-minimum-element-whats-the-difference-between-minimum-and-min
# -

# - The figure below shows a separating hyperplane dividing the disjoint convex sets C and D.
# ![image.png](attachment:image.png)

# ## TODO: verify why the dual of the l1 norm is the l-infinity norm
# - https://www.robots.ox.ac.uk/~az/lectures/b1/vandenberghe_1_2.pdf
# - https://web.stanford.edu/class/msande314/lecture02.pdf
# - https://math.stackexchange.com/questions/1822810/geometric-interpretation-of-the-dual-cone-of-l1-is-l-infty
# - https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-253-convex-analysis-and-optimization-spring-2012/lecture-notes/MIT6_253S12_lec08.pdf
#
# ### Definition (dual cones) - https://www.ics.uci.edu/~xhx/courses/ConvexOpt/convex_sets.pdf
# - Let K be a cone. The set
#   - K∗ = {y ∣ xᵀy ≥ 0 ∀x ∈ K} is called the dual cone of K.
# - Property:
#   - K∗ is always convex, even when the original cone K is not (why? it is an intersection of convex sets)
#   - y ∈ K∗ if and only if −y is the normal of a hyperplane that supports K at the origin
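The dual-cone definition above can be spot-checked numerically for the nonnegative orthant R^n_+, which is self-dual. This is a randomized check of the condition xᵀy ≥ 0 over sampled points, not a proof, and the helper name is my own:

```python
import numpy as np

def in_dual_of_orthant(y, n_samples=1000, seed=0):
    """Randomized check that y^T x >= 0 for many x in the nonnegative orthant."""
    rng = np.random.default_rng(seed)
    # uniform samples in [0, 1)^n, i.e. inside R^n_+
    X = rng.random((n_samples, y.size))
    return bool(np.all(X @ y >= -1e-12))

# R^n_+ is self-dual: componentwise-nonnegative y should pass,
# and any y with a negative entry should fail (with high probability).
print(in_dual_of_orthant(np.array([1.0, 2.0, 0.0])))
print(in_dual_of_orthant(np.array([1.0, -1.0, 0.0])))
```

A sampling check like this can only refute membership with high probability; an actual certificate would exhibit a specific x ∈ K with xᵀy < 0.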
02_Convex Sets.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 35 # language: python # name: python35 # --- # + """ Combine level 7 for coastal and level 6 inland. ------------------------------------------------------------------------------- TODO: - add area per polygon - remove small polygons Author: <NAME> Date: 20190613 Kernel: python35 Docker: rutgerhofste/gisdocker:ubuntu16.04 """ SCRIPT_NAME = "Y2019M06D13_RH_Combine_Levels_V01" OUTPUT_VERSION = 1 S3_INPUT_PATH= "s3://wri-projects/Aqueduct30/processData/Y2019M06D13_RH_Simplify_Geometries_V01" ec2_input_path = "/volumes/data/{}/input_V{:02.0f}".format(SCRIPT_NAME,OUTPUT_VERSION) ec2_output_path = "/volumes/data/{}/output_V{:02.0f}".format(SCRIPT_NAME,OUTPUT_VERSION) s3_output_path = "s3://wri-projects/Aqueduct30/processData/{}/output_V{:02.0f}/".format(SCRIPT_NAME,OUTPUT_VERSION) # - import time, datetime, sys dateString = time.strftime("Y%YM%mD%d") timeString = time.strftime("UTC %H:%M") start = datetime.datetime.now() print(dateString,timeString) sys.version # %matplotlib inline # !rm -r {ec2_input_path} # !rm -r {ec2_output_path} # !mkdir -p {ec2_input_path} # !mkdir -p {ec2_output_path} # !aws s3 cp {S3_INPUT_PATH} {ec2_input_path} --recursive import pandas as pd import geopandas as gpd input_path_level6 = "{}/hybas_merged_standard_level6_V01_simplified/hybas_merged_standard_level6_V01.shp".format(ec2_input_path) input_path_level7 = "{}/hybas_merged_standard_level7_V01_simplified/hybas_merged_standard_level7_V01.shp".format(ec2_input_path) gdf_level6_og = gpd.read_file(filename=input_path_level6) gdf_level7_og = gpd.read_file(filename=input_path_level7) gdf_level6_og.shape gdf_level7_og.shape gdf_level7 = gdf_level7_og.loc[gdf_level7_og["geometry"].notnull()] gdf_level6 = gdf_level6_og.loc[gdf_level6_og["geometry"].notnull()] # Select all inland level 6 basins gdf_level6_inland = 
gdf_level6.loc[gdf_level6["COAST"] == 0]
gdf_level6_coast = gdf_level6.loc[gdf_level6["COAST"] == 1]

def find_corresponding_basins(pfaf_id_level6, gdf_level7):
    """
    Using a pfaf_id from level 6, find all hydrobasins in level 7
    that make up the hydrobasin level 6 polygon.

    """
    pfaf_id_level7_min = pfaf_id_level6*10
    pfaf_id_level7_max = pfaf_id_level7_min + 9
    gdf_level7_selection = gdf_level7.loc[(gdf_level7["PFAF_ID"] >= pfaf_id_level7_min)&(gdf_level7["PFAF_ID"] <= pfaf_id_level7_max)]
    return gdf_level7_selection

list_level7_coast = []
for index, row in gdf_level6_coast.iterrows():
    pfaf_id_level6 = row["PFAF_ID"]
    gdf_level7_coast_selection = find_corresponding_basins(pfaf_id_level6, gdf_level7)
    list_level7_coast.append(gdf_level7_coast_selection)

df = pd.concat(list_level7_coast)
gdf_level7_coast = gpd.GeoDataFrame(df)

# +
# Combine level 6 inland with level 7 equivalent for coastal.
# -

gdf_combined = gdf_level6_inland.append(gdf_level7_coast)

def explode(gdf):
    """
    Explodes a geodataframe.

    Will explode multi-part geometries into single geometries.
    The original index is stored in column level_0 and a zero-based count of
    geometries per multi-geometry is stored in level_1.

    Args:
        gdf (gpd.GeoDataFrame) : input geodataframe with multi-geometries

    Returns:
        gdf (gpd.GeoDataFrame) : exploded geodataframe with a new index and
            two new columns: level_0 and level_1

    """
    gs = gdf.explode()
    gdf2 = gs.reset_index().rename(columns={0: 'geometry'})
    gdf_out = gdf2.merge(gdf.drop('geometry', axis=1), left_on='level_0', right_index=True)
    gdf_out = gdf_out.set_index(['level_0', 'level_1']).set_geometry('geometry')
    gdf_out.crs = gdf.crs
    return gdf_out

gdf_exploded = explode(gdf_combined)

gdf_exploded.head()

gdf_exploded_noindex = gdf_exploded.reset_index()
gdf_exploded_noindex.drop(columns=["level_0", "level_1"], inplace=True)

output_filename = "test.shp"
output_path = "{}/{}".format(ec2_output_path, output_filename)

gdf_exploded_noindex['index'] = gdf_exploded_noindex.index

gdf_exploded_noindex.to_file(filename=output_path, driver="ESRI Shapefile")

# !aws s3 cp {ec2_output_path} {s3_output_path} --recursive
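The `find_corresponding_basins` selection above relies on the Pfafstetter numbering scheme: every level-7 id nested inside a level-6 basin lies in the range [pfaf_id*10, pfaf_id*10 + 9]. A plain-Python sketch of just that range logic (helper names are mine, not from the script):

```python
def level7_range(pfaf_id_level6):
    """Inclusive range of level-7 Pfafstetter ids nested in a level-6 basin."""
    lo = pfaf_id_level6 * 10
    return lo, lo + 9

def select_level7(pfaf_id_level6, level7_ids):
    """Keep only the level-7 ids that belong to the given level-6 basin."""
    lo, hi = level7_range(pfaf_id_level6)
    return [p for p in level7_ids if lo <= p <= hi]

ids = [1720431, 1720439, 1720440, 1720429]
print(select_level7(172043, ids))
```

The geopandas version does the same thing with a boolean mask on the `PFAF_ID` column; the arithmetic is identical.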
scripts/Y2019M06D13_RH_Combine_Levels_V01.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (ds_env) # language: python # name: ds_env # --- # + [markdown] toc=true # <h1>Table of Contents<span class="tocSkip"></span></h1> # <div class="toc"><ul class="toc-item"></ul></div> # - # standard DS stack import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set() import pandas as pd # embed static images in the ipynb # %matplotlib inline # The documentation for this dataset is horrible. I was able to find out that all of the features are numerical, and it says in the dataset description that the target is "number of comments" on a post, so I know it's an integer. train_df = pd.read_csv("training.csv", header=None) test_df = pd.read_csv("testing.csv", header=None) print("Original dataset shapes\n" +f"Training set:{train_df.shape}, Testing set:{test_df.shape}") train_df.head() # I think the last column has the targets, so I'll check to see if every value in the last column is an integer. 
# +
def integer_check(vec):
    """Args: vec (np.ndarray, 1D): a vector."""
    if np.all(vec % 1 == 0):
        print("This vector contains only integers")
    else:
        print("This vector contains non-integer values")

integer_check(vec=np.array(train_df.iloc[:,-1]))

# +
train = np.array(train_df)
test = np.array(test_df)

X_train, Y_train = train[:,:-1], train[:,-1]
X_test, Y_test = test[:,:-1], test[:,-1]

from sklearn.preprocessing import StandardScaler

def scale_features(X_train, X_test):
    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    X_test = scaler.transform(X_test)
    return X_train, X_test

X_train, X_test = scale_features(X_train, X_test)

# Principal component analysis (PCA) feature reduction
from sklearn.decomposition import PCA
pca = PCA(n_components=10)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)

# SelectKBest feature selection
# from sklearn.feature_selection import SelectKBest, f_regression
# X_train = SelectKBest(f_regression, k=10).fit_transform(X_train, Y_train)

# +
rng = np.random.RandomState(5)

def random_shrink(X, Y, shrink=0.5):
    """Shrinks the dataset size.

    Args:
        X (np.ndarray): feature matrix
        Y (np.ndarray): target matrix
        shrink (float, optional): Percentage of samples desired. Defaults to 0.5,
            i.e. a 50% reduction in the number of samples.

    Returns:
        X_small, Y_small : Random samples of the input sets
    """
    n_samples = X.shape[0]
    sorted_indices = np.arange(n_samples)
    random_indices = rng.choice(sorted_indices, int(shrink * n_samples))
    X_small = X[random_indices]
    Y_small = Y[random_indices]
    return X_small, Y_small

X_train, Y_train = random_shrink(X_train, Y_train, shrink=0.25)
X_test, Y_test = random_shrink(X_test, Y_test)

print("Dataset shapes after PCA and random sampling\n"
      + f"X_train.shape:{X_train.shape}, Y_train.shape:{Y_train.shape}\n"
      + f"X_test.shape:{X_test.shape}, Y_test.shape:{Y_test.shape}")
# -

# We're not worried about performance, so I'll shrink the training set to 20,000 for time's sake in the tutorial.
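One note on the scaling step: the scaler's statistics must come from the training split only, with the test split transformed using those same statistics, otherwise information leaks from test to train. A toy sketch of that fit/transform split (random data, not the Facebook-comments set):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(5)
X_tr = rng.normal(loc=10.0, scale=2.0, size=(200, 3))
X_te = rng.normal(loc=10.0, scale=2.0, size=(50, 3))

scaler = StandardScaler()
X_tr_s = scaler.fit_transform(X_tr)  # mean/std estimated from train only
X_te_s = scaler.transform(X_te)      # test reuses the train statistics

# the training split is exactly standardized; the test split only approximately
print(X_tr_s.mean(axis=0).round(6))
```

The same fit-on-train / transform-on-test pattern applies to the PCA step as well, which is why `pca.fit_transform` is called on the training matrix and plain `pca.transform` on the test matrix.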
datasets/FB_comments/process_data.ipynb