Create An Assignment For Each Row

For each assignment, first geocode the address to get the x,y location (in WGS84 Web Mercator) of the assignment. Then supply additional attributes. Finally, use the batch_add method to add multiple assignments at once (this is faster than using the add method, since validation is perform...
assignments = []
for index, row in df.iterrows():
    geometry = geocode(f"{row['Location']}", out_sr=3857)[0]["location"]
    assignments.append(
        workforce.Assignment(
            project,
            geometry=geometry,
            location=row["Location"],
            description=row["Description"],
            ...
Apache-2.0
notebooks/dev_summit_2019/Step 2 - Add Assignments From csv File.ipynb
STEFANIHUNT/workforce-scripts
Verify the assignments on the map

Let's verify that the assignments were created.
webmap = gis.map("Palm Springs", zoomlevel=14)
webmap.add_layer(project.assignments_layer)
webmap
pyecharts

[pyecharts](https://github.com/pyecharts/pyecharts) is a wrapper for [echarts](https://ecomfe.github.io/echarts-doc/public/en/index.html), a charting library made by [Baidu](https://www.baidu.com/). [documentation](https://github.com/pyecharts/pyecharts) [source](https://github.com/pyecharts/pyecharts) [installation](...
from jyquickhelper import add_notebook_menu
add_notebook_menu()

from pyecharts import __version__
__version__
MIT
_doc/notebooks/2016/pydata/js_pyecharts.ipynb
sdpython/jupytalk
Example
from pyecharts.charts import Bar
from pyecharts import options as opts

attr = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
v1 = [2.0, 4.9, 7.0, 23.2, 25.6, 76.7, 135.6, 162.2, 32.6, 20.0, 6.4, 3.3]
v2 = [2.6, 5.9, 9.0, 26.4, 28.7, 70.7, 175.6, 182.2, 48.7, 18.8, 6.0, 2.3]
bar = ...
After you install [pyecharts-snapshot](https://github.com/pyecharts/pyecharts-snapshot) and [phantom-js](http://phantomjs.org/download.html) (apparently no longer needed):
bar.render(path="echart_render.html")

from pyecharts_snapshot.main import make_a_snapshot
await make_a_snapshot("echart_render.html", "echart_render.png")

from IPython.display import Image
Image("echart_render.png", width='600')
Fit a volume via raymarching

This tutorial shows how to fit a volume given a set of views of a scene using differentiable volumetric rendering. More specifically, this tutorial will explain how to:
1. Create a differentiable volumetric renderer.
2. Create a Volumetric model (including how to use the `Volumes` class).
3. F...
import os
import sys
import torch

need_pytorch3d = False
try:
    import pytorch3d
except ModuleNotFoundError:
    need_pytorch3d = True
if need_pytorch3d:
    if torch.__version__.startswith("1.7") and sys.platform.startswith("linux"):
        # We try to install PyTorch3D via a released wheel.
        version_str = "".join...
BSD-3-Clause
docs/tutorials/fit_textured_volume.ipynb
imlixinyang/pytorch3d
1. Generate images of the scene and masks

The following cell generates our training data. It renders the cow mesh from the `fit_textured_mesh.ipynb` tutorial from several viewpoints and returns:
1. A batch of image and silhouette tensors that are produced by the cow mesh renderer.
2. A set of cameras corresponding to each...
target_cameras, target_images, target_silhouettes = generate_cow_renders(num_views=40)
print(f'Generated {len(target_images)} images/silhouettes/cameras.')
2. Initialize the volumetric renderer

The following initializes a volumetric renderer that emits a ray from each pixel of a target image and samples a set of uniformly-spaced points along the ray. At each ray point, the corresponding density and color value is obtained by querying the corresponding location in the volu...
# render_size describes the size of both sides of the
# rendered images in pixels. We set this to the same size
# as the target images, i.e. we render at the same
# size as the ground truth images.
render_size = target_images.shape[1]

# Our rendered scene is centered around (0,0,0)
# and is enclosed inside a boundin...
3. Initialize the volumetric model

Next we instantiate a volumetric model of the scene. This quantizes the 3D space into cubical voxels, where each voxel is described by a 3D vector representing the voxel's RGB color and a density scalar which describes the opacity of the voxel (ranging between [0-1]; the higher, the mo...
class VolumeModel(torch.nn.Module):
    def __init__(self, renderer, volume_size=[64] * 3, voxel_size=0.1):
        super().__init__()
        # After evaluating torch.sigmoid(self.log_densities), we get
        # densities close to zero.
        self.log_densities = torch.nn.Parameter(-4.0 * torch.ones(1, *volume_size))...
4. Fit the volume

Here we carry out the volume fitting with differentiable rendering. In order to fit the volume, we render it from the viewpoints of the `target_cameras` and compare the resulting renders with the observed `target_images` and `target_silhouettes`. The comparison is done by evaluating the mean huber (smoot...
# First move all relevant variables to the correct device.
target_cameras = target_cameras.to(device)
target_images = target_images.to(device)
target_silhouettes = target_silhouettes.to(device)

# Instantiate the volumetric model.
# We use a cubical volume with the size of
# one side = 128. The size of each voxel of t...
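The mean huber (smooth L1) comparison mentioned above can be sketched in plain Python. This is a hypothetical reference implementation of the loss, not PyTorch3D code; in practice `torch.nn.functional.smooth_l1_loss` computes it for you.

```python
def huber(x, delta=1.0):
    # Quadratic near zero, linear in the tails: less sensitive
    # to outliers than a plain squared error.
    ax = abs(x)
    if ax <= delta:
        return 0.5 * x * x
    return delta * (ax - 0.5 * delta)

def mean_huber(preds, targets, delta=1.0):
    # Average the per-element huber losses of the residuals.
    return sum(huber(p - t, delta) for p, t in zip(preds, targets)) / len(preds)

print(huber(0.5))                       # 0.125 (quadratic branch)
print(huber(2.0))                       # 1.5 (linear branch)
print(mean_huber([1.0, 2.0], [1.0, 4.0]))  # 0.75
```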
5. Visualizing the optimized volume

Finally, we visualize the optimized volume by rendering from multiple viewpoints that rotate around the volume's y-axis.
def generate_rotating_volume(volume_model, n_frames=50):
    logRs = torch.zeros(n_frames, 3, device=device)
    logRs[:, 1] = torch.linspace(0.0, 2.0 * 3.14, n_frames, device=device)
    Rs = so3_exponential_map(logRs)
    Ts = torch.zeros(n_frames, 3, device=device)
    Ts[:, 2] = 2.7
    frames = []
    print('Gen...
Python Google Colaboratory

Google Colaboratory, or Colab, is an online Python IDE that you can access with a Google Account without installing any software on your machine. To run a cell, whether it is a Markdown or a Code cell, press Shift+Enter.
MIT
Chapter1_IDE_COLAB.ipynb
akara-kij/Python101
Add a cell below

You can add a cell for writing code by pressing Ctrl followed by M and then B.
Convert a Code cell into a text cell

Click the Code cell you want to convert, then press Ctrl followed by M and then M. To delete an unwanted cell, select it and press Ctrl followed by M and then D.
Writing your first program: Hello World

The print command displays the text you want: ```print( Input String )``` (Note: the ` character can be typed with Alt+96, and the ~ character with Alt+126.)

**For example**: print("Hello World")
print("Hello World")
print("Hello World")
If you want to display the ' or " characters themselves, you can do the following:
* Single quote ': place the single quote inside a pair of double quotes.
* Double quote ": place the double quote inside a pair of single quotes.
print(" ต้องการแสดงเครื่่องหมาย Single Quote ' ")
print("แสดงเครื่องหมาย ' ")
print(' ในกรณีที่ต้องการแสดงเครื่องหมาย Double Quote " ')
print(' เครื่องหมาย double quote " ')
 ต้องการแสดงเครื่่องหมาย Single Quote ' 
แสดงเครื่องหมาย ' 
 ในกรณีที่ต้องการแสดงเครื่องหมาย Double Quote " 
 เครื่องหมาย double quote " 
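As a related sketch (not from the notebook), the same characters can also be produced with backslash escapes or triple-quoted strings:

```python
# Escaping with a backslash works inside either quote style.
s1 = 'It\'s fine'
s2 = "She said \"hi\""

# Triple-quoted strings can contain both quote characters freely.
s3 = """It's a "mixed" case"""

print(s1)
print(s2)
print(s3)
```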
Using comment lines

* A comment line tells Python that the line is not code but text the programmer added to explain the code.
* Use the # character to turn text into a comment line.
# The following text is a comment line, so Python will skip it
# A comment describing the program
print("Hello World")
Using comment blocks

You can write explanations spanning several lines by placing the text between the following markers:
* Three single quotes, for example ''' text '''
* Three double quotes, for example """ text """
'''
ตัวอย่าง Comment Block ภายใต้เครื่องหมาย Single Quote
1. บรรทัดที่ 1
2. บรรทัดที่ 2
3. บรรทัดที่ 3
'''
print("ตัวอย่างการสร้าง Comment Block ด้วย Single Quote")

"""
ตัวอย่าง Comment Block ภายใต้เครื่องหมาย Double Quote
1. บรรทัดที่ 1
2. บรรทัดที่ 2
3. บรรทัดที่ 3
"""
print("ตัวอย่างการสร้าง Comment Block ด้วย Singl...
Writing a statement longer than one line

If the code you want to write is longer than one line, you can continue the statement on the next line by joining the lines with the ```\``` character.
# Test a statement that spans more than one line
print( \
    "Hello World " \
)
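A minimal alternative sketch: inside parentheses, brackets, or braces, Python continues the statement implicitly, so no backslash is needed:

```python
# Implicit continuation: the open parenthesis keeps the statement going
# until the matching close, with no backslash required.
total = (1 + 2 +
         3 + 4)
print(total)  # 10
```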
SIT742: Modern Data Science **(Module 04: Exploratory Data Analysis)**

- Materials in this module include resources collected from various open-source online repositories.
- You are free to use, change and distribute this package.
- If you find any issue/bug in this document, please submit an issue at [tulip-lab/sit...
from pylab import *
MIT
Jupyter/SIT742P04A-ExploratoryDA.ipynb
jllemusc/sit742
or
import matplotlib.pyplot
In fact, it is a convention to import it under the name `plt`:
import matplotlib.pyplot as plt
**Note: the form `import matplotlib.pyplot as plt` is preferred.**
import matplotlib.pyplot as plt
import numpy as np
Regardless of the method you use, it is better to configure matplotlib to embed figures in the notebook instead of opening them in a new window for each figure. To do this use the magic function:
%matplotlib inline
1.2 `plot`

By using `subplots()` you have access to both the figure and axes objects.
# Define the x-axis and y-axis values
x = np.linspace(0, 10)
y = np.sin(x)

# fig, ax = plt.subplots() is more concise than the code below.
# If you use it, you unpack the returned tuple into the variables fig and ax.
# It is equivalent to the following two lines:
# fig = plt.figure()
# ax = fig.add_subplot(111)  # 111 means 1x1 grid, first su...
1.3 Title and labels
# Set the title, xlabel and ylabel
ax.set_title('title here!')
ax.set_xlabel('x')
ax.set_ylabel('sin(x)')
fig
You can also use [$\LaTeX$](https://www.latex-project.org/about/) in title or labels, or change the font size or font family.
# Set the domain of the x-axis from -10 to 10
x = np.linspace(-10, 10)

# Parameter figsize : (float, float), optional, default: None
#   width, height in inches. If not provided, defaults to rcParams["figure.figsize"] = [6.4, 4.8].
# Parameter dpi : integer, optional, default: None
#   resolution of the figure. If not provided...
1.4 Subplots

You can pass the number of subplots to `subplots()`. In this case, `axes` will be an array whose elements each associate with one of the subgraphs. You can set the properties of each `ax` object separately, as in the cell below. Obviously you could use a loop to iterate over `axes`.
# nrows, ncols : int, optional, default: 1
# They define the number of rows/columns of the subplot grid.
# Try commenting the line below out and running 'fig, axes = plt.subplots(nrows=1, ncols=2)' to see the difference.
fig, axes = plt.subplots(nrows=2, ncols=1)

# Define the domain of ...
The `cos(x)` label overlaps the `sin` graph. You can adjust the size of the graph or the space between the subplots to fix it.
# nrows, ncols : int, optional, default: 1
# They define the number of rows/columns of the subplot grid.
# Try commenting the line below out and running 'fig, axes = plt.subplots(nrows=1, ncols=2)' to see the difference.
# You can also use fig, axes = plt.subplots(nrows=1, ncols=2, figs...
1.5 Legend
# Define the domain of the x-axis from 0 to 10
x = np.linspace(0, 10)

# Define the figsize
fig, ax = plt.subplots(figsize=(7, 5))

# Plot sin(x) and cos(x) with labels
ax.plot(x, np.sin(x), label='$sin(x)$')
ax.plot(x, np.cos(x), label='$cos(x)$')

# Place the legend for the two plots and define their font...
1.6 Customizing ticks

In many cases you want to customize the ticks and their labels on the x or y axis. First draw a simple graph and look at the ticks on the x-axis.
x = np.linspace(0, 10, num=100)
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(x, np.sin(x), x, np.cos(x), linewidth=2)
You can change the ticks easily by passing a list (or array) to `set_xticks()` or `set_yticks()`:
xticks = [0, 1, 2, 5, 8, 8.5, 10]
ax.set_xticks(xticks)
fig
You can even change the labels:
xticklabels = ['$\gamma$', '$\delta$', 'apple', 'b', '', 'c']
ax.set_xticklabels(xticklabels, fontsize=18)
fig
1.7 Saving figures
x = np.linspace(0, 10)
fig, ax = plt.subplots(figsize=(7, 5))
ax.plot(x, np.sin(x), label='$sin(x)$')
ax.plot(x, np.cos(x), label='$cos(x)$')
ax.legend(fontsize=16, loc=3)
fig.savefig('P03Saved.pdf', format='PDF', dpi=300)
1.8 Other plot styles

There are many other plot types in addition to the simple `plot` supported by `matplotlib`. You will find a complete list of them in the [matplotlib gallery](http://matplotlib.org/gallery.html).

1.8.1 Scatter plot
fig, ax = plt.subplots(figsize=(10, 5))
x = np.linspace(-0.75, 1., 100)

# s is the marker size in points**2. Default is rcParams['lines.markersize'] ** 2.
# alpha is the alpha blending value, between 0 (transparent) and 1 (opaque).
# edgecolor is the edge color of the marker. Possible values:
#   'face': The edge color w...
1.8.2 Bar plot
fig, ax = plt.subplots()
x = np.arange(1, 6)
ax.bar(x, x**2, align="center")
ax.set_title('bar')
--- 2. Plotting a histogram

2.1 Dataset

You are provided with a dataset of the percentage of body fat and 10 simple body measurements recorded for 252 men (courtesy of the Journal of Statistics Education - JSE). You can read about this and other [JSE datasets here](http://www.amstat.org/publications/jse/jse_data_archive.htm...
!pip install wget
import numpy as np
import wget

link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/fat.dat.txt'
DataSet = wget.download(link_to_data)
data = np.genfromtxt("fat.dat.txt")
data.shape
Based on the [dataset description](http://www.amstat.org/publications/jse/datasets/fat.txt), the 5th column represents the weight in lbs. Index the weight column and call it `weights`:
weights = data[:, 5]
weights
Use array operators to convert the weights into kg. 1 lb equals 0.453592 kg.
# weights *= 0.453592 is equivalent to weights = weights * 0.453592:
# it multiplies the right operand with the left operand and assigns the result to the left operand.
weights *= 0.453592

# Round the converted weights to only two decimals:
weights = weights.round(2)
weights
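The same element-wise conversion can be checked with a tiny pure-Python sketch (the sample values here are illustrative, not taken from the dataset):

```python
LB_TO_KG = 0.453592

# Convert a few sample weights from lbs to kg and round to 2 decimals,
# mirroring what the NumPy one-liner does element-wise.
weights_lb = [154.25, 173.25]
weights_kg = [round(w * LB_TO_KG, 2) for w in weights_lb]
print(weights_kg)  # [69.97, 78.58]
```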
2.2 Histogram

A histogram is a bar plot that shows the statistical distribution of the data over a variable. The bars represent the frequency of occurrence by classes of data. We use the package `matplotlib` and the function `hist()` for plotting the histogram. To learn more about `matplotlib` make sure you...
%matplotlib inline
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.hist(weights)
ax
The `hist()` function automatically groups the data into 10 bins. Usually you need to tweak the number of bins to obtain a more expressive histogram.
fig, ax = plt.subplots(figsize=(7, 5))
ax.hist(weights, bins=20)

# title
ax.set_title('Weight Distribution Diagram')

# labels
ax.set_xlabel('Weight (kg)')
ax.set_ylabel('The number of people')
ax
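What `hist()` does under the hood can be sketched without any plotting: split the value range into equal-width bins and count the items falling in each one. This is a hypothetical helper for illustration, not matplotlib code.

```python
def bin_counts(values, bins):
    # Equal-width binning over [min, max], like np.histogram / ax.hist.
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        # The right edge of the last bin is inclusive, as in numpy/matplotlib.
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    return counts

print(bin_counts([1, 2, 2, 3, 9, 10], bins=3))  # [4, 0, 2]
```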
2.3 Boxplot

A `Boxplot` is a convenient way to graphically display numerical data.
import matplotlib

fig, ax = matplotlib.pyplot.subplots(figsize=(7, 5))
matplotlib.rcParams.update({'font.size': 14})
ax.boxplot(weights, 0, labels=['group1'])
ax.set_ylabel('weight (kg)', fontsize=16)
ax.set_title('Weights BoxPlot', fontsize=16)
You have already been taught about different sorts of plots, how they help to get a better understanding of the data, and when to use which. In this practical session we will work with the `matplotlib` package to learn more about plotting in Python.

--- 3. Data Understanding

You have already been taught about different s...
import numpy as np
import csv
import matplotlib.pyplot as plt
%matplotlib inline
3.1 Pie Chart

Suppose you have the frequency count of a variable (e.g. hair_colour). Draw a pie chart to explain it.
labels = 'Black', 'Red', 'Brown'

# frequency count
hair_colour_freq = [5, 3, 2]  # Black, Red, Brown

# colors
colors = ['yellowgreen', 'gold', 'lightskyblue']

# explode the third one
explode = (0, 0, 0.1)

fig, ax = plt.subplots(figsize=(5, 5))
ax.pie(hair_colour_freq, labels=labels, explode=explode, colors=colors, ...
What if we have too many tags and sectors?
# Excellence in Research Australia
labels = ['HEALTH', 'ENGINEERING', 'COMPUTER SCIENCES', 'HUMAN SOCIETY',
          'TOURISM SERVICES', 'EDUCATION', 'CHEMISTRY', 'BIOLOGY',
          'PSYCHOLOGY', 'CREATIVE ARTS', 'LINGUISTICS', 'BUILT ENVIRONMENT',
          'HISTORY', 'ECONOMICS', 'PHILOSOPHY', 'AGRICULTURE', '...
3.2 Bar Chart

Use the hair colour data to draw a bar chart.
labels = ['Black', 'Red', 'Brown']
hair_colour_freq = [5, 3, 2]

fig, ax = plt.subplots(figsize=(7, 5), dpi=100)
x_pos = np.arange(len(hair_colour_freq))
colors = ['black', 'red', 'brown']
ax.bar(x_pos, hair_colour_freq, align='center', color=colors)
ax.set_xlabel("Hair Colour")
ax.set_ylabel("Number of participants...
Now suppose we have the hair colour distribution across genders, so we can plot grouped bar charts. Plot a grouped bar chart to show the distribution of colours across genders.
"""
        black  red  brown
Male        4     1     3
Female      1     2     2
"""
data = np.array([[4, 1, 3],
                 [1, 2, 3]])
x_pos = np.arange(2)
width = 0.2

fig, ax = plt.subplots(figsize=(7, 5), dpi=100)
ax.bar(x_pos, data[:, 0], width=width, color='black', label='Black', align='center')
ax.bar(x_p...
Can we plot it more intelligently? We are doing the same thing multiple times! Is it a good idea to use a loop?
"""
        black  red  brown
Male        4     1     3
Female      1     2     2
"""
data = np.array([[4, 1, 3],
                 [1, 2, 3]])
n_groups, n_colours = data.shape
x_pos = np.arange(n_groups)
width = 0.2

fig, ax = plt.subplots(figsize=(7, 5), dpi=100)
colours = ['black', 'red', 'brown']
labels = ['Black',...
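The loop boils down to computing one x-offset per colour: shifting each colour's bars by a multiple of the bar width so they sit side by side. A small standalone sketch of that offset arithmetic (with hypothetical values):

```python
n_groups = 2    # Male, Female
n_colours = 3   # black, red, brown
width = 0.2

# For colour i, shift the group positions by i * width so the
# bars for different colours sit next to each other, not on top of each other.
positions = [[g + i * width for g in range(n_groups)]
             for i in range(n_colours)]
print(positions)
```

Each inner list would then be passed to `ax.bar` together with that colour's frequencies.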
What if we want to group the bar charts based on the hair colour?
"""
        black  red  brown
Male        4     1     3
Female      1     2     2
"""
labels = ['Black', 'Red', 'Brown']
colours = ['r', 'y']
data = np.array([[4, 1, 3],
                 [1, 2, 3]])
n_groups, n_colours = data.shape
width = 0.2
x_pos = np.arange(n_colours)

fig, ax = plt.subplots(figsize=(7, 5), dpi=100...
Stacked bar chart

The other type of bar chart is the stacked bar chart. Draw a stacked bar plot of the hair colour data grouped on hair colours.
"""
        black  red  brown
Male        4     1     3
Female      1     2     2
"""
labels = ['Black', 'Red', 'Brown']
data = np.array([[4, 1, 3],
                 [1, 2, 3]])
male_freq = data[0, :]
width = 0.4
x_pos = np.arange(n_colours)

fig, ax = plt.subplots(figsize=(7, 5), dpi=100)
ax.bar(x_pos, data[0, :], wid...
Draw a stacked bar plot grouped on the gender.
"""
        black  red  brown
Male        4     1     3
Female      1     2     2
"""
labels = ['Black', 'Red', 'Brown']
data = np.array([[4, 1, 3],
                 [1, 2, 3]])
black = data[:, 0]
red = data[:, 1]
brown = data[:, 2]
x_pos = np.arange(2)
width = 0.4

fig, ax = plt.subplots(figsize=(7, 5), dpi=100)
ax.bar(...
What if we have too many groups? Draw a bar chart for the Excellence in Research Australia data.
import matplotlib.pyplot as plt
import numpy as np

# Excellence in Research Australia
labels = ['HEALTH', 'ENGINEERING', 'COMPUTER SCIENCES', 'HUMAN SOCIETY',
          'TOURISM SERVICES', 'EDUCATION', 'CHEMISTRY', 'BIOLOGY',
          'PSYCHOLOGY', 'CREATIVE ARTS', 'LINGUISTICS', 'BUILT ENVIRONMENT', 'HISTORY', ...
You can also refer to one online [post](http://queirozf.com/entries/add-labels-and-text-to-matplotlib-plots-annotation-examples) for different customizations of those plots.

3.3 Wordcloud

As you saw, a pie chart is not very helpful when we have too many sectors: it is hard to read and visually ugly. Instead we can use wor...
for i in range(len(labels)):
    print("{}:{}".format(labels[i], xx[i]))

!pip install wordcloud
import wget

link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/constitution.txt'
DataSet = wget.download(link_to_data)

from os import path
from wordcloud import WordCloud

# Read the whole text.
tex...
3.4 Step plot

Draw a step plot for the seatbelt data.
freq = np.array([0, 2, 1, 5, 7])
labels = ['Never', 'Rarely', 'Sometimes', 'Most-times', 'Always']
freq_cumsum = np.cumsum(freq)
x_pos = np.arange(len(freq))

fig, ax = plt.subplots()
ax.step(x_pos, freq_cumsum, where='mid')
ax.set_xlabel("Fastening seatbelt behaviour")
ax.set_ylabel("Cumulative frequency")
ax.set_xtic...
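The cumulative frequency computed by `np.cumsum` above can be verified with a pure-Python sketch using the same seatbelt counts:

```python
from itertools import accumulate

freq = [0, 2, 1, 5, 7]
# Running total: each entry is the sum of all frequencies up to that point.
freq_cumsum = list(accumulate(freq))
print(freq_cumsum)  # [0, 2, 3, 8, 15]
```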
3.5 Histogram

Google for this paper: ``Johnson, Roger W. "Fitting percentage of body fat to simple body measurements." Journal of Statistics Education 4.1 (1996): 265-266.`` Download the dataset and read the dataset description. Draw a histogram of male weights and female weights.
import wget

link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/body.dat.txt'
DataSet = wget.download(link_to_data)
data = np.genfromtxt('body.dat.txt')

m_w = data[data[:, -1] == 1][:, -3]
f_w = data[data[:, -1] == 0][:, -3]

fig, ax = plt.subplots(figsize=(7, 5), dpi=100)
ax.hist(m_w, bins=15,...
3.6 Boxplot

Draw a box plot for the male and female weights of the previous dataset.
fig, ax = plt.subplots(figsize=(7, 5), dpi=100)

# Define the style of the fliers.
# red_square = dict(markerfacecolor='r', marker='s') is another example; you can define your own style.
green_diamond = dict(markerfacecolor='g', marker='D')

# Set showfliers=True to show the outliers beyond the cap...
3.7 Scatter plot

Draw a scatter plot of the car weights and their fuel consumption as displayed in the lecture.
import wget

link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/Auto.csv'
DataSet = wget.download(link_to_data)
datafile = 'Auto.csv'
data = np.genfromtxt(datafile, delimiter=',')

data = []
with open(datafile, 'r') as fp:
    reader = csv.reader(fp, delimiter=',')
    for row in reader:
        ...
Can I also show the number of cylinders on this graph? In other words, can we use the scatter plot to show three variables?
cylinder = 75 * np.array([int(dd[2]) for dd in data[1:]])

fig, ax = plt.subplots(figsize=(15, 5), dpi=100)
ax.scatter(weights, miles, alpha=0.6, edgecolor='none', s=cylinder)
ax.set_xlabel('Car Weight (tons)')
ax.set_ylabel('Miles Per Gallon')
ax
--- 4. Mini Exercise

In 1970, the US Congress instituted a random selection process for the military draft. All 366 possible birth dates were placed in plastic capsules in a rotating drum and were selected one by one. The first date drawn from the drum received draft number one, and eligible men born on that date were drafte...
import wget

link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/DraftLottery.txt'
DataSet = wget.download(link_to_data)

data = []
with open('DraftLottery.txt', 'r') as fp:
    reader = csv.reader(fp, delimiter='\t')
    for row in reader:
        data.append(row)

birthdays = np.array([...
Plot a `'scatter plot'` of the draft priority vs birthdays.
fig, ax = plt.subplots(figsize=(10, 7), dpi=100)
ax.scatter(birthdays, draft_no, alpha=0.7, s=100, edgecolor='none')
ax.set_xlabel("Birthday (day of the year)", fontsize=12)
ax.set_ylabel("Draft priority value", fontsize=12)
ax.set_title("USA Draft Lottery Data", fontsize=14)
ax
In a truly random lottery there should be no relationship between the date and the draft number. To investigate this further we draw boxplots by months and compare them together.
fig, ax = plt.subplots(figsize=(10, 7), dpi=100)
months_range = range(1, 13)

# boxplot data
boxplot_data = [draft_no[months == mm] for mm in months_range]
ax.boxplot(boxplot_data)

# medians
medians = [np.median(dd) for dd in boxplot_data]
ax.plot(months_range, medians, "g--", lw=2)

# means
means = [dd.mean() for dd ...
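The per-month grouping above relies on boolean selection (`draft_no[months == mm]`). A stdlib sketch of the same idea, using hypothetical (month, draft number) pairs rather than the lottery data:

```python
# Hypothetical (month, draft_no) pairs standing in for the lottery data.
records = [(1, 305), (1, 159), (2, 86), (2, 144), (3, 297)]

# Group draft numbers by month, mirroring draft_no[months == mm].
by_month = {mm: [d for m, d in records if m == mm] for mm in range(1, 4)}

# Upper median per month (index len//2 of the sorted values).
medians = {mm: sorted(v)[len(v) // 2] for mm, v in by_month.items()}
print(by_month)
print(medians)
```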
Welcome!

Below, we will learn to implement and train a policy to play Atari Pong, using only the pixels as input. We will use convolutional neural nets, multiprocessing, and PyTorch to implement and train our policy. Let's get started!
# install package for displaying animation
!pip install JSAnimation

# custom utilities for displaying animation, collecting rollouts and more
import pong_utils
%matplotlib inline

# check which device is being used.
# I recommend disabling gpu until you've made sure that the code runs
device = pong_utils.device
print(...
List of available actions: ['NOOP', 'FIRE', 'RIGHT', 'LEFT', 'RIGHTFIRE', 'LEFTFIRE']
MIT
pong-PPO(1).ipynb
vigneshyaadav27/Pong-DRL
Preprocessing

To speed up training, we can simplify the input by cropping the images and using every other pixel.
import matplotlib
import matplotlib.pyplot as plt

# show what a preprocessed image looks like
env.reset()
_, _, _, _ = env.step(0)

# get a frame after 20 steps
for _ in range(20):
    frame, _, _, _ = env.step(1)

plt.subplot(1, 2, 1)
plt.imshow(frame)
plt.title('original image')

plt.subplot(1, 2, 2)
plt.title('preproces...
Policy

Exercise 1: Implement your policy

Here, we define our policy. The input is the stack of two different frames (which captures the movement), and the output is a number $P_{\rm right}$, the probability of moving right. Note that $P_{\rm left} = 1 - P_{\rm right}$.
import torch
import torch.nn as nn
import torch.nn.functional as F

# set up a convolutional neural net
# the output is the probability of moving right
# P(left) = 1 - P(right)
class Policy(nn.Module):

    def __init__(self):
        super(Policy, self).__init__()
        ########
        ##
        ## Modify y...
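Since the network emits a single number for $P_{\rm right}$, the final activation is typically a sigmoid squashing a raw score into (0, 1). A small sketch of that last step (an illustration, not the exercise solution):

```python
import math

def sigmoid(x):
    # Squash a real-valued score into (0, 1) so it can act as P(right).
    return 1.0 / (1.0 + math.exp(-x))

p_right = sigmoid(0.0)
p_left = 1.0 - p_right  # the two action probabilities must sum to 1
print(p_right, p_left)  # 0.5 0.5
```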
Game visualization

pong_utils contains a play function that takes the environment and a policy. An optional preprocess function can be supplied. Here we define a function that plays a game and shows learning progress.
pong_utils.play(env, policy, time=200)
# try to add the option "preprocess=pong_utils.preprocess_single"
# to see what the agent sees
Function Definitions

Here you will define key functions for training.

Exercise 2: write your own function for training (what I call the scalar function is the same as policy_loss up to a negative sign)

PPO

Later on, you'll implement the PPO algorithm as well, and the scalar function is given by $\frac{1}{T}\sum^T_t \min\left...
def discounted_future_rewards(rewards, ratio=0.99):
    n = rewards.shape[1]
    # step[i, j] = j - i: the number of steps from time i to time j
    # (the flattened original subtracted two column vectors, which is always zero).
    step = torch.arange(n)[None, :] - torch.arange(n)[:, None]
    ones = torch.ones_like(step)
    zeros = torch.zeros_like(step)
    target = torch.where(step >= 0, ones, zeros)
    step = torch.where(step >= 0, step, zeros)
    discount = tar...
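The matrix construction above computes, for each time step, the discounted sum of future rewards. A plain-Python reference sketch of that quantity (assumed semantics, accumulating backwards instead of building a matrix):

```python
def discounted_future(rewards, gamma=0.99):
    # R[t] = r[t] + gamma * R[t+1]: accumulate from the last step backwards.
    out = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        out[t] = running
    return out

print(discounted_future([1.0, 0.0, 2.0], gamma=0.5))  # [1.5, 1.0, 2.0]
```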
Training

We are now ready to train our policy! WARNING: make sure to turn on GPU, which also enables multicore processing. Training may take up to 45 minutes even with GPU enabled; otherwise it will take much longer!
from parallelEnv import parallelEnv
import numpy as np

# keep track of how long training takes
# WARNING: running through all 800 episodes will take 30-45 minutes

# training loop max iterations
episode = 500

# widget bar to display progress
!pip install progressbar
import progressbar as pb

widget = ['training loop: '...
Copyright 2021 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
Migrating feature_columns to TF2
# Temporarily install tf-nightly as the notebook depends on symbols in 2.6. !pip uninstall -q -y tensorflow keras !pip install -q tf-nightly
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
Training a model will usually come with some amount of feature preprocessing, particularly when dealing with structured data. When training a `tf.estimator.Estimator` in TF1, this feature preprocessing is done with the `tf.feature_column` API. In TF2, this preprocessing can be done directly with Keras layers, called _...
import tensorflow as tf import tensorflow.compat.v1 as tf1 import math
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
and add a utility for calling a feature column for demonstration:
def call_feature_columns(feature_columns, inputs): # This is a convenient way to call a `feature_column` outside of an estimator # to display its output. feature_layer = tf1.keras.layers.DenseFeatures(feature_columns) return feature_layer(inputs)
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
One-hot encoding integer IDs. A common feature transformation is one-hot encoding integer inputs of a known range. Here is an example using feature columns:
categorical_col = tf1.feature_column.categorical_column_with_identity( 'type', num_buckets=3) indicator_col = tf1.feature_column.indicator_column(categorical_col) call_feature_columns(indicator_col, {'type': [0, 1, 2]})
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
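The one-hot transformation itself is library-independent; a minimal plain-Python sketch of what the identity column computes (for illustration only, not the TensorFlow implementation):

```python
def one_hot(indices, num_buckets):
    # e.g. one_hot([0, 2], 3) -> [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
    return [[1.0 if i == j else 0.0 for j in range(num_buckets)]
            for i in indices]
```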
Using Keras preprocessing layers, these columns can be replaced by a single `tf.keras.layers.CategoryEncoding` layer with `output_mode` set to `'one_hot'`:
one_hot_layer = tf.keras.layers.CategoryEncoding( num_tokens=3, output_mode='one_hot') one_hot_layer([0, 1, 2])
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
One-hot encoding string data with a vocabulary. Handling string features often requires a vocabulary lookup to translate strings into indices. Here is an example using feature columns to look up strings and then one-hot encode the indices:
vocab_col = tf1.feature_column.categorical_column_with_vocabulary_list( 'sizes', vocabulary_list=['small', 'medium', 'large'], num_oov_buckets=0) indicator_col = tf1.feature_column.indicator_column(vocab_col) call_feature_columns(indicator_col, {'sizes': ['small', 'medium', 'large']})
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
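Under the hood, this vocabulary one-hot step is a string-to-index lookup followed by one-hot encoding; a minimal plain-Python sketch (no out-of-vocabulary handling, an assumption made for illustration):

```python
def vocab_one_hot(tokens, vocab):
    # Map each token to its vocabulary index, then one-hot encode it.
    index = {word: i for i, word in enumerate(vocab)}
    return [[1.0 if index[t] == j else 0.0 for j in range(len(vocab))]
            for t in tokens]
```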
Using Keras preprocessing layers, use the `tf.keras.layers.StringLookup` layer with `output_mode` set to `'one_hot'`:
string_lookup_layer = tf.keras.layers.StringLookup( vocabulary=['small', 'medium', 'large'], num_oov_indices=0, output_mode='one_hot') string_lookup_layer(['small', 'medium', 'large'])
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
Embedding string data with a vocabulary. For larger vocabularies, an embedding is often needed for good performance. Here is an example embedding a string feature using feature columns:
vocab_col = tf1.feature_column.categorical_column_with_vocabulary_list( 'col', vocabulary_list=['small', 'medium', 'large'], num_oov_buckets=0) embedding_col = tf1.feature_column.embedding_column(vocab_col, 4) call_feature_columns(embedding_col, {'col': ['small', 'medium', 'large']})
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
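Conceptually, an embedding is a trainable lookup table mapping each vocabulary entry to a dense vector. A toy sketch with a random, untrained table — the `dim` parameter and the seeding are illustrative assumptions, and real embeddings are learned during training:

```python
import random

def toy_embedding(tokens, vocab, dim=4, seed=0):
    # One fixed random dense vector per vocabulary entry (untrained stand-in).
    rng = random.Random(seed)
    table = {word: [rng.uniform(-1, 1) for _ in range(dim)] for word in vocab}
    return [table[t] for t in tokens]
```

The same token always maps to the same vector, which is the property the `StringLookup` + `Embedding` pair relies on.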
Using Keras preprocessing layers, this can be achieved by combining a `tf.keras.layers.StringLookup` layer and a `tf.keras.layers.Embedding` layer. The default output of the `StringLookup` will be integer indices, which can be fed directly into an embedding. Note that the `Embedding` layer contains trainable parameters...
string_lookup_layer = tf.keras.layers.StringLookup( vocabulary=['small', 'medium', 'large'], num_oov_indices=0) embedding = tf.keras.layers.Embedding(3, 4) embedding(string_lookup_layer(['small', 'medium', 'large']))
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
Complete training example. To show a complete training workflow, first prepare some data with three features of different types:
features = { 'type': [0, 1, 1], 'size': ['small', 'small', 'medium'], 'weight': [2.7, 1.8, 1.6], } labels = [1, 1, 0] predict_features = {'type': [0], 'size': ['foo'], 'weight': [-0.7]}
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
Define some common constants for both TF1 and TF2 workflows:
vocab = ['small', 'medium', 'large'] one_hot_dim = 3 embedding_dim = 4 weight_mean = 2.0 weight_variance = 1.0
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
With feature columns. Feature columns must be passed as a list to the estimator on creation, and will be called implicitly during training.
categorical_col = tf1.feature_column.categorical_column_with_identity( 'type', num_buckets=one_hot_dim) # Convert index to one-hot; e.g. [2] -> [0,0,1]. indicator_col = tf1.feature_column.indicator_column(categorical_col) # Convert strings to indices; e.g. ['small'] -> [1]. vocab_col = tf1.feature_column.categoric...
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
The feature columns will also be used to transform input data when running inference on the model.
def _predict_fn(): return tf1.data.Dataset.from_tensor_slices(predict_features).batch(1) next(estimator.predict(_predict_fn))
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
With Keras preprocessing layers. Keras preprocessing layers are more flexible in where they can be called. A layer can be applied directly to tensors, used inside a `tf.data` input pipeline, or built directly into a trainable Keras model. In this example, we will apply preprocessing layers inside a `tf.data` input pipeli...
inputs = { 'type': tf.keras.Input(shape=(), dtype='int64'), 'size': tf.keras.Input(shape=(), dtype='string'), 'weight': tf.keras.Input(shape=(), dtype='float32'), } outputs = { # Convert index to one-hot; e.g. [2] -> [0,0,1]. 'type': tf.keras.layers.CategoryEncoding( one_hot_dim, output_mode='one_hot')(...
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
You can now apply this model inside a call to `tf.data.Dataset.map`. Please note that the function passed to `map` will automatically be converted into a `tf.function`, and the usual caveats for writing `tf.function` code apply (no side effects).
# Apply the preprocessing in tf.data.Dataset.map. dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1) dataset = dataset.map(lambda x, y: (preprocessing_model(x), y), num_parallel_calls=tf.data.AUTOTUNE) # Display a preprocessed input sample. next(dataset.take(1).as_numpy_iter...
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
Next, you can define a separate `Model` containing the trainable layers. Note how the inputs to this model now reflect the preprocessed feature types and shapes.
inputs = { 'type': tf.keras.Input(shape=(one_hot_dim,), dtype='float32'), 'size': tf.keras.Input(shape=(), dtype='int64'), 'weight': tf.keras.Input(shape=(), dtype='float32'), } # Since the embedding is trainable, it needs to be part of the training model. embedding = tf.keras.layers.Embedding(len(vocab), embeddi...
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
You can now train the `training_model` with `tf.keras.Model.fit`.
# Train on the preprocessed data. training_model.compile( loss=tf.keras.losses.BinaryCrossentropy(from_logits=True)) training_model.fit(dataset)
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
Finally, at inference time, it can be useful to combine these separate stages into a single model that handles raw feature inputs.
inputs = preprocessing_model.input outputs = training_model(preprocessing_model(inputs)) inference_model = tf.keras.Model(inputs, outputs) predict_dataset = tf.data.Dataset.from_tensor_slices(predict_features).batch(1) inference_model.predict(predict_dataset)
_____no_output_____
Apache-2.0
site/en/guide/migrate/migrating_feature_columns.ipynb
jiankaiwang/docs
Implementing a Neural Network. In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
# A bit of setup import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.neural_net import TwoLayerNet %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloadi...
_____no_output_____
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
We will use the class `TwoLayerNet` in the file `cs231n/classifiers/neural_net.py` to represent instances of our network. The network parameters are stored in the instance variable `self.params` where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will...
# Create a small net and some toy data to check your implementations. # Note that we set the random seed for repeatable experiments. input_size = 4 hidden_size = 10 num_classes = 3 num_inputs = 5 def init_toy_model(): np.random.seed(0) return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1) def in...
_____no_output_____
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
Forward pass: compute scores. Open the file `cs231n/classifiers/neural_net.py` and look at the method `TwoLayerNet.loss`. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the...
scores = net.loss(X) print('Your scores:') print(scores) print() print('correct scores:') correct_scores = np.asarray([ [-0.81233741, -1.27654624, -0.70335995], [-0.17129677, -1.18803311, -0.47310444], [-0.51590475, -1.01354314, -0.8504215 ], [-0.15419291, -0.48629638, -0.52901952], [-0.00618733, -0.12435261,...
Your scores: [[-0.81233741 -1.27654624 -0.70335995] [-0.17129677 -1.18803311 -0.47310444] [-0.51590475 -1.01354314 -0.8504215 ] [-0.15419291 -0.48629638 -0.52901952] [-0.00618733 -0.12435261 -0.15226949]] correct scores: [[-0.81233741 -1.27654624 -0.70335995] [-0.17129677 -1.18803311 -0.47310444] [-0.51590475 -1...
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
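The forward pass being checked here is an affine layer, a ReLU, then a second affine layer. A dependency-free sketch for a single input vector — the column-wise weight layout (one weight list per unit) is a convention chosen for illustration, not the numpy layout the assignment uses:

```python
def two_layer_scores(x, W1, b1, W2, b2):
    # hidden = ReLU(x @ W1 + b1); scores = hidden @ W2 + b2
    hidden = [max(0.0, sum(xi * w for xi, w in zip(x, col)) + b)
              for col, b in zip(W1, b1)]
    return [sum(hi * w for hi, w in zip(hidden, col)) + b
            for col, b in zip(W2, b2)]
```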
Forward pass: compute loss. In the same function, implement the second part that computes the data and regularization loss.
loss, _ = net.loss(X, y, reg=0.05) correct_loss = 1.30378789133 # should be very small, we get < 1e-12 print('Difference between your loss and correct loss:') print(np.sum(np.abs(loss - correct_loss)))
Difference between your loss and correct loss: 1.7985612998927536e-13
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
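The data loss here is softmax cross-entropy, as in the earlier Softmax exercise. For a single example it can be sketched (with the standard max-subtraction for numerical stability) as:

```python
import math

def softmax_loss(scores, label):
    # -log softmax(scores)[label], shifted by max(scores) for stability.
    shift = max(scores)
    exps = [math.exp(s - shift) for s in scores]
    return -math.log(exps[label] / sum(exps))
```

With three equal scores the softmax is uniform, so the loss is log(3) ≈ 1.0986 — close to the 1.3038 reference value above, which also includes the regularization term.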
Backward pass. Implement the rest of the function. This will compute the gradient of the loss with respect to the variables `W1`, `b1`, `W2`, and `b2`. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
from cs231n.gradient_check import eval_numerical_gradient # Use numeric gradient checking to check your implementation of the backward pass. # If your implementation is correct, the difference between the numeric and # analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2. loss, grads = net.loss(...
W2 max relative error: 3.440708e-09 b2 max relative error: 4.447677e-11 W1 max relative error: 3.561318e-09 b1 max relative error: 2.738421e-09
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
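The numeric check compares the analytic gradients against a centered finite-difference estimate; in one dimension the estimator is simply:

```python
def numeric_gradient(f, x, h=1e-5):
    # Central difference: (f(x + h) - f(x - h)) / (2h) approximates f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)
```

`eval_numerical_gradient` applies this same idea coordinate-by-coordinate to each weight array.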
Train the network. To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function `TwoLayerNet.train` and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and ...
net = init_toy_model() stats = net.train(X, y, X, y, learning_rate=1e-1, reg=5e-6, num_iters=100, verbose=False) print('Final training loss: ', stats['loss_history'][-1]) # plot the loss history plt.plot(stats['loss_history']) plt.xlabel('iteration') plt.ylabel('training loss') plt.title('Trai...
Final training loss: 0.01714960793873204
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
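The core of the update step you are asked to fill in is the vanilla SGD rule. Sketched on a dict of scalar parameters for clarity (the real implementation applies the same rule to numpy arrays in `self.params`):

```python
def sgd_step(params, grads, learning_rate):
    # p <- p - lr * dL/dp for every parameter.
    return {k: params[k] - learning_rate * grads[k] for k in params}
```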
Load the data. Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
from cs231n.data_utils import load_CIFAR10 def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): """ Load the CIFAR-10 dataset from disk and perform preprocessing to prepare it for the two-layer neural net classifier. These are the same steps as we used for the SVM, but condense...
Train data shape: (49000, 3072) Train labels shape: (49000,) Validation data shape: (1000, 3072) Validation labels shape: (1000,) Test data shape: (1000, 3072) Test labels shape: (1000,)
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
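The preprocessing mentioned above follows the same steps as the SVM exercise, which includes zero-centering by subtracting the mean image computed on the training split. A small sketch of per-feature zero-centering on plain nested lists:

```python
def zero_center(rows):
    # Subtract the per-column (per-feature) mean from every row.
    n = len(rows)
    means = [sum(row[i] for row in rows) / n for i in range(len(rows[0]))]
    return [[v - m for v, m in zip(row, means)] for row in rows]
```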
Train a network. To train our network we will use SGD. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
input_size = 32 * 32 * 3 hidden_size = 50 num_classes = 10 net = TwoLayerNet(input_size, hidden_size, num_classes) # Train the network stats = net.train(X_train, y_train, X_val, y_val, num_iters=1000, batch_size=200, learning_rate=1e-4, learning_rate_decay=0.95, reg=0.25, verbose=Tr...
iteration 0 / 1000: loss 2.302954 iteration 100 / 1000: loss 2.302550 iteration 200 / 1000: loss 2.297648 iteration 300 / 1000: loss 2.259602 iteration 400 / 1000: loss 2.204170 iteration 500 / 1000: loss 2.118565 iteration 600 / 1000: loss 2.051535 iteration 700 / 1000: loss 1.988466 iteration 800 / 1000: loss 2.00659...
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
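The exponential schedule described above multiplies the learning rate by a fixed decay factor after each epoch, so after `epoch` epochs the rate is `base_lr * decay ** epoch`:

```python
def decayed_learning_rate(base_lr, decay, epoch):
    # Learning rate after `epoch` epochs of multiplicative decay.
    return base_lr * (decay ** epoch)
```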
Debug the training. With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good. One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization. Anot...
# Plot the loss function and train / validation accuracies plt.subplot(2, 1, 1) plt.plot(stats['loss_history']) plt.title('Loss history') plt.xlabel('Iteration') plt.ylabel('Loss') plt.subplot(2, 1, 2) plt.plot(stats['train_acc_history'], label='train') plt.plot(stats['val_acc_history'], label='val') plt.title('Classi...
_____no_output_____
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
Tune your hyperparameters. **What's wrong?** Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capa...
best_net = None # store the best model into this learning_rates = [5e-5, 1e-4, 1e-3] regularization_strengths = [0.0001, 0.001, 0.1] results = {} best_val = -1 best_net = None ################################################################################# # TODO: Tune hyperparameters using the validatio...
_____no_output_____
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
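The tuning loop being asked for is essentially a grid search over (learning rate, regularization) pairs, keeping the model with the best validation accuracy. A generic sketch, where `train_and_eval` is a hypothetical callable (an assumption for illustration) that trains a network and returns its validation accuracy:

```python
from itertools import product

def grid_search(train_and_eval, learning_rates, regs):
    # Try every (lr, reg) pair; keep the one with the best validation accuracy.
    best = (None, None, -1.0)
    for lr, reg in product(learning_rates, regs):
        acc = train_and_eval(lr, reg)
        if acc > best[2]:
            best = (lr, reg, acc)
    return best
```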
Run on the test set. When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
test_acc = (best_net.predict(X_test) == y_test).mean() print('Test accuracy: ', test_acc)
Test accuracy: 0.491
MIT
assignments/2019/assignment1/two_layer_net.ipynb
comratvlad/cs231n.github.io
d-dimer pipeline. Read filenames
import os unid = 'u0496358' path = "/home/"+str(unid)+"/BRAT/"+str(unid)+"/Project_pe_test" files = os.listdir(path) len(files)
_____no_output_____
Apache-2.0
dimer_pipe.ipynb
asy1113/NLP_PE
Define the regular expressions
import re rule=r'(?P<name>(d-dimer|ddimer))(?P<n1>.{1,25}?)(?P<value>[0-9]{1,4}(\.[0-9]{0,3})?\s*)(?P<n2>[^\n\w\d]*)(?P<unit>(ug\/l|ng\/ml|mg\/l|nmol\/l)?)' rule1=r'(elevated|pos|positive|increased|high|\+)(.{1,20})?(\n)?\s?(d-dimer|d\s?dimer)' rule2=r'(d-dimer|d\s?dimer)([^0-9-:]{1,15})?(positive|pos)' neg_regex =...
_____no_output_____
Apache-2.0
dimer_pipe.ipynb
asy1113/NLP_PE
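A trimmed-down version of the value rule above can be exercised on a sample sentence. Note this pattern is a simplified illustrative subset of the full rule, not the rule itself:

```python
import re

# Simplified d-dimer value extractor (illustrative subset of the rule above).
pattern = re.compile(
    r'(?P<name>d-?\s?dimer)\D{0,25}?(?P<value>\d{1,4}(\.\d{0,3})?)',
    re.IGNORECASE)

match = pattern.search("Assessment: D-dimer 2.35 ug/l, elevated.")
```

The lazy `\D{0,25}?` skips the shortest run of non-digits between the test name and the numeric value, so the named group `value` captures the measurement.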
d-dimer pipeline: apply rules
def ddimer_val(rule='rule', rule1='rule1', rule2='rule2', file_txt='note'): # import libraries import re from pipeUtils import Annotation from pipeUtils import Document # initiate Document obj file1a = '' doc = Document() doc.load_document_from_file(file_txt) # change to lowe...
_____no_output_____
Apache-2.0
dimer_pipe.ipynb
asy1113/NLP_PE