markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Embeddings | min_count = [2, 3, 4, 5]
window = [2, 3, 4, 5]
size = [25, 50, 100, 200]
sample = 6e-5
alpha = 0.05
min_alpha = 0.0007
negative = 20
sgTrain = 1
cbowTrain = 0
emArgs = [min_count[0], window[0], size[3], sample, alpha, min_alpha, negative]
def tokenizeTweet(sent):
doc = nlp(sent)
tokens = [t.text.strip() for t in doc]
... | /usr/local/lib/python3.7/site-packages/ipykernel_launcher.py:2: FutureWarning: The signature of `Series.to_csv` was aligned to that of `DataFrame.to_csv`, and argument 'header' will change its default value from False to True: please pass an explicit value to suppress this warning.
| MIT | notebooks/09 spacymoji.ipynb | gregoryverghese/schizophrenia-twitter |
Converting HTML to Text. Tags: data, python, nlp. Date: 2020-08-05T08:00:00+10:00. Feature image: /images/jupyter-blog.png. How can we convert HTML into text for processing? Whitespace in HTML [is complicated](https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model/Whitespace). | def html2md(html):
parser = HTML2Text()
parser.ignore_images = True
parser.ignore_anchors = True
parser.body_width = 0
md = parser.handle(html)
return md
def html2plain(html):
# HTML to Markdown
md = html2md(html)
# Normalise custom lists
md = re.sub(r'(^|\n) ? ? ?\\?[•·–-—-*]( \... | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
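The block-level whitespace handling that `html2md`/`html2plain` rely on can be sketched with the standard library alone. The following is a simplified illustration (the class name and tag set are my own choices, not part of the notebook's pipeline): an `html.parser.HTMLParser` subclass that inserts newlines at block-level tags so headings and paragraphs don't run together.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect text, inserting newlines at block-level tag boundaries."""

    BLOCK_TAGS = {"p", "div", "br", "h1", "h2", "h3", "li", "tr"}

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        # inline tags (strong, u, a, ...) contribute no separator
        if tag in self.BLOCK_TAGS:
            self.parts.append("\n")

    def handle_data(self, data):
        self.parts.append(data)

    def get_text(self):
        # collapse runs of blank lines introduced by nested blocks
        text = "".join(self.parts)
        return "\n".join(line.strip() for line in text.split("\n") if line.strip())


extractor = TextExtractor()
extractor.feed("<strong><u>The Client</u></strong><br/><br/>Our client is a school.")
print(extractor.get_text())
# The Client
# Our client is a school.
```

Note that self-closing tags like `<br/>` still trigger `handle_starttag`, because `HTMLParser.handle_startendtag` delegates to it by default.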
Example with linebreaks instead of paragraphs | p = paths[6]
data = read_jsonl(p)
html = data[0]['description'] | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
The plain HTML | print(html) | <strong><u>The Client</u></strong><br/><br/>Our client is a secondary education institution in the Eastern Suburbs of Melbourne. They offer rewarding and diverse programs for Australian and overseas students as well as workforce development for Australia's corporate, government, and commercial organisations. They opera... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
How it looks in a browser | HTML(html) | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Beautiful Soup runs the sentences together because it doesn't process the `<br/>` tags. | print(BeautifulSoup(html).getText()) | The ClientOur client is a secondary education institution in the Eastern Suburbs of Melbourne. They offer rewarding and diverse programs for Australian and overseas students as well as workforce development for Australia's corporate, government, and commercial organisations. They operate seven campuses in Victoria, del... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
It's better if we replace them with spaces, but then we lose the separation between headers and content. | print(BeautifulSoup(html).getText(' ')) | The Client Our client is a secondary education institution in the Eastern Suburbs of Melbourne. They offer rewarding and diverse programs for Australian and overseas students as well as workforce development for Australia's corporate, government, and commercial organisations. They operate seven campuses in Victoria, de... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Newlines work for this particular case. | print(BeautifulSoup(html).getText('\n')) | The Client
Our client is a secondary education institution in the Eastern Suburbs of Melbourne. They offer rewarding and diverse programs for Australian and overseas students as well as workforce development for Australia's corporate, government, and commercial organisations. They operate seven campuses in Victoria, de... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
HTML2Text does an excellent job of converting this into markdown (though notice it's sensitive to spaces around markup in the headings, which aren't visible in the HTML). | md = html2md(html)
print(md) | **_The Client_**
Our client is a secondary education institution in the Eastern Suburbs of Melbourne. They offer rewarding and diverse programs for Australian and overseas students as well as workforce development for Australia's corporate, government, and commercial organisations. They operate seven campuses in V... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Round trip it back to HTML. | html2 = mistletoe.markdown(md)
print(html2)
HTML(html2)
print(BeautifulSoup(html2).get_text(''))
print(html2plain(html)) | The Client
Our client is a secondary education institution in the Eastern Suburbs of Melbourne. They offer rewarding and diverse programs for Australian and overseas students as well as workforce development for Australia's corporate, government, and commercial organisations. They operate seven campuses in Victoria, de... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Example - HTML Tables | p = paths[16]
data = read_jsonl(p)
[idx for idx, d in enumerate(data) if '<tr>' in d['description']]
html = data[68]['description']
print(html)
HTML(html)
md = html2md(html)
print(md)
html2 = mistletoe.markdown(md)
print(html2)
HTML(html2)
print(BeautifulSoup(html2).get_text(''))
text = html2plain(html)
print(text) | Location Profile
SCHOOL PROFILE
Greenhills Primary School, established in 1962, is situated in a quiet residential location between the north-eastern Melbourne suburbs of Greensborough and Diamond Creek, within the municipality of Nillumbik. The location of the school, on a well maintained grassed and treed site provid... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Example - Blank bold | p = paths[21]
data = read_jsonl(p)
[idx for idx, d in enumerate(data) if '<tr>' in d['description']]
html = data[546]['description']
print(html)
HTML(html)
md = html2md(html)
print(md)
html2 = mistletoe.markdown(md)
print(html2)
HTML(html2)
print(BeautifulSoup(html2).get_text(''))
text = html2plain(html)
print(text) | About the role
Position Title: Field Organiser
Position Location: Darwin, NT
Employment Status: Ongoing (subject to probation) / Full Time
Classification and Salary range:
Organiser Level 1 – 2, $74,984 – $97,713 per annum (includes Organiser Expense Allowance paid as salary) + 15.4% superannuation
Darwin Remote Loc... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Example: Complicated HTML | p = paths[4]
data = read_jsonl(p)
html = data[0]['description'] | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
The plain HTML | print(html) | <div class="job-detail-des">
<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/dwoywMmGZEI" width="560"></iframe>
<div style=" "> </div>
<div align="center" style=""><b style="">Organisation Design S... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
How it looks in a browser (after removing the video iframe): | HTML(re.sub('<iframe[^>]*>[^<]*</iframe>', '', html)) | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
HTML2Text does an excellent job of converting this into markdown. Notice there are some empty bold `**` that are invisible in the rendered HTML. Also note that in CLOSING DATE they've had to insert an additional space between the phrase and the colon to bold it as markdown, which is different from how it reads. | md = html2md(html)
print(md)
html2 = mistletoe.markdown(md.replace('•', ' *'))
print(html2)
HTML(html2)
text = BeautifulSoup(html2).getText()
print(text)
text = html2plain(html)
print(text) | Organisation Design Specialist
TAFE Worker Level 9 – Talent Pool
We are seeking candidates who are interested in joining TAFE NSW’s Organisation Design Specialist Talent Pool.
This is a great opportunity for you to be considered for future roles over the next 12 months.
Competitive salary package and access to multipl... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Example - Processing Lists | p = paths[14]
data = read_jsonl(p)
html = data[0]['description'] | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
The plain HTML | html | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
How it looks in a browser | HTML(html) | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
HTML2Text again does an excellent job of converting this into markdown. Note that there's a mixing of markup and styling in the Key Selection Criteria heading. | md = html2md(html)
print(md)
html2 = mistletoe.markdown(md)
print(html2)
HTML(html2)
text = BeautifulSoup(html2).getText()
print(text)
text = html2plain(html)
print(text) | Progressive peak body for Dementia
Full time, fixed term opportunity until June 2020
Attractive salary packaging options available
Dementia Australia is a well-known and respected organisation transforming the experience of people impacted by dementia by elevating their voices and inspiring excellence in support and c... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Example - Custom Lists | p = paths[6]
data = read_jsonl(p)
html = data[2]['description'] | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
The plain HTML | md = html2md(html)
print(md)
html2 = mistletoe.markdown(md.replace('·', ' *'))
print(html2)
HTML(html2)
print(html2)
text = BeautifulSoup(html2).getText()
print(text)
text = html2plain(html)
print(text) | Do you have a fintech background and are hungry for your next move? National BDM role where you can work from home, apply now!
Duties and Responsibilities
Develop sales plans and exceed set KPI's
Generate leads by researching and networking with key stakeholders
Prepare presentations and proposals
Keep abreast of p... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Example 5 - Actually plain text | p = paths[10]
data = read_jsonl(p)
html = data[3]['description'] | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
The plain HTML | print(html) | The Opportunity
Do you want to conduct impactful research of strategic importance to Australia?
Incorporate host plant resistance to pests and diseases in the cotton breeding program
Grow your research career with a CSIRO PhD Fellowship
CSIRO Early Research Career (CERC) Postdoctoral Fellowships provide opportunities... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
How it looks in a browser | HTML(html) | _____no_output_____ | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
HTML2Text again does an excellent job of converting this into markdown. Note that there's a mixing of markup and styling in the Key Selection Criteria heading. | md = html2md(html)
print(md)
html2 = mistletoe.markdown(md)
print(html2)
HTML(html2)
print(html2)
text = BeautifulSoup(html2).getText()
print(text)
text = html2plain(html)
print(text) | The Opportunity Do you want to conduct impactful research of strategic importance to Australia? Incorporate host plant resistance to pests and diseases in the cotton breeding program Grow your research career with a CSIRO PhD Fellowship CSIRO Early Research Career (CERC) Postdoctoral Fellowships provide opportunities t... | MIT | notebooks/Converting HTML to Text.ipynb | EdwardJRoss/adzuna-salary-prediction |
Multiple Layer GRU | from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_datasets as tfds
import tensorflow as tf
print(tf.__version__)
# Get the data
dataset, info = tfds.load('imdb_reviews/subwords8k', with_... | _____no_output_____ | MIT | TensorFlow In Practice/Course 3 - NLP/Course 3 - Week 3 - Lesson 1convolution.ipynb | alwaysvamsi/nlp-tutorial |
Exercises 8. Adjust the learning rate. Try a value of 0.0001. Does it make a difference? **Solution** This is the simplest exercise, and you have done it before. Find the line: `optimize = tf.train.AdamOptimizer(learning_rate=0.001).minimize(mean_loss)` and change the learning_rate to 0.0001. Since the training is ... | import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# TensorFLow includes a data provider for MNIST that we'll use.
# This function automatically downloads the MNIST dataset to the chosen directory.
# The dataset is already split into training, validation, and test su... | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S54_L390/8. TensorFlow_MNIST_Learning_rate_Part_1_Solution.ipynb | Alexander-Meldrum/learning-data-science |
Outline the model. The whole code is in one cell, so you can simply rerun this cell (instead of the whole notebook) and train a new model. The tf.reset_default_graph() function takes care of clearing the old parameters. From there on, a completely new training starts. | input_size = 784
output_size = 10
# Use same hidden layer size for both hidden layers. Not a necessity.
hidden_layer_size = 50
# Reset any variables left in memory from previous runs.
tf.reset_default_graph()
# As in the previous example - declare placeholders where the data will be fed into.
inputs = tf.placeholder(... | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S54_L390/8. TensorFlow_MNIST_Learning_rate_Part_1_Solution.ipynb | Alexander-Meldrum/learning-data-science |
Test the model. As we discussed in the lectures, after training on the training and validation sets, we test the final prediction power of our model by running it on the test dataset that the algorithm has not seen before. It is very important to realize that fiddling with the hyperparameters overfits the validation data... | input_batch, target_batch = mnist.test.next_batch(mnist.test._num_examples)
test_accuracy = sess.run([accuracy],
feed_dict={inputs: input_batch, targets: target_batch})
# Test accuracy is a list with 1 value, so we want to extract the value from it, using x[0]
# Uncomment the print to see how it looks before the ... | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S54_L390/8. TensorFlow_MNIST_Learning_rate_Part_1_Solution.ipynb | Alexander-Meldrum/learning-data-science |
Lifetimes. Learning goals: relativistic kinematics, Standard Model particles, special relativity. Background: Every type of particle has different characteristics. They each have different masses, lifetimes, decay methods and many other properties. To find the distance a particle travels in one lifetime, you need... | particles = ["B+/-","D+/-","J/Psi"]
lifetimes = [1.64e-12,1.4e-12,7e-21]
c = 3e8 # m/s
v = c
for p,l in zip(particles,lifetimes):
distance = v*l
print("%-5s lifetime=%4.2e s distance traveled=%4.2e m" % (p, l, distance)) | _____no_output_____ | MIT | activities/physicsbkg_Lifetimes.ipynb | particle-physics-playground/playground |
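The loop above uses v = c and the rest-frame lifetime; a relativistic particle actually travels d = γβcτ on average, where γ = E/mc². A hedged sketch of that correction (the 50 GeV example energy is an illustrative value I chose, not from the activity; the B± mass of about 5.28 GeV is the accepted value):

```python
import math

c = 3e8  # speed of light, m/s


def decay_length(lifetime_s, mass_gev, energy_gev):
    """Mean lab-frame decay length d = gamma * beta * c * tau."""
    gamma = energy_gev / mass_gev          # time-dilation factor
    beta = math.sqrt(1 - 1 / gamma**2)     # v/c
    return gamma * beta * c * lifetime_s


# B+/- meson: tau ~ 1.64e-12 s, mass ~ 5.28 GeV, at an example energy of 50 GeV
print(decay_length(1.64e-12, 5.28, 50.0))  # a few millimetres
```

This is why B mesons, despite a picosecond lifetime, leave measurable displaced vertices in real detectors.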
Particles: $\mu^\pm$ $\tau^\pm$ $\pi^\pm$ $\pi^0$ $\rho^0$ $K^\pm$ $K^0_{\rm short}$ $K^0_{\rm long}$ $K^*(892)$ $D^\pm$ $B^\pm$ $B^0$ $J/\psi$ $\Upsilon(1S)$ proton neutron $\Delta^+$ $\Lambda^0$ $\Lambda_c$ Challenge! Finish the table for every particle listed above ... | # Your code here | _____no_output_____ | MIT | activities/physicsbkg_Lifetimes.ipynb | particle-physics-playground/playground |
The scale of many modern physics detectors ranges from the order of centimeters to 10's of meters. Given that information, what particles do you think will actually live long enough to travel through parts of the detector? | # Your code here | _____no_output_____ | MIT | activities/physicsbkg_Lifetimes.ipynb | particle-physics-playground/playground |
Which particles will decay (on average) before they reach the detectors? This means that these particles have to be reconstructed from their decay products. | # Your code here | _____no_output_____ | MIT | activities/physicsbkg_Lifetimes.ipynb | particle-physics-playground/playground |
Make a plot where the x-axis is the names of the above particles (or a number corresponding to each, where the number/particle relationship is clearly identified) and the y-axis is the lifetime of the particle. Color code the data points according to whether the primary decay is EM, weak, or strong. {\it Do not plot th... | # Your code here | _____no_output_____ | MIT | activities/physicsbkg_Lifetimes.ipynb | particle-physics-playground/playground |
Part 8 - Introduction to Plans. Context: > Warning: This is still experimental and may change during May / June 2019. We introduce here an object which is crucial for scaling to industrial Federated Learning: the Plan. It dramatically reduces bandwidth usage, allows asynchronous schemes, and gives more autonomy to remote d... | import torch
import torch.nn as nn
import torch.nn.functional as F | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
And than those specific to PySyft | import syft as sy # import the Pysyft library
hook = sy.TorchHook(torch) # hook PyTorch ie add extra functionalities
server = hook.local_worker | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
We define remote workers or _devices_, to be consistent with the notions provided in the reference article. We provide them with some data. | x11 = torch.tensor([-1, 2.]).tag('input_data')
x12 = torch.tensor([1, -2.]).tag('input_data2')
x21 = torch.tensor([-1, 2.]).tag('input_data')
x22 = torch.tensor([1, -2.]).tag('input_data2')
device_1 = sy.VirtualWorker(hook, id="device_1", data=(x11, x12))
device_2 = sy.VirtualWorker(hook, id="device_2", data=(x21, x2... | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Basic exampleLet's define a function that we want to transform into a plan. To do so, it's as simple as adding a decorator above the function definition! | @sy.func2plan
def plan_double_abs(x):
x = x + x
x = torch.abs(x)
return x | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Let's check, yes we have now a plan! | plan_double_abs | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
To use a plan, you need two things: to build the plan (_i.e. register the sequence of operations present in the function_) and to send it to a worker / device. Fortunately you can do this very easily! We first get a reference to some remote data: a request is sent over the network and a reference pointer is returned. | pointer_to_data = device_1.search('input_data')[0]
pointer_to_data | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
We tell the plan it must be executed remotely on the device, but actually nothing happens on the network because we have not provided any input data! You can now observe that the plan has a location attribute: `location:device_1`. | plan_double_abs.send(device_1) | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
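Conceptually, a plan is just a recorded sequence of operations that is shipped once and then replayed on new inputs. A minimal pure-Python sketch of that idea (an illustration of the concept only, not PySyft's actual implementation):

```python
class Plan:
    """Record a sequence of unary operations once; replay them on any input."""

    def __init__(self, *ops):
        self.ops = ops            # operations captured at build time

    def __call__(self, x):
        for op in self.ops:       # replay locally, no per-op round trip
            x = op(x)
        return x


# the same computation as plan_double_abs: x -> abs(x + x)
plan_double_abs = Plan(lambda x: x + x, abs)
print(plan_double_abs(-3))  # 6
```

Shipping this object once and invoking it with only new input references is what collapses many network round trips into a single one.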
One important thing to remember is that we now pre-set, ahead of computation, the id(s) where the result(s) should be stored. This allows us to send commands asynchronously, to already have a reference to a virtual result, and to continue local computations without waiting for the remote result to be computed. One major ... | %%time
# %%time is a magic command to log a cell's execution time
pointer_to_result = plan_double_abs(pointer_to_data)
print(pointer_to_result) | [PointerTensor | me:14841324516 -> device_1:8459055087]
CPU times: user 52.8 ms, sys: 3.89 ms, total: 56.7 ms
Wall time: 54.7 ms
| Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
And you can simply ask the value back. | pointer_to_result.get() | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
What happens now if you request a second computation with this plan? | pointer_to_data = device_1.search('input_data2')[0] | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Now it is much faster because there is a single communication round: we already have a reference to the remote plan, so we can request a remote execution and just provide the reference location of the new remote inputs. For the end user, nothing changes. | %time
pointer_to_result = plan_double_abs(pointer_to_data)
print(pointer_to_result.get()) | CPU times: user 3 µs, sys: 1 µs, total: 4 µs
Wall time: 7.15 µs
tensor([2., 4.])
| Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Towards a concrete example. But what we want to do is apply Plans to Deep and Federated Learning, right? So let's look at a slightly more complicated example, using neural networks as you might want to use them. Note that we are now transforming a method into a plan, so we use the `@sy.method2plan` decorator ins... | class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 3)
self.fc2 = nn.Linear(3, 2)
@sy.method2plan
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=0)
net = Net() | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
So the only thing we did was add the sy.method2plan decorator! And we can check that `net.forward` is now a plan. | net.forward | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Now there is a subtlety: because the plan depends on the net instance, if you send the plan *you also need to send the model*. > For developers: this is not compulsory, as you actually have a reference to the model in the plan; we could call model.send internally. | net.send(device_1)
net.forward.send(device_1) | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Let's retrieve some remote data | pointer_to_data = device_1.search('input_data')[0] | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Then, the syntax is just like normal remote sequential execution, that is, just like local execution. But compared to classic remote execution, there is a single communication round for each execution (except the first time when, as described above, we first build and send the plan). | pointer_to_result = net.forward(pointer_to_data)
pointer_to_result | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
And we get the result as usual! | pointer_to_result.get() | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Et voilà! We have seen how to dramatically reduce the communication between the local worker (or server) and the remote devices! Switch between workers. One major feature we want is to be able to use the same plan with several workers, changing which one we use depending on the remote batch of data we are considering. In par... | class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 3)
self.fc2 = nn.Linear(3, 2)
@sy.method2plan
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=0)
net = Net() | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Here are the main steps we just executed | net.send(device_1)
net.forward.send(device_1)
pointer_to_data = device_1.search('input_data')[0]
pointer_to_result = net(pointer_to_data)
pointer_to_result.get() | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
Let's get the model and the network back | net.get()
net.forward.get() | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
And actually the syntax is straightforward: we just send the plan to another device. | net.send(device_2)
net.forward.send(device_2)
pointer_to_data = device_2.search('input_data')[0]
pointer_to_result = net(pointer_to_data)
pointer_to_result.get() | _____no_output_____ | Apache-2.0 | examples/tutorials/Part 8 - Introduction to Plans.ipynb | racinger/PySyft |
5. Validation & Testing. Welcome to the fifth notebook of our six-part tutorial series on Deep Learning for Human Activity Recognition. Within the last notebook you learned: - How do I define a sample neural network architecture in PyTorch? - What additional preprocessing do I need to apply to my data to fed... | import os, sys
use_colab = True
module_path = os.path.abspath(os.path.join('..'))
if use_colab:
# move to content directory and remove directory for a clean start
%cd /content/
%rm -rf dl-for-har
# clone package repository (will throw error if already cloned)
!git clone https://github.c... | _____no_output_____ | MIT | tutorial_notebooks/validation_and_testing.ipynb | jforsyth/dl-for-har |
5.1. Splitting your data. Within the first part of this notebook we will split our data into the three datasets mentioned above, namely the train, validation and test datasets. There are multiple ways to split the data into the respective datasets, for example: - **Subject-wise:** split according to participants wi... | import numpy as np
import warnings
warnings.filterwarnings("ignore")
from data_processing.preprocess_data import load_dataset
# data loading (we are using a predefined method called load_dataset, which is part of the DL-ARC feature stack)
X, y, num_classes, class_names, sampling_rate, has_null = load_dataset('rwhar_... | _____no_output_____ | MIT | tutorial_notebooks/validation_and_testing.ipynb | jforsyth/dl-for-har |
5.2. Define the hyperparameters. Before we talk about how to perform validation in Human Activity Recognition, we need to define our hyperparameters again. As you know from the previous notebook, it is common practice to track all your settings and parameters in a compiled `config` object. Due to the fact that w... | from misc.torchutils import seed_torch
config = {
#### TRY AND CHANGE THESE PARAMETERS ####
# sliding window settings
'sw_length': 50,
'sw_unit': 'units',
'sampling_rate': 50,
'sw_overlap': 30,
# network settings
'nb_conv_blocks': 2,
'conv_block_type': 'normal',
'nb_filters': 64... | _____no_output_____ | MIT | tutorial_notebooks/validation_and_testing.ipynb | jforsyth/dl-for-har |
5.3. Validation. Within the next segment we will explain the most prominent validation methods used in Human Activity Recognition. These are: Train-Valid Split, k-Fold Cross-Validation, and Cross-Participant Cross-Validation. 5.3.1. Train-Valid Split. The train-valid split is one of the most basic validation methods, which ... | import time
import numpy as np
import torch
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, jaccard_score
from model.train import train
from model.DeepConvLSTM import DeepConvLSTM
from data_processing.sliding_window import apply_sliding_window
... | _____no_output_____ | MIT | tutorial_notebooks/validation_and_testing.ipynb | jforsyth/dl-for-har |
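The record-wise split used above comes from scikit-learn's `train_test_split`; the underlying idea can be sketched with the standard library alone. A simplified, non-stratified illustration with hypothetical names:

```python
import random


def train_valid_split(data, valid_size=0.2, seed=1):
    """Shuffle indices, then cut off the last valid_size fraction as validation."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)          # seeded for reproducibility
    cut = int(len(idx) * (1 - valid_size))
    train = [data[i] for i in idx[:cut]]
    valid = [data[i] for i in idx[cut:]]
    return train, valid


train, valid = train_valid_split(list(range(100)), valid_size=0.2)
print(len(train), len(valid))  # 80 20
```

Unlike `train_test_split(..., stratify=y)`, this sketch does not preserve class proportions, which matters for the imbalanced activity labels discussed in the tutorial.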
5.3.2. K-Fold Cross-Validation. The k-fold cross-validation is the most popular form of cross-validation. Instead of only splitting our data once into a train and validation dataset, as we did in the previous validation method, we take the average of k different train-valid splits. To do so we take our concatenated v... | from sklearn.model_selection import StratifiedKFold
# number of splits, i.e. folds
config['splits_kfold'] = 10
# in order to get reproducible results, we need to seed torch and other random parts of our implementation
seed_torch(config['seed'])
# needed for saving results
log_date = time.strftime('%Y%m%d')
log_time... | _____no_output_____ | MIT | tutorial_notebooks/validation_and_testing.ipynb | jforsyth/dl-for-har |
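The folds that `StratifiedKFold` produces can be sketched (ignoring stratification) in plain Python: partition the indices into k contiguous chunks and let each chunk serve once as the validation fold. A minimal illustration:

```python
def kfold_indices(n, k):
    """Yield (train_idx, valid_idx) pairs; each sample validates exactly once."""
    # spread the remainder over the first n % k folds
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        valid = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, valid
        start += size


folds = list(kfold_indices(10, 3))
print([len(v) for _, v in folds])  # [4, 3, 3]
```

The real `StratifiedKFold` additionally shuffles and balances class labels across folds; this sketch only shows the partitioning logic.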
5.3.3. Cross-Participant Cross-Validation. Cross-participant cross-validation, also known as Leave-One-Subject-Out (LOSO) cross-validation, is the most complex, but also the most expressive, validation method one can apply when dealing with multi-subject data. In general, it can be seen as a variation of the k-fold cross-val... | # needed for saving results
log_date = time.strftime('%Y%m%d')
log_timestamp = time.strftime('%H%M%S')
# in order to get reproducible results, we need to seed torch and other random parts of our implementation
seed_torch(config['seed'])
# iterate over all subjects
for i, sbj in enumerate(####SOMETHING IS MISSING HERE... | _____no_output_____ | MIT | tutorial_notebooks/validation_and_testing.ipynb | jforsyth/dl-for-har |
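Leave-One-Subject-Out splitting itself can be sketched the same way: group sample indices by subject id and hold out one subject per fold. A conceptual illustration, independent of the tutorial's data format:

```python
def loso_splits(subject_ids):
    """Yield (held_out_subject, train_idx, valid_idx) for each unique subject."""
    subjects = sorted(set(subject_ids))
    for sbj in subjects:
        valid = [i for i, s in enumerate(subject_ids) if s == sbj]
        train = [i for i, s in enumerate(subject_ids) if s != sbj]
        yield sbj, train, valid


# one label per sample; three subjects with 2, 3 and 1 samples
ids = [0, 0, 1, 1, 1, 2]
for sbj, train, valid in loso_splits(ids):
    print(sbj, len(train), len(valid))
# 0 4 2
# 1 3 3
# 2 5 1
```

Every sample from the held-out subject goes to validation, so the model is always evaluated on a person it has never seen, which is the point of LOSO.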
5.4 Testing. Now, after having implemented each of the validation techniques, we want to get an unbiased view of how our trained algorithm performs on unseen data. To do so we use the testing set which we split off the original dataset within the first step of this notebook. Task 5: Testing your trained networks 1. Appl... | from model.train import predict
# in order to get reproducible results, we need to seed torch and other random parts of our implementation
seed_torch(config['seed'])
# apply the sliding window on top of the test data; use the "apply_sliding_window" function
# found in data_processing.sliding_window
X_test, y_tes... | _____no_output_____ | MIT | tutorial_notebooks/validation_and_testing.ipynb | jforsyth/dl-for-har |
5.3.2 Discretization of Continuous Variables | import numpy as np
import pandas as pd
# When running as a Kaggle kernel
#df_train = pd.read_csv('../input/train.csv')
#df_test = pd.read_csv('../input/test.csv')
# When running from this repository
df_train = pd.read_csv("./titanic_csv/train.csv")
df_test = pd.read_csv("./titanic_csv/test.csv")
import matplotlib as mpl
import matplotlib.pyplot as plt
import seabor... | _____no_output_____ | MIT | tb4_kaggle_book_ch5.3.2_5.3.3.ipynb | currypan/tb4-datarefinement |
5.3.3 Checking Feature Importances | forest.feature_importances_
for i,k in zip(X_train.columns,forest.feature_importances_):
print(i,round(k,4)) | Pclass 0.1392
Sex 0.41
Age 0.1141
SibSp 0.0968
Parch 0.08
Fare 0.1023
Embarked_0 0.025
Embarked_1 0.0207
Embarked_2 0.0118
| MIT | tb4_kaggle_book_ch5.3.2_5.3.3.ipynb | currypan/tb4-datarefinement |
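To rank the features rather than print them in column order, zip the names with the importances and sort. A small sketch reusing the values printed above:

```python
names = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare",
         "Embarked_0", "Embarked_1", "Embarked_2"]
importances = [0.1392, 0.41, 0.1141, 0.0968, 0.08, 0.1023,
               0.025, 0.0207, 0.0118]

# sort feature/importance pairs by importance, highest first
ranked = sorted(zip(names, importances), key=lambda t: t[1], reverse=True)
for name, imp in ranked[:3]:
    print(f"{name}: {imp:.4f}")
# Sex: 0.4100
# Pclass: 0.1392
# Age: 0.1141
```

Note that random forest importances always sum to 1, so each value can be read directly as the feature's share of the model's total split gain.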
Python Training Hands-on Material. Rahul Reddy Gajjada. Comments and Printing in Python | # Hello Python
print("Hello, Python!")
# Comments
# This is a single line comment
''' This is a multiline comment
to further describe your
thoughts... ''' | _____no_output_____ | MIT | Python Data Types and Sequences.ipynb | RahulR432/PythonTraining |
Working with variables | # Variables
msg = "Python!" # String
v2 = 'Python!' # Also a string; single quotes work the same
v1 = 2 # Numbers
v3 = 3.564 # Floats / Doubles
v4 = True # Boolean (True / False)
# print()
# automatically adds a newline
print (msg)
print (v2)
print (v1)
print (v3)
print (v4)
print ("Hello Python!") | _____no_output_____ | MIT | Python Data Types and Sequences.ipynb | RahulR432/PythonTraining |
Data Types Numbers Integers | #Declaring and Assigning integers
'''Use the respective prefix to declare an integer in other formats
For Binary, ‘0b’ or ‘0B’
For Octal, ‘0o’ or ‘0O’
For Hexadecimal, ‘0x’ or ‘0X’ '''
x = 1
y = 882399773218279
z = -125634
a = 0b1100
#Determine the type of a variable using type keyword
print(type(x))
print(type(y))
p... | _____no_output_____ | MIT | Python Data Types and Sequences.ipynb | RahulR432/PythonTraining |
Floats | #Declaring and Assigning float numbers
x = 12.3
y = 12.9829379485794548679
z = -18.96
k = 23e2
print(type(x))
print(type(y))
print(type(z))
print(type(k))
print (x)
print (y)
print (z)
print(k) | _____no_output_____ | MIT | Python Data Types and Sequences.ipynb | RahulR432/PythonTraining |
Complex | #Declaring and Assigning complex numbers
x = -5j
y = 2 + 4j
z = 22j
print(type(x))
print(type(y))
print(type(z))
print(x)
print(y)
print(z) | _____no_output_____ | MIT | Python Data Types and Sequences.ipynb | RahulR432/PythonTraining |
Python Strings | # Note: Both " and ' can be used to make strings. And this flexibility allows for the following:
msg2 = 'Jenni said, "I love Python!"'
msg3 = "After that Jenni's Python Interpreter said it back to her!"
msg4 = 'Of Course she used the command `print("I love Jenni")`'
print (msg2)
print (msg3)
print (msg4)
# Import the ... | _____no_output_____ | MIT | Python Data Types and Sequences.ipynb | RahulR432/PythonTraining |
Python Lists | # Simple Lists
names = ["Jenni", "Python", "Scarlett"]
nums = [1, 2, 3, 4, 5]
chars = ['A', 'q', 'E', 'z', 'Y']
print (names)
print (nums)
print (chars)
# Can have multiple data types in one list
random_list = ["Jenni", "Python", "inneJ", 'J', '9', 9, 12.90, "Who"]
print (random_list)
# Accessing elements in a list
#... | _____no_output_____ | MIT | Python Data Types and Sequences.ipynb | RahulR432/PythonTraining |
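The accessing-elements cell is cut off above; Python lists use zero-based indexing, allow negative indices counted from the end, support slicing, and are mutable:

```python
names = ["Jenni", "Python", "Scarlett"]
print(names[0])    # first element -> Jenni
print(names[-1])   # last element  -> Scarlett
print(names[0:2])  # slice         -> ['Jenni', 'Python']
names[1] = "Ruby"  # lists are mutable
print(names)       # ['Jenni', 'Ruby', 'Scarlett']
```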
Python Tuples | # Simple Tuples
# Tuples are lists whose elements cannot be changed.
t1 = (1, 2, 3, 4, 5, 6, 7)
print (t1)
print (t1[4])
# Cannot change the elements stored as compared to lists
l1 = [1, 2, 3, 4, 5, 6, 7]
t1 = (1, 2, 3, 4, 5, 6, 7)
# Can change elements in list
l1[4] = 1
# Cannot change elements in tuple... | _____no_output_____ | MIT | Python Data Types and Sequences.ipynb | RahulR432/PythonTraining |
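The final comment above is truncated where the notebook attempts the tuple assignment; `t1[4] = 1` raises a `TypeError`, which a sketch can demonstrate without crashing:

```python
t1 = (1, 2, 3, 4, 5, 6, 7)
try:
    t1[4] = 1  # tuples do not support item assignment
except TypeError as err:
    print("Cannot modify a tuple:", err)
print(t1)  # the tuple is unchanged
```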
Reading from and Writing to Files using Python
This notebook covers the following topics:
- Interacting with the filesystem using the `os` module
- Downloading files from the internet using the `urllib` module
- Reading and processing data from text files
- Parsing data from CSV files i... | import os
We can check the present working directory using the `os.getcwd` function. | os.getcwd() | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
To get the list of files in a directory, use `os.listdir`. You pass an absolute or relative path of a directory as the argument to the function. | help(os.listdir)
os.listdir('.') # relative path
os.listdir('e:\datasets') # absolute path | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
You can create a new directory using `os.makedirs`. Let's create a new directory called `data`, where we'll later download some files. | os.getcwd()
os.makedirs('./data', exist_ok=True) | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Can you figure out what the argument `exist_ok` does? Try using the `help` function or [read the documentation](https://docs.python.org/3/library/os.html#os.makedirs). Let's verify that the directory was created and is currently empty. | 'data' in os.listdir('.')
os.listdir('./data') | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
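As a hint for the `exist_ok` question: with `exist_ok=False` (the default), `os.makedirs` raises `FileExistsError` when the target already exists; with `exist_ok=True` it silently succeeds. A quick check in a temporary directory:

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, 'data')

os.makedirs(target)                 # creates the directory
os.makedirs(target, exist_ok=True)  # no error the second time
try:
    os.makedirs(target)             # default exist_ok=False
except FileExistsError:
    print("raises FileExistsError when the directory exists")
```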
Let us download some files into the `data` directory using the `urllib` module. | url1 = 'https://gist.githubusercontent.com/aakashns/257f6e6c8719c17d0e498ea287d1a386/raw/7def9ef4234ddf0bc82f855ad67dac8b971852ef/loans1.txt'
url2 = 'https://gist.githubusercontent.com/aakashns/257f6e6c8719c17d0e498ea287d1a386/raw/7def9ef4234ddf0bc82f855ad67dac8b971852ef/loans2.txt'
url3 = 'https://gist.githubuserconte... | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
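The download cell is truncated above; with the `urllib` module the usual pattern is `urllib.request.urlretrieve(url, path)`. The sketch below fetches a local `file://` URL so it runs offline, but the gist URLs above would be passed in exactly the same way:

```python
import os
import tempfile
import urllib.request

# Create a small source file to stand in for the remote gist
src = os.path.join(tempfile.mkdtemp(), 'loans1.txt')
with open(src, 'w') as f:
    f.write('amount,duration,rate,down_payment\n')

# urlretrieve downloads the URL and saves it at the target path
dest = os.path.join(tempfile.mkdtemp(), 'loans1.txt')
urllib.request.urlretrieve('file://' + src, dest)
print(os.path.exists(dest))  # True
```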
Let's verify that the files were downloaded. | os.listdir('./data') | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
You can also use the [`requests`](https://docs.python-requests.org/en/master/) library to download URLs, although you'll need to [write some additional code](https://stackoverflow.com/questions/44699682/how-to-save-a-file-downloaded-from-requests-to-another-directory) to save the contents of the page to a file. Reading... | file1 = open('./data/loans1.txt', mode='r')
The `open` function also accepts a `mode` argument that specifies how we can interact with the file. The following options are supported:``` ========= =============================================================== Character Meaning --------- --------------------------------------------------------------- 'r'... | file1_contents = file1.read()
print(file1_contents) | amount,duration,rate,down_payment
100000,36,0.08,20000
200000,12,0.1,
628400,120,0.12,100000
4637400,240,0.06,
42900,90,0.07,8900
916000,16,0.13,
45230,48,0.08,4300
991360,99,0.08,
423000,27,0.09,47200
| MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
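A quick illustration of the most common modes, writing to a temporary file so the `./data` files stay untouched:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.txt')

with open(path, 'w') as f:   # 'w' creates/truncates for writing
    f.write('first line\n')
with open(path, 'a') as f:   # 'a' appends at the end
    f.write('second line\n')
with open(path, 'r') as f:   # 'r' reads (the default mode)
    contents = f.read()
print(contents)
```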
The file contains information about loans. It is a set of comma-separated values (CSV). > **CSVs**: A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. A CSV file typic... | file1.close() | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Once a file is closed, you can no longer read from it. | file1.read() | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Closing files automatically using `with`
To close a file automatically after you've processed it, you can open it using the `with` statement. | with open('./data/loans2.txt', 'r') as file2:
file2_contents = file2.read()
print(file2_contents) | amount,duration,rate,down_payment
828400,120,0.11,100000
4633400,240,0.06,
42900,90,0.08,8900
983000,16,0.14,
15230,48,0.07,4300
| MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Once the statements within the `with` block are executed, the `.close` method on `file2` is automatically invoked. Let's verify this by trying to read from the file object again. | file2.read() | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Reading a file line by line
File objects provide a `readlines` method to read a file line by line. | with open('./data/loans3.txt', 'r') as file3:
file3_lines = file3.readlines()
file3_lines
file3_lines = [line.strip() for line in file3_lines]
file3_lines | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Processing data from files
Before performing any operations on the data stored in a file, we need to convert the file's contents from one large string into Python data types. For the file `loans1.txt` containing information about loans in CSV format, we can do the following:
* Read the file line by line
* Parse the fir... | def parse_headers(header_line):
return header_line.strip().split(',') | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
The `strip` method removes any extra spaces and the newline character `\n`. The `split` method breaks a string into a list using the given separator (`,` in this case). | file3_lines[0]
headers = parse_headers(file3_lines[0])
headers | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
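For example, on a header line with stray whitespace:

```python
header_line = ' amount,duration,rate,down_payment \n'
parts = header_line.strip().split(',')
print(parts)  # ['amount', 'duration', 'rate', 'down_payment']
```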
Next, let's define a function `parse_values` that takes a line containing some data and returns a list of floating-point numbers. | def parse_values(data_line):
values = []
for item in data_line.strip().split(','):
values.append(float(item))
return values
file3_lines[1]
parse_values(file3_lines[1]) | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
The values were parsed and converted to floating point numbers, as expected. Let's try it for another line from the file, which does not contain a value for the down payment. | file3_lines[2]
parse_values(file3_lines[2]) | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
The code above leads to a `ValueError` because the empty string `''` cannot be converted to a float. We can enhance the `parse_values` function to handle this *edge case*. We will also handle the case where the value is not a float. | def parse_values(data_line):
values = []
for item in data_line.strip().split(','):
if item == '':
values.append(0.0)
else:
try:
values.append(float(item))
except ValueError:
values.append(item)
return values
file3_lines[2]
p... | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Next, let's define a function `create_item_dict` that takes a list of values and a list of headers as inputs and returns a dictionary with the values associated with their respective headers as keys. | def create_item_dict(values, headers):
result = {}
for value, header in zip(values, headers):
result[header] = value
return result | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Can you figure out what the Python built-in function `zip` does? Try out an example, or [read the documentation](https://docs.python.org/3.3/library/functions.html#zip). | for item in zip([1,2,3], ['a', 'b', 'c']):
print(item) | (1, 'a')
(2, 'b')
(3, 'c')
| MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Let's try out `create_item_dict` with a couple of examples. | file3_lines[1]
values1 = parse_values(file3_lines[1])
create_item_dict(values1, headers)
file3_lines[2]
values2 = parse_values(file3_lines[2])
create_item_dict(values2, headers) | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
As expected, the values and headers are combined to create a dictionary with the appropriate key-value pairs. We are now ready to put it all together and define the `read_csv` function. | def read_csv(path):
result = []
# Open the file in read mode
with open(path, 'r') as f:
# Get a list of lines
lines = f.readlines()
# Parse the header
headers = parse_headers(lines[0])
# Loop over the remaining lines
for data_line in lines[1:]:
# P... | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Let's try it out! | with open('./data/loans2.txt') as file2:
print(file2.read())
read_csv('./data/loans2.txt') | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
The file is read and converted to a list of dictionaries, as expected. The `read_csv` function is generic enough that it can parse any file in the CSV format, with any number of rows or columns. Here's the full code for `read_csv` along with the helper functions: | def parse_headers(header_line):
return header_line.strip().split(',')
def parse_values(data_line):
values = []
for item in data_line.strip().split(','):
if item == '':
values.append(0.0)
else:
try:
values.append(float(item))
except ValueEr... | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
Try to create small, generic, and reusable functions whenever possible. They will likely be useful beyond just the problem at hand and save you significant effort in the future. | import math
def loan_emi(amount, duration, rate, down_payment=0):
"""Calculates the equal monthly installment (EMI) for a loan.
Arguments:
amount - Total amount to be spent (loan + down payment)
duration - Duration of the loan (in months)
rate - Rate of interest (monthly)
do... | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |
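The function body is cut off above; it presumably applies the standard EMI formula, EMI = P * r * (1 + r)^n / ((1 + r)^n - 1), with P the borrowed principal (`amount - down_payment`), r the monthly rate, and n the duration in months. A re-implementation under that assumption:

```python
import math

def loan_emi(amount, duration, rate, down_payment=0):
    """Equal monthly installment for a loan of `amount - down_payment`
    over `duration` months at monthly interest `rate`."""
    loan_amount = amount - down_payment
    if rate == 0:
        emi = loan_amount / duration  # interest-free loan
    else:
        emi = loan_amount * rate * (1 + rate)**duration / ((1 + rate)**duration - 1)
    return math.ceil(emi)

print(loan_emi(1200, 12, 0))     # 100: interest-free
print(loan_emi(1200, 12, 0.01))  # slightly more with 1% monthly interest
```

Rounding up with `math.ceil` keeps the installment a whole amount while guaranteeing the loan is fully repaid within the duration.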
We can use this function to calculate EMIs for all the loans in a file. | loans2 = read_csv('./data/loans2.txt')
loans2
for loan in loans2:
loan['emi'] = loan_emi(
loan['amount'],
loan['duration'],
loan['rate']/12, # the CSV contains yearly rates
loan['down_payment']
)
loans2 | _____no_output_____ | MIT | python-os-and-filesystem.ipynb | Rakib1508/python-data-science |