Part 2. Make features

Calculate the number of cell phones per person, and add this column onto your dataframe. (You've calculated correctly if you get 1.220 cell phones per person in the United States in 2017.)
df.head()
df[df['country'] == 'United States']
df.dtypes
df['cell_phones_total'].value_counts().sum()

# Add the described feature; 1.220 for the United States in 2017
df['cell_phones_per_person'] = df['cell_phones_total'] / df['population_total']

condition = (df['country'] == 'United States') & (df['time'] == 2017)
columns = ['country', 'time', 'cell_phones_total', 'population_total']
subset = df[condition][columns]
subset.shape
subset.head()
#value = subset['c...
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Modify the `geo` column to make the geo codes uppercase instead of lowercase.
df.head(1)
df['geo'] = df['geo'].str.upper()
df.head()
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Part 3. Process data

Use the describe function to describe your dataframe's numeric columns, and then its non-numeric columns. (You'll see the time period ranges from 1960 to 2017, and there are 195 unique countries represented.)
# numeric columns
df.describe()

# non-numeric columns
df.describe(exclude='number')
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
In 2017, what were the top 5 countries with the most cell phones total?

Your list of countries should have these totals:

| country | cell phones total |
|:-------:|:-----------------:|
| ? | 1,474,097,000 |
| ? | 1,168,902,277 |
| ? | 458,923,202 |
| ? | 395,881,000 |
| ? | ... |
# This optional code formats float numbers with comma separators
pd.options.display.float_format = '{:,}'.format

condition = (df['time'] == 2017)
columns = ['country', 'cell_phones_total']
subset = df[condition][columns]
subset.head()
subset = subset.sort_values(by=['cell_phones_total'], ascending=False)
subset.head(5)
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
2017 was the first year that China had more cell phones than people. What was the first year that the USA had more cell phones than people?
condition = (df['country'] == 'United States')
columns = ['time', 'country', 'cell_phones_total', 'population_total']
subset1 = df[condition][columns]
subset1.sort_values(by='time', ascending=False).head(5)

# the way to get the answer via coding
df[(df.geo == 'USA') & (df.cell_phones_per_person > 1)].time.min()
# The y...
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Part 4. Reshape data

*This part is not needed to pass the sprint challenge, only to get a 3! Only work on this after completing the other sections.*

Create a pivot table:
- Columns: Years 2007—2017
- Rows: China, India, United States, Indonesia, Brazil (order doesn't matter)
- Values: Cell Phones Total

The table's shape sh...
# Comparing a Series to a tuple with == never matches; use .isin instead,
# and filter the full dataframe (subset1 is US-only), including India
countries = ['China', 'India', 'United States', 'Indonesia', 'Brazil']
condition = df['country'].isin(countries) & df['time'].between(2007, 2017)
columns = ['time', 'country', 'cell_phones_total']
subset2 = df[condition][columns]
subset2.pivot_table(index='country', columns='time', values='cell_phones_total')
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Sort these 5 countries by biggest increase in cell phones from 2007 to 2017. Which country had 935,282,277 more cell phones in 2017 versus 2007?
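A hedged sketch of one way to compute the 2007-to-2017 increase: pivot to wide form, subtract the two year columns, and sort. The tiny frame below uses made-up numbers in place of the real Gapminder data; only the column names (`country`, `time`, `cell_phones_total`) follow the cells above.

```python
import pandas as pd

# Toy stand-in for the Gapminder dataframe used elsewhere in this notebook
toy = pd.DataFrame({
    'country': ['A', 'A', 'B', 'B'],
    'time': [2007, 2017, 2007, 2017],
    'cell_phones_total': [100, 400, 250, 300],
})

# Wide table: one row per country, one column per year
wide = toy.pivot_table(index='country', columns='time',
                       values='cell_phones_total')

# Increase over the decade, biggest first
increase = (wide[2017] - wide[2007]).sort_values(ascending=False)
print(increase)
```

With the real data, the country at the top of `increase` is the answer to the question above.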
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
If you have the time and curiosity, what other questions can you ask and answer with this data?

Data Storytelling

In this part of the sprint challenge you'll work with a dataset from **FiveThirtyEight's article, [Every Guest Jon Stewart Ever Had On ‘The Daily Show’](https://fivethirtyeight.com/features/every-guest-jon-...
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

url = 'https://raw.githubusercontent.com/fivethirtyeight/data/master/daily-show-guests/daily_show_guests.csv'
df = pd.read_csv(url).rename(columns={'YEAR': 'Year', 'Raw_Guest_List': 'Guest'})

def get_occupation(group):
    if gr...
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Part 1 — What's the breakdown of guests’ occupations per year?

For example, in 1999, what percentage of guests were actors, comedians, or musicians? What percentage were in the media? What percentage were in politics? What percentage were from another occupation?

Then, what about in 2000? In 2001? And so on, up through ...
df.head()
df.describe()
df.describe(exclude='number')

df1 = pd.crosstab(df['Year'], df['Occupation'], normalize='index')
df1
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Part 2 — Recreate this explanatory visualization:
from IPython.display import display, Image

png = 'https://fivethirtyeight.com/wp-content/uploads/2015/08/hickey-datalab-dailyshow.png'
example = Image(png, width=500)
display(example)
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
**Hints:**
- You can choose any Python visualization library you want. I've verified the plot can be reproduced with matplotlib, pandas plot, or seaborn. I assume other libraries like altair or plotly would work too.
- If you choose to use seaborn, you may want to upgrade the version to 0.9.0.

**Expectations:** Your plot ...
display(example)

df2 = df1.drop(['Other'], axis=1)
df2

display(example)
plt.style.use('fivethirtyeight')
yax = ['0', '25', '50', '75', '100%']
ax = df2.plot()
ax.patch.set_alpha(0.1)
plt.title("Who Got To Be On 'The Daily Show'?", fontsize=18, x=-0.1, y=1.1, loc='left', fontweight='bo...
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
Example by Alex
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

x = np.arange(0, 10)
x
y = x**2
y
plt.plot(x, y)

y_labels = [f'{i}%' for i in y]
plt.yticks(y, y_labels);

# Label only the top tick with a percent sign
y_labels = [f'{i}' if i != 100 else f'{i}%' for i in range(0, 101, 10)...
_____no_output_____
MIT
tdu1s3.ipynb
cardstud/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
IFTTT - Trigger workflow

**Tags:** ifttt automation nocode

Input

Import library
from naas_drivers import ifttt
_____no_output_____
BSD-3-Clause
IFTTT/IFTTT_Trigger_workflow.ipynb
vivard/awesome-notebooks
Variables
event = "myevent"
key = "cl9U-VaeBu1**********"
data = {
    "value1": "Bryan",
    "value2": "Helmig",
    "value3": 27
}
_____no_output_____
BSD-3-Clause
IFTTT/IFTTT_Trigger_workflow.ipynb
vivard/awesome-notebooks
Model

Connect to IFTTT
result = ifttt.connect(key)
_____no_output_____
BSD-3-Clause
IFTTT/IFTTT_Trigger_workflow.ipynb
vivard/awesome-notebooks
Output

Display result
result = ifttt.send(event, data)
_____no_output_____
BSD-3-Clause
IFTTT/IFTTT_Trigger_workflow.ipynb
vivard/awesome-notebooks
Please enter the correct file path
train_dir = r'C:\Users\Ryukijano\Python_notebooks\Face_ Mask_ Dataset\Train'
validation_dir = r'C:\Users\Ryukijano\Python_notebooks\Face_ Mask_ Dataset\Validation'
test_dir = r'C:\Users\Ryukijano\Python_notebooks\Face_ Mask_ Dataset\Test'

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen ...
_____no_output_____
MIT
It-can-see-you.ipynb
Ryukijano/it-can-see-you
L15 - Model evaluation 2 (confidence intervals)

- Instructor: Dalcimar Casanova (dalcimar@gmail.com)
- Course website: https://www.dalcimar.com/disciplinas/aprendizado-de-maquina
- Bibliography: based on lectures of Dr. Sebastian Raschka
import numpy as np import matplotlib.pyplot as plt
_____no_output_____
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
from mlxtend.data import iris_data
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = iris_data()
print(np.shape(y))

X_train_valid, X_test, y_train_valid, y_test = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
X_train, X_valid, y_train, ...
Requirement already satisfied: hypopt in /usr/local/lib/python3.6/dist-packages (1.0.9) Requirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from hypopt) (1.19.5) Requirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from hypopt) (0.22.2.post1) R...
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
from hypopt import GridSearch
#from sklearn.model_selection import GridSearchCV

knn = KNeighborsClassifier()
param_grid = {'n_neighbors': [2, 3, 4, 5]}
grid = GridSearch(knn, param_grid=param_grid)
grid.fit(X_train, y_train, X_valid, y_valid)
100%|██████████| 4/4 [00:00<00:00, 265.79it/s]
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
print(grid.param_scores)
print(grid.best_params)
print(grid.best_score)
print(grid.best_estimator_)

clf = grid.best_estimator_

from sklearn.metrics import accuracy_score
y_test_pred = clf.predict(X_test)
acc_test = accuracy_score(y_test, y_test_pred)
print(acc_test)
0.9333333333333333
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
clf.fit(X_train_valid, y_train_valid)
_____no_output_____
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
y_test_pred = clf.predict(X_test) acc_test = accuracy_score(y_test, y_test_pred) print(acc_test)
0.9777777777777777
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
Confidence interval (via normal approximation)
# 95% confidence interval (z = 1.96)
ci_test = 1.96 * np.sqrt((acc_test * (1 - acc_test)) / y_test.shape[0])
test_lower = acc_test - ci_test
test_upper = acc_test + ci_test
print(test_lower, test_upper)

# 99% confidence interval (z = 2.58)
ci_test = 2.58 * np.sqrt((acc_test * (1 - acc_test)) / y_test.shape[0])
test_lower = acc_test - ci_test
test_upper = acc_test + ci_test
print(test_lower, test_upper)
0.921085060454202 1.0344704951013535
MIT
L15_model evaluation 2/code/L15_confidence intervals holdout.ipynb
pedrogomes-dev/MA28CP-Intro-to-Machine-Learning
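The cell above applies the same formula twice with different z-values, and its raw upper bound exceeds 1 (accuracy is a proportion, so that is impossible). A small helper, sketched here rather than taken from the original notebook, makes the z-value a parameter and clips the bounds to [0, 1]:

```python
import math

def normal_approx_ci(acc, n, z=1.96):
    """Normal-approximation interval for a proportion, clipped to [0, 1]."""
    half = z * math.sqrt(acc * (1 - acc) / n)
    return max(0.0, acc - half), min(1.0, acc + half)

# Same numbers as the test set above: 44/45 correct on 45 samples
lower, upper = normal_approx_ci(44 / 45, 45)
print(lower, upper)
```

With only 45 test samples the interval is wide and runs into the clip at 1.0, which is one reason later lectures move to resampling-based intervals.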
Inventory

The Inventory is arguably the most important piece of nornir. Let's see how it works. To begin with, the [inventory](../api/nornir/core/inventory.html#module-nornir.core.inventory) is comprised of [hosts](../api/nornir/core/inventory.html#nornir.core.inventory.Hosts), [groups](../api/nornir/core/inventory.html#n...
# hosts file
%highlight_file inventory/hosts.yaml
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
The hosts file is basically a map whose outermost keys are host names and whose values are `Host` objects. You can see the schema of the object by executing:
from nornir.core.inventory import Host
import json

print(json.dumps(Host.schema(), indent=4))
{ "name": "str", "connection_options": { "$connection_type": { "extras": { "$key": "$value" }, "hostname": "str", "port": "int", "username": "str", "password": "str", "platform": "str" } }, ...
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
The `groups_file` follows the same rules as the `hosts_file`.
# groups file
%highlight_file inventory/groups.yaml
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Finally, the defaults file has the same schema as the `Host` we described before but without outer keys to denote individual elements. We will see how the data in the groups and defaults file is used later on in this tutorial.
# defaults file
%highlight_file inventory/defaults.yaml
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Accessing the inventory

You can access the [inventory](../api/nornir/core/inventory.html#module-nornir.core.inventory) with the `inventory` attribute:
from nornir import InitNornir

nr = InitNornir(config_file="config.yaml")
print(nr.inventory.hosts)
{'host1.cmh': Host: host1.cmh, 'host2.cmh': Host: host2.cmh, 'spine00.cmh': Host: spine00.cmh, 'spine01.cmh': Host: spine01.cmh, 'leaf00.cmh': Host: leaf00.cmh, 'leaf01.cmh': Host: leaf01.cmh, 'host1.bma': Host: host1.bma, 'host2.bma': Host: host2.bma, 'spine00.bma': Host: spine00.bma, 'spine01.bma': Host: spine01.bma,...
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
The inventory has two dict-like attributes `hosts` and `groups` that you can use to access the hosts and groups respectively:
nr.inventory.hosts
nr.inventory.groups
nr.inventory.hosts["leaf01.bma"]
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Hosts and groups are also dict-like objects:
host = nr.inventory.hosts["leaf01.bma"]
host.keys()
host["site"]
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Inheritance model

Let's see how the inheritance model works by example. Let's start by looking again at the groups file:
# groups file
%highlight_file inventory/groups.yaml
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
The host `leaf01.bma` belongs to the group `bma`, which in turn belongs to the groups `eu` and `global`. The host `spine00.cmh` belongs to the group `cmh`, which doesn't belong to any other group. Data resolution works by iterating recursively over all the parent groups and trying to see if that parent group (or any of it...
leaf01_bma = nr.inventory.hosts["leaf01.bma"]
leaf01_bma["domain"]  # comes from the group `global`
leaf01_bma["asn"]  # comes from group `eu`
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Values in `defaults` will be returned if neither the host nor the parents have a specific value for it.
leaf01_cmh = nr.inventory.hosts["leaf01.cmh"]
leaf01_cmh["domain"]  # comes from defaults
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
If nornir can't resolve the data you should get a KeyError as usual:
try:
    leaf01_cmh["non_existent"]
except KeyError as e:
    print(f"Couldn't find key: {e}")
Couldn't find key: 'non_existent'
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
You can also try to access data without recursive resolution by using the `data` attribute. For example, if we try to access `leaf01_cmh.data["domain"]` we should get an error as the host itself doesn't have that data:
try:
    leaf01_cmh.data["domain"]
except KeyError as e:
    print(f"Couldn't find key: {e}")
Couldn't find key: 'domain'
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Filtering the inventory

So far we have seen that `nr.inventory.hosts` and `nr.inventory.groups` are dict-like objects that we can use to iterate over all the hosts and groups or to access any particular one directly. Now we are going to see how we can do some fancy filtering that will enable us to operate on groups of ...
nr.filter(site="cmh").inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
You can also filter using multiple `key=value` pairs:
nr.filter(site="cmh", role="spine").inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Filter is cumulative:
nr.filter(site="cmh").filter(role="spine").inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Or:
cmh = nr.filter(site="cmh")
cmh.filter(role="spine").inventory.hosts.keys()
cmh.filter(role="leaf").inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
You can also grab the children of a group:
nr.inventory.children_of_group("eu")
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Advanced filtering

Sometimes you need fancier filtering. For those cases you have two options:

1. Use a filter function.
2. Use a filter object.

Filter functions

The ``filter_func`` parameter lets you run your own code to filter the hosts. The function signature is as simple as ``my_func(host)`` where host is an objec...
def has_long_name(host):
    return len(host.name) == 11

nr.filter(filter_func=has_long_name).inventory.hosts.keys()

# Or a lambda function
nr.filter(filter_func=lambda h: len(h.name) == 9).inventory.hosts.keys()
_____no_output_____
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Filter Object

You can also use filter objects to incrementally build complex query objects. Let's see how it works by example:
# first you need to import the F object
from nornir.core.filter import F

# hosts in group cmh
cmh = nr.filter(F(groups__contains="cmh"))
print(cmh.inventory.hosts.keys())

# devices running either linux or eos
linux_or_eos = nr.filter(F(platform="linux") | F(platform="eos"))
print(linux_or_eos.inventory.hosts.keys())
# ...
dict_keys(['host1.cmh', 'host2.cmh', 'leaf00.cmh', 'leaf01.cmh'])
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
You can also access nested data and even check whether dicts/lists/strings contain elements. Again, let's see by example:
nested_string_asd = nr.filter(F(nested_data__a_string__contains="asd"))
print(nested_string_asd.inventory.hosts.keys())

a_dict_element_equals = nr.filter(F(nested_data__a_dict__c=3))
print(a_dict_element_equals.inventory.hosts.keys())

a_list_contains = nr.filter(F(nested_data__a_list__contains=2))
print(a_list_contains...
dict_keys(['host1.cmh', 'host2.cmh'])
Apache-2.0
docs/tutorial/inventory.ipynb
brigade-automation/brigade
Importing Libraries
# important packages
import pandas as pd               # data manipulation using dataframes
import numpy as np                # data statistical analysis
import seaborn as sns             # statistical data visualization
import cv2                        # image and video processing library
import matplotlib.pyplot as plt   # data visualisation
%matplotlib inline
...
/content
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Importing Dataset
df = pd.read_csv("/content/drive/My Drive/P1: Twitter Sentiment Analysis/train.txt")
df_test = pd.read_csv("/content/drive/My Drive/P1: Twitter Sentiment Analysis/test_samples.txt")
df.head()
df.shape
df_test.head()
df.info()
df["sentiment"].unique()
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Data Visualization
sns.countplot(df['sentiment'], label="Count")

positive = df[df["sentiment"] == 'positive']
negative = df[df["sentiment"] == 'negative']
neutral = df[df["sentiment"] == 'neutral']

positive_percentage = (positive.shape[0]/df.shape[0])*100
negative_percentage = (negative.shape[0]/df.shape[0])*100
neutral_percentage = (n...
Positve Tweets = 42.23% Negative Tweets = 15.78% Neutral Tweets = 41.99%
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Data Cleaning
df_test["sentiment"] = "NA"
df_total = pd.concat((df, df_test), ignore_index=True)
df_total.head()
df_total.shape

#### Removing Twitter Handles (@user)
import re  # needed by remove_pattern below

def remove_pattern(input_txt, pattern):
    r = re.findall(pattern, input_txt)
    for i in r:
        input_txt = re.sub(i, '', input_txt)
    return input_...
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Bag of Words Model

CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

cv = CountVectorizer()
X = cv.fit_transform(corpus).toarray()
len(X[0])
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
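As a rough illustration of what `CountVectorizer` computes with its defaults (lowercased tokens of two or more word characters, alphabetically sorted vocabulary, raw counts per document), here is a minimal pure-Python sketch; the real sklearn class adds many options (n-grams, stop words, sparse output) that this deliberately omits.

```python
import re
from collections import Counter

def fit_transform(corpus):
    """Toy bag-of-words: returns (vocabulary, count matrix as nested lists)."""
    # Tokens of 2+ word characters, lowercased (mimics sklearn's default pattern)
    tokenized = [re.findall(r'\b\w\w+\b', doc.lower()) for doc in corpus]
    vocab = sorted({tok for doc in tokenized for tok in doc})
    rows = []
    for doc in tokenized:
        counts = Counter(doc)
        rows.append([counts.get(tok, 0) for tok in vocab])
    return vocab, rows

vocab, X_toy = fit_transform(["good movie", "bad bad movie"])
print(vocab)  # ['bad', 'good', 'movie']
print(X_toy)  # [[0, 1, 1], [2, 0, 1]]
```

Each tweet in `corpus` above becomes one such count row, which is what `MultinomialNB` consumes later in the notebook.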
TfidfVectorizer
#from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
#tv = TfidfVectorizer()
#X = tv.fit_transform(corpus).toarray()
len(X[0])
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Data Splitting
X_train = X[:21465]
X_test = X[21465:]
y_train = df.iloc[:, 1].values
X_train.shape
y_train.shape

from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(X_train, y_train)
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Model Prediction
y_pred = classifier.predict(X_test)
print(y_pred)
print(y_pred.shape)

list1 = []
heading = ['tweet_id', 'sentiment']
list1.append(heading)
for i in range(len(y_pred)):
    sub = []
    sub.append(df_test["tweet_id"][i])
    sub.append(y_pred[i])
    list1.append(sub)
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Generate Submission File
import csv

with open('/content/drive/My Drive/P1: Twitter Sentiment Analysis/Models/NB_TV.csv', 'w', newline='') as fp:
    a = csv.writer(fp, delimiter=",")
    data = list1
    a.writerows(data)
_____no_output_____
MIT
Sentiment_Analysis_of_Twitter_using_Naive_Bayes.ipynb
shreenath2001/Sentiment_Analysis_of_Twitter
Optical Flow test

This notebook will try to follow optical flow. The initial code comes from: https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_lucas_kanade.html
%matplotlib inline
import cv2
import os
import sys
import numpy as np
import glob
import matplotlib.pyplot as plt

samplehash = '10a278dc5ebd2b93e1572a136578f9dbe84d10157cc6cca178c339d9ca762c52'
#'7fafc640d446cab1872e4376b5c2649f8c67e658b3fc89d2bced3b47c929e608'
files = sorted(glob.glob("../data/train/data/" + samp...
_____no_output_____
MIT
analysis/optical flow.ipynb
hitennirmal/goucher
Counterintuitive Comparisons

There are some comparisons in Python that are not very intuitive the first time we see them, but that are widely used, especially by more experienced programmers. It is good to know some examples and to always try to understand what a given comparison is checking.

Example 1: ...
faturamento = input('What was the store revenue this month?')
custo = input('What was the store cost this month?')
if faturamento and custo:  # ensures the user did not leave the values empty
    lucro = int(faturamento) - int(custo)
    print("The store profit was {} reais".format(lucro))
else:  # if the values...
What was the store revenue this month?500 What was the store cost this month? Fill in the revenue and cost correctly
MIT
if-else/comparacoes-contraintuitivas.ipynb
amarelopiupiu/python
Conditional Statements in Python

A conditional statement controls the flow of execution depending on some condition.

Python conditions

Python supports the usual logical conditions from mathematics:

| **Condition** | **Expression** |
|----:|:----:|
| Equal | a == b |
| Not Equal | a != b |
| Less than | a < b |
| Less than or equal to | a <= b |
| Greater than | a > b |
| Greater than or equal to | a >= b |
a = 2
b = 5

# Equal
a == b
# Not equal
a != b
# Less than
a < b
# Less than or equal to
a <= b
# Greater than
a > b
# Greater than or equal to
a >= b
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
Python Logical Operators:
- `and`: Returns True if both statements are true
- `or`: Returns True if one of the statements is true
- `not`: Reverses the result. Returns False if the result is true, and True if the result is false
a = 1
b = 2
c = 10

# True and True
a < c and b < c
# True and False
a < c and b > c
# True or False
a < c or b > c
# False or True
a > c or b < c
# True or True
a < c or b < c
# False or False
a > c or b > c
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
Using `not` before a boolean expression inverts it:
print(not False)
not(a < c)
not(a > c)
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
If statements
a = 10
b = 20
if b > a:
    print("The condition is True")
    print('All these sentences are executed!')
The condition is True All these sentences are executed!
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
Remember Python relies on indentation (whitespace at the beginning of a line) to define scope in the code. The same sentence, without indentation, raises an error.
if b > a: # This will raise an error
print("The condition is True")
print('All these sentences are executed')
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
When the condition is False, the sentence is not executed.
a = 10
b = 20
if b < a:
    print("The condition is False")
    print('These sentences are NOT executed!')
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
The `else` keyword catches anything which isn't caught by the preceding conditions.
a = 5
b = 10
if b < a:
    print("The condition is True.")
else:
    print("The condition is False.")
The condition is False.
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
The `elif` keyword is Python's way of saying "if the previous conditions were not true, then try this condition".
# using elif
a = 3
b = 3
if b > a:
    print("b is greater than a")
elif a == b:
    print("a and b are equal")

# using else
a = 6
b = 4
if b > a:
    print("b is greater than a")
elif a == b:
    print("a and b are equal")
else:
    print("a is greater than b")
a is greater than b
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
An arbitrary number of `elif` clauses can be specified. The `else` clause is optional. If it is present, there can be only one, and it must be specified last.
name = 'Anna'
if name == 'Maria':
    print('Hello Maria!')
elif name == 'Sarah':
    print('Hello Sarah!')
elif name == 'Anna':
    print('Hello Anna!')
elif name == 'Sofia':
    print('Hello Sofia!')
else:
    print("I do not know who you are!")

name = 'Julia'
if name == 'Maria':
    print('Hello Maria!')
elif name =...
Above 10, and also above 20.
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
The `pass` Statement: `if` statements cannot be empty, but if for some reason you have an `if` statement with no content, put in the `pass` statement to avoid getting an error.
a = 33
b = 200
if b > a:
    pass
else:
    print('b <= a')
_____no_output_____
MIT
02-Basic Python/07-Cond Stat.ipynb
Goliath-Research/Introduction-to-Data-Science
Numbers In Pi

[link](https://www.algoexpert.io/questions/Numbers%20In%20Pi)

My Solution
def numbersInPi(pi, numbers):
    # Write your code here.
    # brute force
    d1 = {number: True for number in numbers}
    minSpaces = [float('inf')]
    numbersInPiHelper(pi, d1, 0, minSpaces, 0)
    return minSpaces[0] if minSpaces[0] != float('inf') else -1

def numbersInPiHelper(pi, d1, startIdx, minSpaces, ...
_____no_output_____
MIT
algoExpert/numbers_in_pi/solution.ipynb
maple1eaf/learning_algorithm
Expert Solution
# O(n^3 + m) time | O(n + m) space, where n is the number of digits in Pi
# and m is the number of favorite numbers
# recursive solution
def numbersInPi(pi, numbers):
    numbersTable = {number: True for number in numbers}
    minSpaces = getMinSpaces(pi, numbersTable, {}, 0)
    return -1 if minSpaces == float("inf") else minSpa...
_____no_output_____
MIT
algoExpert/numbers_in_pi/solution.ipynb
maple1eaf/learning_algorithm
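Since both cells above are truncated, here is a compact memoized variant of the same idea, offered as a sketch (the snake_case name is mine, not AlgoExpert's): from position `i`, try every prefix that is a favorite number and recurse on the rest, caching results per index. It returns the minimum number of spaces, or -1 if `pi` cannot be segmented.

```python
from functools import lru_cache

def numbers_in_pi(pi, numbers):
    """Minimum spaces to split `pi` into favorite numbers, or -1 if impossible."""
    table = set(numbers)

    @lru_cache(maxsize=None)
    def min_spaces(i):
        # A segmentation into k chunks uses k - 1 spaces, so the empty
        # suffix contributes -1 (cancelling the last chunk's +1).
        if i == len(pi):
            return -1
        best = float('inf')
        for j in range(i + 1, len(pi) + 1):
            if pi[i:j] in table:
                best = min(best, 1 + min_spaces(j))
        return best

    result = min_spaces(0)
    return -1 if result == float('inf') else result

print(numbers_in_pi("3141", ["31", "41"]))  # one space: "31 41"
```

The memoization gives the same O(n^3 + m) bound as the expert solution above, with the cache playing the role of its explicit dictionary.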
Introduction to the Harmonic Oscillator

*Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html

This week we are going to begin studying molecular dynamics, which uses classical mechanics to study molecular systems. Our "hydrogen atom" in this sectio...
import sympy as sym
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Next we initialize pretty printing
sym.init_printing()
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Next we will set our symbols
t,a,b,c=sym.symbols("t,a,b,c")
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Now for something new. We can define functions using `sym.Function("f")`:
y = sym.Function("y")
y(t)
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Now, if I want to define a first or second derivative, I can use `sym.diff`:
sym.diff(y(t),(t,1)),sym.diff(y(t),(t,2))
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
My differential equation can be written as follows
dfeq = a*sym.diff(y(t), (t, 2)) + b*sym.diff(y(t), (t, 1)) - c
dfeq

sol = sym.dsolve(dfeq)
sol
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
The two constants $C_1$ and $C_2$ can be determined by setting boundary conditions. First, we can set the condition $y(t=0)=y_0$. The next initial condition we will set is $y'(t=0)=v_0$. To set up the equality we want to solve, we use `sym.Eq`. This function sets up an equality between the lhs and rhs of an equation:
# sym.Eq example
alpha, beta = sym.symbols("alpha,beta")
sym.Eq(alpha + 2, beta)
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Back to the actual problem
y0, v0 = sym.symbols("y_0,v_0")
ics = [sym.Eq(sol.args[1].subs(t, 0), y0),
       sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
We can use this result to first solve for $C_2$ and then solve for $C_1$. Or we can use sympy to solve it for us.
solved_ics = sym.solve(ics)
solved_ics
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Substitute the result back into $y(t)$
full_sol = sol.subs(solved_ics[0])
full_sol
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
We can plot this result too. Assume that $a,b,c=1$ and that the starting conditions are $y_0=0,v_0=0$. We will use two sample problems:
* case 1: initial position is nonzero and initial velocity is zero
* case 2: initial position is zero and initial velocity is nonzero
# Print plots
%matplotlib inline
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Initial velocity set to zero
case1 = sym.simplify(full_sol.subs({y0:0, v0:0, a:1, b:1, c:1}))
case1
sym.plot(case1.rhs)
sym.plot(case1.rhs, (t, -2, 2))
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Initial velocity set to one
case2 = sym.simplify(full_sol.subs({y0:0, v0:1, a:1, b:1, c:1}))
case2
sym.plot(case2.rhs, (t, -2, 2))  # plot the solution (rhs), not the lhs symbol
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
Calculate the phase space

As we will see in lecture, the states of our classical systems are defined as points in phase space, a hyperspace defined by ${{\bf{r}}^N},{{\bf{p}}^N}$. We will convert our sympy expression into a numerical function so that we can plot the path of $y(t)$ in phase space $y,y'$.
case1 # Import numpy library import numpy as np # Make numerical functions out of symbolic expressions yfunc=sym.lambdify(t,case1.rhs,'numpy') vfunc=sym.lambdify(t,case1.rhs.diff(t),'numpy') # Make list of numbers tlst=np.linspace(-2,2,100) # Import pyplot import matplotlib import matplotlib.pyplot as plt # Make plo...
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
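The `lambdify` step above needs SymPy; the same phase-space path can also be traced with the standard library alone. The closed form below is hand-derived for the damped oscillator $y'' + y' + y = 0$ with $y(0)=1$, $y'(0)=0$ (an assumption chosen for this sketch, not taken from the notebook's `case1`):

```python
import math

# Hand-derived solution of y'' + y' + y = 0 with y(0) = 1, y'(0) = 0
W = math.sqrt(3) / 2  # damped angular frequency

def y(t):
    return math.exp(-t / 2) * (math.cos(W * t) + math.sin(W * t) / math.sqrt(3))

def v(t):  # dy/dt, differentiated by hand
    return -(2 / math.sqrt(3)) * math.exp(-t / 2) * math.sin(W * t)

# Sample the phase-space path (y, y') on a time grid
path = [(y(ti), v(ti)) for ti in (i * 0.05 for i in range(200))]

# Damping shrinks the "energy-like" radius y^2 + v^2 as time advances
r0 = path[0][0] ** 2 + path[0][1] ** 2
r_late = path[-1][0] ** 2 + path[-1][1] ** 2
assert r_late < r0
```

Plotting `path` with matplotlib would reproduce the inward spiral that the lambdified version draws.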
Exercise 1.1 Change the initial starting conditions and see how that changes the plots. Make three different plots with different starting conditions
# Case 3: initial position 1, initial velocity 10, coefficients a, b, c all set to 5 case3 = sym.simplify(full_sol.subs({y0:1, v0:10, a:5, b:5, c:5})) case3 sym.plot(case3.rhs) sym.plot(case3.rhs,(t,-2,2)) case4 = sym.simplify(full_sol.subs({y0:5, v0:2, a:3, b:4, c:5})) case4 sym.plot(case4.rhs) sym.plot(case4.rhs,(t,-2,2)) case5 =...
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
2. Harmonic oscillator Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation$$ F = m a $$$$ F = - \omega_0^2 m x $$$$ a = - \omega_0^2 x $$$$ x(t)'' = - \omega_0^2 x $$ The final expression can be rearranged into a second order homogeneous differential e...
# Your code here m,t,omega=sym.symbols("m,t,omega") x=sym.Function("x") x(t) sym.diff(x(t),(t,1)),sym.diff(x(t),(t,2)) dfeq1=sym.diff(x(t),(t,2))+omega**2*x(t) dfeq1 sol1 = sym.dsolve(dfeq1) sol1 x0,v0=sym.symbols("x_0,v_0") ics1=[sym.Eq(sol1.args[1].subs(t, 0), x0), sym.Eq(sol1.args[1].diff(t).subs(t, 0), v0)] ic...
_____no_output_____
MIT
harmonic_oscillator.ipynb
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-daliahassan98
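Before reaching for `dsolve`, the known general solution $x(t) = x_0\cos(\omega_0 t) + (v_0/\omega_0)\sin(\omega_0 t)$ can be sanity-checked numerically: its second derivative should equal $-\omega_0^2\, x(t)$. A stdlib sketch (the numeric values of $\omega_0$, $x_0$, $v_0$ are arbitrary choices for the check):

```python
import math

omega, x0, v0 = 2.0, 1.0, 0.5  # arbitrary test values for this sketch

def x(t):
    # General solution of x'' = -omega**2 * x with x(0) = x0, x'(0) = v0
    return x0 * math.cos(omega * t) + (v0 / omega) * math.sin(omega * t)

# Initial position holds exactly
assert abs(x(0.0) - x0) < 1e-12

# Finite-difference second derivative matches -omega**2 * x(t)
h, t = 1e-5, 0.8
d2x = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
assert abs(d2x + omega ** 2 * x(t)) < 1e-4
```

The same check applied to the symbolic `dsolve` answer (after substituting the initial conditions) should agree term by term.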
_Python help: When running the notebook for the first time, make sure to run all cells to be able to make changes in the notebook. Hit Shift+Enter to run a cell or click on the top menu: Kernel > Restart & Run All > Restart and Run All Cells to rerun the whole notebook. If you make any changes in a cell, rerun that cell._ Me...
# NGC 5533 r_NGC5533, v_NGC5533, v_err_NGC5533 = lg.NGC5533['m_radii'],lg.NGC5533['m_velocities'],lg.NGC5533['m_v_errors'] # NGC 891 r_NGC0891, v_NGC0891, v_err_NGC0891 = lg.NGC0891['m_radii'],lg.NGC0891['m_velocities'],lg.NGC0891['m_v_errors'] # NGC 7814 r_NGC7814, v_NGC7814, v_err_NGC7814 = lg.NGC7814['m_radii'],lg...
_____no_output_____
MIT
binder/DM_workshop_082721/Interactive_Measured_Data_Plotting.ipynb
villano-lab/galactic-spin
Plot measured data with errorbars Measured data points of 13 galaxies are plotted below.1. __Change the limits of the x-axis to zoom in and out of the graph.__ _Python help: change the limits of the x-axis by modifying the two numbers (left and right) of the line: plt.xlim then rerun the notebook or the cell._ 2. __Fi...
# Define radius for plotting r = np.linspace(0,100,100) # Plot plt.figure(figsize=(10.0,7.0)) # size of the plot plt.title('Measured data of multiple galaxies', fontsize=14) # giving the plot a title plt.xlabel('Radius (kpc)', fontsize=12) # labeling t...
Execution time: 8.77 minutes
MIT
binder/DM_workshop_082721/Interactive_Measured_Data_Plotting.ipynb
villano-lab/galactic-spin
Import packages
# import sklearn import numpy as np import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Define two functions for visualization
# from matplotlib.colors import ListedColormap def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True): x1s = np.linspace(axes[0], axes[1], 100) x2s = np.linspace(axes[2], axes[3], 100) x1, x2 = np.meshgrid(x1s, x2s) X_new = np.c_[x1.ravel(), x2.ravel()] ...
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Load the Iris data from sklearn. Use petal length and width as the training input.
from sklearn.datasets import load_iris iris = load_iris() iris.feature_names X = iris.data[:, 2:] y = iris.target
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Now fit a decision tree classifier. Set max depth at 2.
from sklearn.tree import DecisionTreeClassifier clf_tree = DecisionTreeClassifier(max_depth=2) clf_tree.fit(X, y)
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
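By default, `DecisionTreeClassifier` chooses each split to minimize weighted Gini impurity. A pure-Python sketch of that criterion on toy 1-D data (an illustration of the idea, not scikit-learn's actual search):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class fractions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Exhaustively search thresholds for the one minimizing weighted Gini."""
    best = (float("inf"), None)
    for thr in sorted(set(xs)):
        left = [yv for xv, yv in zip(xs, ys) if xv <= thr]
        right = [yv for xv, yv in zip(xs, ys) if xv > thr]
        if not left or not right:
            continue  # a split must leave samples on both sides
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(xs)
        best = min(best, (score, thr))
    return best

# Toy data: class 0 below 2.5, class 1 above — the split at 2.0 is perfect
xs = [1.0, 1.5, 2.0, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 1, 1]
score, thr = best_split(xs, ys)
assert score == 0.0 and thr == 2.0
```

A real tree repeats this search recursively on each side until a stopping rule (such as `max_depth=2`) is hit.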
Now visualize the model with the plot_decision_boundary function.
plt.figure(figsize=(10, 6)) plot_decision_boundary(clf_tree, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True)
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Predict the probability of each class for [5, 1.5]
clf_tree.predict_proba([[5, 1.5]])
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
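For a decision tree, `predict_proba` is just the fraction of training samples of each class in the leaf that the query point falls into. A pure-Python sketch of that computation (the leaf counts below are made up for illustration, not read from the fitted iris tree):

```python
from collections import Counter

def leaf_proba(leaf_labels, classes):
    """Class probabilities = class counts in the leaf / leaf size."""
    counts = Counter(leaf_labels)
    n = len(leaf_labels)
    return [counts.get(c, 0) / n for c in classes]

# Hypothetical leaf holding 0 class-0, 49 class-1, and 5 class-2 samples
leaf = [1] * 49 + [2] * 5
probs = leaf_proba(leaf, classes=[0, 1, 2])
assert probs[0] == 0.0
assert abs(probs[1] - 49 / 54) < 1e-12
```

Whatever leaf the point `[5, 1.5]` lands in, the probabilities returned above come from exactly this kind of count ratio.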
Run the next cell to generate 100 moon data points with noise=0.25 and random_state=53
from sklearn.datasets import make_moons X, y = make_moons(n_samples=100, noise=0.25, random_state=53)
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Now fit two decision tree models. One has no restriction, and the other has min_samples_leaf = 4
clf_tree = DecisionTreeClassifier() clf_tree_4 = DecisionTreeClassifier(min_samples_leaf=4) clf_tree.fit(X, y) clf_tree_4.fit(X, y)
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Now use the plot_decision_boundary function to visualize and compare these two models. Check for overfitting.
limit = [X[:, 0].min(), X[:, 0].max(), X[:, 1].min(), X[:, 1].max()] plt.figure(figsize=(12, 6)) plt.subplot(121) plot_decision_boundary(clf_tree, X, y, axes=limit, iris=False) plt.title('no restriction') plt.subplot(122) plot_decision_boundary(clf_tree_4, X, y, axes=limit, iris=False) plt.title('min_samples_leaf=4')
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Regression Run the next cell to generate synthetic data
# np.random.seed(42) m = 200 X = np.random.rand(m, 1) y = 4 * (X - 0.5) ** 2 y = y + np.random.randn(m, 1) / 10 y = y.ravel()
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Fit four regression trees. The first three have max_depth of 2, 3, and 5; the last one has no restriction.
from sklearn.tree import DecisionTreeRegressor reg_tree_2 = DecisionTreeRegressor(max_depth=2) reg_tree_3 = DecisionTreeRegressor(max_depth=3) reg_tree_5 = DecisionTreeRegressor(max_depth=5) reg_tree_none = DecisionTreeRegressor() reg_tree_2.fit(X, y) reg_tree_3.fit(X, y) reg_tree_5.fit(X, y) reg_tree_none.fit(X, y)
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
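A regression tree chooses each split to minimize the squared error around the two leaf means, rather than Gini impurity. A minimal pure-Python sketch of that criterion (an illustration of the idea, not scikit-learn's actual algorithm):

```python
def sse(vals):
    """Sum of squared errors around the mean of a list of targets."""
    if not vals:
        return 0.0
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def best_regression_split(xs, ys):
    """Threshold minimizing the total SSE of the two resulting leaves."""
    best = (float("inf"), None)
    for thr in sorted(set(xs))[:-1]:  # the largest value would leave one side empty
        left = [yv for xv, yv in zip(xs, ys) if xv <= thr]
        right = [yv for xv, yv in zip(xs, ys) if xv > thr]
        best = min(best, (sse(left) + sse(right), thr))
    return best

# Targets follow y = 4*(x - 0.5)**2, the same shape as the synthetic data above
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [4 * (xv - 0.5) ** 2 for xv in xs]
err, thr = best_regression_split(xs, ys)
assert err < sse(ys)  # any useful split reduces the total squared error
```

Increasing `max_depth` lets the tree keep subdividing, which is why the unrestricted fit traces the noise in the plots.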
Now visualize these four trees with the plot_regression_predictions function
limit = [X.min(), X.max(), y.min(), y.max()] plt.figure(figsize=(10, 10)) plt.subplot(221) plot_regression_predictions(reg_tree_2, X, y, axes=limit) plt.subplot(222) plot_regression_predictions(reg_tree_3, X, y, axes=limit) plt.subplot(223) plot_regression_predictions(reg_tree_5, X, y, axes=limit) plt.subplot(224) plot...
_____no_output_____
MIT
Chapter6.ipynb
VandyChris/HandsOnMachineLearning
Read the actual Bird Data CSV Convert Cornell Bird Migration CSV files to CZML. Trying out the CZML Python package, installed from pip until we can build our own conda package
import pandas as pd import datetime as dt import numpy as np # parser to convert integer yeardays to datetimes in 2015 def parse(day): date = dt.datetime(2015,1,1,0,0) + dt.timedelta(days=(day.astype(np.int32)-1)) return date def csv_to_position(file='Acadian_Flycatcher.csv'): df = pd.read_csv(file, parse_d...
_____no_output_____
CC0-1.0
birds/bird_csv_to_czml.ipynb
rsignell-usgs/CZML
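The yearday arithmetic inside `parse` can be exercised with the standard library alone. A minimal sketch of the same conversion (the helper name `yearday_to_date` is mine, not from the notebook):

```python
import datetime as dt

def yearday_to_date(day, year=2015):
    """Convert a 1-based day-of-year integer to a datetime in the given year."""
    return dt.datetime(year, 1, 1) + dt.timedelta(days=day - 1)

# Day 1 is January 1st; day 32 is February 1st; day 365 is December 31st
# (2015 is not a leap year)
assert yearday_to_date(1) == dt.datetime(2015, 1, 1)
assert yearday_to_date(32) == dt.datetime(2015, 2, 1)
assert yearday_to_date(365) == dt.datetime(2015, 12, 31)
```

The pandas version in the notebook does the same thing per row, with the yearday column cast to an integer first.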
CoreNLP running on a Python server
!echo "Downloading CoreNLP..." !wget "http://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip" -O corenlp.zip !unzip corenlp.zip !mv ./stanford-corenlp-full-2018-10-05 ./corenlp # Set the CORENLP_HOME environment variable to point to the installation location import os os.environ["CORENLP_HOME"] = "./cor...
<stanfordnlp.server.client.CoreNLPClient object at 0x7f6ed0cf68d0> Starting server with command: java -Xmx4G -cp ./corenlp/* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9001 -timeout 60000 -threads 5 -maxCharLength 100000 -quiet True -serverProperties corenlp_server-5f4e4d7044944e52.props -preload tokenize,ss...
MIT
Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb
gagan94/PreSumm