markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
For comparison, import the `numpy` module as well. | import numpy as np
np.pi
sym.pi
| _____no_output_____ | BSD-3-Clause | 45_sympy/10_sympy.ipynb | kangwon-naver/nmisp |
Euler's formula $$e^{\pi i} + 1 = 0$$ | np.exp(np.pi * 1j) + 1
sym.exp(sym.pi * 1j) + 1
sym.simplify(_)
Infinity | np.inf, np.inf > 999999
sym.oo, sym.oo > 999999
Square root. Let's find the square root of ten. | np.sqrt(10)
sym.sqrt(10)
Use the `evalf()` method to inspect the result numerically. | sym.sqrt(10).evalf()
sym.sqrt(10).evalf(30)
Let's square the square root of ten. | print(f"np.sqrt(10) ** 2 = {np.sqrt(10) ** 2}")
sym.sqrt(10) ** 2
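The gap between the two results above comes from floating-point rounding: `np.sqrt(10)` stores a binary64 approximation, while `sym.sqrt(10)` stays symbolic and exact. A minimal self-contained sketch of the same effect, using the standard library's `math.sqrt` as a stand-in for `np.sqrt`:

```python
import math

# math.sqrt(10) is a binary64 approximation of the irrational sqrt(10),
# so squaring it drifts slightly away from 10. SymPy's sqrt(10)**2, in
# contrast, simplifies symbolically to exactly 10.
approx = math.sqrt(10) ** 2
print(approx == 10)                  # False on IEEE-754 doubles
print(abs(approx - 10) < 1e-9)       # True: the error is tiny but nonzero
```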
What do you think about the difference between the results above? Fractions. Let's think about 15/11. | num = 15
den = 11
division = num / den
division
print(division * den)
import fractions
fr_division = fractions.Fraction(num, den)
fr_division
fr_division * den
sym_division = sym.Rational(num, den)
sym_division
sym_division * den
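The contrast this cell is after can be reproduced with the standard library alone: a binary float cannot represent 15/11 exactly, while `fractions.Fraction` (like `sym.Rational`) keeps the exact ratio.

```python
from fractions import Fraction

num, den = 15, 11
# Float division rounds 15/11 to the nearest binary64 value, so the
# round trip back through * den may carry a tiny rounding error.
float_roundtrip = (num / den) * den
print(float_roundtrip)                 # close to 15, but not guaranteed exact

# Fraction stores the exact ratio, so the round trip is exact.
exact_roundtrip = Fraction(num, den) * den
print(exact_roundtrip == 15)           # True
```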
What do you think about the differences between the results above? Expressions with variables. Define the variables to use. | a, b, c, x = sym.symbols('a b c x')
theta, phi = sym.symbols('theta phi')
Let's take a look at the variables. | a, b, c, x
theta, phi
Let's combine the variables into new expressions. | y = a * x + b
y
z = a * x * x + b * x + c
z
w = a * sym.sin(theta) ** 2 + b
w
p = (x - a) * (x - b) * (x - c)
p
sym.expand(p, x)
sym.collect(_, x)
$$\frac{a + ab}{a}$$ | sym.simplify((a + a * b) / a)
Creating `sympy` symbols with ranges | sym.symbols('i:n')
sym.symbols('z1:3')
sym.symbols('w(:c)')
sym.symbols('a(:2)(:3)')
Plots | import sympy.plotting as splot
splot.plot(sym.sin(x));
import mpmath
splot.plot(sym.sin(mpmath.radians(x)), (x, -360, 360));
splot.plot_parametric((sym.cos(theta), sym.sin(theta)), (theta, -sym.pi, sym.pi));
splot.plot_parametric(
16 * (sym.sin(theta)**3),
13 * sym.cos(theta) - 5 * sym.cos(2*theta) - 2 ... | _____no_output_____ | BSD-3-Clause | 45_sympy/10_sympy.ipynb | kangwon-naver/nmisp |
3D plots | x, y = sym.symbols('x y')
splot.plot3d(sym.cos(x) + sym.sin(y), (x, -5, 5), (y, -5, 5));
splot.plot3d_parametric_line(x, 25-x**2, 25-x**2, (x, -5, 5));
u, v = sym.symbols('u v')
splot.plot3d_parametric_surface(u + v, sym.sin(u), sym.cos(u), (u, -1, 1), (v, -1, 1));
Limits $$\lim_{x \to 0} \frac{\sin x}{x}$$ | sym.limit(sym.sin(x) / x, x, 0)
$$\lim_{x \to \infty} x$$ | sym.limit(x, x, sym.oo)
$$\lim_{x \to \infty} \frac{1}{x}$$ | sym.limit(1 / x, x, sym.oo)
$$\lim_{x \to 0} x^x$$ | sym.limit(x ** x, x, 0)
Calculus | z
$$\frac{dz}{dx} =\frac{d}{dx} \left( a x^2 + bx + c \right)$$ | z.diff(x)
$$\int z \, dx = \int \left( a x^2 + bx + c \right) dx$$ | sym.integrate(z, x)
w
w.diff(theta)
sym.integrate(w, theta)
Definite integrals | sym.integrate(w, (theta, 0, sym.pi))
Roots | z
z_sol_list = sym.solve(z, x)
z_sol_list
sym.solve(2* sym.sin(theta) ** 2 - 1, theta)
Code generation | print(sym.python(z_sol_list[0]))
import sympy.utilities.codegen as sc
[(c_name, c_code), (h_name, c_header)] = sc.codegen(
("z_sol", z_sol_list[0]),
"C89",
"test"
)
c_name
print(c_code)
h_name
print(c_header)
Equation solving $$x^4=1$$ | sym.solve(x ** 4 - 1, x)
sym.solveset(x ** 4 - 1, x)
$$e^x=-1$$ | sym.solve(sym.exp(x) + 1, x)
$$x^4 - 3x^2 +1$$ | f = x ** 4 - 3 * x ** 2 + 1
sym.factor(f)
sym.factor(f, modulus=5)
Boolean equations | sym.satisfiable(a & b)
sym.satisfiable(a ^ b)
Systems of equations | a1, a2, a3 = sym.symbols('a1:4')
b1, b2, b3 = sym.symbols('b1:4')
c1, c2 = sym.symbols('c1:3')
x1, x2 = sym.symbols('x1:3')
eq1 = sym.Eq(
a1 * x1 + a2 * x2,
c1,
)
eq1
eq2 = sym.Eq(
b1 * x1 + b2 * x2,
c2,
)
eq2
eq_list = [eq1, eq2]
eq_list
sym.solve(eq_list, (x1, x2))
Matrices | identity = sym.Matrix([[1, 0], [0, 1]])
identity
A = sym.Matrix([[1, a], [b, 1]])
A
A * identity
A * A
A ** 2
Differential equations $$\frac{d^2}{dx^2}f(x) + f(x)$$ | f = sym.Function('f', real=True)
(f(x).diff(x, x) + f(x))
sym.dsolve(f(x).diff(x, x) + f(x))
Mechanical vibration $$m \frac{d^2x(t)}{dt^2} + c \frac{dx(t)}{dt} + k x(t) = 0$$ | m, c, k, t = sym.symbols('m c k t')
x = sym.Function('x', real=True)
vib_eq = m * x(t).diff(t, t) + c * x(t).diff(t) + k * x(t)
vib_eq
result = sym.dsolve(vib_eq)
result
sym.simplify(result)
Forced vibration $$m \frac{d^2x(t)}{dt^2} + c \frac{dx(t)}{dt} + k x(t) = \sin(t)$$ | forced_vib_eq = m * x(t).diff(t, t) + c * x(t).diff(t) + k * x(t) - sym.sin(t)
forced_vib_eq
result = sym.dsolve(forced_vib_eq)
result
sym.simplify(result)
References * SymPy Development Team, SymPy 1.4 documentation, sympy.org, 2019-04-10. [Online] Available: https://docs.sympy.org/latest/index.html. * SymPy Development Team, SymPy Tutorial, SymPy 1.4 documentation, sympy.org, 2019-04-10. [Online] Available: https://docs.sympy.org/latest/tutorial/index.html. * d84_n... | # stackoverflow.com/a/24634221
import os
os.system("printf '\a'");
| _____no_output_____ | BSD-3-Clause | 45_sympy/10_sympy.ipynb | kangwon-naver/nmisp |
Intersection CSV to GeoJSON Conversion Script. Input: a list of intersections as a CSV file in the format: ID, Name1, Name2, Latitude, Longitude. Where Name1 could be the north/south street name and Name2 the east/west street name. CSV input example: 2,California Blvd,Ygnacio Valley Rd,37.904976, -122.065751. Outputs GeoJSON feat... | #imports
from geojson import Point, Feature, FeatureCollection, dump
import csv
import requests
import urllib
import os, sys
#Variables
features = []
input_filename = "../WalnutCreekIntersections.csv"
output_filename = "../WalnutCreekIntersections-April_2022.geojson"
url = r'https://nationalmap.gov/epqs/pqs.php?'
d... | Processing line #1
Processing line #2
Processing line #3
Processing line #4
Processing line #5
Processing line #6
Processing line #7
Processing line #8
Processing line #9
Processing line #10
Processing line #11
Processing line #12
Processing line #13
Processing line #14
Processing line #15
Processing line #16
Processin... | MIT | Tools/.ipynb_checkpoints/Intersections CSV to GeoJson-checkpoint.ipynb | redmondWC/Intersections-Walnut-Creek-CA |
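The conversion loop described above can be sketched with the standard library alone (no `geojson` dependency). The column order and example row follow the CSV format stated in the notebook; the elevation lookup against the EPQS service is left out, and the function name here is a hypothetical stand-in, not the notebook's own code:

```python
import csv
import io
import json

def rows_to_geojson(csv_text):
    """Turn 'ID,Name1,Name2,Latitude,Longitude' rows into a GeoJSON FeatureCollection."""
    features = []
    for row in csv.reader(io.StringIO(csv_text)):
        fid, name1, name2, lat, lon = row
        features.append({
            "type": "Feature",
            # GeoJSON positions are ordered [longitude, latitude]
            "geometry": {"type": "Point",
                         "coordinates": [float(lon), float(lat)]},
            "properties": {"id": int(fid), "name1": name1, "name2": name2},
        })
    return {"type": "FeatureCollection", "features": features}

sample = "2,California Blvd,Ygnacio Valley Rd,37.904976,-122.065751"
print(json.dumps(rows_to_geojson(sample), indent=2))
```

Writing `json.dumps(...)` of the returned dict to a `.geojson` file gives the same output structure the notebook produces.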
Evasion and Poisoning Attacks on the MNIST dataset. In this tutorial we show how to load the **MNIST handwritten digits dataset** and use it to train a Support Vector Machine (SVM). Later we are going to perform evasion and poisoning attacks against the trained classifier, as previously described in [evasion](03-Evasion.ipy... | %%capture --no-stderr --no-display
# NBVAL_IGNORE_OUTPUT
try:
import secml
except ImportError:
%pip install git+https://gitlab.com/secml/secml | _____no_output_____ | Apache-2.0 | tutorials/06-MNIST_dataset.ipynb | zangobot/secml |
Training of the classifierFirst, we load the dataset and train the classifier. For this tutorial, we only consider 2 digits, the 5 (five) and the 9 (nine). | # NBVAL_IGNORE_OUTPUT
from secml.data.loader import CDataLoaderMNIST
# MNIST dataset will be downloaded and cached if needed
loader = CDataLoaderMNIST()
random_state = 999
n_tr = 100 # Number of training set samples
n_val = 500 # Number of validation set samples
n_ts = 500 # Number of test set samples
digits = (5... | Training of classifier...
Accuracy on test set: 93.60%
Evasion attack with the MNIST dataset. Let's define the attack parameters. First, we choose to generate an *l2* perturbation within a maximum ball of radius `eps = 2.5` around the initial points. Second, we also add lower/upper bounds, as our feature space is limited to `[0, 1]`. Lastly, as we are not interested in generat... | # For simplicity, let's attack a subset of the test set
attack_ds = ts[:25, :]
noise_type = 'l2' # Type of perturbation 'l1' or 'l2'
dmax = 2.5 # Maximum perturbation
lb, ub = 0., 1. # Bounds of the attack space. Can be set to `None` for unbounded
y_target = None # None if `error-generic` or a class label for `err... | Attack started...
Attack complete!
Accuracy on reduced test set before attack: 100.00%
Accuracy on reduced test set after attack: 12.00%
We can observe that the classifier trained on the MNIST dataset has been *successfully evaded* by the adversarial examples generated by our attack. Let's now visualize a few of the adversarial examples. The first row shows the original samples and the second row the adversarial examples. Above each digit it is shown t... | from secml.figure import CFigure
# Only required for visualization in notebooks
%matplotlib inline
# Let's define a convenience function to easily plot the MNIST dataset
def show_digits(samples, preds, labels, digs, n_display=8):
samples = samples.atleast_2d()
n_display = min(n_display, samples.shape[0])
f... | _____no_output_____ | Apache-2.0 | tutorials/06-MNIST_dataset.ipynb | zangobot/secml |
Poisoning attack with the MNIST dataset. For poisoning attacks the parameters are much simpler. We set the bounds of the attack space and the number of adversarial points to generate, 15 in this example (as set in the code). Lastly, we choose the solver parameters for this specific optimization problem. *Please note that the attack using t... | lb, ub = 0., 1.  # Bounds of the attack space. Can be set to `None` for unbounded
n_poisoning_points = 15 # Number of poisoning points to generate
# Should be chosen depending on the optimization problem
solver_params = {
'eta': 0.25,
'eta_min': 2.0,
'eta_max': None,
'max_iter': 100,
'eps': 1e-6
}... | Attack started...
Attack complete!
Original accuracy on test set: 93.60%
Accuracy after attack on test set: 50.40%
_Speech Processing: TTS_ | # run this first
import matplotlib.pyplot as plt
import numpy as np
import math
import IPython | _____no_output_____ | MIT | tts/tts-1-1-entropy.ipynb | laic/uoe_speech_processing_course |
1 Entropy. Learning outcomes: * Understand that entropy measures uncertainty * Gain some intuitions about how entropy behaves * See that entropy can be reduced by splitting a data set into two partitions. Need to know: * Topic videos: Decision tree, Learning decision trees. Our goal in this sequence of notebooks is to understa... | from IPython.display import HTML
IPython.display.IFrame(width="640",height="428",src="https://fast.wistia.net/embed/iframe/utpd6km04m") | _____no_output_____ | MIT | tts/tts-1-1-entropy.ipynb | laic/uoe_speech_processing_course |
and here's a Python function to compute entropy from an array of counts, or probabilities. (It works for either case.) | def entropy(counts):
""" accepts an array of counts or probabilities and computes -1 * sum {p * log p}"""
H=0 # entropy
total_count=float(sum(counts))
for c in counts:
if c > 0: # cannot take log of zero
p=float(c)/total_count
H=H + p * math.log2(p)
H=H*-1.0
retur... | _____no_output_____ | MIT | tts/tts-1-1-entropy.ipynb | laic/uoe_speech_processing_course |
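As a quick sanity check on the definition above, a uniform distribution over k classes has entropy log2(k), and a distribution with all mass on one class has entropy 0. This self-contained sketch re-implements the formula with only the standard library (the notebook's own helper is truncated in this dump):

```python
import math

def shannon_entropy(counts):
    """-sum p*log2(p) over nonzero entries; accepts counts or probabilities."""
    total = float(sum(counts))
    h = sum((c / total) * math.log2(c / total) for c in counts if c > 0)
    return -h if h else 0.0

print(shannon_entropy([1, 1]))        # 1.0  (uniform over 2 classes)
print(shannon_entropy([1, 1, 1, 1]))  # 2.0  (uniform over 4 classes)
print(shannon_entropy([10, 0]))       # 0.0  (no uncertainty at all)
```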
1.2 Get an intuitive understanding of entropy To help you visualise probability distributions, here's a function for plotting one. It also computes the entropy of the distribution. | def plot_distribution(labels,counts,title='Distribution'):
if sum(counts) == 0:
print("Cannot handle this case!")
return 0
total_count=float(sum(counts))
pdf = [c / total_count for c in counts]
x_pos = [i for i, _ in enumerate(labels)]
plt.bar(x_pos, pdf, color='blue')
plt.title(... | _____no_output_____ | MIT | tts/tts-1-1-entropy.ipynb | laic/uoe_speech_processing_course |
1.2.1 What entropy measures about a probability distribution. Now find out by experimentation what the **highest and lowest values of entropy** are. The variable (which will be called the predictee when we build a Decision Tree) here is "Fruit" and it has two possible values (= classes) of "Apple" and "Orange". You are ... | # the labels of the two classes (i.e., the values the categorical random variable "Fruit" can take)
labels = ['Apple', 'Orange']
# the number of examples of each class in our data set
counts = [4, 10] # <- play with the distribution of counts
plot_distribution(labels,counts,"Fruit")
| _____no_output_____ | MIT | tts/tts-1-1-entropy.ipynb | laic/uoe_speech_processing_course |
1.2.2 Try different numbers of classesWhat is the relationship between the number of classes and the **highest value of entropy** you can acheive?(Hint: try with 2, 4, and 8 classes, as well as other numbers.) | # add and remove classes to change how many there are
labels = ['k', 's', 'ʃ', 'tʃ']
# the number of counts must match the number of classes
counts = [11180, 2185, 1170, 2005] # <- play with the distribution of counts
# for example, how about a distribution over 5 classes
labels = ['a', 'b', 'c', 'd', 'e']
counts = [... | _____no_output_____ | MIT | tts/tts-1-1-entropy.ipynb | laic/uoe_speech_processing_course |
Now go back to the equation and relate what you have found by experimentation to the terms in the equation. Where in the equation is the number of classes that you just varied? Where in the equation is the probability distribution over those classes? 1.3 Reduce entropyFrom your experiments above you should have learne... | labels = ['k', 's', 'ʃ', 'tʃ'] # do not change this
counts = np.array([11180, 2185, 1170, 2005]) # do not change this
print("The distribution before the split was",counts)
print("and the entropy of that distribution is {:.3} bits".format(entropy(counts)))
plot_distribution(labels,counts,"Original distribution") | The distribution before the split was [11180 2185 1170 2005]
and the entropy of that distribution is 1.41 bits
Now we split the above counts into two partitions. We'll call then 'left' and 'right' because we're eventually going to build a decision tree (not yet though!). | # play around with these values (they can't be larger than the original counts above though)
left_counts = np.array([46, 1339, 12, 104])
right_counts = np.subtract(counts,left_counts) # this is the remaining data; do not change this line
print("The two distributions after the split are",left_counts,"and",right_counts... | The two distributions after the split are [ 46 1339 12 104] and [11134 846 1158 1901]
Entropies of the two distributions are 0.624 bits and 1.22 bits.
Total entropy of the two distributions is 1.16 bits
which is a reduction of 0.244 bits compared to the original distribution.
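The "total entropy of the two distributions" printed above is the count-weighted average of the two partition entropies, and the reduction relative to the original distribution is the information gain of the split. A self-contained sketch of that computation, using the counts from the cell above (the entropy helper is re-implemented here so the block runs on its own):

```python
import math

def shannon_entropy(counts):
    total = float(sum(counts))
    h = sum((c / total) * math.log2(c / total) for c in counts if c > 0)
    return -h if h else 0.0

def information_gain(parent, left, right):
    """Parent entropy minus the count-weighted entropy of the two partitions."""
    n_l, n_r = sum(left), sum(right)
    weighted = (n_l * shannon_entropy(left) + n_r * shannon_entropy(right)) / (n_l + n_r)
    return shannon_entropy(parent) - weighted

parent = [11180, 2185, 1170, 2005]
left = [46, 1339, 12, 104]
right = [p - l for p, l in zip(parent, left)]
print(round(information_gain(parent, left, right), 3))  # ≈ 0.244 bits, matching the printed reduction
```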
Visualizing Chipotle's Data. This time we are going to pull data directly from the internet. Special thanks to https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries | import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
# set this so the graphs open internally
%matplotlib inline | _____no_output_____ | BSD-3-Clause | 07_Visualization/Chipotle/Exercises.ipynb | geneh0/pandas_exercises |
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo. | chipo = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv', sep = '\t') | _____no_output_____ | BSD-3-Clause | 07_Visualization/Chipotle/Exercises.ipynb | geneh0/pandas_exercises |
Step 4. See the first 10 entries | chipo.head(10) | _____no_output_____ | BSD-3-Clause | 07_Visualization/Chipotle/Exercises.ipynb | geneh0/pandas_exercises |
Step 5. Create a histogram of the top 5 items bought | chipo.item_name.value_counts()[0:5].plot(kind = 'bar'); | _____no_output_____ | BSD-3-Clause | 07_Visualization/Chipotle/Exercises.ipynb | geneh0/pandas_exercises |
Step 6. Create a scatterplot with the number of items orderered per order price Hint: Price should be in the X-axis and Items ordered in the Y-axis | chipo.item_price = chipo.item_price.apply(lambda x: float(x[1:-1]))
chipo.groupby('order_id').sum().plot(kind = 'scatter', x = 'item_price', y = 'quantity')
plt.title('Order cost by number of items in order')
plt.xlabel('Order Total')
plt.ylabel('Number of Items') | _____no_output_____ | BSD-3-Clause | 07_Visualization/Chipotle/Exercises.ipynb | geneh0/pandas_exercises |
Worksheet 5.1 - Feature Engineering: Malicious URL Detection using Machine Learning - Answers. This worksheet is a step-by-step guide on how to train a Machine Learning model that can detect malicious URLs. We will walk you through the process of transforming raw URL strings to Machine Learning features and creating ... | ## Load data
DATA_HOME = '../data/'
df = pd.read_csv(DATA_HOME + 'url_data_full.csv')
# df = pd.read_csv(DATA_HOME + 'url_data_small.csv')
df.isIP = df.isIP.astype(int)
print(df.shape)
df.sample(n=5).head() # print a random sample of the DataFrame
df['isMalicious'].value_counts() | _____no_output_____ | MIT | answers/Worksheet 5.1 - Feature Engineering - Answers.ipynb | d3vzer0/applied_data_science_amsterdam |
Part 1 - Feature Engineering. The traditional approach is to hand-craft Machine Learning features. This can be the most tedious part and often requires extensive domain expertise and data-wrangling skills. Previous academic research on identifying malicious or suspicious URLs has focused on studying the usefulness of an ... | def H_entropy(x):
# Calculate Shannon Entropy
return entropy.shannon_entropy(x)
def firstDigitIndex( s ):
for i, c in enumerate(s):
if c.isdigit():
return i + 1
return 0 | _____no_output_____ | MIT | answers/Worksheet 5.1 - Feature Engineering - Answers.ipynb | d3vzer0/applied_data_science_amsterdam |
Tasks - Sub-Section A - Lexical Features. Append features to the pandas 2D DataFrame ```df``` with a new column for each feature. Later, simply drop the columns that are not features. Please focus on ```["Length"]```, ```["LengthDomain"]```, ```["DigitsCount"]```, ```["EntropyDomain"]``` and ```["FirstDigitIndex"]``` her... | # derive simple lexical features
df['Length'] = df.url.str.len()
df['LengthDomain'] = df.domain.str.len()
df['DigitsCount'] = df.url.str.count('[0-9]')
df['EntropyDomain'] = df.domain.apply(H_entropy)
df['FirstDigitIndex'] = df.url.apply(firstDigitIndex)
# check intermediate 2D pandas DataFrame
print(len(df.columns))
... | 10
| MIT | answers/Worksheet 5.1 - Feature Engineering - Answers.ipynb | d3vzer0/applied_data_science_amsterdam |
Tasks - Sub-Section A - Lexical Features (continued). There are many different approaches to applying ```bag-of-words``` to URLs. Here we suggest the following approach: 1. Extract the different portions of the URL (host names (domains), top-level domains (tlds) [what is a TLD](https://en.wikipedia.org/wiki/Top-level_domai... | def extract_path(url):
return re.sub('.'.join([tldextract.extract(url).domain, tldextract.extract(url).suffix]), '', url)
domains = df.url.apply(lambda x: tldextract.extract(x).domain)
tlds = df.url.apply(lambda x: tldextract.extract(x).suffix)
paths = df.url.apply(extract_path)
n_tlds = 20
top_tlds = list(tlds.val... | 160
| MIT | answers/Worksheet 5.1 - Feature Engineering - Answers.ipynb | d3vzer0/applied_data_science_amsterdam |
Feature Engineering Sub-Section B - Host-based Features. Derivation of host-based features often requires the use of APIs or querying information from some authoritative source. It took us 2 days to get all whois data for all of our unique domains (see the ```domains_created_db.csv``` file). **Selection of host-based featur... | df = df3
df.created = pd.to_datetime(df.created, errors='coerce')
df['DurationCreated'] = (pd.to_datetime(datetime.date.today()) - df.created).dt.days
# check final 2D pandas DataFrame containing all final features and the target vector isMalicious
df.sample(n=5).head()
df_final = df
df_final = df_final.drop(['url', 'do... | _____no_output_____ | MIT | answers/Worksheet 5.1 - Feature Engineering - Answers.ipynb | d3vzer0/applied_data_science_amsterdam |
Breakpoint: Load Features and Labels. If you got stuck in Part 1, simply load the feature matrix we prepared for you, so you can move on to Part 2 and train a Decision Tree classifier. | df_final = pd.read_csv(DATA_HOME + 'url_features_final_df.csv')
print(df_final.isMalicious.value_counts())
print(len(df_final.columns))
df_final.sample(n=5).head()
feature_names = list(df_final.columns)
feature_names.remove('isMalicious')
# Pickle certain variables, so they can be loaded again in part 2 to make new pre... | _____no_output_____ | MIT | answers/Worksheet 5.1 - Feature Engineering - Answers.ipynb | d3vzer0/applied_data_science_amsterdam |
Visualizing the Features. In the last step, you're going to explore the feature space to see which features are potentially useful or not, and of course whether there is too much noise to make predictions. First, using [Yellowbrick](http://pythonhosted.org/yellowbrick/examples/examples.html), create a covariance rankin... | ## Load data
DATA_HOME = '../data/'
df_final = pd.read_csv(DATA_HOME + 'url_features_final_df.csv')
features = df_final.loc[:,'isIP':]
target = df_final['isMalicious']
visualizer = Rank2D(features=features.columns, algorithm='covariance')
plt.figure(figsize=(16,10))
visualizer.fit(features, target) # Fi... | /Users/cgivre/anaconda3/lib/python3.7/site-packages/yellowbrick/features/rankd.py:262: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
X = X.as_matrix()
| MIT | answers/Worksheet 5.1 - Feature Engineering - Answers.ipynb | d3vzer0/applied_data_science_amsterdam |
What did you see? If you did this correctly, you should see that most of the features are nearly useless. Next, pick 7 features yourself, either using the `feature_selection` functions in `scikit-learn` or by just picking them yourself, and create a pair plot using Seaborn to determine whether there are clear class bou... |
best_features = SelectKBest( score_func=chi2, k=7).fit_transform(features,target)
#Get the feature names and indexes
best = SelectKBest( score_func=chi2, k=7).fit(features,target)
feature_names = pd.Series(features.columns)
feature_names[best.get_support()]
sns.pairplot(... | /Users/cgivre/anaconda3/lib/python3.7/site-packages/statsmodels/nonparametric/kde.py:488: RuntimeWarning: invalid value encountered in true_divide
binned = fast_linbin(X, a, b, gridsize) / (delta * nobs)
/Users/cgivre/anaconda3/lib/python3.7/site-packages/statsmodels/nonparametric/kdetools.py:34: RuntimeWarning: inva... | MIT | answers/Worksheet 5.1 - Feature Engineering - Answers.ipynb | d3vzer0/applied_data_science_amsterdam |
Evaluation. Complete what is missing. | # installation
!pip install pandas
!pip install matplotlib
!pip install pandas-datareader
# 1. import the libraries
import pandas as pd
import pandas_datareader.data as web
import matplotlib.pyplot as plt
# 2. Set a start date "2020-01-01" and an end date "2021-08-31"
start_date = "2020-01-0... | Volume on the stock exchange refers to the quantity of shares of a stock traded (its money equivalent) over a given period.
Volume indicates investor interest in a particular stock. Looking at average volume, there are securities that trade millions of shares a day, while ... | BSD-3-Clause | evaluacion_JudithCallisaya.ipynb | Jud18/training-python-novice |
* Understand price movements, whether they go up or down. * Share prices move constantly throughout the trading day as supply and demand for the shares change (higher or lower price). When the market closes, the final price of the share is recorded. * The opening price: P... | # 5. Show a summary of basic information about this DataFrame and its data
# use the dataFrame.info() and dataFrame.describe() functions
data.describe()
# 6. Return the first 5 rows of the DataFrame with dataFrame.head() or dataFrame.iloc[]
data.head(5)
# 7. Select only the 'Open', 'Close' and 'Volume' columns of... | A price chart is useful because it helps us identify, via support and resistance reference levels, the most suitable moment to buy or sell shares in the capital market, better known as the equity market.
Economically, the price behavior we observe in the chart ... | BSD-3-Clause | evaluacion_JudithCallisaya.ipynb | Jud18/training-python-novice |
Pie chart 05. Pie Charts. Minsuk Sung, Hoesung Ryu --- Table of Contents: 1 Drawing a simple pie chart 2 Pie chart styles 3 Showing percentages on a pie chart 4 Pie chart explode. Drawing a simple pie chart: a pie chart can be drawn using the `pie` function of the `matplotlib.pyplot` module. The first argument of the `pie` function is the share of the data each category accounts for, and the categories can be passed via `labels` ... | %matplotlib inline
import matplotlib.pyplot as plt
exp_vals = [1400,600,300,410,250]
exp_labels = ["Home Rent","Food","Phone/Internet Bill","Car ","Other Utilities"]
plt.pie(exp_vals,labels=exp_labels) | _____no_output_____ | MIT | matplotlib/.ipynb_checkpoints/matplotlib 05. Pie chart-checkpoint.ipynb | ikelee22/pythonlib |
Pie chart styles: `axis('equal')` displays the pie chart as a perfect circle; `shadow=True` adds a shadow to the pie chart; `startangle=` sets the starting angle. | plt.pie(exp_vals,labels=exp_labels, shadow=True)
plt.axis("equal")
plt.show()
plt.pie(exp_vals,labels=exp_labels, shadow=True,startangle=45)
plt.axis("equal")
plt.show() | _____no_output_____ | MIT | matplotlib/.ipynb_checkpoints/matplotlib 05. Pie chart-checkpoint.ipynb | ikelee22/pythonlib |
Showing percentages on a pie chart: the `autopct` option displays percentages, and the number of decimal places can also be set. | plt.pie(exp_vals,
labels=exp_labels,
shadow=True,
autopct='%1.1f%%', # 퍼센트 표시하기
radius=1.5)
plt.axis("equal")
plt.show() | _____no_output_____ | MIT | matplotlib/.ipynb_checkpoints/matplotlib 05. Pie chart-checkpoint.ipynb | ikelee22/pythonlib |
Pie chart explode: the `explode` option sets how far each wedge is offset from the center; 0 means no offset. | plt.axis("equal")
plt.pie(exp_vals,
labels=exp_labels,
shadow=True,
autopct='%1.1f%%',
radius=1.5,
explode=[0,0,0,0.1,0.2],
startangle=45)
plt.show() | _____no_output_____ | MIT | matplotlib/.ipynb_checkpoints/matplotlib 05. Pie chart-checkpoint.ipynb | ikelee22/pythonlib |
Convolutional Neural Networks: Step by Step. Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**: - Superscript $[l]$ denotes an object of the $l^{th}$... | import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1) | _____no_output_____ | MIT | Convolutional Neural Networks/Convolution model-Step by Step-v2.ipynb | xuxingya/deep-learning-coursera |
2 - Outline of the Assignment. You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed: - Convolution functions, including: - Zero Padding - Convolve window - Convolution forward ... | # GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- i... | x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] = [[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
| MIT | Convolutional Neural Networks/Convolution model-Step by Step-v2.ipynb | xuxingya/deep-learning-coursera |
**Expected Output**: **x.shape**: (4, 3, 3, 2) **x_pad.shape**: (4, 7, 7, 2) **x[1,1]**: [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943]... | # GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight ... | Z = -6.99908945068
**Expected Output**: **Z** -6.99908945068. 3.3 - Convolutional Neural Networks - Forward pass. In the forward pass, you will take many filters and convolve them with the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to... | # GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shap... | Z's mean = 0.0489952035289
Z[3,2,1] = [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
cache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]
| MIT | Convolutional Neural Networks/Convolution model-Step by Step-v2.ipynb | xuxingya/deep-learning-coursera |
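The loop body of `conv_forward` is elided in this dump, but the output volume's spatial size follows the standard convolution formula `n_H = floor((n_H_prev - f + 2*pad) / stride) + 1` (and likewise for the width). A small helper illustrating it (the function name is ours, not the notebook's):

```python
def conv_output_dims(n_H_prev, n_W_prev, f, pad, stride):
    # Standard convolution output-size formula (floor division).
    n_H = (n_H_prev - f + 2 * pad) // stride + 1
    n_W = (n_W_prev - f + 2 * pad) // stride + 1
    return n_H, n_W

print(conv_output_dims(4, 4, 2, 2, 2))  # (4, 4): a 4x4 input with f=2, pad=2, stride=2
```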
**Expected Output**: **Z's mean** 0.0489952035289 **Z[3,2,1]** [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] **cache_conv... | # GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mod... | mode = max
A = [[[[ 1.74481176 0.86540763 1.13376944]]]
[[[ 1.13162939 1.51981682 2.18557541]]]]
mode = average
A = [[[[ 0.02105773 -0.20328806 -0.40389855]]]
[[[-0.22154621 0.51716526 0.48155844]]]]
| MIT | Convolutional Neural Networks/Convolution model-Step by Step-v2.ipynb | xuxingya/deep-learning-coursera |
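The window loops of `pool_forward` are truncated above. For a single 2-D channel, the sliding-window logic can be sketched as follows (a simplified, single-channel version, not the graded multi-channel function):

```python
import numpy as np

def pool2d(a, f, stride, mode="max"):
    # Naive pooling over one 2-D channel: slide an f x f window with the given stride.
    n_H = (a.shape[0] - f) // stride + 1
    n_W = (a.shape[1] - f) // stride + 1
    out = np.zeros((n_H, n_W))
    for h in range(n_H):
        for w in range(n_W):
            window = a[h * stride:h * stride + f, w * stride:w * stride + f]
            out[h, w] = window.max() if mode == "max" else window.mean()
    return out

a = np.array([[1., 2.], [3., 4.]])
print(pool2d(a, f=2, stride=2, mode="max"))      # [[4.]]
print(pool2d(a, f=2, stride=2, mode="average"))  # [[2.5]]
```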
**Expected Output:** A = [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] A = [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] Congratulations! You have now i... | def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv... | _____no_output_____ | MIT | Convolutional Neural Networks/Convolution model-Step by Step-v2.ipynb | xuxingya/deep-learning-coursera |
**Expected Output:** **dA_mean** 1.45243777754 **dW_mean** 1.72699145831 **db_mean** 7.83923256462 5.2 Pooling layer - backward pas... | def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
###... | _____no_output_____ | MIT | Convolutional Neural Networks/Convolution model-Step by Step-v2.ipynb | xuxingya/deep-learning-coursera |
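The body of `create_mask_from_window` is elided here. As the surrounding discussion suggests, the mask is just a boolean comparison against the window's maximum; a one-line sketch (not necessarily the graded solution):

```python
import numpy as np

def create_mask_from_window(x):
    # True exactly where x attains its maximum value.
    return x == np.max(x)

x = np.array([[1.62434536, -0.61175641, -0.52817175],
              [-1.07296862, 0.86540763, -2.3015387]])
print(create_mask_from_window(x))
# [[ True False False]
#  [False False False]]
```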
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] **mask =**[[ True False False] [False False False]] Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backpro... | def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we dis... | _____no_output_____ | MIT | Convolutional Neural Networks/Convolution model-Step by Step-v2.ipynb | xuxingya/deep-learning-coursera |
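A minimal sketch of the even-distribution step described in the docstring above (the average-pooling backward step spreads the gradient uniformly over the window):

```python
import numpy as np

def distribute_value(dz, shape):
    # Spread dz evenly across an (n_H, n_W) window.
    n_H, n_W = shape
    return np.full((n_H, n_W), dz / (n_H * n_W))

print(distribute_value(2.0, (2, 2)))
# [[0.5 0.5]
#  [0.5 0.5]]
```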
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer.**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for... | def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and h... | _____no_output_____ | MIT | Convolutional Neural Networks/Convolution model-Step by Step-v2.ipynb | xuxingya/deep-learning-coursera |
User Guide. Installation: install from PyPI with `pip install lixinger-openapi`; install from GitHub with `pip install git+http://github.com/ShekiLyu/lixinger-openapi.git`; upgrade from PyPI with `pip install --upgrade lixinger-openapi`; upgrade from GitHub with `pip install --upgrade git+http://github.com/ShekiLyu/lixinger-openapi.git`. API list: API name | API function------------------- | -------------... | import lixinger_openapi as lo | _____no_output_____ | Apache-2.0 | doc/user_guide.ipynb | ShekiLyu/lixinger-openapi
Load the token | lo.set_token("your_token") | _____no_output_____ | Apache-2.0 | doc/user_guide.ipynb | ShekiLyu/lixinger-openapi
set_token writes a token.cfg file in the current directory to save the token, so it only needs to be loaded once per directory. If you don't want to write the token.cfg file, configure it as follows: | lo.set_token("your_token", write_token=False) | _____no_output_____ | Apache-2.0 | doc/user_guide.ipynb | ShekiLyu/lixinger-openapi
Queries (using the examples from the Lixinger open platform). A-share company fundamentals: JSON format | json_rlt = lo.query_json('a.stock.fundamental.non_financial',
{
"date": "2018-01-19",
"stockCodes": [
"000028",
"600511"
],
"metricsList": [
"pe_ttm",
"mc"
]
})
print(json_rlt) | {'data': [{'date': '2018-01-19T00:00:00+08:00', 'pe_ttm': 21.046568599508507, 'stockCode': '000028', 'mc': 26663748314.4}, {'date': '2018-01-19T00:00:00+08:00', 'pe_ttm': 21.459988206744743, 'stockCode': '600511', 'mc': 20346751061}], 'code': 0, 'msg': 'success'}
| Apache-2.0 | doc/user_guide.ipynb | ShekiLyu/lixinger-openapi |
DataFrame format | dataframe_rlt = lo.query_dataframe('a.stock.fundamental.non_financial',
{
"date": "2018-01-19",
"metricsList": ["pe_ttm", "mc"],
"stockCodes": ["000028", "600511"]
})
print('code: '+ str(dataframe_rlt['code']))
print('\ndata:')
print(dataframe_rlt['data'])
print('\nmsg: ' + dataframe_rl... | code: 0
data:
date mc pe_ttm stockCode
0 2018-01-19T00:00:00+08:00 2.666375e+10 21.046569 000028
1 2018-01-19T00:00:00+08:00 2.034675e+10 21.459988 600511
msg: success
| Apache-2.0 | doc/user_guide.ipynb | ShekiLyu/lixinger-openapi |
Basic information of A-share indices: JSON format | json_rlt = lo.query_json('a.index', {
"stockCodes": [
"000016"
]
})
print(json_rlt) | {'data': [{'source': 'sh', 'cnName': '上证50', 'publishDate': '2004-01-01T16:00:00.000Z', 'stockCode': '000016', 'areaCode': 'cn', 'market': 'a'}], 'code': 0, 'msg': 'success'}
| Apache-2.0 | doc/user_guide.ipynb | ShekiLyu/lixinger-openapi |
DataFrame format | dataframe_rlt = lo.query_dataframe('a.index', {
"stockCodes": [
"000016"
]
})
print('code: '+ str(dataframe_rlt['code']))
print('\ndata:')
print(dataframe_rlt['data'])
print('\nmsg: ' + dataframe_rlt['msg']) | code: 0
data:
areaCode cnName market publishDate source stockCode
0 cn 上证50 a 2004-01-01T16:00:00.000Z sh 000016
msg: success
| Apache-2.0 | doc/user_guide.ipynb | ShekiLyu/lixinger-openapi |
Statistics for tumor personalized-treatment genetic testing. Sample selection criteria | try:
print(cdx.filter_description())
print(f'样本总量为{cdx.sample_size()}例。')
except Exception as e:
print(e) | _____no_output_____ | MIT | PETA_report_template__Python__Product_statistics.ipynb | JaylanLiu/PETA_report_template__Python__Product_statistics |
Number of submitted samples | try:
scdx=cdx
chosen_cancer_types='肾癌/胃肠道间质瘤/胰腺癌/食管癌/神经内分泌肿瘤/非小细胞肺癌/甲状腺癌/睾丸癌/胃癌/卵巢癌/膀胱癌/恶性胸膜间皮瘤/乳腺癌/阴茎癌/黑色素瘤/胆管癌/胆囊癌/肝细胞癌/软组织肉瘤/胸腺癌/胸腺瘤/骨癌/子宫肿瘤/结直肠癌/中枢神经系统肿瘤/非黑色素瘤皮肤癌/小细胞肺癌/宫颈癌/头颈癌/前列腺癌/外阴癌/肛门癌/小肠腺癌/默克尔细胞癌/不分癌种'.split('/')
value_counts=scdx.sample_size('CANCER_TYPE').reindex(chosen_cancer_types).fillna(0).... | _____no_output_____ | MIT | PETA_report_template__Python__Product_statistics.ipynb | JaylanLiu/PETA_report_template__Python__Product_statistics |
Drug-test positive rate | # Check whether the dataset supports drug positive-rate statistics
try:
scdx=cdx
support_for_drug_sensitivity=False
if 'GENETIC_TEST_RESULT' in scdx.cli.columns:
support_for_drug_sensitivity=True
if support_for_drug_sensitivity:
pr=pypeta.positive_rate(scdx.cli.GENETIC_TEST_RESULT,['阳性'])
print(f'总例数为{pr[0]},其中有效{pr[1]}例,阳性... | _____no_output_____ | MIT | PETA_report_template__Python__Product_statistics.ipynb | JaylanLiu/PETA_report_template__Python__Product_statistics |
Gene detection rate | try:
ser=cdx.test_positive_rate(groupby_genes=True)
mut_freq_per_gene_df=ser.sort_values(ascending=False).reset_index()
mut_freq_per_gene_df.columns=pd.Index(['基因','频率'])
print('各基因的检出率为:')
fig = px.bar(mut_freq_per_gene_df, x='基因', y='频率',text='频率')
#fig.update_traces(texttemplate='%{text:%.2f... | _____no_output_____ | MIT | PETA_report_template__Python__Product_statistics.ipynb | JaylanLiu/PETA_report_template__Python__Product_statistics |
Mutation-type detection rate | try:
mut_freq_per_gene_df=cdx.test_positive_rate(groupby_variant_type=True).reset_index()
mut_freq_per_gene_df.columns=pd.Index(['类型','频率'])
print('各类型的检出率为:')
fig = px.bar(mut_freq_per_gene_df, x='类型', y='频率',text='频率')
#fig.update_traces(texttemplate='%{text:%.2f%%}', textposition='outside',)
... | _____no_output_____ | MIT | PETA_report_template__Python__Product_statistics.ipynb | JaylanLiu/PETA_report_template__Python__Product_statistics |
TMB distribution | try:
cdx_tmb=cdx
chosen_cancer_types='肾癌/胃肠道间质瘤/胰腺癌/食管癌/神经内分泌肿瘤/非小细胞肺癌/甲状腺癌/睾丸癌/胃癌/卵巢癌/膀胱癌/恶性胸膜间皮瘤/乳腺癌/阴茎癌/黑色素瘤/胆管癌/胆囊癌/肝细胞癌/软组织肉瘤/胸腺癌/胸腺瘤/骨癌/子宫肿瘤/结直肠癌/中枢神经系统肿瘤/非黑色素瘤皮肤癌/小细胞肺癌/宫颈癌/头颈癌/前列腺癌/外阴癌/肛门癌/小肠腺癌/默克尔细胞癌/不分癌种'.split('/')
cli=cdx_tmb.cli[cdx_tmb.cli.TMB.map(lambda x: pypeta.is_float(x))].copy()
c... | _____no_output_____ | MIT | PETA_report_template__Python__Product_statistics.ipynb | JaylanLiu/PETA_report_template__Python__Product_statistics |
MSI distribution | try:
cdx_tmb=cdx
chosen_cancer_types='肾癌/胃肠道间质瘤/胰腺癌/食管癌/神经内分泌肿瘤/非小细胞肺癌/甲状腺癌/睾丸癌/胃癌/卵巢癌/膀胱癌/恶性胸膜间皮瘤/乳腺癌/阴茎癌/黑色素瘤/胆管癌/胆囊癌/肝细胞癌/软组织肉瘤/胸腺癌/胸腺瘤/骨癌/子宫肿瘤/结直肠癌/中枢神经系统肿瘤/非黑色素瘤皮肤癌/小细胞肺癌/宫颈癌/头颈癌/前列腺癌/外阴癌/肛门癌/小肠腺癌/默克尔细胞癌/不分癌种'.split('/')
cli=cli[cli.CANCER_TYPE.isin(chosen_cancer_types)].copy()
cli=cl... | _____no_output_____ | MIT | PETA_report_template__Python__Product_statistics.ipynb | JaylanLiu/PETA_report_template__Python__Product_statistics |
Gene fusion details | try:
pd.set_option('display.max_rows', None)
display(cdx.sv)
except Exception:
    print("Data selected doesn't support this calculation.") | _____no_output_____ | MIT | PETA_report_template__Python__Product_statistics.ipynb | JaylanLiu/PETA_report_template__Python__Product_statistics
Apprentice ChallengeThis challenge is diagnostic of your current python pandas, matplotlib/seaborn, and numpy skills. These diagnostics will help inform your selection into the Machine Learning Guild's Apprentice program. Please ensure you are using Python 3 as the notebook won't work in 2.7 Challenge Background: AirB... | # Import packages
import pandas as pd
import numpy as np
import data_load_files
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.metrics import mean_squared_error,r2_score, mean_absolute_error
from sklearn import preprocessin... | _____no_output_____ | MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects |
Task 1**Instructions**AirBnB just sent you the NYC rentals data as a text file (`AB_NYC_2019_pt1.csv`). First, we'll need to read that text file in as a pandas DataFrame called `df`. As it turns out, AirBnB also received an additional update (`AB_NYC_2019_pt2.csv`) overnight to add to the main dataset, so you'll have ... | # Task 1
# -- YOUR CODE FOR TASK 1 --
#Import primary AirBnB data file as a pandas DataFrame
# df = ...
df = pd.read_csv('AB_NYC_2019_pt1.csv')
#Import the additional AirBnB data file as a pandas DataFrame and append it to the primary data DataFrame
# df2 = ...
df2 = pd.read_csv('AB_NYC_2019_pt2.csv')
#Append df2... | df is correct
| MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects |
Task 2 Part 1**Instructions**AirBnB is aware that some of its listings are missing values. Let's see if we can determine how much of the dataset is affected. Start by printing out the number of rows in the df that contain any null (NaN) values.Once you've done that, drop those rows from the df before any further analy... | # Task 2 (Part 1)
# Import packages
import datetime
# -- YOUR CODE FOR TASK 2 (PART 1) --
#Print out the number of rows in the df that contain any null (NaN) values
# Your code here
print(df.isna().any(axis=1).sum())
#Drop all rows with any NaNs from the DataFrame
# Your code here
df.dropna(axis=0, inplace=True)
... | df is correct
| MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects |
Task 2 Part 2**Instructions**The Airbnb team wants to further explore the expansion of its listings in the Brooklyn neighbourhood group. Create a DataFrame `df_brooklyn` containing only these listings, and then, using that DataFrame, create a new DataFrame `df_brooklyn_prices_room_type` showing the mean price per roo... | # Run this cell
pd.set_option('mode.chained_assignment', None)
#Create a pandas DataFrame containing only listings in the Brooklyn neighborhood group. Don't
#forget to reset the index!
#df_brooklyn = ...
df_brooklyn = df[(df['neighbourhood_group']=='Brooklyn')].reset_index(drop=True)
#Printing Results
df_brooklyn
#... | dfs are correct
| MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects |
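The computation of `df_brooklyn_prices_room_type` is cut off above; the mean-price-per-room-type table it describes is a standard `groupby` aggregation. A hypothetical miniature example (the demo values are made up for illustration):

```python
import pandas as pd

df_demo = pd.DataFrame({
    "room_type": ["Entire home/apt", "Private room", "Private room"],
    "price": [200.0, 80.0, 100.0],
})
# Mean price per room type, keeping room_type as a regular column.
prices_by_room_type = df_demo.groupby("room_type", as_index=False)["price"].mean()
print(prices_by_room_type)
```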
Task 3, Part 1**Instructions**We want to be able to model using the ‘neighbourhood’ column as a feature, but to do so we’ll have to transform it into a series of binary features (one per neighbourhood), and right now there are way too many unique values. To solve this problem, we will re-label all neighbourhoods not i... | #Task 3
# -- YOUR CODE FOR TASK 3 --
#Create a list of the top 10 most common neighbourhoods, using the 'top_10_brooklyn_series'
#that you created earlier
#top_10_brooklyn_list = ...
top_10_brooklyn_list = list(top_10_brooklyn_series.index.values)
#Replace all 'neighbourhood' column values NOT in the top 10 with 'Ot... | df is correct
| MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects |
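The instructions above mention turning `neighbourhood` into one binary feature per value; with the rare values collapsed into 'Other', that step is typically done with `pd.get_dummies`. A hypothetical miniature example (the demo values are made up for illustration):

```python
import pandas as pd

df_demo = pd.DataFrame({"neighbourhood": ["Williamsburg", "Other", "Bushwick"]})
# One binary column per neighbourhood value, prefixed for readability.
dummies = pd.get_dummies(df_demo["neighbourhood"], prefix="neighbourhood")
print(dummies.columns.tolist())
# ['neighbourhood_Bushwick', 'neighbourhood_Other', 'neighbourhood_Williamsburg']
```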
Task 3, Part 2You want to take a closer look at price in the dataset. You decide to categorize rental properties by their affordability. Categorize each listing into one of three price categories by binning the `price` column and creating a new `price_category` column. | price_bins = [0, 100, 200, np.inf]
price_cat = ['low', 'medium', 'high']
#df['price_category'] = ...
df_brooklyn['price_category'] = pd.cut(df_brooklyn['price'], price_bins, labels=price_cat)
df_brooklyn
## RUN THIS CELL AS-IS TO CHECK IF YOUR OUTPUTS ARE CORRECT. IF THEY ARE NOT,
## THE APPROPRIATE OBJECTS WILL BE L... | df is correct
| MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects |
Task 3, Part 3**Instructions*** Create a bar chart of your dataset `Price Category` from Part 2, comparing the number of rentals in each category.**Expected Output*** bar chart with listing count as bar* grouped by 3 price categories | pd.value_counts(df_brooklyn['price_category']).plot.bar() | _____no_output_____ | MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects
Task 3, ExtraYou would like to see the above plot broken down by top 10 neighborhoods. Use Seaborn to create 10 bar graphs, one for each top 10 neighborhood, breaking down the listings in that neighborhood by price category and using hue to separate out the room types. Please use the seaborn plotting library. You can ... | import seaborn as sns
# sns.catplot(#<<enter your code here>>#, col_wrap=3,height=3, legend = True)
sns.catplot(x="price_category", col="neighbourhood", data=df_brooklyn, kind='count', hue='room_type', col_wrap=3,height=3, legend = True) | _____no_output_____ | MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects |
Task 4 Part 1**Instructions**Airbnb's business team would like to understand the revenue the hosts make in Brooklyn. As you do not have the Airbnb booking details, you can estimate the number of bookings for each property based on the number of reviews they received. You can then extrapolate each property’s revenue w... | # Write a function to calculate the estimated host revenue, update the dataframe with a new column `estimated_host_revenue` calculated using the above formula
# and return the updated dataframe
#Your code here
def generate_estimate_host_revenue(dataframe):
dataframe['estimated_host_revenue'] = dataframe['price'] * ... | df is correct
| MIT | Regressions/AirBnB Price prediction/.ipynb_checkpoints/Apprentice_Challenge_2021_Answers-Copy1-checkpoint.ipynb | shobhit009/Machine_Learning_Projects |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.