As an additional test, we can try sending the `test_review` that we looked at earlier.
predictor.predict(test_review)
_____no_output_____
MIT
SageMaker Project.ipynb
KatherineKing/sagemaker-deployment
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for th...
predictor.endpoint
_____no_output_____
MIT
SageMaker Project.ipynb
KatherineKing/sagemaker-deployment
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API Gateway: Now that our Lambda function is set up, it is time to create a new API using API Gateway that wi...
predictor.delete_endpoint()
_____no_output_____
MIT
SageMaker Project.ipynb
KatherineKing/sagemaker-deployment
Probability and statistics: In some form or another, machine learning is all about making predictions. We might want to predict the *probability* of a patient suffering a heart attack in the next year, given their clinical history. In anomaly detection, we might want to assess how *likely* a set of readings from an airpla...
import mxnet as mx
from mxnet import nd
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
Next, we'll want to be able to cast the die. In statistics we call this process of drawing examples from probability distributions *sampling*. The distribution which assigns probabilities to a number of discrete choices is called the *multinomial* distribution. We'll give a more formal definition of *distribution* later,...
probabilities = nd.ones(6) / 6
nd.sample_multinomial(probabilities)
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
If you run this line (`nd.sample_multinomial(probabilities)`) a bunch of times, you'll find that you get out random values each time. As with estimating the fairness of a die, we often want to generate many samples from the same distribution. It would be really slow to do this with a Python `for` loop, so `sample_multino...
print(nd.sample_multinomial(probabilities, shape=(10)))
print(nd.sample_multinomial(probabilities, shape=(5,10)))
[3 4 5 3 5 3 5 2 3 3]
<NDArray 10 @cpu(0)>
[[2 2 1 5 0 5 1 2 2 4]
 [4 3 2 3 2 5 5 0 2 0]
 [3 0 2 4 5 4 0 5 5 5]
 [2 4 4 2 3 4 4 0 4 3]
 [3 0 3 5 4 3 0 2 2 1]]
<NDArray 5x10 @cpu(0)>
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
Now that we know how to sample rolls of a die, we can simulate 1000 rolls.
rolls = nd.sample_multinomial(probabilities, shape=(1000))
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
We can then go through and count, after each of the 1000 rolls, how many times each number was rolled.
counts = nd.zeros((6,1000))
totals = nd.zeros(6)
for i, roll in enumerate(rolls):
    totals[int(roll.asscalar())] += 1
    counts[:, i] = totals
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
To start, we can inspect the final tally at the end of $1000$ rolls.
totals / 1000
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
As you can see, the lowest estimated probability for any of the numbers is about $0.15$ and the highest estimated probability is $0.188$. Because we generated the data from a fair die, we know that each number actually has probability $1/6$, roughly $0.167$, so these estimates are pretty good. We can also visualize how ...
counts
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
Normalizing by the number of tosses, we get:
x = nd.arange(1000).reshape((1,1000)) + 1
estimates = counts / x
print(estimates[:,0])
print(estimates[:,1])
print(estimates[:,100])
[ 0.  1.  0.  0.  0.  0.]
<NDArray 6 @cpu(0)>
[ 0.   0.5  0.   0.   0.5  0. ]
<NDArray 6 @cpu(0)>
[ 0.1980198   0.15841584  0.17821783  0.18811882  0.12871288  0.14851485]
<NDArray 6 @cpu(0)>
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
As you can see, after the first toss of the die, we get the extreme estimate that one of the numbers will be rolled with probability $1.0$ and that the others have probability $0$. After $100$ rolls, things already look a bit more reasonable. We can visualize this convergence by using the plotting package `matplotlib`. ...
from matplotlib import pyplot as plt
plt.plot(estimates[0, :].asnumpy(), label="Estimated P(die=1)")
plt.plot(estimates[1, :].asnumpy(), label="Estimated P(die=2)")
plt.plot(estimates[2, :].asnumpy(), label="Estimated P(die=3)")
plt.plot(estimates[3, :].asnumpy(), label="Estimated P(die=4)")
plt.plot(estimates[4, :].as...
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
Each solid curve corresponds to one of the six values of the die and gives our estimated probability that the die turns up that value as assessed after each of the 1000 turns. The dashed black line gives the true underlying probability. As we get more data, the solid curves converge towards the true answer. In our exampl...
import numpy as np

# we go over one observation at a time (speed doesn't matter here)
def transform(data, label):
    return (nd.floor(data/128)).astype(np.float32), label.astype(np.float32)

mnist_train = mx.gluon.data.vision.MNIST(train=True, transform=transform)
mnist_test = mx.gluon.data.vision.MNIST(train=False, t...
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
Now that we have computed per-pixel counts of occurrence for all pixels, it's time to see how our model behaves. Time to plot it. We show the estimated probabilities of observing a switched-on pixel. These are some mean-looking digits.
import matplotlib.pyplot as plt

fig, figarr = plt.subplots(1, 10, figsize=(15, 15))
for i in range(10):
    figarr[i].imshow(xcount[:, i].reshape((28, 28)).asnumpy(), cmap='hot')
    figarr[i].axes.get_xaxis().set_visible(False)
    figarr[i].axes.get_yaxis().set_visible(False)
plt.show()
print(py)
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
Now we can compute the likelihood of an image, given the model. This is statistician speak for $p(x|y)$, i.e. how likely it is to see a particular image under certain conditions (such as the label). Since this is computationally awkward (we might have to multiply many small numbers if many pixels have a small probabili...
logxcount = nd.log(xcount)
logxcountneg = nd.log(1-xcount)
logpy = nd.log(py)

fig, figarr = plt.subplots(2, 10, figsize=(15, 3))

# show 10 images
ctr = 0
for data, label in mnist_test:
    x = data.reshape((784,))
    y = int(label)

    # we need to incorporate the prior probability p(y) since p(y|x) is
    # pr...
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
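The log-sum trick described above can be sketched in plain NumPy. The names here (`theta` for the per-pixel on-probabilities, `log_likelihood`) are illustrative, not the notebook's own:

```python
import numpy as np

def log_likelihood(x, theta, eps=1e-12):
    """log p(x | class) for a binary image x under per-pixel Bernoulli parameters theta."""
    theta = np.clip(theta, eps, 1 - eps)  # avoid log(0)
    # summing logs instead of multiplying probabilities avoids numerical underflow
    return np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

theta = np.array([0.9, 0.1, 0.8])   # hypothetical 3-pixel model
x = np.array([1.0, 0.0, 1.0])       # hypothetical observed image
ll = log_likelihood(x, theta)
```

Summing three log terms is harmless here, but for a 784-pixel image the product of probabilities would underflow to zero in float32, which is exactly why the notebook switches to logs.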
As we can see, this classifier is both incompetent and overconfident in its incorrect estimates. That is, even when it is horribly wrong, it generates probabilities close to 1 or 0. It is not a classifier we should use nowadays. While Naive Bayes classifiers were popular in the 80s and 90s, e.g. f...
import random
for i in range(10):
    print(random.random())
0.970844720223
0.11442244666
0.476145849846
0.154138063676
0.925771401913
0.347466944833
0.288795056587
0.855051122608
0.32666729925
0.932922304219
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
Uniform distribution: These are some pretty random numbers. As we can see, their range is between 0 and 1, and they are evenly distributed. That is, there is (or rather, should be, since this is not a *real* random number generator) no interval in which numbers are more likely than in any other. In other words, the chanc...
for i in range(10):
    print(random.randint(1, 100))
75
23
34
85
99
66
13
42
19
14
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
What if we wanted to check that ``randint`` is actually uniform? Intuitively, the best strategy would be to run it, say, 1 million times, count how many times it generates each of the values, and check that the result is uniform.
import math

counts = np.zeros(100)
fig, axes = plt.subplots(2, 3, figsize=(15, 8), sharex=True)
axes = axes.reshape(6)
# mangle subplots such that we can index them in a linear fashion rather than
# a 2d grid
for i in range(1, 1000001):
    counts[random.randint(0, 99)] += 1
    if i in [10, 100, 1000, 10000, 100000,...
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
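On a smaller scale, the same check can be written with `collections.Counter`; this sketch uses 100,000 draws over 10 values rather than a million over 100:

```python
import random
from collections import Counter

random.seed(0)  # make the run reproducible
n = 100_000
counts = Counter(random.randint(0, 9) for _ in range(n))

# every one of the 10 values should appear close to n/10 times
for v in range(10):
    assert abs(counts[v] - n / 10) < 0.05 * n
```

The 5% tolerance is generous: for 100,000 draws the expected per-bin deviation is on the order of a few hundred, well inside the bound.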
What we can see from the above figures is that the initial number of counts looks *very* uneven. If we sample fewer than 100 draws from a distribution over 100 outcomes this is pretty much expected. But even for 1000 samples there is a significant variability between the draws. What we are really aiming for is a situat...
# number of samples
n = 1000000
y = np.random.uniform(0, 1, n)
x = np.arange(1, n+1)
# count number of occurrences and divide by the number of total draws
p0 = np.cumsum(y < 0.35) / x
p1 = np.cumsum(y >= 0.35) / x

plt.figure(figsize=(15, 8))
plt.semilogx(x, p0)
plt.semilogx(x, p1)
plt.show()
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
As we can see, on average this sampler will generate 35% zeros and 65% ones. Now what if we have more than two possible outcomes? We can simply generalize this idea as follows. Given any probability distribution, e.g. $p = [0.1, 0.2, 0.05, 0.3, 0.25, 0.1]$ we can compute its cumulative distribution (python's ``cumsum``...
x = np.arange(-10, 10, 0.01)
p = (1/math.sqrt(2 * math.pi)) * np.exp(-0.5 * x**2)
plt.figure(figsize=(10, 5))
plt.plot(x, p)
plt.show()
_____no_output_____
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
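The cumulative-distribution idea described above can be sketched with NumPy's `searchsorted`: compute the CDF with `cumsum`, draw uniforms, and map each uniform to the first index whose cumulative probability exceeds it. This is a minimal illustration, not the notebook's own cell:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.2, 0.05, 0.3, 0.25, 0.1])
cdf = np.cumsum(p)  # [0.1, 0.3, 0.35, 0.65, 0.9, 1.0]

u = rng.uniform(size=100_000)
# index of the first CDF entry strictly greater than u: inverse-CDF sampling
samples = np.searchsorted(cdf, u, side='right')

freq = np.bincount(samples, minlength=6) / len(samples)
```

With 100,000 draws the empirical frequencies `freq` match `p` to within about one percentage point, which is the convergence behaviour the surrounding text describes.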
Sampling from this distribution is a lot less trivial. First off, the support is infinite, that is, for any $x$ the density $p(x)$ is positive. Secondly, the density is nonuniform. There are many tricks for sampling from it - the key idea in all algorithms is to stratify $p(x)$ in such a way as to map it to the uniform...
# draw a 10000x10 array of uniforms and map them to a discrete random
# variable taking values 0, 1, 2 with probabilities 0.3, 0.5, 0.2
tmp = np.random.uniform(size=(10000,10))
x = 1.0 * (tmp > 0.3) + 1.0 * (tmp > 0.8)
mean = 1 * 0.5 + 2 * 0.2
variance = 1 * 0.5 + 4 * 0.2 - mean**2
print('mean {}, variance {}'.format(mean, variance))
# cumulative sum and normalization
y = np.arange...
mean 0.9, variance 0.49
Apache-2.0
chapter01_crashcourse/probability.ipynb
aswinjeff/git-clone-https-github.com-aswinjeff-deeplearning1
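One classic example of such a trick is the Box-Muller transform, which maps two independent uniforms to two independent standard normals. This sketch is offered as an illustration of the idea, not as the notebook's own method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u1 = rng.uniform(size=n)
u2 = rng.uniform(size=n)

# Box-Muller: radius from the exponential tail, angle uniform on the circle
# (1 - u1 keeps the argument of log strictly positive since u1 is in [0, 1))
r = np.sqrt(-2 * np.log(1 - u1))
z0 = r * np.cos(2 * np.pi * u2)
z1 = r * np.sin(2 * np.pi * u2)
```

Both `z0` and `z1` are N(0, 1) draws; with 100,000 samples their empirical mean and standard deviation land very close to 0 and 1.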
Content Copyright by Pierian Data. Statements Assessment Solutions.
**Use for, .split(), and if to create a Statement that will print out words that start with 's':**
st = 'Print only the words that start with s in this sentence'

for word in st.split():
    if word[0] == 's':
        print(word)
start
s
sentence
Apache-2.0
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
______**Use range() to print all the even numbers from 0 to 10.**
list(range(0,11,2))
_____no_output_____
Apache-2.0
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
___**Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.**
[x for x in range(1,51) if x%3 == 0]
_____no_output_____
Apache-2.0
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
_____**Go through the string below and if the length of a word is even print "even!"**
st = 'Print every word in this sentence that has an even number of letters'

for word in st.split():
    if len(word)%2 == 0:
        print(word+" <-- has an even length!")
word <-- has an even length!
in <-- has an even length!
this <-- has an even length!
sentence <-- has an even length!
that <-- has an even length!
an <-- has an even length!
even <-- has an even length!
number <-- has an even length!
of <-- has an even length!
Apache-2.0
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
____**Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".**
for num in range(1,101):
    if num % 3 == 0 and num % 5 == 0:
        print("FizzBuzz")
    elif num % 3 == 0:
        print("Fizz")
    elif num % 5 == 0:
        print("Buzz")
    else:
        print(num)
_____no_output_____
Apache-2.0
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
____**Use a List Comprehension to create a list of the first letters of every word in the string below:**
st = 'Create a list of the first letters of every word in this string'
[word[0] for word in st.split()]
_____no_output_____
Apache-2.0
03-Python-Statements/Statements Assessment Test - Solutions.ipynb
porcelainruler/Python-Complete-Bootcamp-Practice
This is Example 4.3, Gambler's Problem, from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his go...
import numpy as np
import sys
import matplotlib.pyplot as plt

if "../" not in sys.path:
    sys.path.append("../")
_____no_output_____
MIT
DP/Gamblers Problem Solution.ipynb
itsmeashutosh43/reinforcement-learning
Exercise 4.9 (programming): Implement value iteration for the gambler's problem and solve it for p_h = 0.25 and p_h = 0.55.
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
    """
    Args:
        p_h: Probability of the coin coming up heads
    """
    # The reward is zero on all transitions except those on which the gambler reaches his goal,
    # when it is +1.
    rewards = np.zeros(101)
    rewards[100] = 1
    ...
Optimized Policy: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17. 18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21. 22. 23. 2...
MIT
DP/Gamblers Problem Solution.ipynb
itsmeashutosh43/reinforcement-learning
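A compact sketch of the value-iteration loop the exercise asks for; the function name and tolerance here are illustrative, and the notebook's own `value_iteration_for_gamblers` may differ in detail:

```python
import numpy as np

def gamblers_value_iteration(p_h, goal=100, theta=1e-4):
    """Value iteration for the gambler's problem: reward +1 only on reaching the goal."""
    V = np.zeros(goal + 1)
    rewards = np.zeros(goal + 1)
    rewards[goal] = 1.0
    while True:
        delta = 0.0
        for s in range(1, goal):
            # legal stakes: at most the capital held, at most what is needed to reach the goal
            stakes = range(1, min(s, goal - s) + 1)
            # one-step lookahead: win with probability p_h, lose otherwise
            q = [p_h * (rewards[s + a] + V[s + a]) +
                 (1 - p_h) * (rewards[s - a] + V[s - a])
                 for a in stakes]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            break
    return V

V = gamblers_value_iteration(0.25)
```

For a subfair coin (p_h < 0.5) bold play is optimal, so the value at capital 50 should approach p_h itself: staking everything wins with probability 0.25.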
Show your results graphically, as in Figure 4.3.
# Plotting Final Policy (action stake) vs State (Capital)

# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]

# plotting the points
plt.plot(x, y)

# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')

# giving a title to the graph
plt.title('Final Polic...
_____no_output_____
MIT
DP/Gamblers Problem Solution.ipynb
itsmeashutosh43/reinforcement-learning
Amazon web scraper
import csv
from bs4 import BeautifulSoup

# firefox and Chrome
from selenium import webdriver
_____no_output_____
Apache-2.0
Amazon Nike Shoes reviews data extraction to csv.ipynb
jasp9559/Web-scraping-the-Amazon-data-using-Python-into-CSV
Start up the webdriver
# Install the driver manager first (shell command, not Python):
#   pip install webdriver-manager

# we activate the webdriver for Chrome as we are using Google Chrome
from webdriver_manager.chrome import ChromeDriverManager
driver = webdriver.Chrome(ChromeDriverManager().install())

# Using webdriver we'll now open the Amazon website in chrome
url = 'https://www.amazon.in'
# We'll us...
_____no_output_____
Apache-2.0
Amazon Nike Shoes reviews data extraction to csv.ipynb
jasp9559/Web-scraping-the-Amazon-data-using-Python-into-CSV
**Extract the collection**
# taking the page source and trying to extract from html
soup = BeautifulSoup(driver.page_source, 'html.parser')
# assigning the specific identity of the component we need to extract from the website
# in this case we need to extract the whole component that we search in the site
# say the mobile phone in this case and ...
_____no_output_____
Apache-2.0
Amazon Nike Shoes reviews data extraction to csv.ipynb
jasp9559/Web-scraping-the-Amazon-data-using-Python-into-CSV
Generalize the pattern now
def extract_records(item):
    '''Extract and return data from a single record'''
    # description and url
    atag = item.h2.a
    description = atag.text.strip()
    url = "https://www.amazon.in" + atag.get("href")

    # price
    price = item.find('span', 'a-price-whole').text

    # rank and rating
    rat...
_____no_output_____
Apache-2.0
Amazon Nike Shoes reviews data extraction to csv.ipynb
jasp9559/Web-scraping-the-Amazon-data-using-Python-into-CSV
**We encounter an AttributeError because the results page contains items that do not carry every field we extract; not all products have the same set of descriptions, prices, and ratings, so a lookup can return None and the attribute access fails. We need to handle this error with an exception inside the extraction function.**
def extract_records(item):
    '''Extract and return data from a single record'''
    # description and url
    atag = item.h2.a
    description = atag.text.strip()
    url = "https://www.amazon.in" + atag.get("href")

    '''Basically we put the exception for price here which most definitely should not be ...
====== WebDriver manager ======
Current google-chrome version is 97.0.4692
Get LATEST chromedriver version for 97.0.4692 google-chrome
Driver [C:\Users\bikem\.wdm\drivers\chromedriver\win32\97.0.4692.71\chromedriver.exe] found in cache
C:\Users\bikem\AppData\Local\Temp/ipykernel_16024/2398463386.py:56: DeprecationWar...
Apache-2.0
Amazon Nike Shoes reviews data extraction to csv.ipynb
jasp9559/Web-scraping-the-Amazon-data-using-Python-into-CSV
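The general shape of that fix can be illustrated without Selenium or BeautifulSoup: wrap each fragile lookup so a missing field yields a default instead of aborting the scrape. `safe_extract` and `FakeItem` below are hypothetical stand-ins for the real lookups:

```python
def safe_extract(getter, default=''):
    """Run a lookup callable; return a default if any attribute in the chain is missing."""
    try:
        return getter()
    except AttributeError:
        return default

class FakeItem:
    """Stand-in for a parsed search result whose price element is missing."""
    class h2:
        class a:
            text = '  Nike Revolution 5  '
    price_tag = None  # find() returned nothing for this product

item = FakeItem()
description = safe_extract(lambda: item.h2.a.text.strip())
price = safe_extract(lambda: item.price_tag.text, default='n/a')
```

Accessing `.text` on the missing `price_tag` raises AttributeError, which the wrapper converts into the `'n/a'` default; the same pattern applies to each optional field in the real `extract_records`.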
Test the hypothesis that GRE/TOEFL grades play a pivotal role in graduate admissions
# for some basic operations
import numpy as np
import pandas as pd

# for data visualizations
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')

# for advanced visualizations
import plotly.offline as py
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as...
_____no_output_____
MIT
ADS-Spring2019/TestGradesInGraduateAdmission.ipynb
peperaj/XBUS-507-01.Applied_Data_Science
Python Basics Lecture 01: Practice Problems. Lee In-gu (Ike Lee). Example 1: Compute the volume of a rectangular box.
# set the variables
length = 5
height = 5
width = 20
volume = length*width*height
print('Volume of the box: %d' % volume)

length = 10  # just reassign it
volume = length*width*height
print('Volume of the box: %d' % volume)
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 2: Use for to find the female dog. Use and to find the match.
suspects = [['camel', 'mammal', 'female'],
            ['shark', 'fish', 'male'],
            ['poodle', 'dog', 'female']]

for suspect in suspects:
    if suspect[1] == 'dog' and suspect[2] == 'female':
        print('The culprit is', suspect[0])
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 3: Compute annual interest. ``` On July 2, 2017 an account with a 3% annual interest rate is opened with a deposit of 3,000,000 won. Write a program that computes and prints the balance on July 2, 2018. Use one variable for the deposit and one for the interest rate. Then change the program to read the deposit and rate as input. Language: python3. Input example:===============================Deposit (won), annual rate (%):: 4000, 3. Output: print the total after one year....
money, ratio = eval(input('Deposit (won), annual rate (%)::'))
print(money*(1+(1/100)*ratio))
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 4: Compute the area of a triangle. ```For a triangle with side lengths 3, 4, 5: x = (3 + 4 + 5)/2, and the area is the positive square root of x(x-3)(x-4)(x-5). Language: python3. Input: the three side lengths, comma-separated:======================Triangle side lengths (comma-separated): 3,4,5. Output: the area of the triangle:======================6.0. Sample input: 3,4,5. Sample output: 6.0```
a, b, c = eval(input())
x = (a+b+c)/2
area = (x*(x-a)*(x-b)*(x-c))**(0.5)
print(area)
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 5: Use for to find the female dog. Use and to find the match.
suspects = [['camel', 'mammal', 'female'],
            ['shark', 'fish', 'male'],
            ['poodle', 'dog', 'female']]

for suspect in suspects:
    if suspect[1] == 'dog' and suspect[2] == 'female':
        print('The culprit is', suspect[0])
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 6: Fill in the blanks to draw two cards without duplicates.
import random

cities = ['Seoul', 'Busan', 'Ulsan', 'Incheon']
print(random.sample(cities, 2))
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 7: Pick one of the following at random! animals = zebra, bull, frog, sparrow.
# a list []
import random

animals = ['zebra', 'bull', 'frog', 'sparrow']
print(random.choice(animals))
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 8: Use def to write a phrase that greets each person! "Hello, Gabriel!" \ "Hello, Elice!"
def welcome(name):
    print('Hello,', name)

welcome('Gabriel')
welcome('Elice')
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 9: Print the grade for a score. Cheolsu's score is 75; print his grade. Grade A: 80 < score <= 100. Grade B: 60 < score <= 80. Grade C: 40 < score <= 60.
score = 75

if 80 < score <= 100:
    print('The grade is A')
if 60 < score <= 80:
    print('The grade is B')
if 40 < score <= 60:
    print('The grade is C')
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 10: Use variables to compute total sales. Order 1: 2 coffees, 4 teas, 5 lemon teas. Order 2: 1 coffee, 1 tea, 5 lemon teas. Order 3: 2 coffees, 3 teas, 1 lemon tea.
coffee = 4000
tea = 3000
lemon = 200

order1 = coffee*2 + tea*4 + lemon*5
order2 = coffee*1 + tea*1 + lemon*5
order3 = coffee*2 + tea*3 + lemon*1
print(order1 + order2 + order3)
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 11: We are running a race of 5 laps. Use a while loop to count the laps, printing the lap number on each iteration. After 5 laps, print a finish message and stop.
count = 0
while count < 5:
    count = count + 1
    print('Lap', count)
print('The race is over!')
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 12: Check the answer. What is the capital of the USA? Have the user answer from the choices: London, Ottawa, Paris, New York. If they give a wrong answer, tell them which country that city is the capital of.
while True:
    answer = input('Which of London, Ottawa, Paris, New York is the capital of the USA? ')
    if answer == 'New York':
        print('Correct. New York is the capital of the USA.')
        break
    elif answer == 'Ottawa':
        print('Ottawa is the capital of Canada.')
    elif answer == 'Paris':
        print('Paris is the capital of France.')
    elif answer == 'London':
        print('London is the capital of the UK.')
    e...
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 13: Exchange the goods. Cheolsu bought fluorescent lamps at the mart, but LED bulbs are more energy efficient, so he wants to exchange them. Replace the three fluorescent lamps with three LED bulbs: fluorescent, fluorescent, fluorescent ==> LED bulb, LED bulb, LED bulb.
전구 = ['fluorescent lamp', 'fluorescent lamp', 'fluorescent lamp']  # 전구 means "bulbs"
for i in range(3):
    전구[i] = 'LED bulb'
print(전구)
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Example 14: Repetition. Greet the 10 monkeys at the zoo. Use for to write the code that greets all 10 monkeys at once.
for num in range(10):
    print('Hello monkey', num)

my_str = 'My name is %s' % 'Lion'
print(my_str)
'%d %d' % (1,2)
'%f %f' % (1,2)
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
print Options
print('Collective intelligence', end='/')
print('Collective intelligence', end="let's integrate")
_____no_output_____
MIT
python/.ipynb_checkpoints/examples-checkpoint.ipynb
ikelee22/pythonlib
Part 2b)
import numpy as np
import matplotlib.pyplot as plt

# Define subroutines
def f(x):
    return np.sin(x) + np.exp(-x)

def df(x):
    # analytic second derivative of f, for comparison with the finite difference g
    return -np.sin(x) + np.exp(-x)

def g(x, h):
    # central second-difference approximation of f''
    return 1/h**2 * (f(x+h) - 2*f(x) + f(x-h))

def G(x, h, p):
    # Richardson extrapolation of g
    return (2**p * g(x, h/2) - g(x, h)) / (2**p - 1)

# Computation
t = np.arange(0, 10, 0.1)
for h in [0.1, 0.5]:
    plt.plot(t, f(t), '...
_____no_output_____
IJG
16Nov17.ipynb
FHomewood/ScientificComputing
Embedding a Bokeh server in a NotebookThis notebook shows how a Bokeh server application can be embedded inside a Jupyter notebook.
import yaml

from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Slider
from bokeh.plotting import figure
from bokeh.themes import Theme
from bokeh.io import show, output_notebook
from bokeh.sampledata.sea_surface_temperature import sea_surface_temperature

output_notebook()
_____no_output_____
BSD-3-Clause
examples/howto/server_embed/notebook_embed.ipynb
GangminLi/bokeh
There are various application handlers that can be used to build up Bokeh documents. For example, there is a `ScriptHandler` that uses the code from a `.py` file to produce Bokeh documents. This is the handler that is used when we run `bokeh serve app.py`. In the notebook we can use a function to define a Bokeh applic...
def bkapp(doc):
    df = sea_surface_temperature.copy()
    source = ColumnDataSource(data=df)

    plot = figure(x_axis_type='datetime', y_range=(0, 25),
                  y_axis_label='Temperature (Celsius)',
                  title="Sea Surface Temperature at 43.18, -70.43")
    plot.line('time', 'temperature', sour...
_____no_output_____
BSD-3-Clause
examples/howto/server_embed/notebook_embed.ipynb
GangminLi/bokeh
Now we can display our application using ``show``, which will automatically create an ``Application`` that wraps ``bkapp`` using ``FunctionHandler``. The end result is that the Bokeh server will call ``bkapp`` to build a new document for every new session that is opened. **Note**: If the current notebook is not displaye...
show(bkapp) # notebook_url="http://localhost:8888"
_____no_output_____
BSD-3-Clause
examples/howto/server_embed/notebook_embed.ipynb
GangminLi/bokeh
What is Abstraction in OOP? Abstraction is the concept in object-oriented programming of "showing" only essential attributes and "hiding" unnecessary information. The main purpose of abstraction is to hide unnecessary details from the users. Abstraction is selecting data from a larger pool to show only relevant detai...
from abc import ABC

# declaring an abstract class
class classname(ABC):
    def pau(self):
        pass
_____no_output_____
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
The syntax of an abstract method is declared as:
def abstractmethod_name():
    pass
_____no_output_____
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
A few things to note in Python: an abstract class can hold both abstract methods and normal methods; an abstract class is not instantiated (no objects are created from it); and the methods declared in the abstract base class are implemented in the derived classes.
from ABC import abc  # wrong on purpose: abc and ABC are case-sensitive, and swapping them raises an ImportError
_____no_output_____
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Code I:
from abc import ABC, abstractmethod

# Abstract class (note: it must inherit from ABC, not the module abc)
class product(ABC):
    # Normal method
    def item_list(self, rate):
        print("amount submitted : ", rate)

    # Abstract method
    @abstractmethod
    def product(self, rate):
        pass
_____no_output_____
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Code II: A program to compute the volume of geometric shapes
from abc import ABC

class geometric(ABC):
    def volume(self):   # abstract method
        pass

class Rect(geometric):
    length = 4
    width = 6
    height = 6
    def volume(self):
        return self.length * self.width * self.height

class Sphere(geometric):
    radius = 8
    def volume(...
Volume of a rectangle: 144
Volume of a circle: 2089.9840000000004
Volume of a square: 125
Volume of a triangle: 10.0
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Code III: A program to generate different invoices
from abc import ABC, abstractmethod

class Bill(ABC):
    def final_bill(self, pay):
        print('Purchase of the product: ', pay)

    @abstractmethod
    def Invoice(self, pay):
        pass

class Paycheque(Bill):
    def Invoice(self, pay):
        print('paycheque of: ', pay)

class CardPayme...
_____no_output_____
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Code IV: Python program showing abstract base class work
from abc import ABC, abstractmethod

class Animal(ABC):
    @abstractmethod
    def move(self):
        pass

class Human(Animal):
    def move(self):
        print("I can walk and run")

class Snake(Animal):
    def move(self):
        print("I can crawl")

class Dog(Animal):
    def move(self):
        pr...
I can walk and run
I can crawl
I can bark
I can roar
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Concrete Methods in Abstract Base Classes: Concrete (normal) classes contain only concrete (normal) methods, whereas abstract classes may contain both concrete methods and abstract methods. The concrete class provides an implementation of the abstract methods; the abstract base class can also provide an implementation by ...
from abc import ABC, abstractmethod

class R(ABC):
    def rk(self):
        print("Abstract Base Class")

class K(R):
    def rk(self):
        super().rk()
        print("subclass")

# Object instantiation
r = K()
r.rk()
Abstract Base Class
subclass
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Code VI:
from abc import ABC, abstractmethod

class Bank(ABC):
    def branch(self, Naira):
        print("Fees submitted : ", Naira)

    @abstractmethod
    def Bank(self, Naira):
        pass

class private(Bank):
    def Bank(self, Naira):
        print("Total Naira Value here: ", Naira)

class public(Bank):
    def Bank(Naira...
_____no_output_____
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Class Project I: Develop a Python OOP program that creates an abstract base class called coup_de_ecriva. The base class will have one abstract method called Fan_Page and four subclasses, namely FC_Cirok, Madiba_FC, Blue_Jay_FC and TSG_Walker. The program will receive as input the name of the club the user supports an...
from abc import ABC, abstractmethod

class coup_de_escriva(ABC):
    @abstractmethod
    def Fan_page(self):
        pass

class FC_Cirok(coup_de_escriva):
    def Fan_page(self):
        print(str(input("Enter your name")))
        print(str(input("Which club do you support?")))
        print("WELCOME TO CIRO...
Enter your name Chima
Chima
Which club do you support? Cirok
Cirok
WELCOME TO CIROK FC!
Enter your name Toju
Toju
Which club do you support? Madiba
Madiba
WELCOME TO MADIBA FC!
Enter your name Daniel
Daniel
Which club do you support? Bluejays
Bluejays
WELCOME TO THE BLUES!
Enter your name Murew...
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Class Project II: The Service Unit of PAU has contacted you to develop a program to manage some of the external food vendors. With your knowledge of Python OOP, develop a program to manage the PAU external food vendors. The program receives as input the vendor of interest and displays the menu of that vendor. T...
from abc import ABC, abstractmethod

class External_Vendors(ABC):
    @abstractmethod
    def menu(self):
        pass

class Cooperative_cafeteria(External_Vendors):
    def menu(self):
        print(str(input("Which external vendor would you prefer?")))
        print("Menu ; Jollof Rice and Stew, White Rice and S...
Which external vendor would you prefer? Cooperative
Cooperative
Menu ; Jollof Rice and Stew, White Rice and Stew, Fried Rice, Salad, Platain
Which external vendor would you prefer? Faith
Faith
Menu ; Jollof Rice , White Rice and Stew, Fried Rice, Beans, Chicken
Which external vendor would you prefer? Students Ce...
MIT
Week 12 - Abstraction Practice.ipynb
chimaCSC102/chimaCSC102
Boston Housing KNN
import sys sys.path.append("..") from pyspark.sql.types import BooleanType from pyspark.ml.feature import StandardScaler, VectorAssembler, BucketedRandomProjectionLSH from pyspark.ml.classification import LinearSVC from pyspark.sql import Row from pyspark.sql.session import SparkSession from pyspark.sql.functions impor...
_____no_output_____
Apache-2.0
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
Spark session creation
spark = (SparkSession
         .builder
         .appName("BostonHousingKNN")
         .getOrCreate())
_____no_output_____
Apache-2.0
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
DataFrame creation using an inferred schema
df = spark.read.option("header", "true") \
    .option("inferSchema", "true") \
    .option("delimiter", ";") \
    .csv(inputFile) \
    .withColumn("CATBOOL", expr("CAT").cast(BooleanType()))
df.printSchema()  # printSchema already prints; wrapping it in print() just adds "None"
_____no_output_____
Apache-2.0
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
Prepare training and test data.
featureCols = df.columns.copy() featureCols.remove("MEDV") featureCols.remove("CAT") featureCols.remove("CATBOOL") print(featureCols) assembler = VectorAssembler(outputCol="features", inputCols=featureCols) scaler = StandardScaler(inputCol="features", outputCol="scaledFeatures", withStd=True,...
_____no_output_____
Apache-2.0
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
LSH Euclidean Distance
# TODO optimize the params to minimize the test error
# TODO try the MinHashLSH too
lhsED = BucketedRandomProjectionLSH(inputCol="scaledFeatures", outputCol="hashes",
                                    bucketLength=2.0, numHashTables=3)
_____no_output_____
Apache-2.0
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
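The bucketing idea behind `BucketedRandomProjectionLSH` can be sketched in plain Python: each hash function projects a vector onto a random direction and floors the result into buckets of width `bucketLength` (2.0 here, matching the cell above), so nearby points tend to land in the same bucket. This is an illustrative sketch, not Spark's implementation:

```python
import math
import random

random.seed(0)

def make_hash(dim, bucket_length):
    """One hash function: a random projection direction plus a bucket width."""
    direction = [random.gauss(0, 1) for _ in range(dim)]
    def h(v):
        projection = sum(vi * di for vi, di in zip(v, direction))
        return math.floor(projection / bucket_length)
    return h

# Three hash tables over 2-D vectors, bucketLength = 2.0 (as in the cell above)
hashes = [make_hash(2, 2.0) for _ in range(3)]

a = [1.0, 1.0]
b = [1.1, 0.9]     # close to a
c = [50.0, -40.0]  # far from a

sig_a = [h(a) for h in hashes]
sig_b = [h(b) for h in hashes]
sig_c = [h(c) for h in hashes]

# Close points usually share buckets; distant points rarely do
print(sum(x == y for x, y in zip(sig_a, sig_b)),
      sum(x == y for x, y in zip(sig_a, sig_c)))
```

`approxNearestNeighbors` then only has to compare the query against points that collide with it in at least one hash table, rather than against the whole training set.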
Train the model
modelED = lhsED.fit(training)
_____no_output_____
Apache-2.0
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
Test the model
resultList = [] # The Nearest neighbor testing # TODO add other aggregation methods for row in test.collect() : neighbors = modelED.approxNearestNeighbors(training, row.scaledFeatures, 5) grouped = neighbors.groupBy(df.CAT).count() if grouped.count() > 0 : result = grouped.orderBy(desc("count"...
_____no_output_____
Apache-2.0
exercises/boston_housing_knn.ipynb
sturc/bd_and_ds
Load some text data - from wikipedia, pages on people
people = turicreate.SFrame('people_wiki.sframe') people
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Explore the dataset and checkout the text it contains
obama = people[people['name'] == 'Barack Obama'] obama obama['text']
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Explore the entry for actor George Clooney
clooney = people[people['name'] == 'George Clooney'] clooney['text']
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Word counts for the Obama article
obama['word_count'] = turicreate.text_analytics.count_words(obama['text']) obama['word_count']
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Sort the word counts for the Obama article
obama.stack('word_count', new_column_name=['word', 'count'])
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name=['word', 'count'])
obama_word_count_table
obama_word_count_table.sort('count', ascending=False)
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Compute TF-IDF for the corpus
people['word_count'] = turicreate.text_analytics.count_words(people['text'])
people
people['tfidf'] = turicreate.text_analytics.tf_idf(people['text'])
people
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
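The TF-IDF weighting computed above can be sketched in plain Python. Turi Create's exact formula may differ in smoothing details; this sketch uses the textbook form tf * log(N / df):

```python
import math

def tf_idf(docs):
    """docs: list of word-count dicts, one per document."""
    n = len(docs)
    # document frequency: in how many documents each word appears
    df = {}
    for doc in docs:
        for word in doc:
            df[word] = df.get(word, 0) + 1
    return [{w: tf * math.log(n / df[w]) for w, tf in doc.items()}
            for doc in docs]

corpus = [{'the': 3, 'senate': 2},
          {'the': 4, 'goal': 1},
          {'the': 2, 'senate': 1, 'goal': 1}]
weights = tf_idf(corpus)
# 'the' appears in every document, so its idf -- and hence its tf-idf -- is 0
print(weights[0])
```

This is why common words like "the" drop out of the ranked tf-idf list for the Obama article while rare, distinctive words rise to the top.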
Examine the TF-IDF for the Obama article
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf', new_column_name=['word', 'tfidf']).sort('tfidf', ascending=False)
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Manually compute distances between a few people
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Is Obama closer to Clinton than to Beckham?
# The smaller the cosine distance, the more similar the two articles.
turicreate.distances.cosine(obama['tfidf'][0], clinton['tfidf'][0])
turicreate.distances.cosine(obama['tfidf'][0], beckham['tfidf'][0])
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
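Cosine distance is 1 minus the cosine similarity of the two tf-idf vectors. Over word-weight dicts like the ones above, it can be sketched as follows (the three toy vectors are invented for illustration, not taken from the dataset):

```python
import math

def cosine_distance(a, b):
    """a, b: dicts mapping word -> tf-idf weight."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (norm_a * norm_b)

obama_like = {'president': 2.0, 'senate': 1.0}
clinton_like = {'president': 1.5, 'senate': 0.5}
beckham_like = {'football': 3.0, 'goal': 1.0}

print(cosine_distance(obama_like, clinton_like) <
      cosine_distance(obama_like, beckham_like))  # True
```

Two articles with no words in common have a dot product of 0 and hence the maximum distance of 1.0, which is why the Obama-Beckham distance is larger than the Obama-Clinton one.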
Build the nearest neighbor model for document retrieval
knn_model = turicreate.nearest_neighbors.create(people,features=['tfidf'],label='name')
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Applying the nearest-neighbors model for retrieval Who is closest to Obama?
knn_model.query(obama)
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Other examples of document retrieval
taylor_swift = people[people['name'] == 'Taylor Swift']
knn_model.query(taylor_swift)

jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)

arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
_____no_output_____
MIT
machine_learning/1_machine_learning_foundations/lecture/week4/Document retrieval.ipynb
bomin0624/coursera-university-of-washington
Chapter 3This is the third in a series of notebooks related to astronomy data.As a running example, we are replicating parts of the analysis in a recent paper, "[Off the beaten path: Gaia reveals GD-1 stars outside of the main stream](https://arxiv.org/abs/1805.00425)" by Adrian M. Price-Whelan and Ana Bonaca.In the f...
# If we're running on Colab, install libraries
import sys
IN_COLAB = 'google.colab' in sys.modules

if IN_COLAB:
    !pip install astroquery astro-gala pyia python-wget
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
Reload the dataIn the previous lesson, we ran a query on the Gaia server and downloaded data for roughly 100,000 stars. We saved the data in a FITS file so that now, picking up where we left off, we can read the data from a local file rather than running the query again.If you ran the previous lesson successfully, yo...
import os
from wget import download

filename = 'gd1_results.fits'
path = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'

if not os.path.exists(filename):
    print(download(path+filename))
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
Now here's how we can read the data from the file back into an Astropy `Table`:
from astropy.table import Table

results = Table.read(filename)
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
The result is an Astropy `Table`.We can use `info` to refresh our memory of the contents.
results.info
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
Selecting rows and columnsIn this section we'll see operations for selecting columns and rows from an Astropy `Table`. You can find more information about these operations in the [Astropy documentation](https://docs.astropy.org/en/stable/table/access_table.html).We can get the names of the columns like this:
results.colnames
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
And select an individual column like this:
results['ra']
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
The result is a `Column` object that contains the data, and also the data type, units, and name of the column.
type(results['ra'])
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
The rows in the `Table` are numbered from 0 to `n-1`, where `n` is the number of rows. We can select the first row like this:
results[0]
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
As you might have guessed, the result is a `Row` object.
type(results[0])
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
Notice that the bracket operator selects both columns and rows. You might wonder how it knows which to select.If the expression in brackets is a string, it selects a column; if the expression is an integer, it selects a row.If you apply the bracket operator twice, you can select a column and then an element from the c...
results['ra'][0]
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
Or you can select a row and then an element from the row.
results[0]['ra']
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
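The string-selects-a-column, integer-selects-a-row dispatch described above can be mimicked with a tiny stand-in class. This is purely illustrative — it is not how Astropy implements `Table` — but it shows why both access orders give the same element:

```python
class TinyTable:
    """Minimal stand-in for an Astropy Table's bracket operator."""
    def __init__(self, columns):
        self.columns = columns  # dict: column name -> list of values

    def __getitem__(self, key):
        if isinstance(key, str):   # string -> whole column
            return self.columns[key]
        # integer -> one row, as a dict of per-column values
        return {name: col[key] for name, col in self.columns.items()}

t = TinyTable({'ra': [88.8, 89.0], 'dec': [20.4, 21.1]})
print(t['ra'][0])   # 88.8
print(t[0]['ra'])   # 88.8 -- same element either way
```

Column-then-row and row-then-column both end at the same cell; they just slice the table along different axes first.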
You get the same result either way. Scatter plotTo see what the results look like, we'll use a scatter plot. The library we'll use is [Matplotlib](https://matplotlib.org/), which is the most widely-used plotting library for Python.The Matplotlib interface is based on MATLAB (hence the name), so if you know MATLAB, so...
import matplotlib.pyplot as plt
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
Pyplot is part of the Matplotlib library. It is conventional to import it using the shortened name `plt`. Pyplot provides two functions that can make scatterplots, [plt.scatter](https://matplotlib.org/3.3.0/api/_as_gen/matplotlib.pyplot.scatter.html) and [plt.plot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot...
x = results['ra']
y = results['dec']

plt.plot(x, y, 'ko')

plt.xlabel('ra (degree ICRS)')
plt.ylabel('dec (degree ICRS)');
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
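A common remedy for overplotting is smaller, semi-transparent markers; `markersize` and `alpha` are standard `plt.plot` options. This sketch uses synthetic Gaussian data rather than the GD-1 query results, so the coordinate ranges are invented for illustration:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen so this runs headless
import matplotlib.pyplot as plt
import random

random.seed(1)
x = [random.gauss(160, 5) for _ in range(10_000)]
y = [random.gauss(22, 2) for _ in range(10_000)]

fig, ax = plt.subplots()
# Tiny, semi-transparent markers keep dense regions readable
ax.plot(x, y, 'ko', markersize=0.5, alpha=0.2)
ax.set_xlabel('ra (degree ICRS)')
ax.set_ylabel('dec (degree ICRS)')
fig.savefig('scatter.png')
```

With full-size opaque markers the dense core would render as a solid black blob; shrinking and fading the markers lets overlapping points accumulate visibly instead.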
The arguments to `plt.plot` are `x`, `y`, and a string that specifies the style. In this case, the letters `ko` indicate that we want a black, round marker (`k` is for black because `b` is for blue).The functions `xlabel` and `ylabel` put labels on the axes.This scatter plot has a problem. It is "[overplotted](https:...
from pyia import GaiaData

gaia_data = GaiaData(results)
type(gaia_data)
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData
Now we can extract sky coordinates from the `GaiaData` object, like this:
import astropy.units as u

skycoord = gaia_data.get_skycoord(
    distance=8*u.kpc,
    radial_velocity=0*u.km/u.s)
_____no_output_____
MIT
03_motion.ipynb
abostroem/AstronomicalData