Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
cats = df['animal'] == 'cat'
dogs = df['animal'] == 'dog'
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
long_animals = df['length_inches'] > 12
df[cats & long_animals]
df[(df['length_inches'] > 12) & (df['animal'] == 'cat')]  # Amazing!
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
What's the mean length of a cat? What's the mean length of a dog? (Cats are mean but dogs are not.)
df[cats].mean()
df[dogs].mean()
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Use groupby to accomplish both of the above tasks at once.
df.groupby('animal').mean() #groupby
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
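For readers without the homework's data file, the groupby pattern can be sketched on a hypothetical frame with the same columns (the sample data below is made up):

```python
import pandas as pd

# Hypothetical data mirroring the homework's columns
df = pd.DataFrame({
    'animal': ['cat', 'dog', 'cat', 'dog'],
    'length_inches': [10, 20, 14, 18],
})

# One groupby call computes the per-animal means in a single pass
means = df.groupby('animal')['length_inches'].mean()
print(means['cat'], means['dog'])  # 12.0 19.0
```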
Make a histogram of the length of dogs. I apologize that it is so boring.
df[dogs].plot.hist(y='length_inches')
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Change your graphing style to be something else (anything else!)
df[dogs].plot.bar(x='name', y='length_inches')
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)
df[dogs].plot.barh(x='name', y='length_inches') #Fontaine is such an annoying name for a dog
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Make a sorted horizontal bar graph of the cats, with the larger cats on top.
df[cats].sort_values('length_inches', ascending=False).plot(kind='barh', x='name', y='length_inches')
# df[df['animal'] == 'cat'].sort_values(by='length_inches').plot(kind='barh', x='name', y='length_inches', legend=False)
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Experiment Database
import naminggamesal.ngdb as ngdb
import naminggamesal.ngsimu as ngsimu
notebooks/6_Intro_Experiment_Database.ipynb
flowersteam/naminggamesal
agpl-3.0
Using ngdb instead of ngsimu, we create Experiment objects that are reusable via a database. Execute the code below, and then a second time for the graph().show() part; you will notice the difference between the two libraries (testexp is an ngdb.Experiment, testexp2 is an ngsimu.Experiment).
xp_cfg = {
    'pop_cfg': {
        'voc_cfg': {
            'voc_type': 'pandas',
            # 'voc_type': 'sparse_matrix',
            # 'M': 5, 'W': 10
        },
        'strat_cfg': {'strat_type': 'naive', 'voc_update': 'minimal'},
        'interact_cfg': {'interact_type': 'speakerschoice'},
        'env_cfg': {'env_type': 'simple', 'M': 5, 'W': 10},
        'nbagent': nbagent
    },
    'step': 10
}
testexp = ngdb.Experiment(**xp_cfg)
testexp
testexp2 = ngsimu.Experiment(**xp_cfg)
testexp2
testexp.continue_exp_until(0)
testexp2.continue_exp_until(3)
for i in range(nbagent):
    print(testexp2._poplist.get_last()._agentlist[i]._vocabulary._content_m)
    print(testexp2._poplist.get_last()._agentlist[i]._vocabulary._content_m.index)
    print(testexp2._poplist.get_last()._agentlist[i]._vocabulary.get_accessible_words())
testexp2._poplist.get_last()._agentlist[0]._vocabulary.add()
testexp.graph("entropy").show()
testexp2.graph("entropy").show()
notebooks/6_Intro_Experiment_Database.ipynb
flowersteam/naminggamesal
agpl-3.0
Get back existing experiments, merge databases, and plot against different abscissae (x-axes)
db = ngdb.NamingGamesDB("ng2.db")
testexp3 = db.get_experiment(force_new=True, **xp_cfg)
testexp3.continue_exp_until(200)
db.merge("naminggames.db", remove=False)
testexp3 = db.get_experiment(force_new=False, **xp_cfg)
testexp3.continue_exp_until(100)
testexp3.graph("srtheo").show()
testexp3.graph("entropy").show()
testexp3.graph("entropy", X="srtheo").show()
testexp3.graph("entropy", X="interactions_per_agent").show()
notebooks/6_Intro_Experiment_Database.ipynb
flowersteam/naminggamesal
agpl-3.0
aspell check
import os
import arrow

makedailthi = os.listdir('/home/wcmckee/Downloads/writersdenhamilton/posts/')
artim = arrow.now()
artim.strftime('%d')
daiyrst = artim.strftime('%d')
yrrst = artim.strftime('%y')
yrrst
fulrst = daiyrst
for maked in makedailthi:
    # print(maked)
    if 'nanowrimo' in maked:
        print(maked)
    if 'nanwri' in maked:
        print(maked)
for its in range(1, int(daiyrst) + 1):
    print(its)
nanowritmorst.ipynb
wcmckee/wcmckee
mit
Swap this around - number of words as key and word as value
for dakey in dayil.keys():
    if ',' in dakey:
        print(dakey.replace(',', ''))
    elif '.' in dakey:
        print(dakey.replace('.', ''))
nanowritmorst.ipynb
wcmckee/wcmckee
mit
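The prompt above asks to swap the word-to-count mapping, which the cell above doesn't actually do. A minimal sketch, assuming `dayil` maps words to counts (the sample data here is made up); note that duplicate counts collide, so words are collected in a list per count:

```python
# Hypothetical word -> count mapping standing in for `dayil`
dayil = {'the': 3, 'cat': 1, 'sat': 1}

# Invert it: count -> list of words with that count
swapped = {}
for word, count in dayil.items():
    swapped.setdefault(count, []).append(word)

print(swapped)  # {3: ['the'], 1: ['cat', 'sat']}
```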
How many single words are in the file? Script to edit. Find and replace certain words/phrases. Keylog whatever you type. Report generated at midnight that summarizes the day's writing.
plusonev = list()
isonep = list()
for dayi in dayil.values():
    if dayi > 1:
        print(dayi)
        plusonev.append(dayi)
    elif dayi < 2:
        isonep.append(dayi)
nanowritmorst.ipynb
wcmckee/wcmckee
mit
Get the difference between the two lengths
len(isonep)
len(plusonev)
len(plusonev) - len(isonep)  # the difference between the two lengths
sum(isonep)
sum(plusonev)
sum(isonep) + sum(plusonev)
makedailthi

import requests
import xmltodict

reqartc = requests.get('http://nanowrimo.org/wordcount_api/wchistory/artctrl')
reqtx = reqartc.text
txmls = xmltodict.parse(reqtx)
txkeys = txmls.keys()
txmls['wchistory']['wordcounts']['wcentry'][0]
lentxm = len(txmls['wchistory']['wordcounts']['wcentry'])
wclis = list()
for lent in range(lentxm):
    print(txmls['wchistory']['wordcounts']['wcentry'][lent]['wc'])
    wclis.append(int(txmls['wchistory']['wordcounts']['wcentry'][lent]['wc']))
len(wclis)
sum(wclis)
txmls.values()
nanowritmorst.ipynb
wcmckee/wcmckee
mit
Let's make some examples using regexps: finding a pattern in a string.
text = """Python is an interpreted high-level general-purpose programming language. Its design philosophy emphasizes code readability with its use of significant indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.""" # citation from Wikipedia
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
re.match searches for the pattern only at the beginning of the string. It returns a match object, or None if the pattern is not found.
import re

re.match("Python", text)  # is Python at the beginning of the text?
if re.match("[Pp]ython", text):  # is Python or python at the beginning of the text?
    print('text starts with Python')
result = re.match("[Pp]ython", text)
result.span(), result.group(0)
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
re.search finds the first occurrence of the pattern anywhere in the string.
re.search('prog', text)
re.search('levels?', text)  # optional 's' after level
re.findall('pro', text)
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
The r prefix (raw string) is often used for regular expressions.
re.findall(r'[ \t\r\n]a[a-zA-Z0-9_][ \t\r\n]', text)  # two-letter words starting with 'a'
re.findall(r'\sa\w\s', text)  # the same as above but shorter
re.findall(r'\sa\w*\s', text)  # words starting with 'a'
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
We can use the regexp find/match functions to validate input data. In the example below: is a string a valid number?
int_numbers = ('12356', '1ac', 'twelve', '23.65', '0', '-768')
for int_number in int_numbers:
    if re.match(r'[+-]?(0|[1-9][0-9]*)$', int_number):
        print(f'{int_number} is an integer number')

float_numbers = ('12', '0.0', '-43.56', '1.76e-1', '1.1.1', '00.289')
for float_number in float_numbers:
    if re.match(r'[+-]?(0|[1-9][0-9]*)(\.[0-9]*)?([eE][+-]?[0-9]+)?$', float_number):
        print(f'{float_number} is a float number')
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
There is another approach to checking numerical values, without regexps:
for float_number in float_numbers:
    try:
        float(float_number)  # try to convert to a float
    except ValueError:
        continue  # can't convert, skip it
    print(f'{float_number} is a float number')
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
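The same try/except idea can be wrapped into a reusable predicate; this small sketch is an addition, not part of the tutorial:

```python
def is_float(s):
    """Return True if `s` can be parsed as a float, without regexps."""
    try:
        float(s)
        return True
    except ValueError:
        return False

print(is_float('1.76e-1'))  # True
print(is_float('1.1.1'))    # False
```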
Email address validation: we'll use a precompiled regular expression (re.compile). This is faster than evaluating the same regexp several times:
email = re.compile(r'^[a-zA-Z0-9.!#$%&\'*+/=?^_`{|}~-]+@[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(\.[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$')
addresses = ['a.b@c', 'siki.zoltan@emk.bme.hu', 'plainaddress', '#@%^%#$@#$@#.com',
             '@example.com', 'Joe Smith <email@example.com>', 'email.example.com',
             'email@example@example.com', 'email@123.123.123.123']
valid_addresses = [addr for addr in addresses if email.search(addr)]
print('valid email addresses:\n', valid_addresses)
invalid_addresses = [addr for addr in addresses if not email.search(addr)]
print('invalid email addresses:\n', invalid_addresses)
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
Other functions: re.sub replaces occurrences of a regexp with a given text in a string.
print(re.sub(r' +', ' ', 'Text with several unnecessary spaces'))  # collapse adjacent spaces into one (note: ' *' would also match empty strings)
print(re.sub(r'[ \t,;]', ',', 'first,second;third fourth fifth'))  # unify separators
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
re.split splits a text into a list of parts, where the separators are given by a regexp.
words = re.split(r'[, \.\t\r\n]', text)  # word separators: space, comma, dot, tab and EOL
words
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
Please note that the previous result contains some empty words where two or more separators are adjacent. Let's correct it:
words = re.split(r'[, \.\t\r\n]+', text)  # join adjacent separators
words
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
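A minimal demonstration (added here, not part of the tutorial) of why a trailing separator yields an empty final element, and one way to drop the empties:

```python
import re

# A trailing separator produces an empty string after the last split point.
parts = re.split(r'[, \.]+', 'one, two.')
print(parts)  # ['one', 'two', '']

# Drop empty elements with a filter:
words = [w for w in parts if w]
print(words)  # ['one', 'two']
```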
Why is there an empty word at the end? Because the text ends with a separator (a newline), so re.split produces a final empty string after it. Complex example: let's find the most frequent four-letter word starting with "s" in Kipling's The Jungle Book.
import urllib.request

url = 'https://www.gutenberg.org/files/236/236-0.txt'
words = {}
with urllib.request.urlopen(url) as file:
    for line in file:
        ws = re.split(r'[, \.\t\r\n]+', line.decode('utf8'))
        for w in ws:
            w = w.lower()
            if re.match(r'[sS][a-z]{3}$', w):  # anchored, so exactly four letters
                if w in words:
                    words[w] += 1
                else:
                    words[w] = 1
print(f'{len(words.keys())} different four-letter words starting with "s"')
m = max(words, key=words.get)
print(f'{m}: {words[m]}')
english/python/regexp_in_python.ipynb
OSGeoLabBp/tutorials
cc0-1.0
Creating Custom pandas Data Reader Packages
import pd_datareader_nhs.nhs_digital_ods as ods

ods.search(string='Prison', field='Label')
dd = ods.download('eprison')
dd.head()
notebooks/ShowNtell_nov17.ipynb
psychemedia/parlihacks
mit
Package Issues: development building up examples and reusable recipes; ownership and production quality (participation in development). Notebooks as Open / Shared Recipes. But How Do I Share Working Examples? BinderHub Build Sequence — "[p]hilosophically similar to Heroku Build Packs":

- requirements.txt — Python packages
- environment.yml — conda environment specification
- apt.txt — Debian packages that should be installed (latest version of Ubuntu)
- postBuild — arbitrary commands to be run after the whole repository has been built
- REQUIRE — Julia packages
- Dockerfile — treated as a regular Dockerfile; its presence disables all other build behaviour

Building a Local Docker Image From a GitHub Repository

```bash
pip3 install jupyter-repo2docker
jupyter-repo2docker --image-name psychemedia/parlihacks --no-run https://github.com/psychemedia/parlihacks
docker push psychemedia/parlihacks
```

Creating Simple Service APIs. In a terminal: jupyter kernelgateway --KernelGatewayApp.api='kernel_gateway.notebook_http' --KernelGatewayApp.seed_uri='./SimpleAPI2.ipynb' --port 8899
import requests

requests.get('http://127.0.0.1:8899/demo/role/worker').json()
requests.get('http://127.0.0.1:8899/demo/name/jo').json()
notebooks/ShowNtell_nov17.ipynb
psychemedia/parlihacks
mit
The normalization of the sensing matrices $A$ is chosen such that $\mathbb{E}\Vert A \Vert_2 = 1$, independent of $m$ and $n$. This allows us to normalize $A$ later to $\Vert A \Vert_2 < 1$ with high probability. The latter condition is important for the convergence of the constant step-size IHT algorithm. For the other approaches (convex programming or adaptive IHT), the normalization does not matter. Convex Programming: The first question we will check is "How many measurements do we need to recover the signal?". For this purpose we will use basis pursuit denoising (the $\ell_1$-regularized least-squares estimator), which is very efficient in the number of samples. On the other hand, it does not scale well to large systems, due to the higher-order polynomial scaling of the corresponding second-order cone program.
def recover(x, m, eta=1.0, sigma=0.0, rgen=np.random):
    A = sensingmat_gauss(m, len(x), rgen)
    y = np.dot(A, x) + sigma / np.sqrt(len(x)) * rgen.randn(m)
    x_hat = cvx.Variable(len(x))
    objective = cvx.Minimize(cvx.norm(A * x_hat - y, 2)**2 + eta * cvx.norm(x_hat, 1))
    problem = cvx.Problem(objective, [])
    problem.solve()
    return np.ravel(x_hat.value)

def check(signal, sigma, rgen=np.random):
    return np.frompyfunc(lambda m, eta: np.linalg.norm(signal - recover(signal, m, eta, sigma, rgen)), 2, 1)

DIM = 100
NNZ = 5
SAMPLES = 20
MAX_MEASUREMENTS = 100
SIGMA = 0

signal = random_sparse_vector(DIM, NNZ)
ms, etas = np.meshgrid(np.linspace(1, MAX_MEASUREMENTS, 20), np.logspace(-4, 1, 20))
errors = [check(random_sparse_vector(DIM, NNZ), SIGMA)(ms, etas) for i in Progress(range(SAMPLES))]
errors = np.mean(errors, axis=0).reshape(ms.shape).astype('float64')

value = np.log(errors)
levels = np.linspace(np.min(value), np.max(value), 15)
pl.yscale('log')
pl.contourf(ms, etas, value, levels=levels)
pl.xlabel(r"$m$")
pl.ylabel(r"$\lambda$")
pl.colorbar()
Compressed Sensing/IHT -- Compressed Sensing.ipynb
dseuss/notebooks
unlicense
As we can see from the figure above, about 25 measurements are enough to recover the signal with high precision. Since we have perfect measurements (no noise), we obtain better results for a smaller regularization parameter $\eta$. This changes as soon as we add noise to our measurements:
DIM = 100
NNZ = 5
SAMPLES = 20
MAX_MEASUREMENTS = 100
SIGMA = .2

signal = random_sparse_vector(DIM, NNZ)
ms, etas = np.meshgrid(np.linspace(1, MAX_MEASUREMENTS, 20), np.logspace(-4, 1, 20))
errors = [check(random_sparse_vector(DIM, NNZ), SIGMA)(ms, etas) for i in Progress(range(SAMPLES))]
errors = np.mean(errors, axis=0).reshape(ms.shape).astype('float64')

value = np.log(errors)
levels = np.linspace(np.min(value), np.max(value), 15)
pl.yscale('log')
pl.contourf(ms, etas, value, levels=levels)
pl.xlabel(r"$m$")
pl.ylabel(r"$\lambda$")
pl.colorbar()
Compressed Sensing/IHT -- Compressed Sensing.ipynb
dseuss/notebooks
unlicense
Here, $\eta \approx 10^{-2}$ gives the best results. For larger values the recovered signal is too sparse to fit the data well; for smaller values we overfit to the noisy measurements. In the following we will use noiseless measurements. Constant IHT: First of all, we try iterative hard thresholding (IHT) with a constant step size $\mu = 1$. In this case it can be proven that the algorithm converges provided $\Vert A \Vert_2 < 1$ [1]. Note that a rescaling of the design matrix $A$ can be compensated by rescaling the step size [2]. Due to our choice of normalization for $A$, $\mu = 0.5$ will suffice with high probability. Since we cannot expect cIHT to be as efficient w.r.t. the sample size as basis pursuit denoising, we increase the number of measurements.
import sys
sys.path.append('/Users/dsuess/Code/CS\ Algorithms/')
from csalgs.cs import iht
%autoreload 2

def cIHT(x, m, r, stepsize=1.0, rgen=np.random, sensingmat=None, x_init=None):
    A = sensingmat_gauss(m, len(x)) if sensingmat is None else sensingmat
    y = np.dot(A, x)
    x_hat = np.zeros(x.shape) if x_init is None else x_init
    while True:
        x_hat += stepsize * A.T.dot(y - A.dot(x_hat))
        compress(x_hat, r)
        yield x_hat

MEASUREMENTS = 50
for _ in Progress(range(1)):
    A = sensingmat_gauss(MEASUREMENTS, len(signal))
    y = A @ signal
    solution = iht.csIHT(A, y, 2 * NNZ, stepsize=iht.adaptive_stepsize())
    pl.plot([np.linalg.norm(signal - x_hat) for x_hat in it.islice(solution, 100)])
pl.xlabel(r"\# iterations")
pl.ylabel(r"$\Vert x - \hat x \Vert_2$")
Compressed Sensing/IHT -- Compressed Sensing.ipynb
dseuss/notebooks
unlicense
Note that the cIHT algorithm does not converge reliably, even after a large number of iterations. Even worse, the algorithm can get trapped for a finite number of steps. This behavior is problematic if one wants to check convergence by comparing two consecutive iterates. Adaptive IHT
SCALE_CONST = .5
KAPPA = 3.
assert KAPPA > 1 / (1 - SCALE_CONST)

def get_stepsize(A, g, supp):
    return norm(g[supp])**2 / norm(np.dot(A[:, supp], g[supp]))**2

def same_supports(supp1, supp2):
    return np.all(np.sort(supp1) == np.sort(supp2))

def compute_omega(x_np, x_n, A):
    diff = x_np - x_n
    return (1 - SCALE_CONST) * norm(diff)**2 / norm(A.dot(diff))**2

def get_update(A, x, y, supp, r):
    g = A.T.dot(y - A.dot(x))
    mu = norm(g[supp])**2 / norm(np.dot(A[:, supp], g[supp]))**2
    while True:
        x_new, supp_new = compression(x + mu * g, r, retsupp=True)
        if same_supports(supp, supp_new) or (mu < compute_omega(x_new, x, A)):
            return x_new, supp_new
        mu /= KAPPA * SCALE_CONST

def aIHT(x, m, r, rgen=np.random, sensingmat=None, x_init=None):
    A = sensingmat_gauss(m, len(x)) if sensingmat is None else sensingmat
    y = np.dot(A, x)
    x_hat = np.zeros(x.shape) if x_init is None else x_init
    _, supp = compression(A.T.dot(y), r, retsupp=True)
    while True:
        x_hat, supp = get_update(A, x_hat, y, supp, r)
        yield x_hat

DIM = 1000
NNZ = 25
MEASUREMENTS = 200
for _ in Progress(range(20)):
    signal = random_sparse_vector(DIM, NNZ)
    solution = aIHT(signal, MEASUREMENTS, int(2 * NNZ))
    pl.plot([np.linalg.norm(signal - x_hat) for x_hat in it.islice(solution, 250)])
pl.xlabel(r"\# iterations")
pl.ylabel(r"$\Vert x - \hat x \Vert_2$")
Compressed Sensing/IHT -- Compressed Sensing.ipynb
dseuss/notebooks
unlicense
Plot items Lines, Bars, Points and Right yAxis
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]
pp = Plot(title='Bars, Lines, Points and 2nd yAxis', xLabel="xLabel", yLabel="yLabel",
          legendLayout=LegendLayout.HORIZONTAL, legendPosition=LegendPosition.RIGHT, omitCheckboxes=True)
pp.add(YAxis(label="Right yAxis"))
pp.add(Bars(displayName="Bar", x=[1, 3, 5, 7, 10], y=[100, 120, 90, 100, 80], width=1))
pp.add(Line(displayName="Line", x=x, y=y, width=6, yAxis="Right yAxis"))
pp.add(Points(x=x, y=y, size=10, shape=ShapeType.DIAMOND, yAxis="Right yAxis"))

plot = Plot(title="Setting line properties")
ys = [0, 1, 6, 5, 2, 8]
ys2 = [0, 2, 7, 6, 3, 8]
plot.add(Line(y=ys, width=10, color=Color.red))
plot.add(Line(y=ys, width=3, color=Color.yellow))
plot.add(Line(y=ys, width=4, color=Color(33, 87, 141), style=StrokeType.DASH, interpolation=0))
plot.add(Line(y=ys2, width=2, color=Color(212, 57, 59), style=StrokeType.DOT))
plot.add(Line(y=[5, 0], x=[0, 5], style=StrokeType.LONGDASH))
plot.add(Line(y=[4, 0], x=[0, 5], style=StrokeType.DASHDOT))

plot = Plot(title="Changing Point Size, Color, Shape")
y1 = [6, 7, 12, 11, 8, 14]
y2 = [4, 5, 10, 9, 6, 12]
y3 = [2, 3, 8, 7, 4, 10]
y4 = [0, 1, 6, 5, 2, 8]
plot.add(Points(y=y1))
plot.add(Points(y=y2, shape=ShapeType.CIRCLE))
plot.add(Points(y=y3, size=8.0, shape=ShapeType.DIAMOND))
plot.add(Points(y=y4, size=12.0, color=Color.orange, outlineColor=Color.red))

plot = Plot(title="Changing point properties with list")
cs = [Color.black, Color.red, Color.orange, Color.green, Color.blue, Color.pink]
ss = [6.0, 9.0, 12.0, 15.0, 18.0, 21.0]
fs = [False, False, False, True, False, False]
plot.add(Points(y=[5] * 6, size=12.0, color=cs))
plot.add(Points(y=[4] * 6, size=12.0, color=Color.gray, outlineColor=cs))
plot.add(Points(y=[3] * 6, size=ss, color=Color.red))
plot.add(Points(y=[2] * 6, size=12.0, color=Color.black, fill=fs, outlineColor=Color.black))

plot = Plot()
y1 = [1.5, 1, 6, 5, 2, 8]
cs = [Color.black, Color.red, Color.gray, Color.green, Color.blue, Color.pink]
ss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]
plot.add(Stems(y=y1, color=cs, style=ss, width=5))

plot = Plot(title="Setting the base of Stems")
ys = [3, 5, 2, 3, 7]
y2s = [2.5, -1.0, 3.5, 2.0, 3.0]
plot.add(Stems(y=ys, width=2, base=y2s))
plot.add(Points(y=ys))

plot = Plot(title="Bars")
cs = [Color(255, 0, 0, 128)] * 5  # transparent bars
cs[3] = Color.red  # set the color of a single bar; solid-colored bar
plot.add(Bars(x=[1, 2, 3, 4, 5], y=[3, 5, 2, 3, 7], color=cs, outlineColor=Color.black, width=0.3))
doc/python/ChartingAPI.ipynb
twosigma/beaker-notebook
apache-2.0
Areas, Stems and Crosshair
ch = Crosshair(color=Color.black, width=2, style=StrokeType.DOT)
plot = Plot(crosshair=ch)
y1 = [4, 8, 16, 20, 32]
base = [2, 4, 8, 10, 16]
cs = [Color.black, Color.orange, Color.gray, Color.yellow, Color.pink]
ss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]
plot.add(Area(y=y1, base=base, color=Color(255, 0, 0, 50)))
plot.add(Stems(y=y1, base=base, color=cs, style=ss, width=5))

plot = Plot()
y = [3, 5, 2, 3]
x0 = [0, 1, 2, 3]
x1 = [3, 4, 5, 8]
plot.add(Area(x=x0, y=y))
plot.add(Area(x=x1, y=y, color=Color(128, 128, 128, 50), interpolation=0))

p = Plot()
p.add(Line(y=[3, 6, 12, 24], displayName="Median"))
p.add(Area(y=[4, 8, 16, 32], base=[2, 4, 8, 16], color=Color(255, 0, 0, 50), displayName="Q1 to Q3"))

ch = Crosshair(color=Color(255, 128, 5), width=2, style=StrokeType.DOT)
pp = Plot(crosshair=ch, omitCheckboxes=True, legendLayout=LegendLayout.HORIZONTAL, legendPosition=LegendPosition.TOP)
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]
pp.add(Line(displayName="Line", x=x, y=y, width=3))
pp.add(Bars(displayName="Bar", x=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], y=[2, 2, 4, 4, 2, 2, 0, 2, 2, 4], width=0.5))
pp.add(Points(x=x, y=y, size=10))
doc/python/ChartingAPI.ipynb
twosigma/beaker-notebook
apache-2.0
An example Python program
# Task: find the 10 most popular Python repositories on GitHub
# You can also look at the standard urllib module — https://docs.python.org/3/library/urllib.html
import requests

API_URL = 'https://api.github.com/search/repositories?q=language:python&sort=stars&order=desc'

def get_most_starred_github_repositories():
    response = requests.get(API_URL)
    if response.status_code == 200:
        return response.json()['items'][:10]
    return []  # return an empty list on failure so the loop below doesn't iterate over None

for repo in get_most_starred_github_repositories():
    print(repo['name'])
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Variables and basic data types. What is a variable? Garbage collector. variable = 1; variable = '2'
%%html
<style> table {float:left} </style>

variable = 1
variable = '1'
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Naming variables: my_variable = 'value'
a, b = 0, 1
print(a, b)
a, b = b, a
print(a, b)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Numbers

| Type             | Example  |
| ---------------- | -------- |
| Integer          | 42       |
| Integer (Hex)    | 0xA      |
| Integer (Binary) | 0b110101 |
| Float            | 2.7182   |
| Float            | 1.4e3    |
| Complex          | 14+0j    |
| Underscore       | 100_000  |
year = 2017
pi = 3.1415
print(year)
print(year + 1)
amount = 100_000_000
type(pi)
int('wat?')  # raises ValueError
round(10.2), round(10.6)
type(10.)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
| Operation   | Result           |
| ----------- | ---------------- |
| num + num2  | Addition         |
| num - num2  | Subtraction      |
| num == num2 | Equal            |
| num != num2 | Not equal        |
| num >= num2 | Greater or equal |
| num > num2  | Greater          |
| num * num2  | Multiplication   |
| num / num2  | Division         |
| num // num2 | Integer division |
| num % num2  | Modulo           |
| num ** num2 | Power            |
6 / 3
6 // 4
6 / 0  # raises ZeroDivisionError
(2 + 2) * 2
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Strings

| Type        | Example     |
| ----------- | ----------- |
| String      | 'hello'     |
| String      | "hello"     |
| String      | '''hello''' |
| Raw string  | r'hello'    |
| Byte string | b'hello'    |
s = '"Python" is the capital of Great Britain'
print(s)
type(s)
print("\"Python\" is the capital of Great Britain")
print('hi \n there')
print(r'hi \n there')
course_name = 'Курс Python Programming'  # strings in Python 3 are Unicode
course_name = "Курс Python Programming"
print(course_name)
long_string = "Perl — это тот язык, который одинаково " \
              "выглядит как до, так и после RSA шифрования. " \
              "(Keith Bostic)"
long_string = """
Docstrings for functions are usually written in these triple quotes
"""
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
| Operation   | Result           |
| ----------- | ---------------- |
| s + s2      | Concatenation    |
| 'foo' in s2 | Membership       |
| s == s2     | Equal            |
| s != s2     | Not equal        |
| s >= s2     | Greater or equal |
| s > s2      | Greater          |
| s * num     | Repetition       |
| s[0]        | Indexing         |
| len(s)      | Length           |
'one' + 'two'
'one' * 10
s1 = 'first'
print(id(s1))
s1 += '\n'
print(id(s1))
print('python'[10])  # raises IndexError
len('python')  # O(1)
'python'[:3]
'python'[::-1]
'p' in 'python'
'python'.capitalize()
byte_string = b'python'
print(byte_string[0])
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
String formatting
name = 'World'
print('Hello, {}{}'.format(name, '!'))
print('Hello, %s' % (name,))
print(f'Hello, {name}!')
tag_list = 'park, mstu, 21.09'
splitted = tag_list.split(', ')
print(splitted)
':'.join(splitted)
input_string = ' 79261234567 '
input_string.strip(' 7')
dir(str)
help(int)
import this  # know at least the first 3 points
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Basic constructs. The conditional statement
type(10 > 9)
10 < 9
type(False)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Boolean True / False (`__bool__`)

| True         | False |
| ------------ | ----- |
| True         | False |
| Most objects | None  |
| 1            | 0     |
| 3.2          | 0.0   |
| 'string'     | ""    |
a = 10
b = 10
print(a == b)
print(a is b)  # magic (small-integer caching)
bool(0.0)
bool('')
13 < 12 < foo_call()  # foo_call is never evaluated: chained comparison short-circuits

import random

temperature_tomorrow = random.randint(18, 27)
if temperature_tomorrow >= 23:
    print('To the beach, now!')
else:
    print(':(')

temperature_tomorrow = random.randint(18, 27)
decision = 'beach' if temperature_tomorrow >= 23 else 'staying home'
print(decision)

answer = input('The answer to life the universe and everything is: ')
answer = answer.strip().lower()
if answer == '42':
    print('Exactly!')
elif (answer == 'сорок два') or (answer == 'forty two'):
    print('Also works!')
else:
    print('No')

bool(None)
type(None)
a = None
print(a is None)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Task: determine whether the entered year is a leap year. A year is a leap year if its number is divisible by 4 but not by 100, or if it is divisible by 400.
import calendar
calendar.isleap(1980)

raw_year = input('Year: ')
year = int(raw_year)
if year % 400 == 0:
    print('Leap year')
elif year % 4 == 0 and not year % 100 == 0:
    print('Leap year')
else:
    print('Not a leap year :(')
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Loops
for letter in 'python':
    print(letter)

s = 'python'
for idx in range(10):
    print(idx)

for idx, letter in enumerate('python'):
    print(idx, letter)

for letter in 'Python, Ruby. Perl, PHP.':
    if letter == ',':
        continue
    elif letter == '.':
        break
    print(letter)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
for/while-else — good to know about, but better not to use
patience = 5
while patience != 0:
    patience -= 1
    print(patience)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
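The cell above shows a plain while loop; the `else` clause mentioned in the heading is not demonstrated. A minimal sketch (the function name and data are illustrative, not from the lecture): `else` on a loop runs only if the loop finished without hitting `break`.

```python
def find_first_even(numbers):
    # The loop's `else` runs only when no `break` fired.
    for n in numbers:
        if n % 2 == 0:
            break
    else:
        return None  # exhausted the loop without finding anything
    return n

print(find_first_even([1, 3, 4]))  # 4
print(find_first_even([1, 3, 5]))  # None
```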
Errors
user_range = int(input('Enter the maximum number of the range: '))  # non-numeric input raises ValueError
for num in range(user_range):
    print(num)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
FizzBuzz. Write a program that prints the numbers from 1 to 100. For multiples of three, print the word Fizz instead of the number; for multiples of five, print Buzz. For numbers that are multiples of fifteen, print FizzBuzz.
for number in range(1, 101):
    result = ''
    if number % 3 == 0:
        result += 'Fizz'
    if number % 5 == 0:
        result += 'Buzz'
    print(result or number)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
ProjectEuler 1 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.
# multiples_sum = 0
# for number in range(1000):
#     if number % 3 == 0 or number % 5 == 0:
#         multiples_sum += number
# print(multiples_sum)

sum(
    num for num in range(1000)
    if num % 3 == 0 or num % 5 == 0
)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
2 Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
a, b = 1, 1
fib_sum = 0
while b < 4_000_000:
    if b % 2 == 0:
        fib_sum += b
    a, b = b, a + b
print(fib_sum)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
4 A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two distinct 3-digit numbers.
def is_palindrome(number):
    str_number = str(number)
    return str_number == str_number[::-1]

is_palindrome(9009)

max_palindrome = 0
for a in range(999, 100, -1):
    for b in range(999, 100, -1):
        multiple = a * b
        if multiple > max_palindrome and is_palindrome(multiple):
            max_palindrome = multiple
            break
print(max_palindrome)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Functions
def add_numbers(x, y):
    return x + y

add_numbers(10, 5)
lectures/01_Intro/notebook.ipynb
park-python/course
bsd-3-clause
Unsupervised learning: hierarchical and density-based clustering algorithms. In notebook 8 we introduced one of the most basic and widely used clustering algorithms, K-means. One of its advantages is that it is extremely easy to implement and computationally very efficient compared with other clustering algorithms. However, we also saw that one of its weaknesses is that it only works well when the data form roughly spherical clusters. Moreover, we must choose the number of clusters, k, a priori, which can be a problem if we have no prior knowledge of how many groups to expect. In this notebook we will look at two alternative approaches: hierarchical clustering and density-based clustering. Hierarchical clustering: An important feature of hierarchical clustering is that we can visualize the results as a dendrogram, a tree diagram. Using this visualization, we can then decide the depth threshold at which to cut the tree to obtain a clustering. In other words, we do not have to decide the number of clusters without any information. Agglomerative and divisive clustering: We can distinguish two main forms of hierarchical clustering, divisive and agglomerative. In agglomerative clustering, we start with a single example per cluster and iteratively merge the closest clusters, following a bottom-up strategy to build the dendrogram. In divisive clustering, we instead start with all the points in a single cluster and then split it into smaller and smaller subgroups, following a top-down strategy. We will focus on agglomerative clustering. Single and complete linkage: Now, the question is how to measure the distance between examples. 
A common choice is the Euclidean distance, which is what K-means uses. However, the hierarchical algorithm needs to measure the distance between groups of points, i.e. the distance between one cluster and another. Two ways of doing this are single linkage and complete linkage. In single linkage, we take the most similar pair of points (based on Euclidean distance, for example) among all points belonging to the two clusters. In complete linkage, we take the most distant pair of points. To see how agglomerative clustering works, let's load the Iris dataset (pretending we do not know the real labels and want to discover the species):
from sklearn import datasets

iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
n_samples, n_features = X.shape

plt.scatter(X[:, 0], X[:, 1], c=y);
notebooks-spanish/20-clustering_jerarquico_y_basado_densidades.ipynb
pagutierrez/tutorial-sklearn
cc0-1.0
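Before handing the data to SciPy, the two linkage criteria can be sketched by hand. The following is a minimal illustration using NumPy only; the cluster coordinates are made up for the example:

```python
import numpy as np

def pairwise_distances(A, B):
    # Euclidean distance between every point in A and every point in B
    return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2))

def single_linkage(A, B):
    # distance between the closest pair of points across the two clusters
    return pairwise_distances(A, B).min()

def complete_linkage(A, B):
    # distance between the farthest pair of points across the two clusters
    return pairwise_distances(A, B).max()

cluster_a = np.array([[0.0, 0.0], [1.0, 0.0]])
cluster_b = np.array([[3.0, 0.0], [5.0, 0.0]])
print(single_linkage(cluster_a, cluster_b))    # 2.0 (from (1,0) to (3,0))
print(complete_linkage(cluster_a, cluster_b))  # 5.0 (from (0,0) to (5,0))
```

The `method='complete'` argument in the notebook's `linkage` call selects the second of these criteria.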
Now let's do a clustering-based exploration, visualizing the dendrogram with SciPy's linkage (which performs the hierarchical clustering) and dendrogram (which draws the dendrogram) functions:
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram

clusters = linkage(X, metric='euclidean', method='complete')

dendr = dendrogram(clusters)

plt.ylabel('Euclidean distance');
notebooks-spanish/20-clustering_jerarquico_y_basado_densidades.ipynb
pagutierrez/tutorial-sklearn
cc0-1.0
Alternatively, we can use scikit-learn's AgglomerativeClustering and split the dataset into 3 clusters. Can you guess which three clusters we will find?
from sklearn.cluster import AgglomerativeClustering

ac = AgglomerativeClustering(n_clusters=3,
                             affinity='euclidean',
                             linkage='complete')

prediction = ac.fit_predict(X)
print('Class labels: %s\n' % prediction)

plt.scatter(X[:, 0], X[:, 1], c=prediction);
notebooks-spanish/20-clustering_jerarquico_y_basado_densidades.ipynb
pagutierrez/tutorial-sklearn
cc0-1.0
Density-based clustering - DBSCAN Another useful approach to clustering is known as Density-Based Spatial Clustering of Applications with Noise (DBSCAN). In essence, we can think of DBSCAN as an algorithm that divides the dataset into subgroups by looking for dense regions of points. In DBSCAN there are three types of points: Core points: points that have a minimum number of neighbouring points (MinPts) within a hypersphere of radius epsilon. Border points: points that are not core points, because they do not have enough points in their own neighbourhood, but that do belong to the epsilon-radius neighbourhood of some core point. Noise points: all points that belong to neither of the previous categories. One advantage of DBSCAN is that we do not have to specify the number of clusters a priori. However, it requires us to set two additional hyperparameters, MinPts and the radius epsilon.
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=400, noise=0.1, random_state=1)
plt.scatter(X[:, 0], X[:, 1])
plt.show()

from sklearn.cluster import DBSCAN

db = DBSCAN(eps=0.2, min_samples=10, metric='euclidean')
prediction = db.fit_predict(X)

print("Predicted labels:\n", prediction)

plt.scatter(X[:, 0], X[:, 1], c=prediction);
notebooks-spanish/20-clustering_jerarquico_y_basado_densidades.ipynb
pagutierrez/tutorial-sklearn
cc0-1.0
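The three DBSCAN point types can be illustrated with a small hand-rolled neighbour count. This is a sketch of the definitions only, not of scikit-learn's actual DBSCAN implementation; the point coordinates and parameter values are made up:

```python
import numpy as np

def classify_points(X, eps, min_pts):
    """Label each point as 'core', 'border', or 'noise' per the DBSCAN definitions."""
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    # a point counts itself as a neighbour here, matching scikit-learn's convention
    neighbour_counts = (dists <= eps).sum(axis=1)
    core = neighbour_counts >= min_pts
    labels = []
    for i in range(len(X)):
        if core[i]:
            labels.append("core")
        elif core[dists[i] <= eps].any():
            labels.append("border")  # not core, but within eps of a core point
        else:
            labels.append("noise")
    return labels

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.25, 0.0], [5.0, 5.0]])
print(classify_points(X, eps=0.2, min_pts=3))
# ['core', 'core', 'core', 'border', 'noise']
```

The far-away point gets no cluster at all, which is exactly what the label -1 means in `DBSCAN.fit_predict` output.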
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> Using the following synthetic dataset, two concentric circles, experiment with the results obtained by the clustering algorithms we have considered so far: `KMeans`, `AgglomerativeClustering` and `DBSCAN`. Which algorithm best recovers or discovers the hidden structure (assuming we do not know `y`)? Can you reason about why that algorithm works while the other two fail? </li> </ul> </div>
from sklearn.datasets import make_circles

X, y = make_circles(n_samples=1500, factor=.4, noise=.05)
plt.scatter(X[:, 0], X[:, 1], c=y);
notebooks-spanish/20-clustering_jerarquico_y_basado_densidades.ipynb
pagutierrez/tutorial-sklearn
cc0-1.0
Object Detection <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/object_detection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/object_detection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/s?q=google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1%20OR%20google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a> </td> </table> This Colab demonstrates use of a TF-Hub module trained to perform object detection. Setup
#@title Imports and function definitions

# For running inference on the TF-Hub module.
import tensorflow as tf
import tensorflow_hub as hub

# For downloading the image.
import matplotlib.pyplot as plt
import tempfile
from six.moves.urllib.request import urlopen
from six import BytesIO

# For drawing onto the image.
import numpy as np
from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont
from PIL import ImageOps

# For measuring the inference time.
import time

# Print Tensorflow version
print(tf.__version__)

# Check available GPU devices.
print("The following GPU devices are available: %s" % tf.test.gpu_device_name())
site/en-snapshot/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Example use Helper functions for downloading images and for visualization. Visualization code adapted from TF object detection API for the simplest required functionality.
def display_image(image):
  fig = plt.figure(figsize=(20, 15))
  plt.grid(False)
  plt.imshow(image)


def download_and_resize_image(url, new_width=256, new_height=256,
                              display=False):
  _, filename = tempfile.mkstemp(suffix=".jpg")
  response = urlopen(url)
  image_data = response.read()
  image_data = BytesIO(image_data)
  pil_image = Image.open(image_data)
  pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)
  pil_image_rgb = pil_image.convert("RGB")
  pil_image_rgb.save(filename, format="JPEG", quality=90)
  print("Image downloaded to %s." % filename)
  if display:
    display_image(pil_image)
  return filename


def draw_bounding_box_on_image(image,
                               ymin,
                               xmin,
                               ymax,
                               xmax,
                               color,
                               font,
                               thickness=4,
                               display_str_list=()):
  """Adds a bounding box to an image."""
  draw = ImageDraw.Draw(image)
  im_width, im_height = image.size
  (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
                                ymin * im_height, ymax * im_height)
  draw.line([(left, top), (left, bottom), (right, bottom), (right, top),
             (left, top)],
            width=thickness,
            fill=color)

  # If the total height of the display strings added to the top of the bounding
  # box exceeds the top of the image, stack the strings below the bounding box
  # instead of above.
  display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]
  # Each display_str has a top and bottom margin of 0.05x.
  total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)

  if top > total_display_str_height:
    text_bottom = top
  else:
    text_bottom = top + total_display_str_height
  # Reverse list and print from bottom to top.
  for display_str in display_str_list[::-1]:
    text_width, text_height = font.getsize(display_str)
    margin = np.ceil(0.05 * text_height)
    draw.rectangle([(left, text_bottom - text_height - 2 * margin),
                    (left + text_width, text_bottom)],
                   fill=color)
    draw.text((left + margin, text_bottom - text_height - margin),
              display_str,
              fill="black",
              font=font)
    text_bottom -= text_height - 2 * margin


def draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1):
  """Overlay labeled boxes on an image with formatted scores and label names."""
  colors = list(ImageColor.colormap.values())

  try:
    font = ImageFont.truetype(
        "/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf",
        25)
  except IOError:
    print("Font not found, using default font.")
    font = ImageFont.load_default()

  for i in range(min(boxes.shape[0], max_boxes)):
    if scores[i] >= min_score:
      ymin, xmin, ymax, xmax = tuple(boxes[i])
      display_str = "{}: {}%".format(class_names[i].decode("ascii"),
                                     int(100 * scores[i]))
      color = colors[hash(class_names[i]) % len(colors)]
      image_pil = Image.fromarray(np.uint8(image)).convert("RGB")
      draw_bounding_box_on_image(
          image_pil, ymin, xmin, ymax, xmax, color, font,
          display_str_list=[display_str])
      np.copyto(image, np.array(image_pil))
  return image
site/en-snapshot/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Apply module Load a public image from Open Images v4, save locally, and display.
# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
image_url = "https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg"  #@param
downloaded_image_path = download_and_resize_image(image_url, 1280, 856, True)
site/en-snapshot/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Pick an object detection module and apply on the downloaded image. Modules: * FasterRCNN+InceptionResNet V2: high accuracy, * ssd+mobilenet V2: small and fast.
module_handle = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1" #@param ["https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1", "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"]

detector = hub.load(module_handle).signatures['default']

def load_img(path):
  img = tf.io.read_file(path)
  img = tf.image.decode_jpeg(img, channels=3)
  return img

def run_detector(detector, path):
  img = load_img(path)

  converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]
  start_time = time.time()
  result = detector(converted_img)
  end_time = time.time()

  result = {key: value.numpy() for key, value in result.items()}

  print("Found %d objects." % len(result["detection_scores"]))
  print("Inference time: ", end_time - start_time)

  image_with_boxes = draw_boxes(
      img.numpy(), result["detection_boxes"],
      result["detection_class_entities"], result["detection_scores"])

  display_image(image_with_boxes)

run_detector(detector, downloaded_image_path)
site/en-snapshot/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
More images Perform inference on some additional images with time tracking.
image_urls = [
  # Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
  "https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg",
  # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
  "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg",
  # Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
  "https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg",
]

def detect_img(image_url):
  start_time = time.time()
  image_path = download_and_resize_image(image_url, 640, 480)
  run_detector(detector, image_path)
  end_time = time.time()
  print("Inference time:", end_time - start_time)

detect_img(image_urls[0])

detect_img(image_urls[1])

detect_img(image_urls[2])
site/en-snapshot/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization. Step 2: Model the data
from docplex.cp.model import *
examples/cp/jupyter/SteelMill.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Set model parameter
from collections import namedtuple

##############################################################################
# Model configuration
##############################################################################

# The number of coils to produce
TUPLE_ORDER = namedtuple("TUPLE_ORDER", ["index", "weight", "color"])
orders = [ TUPLE_ORDER(1, 22, 5),
           TUPLE_ORDER(2,  9, 3),
           TUPLE_ORDER(3,  9, 4),
           TUPLE_ORDER(4,  8, 5),
           TUPLE_ORDER(5,  8, 7),
           TUPLE_ORDER(6,  6, 3),
           TUPLE_ORDER(7,  5, 6),
           TUPLE_ORDER(8,  3, 0),
           TUPLE_ORDER(9,  3, 2),
           TUPLE_ORDER(10, 3, 3),
           TUPLE_ORDER(11, 2, 1),
           TUPLE_ORDER(12, 2, 5)
           ]

NB_SLABS = 12
MAX_COLOR_PER_SLAB = 2

# The total number of slabs available. In theory this can be unlimited,
# but we impose a reasonable upper bound in order to produce a practical
# optimization model.

# The different slab weights available.
slab_weights = [ 0, 11, 13, 16, 17, 19, 20, 23, 24, 25,
                 26, 27, 28, 29, 30, 33, 34, 40, 43, 45 ]

nb_orders = len(orders)
slabs = range(NB_SLABS)
allcolors = set([ o.color for o in orders ])

# CPO needs lists for pack constraint
order_weights = [ o.weight for o in orders ]

# The heaviest slab
max_slab_weight = max(slab_weights)

# The amount of loss incurred for different amounts of slab use
# The loss will depend on how much less steel is used than the slab
# just large enough to produce the coils.
loss = [ min([sw - use for sw in slab_weights if sw >= use])
         for use in range(max_slab_weight + 1) ]
examples/cp/jupyter/SteelMill.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
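The loss list comprehension above can be checked by hand on a tiny example. This standalone sketch uses made-up slab weights: for each possible usage u, the loss is the leftover steel on the smallest slab whose weight is at least u:

```python
# Made-up slab weights, small enough to verify by hand.
slab_weights = [0, 3, 5]
max_slab_weight = max(slab_weights)

# Same comprehension as in the model configuration cell.
loss = [min(sw - use for sw in slab_weights if sw >= use)
        for use in range(max_slab_weight + 1)]

print(loss)  # [0, 2, 1, 0, 1, 0]
```

For instance, using 1 unit of steel forces the 3-unit slab, wasting 2; using exactly 3 or 5 units wastes nothing.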
Step 3: Set up the prescriptive model Create CPO model
mdl = CpoModel(name="trucks")
examples/cp/jupyter/SteelMill.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Define the decision variables
# Which slab is used to produce each coil
production_slab = integer_var_dict(orders, 0, NB_SLABS - 1, "production_slab")

# How much of each slab is used
slab_use = integer_var_list(NB_SLABS, 0, max_slab_weight, "slab_use")
examples/cp/jupyter/SteelMill.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the business constraints
# The total loss is
total_loss = sum([element(slab_use[s], loss) for s in slabs])

# The orders are allocated to the slabs with capacity
mdl.add(pack(slab_use, [production_slab[o] for o in orders], order_weights))

# At most MAX_COLOR_PER_SLAB colors per slab
for s in slabs:
    su = 0
    for c in allcolors:
        lo = False
        for o in orders:
            if o.color == c:
                lo = (production_slab[o] == s) | lo
        su += lo
    mdl.add(su <= MAX_COLOR_PER_SLAB)
examples/cp/jupyter/SteelMill.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the objective
# Add minimization objective
mdl.add(minimize(total_loss))
examples/cp/jupyter/SteelMill.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Solve the model
print("\nSolving model....")

# Search strategy
mdl.set_search_phases([search_phase([production_slab[o] for o in orders])])

msol = mdl.solve(FailLimit=100000, TimeLimit=10)
examples/cp/jupyter/SteelMill.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 4: Investigate the solution and then run an example analysis
# Print solution
if msol:
    print("Solution: ")
    from_slabs = [set([o.index for o in orders if msol[production_slab[o]] == s]) for s in slabs]
    slab_colors = [set([o.color for o in orders if o.index in from_slabs[s]]) for s in slabs]
    for s in slabs:
        if len(from_slabs[s]) > 0:
            print("Slab = " + str(s))
            print("\tLoss = " + str(loss[msol[slab_use[s]]]))
            print("\tcolors = " + str(slab_colors[s]))
            print("\tOrders = " + str(from_slabs[s]) + "\n")
else:
    print("No solution found")
examples/cp/jupyter/SteelMill.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Now we can perform the regression to find $\alpha$ and $\beta$:
# Let's define everything in familiar regression terms
X = r_b.values  # Get just the values, ignore the timestamps
Y = r_a.values

def linreg(x, y):
    # We add a constant so that we can also fit an intercept (alpha) to the model
    # This just adds a column of 1s to our data
    x = sm.add_constant(x)
    model = regression.linear_model.OLS(y, x).fit()
    # Remove the constant now that we're done
    x = x[:, 1]
    return model.params[0], model.params[1]

alpha, beta = linreg(X, Y)
print 'alpha: ' + str(alpha)
print 'beta: ' + str(beta)
Notebooks/quantopian_research_public/notebooks/lectures/Beta_Hedging/notebook.ipynb
d00d/quantNotebooks
unlicense
If we plot the line $\alpha + \beta X$, we can see that it does indeed look like the line of best fit:
X2 = np.linspace(X.min(), X.max(), 100)
Y_hat = X2 * beta + alpha

plt.scatter(X, Y, alpha=0.3)  # Plot the raw data
plt.xlabel("SPY Daily Return")
plt.ylabel("TSLA Daily Return")

# Add the regression line, colored in red
plt.plot(X2, Y_hat, 'r', alpha=0.9);
Notebooks/quantopian_research_public/notebooks/lectures/Beta_Hedging/notebook.ipynb
d00d/quantNotebooks
unlicense
Risk Exposure More generally, this beta gets at the concept of how much risk exposure you take on by holding an asset. If an asset has a high beta exposure to the S&P 500, then while it will do very well while the market is rising, it will do very poorly when the market falls. A high beta corresponds to high speculative risk. You are taking out a more volatile bet. At Quantopian, we value strategies that have negligible beta exposure to as many factors as possible. What this means is that all of the returns in a strategy lie in the $\alpha$ portion of the model, and are independent of other factors. This is highly desirable, as it means that the strategy is agnostic to market conditions. It will make money equally well in a crash as it will during a bull market. These strategies are the most attractive to individuals with huge cash pools such as endowments and sovereign wealth funds. Risk Management The process of reducing exposure to other factors is known as risk management. Hedging is one of the best ways to perform risk management in practice. Hedging If we determine that our portfolio's returns are dependent on the market via this relation $$Y_{portfolio} = \alpha + \beta X_{SPY}$$ then we can take out a short position in SPY to try to cancel out this risk. The amount we take out is $-\beta V$ where $V$ is the total value of our portfolio. This works because if our returns are approximated by $\alpha + \beta X_{SPY}$, then adding a short in SPY will make our new returns be $\alpha + \beta X_{SPY} - \beta X_{SPY} = \alpha$. Our returns are now purely alpha, which is independent of SPY and will suffer no risk exposure to the market. Market Neutral When a strategy exhibits a consistent beta of 0, we say that this strategy is market neutral. Problems with Estimation The problem here is that the beta we estimated is not necessarily going to stay the same as we walk forward in time. As such, the amount of short we took out in the SPY may not perfectly hedge our portfolio, and in practice it is quite difficult to reduce beta by a significant amount. We will talk more about problems with estimating parameters in future lectures. In short, each estimate has a standard error that corresponds with how stable the estimate is within the observed data. Implementing hedging Now that we know how much to hedge, let's see how it affects our returns. We will build our portfolio using the asset and the benchmark, weighing the benchmark by $-\beta$ (negative since we are short in it).
# Construct a portfolio with beta hedging
portfolio = -1 * beta * r_b + r_a
portfolio.name = "TSLA + Hedge"

# Plot the returns of the portfolio as well as the asset by itself
portfolio.plot(alpha=0.9)
r_b.plot(alpha=0.5);
r_a.plot(alpha=0.5);
plt.ylabel("Daily Return")
plt.legend();
Notebooks/quantopian_research_public/notebooks/lectures/Beta_Hedging/notebook.ipynb
d00d/quantNotebooks
unlicense
It looks like the portfolio return follows the asset alone fairly closely. We can quantify the difference in their performances by computing the mean returns and the volatilities (standard deviations of returns) for both:
print "means: ", portfolio.mean(), r_a.mean()
print "volatilities: ", portfolio.std(), r_a.std()
Notebooks/quantopian_research_public/notebooks/lectures/Beta_Hedging/notebook.ipynb
d00d/quantNotebooks
unlicense
We've decreased volatility at the expense of some returns. Let's check that the alpha is the same as before, while the beta has been eliminated:
P = portfolio.values
alpha, beta = linreg(X, P)
print 'alpha: ' + str(alpha)
print 'beta: ' + str(beta)
Notebooks/quantopian_research_public/notebooks/lectures/Beta_Hedging/notebook.ipynb
d00d/quantNotebooks
unlicense
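Why the hedged beta comes out near zero can also be verified on synthetic returns. The sketch below is self-contained; the return-generating numbers and the use of np.polyfit in place of the notebook's statsmodels regression are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 0.01, 1000)  # synthetic benchmark daily returns
true_alpha, true_beta = 0.0005, 1.5
# synthetic asset returns: alpha + beta * benchmark + idiosyncratic noise
y = true_alpha + true_beta * x + rng.normal(0, 0.002, 1000)

beta, alpha = np.polyfit(x, y, 1)   # slope (beta), intercept (alpha)
hedged = y - beta * x               # short the benchmark in proportion beta
hedged_beta, hedged_alpha = np.polyfit(x, hedged, 1)

print(round(beta, 2))         # close to the true beta of 1.5
print(abs(hedged_beta) < 1e-6)  # the hedged portfolio's beta vanishes
```

The residual beta is zero by construction in-sample; the out-of-sample check below shows why it need not stay zero.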
Note that we developed our hedging strategy using historical data. We can check that it is still valid out of sample by checking the alpha and beta values of the asset and the hedged portfolio in a different time frame:
# Get the alpha and beta estimates over the last year
start = '2014-01-01'
end = '2015-01-01'
asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)
r_a = asset.pct_change()[1:]
r_b = benchmark.pct_change()[1:]
X = r_b.values
Y = r_a.values
historical_alpha, historical_beta = linreg(X, Y)
print 'Asset Historical Estimate:'
print 'alpha: ' + str(historical_alpha)
print 'beta: ' + str(historical_beta)

# Get data for a different time frame:
start = '2015-01-01'
end = '2015-06-01'
asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end)
benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end)

# Repeat the process from before to compute alpha and beta for the asset
r_a = asset.pct_change()[1:]
r_b = benchmark.pct_change()[1:]
X = r_b.values
Y = r_a.values
alpha, beta = linreg(X, Y)
print 'Asset Out of Sample Estimate:'
print 'alpha: ' + str(alpha)
print 'beta: ' + str(beta)

# Create hedged portfolio and compute alpha and beta
portfolio = -1 * historical_beta * r_b + r_a
P = portfolio.values
alpha, beta = linreg(X, P)
print 'Portfolio Out of Sample:'
print 'alpha: ' + str(alpha)
print 'beta: ' + str(beta)

# Plot the returns of the portfolio as well as the asset by itself
portfolio.name = "TSLA + Hedge"
portfolio.plot(alpha=0.9)
r_a.plot(alpha=0.5);
r_b.plot(alpha=0.5)
plt.ylabel("Daily Return")
plt.legend();
Notebooks/quantopian_research_public/notebooks/lectures/Beta_Hedging/notebook.ipynb
d00d/quantNotebooks
unlicense
Important Concepts How does this work in the real world? How much training data do you need? How is the tree created? What makes a good feature? 2 Many types of classifiers Artificial neural network Support Vector Machine Lions Tigers Bears Oh my! Goals 1. Import dataset
from sklearn.datasets import load_iris
import numpy as np

iris = load_iris()
print(iris.feature_names)
print(iris.target_names)
print(iris.data[0])
print(iris.target[0])
machine_learning/Machine Learning Notebook.ipynb
KECB/learn
mit
Testing Data Examples used to "test" the classifier's accuracy. Not part of the training data. Just like in programming, testing is a very important part of ML.
test_idx = [0, 50, 100]

# training data
train_target = np.delete(iris.target, test_idx)
train_data = np.delete(iris.data, test_idx, axis=0)
print(train_target.shape)
print(train_data.shape)

# testing data
test_target = iris.target[test_idx]
test_data = iris.data[test_idx]
print(test_target.shape)
print(test_data.shape)
machine_learning/Machine Learning Notebook.ipynb
KECB/learn
mit
2. Train a classifier
clf = tree.DecisionTreeClassifier()
clf.fit(train_data, train_target)
machine_learning/Machine Learning Notebook.ipynb
KECB/learn
mit
3. Predict label for new flower.
print(test_target)
print(clf.predict(test_data))
machine_learning/Machine Learning Notebook.ipynb
KECB/learn
mit
4. Visualize the tree.
# viz code
from sklearn.externals.six import StringIO
import pydotplus

dot_data = StringIO()
tree.export_graphviz(clf,
                     out_file=dot_data,
                     feature_names=iris.feature_names,
                     class_names=iris.target_names,
                     filled=True,
                     rounded=True,
                     impurity=False)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf('iris.pdf')
machine_learning/Machine Learning Notebook.ipynb
KECB/learn
mit
More to learn How are trees built automatically from examples? How well do they work in practice? 3 What Makes a Good Feature?
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

greyhounds = 500
labs = 500

grey_height = 28 + 4 * np.random.randn(greyhounds)
lab_height = 24 + 4 * np.random.randn(labs)

plt.hist([grey_height, lab_height], stacked=True, color=['r', 'b'])
plt.show()
machine_learning/Machine Learning Notebook.ipynb
KECB/learn
mit
Analysis At a height of 35 it is almost certainly a greyhound, and around 20 it is most likely a lab, but around 25 it is hard to tell which breed it is. So this feature is useful, but not sufficient on its own. The question then is: how many features do we need? Things to watch for Avoid redundant features: e.g., height in feet alongside height in centimeters. Features should be easy to understand: e.g., to predict how long mail takes to arrive, use distance and days in transit rather than latitude/longitude coordinates. Simpler relationships are easier to learn Ideal features are Informative Independent Simple 4. Let's Write a Pipeline
from sklearn import datasets
iris = datasets.load_iris()

X = iris.data    # input: features
y = iris.target  # output: label

from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5)

# from sklearn import tree
# my_classifier = tree.DecisionTreeClassifier()

from sklearn.neighbors import KNeighborsClassifier
my_classifier = KNeighborsClassifier()

my_classifier.fit(X_train, y_train)
predictions = my_classifier.predict(X_test)

from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictions))
machine_learning/Machine Learning Notebook.ipynb
KECB/learn
mit
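accuracy_score simply measures the fraction of predictions that match the true labels, which can be computed directly. A minimal sketch (the toy labels are made up):

```python
def accuracy(y_true, y_pred):
    # fraction of predictions that match the true labels
    matches = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return matches / len(y_true)

print(accuracy([0, 1, 2, 1], [0, 1, 1, 1]))  # 0.75
```

Three of four predictions agree with the truth, so the score is 0.75, exactly what sklearn.metrics.accuracy_score would report for the same inputs.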
what is X, y? X: features y: labels
```python
def classify(features):
    # do some logic
    return label
```
5. Write Our First Classifier
from scipy.spatial import distance

def euc(a, b):
    return distance.euclidean(a, b)

class ScrappyKNN():
    def fit(self, X_train, y_train):
        self.X_train = X_train
        self.y_train = y_train

    def predict(self, X_test):
        predictions = []
        for row in X_test:
            label = self.closest(row)
            predictions.append(label)
        return predictions

    def closest(self, row):
        best_dist = euc(row, self.X_train[0])
        best_index = 0
        for i in range(1, len(self.X_train)):
            dist = euc(row, self.X_train[i])
            if dist < best_dist:
                best_dist = dist
                best_index = i
        return self.y_train[best_index]

from sklearn import datasets
iris = datasets.load_iris()

X = iris.data    # input: features
y = iris.target  # output: label

from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5)

my_classifier = ScrappyKNN()
my_classifier.fit(X_train, y_train)
predictions = my_classifier.predict(X_test)

from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictions))
machine_learning/Machine Learning Notebook.ipynb
KECB/learn
mit
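The closest-point rule inside ScrappyKNN can be exercised on a tiny handmade dataset, independent of scikit-learn and SciPy. This sketch re-implements the same 1-nearest-neighbour idea with plain lists; the sample points are made up:

```python
def euc(a, b):
    # plain-Python Euclidean distance
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_one(row, X_train, y_train):
    # label of the single closest training point (1-nearest neighbour)
    dists = [euc(row, x) for x in X_train]
    return y_train[dists.index(min(dists))]

X_train = [[0.0, 0.0], [10.0, 10.0]]
y_train = ["a", "b"]
print(predict_one([1.0, 1.0], X_train, y_train))  # a
print(predict_one([9.0, 9.5], X_train, y_train))  # b
```

Each query point simply takes the label of whichever training point it is nearest to, which is all ScrappyKNN's closest method does.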
This section provides an introduction to the Symbulate commands for simulating and summarizing values of a random variable. <a id='counting_numb_heads'></a> Example 2.1: Counting the number of Heads in a sequence of coin flips In Example 1.7 we simulated the value of the number of Heads in a sequence of five coin flips. In that example, we simulated the individual coin flips (with 1 representing Heads and 0 Tails) and then used .apply() with the sum function to count the number of Heads. The following Symbulate commands achieve the same goal by defining an RV, X, which measures the number of Heads for each outcome.
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
X.sim(10000)
tutorial/gs_rv.ipynb
dlsun/symbulate
mit
The number of Heads in five coin flips is a random variable: a function that takes as an input an outcome of a probability space and returns a real number. The first argument of RV is the probability space on which the RV is defined, e.g., sequences of five 1/0s. The second argument is the function which maps outcomes in the probability space to real numbers, e.g., the sum of the 1/0 values. Values of an RV can be simulated with .sim(). <a id='sum_of_two_dice'></a> Exercise 2.2: Sum of two dice After defining an appropriate BoxModel probability space, define an RV X representing the sum of two six-sided fair dice, and simulate 10000 values of X.
### Type your commands in this cell and then run using SHIFT-ENTER.
tutorial/gs_rv.ipynb
dlsun/symbulate
mit
Solution <a id='dist_of_five_flips'></a> Example 2.3: Summarizing simulation results with tables and plots In Example 2.1 we defined a RV, X, the number of Heads in a sequence of five coin flips. Simulated values of a random variable can be summarized using .tabulate() (with normalize=False (default) for frequencies (counts) or True for relative frequencies (proportions)).
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
sims = X.sim(10000)
sims.tabulate()
tutorial/gs_rv.ipynb
dlsun/symbulate
mit
The table above can be used to approximate the distribution of the number of Heads in five coin flips. The distribution of a random variable specifies the possible values that the random variable can take and their relative likelihoods. The distribution of a random variable can be visualized using .plot().
sims.plot()
tutorial/gs_rv.ipynb
dlsun/symbulate
mit
By default, .plot() displays relative frequencies (proportions). Use .plot(normalize=False) to display frequencies (counts). <a id='dist_of_sum_of_two_dice'></a> Exercise 2.4: The distribution of the sum of two dice rolls Continuing Exercise 2.2 summarize with a table and a plot the distribution of the sum of two rolls of a fair six-sided die.
### Type your commands in this cell and then run using SHIFT-ENTER.
tutorial/gs_rv.ipynb
dlsun/symbulate
mit
Solution <a id='prob_of_three_heads'></a> Example 2.5: Estimating probabilities from simulations There are several other tools for summarizing simulations, like the count functions. For example, the following commands approximate P(X &lt;= 3) for Example 2.1, the probability that in five coin flips at most three of the flips land on Heads.
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
sims = X.sim(10000)
sims.count_leq(3) / 10000
tutorial/gs_rv.ipynb
dlsun/symbulate
mit
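The same estimate can be reproduced with the Python standard library alone, a useful sanity check if Symbulate is not installed. The exact value P(X ≤ 3) = 26/32 = 0.8125 follows from the Binomial(5, 0.5) pmf; the seed and trial count below are arbitrary choices:

```python
import random

random.seed(1)
trials = 100000
# flip five fair coins per trial and count trials with at most three Heads
hits = sum(1 for _ in range(trials)
           if sum(random.randint(0, 1) for _ in range(5)) <= 3)
print(hits / trials)  # close to the exact value 26/32 = 0.8125
```

The Symbulate call sims.count_leq(3)/10000 is performing exactly this ratio on its own simulated values.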
<a id='prob_of_10_two_dice'></a> Exercise 2.6: Estimating probabilities for the sum of two dice rolls Continuing Exercise 2.2, estimate P(X &gt;= 10), the probability that the sum of two fair six-sided dice is at least 10.
### Type your commands in this cell and then run using SHIFT-ENTER.
tutorial/gs_rv.ipynb
dlsun/symbulate
mit
Solution <a id='sim_from_binom'></a> Example 2.7: Specifying a RV by its distribution The plot in Example 2.3 displays the approximate distribution of the random variable X, the number of Heads in five flips of a fair coin. This distribution is called the Binomial distribution with n=5 trials (flips) and a probability that each trial (flip) results in success (1 i.e. Heads) equal to p=0.5. In the above examples the RV X was explicitly defined on the probability space P - i.e. the BoxModel for the outcomes (1 or 0) of the five individual flips - via the sum function. This setup implied a Binomial(5, 0.5) distribution for X. In many situations the distribution of an RV is assumed or specified directly, without mention of the underlying probabilty space or the function defining the random variable. For example, a problem might state "let Y have a Binomial distribution with n=5 and p=0.5". The RV command can also be used to define a random variable by specifying its distribution, as in the following.
Y = RV(Binomial(5, 0.5))
Y.sim(10000).plot()
tutorial/gs_rv.ipynb
dlsun/symbulate
mit
By definition, a random variable must always be a function defined on a probability space. Specifying a random variable by specifying its distribution, as in Y = RV(Binomial(5, 0.5)), has the effect of defining the probability space to be the distribution of the random variable and the function defined on this space to be the identity (f(x) = x). However, it is more appropriate to think of such a specification as defining a random variable with the given distribution on an unspecified probability space through an unspecified function. For example, the random variable $X$ in each of the following situations has a Binomial(5, 0.5) distribution. - $X$ is the number of Heads in five flips of a fair coin - $X$ is the number of Tails in five flips of a fair coin - $X$ is the number of even numbers rolled in five rolls of a fair six-sided die - $X$ is the number of boys in a random sample of five births Each of these situations involves a different probability space (coins, dice, births) with a random variable which counts according to different criteria (Heads, Tails, evens, boys). These examples illustrate that knowledge that a random variable has a specific distribution (e.g. Binomial(5, 0.5)) does not necessarily convey any information about the underlying observational units or variable being measured. This is why we say a specification like X = RV(Binomial(5, 0.5)) defines a random variable X on an unspecified probability space via an unspecified function. The following code compares the two methods for defining a random variable with a Binomial(5, 0.5) distribution. (The jitter=True option offsets the vertical lines so they do not coincide.)
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
X.sim(10000).plot(jitter=True)
Y = RV(Binomial(5, 0.5))
Y.sim(10000).plot(jitter=True)
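The point that very different probability spaces can carry the same distribution can also be checked empirically in plain Python. This sketch (assuming only the standard random module; the helper names are ours) simulates two of the situations listed above — even die rolls and coin Heads — and shows they behave the same.

```python
import random

random.seed(1)

def count_evens():
    """Number of even numbers in five rolls of a fair six-sided die."""
    return sum(1 for _ in range(5) if random.randint(1, 6) % 2 == 0)

def count_heads():
    """Number of Heads in five flips of a fair coin."""
    return sum(random.choice([0, 1]) for _ in range(5))

evens = [count_evens() for _ in range(10000)]
heads = [count_heads() for _ in range(10000)]

# Different probability spaces (dice vs. coins), same Binomial(5, 0.5) behavior
mean_evens = sum(evens) / len(evens)
mean_heads = sum(heads) / len(heads)
```

Both sample means land near 2.5, the mean of a Binomial(5, 0.5) distribution, even though the underlying experiments are different.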
In addition to Binomial, many other commonly used distributions are built in to Symbulate. <a id='discrete_unif_dice'></a> Exercise 2.8: Simulating from a discrete Uniform model A random variable has a DiscreteUniform distribution with parameters a and b if it is equally likely to be any of the integers between a and b (inclusive). Let X be the roll of a fair six-sided die. Define an RV X by specifying an appropriate DiscreteUniform distribution, then simulate 10000 values of X and summarize its approximate distribution in a plot.
### Type your commands in this cell and then run using SHIFT-ENTER.
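For intuition about what a DiscreteUniform(a, b) draw is, here is a plain standard-library sketch (not the Symbulate solution to the exercise; the helper name `discrete_uniform_draw` is ours) — each integer from a to b is equally likely.

```python
import random
from collections import Counter

random.seed(2)

def discrete_uniform_draw(a, b):
    """One draw equally likely to be any integer from a to b inclusive."""
    return random.randint(a, b)

rolls = [discrete_uniform_draw(1, 6) for _ in range(10000)]
freq = Counter(rolls)  # each face should appear roughly 10000/6 times
```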
Solution <a id='numb_tails'></a> Example 2.9: Random variables versus distributions Continuing Example 2.1, if X is the random variable representing the number of Heads in five coin flips then Y = 5 - X is the random variable representing the number of Tails.
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Y = 5 - X
Y.sim(10000).tabulate()
It is important not to confuse a random variable with its distribution. Note that X and Y are two different random variables; they measure different things. For example, if the outcome of the flips is (1, 0, 0, 1, 0) then X = 2 but Y = 3. The following code illustrates how an RV can be called as a function to return its value for a particular outcome in the probability space.
outcome = (1, 0, 0, 1, 0)
X(outcome)
Y(outcome)
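The idea of calling a random variable as a function on an outcome can be mimicked with ordinary Python functions (a sketch, not Symbulate RV objects). Note X + Y = 5 for every outcome, and since 5 is odd, X and Y can never be equal.

```python
from itertools import product

def X(outcome):
    """Number of Heads (1s) in the outcome of five flips."""
    return sum(outcome)

def Y(outcome):
    """Number of Tails in the same outcome."""
    return 5 - X(outcome)

outcome = (1, 0, 0, 1, 0)
x_val = X(outcome)  # 2 Heads
y_val = Y(outcome)  # 3 Tails

# X + Y = 5 on every outcome, so X and Y are never equal
never_equal = all(X(o) != Y(o) for o in product([0, 1], repeat=5))
```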
In fact, in this example the values of X and Y are unequal for every outcome in the probability space. However, while X and Y are two different random variables, they have the same distribution, as the following simulation illustrates.
X.sim(10000).plot(jitter=True)
Y.sim(10000).plot(jitter=True)
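The same "different random variables, same distribution" phenomenon can be checked directly in plain Python (a sketch assuming only the standard library): on every simulated outcome X and Y disagree, yet their empirical distributions are mirror images of each other around 2.5, and by the symmetry of the fair coin both approximate Binomial(5, 0.5).

```python
import random
from collections import Counter

random.seed(3)

flips = [tuple(random.choice([0, 1]) for _ in range(5)) for _ in range(10000)]
x_vals = [sum(f) for f in flips]    # number of Heads in each outcome
y_vals = [5 - x for x in x_vals]    # number of Tails in the same outcome

x_dist = Counter(x_vals)
y_dist = Counter(y_vals)
```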
See Example 2.7 for further comments about the difference between random variables and distributions. <a id='expected_value_numb_of_heads'></a> Example 2.10: Expected value of the number of heads in five coin flips The expected value, or probability-weighted average value, of an RV can be approximated by simulating many values of the random variable and finding the sample mean (i.e. average) using .mean(). Continuing Example 2.1, the following code estimates the expected value of the number of Heads in five coin flips.
P = BoxModel([1, 0], size=5)
X = RV(P, sum)
X.sim(10000).mean()
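The sample mean computed by .mean() is a Monte Carlo estimate of the exact expected value, which for a Binomial(5, 0.5) random variable is the probability-weighted average over the pmf. A plain standard-library sketch of both calculations (assuming only math and random):

```python
import random
from math import comb

random.seed(4)

# Exact expected value from the Binomial(5, 0.5) pmf: sum of k * P(X = k)
exact_ev = sum(k * comb(5, k) * 0.5**5 for k in range(6))  # = 2.5

# Monte Carlo approximation: average of many simulated values
samples = [sum(random.choice([0, 1]) for _ in range(5)) for _ in range(10000)]
approx_ev = sum(samples) / len(samples)
```

The simulated average lands close to the exact value 2.5, illustrating why simulation gives a good approximation of the expected value.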
Over many sets of five coin flips, we expect that there will be on average about 2.5 Heads per set. Note that 2.5 is not the number of Heads we would expect in a single set of five coin flips. <a id='expected_value_sum_of_dice'></a> Exercise 2.11: Expected value of the sum of two dice rolls Continuing Exercise 2.2, approximate the expected value of the sum of two six-sided dice rolls. (Bonus: interpret the value as an appropriate long-run average.)
### Type your commands in this cell and then run using SHIFT-ENTER.