TRY IT Don't actually try it — infinite loops aren't good code.

Lists

A list is a sequence of values. These values can be anything: strings, numbers, booleans, even other lists. To make a list, you put the items, separated by commas, between brackets [ ]:
weathers = ['rain', 'sun', 'snow']
tens = [10, 10, 10, 10]
print(weathers)
print(tens)
Lesson04_Iteration/Iterations.ipynb
WomensCodingCircle/CodingCirclePython
mit
You can access a single element in a list by indexing into it with brackets. List indexing starts at 0, so the first element is at index 0, the second at index 1, and so on. list[index]
print(weathers[0])
print(weathers[2])
You can find the length of a list using the built-in function 'len'. len(list)
print(len(tens))
TRY IT Create a list of famous storms and store it in a variable called storms.

Break and Continue

There are other ways to control a loop besides its condition. The keyword break immediately exits a loop; with break you can write a loop whose condition never becomes false without it being a true infinite loop. continue stops the current iteration of the loop and goes back to the condition check.
while True:
    user_message = input('> ')
    # Leave when the user talks about snow.
    if user_message == 'snow':
        print("I don't want to hear about that.")
        break
    print("and then what happened?")

stock_up = ['eggs', 'milk', 'bread', 'puppies', 'cereal', 'toilet paper']
index = 0
while index < len(stock_up):
    item = stock_up[index]
    index += 1
    # Make sure we don't stock up on puppies
    if item == 'puppies':
        print("No! We are not getting another puppy.")
        continue
    print("Storm is coming, better stock up on " + item)
For loops

For loops are a more concise way of writing a common loop pattern. Many loops go through each element in a list, and Python makes this easy:

for item in list:
    run code for item
# Let's rewrite the stock up loop as a for loop (we'll add the puppies in later)
stock_up = ['eggs', 'milk', 'bread', 'puppies', 'cereal', 'toilet paper']
for item in stock_up:
    print("Storm is coming, better stock up on " + item)
See how much shorter it is? If you can use a for loop, do use a for loop. Also, since we don't have to increment a counter, there is less chance of an infinite loop.
# Let's add the puppies condition just to show that break and continue work with for loops too
stock_up = ['eggs', 'milk', 'bread', 'puppies', 'cereal', 'toilet paper']
for item in stock_up:
    if item == 'puppies':
        print("No! We are not getting another puppy")
        continue
    print("Storm is coming, better stock up on " + item)
We can use loops to sum, average, or find the max or min. I'll give you some examples.
hourly_snow_fall = [4, 3, 2, 4, 1, 4, 3, 2, 1, 1, 1, 0]

# Finding the sum
total_snow = 0
for hour in hourly_snow_fall:
    total_snow += hour
print("Total snow " + str(total_snow))

# Finding the average
total_snow = 0
number_snowy_hours = 0
for hour in hourly_snow_fall:
    if hour == 0:
        # Ignore hours that it didn't snow
        continue
    total_snow += hour
    number_snowy_hours += 1
ave_snow = total_snow / number_snowy_hours
print("Average snow " + str(ave_snow))

# Finding the max
max_snow = 0
for hour in hourly_snow_fall:
    if max_snow < hour:
        max_snow = hour
print("Max snow was " + str(max_snow))
TRY IT Write a loop that finds the maximum word length in a list of words (provided).
words = ['Oh', 'the', 'weather', 'outside', 'is', 'frightful', 'But', 'the', 'fire', 'is', 'so', 'delightful']
By the way, at this point you could go back to the previous lesson's Yahtzee project and use a loop to play until you get a Yahtzee.

Range

Python has a built-in function called range() that produces integers in order:

# If you give it 1 parameter, it starts at zero and stops at that number - 1
range(5)         # -> 0, 1, 2, 3, 4
# With 2 parameters, it goes from [start, stop)
range(2, 5)      # -> 2, 3, 4
# With 3 parameters, it goes from [start, stop), stepping by the third number
range(1, 10, 2)  # -> 1, 3, 5, 7, 9

What does range() return? A lazy sequence that produces one item at a time instead of storing them all in memory. You can turn it into a real list with list(range(...)), and inside a loop you can use it directly without converting.
cumulative_snowfall = list(range(1, 15))
print(cumulative_snowfall)
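The three parameter forms described above can be checked directly by converting each range to a list:

```python
# One parameter: start at 0, stop before 5
print(list(range(5)))         # [0, 1, 2, 3, 4]
# Two parameters: [start, stop)
print(list(range(2, 5)))      # [2, 3, 4]
# Three parameters: [start, stop) with a step of 2
print(list(range(1, 10, 2)))  # [1, 3, 5, 7, 9]
```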
You can use range as the iterable in a for loop, instead of a while loop with a counting variable, when you just want to iterate over integers:

for i in range(1, 6):
    print(i)
for inches in range(1, 7):
    suffix = " inch"
    if inches == 1:
        suffix += "."
    else:
        suffix += "es."
    print("It has snowed " + str(inches) + suffix)
Filter hits
filt_hits = all_hmmer_hits[
    (all_hmmer_hits.e_value < 1e-3) &
    (all_hmmer_hits.best_dmn_e_value < 1e-3)
]
filt_hits.to_csv("1_out/filtered_hmmer_all_hits.csv", index=False)
print(filt_hits.shape)
filt_hits.head()
orfan_2016_annotation/20161129_summarize_results/1_filter_hmmer_hits.ipynb
maubarsom/ORFan-proteins
mit
Keep best hit per database for each cluster

Filtered by e-value < 1e-3 and best domain e-value < 1e-3.
gb = filt_hits.groupby(["cluster", "db"])
# For each (cluster, db) group, keep the row with the highest bitscore
reliable_fam_hits = pd.DataFrame(
    hits.loc[hits.bitscore.idxmax()] for _, hits in gb
)[["cluster", "db", "tool", "query_id", "subject_id",
   "bitscore", "e_value", "s_description", "best_dmn_e_value"]]

sorted_fam_hits = pd.concat(
    hits.sort_values(by="bitscore", ascending=False)
    for _, hits in reliable_fam_hits.groupby("cluster")
)
sorted_fam_hits.to_csv("1_out/filtered_hmmer_best_hits.csv", index=False)
print(sorted_fam_hits.shape)
sorted_fam_hits.head()
7. Out-of-core computation

Here's a quick demo of how xarray can leverage dask to work with data that doesn't fit in memory. This lets xarray substitute for tools like cdo and nco.

Let's open 10 years of runoff data. xarray can open multiple files at once using string pattern matching; in this case we open all the files that match our filestr, i.e. all the files for the 2080s. Each of these (compressed) files is approximately 80 MB.

PS - these files weren't available during the tutorial. The data we used was daily discharge hydrological data from the ISIMIP project (e.g. HadGEM2-ES / PCRGLOBWB / RCP2p6), which we cannot share here but is available for download.
from glob import glob

files = glob('data/*dis*.nc')
runoff = xr.open_mfdataset(files)
runoff
xarray-tutorial-egu2017.ipynb
iiasa/xarray_tutorial
bsd-3-clause
We start by defining a series of helper functions which we will use in creating the plot below.
def truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100):
    """
    Return a new colormap obtained from `cmap` by extracting the slice
    between `minval` and `maxval` (using `n` values).
    """
    new_cmap = colors.LinearSegmentedColormap.from_list(
        'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval, b=maxval),
        cmap(np.linspace(minval, maxval, n)))
    return new_cmap


def draw_normalised_dipole_field(ax, dipole_x, dipole_y,
                                 xmin, xmax, nx, ymin, ymax, ny):
    """
    Draw arrows representing the dipole field created by a dipole located
    at (dipole_x, dipole_y). The arrows are placed on a grid with the
    bounds (xmin, xmax, ymin, ymax) and with nx and ny subdivisions along
    the x- and y-axis, respectively.
    """
    Y, X = np.mgrid[ymin:ymax:ny*1j, xmin:xmax:nx*1j]
    MX = np.zeros_like(X)
    MY = np.ones_like(Y)
    RX = X - dipole_x
    RY = Y - dipole_y
    r = np.sqrt((X - dipole_x)**2 + (Y - dipole_y)**2)
    mdotr = MX * RX + MY * RY
    U = 3*mdotr*RX/r**5 - MX/r**3
    V = 3*mdotr*RY/r**5 - MY/r**3
    speed = np.sqrt(U**2 + V**2)
    UN = U/speed
    VN = V/speed
    cmap = truncate_colormap(cm.inferno_r, 0.15, 1.0)
    ax.quiver(X, Y, UN, VN,  # data
              r,             # colour the arrows based on this array
              scale=40, width=0.0030, pivot='middle', cmap=cmap)


def draw_vertical_line(ax, x, ymin, ymax, annotation, color):
    """
    Add a vertical line to a matplotlib axes, including an annotation.

    Arguments:

    ax:          matplotlib Axes instance to which the line is to be added.
    x:           x-position of the line.
    ymin, ymax:  vertical extent of the line.
    annotation:  Text to be added at the bottom of the line.
    color:       Color of the line and annotation.
    """
    linewidth = 2.0
    linestyle = '-'
    ax.plot((x, x), (ymin - 3, ymax + 3),
            linewidth=linewidth, linestyle=linestyle, color=color)
    ax.annotate(annotation, xy=(x, ymin), xytext=(0, -20),
                ha='center', va='top', color=color, rotation=0, fontsize=20,
                xycoords='data', textcoords='offset points')


def draw_particle(ax, x=0, y=10, diameter=20, color='#aaeeff', arrow_color='blue'):
    """
    Draw a circle representing the nanoparticle as well as an arrow
    indicating its magnetization.
    """
    arrow_length = 0.6 * diameter
    arrow_bottom = y - 0.5*arrow_length
    particle = Ellipse((x, y), diameter, diameter, facecolor=color)
    m_particle = FancyArrow(x, arrow_bottom, 0, arrow_length,
                            color=arrow_color, linewidth=5,
                            length_includes_head=True,
                            head_width=3, head_length=4)
    ax.add_patch(particle)
    ax.add_patch(m_particle)


def draw_nanodisc(ax, x=0, y=-70, width=150, height=10, thickness=2, color='white'):
    """
    Draw the nanodisc as two ellipses with given width and height,
    centered at (x, y).
    """
    y_top = y
    y_bottom = y - thickness
    ellipse_top = Ellipse((x, y_top), width, height, angle=0.0, facecolor=color)
    ellipse_bottom = Ellipse((x, y_bottom), width, height, angle=0.0, facecolor=color)
    rect = Rectangle((-0.5*width, y_bottom), width, thickness,
                     facecolor=color, edgecolor='none')
    # Draw the bottom ellipse first, then the rectangle to cover the
    # upper half of it, and then draw the top ellipse. Note that this
    # is not strictly necessary for a white ellipse, but it keeps the
    # layering correct.
    ax.add_patch(ellipse_bottom)
    ax.add_patch(rect)
    ax.add_patch(ellipse_top)
    # Draw the sides connecting the top and bottom ellipse.
    # For some reason it looks better if we shift them to the left
    # by a tiny amount.
    xshift = -0.1
    xleft = -0.5 * width + xshift
    xright = +0.5 * width + xshift
    ax.plot([xleft, xleft], [y_bottom, y_top], color='black', linewidth=1)
    ax.plot([xright, xright], [y_bottom, y_top], color='black', linewidth=1)
notebooks/fig_2_dipole_field_visualisation.ipynb
maxalbert/paper-supplement-nanoparticle-sensing
mit
Finally we can plot the actual figure.
# Position of dipole (shown as cyan-coloured particle below)
dipole_x, dipole_y = 0, 10

xmin, xmax = -80, 80
ymin, ymax = -82, 25
xmin_fld, xmax_fld = -75, 75
ymin_fld, ymax_fld = -50, -5
nx_fld, ny_fld = 40, 16

plt.style.use('style_sheets/fig2.mplstyle')
fig, ax = plt.subplots(figsize=(12, 7))

# Tweak appearance of the axis spines etc.
ax.set_aspect('equal')
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
xticks = [-50, 0, 50]
yticks = [0, -10, -20, -30, -40, -50]
ax.set_xticks(xticks)
ax.set_yticks(yticks)
ax.set_yticklabels([str(-y) for y in yticks])
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_bounds(-55, 5)
ax.set_xlabel('Lateral offset from particle centre (nm)', labelpad=20)
ax.set_ylabel(r'Vertical separation $d$ (nm)', labelpad=20, y=0.62)

# Plot particle, nanodisc, dipole field and vertical lines
draw_particle(ax, x=dipole_x, y=dipole_y, color='#aaeeff')
draw_nanodisc(ax, color='white')
draw_normalised_dipole_field(ax, dipole_x, dipole_y,
                             xmin_fld, xmax_fld, nx_fld,
                             ymin_fld, ymax_fld, ny_fld)
draw_vertical_line(ax, x=0,   ymin=ymin_fld, ymax=ymax_fld, annotation='N=1', color='darkblue')
draw_vertical_line(ax, x=-41, ymin=ymin_fld, ymax=ymax_fld, annotation='N=2', color='red')
draw_vertical_line(ax, x=+41, ymin=ymin_fld, ymax=ymax_fld, annotation='N=2', color='red')
1. Specify a pure substance without specifying a temperature

Next we load the file containing the constants for the thermodynamic-property correlations, in this case "PureFull_mod_properties.xls", and assign it to the variable dppr_file. We create an object called thermodynamic_correlations and pass in the variables component and property_thermodynamics; in the example these are the component METHANE and the property Vapour_Pressure.
prueba_1 = pt.Thermodynamic_correlations()
component = ['METHANE']
property_thermodynamics = "Vapour_Pressure"
Vapour_Pressure = prueba_1.property_cal(component, property_thermodynamics)
print("Vapour Pressure = {0}".format(Vapour_Pressure))

prueba_2 = pt.Thermodynamic_correlations()
component = ['ETHANE']
property_thermodynamics = "Vapour_Pressure"
Vapour_Pressure_2 = prueba_2.property_cal(component, property_thermodynamics)
print("Vapour Pressure = {0}".format(Vapour_Pressure_2))

Vapour_Pressure[:5]*100
thermodynamic_correlations.ipynb
pysg/pyther
mit
To produce a simple plot of the thermodynamic property, use the method graphical(temperature, property_thermodynamics, label_property_thermodynamics, units). Its arguments are the temperature at which the thermodynamic property was calculated, the calculated property values, the property's label, and the corresponding units for the temperature and the property.
temperature_vapour = prueba_1.temperature
units = prueba_1.units
print(units)

prueba_1.graphical(temperature_vapour, Vapour_Pressure, property_thermodynamics, units)
Working with directories
import os
import shutil
import pprint

os.mkdir("/tmp/park-python")
try:
    os.rmdir("/tmp/park-python")
except OSError as err:
    print(err)

path = "/tmp/park-python/lectures/04"
if not os.path.exists(path):
    os.makedirs(path)

# rmdir only removes empty directories, so this raises OSError
try:
    os.rmdir("/tmp/park-python")
except OSError as err:
    print(err)

# rmtree removes a whole directory tree recursively
shutil.rmtree("/tmp/park-python")

pprint.pprint(list(os.walk(os.curdir)))
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
Working with files
# open a file descriptor for writing
f = open("/tmp/example.txt", "w")
# write the contents
f.write("Технопарк\n")
# always close the file
f.close()

# open a file descriptor for reading
f = open("/tmp/example.txt", "r")
# read the whole contents
data = f.read()
# always close the file!
f.close()

print(data)
"r" – open for reading (the default).
"w" – open for writing; the file's contents are erased, and the file is created if it doesn't exist.
"x" – open for exclusive creation; raises an exception if the file already exists.
"a" – open for appending; data is added at the end of the file.
"b" – binary mode.
"t" – text mode (the default).
"+" – open for both reading and writing.
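A minimal sketch of the "x" mode in action (the file path here is just an illustration): the first open creates the file, the second one fails with FileExistsError.

```python
import os

path = "/tmp/example_x_mode.txt"  # illustrative path
if os.path.exists(path):
    os.remove(path)

# "x" creates the file and fails if it already exists
with open(path, "x") as f:
    f.write("created\n")

try:
    with open(path, "x") as f:
        f.write("again\n")
except FileExistsError as err:
    print("already exists:", err)

os.remove(path)
```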
# using a context manager
with open("/tmp/example.txt", "a") as f:
    f.write("МГТУ\n")

with open("/tmp/example.txt", "r") as f:
    print(f.readlines())

# read the file line by line, without loading it fully into memory
with open("/tmp/example.txt", "r") as f:
    for line in f:
        print(repr(line))

# Checking the integrity of a saved file
import hashlib

def hash_file(filename):
    h = hashlib.sha1()
    # open the file in binary mode
    with open(filename, 'rb') as file:
        chunk = b'\0'
        while chunk != b'':
            # read in 1024-byte chunks
            chunk = file.read(1024)
            h.update(chunk)
    # hex representation of the resulting digest
    return h.hexdigest()

print(hash_file("/tmp/example.txt"))
print(hash_file("/tmp/example.txt"))

with open("/tmp/example.txt", "a") as f:
    f.write("1")

print("After the change:", hash_file("/tmp/example.txt"))
stdin, stdout, stderr

By default, Unix shells associate file descriptor 0 with the process's standard input stream (stdin), file descriptor 1 with the standard output stream (stdout), and file descriptor 2 with the diagnostic stream (stderr, where error messages are usually written).
import sys

print(sys.stdin)
print(sys.stdout)
print(sys.stderr)
print(sys.stdin.fileno())
print(sys.stdout.fileno())
The stdout and stderr descriptors are redefined in a Jupyter notebook. Let's see where they lead:
sys.stdout.write("where am I")
And they lead right into this notebook :)

Debugging

Demonstrated using the file debugging.py. Useful debugging libraries - https://github.com/vinta/awesome-python#debugging-tools

Testing

Why?
def get_max_length_word(sentence):
    longest_word = None
    words = sentence.split()
    for word in words:
        if not longest_word or len(word) > len(longest_word):
            longest_word = word
    return longest_word
What can go wrong? Just about anything:

- SyntaxError
- a logic error
- a backwards-incompatible new version of a library you use
- ...

Six months after launching an application without tests, making changes to a large codebase becomes really scary! Some even practice TDD - Test-Driven Development.

unittest
import unittest

class LongestWordTestCase(unittest.TestCase):
    def test_sentences(self):
        sentences = [
            ["Beautiful is better than ugly.", "Beautiful"],
            ["Complex is better than complicated.", "complicated"]
        ]
        for sentence, correct_word in sentences:
            self.assertEqual(get_max_length_word(sentence), correct_word)

# Real projects usually use the automatic test discovery mechanism (discover).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LongestWordTestCase)
unittest.TextTestRunner().run(suite)
<table border="1" class="docutils" align="left">
<colgroup> <col width="48%"> <col width="34%"> <col width="18%"> </colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Method</th> <th class="head">Checks that</th> </tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td>assertEqual(a, b)</td> <td><code class="docutils literal"><span class="pre">a</span> <span class="pre">==</span> <span class="pre">b</span></code></td> </tr>
<tr class="row-odd"><td>assertNotEqual(a, b)</td> <td><code class="docutils literal"><span class="pre">a</span> <span class="pre">!=</span> <span class="pre">b</span></code></td> </tr>
<tr class="row-even"><td>assertTrue(x)</td> <td><code class="docutils literal"><span class="pre">bool(x)</span> <span class="pre">is</span> <span class="pre">True</span></code></td> </tr>
<tr class="row-odd"><td>assertFalse(x)</td> <td><code class="docutils literal"><span class="pre">bool(x)</span> <span class="pre">is</span> <span class="pre">False</span></code></td> </tr>
<tr class="row-even"><td>assertIsNone(x)</td> <td><code class="docutils literal"><span class="pre">x</span> <span class="pre">is</span> <span class="pre">None</span></code></td> </tr>
<tr class="row-odd"><td>assertIsNotNone(x)</td> <td><code class="docutils literal"><span class="pre">x</span> <span class="pre">is</span> <span class="pre">not</span> <span class="pre">None</span></code></td> </tr>
<tr class="row-even"><td>assertIsInstance(a, b)</td> <td><code class="docutils literal"><span class="pre">isinstance(a,</span> <span class="pre">b)</span></code></td> </tr>
<tr class="row-even"><td colspan="2">And others... https://docs.python.org/3.5/library/unittest.html</td> </tr>
</tbody>
</table>

Let's test a class.
class BoomException(Exception):
    pass

class Material:
    def __init__(self, name, reacts_with=None):
        self.name = name
        self.reacts_with = reacts_with or []

    def __repr__(self):
        return self.name

class Alchemy:
    def __init__(self):
        self.materials = []

    def add(self, material):
        for existing_material in self.materials:
            if material.name not in existing_material.reacts_with:
                continue
            self.materials = []
            raise BoomException("{0} + {1}".format(existing_material.name, material.name))
        self.materials.append(material)

    def remove(self, material):
        self.materials.remove(material)

# 2Na + 2H2O = 2NaOH + H2 (Don't try this at home!!! Alkali is extremely dangerous!)
alchemy = Alchemy()
material_ca = Material("Ca", reacts_with=[])
material_h20 = Material("H2O", reacts_with=["Na"])
material_na = Material("Na", reacts_with=["H2O"])

alchemy.add(material_ca)
alchemy.add(material_h20)
try:
    alchemy.add(material_na)
except BoomException:
    print("We are alive! But all items lost!")

import unittest

class AlchemyTest(unittest.TestCase):
    def setUp(self):
        self.alchemy = Alchemy()

    def test_add(self):
        self.alchemy.add(Material("C"))
        self.alchemy.add(Material("F"))
        self.assertEqual(len(self.alchemy.materials), 2)

    def test_remove(self):
        material_c = Material("C")
        self.alchemy.add(material_c)
        self.assertEqual(len(self.alchemy.materials), 1)
        self.alchemy.remove(material_c)
        self.assertEqual(len(self.alchemy.materials), 0)

    def test_boom(self):
        material_na = Material("Na", reacts_with=["H2O"])
        material_h20 = Material("H2O", reacts_with=["Na"])
        self.alchemy.add(material_na)
        self.assertRaises(BoomException, self.alchemy.add, material_h20)
        self.assertEqual(len(self.alchemy.materials), 0)

# Real projects usually use the automatic test discovery mechanism (discover).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AlchemyTest)
unittest.TextTestRunner().run(suite)
unittest.mock

How do we test code that makes external calls: reading a file, fetching the contents of a URL?
import requests

def get_location_city(ip):
    data = requests.get("https://freegeoip.net/json/{ip}".format(ip=ip)).json()
    return data["city"]

def get_ip():
    data = requests.get("https://httpbin.org/ip").json()
    return data["origin"]

get_location_city(get_ip())
First, let's look at what monkey patching is.
import math

def fake_sqrt(num):
    return 42

original_sqrt = math.sqrt
math.sqrt = fake_sqrt
# call the function we just patched
print(math.sqrt(16))

math.sqrt = original_sqrt
math.sqrt(16)

import unittest
from unittest.mock import patch, Mock

class FakeIPResponse:
    def json(self):
        return {"origin": "127.0.0.1"}

class LongestWordTestCase(unittest.TestCase):
    @patch('requests.get', Mock(return_value=FakeIPResponse()))
    def test_get_ip(self):
        self.assertEqual(get_ip(), "127.0.0.1")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(LongestWordTestCase)
unittest.TextTestRunner().run(suite)

from unittest.mock import Mock

mock = Mock()
mock.method(1, 2, 3, test='wow')
mock.method.assert_called_with(1, 2, 3, test='wow')
mock.non_existing_method.assert_not_called()
https://docs.python.org/3/library/unittest.mock.html

The coverage library lets you measure how much of your code is covered by tests. Besides unit tests there are many other kinds of tests:
* Integration tests (how different components interact with each other)
* Functional tests (e.g. Selenium)
* Performance tests (benchmarks)
* ...

Useful testing libraries - https://github.com/vinta/awesome-python#testing

Multithreading

What is a process? UNIX is a multitasking operating system, which means more than one program can run at the same time. Each program running at a given moment is called a process. http://www.kharchuk.ru/%D0%A1%D1%82%D0%B0%D1%82%D1%8C%D0%B8/15-unix-foundations/80-unix-processes
STEPS = 50000000

# A simple program that adds up numbers.
def worker(steps):
    count = 0
    for i in range(steps):
        count += 1
    return count

%timeit -n1 -r1 worker(STEPS)

print("Remind the lecturer to show Activity Monitor")
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/g3.png)</div>

What is a thread? Multithreading is a natural extension of multitasking: each process can run several threads. The program above ran in a single process, in the main thread.

<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/threads.jpg)</div>

http://www.cs.miami.edu/home/visser/Courses/CSC322-09S/Content/UNIXProgramming/UNIXThreads.shtml

The logical next step is to assume that 2 threads will run the program above faster. Shall we check?
import threading
import queue

result_queue = queue.Queue()
STEPS = 50000000
NUM_THREADS = 2

def worker(steps):
    count = 0
    for i in range(steps):
        count += 1
    result_queue.put(count)

def get_count_threaded():
    count = 0
    threads = []
    for i in range(NUM_THREADS):
        t = threading.Thread(target=worker, args=(STEPS//NUM_THREADS,))
        threads.append(t)
        t.start()
    for i in range(NUM_THREADS):
        count += result_queue.get()
    return count

%timeit -n1 -r1 get_count_threaded()
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/g4.png)</div>

GIL https://jeffknupp.com/blog/2012/03/31/pythons-hardest-problem/

OK. Is there really no way out? There is - multiprocessing

Multiprocessing
import multiprocessing

NUM_PROCESSES = 2
STEPS = 50000000
result_queue = multiprocessing.Queue()

def worker(steps):
    count = 0
    for i in range(steps):
        count += 1
    result_queue.put(count)

def get_count_in_processes():
    count = 0
    processes = []
    for i in range(NUM_PROCESSES):
        p = multiprocessing.Process(target=worker, args=(STEPS//NUM_PROCESSES,))
        processes.append(p)
        p.start()
    for i in range(NUM_PROCESSES):
        count += result_queue.get()
    return count

%timeit -n1 -r1 get_count_in_processes()
So why do we need threads at all? Because not every task is CPU-bound. There are IO-bound tasks, which threads parallelize just fine despite the GIL. Who can give an example?

As an example, we start an HTTP server on port 8000 (server.go). A small piece of text is served at http://localhost:8000. Our task is to download the content at this address.
import requests

STEPS = 100

def download():
    requests.get("http://127.0.0.1:8000").text

# A simple program that downloads the contents of a URL. A typical IO-bound task.
def worker(steps):
    for i in range(steps):
        download()

%timeit -n1 -r1 worker(STEPS)
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/g1.png)</div>
import threading

STEPS = 100
NUM_THREADS = 2

def worker(steps):
    for i in range(steps):
        download()

def run_worker_threaded():
    threads = []
    for i in range(NUM_THREADS):
        t = threading.Thread(target=worker, args=(STEPS//NUM_THREADS,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()

%timeit -n1 -r1 run_worker_threaded()
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/g2.png)</div>

Just for fun, let's try multiprocessing for this task:
import multiprocessing

NUM_PROCESSES = 2

def worker(steps):
    for i in range(steps):
        download()

def run_worker_in_processes():
    processes = []
    for i in range(NUM_PROCESSES):
        p = multiprocessing.Process(target=worker, args=(STEPS//NUM_PROCESSES,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()

%timeit -n1 -r1 run_worker_in_processes()
As we can see, threads gave the better result (Macbook Pro 2016 - 64 threads). To simplify working with threads, Python provides the concurrent.futures module: it offers two high-level objects, ThreadPoolExecutor and ProcessPoolExecutor.
import concurrent.futures
import requests

STEPS = 100

def download():
    return requests.get("http://127.0.0.1:8000").text

def run_in_executor():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=64)
    future_to_url = {executor.submit(download): i for i in range(STEPS)}
    for future in concurrent.futures.as_completed(future_to_url):
        i = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%d generated an exception: %s' % (i, exc))
        else:
            pass
            # print('%d page is %d bytes' % (i, len(data)))
    executor.shutdown()

%timeit -n1 -r1 run_in_executor()
Similarly, you can use ProcessPoolExecutor to move the work into a pool of processes.

The complexity of multithreaded applications
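A minimal sketch of the same idea with a process pool (the helper run_in_process_pool and the chunking scheme are mine, not from the lecture): the CPU-bound counting loop from the earlier examples is split into equal chunks and handed to ProcessPoolExecutor.map.

```python
import concurrent.futures

def cpu_worker(steps):
    # The same CPU-bound counting loop as in the earlier examples
    count = 0
    for _ in range(steps):
        count += 1
    return count

def run_in_process_pool(total_steps, workers=2):
    # Each worker process gets an equal share of the steps;
    # map returns the partial counts, which we sum up.
    chunks = [total_steps // workers] * workers
    with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as executor:
        return sum(executor.map(cpu_worker, chunks))

if __name__ == "__main__":
    print(run_in_process_pool(1_000_000))
```

Because the chunks run in separate processes, each with its own interpreter, the GIL no longer serializes the counting work.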
counter = 0

def worker(num):
    global counter
    for i in range(num):
        counter += 1

worker(1000000)
print(counter)

import threading

counter = 0

def worker(num):
    global counter
    for i in range(num):
        counter += 1

threads = []
for i in range(10):
    t = threading.Thread(target=worker, args=(100000,))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
#print(counter)

import threading

counter = 0
lock = threading.Lock()

def worker(num):
    global counter
    for i in range(num):
        lock.acquire()
        counter += 1
        lock.release()

threads = []
for i in range(10):
    t = threading.Thread(target=worker, args=(100000,))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
print(counter)

# deadlock example
import threading

counter = 0
lock = threading.Lock()

def print_counter():
    lock.acquire()
    print(counter)
    lock.release()

def worker():
    global counter
    lock.acquire()
    print_counter()  # tries to acquire the lock we already hold -> deadlock
    counter += 1
    lock.release()

worker()
Let's return to the URL-downloading example: can we do even better?

- download pages even faster
- use less memory, not spending it on creating threads
- not have to think about thread synchronization

Asynchronous programming

Motivation: IO operations are very slow, so we want the program to do useful work while it waits for IO. A comparison of the latency of some operations (https://gist.github.com/jboner/2841832):

<table align="left"> <tr> <td>L1 CPU cache reference</td> <td>0.5 ns</td> <td></td> </tr> <tr> <td>Main memory reference</td> <td>100 ns</td> <td>200x L1 cache</td> </tr> <tr> <td>Read 1 MB sequentially from memory</td> <td>250,000 ns (250 us)</td> <td></td> </tr> <tr> <td>Round trip within same datacenter</td> <td>500,000 ns (500 us)</td> <td></td> </tr> <tr> <td>Read 1 MB sequentially from SSD*</td> <td>1,000,000 ns (1,000 us, 1ms)</td> <td></td> </tr> <tr> <td>Read 1 MB sequentially from disk</td> <td>20,000,000 ns (20,000 us, 20 ms)</td> <td>80x memory, 20X SSD</td> </tr> <tr> <td>Send packet CA->Netherlands->CA</td> <td>150,000,000 ns (150,000 us, 150 ms)</td> <td></td> </tr> </table>

Let's return to a simplified version of our program that downloaded a URL synchronously in a single thread.
import time

def request(i):
    print(f"Sending request {i+1}")
    time.sleep(1)
    print(f"Got response from request {i+1}")
    print()

for i in range(5):
    request(i)
We spend 1 second per request, so we wait 5 seconds for 5 requests - yet we could have sent them one right after another, received all the results a second later, and processed them. The callback approach:
import time

def request(i):
    print("Sending request %d" % i)
    def on_data(data):
        print("Got response from request %d" % i)
    return on_data

callbacks = []
for i in range(5):
    cb = request(i)
    callbacks.append(cb)

time.sleep(1)
for cb in callbacks:
    cb("data")
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
Generators

A generator is a function that produces a sequence of values using the yield keyword. The simplest example:
def simple_gen():
    yield 1
    yield 2

gen = simple_gen()
print(next(gen))
print(next(gen))
print(next(gen))  # raises StopIteration: the generator is exhausted

gen = simple_gen()
for i in gen:
    print(i)
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
The first advantage: you can obtain values without loading all the elements into memory. A prime example is range. A slightly more complex example (with state):
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

gen = fib()
for i in range(6):
    print(next(gen))
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
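To illustrate the memory point (a small aside, not from the lecture): a generator expression stays tiny no matter how many items it will eventually yield, while the equivalent list stores every element up front.

```python
import sys

squares_list = [i * i for i in range(1_000_000)]  # materializes all elements
squares_gen = (i * i for i in range(1_000_000))   # yields them lazily

list_size = sys.getsizeof(squares_list)
gen_size = sys.getsizeof(squares_gen)
print(list_size > gen_size)                     # True
print(sum(squares_gen) == sum(squares_list))    # True
```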
The second advantage: you get values as soon as they have been computed.

Coroutines based on generators:
def coro():
    next_value = yield "Hello"
    yield next_value

c = coro()
print(next(c))
print(c.send("World"))
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
You can work with an infinite stream of data. You can exchange results between individual generators as they become ready, that is, deal with several concurrent tasks. These tasks do not necessarily have to depend on each other. For a deeper understanding and other details, see http://www.dabeaz.com/finalgenerator/
import time

def request(i):
    print("Sending request %d" % i)
    data = yield
    print("Got response from request %d" % i)

generators = []
for i in range(5):
    gen = request(i)
    generators.append(gen)
    next(gen)

time.sleep(1)

for gen in generators:
    try:
        gen.send("data")
    except StopIteration:
        pass
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
For this lecture, the important point is that the execution of a generator function in Python can be suspended to wait for the required data and then resumed from where it left off. The local execution context is preserved, and while we wait for the data the interpreter can do other useful work.

Asyncio

Asynchronous programming with the asyncio library is built around the concept of the Event Loop. The event loop is the central coordinator in asynchronous Python programs. It is responsible for:

- scheduling coroutines and callbacks
- registering deferred calls
- registering timers

This lets us write programs so that, during blocking IO operations, the execution context is switched to other tasks that are waiting to run.
import asyncio

loop = asyncio.get_event_loop()
loop.run_forever()
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/loop.jpg)</div>
import asyncio

def cb():
    print("callback called")
    loop.stop()

loop = asyncio.get_event_loop()
loop.call_later(delay=3, callback=cb)
print("start event loop")
loop.run_forever()
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
In Python 3.4, awaiting the result of a coroutine was done with the yield from construct (https://www.python.org/dev/peps/pep-0380/), and coroutine functions were marked with the @asyncio.coroutine decorator
import asyncio

@asyncio.coroutine
def return_after_delay():
    yield from asyncio.sleep(3)
    print("return called")

loop = asyncio.get_event_loop()
print("start event loop")
loop.run_until_complete(return_after_delay())
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
Version 3.5 introduced special keywords for programming in an asynchronous style: async and await.

async is a keyword that marks a function as asynchronous (a coroutine). Such a function can suspend its execution at a certain point (a blocking operation) and then, once the result of that operation is ready, resume execution.

await lets you start such a function and wait for its result.
import asyncio

async def return_after_delay():
    await asyncio.sleep(3)
    print("return called")

loop = asyncio.get_event_loop()
print("start event loop")
loop.run_until_complete(return_after_delay())
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
For the program to actually run asynchronously, you need to use the primitives provided by the asyncio library:
import asyncio

async def get_data():
    await asyncio.sleep(1)
    return "boom"

async def request(i):
    print(f"Sending request {i+1}")
    data = await get_data()
    print(f"Got response from request {i+1}: {data}")

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*[request(i) for i in range(5)]))
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
Exceptions in coroutines work exactly the same way as in synchronous code:
import asyncio

async def get_data():
    await asyncio.sleep(1)
    raise ValueError

async def request(i):
    print("Sending request %d" % i)
    try:
        data = await get_data()
    except ValueError:
        print("Error in request %d" % i)
    else:
        print("Got response from request %d" % i)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*[request(i) for i in range(5)]))
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
Examples of other Event Loop implementations:

- Tornado IOLoop (https://github.com/tornadoweb/tornado)
- Twisted reactor (https://twistedmatrix.com/trac/)
- pyuv (https://github.com/saghul/pyuv)
- PyGame (http://pygame.org/hifi.html)
- ...

We will return to asyncio in the lecture on the internet and client-server applications, where we will also take a detailed look at network operations: non-blocking reads from and writes to sockets. A list of libraries written on top of asyncio: https://github.com/python/asyncio/wiki/ThirdParty

For now, let's go back to our example and rewrite it using the asynchronous approach, and asyncio in particular.
import aiohttp
import asyncio

STEPS = 100

async def download(loop):
    async with aiohttp.ClientSession(loop=loop) as session:
        async with session.get("http://127.0.0.1:8000") as response:
            return await response.text()

async def worker(steps, loop):
    await asyncio.gather(*[download(loop) for x in range(steps)])

loop = asyncio.get_event_loop()
%timeit -n1 -r1 loop.run_until_complete(worker(STEPS, loop))
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
# future example.
import asyncio

async def slow_operation(future):
    try:
        await asyncio.wait_for(asyncio.sleep(1), 2)
    except asyncio.TimeoutError:
        future.set_exception(ValueError("Error sleeping"))
    else:
        future.set_result('Future is done!')

def got_result(future):
    if future.exception():
        print("Exception:", type(future.exception()))
    else:
        print(future.result())
    loop.stop()

loop = asyncio.get_event_loop()
future = asyncio.Future()
future.add_done_callback(got_result)
asyncio.ensure_future(slow_operation(future))
loop.run_forever()

# move blocking calls into a thread pool
import asyncio
import requests

async def main():
    loop = asyncio.get_event_loop()
    future1 = loop.run_in_executor(None, requests.get, 'http://127.0.0.1:8000')
    future2 = loop.run_in_executor(None, requests.get, 'http://127.0.0.1:8000')
    response1 = await future1
    response2 = await future2
    print(response1.text)
    print(response2.text)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
A closing example (asyncio + multiprocessing)
# asyncio + multiprocessing
import aiohttp
import asyncio
import multiprocessing

NUM_PROCESSES = 2
STEPS = 100

async def download(loop):
    async with aiohttp.ClientSession(loop=loop) as session:
        async with session.get("http://127.0.0.1:8000") as response:
            return await response.text()

async def worker(steps, loop):
    await asyncio.gather(*[download(loop) for x in range(steps)], loop=loop)

def run(steps):
    loop = asyncio.new_event_loop()
    loop.run_until_complete(worker(steps, loop))

def run_in_processes():
    processes = []
    for i in range(NUM_PROCESSES):
        p = multiprocessing.Process(target=run, args=(STEPS//NUM_PROCESSES,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()

%timeit -n1 -r1 run_in_processes()
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
Subprocess. If there is time left...
import subprocess
import os

result = subprocess.run(["ls", "-l", os.getcwd()], stdout=subprocess.PIPE)
print(result.stdout)

# using the shell
result = subprocess.run(
    "ls -l " + os.getcwd() + "|grep debug",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True
)
print(result.stdout)
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
Cycling, ladies and gentlemen, aka a slow spiral into insanity. $\epsilon-$perturbations Another earlier (and nowadays less popular) method for avoiding degeneracy is by introducing $\epsilon$-perturbations into the problem. Recall that the standard system goes like \begin{align} \text{maximise} \quad & c^T x \ \text{subject to} \quad & Ax = b \ \text{and} \quad & x \geq 0 \end{align} With $\epsilon$-perturbations, we will instead solve \begin{align} \text{maximise} \quad & c^T x \ \text{subject to} \quad & Ax = b + \epsilon \ \text{and} \quad & x \geq 0 \end{align} which will give us a close enough answer to the original problem, and help us avoid the problem with the 0's. This kind of happens automatically as a bonus if you're running the simplex algorithm on a computer; as the program runs, errors from truncation, etc. build up, and you eventually get out of the cycle because your computer is doing floating point arithmetic. Which is just about the one good thing about floating point arithmetic, I guess. Time Complexity of the Simplex Method The Klee-Minty Cube \begin{align} \text{maximise} \quad & 100x_1 + 10x_2 + x_3 \ \text{subject to} \quad & x_1 \leq 1 \ & 20x_1 + x_2 \leq 100 \ & 200x_1 + 20x_2 + x_3 \leq 10000\ \text{and} \quad & x_1, x_2, x_3 \geq 0 \end{align}
c_1 = numpy.array([[   1.0,   0.0,  0.0, 1.0, 0.0, 0.0,     1.0, 's_1']])
c_2 = numpy.array([[  20.0,   1.0,  0.0, 0.0, 1.0, 0.0,   100.0, 's_2']])
c_3 = numpy.array([[ 200.0,  20.0,  1.0, 0.0, 0.0, 1.0, 10000.0, 's_3']])
z   = numpy.array([[-100.0, -10.0, -1.0, 0.0, 0.0, 0.0,     0.0, '']])

rows = numpy.concatenate((c_1, c_2, c_3, z), axis=0)
tableau_klee_minty = pd.DataFrame(rows,
                                  columns=['x_1', 'x_2', 'x_3', 's_1', 's_2', 's_3', 'value', 'basic_variable'],
                                  index=['c_1', 'c_2', 'c_3', 'z'])
tableau_klee_minty.ix[:, 0:-1] = tableau_klee_minty.ix[:, 0:-1].astype('float')

tableaux_klee_minty = dict()
run_simplex(tableaux_klee_minty, tableau_klee_minty)
lpsm.ipynb
constellationcolon/simplexity
mit
Statistical Hypothesis Testing Classical testing involves a null hypothesis $H_0$ that represents the default and an alternative hypothesis $H_1$ to test. Statistics helps us to determine whether $H_0$ should be considered false or not. Example: Flipping A Coin Null hypothesis = the coin is fair = $p = 0.5$ Alternative hypothesis = coins is not fair = $p \not = 0.5$ To test, $n$ samples will be collected. Each toss is a Bernoulli(n, p) trial.
def normal_approximation_to_binomial(n, p):
    """return mu and sigma corresponding to Binomial(n, p)"""
    mu = p * n
    sigma = math.sqrt(p * (1 - p) * n)
    return mu, sigma

normal_probability_below = normal_cdf

def normal_probability_above(lo, mu=0, sigma=1):
    return 1 - normal_cdf(lo, mu, sigma)

def normal_probability_between(lo, hi, mu=0, sigma=1):
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

def normal_probabilty_outside(lo, hi, mu=0, sigma=1):
    return 1 - normal_probability_between(lo, hi, mu, sigma)

def normal_upper_bound(probability, mu=0, sigma=1):
    """returns the z for which P(Z <= z) = probability"""
    return inverse_normal_cdf(probability, mu, sigma)

def normal_lower_bound(probability, mu=0, sigma=1):
    """returns the z for which P(Z >= z) = probability"""
    return inverse_normal_cdf(1 - probability, mu, sigma)

def normal_two_sided_bounds(probability, mu=0, sigma=1):
    """returns the symmetric (about the mean) bounds
    that contain the specified probability"""
    tail_probability = (1 - probability) / 2
    # upper bound should have tail_probability above it
    upper_bound = normal_lower_bound(tail_probability, mu, sigma)
    # lower bound should have tail_probability below it
    lower_bound = normal_upper_bound(tail_probability, mu, sigma)
    return lower_bound, upper_bound
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
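The cell above references `normal_cdf` and `inverse_normal_cdf`, which are defined in an earlier chapter of the book. A minimal self-contained version might look like the sketch below (the binary-search bounds and tolerance are assumptions, not the book's exact code):

```python
import math

def normal_cdf(x, mu=0, sigma=1):
    # CDF of the normal distribution via the error function
    return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2

def inverse_normal_cdf(p, mu=0, sigma=1, tolerance=0.00001):
    """binary-search for the z with normal_cdf(z) approximately p"""
    # reduce to the standard normal, then rescale
    if mu != 0 or sigma != 1:
        return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
    low_z, hi_z = -10.0, 10.0
    while hi_z - low_z > tolerance:
        mid_z = (low_z + hi_z) / 2
        if normal_cdf(mid_z) < p:
            low_z = mid_z
        else:
            hi_z = mid_z
    return mid_z
```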
Now we flip the coin 1,000 times and see if our null hypothesis is true. If so, $X$ will be approximately normally distributed with a mean of 500 and a standard deviation of 15.8:
mu_0, sigma_0 = normal_approximation_to_binomial(1000, 0.5)
mu_0, sigma_0
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
A decision must be made with respect to significance, i.e., how willing are we to accept "false positives" (type 1 errors) by rejecting $H_0$ even though it is true? This is often set to 5% or 1%. We will use 5%:
normal_two_sided_bounds(0.95, mu_0, sigma_0)
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
If $H_0$ is true, $p$ should equal 0.5 and $X$ should fall within the interval above 95% of the time; only 5% of the time would we wrongly reject $H_0$. Now we can determine the power of a test, i.e., the probability of not making a type 2 error, a "false negative" in which we fail to reject $H_0$ even though it is false.
# 95% bounds based on assumption p is 0.5
lo, hi = normal_two_sided_bounds(0.95, mu_0, sigma_0)

# actual mu and sigma based on p = 0.55
mu_1, sigma_1 = normal_approximation_to_binomial(1000, 0.55)

# a type 2 error means we fail to reject the null hypothesis
# which will happen when X is still in our original interval
type_2_probability = normal_probability_between(lo, hi, mu_1, sigma_1)
power = 1 - type_2_probability  # 0.887
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
Run a 5% significance test to find the cutoff below which 95% of the probability lies:
hi = normal_upper_bound(0.95, mu_0, sigma_0)
# is 526 (< 531, since we need more probability in the upper tail)

type_2_probability = normal_probability_below(hi, mu_1, sigma_1)
power = 1 - type_2_probability  # 0.936

def two_sided_p_value(x, mu=0, sigma=1):
    if x >= mu:
        # if x is greater than the mean, the tail is what's greater than x
        return 2 * normal_probability_above(x, mu, sigma)
    else:
        # if x is less than the mean, the tail is what's less than x
        return 2 * normal_probability_below(x, mu, sigma)

two_sided_p_value(529.5, mu_0, sigma_0)  # 0.062

extreme_value_count = 0
for _ in range(100000):
    num_heads = sum(1 if random.random() < 0.5 else 0  # count # of heads
                    for _ in range(1000))              # in 1000 flips
    if num_heads >= 530 or num_heads <= 470:           # and count how often
        extreme_value_count += 1                       # the # is 'extreme'

print(extreme_value_count / 100000)  # 0.062

two_sided_p_value(531.5, mu_0, sigma_0)  # 0.0463
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
Make sure your data is roughly normally distributed before using normal_probability_above to compute p-values. The annals of bad data science are filled with examples of people opining that the chance of some observed event occurring at random is one in a million, when what they really mean is “the chance, assuming the data is distributed normally,” which is pretty meaningless if the data isn’t. There are various statistical tests for normality, but even plotting the data is a good start. Confidence Intervals We can construct a confidence interval around an observed value of a parameter. We can do this for our assumption of an unfair coin (biased towards heads in that we observed 525 of 1,000 flips giving heads):
math.sqrt(p * (1 - p) / 1000)
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
Not knowing $p$ we use our estimate:
p_hat = 525 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)
sigma
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
Assuming a normal distribution, we conclude that we are 95% confident that the interval below includes the true $p$:
normal_two_sided_bounds(0.95, mu, sigma)
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
0.5 falls within our interval so we do not conclude that the coin is unfair. What if we observe 540 heads though?
p_hat = 540 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)  # 0.0158
normal_two_sided_bounds(0.95, mu, sigma)  # [0.5091, 0.5709]
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
In this scenario, 0.5 falls outside of our interval so the "fair coin" hypothesis is not confirmed. P-hacking P-hacking involves using various "hacks" to get a $p$ value to go below 0.05: creating a superfluous number of hypotheses, selectively removing outliers, etc.
def run_experiment():
    """flip a fair coin 1000 times, True = heads, False = tails"""
    return [random.random() < 0.5 for _ in range(1000)]

def reject_fairness(experiment):
    """using the 5% significance levels"""
    num_heads = len([flip for flip in experiment if flip])
    return num_heads < 469 or num_heads > 531

random.seed(0)
experiments = [run_experiment() for _ in range(1000)]
num_rejections = len([experiment for experiment in experiments
                      if reject_fairness(experiment)])
num_rejections  # 46
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
Valid inferences come from a priori hypotheses (hypotheses created before collecting any data) and data cleansing without reference to the hypotheses. Example: Running An A/B Test
def estimated_parameters(N, n):
    p = n / N
    sigma = math.sqrt(p * (1 - p) / N)
    return p, sigma

def a_b_test_statistic(N_A, n_A, N_B, n_B):
    p_A, sigma_A = estimated_parameters(N_A, n_A)
    p_B, sigma_B = estimated_parameters(N_B, n_B)
    return (p_B - p_A) / math.sqrt(sigma_A ** 2 + sigma_B ** 2)

z = a_b_test_statistic(1000, 200, 1000, 180)
z

two_sided_p_value(z)

z = a_b_test_statistic(1000, 200, 1000, 150)
two_sided_p_value(z)
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
Bayesian Inference In Bayesian Inference we start with a prior distribution for the parameters and then use the actual observations to get the posterior distribution of the same parameters. So instead of judging the probability of hypotheses, we make probability judgments about the parameters. We will use the Beta distribution to convert all probabilities to a value between 0 and 1.
def B(alpha, beta):
    """a normalizing constant so that the total probability is 1"""
    return math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)

def beta_pdf(x, alpha, beta):
    if x < 0 or x > 1:  # no weight outside of [0, 1]
        return 0
    return x ** (alpha - 1) * (1 - x) ** (beta - 1) / B(alpha, beta)
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
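To illustrate the update step (an assumed example, not from the original text): Beta is the conjugate prior for the Binomial, so the posterior is obtained by simple arithmetic rather than integration. With a Beta(alpha, beta) prior and h heads / t tails observed, the posterior is Beta(alpha + h, beta + t).

```python
prior_alpha, prior_beta = 1, 1   # Beta(1, 1) is the uniform prior over p
heads, tails = 540, 460          # the 540-heads observation from earlier

post_alpha = prior_alpha + heads
post_beta = prior_beta + tails

# the mean of a Beta(alpha, beta) distribution is alpha / (alpha + beta)
posterior_mean = post_alpha / (post_alpha + post_beta)
print(post_alpha, post_beta, posterior_mean)  # 541 461 and a mean near 0.54
```

The posterior mean lands close to the observed frequency 0.54, nudged slightly toward the prior.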
import data and drop NAs, calculate metascore/10 and rating*10
imdb = pd.read_csv("C:\\Users\\Adam\\Google Drive\\School\\ComputerScience\\intro to data science\\rotten_needles\\data\\datasets\\movies_dataset.csv")
#imdb = imdb.dropna()
imdb = imdb.assign(rating10=(imdb['rating']*10))
imdb = imdb.assign(metascore10=(imdb['metascore']/10))
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
create movie profit score column
imdb = imdb.assign(score1=100*(imdb.gross_income-imdb.budget)/imdb.budget)
imdb = imdb.assign(score2=(imdb['gross_income']-imdb['budget']))  # best score measure
imdb = imdb.assign(score3=np.log(imdb['gross_income'])/np.log(imdb['budget']))

# imdb[['score2', 'name','rating','metascore']].sort_values('score2',ascending=0)
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Figure shows scatter of gross income against meta score and imdb rating
plt.figure()
imdb_temp = imdb
imdb_temp['scaled_gross_income'] = np.log(imdb['gross_income'])  # / 1000000
sns.regplot(x = imdb['rating']*10, y = 'scaled_gross_income', data = imdb_temp, color = 'yellow')
sns.regplot(x = imdb['metascore'], y = 'scaled_gross_income', data = imdb_temp, color = 'Green')
sns.plt.title("Gross Income against MetaScore \ IMDB Rating - Scatter")
sns.plt.xlabel("IMDB Rating, Metascore")
sns.plt.ylabel("Log of Gross Income")
# legend_patches = matplotlib.patches.Patch(color='green', label='label')
# Plot the legend
sns.plt.legend(['IMDB Ratings', 'Metascore'])

# imdb.isnull().sum()
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Figure shows distribution of Movie Ratings
plt.figure()
sns.countplot(x = 'rating', data = imdb)
plt.xticks(rotation=60)
sns.plt.title("Distribution of Movie Ratings")
sns.plt.xlabel("Movie Rating")
sns.plt.ylabel("Count of Ratings")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Distribution of ratings by Genres
temp = pd.DataFrame(
    data = {
        'type': [i for i in range(1,11) for genre in imdb.columns if 'genre' in genre],
        'votes': [imdb[imdb[genre] == 1]['rating_freq.1'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.2'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.3'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.4'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.5'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.6'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.7'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.8'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.9'].mean() for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['rating_freq.10'].mean() for genre in imdb.columns if 'genre' in genre]
    },
    index= [genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]*10
)
plt.figure()
sns.barplot(x = temp.index , y = 'votes', hue = 'type', data = temp)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Distribution of Ratings by Genres")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Number of Votes")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
scattering stuff
# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.aged_under_18', y = 'avg_rating_per_demo.aged_45+', data = imdb, color = 'red')

# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.aged_18-29', y = 'avg_rating_per_demo.aged_45+', data = imdb, color = 'green')

# imdb.plot(kind='scatter', x='rating', y='avg_rating_per_demo.us_users');
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Figure shows high correlation between opening weekend incomes and Total weekend
plt.figure()
sns.regplot(x = 'opening_weekend_income', y = 'gross_income', data=imdb, color='seagreen')
sns.plt.title("Opening Weekend Incomes vs Total Incomes")
sns.plt.xlabel("Opening Weekend")
sns.plt.ylabel("Total")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
correlations
# imdb[['metascore','critic_review_count','rating','rating_count','gross_income','rating_freq.3','rating_freq.4','rating_freq.5','rating_freq.6',
#       'rating_freq.7','rating_freq.8','rating_freq.9','score2']].corr()

# imdb[['avg_rating_per_demo.males','avg_rating_per_demo.females']].corr()
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows how different age groups tend to vote the same, the diagonal shows the rating distribution of each age group
from pandas.tools.plotting import scatter_matrix

temp = imdb[['avg_rating_per_demo.aged_under_18','avg_rating_per_demo.aged_18-29',
             'avg_rating_per_demo.aged_30-44','avg_rating_per_demo.aged_45+']]
temp.columns = ['-18','18-29','30-44','45+']
scatter_matrix(temp, alpha=0.2, figsize=(6,6))
plt.suptitle('Rating Scatter over Different Age Groups')
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows that above 400K voters the average rating is always greater than 7 - people tend to rate when they like a movie
plt.figure()
sns.regplot(x = 'rating_count', y = 'rating', data=imdb, color='seagreen')
sns.plt.title("IMDB Rating vs Number of Votes")
sns.plt.xlabel("Number of Votes")
sns.plt.ylabel("IMDB Rating")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows the difference between the number of votes cast by males and females over different genres
temp = pd.DataFrame(
    data={
        'sex': ['Male' for genre in imdb.columns if 'genre' in genre] +
               ['Female' for genre in imdb.columns if 'genre' in genre],
        'score': [imdb[imdb[genre] == 1]['votes_per_demo.males'].mean()
                  for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['votes_per_demo.females'].mean()
                  for genre in imdb.columns if 'genre' in genre]
    },
    index= [genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre] +
           [genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp.index , y = 'score', hue = 'sex', data = temp)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Number of Votes, Difference between Male and Female")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Number of Votes")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows the similarity of male and female average scores over different genres - women are more generous with their ratings!
temp1 = pd.DataFrame(
    data={
        'sex': ['Male' for genre in imdb.columns if 'genre' in genre] +
               ['Female' for genre in imdb.columns if 'genre' in genre],
        'score': [imdb[imdb[genre] == 1]['avg_rating_per_demo.males'].mean()
                  for genre in imdb.columns if 'genre' in genre] +
                 [imdb[imdb[genre] == 1]['avg_rating_per_demo.females'].mean()
                  for genre in imdb.columns if 'genre' in genre]
    },
    index= [genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre] +
           [genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp1.index , y = 'score', hue = 'sex', data = temp1)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Average Ratings, Difference between Male and Female")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Average Rating")

# plt.figure()
# plt.ylim([0,10])
# plt.xlim([0,10])
# sns.regplot(x ='avg_rating_per_demo.males', y = 'avg_rating_per_demo.females', data = imdb, color = 'red')
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows retrun on investment (gross income divided by budget)
temp2 = pd.DataFrame(
    data={
        'score': [imdb[imdb[genre] == 1]['score1'].mean()
                  for genre in imdb.columns if 'genre' in genre]
    },
    index= [genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre]
)
plt.figure()
sns.barplot(x = temp2.index , y = 'score', data = temp2)
plt.xticks(rotation=45, ha='right')
sns.plt.title("Return on Investment by Genre")
sns.plt.xlabel("Genres")
sns.plt.ylabel("Roi %")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
To print part of a text on a new line, the \n escape sequence is used.
print("Imagine all the people \nliving life in peace... \nJohn Lennon")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
To tabulate part of a text forward, the \t escape sequence is used.
print("Imagine all the people \nliving life in peace... \n\tJohn Lennon")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
User inputs

To automatically save all user input as one single string, the raw_input() function is used.
collection = raw_input("Input some numbers separated by a comma: ")
print(collection)
type(collection)
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
To convert the comma-separated string (as the one above, inputted by the user) into a list of elements, split() function is used.
# split function takes one argument: the character to split the string on (comma in our case).
our_list = collection.split(",")
print(our_list)
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Functions def is used to define a function. A function should either return some value, or print it.
def adjectivizer(noun):
    return noun+"ly"

adjectivizer("man")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
The function below is an upgrade of the function above. It first checks the length of the function argument: if it is more than 3 characters, it adjectivizes the noun; otherwise it adds "super" in front.
def superly(noun):
    if len(noun)>3:
        return noun+"ly"
    else:
        return "super"+noun

superly("cat")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
The function below is the upgrade of above function. It checks one more condition: adjectivizes if more than 4 letters, leaves the same in case of four (note the double equal sign) and adds the "super" in front in other cases.
def superly4(noun):
    if len(noun)>4:
        return noun+"ly"
    elif len(noun)==4:
        return noun
    else:
        return "super"+noun

superly4("late")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Same function as above, just with one more condition checked (==5).
def superly5(noun):
    # the equality checks must come before the ">4" check: otherwise a
    # 5-letter noun is caught by len(noun)>4 and the ==5 branch is unreachable
    if len(noun)==4:
        return noun
    elif len(noun)==5:
        return "super"+noun
    elif len(noun)>4:
        return noun+"ly"
    else:
        return "too short noun"

superly5("truth")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Currency converter

The function below asks the user to input some currency (a \$ or e sign followed by an amount of money) and converts it to Armenian drams. These are the steps taken:

The input is saved as one single string.
The very first character of the string is checked to determine the currency.
If it is "\$", the function takes the amount the user entered (everything but the very first character, which was the "\$" sign), converts it to an integer (using the int() function), multiplies it by 484 (the exchange rate) to get the value in drams, converts the result back to a string (using the str() function) and appends " AMD" to that string.
If it is "e", the function does the same with the exchange rate 535.
If the very first character is neither "\$" nor "e", the function returns a notification asking the user to type "\$" or "e" in front.
Then the result is printed.
amount = raw_input("Please, input currency followed by amount ")

def c_converter(amount):
    if amount[0]=="$":
        return str(484*int(amount[1:]))+" AMD"
    elif amount[0]=="e":
        return str(535*int(amount[1:])) + " AMD"
    else:
        return "Please, use $ or e sign in front"

print c_converter(amount)
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Multiple conditions

Given the numbers between 7 (included) and 1700 (not included), let's choose those that can be divided by 5. For that purpose the % operator (known as "modulus") is used. The % operator yields the remainder after division. For example, 5%2 results in 1 and 6%4 results in 2. To check whether a number can be divided by 5, we must check whether number%5 is 0. If we want to save the resulting values somewhere, let's first create an empty list, to which the numbers divisible by 5 will then be appended.
# create an empty list
our_list=[]
# iterate over all numbers in the given range
for number in range(7,1700):
    # if the remainder is 0
    if number%5==0:
        # append/add that number to our list
        our_list.append(number)
# let's print our list
print our_list
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Above are all the numbers that can be divided by 5 without a remainder. Let's now choose a smaller set of numbers: those that can be divided by both 5 and 7 without a remainder.
our_list=[]
for number in range(7,1700):
    if number%5==0 and number%7==0:
        our_list.append(number)

print our_list
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
As you noticed above, in order to add one more condition inside the if statement, one just needs to use and in between. Similarly, or can be used to check whether at least one of the conditions is satisfied.
our_list = []
for number in range(7, 1700):
    if number % 5 == 0 or number % 7 == 0:
        our_list.append(number)
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Maximum function Let's define a function that takes two numbers as arguments and returns the larger of them.
def max_of_2(x, y):
    if x > y:
        return x
    else:
        return y

print(max_of_2(15, 12))
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Let's now define a max_of_3 function. As we already have max_of_2, we just need to nest them inside one another as shown below:
def max_of_3(x, y, z):
    return max_of_2(z, max_of_2(x, y))

max_of_3(10, 15, 33)
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
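For reference, Python has a built-in max() that already accepts any number of arguments (or a single iterable), so in practice the helpers above are mainly a learning exercise:

```python
# The built-in max() takes two or more arguments, or a single iterable
print(max(10, 15, 33))    # → 33
print(max([10, 15, 33]))  # → 33
```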
Text reverser Let's define a function that will reverse a text. As we want to save the reversed text as a string, let's first create an empty string. Then each letter, one by one, starting from the end, will be added to that empty string.
def reverser(text):
    r_text = ""
    i = len(text)
    while i > 0:
        r_text = r_text + text[i - 1]
        i = i - 1
    return r_text

print(reverser("Globalization"))
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
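As a side note, the same reversal can be written in one line with Python's slice syntax: a step of -1 walks the string from the end to the start (an equivalent sketch of the loop above):

```python
# Slicing with a negative step reverses the string in one expression
def reverser(text):
    return text[::-1]

print(reverser("Globalization"))  # → noitazilabolG
```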
Finally, yet importantly, let's read the Zen of Python (Python's philosophy).
import this
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation: Calculate Statistics For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model. In the code cell below, you will need to implement the following: - Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices. - Store each calculation in their respective variable.
# Price data as a NumPy array ('as_matrix' is deprecated; 'values' is the modern equivalent)
prices_a = prices.values

# TODO: Minimum price of the data
minimum_price = np.amin(prices_a)

# TODO: Maximum price of the data
maximum_price = np.amax(prices_a)

# TODO: Mean price of the data
mean_price = np.mean(prices_a)

# TODO: Median price of the data
median_price = np.median(prices_a)

# TODO: Standard deviation of prices of the data
std_price = np.std(prices_a)

# Show the calculated statistics
print("Statistics for Boston housing dataset:\n")
print("Minimum price: ${:,.2f}".format(minimum_price))
print("Maximum price: ${:,.2f}".format(maximum_price))
print("Mean price: ${:,.2f}".format(mean_price))
print("Median price: ${:,.2f}".format(median_price))
print("Standard deviation of prices: ${:,.2f}".format(std_price))
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor). - 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood. Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each. Hint: Would you expect a home that has an 'RM' value of 6 to be worth more or less than a home that has an 'RM' value of 7? Answer: RM should increase MEDV as it increases (directly proportional), because it relates to the house size (more rooms). LSTAT should be inversely proportional to MEDV, because with more "lower class" residents in the neighborhood the houses will probably be simpler (not as well finished) and face more security problems, hence a lower price. PTRATIO is harder to guess: more students per teacher could indicate a family area, which is good, but we would need to analyze LSTAT in conjunction; likewise, more teachers per student could mean a quieter region (good) or a non-residential area (not good). I think its correlation with the other features gives us a better hint. Developing a Model In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance Metric It is difficult to measure the quality of a given model without quantifying its performance over training and testing. 
This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable. For the performance_metric function in the code cell below, you will need to implement the following: - Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict. - Assign the performance score to the score variable.
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score

def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between
        true and predicted values based on the metric chosen. """
    # TODO: Calculate the performance score between 'y_true' and 'y_predict'
    score = r2_score(y_true, y_predict)

    # Return the score
    return score
boston_housing.ipynb
rodrigomas/boston_housing
mit
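To make the definition of R<sup>2</sup> concrete, the score can also be computed by hand as 1 - SS<sub>res</sub>/SS<sub>tot</sub> and checked against sklearn's r2_score (a sketch on toy values, not part of the original project code):

```python
import numpy as np
from sklearn.metrics import r2_score

def r2_by_hand(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1 - ss_res / ss_tot

y_true = [3.0, -0.5, 2.0, 7.0, 4.2]
y_pred = [2.5, 0.0, 2.1, 7.8, 5.3]
print(round(r2_by_hand(y_true, y_pred), 3))  # → 0.923
print(round(r2_score(y_true, y_pred), 3))    # → 0.923
```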
Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable: | True Value | Prediction | | :-------------: | :--------: | | 3.0 | 2.5 | | -0.5 | 0.0 | | 2.0 | 2.1 | | 7.0 | 7.8 | | 4.2 | 5.3 | Would you consider this model to have successfully captured the variation of the target variable? Why or why not? Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
boston_housing.ipynb
rodrigomas/boston_housing
mit
Answer: The estimator got an excellent R<sup>2</sup> value: 0.923 is very close to 1, so it predicts the model very well. But I think it might not capture the variation of a real example, because it has only 5 data points, and that is too few. However, it gives a good prediction for the given data, and in that sense it captures the variation. Implementation: Shuffle and Split Data Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset. For the code cell below, you will need to implement the following: - Use train_test_split from sklearn.model_selection to shuffle and split the features and prices data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent. - Assign the train and testing splits to X_train, X_test, y_train, and y_test.
# TODO: Import 'train_test_split'
from sklearn.model_selection import train_test_split

# Convert the pandas objects to NumPy arrays ('as_matrix' is deprecated)
prices_a = prices.values
features_a = features.values

# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features_a, prices_a,
                                                    test_size=0.2, random_state=42)

# Success
print("Training and testing split was successful.")
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: If you don't have a test subset, you cannot estimate whether your predictor is good enough for "new data" (data that is not in the training set), so you will probably overfit the model and will not be able to verify it until new data arrives. The predictor will probably work great on the training set, but it may not be a good estimator for the real data. It will also be hard to analyze the predictor's efficiency. Analyzing Model Performance In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning Curves The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. Run the code cell below and use these graphs to answer the following question.
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to particular scores? Answer: Analysing max_depth = 10: with more training points we improve our model, but we can notice that at some point the score no longer increases. As we add more training points the test results get better, because the model starts to generalize, but the testing curve also converges, so beyond some point additional training points do not affect the test score. Until the model reaches that stagnation point the added points were useful, but after that they could not improve it. Complexity Curves The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function. Run the code cell below and use this graph to answer the following two questions.
vs.ModelComplexity(X_train, y_train)
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering from high bias or high variance? Answer: If the model uses a maximum depth of 1, it will be oversimplified and will suffer from high bias. In the figure above, the performance (R<sup>2</sup>) is low even for the training score, which means the model is not capturing the correlation between the features and the target. If it uses a maximum depth of 10, it is probably overfitting (not generalizing well), so it suffers from high variance. This is visible because the error on the validation set is much higher (lower R<sup>2</sup>) than on the training set: the model starts to exhibit behavior specific to the training set rather than generalized behavior. Question 6 - Best-Guess Optimal Model Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition led you to this answer? Answer: We can use 3 or 4 as the maximum depth (I would go with 3). The validation score has an inflection at 4, after which it starts to decrease (the tangential direction changes). Evaluating Model Performance In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model. Question 7 - Grid Search What is the grid search technique and how can it be applied to optimize a learning algorithm? Answer: It is a systematic way to optimize an algorithm's parameter (or parameters). Grid search tries every combination of the parameter values we want to test and selects the optimal ones, outputting the combination that achieves the best performance. The parameters, and which ranges or values to try, can be specified manually. The result also depends on the training set.
Question 8 - Cross-Validation What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model? Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set? Answer: The k-fold cross-validation technique splits the data into k equally sized subsamples. Then, k times, one subsample is selected for validation and the remaining k - 1 are used for training. After all iterations, k-fold computes the average error. This generalizes the model better and gives a more accurate estimate, because the model is trained and tested against all the data and it "matters less how the data gets divided": each data value is used k - 1 times for training and one time for validation. We can also use the multiple training rounds for optimization, as we have more sets to test against. A simple train/test split does not give grid search a good way to evaluate the best parameter, because the goal is only to get a good result on one particular split; with k-fold the estimator must perform well across much more train/test variation, which forces it to be more generalized. Implementation: Fitting a Model Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms. In addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). 
While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful! The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique. For the fit_model function in the code cell below, you will need to implement the following: - Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object. - Assign this object to the 'regressor' variable. - Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable. - Use make_scorer from sklearn.metrics to create a scoring function object. - Pass the performance_metric function as a parameter to the object. - Assign this scoring function to the 'scoring_fnc' variable. - Use GridSearchCV from sklearn.model_selection to create a grid search object. - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object. - Assign the GridSearchCV object to the 'grid' variable.
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
# (sklearn.grid_search and sklearn.cross_validation were removed in modern
# scikit-learn; sklearn.model_selection provides the same classes)
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV, ShuffleSplit

def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
        decision tree regressor trained on the input data [X, y]. """

    # Create cross-validation sets from the training data
    cv_sets = ShuffleSplit(n_splits=10, test_size=0.20, random_state=0)

    # TODO: Create a decision tree regressor object
    regressor = DecisionTreeRegressor(random_state=0)

    # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': list(range(1, 11))}

    # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric, greater_is_better=True)

    # TODO: Create the grid search object
    grid = GridSearchCV(regressor, param_grid=params, scoring=scoring_fnc, cv=cv_sets)

    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)

    # Return the optimal model after fitting the data
    return grid.best_estimator_
boston_housing.ipynb
rodrigomas/boston_housing
mit
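The k-fold procedure described in Question 8 can be sketched directly with scikit-learn's KFold: each of the k iterations holds out a different fold for validation and trains on the remaining k - 1 (a minimal illustration on toy indices, not part of the project code):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10).reshape(-1, 1)  # ten toy samples
kf = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    # every sample appears in exactly one validation fold
    print("fold", fold, "train:", train_idx, "validate:", val_idx)
```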