TRY IT
Don't try it. Infinite loops aren't good code.

Lists
A list is a sequence of values. These values can be anything: strings, numbers, booleans, even other lists. To make a list, you put the items, separated by commas, between brackets [ ]

weathers = ['rain', 'sun', 'snow']
tens = [10, 10, 10, 10]
print(weathers)
print(tens)
Lesson04_Iteration/Iterations.ipynb
WomensCodingCircle/CodingCirclePython
mit
You can access a single element in a list by indexing into it using brackets. List indexing starts at 0, so to get the first element you use 0, the second element is 1, and so on. list[index]

print(weathers[0])
print(weathers[2])
You can find the length of a list using the built-in function 'len'. len(list)
print(len(tens))
TRY IT
Create a list of famous storms and store it in a variable called storms.

Break and Continue
There are other ways to control a loop besides its condition. The keyword break immediately exits a loop, so with break you can write a loop whose condition never becomes False yet is not an 'infinite loop'. Continue stops tha...

while True:
    user_message = input('> ')
    # Leave when the user talks about snow.
    if user_message == 'snow':
        print("I don't want to hear about that.")
        break
    print("and then what happened?")

stock_up = ['eggs', 'milk', 'bread', 'puppies', 'cereal', 'toilet paper']
index = 0
while inde...
For loops
For loops are a faster way of writing a common loop pattern. Many loops go through each element in a list. Python makes this easy:

for item in list:
    run code for item

# Let's rewrite the stock up loop as a for loop (we'll add the puppies in later)
stock_up = ['eggs', 'milk', 'bread', 'puppies', 'cereal', 'toilet paper']
for item in stock_up:
    print("Storm is coming, better stock up on " + item)
See how much shorter it is? If you can use a for loop, then do use a for loop. Also, since we don't have to increment a counter, there is less of a chance of an infinite loop.

# Let's add the puppies condition just to show that break and continue work with for loops too
stock_up = ['eggs', 'milk', 'bread', 'puppies', 'cereal', 'toilet paper']
for item in stock_up:
    if item == 'puppies':
        print("No! We are not getting another puppy")
        continue
    print("Storm is coming, bett...
We can use loops to sum, average, or find the max or min. I'll give you some examples.

hourly_snow_fall = [4, 3, 2, 4, 1, 4, 3, 2, 1, 1, 1, 0]

# Finding the sum
total_snow = 0
for hour in hourly_snow_fall:
    total_snow += hour
print("Total snow " + str(total_snow))

# Finding the average
total_snow = 0
number_snowy_hours = 0
for hour in hourly_snow_fall:
    if hour == 0:  # Ignore hours that...
TRY IT
Write a loop that finds the maximum word length in a list of words (provided).
words = ['Oh', 'the', 'weather', 'outside', 'is', 'frightful', 'But', 'the', 'fire', 'is', 'so', 'delightful']
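One possible solution sketch for the TRY IT above (the variable name max_length is my own choice):

```python
words = ['Oh', 'the', 'weather', 'outside', 'is', 'frightful',
         'But', 'the', 'fire', 'is', 'so', 'delightful']

# Keep track of the longest length seen so far
max_length = 0
for word in words:
    if len(word) > max_length:
        max_length = len(word)

print(max_length)  # 10 ('delightful')
```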
By the way, at this point you could go back to the previous lesson's project, Yahtzee, and use a loop to play until you get a yahtzee.

Range
Python has a built-in function called range() that returns a sequence of integers in order.

# If you only include 1 parameter it starts at zero and goes to that number-1
range(5) ...

cumulative_snowfall = list(range(1, 15))
print(cumulative_snowfall)
You can use range as the iterable in a for loop instead of using a while loop with a counting variable if you just want to iterate over integers:

for i in range(1, 6):
    print(i)

for inches in range(1, 7):
    suffix = " inch"
    if inches == 1:
        suffix += "."
    else:
        suffix += "es."
    print("It has snowed " + str(inches) + suffix)
Filter hits
filt_hits = all_hmmer_hits[
    (all_hmmer_hits.e_value < 1e-3) &
    (all_hmmer_hits.best_dmn_e_value < 1e-3)
]
filt_hits.to_csv("1_out/filtered_hmmer_all_hits.csv", index=False)
print(filt_hits.shape)
filt_hits.head()
orfan_2016_annotation/20161129_summarize_results/1_filter_hmmer_hits.ipynb
maubarsom/ORFan-proteins
mit
Keep the best hit per database for each cluster, filtered by e-value < 1e-3 and best domain e-value < 1e-3.

gb = filt_hits.groupby(["cluster", "db"])
reliable_fam_hits = pd.DataFrame(
    hits.loc[hits.bitscore.idxmax()] for _, hits in gb
)[["cluster", "db", "tool", "query_id", "subject_id",
   "bitscore", "e_value", "s_description", "best_dmn_e_value"]]
sort...
7. Out-of-core computation Here's a quick demo of how xarray can leverage dask to work with data that doesn't fit in memory. This lets xarray substitute for tools like cdo and nco. Let's open 10 years of runoff data. xarray can open multiple files at once using string pattern matching. In this case we open all the file...

from glob import glob

files = glob('data/*dis*.nc')
runoff = xr.open_mfdataset(files)
runoff
xarray-tutorial-egu2017.ipynb
iiasa/xarray_tutorial
bsd-3-clause
We start by defining a series of helper functions which we will use in creating the plot below.

def truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100):
    """
    Return a new colormap obtained from `cmap` by extracting the
    slice between `minval` and `maxval` (using `n` values).
    """
    new_cmap = colors.LinearSegmentedColormap.from_list(
        'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval,...
notebooks/fig_2_dipole_field_visualisation.ipynb
maxalbert/paper-supplement-nanoparticle-sensing
mit
Finally we can plot the actual figure.
# Position of dipole (shown as cyan-coloured particle below)
dipole_x, dipole_y = 0, 10

xmin, xmax = -80, 80
ymin, ymax = -82, 25
xmin_fld, xmax_fld = -75, 75
ymin_fld, ymax_fld = -50, -5
nx_fld, ny_fld = 40, 16

plt.style.use('style_sheets/fig2.mplstyle')
fig, ax = plt.subplots(figsize=(12, 7))

# Tweak appearance ...
1. Specify a pure substance without specifying a temperature. First, the file containing the constants of the thermodynamic property correlations, in this case called "PureFull_mod_properties.xls", is loaded and assigned to the variable dppr_file. We create an object called thermodynamic_correlation...

prueba_1 = pt.Thermodynamic_correlations()
component = ['METHANE']
property_thermodynamics = "Vapour_Pressure"
Vapour_Pressure = prueba_1.property_cal(component, property_thermodynamics)
print("Vapour Pressure = {0}".format(Vapour_Pressure))

prueba_2 = pt.Thermodynamic_correlations()
component = ['ETHANE']
prop...
thermodynamic_correlations.ipynb
pysg/pyther
mit
To draw a simple plot of the thermodynamic property, the method graphical(temperature, property_thermodynamics, label_property_thermodynamics, units) is used. It takes as arguments the temperature at which the thermodynamic property was calculated, the calculated values of the thermodynamic property...

temperature_vapour = prueba_1.temperature
units = prueba_1.units
print(units)
thermodynamic_correlations.graphical(temperature_vapour, Vapour_Pressure, property_thermodynamics, units)
Working with directories

import os

os.mkdir("/tmp/park-python")
try:
    os.rmdir("/tmp/park-python")
except IOError as err:
    print(err)

path = "/tmp/park-python/lectures/04"
if not os.path.exists(path):
    os.makedirs(path)
os.rmdir("/tmp/park-python")

import shutil
shutil.rmtree("/tmp/park-python")

import pprint
pprint.pprint(list(...
lectures/04_OS_Debug_Tests_Threads_Async/notebook.ipynb
park-python/course
bsd-3-clause
Working with files

# open a file descriptor for writing
f = open("/tmp/example.txt", "w")
# write the contents
f.write("Технопарк\n")
# always close the file
f.close()

# open a file descriptor for reading
f = open("/tmp/example.txt", "r")
# read the contents in full
data = f.read()
# always close the file!
f.close()
pri...
"r" – открытие на чтение (является значением по умолчанию). "w" – открытие на запись, содержимое файла удаляется, если файла не существует, создается новый. "x" – открытие на запись, если файла не существует, иначе исключение. "a" – открытие на дозапись, информация добавляется в конец файла. "b" – открытие в двоичном р...
# используя context-manager with open("/tmp/example.txt", "a") as f: f.write("МГТУ\n") with open("/tmp/example.txt", "r") as f: print(f.readlines()) # читаем файл по строке, не загружая его полность в память with open("/tmp/example.txt", "r") as f: for line in f: print(repr(line)) # Чтобы провери...
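The "x" (exclusive creation) mode listed above can be demonstrated like this (a minimal sketch; the path is an arbitrary demo path of my choosing):

```python
import os

path = "/tmp/example_x_mode.txt"  # arbitrary demo path
if os.path.exists(path):
    os.remove(path)

# "x" creates the file when it does not exist yet
with open(path, "x") as f:
    f.write("first\n")

# a second "x" open of the same path raises FileExistsError
exclusive_failed = False
try:
    with open(path, "x") as f:
        f.write("second\n")
except FileExistsError:
    exclusive_failed = True

print(exclusive_failed)  # True
os.remove(path)
```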
stdin, stdout, stderr
By default, Unix shells associate file descriptor 0 with the process's standard input stream (stdin), file descriptor 1 with the standard output stream (stdout), and file descriptor 2 with the diagnostic stream (stderr, where error messages are usually written).

import sys

print(sys.stdin)
print(sys.stdout)
print(sys.stderr)
print(sys.stdin.fileno())
print(sys.stdout.fileno())
The stdout and stderr descriptors are redefined in Jupyter notebook. Let's see where they lead:
sys.stdout.write("where am I")
And they lead right into this notebook :)

Debugging
Demonstrated using the file debugging.py. Useful debugging libraries: https://github.com/vinta/awesome-python#debugging-tools

Testing
Why?

def get_max_length_word(sentence):
    longest_word = None
    words = sentence.split()
    for word in words:
        if not longest_word or len(word) > len(longest_word):
            longest_word = word
    return longest_word
What can go wrong? Pretty much anything: a SyntaxError; a logic error; a backwards-incompatible new version of a library you depend on; ... Six months after launching an application without tests, making changes to a large codebase is terrifying! Some people even use TDD - Test-Driven Development. unittest

import unittest

class LongestWordTestCase(unittest.TestCase):
    def test_sentences(self):
        sentences = [
            ["Beautiful is better than ugly.", "Beautiful"],
            ["Complex is better than complicated.", "complicated"]
        ]
        for sentence, correct_word in sentences:
            s...
<table border="1" class="docutils" align="left"> <colgroup> <col width="48%"> <col width="34%"> <col width="18%"> </colgroup> <thead valign="bottom"> <tr class="row-odd"><th class="head">Method</th> <th class="head">Checks that</th> </tr> </thead> <tbody valign="top"> <tr class="row-even"><td>assertEqual(a, b)</td> <td...
class BoomException(Exception):
    pass

class Material:
    def __init__(self, name, reacts_with=None):
        self.name = name
        self.reacts_with = reacts_with or []

    def __repr__(self):
        return self.name

class Alchemy:
    def __init__(self):
        self.materials = []

    def add(self, mat...
unittest.mock
How do you test code that makes external calls: reading a file, requesting the contents of a URL?

import requests

def get_location_city(ip):
    data = requests.get("https://freegeoip.net/json/{ip}".format(ip=ip)).json()
    return data["city"]

def get_ip():
    data = requests.get("https://httpbin.org/ip").json()
    return data["origin"]

get_location_city(get_ip())
First, let's look at what monkey patching is.

import math

def fake_sqrt(num):
    return 42

original_sqrt = math.sqrt
math.sqrt = fake_sqrt

# call the function we patched
print(math.sqrt(16))

math.sqrt = original_sqrt
math.sqrt(16)

import unittest
from unittest.mock import patch, Mock

class FakeIPResponse:
    def json(self):
        return {"origi...
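unittest.mock.patch automates the save/replace/restore dance from the math.sqrt example above. A minimal self-contained sketch (not the lecture's exact test code):

```python
import math
from unittest.mock import patch

# patch replaces math.sqrt with a Mock inside the with-block
# and restores the original when the block exits
with patch("math.sqrt", return_value=42) as fake_sqrt:
    patched = math.sqrt(16)
    fake_sqrt.assert_called_once_with(16)

restored = math.sqrt(16)
print(patched, restored)  # 42 4.0
```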
https://docs.python.org/3/library/unittest.mock.html The coverage library lets you measure how much of your code is covered by tests. Besides unit testing there are many other kinds of tests: * Integration tests (how different components interact with each other) * Functional tests (e.g. Selenium) * Performance te...

STEPS = 50000000

# A simple program that adds up numbers.
def worker(steps):
    count = 0
    for i in range(steps):
        count += 1
    return count

%timeit -n1 -r1 worker(STEPS)
print("Remind the instructor to show Activity Monitor")
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/g3.png)</div> What is a thread? Multithreading is a natural extension of multitasking. Each process can run in several threads. The program above ran in a single process, in the main thread. <div style="float:left;margin...

import threading
import queue

result_queue = queue.Queue()
STEPS = 50000000
NUM_THREADS = 2

def worker(steps):
    count = 0
    for i in range(steps):
        count += 1
    result_queue.put(count)

def get_count_threaded():
    count = 0
    threads = []
    for i in range(NUM_THREADS):
        t = threading...
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/g4.png)</div> GIL https://jeffknupp.com/blog/2012/03/31/pythons-hardest-problem/ OK, so is there really no way out? There is - multiprocessing Multiprocessing

import multiprocessing

NUM_PROCESSES = 2
STEPS = 50000000
result_queue = multiprocessing.Queue()

def worker(steps):
    count = 0
    for i in range(steps):
        count += 1
    result_queue.put(count)

def get_count_in_processes():
    count = 0
    processes = []
    for i in range(NUM_PROCESSES):
        ...
So why do we need threads at all? Because not all tasks are CPU-bound. There are also IO-bound tasks, which parallelize very well. Who can give an example? As an example, let's start an HTTP server on port 8000 (server.go). It serves a small piece of text at http://localhost:8000. Our task is to downloa...

import requests

STEPS = 100

def download():
    requests.get("http://127.0.0.1:8000").text

# A simple program that downloads the contents of a URL.
# A typical IO-bound task.
def worker(steps):
    for i in range(steps):
        download()

%timeit -n1 -r1 worker(STEPS)
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/g1.png)</div>
import threading

STEPS = 100
NUM_THREADS = 2

def worker(steps):
    count = 0
    for i in range(steps):
        download()

def run_worker_threaded():
    threads = []
    for i in range(NUM_THREADS):
        t = threading.Thread(target=worker, args=(STEPS//NUM_THREADS,))
        threads.append(t)
        t.sta...
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/g2.png)</div> Out of curiosity, let's try multiprocessing for this task:

import multiprocessing

NUM_PROCESSES = 2

def worker(steps):
    count = 0
    for i in range(steps):
        download()

def run_worker_in_processes():
    processes = []
    for i in range(NUM_PROCESSES):
        p = multiprocessing.Process(target=worker, args=(STEPS//NUM_PROCESSES,))
        processes.append(p...
As we can see, threads gave the best result (Macbook Pro 2016 - 64 threads). To simplify working with threads, Python has the concurrent.futures module: it provides two high-level objects, ThreadPoolExecutor and ProcessPoolExecutor.

import concurrent.futures
import requests

STEPS = 100

def download():
    return requests.get("http://127.0.0.1:8000").text

def run_in_executor():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=64)
    future_to_url = {executor.submit(download): i for i in range(STEPS)}
    for future in concurren...
ProcessPoolExecutor can be used the same way to move the work into a pool of processes.

The complexity of multithreaded applications

counter = 0

def worker(num):
    global counter
    for i in range(num):
        counter += 1

worker(1000000)
print(counter)

import threading

counter = 0

def worker(num):
    global counter
    for i in range(num):
        counter += 1

threads = []
for i in range(10):
    t = threading.Thread(target=worker, args...
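The lost-update race in the threaded counter above can be eliminated with a lock. A minimal sketch (smaller iteration counts than the lecture's, to keep it quick):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(num):
    global counter
    for i in range(num):
        # the lock makes the read-increment-write sequence atomic
        # with respect to the other threads
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 1000000
```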
Let's return to the URL-downloading example: can we do even better? Download pages even faster; use less memory by not spending it on creating threads; stop worrying about thread synchronization. Asynchronous programming Motivation - IO operations are very slow; we need to make the program do useful wor...

import time

def request(i):
    print(f"Sending request {i+1}")
    time.sleep(1)
    print(f"Got response from request {i+1}")
    print()

for i in range(5):
    request(i)
Each request takes 1 second, and we wait 5 seconds for 5 requests - yet we could have sent them one right after another, received all the results a second later, and processed them. The callback approach:

import time

def request(i):
    print("Sending request %d" % i)
    def on_data(data):
        print("Got response from request %d" % i)
    return on_data

callbacks = []
for i in range(5):
    cb = request(i)
    callbacks.append(cb)

time.sleep(1)
for cb in callbacks:
    cb("data")
Generators
A generator is a function that produces a sequence of values using the yield keyword. The simplest example:

def simple_gen():
    yield 1
    yield 2

gen = simple_gen()
print(next(gen))
print(next(gen))
print(next(gen))  # raises StopIteration

gen = simple_gen()
for i in gen:
    print(i)
The first advantage: you can get values without loading all the elements into memory. A prime example is range. A slightly more complex example (with state):

def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

gen = fib()
for i in range(6):
    print(next(gen))
The second advantage: you get each value as soon as it has been computed. Coroutines built on generators:

def coro():
    next_value = yield "Hello"
    yield next_value

c = coro()
print(next(c))
print(c.send("World"))
You can work with an infinite stream of data. Separate generators can exchange results as they become ready - that is, you can deal with several concurrent tasks, and those tasks do not have to depend on each other. For a deeper understanding and a look at other features - http://www.dab...

import time

def request(i):
    print("Sending request %d" % i)
    data = yield
    print("Got response from request %d" % i)

generators = []
for i in range(5):
    gen = request(i)
    generators.append(gen)
    next(gen)

time.sleep(1)
for gen in generators:
    try:
        gen.send("data")
    except StopItera...
The important point for this lecture is that the execution of a generator function in Python can be suspended, wait for the data it needs, and then resume from where it left off. The local execution context is preserved, and while we wait for the data the interpreter can do other useful work. Asyncio Asynchronous ...

import asyncio

loop = asyncio.get_event_loop()
loop.run_forever()  # blocks until loop.stop() is called
<div style="float:left;margin:0 10px 10px 0" markdown="1">![title](img/loop.jpg)</div>
import asyncio

def cb():
    print("callback called")
    loop.stop()

loop = asyncio.get_event_loop()
loop.call_later(delay=3, callback=cb)
print("start event loop")
loop.run_forever()
In Python 3.4, awaiting a coroutine's result was done with the yield from construct (https://www.python.org/dev/peps/pep-0380/), and coroutine functions were marked with the @asyncio.coroutine decorator.

import asyncio

@asyncio.coroutine
def return_after_delay():
    yield from asyncio.sleep(3)
    print("return called")

loop = asyncio.get_event_loop()
print("start event loop")
loop.run_until_complete(return_after_delay())
Version 3.5 introduced special keywords for programming in the asynchronous style: async and await. async is a keyword that marks a function as asynchronous (a coroutine). Such a function can suspend its execution at a certain point (on a blocking operation) and then, having waited...

import asyncio

async def return_after_delay():
    await asyncio.sleep(3)
    print("return called")

loop = asyncio.get_event_loop()
print("start event loop")
loop.run_until_complete(return_after_delay())
For the program to actually run asynchronously, you need to use the primitives provided by the asyncio library:

import asyncio

async def get_data():
    await asyncio.sleep(1)
    return "boom"

async def request(i):
    print(f"Sending request {i+1}")
    data = await get_data()
    print(f"Got response from request {i+1}: {data}")

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*[request(i) for i in ra...
Exceptions in coroutines work exactly the same way as in synchronous code:

import asyncio

async def get_data():
    await asyncio.sleep(1)
    raise ValueError

async def request(i):
    print("Sending request %d" % i)
    try:
        data = await get_data()
    except ValueError:
        print("Error in request %d" % i)
    else:
        print("Got response from request %d" % i)

loop = as...
Examples of other event loop implementations: Tornado IOLoop (https://github.com/tornadoweb/tornado) Twisted reactor (https://twistedmatrix.com/trac/) pyuv (https://github.com/saghul/pyuv) PyGame (http://pygame.org/hifi.html) ... We will return to asyncio in the lecture on the internet and client-server applications, includin...

import aiohttp
import asyncio

STEPS = 100

async def download(loop):
    async with aiohttp.ClientSession(loop=loop) as session:
        async with session.get("http://127.0.0.1:8000") as response:
            return await response.text()

async def worker(steps, loop):
    await asyncio.gather(*[download(loop) for x ...
# future example
import asyncio

async def slow_operation(future):
    try:
        await asyncio.wait_for(asyncio.sleep(1), 2)
    except asyncio.TimeoutError:
        future.set_exception(ValueError("Error sleeping"))
    else:
        future.set_result('Future is done!')

def got_result(future):
    if future.exce...
A final example (asyncio + multiprocessing)

# asyncio + multiprocessing
import aiohttp
import asyncio
import multiprocessing

NUM_PROCESSES = 2
STEPS = 100

async def download(loop):
    async with aiohttp.ClientSession(loop=loop) as session:
        async with session.get("http://127.0.0.1:8000") as response:
            return await response.text()

async def...
Subprocess. If there's time left over...

import subprocess
import os

result = subprocess.run(["ls", "-l", os.getcwd()], stdout=subprocess.PIPE)
print(result.stdout)

# using the shell
result = subprocess.run(
    "ls -l " + os.getcwd() + "|grep debug",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True
)
print(result.stdout)
Cycling, ladies and gentlemen, aka a slow spiral into insanity. $\epsilon$-perturbations Another, earlier (and nowadays less popular) method for avoiding degeneracy is to introduce $\epsilon$-perturbations into the problem. Recall that the standard system goes like \begin{align} \text{maximise} \quad & c^T x \\ \text...
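The original cell is truncated here; as usually presented, the perturbation replaces each right-hand side $b_i$ with $b_i + \epsilon^i$ for a small $\epsilon > 0$, so that ties in the ratio test become impossible (a sketch of the classical scheme, not necessarily the notation the author continues with):

```latex
\begin{align}
\text{maximise}   \quad & c^T x \\
\text{subject to} \quad & (Ax)_i \le b_i + \epsilon^i, \quad i = 1, \dots, m \\
                        & x \ge 0
\end{align}
```

Since each constraint receives a distinct power of $\epsilon$, no two basic feasible solutions coincide for sufficiently small $\epsilon$, which rules out degenerate pivots.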
c_1 = numpy.array([[   1.0,   0.0,  0.0, 1.0, 0.0, 0.0,     1.0, 's_1']])
c_2 = numpy.array([[  20.0,   1.0,  0.0, 0.0, 1.0, 0.0,   100.0, 's_2']])
c_3 = numpy.array([[ 200.0,  20.0,  1.0, 0.0, 0.0, 1.0, 10000.0, 's_3']])
z   = numpy.array([[-100.0, -10.0, -1.0, 0.0, 0.0, 0.0,     0.0, '']])
rows = numpy.concatenate(...
lpsm.ipynb
constellationcolon/simplexity
mit
Statistical Hypothesis Testing Classical testing involves a null hypothesis $H_0$ that represents the default and an alternative hypothesis $H_1$ to test. Statistics helps us to determine whether $H_0$ should be considered false or not. Example: Flipping A Coin Null hypothesis = the coin is fair = $p = 0.5$ Alternative...
def normal_approximation_to_binomial(n, p):
    """return mu and sigma corresponding to Binomial(n, p)"""
    mu = p * n
    sigma = math.sqrt(p * (1 - p) * n)
    return mu, sigma

normal_probability_below = normal_cdf

def normal_probability_above(lo, mu=0, sigma=1):
    return 1 - normal_cdf(lo, mu, sigma)

def norm...
Data Science From Scratch/7 - Hypothesis And Inference.ipynb
rvernagus/data-science-notebooks
mit
Now we flip the coin 1,000 times and see if our null hypothesis is true. If so, $X$ will be approximately normally distributed with a mean of 500 and a standard deviation of 15.8:
mu_0, sigma_0 = normal_approximation_to_binomial(1000, 0.5)
mu_0, sigma_0
A decision must be made with respect to significance, i.e., how willing are we to accept "false positives" (type 1 errors) by rejecting $H_0$ even though it is true? This is often set to 5% or 1%. We will use 5%:
normal_two_sided_bounds(0.95, mu_0, sigma_0)
If $H_0$ is true, $p$ should equal 0.5, and $X$ should fall inside the interval above 95% of the time. Now we can determine the power of a test, which depends on how willing we are to accept "false negatives" (type 2 errors): failing to reject $H_0$ even though it is false.

# 95% bounds based on assumption p is 0.5
lo, hi = normal_two_sided_bounds(0.95, mu_0, sigma_0)

# actual mu and sigma based on p = 0.55
mu_1, sigma_1 = normal_approximation_to_binomial(1000, 0.55)

# a type 2 error means we fail to reject the null hypothesis
# which will happen when X is still in our original interval...
Run a 5% significance test to find the cutoff below which 95% of the probability lies:
hi = normal_upper_bound(0.95, mu_0, sigma_0)
# is 526 (< 531, since we need more probability in the upper tail)

type_2_probability = normal_probability_below(hi, mu_1, sigma_1)
power = 1 - type_2_probability  # 0.936

def two_sided_p_value(x, mu=0, sigma=1):
    if x >= mu:
        # if x is greater than the mean, the...
Make sure your data is roughly normally distributed before using normal_probability_above to compute p-values. The annals of bad data science are filled with examples of people opining that the chance of some observed event occurring at random is one in a million, when what they really mean is “the chance, assuming the...
math.sqrt(p * (1 - p) / 1000)
Not knowing $p$ we use our estimate:
p_hat = 525 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)
sigma
Assuming a normal distribution, we conclude that we are 95% confident that the interval below includes the true $p$:
normal_two_sided_bounds(0.95, mu, sigma)
0.5 falls within our interval so we do not conclude that the coin is unfair. What if we observe 540 heads though?
p_hat = 540 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)  # 0.0158
normal_two_sided_bounds(0.95, mu, sigma)  # [0.5091, 0.5709]
In this scenario, 0.5 falls outside of our interval so the "fair coin" hypothesis is not confirmed. P-hacking P-hacking involves using various "hacks" to get a $p$ value to go below 0.05: creating a superfluous number of hypotheses, selectively removing outliers, etc.
def run_experiment():
    """flip a fair coin 1000 times, True = heads, False = tails"""
    return [random.random() < 0.5 for _ in range(1000)]

def reject_fairness(experiment):
    """using the 5% significance levels"""
    num_heads = len([flip for flip in experiment if flip])
    return num_heads < 469 or num_heads...
Valid inferences come from a priori hypotheses (hypotheses created before collecting any data) and data cleansing without reference to the hypotheses. Example: Running An A/B Test
def estimated_parameters(N, n):
    p = n / N
    sigma = math.sqrt(p * (1 - p) / N)
    return p, sigma

def a_b_test_statistic(N_A, n_A, N_B, n_B):
    p_A, sigma_A = estimated_parameters(N_A, n_A)
    p_B, sigma_B = estimated_parameters(N_B, n_B)
    return (p_B - p_A) / math.sqrt(sigma_A ** 2 + sigma_B ** 2)

z = a...
Bayesian Inference In Bayesian Inference we start with a prior distribution for the parameters and then use the actual observations to get the posterior distribution of the same parameters. So instead of judging the probability of hypotheses, we make probability judgments about the parameters. We will use the Beta dist...
def B(alpha, beta):
    """a normalizing constant so that the total probability is 1"""
    return math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)

def beta_pdf(x, alpha, beta):
    if x < 0 or x > 1:  # no weight outside of [0, 1]
        return 0
    return x ** (alpha - 1) * (1 - x) ** (beta - 1) / B(...
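The Beta prior described above is conjugate to coin-flip data: observing h heads and t tails turns a Beta(alpha, beta) prior into a Beta(alpha + h, beta + t) posterior. A quick sketch of that update (the numbers are an illustrative example of my own, not from the text):

```python
# start from a uniform prior Beta(1, 1) over the coin's heads probability
prior_alpha, prior_beta = 1, 1

# observe 3 heads and 1 tail: the conjugate update just adds the counts
heads, tails = 3, 1
posterior_alpha = prior_alpha + heads
posterior_beta = prior_beta + tails

# the mean of Beta(alpha, beta) is alpha / (alpha + beta)
posterior_mean = posterior_alpha / (posterior_alpha + posterior_beta)
print(posterior_alpha, posterior_beta, posterior_mean)  # 4 2 0.666...
```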
Import the data and drop NAs; calculate metascore/10 and rating*10.

imdb = pd.read_csv("C:\\Users\\Adam\\Google Drive\\School\\ComputerScience\\intro to data science\\rotten_needles\\data\\datasets\\movies_dataset.csv")
# imdb = imdb.dropna()
imdb = imdb.assign(rating10=(imdb['rating'] * 10))
imdb = imdb.assign(metascore10=(imdb['metascore'] / 10))
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
create movie profit score column
imdb = imdb.assign(score1=100 * (imdb.gross_income - imdb.budget) / imdb.budget)
imdb = imdb.assign(score2=(imdb['gross_income'] - imdb['budget']))  # best score measure
imdb = imdb.assign(score3=np.log(imdb['gross_income']) / np.log(imdb['budget']))
# imdb[['score2', 'name','rating','metascore']].sort_values('score2',ascending...
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Figure shows a scatter of gross income against metascore and IMDb rating
plt.figure() imdb_temp = imdb imdb_temp['scaled_gross_income'] = np.log(imdb['gross_income']) # / 1000000 sns.regplot(x = imdb['rating']*10, y = 'scaled_gross_income', data = imdb_temp, color = 'yellow') sns.regplot(x = imdb['metascore'], y = 'scaled_gross_income', data = imdb_temp, color = 'Green') sns.plt.title("Gros...
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Figure shows distribution of Movie Ratings
plt.figure() sns.countplot(x = 'rating', data = imdb) plt.xticks(rotation=60) sns.plt.title("Distribution of Movie Ratings") sns.plt.xlabel("Movie Rating") sns.plt.ylabel("Count of Ratings")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Distribution of ratings by Genres
temp = pd.DataFrame( data = { 'type': [i for i in range(1,11) for genre in imdb.columns if 'genre' in genre], 'votes': [imdb[imdb[genre] == 1]['rating_freq.1'].mean() for genre in imdb.columns if 'genre' in genre] + [imdb[imdb[genre] == 1]['ra...
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Scatter plots comparing average ratings across age demographics (currently commented out)
# plt.figure() # plt.ylim([0,10]) # plt.xlim([0,10]) # sns.regplot(x ='avg_rating_per_demo.aged_under_18', y = 'avg_rating_per_demo.aged_45+', data = imdb, color = 'red') # plt.figure() # plt.ylim([0,10]) # plt.xlim([0,10]) # sns.regplot(x ='avg_rating_per_demo.aged_18-29', y = 'avg_rating_per_demo.aged_45+', data = i...
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
Figure shows the high correlation between opening weekend income and total gross income
plt.figure() sns.regplot(x = 'opening_weekend_income', y = 'gross_income', data=imdb, color='seagreen') sns.plt.title("Opening Weekend Income vs Total Income") sns.plt.xlabel("Opening Weekend") sns.plt.ylabel("Total")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
correlations
# imdb[['metascore','critic_review_count','rating','rating_count','gross_income','rating_freq.3','rating_freq.4','rating_freq.5','rating_freq.6', # 'rating_freq.7','rating_freq.8','rating_freq.9','score2']].corr() # imdb[['avg_rating_per_demo.males','avg_rating_per_demo.females']].corr()
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows how different age groups tend to vote similarly; the diagonal shows the rating distribution of each age group
from pandas.tools.plotting import scatter_matrix temp = imdb[['avg_rating_per_demo.aged_under_18','avg_rating_per_demo.aged_18-29', 'avg_rating_per_demo.aged_30-44','avg_rating_per_demo.aged_45+']] temp.columns = ['-18','18-29','30-44','45+'] scatter_matrix(temp, alpha=0.2,figsize=(6,6)) plt.supti...
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows that above 400K voters the average rating is always greater than 7 - people tend to rate movies they like
plt.figure() sns.regplot(x = 'rating_count', y = 'rating', data=imdb, color='seagreen') sns.plt.title("IMDB Rating vs Number of Votes") sns.plt.xlabel("Number of Votes") sns.plt.ylabel("IMDB Rating")
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows the difference between male and female vote counts across genres
temp = pd.DataFrame( data={ 'sex': ['Male' for genre in imdb.columns if 'genre' in genre] + ['Female' for genre in imdb.columns if 'genre' in genre], 'score': [ imdb[imdb[genre] == 1]['votes_per_demo.males'].mean() for genre in imdb.colu...
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows the similarity of male and female average scores across genres - women tend to rate more generously!
temp1 = pd.DataFrame( data={ 'sex': ['Male' for genre in imdb.columns if 'genre' in genre] + ['Female' for genre in imdb.columns if 'genre' in genre], 'score': [ imdb[imdb[genre] == 1]['avg_rating_per_demo.males'].mean() for genre in imd...
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
figure shows return on investment (gross income divided by budget) by genre
temp2 = pd.DataFrame( data={ 'score': [ imdb[imdb[genre] == 1]['score1'].mean() for genre in imdb.columns if 'genre' in genre ] }, index= [genre[genre.rfind('.')+1:] for genre in imdb.columns if 'genre' in genre] ) plt.figure() sns.barplot(x = temp...
notebooks/Stats.ipynb
shaypal5/rotten_needles
mit
To print part of a text on a new line, the \n escape sequence is used.
print("Imagine all the people \nliving life in peace... \nJohn Lennon")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
To indent part of a text, the \t (tab) escape sequence is used.
print("Imagine all the people \nliving life in peace... \n\tJohn Lennon")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
User inputs To save all user input as one single string, the raw_input() function is used (Python 2; in Python 3 it is called input()).
collection = raw_input("Input some numbers seprated by a comma: ") print(collection) type(collection)
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
To convert a comma-separated string (like the one entered by the user above) into a list of elements, the split() method is used.
# split function takes one argument: the character to split the string on (comma in our case). our_list = collection.split(",") print(our_list)
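If the elements should be numbers rather than strings, a list comprehension converts each piece with int(), and strip() removes stray spaces. A Python 3 sketch with a hard-coded string standing in for the user's input:

```python
collection = "3, 14, 15"  # stands in for the user's input
our_list = [int(item.strip()) for item in collection.split(",")]
print(our_list)  # → [3, 14, 15]
```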
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Functions def is used to define a function. A function should either return some value or print it.
def adjectivizer(noun): return noun+"ly" adjectivizer("man")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
The function below upgrades the function above. It first checks the length of the argument: if it is more than 3 characters, it adjectivizes the noun; otherwise it adds "super" in front.
def superly(noun): if len(noun)>3: return noun+"ly" else: return "super"+noun superly("cat")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
The function below upgrades the function above. It checks one more condition: it adjectivizes nouns with more than 4 letters, leaves 4-letter nouns unchanged (note the double equals sign), and adds "super" in front otherwise.
def superly4(noun): if len(noun)>4: return noun+"ly" elif len(noun)==4: return noun else: return "super"+noun superly4("late")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Same function as above, just with one more condition checked (==5).
def superly5(noun): if len(noun)==5: return "super"+noun elif len(noun)>4: return noun+"ly" elif len(noun)==4: return noun else: return "too short noun" superly5("truth")
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Currency converter The function below asks the user to input some currency (\$ or e sign followed by the amount of money) and converts it to Armenian drams. These are the steps taken: The input is saved as one single string. The very first character of the string is checked to determine the currency. If it is "\$", then the func...
amount = raw_input("Please, input currency followed by amount ") def c_converter(amount): if amount[0]=="$": return str(484*int(amount[1:]))+" AMD" elif amount[0]=="e": return str(535*int(amount[1:])) + " AMD" else: return "Please, use $ or e sign in front" print c_converter(amount)
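The if/elif chain can also be replaced by a dictionary of exchange rates; this Python 3 sketch reuses the rates from the cell above, with a hard-coded string standing in for the user's input:

```python
# AMD per unit of each currency, taken from the cell above
RATES = {"$": 484, "e": 535}

def c_converter(amount):
    rate = RATES.get(amount[0])
    if rate is None:
        return "Please, use $ or e sign in front"
    return str(rate * int(amount[1:])) + " AMD"

print(c_converter("$5"))  # → 2420 AMD
```

Looking rates up in a dict keeps the function unchanged when new currencies are added.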
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Multiple conditions Given the numbers between 7 (included) and 1700 (not included), let's choose those that can be divided by 5. For that reason the % operator (known as "modulus") is used. The % operator gives the remainder after division. For example, 5%2 results in 1 and 6%4 results in 2. In order to che...
# create an empty list our_list=[] # iterate over all numbers in the given range for number in range(7,1700): # if the remainder is 0 if number%5==0: # append/add that number to our list our_list.append(number) # let's print our list print our_list
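The same filter fits in one line with a list comprehension (Python 3 syntax):

```python
# every number in [7, 1700) that divides evenly by 5
our_list = [number for number in range(7, 1700) if number % 5 == 0]
print(our_list[:5])   # → [10, 15, 20, 25, 30]
print(len(our_list))  # → 338
```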
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Above are all the numbers that can be divided by 5 without remainder. Let's now choose a smaller set of numbers: those that can be divided by both 5 and 7 without remainder.
our_list=[] for number in range(7,1700): if number%5==0 and number%7==0: our_list.append(number) print our_list
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
As you noticed above, in order to add one more condition inside an if statement, you just need to use and in between. Similarly, or can be used to check whether at least one of the conditions is satisfied.
our_list=[] for number in range(7,1700): if number%5==0 or number%7==0: our_list.append(number)
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Maximum function Let's define a function that will take two numbers as arguments and return the maximum of them
def max_of_2(x,y): if x>y: return x else: return y print max_of_2(15,12)
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Let's now define a max_of_3 function. As we already have max_of_2, we just need to nest them inside one another as shown below:
def max_of_3(x,y,z): return max_of_2(z,max_of_2(x,y)) max_of_3(10,15,33)
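Python's built-in max() already accepts any number of arguments (or an iterable), so both helpers can be checked against it:

```python
def max_of_2(x, y):
    return x if x > y else y

def max_of_3(x, y, z):
    # nest the two-argument helper, as in the cell above
    return max_of_2(z, max_of_2(x, y))

print(max_of_3(10, 15, 33))  # → 33
print(max(10, 15, 33))       # → 33, the built-in equivalent
```

Writing the helper by hand is a good exercise, but in real code the built-in is shorter and handles any argument count.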
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Text reverser Let's define a function that will reverse a text. As we want to save the reversed text as a string, let's first create an empty string. Then each letter, one by one, will be added to it.
def reverser(text): r_text = "" i = len(text) while i>0: r_text = r_text + text[i-1] i = i-1 return r_text print reverser("Globalization")
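The idiomatic way to reverse a string in Python is a slice with a negative step, equivalent to the while loop above:

```python
def reverser(text):
    # [::-1] walks the string from the last character to the first
    return text[::-1]

print(reverser("Globalization"))  # → noitazilabolG
```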
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Finally, yet importantly, let's read the Zen of Python (Python's philosophy).
import this
Week 2/Intro_2.ipynb
HrantDavtyan/Data_Scraping
apache-2.0
Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main ...
prices_a = prices.as_matrix() # TODO: Minimum price of the data minimum_price = np.amin(prices_a) # TODO: Maximum price of the data maximum_price = np.amax(prices_a) # TODO: Mean price of the data mean_price = np.mean(prices_a) # TODO: Median price of the data median_price = np.median(prices_a) # TODO: Standard dev...
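The notebook computes these with NumPy on the Boston prices; the same summary statistics can be sketched with the standard-library statistics module on made-up prices:

```python
import statistics

prices_a = [105000.0, 132000.0, 350000.0, 221000.0, 98000.0]  # made-up prices

minimum_price = min(prices_a)
maximum_price = max(prices_a)
mean_price = statistics.mean(prices_a)
median_price = statistics.median(prices_a)
std_price = statistics.pstdev(prices_a)  # population std, like np.std's default

print(minimum_price, maximum_price, mean_price, median_price)
```

Note that the median (132000) is well below the mean (181200) here: a single expensive house skews the mean, which is exactly why both are worth reporting.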
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "...
# TODO: Import 'r2_score' from sklearn.metrics import r2_score def performance_metric(y_true, y_predict): """ Calculates and returns the performance score between true and predicted values based on the metric chosen. """ # TODO: Calculate the performance score between 'y_true' and 'y_predict' ...
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable: | True Value | Prediction | | :-------------: | :--------: | | 3.0 | 2.5 | | -0.5 | 0.0 | | 2.0 | 2.1 | | 7.0 | 7.8 | | 4.2 | 5.3 | Would you consider this model to have succe...
# Calculate the performance of this model score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]) print(("Model has a coefficient of determination, R^2, of {:.3f}.").format(score))
boston_housing.ipynb
rodrigomas/boston_housing
mit
Answer: The estimator got an excellent R^2 value; 0.923 is very close to 1, so it predicts the model very well. But I think it may not capture the variation (of a real example) because it has only 5 data points, and that is too few. However, it gives a good prediction for the given data, and for that i...
# TODO: Import 'train_test_split' from sklearn.model_selection import train_test_split prices_a = prices.as_matrix() features_a = features.as_matrix() # TODO: Shuffle and split the data into training and testing subsets X_train, X_test, y_train, y_test = train_test_split(features_a, prices_a, test_size=0.2, random_s...
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 3 - Training and Testing What is the benefit of splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: If you don't have a test subset, you cannot estimate whether your predictor is good enough for ...
# Produce learning curves for varying training set sizes and maximum depths vs.ModelLearning(features, prices)
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to parti...
vs.ModelComplexity(X_train, y_train)
boston_housing.ipynb
rodrigomas/boston_housing
mit
Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering fro...
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV' from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import make_scorer from sklearn.grid_search import GridSearchCV #from sklearn.model_selection import ShuffleSplit from sklearn.cross_validation import ShuffleSplit def fit_mode...
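GridSearchCV exhaustively evaluates every candidate parameter value with cross-validation and keeps the best (note that sklearn.grid_search was later moved to sklearn.model_selection). The core idea can be sketched without scikit-learn, using a made-up scoring function:

```python
# toy "cross-validation score" as a function of max_depth; made up so the
# example is self-contained -- it peaks at depth 4
def cv_score(max_depth):
    return -(max_depth - 4) ** 2

# candidate grid, analogous to param_grid = {'max_depth': range(1, 11)}
params = {'max_depth': list(range(1, 11))}

# exhaustive search: score every candidate, keep the argmax
best_depth = max(params['max_depth'], key=cv_score)
print(best_depth)  # → 4
```

GridSearchCV does the same loop, except each score comes from refitting the estimator on cross-validation folds rather than from a closed-form function.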
boston_housing.ipynb
rodrigomas/boston_housing
mit