Next, a Spark DataFrame nyvDF will be created using SQL; it will contain the restaurant name (FACILITY), latitude, longitude, and number of violations. Note that the latitude and longitude are combined in the final column (Location1) of the retrieved data; they will be extracted separately using regular expressions in the SQL. The results are ordered by number of violations in descending order, and the top 10 are displayed.
```python
query = """
select FACILITY,
       trim(regexp_extract(location1, '(\\\()(.*),(.*)(\\\))', 2)) as lat,
       trim(regexp_extract(location1, '(\\\()(.*),(.*)(\\\))', 3)) as lon,
       cast(`TOTAL # CRITICAL VIOLATIONS` as int) as Violations
from nyrDF
order by Violations desc
limit 1000
"""
nyvDF = sqlContext.sql(query)
nyvDF.show(10)
```
New York Restaurants Demo.ipynb
sharynr/notebooks
apache-2.0
Brunel visualization will be used to map the latitude and longitude to a New York state map. Colors represent the number of violations as noted in the key.
```python
import brunel
nyvPan = nyvDF.toPandas()
%brunel map('NY') + data('nyvPan') x(lon) y(lat) color(Violations) tooltip(FACILITY)
```
One of the key strengths of Watson Studio is the ability to easily search and quickly learn about various topics. For example, to find articles, tutorials or notebooks on Brunel, click on the 'link' icon in the top right-hand corner of this web page ('Find Resources in the Community'). A side palette will appear where you can enter 'Brunel' or other topics of interest; related articles, tutorials, notebooks and data cards will appear. PixieDust provides charting and visualization. It is an open-source Python library that works as an add-on to Jupyter notebooks to improve the user experience of working with data. Please execute the next cell for a tabular view of the data.
```python
from pixiedust.display import *
display(nyvDF)
```
NOTE: Check the __init__ function calls in the above example.
```python
class Parent:
    def override(self):
        print("PARENT override()")

    def implicit(self):
        print("PARENT implicit()")

    def altered(self):
        print("PARENT altered()")


class Child(Parent):
    def override(self):
        print("CHILD override()")

    def altered(self):
        p = super(Child, self)
        print(type(p))
        print("CHILD, BEFORE PARENT altered()")
        p.altered()
        print("CHILD, AFTER PARENT altered()")


dad = Parent()
child = Child()

dad.implicit()
child.implicit()
dad.override()
child.override()
dad.altered()
child.altered()


class Parent:
    x = 10

    def override(self):
        print("PARENT override()")

    def implicit(self):
        print("PARENT implicit()")

    def altered(self):
        print("PARENT altered()")

    def update(self, val):
        self.x = val


class Child(Parent):
    def override(self):
        print("CHILD override()")

    def altered(self):
        p = super(Child, self)
        print(type(p))
        print("CHILD, BEFORE PARENT altered()")
        p.altered()
        print("CHILD, AFTER PARENT altered()")


dad = Parent()
child1 = Child()
child2 = Child()
child1.update(100)
print(child1.x)
print(child2.x)


class Parent:
    x = 10

    def update(self, val):
        self.x = val


class Child(Parent):
    def altered(self, val):
        p = super(Child, self)
        p.update(val)


dad = Parent()
child1 = Child()
child2 = Child()
child1.altered(100)
print(child1.x)
print(child2.x)
```
Section 1 - Core Python/Chapter 09 - Classes & OOPS/OOPs Fundamentals - Inheritance.ipynb
mayank-johri/LearnSeleniumUsingPython
gpl-3.0
The Reason for super()

This should seem like common sense, but then we get into trouble with a thing called multiple inheritance. Multiple inheritance is when you define a class that inherits from more than one class, like this:

```python
class SuperFun(Child, BadStuff):
    pass
```

This is like saying, "Make a class named SuperFun that inherits from the classes Child and BadStuff at the same time." In this case, whenever you have implicit actions on any SuperFun instance, Python has to look up the possible function in the class hierarchy for both Child and BadStuff, but it needs to do this in a consistent order. To do this Python uses "method resolution order" (MRO) and an algorithm called C3 to get it straight. Because the MRO is complex and a well-defined algorithm is used, Python can't leave it to you to get the MRO right. Instead, Python gives you the super() function, which handles all of this for you in the places that you need the altering type of actions, as I did in Child.altered. With super() you don't have to worry about getting this right, and Python will find the right function for you.

Using super() with __init__

The most common use of super() is actually in __init__ functions in base classes. This is usually the only place where you need to do some things in a child, then complete the initialization in the parent. Here's a quick example of doing that in the Child:

```python
class Child(Parent):
    def __init__(self, stuff):
        self.stuff = stuff
        super(Child, self).__init__()
```

This is pretty much the same as the Child.altered example above, except I'm setting some variables in the __init__ before having the Parent initialize with its Parent.__init__.
```python
class Child(Parent):
    def __init__(self, stuff):
        self.stuff = stuff
        super(Child, self).__init__()
```
Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?

Hint:
* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.
* Each word count vector is a Python dictionary. For each word count vector in the SFrame, you'd have to check whether the set of the 5 common words is a subset of the keys of the word count vector. Complete the function has_top_words to accomplish the task.
  - Convert the list of top 5 words into a set using the syntax set(common_words), where common_words is a Python list. See this link if you're curious about Python sets.
  - Extract the list of keys of the word count dictionary by calling the keys() method.
  - Convert the list of keys into a set as well.
  - Use the issubset() method to check if all 5 words are among the keys.
* Now apply the has_top_words function on every row of the SFrame.
* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.
```python
common_words = combined_words['word'][:5]
common_words = set(common_words)

def has_top_words(word_count_vector):
    # extract the keys of word_count_vector and convert them to a set
    unique_words = set(word_count_vector.keys())
    print "length of unique words = " + str(len(unique_words))
    # return 1 if common_words is a subset of unique_words, 0 otherwise
    return 1 if common_words.issubset(unique_words) else 0

wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)

# use the has_top_words column to answer the quiz question
print "#articles in the Wikipedia dataset containing all of those 5 words = " + str(wiki['has_top_words'].sum())
```
ml-clustering-and-retrieval/week-2/0_nearest-neighbors-features-and-metrics_blank.ipynb
zomansud/coursera
mit
Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance? Hint: To compute the Euclidean distance between two dictionaries, use graphlab.toolkits.distances.euclidean. Refer to this link for usage.
```python
obama = wiki[wiki['name'] == 'Barack Obama']
bush = wiki[wiki['name'] == 'George W. Bush']
biden = wiki[wiki['name'] == 'Joe Biden']

isinstance(obama['word_count'][0], dict)

# pair-wise distances
obama_bush = graphlab.toolkits.distances.euclidean(obama['word_count'][0], bush['word_count'][0])
print "distance b/w obama and bush = " + str(obama_bush)

obama_biden = graphlab.toolkits.distances.euclidean(obama['word_count'][0], biden['word_count'][0])
print "distance b/w obama and biden = " + str(obama_biden)

bush_biden = graphlab.toolkits.distances.euclidean(biden['word_count'][0], bush['word_count'][0])
print "distance b/w biden and bush = " + str(bush_biden)
```
Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
```python
bush_words = top_words('George W. Bush')
bush_words

new_combined_words = obama_words.join(bush_words, on='word')
new_combined_words

new_combined_words = new_combined_words.rename({'count': 'Obama', 'count.1': 'Bush'})
new_combined_words

new_combined_words = new_combined_words.sort('Obama', ascending=False)
new_combined_words.print_rows(10)
```
Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
```python
combined_words_tf_idf = obama_tf_idf.join(schiliro_tf_idf, on='word')
combined_words_tf_idf

combined_words_tf_idf = combined_words_tf_idf.rename({'weight': 'Obama', 'weight.1': 'Schiliro'})
combined_words_tf_idf

combined_words_tf_idf = combined_words_tf_idf.sort('Obama', ascending=False)
combined_words_tf_idf.print_rows(10)
```
The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.

Quiz Question. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have the largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
```python
common_words = set(combined_words_tf_idf['word'][:5])
common_words

def has_top_words(word_count_vector):
    # extract the keys of word_count_vector and convert them to a set
    unique_words = set(word_count_vector.keys())
    # return 1 if common_words is a subset of unique_words, 0 otherwise
    return 1 if common_words.issubset(unique_words) else 0

wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)

# use the has_top_words column to answer the quiz question
print "#articles in the Wikipedia dataset containing all of those 5 words = " + str(wiki['has_top_words'].sum())
```
Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words. Choosing metrics You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden. Quiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match.
```python
obama = wiki[wiki['name'] == 'Barack Obama']
biden = wiki[wiki['name'] == 'Joe Biden']

obama_biden = graphlab.toolkits.distances.euclidean(obama['tf_idf'][0], biden['tf_idf'][0])
print "distance between obama and biden based on tf-idf = " + str(obama_biden)
```
A detour through the Web: how does a site work?

Even though we are not going to teach a web-development course today, you nevertheless need some basics to understand how a website works and how the information on a page is structured.

A website is a set of pages written in HTML, a language that describes both the content and the presentation of a web page.

HTML

Tags

On a web page you will always find elements such as &lt;head&gt;, &lt;title&gt;, etc. These are the codes that structure the content of an HTML page; they are called tags. Examples include the tags &lt;p&gt;, &lt;h1&gt;, &lt;h2&gt;, &lt;h3&gt;, &lt;strong&gt; or &lt;em&gt;. Most tags come in pairs, with an opening tag and a closing tag (for example &lt;p&gt; and &lt;/p&gt;): the opening tag marks the beginning of an element, and the matching closing tag marks its end.

Example: table tags

| Tag | Description |
|-----------|----------------------------|
| &lt;table&gt; | Table |
| &lt;caption&gt; | Table title |
| &lt;tr&gt; | Table row |
| &lt;th&gt; | Header cell |
| &lt;td&gt; | Cell |
| &lt;thead&gt; | Table header section |
| &lt;tbody&gt; | Table body section |
| &lt;tfoot&gt; | Table footer section |

Application: a table in HTML

The HTML code of the table renders in the browser as:

| First name | Mike | Mister |
|------------|-----------|----------|
| Last name | Stuntman | Pink |
| Profession | Stunt man | Gangster |

Parent and child

In HTML, the terms parent and child refer to elements nested inside one another. In the following construction, for example, the &lt;div&gt; element is the parent of the &lt;p&gt; element, while the &lt;p&gt; element is the child of the &lt;div&gt; element.
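The HTML source for such a table (elided from this extract) would look roughly like the following sketch; the cell values are taken from the rendered table above:

```html
<table>
  <tr><th>First name</th><td>Mike</td><td>Mister</td></tr>
  <tr><th>Last name</th><td>Stuntman</td><td>Pink</td></tr>
  <tr><th>Profession</th><td>Stunt man</td><td>Gangster</td></tr>
</table>
```

Each &lt;tr&gt; is one row; &lt;th&gt; marks the header cell and &lt;td&gt; the ordinary cells.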
But why learn all this just to scrape, you may ask? To retrieve information from a website properly, you need to be able to understand its structure, and therefore its HTML code. The Python functions used for scraping are mainly built to let you navigate between tags.

Optional - CSS - the style of the web page

Once a piece of HTML code is written, it appears as black text on a white background. A simple way to make the page nicer is to add some colour. The style sheet that makes the page look better is the CSS file (or files). All the HTML pages that reference this external style sheet inherit all of its definitions. We will come back to this in more detail in the session on Flask (a Python module for building websites).

Scraping with Python

We will mainly use the BeautifulSoup4 package in this course, but other packages exist (Selenium, Scrapy...). BeautifulSoup is sufficient when you work on static HTML pages; as soon as the information you are looking for is generated by executing JavaScript, you will need tools such as Selenium. Likewise, if you do not know the URL in advance, you will need a framework such as Scrapy, which moves easily from one page to another ("crawl"). Scrapy is more complex to handle than BeautifulSoup: if you want more details, see the Scrapy Tutorial page.

Using BeautifulSoup

Packages for scraping HTML pages:
- BeautifulSoup4 (pip install bs4)
- urllib
```python
import urllib
import bs4
#help(bs4)
```
_doc/notebooks/td2a_eco/TD2A_Eco_Web_Scraping.ipynb
sdpython/ensae_teaching_cs
mit
First HTML page

Let's start with something easy: take a Wikipedia page, for example the one for the French football Ligue 1: Championnat de France de football 2016-2017. We want to retrieve the list of teams, as well as the URLs of those teams' Wikipedia pages.
```python
# Step 1: connect to the Wikipedia page and get the source code
url_ligue_1 = "https://fr.wikipedia.org/wiki/Championnat_de_France_de_football_2016-2017"

from urllib import request

request_text = request.urlopen(url_ligue_1).read()
print(request_text[:1000])

# Step 2: use the BeautifulSoup package, which "understands" the tags
# contained in the string returned by the request
page = bs4.BeautifulSoup(request_text, "lxml")
#print(page)
```
The .find method returns only the first occurrence of the element.
```python
print(page.find("table"))
```
To find all occurrences, use .findAll().
```python
print("There are", len(page.findAll("table")), "<table> elements in the page")
print("The 2nd table of the page: Hiérarchie\n", page.findAll("table")[1])
print("--------------------------------------------------------")
print("The 3rd table of the page: Palmarès\n", page.findAll("table")[2])
```
Guided exercise: get the list of Ligue 1 teams

The list of teams is in the "Participants" table: in the source code, you can see that this is the table with class="DebutCarte". You can also see that the tags surrounding the club names and URLs have the following form: &lt;a href="url_club" title="nom_club"&gt; Club name &lt;/a&gt;
```python
for item in page.find('table', {'class': 'DebutCarte'}).findAll({'a'})[0:5]:
    print(item, "\n-------")
```
We do not want to keep the first element, which corresponds not to a club but to an image. This element happens to be the only one without a title="" attribute. It is advisable to exclude unwanted elements by specifying the attributes a row must have, rather than excluding them by their position in the list.
```python
### condition on the position in the list >>>> BAD
for e, item in enumerate(page.find('table', {'class': 'DebutCarte'}).findAll({'a'})[0:5]):
    if e == 0:
        pass
    else:
        print(item)

#### condition on the attributes the row must have >>>> GOOD
for item in page.find('table', {'class': 'DebutCarte'}).findAll({'a'})[0:5]:
    if item.get("title"):
        print(item)
```
Finally, the last step is to extract the information we want, namely the name and URL of the 20 clubs. For this we will use two methods of the item element:

getText(), which returns the text displayed on the web page inside the &lt;a&gt; tag

get('xxxx'), which returns the value of the attribute xxxx

In our case we want the club name and the URL, so we will use __getText__ and __get("href")__.
```python
for item in page.find('table', {'class': 'DebutCarte'}).findAll({'a'})[0:5]:
    if item.get("title"):
        print(item.get("href"))
        print(item.getText())

# to get the official club name, we could have used the title attribute
for item in page.find('table', {'class': 'DebutCarte'}).findAll({'a'})[0:5]:
    if item.get("title"):
        print(item.get("title"))
```
Web-scraping exercise with BeautifulSoup

For this exercise, we ask you to obtain:

1) The personal information of the 721 Pokémon on the website pokemondb.net. The information we would ultimately like for each Pokémon is contained in 4 tables: Pokédex data, Training, Breeding, Base stats. For an example, see: Pokemon Database.

2) We would also like you to retrieve the images of each Pokémon and save them in a folder (hint: use the request and shutil modules). For this question you will have to look some things up yourself; not everything is covered in this session.

Browsing the web with Selenium

The advantage of the Selenium package is that it can obtain information that is not in the HTML code itself but only appears as the result of JavaScript executed in the background. Selenium behaves like a user surfing the web: it clicks on links, fills in forms, and so on. In this example, we are going to go to the Bing News site and type a given topic into the search bar. The chromedriver version must be &gt;= 2.36.
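For question 2, a minimal sketch of saving a downloaded image with the standard library (urllib.request plus shutil); the helper name save_stream and the example URL are hypothetical, not part of the exercise statement:

```python
import shutil
from urllib import request

def save_stream(source, dest_path):
    """Copy any binary file-like object (e.g. an HTTP response) to disk."""
    with open(dest_path, "wb") as out_file:
        shutil.copyfileobj(source, out_file)

# Typical use (network access required; check the real image URLs on pokemondb.net):
# with request.urlopen("https://img.pokemondb.net/artwork/bulbasaur.jpg") as resp:
#     save_stream(resp, "bulbasaur.jpg")
```

shutil.copyfileobj streams the response in chunks, so large images are never held entirely in memory.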
```python
# If selenium is not installed:
# !pip install selenium
import selenium

# download the chrome driver from https://chromedriver.storage.googleapis.com/index.html?path=74.0.3729.6/
path_to_web_driver = "chromedriver"

import os
import sys
from pyquickhelper.filehelper import download, unzip_files

version = "73.0.3683.68"
url = "https://chromedriver.storage.googleapis.com/%s/" % version
# note: test "darwin" before "win", since the string "darwin" also contains "win"
if sys.platform.startswith("darwin"):
    if not os.path.exists("chromedriver_mac64.zip"):
        d = download(url + "chromedriver_mac64.zip")
    if not os.path.exists("chromedriver"):
        unzip_files("chromedriver_mac64.zip", where_to=".")
elif sys.platform.startswith("win"):
    if not os.path.exists("chromedriver_win32.zip"):
        d = download(url + "chromedriver_win32.zip")
    if not os.path.exists("chromedriver.exe"):
        unzip_files("chromedriver_win32.zip", where_to=".")
elif sys.platform.startswith("linux"):
    if not os.path.exists("chromedriver_linux64.zip"):
        d = download(url + "chromedriver_linux64.zip")
    if not os.path.exists("chromedriver"):
        unzip_files("chromedriver_linux64.zip", where_to=".")
```
We submit the query.
```python
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--verbose')
browser = webdriver.Chrome(executable_path=path_to_web_driver, options=chrome_options)

browser.get('https://www.bing.com/news')

# look for the place where a form can be filled in:
# using the browser tools > inspect the elements of the page,
# we see that the search bar is an element of the code called 'q' (as in query)
search = browser.find_element_by_name('q')
print(search)
print([search.text, search.tag_name, search.id])

# send to that element the word we would have typed in the search bar
search.send_keys("alstom")

search_button = browser.find_element_by_xpath("//input[@id='sb_form_go']")
#search_button = browser.find_element_by_id('search_button_homepage')
search_button.click()

# alternatively, press the "Enter" (Return) key
#search.send_keys(Keys.RETURN)

png = browser.get_screenshot_as_png()

from IPython.display import Image
Image(png, width='500')
```
We extract the results.
```python
from selenium.common.exceptions import StaleElementReferenceException

links = browser.find_elements_by_xpath("//div/a[@class='title'][@href]")

results = []
for link in links:
    try:
        url = link.get_attribute('href')
    except StaleElementReferenceException as e:
        print("Issue with '{0}'".format(link))
        print("It might be due to slow javascript which produces the HTML page.")
        continue
    results.append(url)

len(results)

# close the browser once everything is finished
browser.quit()

print(results)
```
Getting information between two dates on Google News

In fact, the Google News example could have done without Selenium: BeautifulSoup could have been used directly with the URLs one can guess from Google. Here we use the Google News URL to build a small function which, for each triple (topic, period start, period end), returns relevant links from the Google search.
```python
import time
from selenium import webdriver

def get_news_specific_dates(beg_date, end_date, subject, hl="fr", gl="fr", tbm="nws", authuser="0"):
    '''For a given query and a precise time interval, returns the first
    10 press-article results on the subject.'''
    get_string = ('https://www.google.com/search?hl={}&gl={}&tbm={}&authuser={}'
                  '&q={}&tbs=cdr%3A1%2Ccd_min%3A{}%2Ccd_max%3A{}&tbm={}').format(
        hl, gl, tbm, authuser, subject, beg_date, end_date, tbm)
    print(get_string)
    browser.get(get_string)

    # The class may change whenever Google updates the style of its page,
    # which happens regularly. In that case, use the web debugging tools
    # (Chrome - Developer Tools).
    # links = browser.find_elements_by_xpath("//h3[@class='r dO0Ag']/a[@href]")
    links = browser.find_elements_by_xpath("//h3/a[@href]")
    print(len(links))

    results = []
    for link in links:
        url = link.get_attribute('href')
        results.append(url)

    browser.quit()
    return results
```
We call the function we have just created.
```python
browser = webdriver.Chrome(executable_path=path_to_web_driver, options=chrome_options)
articles = get_news_specific_dates("3/15/2018", "3/31/2018", "alstom", hl="fr")
print(articles)
```
Using Selenium to play 2048

In this example, we use the module so that Python itself presses the keyboard keys in order to play 2048.

Note: this piece of code does not give a solution to 2048; it just shows what can be done with Selenium.
```python
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

# open the web page of the 2048 game
browser = webdriver.Chrome(executable_path=path_to_web_driver, options=chrome_options)
browser.get('https://gabrielecirulli.github.io/2048/')

# What we are going to do: a loop that tirelessly repeats the same thing:
# up / right / down / left.
# start by clicking on the page so that the key presses reach the game
browser.find_element_by_class_name('grid-container').click()
grid = browser.find_element_by_tag_name('body')

# a dictionary mapping the move number to the key to press
direction = {0: Keys.UP, 1: Keys.RIGHT, 2: Keys.DOWN, 3: Keys.LEFT}
count = 0

while True:
    try:
        # check whether the "Try again" button is there - if so, the game is over
        retryButton = browser.find_element_by_link_text('Try again')
        scoreElem = browser.find_element_by_class_name('score-container')
        break
    except:
        # do nothing, the game is not over yet
        pass
    # keep playing - press the next key for the next move
    count += 1
    grid.send_keys(direction[count % 4])
    time.sleep(0.1)

print('Final score: {} in {} moves'.format(scoreElem.text, count))
browser.quit()
```
Search RNA Quantification Sets Method This method returns a list of RNA quantification sets in a dataset. RNA quantification sets are a way to associate a group of related RNA quantifications. Note that we use the dataset_id obtained from the 1kg_metadata_service notebook.
```python
counter = 0
for rna_quant_set in c.search_rna_quantification_sets(dataset_id=dataset.id):
    if counter > 5:
        break
    counter += 1
    print(" id: {}".format(rna_quant_set.id))
    print(" dataset_id: {}".format(rna_quant_set.dataset_id))
    print(" name: {}\n".format(rna_quant_set.name))
```
python_notebooks/1kg_rna_quantification_service.ipynb
david4096/bioapi-examples
apache-2.0
Get RNA Quantification Set by id method This method obtains a single RNA quantification set by its unique identifier. This id was chosen arbitrarily from the returned results.
```python
single_rna_quant_set = c.get_rna_quantification_set(
    rna_quantification_set_id=rna_quant_set.id)
print(" name: {}\n".format(single_rna_quant_set.name))
```
Search RNA Quantifications We can list all of the RNA quantifications in an RNA quantification set. The rna_quantification_set_id was chosen arbitrarily from the returned results.
```python
counter = 0
for rna_quant in c.search_rna_quantifications(
        rna_quantification_set_id=rna_quant_set.id):
    if counter > 5:
        break
    counter += 1
    print("RNA Quantification: {}".format(rna_quant.name))
    print(" id: {}".format(rna_quant.id))
    print(" description: {}\n".format(rna_quant.description))

test_quant = rna_quant
```
Get RNA Quantification by Id Similar to RNA quantification sets, we can retrieve a single RNA quantification by specific id. This id was chosen arbitrarily from the returned results. The RNA quantification reported contains details of the processing pipeline which include the source of the reads as well as the annotations used.
```python
single_rna_quant = c.get_rna_quantification(
    rna_quantification_id=test_quant.id)
print(" name: {}".format(single_rna_quant.name))
print(" read_ids: {}".format(single_rna_quant.read_group_ids))
print(" annotations: {}\n".format(single_rna_quant.feature_set_ids))
```
Search Expression Levels The feature level expression data for each RNA quantification is reported as a set of Expression Levels. The rna_quantification_service makes it easy to search for these.
```python
def getUnits(unitType):
    units = ["", "FPKM", "TPM"]
    return units[unitType]

counter = 0
for expression in c.search_expression_levels(
        rna_quantification_id=test_quant.id):
    if counter > 5:
        break
    counter += 1
    print("Expression Level: {}".format(expression.name))
    print(" id: {}".format(expression.id))
    print(" feature: {}".format(expression.feature_id))
    print(" expression: {} {}".format(expression.expression, getUnits(expression.units)))
    print(" read_count: {}".format(expression.raw_read_count))
    print(" confidence_interval: {} - {}\n".format(
        expression.conf_interval_low, expression.conf_interval_high))
```
It is also possible to restrict the search to a specific feature or to request expression values exceeding a threshold amount.
```python
counter = 0
for expression in c.search_expression_levels(
        rna_quantification_id=test_quant.id, feature_ids=[]):
    if counter > 5:
        break
    counter += 1
    print("Expression Level: {}".format(expression.name))
    print(" id: {}".format(expression.id))
    print(" feature: {}\n".format(expression.feature_id))
```
Let's look for some high expressing features.
```python
counter = 0
for expression in c.search_expression_levels(
        rna_quantification_id=test_quant.id, threshold=1000):
    if counter > 5:
        break
    counter += 1
    print("Expression Level: {}".format(expression.name))
    print(" id: {}".format(expression.id))
    print(" expression: {} {}\n".format(expression.expression, getUnits(expression.units)))
```
TODO: Implementing the basic functions

Here is your turn to shine. Implement the following formulas, as explained in the text.

- Sigmoid activation function

$$\sigma(x) = \frac{1}{1+e^{-x}}$$

- Output (prediction) formula

$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$

- Error function

$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$

- The function that updates the weights

$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$

$$ b \longrightarrow b + \alpha (y - \hat{y})$$
```python
# Implement the following functions

# Activation (sigmoid) function
def sigmoid(x):
    pass

# Output (prediction) formula
def output_formula(features, weights, bias):
    pass

# Error (log-loss) formula
def error_formula(y, output):
    pass

# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
    pass
```
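One possible reference implementation in plain Python, only to make the formulas above concrete (the notebook itself works with NumPy arrays, so this pure-Python sketch is not a drop-in answer):

```python
from math import exp, log

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x))
    return 1 / (1 + exp(-x))

def output_formula(features, weights, bias):
    # y_hat = sigma(w1*x1 + w2*x2 + b), here for plain Python lists
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

def error_formula(y, output):
    # log-loss: -y*log(y_hat) - (1-y)*log(1-y_hat)
    return -y * log(output) - (1 - y) * log(1 - output)

def update_weights(x, y, weights, bias, learnrate):
    # one gradient-descent step on a single example
    output = output_formula(x, weights, bias)
    d_error = y - output
    weights = [w + learnrate * d_error * xi for w, xi in zip(weights, x)]
    bias = bias + learnrate * d_error
    return weights, bias
```

For instance, with zero weights and bias, output_formula returns sigmoid(0) = 0.5 for any input.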
gradient-descent/GradientDescent.ipynb
samirma/deep-learning
mit
Model summary Run done with a model with three convolutional layers, two fully connected layers and a final softmax layer, with a constant 48 channels per convolutional layer. The run initially used dropout in the two fully connected layers and minor random augmentation (4 rotations and flip); when learning appeared to stop, the run was halted, dropout removed and more significant random augmentation applied (random arbitrary rotations, shunting, scaling and flipping). This gave a further gain in performance, with an eventual best of 0.848 NLL achieved on the validation set. Various manual changes to the learning rate etc. at this point did not seem to give any further gain in performance.
print('## Model structure summary\n')
print(model)

params = model.get_params()
n_params = {p.name: p.get_value().size for p in params}
total_params = sum(n_params.values())

print('\n## Number of parameters\n')
print('  ' + '\n  '.join(['{0} : {1} ({2:.1f}%)'.format(k, v, 100. * v / total_params)
                          for k, v in sorted(n_params.items(), key=lambda x: x[0])]))
print('\nTotal : {0}'.format(total_params))
notebooks/model_modifications/Fewer convolutional channels with dropout experiment (large).ipynb
Neuroglycerin/neukrill-net-work
mit
Train and valid set NLL trace The discontinuity at just over 80 epochs is due to resuming without dropout and with more augmentation.
tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600.

fig = plt.figure(figsize=(12, 8))
ax1 = fig.add_subplot(111)
ax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record)
ax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record)
ax1.set_xlabel('Epochs')
ax1.legend(['Valid', 'Train'])
ax1.set_ylabel('NLL')
ax1.set_ylim(0., 5.)
ax1.grid(True)

ax2 = ax1.twiny()
ax2.set_xticks(np.arange(0, tr.shape[0], 20))
ax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]])
ax2.set_xlabel('Hours')

print("Minimum validation set NLL {0}".format(
    min(model.monitor.channels['valid_y_y_1_nll'].val_record)))
notebooks/model_modifications/Fewer convolutional channels with dropout experiment (large).ipynb
Neuroglycerin/neukrill-net-work
mit
To initialize a new class instance, we make use of the constructor method from_array():
coeffs_l5m2 = SHCoeffs.from_array(coeffs)
examples/notebooks/tutorial_3.ipynb
MMesch/SHTOOLS
bsd-3-clause
When initializing a new class instance, the default is to assume that the input coefficients are 4-pi normalized excluding the Condon-Shortley phase. This normalization convention can be overridden by setting the optional parameter 'normalization', which takes values of '4pi', 'ortho' or 'schmidt', along with the parameter 'csphase', which can be 1 (to exclude the Condon-Shortley phase) or -1 (to include it). The SHCoeffs class contains many methods, and here we use plot_spectrum() to plot the power spectrum:
fig, ax = coeffs_l5m2.plot_spectrum(xscale='log')
examples/notebooks/tutorial_3.ipynb
MMesch/SHTOOLS
bsd-3-clause
To plot the function that corresponds to the coefficients, we first need to expand it on a grid, which can be accomplished using the expand() method:
grid_l5m2 = coeffs_l5m2.expand('DH2')
examples/notebooks/tutorial_3.ipynb
MMesch/SHTOOLS
bsd-3-clause
This returns a new SHGrid class instance. The resolution of the grid is determined automatically to correspond to the maximum degree of the spherical harmonic coefficients in order to ensure good sampling. The optional parameter 'grid' can be 'DH2' for a Driscoll and Healy sampled grid with nlon = 2 * nlat, 'DH' for a Driscoll and Healy sampled grid with nlon = nlat, or 'GLQ' for a grid used with the Gauss-Legendre quadrature expansion routines. Once the grid is created, it can be plotted using the built-in method plot().
fig, ax = grid_l5m2.plot()
examples/notebooks/tutorial_3.ipynb
MMesch/SHTOOLS
bsd-3-clause
Initialize with a random model Another constructor for the SHCoeffs class is the from_random() method. It takes a power spectrum (power per degree l of the coefficients) and generates coefficients that are independent, normally distributed random variables with the provided expected power spectrum. This corresponds to a stationary and isotropic random model on the surface of the sphere whose autocorrelation function is given by the spherical harmonic addition theorem. We initialize coefficients here with a scale-free power spectrum that has equal band power beyond the scale length that defines the size of the largest model features. The particular property of this model is that it is invariant under zoom operations.
a = 10  # scale length
ls = np.arange(lmax + 1, dtype=float)
power = 1. / (1. + (ls / a) ** 2) ** 0.5

coeffs_global = SHCoeffs.from_random(power)
fig, ax = coeffs_global.plot_spectrum(unit='per_dlogl', xscale='log')
fig, ax = coeffs_global.expand('DH2').plot()
examples/notebooks/tutorial_3.ipynb
MMesch/SHTOOLS
bsd-3-clause
Rotating the coordinate system Spherical harmonics coefficients can be expressed in a different coordinate system very efficiently. Importantly, the power per degree spectrum is invariant under rotation. We demonstrate this by rotating a zonal spherical harmonic (m=0) that is centered about the north-pole to the equator. The rotations are specified by the three Euler angles alpha, beta, and gamma. There are several different conventions for specifying these angles, and they can either provide the angles for rotating the physical body or coordinate system. Please read the documentation of this method before proceeding! In this example, we use the constructor from_zeros(), and then use the method set_coeffs() to initialize a single spherical harmonic coefficient:
coeffs_l5m0 = SHCoeffs.from_zeros(lmax)
coeffs_l5m0.set_coeffs(1., 5, 0)

alpha = 0.   # around z-axis
beta = 90.   # around x-axis (lon=0)
gamma = 10.  # around z-axis again
coeffs_l5m0_rot = coeffs_l5m0.rotate(alpha, beta, gamma, degrees=True)

fig, ax = coeffs_l5m0_rot.plot_spectrum(xscale='log', show=False)
ax.set(ylim=[0.01, 10])

grid_l5m0_rot = coeffs_l5m0_rot.expand('DH2')
fig, ax = grid_l5m0_rot.plot()
examples/notebooks/tutorial_3.ipynb
MMesch/SHTOOLS
bsd-3-clause
Addition, multiplication, and subtraction Similar grids can be added, multiplied and subtracted using standard python operators. It is easily verified that the following sequence of operations return the same rotated grid as above:
grid_new = (2 * grid_l5m0_rot + grid_l5m2**2 - grid_l5m2 * grid_l5m2) / 2.0
grid_new.plot()

coeffs = grid_new.expand()
fig, ax = coeffs.plot_spectrum()
examples/notebooks/tutorial_3.ipynb
MMesch/SHTOOLS
bsd-3-clause
How to win competitions 2 1. Bagging - supplement Weighting during voting/averaging In bagging we draw $m$ examples with replacement. Almost 40% of the data is never used, since $\lim_{n \rightarrow \infty}\left(1-\frac{1}{n}\right)^n = e^{-1} \approx 0.368 $. We can use this data as a validation set and compute the error $J_w(\theta)$ on it. Then, given $N$ classifiers, for the $i$-th classifier we compute $w_i$ (why $-J$?): $$ w_i = \dfrac{\exp(-J_w(\theta_i))}{ \sum_{j=1}^N \exp(-J_w(\theta_j))} $$ Classification by weighted voting (set of classes $C$, $y_i$ is the answer of the $i$-th classifier): $$y = \mathop{\mathrm{argmax}}_{c \in C} \sum_{i=1}^N w_i I(c = y_i) $$ where $$I(A) = \left\{\begin{array}{cl}1 & \textrm{if event A occurs}\\ 0 & \textrm{otherwise}\end{array}\right.$$ Classification by weighted averaging of probabilities (set of classes $C$, $y_i$ is the answer of the $i$-th classifier): $$y = \mathop{\mathrm{argmax}}_{c \in C} \dfrac{\sum_{i=1}^{N} w_i p_{c,i}}{\sum_{j=1}^{N} w_j} $$ where $p_{c,i}$ is the probability assigned to class $c$ by the $i$-th classifier. Weighted bagging on MNIST To check on your own as part of the bonus exercises in class (20 pts.). 2. Overfitting and regularization
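A minimal sketch of the weighting scheme above. The validation errors and the vote below are made up for illustration; the softmax of negative errors gives more weight to classifiers with lower validation error:

```python
import numpy as np

def bagging_weights(val_errors):
    # w_i = exp(-J_i) / sum_j exp(-J_j): lower validation error -> larger weight
    e = np.exp(-np.asarray(val_errors, dtype=float))
    return e / e.sum()

def weighted_vote(predictions, weights, classes):
    # argmax_c sum_i w_i * I(c == y_i)
    scores = {c: sum(w for p, w in zip(predictions, weights) if p == c)
              for c in classes}
    return max(scores, key=scores.get)

w = bagging_weights([0.2, 0.5, 0.9])   # classifier 0 is most trusted
print(w)
print(weighted_vote([1, 0, 0], w, classes=[0, 1]))
```

Note that even though the most trusted classifier votes for class 1, the combined weight of the other two can still win the vote.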
def runningMeanFast(x, N):
    return np.convolve(x, np.ones((N,))/N, mode='valid')

def powerme(x1, x2, n):
    X = []
    for m in range(n+1):
        for i in range(m+1):
            X.append(np.multiply(np.power(x1, i), np.power(x2, (m-i))))
    return np.hstack(X)

def safeSigmoid(x, eps=0):
    y = 1.0/(1.0 + np.exp(-x))
    # clip from below and above
    if eps > 0:
        y[y < eps] = eps
        y[y > 1 - eps] = 1 - eps
    return y

def h(theta, X, eps=0.0):
    return safeSigmoid(X*theta, eps)

def J(h, theta, X, y, lamb=0):
    m = len(y)
    f = h(theta, X, eps=10**-7)
    j = -np.sum(np.multiply(y, np.log(f)) + np.multiply(1 - y, np.log(1 - f)), axis=0)/m
    # + lamb/(2*m) * np.sum(np.power(theta[1:], 2))
    return j

def dJ(h, theta, X, y, lamb=0):
    g = 1.0/y.shape[0]*(X.T*(h(theta, X)-y))
    # g[1:] += lamb/float(y.shape[0]) * theta[1:]
    return g

def SGD(h, fJ, fdJ, theta, X, Y,
        alpha=0.001, maxEpochs=1.0, batchSize=100,
        adaGrad=False, logError=True,
        validate=0.0, valStep=100, lamb=0):
    errorsX, errorsY = [], []
    errorsVX, errorsVY = [], []

    XT, YT = X, Y
    if validate > 0:
        mv = int(X.shape[0] * validate)
        XV, YV = X[:mv], Y[:mv]
        XT, YT = X[mv:], Y[mv:]
    m, n = XT.shape

    start, end = 0, batchSize
    maxSteps = (m * float(maxEpochs)) / batchSize

    if adaGrad:
        hgrad = np.matrix(np.zeros(n)).reshape(n, 1)

    for i in range(int(maxSteps)):
        XBatch, YBatch = XT[start:end, :], YT[start:end, :]

        grad = fdJ(h, theta, XBatch, YBatch, lamb=lamb)

        if adaGrad:
            hgrad += np.multiply(grad, grad)
            Gt = 1.0 / (10**-7 + np.sqrt(hgrad))
            theta = theta - np.multiply(alpha * Gt, grad)
        else:
            theta = theta - alpha * grad

        if logError:
            errorsX.append(float(i*batchSize)/m)
            errorsY.append(fJ(h, theta, XBatch, YBatch).item())
            if validate > 0 and i % valStep == 0:
                errorsVX.append(float(i*batchSize)/m)
                errorsVY.append(fJ(h, theta, XV, YV).item())

        if start + batchSize < m:
            start += batchSize
        else:
            start = 0
        end = min(start + batchSize, m)
    return theta, (errorsX, errorsY, errorsVX, errorsVY)

def classifyBi(theta, X):
    prob = h(theta, X)
    return prob

n = 6
sgd = True

data = np.matrix(np.loadtxt("ex2data2.txt", delimiter=","))
np.random.shuffle(data)

X = powerme(data[:, 0], data[:, 1], n)
Y = data[:, 2]

pyplot.figure(figsize=(16, 8))
pyplot.subplot(121)
pyplot.scatter(X[:, 2].tolist(), X[:, 1].tolist(), c=Y.tolist(), s=100,
               cmap=pyplot.cm.get_cmap('prism'));

if sgd:
    theta = np.matrix(np.zeros(X.shape[1])).reshape(X.shape[1], 1)
    thetaBest, err = SGD(h, J, dJ, theta, X, Y,
                         alpha=1, adaGrad=True, maxEpochs=2500, batchSize=100,
                         logError=True, validate=0.25, valStep=1, lamb=0)

xx, yy = np.meshgrid(np.arange(-1.5, 1.5, 0.02), np.arange(-1.5, 1.5, 0.02))
l = len(xx.ravel())
C = powerme(xx.reshape(l, 1), yy.reshape(l, 1), n)
z = classifyBi(thetaBest, C).reshape(int(np.sqrt(l)), int(np.sqrt(l)))

pyplot.contour(xx, yy, z, levels=[0.5], linewidths=3);
pyplot.ylim(-1, 1.2);
pyplot.xlim(-1, 1.2);

pyplot.subplot(122)
pyplot.plot(err[0], err[1], lw=3, label="Training error")
pyplot.plot(err[2], err[3], lw=3, label="Validation error");
pyplot.legend()
pyplot.ylim(0.2, 0.8);
Wyklady/08/Konkursy2.ipynb
emjotde/UMZ
cc0-1.0
Binary Confusion Matrix When there are only two classes, 0 and 1, the class names are conventionally labelled "Positive" and "Negative". When the classifier's prediction is correct — Positive predicted as Positive, or Negative predicted as Negative — the result is called "True", and when the prediction is wrong — Positive predicted as Negative, or Negative predicted as Positive — it is called "False". The names and layout of the binary classification results are as follows. | | Predicted Positive | Predicted Negative | |-|-|-| | Actual Positive | True Positive | False Negative | | Actual Negative | False Positive | True Negative | Example: FDS (Fraud Detection System) An FDS is a system that detects erroneous or fraudulent transactions in financial trading, accounting ledgers, and so on. If the FDS's prediction is Positive, it predicted a fraudulent transaction; if Negative, it predicted a normal transaction. Depending on whether this matches reality: True Positive: fraud correctly predicted as fraud True Negative: normal correctly predicted as normal False Positive: normal wrongly predicted as fraud False Negative: fraud wrongly predicted as normal | | Predicted fraud | Predicted normal | | --------------------| ------------------------ | --------------------------------- | | Actual fraud | True Positive | False Negative | | Actual normal | False Positive | True Negative | Evaluation scores Accuracy The fraction of all samples that are predicted correctly $$\text{accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$$ Precision Of the samples predicted to belong to the class, the fraction that actually belong to it. For an FDS, the fraction of transactions judged fraudulent that really are fraudulent (a "conviction rate") $$\text{precision} = \dfrac{TP}{TP + FP}$$ Recall TPR: true positive rate. Of the samples that actually belong to the class, the fraction predicted to belong to it. For an FDS, the fraction of actual fraudulent transactions that were predicted as fraudulent (a "detection rate"); also called sensitivity $$\text{recall} = \dfrac{TP}{TP + FN}$$ Fall-Out FPR: false positive rate. Of the samples that do not actually belong to the class, the fraction predicted to belong to it. For an FDS, the fraction of normal transactions that the FDS flagged as fraudulent (a "false accusation rate") $$\text{fallout} = \dfrac{FP}{FP + TN}$$ F (beta) score The weighted harmonic mean of precision and recall $$ F_\beta = (1 + \beta^2) \, ({\text{precision} \times \text{recall}}) \, / \, ({\beta^2 \, \text{precision} + \text{recall}}) $$ F1 score beta = 1 $$ F_1 = 2 \cdot \text{precision} \cdot \text{recall} \, / \, (\text{precision} + \text{recall}) $$
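The formulas above can be computed directly from the four confusion-matrix counts. The TP/FN/FP/TN values below are made up for illustration:

```python
# Confusion-matrix counts (illustrative values)
TP, FN = 40, 10   # actual Positive
FP, TN = 5, 45    # actual Negative

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
fallout = FP / (FP + TN)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, fallout, f1)
```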
from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))

y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
print(classification_report(y_true, y_pred, target_names=["ant", "bird", "cat"]))
18. 분류의 기초/04. 분류(classification) 성능 평가.ipynb
zzsza/Datascience_School
mit
Intermediate Python - List Comprehension In this Colab, we will discuss list comprehension, an extremely useful and idiomatic way to process lists in Python. List Comprehension List comprehension is a compact way to create a list of data. Say you want to create a list containing ten random numbers. One way to do this is to just hard-code a ten-element list.
import random

[
    random.randint(0, 100),
    random.randint(0, 100),
    random.randint(0, 100),
    random.randint(0, 100),
    random.randint(0, 100),
    random.randint(0, 100),
    random.randint(0, 100),
    random.randint(0, 100),
    random.randint(0, 100),
    random.randint(0, 100),
]
content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb
google/applied-machine-learning-intensive
apache-2.0
Note: In the code above, we've introduced the random module. random is a Python package that comes as part of the standard Python distribution. To use Python packages we rely on the import keyword. That's pretty repetitive, and requires a bit of copy-paste work. We could clean it up with a for loop:
import random

my_list = []
for _ in range(10):
    my_list.append(random.randint(0, 100))

my_list
content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb
google/applied-machine-learning-intensive
apache-2.0
This looks much nicer. Less repetition is always a good thing. Note: Did you notice the use of the underscore to consume the value returned from range? You can use this when you don't actually need the range value, and it saves Python from assigning it to memory. There is an even more idiomatic way of creating this list of numbers in Python. Here is an example of a list comprehension:
import random

my_list = [random.randint(0, 100) for _ in range(10)]
my_list
content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb
google/applied-machine-learning-intensive
apache-2.0
Let's start by looking at the "for _ in range()" part. This looks like the for loop that we are familiar with. In this case, it is a loop over the range from zero through nine. The strange part is the for doesn't start the expression. We are used to seeing a for loop with a body of statements indented below it. In this case, the body of the for loop is to the left of the for keyword. This is the signature of list comprehension. The body of the loop comes first and the for range comes last. for isn't the only option for list comprehensions. You can also add an if condition.
[x for x in range(10) if x % 2 == 0]
content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb
google/applied-machine-learning-intensive
apache-2.0
You can add multiple if statements by using boolean operators.
print([x for x in range(10) if x % 2 == 0 and x % 3 == 0])
print([x for x in range(10) if x % 2 == 0 or x % 3 == 0])
content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb
google/applied-machine-learning-intensive
apache-2.0
You can even have multiple loops chained in a single list comprehension. The left-most loop is the outer loop and the subsequent loops are nested within. However, when cases become sufficiently complicated, we recommend using standard loop notation, to enhance code readability.
[(x, y) for x in range(5) for y in range(3)]
content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercises Exercise 1 Create a list expansion that builds a list of numbers between 5 and 67 (inclusive) that are divisible by 7 but not divisible by 3. Student Solution
### YOUR CODE HERE ###
content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercise 2 Use list comprehension to find the lengths of all the words in the following sentence. Student Solution
sentence = "I love list comprehension so much it makes me want to cry"
words = sentence.split()
print(words)

### YOUR CODE GOES HERE ###
content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb
google/applied-machine-learning-intensive
apache-2.0
Initial set-up Load experiments used for unified dataset calibration: - Steady-state activation [Wang1993] - Activation time constant [Courtemanche1998] - Deactivation time constant [Courtemanche1998] - Steady-state inactivation [Wang1993] - Inactivation time constant [Courtemanche1998] - Recovery time constant [Courtemanche1998]
from experiments.ito_wang import wang_act, wang_inact
from experiments.ito_courtemanche import (courtemanche_kin,
                                          courtemanche_rec,
                                          courtemanche_deact)

modelfile = 'models/nygren_ito.mmt'
docs/examples/human-atrial/nygren_ito_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Combine model and experiments to produce: - observations dataframe - model function to run experiments and return traces - summary statistics function to accept traces
observations, model, summary_statistics = setup(modelfile,
                                                wang_act,
                                                wang_inact,
                                                courtemanche_kin,
                                                courtemanche_deact,
                                                courtemanche_rec)

assert len(observations) == len(summary_statistics(model({})))

g = plot_sim_results(modelfile,
                     wang_act,
                     wang_inact,
                     courtemanche_kin,
                     courtemanche_deact,
                     courtemanche_rec)
docs/examples/human-atrial/nygren_ito_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Run ABC-SMC inference Set-up path to results database.
db_path = ("sqlite:///" +
           os.path.join(tempfile.gettempdir(), "nygren_ito_unified.db"))

logging.basicConfig()
abc_logger = logging.getLogger('ABC')
abc_logger.setLevel(logging.DEBUG)
eps_logger = logging.getLogger('Epsilon')
eps_logger.setLevel(logging.DEBUG)
docs/examples/human-atrial/nygren_ito_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Analysis of results
history = History('sqlite:///results/nygren/ito/unified/nygren_ito_unified.db')
df, w = history.get_distribution()
df.describe()
docs/examples/human-atrial/nygren_ito_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot summary statistics compared to calibrated model output.
sns.set_context('poster')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14

g = plot_sim_results(modelfile,
                     wang_act,
                     wang_inact,
                     courtemanche_kin,
                     courtemanche_deact,
                     courtemanche_rec,
                     df=df, w=w)
plt.tight_layout()
docs/examples/human-atrial/nygren_ito_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot gating functions
import pandas as pd

N = 100
nyg_par_samples = df.sample(n=N, weights=w, replace=True)
nyg_par_samples = nyg_par_samples.set_index([pd.Index(range(N))])
nyg_par_samples = nyg_par_samples.to_dict(orient='records')

sns.set_context('talk')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14

f, ax = plot_variables(V, nyg_par_map,
                       'models/nygren_ito.mmt',
                       [nyg_par_samples],
                       figshape=(2, 2))
docs/examples/human-atrial/nygren_ito_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot parameter posteriors
from ionchannelABC.visualization import plot_kde_matrix_custom
import myokit
import numpy as np

m, _, _ = myokit.load(modelfile)

originals = {}
for name in limits.keys():
    if name.startswith("log"):
        name_ = name[4:]
    else:
        name_ = name
    val = m.value(name_)
    if name.startswith("log"):
        val_ = np.log10(val)
    else:
        val_ = val
    originals[name] = val_

sns.set_context('paper')
g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals)
plt.tight_layout()
docs/examples/human-atrial/nygren_ito_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
1) First example
x, y = sp.symbols('x,y')
f = x**2 + y**2
gs = [x + y >= 4, x + y <= 4]
print_problem(f, gs)
sol = mp.solvers.solve_GMP(f, gs)
mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 1)
polynomial_optimization.ipynb
sidaw/mompy
mit
2) Unconstrained optimization: the six-hump camel back function A plot of this function can be found in the library of simulations. MATLAB got the solution $x^\ast = [0.0898, -0.7127]$ and a corresponding optimal value of $f(x^\ast) = -1.0316$
x1, x2 = sp.symbols('x1:3')
f = 4*x1**2 + x1*x2 - 4*x2**2 - 2.1*x1**4 + 4*x2**4 + x1**6/3
print_problem(f)
sol = mp.solvers.solve_GMP(f, rounds=1)
mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 2)
polynomial_optimization.ipynb
sidaw/mompy
mit
3) Multiple rounds Generally more rounds are needed to get the correct solutions.
x1, x2, x3 = sp.symbols('x1:4')
f = -(x1 - 1)**2 - (x1 - x2)**2 - (x2 - 3)**2
gs = [1 - (x1 - 1)**2 >= 0, 1 - (x1 - x2)**2 >= 0, 1 - (x2 - 3)**2 >= 0]
print_problem(f, gs)
sol = mp.solvers.solve_GMP(f, gs, rounds=4)
mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 3)
polynomial_optimization.ipynb
sidaw/mompy
mit
Yet another example
x1, x2, x3 = sp.symbols('x1:4')
f = -2*x1 + x2 - x3
gs = [0 <= x1, x1 <= 2, x2 >= 0, x3 >= 0, x3 <= 3, x1 + x2 + x3 <= 4, 3*x2 + x3 <= 6,
      24 - 20*x1 + 9*x2 - 13*x3 + 4*x1**2 - 4*x1*x2 + 4*x1*x3 + 2*x2**2 - 2*x2*x3 + 2*x3**2 >= 0]
hs = []
print_problem(f, gs)
sol = mp.solvers.solve_GMP(f, gs, hs, rounds=4)
print mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 2)
polynomial_optimization.ipynb
sidaw/mompy
mit
4) Motzkin polynomial The Motzkin polynomial is non-negative, but cannot be expressed in sum of squares. It attains global minimum of 0 at $|x_1| = |x_2| = \sqrt{3}/3$ (4 points). The first few relaxations are unbounded and might take a while for cvxopt to realize this.
x1, x2 = sp.symbols('x1:3')
f = x1**2 * x2**2 * (x1**2 + x2**2 - 1) + 1./27
print_problem(f)
sol = mp.solvers.solve_GMP(f, rounds=7)
print mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 4, maxdeg=3)
polynomial_optimization.ipynb
sidaw/mompy
mit
Generalized Moment Problem (GMP) The GMP is to $ \begin{align} \text{minimize} \quad &f(x)\\ \text{subject to} \quad &g_i(x) \geq 0,\quad i=1,\ldots,N\\ \quad &\mathcal{L}(h_j(x)) \geq 0,\quad j=1,\ldots,M \end{align} $ where $x \in \mathbb{R}^d$ and $f(x), g_i(x), h_j(x) \in \mathbb{R}[x]$ are polynomials, and $\mathcal{L}(\cdot): \mathbb{R}[x] \mapsto \mathbb{R}$ is a linear functional. Loosely speaking, we would like a measure $\mu$ to exist so that $\mathcal{L}(h) = \int h\ d\mu$. For details see: Jean B. Lasserre, 2011, Moments, Positive Polynomials and Their Applications Jean B. Lasserre, 2008, A semidefinite programming approach to the generalized problem of moments Estimating mixtures of exponential distributions We are interested in estimating the parameters of a mixture model from some observed moments, possibly with constraints on the parameters. Suppose that we have a mixture of 2 exponential distributions with density function $p(x; \beta) \sim \exp(-x/\beta)/\beta$ for $x \in [0,\infty)$. Suppose that $\pi_1 = \pi_2 = 0.5$ and $\beta_1 = 1, \beta_2 = 2$. We are allowed to observe moments of the data $\mathbb{E}[X^n] = {n!}(\pi_1 \beta_1^n + \pi_2 \beta_2^n)$ for $n=1,\ldots,6$. For more details on estimating mixture models using this method, see our paper Estimating mixture models via mixtures of polynomials and see the extra examples worksheet for another easy example on a mixture of Gaussians.
beta = sp.symbols('beta')
beta0 = [1, 2]; pi0 = [0.5, 0.5]
hs = [beta**m - (pi0[0]*beta0[0]**m + pi0[1]*beta0[1]**m) for m in range(1, 5)]
f = sum([beta**(2*i) for i in range(3)])

# note that hs are the LHS of h==0, whereas gs are sympy inequalities
print_problem(f, None, hs)
sol = mp.solvers.solve_GMP(f, None, hs)
print mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 2, tol=1e-3)
polynomial_optimization.ipynb
sidaw/mompy
mit
Now we try to solve the problem with insufficient moment conditions, but extra constraints on the parameters themselves.
gs = [beta >= 1]
f = sum([beta**(2*i) for i in range(3)])
hs_sub = hs[0:2]
print_problem(f, gs, hs_sub)
sol = mp.solvers.solve_GMP(f, gs, hs_sub)
print mp.extractors.extract_solutions_lasserre(sol['MM'], sol['x'], 2, tol=1e-3)
polynomial_optimization.ipynb
sidaw/mompy
mit
Plotting an ROC curve with an 1-D np.ndarray input
plot_curve(y, x1, kind="roc")
examples/plot_curve_examples.ipynb
jeongyoonlee/Kaggler
mit
Plotting an ROC curve with two 1-D np.ndarray inputs
plot_curve(y, [x1, x2], name=["x1", "x2"], kind="roc")
examples/plot_curve_examples.ipynb
jeongyoonlee/Kaggler
mit
Plotting an ROC curve with pd.DataFrame
plot_curve(y, df[["x1", "x2"]], kind="roc")
examples/plot_curve_examples.ipynb
jeongyoonlee/Kaggler
mit
Plotting a PR curve with pd.DataFrame
plot_curve(y, df[["x1", "x2"]], kind="pr")
examples/plot_curve_examples.ipynb
jeongyoonlee/Kaggler
mit
Reading Metadata from an Archive
import tarfile

with tarfile.open('example.tar', 'r') as t:
    print(t.getnames())

import tarfile
import time

with tarfile.open('example.tar', 'r') as t:
    for member_info in t.getmembers():
        print(member_info.name)
        print('  Modified:', time.ctime(member_info.mtime))
        print('  Mode    :', oct(member_info.mode))
        print('  Type    :', member_info.type)
        print('  Size    :', member_info.size, 'bytes')
        print()

import tarfile
import time

with tarfile.open('example.tar', 'r') as t:
    for filename in ['zlib_server.py', 'notthere.txt']:
        try:
            info = t.getmember(filename)
        except KeyError:
            print('ERROR: Did not find {} in tar archive'.format(
                filename))
        else:
            print('{} is {:d} bytes'.format(
                info.name, info.size))
DataCompression/tarfile.ipynb
gaufung/PythonStandardLibrary
mit
Creating a New Archive
import tarfile

print('creating archive')
with tarfile.open('tarfile_add.tar', mode='w') as out:
    print('add zlib_server.py')
    out.add('zlib_server.py')

print()
print('Contents:')
with tarfile.open('tarfile_add.tar', mode='r') as t:
    for member_info in t.getmembers():
        print(member_info.name)
DataCompression/tarfile.ipynb
gaufung/PythonStandardLibrary
mit
Appending to Archives
import tarfile

print('creating archive')
with tarfile.open('tarfile_append.tar', mode='w') as out:
    out.add('gzip.ipynb')

print('contents:',)
with tarfile.open('tarfile_append.tar', mode='r') as t:
    print([m.name for m in t.getmembers()])

print('adding zlib.ipynb')
with tarfile.open('tarfile_append.tar', mode='a') as out:
    out.add('zlib.ipynb')

print('contents:',)
with tarfile.open('tarfile_append.tar', mode='r') as t:
    print([m.name for m in t.getmembers()])
DataCompression/tarfile.ipynb
gaufung/PythonStandardLibrary
mit
We expect X to have 100 rows (data samples) and two columns (features), whereas the vector y should have a single column that contains all the target labels:
X.shape, y.shape
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
Visualizing the dataset We can plot these data points in a scatter plot using Matplotlib. Here, the idea is to plot the $x$ values (found in the first column of X, X[:, 0]) against the $y$ values (found in the second column of X, X[:, 1]). A neat trick is to pass the target labels as color values (c=y):
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.set_cmap('jet')
%matplotlib inline

plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], c=y, s=100)
plt.xlabel('x values')
plt.ylabel('y values')
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
You can see that, for the most part, data points of the two classes are clearly separated. However, there are a few regions (particularly near the left and bottom of the plot) where the data points of both classes intermingle. These will be hard to classify correctly, as we will see in just a second. Preprocessing the dataset The next step is to split the data points into training and test sets, as we have done before. But, before we do that, we have to prepare the data for OpenCV: - All feature values in X must be 32-bit floating point numbers - Target labels must be either -1 or +1 We can achieve this with the following code:
import numpy as np
X = X.astype(np.float32)
y = y * 2 - 1
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
Now we can pass the data to scikit-learn's train_test_split function, like we did in the earlier chapters:
from sklearn import model_selection as ms
X_train, X_test, y_train, y_test = ms.train_test_split(
    X, y, test_size=0.2, random_state=42
)
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
Here I chose to reserve 20 percent of all data points for the test set, but you can adjust this number according to your liking. Building the support vector machine In OpenCV, SVMs are built, trained, and scored the same exact way as every other learning algorithm we have encountered so far, using the following steps. Call the create method to construct a new SVM:
import cv2 svm = cv2.ml.SVM_create()
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
As shown in the following command, there are different modes in which we can operate an SVM. For now, all we care about is the case we discussed in the previous example: an SVM that tries to partition the data with a straight line. This can be specified with the setKernel method:
svm.setKernel(cv2.ml.SVM_LINEAR)
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
Call the classifier's train method to find the optimal decision boundary:
svm.train(X_train, cv2.ml.ROW_SAMPLE, y_train);
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
Call the classifier's predict method to predict the target labels of all data samples in the test set:
_, y_pred = svm.predict(X_test)
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
Use scikit-learn's metrics module to score the classifier:
from sklearn import metrics metrics.accuracy_score(y_test, y_pred)
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
Congratulations, we got 80 percent correctly classified test samples! Of course, so far we have no idea what happened under the hood. For all we know, we might as well have gotten these commands off a web search and typed them into the terminal, without really knowing what we're doing. But this is not who we want to be. Getting a system to work is one thing and understanding it is another. Let us get to that! Visualizing the decision boundary What was true in trying to understand our data is true for trying to understand our classifier: visualization is the first step in understanding a system. We know the SVM somehow came up with a decision boundary that allowed us to correctly classify 80 percent of the test samples. But how can we find out what that decision boundary actually looks like? For this, we will borrow a trick from the guys behind scikit-learn. The idea is to generate a fine grid of $x$ and $y$ coordinates and run that through the SVM's predict method. This will allow us to know, for every $(x, y)$ point, what target label the classifier would have predicted. We will do this in a dedicated function, which we call plot_decision_boundary. The function takes as inputs an SVM object, the feature values of the test set, and the target labels of the test set. The function then creates a contour plot, on top of which we will plot the individual data points colored by their true target labels:
def plot_decision_boundary(svm, X_test, y_test):
    # create a mesh to plot in
    h = 0.02  # step size in mesh
    x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
    y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    X_hypo = np.c_[xx.ravel().astype(np.float32),
                   yy.ravel().astype(np.float32)]
    _, zz = svm.predict(X_hypo)
    zz = zz.reshape(xx.shape)

    plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
    plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)

plt.figure(figsize=(10, 6))
plot_decision_boundary(svm, X_test, y_test)
notebooks/06.01-Implementing-Your-First-Support-Vector-Machine.ipynb
mbeyeler/opencv-machine-learning
mit
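As an aside, the 80 percent score reported earlier is nothing more than the fraction of test labels the classifier reproduced. With NumPy this reduces to a one-liner; the label vectors below are made up purely for illustration, since the real ones live in y_test and the SVM's output:

```python
import numpy as np

# Hypothetical labels, for illustration only.
y_true = np.array([1, 0, 1, 1, 0], dtype=np.int32)
y_pred = np.array([1, 0, 0, 1, 0], dtype=np.int32)

# Accuracy is just the fraction of matching labels.
accuracy = np.mean(y_pred == y_true)
print(accuracy)  # 4 of the 5 labels match
```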
Now we get a better sense of what is going on! The SVM found a straight line (a linear decision boundary) that best separates the blue and the red data samples. It didn't get all the data points right, as there are three blue dots in the red zone and one red dot in the blue zone. However, we can convince ourselves that this is the best straight line we could have chosen by wiggling the line around in our heads. If you're not sure why, please refer to the book (p. 148). So what can we do to improve our classification performance? One solution is to move away from straight lines and onto more complicated decision boundaries. Dealing with nonlinear decision boundaries What if the data cannot be optimally partitioned using a linear decision boundary? In such a case, we say the data is not linearly separable. The basic idea to deal with data that is not linearly separable is to create nonlinear combinations of the original features. This is the same as saying we want to project our data to a higher-dimensional space (for example, from 2D to 3D) in which the data suddenly becomes linearly separable. The book has a nice figure illustrating this idea. Implementing nonlinear support vector machines OpenCV provides a whole range of different SVM kernels with which we can experiment. Some of the most commonly used ones include: - cv2.ml.SVM_LINEAR: This is the kernel we used previously. It provides a linear decision boundary in the original feature space (the $x$ and $y$ values). - cv2.ml.SVM_POLY: This kernel provides a decision boundary that is a polynomial function in the original feature space. In order to use this kernel, we also have to specify a coefficient via svm.setCoef0 (usually set to 0) and the degree of the polynomial via svm.setDegree. - cv2.ml.SVM_RBF: This kernel implements the kind of Gaussian function we discussed earlier. 
- cv2.ml.SVM_SIGMOID: This kernel implements a sigmoid function, similar to the one we encountered when talking about logistic regression in Chapter 3, First Steps in Supervised Learning. - cv2.ml.SVM_INTER: This kernel is a new addition to OpenCV 3. It separates classes based on the similarity of their histograms. In order to test some of the SVM kernels we just talked about, we will return to our code sample mentioned earlier. We want to repeat the process of building and training the SVM on the dataset generated earlier, but this time we want to use a whole range of different kernels:
kernels = [cv2.ml.SVM_LINEAR, cv2.ml.SVM_INTER, cv2.ml.SVM_SIGMOID, cv2.ml.SVM_RBF]
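One entry in this list is worth unpacking before we run the comparison: under the hood, the RBF kernel computes the Gaussian similarity exp(-gamma * ||x - x'||^2) between two feature vectors. A minimal NumPy sketch (the gamma value here is arbitrary, chosen for illustration only):

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    """Gaussian (RBF) similarity between two feature vectors."""
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

a = np.array([0.0, 0.0])
b = np.array([1.0, 0.0])

print(rbf_kernel(a, a))  # identical points -> similarity 1.0
print(rbf_kernel(a, b))  # squared distance 1 -> exp(-1)
```

Points that coincide score 1.0, and the similarity decays towards 0 as the points move apart; gamma controls how quickly.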
Do you remember what all of these stand for? Setting a different SVM kernel is relatively simple. We take an entry from the kernels list and pass it to the setKernel method of the SVM class. That's all. The laziest way to repeat things is to use a for loop:
from sklearn import metrics

plt.figure(figsize=(14, 8))

for idx, kernel in enumerate(kernels):
    # build and train a fresh SVM with the current kernel
    svm = cv2.ml.SVM_create()
    svm.setKernel(kernel)
    svm.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
    _, y_pred = svm.predict(X_test)

    # plot the resulting decision boundary in its own subplot
    plt.subplot(2, 2, idx + 1)
    plot_decision_boundary(svm, X_test, y_test)
    plt.title('accuracy = %.2f' % metrics.accuracy_score(y_test, y_pred))
Pandas also provides a very easy and fast API to read stock data from various finance data providers.
import datetime

import pandas as pd
from pandas_datareader import data, wb  # pandas remote data access API

# NB: the Yahoo Finance endpoint has changed over the years; this call may
# require a recent pandas-datareader release to work.
googPrices = data.get_data_yahoo("GOOG",
                                 start=datetime.datetime(2014, 5, 1),
                                 end=datetime.datetime(2014, 5, 7))
googClosingPrices = pd.DataFrame(googPrices['Close'],
                                 index=tradeDates)
googClosingPrices
_oldnotebooks/Introduction_to_Pandas-4.ipynb
eneskemalergin/OldBlog
mit
We now have a time series that depicts the closing price of Google's stock from May 1, 2014 to May 7, 2014, with gaps in the date range since trading only occurs on business days. If we want to change the date range so that it shows calendar days (that is, including the weekend), we can change the frequency of the time series index from business days to calendar days as follows:
googClosingPricesCDays = googClosingPrices.asfreq('D') googClosingPricesCDays
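Since tradeDates and the price data come from earlier cells of the original notebook, here is a self-contained sketch of the same asfreq() trick, with made-up prices:

```python
import pandas as pd

# Five business days starting Thu 2014-05-01 (the weekend is skipped).
bdays = pd.date_range('2014-05-01', periods=5, freq='B')
prices = pd.Series([526.94, 527.81, 527.93, 529.92, 530.22], index=bdays)

# Re-index to calendar days: weekend rows appear, filled with NaN.
daily = prices.asfreq('D')
print(daily)
print(daily.isnull().sum())  # number of gaps introduced
```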
Note that we have now introduced NaN values for the closing price on the weekend dates of May 3, 2014 and May 4, 2014. We can check which values are missing by using the isnull() and notnull() functions as follows:
googClosingPricesCDays.isnull() googClosingPricesCDays.notnull()
A Boolean DataFrame is returned in each case. In datetime and pandas Timestamps, missing values are represented by the NaT value; this is the equivalent of NaN in pandas for time-based types.
tDates = tradeDates.copy()
tDates[1] = np.nan
tDates[4] = np.nan
tDates

FBVolume = [82.34, 54.11, 45.99, 55.86, 78.5]
TWTRVolume = [15.74, 12.71, 10.39, 134.62, 68.84]

socialTradingVolume = pd.concat([pd.Series(FBVolume),
                                 pd.Series(TWTRVolume),
                                 tradeDates], axis=1,
                                keys=['FB', 'TWTR', 'TradeDate'])
socialTradingVolume

socialTradingVolTS = socialTradingVolume.set_index('TradeDate')
socialTradingVolTS

socialTradingVolTSCal = socialTradingVolTS.asfreq('D')
socialTradingVolTSCal
We can perform arithmetic operations on data containing missing values. For example, we can calculate the total trading volume (in millions of shares) across the two stocks for Facebook and Twitter as follows:
socialTradingVolTSCal['FB']+socialTradingVolTSCal['TWTR']
By default, any operation performed on an object that contains missing values will return a missing value at that position as shown in the following command:
pd.Series([1.0, np.nan, 5.9, 6]) + pd.Series([3, 5, 2, 5.6])
pd.Series([1.0, 25.0, 5.5, 6]) / pd.Series([3, np.nan, 2, 5.6])
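The propagation rule can be checked programmatically: an element-wise operation leaves a NaN exactly where either operand was missing. A small self-contained check:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 5.9, 6]) + pd.Series([3, 5, 2, 5.6])

# Only position 1 had a missing operand, so only position 1 is NaN.
print(s.isnull().tolist())  # [False, True, False, False]
```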
There is a difference, however, in the way NumPy treats aggregate calculations versus what pandas does. In pandas, the default is to skip the missing values and compute the aggregate over the remaining ones, whereas for NumPy, NaN is returned if any of the values are missing. Here is an illustration:
np.mean([1.0, np.nan, 5.9, 6])
np.sum([1.0, np.nan, 5.9, 6])
However, if this data is in a pandas Series, we will get the following output:
pd.Series([1.0, np.nan, 5.9, 6]).sum()
pd.Series([1.0, np.nan, 5.9, 6]).mean()
It is important to be aware of this difference in behavior between pandas and NumPy. However, if we wish to get NumPy to behave the same way as pandas, we can use the np.nanmean and np.nansum functions, which are illustrated as follows:
np.nanmean([1.0, np.nan, 5.9, 6])
np.nansum([1.0, np.nan, 5.9, 6])
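Putting the two conventions side by side makes the contrast concrete; plain NumPy propagates the NaN, while pandas and the nan-aware NumPy functions skip it:

```python
import numpy as np
import pandas as pd

values = [1.0, np.nan, 5.9, 6]

print(np.mean(values))           # nan: plain NumPy propagates the missing value
print(np.nanmean(values))        # mean over the three present values (~4.3)
print(pd.Series(values).mean())  # pandas skips NaN by default, same result
```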
Handling Missing Values There are various ways to handle missing values, which are as follows: 1. By using the fillna() function to fill in the NA values:
socialTradingVolTSCal socialTradingVolTSCal.fillna(100)
We can also fill values forward or backward by passing method='ffill' or method='bfill' to the fillna() function:
socialTradingVolTSCal.fillna(method='ffill') socialTradingVolTSCal.fillna(method='bfill')
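Note that in recent pandas versions the method= keyword of fillna() is deprecated in favor of the dedicated ffill() and bfill() methods, which do the same thing. A small self-contained sketch with made-up numbers:

```python
import numpy as np
import pandas as pd

s = pd.Series([10.0, np.nan, np.nan, 40.0])

print(s.ffill().tolist())  # [10.0, 10.0, 10.0, 40.0] -- last valid value carried forward
print(s.bfill().tolist())  # [10.0, 40.0, 40.0, 40.0] -- next valid value carried backward
```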
2. By using the dropna() function to drop/delete rows or columns with missing values.
socialTradingVolTSCal.dropna()
3. We can also interpolate and fill in the missing values by using the interpolate() function:
pd.set_option('display.precision', 4)
socialTradingVolTSCal.interpolate()
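With the default linear method, each interpolated value is just the linear blend of its nearest valid neighbors, which is easy to verify on a toy Series:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# Linear interpolation fills the gap with the midpoint of 1.0 and 3.0.
print(s.interpolate().tolist())  # [1.0, 2.0, 3.0]
```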
The interpolate() function also takes a method argument that specifies the interpolation method. These methods include linear, quadratic, cubic spline, and so on.
Plotting using matplotlib
The matplotlib API is imported using the standard convention, as shown in the following command:
import matplotlib.pyplot as plt
Series and DataFrame have a plot method, which is simply a wrapper around plt.plot. Here, we will examine how we can do a simple plot of a sine and cosine function. Suppose we wished to plot the following functions over the interval $-\pi$ to $\pi$:
$$ f(x) = \cos(x) + \sin(x) \\ g(x) = \sin(x) - \cos(x) $$
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
f, g = np.cos(X) + np.sin(X), np.sin(X) - np.cos(X)
f_ser = pd.Series(f)
g_ser = pd.Series(g)

plotDF = pd.concat([f_ser, g_ser], axis=1)
plotDF.index = X
plotDF.columns = ['sin(x)+cos(x)', 'sin(x)-cos(x)']
plotDF.head()

plotDF.columns = ['f(x)', 'g(x)']
plotDF.plot(title='Plot of f(x)=sin(x)+cos(x),\n g(x)=sin(x)-cos(x)')
plt.show()

plotDF.plot(subplots=True, figsize=(6, 6))
plt.show()
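As a quick sanity check on the two curves: since f(x) = cos(x) + sin(x) and g(x) = sin(x) - cos(x), their sum should equal 2 sin(x) everywhere, and we can confirm this numerically:

```python
import numpy as np

X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
f = np.cos(X) + np.sin(X)
g = np.sin(X) - np.cos(X)

# f + g collapses to 2*sin(x); the cosine terms cancel.
print(np.allclose(f + g, 2 * np.sin(X)))  # True
```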
Introduction
Are you a machine learning engineer looking to use Keras to ship deep-learning powered features in real products? This guide will serve as your first introduction to core Keras API concepts.
In this guide, you will learn how to:
- Prepare your data before training a model (by turning it into either NumPy arrays or tf.data.Dataset objects).
- Do data preprocessing, for instance feature normalization or vocabulary indexing.
- Build a model that turns your data into useful predictions, using the Keras Functional API.
- Train your model with the built-in Keras fit() method, while being mindful of checkpointing, metrics monitoring, and fault tolerance.
- Evaluate your model on test data and use it for inference on new data.
- Customize what fit() does, for instance to build a GAN.
- Speed up training by leveraging multiple GPUs.
- Refine your model through hyperparameter tuning.
At the end of this guide, you will get pointers to end-to-end examples to solidify these concepts:
- Image classification
- Text classification
- Credit card fraud detection
Data loading & preprocessing
Neural networks don't process raw data, like text files, encoded JPEG image files, or CSV files. They process vectorized & standardized representations.
- Text files need to be read into string tensors, then split into words. Finally, the words need to be indexed & turned into integer tensors.
- Images need to be read and decoded into integer tensors, then converted to floating point and normalized to small values (usually between 0 and 1).
- CSV data needs to be parsed, with numerical features converted to floating point tensors and categorical features indexed and converted to integer tensors. Then each feature typically needs to be normalized to zero-mean and unit-variance.
- Etc.
Let's start with data loading.
Data loading
Keras models accept three types of inputs:
- NumPy arrays, just like Scikit-Learn and many other Python-based libraries. This is a good option if your data fits in memory.
- TensorFlow Dataset objects. This is a high-performance option that is more suitable for datasets that do not fit in memory and that are streamed from disk or from a distributed filesystem.
- Python generators that yield batches of data (such as custom subclasses of the keras.utils.Sequence class).
Before you start training a model, you will need to make your data available as one of these formats. If you have a large dataset and you are training on GPU(s), consider using Dataset objects, since they will take care of performance-critical details, such as:
- Asynchronously preprocessing your data on CPU while your GPU is busy, and buffering it into a queue.
- Prefetching data on GPU memory so it's immediately available when the GPU has finished processing the previous batch, so you can reach full GPU utilization.
Keras features a range of utilities to help you turn raw data on disk into a Dataset:
- tf.keras.preprocessing.image_dataset_from_directory turns image files sorted into class-specific folders into a labeled dataset of image tensors.
- tf.keras.preprocessing.text_dataset_from_directory does the same for text files.
In addition, the TensorFlow tf.data module includes other similar utilities, such as tf.data.experimental.make_csv_dataset to load structured data from CSV files.
Example: obtaining a labeled dataset from image files on disk
Suppose you have image files sorted by class in different folders, like this:
main_directory/
...class_a/
......a_image_1.jpg
......a_image_2.jpg
...class_b/
......b_image_1.jpg
......b_image_2.jpg
Then you can do:
```python
# Create a dataset.
dataset = keras.preprocessing.image_dataset_from_directory(
    'path/to/main_directory', batch_size=64, image_size=(200, 200))

# For demonstration, iterate over the batches yielded by the dataset.
for data, labels in dataset:
    print(data.shape)  # (64, 200, 200, 3)
    print(data.dtype)  # float32
    print(labels.shape)  # (64,)
    print(labels.dtype)  # int32
```
The label of a sample is the rank of its folder in alphanumeric order. Naturally, this can also be configured explicitly by passing, e.g. class_names=['class_a', 'class_b'], in which case label 0 will be class_a and 1 will be class_b.
Example: obtaining a labeled dataset from text files on disk
Likewise for text: if you have .txt documents sorted by class in different folders, you can do:
```python
dataset = keras.preprocessing.text_dataset_from_directory(
    'path/to/main_directory', batch_size=64)

# For demonstration, iterate over the batches yielded by the dataset.
for data, labels in dataset:
    print(data.shape)  # (64,)
    print(data.dtype)  # string
    print(labels.shape)  # (64,)
    print(labels.dtype)  # int32
```
Data preprocessing with Keras
Once your data is in the form of string/int/float NumPy arrays, or a Dataset object (or Python generator) that yields batches of string/int/float tensors, it is time to preprocess the data. This can mean:
- Tokenization of string data, followed by token indexing.
- Feature normalization.
- Rescaling the data to small values (in general, input values to a neural network should be close to zero -- typically we expect either data with zero-mean and unit-variance, or data in the [0, 1] range).
The ideal machine learning model is end-to-end
In general, you should seek to do data preprocessing as part of your model as much as possible, not via an external data preprocessing pipeline. That's because external data preprocessing makes your models less portable when it's time to use them in production. Consider a model that processes text: it uses a specific tokenization algorithm and a specific vocabulary index. When you want to ship your model to a mobile app or a JavaScript app, you will need to recreate the exact same preprocessing setup in the target language.
This can get very tricky: any small discrepancy between the original pipeline and the one you recreate has the potential to completely invalidate your model, or at least severely degrade its performance. It would be much easier to be able to simply export an end-to-end model that already includes preprocessing. The ideal model should expect as input something as close as possible to raw data: an image model should expect RGB pixel values in the [0, 255] range, and a text model should accept strings of utf-8 characters. That way, the consumer of the exported model doesn't have to know about the preprocessing pipeline. Using Keras preprocessing layers In Keras, you do in-model data preprocessing via preprocessing layers. This includes: Vectorizing raw strings of text via the TextVectorization layer Feature normalization via the Normalization layer Image rescaling, cropping, or image data augmentation The key advantage of using Keras preprocessing layers is that they can be included directly into your model, either during training or after training, which makes your models portable. Some preprocessing layers have a state: TextVectorization holds an index mapping words or tokens to integer indices Normalization holds the mean and variance of your features The state of a preprocessing layer is obtained by calling layer.adapt(data) on a sample of the training data (or all of it). Example: turning strings into sequences of integer word indices
import numpy as np
from tensorflow.keras.layers import TextVectorization

# Example training data, of dtype `string`.
training_data = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])

# Create a TextVectorization layer instance. It can be configured to either
# return integer token indices, or a dense token representation (e.g. multi-hot
# or TF-IDF). The text standardization and text splitting algorithms are fully
# configurable.
vectorizer = TextVectorization(output_mode="int")

# Calling `adapt` on an array or dataset makes the layer generate a vocabulary
# index for the data, which can then be reused when seeing new data.
vectorizer.adapt(training_data)

# After calling adapt, the layer is able to encode any n-gram it has seen before
# in the `adapt()` data. Unknown n-grams are encoded via an "out-of-vocabulary"
# token.
integer_data = vectorizer(training_data)
print(integer_data)
guides/ipynb/intro_to_keras_for_engineers.ipynb
keras-team/keras-io
apache-2.0
Example: turning strings into sequences of one-hot encoded bigrams
import numpy as np
from tensorflow.keras.layers import TextVectorization

# Example training data, of dtype `string`.
training_data = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])

# Create a TextVectorization layer instance. Here it is configured to return
# one-hot encoded bigrams rather than integer token indices.
vectorizer = TextVectorization(output_mode="binary", ngrams=2)

# Calling `adapt` on an array or dataset makes the layer generate a vocabulary
# index for the data, which can then be reused when seeing new data.
vectorizer.adapt(training_data)

# After calling adapt, the layer is able to encode any n-gram it has seen before
# in the `adapt()` data. Unknown n-grams are encoded via an "out-of-vocabulary"
# token.
binary_data = vectorizer(training_data)
print(binary_data)
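The Normalization layer mentioned earlier has state that is easy to reason about: adapt() records the per-feature mean and variance, and the layer then outputs (x - mean) / sqrt(var). The arithmetic it performs can be sketched in plain NumPy (this illustrates the math only, not the Keras implementation itself):

```python
import numpy as np

# Toy training data: 4 samples, 2 features.
training_data = np.array([[1.0, 200.0],
                          [2.0, 400.0],
                          [3.0, 600.0],
                          [4.0, 800.0]])

# What `Normalization.adapt()` records: per-feature mean and variance.
mean = training_data.mean(axis=0)
var = training_data.var(axis=0)

# What the layer then applies to any input batch.
normalized = (training_data - mean) / np.sqrt(var)

print(normalized.mean(axis=0))  # ~0 per feature
print(normalized.std(axis=0))   # ~1 per feature
```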