essicolo/ecologie-mathematique | 02_Python/2.ipynb | mit

2+2
67.1-43.3
2*4
2**4
1/2
1 / 2  # the spaces mean nothing here
"""
Explanation: Chapter 2: Python
The python is a family of legless reptiles comprising about ten species. But Python is also a programming language launched in 1991 by Guido van Rossum, a fan of the British comedy group Monty Python.
Source: Ministry of Silly Walks, Monty Python
Python is among the most widely used programming languages in the world. It is a dynamic language, meaning that code can be executed line by line or block by block: a major advantage for activities that require frequent interaction. Although Python is mostly used to build applications, it is increasingly establishing itself as a tool of choice for scientific computing, thanks to recent developments in analysis, modelling and visualization libraries, several of which will be used in this manual.
Setting up your computer
Installing Python
Installing and managing Python on a computer would be a rather arduous task were it not for the Anaconda distribution, which specializes in scientific computing. Anaconda is freely distributed for Linux, Windows and OS X (other equivalent Python distributions exist, notably Enthought and Python(x,y)). It is also possible to work with Python in the cloud, notably with SageMath. For now, however, the cloud option does not match a local environment in terms of efficiency, versatility and autonomy.
On Windows and OS X, download and install! On Linux, download the installer, open a terminal, navigate to the download directory (for example cd ~/Téléchargements), then run the command given on the download page, for example bash Anaconda3-4.3.1-Linux-x86_64.sh.
Note. The modules presented in this course should be available on Linux, Windows and Mac. That is not the case for every Python module; most nevertheless work on Linux. Whether on Ubuntu, one of its many derivatives (Elementary, Linux Mint, KDE Neon, etc.), Debian, openSUSE, Arch, Fedora or another distribution, Linux operating systems are good options for scientific computing.
First steps with Python
Let's see whether Python is as free as claimed. If Python is properly installed on your computer, the most direct way to launch it is to open a terminal (search for cmd in the menu if you are on Windows). In that terminal, type python, then press Enter. You should get something like this.
The >>> symbols form what is called the command prompt. Here I enter the commands in the notebook, but for now, enter them in the terminal (we will start working with notebooks a little further on).
Basic operations
"Freedom is the freedom to say that two plus two make four. If that is granted, all else follows." - George Orwell, 1984
End of explanation
"""
a = 3
"""
Explanation: So far so good. Notice that the last operation has spaces between the numbers and the / operator. In this case (it is not always the case), the spaces mean nothing - it is even recommended to add them to make the code clearer, which is useful when equations get complex. Then, after the operation 1 / 2, I placed the # symbol followed by a note. Python interprets the # symbol as an order to ignore whatever follows it. This is very useful for inserting relevant comments directly into the code to better understand the operations.
Assigning objects to variables is fundamental in programming. For example.
End of explanation
"""
a * 6
A + 2
"""
Explanation: Technically, a points to the integer 3. Consequently, we can perform operations on a.
End of explanation
"""
rendement_arbre = 50 # apples/tree
nombre_arbre = 300 # trees
nombre_pomme = rendement_arbre * nombre_arbre
nombre_pomme
"""
Explanation: The error message tells us that A is not defined. Its lowercase version, a, is. The reason is that Python is case-sensitive when defining objects. Using the wrong case therefore leads to errors.
A variable name must always start with a letter and must not contain reserved characters (spaces, +, *, .). By convention, names starting with an uppercase letter are used to define classes, useful for software development but rarely used in a scientific worksheet.
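A minimal sketch of this case sensitivity (the variable names below are hypothetical):

```python
# 'pomme' and 'Pomme' are two distinct names: Python is case-sensitive
pomme = 3
Pomme = 5
print(pomme + Pomme)  # prints 8, the sum of two different variables
```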
End of explanation
"""
a = "L'ours"
b = "polaire"
a + " " + b + " ressemble à un faux zèbre."
"""
Explanation: Data types
Until now, we have only used integers (integer or int) and real numbers (float or float64). Python includes other types. A character string (string) is one or more symbols. It is defined between double quotes " " or apostrophes ' '. There is no standard on the use of one or the other, but as a general rule, apostrophes are used for short expressions containing a single word or sequence of letters, and double quotes for sentences. One reason for this: double quotes are convenient for inserting apostrophes inside a character string.
End of explanation
"""
c = a + " " + b
len(c)
"""
Explanation: Note that the object a was defined previously. In Python it is possible to reassign a variable, but this can lead to confusion, and even to calculation errors if a variable is no longer bound to the object you meant to refer to.
The + operator on strings returns a concatenation.
How many characters does the string "L'ours polaire" contain? Python can count. Let's ask.
End of explanation
"""
a = 17
print(a < 10)
print(a > 10)
print(a == 10)
print(a != 10)
print(a == 17)
print(~a == 17)
"""
Explanation: Fourteen, that's right (count "L'ours polaire", including the space). len, for length, is a function included by default in the Python working environment. The function is called by writing len(). But a function of what? Of the arguments placed between the parentheses. In this case, there is a single argument: c.
In scientific computing, it is common to ask whether a result is true or false.
End of explanation
"""
print('o' in 'Ours')
print('O' in 'Ours')
print('O' not in 'Ours')
"""
Explanation: I have just introduced a new data type: boolean data (boolean, or bool), which can only take two states - True or False. At the same time, I used the print function because in my notebook, only the last operation displays its result. To force an output, use print. Then, we saw above that the = symbol is reserved for assigning objects: for equality tests, use the double equal, ==, or != for inequality. Finally, to invert a boolean, the idiomatic operator is not; the ~ symbol used above is actually bitwise inversion on integers, and only happens to print the same result here.
For tests on character strings, use in and its inverse not in.
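A small sketch of the difference between the two inversion operators (worth knowing, since ~ applied to a Python boolean performs integer bitwise inversion rather than logical negation):

```python
# 'not' performs logical negation on booleans
print(not True)   # False
# '~' is bitwise inversion: True behaves as the integer 1, and ~1 is -2
print(~True)      # -2
```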
End of explanation
"""
espece = ['Petromyzon marinus', 'Lepisosteus osseus', 'Amia calva', 'Hiodon tergisus']
espece
"""
Explanation: Data collections
The previous exercises introduced the default Python data types that matter most for scientific computing: int (integer), float64 (real number), str (string) and bool (boolean). Others will be added throughout the course, such as time units (date-time), categories and georeferenced geometries (points, lines, polygons). When carrying out computations in science, we rarely work with single values. We prefer to organize and process them in collections. By default, Python offers three important types: lists, tuples and dictionaries.
First, lists, or list, are a series of variables with no restriction on their type. They can even contain other lists. A list is delimited by square brackets [ ], and its elements are separated by commas.
End of explanation
"""
print(espece[0])
print(espece[2])
print(espece[:2])
print(espece[2:])
"""
Explanation: To access the elements of a list, call the list followed by the position of the desired object between square brackets. An important fact that will come up throughout the course: in Python, the index of the first element is zero.
End of explanation
"""
espece.append("Cyprinus carpio")
espece
"""
Explanation: For the last two commands, the position :2 means up to 2, exclusive, and 2: means from 2 to the end.
To add an element to our list, we can use the append function.
End of explanation
"""
print(espece)
espece[2] = "Lepomis gibbosus"
print(espece)
"""
Explanation: Note that the append function is called after the variable, preceded by a dot. This way of proceeding is common in object-oriented programming. The append function is an attribute of a list object and takes a single argument: the object added to the list. It is a way of saying grenouille.saute(longueur=1.0), i.e. frog.jump(length=1.0).
By running espece[2] = "Lepomis gibbosus", we see that it is possible to change an element of the list.
End of explanation
"""
mat = [[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]]
mat
"""
Explanation: If the data contained in a list are all of the same type, the list can be regarded as a vector. By creating a list of vectors of consistent dimensions, we create a matrix. We will see later that for vectors and matrices we will use a format provided by a complementary module. For now, we could define a matrix as follows.
End of explanation
"""
espece = ('Petromyzon marinus', 'Lepisosteus osseus', 'Amia calva', 'Hiodon tergisus')
espece[2] = "Lepomis gibbosus"
espece
"""
Explanation: Tuples, known as tuple in Python, differ from lists in that their elements cannot be modified. A tuple is delimited by parentheses ( ), and as with lists, its elements are separated by commas. Tuples are less versatile than lists. You will probably use them rarely, mostly as arguments to certain functions in scientific computing.
End of explanation
"""
tableau = {'espece': ['Petromyzon marinus', 'Lepisosteus osseus', 'Amia calva', 'Hiodon tergisus'], 'poids': [10, 13, 21, 4], 'longueur': [35, 44, 50, 8]}
print('Mon tableau: ', tableau)
print('Mes espèces:', tableau['espece'])
print('Noms des clés (ou colonnes):', tableau.keys())
"""
Explanation: Dictionaries, or dict, are collections whose elements are each identified by a key. A dictionary is delimited by curly braces, in the form mon_dict = {'clé1': x, 'clé2': y, 'clé3': z}. An element is retrieved by its key between square brackets, for example mon_dict['clé1'].
A dict comes close to a table: we will see later that the table format (provided by a complementary module) is built on the dict format. Unlike a table, where every column contains the same number of rows, each element of a dictionary is independent of the others.
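As a small illustration of this independence (the dictionary and key names below are hypothetical), keys can be added after creation and their values need not have matching lengths or types:

```python
# A hypothetical inventory: keys are added freely after creation
inventaire = {'espece': ['Amia calva', 'Hiodon tergisus'], 'poids': [21, 4]}
inventaire['lac'] = 'Saint-Pierre'   # a plain string alongside two lists
print(inventaire['lac'])             # prints Saint-Pierre
```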
End of explanation
"""
sqrt(2)
"""
Explanation: Functions (or methods)
Earlier, I presented the len and append functions. A myriad of functions ship by default with Python. But some are also sorely missing.
End of explanation
"""
def racine(x, n=2):
r = x**(1/n)
return r
"""
Explanation: Error message: the sqrt command is not defined.
What, Python can't even compute a square root?
By default, no.
But!
Many extensions (the modules) make up for these gaps. We will get to that a bit later in this chapter. For now, let's practice by creating our own square root function.
End of explanation
"""
print(racine(9))
print(racine(x=9))
print(racine(8, 3))
print(racine(x=8, n=3))
"""
Explanation: In Python, def is the keyword for defining a function. After a space comes the name you want to give the function: racine (root). The function's arguments follow between the parentheses. Here, x is the value from which we want to extract the root and n is the order of the root. The argument x has no default value: it must be specified for the function to work. The notation n=2 means that if no value is given for n, it takes the value 2 (the square root). To mark the end of the signature and the beginning of the instruction block, we use a colon :, then a line break. An indentation of four spaces indicates that we are inside the instruction block, where a value r is computed as x raised to the inverse of the root's order. The last line states what the function must return.
End of explanation
"""
data = [3.5, 8.1, 10.2, 0.5, 5.6]
racine(x=data, n=2)
"""
Explanation: If they are not named, Python understands that the arguments are entered in the order defined in the function. When entering racine(9), Python understands that 9 is assigned to x and gives n its default value, 2. This is equivalent to entering racine(x=9). The other entries are likewise equivalent, and extract the cube root. If there could be confusion between named and unnamed arguments, Python will return an error message. As a general rule, for code readability it is preferable to name arguments rather than rely on their order.
Now suppose you have a list of data from which you want to extract the root.
End of explanation
"""
racine_data = []
for i in [0, 1, 2, 3, 4]:
r = racine(x=data[i], n=2)
racine_data.append(r)
racine_data
"""
Explanation: Oops. Python tells you there is an error, and indicates with an arrow ----> on which line of our function it occurred. Exponents ** (the pow function can also be used) are not applicable to lists. One solution is to apply the function to each element of the list with an iteration. We will see more efficient ways of proceeding later. I use this case study to introduce iterative loops.
Loops
Loops allow the same sequence of operations to be carried out on several objects. Continuing our example:
End of explanation
"""
racine_data = []
for i in range(len(data)):
r = racine(x=data[i], n=2)
print('Racine carrée de ', data[i], ' = ', r)
racine_data.append(r)
"""
Explanation: We first created an empty list, racine_data. Then, for (for) each index of the list (i in [0, 1, 2, 3, 4]), we ask Python to carry out the sequence of operations that follows the : and is indented by four spaces. In that sequence, compute the square root of data at index i, then append it to the racine_data list. Instead of entering the list [0, 1, 2, 3, 4], we could have used the range function and let it derive the length of the list automatically.
End of explanation
"""
print(range(len(data)))
print(list(range(len(data))))
print(range(2, len(data)))
print(list(range(2, len(data))))
"""
Explanation: The range function returns a sequence that is computed on demand. It is evaluated when iterated over in a loop or when passed to list.
End of explanation
"""
x = 100
while (x > 1.1):
x = racine(x)
print(x)
"""
Explanation: First observation: with a single argument, range returns a sequence starting from zero. Second observation: the sequence ends by excluding the argument. Thus range(2, 5) returns the sequence [2, 3, 4]. By giving the length of data as an argument, range(5) returns the list [0, 1, 2, 3, 4], which are exactly the indices we need to iterate over the list.
for loops will, for example, let you generate 10, 100, 1000 plots in no time (as many as you want), each produced from simulations with different initial conditions, and save them to a directory on your computer. Work that could take weeks in Excel can be done in Python in a few seconds.
A second tool is available for iteration: while loops. They carry out an operation as long as a criterion has not been met. They are useful for operations seeking convergence. I cover them briefly since they are rarely used in everyday workflows. Here is a small example.
End of explanation
"""
racine(x=-1, n=2)
"""
Explanation: We initialized x to a value of 100. Then, as long as (while) the test x > 1.1 is true, x is assigned the new value obtained by taking the root of its previous value. Finally, the value is displayed with print at each step.
Let's now explore how Python reacts when asked to compute $\sqrt{-1}$.
End of explanation
"""
def racine_positive_nn(x, n=2):
if x<0:
raise ValueError("x est négatif")
elif x==0:
raise ValueError("x est nul")
else:
r = x**(1/n)
return r
"""
Explanation: First, Python does not return an error message but a new data type: an imaginary (complex) number. Also, 6.123233995736766e-17 is not zero, but very close. Since calculations are resolved numerically, we sometimes observe slight deviations from the mathematical solutions.
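The same behaviour can be reproduced without our racine function; this quick sketch shows both the complex result and the tiny floating-point residue:

```python
# Raising a negative number to a fractional power yields a complex result
z = (-1) ** 0.5
print(z)             # approximately 1j, with a tiny real part
print(abs(z.real))   # on the order of 1e-17: numerically close to zero
```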
If, for a particular case, we want to prevent our function from returning an imaginary number, how should we proceed? With a condition.
Conditions: if, elif, else
If condition 1 is met, carry out instruction block 1. If condition 1 is not met, and condition 2 is met, carry out instruction block 2. Otherwise, carry out instruction block 3.
That is how a chain of conditions is expressed. For our root of a negative number, we could proceed as follows.
End of explanation
"""
racine_positive_nn(x=-1, n=2)
racine_positive_nn(x=0, n=2)
racine_positive_nn(x=4, n=2)
"""
Explanation: The positive non-null root (racine_positive_nn) uses the keywords if, elif (a contraction of else if) and else (otherwise). ValueError, preceded by raise, raises an error with a message. As with def and for, the instructions of each condition are indented. Note the double indentation (8 spaces) for the instructions inside the conditions. While most programming languages require instructions to be nested in parentheses, braces and brackets, Python prefers to force us to indent the code properly (which we should do anyway to improve readability) and relies on the indentation to carry out its operations.
End of explanation
"""
import numpy as np
np.sqrt(9)
from numpy import sqrt
sqrt(9)
"""
Explanation: Loading a module
The numpy module, installed by default with Anaconda, is a numerical computing toolbox populated with many mathematical functions. Including the square root.
End of explanation
"""
a = 5/2
a
"""
Explanation: Most of the functions you will have to build will be devoted to instructions specific to your case study. For most general-purpose operations (such as square roots, statistical tests, handling matrices and tables, plotting, learning models, etc.), teams have already developed the necessary functions and made them available to the public. This concludes the introduction to Python.
As with a natural language, you only learn to express yourself in a programming language by putting yourself to the test, which you will do throughout this course.
Let's take a break from Python and move on to some technicalities.
The working environment
The conda package manager
Once Anaconda is installed, installing modules and virtual environments becomes possible with the conda package manager. This section only presents an overview of its capabilities, based on what you will need for this course. For more information, consult the guide.
Installing modules
Without modules, Python would not be much of a scientific computing environment. Fortunately, modules exist to make life easier for scientists who want to carry out simple operations such as means and angles, or more complicated ones such as integrals and artificial intelligence algorithms. Several modules are installed by default with Anaconda. To list all the modules installed in an environment, open a terminal (if you are inside a Python session, you must first quit with the command quit()) and run:
conda list
Modules are downloaded and installed from online repositories. Continuum Analytics, the company that develops and supports Anaconda, offers its own repositories. By default, the statsmodels module (which we will use for certain operations) will be downloaded from the default repositories if you run:
conda install statsmodels
It is preferable to use the community repository conda-forge rather than the official Continuum Analytics repositories. On conda-forge, more modules are available, they are more up to date, and their quality is controlled.
conda config --add channels conda-forge
Afterwards, all modules will be downloaded from conda-forge. To update all modules, run:
conda update --all
Installing virtual environments
Picture yourself working on complex data requiring many operations. Every week, you habitually run conda update --all to update your modules, which fixes bugs and adds features. The development team of one module has decided to change a function, for the better. You are not aware of this change and you spend two days hunting for the cause of an error message in your calculations. You send your calculation file to a colleague who has not updated that module, and your fixes cause problems on their side. Believe me, this happens often.
Virtual environments exist to prevent this. A virtual environment is a directory in which Python and its modules are isolated. For a specific project, you can create a virtual environment under Python 2.7.9 and install specific module versions without updating them. This lets you work with tools that do not change during the project, and lets colleagues work with the same versions.
To create an environment named fertilisation_laitue including Python version 2.7.9 (the statsmodels module, version 0.6.0, will be added afterwards), run:
conda create -n fertilisation_laitue python=2.7.9
The project directory will automatically be created in the envs directory of your Anaconda installation.
To activate this environment, on Linux and OS X:
source activate fertilisation_laitue
On Windows:
activate fertilisation_laitue
From within the virtual environment, you can install the modules you need, specifying the version. For example,
conda install statsmodels=0.6.0
From your virtual environment (including the root environment), you can also launch Jupyter, an interface that will let you interact with Python in a user-friendly way.
As an example, let's prepare for the course by creating a virtual environment that includes Python version 3.5. Previously, we did this in two steps: (1) create the environment, then (2) install the libraries. We can just as well do it all in one go. I arbitrarily name the environment ecolopy.
conda create -n ecolopy python=3.5 numpy scipy pandas matplotlib jupyterlab
Let's activate the environment (Linux and OS X: source activate ecolopy, Windows: activate ecolopy). Since I use Linux,
source activate ecolopy
To share a working environment with colleagues or with posterity, you can generate a requirements list via conda list -e > req.txt, from which anyone using Anaconda can create a virtual environment identical to yours via conda create -n ecolopy --file req.txt.
To test the environment, launch python!
python
For this course, you are free to create a working environment or to work in the default environment (named root).
The first line imports the NumPy module (numpy) and creates an instance whose name we optionally choose: np (the conventional name for numpy). In doing so, we call numpy and bind it to np, so that we can reach the sqrt function of np with np.sqrt(). If you only want to import the sqrt function and do not intend to use all of NumPy:
Many modules will be used during this course. The following section presents them briefly.
Core modules
NumPy
NumPy, a contraction of Numerical Python, gives access to many mathematical functions and is invariably involved in operations on matrices. The vast majority of operations carried out in this course will refer to NumPy, explicitly or implicitly (through another module built on NumPy). NumPy notably:
gives access to basic mathematical operations such as square roots, trigonometry, logarithms, etc.;
performs fast operations on multidimensional arrays (ndarray, or n-dimensional array), including linear algebra computations;
performs element-wise, row-wise, column-wise computations, etc., thanks to "vectorization" - for example, adding a scalar to a vector adds the scalar to every element of the vector;
imports and exports data files;
generates random numbers according to probability distributions.
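A minimal sketch of this vectorized behaviour (assuming numpy is installed, e.g. via Anaconda):

```python
import numpy as np

# Adding a scalar to a vector adds it to every element (broadcasting)
v = np.array([1.0, 2.0, 3.0])
print(v + 10)       # [11. 12. 13.]
print(np.sqrt(v))   # element-wise square root
```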
SciPy
Built on NumPy, SciPy is a collection of mathematical functions offering a panoply of tools for scientific computing. It simplifies some NumPy functions and provides utilities that become essential for common operations, notably:
integral calculus and the solution of systems of ordinary differential equations
interpolation between coordinates
signal processing and analysis
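As a hedged sketch of the integral calculus mentioned above (assuming SciPy is installed, e.g. via Anaconda), scipy.integrate.quad computes $\int_0^1 x^2 \, dx = 1/3$:

```python
from scipy.integrate import quad

# quad returns the integral estimate and an error bound
resultat, erreur = quad(lambda x: x ** 2, 0, 1)
print(resultat)   # close to 1/3
```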
Note. Libraries bearing the scikit prefix belong to the family of SciPy extension toolkits.
pandas
Data are often organized as tables, with columns representing measured variables and rows representing observations. The pandas library offers a toolkit for working with data tables (DataFrame) efficiently. With execution speed inherited from NumPy, pandas brings in the approach of relational databases (SQL) to filter, slice, summarize, format and merge tables.
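A minimal sketch of a DataFrame built from a dict, with SQL-like row filtering (assuming pandas is installed; the data here are hypothetical):

```python
import pandas as pd

# Columns are variables, rows are observations
tableau = pd.DataFrame({'espece': ['Amia calva', 'Hiodon tergisus'],
                        'poids': [21, 4]})
print(tableau[tableau['poids'] > 10])   # keeps only rows where poids > 10
```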
matplotlib
Plots are visual summaries of data that would otherwise be painful to interpret. Despite recent developments in visualization for Python, matplotlib remains the base library for producing plots: scatter plots, lines, boxplots, histograms, contours, etc. Others exist, such as altair, seaborn and bokeh, which will be presented when appropriate.
Specialized modules: <<<TO BE ADJUSTED AT THE END>>>
SymPy
Symbolic computation holds an important theoretical place in scientific computing. SymPy will be used to validate functions derived from differential equations.
statsmodels
More than just statistics, the statsmodels library is designed as a companion for data analysis. It will help carry out statistics such as analyses of variance, regressions and survival analyses, but also preprocessing operations such as the imputation of missing data.
scikit-learn
Machine learning detects structure in data with the aim of predicting a new occurrence, whether one or more numerical variables (regression) or categorical variables (classification). Many such algorithms are bound to be used in the life sciences. scikit-learn is a toolkit for approaching these complex tools in an efficient, user-friendly and consistent way, as well as offering the ability to package learning machines into software. The scikit-learn documentation is of rare quality. scikit-learn can also be used for unsupervised classification (classifying data that have no predetermined categories), notably cluster analysis (clustering).
scikit-bio
scikit-bio will be used mainly for compositional analysis and for ordination. Its possibilities do not stop there, however. Technically, the scikit-bio library has less in common with scikit-learn than with QIIME, free software dedicated to bioinformatics, a discipline related to ecological engineering but focused on genetic analysis.
bokeh
A plot is a visual representation of data. Although matplotlib is an essential tool for scientific computing with Python, many other libraries have been developed to fill its gaps. One of them stems from the shift from traditional publishing (paper, then PDF) to the publishing of interactive documents. bokeh is a library that, among others (notably plotly and mpld3), offers the ability to create interactive plots. Bonus: bokeh is also a platform for developing scientific software.
ggplot
gg, for Grammar of Graphics. It is above all a language for expressing the passage from data to its graphical representation. The ggplot2 module is one of the most prized of the R language. Fortunately, a working group has produced a Python version, less complete but highly useful for plotting in a convenient way, both for data exploration and for publication.
SfePy
Simple Finite Elements with Python is a large module designed to approach, as simply as possible, the modelling of partial differential equations by finite elements. This method, widely used in engineering, will be useful for modelling a panoply of deterministic mechanisms: energy transfer, water flow, solute transport and species dispersal.
lifelines
How much time is left before an event? That is the question survival analysis asks. lifelines is a Python module designed exactly for that.
Interfaces
They are called graphical interfaces or integrated development environments and are designed to ease the use of a programming language, often for particular applications. Using Python only in a terminal is not very practical for keeping track of calculations. Like most interfaces designed for scientific computing, Jupyter has three components: a command editor, a means of executing commands and a plot display.
Jupyter
Jupyter lab
Formerly named IPython notebook, then Jupyter notebook, Jupyter lab is inspired by a familiar format in science: the laboratory notebook.
<img src="https://raw.githubusercontent.com/jupyterlab/jupyterlab/master/jupyter-plugins-demo.gif" width="600">
Jupyter lab runs in a web browser window. The code is interpreted by IPython, an interpreter for scientific computing with Python. Each cell can contain explanatory text (edited in markdown, a text-processing tool in which formatting is done within the text using special characters), including equations (written in LaTeX format via MathJax - online equation editors exist), or pieces of code. Incidentally, these course notes are written in Jupyter notebooks.
nteract
To free Jupyter from the web browser, a team developed the nteract software, a streamlined desktop-application version of Jupyter. This is the interface we will use during this course. Download the installer and install!
Others
The Rodeo interface includes panes for code editing, the IPython interpreter, the working session, and graph display. In fact, it mimics the RStudio interface for R. It is a visually elegant and modern solution, but not as complete as Spyder.
Spyder is an acronym for "Scientific PYthon Development EnviRonment". If you installed Anaconda, Spyder is already installed on your computer. It is comparable to Rodeo, but older, more complete, and also more complex.
There are also several other open-source development environments (Atom/Hydrogen, Eclipse/PyDev, Komodo IDE, LightTable, Ninja IDE). But some will prefer to simply use a good text editor (Atom, Brackets, LightTable, Notepad++, etc.) together with a terminal running IPython.
Tips for using Jupyter
This manual is created with Jupyter. By double-clicking in its text cells, you will gain access to the markdown code, a simplified HTML-like language for inserting headings, tables, italic and bold emphasis, equations, links, quotations, callouts, lists, etc.
Markdown cells
Formatting text in markdown is relatively easy. You will most likely use these tools:
```
# Heading 1
## Heading 2
###### Heading 6

Format `inline code`. Or a

    paragraph dedicated to code

Bullet list
- item 1
- item 2
    - item 2.1
        - item 2.1.1

Numbered list
0. item 1
0. item 2
    0. item 2.1
        0. item 2.1.1

Text emphasized in *italics* or in **bold**.

[Hyperlink](https://python.org)

![Insert an image](image.png)

Insert an inline equation: $\alpha + \beta$. Or as a standalone paragraph:

$$ c = \sqrt{\left( a^2 + b^2 \right)} $$
```

| Symbol | Format |
| --- | --- |
| Heading 1 | `# Heading 1` |
| Heading 2 | `## Heading 2` |
| Heading 6 | `###### Heading 6` |
| Inline code | `` `code` `` |
| Code paragraph | a block fenced with `` ``` `` |
| Bullet list items | `- item`, indented for sub-items |
| Numbered list items | `0. item`, indented for sub-items |
| Italics | `*italics*` |
| Bold | `**bold**` |
| Hyperlink | `[link](https://python.org)` |
| Image | `![alt text](image.png)` |
| Equation | LaTeX between `$` signs, e.g. `$c = \sqrt{\left( a^2 + b^2 \right)}$` |
Code cells
Earlier in this chapter, it was suggested to enter commands in a terminal. From this point on, it will be preferable to use a notebook and to run computations in code cells.
End of explanation
"""
|
synthicity/activitysim | activitysim/examples/example_estimation/notebooks/04_auto_ownership.ipynb | agpl-3.0 | import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
"""
Explanation: Estimating Auto Ownership
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
"""
os.chdir('test')
"""
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
"""
modelname = "auto_ownership"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
"""
Explanation: Load data and prep model for estimation
End of explanation
"""
data.coefficients
"""
Explanation: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
Coefficients
End of explanation
"""
data.spec
"""
Explanation: Utility specification
End of explanation
"""
data.chooser_data
"""
Explanation: Chooser data
End of explanation
"""
model.estimate()
"""
Explanation: Estimate
With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
End of explanation
"""
model.parameter_summary()
"""
Explanation: Estimated coefficients
End of explanation
"""
from activitysim.estimation.larch import update_coefficients
result_dir = data.edb_directory/"estimated"
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
"""
Explanation: Output Estimation Results
End of explanation
"""
model.to_xlsx(
result_dir/f"{modelname}_model_estimation.xlsx",
data_statistics=False,
)
"""
Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood
End of explanation
"""
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
"""
Explanation: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.
End of explanation
"""
|
DJCordhose/ai | notebooks/rl/berater-v6.ipynb | mit | !pip install git+https://github.com/openai/baselines >/dev/null
!pip install gym >/dev/null
"""
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/rl/berater-v6.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Berater Environment v6
Changes from v5
use complex customer graph
next steps
per episode set certain rewards to 0 to simulate different customers per consultant
make sure things generalize well
Installation (required for colab)
End of explanation
"""
import numpy
import gym
from gym.utils import seeding
from gym import spaces
def state_name_to_int(state):
state_name_map = {
'S': 0,
'A': 1,
'B': 2,
'C': 3,
'D': 4,
'E': 5,
'F': 6,
'G': 7,
'H': 8,
'K': 9,
'L': 10,
'M': 11,
'N': 12,
'O': 13
}
return state_name_map[state]
def int_to_state_name(state_as_int):
state_map = {
0: 'S',
1: 'A',
2: 'B',
3: 'C',
4: 'D',
5: 'E',
6: 'F',
7: 'G',
8: 'H',
9: 'K',
10: 'L',
11: 'M',
12: 'N',
13: 'O'
}
return state_map[state_as_int]
class BeraterEnv(gym.Env):
"""
The Berater Problem
Actions:
There are 4 discrete deterministic actions, each choosing one direction
"""
metadata = {'render.modes': ['ansi']}
showStep = False
showDone = True
envEpisodeModulo = 100
def __init__(self):
# self.map = {
# 'S': [('A', 100), ('B', 400), ('C', 200 )],
# 'A': [('B', 250), ('C', 400), ('S', 100 )],
# 'B': [('A', 250), ('C', 250), ('S', 400 )],
# 'C': [('A', 400), ('B', 250), ('S', 200 )]
# }
self.map = {
'S': [('A', 300), ('B', 100), ('C', 200 )],
'A': [('S', 300), ('B', 100), ('E', 100 ), ('D', 100 )],
'B': [('S', 100), ('A', 100), ('C', 50 ), ('K', 200 )],
'C': [('S', 200), ('B', 50), ('M', 100 ), ('L', 200 )],
'D': [('A', 100), ('F', 50)],
'E': [('A', 100), ('F', 100), ('H', 100)],
'F': [('D', 50), ('E', 100), ('G', 200)],
'G': [('F', 200), ('O', 300)],
'H': [('E', 100), ('K', 300)],
'K': [('B', 200), ('H', 300)],
'L': [('C', 200), ('M', 50)],
'M': [('C', 100), ('L', 50), ('N', 100)],
'N': [('M', 100), ('O', 100)],
'O': [('N', 100), ('G', 300)]
}
self.action_space = spaces.Discrete(4)
        # position, and up to 4 paths from that position; a non-existing path is -1000 and causes no position change
self.observation_space = spaces.Box(low=numpy.array([0,-1000,-1000,-1000,-1000]),
high=numpy.array([13,1000,1000,1000,1000]),
dtype=numpy.float32)
self.reward_range = (-1, 1)
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.envReward = 0
self.envEpisodeCount = 0
self.envStepCount = 0
self.reset()
self.optimum = self.calculate_customers_reward()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def iterate_path(self, state, action):
paths = self.map[state]
if action < len(paths):
return paths[action]
else:
# sorry, no such action, stay where you are and pay a high penalty
return (state, 1000)
def step(self, action):
destination, cost = self.iterate_path(self.state, action)
lastState = self.state
customerReward = self.customer_reward[destination]
reward = (customerReward - cost) / self.optimum
self.state = destination
self.customer_visited(destination)
done = destination == 'S' and self.all_customers_visited()
stateAsInt = state_name_to_int(self.state)
self.totalReward += reward
self.stepCount += 1
self.envReward += reward
self.envStepCount += 1
if self.showStep:
print( "Episode: " + ("%4.0f " % self.envEpisodeCount) +
" Step: " + ("%4.0f " % self.stepCount) +
lastState + ' --' + str(action) + '-> ' + self.state +
' R=' + ("% 2.2f" % reward) + ' totalR=' + ("% 3.2f" % self.totalReward) +
' cost=' + ("%4.0f" % cost) + ' customerR=' + ("%4.0f" % customerReward) + ' optimum=' + ("%4.0f" % self.optimum)
)
if done and not self.isDone:
self.envEpisodeCount += 1
if BeraterEnv.showDone:
episodes = BeraterEnv.envEpisodeModulo
if (self.envEpisodeCount % BeraterEnv.envEpisodeModulo != 0):
episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo
print( "Done: " +
("episodes=%6.0f " % self.envEpisodeCount) +
("avgSteps=%6.2f " % (self.envStepCount/episodes)) +
("avgTotalReward=% 3.2f" % (self.envReward/episodes) )
)
if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0:
self.envReward = 0
self.envStepCount = 0
self.isDone = done
observation = self.getObservation(stateAsInt)
info = {"from": self.state, "to": destination}
return observation, reward, done, info
def getObservation(self, position):
result = numpy.array([ position,
self.getPathObservation(position, 0),
self.getPathObservation(position, 1),
self.getPathObservation(position, 2),
self.getPathObservation(position, 3)
],
dtype=numpy.float32)
return result
def getPathObservation(self, position, path):
source = int_to_state_name(position)
paths = self.map[self.state]
if path < len(paths):
target, cost = paths[path]
reward = self.customer_reward[target]
result = reward - cost
else:
result = -1000
return result
def customer_visited(self, customer):
self.customer_reward[customer] = 0
def all_customers_visited(self):
return self.calculate_customers_reward() == 0
def calculate_customers_reward(self):
sum = 0
for value in self.customer_reward.values():
sum += value
return sum
def reset(self):
self.totalReward = 0
self.stepCount = 0
self.isDone = False
reward_per_customer = 1000
self.customer_reward = {
'S': 0,
'A': reward_per_customer,
'B': reward_per_customer,
'C': reward_per_customer,
'D': reward_per_customer,
'E': reward_per_customer,
'F': reward_per_customer,
'G': reward_per_customer,
'H': reward_per_customer,
'K': reward_per_customer,
'L': reward_per_customer,
'M': reward_per_customer,
'N': reward_per_customer,
'O': reward_per_customer
}
self.state = 'S'
return self.getObservation(state_name_to_int(self.state))
"""
Explanation: Environment
End of explanation
"""
BeraterEnv.showStep = True
BeraterEnv.showDone = True
env = BeraterEnv()
print(env)
observation = env.reset()
print(observation)
for t in range(1000):
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
print(observation)
"""
Explanation: Try out Environment
End of explanation
"""
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
!rm -r logs
!mkdir logs
!mkdir logs/berater
# https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py
# log_dir = logger.get_dir()
log_dir = '/content/logs/berater/'
import gym
from baselines import bench
from baselines import logger
from baselines.common.vec_env.dummy_vec_env import DummyVecEnv
from baselines.common.vec_env.vec_monitor import VecMonitor
from baselines.ppo2 import ppo2
BeraterEnv.showStep = False
BeraterEnv.showDone = False
env = BeraterEnv()
wrapped_env = DummyVecEnv([lambda: BeraterEnv()])
monitored_env = VecMonitor(wrapped_env, log_dir)
# https://github.com/openai/baselines/blob/master/baselines/ppo2/ppo2.py
model = ppo2.learn(network='mlp', env=monitored_env, total_timesteps=150000)
# monitored_env = bench.Monitor(env, log_dir)
# https://en.wikipedia.org/wiki/Q-learning#Influence_of_variables
# %time model = deepq.learn(\
# monitored_env,\
# seed=42,\
# network='mlp',\
# lr=1e-3,\
# gamma=0.99,\
# total_timesteps=30000,\
# buffer_size=50000,\
# exploration_fraction=0.5,\
# exploration_final_eps=0.02,\
# print_freq=1000)
model.save('berater-ppo-v6.pkl')
monitored_env.close()
"""
Explanation: Train model
random has reward of -5.51
total cost when travelling all paths (back and forth): 5000
additional penalty for illegal moves: 1000
all rewards: 13000
perfect score???
estimate: half the travel cost and no illegal moves: (13000 - 2500) / 13000 ≈ 0.81
End of explanation
"""
!ls -l $log_dir
from baselines.common import plot_util as pu
results = pu.load_results(log_dir)
import matplotlib.pyplot as plt
import numpy as np
r = results[0]
# plt.ylim(-1, 1)
# plt.plot(np.cumsum(r.monitor.l), r.monitor.r)
plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100))
"""
Explanation: Visualizing Results
https://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb
End of explanation
"""
import numpy as np
observation = env.reset()
state = np.zeros((1, 2*128))
dones = np.zeros((1))
BeraterEnv.showStep = True
BeraterEnv.showDone = False
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = env.step(actions[0])
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
"""
Explanation: Enjoy model
End of explanation
"""
|
asurve/arvind-sysml2 | samples/jupyter-notebooks/.ipynb_checkpoints/ALS_python_demo-checkpoint.ipynb | apache-2.0 | from pyspark.sql import SparkSession
from pyspark.sql.types import *
from systemml import MLContext, dml
spark = SparkSession\
.builder\
.appName("als-example")\
.getOrCreate()
schema = StructType([StructField("movieId", IntegerType(), True),
StructField("userId", IntegerType(), True),
StructField("rating", IntegerType(), True),
StructField("date", StringType(), True)])
ratings = spark.read.csv("./netflix/training_set_normalized/mv_0*.txt", schema = schema)
ratings = ratings.select('userId', 'movieId', 'rating')
ratings.show(10)
ratings.describe().show()
"""
Explanation: Scaling Alternating Least Squares Using Apache SystemML
Recommendation systems based on the Alternating Least Squares (ALS) algorithm have gained popularity in recent years because, in general, they perform better than content-based approaches.
ALS is a matrix factorization algorithm, where a user-item matrix is factorized into two low-rank non-orthogonal matrices:
$$R = U M$$
The elements, $r_{ij}$, of matrix $R$ can represent, for example, ratings assigned to the $j$th movie by the $i$th user.
This matrix factorization assumes that each user can be described by $k$ latent features. Similarly, each item/movie can also be represented by $k$ latent features. The user rating of a particular movie can thus be approximated by the product of two $k$-dimensional vectors:
$$r_{ij} = {\bf u}_i^T {\bf m}_j$$
The vectors ${\bf u}_i$ are rows of $U$ and ${\bf m}_j$'s are columns of $M$. These can be learned by minimizing the cost function:
$$f(U, M) = \sum_{i,j} \left( r_{ij} - {\bf u}_i^T {\bf m}_j \right)^2 = \| R - UM \|^2$$
Regularized ALS
In this notebook, we'll implement ALS algorithm with weighted-$\lambda$-regularization formulated by Zhou et. al. The cost function with such regularization is:
$$f(U, M) = \sum_{i,j} I_{ij}\left( r_{ij} - {\bf u}_i^T {\bf m}_j \right)^2 + \lambda \left( \sum_i n_{u_i} \| {\bf u}_i \|^2 + \sum_j n_{m_j} \| {\bf m}_j \|^2 \right)$$
Here, $\lambda$ is the usual regularization parameter. $n_{u_i}$ and $n_{m_j}$ represent the number of ratings of user $i$ and movie $j$ respectively. $I_{ij}$ is an indicator variable such that $I_{ij} = 1$ if $r_{ij}$ exists and $I_{ij} = 0$ otherwise.
If we fix ${\bf m}_j$, we can determine ${\bf u}_i$ by solving a regularized least squares problem:
$$ \frac{1}{2} \frac{\partial f}{\partial {\bf u}_i} = 0$$
This gives the following matrix equation:
$$\left(M \, \text{diag}({\bf I}_i^T) M^{T} + \lambda n_{u_i} E\right) {\bf u}_i = M {\bf r}_i^T$$
Here ${\bf r}_i^T$ is the $i$th row of $R$ (as a column vector). Similarly, ${\bf I}_i$ is the $i$th row of the matrix $I = [I_{ij}]$. Please see Zhou et al. for details.
Reading Netflix Movie Ratings Data
In this example, we'll use Netflix movie ratings. This data set can be downloaded from here. We'll use spark to read movie ratings data into a dataframe. The csv files have four columns: MovieID, UserID, Rating, Date.
End of explanation
"""
#-----------------------------------------------------------------
# Create kernel in SystemML's DSL using the R-like syntax for ALS
# Algorithms available at : https://systemml.apache.org/algorithms
# Below algorithm based on ALS-CG.dml
#-----------------------------------------------------------------
als_dml = \
"""
# Default values of some parameters
r = rank
max_iter = 50
check = TRUE
thr = 0.01
R = table(X[,1], X[,2], X[,3])
# check the input matrix R, if some rows or columns contain only zeros remove them from R
R_nonzero_ind = R != 0;
row_nonzeros = rowSums(R_nonzero_ind);
col_nonzeros = t(colSums (R_nonzero_ind));
orig_nonzero_rows_ind = row_nonzeros != 0;
orig_nonzero_cols_ind = col_nonzeros != 0;
num_zero_rows = nrow(R) - sum(orig_nonzero_rows_ind);
num_zero_cols = ncol(R) - sum(orig_nonzero_cols_ind);
if (num_zero_rows > 0) {
print("Matrix R contains empty rows! These rows will be removed.");
R = removeEmpty(target = R, margin = "rows");
}
if (num_zero_cols > 0) {
print ("Matrix R contains empty columns! These columns will be removed.");
R = removeEmpty(target = R, margin = "cols");
}
if (num_zero_rows > 0 | num_zero_cols > 0) {
print("Recomputing nonzero rows and columns!");
R_nonzero_ind = R != 0;
row_nonzeros = rowSums(R_nonzero_ind);
col_nonzeros = t(colSums (R_nonzero_ind));
}
###### MAIN PART ######
m = nrow(R);
n = ncol(R);
# initializing factor matrices
U = rand(rows = m, cols = r, min = -0.5, max = 0.5);
M = rand(rows = n, cols = r, min = -0.5, max = 0.5);
# initializing transformed matrices
Rt = t(R);
loss = matrix(0, rows=max_iter+1, cols=1)
if (check) {
loss[1,] = sum(R_nonzero_ind * (R - (U %*% t(M)))^2) + lambda * (sum((U^2) * row_nonzeros) +
sum((M^2) * col_nonzeros));
print("----- Initial train loss: " + toString(loss[1,1]) + " -----");
}
lambda_I = diag (matrix (lambda, rows = r, cols = 1));
it = 0;
converged = FALSE;
while ((it < max_iter) & (!converged)) {
it = it + 1;
# keep M fixed and update U
parfor (i in 1:m) {
M_nonzero_ind = t(R[i,] != 0);
M_nonzero = removeEmpty(target=M * M_nonzero_ind, margin="rows");
A1 = (t(M_nonzero) %*% M_nonzero) + (as.scalar(row_nonzeros[i,1]) * lambda_I); # coefficient matrix
U[i,] = t(solve(A1, t(R[i,] %*% M)));
}
# keep U fixed and update M
parfor (j in 1:n) {
U_nonzero_ind = t(Rt[j,] != 0)
U_nonzero = removeEmpty(target=U * U_nonzero_ind, margin="rows");
A2 = (t(U_nonzero) %*% U_nonzero) + (as.scalar(col_nonzeros[j,1]) * lambda_I); # coefficient matrix
M[j,] = t(solve(A2, t(Rt[j,] %*% U)));
}
# check for convergence
if (check) {
loss_init = as.scalar(loss[it,1])
loss_cur = sum(R_nonzero_ind * (R - (U %*% t(M)))^2) + lambda * (sum((U^2) * row_nonzeros) +
sum((M^2) * col_nonzeros));
loss_dec = (loss_init - loss_cur) / loss_init;
print("Train loss at iteration (M) " + it + ": " + loss_cur + " loss-dec " + loss_dec);
if (loss_dec >= 0 & loss_dec < thr | loss_init == 0) {
print("----- ALS converged after " + it + " iterations!");
converged = TRUE;
}
loss[it+1,1] = loss_cur
}
} # end of while loop
loss = loss[1:it+1,1]
if (check) {
print("----- Final train loss: " + toString(loss[it+1,1]) + " -----");
}
if (!converged) {
print("Max iteration achieved but not converged!");
}
# inject 0s in U if original R had empty rows
if (num_zero_rows > 0) {
U = removeEmpty(target = diag(orig_nonzero_rows_ind), margin = "cols") %*% U;
}
# inject 0s in M if original R had empty columns
if (num_zero_cols > 0) {
M = removeEmpty(target = diag(orig_nonzero_cols_ind), margin = "cols") %*% M;
}
M = t(M);
"""
"""
Explanation: ALS implementation using DML
The following script implements the regularized ALS algorithm as described above. One thing to note here is that we remove empty rows/columns from the rating matrix before running the algorithm. We'll add back the zero rows and columns to matrices $U$ and $M$ after the algorithm converges.
End of explanation
"""
ml = MLContext(sc)
# Define input/output variables for DML script
alsScript = dml(als_dml).input("X", ratings) \
.input("lambda", 0.01) \
.input("rank", 100) \
.output("U", "M", "loss")
# Execute script
res = ml.execute(alsScript)
U, M, loss = res.get('U','M', "loss")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(loss.toNumPy(), 'o');
"""
Explanation: Running the Algorithm
We'll first create an MLContext object, which is the entry point for SystemML. Inputs and outputs are defined through a dml function.
End of explanation
"""
predict_dml = \
"""
R = table(R[,1], R[,2], R[,3])
K = 5
Rrows = nrow(R);
Rcols = ncol(R);
zero_cols_ind = (colSums(M != 0)) == 0;
K = min(Rcols - sum(zero_cols_ind), K);
n = nrow(X);
Urows = nrow(U);
Mcols = ncol(M);
X_user_max = max(X[,1]);
if (X_user_max > Rrows) {
stop("Predictions cannot be provided. Maximum user-id exceeds the number of rows of R.");
}
if (Urows != Rrows | Mcols != Rcols) {
stop("Number of rows of U (columns of M) does not match the number of rows (column) of R.");
}
# creates projection matrix to select users
s = seq(1, n);
ones = matrix(1, rows = n, cols = 1);
P = table(s, X[,1], ones, n, Urows);
# selects users from factor U
U_prime = P %*% U;
# calculate rating matrix for selected users
R_prime = U_prime %*% M;
# selects users from original R
R_users = P %*% R;
# create indicator matrix to remove existing ratings for given users
I = R_users == 0;
# removes already recommended items and creating user2item matrix
R_prime = R_prime * I;
# stores sorted movies for selected users
R_top_indices = matrix(0, rows = nrow (R_prime), cols = K);
R_top_values = matrix(0, rows = nrow (R_prime), cols = K);
# a large number to mask the max ratings
range = max(R_prime) - min(R_prime) + 1;
# uses rowIndexMax/rowMaxs to update kth ratings
for (i in 1:K){
rowIndexMax = rowIndexMax(R_prime);
rowMaxs = rowMaxs(R_prime);
R_top_indices[,i] = rowIndexMax;
R_top_values[,i] = rowMaxs;
R_prime = R_prime - range * table(seq (1, nrow(R_prime), 1), rowIndexMax, nrow(R_prime), ncol(R_prime));
}
R_top_indices = R_top_indices * (R_top_values > 0);
# cbind users as a first column
R_top_indices = cbind(X[,1], R_top_indices);
R_top_values = cbind(X[,1], R_top_values);
"""
# user for which we want to recommend movies
ids = [116,126,130,131,133,142,149,158,164,168,169,177,178,183,188,189,192,195,199,201,215,231,242,247,248,
250,261,265,266,267,268,283,291,296,298,299,301,302,304,305,307,308,310,312,314,330,331,333,352,358,363,
368,369,379,383,384,385,392,413,416,424,437,439,440,442,453,462,466,470,471,477,478,479,481,485,490,491]
users = spark.createDataFrame([[i] for i in ids])
predScript = dml(predict_dml).input("R", ratings) \
.input("X", users) \
.input("U", U) \
.input("M", M) \
.output("R_top_indices")
pred = ml.execute(predScript).get("R_top_indices")
pred = pred.toNumPy()
"""
Explanation: Predictions
Once $U$ and $M$ are learned from the data, we can recommend movies for any user. If $U'$ represents the users for which we seek recommendations, we first obtain the predicted ratings for all the movies by users in $U'$:
$$R' = U' M$$
Finally, we sort the ratings for each user and present the top 5 movies with highest predicted ratings. The following dml script implements this. Since we're using very low rank in this example, these recommendations are not meaningful.
End of explanation
"""
import pandas as pd
titles = pd.read_csv("./netflix/movie_titles.csv", header=None, sep=';', names=['movieID', 'year', 'title'])
import re
import wikipedia as wiki
from bs4 import BeautifulSoup as bs
import requests as rq
from IPython.core.display import Image, display
def get_poster(title):
    if title.endswith('Bonus Material'):
        title = title[: -len('Bonus Material')]  # drop the suffix (str.strip would remove characters, not the phrase)
title = re.sub(r'[^\w\s]','',title)
matches = wiki.search(title)
if matches is None:
return
film = [s for s in matches if 'film)' in s]
film = film[0] if len(film) > 0 else matches[0]
try:
url = wiki.page(film).url
except:
return
html = rq.get(url)
if html.status_code == 200:
soup = bs(html.content, 'html.parser')
infobox = soup.find('table', class_="infobox")
if (infobox):
img = infobox.find('img')
if img:
display(Image('http:' + img['src']))
def show_recommendations(userId, preds):
for row in preds:
if int(row[0]) == userId:
print("\nrecommendations for userId", int(row[0]) )
for title in titles.title[row[1:]].values:
print(title)
get_poster(title)
break
show_recommendations(192, preds=pred)
"""
Explanation: Just for Fun!
Once we have the movie recommendations, we can show the movie posters for those recommendations. We'll fetch these movie poster from wikipedia. If movie page doesn't exist on wikipedia, we'll just list the movie title.
End of explanation
"""
|
jbogaardt/chainladder-python | docs/tutorials/development-tutorial.ipynb | mit | # Black linter, optional
%load_ext lab_black
import pandas as pd
import numpy as np
import chainladder as cl
import os
print("pandas: " + pd.__version__)
print("numpy: " + np.__version__)
print("chainladder: " + cl.__version__)
"""
Explanation: Development Tutorial
Getting Started
This tutorial focuses on selecting the development factors.
Note that a lot of the examples shown here might not be applicable in a real world scenario, and is only meant to demonstrate some of the functionalities included in the package. The user should always exercise their best actuarial judgement, and follow any applicable laws, the Code of Professional Conduct, and applicable Actuarial Standards of Practice.
Be sure your packages are up to date. For more info on how to update your packages, visit Keeping Packages Updated.
End of explanation
"""
raa = cl.load_sample("raa")
print(
"Correlation across valuation years? ",
raa.valuation_correlation(p_critical=0.1, total=True).z_critical.values,
)
print(
"Correlation across origin years? ",
raa.development_correlation(p_critical=0.5).t_critical.values,
)
"""
Explanation: Testing for Violation of Chain Ladder's Assumptions
The Chain Ladder method is based on the strong assumptions of independence across origin periods and across valuation periods. Mack developed tests to verify if these assumptions hold, and these tests have been implemented in chainladder.
Before the Chain-Ladder model can be used, we should verify that the data satisfies the underlying assumptions using tests at the desired confidence interval level. If assumptions are violated, we should consider if ultimates can be estimated using other models.
Let's test for independence across origin and development periods. Note that for correlation across valuation periods, the Z-statistic is used; and for correlation across origin periods, the T-statistic is used. For the valuation_correlation test, an additional parameter, total, can be passed, depending on whether we want to calculate valuation correlation in total across all years (True), consistent with Mack 1993, or for each year separately (False), consistent with Mack 1997.
End of explanation
"""
raa.valuation_correlation(p_critical=0.1, total=False).z_critical
"""
Explanation: The above tests show that the raa triangle is independent in both cases, suggesting that there is no evidence that the Chain-Ladder model is not an appropriate method to develop the ultimate amounts. It is suggested to review Mack (1993) and Mack (1997) to ensure a proper understanding of the methodology and the choice of p_critical.
Mack (1997) differs from Mack (1993) in how it tests valuation-year correlation. The 1993 paper looks at the aggregate of all years, while the 1997 paper suggests checking independence for each valuation year. To test for each valuation year, we set total to False.
End of explanation
"""
genins = cl.load_sample("genins")
dev = cl.Development().fit(genins)
"""
Explanation: Please note that the tests are run on the entire 4 dimensions of the triangle.
Estimator Basics
All development methods follow the sklearn estimator API. These estimators have a few properties that are worth getting used to.
We instantiate the estimator with your choice of assumptions. In the case where we don't opt for any assumptions, defaults are chosen for you.
At this point, we've chosen an estimator and assumptions (even if default) but we have not shown our estimator a Triangle. At this point it is merely instructions on how to fit development patterns, but no patterns exist as of yet.
All estimators have a fit method and you can pass a triangle to your estimator. Let's fit a Triangle in a Development estimator. Let's also assign the estimator to a variable so we can reference attributes about it.
End of explanation
"""
dev.ldf_
"""
Explanation: Now that we have fit a Development estimator, it has many additional properties that didn't exist before fitting. For example,
we can view the ldf_
End of explanation
"""
dev.cdf_
"""
Explanation: We can view the cdf_
End of explanation
"""
dev.ldf_.incr_to_cum()
dev.cdf_.cum_to_incr()
"""
Explanation: We can also convert between LDFs and CDFs using incr_to_cum() and cum_to_incr() similar to triangles.
End of explanation
"""
print("Assumption parameter (no underscore):", dev.average)
print("Estimated parameter (underscore):\n", dev.ldf_)
"""
Explanation: Notice these attributes have a trailing underscore (_). This is scikit-learn's API convention, as its documentation states, "attributes that have been estimated from the data must always have a name ending with trailing underscore, for example the coefficients of some regression estimator would be stored in a coef_ attribute after fit has been called." In summary, the trailing underscore in class attributes is a scikit-learn's convention to denote that the attributes are estimated, or to denote that they are fitted attributes.
End of explanation
"""
genins = cl.load_sample("genins")
genins
"""
Explanation: Development Averaging
Now that we have a grounding in triangle manipulation and the basics of estimators, we can start getting more creative with customizing our development factors.
The basic Development estimator uses a weighted regression through the origin for estimating parameters. Mack showed that using weighted regressions allows for:
1. volume weighted average development patterns<br>
2. simple average development factors<br>
3. OLS regression estimate of development factor where the regression equation is Y = mX + 0<br>
While he posited this framework to suggest the MackChainladder stochastic method, it is an elegant form even for deterministic development pattern selection.
End of explanation
"""
genins.age_to_age
"""
Explanation: We can also print the age_to_age factors.
End of explanation
"""
genins.age_to_age.heatmap()
vol = cl.Development(average="volume").fit(genins).ldf_
vol
sim = cl.Development(average="simple").fit(genins).ldf_
sim
"""
Explanation: And colorcode with heatmap().
End of explanation
"""
print("LDF Type: ", type(vol))
print("Difference between volume and simple average:")
vol - sim
"""
Explanation: In most cases, estimator attributes are Triangles themselves and can be manipulated just like raw triangles.
End of explanation
"""
cl.Development(average=["volume", "simple", "regression"] * 3).fit(genins).ldf_
"""
Explanation: We can specify how the LDFs are averaged independently for each age-to-age period. For example, we can use volume averaging on the first pattern, simple averaging on the second, regression on the third, and then repeat the cycle three times for the 9 age-to-age factors that we need. Note that the array of selected methods must have the same length as the number of age-to-age factors.
End of explanation
"""
cl.Development(average=["volume"] + ["simple"] * 5 + ["volume"] * 3).fit(genins).ldf_
"""
Explanation: Another example, using volume-weighting for the first factor, simple-weighting for the next 5 factors, and volume-weighting for the last 3 factors.
End of explanation
"""
cl.Development().fit(genins).ldf_
cl.Development(n_periods=-1).fit(genins).ldf_
cl.Development(n_periods=3).fit(genins).ldf_
"""
Explanation: Averaging Period
Development comes with an n_periods parameter that allows you to select the latest n origin periods for fitting your development patterns. n_periods=-1 uses all available periods and is the default if the parameter is not specified. The units of n_periods follow the origin_grain of the underlying triangle.
End of explanation
"""
cl.Development(n_periods=[8, 2, 6, 5, -1, 2, -1, -1, 5]).fit(genins).ldf_
"""
Explanation: Much like average, n_periods can also be set for each age-to-age individually.
End of explanation
"""
cl.Development(n_periods=[1, 2, 3, 4, 5, 6, 7, 8, 9]).fit(
genins
).ldf_ == cl.Development(n_periods=[1, 2, 3, 4, 5, 4, 3, 2, 1]).fit(genins).ldf_
"""
Explanation: Note that if we provide n_periods that is greater than what is available for any particular age-to-age period, all available periods will be used instead.
End of explanation
"""
cl.Development(drop_valuation="2004").fit(genins).ldf_
"""
Explanation: Discarding Problematic Link Ratios
Even with n_periods, there are situations where you might want to be more surgical in your selections. For example, you could have a valuation period with bad data and wish to omit the entire diagonal from your averaging.
End of explanation
"""
cl.Development(drop_high=True, drop_low=True).fit(genins).ldf_
"""
Explanation: We can also apply Olympic averaging (i.e. excluding the highest and lowest factor from each period).
End of explanation
"""
genins
genins.age_to_age.heatmap()
"""
Explanation: Or maybe there is just a single outlier link-ratio that you don't think is indicative of future development. For these, you can specify the intersection of the origin and development age of the denominator of the link-ratio to drop.
End of explanation
"""
cl.Development(drop=("2004", 12)).fit(genins).ldf_
"""
Explanation: Let's say we believe the 4.5680 factor from origin 2004 between ages 12 and 24 should be dropped; we can use drop=('2004', 12).
End of explanation
"""
cl.Development(drop=[("2004", 12), ("2008", 24)]).fit(genins).ldf_
"""
Explanation: If there is more than one outlier, you can pass a list of tuples to the drop argument.
End of explanation
"""
transformed_triangle = cl.Development(drop_high=[True] * 4 + [False] * 5).fit_transform(
genins
)
transformed_triangle
transformed_triangle.link_ratio
"""
Explanation: Transformers
In sklearn, there are two types of estimators: transformers and predictors. A transformer transforms the input data (X) in some way, and a predictor predicts a new value (or values, Y) by using the input data X.
Development is a transformer, as the returned object is a means to create development patterns, which is used to estimate ultimates, but itself is not a reserving model (predictor).
Transformers come with the transform and fit_transform methods. These will return a Triangle object, but augment it with additional information for use in a subsequent IBNR model (a predictor). drop_high (and drop_low) can take an array of boolean variables, indicating whether the highest factor should be dropped for each of the LDF calculations.
End of explanation
"""
transformed_triangle.link_ratio.heatmap()
print(type(transformed_triangle))
transformed_triangle.latest_diagonal
"""
Explanation: Our transformed triangle behaves like our original genins triangle. However, notice the link ratios exclude any dropped values you specified.
End of explanation
"""
transformed_triangle.cdf_
"""
Explanation: However, it has other attributes that make it IBNR model-ready.
End of explanation
"""
cl.Development().fit_transform(genins) == cl.Development().fit(genins).transform(genins)
"""
Explanation: fit_transform() is equivalent to calling fit and transform in succession on the same triangle. Again, this should feel very familiar to the sklearn practitioner.
End of explanation
"""
clrd = cl.load_sample("clrd")
comauto = clrd[clrd["LOB"] == "comauto"]["CumPaidLoss"]
comauto_industry = comauto.sum()
industry_dev = cl.Development().fit(comauto_industry)
industry_dev.transform(comauto)
"""
Explanation: The reason you might want to use fit and transform separately is when you want to apply development patterns to a different triangle. For example, we can:
Extract the commercial auto triangles from the clrd dataset<br>
Summarize to an industry level and fit a Development object<br>
We can then transform the individual company triangles with the industry development patterns<br>
End of explanation
"""
clrd = cl.load_sample("clrd").groupby("LOB").sum()["CumPaidLoss"]
print("Fitting to " + str(len(clrd.index)) + " industries simultaneously.")
cl.Development().fit_transform(clrd).cdf_
"""
Explanation: Working with Multidimensional Triangles
Several (though not all) of the estimators in chainladder can be fit to several triangles simultaneously. While this can be a convenient shorthand, all these estimators use the same assumptions across every triangle.
End of explanation
"""
print(cl.Development(average="simple").fit(clrd.loc["wkcomp"]))
print(cl.Development(n_periods=4).fit(clrd.loc["ppauto"]))
print(cl.Development(average="regression", n_periods=6).fit(clrd.loc["comauto"]))
"""
Explanation: For greater control, you can slice individual triangles out and fit separate patterns to each.
End of explanation
"""
|
roatienza/Deep-Learning-Experiments | versions/2020/MLP/code/tf.keras/mnist-sampler.ipynb | mit | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
# load dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
"""
Explanation: Draw sample MNIST images from dataset
Demonstrates how to sample and plot MNIST digits using the tf.keras API.
Using tf.keras.datasets, loading the MNIST data takes just one line of code. After loading, the dataset is already grouped into train and test splits.
End of explanation
"""
# count the number of unique train labels
unique, counts = np.unique(y_train, return_counts=True)
print("Train labels: ", dict(zip(unique, counts)))
plt.bar(unique, counts)
plt.xticks(unique, unique)
plt.show()
"""
Explanation: Bar graph of train data
We can see that the labels are almost uniformly distributed.
Note that this is ideal. In some datasets, the distribution may be highly unbalanced. In such cases, training is more challenging.
End of explanation
"""
# count the number of unique test labels
unique, counts = np.unique(y_test, return_counts=True)
print("Test labels: ", dict(zip(unique, counts)))
plt.bar(unique, counts, color='orange')
plt.xticks(unique, unique)
plt.show()
"""
Explanation: The test data is also almost uniformly distributed.
End of explanation
"""
# sample 25 mnist digits from train dataset
indexes = np.random.randint(0, x_train.shape[0], size=25)
images = x_train[indexes]
labels = y_train[indexes]
# plot the 25 mnist digits
plt.figure(figsize=(5,5))
for i in range(len(indexes)):
plt.subplot(5, 5, i + 1)
image = images[i]
plt.imshow(image, cmap='gray')
plt.axis('off')
# plt.savefig("mnist-samples.png")
plt.show()
"""
Explanation: Random sample of data from train split
Let us get and show 25 random samples from the train split.
End of explanation
"""
|
ChadFulton/statsmodels | examples/notebooks/quantile_regression.ipynb | bsd-3-clause | %matplotlib inline
from __future__ import print_function
import patsy
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
from statsmodels.regression.quantile_regression import QuantReg
data = sm.datasets.engel.load_pandas().data
data.head()
"""
Explanation: Quantile regression
This example page shows how to use statsmodels' QuantReg class to replicate parts of the analysis published in
Koenker, Roger and Kevin F. Hallock. "Quantile Regression". Journal of Economic Perspectives, Volume 15, Number 4, Fall 2001, Pages 143–156
We are interested in the relationship between income and expenditures on food for a sample of working class Belgian households in 1857 (the Engel data).
Setup
We first need to load some modules and to retrieve the data. Conveniently, the Engel dataset is shipped with statsmodels.
End of explanation
"""
mod = smf.quantreg('foodexp ~ income', data)
res = mod.fit(q=.5)
print(res.summary())
"""
Explanation: Least Absolute Deviation
The LAD model is a special case of quantile regression where q=0.5
End of explanation
"""
quantiles = np.arange(.05, .96, .1)
def fit_model(q):
res = mod.fit(q=q)
return [q, res.params['Intercept'], res.params['income']] + \
res.conf_int().loc['income'].tolist()
models = [fit_model(x) for x in quantiles]
models = pd.DataFrame(models, columns=['q', 'a', 'b','lb','ub'])
ols = smf.ols('foodexp ~ income', data).fit()
ols_ci = ols.conf_int().loc['income'].tolist()
ols = dict(a = ols.params['Intercept'],
b = ols.params['income'],
lb = ols_ci[0],
ub = ols_ci[1])
print(models)
print(ols)
"""
Explanation: Visualizing the results
We estimate the quantile regression model for many quantiles between .05 and .95, and compare the best-fit line from each of these models to the Ordinary Least Squares results.
Prepare data for plotting
For convenience, we place the quantile regression results in a Pandas DataFrame, and the OLS results in a dictionary.
End of explanation
"""
x = np.arange(data.income.min(), data.income.max(), 50)
get_y = lambda a, b: a + b * x
fig, ax = plt.subplots(figsize=(8, 6))
for i in range(models.shape[0]):
y = get_y(models.a[i], models.b[i])
ax.plot(x, y, linestyle='dotted', color='grey')
y = get_y(ols['a'], ols['b'])
ax.plot(x, y, color='red', label='OLS')
ax.scatter(data.income, data.foodexp, alpha=.2)
ax.set_xlim((240, 3000))
ax.set_ylim((240, 2000))
legend = ax.legend()
ax.set_xlabel('Income', fontsize=16)
ax.set_ylabel('Food expenditure', fontsize=16);
"""
Explanation: First plot
This plot compares best fit lines for 10 quantile regression models to the least squares fit. As Koenker and Hallock (2001) point out, we see that:
Food expenditure increases with income
The dispersion of food expenditure increases with income
The least squares estimates fit low income observations quite poorly (i.e. the OLS line passes over most low income households)
End of explanation
"""
n = models.shape[0]
p1 = plt.plot(models.q, models.b, color='black', label='Quantile Reg.')
p2 = plt.plot(models.q, models.ub, linestyle='dotted', color='black')
p3 = plt.plot(models.q, models.lb, linestyle='dotted', color='black')
p4 = plt.plot(models.q, [ols['b']] * n, color='red', label='OLS')
p5 = plt.plot(models.q, [ols['lb']] * n, linestyle='dotted', color='red')
p6 = plt.plot(models.q, [ols['ub']] * n, linestyle='dotted', color='red')
plt.ylabel(r'$\beta_{income}$')
plt.xlabel('Quantiles of the conditional food expenditure distribution')
plt.legend()
plt.show()
"""
Explanation: Second plot
The dotted black lines form a 95% point-wise confidence band around the 10 quantile regression estimates (solid black line). The red lines represent the OLS regression results along with their 95% confidence interval.
In most cases, the quantile regression point estimates lie outside the OLS confidence interval, which suggests that the effect of income on food expenditure may not be constant across the distribution.
End of explanation
"""
|
cpcloud/ibis | docs/tutorial/06-ComplexFiltering.ipynb | apache-2.0 | !curl -LsS -o $TEMPDIR/geography.db 'https://storage.googleapis.com/ibis-tutorial-data/geography.db'
import os
import tempfile
import ibis
ibis.options.interactive = True
connection = ibis.sqlite.connect(
os.path.join(tempfile.gettempdir(), 'geography.db')
)
"""
Explanation: Complex Filtering
The filtering examples we've shown to this point have been pretty simple, either comparisons between columns or fixed values, or set filter functions like isin and notin.
Ibis supports a number of richer analytical filters that can involve one or more of:
Aggregates computed from the same or other tables
Conditional aggregates (in SQL-speak these are similar to "correlated subqueries")
"Existence" set filters (equivalent to the SQL EXISTS and NOT EXISTS keywords)
Setup
End of explanation
"""
countries = connection.table('countries')
countries.limit(5)
"""
Explanation: Using scalar aggregates in filters
End of explanation
"""
countries.area_km2.mean()
"""
Explanation: We could always compute some aggregate value from the table and use that in another expression, or we can use a data-derived aggregate in the filter. Take the average of a column. For example the average of countries size:
End of explanation
"""
cond = countries.area_km2 > countries.area_km2.mean()
expr = countries[(countries.continent == 'EU') & cond]
expr
"""
Explanation: You can use this expression as a substitute for a scalar value in a filter, and the execution engine will combine everything into a single query rather than having to access the database multiple times. For example, we want to filter European countries larger than the average country size in the world. See how most countries in Europe are smaller than the world average:
End of explanation
"""
conditional_avg = countries[countries.continent == 'AF'].area_km2.mean()
countries[
(countries.continent == 'EU') & (countries.area_km2 > conditional_avg)
]
"""
Explanation: Conditional aggregates
Suppose that we wish to filter using an aggregate computed conditional on some other expressions holding true.
For example, we want to filter European countries larger than the average country size, but this time using the average in Africa. African countries are smaller on average than the rest of the world, so France now makes the list:
End of explanation
"""
gdp = connection.table('gdp')
gdp
cond = ((gdp.country_code == countries.iso_alpha3) & (gdp.value > 3e12)).any()
countries[cond]['name']
"""
Explanation: "Existence" filters
Some filtering involves checking for the existence of a particular value in a column of another table, or among the results of some value expression. This is common in many-to-many relationships, and can be performed in numerous different ways, but it's nice to be able to express it with a single concise statement and let Ibis compute it optimally.
An example could be finding all countries that had any year with a higher GDP than 3 trillion US dollars:
End of explanation
"""
arctic = countries.name.isin(
[
'United States',
'Canada',
'Finland',
'Greenland',
'Iceland',
'Norway',
'Russia',
'Sweden',
]
)
metrics = [
countries.count().name('# countries'),
countries.population.sum().name('total population'),
countries.population.sum(where=arctic).name('population arctic countries'),
]
(countries.groupby(countries.continent).aggregate(metrics))
"""
Explanation: Note how this is different from a join between countries and gdp, which would return one row per year. The method .any() is equivalent to filtering with a subquery.
Filtering in aggregations
Suppose that you want to compute an aggregation with a subset of the data for only one of the metrics / aggregates in question, and the complete data set with the other aggregates. Most aggregation functions are thus equipped with a where argument. Let me show it to you in action:
End of explanation
"""
|
albahnsen/PracticalMachineLearningClass | notebooks/09-Model_Deployment.ipynb | mit | import pandas as pd
data = pd.read_csv('https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/phishing.csv')
data.head()
data.tail()
data.phishing.value_counts()
"""
Explanation: 09 - Model Deployment
by Alejandro Correa Bahnsen & Iván Torroledo
version 1.4, February 2019
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Agenda:
Creating and saving a model
Running the model in batch
Exposing the model as an API
Part 1: Phishing Detection
Phishing, by definition, is the act of defrauding an online user in order to obtain personal information by posing as a trustworthy institution or entity. Users usually have a hard time differentiating between legitimate and malicious sites because they are made to look exactly the same. Therefore, there is a need to create better tools to combat attackers.
End of explanation
"""
data.url[data.phishing==1].sample(50, random_state=1).tolist()
"""
Explanation: Creating features
End of explanation
"""
keywords = ['https', 'login', '.php', '.html', '@', 'sign', '?']
for keyword in keywords:
    # regex=False so '.', '?' and '@' are matched literally, not as regex metacharacters
    data['keyword_' + keyword] = data.url.str.contains(keyword, regex=False).astype(int)
"""
Explanation: Contain any of the following:
* https
* login
* .php
* .html
* @
* sign
* ?
End of explanation
"""
data['lenght'] = data.url.str.len() - 2
domain = data.url.str.split('/', expand=True).iloc[:, 2]
data['lenght_domain'] = domain.str.len()
domain.head(12)
# strip the dots literally; an all-numeric remainder means the domain is an IP
data['isIP'] = domain.str.replace('.', '', regex=False).str.isnumeric().astype(int)
data['count_com'] = data.url.str.count('com')
data.sample(15, random_state=4)
"""
Explanation: Lenght of the url
Lenght of domain
is IP?
Number of .com
End of explanation
"""
X = data.drop(['url', 'phishing'], axis=1)
y = data.phishing
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
clf = RandomForestClassifier(n_jobs=-1, n_estimators=100, max_depth=3)
cross_val_score(clf, X, y, cv=10)
clf.fit(X, y)
"""
Explanation: Create Model
End of explanation
"""
from sklearn.externals import joblib
joblib.dump(clf, 'model_deployment/phishing_clf.pkl', compress=3)
"""
Explanation: Save model
End of explanation
"""
from model_deployment.m09_model_deployment import predict_proba
predict_proba('http://www.vipturismolondres.com/com.br/?atendimento=Cliente&/LgSgkszm64/B8aNzHa8Aj.php')
"""
Explanation: Part 2: Model in batch
See model_deployment/m09_model_deployment.py
End of explanation
"""
from flask import Flask
from flask_restplus import Api, Resource, fields
from sklearn.externals import joblib
"""
Explanation: Part 3: API
Flask is considered more Pythonic than Django because Flask web application code is in most cases more explicit. Flask is easy to get started with as a beginner because there is little boilerplate code for getting a simple app up and running.
First we need to install some libraries
pip install flask-restplus
Load Flask
End of explanation
"""
app = Flask(__name__)
api = Api(
app,
version='1.0',
title='Phishing Prediction API',
description='Phishing Prediction API')
ns = api.namespace('predict',
description='Phishing Classifier')
parser = api.parser()
parser.add_argument(
'URL',
type=str,
required=True,
help='URL to be analyzed',
location='args')
resource_fields = api.model('Resource', {
'result': fields.String,
})
"""
Explanation: Create api
End of explanation
"""
from model_deployment.m09_model_deployment import predict_proba
@ns.route('/')
class PhishingApi(Resource):
@api.doc(parser=parser)
@api.marshal_with(resource_fields)
def get(self):
args = parser.parse_args()
return {
"result": predict_proba(args['URL'])
}, 200
"""
Explanation: Load model and create function that predicts an URL
End of explanation
"""
app.run(debug=True, use_reloader=False, host='0.0.0.0', port=5000)
"""
Explanation: Run API
End of explanation
"""
|
CalPolyPat/phys202-project | .ipynb_checkpoints/Progress Report-checkpoint.ipynb | mit | import numpy as np
import matplotlib
from matplotlib import pyplot as plt
matplotlib.style.use('ggplot')
import IPython as ipynb
%matplotlib inline
"""
Explanation: An Exploration of Neural Net Capabilities
End of explanation
"""
z = np.linspace(-10, 10, 100)
f=plt.figure(figsize=(15, 5))
plt.subplot(1, 2,1)
plt.plot(z, 1/(1+np.exp(-z)));
plt.xlabel("Input to Nueron")
plt.title("Sigmoid Response with Bias=0")
plt.ylabel("Sigmoid Response");
plt.subplot(1, 2,2)
plt.plot(z, 1/(1+np.exp(-z+5)));
plt.xlabel("Input to Nueron")
plt.title("Sigmoid Response with Bias=5")
plt.ylabel("Sigmoid Response");
"""
Explanation: Abstract
A nueral network is a computational analogy to the methods by which humans think. Their design builds upon the idea of a neuron either firing or not firing based on some stimuli and learn whether or not they made the right choice. To allow
for richer results with less complicated networks, boolean response is replaced with a continuous analog, the sigmoid
function. The network learns by taking our definition of how incorrect they are in the form of a so-called cost function and find the most effective way to reduce the function to a minimum, i.e. be the least incorrect. It is ideal to minimize the number of training sessions that must be used to get a maximum accuracy due to computational cost and time. In this
project, the minimum number of training sets to reach a sufficient accuracy will be explored for multiple standard cost functions. As well, a new cost function may be explored along with a method for generating cost functions. And finally,
given a sufficient amount of time, the network will be tested with nonconformant input, in this case, scanned and
partitioned handwritten digits.
Base Question
Does it work?
Does it work well?
The first step in building a neural net is simply understanding and building the base algorithms. There are three things that define a network:
Shape
The shape of a network merely describes how many neurons there are and where they are. There are three typical locations that neurons live in: the Input Layer, the Hidden Layer, and the Output Layer. The Hidden Layer can be composed of more than one layer, but by convention, it is referred to as one layer. The Input Layer is significant because it takes the inputs. It typically does not do any discrimination before passing it along, but there is nothing barring that from occurring. The Output Layer produces a result. In most cases, the result still requires some interpretation, but is in its final form as far as the network is concerned. Each of the layers can have as many neurons as are needed, but it is favorable to reduce the number to the bare minimum for both computational reasons and for accuracy.
Weights
Weights live in between individual neurons and dictate how much the decision made by a neuron in the layer before it matters to the next neuron's decision. A good analogy might be that Tom (a neuron) has two friends, Sally (a neurette?) and Joe (also a neuron). They are good friends so Tom likes to ask Sally and Joe's opinion about decisions he is about to make. However, Joe is a bit crazy, likes to go out and party, etc., so Tom trusts Sally's opinion a bit more than Joe's. If Tom quantified how much he trusted Sally or Joe, that quantification would be called a weight.
Biases
Biases are tied to each neuron and its decision making process. A bias in the boolean sense acts as a threshold at which point a true is returned. In the continuous generalization of the boolean process, the bias corresponds to the threshold at which point a value above 0.5 is returned. Back to our analogy with Tom and his friends, a bias might constitute how strongly each person feels about their opinion on a subject. So when Tom asks Sally and Joe about their opinion about someone else, call her Julie, Sally responds with a fairly neutral response because she doesn't know Julie, so her bias is around 0. Joe, on the other hand, used to date Julie and they had a bad break up, so he responds quite negatively, and somewhat unintuitively, his bias is very high. (See the graph of the sigmoid function below with zero bias) In other words, he has a very high threshold for speaking positively about Julie.
End of explanation
"""
ipynb.display.Image("http://neuralnetworksanddeeplearning.com/images/tikz11.png")
"""
Explanation: So, how does it work?
There are three core algorithms behind every neural net: Feed Forward, Back Propagation/Error Computation, and Gradient Descent.
Feed Forward
The Feed Forward algorithm could be colloquially called the "Gimme an Answer" algorithm. It sends the inputs through the network and returns the outputs. We can break it down step by step and see what is really going on:
Inputs
Each input value is fed into the corresponding input neuron, that's it. In a more sophisticated network, some inputs could be rejected based on bias criterion, but for now we leave them alone.
Channels
Each input neuron is connected to every neuron in the first hidden layer through a channel; to see this visually, look at the diagram below. Each channel is given a weight that is multiplied by the value passed on by the input neuron and is then summed with all the channels feeding the same neuron and is passed into the hidden layer neuron. The channels can be thought of as pipes allowing water to flow from each input neuron to each hidden layer neuron. The weights in our network represent the diameter of these pipes (large or small). As well, pipes converge to a hidden layer neuron and dump all of their water into a basin representing the neuron.
Neurons
Once a value reaches a neuron that is not an input neuron, the value is passed through a sigmoid function similar to those above with the proper bias for that neuron. The sigmoid response is the value that gets passed on to the next layer of neurons.
Repeat
The Channels and Neurons steps are repeated through each layer until the final output is reached.
End of explanation
"""
ipynb.display.Image("http://blog.datumbox.com/wp-content/uploads/2013/10/gradient-descent.png")
"""
Explanation: Back Propagation/Error Computation
Back Propagation is one of the scary buzz words in the world of neural nets, it doesn't have to be so scary. I prefer to call it error computation to be more transparent because, in essence, that is what it does. Let's dig in!
Cost Function
The cost function is a major factor in how your network learns. It defines, numerically, how wrong your network is. The function itself is typically defined by some sort of difference between your network's output and the actual correct answer. Because it is a function of the output, it is also a function of every weight and bias in your network. This means that it could have potentially thousands of independent variables. In its simplest form, a cost function should have some quite definite properties: when the output is near the correct answer, the cost function should be near zero, a small change in any single weight or bias should result in a small change in the cost function, and the cost function must be non-negative everywhere.
Error Computation
Through a set of nifty equations which will not be shown here, once you have a cost function and take the gradient with respect to the output of said cost function, you are able to calculate a metric for the error of the output. Through some clever deductions based on the fact that a small change in any independent variable results in a small change in the cost function, we can calculate that same metric for each independent variable. (That is the Back Propagation bit) You can then calculate, through further clever deductions, the partial derivative of the cost function with respect to each independent variable. The partial derivative of the cost function with respect to each variable will come in handy when we do Gradient Descent.
Gradient Descent
Gradient Descent uses the fact that we want to minimize our cost function together with the idea of the gradient as the path of steepest descent.
Down the Mountain
The Gradient Descent uses the gradients we calculated in the Error Computation step and tells us how we should change our variables if we want to reach a minimum in the fastest way possible. The algorithm uses the fact that the gradient with respect to an independent variable represents the component of the vector pointing in the direction of most change in that variable's dimension. Because even Euler couldn't imagine a thousand-dimensional space, we draw some intuition from the familiar three-dimensional case. Suppose that you are dropped at a random location on a mountain. Suppose further that you are blind (or it is so foggy that you can't see anything). How do you find the fastest way to the bottom? Well, the only thing that you can do is sense the slope that seems to be the steepest and walk down it. But you are a mathematician and have no grasp on estimating things, so you calculate the gradient with respect to your left-right direction and your front-back direction. You see that if you take a half step to the left and a quarter step forward you will move the furthest downwards. Wait! Why just one step? First of all, mountains are complicated surfaces and their slopes change from place to place, so continuing to make the same steps may not take you the most downwards, or even downwards at all. Secondly, you are blind! (or it is really foggy) If you start running or jumping down the slope, you may overshoot a minimum and have to stop and turn around. In the actual gradient descent algorithm, the step size is represented by something called the learning rate. A step in the right direction is performed in the algorithm by reducing each individual variable by this learning constant multiplied by the gradient with respect to that particular variable. After doing this thousands of times, we find the local minima of our cost function.
End of explanation
"""
|
JasonSanchez/w261 | exams/w261mt/MIDS-MidTerm-2016-10-16.ipynb | mit | import numpy as np
from __future__ import division
%reload_ext autoreload
%autoreload 2
"""
Explanation: MIDS Machine Learning at Scale
MidTerm Exam
4:00PM - 6:00PM(CT)
October 19, 2016
Midterm
MIDS Machine Learning at Scale
Please insert your contact information here
Insert your name here : Jason Sanchez
Insert your email here : jason.sanchez@ischool.berkeley.edu
Insert your UC Berkeley ID here: 26989981
End of explanation
"""
%%writefile kltext.txt
1.Data Science is an interdisciplinary field about processes and systems to extract knowledge or insights from large volumes of data in various forms (data in various forms, data in various forms, data in various forms), either structured or unstructured,[1][2] which is a continuation of some of the data analysis fields such as statistics, data mining and predictive analytics, as well as Knowledge Discovery in Databases.
2.Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.[2] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[3]:2 rather than following strictly static program instructions.
"""
Explanation: Exam Instructions
: Please insert Name and Email address in the first cell of this notebook
: Please acknowledge receipt of exam by sending a quick email reply to the instructor
: Review the submission form first to scope it out (it will take 5-10 minutes to input your
answers and other information into this form):
Exam Submission form
: Please keep all your work and responses in ONE (1) notebook only (and submit via the submission form)
: Please make sure that the NBViewer link for your Submission notebook works
: Please submit your solutions and notebook via the following form:
Exam Submission form
: For the midterm you will need access to MrJob and Jupyter on your local machines or on AltaScale/AWS to complete some of the questions (like fill in the code to do X).
: As for question types:
Knowledge test Programmatic/doodle (take photos; embed the photos in your notebook)
All programmatic questions can be run locally on your laptop (using MrJob only) or on the cluster
: This is an open book exam meaning you can consult webpages and textbooks, class notes, slides etc. but you can not discuss with each other or any other person/group. If any collusion, then this will result in a zero grade and will be grounds for dismissal from the entire program. Please complete this exam by yourself within the time limit.
Exam questions begins here
===Map-Reduce===
MT1. Which of the following statements about map-reduce are true?
(I) If you only have 1 computer with 1 computing core, then map-reduce is unlikely to help
(II) If we run map-reduce using N single-core computers, then it is likely to get at least an N-Fold speedup compared to using 1 computer
(III) Because of network latency and other overhead associated with map-reduce, if we run map-reduce using N computers, then we will get less than N-Fold speedup compared to using 1 computer
(IV) When using map-reduce for learning a naive Bayes classifier for SPAM classification, we usually use a single machine that accumulates the partial class and word stats from each of the map machines, in order to compute the final model.
Please select one from the following that is most correct:
(a) I, II, III, IV
(b) I, III, IV
(c) I, III
(d) I,II, III
C
===Order inversion===
MT2. normalized product co-occurrence
Suppose you wish to write a MapReduce job that creates normalized product co-occurrence (i.e., pairs of products that have been purchased together) data from a large transaction file of shopping baskets. In addition, we want the relative frequency of co-occurring products. Given this scenario, to ensure that all (potentially many) reducers
receive appropriate normalization factors (denominators) for a product
in the most efficient order in their input streams (so as to minimize memory overhead on the reducer side),
the mapper should emit/yield records according to which pattern for the product occurrence totals:
(a) emit (*,product) count
(b) There is no need to use order inversion here
(c) emit (product,*) count
(d) None of the above
A
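To make the pattern concrete, here is a rough sketch (not the exam's reference solution) of a mapper that emits a `(*, product)` marker record carrying each product's marginal count alongside the pair counts:

```python
def cooccurrence_mapper(basket):
    """Yield pair counts plus a marker record per product for normalization."""
    for a in basket:
        yield ("*", a), 1          # denominator (total occurrences of a)
        for b in basket:
            if a != b:
                yield (a, b), 1    # co-occurrence count for the pair (a, b)

records = list(cooccurrence_mapper(["bread", "milk"]))
print(records)
```

With an appropriate partitioner and sort order, the `*` records reach the reducer ahead of the pair counts they normalize, so the denominator is in hand before any relative frequency is emitted.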
===Map-Reduce===
MT3. What is the input to the Reduce function in MRJob? Select the most correct choice.
(a) An arbitrarily sized list of key/value pairs.
(b) One key and a list of some values associated with that key
(c) One key and a list of all values associated with that key.
(d) None of the above
C
(although strictly speaking it is a generator, not a list)
===Bayesian document classification===
MT4. When building a Bayesian document classifier, Laplace smoothing serves what purpose?
(a) It allows you to use your training data as your validation data.
(b) It prevents zero-products in the posterior distribution.
(c) It accounts for words that were missed by regular expressions.
(d) None of the above
B
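To see why option (b) holds, here is a minimal sketch of add-one (Laplace) smoothing; the counts and vocabulary size below are made up for illustration:

```python
def smoothed_word_prob(word_count, class_word_total, vocab_size, alpha=1.0):
    """P(word | class) with add-alpha (Laplace) smoothing: never zero."""
    return (word_count + alpha) / (class_word_total + alpha * vocab_size)

# An unseen word no longer zeroes out the whole posterior product:
print(smoothed_word_prob(0, 100, 50))  # small but strictly positive
```

Without smoothing, a single unseen word would multiply a zero into the class-conditional product and wipe out the posterior, which is exactly what Laplace smoothing prevents.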
MT5. Big Data
Big data is defined as the voluminous amount of structured, unstructured or semi-structured data that has huge potential for mining but is so large that it cannot be processed nor stored using traditional (single computer) computing and storage systems. Big data is characterized by its high velocity, volume and variety that requires cost effective and innovative methods for information processing to draw meaningful business insights. More than the volume of the data – it is the nature of the data that defines whether it is considered as Big Data or not. What do the four V’s of Big Data denote? Here is a potential simple explanation for each of the four critical features of big data (some or all of which is correct):
Statements
* (I) Volume –Scale of data
* (II) Velocity – Batch processing of data offline
* (III)Variety – Different forms of data
* (IV) Veracity –Uncertainty of data
Which combination of the above statements is correct. Select a single correct response from the following :
(a) I, II, III, IV
(b) I, III, IV
(c) I, III
(d) I,II, III
B
MT6. Combiners can be integral to the successful utilization of the Hadoop shuffle.
Using combiners result in what?
(I) minimization of reducer workload
(II) minimization of disk storage for mapper results
(III) minimization of network traffic
(IV) none of the above
Select most correct option (i.e., select one option only) from the following:
(a) I
(b) I, II and III
(c) II and III
(d) IV
B (uncertain)
Pairwise similarity using K-L divergence
In probability theory and information theory, the Kullback–Leibler divergence
(also information divergence, information gain, relative entropy, KLIC, or KL divergence)
is a non-symmetric measure of the difference between two probability distributions P and Q.
Specifically, the Kullback–Leibler divergence of Q from P, denoted DKL(P‖Q),
is a measure of the information lost when Q is used to approximate P:
For discrete probability distributions P and Q,
the Kullback–Leibler divergence of Q from P is defined to be
+ KLDistance(P, Q) = Sum_over_item_i (P(i) log (P(i) / Q(i)))
In the extreme cases, the KL divergence grows large when P and Q are very different
and is 0 when the two distributions are exactly the same (follow the same distribution).
For more information on K-L Divergence see:
+ [K-L Divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence)
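The definition translates directly into NumPy (a sketch; the two example distributions are arbitrary and assumed to have nonzero entries):

```python
import numpy as np

def kl_divergence(p, q):
    """Sum_i p_i * log(p_i / q_i) for discrete distributions with nonzero entries."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # identical distributions -> 0.0
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))  # different distributions -> positive
```

Note the asymmetry: in general `kl_divergence(p, q)` differs from `kl_divergence(q, p)`, which is why the exam is careful to write KLD(Line1||Line2).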
For the next three questions we will use an MRjob class for calculating pairwise similarity
using K-L Divergence as the similarity measure:
Job 1: create inverted index (assume just two objects)
Job 2: calculate/accumulate the similarity of each pair of objects using K-L Divergence
Using the following cells then fill in the code for the first reducer to calculate
the K-L divergence of objects (letter documents) in line1 and line2, i.e., KLD(Line1||line2).
Here we ignore characters which are not alphabetical. And all alphabetical characters are lower-cased in the first mapper.
Using the MRJob Class below calculate the KL divergence of the following two string objects.
End of explanation
"""
import numpy as np
np.log(3)
!cat kltext.txt
%%writefile kldivergence.py
# coding: utf-8
from __future__ import division
from mrjob.job import MRJob
from mrjob.step import MRStep
import re
import numpy as np
class kldivergence(MRJob):
# process each string character by character
# the relative frequency of each character emitting Pr(character|str)
# for input record 1.abcbe
# emit "a" [1, 0.2]
# emit "b" [1, 0.4] etc...
def mapper1(self, _, line):
index = int(line.split('.',1)[0])
letter_list = re.sub(r"[^A-Za-z]+", '', line).lower()
count = {}
for l in letter_list:
count[l] = count.get(l, 0) + 1
for key in count:
yield key, [index, count[key]*1.0/len(letter_list)]
# on a component i calculate (e.g., "b")
# Kullback–Leibler divergence of Q from P is defined as (P(i) log (P(i) / Q(i))
def reducer1(self, key, values):
p = 0
q = 0
for v in values:
if v[0] == 1: #String 1
p = v[1]
else: # String 2
q = v[1]
if p and q:
yield (None, p*np.log(p/q))
#Aggregate components
def reducer2(self, key, values):
kl_sum = 0
for value in values:
kl_sum = kl_sum + value
yield "KLDivergence", kl_sum
def steps(self):
mr_steps = [self.mr(mapper=self.mapper1,
reducer=self.reducer1),
self.mr(reducer=self.reducer2)]
# mr_steps = [MRStep(mapper=self.mapper1, reducer=self.reducer1)]
return mr_steps
if __name__ == '__main__':
kldivergence.run()
%reload_ext autoreload
%autoreload 2
from mrjob.job import MRJob
from kldivergence import kldivergence
#dont forget to save kltext.txt (see earlier cell)
mr_job = kldivergence(args=['kltext.txt'])
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access of the output
for line in runner.stream_output():
print mr_job.parse_output_line(line)
"""
Explanation: MRjob class for calculating pairwise similarity using K-L Divergence as the similarity measure
Job 1: create inverted index (assume just two objects) <P>
Job 2: calculate the similarity of each pair of objects
End of explanation
"""
words = """
1.Data Science is an interdisciplinary field about processes and systems to extract knowledge or insights from large volumes of data in various forms (data in various forms, data in various forms, data in various forms), either structured or unstructured,[1][2] which is a continuation of some of the data analysis fields such as statistics, data mining and predictive analytics, as well as Knowledge Discovery in Databases.
2.Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.[2] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[3]:2 rather than following strictly static program instructions."""
for char in ['p', 'k', 'f', 'q', 'j']:
if char not in words:
print char
"""
Explanation: Questions:
MT7. Which number below is the closest to the result you get for KLD(Line1||line2)?
(a) 0.7
(b) 0.5
(c) 0.2
(d) 0.1
D
MT8. Which of the following letters are missing from these character vectors?
(a) p and t
(b) k and q
(c) j and q
(d) j and f
End of explanation
"""
%%writefile kldivergence_smooth.py
from __future__ import division
from mrjob.job import MRJob
import re
import numpy as np
class kldivergence_smooth(MRJob):
# process each string character by character
# the relative frequency of each character emitting Pr(character|str)
# for input record 1.abcbe
# emit "a" [1, (1+1)/(5+24)]
# emit "b" [1, (2+1)/(5+24) etc...
def mapper1(self, _, line):
index = int(line.split('.',1)[0])
letter_list = re.sub(r"[^A-Za-z]+", '', line).lower()
count = {}
# (ni+1)/(n+24)
for l in letter_list:
count[l] = count.get(l, 0) + 1
for letter in ['q', 'j']:
if letter not in letter_list:
count[letter] = 0
for key in count:
yield key, [index, (1+count[key]*1.0)/(24+len(letter_list))]
def reducer1(self, key, values):
p = 0
q = 0
for v in values:
if v[0] == 1:
p = v[1]
else:
q = v[1]
yield (None, p*np.log(p/q))
# Aggregate components
def reducer2(self, key, values):
kl_sum = 0
for value in values:
kl_sum = kl_sum + value
yield "KLDivergence", kl_sum
def steps(self):
return [self.mr(mapper=self.mapper1,
reducer=self.reducer1),
self.mr(reducer=self.reducer2)
]
if __name__ == '__main__':
kldivergence_smooth.run()
%reload_ext autoreload
%autoreload 2
from kldivergence_smooth import kldivergence_smooth
mr_job = kldivergence_smooth(args=['kltext.txt'])
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access of the output
for line in runner.stream_output():
print mr_job.parse_output_line(line)
"""
Explanation: C
End of explanation
"""
%%writefile spam.txt
0002.2001-05-25.SA_and_HP 0 0 good
0002.2001-05-25.SA_and_HP 0 0 very good
0002.2001-05-25.SA_and_HP 1 0 bad
0002.2001-05-25.SA_and_HP 1 0 very bad
0002.2001-05-25.SA_and_HP 1 0 very bad, very BAD
%%writefile spam_test.txt
0002.2001-05-25.SA_and_HP 1 0 good? bad! very Bad!
%%writefile NaiveBayes.py
from __future__ import division  # ensure ham/total below is true division under Python 2
import sys
import re
from mrjob.job import MRJob
from mrjob.step import MRStep
from mrjob.protocol import TextProtocol, TextValueProtocol
# Prevents broken pipe errors from using ... | head
from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE,SIG_DFL)
def sum_hs(counts):
h_total, s_total = 0, 0
for h, s in counts:
h_total += h
s_total += s
return (h_total, s_total)
class NaiveBayes(MRJob):
MRJob.OUTPUT_PROTOCOL = TextValueProtocol
def mapper(self, _, lines):
_, spam, subject, email = lines.split("\t")
words = re.findall(r'[a-z]+', (email.lower()+" "+subject.lower()))
if spam == "1":
h, s = 0, 1
else:
h, s = 1, 0
yield "***Total Emails", (h, s)
for word in words:
yield word, (h, s)
yield "***Total Words", (h, s)
def combiner(self, key, count):
yield key, sum_hs(count)
def reducer_init(self):
self.total_ham = 0
self.total_spam = 0
def reducer(self, key, count):
ham, spam = sum_hs(count)
if key.startswith("***"):
if "Words" in key:
self.total_ham, self.total_spam = ham, spam
elif "Emails" in key:
total = ham + spam
yield "_", "***Priors\t%.10f\t%.10f" % (ham/total, spam/total)
else:
pg_ham, pg_spam = ham/self.total_ham, spam/self.total_spam
yield "_", "%s\t%.10f\t%.10f" % (key, pg_ham, pg_spam)
if __name__ == "__main__":
NaiveBayes.run()
!cat spam.txt | python NaiveBayes.py --jobconf mapred.reduce.tasks=1 -q | head
"""
Explanation: MT9. The KL divergence on multinomials is defined only when they have nonzero entries.
For zero entries, we have to smooth distributions. Suppose we smooth in this way:
(ni+1)/(n+24)
where ni is the count for letter i and n is the total count of all letters.
After smoothing, which number below is the closest to the result you get for KLD(Line1||line2)??
(a) 0.08
(b) 0.71
(c) 0.02
(d) 0.11
A
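The smoothing formula can be checked directly against the worked example in the mapper's docstring (record `1.abcbe`); this is a standalone sketch, with 24 as the alphabet size used by the exam's scheme:

```python
from collections import Counter

def smoothed_char_probs(text, alphabet_size=24):
    """Per-character probabilities smoothed as (n_i + 1) / (n + alphabet_size)."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    n = sum(counts.values())
    return {c: (counts[c] + 1.0) / (n + alphabet_size) for c in counts}

probs = smoothed_char_probs("abcbe")
print(probs["b"])  # (2 + 1) / (5 + 24)
```

This matches the emissions described in the `kldivergence_smooth` mapper, e.g. "b" maps to (2+1)/(5+24) for the first string.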
MT10. Block size, and mapper tasks
Given ten (10) files in the input directory for a Hadoop Streaming job (MRjob or just Hadoop) with the following file sizes (in megabytes): 1, 2, 3, 4, 5, 6, 7, 8, 9, 10; and a block size of 5M (NOTE: normally we would set the block size to 1 GB on modern computers). How many map tasks will result from processing the data in the input directory? Select the closest number from the following list.
(a) 1 map task
(b) 14
(c) 12
(d) None of the above
B
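A quick back-of-the-envelope check (each file is split independently, so it contributes ceil(file size / block size) map tasks):

```python
import math

file_sizes_mb = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
block_mb = 5
# files of 1-5 MB fit in one block; files of 6-10 MB need two blocks each
map_tasks = sum(math.ceil(s / float(block_mb)) for s in file_sizes_mb)
print(map_tasks)  # 15, so the closest listed option is 14
```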
MT11. Aggregation
Given a purchase transaction log file where each purchase transaction contains the customer identifier, item purchased and much more information about the transaction. Which of the following statements are true about a MapReduce job that performs an “aggregation” such as get the number of transaction per customer.
Statements
* (I) A mapper only job will not suffice, as each map task only gets to see a subset of the data (e.g., one block). As such a mapper only job will only produce intermediate tallies for each customer.
* (II) A reducer only job will suffice and is most efficient computationally
* (III) If the developer provides a Mapper and Reducer it can potentially be more efficient than option II
* (IV) A reducer only job with a custom partitioner will suffice.
Select the most correct option from the following:
(a) I, II, III, IV
(b) II, IV
(c) III, IV
(d) III
C
MT12. Naive Bayes
Which of the following statements are true regarding Naive Bayes?
Statements
* (I) Naive Bayes is a machine learning algorithm that can be used for classification problems only
* (II) Multinomial Naive Bayes is a flavour of Naive Bayes for discrete input variables and can be combined with Laplace smoothing to avoid zero predictions for class posterior probabilities when attribute value combinations show up during classification but were not present during training.
* (III) Naive Bayes can be used for continuous valued input variables. In this case, one can use Gaussian distributions to model the class conditional probability distributions Pr(X|Class).
* (IV) Naive Bayes can model continuous target variables directly.
Please select the single most correct combination from the following:
(a) I, II, III, IV
(b) I, II, III
(c) I, III, IV
(d) I, II
B
MT13. Naive Bayes SPAM model
Given the following document dataset for a Two-Class problem: ham and spam. Use MRJob (please include your code) to build a multinomial Naive Bayes classifier. Please use Laplace Smoothing with a hyperparameter of 1. Please use words only (a-z) as features. Please lowercase all words.
End of explanation
"""
def inverse_vector_length(x1, x2):
norm = (x1**2 + x2**2)**.5
return 1.0/norm
inverse_vector_length(1, 5)
0 --> .2
%matplotlib inline
import numpy as np
import pylab
import pandas as pd
data = pd.read_csv("Kmeandata.csv", header=None)
pylab.plot(data[0], data[1], 'o', linewidth=0, alpha=.5);
%%writefile Kmeans.py
from numpy import argmin, array, random
from mrjob.job import MRJob
from mrjob.step import MRStep
from itertools import chain
import os
#Find the nearest centroid for a data point
def MinDist(datapoint, centroid_points):
datapoint = array(datapoint)
centroid_points = array(centroid_points)
diff = datapoint - centroid_points
diffsq = diff*diff
# Get the nearest centroid for each instance
minidx = argmin(list(diffsq.sum(axis = 1)))
return minidx
#Check whether centroids converge
def stop_criterion(centroid_points_old, centroid_points_new,T):
oldvalue = list(chain(*centroid_points_old))
newvalue = list(chain(*centroid_points_new))
Diff = [abs(x-y) for x, y in zip(oldvalue, newvalue)]
Flag = True
for i in Diff:
if(i>T):
Flag = False
break
return Flag
class MRKmeans(MRJob):
centroid_points=[]
k=3
def steps(self):
return [
MRStep(mapper_init = self.mapper_init, mapper=self.mapper,combiner = self.combiner,reducer=self.reducer)
]
#load centroids info from file
def mapper_init(self):
# print "Current path:", os.path.dirname(os.path.realpath(__file__))
self.centroid_points = [map(float,s.split('\n')[0].split(',')) for s in open("Centroids.txt").readlines()]
#open('Centroids.txt', 'w').close()
# print "Centroids: ", self.centroid_points
#load data and output the nearest centroid index and data point
def mapper(self, _, line):
D = (map(float,line.split(',')))
yield int(MinDist(D, self.centroid_points)), (D[0],D[1],1)
#Combine sum of data points locally
def combiner(self, idx, inputdata):
sumx = sumy = num = 0
for x,y,n in inputdata:
num = num + n
sumx = sumx + x
sumy = sumy + y
yield idx,(sumx,sumy,num)
#Aggregate sum for each cluster and then calculate the new centroids
def reducer(self, idx, inputdata):
centroids = []
num = [0]*self.k
for i in range(self.k):
centroids.append([0,0])
for x, y, n in inputdata:
num[idx] = num[idx] + n
centroids[idx][0] = centroids[idx][0] + x
centroids[idx][1] = centroids[idx][1] + y
centroids[idx][0] = centroids[idx][0]/num[idx]
centroids[idx][1] = centroids[idx][1]/num[idx]
yield idx,(centroids[idx][0],centroids[idx][1])
if __name__ == '__main__':
MRKmeans.run()
%reload_ext autoreload
%autoreload 2
from numpy import random
from Kmeans import MRKmeans, stop_criterion
mr_job = MRKmeans(args=['Kmeandata.csv', '--file=Centroids.txt'])
#Geneate initial centroids
centroid_points = []
k = 3
for i in range(k):
centroid_points.append([random.uniform(-3,3),random.uniform(-3,3)])
with open('Centroids.txt', 'w+') as f:
f.writelines(','.join(str(j) for j in i) + '\n' for i in centroid_points)
# Update centroids iteratively
i = 0
while(1):
# save previous centoids to check convergency
centroid_points_old = centroid_points[:]
print "iteration"+str(i)+":"
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access of the output
for line in runner.stream_output():
key,value = mr_job.parse_output_line(line)
print key, value
centroid_points[key] = value
# Update the centroids for the next iteration
with open('Centroids.txt', 'w') as f:
f.writelines(','.join(str(j) for j in i) + '\n' for i in centroid_points)
print "\n"
i = i + 1
if(stop_criterion(centroid_points_old,centroid_points,0.01)):
break
print "Centroids\n"
print centroid_points
pylab.plot(data[0], data[1], 'o', linewidth=0, alpha=.5);
for point in centroid_points:
pylab.plot(point[0], point[1], '*',color='pink',markersize=20)
for point in [(-4.5,0.0), (4.5,0.0), (0.0,4.5)]:
pylab.plot(point[0], point[1], '*',color='red',markersize=20)
pylab.show()
"""
Explanation: QUESTION
Having learnt the Naive Bayes text classification model for this problem using the training data and classified the test data (d6) please indicate which of the following is true:
Statements
* (I) P(very|ham) = 0.33
* (II) P(good|ham) = 0.50
* (III) Posterior Probability P(ham|d6) is approximately 24%
* (IV) Class of d6 is ham
Please select the single most correct combination of these statements from the following:
(a) I, II, III, IV
(b) I, II, III
(c) I, III, IV
(d) I, II
C (wild guess)
MT14. Is there a map input format (for Hadoop or MRJob)?
(a) Yes, but only in Hadoop 0.22+.
(b) Yes, in Hadoop there is a default expectation that each record is delimited by an end of line character and that the key is the first token delimited by a tab character and that the value-part is everything after the tab character.
(c) No, when MRJob INPUT_PROTOCOL = RawValueProtocol. In this case input is processed in format agnostic way thereby avoiding any type of parsing errors. The value is treated as a str, the key is read in as None.
(d) Both b and c are correct answers.
D
MT15. What happens if mapper output does not match reducer input?
(a) Hadoop API will convert the data to the type that is needed by the reducer.
(b) Data input/output inconsistency cannot occur. A preliminary validation check is executed prior to the full execution of the job to ensure there is consistency.
(c) The java compiler will report an error during compilation but the job will complete with exceptions.
(d) A real-time exception will be thrown and map-reduce job will fail.
D
MT16. Why would a developer create a map-reduce without the reduce step?
(a) Developers should design Map-Reduce jobs without reducers only if no reduce slots are available on the cluster.
(b) Developers should never design Map-Reduce jobs without reducers. An error will occur upon compile.
(c) There is a CPU intensive step that occurs between the map and reduce steps. Disabling the reduce step speeds up data processing.
(d) It is not possible to create a map-reduce job without at least one reduce step. A developer may decide to limit to one reducer for debugging purposes.
C
===Gradient descent===
MT17. Which of the following are true statements with respect to gradient descent for machine learning, where alpha is the learning rate. Select all that apply
(I) To make gradient descent converge, we must slowly decrease alpha over time and use a combiner in the context of Hadoop.
(II) Gradient descent is guaranteed to find the global minimum for any unconstrained convex objective function J() regardless of using a combiner or not in the context of Hadoop
(III) Gradient descent can converge even if alpha is kept fixed. (But alpha cannot be too large, or else it may fail to converge.) Combiners will help speed up the process.
(IV) For the specific choice of cost function J() used in linear regression, there are no local optima (other than the global optimum).
Select a single correct response from the following:
* (a) I, II, III, IV
* (b) I, III, IV
* (c) II, III
* (d) II,III, IV
D
===Weighted K-means===
Write a MapReduce job in MRJob to do the training at scale of a weighted K-means algorithm.
You can write your own code or you can use most of the code from the following notebook:
http://nbviewer.jupyter.org/urls/dl.dropbox.com/s/oppgyfqxphlh69g/MrJobKmeans_Corrected.ipynb
Weight each example as follows using the inverse vector length (Euclidean norm):
weight(X)= 1/||X||,
where ||X|| = SQRT(X.X)= SQRT(X1^2 + X2^2)
Here X is vector made up of two component X1 and X2.
Using the following data to answer the following TWO questions:
https://www.dropbox.com/s/ai1uc3q2ucverly/Kmeandata.csv?dl=0
End of explanation
"""
|
griffinfoster/fundamentals_of_interferometry | 3_Positional_Astronomy/3_4_direction_cosine_coordinates.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
"""
Explanation: Outline
Glossary
Positional Astronomy
Previous: Horizontal Coordinates
Next: Further Reading
Import standard modules:
End of explanation
"""
from IPython.display import HTML
HTML('../style/code_toggle.html')
"""
Explanation: Import section specific modules:
End of explanation
"""
RA_rad = (np.pi/12) * np.array([5. + 30./60, 5 + 32./60 + 0.4/3600, 5 + 36./60 + 12.8/3600, 5 + 40./60 + 45.5/3600])
DEC_rad = (np.pi/180)*np.array([60., 60. + 17.0/60 + 57./3600, 61. + 12./60 + 6.9/3600, 61 + 56./60 + 34./3600])
Flux_sources_labels = np.array(["", "1 Jy", "0.5 Jy", "0.2 Jy"])
Flux_sources = np.array([1., 0.5, 0.1]) #in Janskys
print "RA (rad) of Sources and Field Center = ", RA_rad
print "DEC (rad) of Sources = ", DEC_rad
"""
Explanation: Direction Cosine Coordinates
There is another useful astronomical coordinate system that we ought to introduce at this juncture, namely the direction cosine coordinate system. The direction cosine coordinate system is quite powerful and allows us to redefine the fundamental reference point on the celestial sphere, from which we measure all other celestial objects, to an arbitrary location (i.e. we can make local sky-maps around our own chosen reference point; the vernal equinox need not be our fundamental reference point). Usually this arbitrary location is chosen to be the celestial source that we are interested in observing. We generally refer to this arbitrary location as the field centre or phase centre.
<div class=advice>
<b>Note:</b> The direction cosine coordinate system is useful for another reason, when we use
it to image interferometric data, it becomes evident that there exists a Fourier relationship between the sky brightness function and the measurements that an interferometer makes (see <a href='../4_Visibility_Space/4_0_introduction.ipynb'>Chapter 4 ➞</a>).
</div>
<br>
We use three coordinates in the direction cosine coordinate system, namely $l$, $m$ and $n$. The coordinates $l$, $m$ and $n$ are dimensionless direction cosines, i.e.
\begin{eqnarray}
l &=& \cos(\alpha) = \frac{a_1}{|\mathbf{a}|}\\
m &=& \cos(\beta) = \frac{a_2}{|\mathbf{a}|}\\
n &=& \cos(\gamma) = \frac{a_3}{|\mathbf{a}|}
\end{eqnarray}
<a id='pos:fig:cosines'></a> <!--\label{pos:fig:cosines}--><img src='figures/cosine.svg' width=35%>
Figure 3.4.1: Definition of direction cosines.
The quantities $\alpha$, $\beta$, $\gamma$, $a_1$, $a_2$, $a_3$ and $\mathbf{a}$ are all defined in <a class='pos_fig_cos_dir'></a> <!--\ref{pos:fig:cos}-->. Moreover, $|\cdot|$ denotes the magnitude of its operand. The definitions above also imply that $l^2+m^2+n^2 = 1$. When $|\mathbf{a}|=1$ then we may simply interpret $l$, $m$ and $n$ as Cartesian coordinates, i.e. we may simply relabel the axes $x$, $y$ and $z$ (in <a class='pos_fig_cos_dir'></a><!--\ref{pos:fig:cos}-->) to
$l$, $m$ and $n$.
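A quick numeric check of the identity $l^2+m^2+n^2=1$ (the example vector below is an arbitrary choice):

```python
import numpy as np

a = np.array([2.0, 3.0, 6.0])     # arbitrary example vector, |a| = 7
l, m, n = a / np.linalg.norm(a)   # the three direction cosines
print(l**2 + m**2 + n**2)         # -> 1.0, up to floating point rounding
```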
So the question now arises, how do we use $l$, $m$ and $n$ to uniquely identify a location on the celestial sphere? The direction cosine coordinate system (and the relationship between it and the celestial coordinate system) is depicted in <a class='pos_fig_dirconversion_dir'></a><!--\ref{pos:fig:dirconversion}-->. Note that the $n$-axis points toward the field center (which is denoted by $\boldsymbol{s}_c$ in <a class='pos_fig_dirconversion_dir'></a><!--\ref{pos:fig:dirconversion}-->). It should be clear from <a class='pos_fig_dirconversion_dir'></a><!--\ref{pos:fig:dirconversion}--> that we can use $\mathbf{s} = (l,m,n)$ to uniquely identify any location on the celestial sphere.
<a id='pos:fig:convert_lmn_ra_dec'></a> <!--\label{pos:fig:convert_lmn_ra_dec}--><img src='figures/conversion2.svg' width=40%>
Figure 3.4.2: The source-celestial pole-field center triangle; which enables us to derive the conversion equations between direction cosine and equatorial coordinates. The red plane represents the fundamental plane of the equatorial coordinate system, while the blue plane represents the fundamental plane of the direction cosine coordinate system. We are able to label the orthogonal fundamental axes of the direction cosine coordinate system $l$,$m$ and $n$, since the radius of the celestial sphere is equal to one.
We use the following equations to convert between the equatorial and direction cosine coordinate systems:
<p class=conclusion>
<font size=4><b>Converting between the equatorial and direction cosine coordinates (3.1)</b></font>
<br>
<br>
\begin{eqnarray}
l &=& \sin \theta \sin \psi = \cos \delta \sin \Delta \alpha \nonumber\\
m &=& \sin \theta \cos \psi = \sin \delta \cos \delta_0 - \cos \delta \sin \delta_0 \cos\Delta \alpha \nonumber\\
\delta &=& \sin^{-1}(m\cos \delta_0 + \sin \delta_0\sqrt{1-l^2-m^2})\nonumber\\
\alpha &=& \alpha_0 + \tan^{-1}\bigg(\frac{l}{\cos\delta_0\sqrt{1-l^2-m^2}-m\sin\delta_0}\bigg)\nonumber
\end{eqnarray}
</p>
<a id='pos_eq_convertlmnradec'></a><!--\label{pos:eq:convertlmnradec}-->
<div class=advice>
<b>Note:</b> See <a href='../0_Introduction/2_Appendix.ipynb'>Appendix ➞</a> for the derivation of the above relations.
</div>
We can obtain the conversion relations above by applying the spherical trigonometric identities in <a href='../2_Mathematical_Groundwork/2_13_spherical_trigonometry.ipynb'>$\S$ 2.13 ➞</a> to the triangle depicted in <a class='pos_fig_dirconversion_dir'></a><!--\ref{pos:fig:dirconversion}--> (the one formed by the source, the field center and the NCP).
There is another important interpretation of direction cosine coordinates we should
be cognisant of. If we project the direction cosine position vector $\mathbf{s}$ of a celestial body onto the $lm$-plane, its projected length will be equal to $\sin \theta$, where $\theta$ is the angular distance between your field center $\mathbf{s}_c$ and $\mathbf{s}$ measured along the surface of the celestial sphere. If $\theta$ is small we may use the small angle approximation, i.e. $\sin \theta \approx \theta$. The projected length of $\mathbf{s}$ is also equal to $\sqrt{l^2+m^2}$, implying that $l^2+m^2 \approx \theta^2$. We may therefore loosely interpret $\sqrt{l^2+m^2}$ as the angular distance measured between the source at $\mathbf{s}$ and the field center $\mathbf{s}_c$ measured along the surface of the celestial sphere, i.e. we may measure $l$ and $m$ in $^{\circ}$.
The explanation above is graphically illustrated in <a class='pos_fig_proj_dir'></a> <!--\ref{pos:fig:proj}-->.
<a id='pos:fig:understand_lm'></a> <!--\label{pos:fig:understand_lm}--><img src='figures/conversion2b.svg' width=40%>
Figure 3.4.3: Why do we measure $l$ and $m$ in degrees?
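This interpretation is easy to verify numerically (a toy example; the $2^\circ$ offset $\theta$ and $30^\circ$ angle $\psi$ are arbitrary choices):

```python
import numpy as np

theta = np.deg2rad(2.0)   # angular distance from the field center
psi = np.deg2rad(30.0)    # position angle of the source
l = np.sin(theta) * np.sin(psi)
m = np.sin(theta) * np.cos(psi)
# sqrt(l^2 + m^2) equals sin(theta), which is ~theta for small theta
print(np.rad2deg(np.sqrt(l**2 + m**2)))  # very close to 2 degrees
```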
<p class=conclusion>
<font size=4><b>Three interpretations of direction cosine coordinates</b></font>
<br>
<br>
• **Direction cosines**: $l$,$m$ and $n$ are direction cosines<br><br>
• **Cartesian coordinates**: $l$,$m$ and $n$ are Cartesian coordinates if we work on the
unit sphere<br><br>
• <b>Angular distance</b>: $\sqrt{l^2+m^2}$ denotes the angular distance $\theta$, $(l,m,n)$ is from the field center (if $\theta$ is sufficiently small).
</p>
Example
Here we have three sources given in RA ($\alpha$) and DEC ($\delta$):
* Source 1: (5h 32m 0.4s,60$^{\circ}$17' 57'') - 1Jy
* Source 2: (5h 36m 12.8s,61$^{\circ}$ 12' 6.9'') - 0.5Jy
* Source 3: (5h 40m 45.5s,61$^{\circ}$ 56' 34'') - 0.2Jy
The field center is located at $(\alpha_0,\delta_0) = $ (5h 30m,60$^{\circ}$). The first step is to convert right ascension and declination into radians with
\begin{eqnarray}
\alpha_{\textrm{rad}} &=& \frac{\pi}{12} \bigg(h + \frac{m}{60} + \frac{s}{3600}\bigg)\\
\delta_{\textrm{rad}} &=& \frac{\pi}{180} \bigg(d + \frac{m_{\textrm{arcmin}}}{60}+\frac{s_{\textrm{arcsec}}}{3600}\bigg)
\end{eqnarray}
In the above equations $h,~m,~s,~d,~m_{\textrm{arcmin}}$ and $s_{\textrm{arcsec}}$ respectively denote hours, minutes, seconds, degrees, arcminutes and arcseconds. If we apply the above to our three sources we obtain
End of explanation
"""
RA_delta_rad = RA_rad-RA_rad[0] #calculating delta alpha
l = np.cos(DEC_rad) * np.sin(RA_delta_rad)
m = (np.sin(DEC_rad) * np.cos(DEC_rad[0]) - np.cos(DEC_rad) * np.sin(DEC_rad[0]) * np.cos(RA_delta_rad))
print("l (degrees) = ", l*(180./np.pi))
print("m (degrees) = ", m*(180./np.pi))
"""
Explanation: Recall that we can use <a class='pos_eq_convertlmnradec_dir'></a><!--\label{pos:eq:convertlmnradec}--> to convert between equatorial and direction cosine coordinates; in terms of the current example, this translates into the Python code below. Note that before we can do the conversion we first need to calculate $\Delta \alpha$.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
plt.xlim([-4., 4.])
plt.ylim([-4., 4.])
plt.xlabel("$l$ [degrees]")
plt.ylabel("$m$ [degrees]")
plt.plot(l[0]*(180/np.pi), m[0]*(180/np.pi), "bx")  # field center (l = m = 0)
plt.plot(l[1:]*(180/np.pi), m[1:]*(180/np.pi), "ro")
counter = 1
for xy in zip(l[1:]*(180/np.pi)+0.25, m[1:]*(180/np.pi)+0.25):
ax.annotate(Flux_sources_labels[counter], xy=xy, textcoords='offset points',horizontalalignment='right',
verticalalignment='bottom')
counter = counter + 1
plt.grid()
"""
Explanation: Plotting the result.
End of explanation
"""
|
gte620v/PythonTutorialWithJupyter | exercises/solutions/Ex1-Dice_Simulation_solutions.ipynb | mit | import random
def single_die():
"""Outcome of a single die roll"""
return random.randint(1,6)
"""
Explanation: Dice Simulation
In this exercise, we want to simulate the outcome of rolling dice. We will walk through several levels of building up functionality.
Single Die
Let's create a function that will return a random value between one and six that emulates the outcome of the roll of one die. Python has a random number package called random.
End of explanation
"""
for _ in range(50):
print(single_die(),end=' ')
"""
Explanation: Check
To check our function, let's call it 50 times and print the output. We should see numbers between 1 and 6.
End of explanation
"""
def dice_roll(dice_count):
"""Outcome of a rolling dice_count dice
Args:
dice_count (int): number of dice to roll
Returns:
int: sum of face values of dice
"""
out = 0
for _ in range(dice_count):
out += single_die()
return out
"""
Explanation: Multiple Dice Roll
Now let's make a function that returns the sum of N 6-sided dice being rolled.
End of explanation
"""
for _ in range(100):
print(dice_roll(2), end=' ')
"""
Explanation: Check
Let's perform the same check with 100 values and make sure we see values in the range of 2 to 12.
End of explanation
"""
def dice_rolls(dice_count, rolls_count):
"""Return list of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
Returns:
list: list of dice roll values.
"""
out = []
for _ in range(rolls_count):
out.append(dice_roll(dice_count))
return out
print(dice_rolls(2,100))
"""
Explanation: Capture the outcome of multiple rolls
Write a function that will return a list of values for many dice rolls
End of explanation
"""
import pylab as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 4)
def dice_histogram(dice_count, rolls_count, bins):
"""Plots outcome of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
bins (int): number of histogram bins
"""
plt.hist(dice_rolls(dice_count, rolls_count),bins)
plt.show()
dice_histogram(2, 10000, 200)
"""
Explanation: Plot Result
Make a function that plots the histogram of the dice values.
End of explanation
"""
dice_histogram(100, 10000, 200)
"""
Explanation: Aside
The outputs follow a binomial distribution. As the number of dice increases, the binomial distribution approaches a Gaussian distribution due to the Central Limit Theorem (CLT). Try making a histogram with 100 dice. The resulting plot is a "Bell Curve" that represents the Gaussian distribution.
End of explanation
"""
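As a cross-check on the empirical histogram, the exact distribution for two dice can be computed by enumeration; this is a small sketch using only the standard library.

```python
from itertools import product
from collections import Counter

# exact distribution of the sum of two fair dice, to compare with the histogram
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = {total: n / 36 for total, n in counts.items()}
```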
import time
start = time.time()
dice_histogram(100, 10000, 200)
print(time.time()-start, 'seconds')
"""
Explanation: Slow?
That seemed slow. How do we time it?
End of explanation
"""
import numpy as np
np.random.randint(1,7,(2,10))
"""
Explanation: Seems like a long time... Can we make it faster? Yes!
Optimize w/ Numpy
Using lots of loops in python is not usually the most efficient way to accomplish numeric tasks. Instead, we should use numpy. With numpy we can "vectorize" operations and under the hood numpy is doing the computation with C code that has a python interface. We don't have to worry about anything under the hood.
2-D Array of Values
Start by checking out numpy's randint function. Let's rewrite dice_rolls using numpy functions and no loops.
To do this, we are going to use np.random.randint to create a 2-D array of random dice rolls. That array will have dice_count rows and rolls_count columns, i.e., the size of the array is (dice_count, rolls_count).
End of explanation
"""
np.sum(np.random.randint(1,7,(2,10)),axis=0)
"""
Explanation: The result is an np.array object, which is like a list, but better. The most notable difference is that we can do element-wise math operations on numpy arrays easily.
Column sum
To find the roll values, we need to sum up the 2-D array by each column.
End of explanation
"""
def dice_rolls_np(dice_count, rolls_count):
"""Return list of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
Returns:
np.array: list of dice roll values.
"""
return np.sum(
np.random.randint(1,7,(dice_count,rolls_count)),
axis=0)
print(dice_rolls_np(2,100))
"""
Explanation: Let's use this knowledge to rewrite dice_rolls
End of explanation
"""
def dice_histogram_np(dice_count, rolls_count, bins):
"""Plots outcome of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
bins (int): number of histogram bins
"""
plt.hist(dice_rolls_np(dice_count, rolls_count),bins)
plt.show()
start = time.time()
dice_histogram_np(100, 10000, 200)
print(time.time()-start, 'seconds')
"""
Explanation: Histogram and timeit
End of explanation
"""
%timeit dice_rolls_np(100, 1000)
%timeit dice_rolls(100, 1000)
"""
Explanation: That is way faster!
%timeit
Jupyter has a magic function to time function execution. Let's try that:
End of explanation
"""
def risk_battle():
"""Risk battle simulation"""
# get array of three dice values
attacking_dice = np.random.randint(1,7,3)
# get array of two dice values
defending_dice = np.random.randint(1,7,2)
# sort both sets and take top two values
attacking_dice_sorted = np.sort(attacking_dice)[::-1]
defending_dice_sorted = np.sort(defending_dice)[::-1]
# are the attacking values greater?
attack_wins = attacking_dice_sorted[:2] > defending_dice_sorted[:2]
# convert boolean values to -1, +1
attack_wins_pm = attack_wins*2 - 1
# sum up these outcomes
return np.sum(attack_wins_pm)
for _ in range(50):
print(risk_battle(), end=' ')
"""
Explanation: The improvement in the core function call was two orders of magnitude, but when we timed it initially, we were also waiting for the plot to render which consumed the majority of the time.
Risk Game Simulation
In Risk, two players roll dice in each battle to determine how many armies are lost on each side.
Here are the rules:
The attacking player rolls three dice
The defending player rolls two dice
The defending player wins dice ties
The dice are matched in sorted order
The outcome is a measure of the net increase in armies for the attacking player with values of -2, -1, 0, 1, 2
Let's make a function that simulates the outcome of one Risk battle and outputs the net score.
The functions we created in the first part of this tutorial are not useful for this task.
End of explanation
"""
outcomes = [risk_battle() for _ in range(10000)]
plt.hist(outcomes)
plt.show()
"""
Explanation: Histogram
Let's plot the histogram. Instead of making a function, let's just use list comprehension to make a list and then plot.
End of explanation
"""
np.mean([risk_battle() for _ in range(10000)])
"""
Explanation: Expected Margin
If we run many simulations, how many armies do we expect the attacker to be ahead by on average?
End of explanation
"""
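The Monte Carlo estimate above can be checked exactly by enumerating all $6^5$ attacker/defender dice outcomes; this sketch mirrors the tie-goes-to-defender rule used in `risk_battle`.

```python
from itertools import product

# enumerate all 6**5 outcomes: 3 attacker dice, 2 defender dice
total, net_sum = 0, 0
for outcome in product(range(1, 7), repeat=5):
    attack = sorted(outcome[:3], reverse=True)[:2]   # top two attacker dice
    defend = sorted(outcome[3:], reverse=True)       # both defender dice
    # defender wins ties; net is +1 per attacker win, -1 per loss
    net = sum(1 if a > d else -1 for a, d in zip(attack, defend))
    total += 1
    net_sum += net
exact_ev = net_sum / total
```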
|
eneskemalergin/OldBlog | _oldnotebooks/Introduction_to_Pandas-1.ipynb | mit | # Using Scalar Values
import pandas as pd
ser = pd.Series([20, 21, 12], index=['London', 'New York','Helsinki'])
print(ser)
# Using Numpy ndarray
import numpy as np
np.random.seed(100)
ser=pd.Series(np.random.rand(7))
ser
"""
Explanation: In this post I will summarize the data structures of the Pandas library. Pandas is a library written for data manipulation and analysis. It is built on top of the NumPy library and it provides features not available in NumPy, so to be able to follow this tutorial you should have some basics of Python and the NumPy library.
I am going to follow the official data structure documentation here
Data Structures in Pandas
There are 3 main data structures:
1. Series
The series are the 1D NumPy array under the hood. It consists of a NumPy array coupled with an array of labels.
Create Series:
Python
import pandas as pd
ser = pd.Series(data, index=idx)
data can be ndarray, a Python dictionary, or a scalar value
End of explanation
"""
currDict = {'US' : 'dollar', 'UK' : 'pound',
'Germany': 'euro', 'Mexico':'peso',
'Nigeria':'naira',
'China':'yuan', 'Japan':'yen'}
currSeries=pd.Series(currDict)
currSeries
"""
Explanation: Using Python Dictionary:
If we want to use a Python dictionary, the usage is slightly different: the dictionary's keys become the index labels of the resulting Series.
End of explanation
"""
stockPrices = {'GOOG':1180.97,'FB':62.57,
'TWTR': 64.50, 'AMZN':358.69,
'AAPL':500.6}
stockPriceSeries=pd.Series(stockPrices,
index=['GOOG','FB','YHOO',
'TWTR','AMZN','AAPL'],
name='stockPrices')
stockPriceSeries
"""
Explanation: The index of a pandas Series structure is of type pandas.core.index.Index and can be viewed as an ordered multiset.
In the following case, we specify an index, but the index contains one entry that isn't a key in the corresponding dict. The result is that the value for the key is assigned as NaN, indicating that it is missing. We will deal with handling missing values in a later section.
End of explanation
"""
stockPriceSeries
stockPriceSeries['GOOG']=1200.0
stockPriceSeries
"""
Explanation: The behavior of Series is very similar to that of numpy arrays discussed in a previous section, with one caveat being that an operation such as slicing also slices the index.
Values can be set and accessed using the index label in a dictionary-like manner:
End of explanation
"""
stockPriceSeries.get('MSFT',np.NaN)
"""
Explanation: Just as in the case of dict, KeyError is raised if you try to retrieve a missing label.
We can avoid this error by explicitly using get as follows:
End of explanation
"""
stockPriceSeries[:4]
"""
Explanation: In this case, the default value of np.NaN is specified as the value to return when the key does not exist in the Series structure.
The slice operation behaves the same way as a NumPy array:
End of explanation
"""
stockPriceSeries[stockPriceSeries > 100]
"""
Explanation: Logical slicing also works as follows:
End of explanation
"""
np.mean(stockPriceSeries)
np.std(stockPriceSeries)
ser
ser*ser
np.sqrt(ser)
ser[1:]
"""
Explanation: Arithmetic and statistical operations can be applied, just as with a NumPy array:
End of explanation
"""
# Dictionary created with pandas.Series
stockSummaries={
'AMZN': pd.Series([346.15,0.59,459,0.52,589.8,158.88],
index=['Closing price','EPS','Shares Outstanding(M)',
'Beta', 'P/E','Market Cap(B)']),
'GOOG': pd.Series([1133.43,36.05,335.83,0.87,31.44,380.64],
index=['Closing price','EPS','Shares Outstanding(M)',
'Beta','P/E','Market Cap(B)']),
'FB': pd.Series([61.48,0.59,2450,104.93,150.92],
index=['Closing price','EPS','Shares Outstanding(M)',
'P/E', 'Market Cap(B)']),
'YHOO': pd.Series([34.90,1.27,1010,27.48,0.66,35.36],
index=['Closing price','EPS','Shares Outstanding(M)',
'P/E','Beta', 'Market Cap(B)']),
'TWTR':pd.Series([65.25,-0.3,555.2,36.23],
index=['Closing price','EPS','Shares Outstanding(M)',
'Market Cap(B)']),
'AAPL':pd.Series([501.53,40.32,892.45,12.44,447.59,0.84],
index=['Closing price','EPS','Shares Outstanding(M)','P/E',
'Market Cap(B)','Beta'])}
stockDF=pd.DataFrame(stockSummaries)
stockDF
# We can also change the order...
stockDF=pd.DataFrame(stockSummaries,
index=['Closing price','EPS',
'Shares Outstanding(M)',
'P/E', 'Market Cap(B)','Beta'])
stockDF
"""
Explanation: 2. Data Frame
DataFrame is an 2-dimensional labeled array. Its column types can be heterogeneous: that is, of varying types. It is similar to structured arrays in NumPy with mutability added. It has the following properties:
Conceptually analogous to a table or spreadsheet of data.
Similar to a NumPy ndarray but not a subclass of np.ndarray.
Columns can be of heterogeneous types: float64, int, bool, and so on.
A DataFrame column is a Series structure.
It can be thought of as a dictionary of Series structures where both the columns and the rows are indexed, denoted as 'index' in the case of rows and 'columns' in the case of columns.
It is size mutable: columns can be inserted and deleted.
DataFrame is the most commonly used data structure in pandas. The constructor accepts many different types of arguments:
- Dictionary of 1D ndarrays, lists, dictionaries, or Series structures
- 2D NumPy array
- Structured or record ndarray
- Series structures
- Another DataFrame structure
Using Dictionaries of Series:
End of explanation
"""
stockDF.index
stockDF.columns
"""
Explanation: The row index labels and column labels can be accessed via the index and column attributes:
End of explanation
"""
algos={'search':['DFS','BFS','Binary Search','Linear','ShortestPath (Djikstra)'],
'sorting': ['Quicksort','Mergesort', 'Heapsort','Bubble Sort', 'Insertion Sort'],
'machine learning':['RandomForest', 'K Nearest Neighbor', 'Logistic Regression', 'K-Means Clustering', 'Linear Regression']}
algoDF=pd.DataFrame(algos)
algoDF
# Or we can index by specifying the index when creating data frame
pd.DataFrame(algos,index=['algo_1','algo_2','algo_3','algo_4','algo_5'])
"""
Explanation: Using ndarrays/lists:
When we create a DataFrame from lists, the keys become the column labels in the DataFrame structure and the data in the list becomes the column values. Note how the row label indexes are generated using np.arange(n).
End of explanation
"""
memberData = np.zeros((4,), dtype=[('Name','a15'), ('Age','i4'), ('Weight','f4')])
memberData[:] = [('Sanjeev',37,162.4), ('Yingluck',45,137.8), ('Emeka',28,153.2), ('Amy',67,101.3)]
memberDF=pd.DataFrame(memberData)
memberDF
pd.DataFrame(memberData, index=['a','b','c','d'])
"""
Explanation: Using Structured Array:
A structured array is an array of records or structs; for more info, check the structured arrays documentation:
End of explanation
"""
currSeries.name='currency'
pd.DataFrame(currSeries)
"""
Explanation: Using a Series Structure:
End of explanation
"""
# Specific column can be reached as Series
memberDF['Name']
# A new column can be added via assignment
memberDF['Height']=[60, 70,80, 90]
memberDF
# column can be deleted using del
del memberDF['Height']
memberDF
"""
Explanation: There are also alternative constructors for DataFrame; they can be summarized as follows:
DataFrame.from_dict: It takes a dictionary of dictionaries or sequences and returns DataFrame.
DataFrame.from_records: It takes a list of tuples or structured ndarray.
DataFrame.from_items: It takes a sequence of (key, value) pairs. The keys are the column or index names, and the values are the column or row values. If you wish the keys to be row index names, you must specify orient='index' as a parameter and specify the column names.
pandas.io.parsers.read_csv: This is a helper function that reads a CSV file into a pandas DataFrame structure.
pandas.io.parsers.read_table: This is a helper function that reads a delimited file into a pandas DataFrame structure.
pandas.io.parsers.read_fwf: This is a helper function that reads a table of fixed-width lines into a pandas DataFrame structure.
I will point out some of the operations most commonly used with data frames.
End of explanation
"""
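A quick sketch of two of the alternative constructors listed above, using toy data unrelated to the member example (the variable names here are illustrative):

```python
import pandas as pd

# from a dict: keys become column labels
fromDictDF = pd.DataFrame.from_dict({'a': [1, 2], 'b': [3, 4]})

# from a list of record tuples, with explicit column names
fromRecsDF = pd.DataFrame.from_records([(1, 'x'), (2, 'y')],
                                       columns=['num', 'label'])
```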
memberDF.insert(2,'isSenior',memberDF['Age']>60)
memberDF
"""
Explanation: Basically, a DataFrame structure can be treated as if it were a dictionary of Series objects. Columns get inserted at the end; to insert a column at a specific location, you can use the insert function:
End of explanation
"""
ore1DF=pd.DataFrame(np.array([[20,35,25,20],[11,28,32,29]]),columns=['iron','magnesium','copper','silver'])
ore2DF=pd.DataFrame(np.array([[14,34,26,26],[33,19,25,23]]),columns=['iron','magnesium','gold','silver'])
ore1DF+ore2DF
ore1DF + pd.Series([25,25,25,25], index=['iron','magnesium', 'copper','silver'])
"""
Explanation: DataFrame objects align in a manner similar to Series objects, except that they align on both column and index labels. The resulting object is the union of the column and row labels:
End of explanation
"""
np.sqrt(ore1DF)
"""
Explanation: Mathematical operators can be applied element wise on DataFrame structures:
End of explanation
"""
stockData=np.array([[[63.03,61.48,75],
[62.05,62.75,46],
[62.74,62.19,53]],
[[411.90, 404.38, 2.9],
[405.45, 405.91, 2.6],
[403.15, 404.42, 2.4]]])
stockData
stockHistoricalPrices = pd.Panel(stockData, items=['FB', 'NFLX'],
major_axis=pd.date_range('2/3/2014', periods=3),
minor_axis=['open price', 'closing price', 'volume'])
stockHistoricalPrices
"""
Explanation: 3. Panel
Panel is a 3D array. It is not as widely used as Series or DataFrame. It is not as easily displayed on screen or visualized as the other two because of its 3D nature.
The three axis names are as follows:
items: This is axis 0. Each item corresponds to a DataFrame structure.
major_axis: This is axis 1. Each item corresponds to the rows of the DataFrame structure.
minor_axis: This is axis 2. Each item corresponds to the columns of each DataFrame structure.
Using 3D Numpy array with axis labels:
End of explanation
"""
USData=pd.DataFrame(np.array([[249.62 , 8900], [ 282.16,12680], [309.35,14940]]),
columns=['Population(M)','GDP($B)'],
index=[1990,2000,2010])
USData
ChinaData=pd.DataFrame(np.array([[1133.68, 390.28], [ 1266.83,1198.48], [1339.72, 6988.47]]),
columns=['Population(M)','GDP($B)'],
index=[1990,2000,2010])
ChinaData
US_ChinaData={'US' : USData,
'China': ChinaData}
pd.Panel(US_ChinaData)
"""
Explanation: As I mentioned earlier, visualizing 3D data is not easy, so we see a small summary instead.
Using Python Dictionary of Data Frame Objects:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/d4c795380277f09ea21841616baceb71/plot_dics_source_power.ipynb | bsd-3-clause | # Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Roman Goj <roman.goj@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
"""
Explanation: Compute source power using DICS beamformer
Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ filter from
single-trial activity to estimate source power across a frequency band.
References
.. [1] Gross et al. Dynamic imaging of coherent sources: Studying neural
interactions in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699
End of explanation
"""
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Set picks
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# Read epochs
event_id, tmin, tmax = 1, -0.2, 0.5
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, mag=4e-12))
evoked = epochs.average()
# Read forward operator
forward = mne.read_forward_solution(fname_fwd)
"""
Explanation: Reading the raw data:
End of explanation
"""
csd = csd_morlet(epochs, tmin=0, tmax=0.5, decim=20,
frequencies=np.linspace(6, 10, 4),
n_cycles=2.5) # short signals, must live with few cycles
# Compute DICS spatial filter and estimate source power.
filters = make_dics(epochs.info, forward, csd, reg=0.5, verbose='error')
print(filters)
stc, freqs = apply_dics_csd(csd, filters)
message = 'DICS source power in the 8-12 Hz frequency band'
brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,
time_label=message)
"""
Explanation: Computing the cross-spectral density matrix at 4 evenly spaced frequencies
from 6 to 10 Hz. We use a decim value of 20 to speed up the computation in
this example at the loss of accuracy.
<div class="alert alert-danger"><h4>Warning</h4><p>The use of several sensor types with the DICS beamformer is
not heavily tested yet. Here we use verbose='error' to
suppress a warning along these lines.</p></div>
End of explanation
"""
|
pauliacomi/pyGAPS | docs/examples/tplot.ipynb | mit | # import isotherms
%run import.ipynb
# import the characterisation module
import pygaps.characterisation as pgc
"""
Explanation: t-plot calculations
Another common characterisation method is the t-plot method. First, make sure the data is imported by running the previous notebook.
End of explanation
"""
isotherm = next(i for i in isotherms_n2_77k if i.material=='MCM-41')
print(isotherm.material)
results = pgc.t_plot(isotherm, verbose=True)
"""
Explanation: Besides an isotherm, this method requires a so-called thickness function, which
empirically describes the thickness of adsorbate layers on a non-porous surface
as a function of pressure. It can be specified by the user, otherwise the
"Harkins and Jura" thickness model is used by default. When the function is
called without any other parameters, the framework will attempt to find plateaus
in the data and automatically fit them with a straight line.
Let's look again at our MCM-41 pore-controlled glass.
End of explanation
"""
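The "Harkins and Jura" thickness model mentioned above has a standard closed form; this is a minimal sketch using the commonly quoted coefficients (whether pyGAPS uses exactly these values internally is an assumption, so treat this as an illustration of the idea, not the library's implementation).

```python
import numpy as np

def harkins_jura_thickness(p_rel):
    """Statistical adsorbed-layer thickness t(p/p0) in nm, Harkins-Jura form."""
    # t [nm] = 0.1 * sqrt(13.99 / (0.034 - log10(p/p0)))
    return 0.1 * np.sqrt(13.99 / (0.034 - np.log10(p_rel)))
```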
print(isotherm.material)
results = pgc.t_plot(
isotherm,
thickness_model='Harkins/Jura',
t_limits=(0.3,0.44),
verbose=True
)
"""
Explanation: The first line can be attributed to adsorption on the inner pore surface, while
the second one is adsorption on the external surface after pore filling. Two
values are calculated for each section detected: the adsorbed volume and the
specific area. In this case, the area of the first linear region corresponds to
the pore area. Compare the specific surface area of 340 $m^2$ obtained here with
the 360 $m^2$ obtained previously through the BET method.
In the second region, the adsorbed volume corresponds to the total pore volume
and the area is the external surface area of the sample.
We can get a better result for the surface area by attempting to have the first
linear region at a zero intercept.
End of explanation
"""
results = []
for isotherm in isotherms_n2_77k:
results.append((isotherm.material, pgc.t_plot(isotherm, 'Harkins/Jura')))
[(x, f"{y['results'][0].get('area'):.2f}") for (x,y) in results]
"""
Explanation: A near perfect match with the BET method. Of course, the method is only this
accurate in certain cases, see more info in the
documentation of the module. Let's
do the calculations for all the nitrogen isotherms, using the same assumption
that the first linear region is a good indicator of surface area.
End of explanation
"""
def carbon_model(relative_p):
return 0.88*(relative_p**2) + 6.45*relative_p + 2.98
isotherm = next(i for i in isotherms_n2_77k if i.material=='Takeda 5A')
print(isotherm.material)
results = pgc.t_plot(isotherm, thickness_model=carbon_model, verbose=True)
"""
Explanation: We can see that, while we get reasonable values for the silica samples, all the
rest are quite different. This is due to a number of factors depending on the
material: adsorbate-adsorbent interactions having an effect on the thickness of
the layer or simply having a different adsorption mechanism. The t-plot requires
careful thought to assign meaning to the calculated values.
Since no thickness model can be universal, the framework allows for the
thickness model to be substituted with an user-provided function which will be
used for the thickness calculation, or even another isotherm, which will be
converted into a thickness model.
For example, using a carbon black type model:
End of explanation
"""
|
saga-survey/saga-code | ipython_notebooks/FLAGS experiments with remove list.ipynb | gpl-2.0 | data_dir = '../local_data/'
"""
Explanation: The actual catalogs were downloaded using the download_host_sqlfile.py file from https://github.com/saga-survey/marla to the data directory below
End of explanation
"""
webbrowser.open(targeting._DEFAULT_TREM_URL.replace('/export?format=csv&', '#'))
"""
Explanation: run this to look at the rem list in the browser
End of explanation
"""
fnremlist = 'TargetRemove{}.csv'.format(datetime.datetime.now().strftime('%b%d_%Y'))
if not os.path.exists(fnremlist):
urllib.request.urlretrieve(targeting._DEFAULT_TREM_URL, fnremlist)
fnremlist
"""
Explanation: Now save today's version
End of explanation
"""
remlist = table.Table.read(fnremlist, format='csv',data_start=3, header_start=1)
remlist.remove_column('col6') # google made a blank column for some reason?
remlist.add_column(remlist.ColumnClass(name='row_index', data=np.arange(len(remlist))), index=0)
remlist = remlist[~remlist['SDSS ID'].mask]
remlist.show_in_notebook(display_length=10)
"""
Explanation: Load the remove list as an Astropy table, and add a "row_index" to make it easier to look at in the notebook
End of explanation
"""
#for remlist comparison
tojoin = {'OBJID': [], 'FLAGS': []}
# for "all" stats
allcount = 0
all_flag_counts = {(i+1):0 for i in range(64)}
flag_masks = {(i+1):np.array(1 << i) for i in range(64)}
# have to loop over NSAIDs to efficiently load the catalogs one at a time
remlist_ids = remlist['SDSS ID']
unsaids = np.unique(remlist['NSA'])
for i, nsanum in enumerate(unsaids):
print('On NSA#', nsanum, 'which is', i+1, 'of', len(unsaids))
sys.stdout.flush()
fn = os.path.join(data_dir, 'sql_nsa{}.fits.gz'.format(nsanum))
try:
tab = table.Table.read(fn)
        # find just the remove list objects' properties
tab_matched = tab[np.in1d(tab['OBJID'], remlist_ids)]
for key, lst in tojoin.items():
lst.extend(tab_matched[key])
        #extract FLAGS statistics for *all* objects
for i, mask in flag_masks.items():
all_flag_counts[i] += np.sum((tab['FLAGS'].astype('uint64')&mask)!=0)
allcount += len(tab)
except FileNotFoundError:
print('Did not find', fn, 'so skipping it')
"""
Explanation: Load the catalogs of all NSA objects in the remove list and extract the flags for any of the targets that are actually in the remove list. Also get flag-related statistics for everything
End of explanation
"""
if 'OBJID' in tojoin:
tojoin['SDSS ID'] = tojoin.pop('OBJID') # rename for join
remwflags = table.join(remlist, table.Table(tojoin), join_type='left')
remwflags.show_in_notebook(display_length=10)
"""
Explanation: Now combine the remove list table with the set of FLAGS we got above
End of explanation
"""
flags = remwflags['FLAGS']
flags = flags[~flags.mask].view(np.ndarray).astype('uint64')
bitcounts = {}
for i in range(64):
bitmask = np.array(1 << i, dtype='uint64')
bitcounts[i+1] = np.sum((flags & bitmask) != 0)
bitcounttable = table.Table(data=[list(bitcounts.values()), list(bitcounts.keys())], names=['count', 'bit'])
"""
Explanation: Convert FLAGS value to how many objects have specific bits set
Note that "bit #1" is the 0th bit (in python indexing)
End of explanation
"""
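The 1-indexed bit convention used in the counting loop above can be illustrated on a toy FLAGS value (names here are illustrative only):

```python
# toy value: 0b1010 = 10 has "bit #2" (value 2) and "bit #4" (value 8) set
# under the 1-indexed convention used in bitcounts above
toy_flags = 0b1010
set_bits = [i + 1 for i in range(64) if (toy_flags >> i) & 1]
```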
# this SQL query gets the meaning of the flags
qry = urllib.parse.urlencode({'cmd': 'SELECT * FROM PhotoFlags', 'format': 'csv'})
url = 'http://skyserver.sdss.org/dr12/en/tools/search/x_sql.aspx?' + qry
csv = urllib.request.urlopen(url).read()
names = []
descrs = []
bits = []
# now we hand-parse it to get the bit field right
rows = csv.split(b'\n')
for row in rows[2:]: #skip the "Table1" and the header
if row.strip() == b'':
continue
spl = row.split(b',')
name, val = spl[:2]
descr = b','.join(spl[2:])
names.append(name)
descrs.append(descr)
intval = int(val, 0)
for i in range(65):
if (intval >> i)==0:
bits.append(i)
break
else:
raise ValueError('value {} doesnt seem to be valid'.format(intval))
bittable = table.Table(data=[bits, names, descrs], names=['bit', 'name', 'description'])
bittable = table.join(bittable, bitcounttable)
"""
Explanation: Now get the description of each bit from SDSS SQL and combine that with the Counts from above
End of explanation
"""
bittable['frac_rem'] = bittable['count']/len(remwflags)
bittable['frac_all'] = np.array([all_flag_counts[bitnum] for bitnum in bittable['bit']])/allcount
bittable['rem_to_all'] = bittable['frac_rem']/bittable['frac_all']
bittable['rem_to_all'][np.isnan(bittable['rem_to_all'])] = -1
bittable.sort('count')
bittable.reverse()
bittable.show_in_notebook(display_length=10)
"""
Explanation: Now add some comparison statistics
End of explanation
"""
|
the-deep-learners/nyc-ds-academy | notebooks/dense_sentiment_classifier.ipynb | mit | import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding # new!
from keras.callbacks import ModelCheckpoint # new!
import os # new!
from sklearn.metrics import roc_auc_score, roc_curve # new!
import pandas as pd
import matplotlib.pyplot as plt # new!
%matplotlib inline
"""
Explanation: Dense Sentiment Classifier
In this notebook, we build a dense neural net to classify IMDB movie reviews by their sentiment.
Load dependencies
End of explanation
"""
# output directory name:
output_dir = 'model_output/dense'
# training:
epochs = 4
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 5000 # as per Maas et al. (2011); may not be optimal
n_words_to_skip = 50 # ditto
max_review_length = 100
pad_type = trunc_type = 'pre'
# neural network architecture:
n_dense = 64
dropout = 0.5
"""
Explanation: Set hyperparameters
End of explanation
"""
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words, skip_top=n_words_to_skip)
x_train[0:6] # 0 reserved for padding; 1 would be starting character; 2 is unknown; 3 is most common word, etc.
for x in x_train[0:6]:
print(len(x))
y_train[0:6]
len(x_train), len(x_valid)
"""
Explanation: Load data
For a given data set:
the Keras text utilities here quickly preprocess natural language and convert it into an index
the keras.preprocessing.text.Tokenizer class may do everything you need in one line:
tokenize into words or characters
num_words: maximum unique tokens
filter out punctuation
lower case
convert words to an integer index
End of explanation
"""
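Conceptually, the word-to-index mapping described above (most common word gets the lowest index, with low indices reserved) can be sketched in plain Python; this is an illustration of the idea only, not Keras' Tokenizer, which additionally handles punctuation filtering and other options for you.

```python
from collections import Counter

def build_word_index(texts, num_words=None):
    """Map words to integer indices by descending frequency; 0 reserved for padding."""
    counts = Counter(w for t in texts for w in t.lower().split())
    ranked = [w for w, _ in counts.most_common(num_words)]
    return {w: i + 1 for i, w in enumerate(ranked)}
```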
word_index = keras.datasets.imdb.get_word_index()
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["PAD"] = 0
word_index["START"] = 1
word_index["UNK"] = 2
word_index
index_word = {v:k for k,v in word_index.items()}
x_train[0]
' '.join(index_word[id] for id in x_train[0])
(all_x_train,_),(all_x_valid,_) = imdb.load_data()
' '.join(index_word[id] for id in all_x_train[0])
"""
Explanation: Restoring words from index
End of explanation
"""
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_train[0:6]
for x in x_train[0:6]:
print(len(x))
' '.join(index_word[id] for id in x_train[0])
' '.join(index_word[id] for id in x_train[5])
"""
Explanation: Preprocess data
End of explanation
"""
# CODE HERE
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(n_dense, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary() # so many parameters!
# embedding layer dimensions and parameters:
n_dim, n_unique_words, n_dim*n_unique_words
# ...flatten:
max_review_length, n_dim, n_dim*max_review_length
# ...dense:
n_dense, n_dim*max_review_length*n_dense + n_dense # weights + biases
# ...and output:
n_dense + 1
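The parameter counts inspected above follow directly from the layer shapes. With assumed values n_unique_words=5000 and n_dim=64 (these two hyperparameters are set elsewhere in the notebook; the numbers here are illustrative), the arithmetic is:

```python
# Assumed hyperparameters: 5000 and 64 are illustrative stand-ins for the
# notebook's n_unique_words and n_dim, which are defined in an earlier cell.
n_unique_words, n_dim = 5000, 64
max_review_length, n_dense = 100, 64          # as set in this section

embedding_params = n_unique_words * n_dim                     # one n_dim vector per token id
dense_params = n_dim * max_review_length * n_dense + n_dense  # weights + biases
output_params = n_dense + 1                                   # weights + one bias

print(embedding_params, dense_params, output_params)          # 320000 409664 65
```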
"""
Explanation: Design neural network architecture
End of explanation
"""
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
"""
Explanation: Configure model
End of explanation
"""
# 84.7% validation accuracy in epoch 2
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
"""
Explanation: Train!
End of explanation
"""
model.load_weights(output_dir+"/weights.01.hdf5") # zero-indexed
y_hat = model.predict_proba(x_valid)
len(y_hat)
y_hat[0]
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
pct_auc = roc_auc_score(y_valid, y_hat)*100.0
"{:0.2f}".format(pct_auc)
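roc_auc_score here comes from scikit-learn. As a sanity check, the same statistic can be computed by hand as the probability that a randomly chosen positive outscores a randomly chosen negative (a toy O(n*m) implementation, fine only for small arrays):

```python
def roc_auc(y_true, y_score):
    """AUC as the rank statistic: P(score_pos > score_neg), with ties counting half."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```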
float_y_hat = []
for y in y_hat:
float_y_hat.append(y[0])
ydf = pd.DataFrame(list(zip(float_y_hat, y_valid)), columns=['y_hat', 'y'])
ydf.head(10)
' '.join(index_word[id] for id in all_x_valid[0])
' '.join(index_word[id] for id in all_x_valid[6])
ydf[(ydf.y == 0) & (ydf.y_hat > 0.9)].head(10)
' '.join(index_word[id] for id in all_x_valid[489])
ydf[(ydf.y == 1) & (ydf.y_hat < 0.1)].head(10)
' '.join(index_word[id] for id in all_x_valid[927])
"""
Explanation: Evaluate
End of explanation
"""
import astropy.io.fits as pyfits
import numpy as np
import astropy.visualization as viz
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
targdir = 'a1835_xmm/'
imagefile = targdir+'P0098010101M2U009IMAGE_3000.FTZ'
expmapfile = targdir+'P0098010101M2U009EXPMAP3000.FTZ'
bkgmapfile = targdir+'P0098010101M2X000BKGMAP3000.FTZ'
!du -sch $targdir/*
"""
Explanation: Summarizing Images
Images are high dimensional objects: our XMM image contains 648*648 = 419,904 data points (the pixel values).
Visualizing the data is an extremely important first step: the next is summarizing, which can be thought of as dimensionality reduction.
Let's dust off some standard statistics and put them to good use in summarizing this X-ray image.
End of explanation
"""
imfits = pyfits.open(imagefile)
im = imfits[0].data
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
"""
Explanation: How Many Photons Came From the Cluster?
Let's estimate the total counts due to the cluster.
That means we need to somehow ignore
all the other objects in the field
the diffuse X-ray "background"
Let's start by masking various regions of the image to separate cluster from background.
End of explanation
"""
maskedimage = im.copy()
# First make some coordinate arrays, including polar r from the cluster center:
(ny,nx) = maskedimage.shape
centroid = np.where(maskedimage == np.max(maskedimage))
x = np.linspace(0, nx-1, nx)
y = np.linspace(0, ny-1, ny)
dx, dy = np.meshgrid(x,y)
dx = dx - centroid[1]
dy = dy - centroid[0]
r = np.sqrt(dx*dx + dy*dy)
# Now select an outer annulus, for the background and an inner circle, for the cluster:
background = maskedimage.copy()
background[r < 100] = -3
background[r > 150] = -3
signal = maskedimage.copy()
signal[r > 100] = 0.0
plt.imshow(viz.scale_image(background, scale='log', max_cut=40), cmap='gray', origin='lower')
"""
Explanation: Estimating the background
Now let's look at the outer parts of the image, far from the cluster, and estimate the background level there.
End of explanation
"""
meanbackground = np.mean(background[background > -1])
medianbackground = np.median(background[background > -1])
print("Mean background counts per pixel = ", meanbackground)
print("Median background counts per pixel = ", medianbackground)
"""
Explanation: Let's look at the mean and median of the pixels in this image that have non-negative values.
End of explanation
"""
plt.figure(figsize=(10,7))
n, bins, patches = plt.hist(background[background > -1], bins=np.linspace(-3.5,29.5,34))
# plt.yscale('log', nonposy='clip')
plt.xlabel('Background annulus pixel value (counts)')
plt.ylabel('Frequency')
plt.axis([-3.0, 30.0, 0, 40000])
plt.grid(True)
plt.show()
stdevbackground = np.std(background[background > -1])
print("Standard deviation: ", stdevbackground)
"""
Explanation: Exercise:
Why do you think there is a difference? Talk to your neighbor for a minute, and be ready to suggest an answer.
To understand the difference in these two estimates, let's look at a pixel histogram for this annulus.
End of explanation
"""
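The gap between the two estimates is typical of skewed count data: a handful of bright pixels drag the mean up while the median stays put. A toy illustration with the standard library (the pixel values are made up):

```python
import statistics

# 90 empty pixels, 5 faint ones, 5 bright point-source pixels (invented values)
pixels = [0] * 90 + [1] * 5 + [50] * 5
print(statistics.mean(pixels))    # 2.55  -- pulled up by the bright tail
print(statistics.median(pixels))  # 0.0   -- robust to the outliers
```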
plt.imshow(viz.scale_image(signal, scale='log', max_cut=40), cmap='gray', origin='lower')
plt.figure(figsize=(10,7))
n, bins, patches = plt.hist(signal[signal > -1], bins=np.linspace(-3.5,29.5,34), color='red')
plt.yscale('log', nonposy='clip')
plt.xlabel('Signal region pixel value (counts)')
plt.ylabel('Frequency')
plt.axis([-3.0, 30.0, 0, 500000])
plt.grid(True)
plt.show()
"""
Explanation: Exercise:
"The background level in this image is approximately $0.09 \pm 0.66$ counts"
What's wrong with this statement?
Talk to your neighbor for a few minutes, and see if you can come up with a better version.
Estimating the Cluster Counts
Now let's summarize the circular region centered on the cluster.
End of explanation
"""
# Total counts in signal region:
Ntotal = np.sum(signal[signal > -1])
# Background counts: mean in annulus, multiplied by number of pixels in signal region:
N = signal.copy()*0.0
N[signal > -1] = 1.0
Nbackground = np.sum(N)*meanbackground
# Difference is the cluster counts:
Ncluster = Ntotal - Nbackground
print("Counts in signal region: ", Ntotal)
print("Approximate counts due to background: ", Nbackground)
print("Approximate counts due to cluster: ", Ncluster)
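The background-subtraction logic above, total counts in the aperture minus (mean background per pixel times aperture area), in miniature, with made-up pixel lists:

```python
signal_pixels = [3, 5, 4, 6, 2]            # counts inside the source aperture (invented)
background_pixels = [1, 0, 2, 1, 0, 1]     # counts in the background annulus (invented)

mean_bg = sum(background_pixels) / len(background_pixels)
n_total = sum(signal_pixels)
n_bg = mean_bg * len(signal_pixels)        # expected background counts in the aperture
n_source = n_total - n_bg
print(n_total, round(n_bg, 3), round(n_source, 3))  # 20 4.167 15.833
```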
"""
Explanation: Now we can make our estimates:
End of explanation
"""
import pandas as pd
import numpy as np
import json
import string
df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
df = df.sort_values(['SUB_ID'])
df
"""
Explanation: Data Binning
The following script is used to bin the data and check participant statistics.
End of explanation
"""
# saving the file paths
!find /home1/varunk/data/ABIDE1/RawDataBIDs/ -name 'task-rest_bold.json' > scan_params_file.txt
# read the above created file paths:
with open('scan_params_file.txt', 'r') as f:
scan_param_paths = f.read().split('\n')[0:-1]
scan_param_paths
# for json_path in scan_param_paths:
# with open(json_path, 'rt') as fp:
# task_info = json.load(fp)
# # Accessing the contents:
# tr = task_info['RepetitionTime']
# volumes = task_info['NumberofMeasurements']
# xdim_mm, ydim_mm = task_info['PixelSpacing'].split('x')
# zdim_mm = task_info['SpacingBetweenSlices']
# xdim_voxels, ydim_voxels = task_info['AcquisitionMatrix'].split('x')
# zdim_voxels = task_info['NumberOfSlices']
"""
Explanation: Reading scan json files and extracting scan parameters
End of explanation
"""
SITES = np.unique(df.as_matrix(['SITE_ID']).squeeze())
data_frame = pd.DataFrame({
'SITE_NAME': [] ,
'TR': [],
'VOLUMES': [],
'xdim_mm': [],
'ydim_mm': [],
'zdim_mm': [],
'xdim_voxels': [],
'ydim_voxels': [],
'zdim_voxels': [],
'NUM_AUT_DSM_V': [] ,
'NUM_AUT_MALE_DSM_V': [] ,
'NUM_AUT_FEMALE_DSM_V': [],
'NUM_AUT_AGE_lte12_DSM_V' : [],
'NUM_AUT_AGE_12_18_DSM_V' : [],
'NUM_AUT_AGE_18_24_DSM_V': [],
'NUM_AUT_AGE_24_34_DSM_V' :[],
'NUM_AUT_AGE_34_50_DSM_V' : [],
'NUM_AUT_AGE_gt50_DSM_V' : [],
'NUM_AUT_DSM_IV' : [],
'NUM_AUT_MALE_DSM_IV' : [],
'NUM_AUT_FEMALE_DSM_IV' : [],
'NUM_ASP_DSM_IV' : [],
'NUM_ASP_MALE_DSM_IV' : [],
'NUM_ASP_FEMALE_DSM_IV' : [],
'NUM_PDDNOS_DSM_IV' : [],
'NUM_PDDNOS_MALE_DSM_IV' : [],
'NUM_PDDNOS_FEMALE_DSM_IV' : [],
'NUM_ASP_PDDNOS_DSM_IV' : [],
'NUM_ASP_PDDNOS_MALE_DSM_IV' : [],
'NUM_ASP_PDDNOS_FEMALE_DSM_IV' : [],
'NUM_TD' : [],
'NUM_TD_MALE' : [],
'NUM_TD_FEMALE' : [],
'NUM_TD_AGE_lte12' : [],
'NUM_TD_AGE_12_18' : [],
'NUM_TD_AGE_18_24' : [],
'NUM_TD_AGE_24_34' : [],
'NUM_TD_AGE_34_50' : [],
'NUM_TD_AGE_gt50' : []
})
# NUM_AUT =
# df.loc[(df['DSM_IV_TR'] != 0) & (df['DSM_IV_TR'] != 1) & (df['DSM_IV_TR'] != 2) & (df['DSM_IV_TR'] != 3) & (df['DSM_IV_TR'] != 4)]
for SITE in SITES:
NUM_AUT_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_MALE_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_FEMALE_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_lte12_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] <= 12) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_12_18_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 12) & (df['AGE_AT_SCAN'] <= 18) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_18_24_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 18) & (df['AGE_AT_SCAN'] <= 24) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_24_34_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 24) & (df['AGE_AT_SCAN'] <= 34) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_34_50_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 34) & (df['AGE_AT_SCAN'] <= 50) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_gt50_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 50 ) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_DSM_IV = df.loc[(df['DSM_IV_TR'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_MALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 1) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_FEMALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 1) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_DSM_IV = df.loc[(df['DSM_IV_TR'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_MALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 2) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_FEMALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 2) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_PDDNOS_DSM_IV = df.loc[(df['DSM_IV_TR'] == 3) & (df['SITE_ID'] == SITE)].shape[0]
NUM_PDDNOS_MALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 3) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_PDDNOS_FEMALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 3) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_PDDNOS_DSM_IV = df.loc[(df['DSM_IV_TR'] == 4) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_PDDNOS_MALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 4) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_PDDNOS_FEMALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 4) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD = df.loc[(df['DX_GROUP'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_MALE = df.loc[(df['DX_GROUP'] == 2) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_FEMALE = df.loc[(df['DX_GROUP'] == 2) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_lte12 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] <= 12) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_12_18 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 12) & (df['AGE_AT_SCAN'] <= 18) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_18_24 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 18) & (df['AGE_AT_SCAN'] <= 24) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_24_34 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 24) & (df['AGE_AT_SCAN'] <= 34) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_34_50 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 34) & (df['AGE_AT_SCAN'] <= 50) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_gt50 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 50 ) & (df['SITE_ID'] == SITE)].shape[0]
tr = 0
volumes = 0
xdim_mm = 0
ydim_mm = 0
zdim_mm = 0
xdim_voxels = 0
ydim_voxels = 0
zdim_voxels = 0
# Accessing scan details
for json_path in scan_param_paths:
extracted_site = json_path.split('/')[-2]
if (SITE).lower() in (extracted_site).lower():
with open(json_path, 'rt') as fp:
print('Site matched with ',json_path)
task_info = json.load(fp)
# Accessing the contents:
tr = task_info['RepetitionTime']
volumes = task_info['NumberofMeasurements']
xdim_mm, ydim_mm = task_info['PixelSpacing'].split('x')
zdim_mm = task_info['SpacingBetweenSlices']
xdim_voxels, ydim_voxels = task_info['AcquisitionMatrix'].split('x')
zdim_voxels = task_info['NumberOfSlices']
_df = pd.DataFrame({
'SITE_NAME': SITE ,
'TR': tr ,
'VOLUMES': volumes,
'xdim_mm':xdim_mm,
'ydim_mm':ydim_mm,
'zdim_mm':zdim_mm,
'xdim_voxels':xdim_voxels,
'ydim_voxels':ydim_voxels,
'zdim_voxels':zdim_voxels,
'NUM_AUT_DSM_V': NUM_AUT_DSM_V ,
'NUM_AUT_MALE_DSM_V': NUM_AUT_MALE_DSM_V ,
'NUM_AUT_FEMALE_DSM_V': NUM_AUT_FEMALE_DSM_V,
'NUM_AUT_AGE_lte12_DSM_V' : NUM_AUT_AGE_lte12_DSM_V,
'NUM_AUT_AGE_12_18_DSM_V' : NUM_AUT_AGE_12_18_DSM_V,
'NUM_AUT_AGE_18_24_DSM_V': NUM_AUT_AGE_18_24_DSM_V,
'NUM_AUT_AGE_24_34_DSM_V' :NUM_AUT_AGE_24_34_DSM_V,
'NUM_AUT_AGE_34_50_DSM_V' : NUM_AUT_AGE_34_50_DSM_V,
'NUM_AUT_AGE_gt50_DSM_V' : NUM_AUT_AGE_gt50_DSM_V,
'NUM_AUT_DSM_IV' : NUM_AUT_DSM_IV,
'NUM_AUT_MALE_DSM_IV' : NUM_AUT_MALE_DSM_IV,
'NUM_AUT_FEMALE_DSM_IV' : NUM_AUT_FEMALE_DSM_IV,
'NUM_ASP_DSM_IV' : NUM_ASP_DSM_IV,
'NUM_ASP_MALE_DSM_IV' : NUM_ASP_MALE_DSM_IV,
'NUM_ASP_FEMALE_DSM_IV' : NUM_ASP_FEMALE_DSM_IV,
'NUM_PDDNOS_DSM_IV' : NUM_PDDNOS_DSM_IV,
'NUM_PDDNOS_MALE_DSM_IV' : NUM_PDDNOS_MALE_DSM_IV,
'NUM_PDDNOS_FEMALE_DSM_IV' : NUM_PDDNOS_FEMALE_DSM_IV,
'NUM_ASP_PDDNOS_DSM_IV' : NUM_ASP_PDDNOS_DSM_IV,
'NUM_ASP_PDDNOS_MALE_DSM_IV' : NUM_ASP_PDDNOS_MALE_DSM_IV,
'NUM_ASP_PDDNOS_FEMALE_DSM_IV' : NUM_ASP_PDDNOS_FEMALE_DSM_IV,
'NUM_TD' : NUM_TD,
'NUM_TD_MALE' : NUM_TD_MALE,
'NUM_TD_FEMALE' : NUM_TD_FEMALE,
'NUM_TD_AGE_lte12' : NUM_TD_AGE_lte12,
'NUM_TD_AGE_12_18' : NUM_TD_AGE_12_18,
'NUM_TD_AGE_18_24' : NUM_TD_AGE_18_24,
'NUM_TD_AGE_24_34' : NUM_TD_AGE_24_34,
'NUM_TD_AGE_34_50' : NUM_TD_AGE_34_50,
'NUM_TD_AGE_gt50' : NUM_TD_AGE_gt50
},index=[0],columns = [ 'SITE_NAME',
'TR',
'VOLUMES',
'xdim_mm',
'ydim_mm',
'zdim_mm',
'xdim_voxels',
'ydim_voxels',
'zdim_voxels',
'NUM_AUT_DSM_V',
'NUM_AUT_MALE_DSM_V',
'NUM_AUT_FEMALE_DSM_V',
'NUM_AUT_AGE_lte12_DSM_V',
'NUM_AUT_AGE_12_18_DSM_V',
'NUM_AUT_AGE_18_24_DSM_V',
'NUM_AUT_AGE_24_34_DSM_V',
'NUM_AUT_AGE_34_50_DSM_V',
'NUM_AUT_AGE_gt50_DSM_V',
'NUM_AUT_DSM_IV',
'NUM_AUT_MALE_DSM_IV',
'NUM_AUT_FEMALE_DSM_IV',
'NUM_ASP_DSM_IV',
'NUM_ASP_MALE_DSM_IV',
'NUM_ASP_FEMALE_DSM_IV',
'NUM_PDDNOS_DSM_IV',
'NUM_PDDNOS_MALE_DSM_IV',
'NUM_PDDNOS_FEMALE_DSM_IV',
'NUM_ASP_PDDNOS_DSM_IV',
'NUM_ASP_PDDNOS_MALE_DSM_IV',
'NUM_ASP_PDDNOS_FEMALE_DSM_IV',
'NUM_TD',
'NUM_TD_MALE',
'NUM_TD_FEMALE',
'NUM_TD_AGE_lte12',
'NUM_TD_AGE_12_18',
'NUM_TD_AGE_18_24',
'NUM_TD_AGE_24_34',
'NUM_TD_AGE_34_50',
'NUM_TD_AGE_gt50'])
data_frame = data_frame.append(_df, ignore_index=True)[_df.columns.tolist()]
# df = pd.DataFrame(raw_data, columns = [])
# Sanity Check
# NUM_AUT_DSM_V.shape[0] + NUM_TD.shape[0]
# df.loc[(df['DSM_IV_TR'] == 0)].shape[0] + NUM_AUT_DSM_V.shape[0] # Not exhaustive
# 'MAX_MUN'.lower() in '/home1/varunk/data/ABIDE1/RawDataBIDs/MaxMun_a/task-rest_bold.json'.lower()
_df
data_frame
# Save the csv file
data_frame.to_csv('demographics.csv')
"""
Explanation: Convention:
DX_GROUP : 1=Autism, 2= Control
DSM_IV_TR : 0=TD,1=Autism,2=Asperger's, 3= PDD-NOS, 4=Asperger's or PDD-NOS
SEX : 1=Male, 2=Female
End of explanation
"""
# df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
# df = df.sort_values(['SUB_ID'])
# df_td_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
# df_td_lt18_m_eyesopen;
# df_td_lt18_m_eyesclosed = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 2)]
# df_td_lt18_m_eyesclosed;
# df_td_lt18_m_eyesopen;
# df_td_lt18_m_eyesclosed;
# Reading TR values
tr_path = '/home1/varunk/results_again_again/ABIDE1_Preprocess_Datasink/tr_paths/tr_list.npy'
tr = np.load(tr_path)
np.unique(tr)
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
# plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
np.unique(tr)
df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
df = df.sort_values(['SUB_ID'])
df_td_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_td_lt18_m_eyesopen;
df_td_lt18_m_eyesclosed = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 2)]
df_td_lt18_m_eyesclosed;
df_aut_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_aut_lt18_m_eyesopen;
df_aut_lt18_m_eyesclosed = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1) & (df['EYE_STATUS_AT_SCAN'] == 2)]
df_aut_lt18_m_eyesclosed;
df_td_lt18_m_eyesopen_sub_id = df_td_lt18_m_eyesopen.as_matrix(['SUB_ID']).squeeze()
df_td_lt18_m_eyesclosed_sub_id = df_td_lt18_m_eyesclosed.as_matrix(['SUB_ID']).squeeze()
df_aut_lt18_m_eyesopen_sub_id = df_aut_lt18_m_eyesopen.as_matrix(['SUB_ID']).squeeze()
df_aut_lt18_m_eyesclosed_sub_id = df_aut_lt18_m_eyesclosed.as_matrix(['SUB_ID']).squeeze()
import re
sub_id = []
atlas_paths = np.load('/home1/varunk/results_again_again/ABIDE1_Preprocess_Datasink/atlas_paths/atlas_file_list.npy')
for path in atlas_paths:
sub_id_extracted = re.search('.+_subject_id_(\d+)', path).group(1)
sub_id.append(sub_id_extracted)
sub_id = list(map(int, sub_id))
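The subject-id extraction above relies on the `_subject_id_<digits>` pattern in the atlas paths. On a hypothetical path in the same style (the path itself is invented), it behaves like this:

```python
import re

# Hypothetical path in the same style as the atlas file list above.
path = '/results/_subject_id_0050003/atlas.nii.gz'
sub_id = int(re.search(r'_subject_id_(\d+)', path).group(1))
print(sub_id)  # 50003 -- int() drops the leading zeros
```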
# df_sub_id = df.as_matrix(['SUB_ID']).squeeze()
# Number of TD subjects with Age 12 to 18
df_td_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] >=12) &(df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_td_lt18_m_eyesopen.shape
# Number of Autistic subjects with Age 12 to 18
df_aut_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] >=12) &(df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_aut_lt18_m_eyesopen.shape
# tr[np.where(df_sub_id == df_td_lt18_m_eyesopen_sub_id)]
# np.isin(sub_id,df_td_lt18_m_eyesopen_sub_id)
tr1 = tr[np.isin(sub_id, df_aut_lt18_m_eyesopen_sub_id)]
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr1, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
tr2 = tr[np.isin(sub_id, df_td_lt18_m_eyesopen_sub_id)]
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr2, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
tr3 = tr[np.isin(sub_id, df_aut_lt18_m_eyesclosed_sub_id)]
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr3, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
tr4 = tr[np.isin(sub_id, df_td_lt18_m_eyesclosed_sub_id)]
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr4, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
"""
Explanation: Group Stats
The following section checks the stats of participants lying in the following bins:
Autistic (DSM-IV), Males, Age <= 18, Eyes Closed
Autistic (DSM-IV), Males, Age <= 18, Eyes Open
End of explanation
"""
df_td_lt18_m_eyesopen_age = df_td_lt18_m_eyesopen.as_matrix(['AGE_AT_SCAN']).squeeze()
df_td_lt18_m_eyesclosed_age = df_td_lt18_m_eyesclosed.as_matrix(['AGE_AT_SCAN']).squeeze()
df_aut_lt18_m_eyesopen_age = df_aut_lt18_m_eyesopen.as_matrix(['AGE_AT_SCAN']).squeeze()
df_aut_lt18_m_eyesclosed_age = df_aut_lt18_m_eyesclosed.as_matrix(['AGE_AT_SCAN']).squeeze()
bins = np.arange(0,20,1)
# res = plt.hist(df_td_lt18_m_eyesopen_age, rwidth=0.3, align='left')
# res2 = plt.hist(df_aut_lt18_m_eyesopen_age, rwidth=0.3, align='left', bins= bins)
# # plt.xticks([0,0.5,1,1.5,2,2.5,3])
# plt.xlabel('TR')
# plt.ylabel('Number of participants')
# plt.title('Frequency distribution of TRs')
# import random
# import numpy
from matplotlib import pyplot
# x = [random.gauss(3,1) for _ in range(400)]
# y = [random.gauss(4,2) for _ in range(400)]
# bins = numpy.linspace(-10, 10, 100)
pyplot.hist(df_td_lt18_m_eyesopen_age, alpha=0.5,bins=bins, label='TD',rwidth=0.1, align='left')
pyplot.hist(df_aut_lt18_m_eyesopen_age,alpha=0.5, bins=bins, label='AUT',rwidth=0.1,align='right')
pyplot.legend(loc='upper right')
pyplot.xlabel('AGE')
pyplot.show()
pyplot.hist(df_td_lt18_m_eyesclosed_age, alpha=0.5,bins=bins, label='TD',rwidth=0.1, align='left')
pyplot.hist(df_aut_lt18_m_eyesclosed_age,alpha=0.5, bins=bins, label='AUT',rwidth=0.1,align='right')
pyplot.legend(loc='upper right')
pyplot.xlabel('AGE')
pyplot.show()
"""
Explanation: AGE
End of explanation
"""
pyplot.yticks(np.arange(0,20,1))
res = pyplot.boxplot([df_td_lt18_m_eyesopen_age,df_aut_lt18_m_eyesopen_age])
pyplot.yticks(np.arange(0,20,1))
res = pyplot.boxplot([df_td_lt18_m_eyesclosed_age, df_aut_lt18_m_eyesclosed_age])
"""
Explanation: Box Plots:
https://www.wellbeingatschool.org.nz/information-sheet/understanding-and-interpreting-box-plots
End of explanation
"""
eyes_open_age = np.concatenate((df_td_lt18_m_eyesopen_age,df_aut_lt18_m_eyesopen_age))
eyes_closed_age = np.concatenate((df_td_lt18_m_eyesclosed_age,df_aut_lt18_m_eyesclosed_age))
pyplot.yticks(np.arange(0,20,1))
res = pyplot.boxplot([eyes_open_age, eyes_closed_age])
"""
Explanation: Eyes Open vs Closed
End of explanation
"""
from scipy import stats
print(stats.ttest_ind(eyes_open_age,eyes_closed_age, equal_var = False))
print('Mean: ',np.mean(eyes_open_age), np.mean(eyes_closed_age))
print('Std: ',np.std(eyes_open_age), np.std(eyes_closed_age))
"""
Explanation: Stats: Differences in Ages of closed vs open
End of explanation
"""
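Setting equal_var=False selects Welch's t-test, whose statistic is just the mean difference scaled by the unpooled standard error. A toy computation with the standard library (the samples are invented):

```python
import math
import statistics

a = [1.0, 2.0, 3.0, 4.0]                   # toy sample 1
b = [2.0, 4.0, 6.0]                        # toy sample 2
se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
t = (statistics.mean(a) - statistics.mean(b)) / se
print(round(t, 4))  # -1.1339
```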
# stats.ttest_ind(eyes_open_age,eyes_closed_age, equal_var = False)
eyes_open_tr = np.concatenate((tr1,tr2))
eyes_closed_tr = np.concatenate((tr3,tr4))
print(stats.ttest_ind(eyes_open_tr,eyes_closed_tr, equal_var = False))
print('Mean: ',np.mean(eyes_open_tr), np.mean(eyes_closed_tr))
print('Std: ',np.std(eyes_open_tr), np.std(eyes_closed_tr))
"""
Explanation: Result:
Mean Age is significantly different in two groups. That may be the reason for discrepancies in regions.
Stats: Differences in TR of closed vs open
End of explanation
"""
print(stats.ttest_ind(df_aut_lt18_m_eyesopen_age, df_td_lt18_m_eyesopen_age, equal_var = False))
print('Mean: ',np.mean(df_aut_lt18_m_eyesopen_age), np.mean(df_td_lt18_m_eyesopen_age))
print('Std: ',np.std(df_aut_lt18_m_eyesopen_age), np.std(df_td_lt18_m_eyesopen_age))
"""
Explanation: Result:
TRs of two groups are also significantly different
Age differences in AUT vs TD
Eyes Open
End of explanation
"""
print(stats.ttest_ind(df_aut_lt18_m_eyesclosed_age, df_td_lt18_m_eyesclosed_age, equal_var = False))
print('Mean: ',np.mean(df_aut_lt18_m_eyesclosed_age),np.mean(df_td_lt18_m_eyesclosed_age))
print('Std: ',np.std(df_aut_lt18_m_eyesclosed_age),np.std(df_td_lt18_m_eyesclosed_age))
"""
Explanation: Result:
Age difference not significant for eyes open
Eyes Closed
End of explanation
"""
motion_params_npy = '/home1/varunk/results_again_again/ABIDE1_Preprocess_Datasink/motion_params_paths/motion_params_file_list.npy'
mot_params_paths = np.load(motion_params_npy)
in_file = mot_params_paths[0]
trans_x = []
trans_y = []
trans_z = []
rot_x = []
rot_y = []
rot_z = []
# for in_file in mot_params_paths:
with open(in_file) as f:
for line in f:
line = line.split(' ')
print(line)
trans_x.append(float(line[6]))
trans_y.append(float(line[8]))
trans_z.append(float(line[10]))
rot_x.append(float(line[0]))
rot_y.append(float(line[2]))
rot_z.append(float(line[4]))
float('0.0142863')
max(rot_y)
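The parser above indexes into a space-split line at positions 0, 2, 4, ... because the file is double-space separated. Calling split() with no argument collapses runs of whitespace and is less fragile; a sketch on a hypothetical record:

```python
# Hypothetical one-line record: rot_x rot_y rot_z trans_x trans_y trans_z
line = '0.001  -0.002  0.0005   0.10  -0.05   0.02\n'
# split() with no argument handles any run of whitespace, including the newline
rot_x, rot_y, rot_z, trans_x, trans_y, trans_z = map(float, line.split())
print(rot_y, trans_x)  # -0.002 0.1
```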
"""
Explanation: Result:
Age difference not significant for eyes closed
Motion Parameters
https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=fsl;cda6e2ea.1112
Format: rot_x, rot_y, rot_z, trans_x, trans_y, trans_z
End of explanation
"""
# Load demographics file
df_demographics = pd.read_csv('/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/demographics.csv')
# df_demographics
df_demographics_volumes = df_demographics.as_matrix(['SITE_NAME','VOLUMES']).squeeze()
df_demographics_volumes
df_phenotype = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
df_phenotype = df_phenotype.sort_values(['SUB_ID'])
volumes_bins = np.array([[0,150],[151,200],[201,250],[251,300]])
bins_volumes_AUT = []
bins_volumes_TD = []
for counter, _bin in enumerate(volumes_bins):
df_demographics_volumes_selected_bin = df_demographics_volumes[np.where(np.logical_and((df_demographics_volumes[:,1] >= _bin[0]),(df_demographics_volumes[:,1] <= _bin[1])))]
selected_AUT = pd.DataFrame()
selected_TD = pd.DataFrame()
for site in df_demographics_volumes_selected_bin:
print(site[0])
selected_AUT = pd.concat([selected_AUT,df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 1) & (df_phenotype['SITE_ID'] == site[0])]])
        selected_TD = pd.concat([selected_TD, df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 0) & (df_phenotype['SITE_ID'] == site[0])]])
bins_volumes_AUT.append(selected_AUT)
bins_volumes_TD.append(selected_TD)
f = bins_volumes_AUT[0]
# f.loc[[2,3,4,5]]
f
f.iloc[[2,3,4,5,7]]
# num_bins = 4
print('Range ','TD ','AUT ','Ratio TD/AUT')
ratio = np.zeros((len(bins_volumes_AUT)))
for i in range(len(bins_volumes_AUT)):
ratio[i] = bins_volumes_TD[i].shape[0]/bins_volumes_AUT[i].shape[0]
print(volumes_bins[i],bins_volumes_TD[i].shape[0],bins_volumes_AUT[i].shape[0], ratio[i])
min_ratio = np.min(ratio)
min_index = np.argmin(ratio)
new_TD = np.zeros((len(bins_volumes_AUT)))
print('Range ','TD ','AUT ')
for i in range(len(bins_volumes_AUT)):
new_TD[i] = np.ceil(bins_volumes_AUT[i].shape[0] * min_ratio)
print(volumes_bins[i],new_TD[i],bins_volumes_AUT[i].shape[0])
# Now loop over all the bins created and select the specific number of subjects randomly from each TD bin
TD_idx_list = []
selected_df_TD = pd.DataFrame()
for i in range(len(bins_volumes_TD)):
idx = np.arange(len(bins_volumes_TD[i]))
np.random.shuffle(idx)
idx = idx[0:int(new_TD[i])]
TD_idx_list.append(idx)
selected_df_TD = pd.concat([selected_df_TD, bins_volumes_TD[i].iloc[idx]])
selected_df_TD= selected_df_TD.sort_values(['SUB_ID'])
# print(idx)
# Sanity check to see of no subjects are repeated
# subid = selected_df_TD.sort_values(['SUB_ID']).as_matrix(['SUB_ID']).squeeze()
# len(np.unique(subid)) == len(subid)
# Sanity check to see of the number of subjects are same as expected
# len(subid) == (89 + 105 + 109 + 56)
# Sanity check so that no subject index is repeated
# len(np.unique(TD_idx_list[3])) == len(TD_idx_list[3] )
# sanity check to check the new number of TD subjects in each Volumes bin
# len(TD_idx_list[3]) == 56
selected_df_TD
"""
Explanation: Matching based on Volumes
Volume bins
100 - 150
150 - 200
200 - 250
250 - 300
End of explanation
"""
age_bins = np.array([[0,9],[9,12],[12,15],[15,18]])
bins_age_AUT = []
bins_age_TD = []
# for counter, _bin in enumerate(age_bins):
for age in age_bins:
selected_AUT = pd.DataFrame()
selected_TD = pd.DataFrame()
print(age[0], age[1])
selected_AUT = pd.concat([selected_AUT,df_phenotype.loc[(df_phenotype['SEX'] == 1)
& (df_phenotype['DSM_IV_TR'] == 1)
& (df_phenotype['AGE_AT_SCAN'] > age[0])
& (df_phenotype['AGE_AT_SCAN'] <= age[1]) ]])
selected_TD = pd.concat([selected_TD,selected_df_TD.loc[(selected_df_TD['SEX'] == 1)
& (selected_df_TD['DSM_IV_TR'] == 0)
& (selected_df_TD['AGE_AT_SCAN'] > age[0])
& (selected_df_TD['AGE_AT_SCAN'] <= age[1]) ]])
bins_age_AUT.append(selected_AUT)
bins_age_TD.append(selected_TD)
bins_age_TD[0]
# num_bins = 4
print('Original data stats')
print('Age Range ','TD ','AUT ','Ratio TD/AUT')
ratio = np.zeros((len(bins_age_TD)))
for i in range(len(bins_age_TD)):
ratio[i] = bins_age_TD[i].shape[0]/bins_age_AUT[i].shape[0]
print(age_bins[i],bins_age_TD[i].shape[0],bins_age_AUT[i].shape[0], ratio[i])
min_ratio = np.min(ratio)
min_index = np.argmin(ratio)
new_TD = np.zeros((len(bins_age_AUT)))
print('Matched data stats')
print('Age Range ','TD ','AUT ')
for i in range(len(bins_age_AUT)):
new_TD[i] = np.ceil(bins_age_AUT[i].shape[0] * min_ratio)
print(age_bins[i],new_TD[i],bins_age_AUT[i].shape[0])
# Now loop over all the bins created and select the specific number of subjects randomly from each TD bin
TD_idx_list = []
selected_df_TD = pd.DataFrame()
for i in range(len(bins_age_TD)):
idx = np.arange(len(bins_age_TD[i]))
np.random.shuffle(idx)
idx = idx[0:int(new_TD[i])]
TD_idx_list.append(idx)
selected_df_TD = pd.concat([selected_df_TD, bins_age_TD[i].iloc[idx]])
selected_df_TD = selected_df_TD.sort_values(['SUB_ID'])
# print(idx)
selected_df_TD
# selected_df_TD.as_matrix(['SUB_ID']).squeeze()
x = np.arange(10)
np.random.shuffle(x)
x
48 * min_ratio
# selected = selected.loc[(selected['SEX'] == 1) & (selected['DSM_IV_TR'] == 0) & (selected['SITE_ID'] == site[0]) & (selected['EYE_STATUS_AT_SCAN'] == 1)]
selected;
df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 0) & (df_phenotype['SITE_ID'] == 'TRINITY') & (df_phenotype['EYE_STATUS_AT_SCAN'] == 1)]
"""
Explanation: Matching based on Age
Age bins
6 - 9
9 -12
12 - 15
15 - 18
End of explanation
"""
def volumes_matching(volumes_bins, demographics_file_path, phenotype_file_path):
# Load demographics file
# demographics_file_path = '/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/demographics.csv'
# phenotype_file_path = '/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv'
# volumes_bins = np.array([[0,150],[151,200],[201,250],[251,300]])
df_demographics = pd.read_csv(demographics_file_path)
df_demographics_volumes = df_demographics.as_matrix(['SITE_NAME','VOLUMES']).squeeze()
df_phenotype = pd.read_csv(phenotype_file_path)
df_phenotype = df_phenotype.sort_values(['SUB_ID'])
bins_volumes_AUT_data = []
bins_volumes_TD_data = []
for counter, _bin in enumerate(volumes_bins):
df_demographics_volumes_selected_bin = df_demographics_volumes[np.where(np.logical_and((df_demographics_volumes[:,1] >= _bin[0]),(df_demographics_volumes[:,1] <= _bin[1])))]
selected_AUT = pd.DataFrame()
selected_TD = pd.DataFrame()
for site in df_demographics_volumes_selected_bin:
# print(site[0])
selected_AUT = pd.concat([selected_AUT,df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 1) & (df_phenotype['SITE_ID'] == site[0])]])
            selected_TD = pd.concat([selected_TD, df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 0) & (df_phenotype['SITE_ID'] == site[0])]])
bins_volumes_AUT_data.append(selected_AUT)
bins_volumes_TD_data.append(selected_TD)
selected_df_TD = matching(volumes_bins, bins_volumes_TD_data, bins_volumes_AUT_data)
# sub_ids = selected_df_TD.as_matrix(['SUB_ID']).squeeze()
selected_df_TD.to_csv('selected_TD.csv')
return selected_df_TD
def matching(bins, bins_TD_data, bins_AUT_data):
# num_bins = 4
print('Original data stats')
print('Range ','TD ','AUT ','Ratio TD/AUT')
ratio = np.zeros((len(bins_TD_data)))
for i in range(len(bins_TD_data)):
ratio[i] = bins_TD_data[i].shape[0]/bins_AUT_data[i].shape[0]
print(bins[i],bins_TD_data[i].shape[0],bins_AUT_data[i].shape[0], ratio[i])
min_ratio = np.min(ratio)
min_index = np.argmin(ratio)
new_TD = np.zeros((len(bins_TD_data)))
print('Matched data stats')
print('Range ','TD ','AUT ')
for i in range(len(bins_TD_data)):
new_TD[i] = np.ceil(bins_AUT_data[i].shape[0] * min_ratio)
print(bins[i],new_TD[i],bins_AUT_data[i].shape[0])
# Now loop over all the bins created and select the specific number of subjects randomly from each TD bin
TD_idx_list = []
selected_df_TD = pd.DataFrame()
for i in range(len(bins_TD_data)):
idx = np.arange(len(bins_TD_data[i]))
np.random.shuffle(idx)
idx = idx[0:int(new_TD[i])]
TD_idx_list.append(idx)
selected_df_TD = pd.concat([selected_df_TD, bins_TD_data[i].iloc[idx]])
selected_df_TD = selected_df_TD.sort_values(['SUB_ID'])
return selected_df_TD
demographics_file_path = '/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/demographics.csv'
phenotype_file_path = '/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv'
volumes_bins = np.array([[0,150],[151,200],[201,250],[251,300]])
volumes_matching(volumes_bins, demographics_file_path, phenotype_file_path)
"""
Explanation: Create a function to do volumes matching
End of explanation
"""
df_phenotype.loc[(df_phenotype['SITE_ID'] == 'TRINITY')];
df_demographics_volumes_selected_bin
"""
Explanation: Recycle Bin
End of explanation
"""
df_phenotype = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
# df_phenotype = df.as_matrix(['SITE_ID']).squeeze()
df = df_phenotype.sort_values(['SUB_ID'])
df_td_lt18_m_eyesopen_vol_100_150 = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_td_lt18_m_eyesopen_vol_100_150;
np.unique(df_phenotype)
np.mean(eyes_open_tr), np.mean(eyes_closed_tr)
df_td_lt18_m_eyesopen_age
df_td_lt18_m_eyesopen_sub_id
tr[637]
'50003' in X[1]
"""
Explanation: Extract the sub_id where volume lies in a particular bin
End of explanation
"""
import pyspark
sc = pyspark.SparkContext(appName="my_spark_app")
"""
Explanation: What is Apache Spark?
distributed framework
in-memory data structures
data processing
it often improves on Hadoop workloads
Spark enables data scientists to tackle problems with larger data sizes than they could before with tools like R or Pandas
First Steps with Apache Spark Interactive Programming
First of all, check that PySpark is correctly loaded:
In case it is not, you can follow these posts:
Windows (IPython): http://jmdvinodjmd.blogspot.com.es/2015/08/installing-ipython-notebook-with-apache.html
Windows (Jupyter): http://www.ithinkcloud.com/tutorials/tutorial-on-how-to-install-apache-spark-on-windows/
End of explanation
"""
lines = sc.textFile("../data/people.csv")
lines.count()
lines.first()
"""
Explanation: The first thing to note is that with Spark all computation is parallelized by means of distributed data structures that are spread through the cluster. These collections are called Resilient Distributed Datasets (RDDs). We will talk more about RDDs, as they are the main piece in Spark.
As we have successfully loaded the Spark Context, we are ready to do some interactive analysis. We can read a simple file:
End of explanation
"""
lines = sc.textFile("../data/people.csv")
filtered_lines = lines.filter(lambda line: "individuum" in line)
filtered_lines.first()
"""
Explanation: This is a very simple first example, where we create an RDD (the variable lines) and then apply some operations (count and first) in a parallel manner. Note that, as we are running all our examples on a single computer, no real parallelization takes place.
In the next section we will cover the core Spark concepts that allow Spark users to do parallel computation.
Core Spark Concepts
We will talk about Spark applications that are in charge of loading data and applying some distributed computation over it. Every application has a driver program that launches parallel operations to the cluster. In the case of interactive programming, the driver program is the shell (or Notebook) itself.
The "access point" to Spark from the driver program is the Spark Context object.
Once we have a Spark Context, we can use it to build RDDs. In the previous examples we used sc.textFile() to create an RDD representing the lines of a text file. Then we ran different operations over the lines RDD.
To run these operations over RDDs, the driver program manages different nodes called executors. For example, the count operation can be run in parallel on different ranges of the file.
Spark's API allows passing functions to its operators to run them on the cluster. For example, we could extend our example by filtering the lines in the file that contain a word, such as individuum.
End of explanation
"""
# loading an external dataset
lines = sc.textFile("../data/people.csv")
print(type(lines))
# applying a transformation to an existing RDD
filtered_lines = lines.filter(lambda line: "individuum" in line)
print(type(filtered_lines))
"""
Explanation: RDD Basics
An RDD can be defined as a distributed collection of elements.
All work done with Spark can be summarized as creating, transforming and applying operations over RDDs to compute a result.
Under the hood, Spark automatically distributes the data contained in RDDs across your cluster and parallelizes the operations you perform on them.
RDD properties:
* it is an immutable distributed collection of objects
* it is split into multiple partitions
* it is computed on different nodes of the cluster
* it can contain any type of Python object (user defined ones included)
An RDD can be created in two ways:
1. loading an external dataset
2. distributing a collection of objects in the driver program
We have already seen the two ways of creating an RDD.
End of explanation
"""
# if we print lines we get only this
print(lines)
# when we perform an action, then we get the result
action_result = lines.first()
print(type(action_result))
action_result
"""
Explanation: It is important to note that once we have an RDD, we can run two kind of operations:
* transformations: construct a new RDD from a previous one. For example, by filtering lines RDD we create a new RDD that holds the lines that contain "individuum" string. Note that the returning result is an RDD.
* actions: compute a result based on an RDD, and returns the result to the driver program or stores it to an external storage system (e.g. HDFS). Note that the returning result is not an RDD but another variable type.
Notice how when we go to print it, it prints out that it is an RDD and that the type is a PipelinedRDD not a list of values as we might expect. That's because we haven't performed an action yet, we've only performed a transformation.
End of explanation
"""
# filtered_lines is not computed until the next action is applied over it.
# This makes sense when working with big data sets, as it is not necessary to
# transform the whole RDD to get an action over a subset:
# Spark doesn't even read the complete file!
filtered_lines.first()
"""
Explanation: Transformations and actions are very different because of the way Spark computes RDDs.
Transformations are evaluated lazily, that is, they are only computed once they are used in an action.
End of explanation
"""
import time
lines = sc.textFile("../data/REFERENCE/*")
lines_nonempty = lines.filter( lambda x: len(x) > 0 )
words = lines_nonempty.flatMap(lambda x: x.split())
words_persisted = lines_nonempty.flatMap(lambda x: x.split())
t1 = time.time()
words.count()
print("Word count 1:",time.time() - t1)
t1 = time.time()
words.count()
print("Word count 2:",time.time() - t1)
t1 = time.time()
words_persisted.persist()
words_persisted.count()
print("Word count persisted 1:",time.time() - t1)
t1 = time.time()
words_persisted.count()
print("Word count persisted 2:", time.time() - t1)
"""
Explanation: The drawback is that Spark recomputes again the RDD at each action application.
This means that the computing effort over an already computed RDD may be lost.
To mitigate this drawback, the user can decide to persist the RDD after computing it the first time: Spark will store the RDD contents in memory (partitioned across the machines in your cluster) and reuse them in future actions.
Persisting RDDs on disk instead of memory is also possible.
Let's see an example on the impact of persisting:
End of explanation
"""
# load a file
lines = sc.textFile("../data/REFERENCE/*")
# make a transformation filtering positive length lines
lines_nonempty = lines.filter( lambda x: len(x) > 0 )
print("-> lines_nonepmty is: {} and if we print it we get\n {}".format(type(lines_nonempty), lines_nonempty))
# we transform again
words = lines_nonempty.flatMap(lambda x: x.split())
print("-> words is: {} and if we print it we get\n {}".format(type(words), words))
words_persisted = lines_nonempty.flatMap(lambda x: x.split())
print("-> words_persisted is: {} and if we print it we get\n {}".format(type(words_persisted), words_persisted))
final_result = words.take(10)
print("-> final_result is: {} and if we print it we get\n {}".format(type(final_result), final_result))
"""
Explanation: RDD Operations
We have already seen that RDDs have two basic operations: transformations and actions.
Transformations are operations that return a new RDD. Examples: filter, map.
Remember that transformed RDDs are computed lazily, only when you use them in an action.
Lazy evaluation means that when we call a transformation on an RDD (for instance, calling map()), the operation is not immediately performed.
Instead, Spark internally records metadata to indicate that this operation has been requested.
Loading data into an RDD is lazily evaluated in the same way transformations are. So, when we call sc.textFile(), the data is not loaded until it is necessary.
As with transformations, the operation (in this case, reading the data) can occur multiple times. Keep in mind that transformations DO have an impact on computation time.
Many transformations are element-wise; that is, they work on one element at a time; but this is not true for all transformations.
End of explanation
"""
import time
# we record the initial time
t1 = time.time()
words.count()
# and print the time spent on the computation
print("Word count 1:",time.time() - t1)
t1 = time.time()
words.count()
print("Word count 2:",time.time() - t1)
t1 = time.time()
words_persisted.persist()
words_persisted.count()
print("Word count persisted 1:",time.time() - t1)
t1 = time.time()
words_persisted.count()
print("Word count persisted 2:", time.time() - t1)
"""
Explanation: filter applies the lambda function to each line in lines RDD, only lines that accomplish the condition that the length is greater than zero are in lines_nonempty variable (this RDD is not computed yet!)
flatMap applies the lambda function to each element of the RDD and then the result is flattened (i.e. a list of lists would be converted to a simple list)
Actions are operations that return an object to the driver program or write to external storage, they kick a computation. Examples: first, count.
End of explanation
"""
lines = sc.textFile("../data/people.csv")
print("-> Three elements:\n", lines.take(3))
print("-> The whole RDD:\n", lines.collect())
"""
Explanation: Actions are the operations that return a final value to the driver program or write data to an external storage system.
Actions force the evaluation of the transformations required for the RDD they were called on, since they need to actually produce output.
Returning to the previous example, until we call count over words and words_persisted, the RDDs are not computed. Note that we persisted words_persisted, and only from its second computation onwards can we see the impact of keeping that RDD in memory.
If we want to see a part of the RDD, we can use take, and to have the full RDD we can use collect.
End of explanation
"""
lines = sc.textFile("../data/people.csv")
# we create a lambda function to apply to all lines of the dataset
# WARNING: see that after splitting we keep only the first element
first_cells = lines.map(lambda x: x.split(",")[0])
print(first_cells.collect())
# we can define a function as well
def get_cell(x):
return x.split(",")[0]
first_cells = lines.map(get_cell)
print(first_cells.collect())
"""
Explanation: Question: Why is it not a good idea to collect an RDD?
Passing functions to Spark
Most of Spark’s transformations, and some of its actions, depend on passing in functions that are used by Spark to compute data.
In Python, we have three options for passing functions into Spark.
* For shorter functions, we can pass in lambda expressions
* We can pass in top-level functions, or
* Locally defined functions.
End of explanation
"""
import urllib3
def download_file(csv_line):
link = csv_line[0]
http = urllib3.PoolManager()
r = http.request('GET', link, preload_content=False)
response = r.read()
return response
books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(","))
print(books_info.take(10))
books_content = books_info.map(download_file)
print(books_content.take(10)[1][:100])
"""
Explanation: Working with common Spark transformations
The two most common transformations you will likely be using are map and filter.
The map() transformation takes in a function and applies it to each element in the RDD with the result of the function being the new value of each element in the resulting RDD.
The filter() transformation takes in a function and returns an RDD that only has elements that pass the filter() function.
Sometimes map() returns nested lists, to flatten these nested lists we can use flatMap().
So, flatMap() is called individually for each element in our input RDD. Instead of returning a single element, we return an iterator with our return values. Rather than producing an RDD of iterators, we get back an RDD that consists of the elements from all of the iterators.
Set operations
distinct() transformation to produce a new RDD with only distinct items.
Note that distinct() is expensive, however, as it requires shuffling all the data over the network to ensure that we receive only one copy of each element
RDD.union(other) gives back an RDD consisting of the data from both sources.
Unlike the mathematical union(), if there are duplicates in the input RDDs, the result of Spark’s union() will contain duplicates (which we can fix if desired with distinct()).
RDD.intersection(other) returns only elements in both RDDs. intersection() also removes all duplicates (including duplicates from a single RDD) while running.
While intersection() and union() are two similar concepts, the performance of intersection() is much worse since it requires a shuffle over the network to identify common elements.
RDD.subtract(other) function takes in another RDD and returns an RDD that has only values present in the first RDD and not the second RDD. Like intersection(), it performs a shuffle.
RDD.cartesian(other) transformation returns all possible pairs of (a,b) where a is in the source RDD and b is in the other RDD.
The Cartesian product can be useful when we wish to consider the similarity between all possible pairs, such as computing every user’s expected interest in each offer. We can also take the Cartesian product of an RDD with itself, which can be useful for tasks like user similarity. Be warned, however, that the Cartesian product is very expensive for large RDDs.
Actions
reduce(): which takes a function that operates on two elements of the type in your RDD and returns a new element of the same type.
aggregate(): takes an initial zero value of the type we want to return. We then supply a function to combine the elements from our RDD with the accumulator. Finally, we need to supply a second function to merge two accumulators, given that each node accumulates its own results locally. To know more:
http://stackoverflow.com/questions/28240706/explain-the-aggregate-functionality-in-spark
http://atlantageek.com/2015/05/30/python-aggregate-rdd/
collect(): returns the entire RDD’s contents. collect() is commonly used in unit tests where the entire contents of the RDD are expected to fit in memory, as that makes it easy to compare the value of our RDD with our expected result.
take(n): returns n elements from the RDD and attempts to minimize the number of partitions it accesses, so it may represent a biased collection
top(): will use the default ordering on the data, but we can supply our own comparison function to extract the top elements.
Exercises
We have a file (../data/books.csv) with a lot of links to books. We want to perform an analysis to the books and its contents.
Exercise 1: Download all books, from books.csv using the map function.
Exercise 2: Identify transformations and actions. When the returned data is calculated?
Exercise 3: Imagine that you only want to download Dickens books, how would you do that? Which is the impact of not persisting dickens_books_content?
Exercise 4: Use flatMap() in the resulting RDD of the previous exercise, how the result is different?
Exercise 5: You want to know the different books authors there are.
Exercise 6: Return Poe's and Dickens' books URLs (use union function).
Exercise 7: Return the list of books without Dickens' and Poe's books.
Exercise 8: Count the number of books using reduce function.
For the following two exercises, we will use ../data/Sacramentorealestatetransactions.csv
Exercise 9: Compute the mean price of estates from csv containing Sacramento's estate price using aggregate function.
Exercise 10: Get top 5 highest and lowest prices in Sacramento estate's transactions
Exercise 1: Download all books, from books.csv using the map function.
Answer 1:
End of explanation
"""
import re
def is_dickens(csv_line):
link = csv_line[0]
t = re.match("http://www.textfiles.com/etext/AUTHORS/DICKENS/",link)
return t != None
dickens_books_info = books_info.filter(is_dickens)
print(dickens_books_info.take(4))
dickens_books_content = dickens_books_info.map(download_file)
# take into consideration that each time an action is performed over dickens_books_content, the files are downloaded again
# this has a big impact into calculations
print(dickens_books_content.take(2)[1][:100])
"""
Explanation: Exercise 2: Identify transformations and actions. When the returned data is calculated?
Answer 2:
If we consider the text reading as a transformation...
Transformations:
* books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(","))
* books_content = books_info.map(download_file)
Actions:
* print(books_info.take(10))
* print(books_content.take(10)[1][:100])
Computation is carried out in actions. In this case we take advantage of it, as for downloading data the function is only applied to the elements we actually take from the books_content RDD.
Exercise 3: Imagine that you only want to download Dickens books, how would you do that? Which is the impact of not persisting dickens_books_content?
Answer 3:
End of explanation
"""
flat_content = dickens_books_info.map(lambda x: x)
print(flat_content.take(4))
flat_content = dickens_books_info.flatMap(lambda x: x)
print(flat_content.take(4))
"""
Explanation: Exercise 4: Use flatMap() in the resulting RDD of the previous exercise, how the result is different?
Answer 4:
End of explanation
"""
def get_author(csv_line):
link = csv_line[0]
t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link)
if t:
return t.group(1)
return u'UNKNOWN'
authors = books_info.map(get_author)
authors.distinct().collect()
"""
Explanation: Exercise 5: You want to know the different books authors there are.
Answer 5:
End of explanation
"""
import re
def get_author_and_link(csv_line):
link = csv_line[0]
t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link)
if t:
return (t.group(1), link)
return (u'UNKNOWN',link)
authors_links = books_info.map(get_author_and_link)
# not very efficient
dickens_books = authors_links.filter(lambda x: x[0]=="DICKENS")
poes_books = authors_links.filter(lambda x: x[0]=="POE")
poes_dickens_books = poes_books.union(dickens_books)
# sample is a transformation that returns an RDD sampled over the original RDD
# https://spark.apache.org/docs/1.1.1/api/python/pyspark.rdd.RDD-class.html
poes_dickens_books.sample(True,0.05).collect()
# takeSample is an action, returning a sampled subset of the RDD
poes_dickens_books.takeSample(True,10)
"""
Explanation: Exercise 6: Return Poe's and Dickens' books URLs (use union function).
Answer 6
End of explanation
"""
authors_links.subtract(poes_dickens_books).map(lambda x: x[0]).distinct().collect()
"""
Explanation: Exercise 7: Return the list of books without Dickens' and Poe's books.
Answer 7:
End of explanation
"""
authors_links.map(lambda x: 1).reduce(lambda x,y: x+y) == authors_links.count()
# let's see this approach more in detail
# this transformation generates an rdd of 1, one per element in the RDD
authors_map = authors_links.map(lambda x: 1)
authors_map.takeSample(True,10)
# with reduce, we pass a function with two parameters which is applied by pairs
# inside the the function we specify which operation we perform with the two parameters
# the result is then used again as input, together with the next element, until a single value remains
# this is a very efficient way to do a summation in parallel
# using a functional approach
# we could define any operation inside the function
authors_map.reduce(lambda x,y: x*y)
"""
Explanation: Exercise 8: Count the number of books using reduce function.
Answer 8
End of explanation
"""
sacramento_estate_csv = sc.textFile("../data/Sacramentorealestatetransactions.csv")
header = sacramento_estate_csv.first()
# first load the data
# we know that the price is in column 9
sacramento_estate = sacramento_estate_csv.filter(lambda x: x != header)\
.map(lambda x: x.split(","))\
.map(lambda x: int(x[9]))
sacramento_estate.takeSample(True, 10)
seqOp = (lambda x,y: (x[0] + y, x[1] + 1))
combOp = (lambda x,y: (x[0] + y[0], x[1] + y[1]))
total_sum, number = sacramento_estate.aggregate((0,0),seqOp,combOp)
mean = float(total_sum)/number
mean
"""
Explanation: Exercise 9: Compute the mean price of estates from csv containing Sacramento's estate price using aggregate function.
Answer 9
End of explanation
"""
print(sacramento_estate.top(5))
print(sacramento_estate.top(5, key=lambda x: -x))
"""
Explanation: Exercise 10: Get top 5 highest and lowest prices in Sacramento estate's transactions
Answer 10
End of explanation
"""
import re
def get_author_data(csv_line):
link = csv_line[0]
t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link)
if t:
return (t.group(1), csv_line)
return (u'UNKNOWN', csv_line)
books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(","))
authors_info = books_info.map(get_author_data)
print(authors_info.take(5))
"""
Explanation: Spark Key/Value Pairs
Spark provides special operations on RDDs containing key/value pairs.
These RDDs are called pair RDDs, but they are simply RDDs with a special structure. In Python, for the functions on keyed data to work, we need an RDD composed of tuples.
Exercise 1: Create a pair RDD from our books information data, having author as key and the rest of the information as value. (Hint: the answer is very similar to the previous section Exercise 6)
Exercise 2: Check that pair RDDs are also RDDs and that common RDD operations work as well. Filter elements with author equals to "UNKNOWN" from previous RDD.
Exercise 3: Check the mapValues function in the Spark API (http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.mapValues), which works on pair RDDs.
Exercise 1: Create a pair RDD from our books information data, having author as key and the rest of the information as value. (Hint: the answer is very similar to the previous section Exercise 6)
Answer 1:
End of explanation
"""
authors_info.filter(lambda x: x[0] != "UNKNOWN").take(3)
"""
Explanation: Exercise 2: Check that pair RDDs are also RDDs and that common RDD operations work as well. Filter elements with author equals to "UNKNOWN" from previous RDD.
Answer 2:
The operations over pair RDDs will also be slightly different.
But take into account that pair RDDs are just special RDDs to which some extra operations can be applied; common RDD operations also work on them.
End of explanation
"""
authors_info.mapValues(lambda x: x[2]).take(5)
"""
Explanation: Exercise 3: Check the mapValues function in the Spark API (http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.mapValues), which works on pair RDDs.
Answer 3:
Sometimes it is awkward to work with pairs, and Spark provides a map function that operates over values only.
End of explanation
"""
# first get each book size, keyed by author
authors_data = authors_info.mapValues(lambda x: int(x[2]))
authors_data.take(5)
# then reduce by summing
authors_data.reduceByKey(lambda y,x: y+x).collect()
"""
Explanation: Transformations on Pair RDDs
Since pair RDDs contain tuples, we need to pass functions that operate on tuples rather than on individual elements.
reduceByKey(func): Combine values with the same key.
groupByKey(): Group values with the same key.
combineByKey(createCombiner, mergeValue, mergeCombiners, partitioner): Combine values with the same key using a different result type.
keys(): return RDD keys
values(): return RDD values
groupBy(): takes a function that it applies to every element in the source RDD and uses the result to determine the key.
cogroup(): over two RDDs sharing the same key type, K, with the respective value types V and W gives us back RDD[(K,(Iterable[V], Iterable[W]))]. If one of the RDDs doesn’t have elements for a given key that is present in the other RDD, the corresponding Iterable is simply empty. cogroup() gives us the power to group data from multiple RDDs.
Exercise 1: Get the total size of files for each author.
Exercise 2: Get the top 5 authors with more data.
Exercise 3: Try the combineByKey() with a randomly generated set of 5 values for 4 keys. Get the average value of the random variable for each key.
Exercise 4: Compute the average book size per author using combineByKey(). If you were an English Literature student and your teacher says: "Pick one Author and I'll randomly pick a book for you to read", what would be a Data Scientist answer?
Exercise 5: All Spark books have the word count example. Let's count words over all our books! (This might take some time)
Exercise 6: Group author data by author surname initial. How many authors have we grouped?
Exercise 7: Generate a pair RDD with alphabet letters in upper case as key, and empty list as value. Then group the previous RDD with this new one.
Exercise 1: Get the total size of files for each author.
Answer 1
End of explanation
"""
authors_data.reduceByKey(lambda y,x: y+x).top(5,key=lambda x: x[1])
"""
Explanation: Exercise 2: Get the top 5 authors with more data.
Answer 2:
End of explanation
"""
import numpy as np
# generate the data
data = list(zip(np.arange(4).tolist()*5, np.random.normal(0,1,4*5)))
for pair in data:
    print(pair)
rdd = sc.parallelize(data)
createCombiner = lambda value: (value,1)
# you can check what createCombiner does
# rdd.mapValues(createCombiner).collect()
# here x is the combiner (sum,count) and value is value in the
# initial RDD (the random variable)
mergeValue = lambda x, value: (x[0] + value, x[1] + 1)
# here, all combiners are summed (sum,count)
mergeCombiner = lambda x, y: (x[0] + y[0], x[1] + y[1])
sumCount = rdd.combineByKey(createCombiner,
mergeValue,
mergeCombiner)
print(sumCount.collect())
sumCount.mapValues(lambda x: x[0]/x[1]).collect()
"""
Explanation: Exercise 3: Try the combineByKey() with a randomly generated set of 5 values for 4 keys. Get the average value of the random variable for each key.
Answer 3:
End of explanation
"""
createCombiner = lambda value: (value,1)
# you can check what createCombiner does
# rdd.mapValues(createCombiner).collect()
# here x is the combiner (sum,count) and value is value in the
# initial RDD (the random variable)
mergeValue = lambda x, value: (x[0] + value, x[1] + 1)
# here, all combiners are summed (sum,count)
mergeCombiner = lambda x, y: (x[0] + y[0], x[1] + y[1])
sumCount = authors_data.combineByKey(createCombiner,
mergeValue,
mergeCombiner)
print(sumCount.mapValues(lambda x: x[0]/x[1]).collect())
# I would choose the author with lowest average book size
print(sumCount.mapValues(lambda x: x[0]/x[1]).top(5,lambda x: -x[1]))
"""
Explanation: Exercise 4: Compute the average book size per author using combineByKey(). If you were an English Literature student and your teacher says: "Pick one Author and I'll randomly pick a book for you to read", what would be a Data Scientist answer?
Answer 4:
End of explanation
"""
import urllib3
import re
def download_file(csv_line):
link = csv_line[0]
http = urllib3.PoolManager()
r = http.request('GET', link, preload_content=False)
response = r.read()
return str(response)
books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(","))
#books_content = books_info.map(download_file)
# while trying the function use only two samples
books_content = sc.parallelize(books_info.map(download_file).take(2))
words_rdd = books_content.flatMap(lambda x: x.split(" ")).\
flatMap(lambda x: x.split("\r\n")).\
map(lambda x: re.sub('[^0-9a-zA-Z]+', '', x).lower()).\
filter(lambda x: x != '')
words_rdd.map(lambda x: (x,1)).reduceByKey(lambda x,y: x+y).top(5, key=lambda x: x[1])
"""
Explanation: Exercise 5: All Spark books have the word count example. Let's count words over all our books! (This might take some time)
Answer 5:
End of explanation
"""
print(authors_info.groupBy(lambda x: x[0][0]).collect())
authors_info.map(lambda x: x[0]).distinct().\
map(lambda x: (x[0],1)).\
reduceByKey(lambda x,y: x+y).\
filter(lambda x: x[1]>1).\
collect()
"""
Explanation: Exercise 6: Group author data by author surname initial. How many authors have we grouped?
Answer 6:
End of explanation
"""
import string
sc.parallelize(list(string.ascii_uppercase)).\
map(lambda x: (x,[])).\
cogroup(authors_info.groupBy(lambda x: x[0][0])).\
take(5)
"""
Explanation: Exercise 7: Generate a pair RDD with alphabet letters in upper case as key, and empty list as value. Then group the previous RDD with this new one.
Answer 7:
End of explanation
"""
#more info: https://www.worlddata.info/downloads/
rdd_countries = sc.textFile("../data/countries_data_clean.csv").map(lambda x: x.split(","))
#more info: http://data.worldbank.org/data-catalog/GDP-ranking-table
rdd_gdp = sc.textFile("../data/countries_GDP_clean.csv").map(lambda x: x.split(";"))
# check rdds size
hyp_final_rdd_num = rdd_gdp.count() if rdd_countries.count() > rdd_gdp.count() else rdd_countries.count()
print("The final number of elements in the joined rdd should be: ", hyp_final_rdd_num)
p_rdd_gdp = rdd_gdp.map(lambda x: (x[3],x))
p_rdd_countries = rdd_countries.map(lambda x: (x[1],x))
print(p_rdd_countries.take(1))
print(p_rdd_gdp.take(1))
p_rdd_contry_data = p_rdd_countries.join(p_rdd_gdp)
final_join_rdd_size = p_rdd_contry_data.count()
hyp = hyp_final_rdd_num == final_join_rdd_size
print("The initial hypothesis is ", hyp)
if not hyp:
print("The final joined rdd size is ", final_join_rdd_size)
"""
Explanation: Joins
Some of the most useful operations we get with keyed data comes from using it together with other keyed data.
Joining data together is probably one of the most common operations on a pair RDD, and we have a full range of options including right and left outer joins, cross joins, and inner joins.
Inner Join
Only keys that are present in both pair RDDs are output.
When there are multiple values for the same key in one of the inputs, the resulting pair RDD will have an entry for every possible pair of values with that key from the two input RDDs
Exercise:
Take countries_data_clean.csv and countries_GDP_clean.csv and join them using country name as key.
Before doing the join, please, check how many element should the resulting pair RDD have.
After the join, check if the initial hypothesis was true.
If it is not, what is the reason?
How would you resolve that problem?
End of explanation
"""
n = 5
rdd_1 = sc.parallelize([(x,1) for x in range(n)])
rdd_2 = sc.parallelize([(x*2,1) for x in range(n)])
print("rdd_1: ",rdd_1.collect())
print("rdd_2: ",rdd_2.collect())
print("leftOuterJoin: ",rdd_1.leftOuterJoin(rdd_2).collect())
print("rightOuterJoin: ",rdd_1.rightOuterJoin(rdd_2).collect())
print("join: ", rdd_1.join(rdd_2).collect())
# explore what happens if a key appears twice or more
rdd_3 = sc.parallelize([(x*2,1) for x in range(n)] + [(4,2),(6,4)])
print("rdd_3: ",rdd_3.collect())
print("join: ", rdd_2.join(rdd_3).collect())
"""
Explanation: Left and Right outer Joins
Sometimes we don’t need the key to be present in both RDDs to want it in our result.
For example, imagine that our list of countries is not complete, and we don't want to lose data when a country is not present in both RDDs.
leftOuterJoin(other) and rightOuterJoin(other) both join pair RDDs together by key, where one of the pair RDDs can be missing the key.
With leftOuterJoin() the resulting pair RDD has entries for each key in the source RDD.
The value associated with each key in the result is a tuple of the value from the source RDD and an Option for the value from the other pair RDD.
In Python, if a value isn’t present None is used; and if the value is present the regular value, without any wrapper, is used.
As with join(), we can have multiple entries for each key; when this occurs, we get the Cartesian product between the two lists of values.
rightOuterJoin() is almost identical to leftOuterJoin() except the key must be present in the other RDD and the tuple has an option for the source rather than the other RDD.
Exercise:
Use two simple RDDs to show the results of left and right outer join.
End of explanation
"""
rdd_gdp = sc.textFile("../data/countries_GDP_clean.csv").map(lambda x: x.split(";"))
rdd_gdp.take(2)
# generate a pair RDD with country code and GDP
rdd_cc_gdp = rdd_gdp.map(lambda x: (x[1],x[4]))
rdd_cc_gdp.take(2)
"""
Explanation: Exercise:
Generate two pair RDDs with country info:
1. A first one with country code and GDP
2. A second one with country code and life expectancy
Then join them to have a pair RDD with country code plus GDP and life expectancy.
Answer:
Inspect the dataset with GDP.
End of explanation
"""
rdd_countries = sc.textFile("../data/countries_data_clean.csv").map(lambda x: x.split(","))
print(rdd_countries.take(2))
# generate a pair RDD with country code and life expectancy
#(more info in https://www.worlddata.info/downloads/)
#we don't have countrycode in this dataset, but let's try to add it
#we have a dataset with countrynames and countrycodes
#let's take countryname and ISO 3166-1 alpha3 code
rdd_cc = sc.textFile("../data/countrycodes.csv").\
map(lambda x: x.split(";")).\
map(lambda x: (x[0].strip("\""),x[4].strip("\""))).\
filter(lambda x: x[0] != 'Country (en)')
print(rdd_cc.take(2))
rdd_cc_info = rdd_countries.map(lambda x: (x[1],x[16]))
rdd_cc_info.take(2)
#let's count and see if something is missing
print(rdd_cc.count())
print(rdd_cc_info.count())
#take only values, the name is no longer needed
rdd_name_cc_le = rdd_cc_info.leftOuterJoin(rdd_cc)
rdd_cc_le = rdd_name_cc_le.map(lambda x: x[1])
print(rdd_cc_le.take(5))
print(rdd_cc_le.count())
#what is missing?
rdd_name_cc_le.filter(lambda x: x[1][1] is None).collect()
# how can we solve this problem?
"""
Explanation: Inspect the dataset with life expectancy.
End of explanation
"""
print("Is there some data missing?", rdd_cc_gdp.count() != rdd_cc_le.count())
print("GDP dataset: ", rdd_cc_gdp.count())
print("Life expectancy dataset: ", rdd_cc_le.count())
# let's see what happens
print(rdd_cc_le.take(10))
print(rdd_cc_gdp.take(10))
rdd_cc_gdp_le = rdd_cc_le.map(lambda x: (x[1],x[0])).leftOuterJoin(rdd_cc_gdp)
# some countries have missing data;
# we have to check whether the data is available elsewhere
# or whether there is an error
rdd_cc_gdp_le.take(10)
"""
Explanation: Some data is missing and will have to be completed later, but we still have plenty to work with, so let's continue.
Inspect the results of GDP and life expectancy and join them. Is there some data missing?
End of explanation
"""
p_rdd_contry_data.sortByKey().take(2)
"""
Explanation: Sort Data
sortByKey(): We can sort an RDD with key/value pairs provided that there is an ordering defined on the key.
Once we have sorted our data, any subsequent call on the sorted data to collect() or save() will result in ordered data.
Exercise:
Sort country data by key.
End of explanation
"""
p_rdd_contry_data.countByKey()["Andorra"]
p_rdd_contry_data.collectAsMap()["Andorra"]
#p_rdd_contry_data.lookup("Andorra")
"""
Explanation: Actions over Pair RDDs
countByKey(): Count the number of elements for each key.
collectAsMap(): Collect the result as a map to provide easy lookup.
lookup(key): Return all values associated with the provided key.
Exercises:
1. Count countries RDD by key
2. Collect countries RDD as map
3. Lookup Andorra info in countries RDD
End of explanation
"""
rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\
.filter(lambda x: len(x)>0)\
.map(lambda x: (x.split(",")[0],x.split(",")[1].split("|")))
rdd_userinfo.take(2)
"""
Explanation: Data Partitioning
(from: Learning Spark - O'Reilly)
Spark programs can choose to control their RDDs’ partitioning to reduce communication.
Partitioning will not be helpful in all applications— for example, if a given RDD is scanned only once, there is no point in partitioning it in advance.
It is useful only when a dataset is reused multiple times in key-oriented operations such as joins.
Spark’s partitioning is available on all RDDs of key/value pairs, and causes the system to group elements based on a function of each key.
Although Spark does not give explicit control over which worker node each key goes to (partly because the system is designed to work even if specific nodes fail), it lets the program ensure that a set of keys will appear together on some node.
Example:
As a simple example, consider an application that keeps a large table of user information in memory—say, an RDD of (UserID, UserInfo) pairs, where UserInfo contains a list of topics the user is subscribed to.
End of explanation
"""
rdd_userevents = sc.textFile("../data/users_events_example/userevents_*.log")\
.filter(lambda x: len(x))\
.map(lambda x: (x.split(",")[1], [x.split(",")[2]]))
print(rdd_userevents.take(2))
"""
Explanation: The application periodically combines this table with a smaller file representing events that happened in the past five minutes—say, a table of (UserID, LinkInfo) pairs for users who have clicked a link on a website in those five minutes.
End of explanation
"""
rdd_joined = rdd_userinfo.join(rdd_userevents)
print(rdd_joined.count())
print(rdd_joined.filter(lambda x: (x[1][1][0] not in x[1][0])).count())
print(rdd_joined.filter(lambda x: (x[1][1][0] in x[1][0])).count())
"""
Explanation: For example, we may wish to count how many users visited a link that was not to one of their subscribed topics. We can perform this combination with Spark’s join() operation, which can be used to group the User Info and LinkInfo pairs for each UserID by key.
End of explanation
"""
rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\
.filter(lambda x: len(x)>0)\
.map(lambda x: (x.split(",")[0],x.split(",")[1].split("|"))).persist()
def process_new_logs(event_file_path):
    rdd_userevents = sc.textFile(event_file_path)\
        .filter(lambda x: len(x))\
        .map(lambda x: (x.split(",")[1], [x.split(",")[2]]))
rdd_joined = rdd_userinfo.join(rdd_userevents)
print("Number of visits to non-subscribed topics: ",
rdd_joined.filter(lambda x: (x[1][1][0] not in x[1][0])).count())
process_new_logs("../data/users_events_example/userevents_01012016000500.log")
"""
Explanation: Imagine that we want to count the number of visits to non-subscribed topics using a function.
End of explanation
"""
rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\
.filter(lambda x: len(x)>0)\
.map(lambda x: (x.split(",")[0],x.split(",")[1].split("|"))).partitionBy(10)
rdd_userinfo
"""
Explanation: This code will run fine as is, but it will be inefficient.
This is because the join() operation, called each time process_new_logs() is invoked, does not know anything about how the keys are partitioned in the datasets.
By default, this operation will hash all the keys of both datasets, sending elements with the same key hash across the network to the same machine, and then join together the elements with the same key on that machine (see figure below).
Because we expect the rdd_userinfo table to be much larger than the small log of events seen every five minutes, this wastes a lot of work: the rdd_userinfo table is hashed and shuffled across the network on every call, even though it doesn’t change.
Fixing this is simple: just use the partitionBy() transformation on rdd_userinfo to hash-partition it at the start of the program. In Python we do this by passing the desired number of partitions to partitionBy() (in Scala one would pass a spark.HashPartitioner object).
End of explanation
"""
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
alfkjartan/control-computarizado | discrete-time-systems/notebooks/Zero-order-hold sampling.ipynb | mit | import sympy as sy
h, lam = sy.symbols('h, lambda', real=True, positive=True)
s, z = sy.symbols('s, z', real=False)
G = 1/(s-lam)
Y = G/s
Yp = sy.apart(Y, s)
Yp
from sympy.integrals.transforms import inverse_laplace_transform
from sympy.abc import t
inverse_laplace_transform(Yp, s, t)
"""
Explanation: Zero order hold sampling of a first order system
\begin{equation}
G(s) = \frac{1}{s-\lambda}
\end{equation}
End of explanation
"""
import numpy as np
import control.matlab as cm  # MATLAB-like interface from the python-control package

lam = -0.5
h = 0.1
G = cm.tf([1], [1, -lam])
Gd = cm.c2d(G, h)
Hd = 1/lam * cm.tf([np.exp(lam*h)-1], [1, -np.exp(lam*h)], h)  # H(z) = (e^(lam*h) - 1)/(lam*(z - e^(lam*h)))
print(Gd)
print(Hd)
"""
Explanation: Sampling and taking the z-transform of the step-response
\begin{equation}
Y(z) = \frac{1}{\lambda} \left( \frac{z}{z-\mathrm{e}^{\lambda h}} - \frac{z}{z-1} \right).
\end{equation}
Dividing by the z-transform of the input signal
\begin{equation}
H(z) = \frac{z-1}{z}Y(z) = \frac{1}{\lambda} \left( \frac{ \mathrm{e}^{\lambda h} - 1 }{ z - \mathrm{e}^{\lambda h} } \right)
\end{equation}
Verifying for a specific value of lambda
End of explanation
"""
|
Saxafras/Spacetime | State Overlay Tests.ipynb | bsd-3-clause | overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, text_color='red')
overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, colors=plt.cm.Set2, text_color='black')
overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, colorbar=True)
"""
Explanation: Tests overlaying rule 18 values on top of spacetime diagram
End of explanation
"""
rule_18 = ECA(18, domain_18(500))
rule_18.evolve(500)
states_18 = epsilon_field(rule_18.get_spacetime())
states_18.estimate_states(3,3,1,alpha=0)
states_18.filter_data()
overlay_test(rule_18.get_spacetime(), states_18.get_causal_field(), t_min=200, t_max=240, x_min=200, x_max=240)
overlay_test(rule_18.get_spacetime(), states_18.get_causal_field(), t_min=200, t_max=220, x_min=200, x_max=220)
print(states_18.get_causal_field()[200:220, 200:220])
"""
Explanation: Tests overlaying inferred states on top of rule 18 spacetime diagram
End of explanation
"""
|
ChadFulton/statsmodels | examples/notebooks/pca_fertility_factors.ipynb | bsd-3-clause | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.multivariate.pca import PCA
"""
Explanation: Statsmodels Principal Component Analysis
Key ideas: Principal component analysis, world bank data, fertility
In this notebook, we use principal components analysis (PCA) to analyze the time series of fertility rates in 192 countries, using data obtained from the World Bank. The main goal is to understand how the trends in fertility over time differ from country to country. This is a slightly atypical illustration of PCA because the data are time series. Methods such as functional PCA have been developed for this setting, but since the fertility data are very smooth, there is no real disadvantage to using standard PCA in this case.
End of explanation
"""
data = sm.datasets.fertility.load_pandas().data
data.head()
"""
Explanation: The data can be obtained from the World Bank web site, but here we work with a slightly cleaned-up version of the data:
End of explanation
"""
columns = list(map(str, range(1960, 2012)))
data.set_index('Country Name', inplace=True)
dta = data[columns]
dta = dta.dropna()
dta.head()
"""
Explanation: Here we construct a DataFrame that contains only the numerical fertility rate data and set the index to the country names. We also drop all the countries with any missing data.
End of explanation
"""
ax = dta.mean().plot(grid=False)
ax.set_xlabel("Year", size=17)
ax.set_ylabel("Fertility rate", size=17);
ax.set_xlim(0, 51)
"""
Explanation: There are two ways to use PCA to analyze a rectangular matrix: we can treat the rows as the "objects" and the columns as the "variables", or vice-versa. Here we will treat the fertility measures as "variables" used to measure the countries as "objects". Thus the goal will be to reduce the yearly fertility rate values to a small number of fertility rate "profiles" or "basis functions" that capture most of the variation over time in the different countries.
The mean trend is removed in PCA, but it's worthwhile taking a look at it. It shows that fertility has dropped steadily over the time period covered in this dataset. Note that the mean is calculated using a country as the unit of analysis, ignoring population size. This is also true for the PC analysis conducted below. A more sophisticated analysis might weight the countries, say by population in 1980.
End of explanation
"""
pca_model = PCA(dta.T, standardize=False, demean=True)
"""
Explanation: Next we perform the PCA:
End of explanation
"""
fig = pca_model.plot_scree(log_scale=False)
"""
Explanation: Based on the eigenvalues, we see that the first PC dominates, with perhaps a small amount of meaningful variation captured in the second and third PC's.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8, 4))
lines = ax.plot(pca_model.factors.iloc[:,:3], lw=4, alpha=.6)
ax.set_xticklabels(dta.columns.values[::10])
ax.set_xlim(0, 51)
ax.set_xlabel("Year", size=17)
fig.subplots_adjust(.1, .1, .85, .9)
legend = fig.legend(lines, ['PC 1', 'PC 2', 'PC 3'], loc='center right')
legend.draw_frame(False)
"""
Explanation: Next we will plot the PC factors. The dominant factor is monotonically increasing. Countries with a positive score on the first factor will increase faster (or decrease slower) compared to the mean shown above. Countries with a negative score on the first factor will decrease faster than the mean. The second factor is U-shaped with a positive peak at around 1985. Countries with a large positive score on the second factor will have lower than average fertilities at the beginning and end of the data range, but higher than average fertility in the middle of the range.
End of explanation
"""
idx = pca_model.loadings.iloc[:,0].argsort()
"""
Explanation: To better understand what is going on, we will plot the fertility trajectories for sets of countries with similar PC scores. The following convenience function produces such a plot.
End of explanation
"""
def make_plot(labels):
fig, ax = plt.subplots(figsize=(9,5))
ax = dta.loc[labels].T.plot(legend=False, grid=False, ax=ax)
dta.mean().plot(ax=ax, grid=False, label='Mean')
ax.set_xlim(0, 51);
fig.subplots_adjust(.1, .1, .75, .9)
ax.set_xlabel("Year", size=17)
ax.set_ylabel("Fertility", size=17);
legend = ax.legend(*ax.get_legend_handles_labels(), loc='center left', bbox_to_anchor=(1, .5))
legend.draw_frame(False)
labels = dta.index[idx[-5:]]
make_plot(labels)
"""
Explanation: First we plot the five countries with the greatest scores on PC 1. These countries have a higher rate of fertility increase than the global mean (which is decreasing).
End of explanation
"""
idx = pca_model.loadings.iloc[:,1].argsort()
make_plot(dta.index[idx[-5:]])
"""
Explanation: Here are the five countries with the greatest scores on factor 2. These are countries that reached peak fertility around 1980, later than much of the rest of the world, followed by a rapid decrease in fertility.
End of explanation
"""
make_plot(dta.index[idx[:5]])
"""
Explanation: Finally we have the countries with the most negative scores on PC 2. These are the countries where the fertility rate declined much faster than the global mean during the 1960's and 1970's, then flattened out.
End of explanation
"""
fig, ax = plt.subplots()
pca_model.loadings.plot.scatter(x='comp_00',y='comp_01', ax=ax)
ax.set_xlabel("PC 1", size=17)
ax.set_ylabel("PC 2", size=17)
dta.index[pca_model.loadings.iloc[:, 1] > .2].values
"""
Explanation: We can also look at a scatterplot of the first two principal component scores. We see that the variation among countries is fairly continuous, except perhaps that the two countries with highest scores for PC 2 are somewhat separated from the other points. These countries, Oman and Yemen, are unique in having a sharp spike in fertility around 1980. No other country has such a spike. In contrast, the countries with high scores on PC 1 (that have continuously increasing fertility), are part of a continuum of variation.
End of explanation
"""
|
WomensCodingCircle/CodingCirclePython | Lesson11_JSONandAPIs/JSONandAPIs.ipynb | mit | import json
"""
Explanation: JSON and APIs
JSON
What is JSON? From JSON.org:
JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.
But that isn't exactly helpful is it? JSON is a string format that allows you to store dictionaries, lists, strings, and numbers in a way that you can pass it from one source to another. You can take a Python dictionary and pass it to a perl program by printing it in JSON format and loading it or you can pull data from the web and create a python dictionary or list from that. Even if you don't understand now, after you use it, JSON will become more clear.
You have in this folder a json file called shapes.json. Take a look at it and then we can talk about JSON format.
JSON Format
JSON looks a lot like a subset of Python. You have a top level object that is either a list or a dictionary. Keys must be strings, and values can be any of the following: strings, floats, ints, lists, booleans, null, or another dictionary. To see how to represent these refer to the documentation www.json.org
JSON in Python
To use json in python we use the module json. It is part of the standard library so you don't need to install anything.
import json
End of explanation
"""
# Load from file
with open('shapes.json') as fh:
shapes = json.load(fh)
print(shapes)
# Load from string
complex_shapes_string = '["pentagon", "spiral", "double helix"]'
complex_shapes = json.loads(complex_shapes_string)
print(complex_shapes)
"""
Explanation: Loading data
You can load data from json format into python from either a string using the loads method or a file handle using the load method.
my_list = json.loads('[1, 2, 3]')
with open('my_file.json') as fh:
my_dict = json.load(fh)
End of explanation
"""
for shape in shapes:
title_shape = shape.title()
area_formula = shapes[shape]['area']
print("{}'s area can be calculated using {}".format(title_shape, area_formula))
"""
Explanation: TRY IT
Create a string called three_d_json which contains the string '["cube", "sphere"]' and then load that data into a python list using json.loads.
Using JSON data
Once you have loaded data from JSON format, you can use it like you would any other python dictionary or list.
End of explanation
"""
# Dumping to string
favorite_shapes = ['hexagon', 'heart']
fav_shapes_json = json.dumps(favorite_shapes)
print(fav_shapes_json)
# Dumping to a file
with open('fav_shapes.json', 'w') as fh:
json.dump(favorite_shapes, fh)
"""
Explanation: TRY IT
For each shape in complex_shapes, print "<shape> is hard to find the area of".
Dumping JSON Data
If you want to store data from your python program into JSON format, it is as simple as loading it. To dump to a string use json.dumps and to dump to a file use json.dump. Make sure that you are using only valid json values in your list or dictionary.
json_string = json.dumps(my_list)
with open('json_file.json', 'w') as fh:
json.dump(my_dict, fh)
End of explanation
"""
import urllib.request, urllib.error, urllib.parse
game_id = str(251990)
connection = urllib.request.urlopen('http://store.steampowered.com/api/appdetails?appids=' + game_id)
data = connection.read()
type(data)
"""
Explanation: TRY IT
Create a list of 4-sided shapes and store it in a variable called quads, dump quads to JSON, and store the result in a variable called quads_json.
Web APIs
Web APIs are a way to retrieve and send data to and from a URL. The URLs have a pattern so that you can retrieve data programmatically. With REST APIs specifically, you build a URL, putting data in the correct places to retrieve the data you need. Many Web APIs (the best ones) return their data in JSON format.
There are many free APIs available; most require that you sign up to receive an API key. You will need to read the API docs for any specific API to figure out how to get the data you want.
Here are some fun APIs to try out:
* Dropbox: https://www.dropbox.com/developers
* Google Maps: https://developers.google.com/maps/web/
* Twitter: https://dev.twitter.com/docs
* YouTube: https://developers.google.com/youtube/v3/getting-started
* Soundcloud: http://developers.soundcloud.com/docs/api/guide#playing
* Stripe: https://stripe.com/docs/tutorials/checkout
* Instagram: http://instagram.com/developer/
* Twilio: https://www.twilio.com/docs
* Yelp: http://www.yelp.com/developers/getting_started
* Facebook: https://developers.facebook.com/docs/facebook-login/login-flow-for-web
* Etsy: https://www.etsy.com/developers/documentation
We are going to use the steam api because certain endpoints don't require an app id (and who has time to sign up for one when there is python to learn?)
The endpoint we will use is one that will get us metadata info about a specific game:
http://store.steampowered.com/api/appdetails?appids=<id number>
If the game doesn't exist it returns json that looks like this:
{"1":{"success":false}}
If the game does exist it returns json that looks like this:
"100":{
"success":true,
"data":{
"type":"game",
"name":"Counter-Strike: Condition Zero",
"steam_appid":80,
"required_age":0,
"is_free":false,
"detailed_description":"With its extensive Tour of Duty campaign, a near-limitless number of skirmish modes, updates and new content for Counter-Strike's award-winning multiplayer game play, plus over 12 bonus single player missions, Counter-Strike: Condition Zero is a tremendous offering of single and multiplayer content.",
"about_the_game":"With its extensive Tour of Duty campaign, a near-limitless number of skirmish modes, updates and new content for Counter-Strike's award-winning multiplayer game play, plus over 12 bonus single player missions, Counter-Strike: Condition Zero is a tremendous offering of single and multiplayer content.",
"supported_languages":"English, French, German, Italian, Spanish, Simplified Chinese, Traditional Chinese, Korean",
"header_image":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/header.jpg?t=1447889920",
"website":null,
"pc_requirements":{
"minimum":"\r\n\t\t\t<p><strong>Minimum:<\/strong> 500 mhz processor, 96mb ram, 16mb video card, Windows XP, Mouse, Keyboard, Internet Connection<br \/><\/p>\r\n\t\t\t<p><strong>Recommended:<\/strong> 800 mhz processor, 128mb ram, 32mb+ video card, Windows XP, Mouse, Keyboard, Internet Connection<br \/><\/p>\r\n\t\t\t"
},
"mac_requirements":[
],
"linux_requirements":[
],
"developers":[
"Valve"
],
"publishers":[
"Valve"
],
"price_overview":{
"currency":"USD",
"initial":999,
"final":999,
"discount_percent":0
},
"packages":[
7
],
"package_groups":[
{
"name":"default",
"title":"Buy Counter-Strike: Condition Zero",
"description":"",
"selection_text":"Select a purchase option",
"save_text":"",
"display_type":0,
"is_recurring_subscription":"false",
"subs":[
{
"packageid":7,
"percent_savings_text":"",
"percent_savings":0,
"option_text":"Counter-Strike: Condition Zero $9.99",
"option_description":"",
"can_get_free_license":"0",
"is_free_license":false,
"price_in_cents_with_discount":999
}
]
}
],
"platforms":{
"windows":true,
"mac":true,
"linux":true
},
"metacritic":{
"score":65,
"url":"http:\/\/www.metacritic.com\/game\/pc\/counter-strike-condition-zero?ftag=MCD-06-10aaa1f"
},
"categories":[
{
"id":2,
"description":"Single-player"
},
{
"id":1,
"description":"Multi-player"
},
{
"id":8,
"description":"Valve Anti-Cheat enabled"
}
],
"genres":[
{
"id":"1",
"description":"Action"
}
],
"screenshots":[
{
"id":0,
"path_thumbnail":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002528.600x338.jpg?t=1447889920",
"path_full":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002528.1920x1080.jpg?t=1447889920"
},
{
"id":1,
"path_thumbnail":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002529.600x338.jpg?t=1447889920",
"path_full":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002529.1920x1080.jpg?t=1447889920"
},
{
"id":2,
"path_thumbnail":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002530.600x338.jpg?t=1447889920",
"path_full":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002530.1920x1080.jpg?t=1447889920"
},
{
"id":3,
"path_thumbnail":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002531.600x338.jpg?t=1447889920",
"path_full":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002531.1920x1080.jpg?t=1447889920"
},
{
"id":4,
"path_thumbnail":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002532.600x338.jpg?t=1447889920",
"path_full":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002532.1920x1080.jpg?t=1447889920"
},
{
"id":5,
"path_thumbnail":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002533.600x338.jpg?t=1447889920",
"path_full":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002533.1920x1080.jpg?t=1447889920"
},
{
"id":6,
"path_thumbnail":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002534.600x338.jpg?t=1447889920",
"path_full":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002534.1920x1080.jpg?t=1447889920"
},
{
"id":7,
"path_thumbnail":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002535.600x338.jpg?t=1447889920",
"path_full":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/0000002535.1920x1080.jpg?t=1447889920"
}
],
"recommendations":{
"total":6647
},
"release_date":{
"coming_soon":false,
"date":"Mar 1, 2004"
},
"support_info":{
"url":"http:\/\/steamcommunity.com\/app\/80",
"email":""
},
"background":"http:\/\/cdn.akamai.steamstatic.com\/steam\/apps\/80\/page_bg_generated_v6b.jpg?t=1447889920"
}
}
}
You can actually use the URL in a browser. Try it and see if you hit on any interesting games by entering an id number.
Accessing API data with python
There are many options for getting data from a URL with python: http.client, urllib.request, requests. This isn't limited to JSON data from a web API; you can get the raw HTML from any website. We are going to use urllib.request because it is part of the standard library and it is easy to use.
First, as with any library, we import it
import urllib.request
Then you open a url using the urlopen method
connection = urllib.request.urlopen('url')
Then you can read the data
data = connection.read()
End of explanation
"""
game_data = json.loads(data)
print(type(game_data))
"""
Explanation: Now the result is a string, but it is valid json and we know how to turn a json string into a python dictionary: json.loads()
End of explanation
"""
print(game_data[game_id]['data']['name'])
print(game_data[game_id]['data']['about_the_game'])
print(game_data[game_id]['data']['price_overview']['final'])
"""
Explanation: Finally you can use this data just like you would any python dictionary.
End of explanation
"""
|
wittawatj/kernel-gof | ipynb/ex2_results.ipynb | mit | %load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import kgof.data as data
import kgof.glo as glo
import kgof.goftest as gof
import kgof.kernel as kernel
import kgof.plot as plot
import kgof.util as util
import scipy.stats as stats
import kgof.plot
kgof.plot.set_default_matplotlib_options()
def load_plot_vs_params(fname, xlabel='Problem parameter', show_legend=True):
func_xvalues = lambda agg_results: agg_results['prob_params']
ex = 2
def func_title(agg_results):
repeats, _, n_methods = agg_results['job_results'].shape
alpha = agg_results['alpha']
test_size = (1.0 - agg_results['tr_proportion'])*agg_results['sample_size']
title = '%s. %d trials. test size: %d. $\\alpha$ = %.2g.'%\
( agg_results['prob_label'], repeats, test_size, alpha)
return title
#plt.figure(figsize=(10,5))
results = plot.plot_prob_reject(
ex, fname, func_xvalues, xlabel, func_title=func_title)
plt.title('')
plt.gca().legend(loc='best').set_visible(show_legend)
if show_legend:
plt.legend(bbox_to_anchor=(1.80, 1.08))
plt.grid(False)
return results
def load_runtime_vs_params(fname, xlabel='Problem parameter',
show_legend=True, xscale='linear', yscale='linear'):
func_xvalues = lambda agg_results: agg_results['prob_params']
ex = 2
def func_title(agg_results):
repeats, _, n_methods = agg_results['job_results'].shape
alpha = agg_results['alpha']
title = '%s. %d trials. $\\alpha$ = %.2g.'%\
( agg_results['prob_label'], repeats, alpha)
return title
#plt.figure(figsize=(10,6))
results = plot.plot_runtime(ex, fname,
func_xvalues, xlabel=xlabel, func_title=func_title)
plt.title('')
plt.gca().legend(loc='best').set_visible(show_legend)
if show_legend:
plt.legend(bbox_to_anchor=(1.80, 1.05))
plt.grid(False)
if xscale is not None:
plt.xscale(xscale)
if yscale is not None:
plt.yscale(yscale)
return results
# # Gaussian mean difference. Fix dimension. Vary the mean
# #gmd_fname = 'ex2-gmd_d10_ms-me5_n1000_rs100_pmi0.000_pma0.600_a0.050_trp0.50.p'
# gmd_fname = 'ex2-gmd_d10_ms-me4_n2000_rs50_pmi0.000_pma0.060_a0.050_trp0.50.p'
# gmd_results = load_plot_vs_params(gmd_fname, xlabel='$m$', show_legend=True)
# #plt.ylim([0.03, 0.1])
# #plt.savefig(bsg_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
"""
Explanation: A notebook to process experimental results of ex2_prob_params.py. p(reject) as problem parameters are varied.
End of explanation
"""
# # Gaussian increasing variance. Variance below 1.
# gvsub1_d1_fname = 'ex2-gvsub1_d1_vs-me8_n1000_rs100_pmi0.100_pma0.700_a0.050_trp0.50.p'
# gvsub1_d1_results = load_plot_vs_params(gvsub1_d1_fname, xlabel='$v$')
# plt.title('d=1')
# # plt.ylim([0.02, 0.08])
# # plt.xlim([0, 4])
# #plt.legend(bbox_to_anchor=(1.70, 1.05))
# #plt.savefig(gsign_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
# # Gaussian increasing variance
# gvinc_d5_fname = 'ex2-gvinc_d5-me8_n1000_rs100_pmi1.000_pma2.500_a0.050_trp0.50.p'
# gvinc_d5_results = load_plot_vs_params(gvinc_d5_fname, xlabel='$v$',
# show_legend=True)
# plt.title('d=5')
# # plt.ylim([0.02, 0.08])
# # plt.xlim([0, 4])
# #plt.legend(bbox_to_anchor=(1.70, 1.05))
# #plt.savefig(gsign_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
"""
Explanation: $$p(x) = \mathcal{N}(0, I) \\
q(x) = \mathcal{N}((m,0,\ldots), I)$$
End of explanation
"""
# # Gaussian variance diffenece (GVD)
# gvd_fname = 'ex2-gvd-me4_n1000_rs100_pmi1.000_pma15.000_a0.050_trp0.50.p'
# # gvd_fname = 'ex2-gvd-me4_n1000_rs50_pmi1.000_pma15.000_a0.050_trp0.80.p'
# gvd_results = load_plot_vs_params(gvd_fname, xlabel='$d$', show_legend=True)
# plt.figure()
# load_runtime_vs_params(gvd_fname);
"""
Explanation: $$p(x)=\mathcal{N}(0, I) \\
q(x)=\mathcal{N}(0, vI)$$
End of explanation
"""
# Gauss-Bernoulli RBM
# gb_rbm_fname = 'ex2-gbrbm_dx50_dh10-me4_n1000_rs200_pmi0.000_pma0.001_a0.050_trp0.20.p'
# gb_rbm_fname = 'ex2-gbrbm_dx50_dh10-me4_n1000_rs200_pmi0.000_pma0.000_a0.050_trp0.20.p'
# gb_rbm_fname = 'ex2-gbrbm_dx50_dh10-me6_n1000_rs300_pmi0.000_pma0.001_a0.050_trp0.20.p'
gb_rbm_fname = 'ex2-gbrbm_dx50_dh10-me6_n1000_rs200_pmi0_pma0.06_a0.050_trp0.20.p'
gb_rbm_results = load_plot_vs_params(gb_rbm_fname, xlabel='Perturbation SD $\sigma_{per}$',
show_legend=False)
plt.savefig(gb_rbm_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
# plt.xlim([-0.1, -0.2])
load_runtime_vs_params(gb_rbm_fname, xlabel='Perturbation SD $\sigma_{per}$', yscale='linear', show_legend=False);
plt.savefig(gb_rbm_fname.replace('.p', '_time.pdf', 1), bbox_inches='tight')
# gbrbm_highd_fname = 'ex2-gbrbm_dx50_dh40-me6_n1000_rs200_pmi0_pma0.06_a0.050_trp0.20.p'
# gbrbm_highd_fname = 'ex2-gbrbm_dx50_dh40-me2_n1000_rs200_pmi0_pma0.06_a0.050_trp0.20.p'
gbrbm_highd_fname = 'ex2-gbrbm_dx50_dh40-me1_n1000_rs200_pmi0_pma0.06_a0.050_trp0.20.p'
gbrbm_highd_results = load_plot_vs_params(
gbrbm_highd_fname,
# xlabel='Perturbation SD $\sigma_{per}$',
xlabel='Perturbation noise',
show_legend=False)
plt.xticks([0, 0.02, 0.04, 0.06])
plt.yticks([0, 0.5, 1])
plt.ylim([0, 1.05])
plt.ylabel('P(detect difference)', fontsize=26)
plt.box(True)
plt.savefig(gbrbm_highd_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
load_runtime_vs_params(gbrbm_highd_fname, xlabel='Perturbation SD $\sigma_{per}$',
yscale='linear', show_legend=False);
plt.savefig(gbrbm_highd_fname.replace('.p', '_time.pdf', 1), bbox_inches='tight')
## p: Gaussian, q: Laplace. Vary d
# glaplace_fname = 'ex2-glaplace-me4_n1000_rs100_pmi1.000_pma15.000_a0.050_trp0.50.p'
# glaplace_fname = 'ex2-glaplace-me4_n1000_rs200_pmi1.000_pma15.000_a0.050_trp0.20.p'
# glaplace_fname = 'ex2-glaplace-me5_n1000_rs400_pmi1.000_pma15.000_a0.050_trp0.20.p'
glaplace_fname = 'ex2-glaplace-me6_n1000_rs200_pmi1_pma15_a0.050_trp0.20.p'
glaplace_results = load_plot_vs_params(glaplace_fname, xlabel='dimension $d$', show_legend=False)
plt.savefig(glaplace_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
load_runtime_vs_params(glaplace_fname, xlabel='dimension $d$', show_legend=False, yscale='linear');
plt.savefig(glaplace_fname.replace('.p', '_time.pdf', 1), bbox_inches='tight')
"""
Explanation: $$p(x)=\mathcal{N}(0, I) \\
q(x)=\mathcal{N}(0, \mathrm{diag}(2,1,1,\ldots))$$
End of explanation
"""
|
sandipchatterjee/nltk_book_notes | 01_language_processing_and_python.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import nltk
from nltk.book import *
text1
text2
"""
Explanation: Language Processing and Python
Computing with Language: Texts and Words
Ran the following in python3 interpreter:
import nltk
nltk.download()
Select book to download corpora for NLTK Book
End of explanation
"""
text1.concordance("monstrous")
text2.concordance("affection")
text3.concordance("lived")
"""
Explanation: concordance is a view that shows every occurrence of a word alongside some context
End of explanation
"""
text1.similar("monstrous")
text2.similar("monstrous")
"""
Explanation: similar shows other words that appear in a similar context to the entered word
End of explanation
"""
text2.common_contexts(["monstrous", "very"])
"""
Explanation: text 1 (Melville) uses monstrous very differently from text 2 (Austen)
Text 2: monstrous has positive connotations, sometimes functions as an intensifier like very
common_contexts shows contexts that are shared by two or more words
End of explanation
"""
text2.similar("affection")
text2.common_contexts(["affection", "regard"])
"""
Explanation: trying out other words...
End of explanation
"""
plt.figure(figsize=(18,10))
text4.dispersion_plot(["citizens", "democracy", "freedom", "duties", "America", "liberty", "constitution"])
"""
Explanation: Lexical Dispersion Plot
Determining the location of words in a text (how many words from the beginning does a given word appear?) -- using dispersion_plot
End of explanation
"""
# (not available in NLTK 3.0)
# text3.generate()
"""
Explanation: Generating some random text in the style of text3 -- using generate()
not yet supported in NLTK 3.0
End of explanation
"""
len(text3)
"""
Explanation: 1.4 Counting Vocabulary
Count the number of tokens using len
End of explanation
"""
len(set(text3))
# first 50
sorted(set(text3))[:50]
"""
Explanation: View/count vocabulary using set(text_obj)
End of explanation
"""
len(set(text3)) / len(text3)
"""
Explanation: Calculating lexical richness of the text
End of explanation
"""
text3.count("smote")
"""
Explanation: Count how often a word occurs in the text
End of explanation
"""
100 * text4.count('a') / len(text4)
text5.count('lol')
100 * text5.count('lol') / len(text5)
"""
Explanation: Compute what percentage of the text is taken up by a specific word
End of explanation
"""
def lexical_diversity(text):
return len(set(text)) / len(text)
def percentage(count, total):
return 100 * count / total
lexical_diversity(text3), lexical_diversity(text5)
percentage(text4.count('a'), len(text4))
"""
Explanation: Define some simple functions to calculate these values
End of explanation
"""
sent1
sent2
lexical_diversity(sent1)
"""
Explanation: A Closer Look at Python: Texts as Lists of Words
skipping some basic python parts of this section...
End of explanation
"""
['Monty', 'Python'] + ['and', 'the', 'Holy', 'Grail']
"""
Explanation: List Concatenation
End of explanation
"""
text4[173]
text4.index('awaken')
text5[16715:16735]
text6[1600:1625]
"""
Explanation: Indexing Lists (...and Text objects)
End of explanation
"""
saying = 'After all is said and done more is said than done'.split()
tokens = sorted(set(saying))
tokens[-2:]
"""
Explanation: Computing with Language: Simple Statistics
End of explanation
"""
fdist1 = FreqDist(text1)
print(fdist1)
fdist1.most_common(50)
fdist1['whale']
"""
Explanation: Frequency Distributions
End of explanation
"""
plt.figure(figsize=(18,10))
fdist1.plot(50, cumulative=True)
"""
Explanation: 50 most frequent words account for almost half of the book
End of explanation
"""
V = set(text1)
long_words = [w for w in V if len(w) > 15]
sorted(long_words)
"""
Explanation: Fine-grained Selection of Words
Looking at long words of a text (maybe these will be more meaningful words?)
End of explanation
"""
fdist5 = FreqDist(text5)
sorted(w for w in set(text5) if len(w) > 7 and fdist5[w] > 7)
"""
Explanation: words that are longer than 7 characters and occur more than 7 times
End of explanation
"""
list(nltk.bigrams(['more', 'is', 'said', 'than', 'done'])) # bigrams() returns a generator
"""
Explanation: collocation - sequence of words that occur together unusually often (red wine is a collocation, vs. the wine is not)
End of explanation
"""
text4.collocations()
text8.collocations()
"""
Explanation: collocations are just frequent bigrams -- we want to focus on the cases that involve rare words
collocations() returns bigrams that occur more often than expected, based on word frequency
End of explanation
"""
[len(w) for w in text1][:10]
fdist = FreqDist(len(w) for w in text1)
print(fdist)
fdist
fdist.most_common()
fdist.max()
fdist[3]
fdist.freq(3)
"""
Explanation: counting other things
word length distribution in text1
End of explanation
"""
len(text1)
len(set(text1))
len(set(word.lower() for word in text1))
"""
Explanation: words of length 3 (~50k) make up ~20% of all words in the book
Back to Python: Making Decisions and Taking Control
skipping basic python stuff
More accurate vocabulary size counting -- convert all strings to lowercase
End of explanation
"""
len(set(word.lower() for word in text1 if word.isalpha()))
"""
Explanation: Only include alphabetic words -- no punctuation
End of explanation
"""
|
samuelsinayoko/kaggle-housing-prices | research/imputation.ipynb | mit | import pandas as pd
import numpy as np
import statsmodels
from statsmodels.imputation import mice
import random
random.seed(10)
"""
Explanation: Imputation
End of explanation
"""
df = pd.read_csv("http://goo.gl/19NKXV")
df.head()
original = df.copy()
original.describe().loc['count',:]
"""
Explanation: Create data frame
End of explanation
"""
def add_nulls(df, n):
new = df.copy()
new.iloc[random.sample(range(new.shape[0]), n), :] = np.nan
return new
df.Cholesterol = add_nulls(df[['Cholesterol']], 20)
df.Smoking = add_nulls(df[['Smoking']], 20)
df.Education = add_nulls(df[['Education']], 20)
df.Age = add_nulls(df[['Age']], 5)
df.BMI = add_nulls(df[['BMI']], 5)
"""
Explanation: Add some missing values
End of explanation
"""
df.describe()
"""
Explanation: Confirm the presence of null values
End of explanation
"""
for col in ['Gender', 'Smoking', 'Education']:
df[col] = df[col].astype('category')
df.dtypes
"""
Explanation: Create categorical variables
End of explanation
"""
df = pd.get_dummies(df);
"""
Explanation: Create dummy variables
End of explanation
"""
imp = mice.MICEData(df)
"""
Explanation: Impute data
Replace null values using MICE model
MICEData class
End of explanation
"""
imp.conditional_formula['BMI']
before = imp.data.BMI.copy()
"""
Explanation: Imputation for one feature
The conditional_formula attribute is a dictionary containing the models that will be used to impute the data for each column. This can be updated to change the imputation model.
End of explanation
"""
imp.perturb_params('BMI')
imp.impute('BMI')
after = imp.data.BMI
import matplotlib.pyplot as plt
plt.clf()
fig, ax = plt.subplots(1, 1)
ax.plot(before, 'or', label='before', alpha=1, ms=8)
ax.plot(after, 'ok', label='after', alpha=0.8, mfc='w', ms=8)
plt.legend();
pd.DataFrame(dict(before=before.describe(), after=after.describe()))
before[before != after]
after[before != after]
"""
Explanation: The perturb_params method must be called before running the impute method, which performs the imputation. It updates the specified column in the data attribute.
End of explanation
"""
imp.update_all(2)
imp.plot_fit_obs('BMI');
imp.plot_fit_obs('Age');
"""
Explanation: Impute all
End of explanation
"""
original.mean()
for col in original.mean().index:
x = original.mean()[col]
y = imp.data[col].mean()
e = abs(x - y) / x
print("{:<12} mean={:>8.2f}, exact={:>8.2f}, error={:>5.2g}%".format(col, x, y, e * 100))
"""
Explanation: Validation
End of explanation
"""
|
computational-class/cjc2016 | code/08.06-regression.ipynb | mit | num_friends_good = [49,41,40,25,21,21,19,19,18,18,16,15,15,15,15,14,14,13,13,13,13,12,12,11,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,8,8,8,8,8,8,8,8,8,8,8,8,8,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
daily_minutes_good = [68.77,51.25,52.08,38.36,44.54,57.13,51.4,41.42,31.22,34.76,54.01,38.79,47.59,49.1,27.66,41.03,36.73,48.65,28.12,46.62,35.57,32.98,35,26.07,23.77,39.73,40.57,31.65,31.21,36.32,20.45,21.93,26.02,27.34,23.49,46.94,30.5,33.8,24.23,21.4,27.94,32.24,40.57,25.07,19.42,22.39,18.42,46.96,23.72,26.41,26.97,36.76,40.32,35.02,29.47,30.2,31,38.11,38.18,36.31,21.03,30.86,36.07,28.66,29.08,37.28,15.28,24.17,22.31,30.17,25.53,19.85,35.37,44.6,17.23,13.47,26.33,35.02,32.09,24.81,19.33,28.77,24.26,31.98,25.73,24.86,16.28,34.51,15.23,39.72,40.8,26.06,35.76,34.76,16.13,44.04,18.03,19.65,32.62,35.59,39.43,14.18,35.24,40.13,41.82,35.45,36.07,43.67,24.61,20.9,21.9,18.79,27.61,27.21,26.61,29.77,20.59,27.53,13.82,33.2,25,33.1,36.65,18.63,14.87,22.2,36.81,25.53,24.62,26.25,18.21,28.08,19.42,29.79,32.8,35.99,28.32,27.79,35.88,29.06,36.28,14.1,36.63,37.49,26.9,18.58,38.48,24.48,18.95,33.55,14.24,29.04,32.51,25.63,22.22,19,32.73,15.16,13.9,27.2,32.01,29.27,33,13.74,20.42,27.32,18.23,35.35,28.48,9.08,24.62,20.12,35.26,19.92,31.02,16.49,12.16,30.7,31.22,34.65,13.13,27.51,33.2,31.57,14.1,33.42,17.44,10.12,24.42,9.82,23.39,30.93,15.03,21.67,31.09,33.29,22.61,26.89,23.48,8.38,27.81,32.35,23.84]
alpha, beta = 22.9475, 0.90386
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(num_friends_good, daily_minutes_good)
plt.plot(num_friends_good, [alpha + beta*i for i in num_friends_good], 'b-')
plt.xlabel('# of friends', fontsize = 20)
plt.ylabel('minutes per day', fontsize = 20)
plt.title('simple linear regression model', fontsize = 20)
plt.show()
"""
Explanation: Simple Linear Regression
We used the correlation function to measure the strength of the linear relationship between two variables. For most applications, knowing that such a linear relationship exists isn’t enough. We’ll want to be able to understand the nature of the relationship. This is where we’ll use simple linear regression.
The Model
$$y_i = \beta x_i + \alpha + \epsilon_i$$
where
$y_i$ is the number of minutes user i spends on the site daily,
$x_i$ is the number of friends user i has
$\alpha$ is the intercept, i.e. the predicted value when $x_i = 0$.
$\epsilon_i$ is a (hopefully small) error term representing the fact that there are other factors not accounted for by this simple model.
Least Squares Fit
(最小二乘法, "the method of least squares")
$$ y_i = X_i^T w$$
The constant term can be represented by a column of 1s in X
The squared error could be written as:
$$ \sum_{i = 1}^m (y_i -X_i^T w)^2 $$
If we know $\alpha$ and $\beta$, then we can make predictions.
Since we know the actual output $y_i$ we can compute the error for each pair.
Since the negative errors cancel out with the positive ones, we use squared errors.
The least squares solution is to choose the $\alpha$ and $\beta$ that make sum_of_squared_errors as small as possible.
The choice of beta means that when the input value increases by standard_deviation(x), the prediction increases by correlation(x, y) * standard_deviation(y).
In the case when x and y are perfectly positively correlated, a one standard deviation increase in x results in a one-standard-deviation-of-y increase in the prediction.
When they’re perfectly negatively correlated, the increase in x results in a decrease in the prediction.
And when the correlation is zero, beta is zero, which means that changes in x don’t affect the prediction at all.
In this case, the slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables.
$$ y_i = \alpha + \beta x_i + \varepsilon_i $$
$$ \hat\varepsilon_i =y_i-a -b x_i $$
$$ \text{Find }\min_{a,\, b} Q(a, b), \quad \text{for } Q(a, b) = \sum_{i=1}^n\hat\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i - a - b x_i)^2 $$
By expanding to get a quadratic expression in $a$ and $b$, we can derive values of $a$ and $b$ that minimize the objective function $Q$ (these minimizing values are denoted $\hat{\alpha}$ and $\hat{\beta}$):
\begin{align}
\hat\alpha & = \bar{y} - \hat\beta\,\bar{x}, \\
\hat\beta &= \frac{ \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}) }{ \sum_{i=1}^n (x_i - \bar{x})^2 } = \frac{ \operatorname{Cov}(x, y) }{ \operatorname{Var}(x) } = r_{xy} \frac{s_y}{s_x}. \\[6pt]
\end{align}
$r_{xy}$ as the sample correlation coefficient between x and y
$s_x$ and $s_y$ as the uncorrected sample standard deviations of x and y
Kenney, J. F. and Keeping, E. S. (1962) "Linear Regression and Correlation." Ch. 15 in ''Mathematics of Statistics'', Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252–285
Substituting the above expressions for $\hat{\alpha}$ and $\hat{\beta}$ into
$$f = \hat{\alpha} + \hat{\beta} x,$$
yields
$$\frac{ f - \bar{y}}{s_y} = r_{xy} \frac{ x - \bar{x}}{s_x} .$$
Kenney, J. F. and Keeping, E. S. (1962) "Linear Regression and Correlation." Ch. 15 in ''Mathematics of Statistics'', Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252–285
End of explanation
"""
# https://github.com/computational-class/machinelearninginaction/blob/master/Ch08/regression.py
import pandas as pd
import random
dat = pd.read_csv('../data/ex0.txt', sep = '\t', names = ['x1', 'x2', 'y'])
dat['x3'] = [yi*.3 + .5*random.random() for yi in dat['y']]
dat.head()
from numpy import mat, linalg, corrcoef
def standRegres(xArr,yArr):
xMat = mat(xArr); yMat = mat(yArr).T
xTx = xMat.T*xMat
if linalg.det(xTx) == 0.0:
print("This matrix is singular, cannot do inverse")
return
ws = xTx.I * (xMat.T*yMat)
return ws
xs = [[dat.x1[i], dat.x2[i], dat.x3[i]] for i in dat.index]
y = dat.y
print(xs[:2])
ws = standRegres(xs, y)
print(ws)
xMat=mat(xs)
yMat=mat(y)
yHat = xMat*ws
xCopy=xMat.copy()
xCopy.sort(0)
yHat=xCopy*ws
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(xMat[:,1].flatten().A[0], yMat.T[:,0].flatten().A[0])
ax.plot(xCopy[:,1],yHat, 'r-')
plt.ylim(0, 5)
plt.show()
yHat = xMat*ws
corrcoef(yHat.T, yMat)
"""
Explanation: Of course, we need a better way to figure out how well we’ve fit the data than staring at the graph.
A common measure is the coefficient of determination (or R-squared), which measures the fraction of the total variation in the dependent variable that is captured by the model.
Multiple Regression using Matrix Method
Machine Learning in Action
https://github.com/computational-class/machinelearninginaction/
$$ y_i = X_i^T w$$
The constant term can be represented by a column of 1s in X
The squared error could be written as:
$$ \sum_{i = 1}^m (y_i -X_i^T w)^2 $$
We can also write this in matrix notation as $(y-Xw)^T(y-Xw)$.
If we take the derivative of this with respect to w, we get $-2X^T(y-Xw)$.
We can set this to zero and solve for w to get the following equation:
$$\hat w = (X^T X)^{-1}X^T y$$
End of explanation
"""
import statsmodels.api as sm
import statsmodels.formula.api as smf
dat = pd.read_csv('../data/ex0.txt', sep = '\t', names = ['x1', 'x2', 'y'])
dat['x3'] = [yi*.3 - .1*random.random() for yi in dat['y']]
dat.head()
results = smf.ols('y ~ x2 + x3', data=dat).fit()
results.summary()
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(results, fig = fig)
plt.show()
import numpy as np
X = np.array(num_friends_good)
X = sm.add_constant(X, prepend=False)
mod = sm.OLS(daily_minutes_good, X)
res = mod.fit()
print(res.summary())
fig = plt.figure(figsize=(6,8))
fig = sm.graphics.plot_partregress_grid(res, fig = fig)
plt.show()
"""
Explanation: Doing Statistics with statsmodels
http://www.statsmodels.org/stable/index.html
statsmodels is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration.
End of explanation
"""
|
arushanova/echidna | echidna/scripts/tutorials/getting_started.ipynb | mit | %pylab inline
pylab.rc("savefig", dpi=120) # set resolution of inline figures
"""
Explanation: First set up environment with convenience imports and inline plotting:
<!--- The following cell should be commented out in the python script
version of this notebook --->
End of explanation
"""
%cd ../../..
%%bash
pwd
"""
Explanation: The %pylab magic imports matplotlib.pyplot as plt and numpy as
np. We'll also change the working directory to echidna's base
directory, so that all the relative imports work.
<!--- The following cell should be commented out in the python script
version of this notebook --->
End of explanation
"""
print "Hello World!"
"""
Explanation: The %cd inline-magic emulates the bash cd command, allowing us to
change directory, and the %%bash magic lets you run any bash command in
the cell while remaining in the notebook!
<div class="alert alert-info">
<strong>A quick note about the ipython notebook:</strong>
<ul>
<li> To see the keyboard shortcuts at any time simply press the
`Esc` key and then the `H` key </li>
<li> The notebook has two basic modes: **Command** and **Edit**.
Command mode is enabled by the `Esc` key and Edit by the
`Enter` key. </li>
<li> The main command you will need is `Shift`+`Enter` (make sure
you are in command mode first by pressing `Esc`). This
executes the current cell and then selects the cell below. Try
pressing `Shift`+`Enter` on this cell and then again to run
the cell below. </li>
</ul>
</div>
End of explanation
"""
import echidna.core.spectra as spectra
"""
Explanation: <div class="alert alert-info">
<par>
As you can see, for cells containing valid python, the code
snippet is executed as it would be in a python terminal shell and
the output is displayed below. Try selecting the cell above and
editing it (`Enter` for edit mode) so that it prints out
`Goodbye World!` when executed.
</par>
<par>
These commands should get you through the tutorial, but there are
more in-depth tutorials
<a href="https://nbviewer.jupyter.org/github/ipython/ipython/blob/4.0.x/examples/IPython%20Kernel/Index.ipynb">
here</a> if you are interested - you can even download them and
work through them in the Jupyter viewer.
</par>
</div>
<!--- Main script starts below ------------------------------------------->
Tutorial 1: Getting started with echidna
This guide tutorial aims to get you started with some basic tasks you can
accomplish using echidna.
Spectra creation
The Spectra class is echidna's most fundamental class. It holds the core
data structure and provides much of the core functionality required.
Coincidentally, this guide will be centred around this class, how to
create it and then some manipulations of the class.
We'll begin with how to create an instance of the Spectra class. It is
part of the echidna.core.spectra module, so we will import this and make
a Spectra instance.
End of explanation
"""
import echidna
from echidna.core.config import SpectraConfig
config = SpectraConfig.load_from_file(
echidna.__echidna_base__ + "/echidna/config/spectra_example.yml")
print config.get_pars()
"""
Explanation: Now we need a config file to create the spectrum from. There is an example
config file in echidna/config. If we look at the contents of this yaml
file, we see it tells the Spectra class to create a data structure to
hold two parameters:
energy_mc, with lower limit 0, upper limit 10 and 1000 bins
radial_mc, with lower limit 0, upper limit 15000 and 1500 bins
This config should be fine for us. We can load it using the
load_from_file method of the SpectraConfig class:
End of explanation
"""
print echidna.__echidna_base__
print echidna.__echidna_home__
"""
Explanation: Note we used the __echidna_base__ member of the echidna module here.
This module has two special members for denoting the base directory (the
outermost directory of the git repository) and the home directory (the
echidna directory inside the base directory). The following lines show
the current location of these directories:
End of explanation
"""
num_decays = 1000
spectrum = spectra.Spectra("spectrum", num_decays, config)
print spectrum
"""
Explanation: Finally before creating the spectrum, we should define the number of
events it should represent:
End of explanation
"""
# Import numpy
import numpy
# Generate random energies from a Gaussian with mean (mu) and standard
# deviation (sigma)
mu = 2.5 # MeV
sigma = 0.15 # MeV
# Generate random radial position from a Uniform distribution
outer_radius = 5997 # Radius of SNO+ AV
# Detector efficiency
efficiency = 0.9 # 90%
for event in range(num_decays):
energy = numpy.random.normal(mu, sigma)
radius = numpy.random.uniform(high=outer_radius)
event_detected = (numpy.random.uniform() < efficiency)
if event_detected: # Fill spectrum with values
spectrum.fill(energy_mc=energy, radial_mc=radius)
"""
Explanation: And there you have it, we've created a Spectra object.
Filling the spectrum
Ok, so we now have a spectrum, let's fill it with some events. We'll
generate random energies from a Gaussian distribution and random positions
from a Uniform distribution. Much of echidna is built using the numpy
and SciPy packages and we will use them here to generate the random
numbers. We'll also generate a third random number to simulate some form of
rudimentary detector efficiency.
End of explanation
"""
print spectrum.sum()
"""
Explanation: This will have filled our Spectra class with the events. Make sure to
use the exact parameter names that were printed out above, as keyword
arguments. To check, we can now use the sum method. This returns the
total number of events stored in the spectrum at a given time - the
integral of the spectrum.
End of explanation
"""
print num_decays * efficiency
"""
Explanation: The value returned by sum should roughly equal:
End of explanation
"""
print spectrum._data
"""
Explanation: We can also inspect the raw data structure. This is saved in the _data
member of the Spectra class:
End of explanation
"""
import echidna.output.plot as plot
import echidna.output.plot_root as plot_root
"""
Explanation: <div class="alert alert-info">
<strong>Note:</strong> you probably won't see any entries in the
above. For large arrays, numpy only prints the first three and last
three entries. Since our energy range is in the middle, all our events
are in the `...` part at the moment. But we will see entries printed
out later when we apply some cuts.
</div>
Plotting
Another useful way to inspect the Spectra created is to plot it. Support
is available within echidna to plot using either ROOT or matplotlib
and there are some useful plotting functions available in the plot and
plot_root modules.
End of explanation
"""
fig1 = plot.plot_projection(spectrum, "energy_mc",
fig_num=1, show_plot=False)
"""
Explanation: To plot the projection of the spectrum on the energy_mc axis:
End of explanation
"""
plot_root.plot_projection(spectrum, "radial_mc", fig_num=2)
"""
Explanation: and to plot the projection on the radial_mc axis, this time using root:
End of explanation
"""
fig_3 = plot.plot_surface(spectrum, "energy_mc", "radial_mc",
fig_num=3, show_plot=False)
"""
Explanation: We can also project onto two dimensions and plot a surface:
End of explanation
"""
shrink_dict = {"energy_mc_low": mu - 5.*sigma,
"energy_mc_high": mu + 5.*sigma,
"radial_mc_low": 0.0,
"radial_mc_high": 3500}
spectrum.shrink(**shrink_dict)
"""
Explanation: Convolution and cuts
The ability to smear the event, along a parameter axis, is built into
echidna in the smear module. There are three classes in the module that
allow us to create a smearer for different scenarios. There are two
smearers for energy-based parameters, EnergySmearRes and
EnergySmearLY, which allow smearing by energy resolution (e.g.
$\frac{5\%}{\sqrt{(E[MeV])}}$ and light yield (e.g. 200 NHit/Mev)
respectively. Then additionally the RadialSmear class handles smearing
along the axis of any radial based parameter.
We will go through an example of how to smear our spectrum by a fixed
energy resolution of 5%. There are two main smearing algorithms: "weighted
smear" and "random smear". The "random smear" algorithm takes each event
in each bin and randomly assigns it a new energy from the Gaussian
distribution for that bin - it is fast but not very accurate for low
statistics. The "weighted smear" algorithm is slower but much more
accurate, as it re-weights each bin by taking into account all other nearby
bins within a pre-defined range. We will use the "weighted smear" method
in this example.
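echidna's EnergySmearRes implements this internally; purely to illustrate the idea, here is a toy 1-D sketch (the bin centres, the $\sigma = 0.05\sqrt{E}$ resolution model, and the normalisation are assumptions, not echidna's code):

```python
import numpy as np

def weighted_smear_1d(counts, centres, resolution=0.05):
    """Spread each bin's contents over all bins with Gaussian weights.

    For a bin at energy E, sigma = resolution * sqrt(E) models a
    5%/sqrt(E) resolution; weights are normalised so the total number
    of counts is conserved.
    """
    smeared = np.zeros_like(counts, dtype=float)
    for n, e in zip(counts, centres):
        sigma = resolution * np.sqrt(e)
        weights = np.exp(-0.5 * ((centres - e) / sigma) ** 2)
        smeared += n * weights / weights.sum()
    return smeared

centres = np.linspace(1.0, 4.0, 30)   # assumed MeV bin centres
counts = np.zeros(30)
counts[15] = 100.0                    # a single sharp peak
smeared = weighted_smear_1d(counts, centres)
```

The peak stays centred on the same bin but is spread over its neighbours, while the integral is unchanged.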
First to speed the smearing process, we will apply some loose cuts.
Although, fewer bins means faster smearing, you should be wary of cutting
the spectrum too tightly before smearing as you may end up cutting bins
that would have influenced the smearing. Cuts can be applied using the
shrink method. (Confusingly there is also a cut method which is almost
identical to the shrink method, but updates the number of events the
spectrum represents, after the cut is applied. Unless you are sure this is
what you want to do, it is probably better to use the shrink method.) To
shrink over multiple parameters, it is best to construct a dictionary of
_low and _high values for each parameter and then pass this to the
shrink method.
End of explanation
"""
print spectrum.sum()
"""
Explanation: Using the sum method, we can check to see how many events were cut.
End of explanation
"""
import echidna.core.smear as smear
"""
Explanation: Import the smear class:
End of explanation
"""
smearer = smear.EnergySmearRes()
"""
Explanation: and create the smearer object.
End of explanation
"""
smearer.set_num_sigma(3)
smearer.set_resolution(0.05)
"""
Explanation: By default the "weighted smear" method considers all bins within a $\pm
5\sigma$ range. For the sake of speed, we will reduce this to three here.
Also set the energy resolution - 0.05 for 5%.
End of explanation
"""
smeared_spectrum = smearer.weighted_smear(spectrum)
"""
Explanation: To smear our original spectrum and create the new Spectra object
smeared_spectrum:
End of explanation
"""
def overlay_spectra(original, smeared,
dimension="energy_mc", fig_num=1):
""" Overlay original and smeared spectra.
Args:
original (echidna.core.spectra.Spectra): Original spectrum.
smeared (echidna.core.spectra.Spectra): Smeared spectrum.
dimension (string, optional): Dimension to project onto.
Default is "energy_mc".
fig_num (int, optional): Figure number, if producing multiple
figures. Default is 1.
Returns:
matplotlib.figure.Figure: Figure showing overlaid spectra.
"""
par = original.get_config().get_par(dimension)
# Define array of bin boundaries
bins = par.get_bin_boundaries()
# Define array of bin centres
x = par.get_bin_centres()
# Save bin width
width = par.get_width()
# Create figure and axes
fig, ax = plt.subplots(num=fig_num)
# Overlay two spectra using projection as weight
ax.hist(x, bins, weights=original.project(dimension),
histtype="stepfilled", color="RoyalBlue",
alpha=0.5, label=original._name)
ax.hist(x, bins, weights=smeared.project(dimension),
histtype="stepfilled", color="Red",
alpha=0.5, label=smeared._name)
# Add label/style
plt.legend(loc="upper right")
plt.ylim(ymin=0.0)
plt.xlabel(dimension + " [" + par.get_unit() + "]")
plt.ylabel("Events per " + str(width) +
" " + par.get_unit() + " bin")
return fig
fig_4 = overlay_spectra(spectrum, smeared_spectrum, fig_num=4)
"""
Explanation: this should hopefully only take a couple of seconds.
The following code shows how to make a simple script, using matplotlib, to
overlay the original and smeared spectra.
End of explanation
"""
# To get nice shape for rebinning
roi = (mu - 0.5*sigma, mu + 1.45*sigma)
smeared_spectrum.shrink_to_roi(roi[0], roi[1], "energy_mc")
print smeared_spectrum.get_roi("energy_mc")
"""
Explanation: Other spectra manipulations
We now have a nice smeared version of our original spectrum. To prepare
the spectrum for a final analysis there are a few final manipulations we
may wish to do.
Region of Interest (ROI)
There is a special version of the shrink method called shrink_to_roi
that can be used for ROI cuts. It saves some useful information about the
ROI in the Spectra class instance, including the efficiency i.e.
integral of spectrum after cut divided by integral of spectrum before cut.
End of explanation
"""
dimension = smeared_spectrum.get_config().get_pars().index("energy_mc")
old_shape = smeared_spectrum._data.shape
reduction_factor = 5 # how many bins to combine into a single bin
new_shape = tuple([j // reduction_factor if i == dimension else j
for i, j in enumerate(old_shape)])
print old_shape
print new_shape
smeared_spectrum.rebin(new_shape)
"""
Explanation: Rebin
Our spectrum is still quite finely binned, perhaps we want to bin it in 50
keV bins instead of 10 keV bins. The rebin method can be used to acheive
this.
The rebin method requires us to specify the new shape (tuple) of the
data. With just two dimensions this is trivial, but with more dimensions,
it may be better to use a construct such as:
End of explanation
"""
smeared_spectrum.scale(104.25)
print smeared_spectrum.sum()
"""
Explanation: Scaling
Finally, we "simulated" 1000 events, but we most likely want to scale this
down to represent the number of events expected in our analysis. The
Spectra class has a scale method to accomplish this. Remember that the
scale method should always be supplied with the number of events the
full spectrum (i.e. before any cuts using shrink or shrink_to_roi)
should represent. Lets assume that our spectrum should actually represent
104.25 events:
End of explanation
"""
print smeared_spectrum._data
fig_5 = plot.plot_projection(smeared_spectrum, "energy_mc",
fig_num=5, show_plot=False)
plt.show()
"""
Explanation: Putting it all together
After creating, filling, convolving and various other manipulations, what
does our final spectrum look like?
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/kubeflow_pipelines/walkthrough/solutions/kfp_walkthrough.ipynb | apache-2.0 | import json
import os
import pickle
import tempfile
import time
import uuid
from typing import NamedTuple
import numpy as np
import pandas as pd
from google.cloud import bigquery
from googleapiclient import discovery, errors
from jinja2 import Template
from kfp.components import func_to_container_op
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
"""
Explanation: Using custom containers with AI Platform Training
Learning Objectives:
1. Learn how to create a train and a validation split with Big Query
1. Learn how to wrap a machine learning model into a Docker container and train it on AI Platform
1. Learn how to use the hyperparameter tuning engine on GCP to find the best hyperparameters
1. Learn how to deploy a trained machine learning model on GCP as a REST API and query it
In this lab, you develop, package as a docker image, and run on AI Platform Training a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The dataset used in the lab is based on Covertype Data Set from UCI Machine Learning Repository.
The training code uses scikit-learn for data pre-processing and modeling. The code has been instrumented using the hypertune package so it can be used with AI Platform hyperparameter tuning.
End of explanation
"""
!(gsutil ls | grep kubeflow)
REGION = "us-central1"
ARTIFACT_STORE = "gs://qwiklabs-gcp-00-97cd915af2d6-kubeflowpipelines-default"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
os.environ["PROJECT_ID"] = PROJECT_ID
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = "{}/{}/{}".format(DATA_ROOT, "training", "dataset.csv")
VALIDATION_FILE_PATH = "{}/{}/{}".format(DATA_ROOT, "validation", "dataset.csv")
"""
Explanation: Configure environment settings
Set location paths, connections strings, and other environment settings. Make sure to update REGION, and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name ends with the -kubeflowpipelines-default suffix.
End of explanation
"""
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
"""
Explanation: Importing the dataset into BigQuery
End of explanation
"""
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
"""
Explanation: Explore the Covertype dataset
End of explanation
"""
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
"""
Explanation: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to GCS storage
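The queries below keep the splits deterministic by hashing each row with FARM_FINGERPRINT and bucketing by the modulus. The same idea in plain Python, with md5 as a stand-in hash (illustrative only — not the BigQuery function, and the record fields are made up):

```python
import hashlib
import json

def split_bucket(row, num_buckets=10):
    """Deterministically map a record to one of num_buckets buckets."""
    payload = json.dumps(row, sort_keys=True).encode("utf-8")
    return int(hashlib.md5(payload).hexdigest(), 16) % num_buckets

row = {"Elevation": 2596, "Slope": 3, "Cover_Type": 5}  # hypothetical record
bucket = split_bucket(row)
# Mirroring the queries: buckets 1-4 -> training, bucket 8 -> validation.
```

Because the bucket depends only on the row's contents, re-running the split always assigns each record to the same side.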
Create a training split
End of explanation
"""
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
"""
Explanation: Create a validation split
End of explanation
"""
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
"""
Explanation: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
End of explanation
"""
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
"""
Explanation: Convert all numeric features to float64
To avoid warning messages from StandardScaler all numeric features are converted to float64.
End of explanation
"""
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
"""
Explanation: Run the pipeline locally.
End of explanation
"""
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
"""
Explanation: Calculate the trained model's accuracy.
End of explanation
"""
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
"""
Explanation: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
End of explanation
"""
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
"""
Explanation: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to AI Platform hyperparameter tuning service.
End of explanation
"""
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
"""
Explanation: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
End of explanation
"""
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
"""
Explanation: Build the docker image.
You use Cloud Build to build the image and push it your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
End of explanation
"""
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
- parameterName: max_iter
type: DISCRETE
discreteValues: [
200,
500
]
- parameterName: alpha
type: DOUBLE
minValue: 0.00001
maxValue: 0.001
scaleType: UNIT_LINEAR_SCALE
"""
Explanation: Submit an AI Platform hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier:
- Max iterations
- Alpha
The file below configures AI Platform hyperparameter tuning to run up to 4 trials, with up to 4 running in parallel, and to choose from two discrete values of max_iter and the linear range between 0.00001 and 0.001 for alpha.
End of explanation
"""
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
"""
Explanation: Start the hyperparameter tuning job.
Use the gcloud command to start the hyperparameter tuning job.
End of explanation
"""
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
"""
Explanation: Monitor the job.
You can monitor the job using GCP console or from within the notebook using gcloud commands.
End of explanation
"""
ml = discovery.build("ml", "v1")
job_id = f"projects/{PROJECT_ID}/jobs/{JOB_NAME}"
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
"""
Explanation: Retrieve HP-tuning results.
After the job completes, you can review the results using the GCP Console or programmatically by calling the AI Platform Training REST endpoint.
End of explanation
"""
response["trainingOutput"]["trials"][0]
"""
Explanation: The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
End of explanation
"""
alpha = response["trainingOutput"]["trials"][0]["hyperparameters"]["alpha"]
max_iter = response["trainingOutput"]["trials"][0]["hyperparameters"][
"max_iter"
]
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
"""
Explanation: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
End of explanation
"""
!gsutil ls $JOB_DIR
"""
Explanation: Examine the training output
The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on GCS.
End of explanation
"""
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels
"""
Explanation: Deploy the model to AI Platform Prediction
Create a model resource
End of explanation
"""
model_version = 'v01'
!gcloud ai-platform versions create {model_version} \
--model={model_name} \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7 \
--region=global
"""
Explanation: Create a model version
End of explanation
"""
input_file = "serving_instances.json"
with open(input_file, "w") as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write("\n")
!cat $input_file
"""
Explanation: Serve predictions
Prepare the input file with JSON-formatted instances.
End of explanation
"""
!gcloud ai-platform predict \
--model $model_name \
--version $model_version \
--json-instances $input_file \
--region global
"""
Explanation: Invoke the model
End of explanation
"""
|
saashimi/code_guild | wk1/notebooks/.ipynb_checkpoints/wk1.0-checkpoint.ipynb | mit | count = 1
for elem in range(1, 3 + 1):
count *= elem
print(count)
"""
Explanation: Wk1.0
Warm-up: I got 32767 problems and overflow is one of them.
1. Swap the values of two variables, a and b without using a temporary variable.
2. Suppose I had six different sodas. In how many different combinations could I drink the sodas? Write a program that calculates the number of unique combinations for 6 objects. Assume that I finish a whole sode before moving onto another one.
End of explanation
"""
from math import factorial as f
f(3)
"""
Explanation: 3. Extend your program to n objects. How many different combinations do I have for 5 objects? How about 15? What is the max number of objects I could calculate for if I was storing the result in a 32 bit integer? What happens if the combinations exceed 32 bits?
End of explanation
"""
def n_max():
inpt = eval(input("Please enter some values: "))
maximum = max_val(inpt)
print("The largest value is", maximum)
def max_val(ints):
    """Input: collection of ints.
    Returns: the largest int in the collection."""
    largest = ints[0]  # avoid shadowing the built-in max
    for x in ints:
        if x > largest:
            largest = x
    return largest
assert max_val([1, 2, 3]) == 3
assert max_val([1, 1, 1]) == 1
assert max_val([1, 2, 2]) == 2
n_max()
inpt = eval(input("Please enter three values: "))
list(inpt)
"""
Explanation: 4. What will the following code yield? Was it what you expected? What's going on here?
.1 + .1 + .1 == .3
5. Try typing in the command below and read this page
format(.1, '.100g')
Data structure of the day: tuples
Switching variables: a second look
How do we make a single tuple?
slicing, indexing
mutability
tuple packing and unpacking
using tuples in loops
using tuples to unpack enumerate(lst)
tuples as return values
comparing tuples
(0, 1, 2000000) < (0, 3, 4)
Design pattern: DSU
Decorate
Sort
Undecorate
Ex.
```
txt = 'but soft what light in yonder window breaks'
words = txt.split()
t = list()
for word in words:
t.append((len(word), word))
t.sort(reverse=True)
res = list()
for length, word in t:
res.append(word)
print(res)
```
Why would words.sort() not work?
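words.sort() alone would order the words alphabetically, not by length — the decoration supplies the sort key. The same ordering idea can be had with the key argument (a sketch):

```python
txt = 'but soft what light in yonder window breaks'
words = txt.split()

# Sort by length, longest first, via a key function.
# (Ties keep their original order here, whereas the DSU version breaks
# ties reverse-alphabetically because the word itself is in the tuple.)
res = sorted(words, key=len, reverse=True)
print(res)
```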
We can use tuples as a way to store related data
addr = 'monty@python.org'
uname, domain = addr.split('@')
Advanced: tuples as argument parameters
t = (a, b, c)
func(*t)
Tuples: exercises
Exercise 1
Revise a previous program as follows: Read and parse the "From" lines and pull out the addresses from the line. Count the number of messages from each person using a dictionary.
After all the data has been read, print the person with the most messages by creating a list of (count, email) tuples from the dictionary, sorting the list in reverse order, and printing out the person who has the most messages.
```
Sample Line:
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
Enter a file name: mbox-short.txt
cwen@iupui.edu 5
Enter a file name: mbox.txt
zqian@umich.edu 195
```
Exercise 2
This program counts the distribution of the hour of the day for each of the messages. You can pull the hour from the "From" line by finding the time string and then splitting that string into parts using the colon character. Once you have accumulated the counts for each hour, print out the counts, one per line, sorted by hour as shown below.
Sample Execution:
python timeofday.py
Enter a file name: mbox-short.txt
04 3
06 1
07 1
09 2
10 3
11 6
14 1
15 2
16 4
17 2
18 1
19 1
Exercise 3
Write a program that reads a file and prints the letters in decreasing order of frequency. Your program should convert all the input to lower case and only count the letters a-z. Your program should not count spaces, digits, punctuation or anything other than the letters a-z. Find text samples from several different languages and see how letter frequency varies between languages. Compare your results with the tables at wikipedia.org/wiki/Letter_frequencies.
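One possible sketch for Exercise 3 using collections.Counter (a plain dictionary works just as well); the file handling is left as a comment since no sample file is given:

```python
from collections import Counter

def letter_frequencies(text):
    """Count a-z letters in text, ignoring case and non-letters."""
    counts = Counter(c for c in text.lower() if 'a' <= c <= 'z')
    # most_common() already sorts in decreasing order of frequency
    return counts.most_common()

# With a real file you would do something like:
# with open('sample.txt') as fh:
#     for letter, count in letter_frequencies(fh.read()):
#         print(letter, count)
print(letter_frequencies('Hello, World!'))  # [('l', 3), ('o', 2), ...]
```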
Afternoon warm-up
Write a function that takes three numbers $x_1, x_2, x_3$ from a user and returns the max value. Don't use the built in max function. Would your function work on more than three values?
End of explanation
"""
assert compress('AAAADDBBBBBCCEAA') == 'A4D2B5C2E1A2'
# %load ../scripts/compress/compressor.py
def groupby_char(lst):
"""Returns a list of strings containing identical characters.
Takes a list of characters produced by running split on a string.
Groups runs (in-order sequences) of identical characters into string elements in the list.
Parameters
----------
Input:
lst: list
A list of single character strings.
Output:
grouped: list
A list of strings containing grouped characters."""
new_lst = []
count = 1
for i in range(len(lst) - 1): # we range to the second to last index since we're checking if lst[i] == lst[i + 1].
if lst[i] == lst[i + 1]:
count += 1
else:
new_lst.append([lst[i],count]) # Create a lst of lists. Each list contains a character and the count of adjacent identical characters.
count = 1
new_lst.append([lst[-1], count]) # Append the last character (the for loop stops at the second-to-last index, so it never reaches it).
grouped = [char*count for [char, count] in new_lst]
return grouped
def compress_group(string):
"""Returns a compressed two character string containing a character and a number.
Takes in a string of identical characters and returns the compressed string
consisting of the character and the length of the original string.
Example
-------
"AAA"-->"A3"
Parameters:
-----------
Input:
string: str
A string of identical characters.
Output:
------
compressed_str: str
A compressed string of length two containing a character and a number.
"""
return str(string[0]) + str(len(string))
def compress(string):
"""Returns a compressed representation of a string.
Compresses the string by mapping each run of identical characters to a
single character and a count.
Ex.
--
compress('AAABBCDDD')--> 'A3B2C1D3'.
Only compresses string if the compression is shorter than the original string.
Ex.
--
compress('A')--> 'A' # not 'A1'.
Parameters
----------
Input:
string: str
The string to compress
Output:
compressed: str
The compressed representation of the string.
"""
try:
split_str = [char for char in string] # Create list of single characters.
grouped = groupby_char(split_str) # Group characters if characters are identical.
compressed = ''.join( # Compress each element of the grouped list and join to a string.
[compress_group(elem) for elem in grouped])
if len(compressed) < len(string): # Only return compressed if compressed is actually shorter.
return compressed
else:
return string
except IndexError: # If our input string is empty, return an empty string.
return ""
except TypeError: # If we get something that's not compressible (including NoneType) return None.
return None
# %load ../scripts/compress/compress_tests.py
# This will fail to run because in wrong directory
from compress.compressor import *
def compress_test():
assert compress('AAABBCDDD') == 'A3B2C1D3'
assert compress('A') == 'A'
assert compress('') == ''
assert compress('AABBCC') == 'AABBCC' # compressing doesn't shorten string so just return string.
assert compress(None) == None
def groupby_char_test():
assert groupby_char(["A", "A", "A", "B", "B"]) == ["AAA", "BB"]
def compress_group_test():
assert compress_group("AAA") == "A3"
assert compress_group("A") == "A1"
"""
Explanation: Strategy 1: Compare each to all (brute force)
Strategy 2: Decision Tree
Strategy 3: Sequential processing
Strategy 4: Use python
The development process
A Problem Solving Algorithm
See Polya's How to Solve it
1. Understand the problem
2. Brainstorm on paper
3. Plan out program
4. Refine design
5. Create function
6. Create function docstring
7. Create function tests
8. Check that tests fail
9. If function is trivial, then solve it (i.e. get function tests to pass). Else, create sub-function (aka divide and conquer) and repeat steps 5-8.
Example: Compress
End of explanation
"""
|
DJCordhose/ai | notebooks/nlp/1-embeddings.ipynb | mit | # Based on
# https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/6.2-understanding-recurrent-neural-networks.ipynb
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
# https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification
max_features = 1000 # number of words to consider as features
maxlen = 20 # cut texts after this number of words (among top max_features most common words)
# each review is encoded as a sequence of word indexes
# indexed by overall frequency in the dataset
# output is 0 (negative) or 1 (positive)
imdb = tf.keras.datasets.imdb.load_data(num_words=max_features)
(raw_input_train, y_train), (raw_input_test, y_test) = imdb
# tf.keras.datasets.imdb.load_data?
y_train.min()
y_train.max()
# 25000 texts
len(raw_input_train)
# first text has 218 words
len(raw_input_train[0])
raw_input_train[0]
# tf.keras.preprocessing.sequence.pad_sequences?
# https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences
input_train = tf.keras.preprocessing.sequence.pad_sequences(raw_input_train, maxlen=maxlen)
input_test = tf.keras.preprocessing.sequence.pad_sequences(raw_input_test, maxlen=maxlen)
input_train.shape, input_test.shape, y_train.shape, y_test.shape
# left padded with zeros
# As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word.
input_train[0]
"""
Explanation: Understanding Embeddings on Texts
End of explanation
"""
# tf.keras.layers.Embedding?
embedding_dim = 3
random_model = tf.keras.Sequential()
# Parameters: max_features * embedding_dim
random_model.add(tf.keras.layers.Embedding(name='embedding',input_dim=max_features, output_dim=embedding_dim, input_length=maxlen))
random_model.summary()
random_model.predict(input_train[:1])
"""
Explanation: We can use a randomly initialized embedding without any training
End of explanation
"""
embedding_dim = 3
model = tf.keras.Sequential()
# Parameters: max_features * embedding_dim
model.add(tf.keras.layers.Embedding(name='embedding', input_dim=max_features, output_dim=embedding_dim, input_length=maxlen))
# Output: maxlen * embedding_dim (20 * 3 = 60) values after flattening
model.add(tf.keras.layers.Flatten(name='flatten'))
# binary classifier
model.add(tf.keras.layers.Dense(name='fc', units=32, activation='relu'))
model.add(tf.keras.layers.Dense(name='classifier', units=1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
batch_size = 128
%time history = model.fit(input_train, y_train, epochs=10, batch_size=batch_size, validation_split=0.2)
train_loss, train_accuracy = model.evaluate(input_train, y_train, batch_size=batch_size)
train_accuracy
test_loss, test_accuracy = model.evaluate(input_test, y_test, batch_size=batch_size)
test_accuracy
# prediction
model.predict(input_test[0:5])
# ground truth
y_test[0:5]
"""
Explanation: Training the embedding together with the whole model is more reasonable
Alternative: use a pre-trained model, probably trained using skip-gram
End of explanation
"""
embedding_layer = model.get_layer('embedding')
model_stub = tf.keras.Model(inputs=model.input, outputs=embedding_layer.output)
embedding_prediction = model_stub.predict(input_test[0:5])
# 5 sample reviews, 20 words per review (maxlen), 3 dimensions per word
embedding_prediction.shape
# 3 embedding dimensions of first word of first sample review
embedding_prediction[0][0]
"""
Explanation: How does the output of the trained embedding look like?
End of explanation
"""
input_train[0]
model_stub.predict(input_train[:1])
random_model.predict(input_train[:1])
"""
Explanation: Comparing trained to untrained model
End of explanation
"""
|
JannesKlaas/MLiFC | Week 1/Ch. 3 - Training process and the learning rate.ipynb | mit | # Numpy handles matrix multiplication, see http://www.numpy.org/
import numpy as np
# PyPlot is a matlab like plotting framework, see https://matplotlib.org/api/pyplot_api.html
import matplotlib.pyplot as plt
# This line makes it easier to plot PyPlot graphs in Jupyter Notebooks
%matplotlib inline
import sklearn
import sklearn.datasets
import matplotlib
# Slightly larger plot rendering
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
"""
Explanation: Chapter 3 - Training process and learning rate
In this chapter we will clean up our code and create a logistic classifier class that works much like many modern deep learning libraries do. We will also have a closer look at our first hyper parameter, the learning rate alpha.
End of explanation
"""
class LogisticRegressor:
# Here we are just setting up some placeholder variables
# This is the dimensionality of our input, that is how many features our input has
input_dim = 0
# This is the learning rate alpha
learning_rate = 0.1
# We will store the parameters of our model in a dictionary
model = {}
# The values calculated in the forward propagation will be stored in this dictionary
cache = {}
# The gradients that we calculate during back propagation will be stored in this dictionary
grads = {}
# Init function of the class
def __init__(self,input_dim, learning_rate):
'''
Assigns the given hyper parameters and initializes the initial parameters.
'''
# Assign input dimensionality
self.input_dim = input_dim
# Assign learning rate
self.learning_rate = learning_rate
# Trigger parameter setup
self.init_parameters()
# Parameter setup function
def init_parameters(self):
'''
Initializes weights with random number between -1 and 1
Initializes bias with 0
Assigns weights and parameters to model
'''
# Randomly init weights
W1 = 2*np.random.random((self.input_dim,1)) - 1
# Set bias to 0
b1 = 0
# Assign to model
self.model = {'W1':W1,'b1':b1}
return
# Sigmoid function
def sigmoid(self,x):
'''
Calculates the sigmoid activation of a given input x
See: https://en.wikipedia.org/wiki/Sigmoid_function
'''
return 1/(1+np.exp(-x))
#Log Loss function
def log_loss(self,y,y_hat):
'''
Calculates the logistic loss between a prediction y_hat and the labels y
See: http://wiki.fast.ai/index.php/Log_Loss
We need to clip values that get too close to zero to avoid zeroing out.
Zeroing out is when a number gets so small that the computer replaces it with 0.
Therefore, we clip numbers to a minimum value.
'''
minval = 0.000000000001
m = y.shape[0]
l = -1/m * np.sum(y * np.log(y_hat.clip(min=minval)) + (1-y) * np.log((1-y_hat).clip(min=minval)))
return l
# Derivative of log loss function
def log_loss_derivative(self,y,y_hat):
'''
Calculates the gradient (derivative) of the log loss between point y and y_hat
See: https://stats.stackexchange.com/questions/219241/gradient-for-logistic-loss-function
'''
return (y_hat-y)
# Forward prop (forward pass) function
def forward_propagation(self,A0):
'''
Forward propagates through the model, stores results in cache.
See: https://stats.stackexchange.com/questions/147954/neural-network-forward-propagation
A0 is the activation at layer zero, it is the same as X
'''
# Load parameters from model
W1, b1 = self.model['W1'],self.model['b1']
# Do the linear step
z1 = A0.dot(W1) + b1
#Pass the linear step through the activation function
A1 = self.sigmoid(z1)
# Store results in cache
self.cache = {'A0':A0,'z1':z1,'A1':A1}
return
# Backprop function
def backward_propagation(self,y):
'''
Backward propagates through the model to calculate gradients.
Stores gradients in grads dictionary.
See: https://en.wikipedia.org/wiki/Backpropagation
'''
# Load results from forward pass
A0, z1, A1 = self.cache['A0'],self.cache['z1'], self.cache['A1']
# Load model parameters
W1, b1 = self.model['W1'], self.model['b1']
# Read m, the number of examples
m = A0.shape[0]
# Calculate the gradient of the loss function
dz1 = self.log_loss_derivative(y=y,y_hat=A1)
# Calculate the derivative of the loss with respect to the weights W1
dW1 = 1/m*(A0.T).dot(dz1)
# Calculate the derivative of the loss with respect to the bias b1
db1 = 1/m*np.sum(dz1, axis=0, keepdims=True)
#Make sure the weight derivative has the same shape as the weights
assert(dW1.shape == W1.shape)
# Store gradients in gradient dictionary
self.grads = {'dW1':dW1,'db1':db1}
return
# Parameter update
def update_parameters(self):
'''
Updates parameters according to the gradient descent algorithm
See: https://en.wikipedia.org/wiki/Gradient_descent
'''
# Load model parameters
W1, b1 = self.model['W1'],self.model['b1']
# Load gradients
dW1, db1 = self.grads['dW1'], self.grads['db1']
# Update weights
W1 -= self.learning_rate * dW1
# Update bias
b1 -= self.learning_rate * db1
# Store new parameters in model dictionary
self.model = {'W1':W1,'b1':b1}
return
# Prediction function
def predict(self,X):
'''
Predicts y_hat as 1 or 0 for a given input X
'''
# Do forward pass
self.forward_propagation(X)
# Get output of regressor
regressor_output = self.cache['A1']
# Turn values to either 1 or 0
regressor_output[regressor_output >= 0.5] = 1
regressor_output[regressor_output < 0.5] = 0
# Return output
return regressor_output
# Train function
def train(self,X,y, epochs):
'''
Trains the regressor on a given training set X, y for the specified number of epochs.
'''
# Set up array to store losses
losses = []
# Loop through epochs
for i in range(epochs):
# Forward pass
self.forward_propagation(X)
# Calculate loss
loss = self.log_loss(y,self.cache['A1'])
# Store loss
losses.append(loss)
# Print loss every 10th iteration
if (i%10 == 0):
print('Epoch:',i,' Loss:', loss)
# Do the backward propagation
self.backward_propagation(y)
# Update parameters
self.update_parameters()
# Return losses for analysis
return losses
"""
Explanation: The regressor class
Let's jump straight into the code. In this chapter, we will create a python class for our logistic regressor. If you are unfamiliar with classes in python, check out Jeff Knup's blogpost for a nice overview. Read the code below carefully, we will deconstruct the different functions afterwards
End of explanation
"""
#Seed the random function to ensure that we always get the same result
np.random.seed(1)
#Variable definition
#define X
X = np.array([[0,1,0],
[1,0,0],
[1,1,1],
[0,1,1]])
#define y
y = np.array([[0,1,1,0]]).T
# Define instance of class
regressor = LogisticRegressor(input_dim=3,learning_rate=1)
# Train classifier
losses = regressor.train(X,y,epochs=100)
# Plot the losses for analyis
plt.plot(losses)
"""
Explanation: Using the regressor
To use the regressor, we define an instance of the class and can then train it. Here we will use the same data as in chapter 2.
End of explanation
"""
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_blobs(n_samples=200,centers=2)
y = y.reshape(200,1)
plt.scatter(X[:,0], X[:,1], s=40, c=y.flatten(), cmap=plt.cm.Spectral)
"""
Explanation: Revisiting the training process
As you can see, our classifier still works! We have improved modularity and created an easier to debug classifier. Let's have a look at its overall structure. As you can see, we make use of three dictionaries:
- model: Stores the model parameters, weights and bias
- cache: Stores all intermediate results from the forward pass. These are needed for the backward propagation
- grads: Stores the gradients from the backward propagation
These dictionaries store all information required to run the training process:
We run this process many times over. One full cycle done with the full training set is called an epoch. How often we have to go through this process can vary, depending on the complexity of the problem we want to solve and the learning rate $\alpha$. You see alpha being used in the code above already so let's give it a closer look.
What is the learning rate anyway?
The learning rate is a lot like the throttle setting in our learning algorithm. It is the multiplier to the update the parameter experiences.
$$a := a - \alpha * \frac{dL(w)}{da}$$
A high learning rate means that the parameters get updated by larger amounts. This can lead to faster training, but it can also mean that we might jump over a minimum.
As you can see with a bigger learning rate we are approaching the minimum much faster. But as we get close, our steps are too big and we are skipping over it. This can even lead to our loss going up over time.
Choosing the right learning rate is therefore crucial. Too small and our learning algorithm might be too slow. Too high and it might fail to converge at a minimum. So in the next step, we will have a look at how to tune this hyper parameter.
A slightly harder problem
So far we have worked with a really simple dataset in which one input feature is perfectly correlated with the labels $y$. Now we will look at a slightly harder problem.
We generate a dataset of two point clouds and we want to train our regressor on separating them. The data generation is done with sklearn's dataset generator.
End of explanation
"""
# Define instance of class
# Learning rate = 10, deliberately set far too high
regressor = LogisticRegressor(input_dim=2,learning_rate=10)
# Train classifier
losses = regressor.train(X,y,epochs=100)
"""
Explanation: Looking at the data we see that it is possible to separate the two clouds quite well, but there is a lot of noise so we can not hope to achieve zero loss. But we can get close to it. Let's set up a regressor. Here we will use a learning rate of 10, which is quite high.
End of explanation
"""
plt.plot(losses)
"""
Explanation: You will probably even get an error message mentioning an overflow and it doesn't look like the regressor converged smoothly. This was a bumpy ride.
End of explanation
"""
# Define instance of class
# Learning rate = 0.05
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
"""
Explanation: As you can see, the loss first went up quite significantly before then coming down. At multiple instances it moves up again. This is a clear sign that the learning rate is too large, let's try a lower one
End of explanation
"""
# Define instance of class
# Learning rate = 0.0005
regressor = LogisticRegressor(input_dim=2,learning_rate=0.0005)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
"""
Explanation: This looks a bit smoother already, and you can see that the error is nearly ten times lower in the end. Let's try an even lower learning rate to see where we can take this.
End of explanation
"""
# Define instance of class
# Tweak learning rate here
regressor = LogisticRegressor(input_dim=2,learning_rate=1)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
"""
Explanation: This is a very smooth gradient descent but also a very slow one. The error is more than twice as high as before in the end. If we would let this run for a few more epochs we probably could achieve a very good model but at a very large computing expense.
How to find a good value for the learning rate
A good learning rate converges fast and leads to low loss. But there is no silver bullet perfect learning rate that always works. It usually depends on your project. It is as much art as it is science to tune the learning rate and only repeated experimentation can lead you to a good result. Experience shows however, that a good learning rate is usually around 0.1, even though it can well be different for other projects.
To practice tuning the learning rate, play around with the example below and see whether you can find an appropriate one that converges fast and at a low loss.
End of explanation
"""
# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates the boundary plot.
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole gid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y.flatten(), cmap=plt.cm.Spectral)
"""
Explanation: Visualizing our regressor
In the last part of this chapter, I would like to give a closer look at what our regressor actually does. To do so, we will plot the decision boundary, that is the boundary the regressor assigns between the two classes.
End of explanation
"""
# Define instance of class
# Learning rate = 0.05
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
"""
Explanation: To plot the boundary, we train a new regressor first.
End of explanation
"""
# Plot the decision boundary
plot_decision_boundary(lambda x: regressor.predict(x))
plt.title("Decision Boundary for logistic regressor")
"""
Explanation: And then we plot the boundary. Again, do not worry if you do not understand exactly what is going on here, as it is not part of the class.
End of explanation
"""
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.1)
y = y.reshape(200,1)
plt.scatter(X[:,0], X[:,1], s=40, c=y.flatten(), cmap=plt.cm.Spectral)
# Define instance of class
# Learning rate = 0.05
y = y.reshape(200,1)
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
# Plot the decision boundary
plot_decision_boundary(lambda x: regressor.predict(x))
plt.title("Decision Boundary for hidden layer size 3")
"""
Explanation: As you can see, our logistic regressor seperates the two clouds with a simple line. This is appropriate for this case but might fail when the boundary is a more complex function. Let's try out a more complex function.
End of explanation
"""
|
ghvn7777/ghvn7777.github.io | content/fluent_python/3_1_dict_set.ipynb | apache-2.0 | tt = (1, 2, (30, 40))
hash(tt)
t1 = (1, 2, [30, 40]) # the list inside is mutable, so this tuple has no hash value
hash(t1)
tf = (1, 2, frozenset([30, 40])) # frozenset is a frozen, immutable set, so it is hashable
hash(tf)
"""
Explanation: In this chapter we discuss dictionaries and sets, because both are backed by hash tables. The outline of the chapter:
Common dictionary methods
Special handling for missing keys
Variations of dict in the standard library
The set and frozenset types
How hash tables work
Consequences of hash tables (restrictions on key types, unpredictable ordering, and so on)
What is hashable
An object is hashable if it has a hash value that never changes during its lifetime (it needs a __hash__() method) and can be compared to other objects (it needs an __eq__() method). The atomic immutable types (str, bytes and the numeric types) are all hashable, and frozenset is hashable too, because by definition a frozenset can only hold hashable elements. A tuple is hashable only if all of its elements are hashable.
End of explanation
"""
a = dict(one = 1, two = 2, three = 3)
b = {'one': 1, 'two': 2, 'three': 3}
c = dict(zip(['one', 'two', 'three'], [1, 2, 3]))
d = dict([('two', 2), ('one', 1), ('three', 3)])
e = dict({'three': 3, 'one': 1, 'two': 2})
a == b == c == d == e
"""
Explanation: Ways to build a dictionary
End of explanation
"""
DIAL_CODES = [
(86, 'China'),
(91, 'India'),
(1, 'United States')
]
country_code = {country: code for code, country in DIAL_CODES}
country_code
"""
Explanation: Besides the literal syntax and the dict constructor, we can use a dict comprehension to build a dictionary: a dictcomp builds a dict instance by producing key:value pairs from any iterable. Here is an example using a dict comprehension:
End of explanation
"""
#!/usr/bin/env python
# encoding: utf-8
import sys
import re
WORD_RE = re.compile(r'\w+') # \w matches any letter or digit; + means one or more repetitions
index = {}
#with open(sys.argv[1], encoding="utf-8") as fp: # normally the file name is passed as a command-line argument
with open("/home/kaka/test.txt", encoding="utf-8") as fp:
for line_no, line in enumerate(fp, 1): # line_no is the index (starting at 1), line is the line's content
for match in WORD_RE.finditer(line): # yields all matching substrings as an iterator
word = match.group() # group() extracts the matched word (match is a match object)
column_no = match.start() + 1 # get the column number; indexing starts at 0
location = (line_no, column_no) # build a tuple (row, col)
# This is ugly; it is coded like this just to make a point
occurrences = index.get(word, []) # check whether the word was added before; if not, return []; note this is a copy, not the stored list
occurrences.append(location) # append the new location for this key
index[word] = occurrences # this searches for the key word a second time
for word in sorted(index, key = str.upper): # sort alphabetically, ignoring case
print(word, index[word])
"""
Explanation: Dictionaries have a built-in method d.update(m, [**kwargs]). It first checks m: if m has a keys method, update treats it as a mapping; otherwise it falls back to treating m as an iterator of (key, value) pairs. Most mapping constructors in Python use similar logic, so we can build a new mapping either from another mapping object or from (key, value) pairs.
Handling missing keys with setdefault
When d[k] is used and k is not an existing key, dict raises KeyError. We know that d.get(k, default) can replace d[k] and supply a default for missing keys, which is more convenient than handling the KeyError exception. But when updating the value found for a key, using __getitem__() or get() is inefficient, as in the example below: dict.get is not the best way to handle a missing key.
End of explanation
"""
#!/usr/bin/env python
# encoding: utf-8
import sys
import re
WORD_RE = re.compile(r'\w+') # \w matches any letter or digit; + means one or more repetitions
index = {}
#with open(sys.argv[1], encoding="utf-8") as fp: # normally the file name is passed as a command-line argument
with open("/home/kaka/test.txt", encoding="utf-8") as fp:
for line_no, line in enumerate(fp, 1): # line_no is the index (starting at 1), line is the line's content
for match in WORD_RE.finditer(line): # yields all matching substrings as an iterator
word = match.group() # group() extracts the matched word (match is a match object)
column_no = match.start() + 1 # get the column number; indexing starts at 0
location = (line_no, column_no) # build a tuple (row, col)
# If the word key is missing, put the word with an empty list into the mapping and return that list, so it can be updated without a second search
index.setdefault(word, []).append(location)
for word in sorted(index, key = str.upper): # sort alphabetically, ignoring case
print(word, index[word])
"""
Explanation: The three lines dealing with occurrences can be replaced by a single line using dict.setdefault
End of explanation
"""
#if key not in my_dict:
# my_dict[key] = []
#my_dict[key].append(new_value)
"""
Explanation: In other words, index.setdefault(word, []).append(location) is equivalent to the following
End of explanation
"""
#!/usr/bin/env python
# encoding: utf-8
import sys
import re
import collections
WORD_RE = re.compile(r'\w+')
index = collections.defaultdict(list) # create the defaultdict with list as its default_factory
#with open(sys.argv[1], encoding="utf-8") as fp:
with open("/home/kaka/test.txt", encoding="utf-8") as fp:
for line_no, line in enumerate(fp, 1):
for match in WORD_RE.finditer(line):
word = match.group()
column_no = match.start() + 1
location = (line_no, column_no)
# If the key word is missing, the default_factory given at construction time is called to produce a default value; without a default_factory, a missing key raises KeyError
index[word].append(location)
for word in sorted(index, key = str.upper):
print(word, index[word])
"""
Explanation: Mappings with flexible key lookup
Sometimes we want a default value to be returned when a key is missing. There are two approaches: the first is to use a defaultdict instead of a plain dict; the second is to subclass dict and add a __missing__ method. Both are discussed below.
defaultdict: one option for missing keys
When instantiating a defaultdict, you provide a callable that is invoked whenever __getitem__ cannot find a key, to produce a default value. For example, given dd = defaultdict(list), if 'new-key' is not in dd, the expression dd['new-key'] first calls list() to build a new list, puts it into dd with 'new-key' as key, and finally returns a reference to that list. Below, the earlier problem is solved with collections.defaultdict.
End of explanation
"""
import collections
index = collections.defaultdict(list)
print(index.get('hello'))
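The side effect of `__getitem__` is easy to demonstrate: a plain square-bracket lookup inserts the missing key, while `get()` does not. A small sketch:

```python
import collections

dd = collections.defaultdict(list)
# get() bypasses default_factory: no entry is created
print(dd.get('hello'))   # None
print('hello' in dd)     # False
# __getitem__ (square brackets) triggers default_factory
print(dd['hello'])       # []
print('hello' in dd)     # True: the key was inserted as a side effect
```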
"""
Explanation: How it works: when we initialize a defaultdict we must provide a callable; when __getitem__() misses a key, the callable is invoked to produce a default value. Here we pass in list, so every call produces a fresh empty list.
Note: the default_factory of a defaultdict is only invoked by the __getitem__() method, never by the other methods. If dd is a defaultdict and k is a missing key, dd[k] calls the default_factory to create a default value, but dd.get(k) still returns None, as this example shows
End of explanation
"""
class StrKeyDict0(dict):
# Called when a key cannot be found
def __missing__(self, key):
# If the missing key is itself a str, raise KeyError right away
# This check is necessary: without it, a missing non-string key would recurse forever
if isinstance(key, str):
raise KeyError(key)
# If the missing key is not a string, convert it to a string and retry the lookup
return self[str(key)]
def get(self, key, default=None):
try:
# get looks the key up as self[key], i.e. via __getitem__(), so a missing key still gets its chance with __missing__()
return self[key]
except KeyError:
# If KeyError is raised here, __missing__() failed too, so return default
return default
def __contains__(self, key):
# Look the key up as given first, then as a string; note that or returns the first truthy operand, not True/False
# e.g. 0 or 2 returns 2
# We avoid key in self here because it would recurse into this very method, so we query self.keys() explicitly
return key in self.keys() or str(key) in self.keys()
d = StrKeyDict0([('2', 'two'), ('4', 'four')])
d['2']
d[4]
d[1]
d.get('2')
d.get(4)
d.get(1, 'N/A')
2 in d
1 in d
"""
Explanation: The mechanism behind all this is the __missing__() method. It is what defaultdict calls on a missing key, and in fact any mapping type may choose to support it.
The __missing__ method
When a mapping deals with a missing key, the underlying hook is __missing__(). The base dict class does not define it, but dict is aware of the interface: you can subclass dict and implement __missing__(), and dict.__getitem__() will call it instead of raising KeyError when a key is not found.
__missing__() is only called by __getitem__() (i.e. the d[k] operator). Defining it has no effect on the other key-lookup methods such as get or __contains__ (the in operator). That is why the default_factory of defaultdict only cooperates with __getitem__().
Sometimes we want every key in a mapping to be looked up as a str. Below is an example from programmable-board projects (Raspberry Pi, Arduino and the like), where a pin may be identified by a number or a string, such as "A0" or "12". To make lookup convenient, we want my_arduino.pins[13] to work too, quickly finding pin 13 of the Arduino.
End of explanation
"""
import collections
class StrKeyDict(collections.UserDict):
def __missing__(self, key):
if isinstance(key, str): # this method is exactly the same as in StrKeyDict0
raise KeyError(key)
return self[str(key)]
def __contains__(self, key): # simpler, because we can safely assume every stored key is a string
return str(key) in self.data
# This method converts every key to a string on insertion
def __setitem__(self, key, item):
self.data[str(key)] = item
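A quick interactive check of the class above; it is repeated here so the snippet is self-contained:

```python
import collections

# Repeating StrKeyDict from above so this snippet runs on its own.
class StrKeyDict(collections.UserDict):
    def __missing__(self, key):
        if isinstance(key, str):
            raise KeyError(key)
        return self[str(key)]
    def __contains__(self, key):
        return str(key) in self.data
    def __setitem__(self, key, item):
        self.data[str(key)] = item

d = StrKeyDict([(2, 'two'), ('4', 'four')])
print(d[2], d['2'])        # both hit the same 'two' entry
print(2 in d, '2' in d)    # True True
print(d.get(4), d.get(1, 'N/A'))  # four N/A
```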
"""
Explanation: Operations like key in my_dict.keys() are fast, even if the mapping is huge, because dict.keys() returns a view. A view behaves like a set, and just as with dictionaries, membership tests in a view are very fast.
Variations of dict
collections.OrderedDict
Keeps keys in insertion order. popitem() removes and returns the last item by default; called as my_odict.popitem(last=False), it removes and returns the first item that was added
collections.ChainMap
Holds several mapping objects that are searched as one: key lookups are performed on each mapping in turn until the key is found.
This is useful for interpreters of languages with nested scopes, where a mapping object represents the context of each scope
Here is a concrete usage example from the collections documentation for ChainMap: a code snippet that mirrors Python's variable lookup rules
python
import builtins
pylookup = ChainMap(locals(), globals(), vars(builtins))
collections.Counter
Keeps an integer counter per key; the counter is incremented every time a key is updated, so it can be used to count hashable objects or as a multiset
A multiset is a set whose elements may occur more than once. Counter implements the + and - operators to combine tallies, and useful methods such as most_common([n])
most_common([n]) returns, in order, the n most common keys in the mapping along with their counts.
An example counting the letters in a word:
python
import collections
ct = collections.Counter('abracadabra')
ct
#Counter({'a': 5, 'b': 2, 'r': 2, 'c': 1, 'd': 1})
ct.update('aaaaazzz')
ct
#Counter({'a': 10, 'z': 3, 'b': 2, 'r': 2, 'c': 1, 'd': 1})
ct.most_common(2)
#[('a', 10), ('z', 3)]
collections.UserDict
This class is essentially the standard dict reimplemented in pure Python
Subclassing UserDict
To create a new mapping type, it is easier to subclass UserDict than dict. We can see this by improving the earlier StrKeyDict0 so that every key is stored as a str.
The reason we prefer subclassing UserDict over dict is that dict takes shortcuts in some of its methods, forcing us to override them in subclasses; UserDict has no such problem.
Note that UserDict does not inherit from dict; instead it holds an internal dict instance, called data, where the items are actually stored. Thanks to this, compared with StrKeyDict0 above, a UserDict subclass can implement methods like __setitem__ without unwanted recursion, and its __contains__ can be simpler.
The example below is not only more concise than StrKeyDict0 but also more complete: it stores every key as a string and also handles non-string keys that appear when an instance is created or updated.
End of explanation
"""
from types import MappingProxyType
d = {1: 'A'}
d_proxy = MappingProxyType(d)
d_proxy
d_proxy[1]
d_proxy[2] = 'x'
d[2] = 'b'
d_proxy
"""
Explanation: Because UserDict is a subclass of MutableMapping, the remaining mapping methods of StrKeyDict are inherited from the UserDict, MutableMapping and Mapping superclasses. Mapping in particular, despite being an abstract base class (ABC), provides several useful concrete methods. Two worth noting:
MutableMapping.update:
This method can be called directly, and it is also used by __init__, so the constructor can build an instance from various arguments (another mapping, an iterable of (key, value) pairs, or keyword arguments). Because it uses self[key] = value to add items, it ends up calling our __setitem__() method.
Mapping.get
In StrKeyDict0 we had to code our own get to keep its behavior consistent with __getitem__(). Here that is unnecessary, because we inherit Mapping.get(), whose implementation, as the Python source code shows, is exactly like StrKeyDict0.get().
Immutable mappings
Since Python 3.3, types provides the MappingProxyType wrapper class: given a mapping, it returns a read-only but dynamic view of it. That means changes to the original mapping show through the view, but the original cannot be changed through the view. The following example illustrates this:
End of explanation
"""
l = ['spam', 'spam', 'eggs', 'spam']
set(l)
list(set(l))
"""
Explanation: Set theory
By "sets" we mean both frozenset and set.
A set is, in essence, a collection of unique objects, so sets can be used for deduplication.
End of explanation
"""
#found = len(needles & haystack)
"""
Explanation: Set elements must be hashable. The set type itself is not hashable, but frozenset is, so a set can contain frozenset elements.
Besides guaranteeing uniqueness, sets implement many fundamental operations as infix operators: a | b is the union of a and b, a & b their intersection, and a - b their difference. Used well, these operations not only shorten the code but also reduce the running time of Python programs.
For example, suppose we have a set of e-mail addresses (haystack) and must maintain a smaller set of e-mail addresses (needles), and we need to count how many of the addresses in needles also appear in haystack. Thanks to sets, this is very simple:
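With two small made-up address sets, for instance:

```python
needles = {'ana@example.com', 'bob@example.com', 'eve@example.com'}
haystack = {'ana@example.com', 'eve@example.com',
            'joe@example.com', 'kim@example.com'}

found = len(needles & haystack)  # size of the intersection
print(found)  # 2
```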
End of explanation
"""
#found = len(set(needles) & set(haystack))
# 也可以写成
#found = len(set(needles).intersection(haystack))
"""
Explanation: But the syntax above only works with sets; the versions below work with any iterable types:
End of explanation
"""
# The form below is faster than set([1]): the latter has to look up the set constructor by name, build a list, and finally pass the list to the constructor
# But for a literal like {1, 2, 3}, Python builds the set with a dedicated bytecode called BUILD_SET
s = {1}
type(s)
s.pop()
s
s = {}
type(s) # defined this way, s is a dict, not a set
from dis import dis
dis('{1}')
dis('set([1])')
"""
Explanation: Set literals
The literal syntax for sets ({1}, {1, 2}, etc.) looks just like the mathematical notation, with one important difference: there is no literal for an empty set, which must be written explicitly as set(). If you write {}, you have only created an empty dictionary.
End of explanation
"""
frozenset(range(10))
"""
Explanation: Notice that {1} goes straight to building the set from the constant 1, while set([1]) takes several more steps.
There is no special literal syntax for frozenset; it must be created through its constructor. In Python 3, the standard string representation of a frozenset looks just like a constructor call. Consider this console session:
End of explanation
"""
from unicodedata import name # name() retrieves the name of a character
{chr(i) for i in range(32, 256) if 'SIGN' in name(chr(i), '')} # pick out the characters whose names contain the word SIGN and put them in a set
"""
Explanation: Set comprehensions
Set comprehensions work much like dict comprehensions:
End of explanation
"""
DIAL_CODES = [
(86, 'China'),
(91, 'India'),
(1, 'United States'),
(62, 'Indonesia'),
(55, 'Brazil'),
(92, 'Pakistan'),
(880, 'Bangladesh'),
(234, 'Nigeria'),
(7, 'Russia'),
(81, 'Japan')
]
d1 = dict(DIAL_CODES) # tuples ordered by the countries' population ranking
print('d1:', d1.keys())
d2 = dict(sorted(DIAL_CODES)) # tuples ordered by the countries' dialing codes
print('d2:', d2.keys())
d3 = dict(sorted(DIAL_CODES, key = lambda x: x[1])) # tuples ordered alphabetically by country name
print('d3:', d3.keys())
assert d1 == d2 and d2 == d3
"""
Explanation: Set operations
The infix set operators require both operands to be sets, but all the other methods only require the first one to be: for example, to produce the union of a, b, c and d, a.union(b, c, d) only requires a to be a set; b, c and d may be any iterables.
dict and set under the hood
This section answers the following questions:
- How efficient are Python's dict and set?
- Why are they unordered?
- Why can't every Python object be used as a dict key or a set element?
- Why does the order of dict keys and set elements depend on the order in which they were added, and why may that order change during the lifetime of the mapping?
- Why shouldn't we add items to a dict or set while iterating over it?
A performance experiment
We have a dict of 10,000,000 distinct double-precision floats, called haystack, and an array of 1,000 floats, called needles, of which 500 can be found in haystack and another 500 definitely cannot.
We timed the running time of the following code (the timeit module can be used to measure it):
found = 0
for n in needles:
    if n in haystack:
        found += 1
It turns out that when haystack has 1,000 items the loop takes 0.000202 s, and when haystack has 10,000,000 items it takes only 0.000337 s.
For comparison, we ran the same experiment with haystack as a set and as a list. For the set, besides the code above, we also timed counting the needles that appear in haystack with a set intersection:
found = len(needles & haystack)
We found that the set intersection was the fastest and the list was by far the slowest; the set with the for-loop code took about the same time as the dict.
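A scaled-down, hedged version of this measurement with timeit (the sizes are much smaller so it runs quickly; the absolute numbers will vary by machine, but the ordering should not):

```python
import timeit

haystack_list = list(range(10_000))
haystack_set = set(haystack_list)
needles = list(range(9_900, 10_100))  # 100 present, 100 absent

def count_in(container):
    found = 0
    for n in needles:
        if n in container:
            found += 1
    return found

assert count_in(haystack_set) == count_in(haystack_list) == 100
t_list = timeit.timeit(lambda: count_in(haystack_list), number=10)
t_set = timeit.timeit(lambda: count_in(haystack_set), number=10)
print(f'list: {t_list:.5f}s  set: {t_set:.5f}s')  # the set is far faster
```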
The hash table inside dict
A hash table is a sparse array (an array that always has empty cells). The cells of a hash table are called buckets. In the hash table of a dict, each key-value pair occupies one bucket, and each bucket has two parts: a reference to the key and a reference to the value. Because every bucket has the same size, an individual bucket can be reached directly through its offset.
Python tries to keep about a third of the buckets empty, so when the table gets close to that threshold, the original hash table is copied into a larger space.
To put an object into a hash table, the first step is to compute the hash value of the item's key, which is done in Python with the hash() function.
Hash value equality
The built-in hash() function works with every built-in type; calling hash() on a user-defined object actually runs that object's custom __hash__(). If two objects compare equal, their hash values must also be equal, otherwise the hash table cannot work correctly. For example, since 1 == 1.0 is true, hash(1) == hash(1.0) must also be true, even though the internal representations of the two numbers (an integer and a float) are completely different.
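This contract is easy to check directly:

```python
assert 1 == 1.0
assert hash(1) == hash(1.0)

# the same contract holds across int, float, complex and bool
print(hash(1), hash(1.0), hash(1 + 0j), hash(True))  # four equal values
```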
The hash table algorithm
To fetch the value behind my_dict[search_key], Python first calls hash(search_key) to compute the hash value of search_key, then uses the lowest few bits of that number as an offset to look up a bucket in the hash table (how many bits are used depends on the current size of the table). If the bucket found is empty, a KeyError is raised. If it is not empty, it holds a pair found_key:found_value, and Python checks whether search_key == found_key; if they are equal, found_value is returned.
If search_key and found_key do not match, this is called a hash collision. Collisions happen because a hash table maps arbitrary objects onto numbers of only a few bits, while the table index depends on only part of those bits. To resolve a collision, the algorithm takes a few more bits of the hash value, massages them in a special way, and uses the resulting number as an index to look up another bucket (the C routine used to scramble the hash bits is named perturb). If this bucket is empty, a KeyError is raised; if it is not empty and the keys match, the value is returned; and if there is yet another collision, the process repeats.
Inserting a new item or updating an existing one follows almost the same steps, except that in the first case an empty bucket receives the new item, and in the second the matching bucket has its value replaced.
Also, when inserting a new item, Python may decide, depending on how crowded the hash table is, to reallocate memory and grow the table. Growing the table increases the number of hash bits used as bucket offsets, which keeps the probability of collisions low.
This algorithm may look laborious, but even with millions of items in a dict, most searches happen with no collisions at all; on average there are one or two collisions per search, and in normal use even the unluckiest keys can count their collisions on the fingers of one hand.
Practical consequences of how dict is implemented
Below we discuss the advantages and limitations that hash tables bring to dict.
1. Keys must be hashable objects
A hashable object must satisfy the following conditions:
It supports hash(), which always returns the same value.
It supports equality comparison through the __eq__() method.
If a == b is True, then hash(a) == hash(b) must also be True.
If you implement a custom __eq__(), make sure that a == b and hash(a) == hash(b) agree, otherwise the dictionary may be unable to handle your objects. On the other hand, if a class defines __eq__() in terms of mutable state, do not implement __hash__() in that class, because its instances are not hashable.
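A minimal sketch of a class that honors this contract (the class is invented here for illustration):

```python
class Card:
    def __init__(self, rank, suit):
        self.rank = rank
        self.suit = suit

    def __eq__(self, other):
        return (self.rank, self.suit) == (other.rank, other.suit)

    def __hash__(self):
        # built from the same fields as __eq__, so a == b
        # guarantees hash(a) == hash(b)
        return hash((self.rank, self.suit))

a, b = Card('A', 'spades'), Card('A', 'spades')
assert a == b and hash(a) == hash(b)
print({a: 'ace of spades'}[b])  # ace of spades
```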
2. dicts have a significant memory overhead
Because a dict uses a hash table, and hash tables are sparse, dicts are not space-efficient. For example, if we need to store a huge number of records, it is better to keep them in a list of tuples or named tuples than in JSON-style dicts. Replacing dicts with tuples saves memory for two reasons: it removes the space cost of one hash table per record, and it avoids storing the field names again with every record.
For user-defined types, the __slots__ attribute changes how instance attributes are stored, from a dict into a tuple; the details are covered in Chapter 9.
Keep in mind that we are talking about space optimizations here. You only really need them when you are handling millions of objects on a machine with just a few GB of RAM, because optimization is often the opposite of maintainability.
3. Key searches are very fast
dict is a classic space-for-time trade-off: it has a large memory overhead, but it provides fast access regardless of the size of the data, as long as the dict fits in memory.
4. Key ordering depends on insertion order
When a new key is added to a dict and a hash collision occurs, the new key may end up stored in a different position. So the following can happen: the dicts built by dict([(key1, value1), (key2, value2)]) and dict([(key2, value2), (key1, value1)]) compare equal, but if collisions occurred while key1 and key2 were being added, the order in which the two keys appear in each dict can differ.
The example demonstrates this phenomenon: three dicts are created from the same data, the only difference being the order in which the items appear. Even though the key order varies, the three dicts are still considered equal.
End of explanation
"""
|
sebastiandres/mat281 | clases/Unidad4-MachineLearning/Clase02-Clustering/clustering.ipynb | cc0-1.0 | from sklearn import datasets
import matplotlib.pyplot as plt
iris = datasets.load_iris()
def plot(dataset, ax, i, j):
ax.scatter(dataset.data[:,i], dataset.data[:,j], c=dataset.target, s=50)
ax.set_xlabel(dataset.feature_names[i], fontsize=20)
ax.set_ylabel(dataset.feature_names[j], fontsize=20)
# row and column sharing
f, ((ax1, ax2), (ax3, ax4), (ax5,ax6)) = plt.subplots(3, 2, figsize=(16,8))
plot(iris, ax1, 0, 1)
plot(iris, ax2, 0, 2)
plot(iris, ax3, 0, 3)
plot(iris, ax4, 1, 2)
plot(iris, ax5, 1, 3)
plot(iris, ax6, 2, 3)
f.tight_layout()
plt.show()
"""
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" height="100px" align="left"/>
<img src="images/mat.png" alt="" height="100px" align="right"/>
</header>
<br/><br/><br/><br/><br/>
MAT281
Applications of Mathematics in Engineering
Sebastián Flores
https://www.github.com/usantamaria/mat281
Previous class
Motivation for Data Science and Machine Learning.
Question
What are the 3 big families of algorithms?
Clustering
Regression
Classification
Iris Dataset
We will illustrate the different algorithms with real data. An interesting and versatile dataset is the Iris Dataset, which can be used for clustering, regression and classification.
It was used by Ronald Fisher in his article "The use of multiple measurements in taxonomic problems" (1936).
The dataset consists of 50 samples of each of 3 species of Iris (Iris setosa, Iris virginica and Iris versicolor).
For each flower, 4 features were measured: the length and width of the petals, and the length and width of the sepals, in centimeters.
Iris Dataset
<img src="images/iris_petal_sepal.png" alt="" width="600px" align="middle"/>
Iris Dataset
Data exploration
End of explanation
"""
from mat281_code import iplot
iplot.kmeans(N_points=100, n_clusters=4)
"""
Explanation: Clustering
Crucial question:
If we didn't know that 3 types of Iris exist, would we be able to algorithmically find 3 kinds of flowers?
Clustering
We have unlabeled/ungrouped data.
We want to obtain a "natural" grouping of the data.
There are no examples to learn from: an unsupervised method.
Easy to verify by visual inspection in 2D and 3D.
Hard to verify in higher dimensions.
Examples of Clustering Problems
Market segmentation:
How do we best serve our customers?
Location of resupply centers:
How do we minimize delivery times?
Image compression:
How do we minimize the space needed for storage?
Location of resupply centers
<img src="images/reabastecimiento1.png" width="500px" align="middle"/>
Location of resupply centers
<img src="images/reabastecimiento2.png" width="500px" align="middle"/>
Image Compression
Using all the colors:
<img src="images/colores.png" width="500px" align="middle"/>
Image Compression
Using only 32 colors:
<img src="images/colores_32means.png" width="500px" align="middle"/>
Characteristics of a Clustering Problem
Input data: a set of unlabeled inputs.
Output data: a label for each input.
Note: the label is typically associated with an integer (0, 1, 2, etc.) but is really any categorical variable.
Clustering Algorithms
They use the properties inherent in the data to organize it into groups of maximum similarity.
Connectivity-based algorithms: Hierarchical Clustering.
Density-based algorithms: Expectation Maximization.
Centroid-based algorithms: k-means.
k-means
Input: a set $X$ of $N$ points $x=(x_1, ..., x_n)$ and a meta-parameter $k$ giving the number of clusters to create.
Output: a set of $k$ cluster centroids ($\mu_l$) and a labeling of each point $x$ in $X$ indicating which cluster it belongs to.
$x_i$ and $\mu_l$ are vectors in $\mathcal{R}^m$.
Membership is unique. All the points inside a cluster are closer in distance to the centroid of their own cluster than to the centroid of any other cluster.
k-means
Mathematically:
\begin{align}
\textrm{Minimize } \sum_{l=1}^k \sum_{x_n \in C_l} ||x_n - \mu_l ||^2 \textrm{ with respect to } C_l, \mu_l.
\end{align}
Where $C_l$ is the l-th cluster.
The problem above is NP-hard (impossible to solve in polynomial time; among the hardest of the NP problems).
Lloyd's Algorithm
A heuristic that converges in a few steps to a local minimum.
Procedure
Compute the centroid of each cluster by averaging the positions of the points currently in the cluster.
Update cluster membership by assigning each point to its nearest centroid.
<span class="good">When does k-means work?</span>
When the clusters are well defined and can be separated by circles (n-spheres) of equal size.
<img src="images/kmeans1.png" width="600px" align="middle"/>
<img src="images/kmeans2.png" width="600px" align="middle"/>
<img src="images/kmeans3.png" width="400px" align="middle"/>
<img src="images/kmeans4.png" width="400px" align="middle"/>
<span class="bad">When does k-means fail?</span>
When the number $k$ of clusters is chosen badly.
When there is no clear separation between the clusters.
When the clusters have very different sizes.
When the initialization is not appropriate.
<img src="images/kmeans4.png" width="400px" align="middle"/>
<img src="images/kmeans5.png" width="600px" align="middle"/>
<img src="images/kmeans6.png" width="600px" align="middle"/>
Examples of k-means
End of explanation
"""
import numpy as np
from scipy.linalg import norm
def find_centers(X, k, seed=None):
if seed is None:
seed = np.random.randint(10000000)
np.random.seed(seed)
# Initialize to K random centers
old_centroids = random_centers(X, k)
new_centroids = random_centers(X, k)
while not has_converged(new_centroids, old_centroids):
old_centroids = new_centroids
# Assign all points in X to clusters
clusters = cluster_points(X, old_centroids)
# Reevaluate centers
new_centroids = reevaluate_centers(X, clusters, k)
return (new_centroids, clusters)
def random_centers(X, k):
index = np.random.randint(0, X.shape[0], k)
return X[index, :]
def has_converged(new_mu, old_mu, tol=1E-6):
num = norm(np.array(new_mu)-np.array(old_mu))
den = norm(new_mu)
rel_error= num/den
return rel_error < tol
def cluster_points(X, centroids):
clusters = []
for i, x in enumerate(X):
distances = np.array([norm(x-cj) for cj in centroids])
clusters.append( distances.argmin())
return np.array(clusters)
def reevaluate_centers(X, clusters, k):
centroids = []
for j in range(k):
cj = X[clusters==j,:].mean(axis=0)
centroids.append(cj)
return centroids
"""
Explanation: k-means
<span class="good">Advantages</span>
Fast and simple to program
<span class="bad">Disadvantages</span>
Works on continuous data, or wherever distances and averages can be defined.
The heuristic depends on the initial points.
Requires specifying the number of clusters $k$.
Does not work correctly in every clustering case, even when $k$ is known correctly.
End of explanation
"""
from mat281_code import gendata
from mat281_code import plot
from mat281_code import kmeans
X = gendata.init_blobs(1000, 4, seed=40)
ax = plot.data(X)
centroids, clusters = kmeans.find_centers(X, k=4)
plot.clusters(X, centroids, clusters)
"""
Explanation: Application to data
End of explanation
"""
from mat281_code import gendata
from mat281_code import plot
from sklearn.cluster import KMeans
X = gendata.init_blobs(10000, 6, seed=43)
plot.data(X)
kmeans = KMeans(n_clusters=6)
kmeans.fit(X)
centroids = kmeans.cluster_centers_
clusters = kmeans.labels_
plot.clusters(X, centroids, clusters)
"""
Explanation: Do we need to reinvent the wheel?
Let's use the sklearn library.
End of explanation
"""
import numpy as np
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
# Parameters
n_clusters = 8
# Loading the data
iris = datasets.load_iris()
X = iris.data
y_true = iris.target
# Running the algorithm
kmeans = KMeans(n_clusters)
kmeans.fit(X)
y_pred = kmeans.labels_
# Show the classificacion report
cm = confusion_matrix(y_true, y_pred)
print(cm)
print((cm.sum() - np.diag(cm).sum()) / float(cm.sum())) # 16/100
"""
Explanation: How do we choose k?
Prior knowledge of the data.
Trial and error.
Elbow rule.
Estimating the number of clusters in a dataset via the gap statistic, Tibshirani, Walther and Hastie (2001).
Selection of k in k-means, Pham, Dimov and Nguyen (2004).
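As an illustration of the elbow rule, here is a hedged sketch using sklearn's KMeans on synthetic blobs (inertia_ is the within-cluster sum of squares; the "elbow" is the k after which it stops dropping sharply; this assumes sklearn is available, as elsewhere in this lesson):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# synthetic data with 4 well-separated groups
X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

inertias = []
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    inertias.append(km.inertia_)  # within-cluster sum of squares

for k, inertia in zip(range(1, 9), inertias):
    print(k, round(inertia, 1))
# the inertia drops steeply up to k=4 and then flattens out:
# the elbow suggests k=4
```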
Back to the Iris Dataset
Let's apply k-means to the Iris Dataset and compute the classification error.
Let's display the result using the confusion matrix.
<img src="images/predictionMatrix.png" alt="" width="900px" align="middle"/>
End of explanation
"""
|
lee-ngo/dataset-ice-fire | basic_python_data_science_ice_fire.ipynb | mit | type(454)
type(2.1648)
type(5 + 6 == 10) # You can put expressions in them as well!
type(5 + 72j)
type(None)
"""
Explanation: Basic Python for Data Science: A Dataset of Ice and Fire
Hello, and welcome to the Jupyter Notebook for this lesson by Lee Ngo!
If you've gotten this far, that means you've accomplished the following:
You've installed a distribution that provides Jupyter on your computer (we recommend Anaconda at https://continuum.io/downloads)
You knew how to turn on Jupyter Notebook in your command prompt
You did it in the folder of a downloaded copy of the repo, which you can find here: https://github.com/lee-ngo/dataset-ice-fire
If you've gotten this far, we're ready to go onto the next phase!
Objectives of this Lesson
Learn the basics of the Python language for data science
Import popular Python libraries and data files
Perform some Exploratory Data Analysis (EDA)
Complete some Data Visualization
Seems like a lot, but we'll be able to get through most of this within an hour!
Let's get started with some light Python to get us warmed up.
Great if you're already familiar with the language, but here are some warm-up commands to get started.
Basics of the Python language
We're certainly not going to cover EVERYTHING one could learn in Python - that takes a lifetime.
For our purposes, it helps to understand certain key concepts.
Let's start with data types.
Data Types in Python
There are five common data types in Python:
int - integer value
float - decimal value
bool - True/False
complex - imaginary
NoneType - null value
TIME TO CODE!
Let's try to identify some data types below. Predict the outputs
of the following commands and run them.
End of explanation
"""
house = ['Targaryen','Stark','Lannister','Tyrell','Tully','Arryn','Martell','Baratheon','Greyjoy']
"""
Explanation: Identifying data types will be helpful later on, as having conflicting types can lead to messy data science.
'Arrays' in Python
Data can also be stored, arranged, and organized in ways that lend itself to a lot of great analysis.
Here are the types of data one might work with here.
Note: the term 'immutable' means that items within the object cannot be changed unless the entire object changes as well.
str - string/varchar immutable value, defined with quotes = ‘abc’
list - collection of elements, defined with brackets = [‘a’, ‘b’]
tuple - immutable list, defined with parentheses = (‘a’, ‘b’)
dict - unordered key-value pairs, keys are unique and immutable, defined with braces = {‘a’:1, ‘b’:2}
set - unordered collection of unique elements, defined with braces = {‘a’, ‘b’}
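Here's a quick, toy-valued illustration of each of these types:

```python
s = 'abc'             # str   -- immutable text
l = ['a', 'b']        # list  -- mutable sequence
t = ('a', 'b')        # tuple -- immutable sequence
d = {'a': 1, 'b': 2}  # dict  -- key-value pairs
u = {'a', 'b', 'b'}   # set   -- unique elements; the duplicate 'b' is dropped

l[0] = 'z'            # fine: lists are mutable
try:
    t[0] = 'z'        # tuples are not
except TypeError as err:
    print('tuples are immutable:', err)
print(type(s), type(l), type(t), type(d), type(u))
```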
For this lesson, we'll mostly be working in strings, lists, and dictionaries. Let's play around with a basic one below.
TIME TO CODE
Create a list below called house and set the the value of the items to the following, in this order (include the quotes and separate the values with a comma):
'Targaryen'
'Stark'
'Lannister'
'Tyrell'
'Tully'
'Arryn'
'Martell'
'Baratheon'
'Greyjoy'
End of explanation
"""
house[5]
"""
Explanation: Let's do some super-basic data exploration. What happens if you type in house[5]?
End of explanation
"""
def words_of_stark():
return "Winter is coming!"
words_of_stark()
def shipping(x,y):
return x + " is now romantically involved with " + y + ". Hope they're not related!"
shipping("Jon Snow","Daenerys Targaryen") # Sorry, spoiler alert.
"""
Explanation: Yep, you get the sixth item in the list. A common standard in most coding languages, lists are automatically indexed upon creation, starting at 0. This will be helpful to know when you're trying to look for certain items in a list by order - they will be at the nth - 1 index.
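A few more indexing tricks worth knowing, shown on a separate toy list:

```python
kings = ['Robert', 'Joffrey', 'Tommen']

print(kings[0])    # 'Robert'  -- indexing starts at 0
print(kings[-1])   # 'Tommen'  -- negative indices count from the end
print(kings[0:2])  # ['Robert', 'Joffrey'] -- slices exclude the stop index
print(len(kings))  # 3 -- the number of items in the list
```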
Functions and Methods - Let's do more with these objects!
We're going to be working a lot with functions and methods to do some cool things with our data.
First, what's the difference between the two?
A function is a programming construct that allows us to do a little bit more with the objects we created.
A method is a function specific to a class, and it is accessed using an instance/object within the class.
A class is a user-defined prototype for an objecct that defines a set of attributes and characteristics of any object within them, including variables and methods.
Don't worry if you're a little confused - we'll learn more through practice.
Another good way to remember: all methods are functions, but not all functions are methods.
TIME TO CODE!
Let's start with some pretty basic mathematical functions. What will the following functions return if you run them?
End of explanation
"""
import pandas as pd # We use this shortened syntax to type less later on
import matplotlib.pyplot as plt # Specifically, we're using the PyPlot class and again, a shortened syntax
%matplotlib inline
# This handy command above allows us to see our visualizations instantly, if written correctly
"""
Explanation: Well, that was fun, but I don't want to have to re-invent the wheel.
Fortunately the Python community has developed a lot of rich libraries full of classes for us to work with so that we don't have to constant define them.
We access them by importing.
Importing Libraries
We're going to be working with two in particular for this lesson:
Pandas - a data analysis library in Python, completely free to use. (pandas.pydata.org)
Matplotlib - a 2D plotting library to create some decent visualizations of our data.
We'll be importing both of them and using them throughout the lesson. Below is the code to import. Be sure to run it so that it applies to the subsequent code.
End of explanation
"""
raw_dataframe = pd.read_csv("war_of_the_five_kings_dataset.csv")
"""
Explanation: Awesome! Now we're ready to start working with the dataset.
About This Dataset You're About to Import
I originally found this while searching for fun Game of Thrones-based data, and I found one by Chris Albon, a major contributor to the data science community. I felt it was perfect for teaching some of the basics in Python for data science, especially the core concepts of how to think scientifically, even about make-believe fantasy data.
You can find the original dataset here: https://github.com/chrisalbon/war_of_the_five_kings_dataset
Out of respect to him, I've left it unchanged. You now have a copy of it as well. Use the code below to import it:
End of explanation
"""
raw_dataframe.head(3) # This should show 3 rows, starting at index 0.
"""
Explanation: We've now created an object called raw_dataframe that contains all of the data from the csv file, converted into a Pandas-based dataframe. This will allow us to do a lot of great exploratory things.
Basic Exploratory Data Analysis
Let's take a look in our data by using the .head() method, which allows us to see the top few rows of data according to the number of rows we'd like to see. Run the code below.
End of explanation
"""
pd.set_option('display.max_columns', None)
"""
Explanation: You now can catch a glimpse of what's in this data set! Wait a minute...
What happens to the data after column defender_1? There's an ... ellipsis?
This dataset is actually very wide. Pandas is doing us a favor by ignoring some of those columns.
Let's make them visible by changing a small feature of our Pandas import:
End of explanation
"""
raw_dataframe.head(3)
"""
Explanation: This little bit of code tells the display settings of our Pandas library to put no limit on the number of columns shown. Try running the .head() method again to see if it worked.
End of explanation
"""
raw_dataframe.info()
"""
Explanation: Great! We can now see all of the columns of data! This little bit of code is handy for customizing your libraries in the future.
Let's do one more key bit of code, drawing back to our very first lesson:
End of explanation
"""
df = raw_dataframe.copy()
"""
Explanation: Here's another way for us to look at our data and see what types exist within it, indexed by key.
So far we know that there are 38 data points overall, and some of those points are written as integers, others as float objects.
These are all default data assignments by Pandas, but as we dive deeper, we'll care a little bit more about which is which.
SO, WHAT DO YOU WANT TO KNOW?
We've imported our libraries and our dataset, and we have a decent idea as to what's in them. What's next?
The first rule about being a data scientist: it's not about the tools, it's about the questions to answer.
We are scientists first and foremost, thus we must begin all of our exercises with a question we hope the data will answer.
In an era where there's oceans of data generated, we then need tools and people to use them properly to answer questions.
For now, we're dealing with a small dataset: a quantiative documentation of the results of the War of the Five Kings, including:
Participating Houses
Participating "Kings" of Those Houses
Participants in each battle, including attackers and defenders
Army sizes
Outcome of the battle (based on the attacker's perspective)
Name of the battle
There's a lot more we could cover, but I'll try to focus on one particular King: Robb Stark.
Analyzing 'The King in the North'
Sure, things didn't exactly work out well for "The Young Wolf" (spoiler, although it's actually mentioned in this dataset), but his end overshadowed what was otherwise an impressive military campaign.
We can answer some pretty simple questions about his performance in this war, such as:
How many battles did Robb Stark fight in?
How many battles did Robb Stark win as an attacker?
How many battles did Robb Stark win overall?
Let's start with just those 3.
TIME TO CODE!
First, we have to group the data in such as way so that we can analyze it.
Let's make a new dataframe called df, cloned from the existing one with the .copy() method.
End of explanation
"""
robb_off = df[df['attacker_king'] == 'Robb Stark']
robb_def = df[df['defender_king'] == 'Robb Stark']
"""
Explanation: How many battles did Robb Stark fight in?
A quick glance in the dataset shows that Robb Stark fought in battles as both an attacker and as a defender.
We'll need to create sub-dataframes that isolate those key_values:
End of explanation
"""
robb_total = len(robb_off) + len(robb_def)
robb_total
"""
Explanation: Whoa, that looks a little complex. Let's break it down. We've created two objects: robb_off for whenever Robb Stark attacked (i.e., was on the "offensive"), and robb_def for whenever Robb Stark defended.
We're requesting in Python to set these objects equal to the dataframe dictionary for whenever the key of attacker_king and defender_king is equivalent to the value of Robb Stark.
Feel free to use .head() on each object to see if it worked.
From here, it's as simple as counting the rows for each using the len() function, which gives us the "length" of a dataset according to the number of indexed items (in this case - rows).
Here's the code, setting it equal to a new variable:
End of explanation
"""
robb_off_win = robb_off[robb_off['attacker_outcome'] == 'win']
"""
Explanation: In other words, Robb Stark was involved in nearly 2/3 of all the battles fought during the War of the Five Kings.
But how good of a war commander was he?
How many battles did Robb Stark win as an attacker?
We can build upon the objects we've already built. We have the object for the number of battles Robb fought as an attacker, so let's create one involving him as an attacker AND a victor, still drawing from the original data source:
End of explanation
"""
len(robb_off_win)
"""
Explanation: Using the same strategy as before, we're now looking into the sub-dataframe robb_off for whenever the key attacker_outcome has a value win.
From there, it's a simple len() method.
End of explanation
"""
robb_def_win = robb_def[robb_def['attacker_outcome'] == 'loss']
"""
Explanation: Cool! Robb Stark won 8 of the battles he fought as an attacker. What about all the battles he won, including the ones as a defender?
We apply the same method, but remember - victories are according to the attacker's perspective. We need times when the attacker has lost to add to Robb's scoreboard.
End of explanation
"""
len(robb_off_win + robb_def_win)
"""
Explanation: Adding these two variables together gets you the number of overall victories:
End of explanation
"""
robb_off_viz = robb_off.groupby('attacker_outcome').apply(len)
"""
Explanation: .... Wait, only 9? Out of the total number of battles Robb Stark fought, he was successful as a attacker but not great on the defensive. Overall, winning 9 out of 24 battles is really not that impressive.
Perhaps 'The Young Wolf' wasn't as impressive as we thought...
Try answering some more questions:
What was the average size of Robb Stark's armies against those defending against him?
How did the Lannisters/Baratheons fare in the War of the Five Kings?
Which king had the highest winning percentages? (Requires some light statistics...)
Who was the most effective commander (there are several to choose from)?
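For the winning-percentage question, here's one hedged sketch. It runs on a tiny made-up frame that reuses the dataset's column names (the kings and outcomes below are illustrative); swap in df to try it on the real data:

```python
import pandas as pd

toy = pd.DataFrame({
    'attacker_king': ['Robb Stark', 'Robb Stark', 'Stannis Baratheon'],
    'attacker_outcome': ['win', 'loss', 'win'],
})

# wins and total battles per attacking king, then their ratio
wins = toy[toy['attacker_outcome'] == 'win'].groupby('attacker_king').apply(len)
battles = toy.groupby('attacker_king').apply(len)
win_pct = (wins / battles).fillna(0.0)  # kings with no wins get 0.0
print(win_pct)  # Robb Stark: 0.5, Stannis Baratheon: 1.0
```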
Try some other methods as well in Pandas:
.mean() - gives you the average of some value (you have to designate the key-value in some cases)
.median() - returns the median value of an object
.min() - gives you the lowest value in that array
.fillna(0.0).astype(int) - this is a way to get rid of all the float objects in your dataset.
.describe() - gives you an overview of the object's data, according to counts, unique values, and data types
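A quick taste of a few of these methods on a toy frame with made-up numbers:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'attacker_size': [20000.0, 6000.0, np.nan],
                    'defender_size': [4000.0, np.nan, 10000.0]})

print(toy['attacker_size'].mean())    # 13000.0 -- NaN values are skipped
print(toy['defender_size'].median())  # 7000.0
print(toy['attacker_size'].min())     # 6000.0
clean = toy.fillna(0.0).astype(int)   # NaN -> 0, floats -> ints
print(clean.dtypes)                   # both columns are now integers
print(toy.describe())                 # count / mean / std / min / quartiles / max
```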
Now that you have a light understanding of how data analysis is done, let's create some visualizations!
Creating Data Visualizations in Python
Relying a lot on Matplotlib here, data visualizations allow us to better communicate and understand the information we're able to create through our analyses.
Let's try to do a few based on the questions we've already resolved so far. Let's create some bar graphs.
First, let's create a new object robb_off_viz that measures what's going on in our robb_off object, using two more methods:
* .groupby() - calculating the unique values in a particular key
* .len() - measuring them by their "length" or number of rows
End of explanation
"""
robb_off_viz.plot(kind='bar').set_ylabel('# of Battles')
"""
Explanation: Now, we can create a simple bar graph with the code below and setting the y label with a few more methods.
End of explanation
"""
robb_def_viz = robb_def.groupby('attacker_outcome').apply(len)
robb_def_viz.plot(kind='bar').set_ylabel('# of Battles')
"""
Explanation: Let's compare that with a plot for Robb Stark's defense. Remember, in this graph, Robb is the defender, so his "wins" are in the "loss" column below.
End of explanation
"""
attacker_win = df[df['attacker_outcome'] == 'win'].groupby('attacker_1').apply(len)
attacker_win.plot(kind='bar').set_ylabel('# of Victories')
"""
Explanation: We can interpret this data much more easily now with these visuals in a couple of ways:
Attacking is a far more effective means to victory than defending
Robb Stark is about as good at attacking as he is terrible at defending
Cool. Though looking at just two bars is a little lame. Let's compare some more data!
The code below creates a new object called attacker_win that groups and measures the victories according to the attacker.
Afterwards, the rest is a simple bar plot like before.
End of explanation
"""
x = robb_off['attacker_size']
y = robb_off['defender_size']
plt.scatter(x,y,color='red')
"""
Explanation: Well, that's interesting. Turns out that the Greyjoys and the Lannisters are more effective on the attack than the Starks.
Let's move onto another popular form of visualization: scatterplots.
Scatterploting in Python
I was told by a good friend that scatterplots are the best way to visualize data.
Let's see if he's right. Let's create one based on the battles that took place in this war, beginning with a research question:
Were Robb Stark's armies generally larger than the defenders while attacking?
Were Robb Stark's armies generally smaller than the attackers while defending?
Let's see if we can do all of this in one scatterplot.
Since we now know that Robb won more often as an attacker than defender, let's see if army size is a factor.
First, let's create our first plot
Let's plot the battles where Robb Stark is a attacking and make that the color red. See below:
End of explanation
"""
x = robb_def['defender_size']
y = robb_def['attacker_size']
plt.scatter(x,y,color='blue')
"""
Explanation: Now let's do the same thing with the battles where Robb Stark is defending and make that the color blue.
End of explanation
"""
x = robb_off['attacker_size']
y = robb_off['defender_size']
plt.scatter(x,y,color='red')
x = robb_def['defender_size']
y = robb_def['attacker_size']
plt.scatter(x,y,color='blue')
plt.title('When Starks Attack')
plt.xlabel('Stark Army')
plt.ylabel('Defender Army')
plt.xlim(0,21000) # x parameters - aim for homogeneous scales
plt.ylim(0,21000) # y parameters
plt.plot([0,21000],[0,21000],color="k") # line of equality
plt.show()
"""
Explanation: Hm, also interesting, but hard to see how it fits together. We have to run the code in a single Python block to make them all into one cool visual. I've also added some code to make a "line of equality" to give us a sense of what the Starks were truly up against.
End of explanation
"""
df.hist()
"""
Explanation: Check it out! Especially with the "line of equality," we can make the following conclusions:
The Starks typically faced opponents who had a larger army size than their own.
Can we make conclusions about army size AND victory? Not yet. We'll need to redo the data for that, but you should know everything you need to get started on that visualization.
Histograms in Python
One last little lesson on visualization is histograms, a popular way to show data if you have data with variables that occur frequently. Run the code below to see some of the potential histograms we can create.
End of explanation
"""
df.hist('year')
"""
Explanation: Hm, that's not all that meaningful to us as a group, but we can focus on one key, such as year.
End of explanation
"""
df.hist('attacker_size')
"""
Explanation: A little more helpful! Here we can see that many of the battles (20) happened in the year 299, with 7 happening the year before and 11 in the year after. (We can use Python to count exactly what these numbers are.)
Let's also check out how big the attacking armies were by exploring attacker_size.
End of explanation
"""
swirlingsand/deep-learning-foundations | rnns/intro-to-rnns/Anna_KaRNNa.ipynb | mit
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, with the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing-gradient problem, but the gradients can still grow without bound. To fix this, we can clip gradients that exceed some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can then use the new character to predict the next one, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
ES-DOC/esdoc-jupyterhub | notebooks/inm/cmip6/models/inm-cm5-0/land.ipynb | gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM5-0
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:04
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
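# ENUM with cardinality 1.N: the value must be one of the valid choices
# listed above, quoted exactly, e.g.:
# DOC.set_value("Darcian flow")
# (For N-ary properties, presumably call DOC.set_value once per selected choice.)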
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
#     "open shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
Luke035/dlnd-lessons | embedding/Skip-Gram_word2vec.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
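A tiny NumPy sketch (sizes and names here are illustrative, much smaller than the notebook's real vocabulary) of why the lookup replaces the matrix multiplication:

```python
import numpy as np

# Illustrative sizes; the notebook uses a vocabulary of tens of thousands
# of words and a few hundred hidden units.
vocab_size, n_hidden = 5, 3
embedding = np.arange(vocab_size * n_hidden, dtype=float).reshape(vocab_size, n_hidden)

word_idx = 2                  # integer-encoded word, e.g. "heart"
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1.0

# Multiplying the one-hot vector by the weight matrix...
via_matmul = one_hot @ embedding
# ...returns exactly the row of the matrix at the word's index.
via_lookup = embedding[word_idx]

print(np.array_equal(via_matmul, via_lookup))  # True
```

The lookup skips the wasted multiplications by zero, which is the whole point of the embedding layer.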
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
## Your code here
from collections import Counter

word_counter = Counter(int_words)

# Absolute count threshold (roughly equivalent to a relative-frequency
# threshold of t = 300 / len(int_words))
WORD_THRESHOLD = 300

# P(discard) for each word: 1 - sqrt(t / f(w)), using absolute counts
prob_dict = {}
for word in word_counter:
    prob = 1 - np.sqrt(WORD_THRESHOLD / word_counter[word])
    prob_dict[word] = prob

# np.random.rand() returns a value in [0, 1).
# prob_dict[word] is the probability that the word should be discarded,
# so 1 - prob_dict[word] is the probability that it should be kept.
# If prob_dict[word] is high (very likely to be discarded), 1 - prob_dict[word]
# is low, and the random draw rarely falls below the threshold needed
# to admit the word into the list.
train_words = []
for word in int_words:
    if np.random.rand() < (1 - prob_dict[word]):
        train_words.append(word)

len(train_words)
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
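To get a feel for the formula, here is a quick numeric check (the threshold and frequency values are illustrative, not taken from the text8 corpus):

```python
import numpy as np

t = 1e-5   # threshold parameter (illustrative value)
f = 0.05   # relative frequency of a very common word such as "the"

p_discard = 1 - np.sqrt(t / f)
print(round(p_discard, 4))  # 0.9859 -> the word is discarded almost every time
```

Rare words, by contrast, have $f(w_i) \le t$, which makes $P(w_i) \le 0$, so they are always kept.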
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
from random import randint
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    # Sample R uniformly from [1, window_size], as in Mikolov et al.
    random_size = randint(1, window_size)
    # Clip the start index at 0; slicing past the right end is safe in Python.
    start_idx = idx - random_size if idx - random_size > 0 else 0
    # Use a set to avoid repeated words in the target. The second slice
    # starts at idx + 1 to exclude the input word itself.
    context = set(words[start_idx:idx] + words[idx + 1:(idx + 1) + random_size])
    return context
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
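As a sanity check of the slicing involved, here is a self-contained, deterministic sketch (it fixes R instead of sampling it randomly, and the helper name is mine, not from the exercise):

```python
def window_context(words, idx, R):
    # Clip the left edge at zero so a negative slice start doesn't wrap
    # around; slicing past the right end is already safe in Python.
    start = max(idx - R, 0)
    return words[start:idx] + words[idx + 1:idx + 1 + R]

toy = ['a', 'b', 'c', 'd', 'e', 'f']
print(window_context(toy, 2, 2))  # ['a', 'b', 'd', 'e']
print(window_context(toy, 0, 2))  # ['b', 'c']
```

The real implementation only needs to draw R at random per call and optionally deduplicate the result.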
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
test_x, test_y = next(get_batches(int_words, batch_size=128, window_size=5))
print(np.array(test_x).shape)
print(np.array(test_y).shape)
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, <b>so I make one row per input-target pair</b>. This is a generator function, by the way, which helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
#Batch size may vary
inputs = tf.placeholder(tf.int32, [None], name='inputs')
#Batch size and window may vary
labels = tf.placeholder(tf.int32, [None, None], name='labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 300 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.truncated_normal((n_vocab, n_embedding),stddev=0.1))# create embedding weight matrix here
#Lookup in embedding di inputs
embed = tf.nn.embedding_lookup(params=embedding, ids=inputs)# use tf.nn.embedding_lookup to get the hidden layer output
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
#Guarda doc sampled_softmax_loss, i weights devono essere nella forma [num_classes, dim]
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding),stddev=0.01))# create softmax weight matrix here
#I label sono sempre i vocaboli attorno
softmax_b = tf.Variable(tf.zeros(n_vocab))# create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(weights=softmax_w,
biases=softmax_b,
labels=labels,
inputs=embed,
num_sampled=n_sampled,
num_classes=n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
import random
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
import sys
import time
# quick check of the carriage-return progress display used in the training loop
for i in range(1000):
    sys.stdout.write("\r" + str(i))
    sys.stdout.flush()
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
sys.stdout.write("\rEpoch {}/{} ".format(e, epochs) +
"Iteration: {} ".format(iteration) +
"Avg. Training loss: {:.4f} ".format(loss/100) +
"{:.4f} sec/batch".format((end-start)/100))
sys.stdout.flush()
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
tbphu/fachkurs_master_2016 | 07_modelling/20151201_ODEcomplete.ipynb | mit | import numpy as np
# 1. initial conditions
S0 = 500. # initial population
Z0 = 0 # initial zombie population
R0 = 0 # initial death population
y0 = [S0, Z0, R0] # initial condition vector
# 2. parameter values
P = 0 # birth rate
d = 0.0001 # 'natural' death percent (per day)
B = 0.0095 # transmission percent (per day)
G = 0.0001 # resurect percent (per day)
A = 0.001 # destroy percent (per day)
# 3. simulation time
start = 0.0 # days
end = 15. # days
number_time_points = 1000
t = np.linspace(start, end, number_time_points) # time grid, 1000 steps or data points
"""
Explanation: Ordinary Differential Equations - ODE
or 'How to Model the Zombie Apocalypse'
Jens Hahn - 01/12/2015
Content taken from:
Scipy Docs at http://scipy-cookbook.readthedocs.org/items/Zombie_Apocalypse_ODEINT.html
Munz et al. (2009): http://mysite.science.uottawa.ca/rsmith43/Zombies.pdf
Introduction
What is an ODE
Differential equations can be used to describe the time-dependent behaviour of a variable.
$$\frac{\text{d}\vec{x}}{\text{d}t} = \vec{f}(\vec{x}, t)$$
In our case the variable stands for the number of humans in an infected (zombie) or uninfected population.
Of course they can also be used to describe the change of concentrations in a cell or other continuous or quasi-continuous quantity.
In general, a first order ODE has two parts, the increasing (birth, formation,...) and the decreasing (death, degradation, ...) part:
$$\frac{\text{d}\vec{x}}{\text{d}t} = \sum \text{Rates}_{\text{production}} - \sum \text{Rates}_{\text{loss}}$$
You probably already know ways to solve a differential equation algebraically by 'separation of variables' (Trennung der Variablen) in the homogeneous case or 'variation of parameters' (Variation der Konstanten) in the inhomogeneous case. Here, we want to discuss the use of numerical methods to solve your ODE system.
Solve the model
The zombie apokalypse model
Let's have a look at our equations:
Number of susceptible victims $S$:
$$\frac{\text{d}S}{\text{d}t} = \text{P} - \text{B}\times S \times Z - \text{d}\times S$$
Number of zombies $Z$:
$$\frac{\text{d}Z}{\text{d}t} = \text{B}\times S \times Z + \text{G}\times R - \text{A}\times S \times Z$$
Number of people "killed" $R$:
$$\frac{\text{d}R}{\text{d}t} = \text{d}\times S + \text{A}\times S \times Z - \text{G}\times R$$
Parameters:
P: the population birth rate
d: the chance of a natural death
B: the chance the “zombie disease” is transmitted (an alive person becomes a zombie)
G: the chance a dead person is resurrected into a zombie
A: the chance a zombie is totally destroyed by a human
Let's start
Before we start the simulation of our model, we have to define our system.
We start with our static information:
1. Initial conditions for our variables
2. Values of the paramters
3. Simulation time
4. Number of time points at which we want to have the values for our variables (the time grid). Use numpy!!
End of explanation
"""
# function 'f' to solve the system dy/dt = f(y, t)
def f(y, t):
Si = y[0]
Zi = y[1]
Ri = y[2]
# the model equations (see Munz et al. 2009)
f0 = P - B*Si*Zi - d*Si
f1 = B*Si*Zi + G*Ri - A*Si*Zi
f2 = d*Si + A*Si*Zi - G*Ri
return [f0, f1, f2]
"""
Explanation: In the second step, we write a small function f, that receives a list of the current values of our variables x and the current time t. The function has to evaluate the equations of our system or $\frac{\text{d}\vec{x}}{\text{d}t}$, respectively. Afterwards, it returns the values of the equations as another list.
Important
Since this function f is used by the solver, we are not allowed to change the input (arguments) or output (return value) of this function.
End of explanation
"""
# zombie apocalypse modeling
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import odeint
# solve the DEs
result = odeint(f, y0, t)
S = result[:, 0]
Z = result[:, 1]
R = result[:, 2]
# plot results
plt.figure()
plt.plot(t, S, label='Humans')
plt.plot(t, Z, label='Zombies')
plt.plot(t, R, label='Dead Humans')
plt.xlabel('Days from outbreak')
plt.ylabel('Population')
plt.title('Zombie Apocalypse - No Init. Dead Pop.; No New Births.')
plt.ylim([0,500])
plt.legend(loc=0)
"""
Explanation: Last but not least, we need to import and call our solver. The result will be a matrix with our time courses as columns and the values at the specified time points. Since we have a values for every time point and every species, we can directly plot the results using matplotlib.
End of explanation
"""
|
nsaunier/CIV8760 | Python/tutoriel-python.ipynb | mit | # esprit de Python
import this
"""
Explanation: << Table des matières
Introduction
Objectifs
Se familiariser avec Python et les Jupyter Notebook
comprendre les exemples présentés tout au long du cours, en traitement de données, données spatiales, analyse statistique et fouille de données
Commencer avec quelques exemples
de jeux de données et leurs attributs
de notions d'algorithmes: structures de données et de contrôle
Bonnes pratiques
Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, et al. (2014) Best Practices for Scientific Computing. PLoS Biol 12(1): e1001745. https://doi.org/10.1371/journal.pbio.1001745
* Write Programs for People, Not Computers
* Let the Computer Do the Work
* Make Incremental Changes
* Don't Repeat Yourself (or Others)
* Plan for Mistakes
* Optimize Software Only after It Works Correctly
* Document Design and Purpose, Not Mechanics
* Collaborate
Principes
Automatiser, "soyez fainéant !": un ordinateur est idiot et idéal pour les tâches répétitives: permet de traiter de larges ensembles de données
Réutiliser votre code
Devenir autonome: résoudre un autre problème, apprendre un autre langage
Apprendre à contrôler son ordinateur: au lieu de se limiter aux programmes pensés et écrits par d'autres
Program or be programmed (Douglas Rushkoff, http://rushkoff.com/program/)
Comprendre ce qu'il est possible de programmer, et comment le faire: le but n'est pas d'apprendre Python, mais un langage de programmation moderne et puissant
Assurer la répétabilité et traçabilité de vos traitements de données pour faire du travail de bonne qualité
Méthode
Développement itératif: faire la chose la plus simple qui peut marcher ("do the simplest thing that could possibly work") et raffiner (re-factoriser)
méthodes de développement logiciel agiles
"premature optimization is the root of all evil" (Don Knuth http://en.wikipedia.org/wiki/Program_optimization)
tester interactivement le code dans l'interpréteur, manipulation directe des variables
Ne pas se répéter: éviter la duplication de code
Style de programmation
choisir un style d'écriture et s'y tenir
utiliser des noms explicites, éviter les commentaires: si les commentaires sont trop longs, cela veut dire que le code est compliqué et pourrait être simplifié
Attention aux détails et patience
reconnaître les différences
il faut rester rationnel
l'ordinateur est idiot et suit vos instructions les unes après les autres: s'il y a un bug, c'est vous qui l'avez créé
Python
Avantages
Orienté-objet: ré-utilisation du code
Libre ("open source"): gratuit à utiliser et distribuer
Flexibilité
Largement utilisé et bonne documentation: automatiser VISSIM, Aimsun, QGIS et ArcGIS
Plus facile à lire et apprendre que d'autres langages: relire et reprendre votre code dans 6 mois est comme écrire pour être relu par un autre être humain
Multi-plateforme, langage "glue", "Bindings" pour de nombreux langages et bibliothèques
Bibliothèques scientifiques, visualisation, SIG: scipy, numpy, matplotlib, pandas, scikit-image, scikit-learn, shapely, geopandas
Faiblesses: vitesse (langage interprété), manque de certains outils de calcul numérique et statistique (matlab)
Description
Langage interprété et interpréteur
Version 3 vs 2
Code = fichier texte, extension .py
Les commentaires commencent par #
Scripts et modules (bibliothèques)
Large bibliothèque standard ("standard library", "batteries included") http://docs.python.org/library/
End of explanation
"""
print("Hello World")
s = "Hello World"
print(s)
reponse = input('Bonjour, comment vous appelez-vous ? ')
print('Bonjour '+reponse)
%run ./script01.py # commande magique ipython
"""
Explanation: Ligne de commande et interpréteur
Ne pas avoir peur d'écrire
L'interpréteur est une calculatrice
Simplifié par la complétion automatique et l'historique des commandes (+ facilités syntaxiques)
Ligne de commande avancée: IPython
Évaluer des expressions et des variables
help, ?
End of explanation
"""
type('Hello')
type(4)
type(4.5)
type(True)
type([])
type([2,3])
type({})
type({1: 'sdfasd', 'g': [1,2]})
type((2,3))
# enregistrement vide
class A:
pass
a = A()
a.x = 10
a.y = -2.3
type(a)
a = 2
a = 2.3
a = 'hello' # les variables n'ont pas de type
#b = a + 3 # les valeurs ont un type
a = 4
b = a + 3
b
a
# conversions entre types
int('3')
str(3)
# opération
i=1
i == 1
i+=1
i == 1
a
i == 2
"""
Explanation: Jupyter Notebook
Environnement computationnel interactif web pour créer des carnets ("notebooks")
document JSON contenant une liste ordonnée de cellule d'entrée/sortie qui peuvent contenir du code, du texte, des formules mathématiques, des graphiques, etc.
Versions Python, R, Julia
Les notebooks peuvent être convertis dans plusieurs formats standards ouverts (HTML, diapositifs, LaTeX, PDF, ReStructuredText, Markdown, Python)
Processus de travail similaire à l'interpréteur
Outil de communication
Types de données de base
booléen (binaire)
numérique: entier, réel
chaîne de caractère
liste, tuple, ensembles
dictionnaire
End of explanation
"""
print('À quelle vitesse roule le véhicule (km/h) ?')
vitesse = None
temps = None
distance = None
print('Le temps de freinage est', temps, 'et la distance de freinage est', distance)
# listes
a = list(range(10))
print(a)
print(a[0]) # index commence à 0
print(a[-1])
print(a[-2])
a[2] = -10
print(a)
# méthodes
a.sort()
print(a)
a.append(-100)
print(a)
del a[0]
print(a)
a[3] = 'hello'
print(a)
# appartenance
1 in a
2 in a
# références
b = list(range(10))
c = b
c[3] = 'bug'
print(b)
##
a = A()
b = a
a.x = 10
print(b.x)
print(a == b) # référence au même objet
# "list comprehensions"
a = list(range(10))
doubles = [2*x for x in a]
print(doubles)
carres = [x*x for x in a]
print(carres)
"""
Explanation: Exercice
Écrire un programme qui demande une vitesse et calcule le temps et la distance nécessaire pour s'arrêter. Le temps de perception réaction est 1.5 s et la décélération du véhicule est -1 m/s2.
End of explanation
"""
# boucles
a = list(range(5))
for x in a:
print(x)
for i in range(len(a)):
print(a[i])
i = 0
while i<len(a):
print(a[i])
i += 1
# test
from numpy.random import random_sample
b = random_sample(10)
print(b)
for x in b:
if x > 0.5:
print(x)
else:
print('Nombre plus petit que 0.5', x)
# list comprehensions avec test
c = [x for x in b if x>0.5]
print(c)
"""
Explanation: Structures de contrôle
Séquences d’opérations
Conditionnelles: test si [condition] alors [opération1] (sinon [opération2])
Boucles:
tant que [condition] [opération]
itérateur (compteur) pour [ensemble] [opération]
End of explanation
"""
def test():
x = 0
test() == None
def vitesseMetreSecEnKmH(v):
return v*3.6
x = 1
print("Un piéton marchant à {} m/s marche à {} km/h".format(x,vitesseMetreSecEnKmH(x)))
"""
Explanation: Fonctions
Les fonctions sont fondamentales pour éviter de répéter du code. Les fonctions ont des arguments (peut être vide) et retourne quelque chose (None si pas de retour explicite). Les arguments de la fonction peuvent avoir des valeurs par défaut (derniers arguments).
End of explanation
"""
import urllib.request
import zipfile
import io
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
# exemples numpy
a = np.arange(10) # similaire a range(10), retourne une array
b = np.zeros((4,5))
c = np.ones((4,5))
a = np.random.random_sample(10)
# éviter les boucles, extraire des sous-vecteurs (comme avec matlab)
b = a>0.5
print(b)
c = a[b]
print(a)
print(c)
# charger des matrices
data = np.loadtxt('../donnees/vitesse-debit.txt')
plt.plot(data[:,0], data[:,1], 'o')
data.mean(0)
"""
Explanation: Exercice
Modifier le programme de calcul du temps et de la distance de freinage pour donner les réponses pour différentes valeurs de décélération, entre -0.5 et -6 m/s^2 (avec un incrément de -0.5 m/s^2).
Faire un graphique du temps (ou de la distance) de freinage en fonction de la vitesse pour différentes valeurs de décélération
Transformer le programme pour demander à l'utilisateur s'il veut continuer avec d'autres valeurs de vitesses, et ré-itérer la question et les calculs (tant que l'utilisateur veut continuer).
Compter le nombre de chiffres plus petits que 0.5 dans la liste b.
Bibliothèques scientifiques
numpy: vecteurs, matrices, etc.
scipy: fonctions scientifiques, en particulier statistiques
matplotlib: graphiques et visualisation
pandas: structures de données (interface et similarité avec SQL)
statsmodels: modèles statistiques
scikit-learn: apprentissage automatique
Toutes les bibliothèques doivent être importées avec la commande import. La bibliothèque peut être renommée avec as (par ex. dans import numpy as np). Une classe ou fonction spécifique peut être importée si importée spécifiquement avec la commande from (par ex. from numpy.random import random_sample), et on peut importer plusieurs classes ou fonction d'un coup en les ajoutant les unes après les autres séparées par des virgules.
End of explanation
"""
# jeu de données de voitures http://lib.stat.cmu.edu/DASL/Datafiles/Cars.html
text = '''Country Car MPG Weight Drive_Ratio Horsepower Displacement Cylinders
U.S. Buick Estate Wagon 16.9 4.360 2.73 155 350 8
U.S. Ford Country Squire Wagon 15.5 4.054 2.26 142 351 8
U.S. Chevy Malibu Wagon 19.2 3.605 2.56 125 267 8
U.S. Chrysler LeBaron Wagon 18.5 3.940 2.45 150 360 8
U.S. Chevette 30.0 2.155 3.70 68 98 4
Japan Toyota Corona 27.5 2.560 3.05 95 134 4
Japan Datsun 510 27.2 2.300 3.54 97 119 4
U.S. Dodge Omni 30.9 2.230 3.37 75 105 4
Germany Audi 5000 20.3 2.830 3.90 103 131 5
Sweden Volvo 240 GL 17.0 3.140 3.50 125 163 6
Sweden Saab 99 GLE 21.6 2.795 3.77 115 121 4
France Peugeot 694 SL 16.2 3.410 3.58 133 163 6
U.S. Buick Century Special 20.6 3.380 2.73 105 231 6
U.S. Mercury Zephyr 20.8 3.070 3.08 85 200 6
U.S. Dodge Aspen 18.6 3.620 2.71 110 225 6
U.S. AMC Concord D/L 18.1 3.410 2.73 120 258 6
U.S. Chevy Caprice Classic 17.0 3.840 2.41 130 305 8
U.S. Ford LTD 17.6 3.725 2.26 129 302 8
U.S. Mercury Grand Marquis 16.5 3.955 2.26 138 351 8
U.S. Dodge St Regis 18.2 3.830 2.45 135 318 8
U.S. Ford Mustang 4 26.5 2.585 3.08 88 140 4
U.S. Ford Mustang Ghia 21.9 2.910 3.08 109 171 6
Japan Mazda GLC 34.1 1.975 3.73 65 86 4
Japan Dodge Colt 35.1 1.915 2.97 80 98 4
U.S. AMC Spirit 27.4 2.670 3.08 80 121 4
Germany VW Scirocco 31.5 1.990 3.78 71 89 4
Japan Honda Accord LX 29.5 2.135 3.05 68 98 4
U.S. Buick Skylark 28.4 2.670 2.53 90 151 4
U.S. Chevy Citation 28.8 2.595 2.69 115 173 6
U.S. Olds Omega 26.8 2.700 2.84 115 173 6
U.S. Pontiac Phoenix 33.5 2.556 2.69 90 151 4
U.S. Plymouth Horizon 34.2 2.200 3.37 70 105 4
Japan Datsun 210 31.8 2.020 3.70 65 85 4
Italy Fiat Strada 37.3 2.130 3.10 69 91 4
Germany VW Dasher 30.5 2.190 3.70 78 97 4
Japan Datsun 810 22.0 2.815 3.70 97 146 6
Germany BMW 320i 21.5 2.600 3.64 110 121 4
Germany VW Rabbit 31.9 1.925 3.78 71 89 4
'''
s = io.StringIO(text)
data = pd.read_csv(s, delimiter = '\t')
#data.to_csv('cars.txt', index=False)
print(data.info())
data
#data.describe(include = 'all')
data.describe()
data['Country'].value_counts().plot(kind='bar')
data[['Car', 'Country']].describe()
# exemple de vecteur ou enregistrement
data.loc[0]
"""
Explanation: Exercice
Sachant que la première colonne du fichier vitesse-debit.txt est la vitesse moyenne et la seconde le débit, calculer la densité (égale au débit divisé par la vitesse) et tracer le graphique de la vitesse en fonction de la densité.
Exemple de fichier csv avec en-tête avec différents types de données
Cet exemple utilise la structure de données DataFrame de la bibliothèque pandas.
End of explanation
"""
plt.scatter(data.Weight, data.MPG)
"""
Explanation: Exercice
Quel est le type des attributs?
Proposer une méthode pour déterminer le pays avec les véhicules les plus économes
End of explanation
"""
# comptages vélo
filename, message = urllib.request.urlretrieve('http://donnees.ville.montreal.qc.ca/dataset/f170fecc-18db-44bc-b4fe-5b0b6d2c7297/resource/6caecdd0-e5ac-48c1-a0cc-5b537936d5f6/download/comptagevelo20162.csv')
data = pd.read_csv(filename)
print(data.info())
plt.plot(data['CSC (Côte Sainte-Catherine)'])
# 01/01/16 était un vendredi, le 4 était un lundi
cscComptage = np.array(data['CSC (Côte Sainte-Catherine)'].tolist()[4:4+51*7]).reshape(51,7)
for r in cscComptage:
plt.plot(r)
plt.xticks(range(7),['lundi', 'mardi', 'mercredi', 'jeudi', 'vendredi', 'samedi', 'dimanche'])
plt.ylabel('Nombre de cyclistes')
plt.imshow(cscComptage, interpolation = 'none', aspect = 'auto')
plt.colorbar()
"""
Explanation: Exemple de fichier de comptage de vélos
Source http://donnees.ville.montreal.qc.ca/dataset/velos-comptage
End of explanation
"""
# données bixi
filename, message = urllib.request.urlretrieve('https://sitewebbixi.s3.amazonaws.com/uploads/docs/biximontrealrentals2018-96034e.zip')
zip=zipfile.ZipFile(filename)
data = pd.read_csv(zip.open(zip.namelist()[0]))
print(data.info())
print(data.describe())
# réfléchir aux types : quel sens y a-t-il à moyenner des codes de station ?
"""
Explanation: Exercice
Les semaines sont-elles bien représentées?
À quoi correspondent les bandes blanches dans l'image?
Données bixi
Source https://bixi.com/fr/page-27
End of explanation
"""
# données météo d'Environnement Canada
#filename, message = urllib.request.urlretrieve('http://climate.weather.gc.ca/climate_data/bulk_data_f.html?format=csv&stationID=10761&Year=2017&Month=1&Day=1&timeframe=2&submit=Download+Data')
filename, message = urllib.request.urlretrieve('http://climate.weather.gc.ca/climate_data/bulk_data_f.html?format=csv&stationID=10761&Year=2017&Month=7&Day=1&timeframe=1&submit=Download+Data')
data = pd.read_csv(filename, delimiter = ',')
print(data.info())
plt.plot(data['Hum. rel (%)'])
plt.show()
#data.describe()
# plt.plot
"""
Explanation: Données météo d'Environnement Canada
End of explanation
"""
|
Lolcroc/AI | ML1/lab2_original.ipynb | gpl-3.0 | NAME = ""
NAME2 = ""
NAME3 = ""
EMAIL = ""
EMAIL2 = ""
EMAIL3 = ""
"""
Explanation: Save this file as studentid1_studentid2_lab#.ipynb
(Your student-id is the number shown on your student card.)
E.g. if you work with 3 people, the notebook should be named:
12301230_3434343_1238938934_lab1.ipynb.
This will be parsed by a regexp, so please double check your filename.
Before you turn this problem in, please make sure everything runs correctly. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your names and email adresses below.
End of explanation
"""
%pylab inline
plt.rcParams["figure.figsize"] = [9,5]
"""
Explanation: Lab 2: Classification
Machine Learning 1, September 2017
Notes on implementation:
You should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact your teaching assistant.
Please write your answers right below the questions.
Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.
Use the provided test cells to check if your answers are correct
Make sure your output and plots are correct before handing in your assignment with Kernel -> Restart & Run All
$\newcommand{\bx}{\mathbf{x}}$
$\newcommand{\bw}{\mathbf{w}}$
$\newcommand{\bt}{\mathbf{t}}$
$\newcommand{\by}{\mathbf{y}}$
$\newcommand{\bm}{\mathbf{m}}$
$\newcommand{\bb}{\mathbf{b}}$
$\newcommand{\bS}{\mathbf{S}}$
$\newcommand{\ba}{\mathbf{a}}$
$\newcommand{\bz}{\mathbf{z}}$
$\newcommand{\bv}{\mathbf{v}}$
$\newcommand{\bq}{\mathbf{q}}$
$\newcommand{\bp}{\mathbf{p}}$
$\newcommand{\bh}{\mathbf{h}}$
$\newcommand{\bI}{\mathbf{I}}$
$\newcommand{\bX}{\mathbf{X}}$
$\newcommand{\bT}{\mathbf{T}}$
$\newcommand{\bPhi}{\mathbf{\Phi}}$
$\newcommand{\bW}{\mathbf{W}}$
$\newcommand{\bV}{\mathbf{V}}$
End of explanation
"""
from sklearn.datasets import fetch_mldata
# Fetch the data
mnist = fetch_mldata('MNIST original')
data, target = mnist.data, mnist.target.astype('int')
# Shuffle
indices = np.arange(len(data))
np.random.seed(123)
np.random.shuffle(indices)
data, target = data[indices].astype('float32'), target[indices]
# Normalize the data between 0.0 and 1.0:
data /= 255.
# Split
x_train, x_valid, x_test = data[:50000], data[50000:60000], data[60000: 70000]
t_train, t_valid, t_test = target[:50000], target[50000:60000], target[60000: 70000]
"""
Explanation: Part 1. Multiclass logistic regression
Scenario: you have a friend with one big problem: she's completely blind. You decided to help her: she has a special smartphone for blind people, and you are going to develop a mobile phone app that can do machine vision using the mobile camera: converting a picture (from the camera) to the meaning of the image. You decide to start with an app that can read handwritten digits, i.e. convert an image of handwritten digits to text (e.g. it would enable her to read precious handwritten phone numbers).
A key building block for such an app would be a function predict_digit(x) that returns the digit class of an image patch $\bx$. Since hand-coding this function is highly non-trivial, you decide to solve this problem using machine learning, such that the internal parameters of this function are automatically learned using machine learning techniques.
The dataset you're going to use for this is the MNIST handwritten digits dataset (http://yann.lecun.com/exdb/mnist/). You can download the data with scikit learn, and load it as follows:
End of explanation
"""
def plot_digits(data, num_cols, targets=None, shape=(28,28)):
num_digits = data.shape[0]
num_rows = int(num_digits/num_cols)
for i in range(num_digits):
plt.subplot(num_rows, num_cols, i+1)
plt.imshow(data[i].reshape(shape), interpolation='none', cmap='Greys')
if targets is not None:
plt.title(int(targets[i]))
plt.colorbar()
plt.axis('off')
plt.tight_layout()
plt.show()
plot_digits(x_train[0:40000:5000], num_cols=4, targets=t_train[0:40000:5000])
"""
Explanation: MNIST consists of small 28 by 28 pixel images of written digits (0-9). We split the dataset into a training, validation and testing arrays. The variables x_train, x_valid and x_test are $N \times M$ matrices, where $N$ is the number of datapoints in the respective set, and $M = 28^2 = 784$ is the dimensionality of the data. The second set of variables t_train, t_valid and t_test contain the corresponding $N$-dimensional vector of integers, containing the true class labels.
Here's a visualisation of 8 digits from the training set:
End of explanation
"""
# 1.1.2 Compute gradient of log p(t|x;w,b) wrt w and b
def logreg_gradient(x, t, w, b):
# YOUR CODE HERE
raise NotImplementedError()
return logp[:,t].squeeze(), dL_dw, dL_db.squeeze()
np.random.seed(123)
# scalar, 10 X 768 matrix, 10 X 1 vector
w = np.random.normal(size=(28*28,10), scale=0.001)
# w = np.zeros((784,10))
b = np.zeros((10,))
# test gradients, train on 1 sample
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
print("Test gradient on one point")
print("Likelihood:\t", logpt)
print("\nGrad_W_ij\t",grad_w.shape,"matrix")
print("Grad_W_ij[0,152:158]=\t", grad_w[152:158,0])
print("\nGrad_B_i shape\t",grad_b.shape,"vector")
print("Grad_B_i=\t", grad_b.T)
print("i in {0,...,9}; j in M")
assert logpt.shape == (), logpt.shape
assert grad_w.shape == (784, 10), grad_w.shape
assert grad_b.shape == (10,), grad_b.shape
# It's always good to check your gradient implementations with finite difference checking:
# Scipy provides the check_grad function, which requires flat input variables.
# So we write two helper functions that provide can compute the gradient and output with 'flat' weights:
from scipy.optimize import check_grad
np.random.seed(123)
# scalar, 10 X 768 matrix, 10 X 1 vector
w = np.random.normal(size=(28*28,10), scale=0.001)
# w = np.zeros((784,10))
b = np.zeros((10,))
def func(w):
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)
return logpt
def grad(w):
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)
return grad_w.flatten()
finite_diff_error = check_grad(func, grad, w.flatten())
print('Finite difference error grad_w:', finite_diff_error)
assert finite_diff_error < 1e-3, 'Your gradient computation for w seems off'
def func(b):
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
return logpt
def grad(b):
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
return grad_b.flatten()
finite_diff_error = check_grad(func, grad, b)
print('Finite difference error grad_b:', finite_diff_error)
assert finite_diff_error < 1e-3, 'Your gradient computation for b seems off'
"""
Explanation: In multiclass logistic regression, the conditional probability of class label $j$ given the image $\bx$ for some datapoint is given by:
$ \log p(t = j \;|\; \bx, \bb, \bW) = \log q_j - \log Z$
where $\log q_j = \bw_j^T \bx + b_j$ (the log of the unnormalized probability of the class $j$), and $Z = \sum_k q_k$ is the normalizing factor. $\bw_j$ is the $j$-th column of $\bW$ (a matrix of size $784 \times 10$) corresponding to the class label, $b_j$ is the $j$-th element of $\bb$.
Given an input image, the multiclass logistic regression model first computes the intermediate vector $\log \bq$ (of size $10 \times 1$), using $\log q_j = \bw_j^T \bx + b_j$, containing the unnormalized log-probabilities per class.
The unnormalized probabilities are then normalized by $Z$ such that $\sum_j p_j = \sum_j \exp(\log p_j) = 1$. This is done by $\log p_j = \log q_j - \log Z$ where $Z = \sum_i \exp(\log q_i)$. This is known as the softmax transformation, and is also used as the last layer of many classification neural network models, to ensure that the output of the network is a normalized distribution, regardless of the values of the second-to-last layer ($\log \bq$).
Warning: when computing $\log Z$, you are likely to encounter numerical problems. Save yourself countless hours of debugging and learn the log-sum-exp trick.
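The trick itself fits in a few lines. Here is a minimal sketch: subtract the maximum before exponentiating, so the largest term becomes $\exp(0) = 1$ and nothing overflows:

```python
import numpy as np

def log_sum_exp(logq):
    # log(sum_j exp(logq_j)) without overflow: factor out the
    # largest term, so the biggest exponent becomes exp(0) = 1
    a = np.max(logq)
    return a + np.log(np.sum(np.exp(logq - a)))

# the naive np.log(np.sum(np.exp(logq))) would overflow to inf here
logq = np.array([1000.0, 1000.0, 999.0])
logZ = log_sum_exp(logq)
```

The same identity holds for any constant, but the maximum is the conventional choice because it guarantees every exponent is at most zero.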
The network's output $\log \bp$ of size $10 \times 1$ then contains the conditional log-probabilities $\log p(t = j \;|\; \bx, \bb, \bW)$ for each digit class $j$. In summary, the computations are done in this order:
$\bx \rightarrow \log \bq \rightarrow Z \rightarrow \log \bp$
Given some dataset with $N$ independent, identically distributed datapoints, the log-likelihood is given by:
$ \mathcal{L}(\bb, \bW) = \sum_{n=1}^N \mathcal{L}^{(n)}$
where we use $\mathcal{L}^{(n)}$ to denote the partial log-likelihood evaluated over a single datapoint. It is important to see that the log-probability of the class label $t^{(n)}$ given the image, is given by the $t^{(n)}$-th element of the network's output $\log \bp$, denoted by $\log p_{t^{(n)}}$:
$\mathcal{L}^{(n)} = \log p(t = t^{(n)} \;|\; \bx = \bx^{(n)}, \bb, \bW) = \log p_{t^{(n)}} = \log q_{t^{(n)}} - \log Z^{(n)}$
where $\bx^{(n)}$ and $t^{(n)}$ are the input (image) and class label (integer) of the $n$-th datapoint, and $Z^{(n)}$ is the normalizing constant for the distribution over $t^{(n)}$.
1.1 Gradient-based stochastic optimization
1.1.1 Derive gradient equations (20 points)
Derive the equations for computing the (first) partial derivatives of the log-likelihood w.r.t. all the parameters, evaluated at a single datapoint $n$.
You should start deriving the equations for $\frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j}$ for each $j$. For clarity, we'll use the shorthand $\delta^q_j = \frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j}$.
For $j = t^{(n)}$:
$
\delta^q_j
= \frac{\partial \mathcal{L}^{(n)}}{\partial \log p_j}
\frac{\partial \log p_j}{\partial \log q_j}
+ \frac{\partial \mathcal{L}^{(n)}}{\partial \log Z}
\frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
= 1 \cdot 1 - \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
= 1 - \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
$
For $j \neq t^{(n)}$:
$
\delta^q_j
= \frac{\partial \mathcal{L}^{(n)}}{\partial \log Z}
\frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
= - \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
$
Complete the above derivations for $\delta^q_j$ by further developing $\frac{\partial \log Z}{\partial Z}$ and $\frac{\partial Z}{\partial \log q_j}$. Both are quite simple. For these it doesn't matter whether $j = t^{(n)}$ or not.
For $j = t^{(n)}$:
\begin{align}
\delta^q_j
&=
\end{align}
For $j \neq t^{(n)}$:
\begin{align}
\delta^q_j
&=
\end{align}
YOUR ANSWER HERE
Given your equations for computing the gradients $\delta^q_j$ it should be quite straightforward to derive the equations for the gradients of the parameters of the model, $\frac{\partial \mathcal{L}^{(n)}}{\partial W_{ij}}$ and $\frac{\partial \mathcal{L}^{(n)}}{\partial b_j}$. The gradients for the biases $\bb$ are given by:
$
\frac{\partial \mathcal{L}^{(n)}}{\partial b_j}
= \frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j}
\frac{\partial \log q_j}{\partial b_j}
= \delta^q_j
\cdot 1
= \delta^q_j
$
The equation above gives the derivative of $\mathcal{L}^{(n)}$ w.r.t. a single element of $\bb$, so the vector $\nabla_\bb \mathcal{L}^{(n)}$ with all derivatives of $\mathcal{L}^{(n)}$ w.r.t. the bias parameters $\bb$ is:
$
\nabla_\bb \mathcal{L}^{(n)} = \mathbf{\delta}^q
$
where $\mathbf{\delta}^q$ denotes the vector of size $10 \times 1$ with elements $\mathbf{\delta}_j^q$.
The (not fully developed) equation for computing the derivative of $\mathcal{L}^{(n)}$ w.r.t. a single element $W_{ij}$ of $\bW$ is:
$
\frac{\partial \mathcal{L}^{(n)}}{\partial W_{ij}} =
\frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j}
\frac{\partial \log q_j}{\partial W_{ij}}
= \mathbf{\delta}_j^q
\frac{\partial \log q_j}{\partial W_{ij}}
$
What is $\frac{\partial \log q_j}{\partial W_{ij}}$? Complete the equation above.
If you want, you can give the resulting equation in vector format ($\nabla_{\bw_j} \mathcal{L}^{(n)} = ...$), like we did for $\nabla_\bb \mathcal{L}^{(n)}$.
YOUR ANSWER HERE
1.1.2 Implement gradient computations (10 points)
Implement the gradient calculations you derived in the previous question. Write a function logreg_gradient(x, t, w, b) that returns the gradients $\nabla_{\bw_j} \mathcal{L}^{(n)}$ (for each $j$) and $\nabla_{\bb} \mathcal{L}^{(n)}$, i.e. the first partial derivatives of the log-likelihood w.r.t. the parameters $\bW$ and $\bb$, evaluated at a single datapoint (x, t).
The computation will contain roughly the following intermediate variables:
$
\log \bq \rightarrow Z \rightarrow \log \bp\,,\, \mathbf{\delta}^q
$
followed by computation of the gradient vectors $\nabla_{\bw_j} \mathcal{L}^{(n)}$ (contained in a $784 \times 10$ matrix) and $\nabla_{\bb} \mathcal{L}^{(n)}$ (a $10 \times 1$ vector).
For maximum points, ensure the function is numerically stable.
End of explanation
"""
def sgd_iter(x_train, t_train, W, b):
# YOUR CODE HERE
raise NotImplementedError()
return logp_train, W, b
# Sanity check:
np.random.seed(1243)
w = np.zeros((28*28, 10))
b = np.zeros(10)
logp_train, W, b = sgd_iter(x_train[:5], t_train[:5], w, b)
"""
Explanation: 1.1.3 Stochastic gradient descent (10 points)
Write a function sgd_iter(x_train, t_train, w, b) that performs one iteration of stochastic gradient descent (SGD) and returns the new weights. It should go through the training set once in randomized order, call logreg_gradient(x, t, w, b) for each datapoint to get the gradients, and update the parameters using a small learning rate of 1E-6. Note that in this case we're maximizing the likelihood function, so we should actually be performing gradient ascent... For more information about SGD, see Bishop 5.2.4 or an online source (e.g. https://en.wikipedia.org/wiki/Stochastic_gradient_descent)
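The overall shape of such an update loop can be sketched generically. This is not the assignment solution: `grad_fn` is a hypothetical stand-in for a per-datapoint gradient function, and the toy problem below is made up purely to show the ascent step:

```python
import numpy as np

def sgd_ascent_epoch(params, data, grad_fn, lr=1e-6):
    # one pass through the data in randomized order,
    # with '+' because we ascend the log-likelihood
    order = np.random.permutation(len(data))
    for i in order:
        params = params + lr * grad_fn(params, data[i])
    return params

# toy check: ascend -(w - x)^2, whose gradient w.r.t. w is 2 * (x - w)
np.random.seed(0)
data = np.full(100, 3.0)
w = sgd_ascent_epoch(0.0, data, lambda w, x: 2.0 * (x - w), lr=0.05)
# w ends up close to the maximizer, 3.0
```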
End of explanation
"""
def test_sgd(x_train, t_train, w, b):
# YOUR CODE HERE
raise NotImplementedError()
np.random.seed(1243)
w = np.zeros((28*28, 10))
b = np.zeros(10)
w,b = test_sgd(x_train, t_train, w, b)
"""
Explanation: 1.2. Train
1.2.1 Train (10 points)
Perform 10 SGD iterations through the training set. Plot (in one graph) the conditional log-probability of the training set and validation set after each iteration.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
"""
Explanation: 1.2.2 Visualize weights (10 points)
Visualize the resulting parameters $\bW$ after a few iterations through the training set, by treating each column of $\bW$ as an image. If you want, you can use or edit the plot_digits(...) above.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
"""
Explanation: Describe in less than 100 words why these weights minimize the loss
YOUR ANSWER HERE
1.2.3. Visualize the 8 hardest and 8 easiest digits (10 points)
Visualize the 8 digits in the validation set with the highest probability of the true class label under the model.
Also plot the 8 digits that were assigned the lowest probability.
Ask yourself if these results make sense.
End of explanation
"""
# Write all helper functions here
def qj(w, h, b, j):
    # unnormalized probability of class j: exp(w_j^T h + b_j)
    return np.exp(np.dot(w.T[j], h) + b[j])
def Z(q):
    # normalizing constant: the sum of the unnormalized probabilities
    return np.sum(q)
# 2
def delta_qj(w, h, b, j, t_n):
    # indicator: 1 iff j is the true class label
    I = 1 if j == t_n else 0
    q_j = qj(w, h, b, j)
    # Z sums the unnormalized probabilities over *all* classes;
    # use a distinct local name so the Z function isn't shadowed
    z = Z(np.array([qj(w, h, b, k) for k in range(w.shape[1])]))
    return I - q_j / z
# 3
def partial_L_wij(w, h, b, j, t_n, i):
return h[i] * delta_qj(w, h, b, j, t_n)
# 1
def delta_h(W, delta_q):
    # delta^h_i = sum_j W_ij * delta^q_j: a matrix-vector product, not elementwise
    return np.matmul(W, delta_q)
#4
def partial_L_Vhi(h, x, W, delta_q, i, k):
return h[i] * (1 - h[i]) * x[k] * delta_h(W, delta_q)[i]
#5
def partial_L_ai(h, W, delta_q, i):
return h[i] * (1 - h[i]) * delta_h(W, delta_q)[i]
def gradient_L(W, w, h, x, t_n, j, i, k, delta_q):
return [partial_L_wij(w, h, b, j, t_n, i), partial_L_Vhi(h, x, W, delta_q, i, k), partial_L_ai(h, W, delta_q, i)]
def w_t1(w_t, eta, gradient_L):
return w_t - eta*gradient_L
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def dsig(s):
return s * (1 - s)
def calc_h(x, v, a):
return sigmoid(np.matmul(x, v) + a)
def logq_j(w_j, h, b_j):
    # log of the unnormalized class probability: w_j^T h + b_j
    return np.dot(w_j, h) + b_j
def logp_j(w_j, h, b_j, logZ):
    # normalized log-probability: log q_j - log Z
    return logq_j(w_j, h, b_j) - logZ
# Write training code here:
# Plot the conditional loglikelihoods for the train and validation dataset after every iteration.
# Plot the weights of the first layer.
x_test = x_train[0:1,:]
t_test = t_train[0:1]
K = x_train.shape[1] # No. of input units
L = 20 # No. of hidden units
M = 10 # No. of output units
V = np.zeros((K, L))
W = np.zeros((L, M))
a = np.zeros((L,))
b = np.zeros((M,))
h_test = calc_h(x_test, V, a)
logreg_gradient(h_test, t_test, W, b)
"""
Explanation: Part 2. Multilayer perceptron
You discover that the predictions by the logistic regression classifier are not good enough for your application: the model is too simple. You want to increase the accuracy of your predictions by using a better model. For this purpose, you're going to use a multilayer perceptron (MLP), a simple kind of neural network. The perceptron will have a single hidden layer $\bh$ with $L$ elements. The parameters of the model are $\bV$ (connections between input $\bx$ and hidden layer $\bh$), $\ba$ (the biases/intercepts of $\bh$), $\bW$ (connections between $\bh$ and $\log q$) and $\bb$ (the biases/intercepts of $\log q$).
The conditional probability of the class label $j$ is given by:
$\log p(t = j \;|\; \bx, \bb, \bW) = \log q_j - \log Z$
where $q_j$ are again the unnormalized probabilities per class, and $Z = \sum_j q_j$ is again the probability normalizing factor. Each $q_j$ is computed using:
$\log q_j = \bw_j^T \bh + b_j$
where $\bh$ is a $L \times 1$ vector with the hidden layer activations (of a hidden layer with size $L$), and $\bw_j$ is the $j$-th column of $\bW$ (a $L \times 10$ matrix). Each element of the hidden layer is computed from the input vector $\bx$ using:
$h_j = \sigma(\bv_j^T \bx + a_j)$
where $\bv_j$ is the $j$-th column of $\bV$ (a $784 \times L$ matrix), $a_j$ is the $j$-th element of $\ba$, and $\sigma(.)$ is the so-called sigmoid activation function, defined by:
$\sigma(x) = \frac{1}{1 + \exp(-x)}$
Note that this model is almost equal to the multiclass logistic regression model, but with an extra 'hidden layer' $\bh$. The activations of this hidden layer can be viewed as features computed from the input, where the feature transformation ($\bV$ and $\ba$) is learned.
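The forward pass described above (the ungraded part) can be sketched end-to-end. The tiny random initialization and the hidden size of 20 here are illustrative assumptions, not required values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, V, a, W, b):
    h = sigmoid(x @ V + a)      # hidden activations, shape (L,)
    logq = h @ W + b            # unnormalized log-probabilities, shape (10,)
    c = logq.max()              # log-sum-exp trick for a stable log Z
    logZ = c + np.log(np.exp(logq - c).sum())
    return logq - logZ          # log p(t = j | x) for each class j

rng = np.random.RandomState(0)
x = rng.rand(784)               # a stand-in "image"
V = rng.normal(scale=0.01, size=(784, 20))
a = np.zeros(20)
W = rng.normal(scale=0.01, size=(20, 10))
b = np.zeros(10)
logp = mlp_forward(x, V, a, W, b)
```

Exponentiating the output recovers a distribution that sums to one, which is a handy sanity check for any implementation.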
2.1 Derive gradient equations (20 points)
State (briefly) why $\nabla_{\bb} \mathcal{L}^{(n)}$ is equal to the earlier (multiclass logistic regression) case, and why $\nabla_{\bw_j} \mathcal{L}^{(n)}$ is almost equal to the earlier case.
Like in multiclass logistic regression, you should use intermediate variables $\mathbf{\delta}_j^q$. In addition, you should use intermediate variables $\mathbf{\delta}_j^h = \frac{\partial \mathcal{L}^{(n)}}{\partial h_j}$.
Given an input image, roughly the following intermediate variables should be computed:
$
\log \bq \rightarrow Z \rightarrow \log \bp \rightarrow \mathbf{\delta}^q \rightarrow \mathbf{\delta}^h
$
where $\mathbf{\delta}_j^h = \frac{\partial \mathcal{L}^{(n)}}{\partial \bh_j}$.
Give the equations for computing $\mathbf{\delta}^h$, and for computing the derivatives of $\mathcal{L}^{(n)}$ w.r.t. $\bW$, $\bb$, $\bV$ and $\ba$.
You can use the convenient fact that $\frac{\partial}{\partial x} \sigma(x) = \sigma(x) (1 - \sigma(x))$.
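That identity is easy to sanity-check numerically with a central finite difference, in the same spirit as the check_grad tests in Part 1:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# compare the analytic derivative sigma(x) * (1 - sigma(x))
# against a central finite-difference approximation
x, eps = 0.3, 1e-6
analytic = sigmoid(x) * (1.0 - sigmoid(x))
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2.0 * eps)
```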
YOUR ANSWER HERE
2.2 MAP optimization (10 points)
You derived equations for finding the maximum likelihood solution of the parameters. Explain, in a few sentences, how you could extend this approach so that it optimizes towards a maximum a posteriori (MAP) solution of the parameters, with a Gaussian prior on the parameters.
YOUR ANSWER HERE
2.3. Implement and train a MLP (15 points)
Implement a MLP model with a single hidden layer of 20 neurons.
Train the model for 10 epochs.
Plot (in one graph) the conditional log-probability of the training set and validation set after every two iterations, as well as the weights.
10 points: Working MLP that learns with plots
+5 points: Fast, numerically stable, vectorized implementation
End of explanation
"""
predict_test = np.zeros(len(t_test))
# Fill predict_test with the predicted targets from your model, don't cheat :-).
# YOUR CODE HERE
raise NotImplementedError()
assert predict_test.shape == t_test.shape
n_errors = np.sum(predict_test != t_test)
print('Test errors: %d' % n_errors)
"""
Explanation: 2.3.1. Explain the weights (5 points)
In less than 80 words, explain how and why the weights of the hidden layer of the MLP differ from the logistic regression model, and relate this to the stronger performance of the MLP.
YOUR ANSWER HERE
2.3.2. Less than 250 misclassifications on the test set (10 bonus points)
You receive an additional 10 bonus points if you manage to train a model with very high accuracy: at most 2.5% misclassified digits on the test set. Note that the test set contains 10000 digits, so your model should misclassify at most 250 digits. This should be achievable with an MLP model with one hidden layer. See results of various models at: http://yann.lecun.com/exdb/mnist/index.html. To reach such a low error rate, you probably need a very high $L$ (many hidden units), probably $L > 200$, and a strong Gaussian prior on the weights. In this case you are allowed to use the validation set for training.
You are allowed to add additional layers, and use convolutional networks, although that is probably not required to reach 2.5% misclassifications.
End of explanation
"""
|
harrisonpim/bookworm | 05 - Cliques and Communities.ipynb | mit | from bookworm import *
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12,9)
import pandas as pd
import numpy as np
import networkx as nx
"""
Explanation: < 04 - Time and Chronology | Home | 06 - Stable Roommates, Marriages, and Gender >
Cliques and Communities
End of explanation
"""
import community
"""
Explanation: Communities are just as important in the social structure of novels as they are in real-world social structures. They're also just as obvious - it's easy to think of a tight cluster of characters in your favourite novel which is isolated from the rest of the story. Visually, they're also quite apparent. Refer back to the Dursleys' little clique which we saw back in notebook 3.
However, NetworkX's clique finding algorithm isn't ideal - it enumerates all cliques, giving us plenty of overlapping cliques which aren't that descriptive of the existing communities. In mathematical terms, we want to maximise the modularity of the whole graph at once.
A cleaner solution to our problem is the python implementation of louvain community detection given here by Thomas Aynaud.
End of explanation
"""
book = nx.from_pandas_dataframe(bookworm('data/raw/hp_chamber_of_secrets.txt'),
source='source',
target='target')
partitions = community.best_partition(book)
values = [partitions.get(node) for node in book.nodes()]
nx.draw(book,
cmap=plt.get_cmap("RdYlBu"),
node_color=values,
with_labels=True)
"""
Explanation: This implementation of louvain modularity is a very smart piece of maths, first given by Blondel et al in Fast unfolding of communities in large networks. If you're not a mathematical reader, just skip over this section and go straight to the results.
If you are interested in the maths, it goes roughly like this:
We want to calculate a value Q between -1 and 1 for a partition of our graph, where $Q$ denotes the modularity of the network. Modularity is a comparative measure of the density within the communities in question and the density between them. A high modularity indicates a good splitting. Through successive, gradual changes to our labelling of nodes and close monitoring of the value of $Q$, we can optimise our partition(s). $Q$ and its change for each successve optimisation epoch ($\Delta Q$) are calculated in two stages, as follows.
$$
Q = \frac{1}{2m} \sum_{ij} \left[\ A_{ij} - \frac{k_i k_j}{2m}\ \right]\ \delta(c_i, c_j)
$$
$m$ is the sum of all of the edge weights in the graph
$A_{ij}$ represents the edge weight between nodes $i$ and $j$
$k_{i}$ and $k_{j}$ are the sum of the weights of the edges attached to nodes $i$ and $j$, respectively
$\delta$ is the delta function.
$c_{i}$ and $c_{j}$ are the communities of the nodes
First, each node in the network is assigned to its own community. Then for each node $i$, the change in modularity is calculated by removing $i$ from its own community and moving it into the community of each neighbor $j$ of $i$:
$$
\Delta Q = \left[\frac{\sum_{in} +\ k_{i,in}}{2m} - \left(\frac{\sum_{tot} +\ k_{i}}{2m}\right)^2 \right] -
\left[ \frac{\sum_{in}}{2m} - \left(\frac{\sum_{tot}}{2m}\right)^2 - \left(\frac{k_{i}}{2m}\right)^2 \right]
$$
$\sum_{in}$ is sum of all the weights of the links inside the community $i$ is moving into
$k_{i,in}$ is the sum of the weights of the links between $i$ and other nodes in the community
$m$ is the sum of the weights of all links in the network
$\Sigma _{tot}$ is the sum of all the weights of the links to nodes in the community
$k_{i}$ is the weighted degree of $i$
Once this value is calculated for all communities that $i$ is connected to, $i$ is placed into the community that resulted in the greatest modularity increase. If no increase is possible, $i$ remains in its original community. This process is applied repeatedly and sequentially to all nodes until no modularity increase can occur. Once this local maximum of modularity is hit, we move on to the second stage.
All of the nodes in the same community are grouped to create a new network, where nodes are the communities from the previous phase. Links between nodes within communities are represented by self loops on these new community nodes, and links from multiple nodes in the same community to a node in a different community are represented by weighted edges. The first stage is then applied to this new weighted network, and the process repeats.
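To make the definition of $Q$ concrete, here is a toy computation (a sketch, not part of the bookworm pipeline) on a made-up graph of two triangles joined by a single bridge edge, partitioned into one community per triangle:

```python
import itertools

# toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge (2, 3)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

adjacency = {n: set() for n in community}
for u, v in edges:
    adjacency[u].add(v)
    adjacency[v].add(u)

m = len(edges)                              # sum of (unit) edge weights
degree = {n: len(adjacency[n]) for n in community}

Q = 0.0
for i, j in itertools.product(community, repeat=2):
    if community[i] != community[j]:
        continue                            # delta(c_i, c_j) = 0
    A_ij = 1.0 if j in adjacency[i] else 0.0
    Q += A_ij - degree[i] * degree[j] / (2.0 * m)
Q /= 2.0 * m                                # Q = 5/14, about 0.357
```

A value this far above zero reflects that the two triangles are much denser internally than the single bridge edge between them would suggest by chance.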
Actually doing the thing
Let's load in a book and try applying python-louvain's implementation of this algorithm to it:
End of explanation
"""
def draw_with_communities(book):
'''
draw a networkx graph with communities partitioned and coloured
according to their louvain modularity
Parameters
----------
book : nx.Graph (required)
the book graph to be visualised
'''
partitions = community.best_partition(book)
values = [partitions.get(node) for node in book.nodes()]
nx.draw(book,
cmap=plt.get_cmap("RdYlBu"),
node_color=values,
with_labels=True)
book = nx.from_pandas_dataframe(bookworm('data/raw/fellowship_of_the_ring.txt'),
source='source',
target='target')
draw_with_communities(book)
"""
Explanation: Sweet - that works nicely. We can wrap this up neatly into a single function call
End of explanation
"""
|
google/py-decorators-tutorial | decorators-tutorial.ipynb | apache-2.0 | %%javascript
// From https://github.com/kmahelona/ipython_notebook_goodies
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
"""
Explanation: Copyright 2016 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
End of explanation
"""
def add(n1, n2):
return n1 + n2
def multiply(n1, n2):
return n1 * n2
def exponentiate(n1, n2):
"""Raise n1 to the power of n2"""
import math
return math.pow(n1, n2)
"""
Explanation: Basics
There's lots of guides out there on decorators (this one is good), but I was never really sure when I would need to use decorators. Hopefully this will help motivate them a little more. Here I hope to show you:
When decorators might come in handy
How to write one
How to generalize using *args and **kwargs sorcery.
You should read this if:
You've heard of decorators and want to know more about them, and/or
You want to know what *args and **kwargs mean.
If you're here just for *args and **kwargs, start reading here.
Motivation
Let's say you're defining methods on numbers:
End of explanation
"""
def is_number(n):
"""Return True iff n is a number."""
# A number can always be converted to a float
try:
float(n)
return True
except ValueError:
return False
def add(n1, n2):
if not (is_number(n1) and is_number(n2)):
print("Arguments must be numbers!")
return
return n1 + n2
def multiply(n1, n2):
if not (is_number(n1) and is_number(n2)):
print("Arguments must be numbers!")
return
return n1 * n2
def exponentiate(n1, n2):
"""Raise n1 to the power of n2"""
if not (is_number(n1) and is_number(n2)):
print("Arguments must be numbers!")
return
import math
return math.pow(n1, n2)
"""
Explanation: Well, we only want these functions to work if both inputs are numbers. So we could do:
End of explanation
"""
def validate_two_arguments(n1, n2):
"""
Returns True if n1 and n2 are both numbers.
"""
if not (is_number(n1) and is_number(n2)):
return False
return True
def add(n1, n2):
if validate_two_arguments(n1, n2):
return n1 + n2
def multiply(n1, n2):
if validate_two_arguments(n1, n2):
return n1 * n2
def exponentiate(n1, n2):
"""Raise n1 to the power of n2"""
if validate_two_arguments(n1, n2):
import math
return math.pow(n1, n2)
"""
Explanation: But this is yucky: we had to copy and paste code. This should always make you sad! For example, what if you wanted to change the message slightly? Or to return an error instead? You'd have to change it everywhere it appears...
We want the copy & pasted code to live in just one place, so any changes just go there (DRY code: Don't Repeat Yourself). So let's refactor.
End of explanation
"""
# The decorator: takes a function.
def validate_arguments(func):
# The decorator will be returning wrapped_func, a function that has the
# same signature as add, multiply, etc.
def wrapped_func(n1, n2):
# If we don't have two numbers, we don't want to run the function.
# Best practice ("be explicit") is to raise an error here
# instead of just returning None.
if not validate_two_arguments(n1, n2):
raise Exception("Arguments must be numbers!")
# We've passed our checks, so we can call the function with the passed in arguments.
# If you like, think of this as
# result = func(n1, n2)
# return result
# to distinguish it from the outer return where we're returning a function.
return func(n1, n2)
# This is where we return the function that has the same signature.
return wrapped_func
@validate_arguments
def add(n1, n2):
return n1 + n2
# Don't forget, the @ syntax just means
# add = validate_decorator(add)
print(add(1, 3))
try:
add(2, 'hi')
except Exception as e:
print("Caught Exception: {}".format(e))
"""
Explanation: This is definitely better. But there's still some repeated logic. Like, what if we want to return an error if we don't get numbers, or print something before running the code? We'd still have to make the changes in multiple places. The code isn't DRY.
Basic decorators
We can refactor further with the decorator pattern.
We want to write something that looks like
@decorator
def add(n1, n2):
return n1 + n2
so that all the logic about validating n1 and n2 lives in one place, and the functions just do what we want them to do.
Since the @ syntax just means add = decorator(add), we know the decorator needs to take a function as an argument, and it needs to return a function. (This should be confusing at first. Functions returning functions are scary, but think about it until that doesn't seem outlandish to you.)
This returned function should act the same way as add, so it should take two arguments. And within this returned function, we want to first check that the arguments are numbers. If they are, we want to call the original function that we decorated (in this case, add). If not, we don't want to do anything. Here's what that looks like (there's a lot here, so use the comments to understand what's happening):
End of explanation
"""
@validate_arguments # Won't work!
def add3(n1, n2, n3):
return n1 + n2 + n3
add3(1, 2, 3)
"""
Explanation: This pattern is nice because we've even refactored out all the validation logic (even the "if blah then blah" part) into the decorator.
Generalizing with *args and **kwargs
What if we want to validate a function that has a different number of arguments?
End of explanation
"""
# The decorator: takes a function.
def validate_arguments(func):
# Note the *args! Think of this as representing "as many arguments as you want".
# So this function will take an arbitrary number of arguments.
def wrapped_func(*args):
# We just want to apply the check to each argument.
for arg in args:
if not is_number(arg):
raise Exception("Arguments must be numbers!")
# We also want to make sure there's at least two arguments.
if len(args) < 2:
raise Exception("Must specify at least 2 arguments!")
# We've passed our checks, so we can call the function with the
# passed-in arguments.
# Right now, args is a tuple of all the different arguments passed in
# (more explanation below), so we want to expand them back out when
# calling the function.
return func(*args)
return wrapped_func
@validate_arguments # This works
def add3(n1, n2, n3):
return n1 + n2 + n3
add3(1, 2, 3)
@validate_arguments # And so does this
def addn(*args):
"""Add an arbitrary number of numbers together"""
cumu = 0
for arg in args:
cumu += arg
return cumu
print(addn(1, 2, 3, 4, 5))
# range(n) gives a list, so we expand the list into positional arguments...
print(addn(*range(10)))
"""
Explanation: We can't decorate this because the wrapped function expects 2 arguments.
Here's where we use the * symbol. I'll write out the code so you can see how it looks, and we'll look at what *args is doing below.
End of explanation
"""
def foo(*args):
print("foo args: {}".format(args))
print("foo args type: {}".format(type(args)))
# So foo can take an arbitrary number of arguments
print("First call:")
foo(1, 2, 'a', 3, True)
# Which can be written using the * syntax to expand an iterable
print("\nSecond call:")
l = [1, 2, 'a', 3, True]
foo(*l)
"""
Explanation: <a id='args'>*args</a>
What is this * nonsense?
You've probably seen *args and **kwargs in documentation before. Here's what they mean:
When calling a function, * expands an iterable into positional arguments.
Terminology note: in a call like bing(1, 'hi', name='fig'), 1 is the first positional argument, 'hi' is the second positional argument, and there's a keyword argument 'name' with the value 'fig'.
When defining a signature, *args represents an arbitrary number of positional arguments.
End of explanation
"""
def bar(**kwargs):
print("bar kwargs: {}".format(kwargs))
# bar takes an arbitrary number of keyword arguments
print("First call:")
bar(location='US-PAO', ldap='awan', age=None)
# Which can also be written using the ** syntax to expand a dict
print("\nSecond call:")
d = {'location': 'US-PAO', 'ldap': 'awan', 'age': None}
bar(**d)
"""
Explanation: Back to the decorator
(If you're just here for *args and **kwargs, skip down to here)
So let's look at the decorator code again, minus the comments:
def validate_decorator(func):
def wrapped_func(*args):
for arg in args:
if not is_number(arg):
print("arguments must be numbers!")
return
return func(*args)
return wrapped_func
def wrapped_func(*args) says that wrapped_func can take an arbitrary number of arguments.
Within wrapped_func, we interact with args as a tuple containing all the (positional) arguments passed in.
If all the arguments are numbers, we call func, the function we decorated, by expanding the args tuple back out into positional arguments: func(*args).
Finally the decorator needs to return a function (remember that the @ syntax is just sugar for add = decorator(add).
Congrats, you now understand decorators! You can do tons of other stuff with them, but hopefully now you're equipped to read the other guides online.
<a id='kwargs'>As for **kwargs:</a>
When calling a function, ** expands a dict into keyword arguments.
When defining a signature, **kwargs represents an arbitrary number of keyword arguments.
End of explanation
"""
def baz(*args, **kwargs):
print("baz args: {}. kwargs: {}".format(args, kwargs))
# Calling baz with a mixture of positional and keyword arguments
print("First call:")
baz(1, 3, 'hi', name='Joe', age=37, occupation='Engineer')
# Which is the same as
print("\nSecond call:")
l = [1, 3, 'hi']
d = {'name': 'Joe', 'age': 37, 'occupation': 'Engineer'}
baz(*l, **d)
"""
Explanation: And in case your head doesn't hurt yet, we can do both together:
End of explanation
"""
def convert_arguments(func):
"""
Convert func arguments to floats.
"""
# Introducing the leading underscore: (weakly) marks a private
# method/property that should not be accessed outside the defining
# scope. Look up PEP 8 for more.
def _wrapped_func(*args):
new_args = [float(arg) for arg in args]
return func(*new_args)
return _wrapped_func
@convert_arguments
@validate_arguments
def divide_n(*args):
cumu = args[0]
for arg in args[1:]:
cumu = cumu / arg
return cumu
# The user doesn't need to think about integer division!
divide_n(103, 2, 8)
"""
Explanation: Advanced decorators
This section will introduce some of the many other useful ways you can use decorators. We'll talk about
* Passing arguments into decorators
* functools.wraps
* Returning a different function
* Decorators and objects.
Use the table of contents at the top to make it easier to look around.
Decorators with arguments
A common thing to want to do is to do some kind of configuration in a decorator. For example, let's say we want to define a divide_n method, and to make it easy to use we want to hide the existence of integer division. Let's define a decorator that converts arguments into floats.
End of explanation
"""
def convert_arguments_to(to_type=float):
"""
Convert arguments to the given to_type by casting them.
"""
def _wrapper(func):
def _wrapped_func(*args):
new_args = [to_type(arg) for arg in args]
return func(*new_args)
return _wrapped_func
return _wrapper
@validate_arguments
def divide_n(*args):
cumu = args[0]
for arg in args[1:]:
cumu = cumu / arg
return cumu
@convert_arguments_to(to_type=int)
def divide_n_as_integers(*args):
return divide_n(*args)
@convert_arguments_to(to_type=float)
def divide_n_as_float(*args):
return divide_n(*args)
print(divide_n_as_float(7, 3))
print(divide_n_as_integers(7, 3))
"""
Explanation: But now let's say we want to define a divide_n_as_integers function. We could write a new decorator, or we could alter our decorator so that we can specify what we want to convert the arguments to. Let's try the latter.
(For you smart alecks out there: yes you could use the // operator, but you'd still have to replicate the logic in divide_n. Nice try.)
End of explanation
"""
@validate_arguments
def foo(*args):
"""foo frobs bar"""
pass
print(foo.__name__)
print(foo.__doc__)
"""
Explanation: Did you notice the tricky thing about creating a decorator that takes arguments? We had to create a function to "return a decorator". The outermost function, convert_arguments_to, returns a function that takes a function, which is what we've been calling a "decorator".
To think about why this is necessary, let's start from the form that we wanted to write, and unpack from there. We wanted to be able to do:
@decorator(decorator_arg)
def myfunc(*func_args):
pass
Unpacking the syntactic sugar gives us
def myfunc(*func_args):
pass
myfunc = decorator(decorator_arg)(myfunc)
Written this way, it should immediately be clear that decorator(decorator_arg) returns a function that takes a function.
So that's how you write a decorator that takes an argument: it actually has to be a function that takes your decorator arguments, and returns a function that takes a function.
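As a minimal, self-contained sketch of this three-level structure (the repeat/greet names here are illustrative, not taken from the examples above):

```python
def repeat(times):                       # level 1: takes the decorator's arguments
    def _decorator(func):                # level 2: the decorator proper
        def _wrapped(*args, **kwargs):   # level 3: the wrapped function
            return [func(*args, **kwargs) for _ in range(times)]
        return _wrapped
    return _decorator

@repeat(times=3)
def greet(name):
    return "hi " + name

print(greet("ada"))  # ['hi ada', 'hi ada', 'hi ada']
```

Unpacking the sugar, greet ends up bound to repeat(times=3)(greet): the outer call returns _decorator, which then receives the function.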
functools.wraps
If you've played around with the examples above, you might've seen that the name of the wrapped function changes after you apply a decorator... And perhaps more importantly, the docstring of the wrapped function changes too (this is important for when generating documentation, e.g. with Sphinx).
End of explanation
"""
from functools import wraps
def better_validate_arguments(func):
@wraps(func)
def wrapped_func(*args):
for arg in args:
if not is_number(arg):
raise Exception("Arguments must be numbers!")
if len(args) < 2:
raise Exception("Must specify at least 2 arguments!")
return func(*args)
return wrapped_func
@better_validate_arguments
def bar(*args):
"""bar frobs foo"""
pass
print(bar.__name__)
print(bar.__doc__)
"""
Explanation: functools.wraps solves this problem. Use it as follows:
End of explanation
"""
def jedi_mind_trick(func):
def _jedi_func():
return "Not the droid you're looking for"
return _jedi_func
@jedi_mind_trick
def get_droid():
return "Found the droid!"
get_droid()
"""
Explanation: Think of the @wraps decorator making it so that wrapped_func knows what function it originally wrapped.
Returning a different function
Decorators don't even have to return the function that's passed in. You can have some fun with this...
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/calculate_the_determinant_of_a_matrix.ipynb | mit | # Load library
import numpy as np
"""
Explanation: Title: Calculate The Determinant Of A Matrix
Slug: calculate_the_determinant_of_a_matrix
Summary: How to calculate the determinant of a matrix in Python.
Date: 2017-09-02 12:00
Category: Machine Learning
Tags: Vectors Matrices Arrays
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create matrix
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
"""
Explanation: Create Matrix
End of explanation
"""
# Return determinant of matrix
np.linalg.det(matrix)
"""
Explanation: Calculate Determinant
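As a quick sanity check (the det3 helper below is illustrative, not part of the original recipe), a 3×3 determinant can be computed by hand via cofactor expansion along the first row. The rows of the matrix above are linearly dependent, so its determinant is 0 — matching, up to floating-point error, what np.linalg.det returns:

```python
# Illustrative helper: determinant of a 3x3 matrix (given as a list of rows)
# via cofactor expansion along the first row.
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0
```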
End of explanation
"""
|
gunan/tensorflow | tensorflow/lite/micro/examples/micro_speech/train/train_micro_speech_model.ipynb | apache-2.0 | # A comma-delimited list of the words you want to train for.
# The options are: yes,no,up,down,left,right,on,off,stop,go
# All the other words will be used to train an "unknown" label and silent
# audio data with no spoken words will be used to train a "silence" label.
WANTED_WORDS = "yes,no"
# The number of steps and learning rates can be specified as comma-separated
# lists to define the rate at each stage. For example,
# TRAINING_STEPS=12000,3000 and LEARNING_RATE=0.001,0.0001
# will run 15,000 training loops in total, with a rate of 0.001 for the first
# 12,000, and 0.0001 for the final 3,000.
TRAINING_STEPS = "12000,3000"
LEARNING_RATE = "0.001,0.0001"
# Calculate the total number of steps, which is used to identify the checkpoint
# file name.
TOTAL_STEPS = str(sum(int(steps) for steps in TRAINING_STEPS.split(",")))
# Print the configuration to confirm it
!echo "Training these words:" $WANTED_WORDS
!echo "Training steps in each stage:" $TRAINING_STEPS
!echo "Learning rate in each stage:" $LEARNING_RATE
!echo "Total number of training steps:" $TOTAL_STEPS
"""
Explanation: Train a Simple Audio Recognition Model
This notebook demonstrates how to train a 20 kB Simple Audio Recognition model to recognize keywords in speech.
The model created in this notebook is used in the micro_speech example for TensorFlow Lite for MicroControllers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/micro_speech/train/train_micro_speech_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/micro_speech/train/train_micro_speech_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Training is much faster using GPU acceleration. Before you proceed, ensure you are using a GPU runtime by going to Runtime -> Change runtime type and set Hardware accelerator: GPU. Training 15,000 iterations will take 1.5 - 2 hours on a GPU runtime.
Configure Defaults
MODIFY the following constants for your specific use case.
End of explanation
"""
# Calculate the percentage of 'silence' and 'unknown' training samples required
# to ensure that we have an equal number of samples for each label.
number_of_labels = WANTED_WORDS.count(',') + 1
number_of_total_labels = number_of_labels + 2 # for 'silence' and 'unknown' label
equal_percentage_of_training_samples = int(100.0/(number_of_total_labels))
SILENT_PERCENTAGE = equal_percentage_of_training_samples
UNKNOWN_PERCENTAGE = equal_percentage_of_training_samples
# Constants which are shared during training and inference
PREPROCESS = 'micro'
WINDOW_STRIDE = '20'
MODEL_ARCHITECTURE = 'tiny_conv' # Other options include: single_fc, conv,
# low_latency_conv, low_latency_svdf, tiny_embedding_conv
QUANTIZE = '1' # For booleans, we provide 1 or 0 (instead of True or False)
# Constants used during training only
VERBOSITY = 'WARN'
EVAL_STEP_INTERVAL = '1000'
SAVE_STEP_INTERVAL = '5000'
# Constants for training directories and filepaths
DATASET_DIR = 'dataset/'
LOGS_DIR = 'logs/'
TRAIN_DIR = 'train/' # for training checkpoints and other files.
# Constants for inference directories and filepaths
import os
MODELS_DIR = 'models/'
os.makedirs(MODELS_DIR, exist_ok=True)
MODEL_TF = MODELS_DIR + 'model.pb'
MODEL_TFLITE = MODELS_DIR + 'model.tflite'
MODEL_TFLITE_MICRO = MODELS_DIR + 'model.cc'
"""
Explanation: DO NOT MODIFY the following constants as they include filepaths used in this notebook and data that is shared during training and inference.
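For instance, with WANTED_WORDS = "yes,no" the percentage arithmetic above splits the data evenly across the four labels (yes, no, silence, unknown), giving each a 25% share — a quick sketch of that calculation:

```python
# Mirrors the percentage arithmetic from the cell above.
WANTED_WORDS = "yes,no"
number_of_labels = WANTED_WORDS.count(',') + 1   # 2 wanted words
number_of_total_labels = number_of_labels + 2    # + 'silence' and 'unknown'
equal_percentage = int(100.0 / number_of_total_labels)
print(equal_percentage)  # 25
```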
End of explanation
"""
%tensorflow_version 1.x
import tensorflow as tf
"""
Explanation: Setup Environment
Install Dependencies
End of explanation
"""
!rm -rf {DATASET_DIR} {LOGS_DIR} {TRAIN_DIR} {MODELS_DIR}
"""
Explanation: DELETE any old data from previous runs
End of explanation
"""
!git clone -q https://github.com/tensorflow/tensorflow
"""
Explanation: Clone the TensorFlow Github Repository, which contains the relevant code required to run this tutorial.
End of explanation
"""
%load_ext tensorboard
%tensorboard --logdir {LOGS_DIR}
"""
Explanation: Load TensorBoard to visualize the accuracy and loss as training proceeds.
End of explanation
"""
!python tensorflow/tensorflow/examples/speech_commands/train.py \
--data_dir={DATASET_DIR} \
--wanted_words={WANTED_WORDS} \
--silence_percentage={SILENT_PERCENTAGE} \
--unknown_percentage={UNKNOWN_PERCENTAGE} \
--preprocess={PREPROCESS} \
--window_stride={WINDOW_STRIDE} \
--model_architecture={MODEL_ARCHITECTURE} \
--quantize={QUANTIZE} \
--how_many_training_steps={TRAINING_STEPS} \
--learning_rate={LEARNING_RATE} \
--train_dir={TRAIN_DIR} \
--summaries_dir={LOGS_DIR} \
--verbosity={VERBOSITY} \
--eval_step_interval={EVAL_STEP_INTERVAL} \
--save_step_interval={SAVE_STEP_INTERVAL} \
"""
Explanation: Training
The following script downloads the dataset and begin training.
End of explanation
"""
!python tensorflow/tensorflow/examples/speech_commands/freeze.py \
--wanted_words=$WANTED_WORDS \
--window_stride_ms=$WINDOW_STRIDE \
--preprocess=$PREPROCESS \
--model_architecture=$MODEL_ARCHITECTURE \
--quantize=$QUANTIZE \
--start_checkpoint=$TRAIN_DIR$MODEL_ARCHITECTURE'.ckpt-'$TOTAL_STEPS \
--output_file=$MODEL_TF \
"""
Explanation: Generate a TensorFlow Model for Inference
Combine relevant training results (graph, weights, etc) into a single file for inference. This process is known as freezing a model and the resulting model is known as a frozen model/graph, as it cannot be further re-trained after this process.
End of explanation
"""
input_tensor = 'Reshape_2'
output_tensor = 'labels_softmax'
converter = tf.lite.TFLiteConverter.from_frozen_graph(
MODEL_TF, [input_tensor], [output_tensor])
converter.inference_type = tf.uint8
converter.quantized_input_stats = {input_tensor: (0.0, 9.8077)} # (mean, standard deviation)
tflite_model = converter.convert()
tflite_model_size = open(MODEL_TFLITE, "wb").write(tflite_model)
print("Model is %d bytes" % tflite_model_size)
"""
Explanation: Generate a TensorFlow Lite Model
Convert the frozen graph into a TensorFlow Lite model, which is fully quantized for use with embedded devices.
The following cell will also print the model size, which will be under 20 kilobytes.
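Assuming the standard TF1 converter convention for quantized_input_stats — real_value = (quantized_value - mean) / std — the (0.0, 9.8077) pair above maps uint8 inputs back onto real values of roughly [0, 26]. A small sketch of that mapping (illustrative only, not part of the conversion itself):

```python
# Hedged sketch: dequantizing a uint8 input under the (mean, std) stats
# used in the converter cell above.
mean, std = 0.0, 9.8077

def dequantize(q):
    return (q - mean) / std

print(dequantize(0), round(dequantize(255), 2))  # 0.0 26.0
```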
End of explanation
"""
# Install xxd if it is not available
!apt-get update && apt-get -qq install xxd
# Convert to a C source file
!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}
# Update variable names
REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')
!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}
"""
Explanation: Generate a TensorFlow Lite for MicroControllers Model
Convert the TensorFlow Lite model into a C source file that can be loaded by TensorFlow Lite for Microcontrollers.
End of explanation
"""
# Print the C source file
!cat {MODEL_TFLITE_MICRO}
"""
Explanation: Deploy to a Microcontroller
Follow the instructions in the micro_speech README.md for TensorFlow Lite for MicroControllers to deploy this model on a specific microcontroller.
Reference Model: If you have not modified this notebook, you can follow the instructions as is, to deploy the model. Refer to the micro_speech/train/models directory to access the models generated in this notebook.
New Model: If you have generated a new model to identify different words: (i) Update kCategoryCount and kCategoryLabels in micro_speech/micro_features/micro_model_settings.h and (ii) Update the values assigned to the variables defined in micro_speech/micro_features/model.cc with values displayed after running the following cell.
End of explanation
"""
|
lukas/ml-class | examples/keras-fashion/sweeps.ipynb | gpl-2.0 | # WandB – Install the W&B library
%pip install wandb -q
import wandb
from wandb.keras import WandbCallback
from keras.datasets import fashion_mnist
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Dense, Flatten
from keras.utils import np_utils
from keras.optimizers import RMSprop, SGD, Adam, Nadam
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, Callback, EarlyStopping
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
labels=["T-shirt/top","Trouser","Pullover","Dress","Coat",
"Sandal","Shirt","Sneaker","Bag","Ankle boot"]
img_width=28
img_height=28
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.
# reshape input data
X_train = X_train.reshape(X_train.shape[0], img_width, img_height, 1)
X_test = X_test.reshape(X_test.shape[0], img_width, img_height, 1)
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
"""
Explanation: <a href="https://colab.research.google.com/github/lukas/ml-class/blob/master/examples/keras-fashion/sweeps.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Introduction to Hyperparameter Sweeps
Searching through high-dimensional hyperparameter spaces to find the most performant model can get unwieldy very fast. Hyperparameter sweeps provide an organized and efficient way to conduct a battle royale of models and pick the most accurate model. They enable this by automatically searching through combinations of hyperparameter values (e.g. learning rate, batch size, number of hidden layers, optimizer type) to find the optimal values.
In this tutorial we'll see how you can run sophisticated hyperparameter sweeps in 3 easy steps using Weights and Biases.
Sweeps: An Overview
Running a hyperparameter sweep with Weights & Biases is very easy. There are just 3 simple steps:
Define the sweep: we do this by creating a dictionary or a YAML file that specifies the parameters to search through, the search strategy, the optimization metric, and so on.
Initialize the sweep: with one line of code we initialize the sweep and pass in the dictionary of sweep configurations:
sweep_id = wandb.sweep(sweep_config)
Run the sweep agent: also accomplished with one line of code, we call wandb.agent() and pass the sweep_id to run, along with a function that defines your model architecture and trains it:
wandb.agent(sweep_id, function=train)
And voila! That's all there is to running a hyperparameter sweep! In the notebook below, we'll walk through these 3 steps in more detail.
We highly encourage you to fork this notebook, tweak the parameters, or try the model with your own dataset!
Resources
Sweeps docs →
Launching from the command line →
Setup
Start out by installing the experiment tracking library and setting up your free W&B account:
pip install wandb – Install the W&B library
import wandb – Import the wandb library
End of explanation
"""
# Configure the sweep – specify the parameters to search through, the search strategy, the optimization metric, and so on.
sweep_config = {
'method': 'random', #grid, random
'metric': {
'name': 'accuracy',
'goal': 'maximize'
},
'parameters': {
'epochs': {
'values': [2, 5, 10]
},
'batch_size': {
'values': [256, 128, 64, 32]
},
'dropout': {
'values': [0.3, 0.4, 0.5]
},
'conv_layer_size': {
'values': [16, 32, 64]
},
'weight_decay': {
'values': [0.0005, 0.005, 0.05]
},
'learning_rate': {
'values': [1e-2, 1e-3, 1e-4, 3e-4, 3e-5, 1e-5]
},
'optimizer': {
'values': ['adam', 'nadam', 'sgd', 'rmsprop']
},
'activation': {
'values': ['relu', 'elu', 'selu', 'softmax']
}
}
}
"""
Explanation: 1. Define the Sweep
Weights & Biases sweeps give you powerful levers to configure your sweeps exactly how you want them, with just a few lines of code. The sweeps config can be defined as a dictionary or a YAML file.
Let's walk through some of them together:
* Metric – This is the metric the sweeps are attempting to optimize. Metrics can take a name (this metric should be logged by your training script) and a goal (maximize or minimize).
* Search Strategy – Specified using the 'method' variable. We support several different search strategies with sweeps.
* Grid Search – Iterates over every combination of hyperparameter values.
* Random Search – Iterates over randomly chosen combinations of hyperparameter values.
* Bayesian Search – Creates a probabilistic model that maps hyperparameters to probability of a metric score, and chooses parameters with high probability of improving the metric. The objective of Bayesian optimization is to spend more time picking each set of hyperparameter values, so that fewer sets need to be tried overall.
* Stopping Criteria – The strategy for determining when to kill off poorly performing runs, and try more combinations faster. We offer several custom scheduling algorithms like HyperBand and Envelope.
* Parameters – A dictionary containing the hyperparameter names, and discrete values, max and min values or distributions from which to pull their values to sweep over.
You can find a list of all configuration options here.
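To build intuition for what method: 'random' does, here is a rough sketch of sampling one trial's hyperparameters from a parameters dictionary like the one above. This only mimics the idea — it is not the wandb implementation:

```python
import random

def sample_trial(parameters, seed=None):
    # Draw one value per hyperparameter from its 'values' list.
    rng = random.Random(seed)
    return {name: rng.choice(spec['values']) for name, spec in parameters.items()}

params = {
    'batch_size': {'values': [256, 128, 64, 32]},
    'learning_rate': {'values': [1e-2, 1e-3, 1e-4]},
}
print(sample_trial(params, seed=0))
```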
End of explanation
"""
# Initialize a new sweep
# Arguments:
# – sweep_config: the sweep config dictionary defined above
# – entity: Set the username for the sweep
# – project: Set the project name for the sweep
sweep_id = wandb.sweep(sweep_config, entity="sweep", project="sweeps-tutorial")
"""
Explanation: 2. Initialize the Sweep
End of explanation
"""
# The sweep calls this function with each set of hyperparameters
def train():
# Default values for hyper-parameters we're going to sweep over
config_defaults = {
'epochs': 5,
'batch_size': 128,
'weight_decay': 0.0005,
'learning_rate': 1e-3,
'activation': 'relu',
'optimizer': 'nadam',
'hidden_layer_size': 128,
'conv_layer_size': 16,
'dropout': 0.5,
'momentum': 0.9,
'seed': 42
}
# Initialize a new wandb run
wandb.init(config=config_defaults)
# Config is a variable that holds and saves hyperparameters and inputs
config = wandb.config
# Define the model architecture - This is a simplified version of the VGG19 architecture
model = Sequential()
# Two Conv2D layers followed by MaxPooling2D, each Conv2D using config.conv_layer_size filters
model.add(Conv2D(filters = config.conv_layer_size, kernel_size = (3, 3), padding = 'same',
activation ='relu', input_shape=(img_width, img_height,1)))
model.add(Dropout(config.dropout))
model.add(Conv2D(filters = config.conv_layer_size, kernel_size = (3, 3),
padding = 'same', activation ='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(config.hidden_layer_size, activation ='relu'))
model.add(Dense(num_classes, activation = "softmax"))
# Define the optimizer
if config.optimizer=='sgd':
optimizer = SGD(lr=config.learning_rate, decay=1e-5, momentum=config.momentum, nesterov=True)
elif config.optimizer=='rmsprop':
optimizer = RMSprop(lr=config.learning_rate, decay=1e-5)
elif config.optimizer=='adam':
optimizer = Adam(lr=config.learning_rate, beta_1=0.9, beta_2=0.999, clipnorm=1.0)
elif config.optimizer=='nadam':
optimizer = Nadam(lr=config.learning_rate, beta_1=0.9, beta_2=0.999, clipnorm=1.0)
model.compile(loss = "categorical_crossentropy", optimizer = optimizer, metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=config.batch_size,
epochs=config.epochs,
validation_data=(X_test, y_test),
callbacks=[WandbCallback(data_type="image", validation_data=(X_test, y_test), labels=labels),
EarlyStopping(patience=10, restore_best_weights=True)])
"""
Explanation: Define Your Neural Network
Before we can run the sweep, let's define a function that creates and trains our neural network.
In the function below, we define a simplified version of a VGG19 model in Keras, and add the following lines of code to log models metrics, visualize performance and output and track our experiments easily:
* wandb.init() – Initialize a new W&B run. Each run is single execution of the training script.
* wandb.config – Save all your hyperparameters in a config object. This lets you use our app to sort and compare your runs by hyperparameter values.
* callbacks=[WandbCallback()] – Fetch all layer dimensions, model parameters and log them automatically to your W&B dashboard.
* wandb.log() – Logs custom objects – these can be images, videos, audio files, HTML, plots, point clouds etc. Here we use wandb.log to log images of Simpson characters overlaid with actual and predicted labels.
End of explanation
"""
# Initialize a new sweep
# Arguments:
# – sweep_id: the sweep_id to run - this was returned above by wandb.sweep()
# – function: function that defines your model architecture and trains it
wandb.agent(sweep_id, train)
"""
Explanation: 3. Run the sweep agent
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/tensorboard/dataframe_api.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install tensorboard pandas
!pip install matplotlib seaborn
from packaging import version
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
from scipy import stats
import tensorboard as tb
major_ver, minor_ver, _ = version.parse(tb.__version__).release
assert (major_ver, minor_ver) >= (2, 3), \
"This notebook requires TensorBoard 2.3 or later."
print("TensorBoard version: ", tb.__version__)
"""
Explanation: Accessing TensorBoard Data as DataFrames
Overview
The main feature of TensorBoard is its interactive GUI. However, some users want to read the data logs stored in TensorBoard programmatically, for purposes such as performing post-hoc analyses of the log data and creating custom visualizations.
TensorBoard 2.3 supports this use case with tensorboard.data.experimental.ExperimentFromDev(), which allows programmatic access to TensorBoard's scalar logs. This page demonstrates the basic usage of this new API.
Note:
As the experimental namespace suggests, this API is still experimental. It may therefore be subject to breaking changes in the future.
Currently, this feature only supports logdirs uploaded to TensorBoard.dev, a free hosted service that lets you persist and share your TensorBoards. Support for locally stored TensorBoard logdirs will be added in the future. Briefly, you can upload a TensorBoard logdir on your local filesystem to TensorBoard.dev with a single command: tensorboard dev upload --logdir <logdir>. See tensorboard.dev for more details.
Setup
To use the programmatic API, make sure pandas is installed alongside tensorboard.
This guide uses matplotlib and seaborn for the custom plots, but you can choose any tool of your choice to analyze and visualize the DataFrames.
End of explanation
"""
experiment_id = "c1KCv3X3QvGwaXfgX1c4tg"
experiment = tb.data.experimental.ExperimentFromDev(experiment_id)
df = experiment.get_scalars()
df
"""
Explanation: Loading TensorBoard scalars as a pandas.DataFrame
Once a TensorBoard logdir has been uploaded to TensorBoard.dev, it becomes what we refer to as an experiment. Each experiment has a unique ID, which can be found in the experiment's TensorBoard.dev URL. For the demonstration below, we will use a TensorBoard.dev experiment at: https://tensorboard.dev/experiment/c1KCv3X3QvGwaXfgX1c4tg
End of explanation
"""
print(df["run"].unique())
print(df["tag"].unique())
"""
Explanation: df is a pandas.DataFrame that contains all scalar logs of the experiment.
The columns of the DataFrame are:
run: each run corresponds to a subdirectory of the original logdir. In this experiment, each run is from a complete training of a convolutional neural network (CNN) on the MNIST dataset with a given optimizer type (a training hyperparameter). This DataFrame contains multiple such runs, which correspond to repeated training runs under different optimizer types.
tag: this describes what the value in the same row means, that is, what metric the value represents in the row. In this experiment, there are only two unique tags: epoch_accuracy and epoch_loss, corresponding to the accuracy and loss metrics respectively.
step: this is a number that reflects the serial order of the corresponding row within its run. Here, step actually refers to the epoch number. If you wish to obtain the timestamps in addition to the step values, you can use the keyword argument include_wall_time=True when calling get_scalars().
value: this is the actual numerical value of interest. As described above, each value in this particular DataFrame is either a loss or an accuracy, depending on the tag of the row.
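As a small illustration of working with these columns (using a toy stand-in frame here, since the real df comes from TensorBoard.dev), you can filter down to a single metric like this:

```python
import pandas as pd

# Toy stand-in with the same four columns as the df described above.
toy_df = pd.DataFrame({
    'run': ['adam,run_1/train'] * 4,
    'tag': ['epoch_loss', 'epoch_accuracy'] * 2,
    'step': [0, 0, 1, 1],
    'value': [2.3, 0.1, 1.8, 0.4],
})

loss_df = toy_df[toy_df['tag'] == 'epoch_loss']
print(loss_df['value'].tolist())  # [2.3, 1.8]
```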
End of explanation
"""
dfw = experiment.get_scalars(pivot=True)
dfw
"""
Explanation: Getting a pivoted (wide-form) DataFrame
In our experiment, the two tags (epoch_loss and epoch_accuracy) are present at the same set of steps in each run. This makes it possible to obtain a "wide-form" DataFrame directly from get_scalars() by using the pivot=True keyword argument. The wide-form DataFrame has all of its tags included as columns of the DataFrame, which can be more convenient to work with, including in this case.
However, beware that if the condition of having uniform sets of step values across all tags in all runs is not met, using pivot=True will result in an error.
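To see what the pivot does in miniature, here is a plain-pandas sketch of the same idea (illustrative only, not the TensorBoard implementation):

```python
import pandas as pd

# Long-form toy frame with uniform steps across both tags, so pivoting works.
long_df = pd.DataFrame({
    'run': ['r1'] * 4,
    'tag': ['epoch_loss', 'epoch_accuracy'] * 2,
    'step': [0, 0, 1, 1],
    'value': [2.3, 0.1, 1.8, 0.4],
})

wide_df = (long_df
           .pivot_table(index=['run', 'step'], columns='tag', values='value')
           .reset_index())
print(wide_df['epoch_loss'].tolist())  # [2.3, 1.8]
```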
End of explanation
"""
csv_path = '/tmp/tb_experiment_1.csv'
dfw.to_csv(csv_path, index=False)
dfw_roundtrip = pd.read_csv(csv_path)
pd.testing.assert_frame_equal(dfw_roundtrip, dfw)
"""
Explanation: The wide-form DataFrame contains the two tags (metrics), epoch_accuracy and epoch_loss, explicitly as its columns, instead of a single "value" column.
Storing the DataFrame as CSV
pandas.DataFrame has good interoperability with CSV. You can store it as a local CSV file and load it back later. For example:
End of explanation
"""
# Filter the DataFrame to only validation data, which is what the subsequent
# analyses and visualization will be focused on.
dfw_validation = dfw[dfw.run.str.endswith("/validation")]
# Get the optimizer value for each row of the validation DataFrame.
optimizer_validation = dfw_validation.run.apply(lambda run: run.split(",")[0])
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
sns.lineplot(data=dfw_validation, x="step", y="epoch_accuracy",
hue=optimizer_validation).set_title("accuracy")
plt.subplot(1, 2, 2)
sns.lineplot(data=dfw_validation, x="step", y="epoch_loss",
hue=optimizer_validation).set_title("loss")
"""
Explanation: Performing custom visualization and statistical analysis
End of explanation
"""
adam_min_val_loss = dfw_validation.loc[optimizer_validation=="adam", :].groupby(
"run", as_index=False).agg({"epoch_loss": "min"})
rmsprop_min_val_loss = dfw_validation.loc[optimizer_validation=="rmsprop", :].groupby(
"run", as_index=False).agg({"epoch_loss": "min"})
sgd_min_val_loss = dfw_validation.loc[optimizer_validation=="sgd", :].groupby(
"run", as_index=False).agg({"epoch_loss": "min"})
min_val_loss = pd.concat([adam_min_val_loss, rmsprop_min_val_loss, sgd_min_val_loss])
sns.boxplot(data=min_val_loss, y="epoch_loss",
x=min_val_loss.run.apply(lambda run: run.split(",")[0]))
# Perform pairwise comparisons between the minimum validation losses
# from the three optimizers.
_, p_adam_vs_rmsprop = stats.ttest_ind(
adam_min_val_loss["epoch_loss"],
rmsprop_min_val_loss["epoch_loss"])
_, p_adam_vs_sgd = stats.ttest_ind(
adam_min_val_loss["epoch_loss"],
sgd_min_val_loss["epoch_loss"])
_, p_rmsprop_vs_sgd = stats.ttest_ind(
rmsprop_min_val_loss["epoch_loss"],
sgd_min_val_loss["epoch_loss"])
print("adam vs. rmsprop: p = %.4f" % p_adam_vs_rmsprop)
print("adam vs. sgd: p = %.4f" % p_adam_vs_sgd)
print("rmsprop vs. sgd: p = %.4f" % p_rmsprop_vs_sgd)
"""
Explanation: The plots above show the time courses of validation accuracy and validation loss. Each curve shows the average across 5 runs under an optimizer type. Thanks to a built-in feature of seaborn.lineplot(), each curve also displays ±1 standard deviation around the mean, which gives us a clear sense of the variability in these curves and the significance of the differences among the three optimizer types. This visualization of variability is not yet supported in TensorBoard's GUI.
We want to study the hypothesis that the minimum validation loss differs significantly among the "adam", "rmsprop" and "sgd" optimizers, so we extract a DataFrame of the minimum validation loss for each of the optimizers.
Then we make a boxplot to visualize the differences in the minimum validation losses.
End of explanation
"""
|
dennisobrien/bokeh | examples/howto/server_embed/notebook_embed.ipynb | bsd-3-clause | import yaml
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Slider
from bokeh.plotting import figure
from bokeh.themes import Theme
from bokeh.io import show, output_notebook
from bokeh.sampledata.sea_surface_temperature import sea_surface_temperature
output_notebook()
"""
Explanation: Embedding a Bokeh server in a Notebook
This notebook shows how a Bokeh server application can be embedded inside a Jupyter notebook.
End of explanation
"""
def modify_doc(doc):
df = sea_surface_temperature.copy()
source = ColumnDataSource(data=df)
plot = figure(x_axis_type='datetime', y_range=(0, 25),
y_axis_label='Temperature (Celsius)',
title="Sea Surface Temperature at 43.18, -70.43")
plot.line('time', 'temperature', source=source)
def callback(attr, old, new):
if new == 0:
data = df
else:
data = df.rolling('{0}D'.format(new)).mean()
source.data = ColumnDataSource(data=data).data
slider = Slider(start=0, end=30, value=0, step=1, title="Smoothing by N Days")
slider.on_change('value', callback)
doc.add_root(column(slider, plot))
doc.theme = Theme(json=yaml.load("""
attrs:
Figure:
background_fill_color: "#DDDDDD"
outline_line_color: white
toolbar_location: above
height: 500
width: 800
Grid:
grid_line_dash: [6, 4]
grid_line_color: white
"""))
"""
Explanation: There are various application handlers that can be used to build up Bokeh documents. For example, there is a ScriptHandler that uses the code from a .py file to produce Bokeh documents. This is the handler that is used when we run bokeh serve app.py. Here we are going to use the lesser-known FunctionHandler, that gets configured with a plain Python function to build up a document.
Here is the function modify_doc(doc) that defines our app:
End of explanation
"""
show(modify_doc) # notebook_url="http://localhost:8888"
"""
Explanation: Now we can display our application using show, which will automatically create an Application that wraps modify_doc using FunctionHandler. The end result is that the Bokeh server will call modify_doc to build new documents for every new sessions that is opened.
Note: If the current notebook is not displayed at the default URL, you must update the notebook_url parameter in the comment below to match, and pass it to show.
End of explanation
"""
|
ueapy/ueapy.github.io | content/notebooks/2019-05-30-cartopy-map.ipynb | mit | import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import xarray as xr
from pathlib import Path
"""
Explanation: To make a pretty, publication-grade map for your study area, look no further than cartopy.
In this tutorial we will walk through generating a basemap with:
- Bathymetry/topography
- Coastline
- Scatter data
- Location labels
- Inset map
- Legend
This code can be generalised to any region you wish to map
First we import some modules for manipulating and plotting data
End of explanation
"""
import cartopy
"""
Explanation: Then we import cartopy itself
End of explanation
"""
import cartopy.crs as ccrs
"""
Explanation: In addition, we import cartopy's coordinate reference system submodule:
End of explanation
"""
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
plt.rcParams.update({"font.size": 20})
SMALL_SIZE = 22
MEDIUM_SIZE = 22
LARGE_SIZE = 26
plt.rc("font", size=SMALL_SIZE)
plt.rc("xtick", labelsize=SMALL_SIZE)
plt.rc("ytick", labelsize=SMALL_SIZE)
plt.rc("axes", titlesize=SMALL_SIZE)
plt.rc("legend", fontsize=SMALL_SIZE)
"""
Explanation: A few other modules and functions which we will use later to add cool stuff to our plots. Also updating font sizes for improved readability
End of explanation
"""
# Open prepared bathymetry dataset using pathlib to specify the relative path
bathy_file_path = Path('../data/bathy.nc')
bathy_ds = xr.open_dataset(bathy_file_path)
bathy_lon, bathy_lat, bathy_h = bathy_ds.bathymetry.longitude, bathy_ds.bathymetry.latitude, bathy_ds.bathymetry.values
"""
Explanation: Note on bathymetry data
To save space and time I have subset the bathymetry plotted in this example. If you wish to map a different area you will need to download the GEBCO topography data found here.
You can find a notebook intro to using xarray for netcdf here on the UEA python website. Or go to Callum's github for a worked example using GEBCO data.
End of explanation
"""
bathy_h[bathy_h > 0] = 0
bathy_conts = np.arange(-9000, 500, 500)
"""
Explanation: We're just interested in bathymetry here, so we set any height values greater than 0 to 0 and set contour levels to plot later
End of explanation
"""
# Load some scatter data of smaple locations near South Georgia
data = pd.read_csv("../data/scatter_coords.csv")
lons = data.Longitude.values
lats = data.Latitude.values
# Subset of sampling locations
sample_lon = lons[[0, 2, 7]]
sample_lat = lats[[0, 2, 7]]
"""
Explanation: Here we load some scatter data from a two column csv for plotting later
End of explanation
"""
coord = ccrs.PlateCarree()
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord);
"""
Explanation: Now to make the map itself. First we define our coordinate system. Here we are using a Plate Carrée projection, which is an equidistant cylindrical projection.
A full list of Cartopy projections is available at http://scitools.org.uk/cartopy/docs/latest/crs/projections.html.
Then we create figure and axes instances and set the plotting extent in degrees [West, East, South, North]
End of explanation
"""
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord)
bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
"""
Explanation: Now we contour the bathymetry data
End of explanation
"""
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord)
bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, linestyle="--")
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.ylines = True
gl.xlines = True
fig.colorbar(bathy, ax=ax, orientation="horizontal", label="Bathymetry (m)", shrink=0.7, pad=0.08, aspect=40);
"""
Explanation: A good start. To make it more map like we add gridlines, formatted labels and a colorbar
End of explanation
"""
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord)
bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, linestyle="--")
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.ylines = True
gl.xlines = True
fig.colorbar(bathy, ax=ax, orientation="horizontal", label="Bathymetry (m)", shrink=0.7, pad=0.08, aspect=40)
feature = cartopy.feature.NaturalEarthFeature(
name="coastline", category="physical", scale="50m", edgecolor="0.5", facecolor="0.8"
)
ax.add_feature(feature)
ax.scatter(lons, lats, zorder=5, color="red", label="Samples collected")
ax.scatter(sample_lon, sample_lat, zorder=10, color="k", marker="D", s=50, label="Samples sequenced");
"""
Explanation: Now to add a few more features. First coastlines from cartopy's natural features toolbox. Then scatters of the samples we imported earlier
End of explanation
"""
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection=coord)
ax.set_extent([-42, -23, -60, -50], crs=coord)
bathy = ax.contourf(bathy_lon, bathy_lat, bathy_h, bathy_conts, transform=coord, cmap="Blues_r")
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color="k", alpha=0.5, linestyle="--")
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.ylines = True
gl.xlines = True
fig.colorbar(bathy, ax=ax, orientation="horizontal", label="Bathymetry (m)", shrink=0.7, pad=0.08, aspect=40)
ax.add_feature(feature)
ax.scatter(lons, lats, zorder=5, color="red", label="Samples collected")
ax.scatter(sample_lon, sample_lat, zorder=10, color="k", marker="D", s=50, label="Samples sequenced")
fig.legend(bbox_to_anchor=(0.12, 0.2), loc="lower left")
tr2 = ccrs.Stereographic(central_latitude=-55, central_longitude=-35)
sub_ax = plt.axes([0.63, 0.65, 0.2, 0.2], projection=tr2)
sub_ax.set_extent([-70, -15, -75, 10])
x_co = [-42, -42, -23, -23, -42]
y_co = [-60, -50, -50, -60, -60]
sub_ax.add_feature(feature)
sub_ax.plot(x_co, y_co, transform=coord, zorder=10, color="red")
ax.text(-38.5, -54.9, "South\nGeorgia", fontsize=14)
ax.text(-26.8, -58.2, "South\nSandwich\nIslands", fontsize=14);
"""
Explanation: To finish off the map we add a legend for the scatter plot, an inset map showing the area at a larger scale and some text identifying the islands
End of explanation
"""
Vvkmnn/books | AutomateTheBoringStuffWithPython/lesson42.ipynb | gpl-3.0

import openpyxl
"""
Explanation: Lesson 42:
Reading Excel Spreadsheets
The openpyxl module allows you to manipulate Excel sheets within Python.
Excel files have the following terminology:
* A collection of sheets is a workbook, and saved with a .xlsx extension.
* A workbook contains multiple sheets, each of which is a single spreadsheet, or worksheet.
* Each sheet has columns and rows, defined by letters and numbers, respectively.
* The intersection between a column and a row is a cell.
End of explanation
"""
# Import OS module to navigate directories
import os
# Change the directory to the excel file location, using relative and absolute paths as previously discussed.
os.chdir('files')
os.listdir()
"""
Explanation: We must first navigate to the directory containing the spreadsheets, which for this notebook is the subdirectory 'files'.
End of explanation
"""
workbook = openpyxl.load_workbook('example.xlsx')
type(workbook)
"""
Explanation: We must now open the workbook file.
End of explanation
"""
sheet = workbook.get_sheet_by_name('Sheet1')  # newer openpyxl versions: workbook['Sheet1']
type(sheet)
"""
Explanation: Once the workbook is loaded, we can interact with specific sheets by loading them via workbook methods.
End of explanation
"""
workbook.get_sheet_names()  # newer openpyxl versions: workbook.sheetnames
"""
Explanation: We can also use the .get_sheet_names() method to print all sheet names, in case we aren't sure.
End of explanation
"""
# Referencing a cell returns a Cell object; an attribute like .value is needed to read its contents
sheet['A1']
"""
Explanation: We can now interact with specific cells by creating cell objects, referenced via a sheet method.
End of explanation
"""
cell = sheet['A1']
cell.value
"""
Explanation: The .value attribute returns the actual value in the cell.
End of explanation
"""
print(str(cell.value))
print(str(sheet['A1'].value))
"""
Explanation: This particular cell returns a datetime reference from Excel via Python's own datetime module. A string value is available by passing into the str() function:
End of explanation
"""
print("The value in cell %s is '%s' and is type %s." %('A1', sheet['A1'].value, type(sheet['A1'].value)))
print("The value in cell %s is '%s' and is type %s." %('B1', sheet['B1'].value, type(sheet['B1'].value)))
print("The value in cell %s is '%s' and is type %s." %('C1', sheet['C1'].value, type(sheet['C1'].value)))
"""
Explanation: All cell values inherit their data types from Excel.
End of explanation
"""
# B1 Cell
sheet.cell(row = 1, column = 2)
"""
Explanation: You can also reference cells via rows and columns. Excel rows start at 1 and columns at A.
End of explanation
"""
for i in range(1,8):
print(i, sheet.cell(row=i, column=2).value)
"""
Explanation: This can be useful for iterative or looping operations.
End of explanation
"""
tensorflow/docs-l10n | site/ja/io/tutorials/audio.ipynb | apache-2.0

#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow IO Authors.
End of explanation
"""
!pip install tensorflow-io
"""
Explanation: Audio Data Preparation and Augmentation
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/io/tutorials/audio"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/audio.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/audio.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/io/tutorials/audio.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Overview
One of the biggest challenges in automatic speech recognition is the preparation and augmentation of audio data. Audio data analysis could be in the time or frequency domain, which adds additional complexity compared with other data sources such as images.
As part of the TensorFlow ecosystem, the tensorflow-io package provides quite a few useful audio-related APIs that help ease the preparation and augmentation of audio data.
Setup
Install the required packages, and restart the runtime
End of explanation
"""
import tensorflow as tf
import tensorflow_io as tfio
audio = tfio.audio.AudioIOTensor('gs://cloud-samples-tests/speech/brooklyn.flac')
print(audio)
"""
Explanation: Usage
Read an Audio File
In TensorFlow IO, the class tfio.audio.AudioIOTensor allows you to read an audio file into a lazily loaded IOTensor.
End of explanation
"""
audio_slice = audio[100:]
# remove last dimension
audio_tensor = tf.squeeze(audio_slice, axis=[-1])
print(audio_tensor)
"""
Explanation: In the above example, the Flac file brooklyn.flac is from a publicly accessible audio clip in Google Cloud.
The GCS address gs://cloud-samples-tests/speech/brooklyn.flac is used directly because GCS is a supported file system in TensorFlow. In addition to the Flac format, WAV, Ogg, MP3, and MP4A are also supported by AudioIOTensor with automatic file format detection.
AudioIOTensor is lazily loaded, so only the shape, dtype, and sample rate are shown initially. The shape of the AudioIOTensor is represented as [samples, channels], which means the audio clip loaded is a mono channel with 28979 samples in int16.
The content of the audio clip is only read as needed: either by converting AudioIOTensor to a Tensor through to_tensor(), or through slicing. Slicing is especially useful when only a small portion of a large audio clip is needed.
End of explanation
"""
from IPython.display import Audio
Audio(audio_tensor.numpy(), rate=audio.rate.numpy())
"""
Explanation: The audio can be played through:
End of explanation
"""
import matplotlib.pyplot as plt
tensor = tf.cast(audio_tensor, tf.float32) / 32768.0
plt.figure()
plt.plot(tensor.numpy())
"""
Explanation: It is more convenient to convert the tensor into floating-point numbers and show the audio clip as a graph:
End of explanation
"""
position = tfio.audio.trim(tensor, axis=0, epsilon=0.1)
print(position)
start = position[0]
stop = position[1]
print(start, stop)
processed = tensor[start:stop]
plt.figure()
plt.plot(processed.numpy())
"""
Explanation: Trim the Noise
Sometimes it makes sense to trim noise from the audio, which can be done through the API tfio.audio.trim. Returned from the API is a pair of [start, stop] positions for the segment:
End of explanation
"""
fade = tfio.audio.fade(
processed, fade_in=1000, fade_out=2000, mode="logarithmic")
plt.figure()
plt.plot(fade.numpy())
"""
Explanation: Fade In and Fade Out
One useful audio engineering technique is fading, which gradually increases or decreases the audio signal. This can be done through tfio.audio.fade, which supports different shapes of fades such as linear, logarithmic, or exponential.
End of explanation
"""
# Convert to spectrogram
spectrogram = tfio.audio.spectrogram(
fade, nfft=512, window=512, stride=256)
plt.figure()
plt.imshow(tf.math.log(spectrogram).numpy())
"""
Explanation: Spectrogram
Advanced audio processing often works on frequency changes over time. In tensorflow-io, a waveform can be converted to a spectrogram through tfio.audio.spectrogram:
End of explanation
"""
# Convert to mel-spectrogram
mel_spectrogram = tfio.audio.melscale(
spectrogram, rate=16000, mels=128, fmin=0, fmax=8000)
plt.figure()
plt.imshow(tf.math.log(mel_spectrogram).numpy())
# Convert to db scale mel-spectrogram
dbscale_mel_spectrogram = tfio.audio.dbscale(
mel_spectrogram, top_db=80)
plt.figure()
plt.imshow(dbscale_mel_spectrogram.numpy())
"""
Explanation: Additional transformations to different scales are also possible:
End of explanation
"""
# Freq masking
freq_mask = tfio.audio.freq_mask(dbscale_mel_spectrogram, param=10)
plt.figure()
plt.imshow(freq_mask.numpy())
"""
Explanation: SpecAugment
In addition to the data preparation and augmentation APIs mentioned above, the tensorflow-io package also provides advanced spectrogram augmentations, most notably the frequency and time masking discussed in SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition (Park et al., 2019).
Frequency Masking
In frequency masking, frequency channels [f0, f0 + f) are masked, where f is chosen from a uniform distribution from 0 to the frequency mask parameter F, and f0 is chosen from (0, ν − f), where ν is the number of frequency channels.
End of explanation
"""
# Time masking
time_mask = tfio.audio.time_mask(dbscale_mel_spectrogram, param=10)
plt.figure()
plt.imshow(time_mask.numpy())
"""
Explanation: Time Masking
In time masking, t consecutive time steps [t0, t0 + t) are masked, where t is chosen from a uniform distribution from 0 to the time mask parameter T, and t0 is chosen from [0, τ − t), where τ is the number of time steps.
End of explanation
"""
phoebe-project/phoebe2-docs | 2.1/tutorials/eclipse.ipynb | gpl-3.0

!pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Eclipse Detection
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # DEVELOPER MODE REQUIRED FOR VISIBLE_PARTIAL - DON'T USE FOR SCIENCE
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('mesh', times=[0.05], columns=['visibilities'])
"""
Explanation: Let's just compute the mesh at a single time-point that we know should be during egress.
End of explanation
"""
b.run_compute(eclipse_method='native')
afig, mplfig = b.plot(component='primary', fc='visibilities', xlim=(-0.5, 0.25), ylim=(-0.4, 0.4), show=True)
"""
Explanation: Native
The 'native' eclipse method computes what percentage (by area) of each triangle is visible at the current time. It also determines the centroid of the visible portion of each triangle.
Physical quantities (temperatures, intensities, velocities, etc) are computed at the vertices of each triangle, and this centroid is then used to determine the average quantity across the visible portion of the triangle (by assuming a linear gradient across the triangle).
Let's plot the visibilities (ratio of the area that is visible) as the color scale, with red being completely hidden and green being completely visible.
End of explanation
"""
b.run_compute(eclipse_method='visible_partial')
afig, mplfig = b.plot(component='primary', fc='visibilities', xlim=(-0.5, 0.25), ylim=(-0.4, 0.4), show=True)
"""
Explanation: Visible Partial
The 'visible partial' eclipse method simply determines which triangles are hidden, which are visible, and which are partially visible. It then assigns a visibility of 0.5 to any partially visible triangles - meaning they will contribute half of their intensities when integrated (assume that half of the area is visible). There are no longer any centroids - values are still computed at the vertices but are then averaged to be at the geometric center of EACH triangle.
Again, let's plot the visibilities (ratio of the area that is visible) as the color scale, with red being completely hidden and green being completely visible.
End of explanation
"""
4dsolutions/Python5 | Extended Precision.ipynb | mit

%%latex
\begin{align}
e = lim_{n \to \infty} (1 + 1/n)^n
\end{align}
from math import e, pi
print(e) # as a floating point number
print(pi)
"""
Explanation: Python for Everyone!<br/>Oregon Curriculum Network
Extended Precision with the Native Decimal Type
With LaTeX and Generator Functions
<img src="https://c8.staticflickr.com/6/5691/30269841575_8bea763a54.jpg" alt="TAOCP"
style="width: 50%; height: 50%"/>
The Python Standard Library provides a decimal module containing the class Decimal. A Decimal object behaves according to the base 10 algorithms we learn in school.
The precision i.e. number of decimal places to which computations are carried out, is set globally, or within the scope of a context manager.
Note also that Jupyter Notebooks are able to render LaTeX when commanded to do so with the %%latex magic command. As a first example, here is an expression for the mathematical constant e.
End of explanation
"""
import decimal
with decimal.localcontext() as ctx: # context manager
ctx.prec = 1000
n = decimal.Decimal(1e102)
e = (1 + 1/n) ** n
e_1000_places = """2.7182818284590452353602874713526624977572470936999595749669
6762772407663035354759457138217852516642742746639193200305992181741359662904357
2900334295260595630738132328627943490763233829880753195251019011573834187930702
1540891499348841675092447614606680822648001684774118537423454424371075390777449
9206955170276183860626133138458300075204493382656029760673711320070932870912744
3747047230696977209310141692836819025515108657463772111252389784425056953696770
7854499699679468644549059879316368892300987931277361782154249992295763514822082
6989519366803318252886939849646510582093923982948879332036250944311730123819706
8416140397019837679320683282376464804295311802328782509819455815301756717361332
0698112509961818815930416903515988885193458072738667385894228792284998920868058
2574927961048419844436346324496848756023362482704197862320900216099023530436994
1849146314093431738143640546253152096183690888707016768396424378140592714563549
0613031072085103837505101157477041718986106873969655212671546889570350354"""
e_1000_places = e_1000_places.replace("\n",str())
str(e)[2:103] == e_1000_places[2:103] # skipping "2." and going to 100 decimals
"""
Explanation: Let's show setting precision to a thousand places within a scope defined by decimal.localcontext. We set the precision internal to that scope. By default, precision is 28 places. We set n to 1 followed by 102 zeros, so a very large number. The resulting computation matches a published value for e to 100 decimal places.
End of explanation
"""
with decimal.localcontext() as ctx: # context manager
ctx.prec = 1000
def converge(): # generator function
n = decimal.Decimal('10')
while True:
yield (1 + 1/n) ** n
n = n * 100 # two more zeros
f = converge()
for _ in range(9):
next(f) # f.__next__() <--- not quite like Python 2.x (f.next())
r = next(f)
r
str(r)[:20] == e_1000_places[:20]
"""
Explanation: In the context below, the value of n starts at 10 and then gets two more zeros every time around the loop.
The yield keyword is similar to return in handing back an object; however, a generator function then pauses, picking up where it left off when nudged by next() (which triggers __next__ internally).
Generator functions do not forget their internal state as they advance through next values.
Note that when a Decimal type object operates with an integer, that integer is coerced (cast) as a Decimal object.
End of explanation
"""
%%latex
\begin{align}
\frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum^\infty_{k=0} \frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}}
\end{align}
%%latex
\begin{align*}
\frac{1}{\pi} = \frac{2\sqrt{2}}{9801}\sum_{n = 0}^{\infty}\frac{(4n)!(1103 + 26390n)}{4^{4n}(n!)^{4}99^{4n}} = A\sum_{n = 0}^{\infty}B_{n}C_{n},\\
A = \frac{2\sqrt{2}}{9801},\,\\
B_{n} = \frac{(4n)!(1103 + 26390n)}{4^{4n}(n!)^{4}},\,\\
C_{n} = \frac{1}{99^{4n}}
\end{align*}
from math import factorial
from decimal import Decimal as D
decimal.getcontext().prec=100
A = (2 * D('2').sqrt()) / 9801
A
def B():
n = 0
while True:
numerator = factorial(4 * n) * (D(1103) + 26390 * n)
denominator = (4 ** (4*n))*(factorial(n))**4
yield numerator / denominator
n += 1
def C():
n = 0
while True:
yield 1 / (D('99')**(4*n))
n += 1
def Pi():
Bn = B()
Cn = C()
the_sum = 0
while True:
the_sum += next(Bn) * next(Cn)
yield 1/(A * the_sum)
pi = Pi()
next(pi)
next(pi)
next(pi)
pi_1000_places = """3.1415926535 8979323846 2643383279 5028841971 6939937510 5820974944
5923078164 0628620899 8628034825 3421170679 8214808651 3282306647 0938446095 5058223172
5359408128 4811174502 8410270193 8521105559 6446229489 5493038196 4428810975 6659334461
2847564823 3786783165 2712019091 4564856692 3460348610 4543266482 1339360726 0249141273
7245870066 0631558817 4881520920 9628292540 9171536436 7892590360 0113305305 4882046652
1384146951 9415116094 3305727036 5759591953 0921861173 8193261179 3105118548 0744623799
6274956735 1885752724 8912279381 8301194912 9833673362 4406566430 8602139494 6395224737
1907021798 6094370277 0539217176 2931767523 8467481846 7669405132 0005681271 4526356082
7785771342 7577896091 7363717872 1468440901 2249534301 4654958537 1050792279 6892589235
4201995611 2129021960 8640344181 5981362977 4771309960 5187072113 4999999837 2978049951
0597317328 1609631859 5024459455 3469083026 4252230825 3344685035 2619311881 7101000313
7838752886 5875332083 8142061717 7669147303 5982534904 2875546873 1159562863 8823537875
9375195778 1857780532 1712268066 1300192787 6611195909 2164201989"""
pi_1000_places = pi_1000_places.replace(" ","").replace("\n","")
r = next(pi)
str(r)[:20] == pi_1000_places[:20]
"""
Explanation: <img src="https://sciencenode.org/img/img_2012/stamp.JPG" alt="Ramanujan Postage Stamp"
style="width: 50%; height: 50%"/>
The fancier LaTeX below renders a famous equation by Ramanujan, which has been shown to converge to 1/π and therefore π very quickly, relative to many other algorithms. I don't think anyone understands how some random guy could think up such a miraculous equation.
End of explanation
"""
dipanjank/ml | data_analysis/blood_transfusion_uci.ipynb | gpl-3.0

import numpy as np
import pandas as pd
%pylab inline
pylab.style.use('ggplot')
"""
Explanation: <h1 align="center">UCI machine-learning-databases/blood-transfusion</h1>
End of explanation
"""
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data'
data_df = pd.read_csv(url)
"""
Explanation: Getting the Data
End of explanation
"""
data_df.head()
cols = [c.lower().split()[0] for c in data_df.columns]
cols[-1] = 'class_name'
data_df.columns = cols
data_df.head()
data_df.head()
"""
Explanation: To demonstrate the RFMTC marketing model (a modified version of RFM), this study
adopted the donor database of Blood Transfusion Service Center in Hsin-Chu City
in Taiwan. The center passes their blood transfusion service bus to one
university in Hsin-Chu City to gather blood donated about every three months. To
build an RFMTC model, we selected 748 donors at random from the donor database.
These 748 donor data, each one included R (Recency - months since last
donation), F (Frequency - total number of donation), M (Monetary - total blood
donated in c.c.), T (Time - months since first donation), and a binary variable
representing whether he/she donated blood in March 2007 (1 stand for donating
blood; 0 stands for not donating blood).
Attribute Information:
Given is the variable name, variable type, the measurement unit and a brief
description. The "Blood Transfusion Service Center" is a classification problem.
The order of this listing corresponds to the order of numerals along the rows of
the database.
R (Recency - months since last donation),
F (Frequency - total number of donation),
M (Monetary - total blood donated in c.c.),
T (Time - months since first donation), and
a binary variable representing whether he/she donated blood in March 2007 (1 stand for donating blood; 0 stands for not donating blood).
End of explanation
"""
counts = data_df['class_name'].value_counts()
counts.plot(kind='bar')
import seaborn as sns
sns.pairplot(data_df, hue='class_name')
"""
Explanation: Check for Class Imbalance
End of explanation
"""
features = data_df.drop('class_name', axis=1)
labels = data_df['class_name']
corrs = features.corr()
sns.heatmap(corrs, annot=True)
features.corrwith(labels).plot(kind='bar')
pylab.xticks(rotation=30)
"""
Explanation: Feature Correlations
End of explanation
"""
from sklearn.feature_selection import f_classif
f_stats, p_vals = f_classif(features, labels)  # f_classif returns F-statistics, not t-statistics
test_results = pd.DataFrame(np.column_stack([f_stats, p_vals]),
                            index=features.columns.copy(),
                            columns=['f_stats', 'p_vals'])
test_results.plot(kind='bar', subplots=True)
"""
Explanation: Feature Importances
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import StratifiedKFold, cross_val_score
estimator = GaussianNB(priors=[0.5, 0.5])
kbest = SelectKBest(f_classif, k=2)
pipeline = Pipeline([('selector', kbest), ('model', estimator)])
cv = StratifiedKFold(n_splits=10, shuffle=True)
nb_scores = cross_val_score(pipeline, features, labels, cv=cv)
nb_scores = pd.Series(nb_scores)
nb_scores.plot(kind='bar')
"""
Explanation: Approach 1: Gaussian Naive Bayes
End of explanation
"""
from sklearn.metrics import f1_score
def calc_scores(train_idx, test_idx):
"""Take train and test data for each CV fold and calculate scores for train and test splits."""
train_data, train_labels = features.iloc[train_idx], labels.iloc[train_idx]
test_data, test_labels = features.iloc[test_idx], labels.iloc[test_idx]
estimator = GaussianNB(priors=[0.5, 0.5])
kbest = SelectKBest(f_classif, k=2)
pipeline = Pipeline([('selector', kbest), ('model', estimator)])
pipeline = pipeline.fit(train_data, train_labels)
train_score = f1_score(train_labels, pipeline.predict(train_data))
test_score = f1_score(test_labels, pipeline.predict(test_data))
return (train_score, test_score)
cv = StratifiedKFold(n_splits=10, shuffle=True)
fold_scores = [calc_scores(train_idx, test_idx)
for train_idx, test_idx
in cv.split(features, np.ones(len(features)))]
fold_scores = pd.DataFrame(fold_scores, columns=['train_score', 'test_score'])
fold_scores.plot(kind='bar')
fold_scores.mean()
cv = StratifiedKFold(n_splits=5, shuffle=True)
fold_scores = [calc_scores(train_idx, test_idx)
for train_idx, test_idx
in cv.split(features, np.ones(len(features)))]
fold_scores = pd.DataFrame(fold_scores, columns=['train_score', 'test_score'])
fold_scores.plot(kind='bar')
fold_scores.mean()
"""
Explanation: Improving the Model
Check for Underfit vs Overfit
Next, we compare train vs. test scores for each fold.
If train scores are higher than test scores, then the model is overfit.
If both scores are nearly equal, it implies the model is not overfit. So we either need to give more data to each fold or use a different model.
End of explanation
"""
possnfiffer/py-emde | Py-EMDE-Kenya-GLOBE-02.ipynb | bsd-2-clause

import requests
import json
r = requests.get('http://3d-kenya.chordsrt.com/instruments/2.geojson?start=2017-03-01T00:00&end=2017-05-01T00:00')
if r.status_code == 200:
d = r.json()['Data']
else:
print("Please verify that the URL for the weather station is correct. You may just have to try again with a different/smaller date range or different dates.")
"""
Explanation: Py-EMDE
Python Email Data Entry
The following code can gather data from weather stations reporting to the CHORDS portal, package it up into the proper format for GLOBE Email Data Entry , and send it using the SparkPost API.
In order to send email, you'll need to setup SparkPost by creating an account and confirming you own the domain you'll be sending emails from. You'll also need to create a SparkPost API key and set the environment variable SPARKPOST_API_KEY equal to the value of your API key. This script can be further modified to use a different method for sending email if needed.
This code will contact the CHORDS Portal and collect all the measurement data from the specified instrument, in the specified date range.
End of explanation
"""
d
"""
Explanation: Now the collected data can be viewed simply by issuing the following command
End of explanation
"""
for o in d:
if o['variable_shortname'] == 'msl1':
print(o['time'], o['value'], o['units'])
"""
Explanation: This code is useful for looking at a specific measurement dataset
End of explanation
"""
# Placeholder names for the 14 DAVAD fields; note that make_data_set() below
# shadows this with its own list.
davad_tuple = (
'f1',
'f2',
'f3',
'f4',
'f5',
'f6',
'f7',
'f8',
'f9',
'f10',
'f11',
'f12',
'f13',
'f14',
)
def make_data_set(d):
data_list = []
for o in d:
if o['variable_shortname'] == 'msl1':
t = o['time'].split("T")
tdate = t[0].replace('-', '')
ttime = ''.join(t[1].split(':')[:-1])
pressure = o['value']
if ttime.endswith('00') or ttime.endswith('15') or ttime.endswith('30') or ttime.endswith('45'):
davad_tuple = ['DAVAD', 'GLID4TT4', 'SITE_ID:45013']+['X']*11
davad_tuple[3] = tdate + ttime
davad_tuple[13] = str(pressure)
data_list.append('{}'.format(' '.join(davad_tuple)))
#print('//AA\n{}\n//ZZ'.format('\n'.join(data_list)))
return data_list
"""
Explanation: A modified version of the above code will format the data properly for GLOBE Email Data Entry
End of explanation
"""
make_data_set(d)
"""
Explanation: To see the data formatted in GLOBE Email Data Entry format, comment out the return data_list command above, uncomment the print command right above it, then issue the following command
End of explanation
"""
def email_data(data_list):
import os
from sparkpost import SparkPost
FROM_EMAIL = os.getenv('FROM_EMAIL')
BCC_EMAIL = os.getenv('BCC_EMAIL')
# Send email using the SparkPost api
sp = SparkPost() # uses environment variable named SPARKPOST_API_KEY
response = sp.transmission.send(
recipients=['data@globe.gov'],
bcc=[BCC_EMAIL],
text='//AA\n{}\n//ZZ'.format('\n'.join(data_list)),
from_email=FROM_EMAIL,
subject='DATA'
)
print(response)
"""
Explanation: To email the data set to GLOBE's email data entry server, run the following code.
End of explanation
"""
email_data(make_data_set(d))
"""
Explanation: Finally, this command sends the email
End of explanation
"""
mne-tools/mne-tools.github.io | 0.24/_downloads/6d98b103d247000f4433763dd76607c0/25_background_filtering.ipynb | bsd-3-clause

import numpy as np
from numpy.fft import fft, fftfreq
from scipy import signal
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
from mne.viz import plot_filter, plot_ideal_filter
import mne
sfreq = 1000.
f_p = 40.
flim = (1., sfreq / 2.) # limits for plotting
"""
Explanation: Background information on filtering
Here we give some background information on filtering in general, and
how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in
Parks & Burrus (1987) :footcite:ParksBurrus1987
and Ifeachor & Jervis (2002) :footcite:IfeachorJervis2002,
and for filtering in an M/EEG context we recommend reading
Widmann et al. (2015) :footcite:WidmannEtAl2015.
<div class="alert alert-info"><h4>Note</h4><p>This tutorial goes pretty deep into the mathematics of filtering and the
design decisions that go into choosing a filter. If you just want to know
how to apply the default filters in MNE-Python to your data, skip this
tutorial and read `tut-filter-resample` instead (but someday, you
should come back and read this one too 🙂).</p></div>
Problem statement
Practical issues with filtering electrophysiological data are covered
in Widmann et al. (2012) :footcite:WidmannSchroger2012, where they
conclude with this statement:
Filtering can result in considerable distortions of the time course
(and amplitude) of a signal as demonstrated by VanRullen (2011)
:footcite:`VanRullen2011`.
Thus, filtering should not be used lightly. However, if effects of
filtering are cautiously considered and filter artifacts are minimized,
a valid interpretation of the temporal dynamics of filtered
electrophysiological data is possible and signals missed otherwise
can be detected with filtering.
In other words, filtering can increase signal-to-noise ratio (SNR), but if it
is not used carefully, it can distort data. Here we hope to cover some
filtering basics so users can better understand filtering trade-offs and why
MNE-Python has chosen particular defaults.
Filtering basics
Let's get some of the basic math down. In the frequency domain, digital
filters have a transfer function that is given by:
\begin{align}H(z) &= \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots + b_M z^{-M}}
                         {1 + a_1 z^{-1} + a_2 z^{-2} + \ldots + a_N z^{-N}} \\
      &= \frac{\sum_{k=0}^M b_k z^{-k}}{1 + \sum_{k=1}^N a_k z^{-k}}\end{align}
In the time domain, the numerator coefficients $b_k$ and denominator
coefficients $a_k$ can be used to obtain our output data
$y(n)$ in terms of our input data $x(n)$ as:
\begin{align}
y(n) &= b_0 x(n) + b_1 x(n-1) + \ldots + b_M x(n-M)
- a_1 y(n-1) - a_2 y(n - 2) - \ldots - a_N y(n - N)\\
&= \sum_{k=0}^M b_k x(n-k) - \sum_{k=1}^N a_k y(n-k)\end{align}
In other words, the output at time $n$ is determined by a sum over
1. the numerator coefficients $b_k$, which get multiplied by
the previous input values $x(n-k)$, and
2. the denominator coefficients $a_k$, which get multiplied by
the previous output values $y(n-k)$.
Note that these summations correspond to (1) a weighted moving average and
(2) an autoregression.
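The recursion above is easy to verify numerically. Here is a minimal sketch with arbitrary illustration coefficients (not from any real filter design); SciPy's :func:`scipy.signal.lfilter` implements exactly this difference equation:

```python
import numpy as np
from scipy import signal

# arbitrary illustration coefficients (not from any real filter design)
b = [0.2, 0.3, 0.2]    # numerator (weighted moving average) terms b_k
a = [1.0, -0.5, 0.25]  # denominator (autoregressive) terms, with a_0 = 1
x = np.random.RandomState(0).randn(50)  # input data x(n)

# direct implementation of the difference equation above
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
    y[n] -= sum(ak * y[n - k] for k, ak in enumerate(a)
                if k >= 1 and n - k >= 0)

# scipy.signal.lfilter computes the same recursion
assert np.allclose(y, signal.lfilter(b, a, x))
```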
Filters are broken into two classes: FIR_ (finite impulse response) and
IIR_ (infinite impulse response) based on these coefficients.
FIR filters use a finite number of numerator
coefficients $b_k$ ($\forall k, a_k=0$), and thus each output
value of $y(n)$ depends only on the $M$ previous input values.
IIR filters depend on the previous input and output values, and thus can have
effectively infinite impulse responses.
As outlined in Parks & Burrus (1987) :footcite:ParksBurrus1987,
FIR and IIR have different trade-offs:
* A causal FIR filter can be linear-phase -- i.e., the same time delay
across all frequencies -- whereas a causal IIR filter cannot. The phase
and group delay characteristics are also usually better for FIR filters.
* IIR filters can generally have a steeper cutoff than an FIR filter of
equivalent order.
* IIR filters are generally less numerically stable, in part due to
error that accumulates in their recursive calculations.
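One way to see the steepness trade-off concretely is to compare how many coefficients each class needs to meet the same specification. A rough sketch using SciPy's order-estimation helpers (the 40 Hz / 50 Hz / 40 dB numbers are arbitrary illustration values, not MNE defaults):

```python
from scipy import signal

sfreq = 1000.              # sample rate (Hz), illustrative
f_pass, f_stop = 40., 50.  # pass-band and stop-band edges (Hz)
gpass, gstop = 1., 40.     # pass-band ripple / stop-band attenuation (dB)

# IIR: order a Butterworth filter needs to meet this spec
iir_order, _ = signal.buttord(f_pass, f_stop, gpass, gstop, fs=sfreq)

# FIR: taps a Kaiser-window design needs for the same transition width
# and attenuation (width is normalized to the Nyquist frequency)
fir_ntaps, _ = signal.kaiserord(gstop, (f_stop - f_pass) / (sfreq / 2.))

# The FIR design needs roughly an order of magnitude more coefficients,
# but in exchange it can be exactly linear-phase and is always stable.
assert fir_ntaps > iir_order
```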
In MNE-Python we default to using FIR filtering. As noted in Widmann et al.
(2015) :footcite:WidmannEtAl2015:
Despite IIR filters often being considered as computationally more
efficient, they are recommended only when high throughput and sharp
cutoffs are required
(Ifeachor and Jervis, 2002 :footcite:`IfeachorJervis2002`, p. 321)...
FIR filters are easier to control, are always stable, have a
well-defined passband, can be corrected to zero-phase without
additional computations, and can be converted to minimum-phase.
We therefore recommend FIR filters for most purposes in
electrophysiological data analysis.
When designing a filter (FIR or IIR), there are always trade-offs that
need to be considered, including but not limited to:
1. Ripple in the pass-band
2. Attenuation of the stop-band
3. Steepness of roll-off
4. Filter order (i.e., length for FIR filters)
5. Time-domain ringing
In general, the sharper something is in frequency, the broader it is in time,
and vice-versa. This is a fundamental time-frequency trade-off, and it will
show up below.
FIR Filters
First, we will focus on FIR filters, which are the default filters used by
MNE-Python.
Designing FIR filters
Here we'll try to design a low-pass filter and look at trade-offs in terms
of time- and frequency-domain filter characteristics. Later, in
tut-effect-on-signals, we'll look at how such filters can affect
signals when they are used.
First let's import some useful tools for filtering, and set some default
values for our data that are reasonable for M/EEG.
End of explanation
"""
nyq = sfreq / 2. # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
third_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.]
ax = plt.subplots(1, figsize=third_height)[1]
plot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim)
"""
Explanation: Take for example an ideal low-pass filter, which would give a magnitude
response of 1 in the pass-band (up to frequency $f_p$) and a magnitude
response of 0 in the stop-band (down to frequency $f_s$) such that
$f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity):
End of explanation
"""
n = int(round(0.1 * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (0.1 s)', flim=flim, compensate=True)
"""
Explanation: This filter hypothetically achieves zero ripple in the frequency domain,
perfect attenuation, and perfect steepness. However, due to the discontinuity
in the frequency response, the filter would require infinite ringing in the
time domain (i.e., infinite order) to be realized. Another way to think of
this is that a rectangular window in the frequency domain is actually a sinc_
function in the time domain, which requires an infinite number of samples
(and thus infinite time) to represent. So although this filter has ideal
frequency suppression, it has poor time-domain characteristics.
Let's try to naïvely make a brick-wall filter of length 0.1 s, and look
at the filter itself in the time domain and the frequency domain:
End of explanation
"""
n = int(round(1. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (1.0 s)', flim=flim, compensate=True)
"""
Explanation: This is not so good! Making the filter 10 times longer (1 s) gets us a
slightly better stop-band suppression, but still has a lot of ringing in
the time domain. Note the x-axis is an order of magnitude longer here,
and the filter has a correspondingly much longer group delay (again equal
to half the filter length, or 0.5 seconds):
End of explanation
"""
n = int(round(10. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (10.0 s)', flim=flim, compensate=True)
"""
Explanation: Let's make the stop-band tighter still with a longer filter (10 s),
with a resulting larger x-axis:
End of explanation
"""
trans_bandwidth = 10 # 10 Hz transition band
f_s = f_p + trans_bandwidth # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=third_height)[1]
title = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)
plot_ideal_filter(freq, gain, ax, title=title, flim=flim)
"""
Explanation: Now we have very sharp frequency suppression, but our filter rings for the
entire 10 seconds. So this naïve method is probably not a good way to build
our low-pass filter.
Fortunately, there are multiple established methods to design FIR filters
based on desired response characteristics. These include:
1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)
2. Windowed FIR design (:func:`scipy.signal.firwin2`,
:func:`scipy.signal.firwin`, and `MATLAB fir2`_)
3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)
4. Frequency-domain design (construct filter in Fourier
domain and use an :func:`IFFT <numpy.fft.ifft>` to invert it)
<div class="alert alert-info"><h4>Note</h4><p>Remez and least squares designs have advantages when there are
"do not care" regions in our frequency response. However, we want
well controlled responses in all frequency regions.
Frequency-domain construction is good when an arbitrary response
is desired, but generally less clean (due to sampling issues) than
a windowed approach for more straightforward filter applications.
Since our filters (low-pass, high-pass, band-pass, band-stop)
are fairly simple and we require precise control of all frequency
regions, we will primarily use and explore windowed FIR design.</p></div>
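As a sketch of how the first three design approaches look in SciPy (the band edges mirror the 40 Hz lowpass used throughout; the 301-tap length is an arbitrary illustration value):

```python
import numpy as np
from scipy import signal

sfreq, f_p, f_s = 1000., 40., 50.  # sample rate and band edges (Hz)
n = 301                            # filter length in taps (odd, ~0.3 s)

# 1. Remez (equiripple) design: one desired gain per band
h_remez = signal.remez(n, [0., f_p, f_s, sfreq / 2.], [1., 0.], fs=sfreq)

# 2. Windowed design from a sampled frequency response
h_win = signal.firwin2(n, [0., f_p, f_s, sfreq / 2.], [1., 1., 0., 0.],
                       fs=sfreq)

# 3. Least-squares design over the same bands
h_ls = signal.firls(n, [0., f_p, f_s, sfreq / 2.], [1., 1., 0., 0.],
                    fs=sfreq)

# all three give linear-phase (symmetric) lowpass kernels of length n
for h in (h_remez, h_win, h_ls):
    assert h.shape == (n,) and np.allclose(h, h[::-1])
```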
If we relax our frequency-domain filter requirements a little bit, we can
use these functions to construct a lowpass filter that instead has a
transition band, or a region between the pass frequency $f_p$
and stop frequency $f_s$, e.g.:
End of explanation
"""
n = int(round(1. * sfreq)) + 1
h = signal.firwin2(n, freq, gain, fs=sfreq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (1.0 s)',
flim=flim, compensate=True)
"""
Explanation: Accepting a shallower roll-off of the filter in the frequency domain makes
our time-domain response potentially much better. We end up with a more
gradual slope through the transition region, but a much cleaner time
domain signal. Here again for the 1 s filter:
End of explanation
"""
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, fs=sfreq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.5 s)',
flim=flim, compensate=True)
"""
Explanation: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable
stop-band attenuation:
End of explanation
"""
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, fs=sfreq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.2 s)',
flim=flim, compensate=True)
"""
Explanation: But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s),
our effective stop frequency gets pushed out past 60 Hz:
End of explanation
"""
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, fs=sfreq)
plot_filter(h, sfreq, freq, gain, 'Windowed 25 Hz transition (0.2 s)',
flim=flim, compensate=True)
"""
Explanation: If we want a filter that is only 0.2 s long, we should
probably use something more like a 25 Hz transition band
(0.2 s = 5 cycles @ 25 Hz):
End of explanation
"""
h_min = signal.minimum_phase(h)
plot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim)
"""
Explanation: So far, we have only discussed non-causal filtering, which means that each
sample at each time point $t$ is filtered using samples that come
after ($t + \Delta t$) and before ($t - \Delta t$) the current
time point $t$.
In this sense, each sample is influenced by samples that come both before
and after it. This is useful in many cases, especially because it does not
delay the timing of events.
However, sometimes it can be beneficial to use causal filtering,
whereby each sample $t$ is filtered using only time points that came
before it. One common way to achieve this is with a minimum-phase filter.
Note that its delay is variable (whereas for linear/zero-phase filters it
is constant) but small in the pass-band. Unlike zero-phase filters, which
require time-shifting backward the output of a linear-phase filtering stage
(and thus becoming non-causal), minimum-phase filters do not require any
compensation to achieve small delays in the pass-band. Note that as an
artifact of the minimum phase filter construction step, the filter does
not end up being as steep as the linear/zero-phase version.
We can construct a minimum-phase filter from our existing linear-phase
filter with the :func:scipy.signal.minimum_phase function, and note
that the falloff is not as steep:
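The constant vs. variable delay can be checked directly with :func:`scipy.signal.group_delay`. A small sketch with illustrative (non-MNE-default) parameters; note that SciPy's default homomorphic method in :func:`scipy.signal.minimum_phase` yields a filter whose magnitude approximates the square root of the original:

```python
import numpy as np
from scipy import signal

sfreq, f_p = 1000., 40.  # illustrative values, not MNE defaults
h_lin = signal.firwin(101, f_p, fs=sfreq)  # linear-phase (symmetric) lowpass
h_min = signal.minimum_phase(h_lin)        # minimum-phase version

w, gd_lin = signal.group_delay((h_lin, 1.), fs=sfreq)
_, gd_min = signal.group_delay((h_min, 1.), fs=sfreq)
passband = w < f_p

# linear phase: constant delay of (101 - 1) / 2 = 50 samples everywhere
assert np.allclose(gd_lin[passband], (len(h_lin) - 1) / 2., atol=1e-3)
# minimum phase: the delay varies with frequency but stays much smaller
assert gd_min[passband].max() < gd_lin[passband].max()
```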
End of explanation
"""
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur) + 1)
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig = x.copy()
rng = np.random.RandomState(0)
x += rng.randn(len(x)) / 1000.
x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000.
"""
Explanation: Applying FIR filters
Now lets look at some practical effects of these filters by applying
them to some data.
Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
plus noise (random and line). Note that the original clean signal contains
frequency content in both the pass band and transition bands of our
low-pass filter.
End of explanation
"""
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin', verbose=True)
x_v16 = np.convolve(h, x)
# this is the linear->zero phase, causal-to-non-causal conversion / shift
x_v16 = x_v16[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.16 default', flim=flim,
compensate=True)
"""
Explanation: Filter it with a shallow cutoff, linear-phase FIR (which allows us to
compensate for the constant filter delay):
End of explanation
"""
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
# filter_dur = 6.6 / transition_band # sec
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin2', verbose=True)
x_v14 = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.14 default', flim=flim,
compensate=True)
"""
Explanation: Filter it with a different design method fir_design="firwin2", and also
compensate for the constant filter delay. This method does not produce
quite as sharp a transition compared to fir_design="firwin", despite
being twice as long:
End of explanation
"""
transition_band = 0.5 # Hz
f_s = f_p + transition_band
filter_dur = 10. # sec
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
h_trans_bandwidth=transition_band,
filter_length='%ss' % filter_dur,
fir_design='firwin2', verbose=True)
x_v13 = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]
# the effective h is one that is applied to the time-reversed version of itself
h_eff = np.convolve(h, h[::-1])
plot_filter(h_eff, sfreq, freq, gain, 'MNE-Python 0.13 default', flim=flim,
compensate=True)
"""
Explanation: Let's also filter with the MNE-Python 0.13 default, which is a
long-duration, steep cutoff FIR that gets applied twice:
End of explanation
"""
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5 # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim, compensate=True)
"""
Explanation: Let's also filter it with the MNE-C default, which is a long-duration
steep-slope FIR filter designed using frequency-domain techniques:
End of explanation
"""
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
phase='minimum', fir_design='firwin',
verbose=True)
x_min = np.convolve(h, x)
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'Minimum-phase filter', flim=flim)
"""
Explanation: And now an example of a minimum-phase filter:
End of explanation
"""
axes = plt.subplots(1, 2)[1]
def plot_signal(x, offset):
"""Plot a signal."""
t = np.arange(len(x)) / sfreq
axes[0].plot(t, x + offset)
axes[0].set(xlabel='Time (s)', xlim=t[[0, -1]])
X = fft(x)
freqs = fftfreq(len(x), 1. / sfreq)
mask = freqs >= 0
X = X[mask]
freqs = freqs[mask]
axes[1].plot(freqs, 20 * np.log10(np.maximum(np.abs(X), 1e-16)))
axes[1].set(xlim=flim)
yscale = 30
yticklabels = ['Original', 'Noisy', 'FIR-firwin (0.16)', 'FIR-firwin2 (0.14)',
'FIR-steep (0.13)', 'FIR-steep (MNE-C)', 'Minimum-phase']
yticks = -np.arange(len(yticklabels)) / yscale
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_v16, offset=yticks[2])
plot_signal(x_v14, offset=yticks[3])
plot_signal(x_v13, offset=yticks[4])
plot_signal(x_mne_c, offset=yticks[5])
plot_signal(x_min, offset=yticks[6])
axes[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-len(yticks) / yscale, 1. / yscale],
yticks=yticks, yticklabels=yticklabels)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.tight_layout()
plt.show()
"""
Explanation: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
attenuation, but it comes at a cost of potential
ringing (long-lasting ripples) in the time domain. Ringing can occur with
steep filters, especially in signals with frequency content around the
transition band. Our Morlet wavelet signal has power in our transition band,
and the time-domain ringing is thus more pronounced for the steep-slope,
long-duration filter than the shorter, shallower-slope filter:
End of explanation
"""
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim,
compensate=True)
x_shallow = signal.sosfiltfilt(sos, x)
del sos
"""
Explanation: IIR filters
MNE-Python also offers IIR filtering functionality that is based on the
methods from :mod:scipy.signal. Specifically, we use the general-purpose
functions :func:scipy.signal.iirfilter and :func:scipy.signal.iirdesign,
which provide unified interfaces to IIR filter design.
Designing IIR filters
Let's continue with our design of a 40 Hz low-pass filter and look at
some trade-offs of different IIR filters.
Often the default IIR filter is a Butterworth filter_, which is designed
to have a maximally flat pass-band. Let's look at a few filter orders,
i.e., a few different number of coefficients used and therefore steepness
of the filter:
<div class="alert alert-info"><h4>Note</h4><p>Notice that the group delay (which is related to the phase) of
the IIR filters below are not constant. In the FIR case, we can
design so-called linear-phase filters that have a constant group
delay, and thus compensate for the delay (making the filter
non-causal) if necessary. This cannot be done with IIR filters, as
they have a non-linear phase (non-constant group delay). As the
filter order increases, the phase distortion near and in the
transition band worsens. However, if non-causal (forward-backward)
filtering can be used, e.g. with :func:`scipy.signal.filtfilt`,
these phase issues can theoretically be mitigated.</p></div>
End of explanation
"""
iir_params = dict(order=8, ftype='butter')
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain, 'Butterworth order=8', flim=flim,
compensate=True)
x_steep = signal.sosfiltfilt(filt['sos'], x)
"""
Explanation: The falloff of this filter is not very steep.
<div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS)
by using :func:`scipy.signal.sosfilt` and, under the
hood, :func:`scipy.signal.zpk2sos` when passing the
``output='sos'`` keyword argument to
:func:`scipy.signal.iirfilter`. The filter definitions
given `above <tut-filtering-basics>` use the polynomial
numerator/denominator (sometimes called "tf") form ``(b, a)``,
which is theoretically equivalent to the SOS form used here.
In practice, however, the SOS form can give much better results
due to issues with numerical precision (see
:func:`scipy.signal.sosfilt` for an example), so SOS should be
used whenever possible.</p></div>
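The precision issue is easy to demonstrate with a high-order design; a minimal sketch (order 20 is chosen only to exaggerate the effect):

```python
import numpy as np
from scipy import signal

sfreq, f_p = 1000., 40.
order = 20  # deliberately high, to exaggerate the precision problem

b, a = signal.butter(order, f_p, btype='low', fs=sfreq, output='ba')
sos = signal.butter(order, f_p, btype='low', fs=sfreq, output='sos')

# Poles recovered section-by-section from the SOS form stay inside the
# unit circle, as the design intends:
sos_poles = np.concatenate([np.roots(section[3:]) for section in sos])
assert np.all(np.abs(sos_poles) < 1.)

# Recovering the same poles from the expanded (b, a) polynomial is badly
# conditioned: at high order they can scatter outside the unit circle,
# making lfilter(b, a, ...) numerically unstable.
print('max |pole| from (b, a):', np.abs(np.roots(a)).max())
```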
Let's increase the order, and note that now we have better attenuation,
with a longer impulse response. Let's also switch to using the MNE filter
design function, which simplifies a few things and gives us some information
about the resulting filter:
End of explanation
"""
iir_params.update(ftype='cheby1',
rp=1., # dB of acceptable pass-band ripple
)
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=1 dB', flim=flim, compensate=True)
"""
Explanation: There are other types of IIR filters that we can use. For a complete list,
check out the documentation for :func:scipy.signal.iirdesign. Let's
try a Chebychev (type I) filter, which trades off ripple in the pass-band
to get better attenuation in the stop-band:
End of explanation
"""
iir_params['rp'] = 6.
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=6 dB', flim=flim,
compensate=True)
"""
Explanation: If we can live with even more ripple, we can get it slightly steeper,
but the impulse response begins to ring substantially longer (note the
different x-axis scale):
End of explanation
"""
axes = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axes[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
"""
Explanation: Applying IIR filters
Now let's look at how our shallow and steep Butterworth IIR filters
perform on our Morlet signal from before:
End of explanation
"""
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
iir_lp_30 = signal.iirfilter(2, 30., btype='lowpass', fs=sfreq)
iir_hp_p1 = signal.iirfilter(2, 0.1, btype='highpass', fs=sfreq)
iir_lp_2 = signal.iirfilter(2, 2., btype='lowpass', fs=sfreq)
iir_hp_2 = signal.iirfilter(2, 2., btype='highpass', fs=sfreq)
x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)
x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)
x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)
x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)
xlim = t[[0, -1]]
ylim = [-2, 6]
xlabel = 'Time (sec)'
ylabel = r'Amplitude ($\mu$V)'
tticks = [0, 0.5, 1.3, t[-1]]
axes = plt.subplots(2, 2)[1].ravel()
for ax, x_f, title in zip(axes, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],
['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'HP$_{0.1}$']):
ax.plot(t, x, color='0.5')
ax.plot(t, x_f, color='k', linestyle='--')
ax.set(ylim=ylim, xlim=xlim, xticks=tticks,
title=title, xlabel=xlabel, ylabel=ylabel)
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
"""
Explanation: Some pitfalls of filtering
Multiple recent papers have noted potential risks of drawing
errant inferences due to misapplication of filters.
Low-pass problems
Filters in general, especially those that are non-causal (zero-phase), can
make activity appear to occur earlier or later than it truly did. As
mentioned in VanRullen (2011) :footcite:VanRullen2011,
investigations of commonly (at the time)
used low-pass filters created artifacts when they were applied to simulated
data. However, such deleterious effects were minimal in many real-world
examples in Rousselet (2012) :footcite:Rousselet2012.
Perhaps more revealing, it was noted in Widmann & Schröger (2012)
:footcite:WidmannSchroger2012 that the problematic low-pass filters from
VanRullen (2011) :footcite:VanRullen2011:
1. Used a least-squares design (like :func:`scipy.signal.firls`) that
included "do-not-care" transition regions, which can lead to
uncontrolled behavior.
2. Had a filter length that was independent of the transition bandwidth,
which can cause excessive ringing and signal distortion.
High-pass problems
When it comes to high-pass filtering, corner frequencies above 0.1 Hz
were found in Acunzo et al. (2012) :footcite:AcunzoEtAl2012 to:
"... generate a systematic bias easily leading to misinterpretations of
neural activity."
In a related paper, Widmann et al. (2015) :footcite:WidmannEtAl2015
also came to suggest a 0.1 Hz highpass. More evidence of such distortions
followed in Tanner et al. (2015) :footcite:TannerEtAl2015, who used data
from language ERP studies of semantic and syntactic processing (i.e., N400
and P600) to show that a high-pass above 0.3 Hz caused significant effects
to be introduced implausibly early when compared to the unfiltered data.
From this, the authors suggested 0.1 Hz as the optimal high-pass value for
language processing.
We can recreate a problematic simulation from
Tanner et al. (2015) :footcite:TannerEtAl2015:
"The simulated component is a single-cycle cosine wave with an amplitude
of 5µV [sic], onset of 500 ms poststimulus, and duration of 800 ms. The
simulated component was embedded in 20 s of zero values to avoid
filtering edge effects... Distortions [were] caused by 2 Hz low-pass
and high-pass filters... No visible distortion to the original
waveform [occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters...
Filter frequencies correspond to the half-amplitude (-6 dB) cutoff
(12 dB/octave roll-off)."
<div class="alert alert-info"><h4>Note</h4><p>This simulated signal contains energy not just within the
pass-band, but also within the transition and stop-bands -- perhaps
most easily understood because the signal has a non-zero DC value,
but also because it is a shifted cosine that has been
*windowed* (here multiplied by a rectangular window), which
makes the cosine and DC frequencies spread to other frequencies
(multiplication in time is convolution in frequency, so multiplying
by a rectangular window in the time domain means convolving a sinc
function with the impulses at DC and the cosine frequency in the
frequency domain).</p></div>
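The spectral spread described in the note can be verified with a quick FFT of the simulated component itself; a sketch, using the paper's 20 s embedding rather than the shorter one used in the code above:

```python
import numpy as np

sfreq = 1000.
x_sim = np.zeros(int(20 * sfreq))            # 20 s of zeros, per the quote
t_sim = np.arange(int(sfreq * 0.8)) / sfreq  # 0.8 s single-cycle component
x_sim[:len(t_sim)] = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * t_sim)

fft_freqs = np.fft.rfftfreq(len(x_sim), 1. / sfreq)
amp = np.abs(np.fft.rfft(x_sim))

# Energy is not confined to DC and 1.25 Hz: the rectangular windowing
# spreads it well above 2 Hz, i.e., into the range a 2 Hz high-pass or
# low-pass filter acts on.
assert amp[fft_freqs > 2.].max() > 0.01 * amp.max()
```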
End of explanation
"""
def baseline_plot(x):
all_axes = plt.subplots(3, 2)[1]
for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])):
for ci, ax in enumerate(axes):
if ci == 0:
iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass',
output='sos')
x_hp = signal.sosfiltfilt(iir_hp, x, padlen=0)
else:
x_hp -= x_hp[t < 0].mean()
ax.plot(t, x, color='0.5')
ax.plot(t, x_hp, color='k', linestyle='--')
if ri == 0:
ax.set(title=('No ' if ci == 0 else '') +
'Baseline Correction')
ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)
ax.set_ylabel('%0.1f Hz' % freq, rotation=0,
horizontalalignment='right')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.suptitle(title)
plt.show()
baseline_plot(x)
"""
Explanation: Similarly, in a P300 paradigm reported by
Kappenman & Luck (2010) :footcite:KappenmanLuck2010,
they found that applying a 1 Hz high-pass decreased the probability of
finding a significant difference in the N100 response, likely because
the P300 response was smeared (and inverted) in time by the high-pass
filter such that it tended to cancel out the increased N100. However,
they nonetheless note that some high-passing can still be useful to deal
with drifts in the data.
Even though these papers generally advise a 0.1 Hz or lower frequency for
a high-pass, it is important to keep in mind (as most authors note) that
filtering choices should depend on the frequency content of both the
signal(s) of interest and the noise to be suppressed. For example, in
some of the MNE-Python examples involving the sample-dataset data,
high-pass values of around 1 Hz are used when looking at auditory
or visual N100 responses, because we analyze standard (not deviant) trials
and thus expect that contamination by later or slower components will
be limited.
Baseline problems (or solutions?)
In an evolving discussion, Tanner et al. (2015) :footcite:TannerEtAl2015
suggest using baseline correction to remove slow drifts in data. However,
Maess et al. (2016) :footcite:MaessEtAl2016
suggest that baseline correction, which is a form of high-passing, does
not offer substantial advantages over standard high-pass filtering.
Tanner et al. (2016) :footcite:TannerEtAl2016
rebutted that baseline correction can correct for problems with filtering.
To see what they mean, consider again our old simulated signal x from
before:
End of explanation
"""
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
"""
Explanation: In response, Maess et al. (2016) :footcite:MaessEtAl2016a
note that these simulations do not
address cases of pre-stimulus activity that is shared across conditions, as
applying baseline correction will effectively copy the topology outside the
baseline period. We can see this if we give our signal x with some
consistent pre-stimulus activity, which makes everything look bad.
<div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they
are for a single simulated sensor. In multi-electrode recordings
the topology (i.e., spatial pattern) of the pre-stimulus activity
will leak into the post-stimulus period. This will likely create a
spatially varying distortion of the time-domain signals, as the
averaged pre-stimulus spatial pattern gets subtracted from the
sensor time courses.</p></div>
Putting some activity in the baseline period:
End of explanation
"""
# Use the same settings as when calling e.g., `raw.filter()`
fir_coefs = mne.filter.create_filter(
data=None, # data is only used for sanity checking, not strictly needed
sfreq=1000., # sfreq of your data in Hz
l_freq=None,
h_freq=40., # assuming a lowpass of 40 Hz
method='fir',
fir_window='hamming',
fir_design='firwin',
verbose=True)
# See the printed log for the transition bandwidth and filter length.
# Alternatively, get the filter length through:
filter_length = fir_coefs.shape[0]
"""
Explanation: Both groups seem to acknowledge that the choices of filtering cutoffs, and
perhaps even the application of baseline correction, depend on the
characteristics of the data being investigated, especially when it comes to:
1. The frequency content of the underlying evoked activity relative
to the filtering parameters.
2. The validity of the assumption of no consistent evoked activity
in the baseline period.
We thus recommend carefully applying baseline correction and/or high-pass
values based on the characteristics of the data to be analyzed.
Filtering defaults
Defaults in MNE-Python
Most often, filtering in MNE-Python is done at the :class:mne.io.Raw level,
and thus :func:mne.io.Raw.filter is used. This function under the hood
(among other things) calls :func:mne.filter.filter_data to actually
filter the data, which by default applies a zero-phase FIR filter designed
using :func:scipy.signal.firwin.
In Widmann et al. (2015) :footcite:WidmannEtAl2015, they
suggest a specific set of parameters to use for high-pass filtering,
including:
"... providing a transition bandwidth of 25% of the lower passband
edge but, where possible, not lower than 2 Hz and otherwise the
distance from the passband edge to the critical frequency."
In practice, this means that for each high-pass value l_freq or
low-pass value h_freq below, you would get this corresponding
l_trans_bandwidth or h_trans_bandwidth, respectively,
if the sample rate were 100 Hz (i.e., Nyquist frequency of 50 Hz):
+------------------+-------------------+-------------------+
| l_freq or h_freq | l_trans_bandwidth | h_trans_bandwidth |
+==================+===================+===================+
| 0.01 | 0.01 | 2.0 |
+------------------+-------------------+-------------------+
| 0.1 | 0.1 | 2.0 |
+------------------+-------------------+-------------------+
| 1.0 | 1.0 | 2.0 |
+------------------+-------------------+-------------------+
| 2.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 4.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 8.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 10.0 | 2.5 | 2.5 |
+------------------+-------------------+-------------------+
| 20.0 | 5.0 | 5.0 |
+------------------+-------------------+-------------------+
| 40.0 | 10.0 | 10.0 |
+------------------+-------------------+-------------------+
| 50.0 | 12.5 | 12.5 |
+------------------+-------------------+-------------------+
MNE-Python has adopted this definition for its high-pass (and low-pass)
transition bandwidth choices when using l_trans_bandwidth='auto' and
h_trans_bandwidth='auto'.
To choose the filter length automatically with filter_length='auto',
the reciprocal of the shortest transition bandwidth is used to ensure
decent attenuation at the stop frequency. Specifically, the reciprocal
(in samples) is multiplied by 3.1, 3.3, or 5.0 for the Hann, Hamming,
or Blackman windows, respectively, as selected by the fir_window
argument for fir_design='firwin', and double these for
fir_design='firwin2' mode.
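The length rule just described might be sketched as follows. The helper name and the int(round(...)) are assumptions for illustration; MNE's real implementation may differ in rounding and bounds:

```python
def fir_filter_length(sfreq, min_trans_bandwidth, fir_window='hamming',
                      fir_design='firwin'):
    # Window-dependent multiplicative factors from the text above.
    factors = {'hann': 3.1, 'hamming': 3.3, 'blackman': 5.0}
    factor = factors[fir_window]
    if fir_design == 'firwin2':
        factor *= 2  # doubled for firwin2, per the note that follows
    # Reciprocal of the shortest transition bandwidth, in samples,
    # scaled by the window factor.
    return int(round(factor * sfreq / min_trans_bandwidth))
```

For example, with a 1000 Hz sample rate and a 2 Hz shortest transition band, a Hamming-window firwin design would use roughly 1650 taps under this rule.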
<div class="alert alert-info"><h4>Note</h4><p>For ``fir_design='firwin2'``, the multiplicative factors are
doubled compared to what is given in
Ifeachor & Jervis (2002) :footcite:`IfeachorJervis2002`
(p. 357), as :func:`scipy.signal.firwin2` has a smearing effect
on the frequency response, which we compensate for by
increasing the filter length. This is why
``fir_design='firwin'`` is preferred to ``fir_design='firwin2'``.</p></div>
As of version 0.14, MNE-Python defaults to using a Hamming window in filter design, as it
provides up to 53 dB of stop-band attenuation with small pass-band ripple.
<div class="alert alert-info"><h4>Note</h4><p>In band-pass applications, often a low-pass filter can operate
effectively with fewer samples than the high-pass filter, so
it is advisable to apply the high-pass and low-pass separately
when using ``fir_design='firwin2'``. For design mode
``fir_design='firwin'``, there is no need to separate the
operations, as the lowpass and highpass elements are constructed
separately to meet the transition band requirements.</p></div>
For more information on how to use the
MNE-Python filtering functions with real data, consult the preprocessing
tutorial on tut-filter-resample.
Defaults in MNE-C
MNE-C by default uses:
5 Hz transition band for low-pass filters.
3-sample transition band for high-pass filters.
Filter length of 8197 samples.
The filter is designed in the frequency domain, creating a linear-phase
filter such that the delay is compensated for as is done with the MNE-Python
phase='zero' filtering option.
Squared-cosine ramps are used in the transition regions. Because these
are used in place of more gradual (e.g., linear) transitions,
a given transition width will result in more temporal ringing but also more
rapid attenuation than the same transition width in windowed FIR designs.
The default filter length will generally have excellent attenuation
but long ringing for the sample rates typically encountered in M/EEG data
(e.g. 500-2000 Hz).
Defaults in other software
A good but possibly outdated comparison of filtering in various software
packages is available in Widmann et al. (2015) :footcite:WidmannEtAl2015.
Briefly:
EEGLAB
MNE-Python 0.14 defaults to behavior very similar to that of EEGLAB
(see the EEGLAB filtering FAQ_ for more information).
FieldTrip
By default FieldTrip applies a forward-backward Butterworth IIR filter
of order 4 (band-pass and band-stop filters) or 2 (for low-pass and
high-pass filters). Similar filters can be achieved in MNE-Python when
filtering with :meth:raw.filter(..., method='iir') <mne.io.Raw.filter>
(see also :func:mne.filter.construct_iir_filter for options).
For more information, see e.g. the
FieldTrip band-pass documentation <ftbp_>_.
Reporting Filters
On page 45 in Widmann et al. (2015) :footcite:WidmannEtAl2015,
there is a convenient list of
important filter parameters that should be reported with each publication:
Filter type (high-pass, low-pass, band-pass, band-stop, FIR, IIR)
Cutoff frequency (including definition)
Filter order (or length)
Roll-off or transition bandwidth
Passband ripple and stopband attenuation
Filter delay (zero-phase, linear-phase, non-linear phase) and causality
Direction of computation (one-pass forward/reverse, or two-pass forward
and reverse)
In the following, we will address how to deal with these parameters in MNE:
Filter type
Depending on the function or method used, the filter type can be specified.
For example, in :func:mne.filter.create_filter, the relevant
arguments would be l_freq, h_freq, method, and, if the method is
FIR, fir_window and fir_design.
Cutoff frequency
The cutoff of FIR filters in MNE is defined as half-amplitude cutoff in the
middle of the transition band. That is, if you construct a lowpass FIR filter
with h_freq = 40, the filter function will provide a transition
bandwidth that depends on the h_trans_bandwidth argument. The desired
half-amplitude cutoff of the lowpass FIR filter is then at
h_freq + h_trans_bandwidth / 2.
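As a tiny numeric check of this definition (an illustrative helper, not an MNE function):

```python
def half_amplitude_cutoff(h_freq, h_trans_bandwidth):
    # The cutoff sits in the middle of the transition band.
    return h_freq + h_trans_bandwidth / 2.0
```

For instance, a lowpass with h_freq = 40 and a 10 Hz transition band has its half-amplitude point at 45 Hz.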
Filter length (order) and transition bandwidth (roll-off)
In the tut-filtering-in-python section, we have already talked about
the default filter lengths and transition bandwidths that are used when no
custom values are specified using the respective filter function's arguments.
If you want to find out about the filter length and transition bandwidth that
were used through the 'auto' setting, you can use
:func:mne.filter.create_filter to print out the settings once more:
End of explanation
"""
import graphlab
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Boosting a decision stump
The goal of this notebook is to implement your own boosting module.
Brace yourselves! This is going to be a fun and challenging assignment.
Use SFrames to do some feature engineering.
Modify the decision trees to incorporate weights.
Implement Adaboost ensembling.
Use your implementation of Adaboost to train a boosted decision stump ensemble.
Evaluate the effect of boosting (adding more decision stumps) on performance of the model.
Explore the robustness of Adaboost to overfitting.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
"""
loans = graphlab.SFrame('lending-club-data.gl/')
"""
Explanation: Getting the data ready
We will be using the same LendingClub dataset as in the previous assignment.
End of explanation
"""
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans.remove_column('bad_loans')
target = 'safe_loans'
loans = loans[features + [target]]
"""
Explanation: Extracting the target and the feature columns
We will now repeat some of the feature processing steps that we saw in the previous assignment:
First, we re-assign the target to have +1 as a safe (good) loan, and -1 as a risky (bad) loan.
Next, we select four categorical features:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
End of explanation
"""
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
loans_data = risky_loans_raw.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
"""
Explanation: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.
End of explanation
"""
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
"""
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Transform categorical data into binary features
In this assignment, we will work with binary decision trees. Since all of our features are currently categorical features, we want to turn them into binary features using 1-hot encoding.
We can do so with the following code block (see the first assignments for more details):
End of explanation
"""
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
"""
Explanation: Let's see what the feature columns look like now:
End of explanation
"""
train_data, test_data = loans_data.random_split(0.8, seed=1)
"""
Explanation: Train-test split
We split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation
"""
def intermediate_node_weighted_mistakes(labels_in_node, data_weights):
# Sum the weights of all entries with label +1
total_weight_positive = sum(data_weights[labels_in_node == +1])
# Weight of mistakes for predicting all -1's is equal to the sum above
### YOUR CODE HERE
weighted_mistakes_all_negative = ...
# Sum the weights of all entries with label -1
### YOUR CODE HERE
total_weight_negative = ...
# Weight of mistakes for predicting all +1's is equal to the sum above
### YOUR CODE HERE
weighted_mistakes_all_positive = ...
# Return the tuple (weight, class_label) representing the lower of the two weights
# class_label should be an integer of value +1 or -1.
# If the two weights are identical, return (weighted_mistakes_all_positive,+1)
### YOUR CODE HERE
...
"""
Explanation: Weighted decision trees
Let's modify our decision tree code from Module 5 to support weighting of individual data points.
Weighted error definition
Consider a model with $n$ data points with:
* Predictions $\hat{y}_1 ... \hat{y}_n$
* Target $y_1 ... y_n$
* Data point weights $\alpha_1 ... \alpha_n$.
Then the weighted error is defined by:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i}
$$
where $1[y_i \neq \hat{y_i}]$ is an indicator function that is set to $1$ if $y_i \neq \hat{y_i}$.
Write a function to compute weight of mistakes
Write a function that calculates the weight of mistakes for making the "weighted-majority" predictions for a dataset. The function accepts two inputs:
* labels_in_node: Targets $y_1 ... y_n$
* data_weights: Data point weights $\alpha_1 ... \alpha_n$
We are interested in computing the (total) weight of mistakes, i.e.
$$
\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}].
$$
This quantity is analogous to the number of mistakes, except that each mistake now carries different weight. It is related to the weighted error in the following way:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}
$$
The function intermediate_node_weighted_mistakes should first compute two weights:
* $\mathrm{WM}_{-1}$: weight of mistakes when all predictions are $\hat{y}_i = -1$, i.e. $\mathrm{WM}(\mathbf{\alpha}, \mathbf{-1})$
* $\mathrm{WM}_{+1}$: weight of mistakes when all predictions are $\hat{y}_i = +1$, i.e. $\mathrm{WM}(\mathbf{\alpha}, \mathbf{+1})$
where $\mathbf{-1}$ and $\mathbf{+1}$ are vectors where all values are -1 and +1 respectively.
After computing $\mathrm{WM}_{-1}$ and $\mathrm{WM}_{+1}$, the function intermediate_node_weighted_mistakes should return the lower of the two weights of mistakes, along with the class associated with that weight. We have provided a skeleton for you with YOUR CODE HERE to be filled in several places.
End of explanation
"""
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
example_data_weights = graphlab.SArray([1., 2., .5, 1., 1.])
if intermediate_node_weighted_mistakes(example_labels, example_data_weights) == (2.5, -1):
print 'Test passed!'
else:
print 'Test failed... try again!'
"""
Explanation: Checkpoint: Test your intermediate_node_weighted_mistakes function, run the following cell:
End of explanation
"""
# If the data is identical in each feature, this function should return None
def best_splitting_feature(data, features, target, data_weights):
# These variables will keep track of the best feature and the corresponding error
best_feature = None
best_error = float('+inf')
num_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
# The right split will have all data points where the feature value is 1
left_split = data[data[feature] == 0]
right_split = data[data[feature] == 1]
# Apply the same filtering to data_weights to create left_data_weights, right_data_weights
## YOUR CODE HERE
left_data_weights = ...
right_data_weights = ...
# DIFFERENT HERE
# Calculate the weight of mistakes for left and right sides
## YOUR CODE HERE
left_weighted_mistakes, left_class = ...
right_weighted_mistakes, right_class = ...
# DIFFERENT HERE
# Compute weighted error by computing
# ( [weight of mistakes (left)] + [weight of mistakes (right)] ) / [total weight of all data points]
## YOUR CODE HERE
error = ...
# If this is the best error we have found so far, store the feature and the error
if error < best_error:
best_feature = feature
best_error = error
# Return the best feature we found
return best_feature
"""
Explanation: Recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}}
$$
Quiz Question: If we set the weights $\mathbf{\alpha} = 1$ for all data points, how is the weight of mistakes $\mbox{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ related to the classification error?
Function to pick best feature to split on
We continue modifying our decision tree code from the earlier assignment to incorporate weighting of individual data points. The next step is to pick the best feature to split on.
The best_splitting_feature function is similar to the one from the earlier assignment with two minor modifications:
1. The function best_splitting_feature should now accept an extra parameter data_weights to take account of weights of data points.
2. Instead of computing the number of mistakes in the left and right side of the split, we compute the weight of mistakes for both sides, add up the two weights, and divide it by the total weight of the data.
Complete the following function. Comments starting with DIFFERENT HERE mark the sections where the weighted version differs from the original implementation.
End of explanation
"""
example_data_weights = graphlab.SArray(len(train_data)* [1.5])
if best_splitting_feature(train_data, features, target, example_data_weights) == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
"""
Explanation: Checkpoint: Now, we have another checkpoint to make sure you are on the right track.
End of explanation
"""
def create_leaf(target_values, data_weights):
# Create a leaf node
leaf = {'splitting_feature' : None,
'is_leaf': True}
# Computed weight of mistakes.
weighted_error, best_class = intermediate_node_weighted_mistakes(target_values, data_weights)
# Store the predicted class (1 or -1) in leaf['prediction']
leaf['prediction'] = ... ## YOUR CODE HERE
return leaf
"""
Explanation: Note. If you get an exception along the lines of "the logical filter has different size than the array", try upgrading your GraphLab Create installation to 1.8.3 or newer.
Very Optional. Relationship between weighted error and weight of mistakes
By definition, the weighted error is the weight of mistakes divided by the weight of all data points, so
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}.
$$
In the code above, we obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$ from the two weights of mistakes from both sides, $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}})$ and $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})$. First, notice that the overall weight of mistakes $\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ can be broken into two weights of mistakes over either side of the split:
$$
\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})
= \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]
= \sum_{\mathrm{left}} \alpha_i \times 1[y_i \neq \hat{y_i}]
+ \sum_{\mathrm{right}} \alpha_i \times 1[y_i \neq \hat{y_i}]\\
= \mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})
$$
We then divide through by the total weight of all data points to obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})
= \frac{\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})}{\sum_{i=1}^{n} \alpha_i}
$$
Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Recall from the previous assignments that each node in the decision tree is represented as a dictionary which contains the following keys:
{
'is_leaf' : True/False.
'prediction' : Prediction at the leaf node.
'left' : (dictionary corresponding to the left tree).
'right' : (dictionary corresponding to the right tree).
'features_remaining' : List of features that are possible splits.
}
Let us start with a function that creates a leaf node given a set of target values:
End of explanation
"""
def weighted_decision_tree_create(data, features, target, data_weights, current_depth = 1, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1. Error is 0.
if intermediate_node_weighted_mistakes(target_values, data_weights)[0] <= 1e-15:
print "Stopping condition 1 reached."
return create_leaf(target_values, data_weights)
# Stopping condition 2. No more features.
if remaining_features == []:
print "Stopping condition 2 reached."
return create_leaf(target_values, data_weights)
# Additional stopping condition (limit tree depth)
if current_depth > max_depth:
print "Reached maximum depth. Stopping for now."
return create_leaf(target_values, data_weights)
# If all the datapoints are the same, splitting_feature will be None. Create a leaf
splitting_feature = best_splitting_feature(data, features, target, data_weights)
remaining_features.remove(splitting_feature)
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
left_data_weights = data_weights[data[splitting_feature] == 0]
right_data_weights = data_weights[data[splitting_feature] == 1]
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target], data_weights)
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target], data_weights)
# Repeat (recurse) on left and right subtrees
left_tree = weighted_decision_tree_create(
left_split, remaining_features, target, left_data_weights, current_depth + 1, max_depth)
right_tree = weighted_decision_tree_create(
right_split, remaining_features, target, right_data_weights, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
"""
Explanation: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions:
1. All data points in a node are from the same class.
2. No more features to split on.
3. Stop growing the tree when the tree depth reaches max_depth.
End of explanation
"""
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
"""
Explanation: Here is a recursive function to count the nodes in your tree:
End of explanation
"""
example_data_weights = graphlab.SArray([1.0 for i in range(len(train_data))])
small_data_decision_tree = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
if count_nodes(small_data_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found:', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there: 7'
"""
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
"""
small_data_decision_tree
"""
Explanation: Let us take a quick look at what the trained tree is like. You should get something that looks like the following
{'is_leaf': False,
'left': {'is_leaf': False,
'left': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
'prediction': None,
'right': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
'splitting_feature': 'grade.A'
},
'prediction': None,
'right': {'is_leaf': False,
'left': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
'prediction': None,
'right': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
'splitting_feature': 'grade.D'
},
'splitting_feature': 'term. 36 months'
}
End of explanation
"""
def classify(tree, x, annotate = False):
# If the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# Split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
"""
Explanation: Making predictions with a weighted decision tree
We give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.
End of explanation
"""
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error
return (prediction != data[target]).sum() / float(len(data))
evaluate_classification_error(small_data_decision_tree, test_data)
"""
Explanation: Evaluating the tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}}
$$
The function called evaluate_classification_error takes in as input:
1. tree (as described above)
2. data (an SFrame)
The function does not change because of adding data point weights.
End of explanation
"""
# Assign weights
example_data_weights = graphlab.SArray([1.] * 10 + [0.]*(len(train_data) - 20) + [1.] * 10)
# Train a weighted decision tree model.
small_data_decision_tree_subset_20 = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
"""
Explanation: Example: Training a weighted decision tree
To build intuition on how weighted data points affect the tree being built, consider the following:
Suppose we only care about making good predictions for the first 10 and the last 10 items in train_data. We assign weights:
* 1 to the first 10 items
* 1 to the last 10 items
* and 0 to the rest.
Let us fit a weighted decision tree with max_depth = 2.
End of explanation
"""
subset_20 = train_data.head(10).append(train_data.tail(10))
evaluate_classification_error(small_data_decision_tree_subset_20, subset_20)
"""
Explanation: Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
End of explanation
"""
evaluate_classification_error(small_data_decision_tree_subset_20, train_data)
"""
Explanation: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data:
End of explanation
"""
from math import log
from math import exp
def adaboost_with_tree_stumps(data, features, target, num_tree_stumps):
# start with unweighted data
alpha = graphlab.SArray([1.]*len(data))
weights = []
tree_stumps = []
target_values = data[target]
for t in xrange(num_tree_stumps):
print '====================================================='
print 'Adaboost Iteration %d' % t
print '====================================================='
# Learn a weighted decision tree stump. Use max_depth=1
tree_stump = weighted_decision_tree_create(data, features, target, data_weights=alpha, max_depth=1)
tree_stumps.append(tree_stump)
# Make predictions
predictions = data.apply(lambda x: classify(tree_stump, x))
# Produce a Boolean array indicating whether
# each data point was correctly classified
is_correct = predictions == target_values
is_wrong = predictions != target_values
# Compute weighted error
# YOUR CODE HERE
weighted_error = ...
# Compute model coefficient using weighted error
# YOUR CODE HERE
weight = ...
weights.append(weight)
# Adjust weights on data point
adjustment = is_correct.apply(lambda is_correct : exp(-weight) if is_correct else exp(weight))
# Scale alpha by multiplying by adjustment
# Then normalize data points weights
## YOUR CODE HERE
...
return weights, tree_stumps
"""
Explanation: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
So, what does this mean?
* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.
* The points with zero weights are basically ignored during training.
Quiz Question: Will you get the same model as small_data_decision_tree_subset_20 if you trained a decision tree with only the 20 data points with non-zero weights from the set of points in subset_20?
Implementing your own Adaboost (on decision stumps)
Now that we have a weighted decision tree working, it takes only a bit of work to implement Adaboost. For the sake of simplicity, let us stick with decision tree stumps by training trees with max_depth=1.
Recall from the lecture the procedure for Adaboost:
1. Start with unweighted data with $\alpha_j = 1$
2. For t = 1,...,T:
  * Learn $f_t(x)$ with data weights $\alpha_j$
  * Compute coefficient $\hat{w}_t$:
     $$\hat{w}_t = \frac{1}{2}\ln{\left(\frac{1- \mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})}\right)}$$
  * Re-compute weights $\alpha_j$:
     $$\alpha_j \gets \begin{cases}
     \alpha_j \exp{(-\hat{w}_t)} & \text{ if }f_t(x_j) = y_j\\
     \alpha_j \exp{(\hat{w}_t)} & \text{ if }f_t(x_j) \neq y_j
     \end{cases}$$
  * Normalize weights $\alpha_j$:
     $$\alpha_j \gets \frac{\alpha_j}{\sum_{i=1}^{N}{\alpha_i}}$$
Complete the skeleton for the following code to implement adaboost_with_tree_stumps. Fill in the places with YOUR CODE HERE.
End of explanation
"""
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, target, num_tree_stumps=2)
def print_stump(tree):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('.')
print ' root'
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0]{1}[{0} == 1] '.format(split_name, ' '*(27-len(split_name)))
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
"""
Explanation: Checking your Adaboost code
Train an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters:
* train_data
* features
* target
* num_tree_stumps = 2
End of explanation
"""
print_stump(tree_stumps[0])
"""
Explanation: Here is what the first stump looks like:
End of explanation
"""
print_stump(tree_stumps[1])
print stump_weights
"""
Explanation: Here is what the next stump looks like:
End of explanation
"""
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features,
target, num_tree_stumps=10)
"""
Explanation: If your Adaboost is correctly implemented, the following things should be true:
tree_stumps[0] should split on term. 36 months with the prediction -1 on the left and +1 on the right.
tree_stumps[1] should split on grade.A with the prediction -1 on the left and +1 on the right.
Weights should be approximately [0.158, 0.177]
Reminders
- Stump weights ($\mathbf{\hat{w}}$) and data point weights ($\mathbf{\alpha}$) are two different concepts.
- Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
- Data point weights ($\mathbf{\alpha}$) tell you how important each data point is while training a decision stump.
Training a boosted ensemble of 10 stumps
Let us train an ensemble of 10 decision tree stumps with Adaboost. We run the adaboost_with_tree_stumps function with the following parameters:
* train_data
* features
* target
* num_tree_stumps = 10
End of explanation
"""
def predict_adaboost(stump_weights, tree_stumps, data):
scores = graphlab.SArray([0.]*len(data))
for i, tree_stump in enumerate(tree_stumps):
predictions = data.apply(lambda x: classify(tree_stump, x))
# Accumulate predictions on the scores array
# YOUR CODE HERE
...
return scores.apply(lambda score : +1 if score > 0 else -1)
predictions = predict_adaboost(stump_weights, tree_stumps, test_data)
accuracy = graphlab.evaluation.accuracy(test_data[target], predictions)
print 'Accuracy of 10-component ensemble = %s' % accuracy
"""
Explanation: Making predictions
Recall from the lecture that in order to make predictions, we use the following formula:
$$
\hat{y} = sign\left(\sum_{t=1}^T \hat{w}_t f_t(x)\right)
$$
We need to do the following things:
- Compute the predictions $f_t(x)$ using the $t$-th decision tree
- Compute $\hat{w}_t f_t(x)$ by multiplying the stump_weights with the predictions $f_t(x)$ from the decision trees
- Sum the weighted predictions over each stump in the ensemble.
Complete the following skeleton for making predictions:
End of explanation
"""
stump_weights
"""
Explanation: Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble:
End of explanation
"""
# this may take a while...
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data,
features, target, num_tree_stumps=30)
"""
Explanation: Quiz Question: Are the weights monotonically decreasing, monotonically increasing, or neither?
Reminder: Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
Performance plots
In this section, we will try to reproduce some of the performance plots discussed in the lecture.
How does accuracy change with adding stumps to the ensemble?
We will now train an ensemble with:
* train_data
* features
* target
* num_tree_stumps = 30
Once we are done with this, we will then do the following:
* Compute the classification error at the end of each iteration.
* Plot a curve of classification error vs iteration.
First, let's train the model.
End of explanation
"""
error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], train_data)
error = 1.0 - graphlab.evaluation.accuracy(train_data[target], predictions)
error_all.append(error)
print "Iteration %s, training error = %s" % (n, error_all[n-1])
"""
Explanation: Computing training error at the end of each iteration
Now, we will compute the classification error on the train_data and see how it is reduced as trees are added.
End of explanation
"""
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size': 16})
"""
Explanation: Visualizing training error vs number of iterations
We have provided you with a simple code snippet that plots classification error with the number of iterations.
End of explanation
"""
test_error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], test_data)
error = 1.0 - graphlab.evaluation.accuracy(test_data[target], predictions)
test_error_all.append(error)
print "Iteration %s, test error = %s" % (n, test_error_all[n-1])
"""
Explanation: Quiz Question: Which of the following best describes a general trend in accuracy as we add more and more components? Answer based on the 30 components learned so far.
Training error goes down monotonically, i.e. the training error reduces with each iteration but never increases.
Training error goes down in general, with some ups and downs in the middle.
Training error goes up in general, with some ups and downs in the middle.
Training error goes down in the beginning, achieves the best error, and then goes up sharply.
None of the above
Evaluation on the test data
Performing well on the training data is cheating, so lets make sure it works on the test_data as well. Here, we will compute the classification error on the test_data at the end of each iteration.
End of explanation
"""
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.plot(range(1,31), test_error_all, '-', linewidth=4.0, label='Test error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.rcParams.update({'font.size': 16})
plt.legend(loc='best', prop={'size':15})
plt.tight_layout()
"""
Explanation: Visualize both the training and test errors
Now, let us plot the training & test error with the number of iterations.
End of explanation
"""
|
NuGrid/NuPyCEE | DOC/Capabilities/Delayed_extra_sources.ipynb | bsd-3-clause | import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from NuPyCEE import sygma
"""
Explanation: Delayed Extra Sources in NuPyCEE
Created by Benoit Côté
This notebook introduces the general delayed-extra set of parameters in NuPyCEE that allows the inclusion of any enrichment source that requires a delay-time distribution function (DTD). For example, this implementation can be used for different SNe Ia channels, for compact binary mergers, and for exotic production sites involving interactions between stellar objects.
This notebook focuses on SYGMA, but the implementation can also be used with OMEGA.
1. Implementation
Here are the basic simple stellar population (SSP) inputs that need to be provided:
- The DTD function (rate of occurence as a function of time),
- The total number of events per unit of M$_\odot$ formed,
- The yields (abundance pattern) associated with an event,
- The total mass ejected by an event (which will multiply the yields).
Here is how these inputs are implemented in NuPyCEE (see below for examples):
delayed_extra_dtd[ nb_sources ][ nb_Z ]
nb_sources is the number of different input astrophysical sites (e.g., SNe Ia, neutron star mergers, iRAWDs, etc.).
nb_Z is the number of metallicities available in the delayed-extra yields table.
delayed_extra_dtd[i][j] is a 2D array in the form of [ number_of_times ][ 0-time, 1-rate ].
The fact that we use 2D arrays provides maximum flexibility. We can then use analytical formulas, population synthesis predictions, outputs from simulations, etc.
delayed_extra_dtd_norm[ nb_sources ][ nb_Z ]
Total number of delayed sources occurring per M$_\odot$ formed.
delayed_extra_yields[ nb_sources ]
Yields table path (string) for each source.
There is no [ nb_Z ] since the yields table can contain many metallicities.
delayed_extra_yields_norm[ nb_sources ][ nb_Z ]
Fraction (float) of the yields table that will be ejected per event, for each source and metallicity. This is the total mass ejected per event if the yields are in mass fraction (normalized to 1).
2. Example with SYGMA
End of explanation
"""
# Create the DTD and yields information for the extra source
# ==========================================================
# Event rate [yr^-1] as a function of time [yr].
# Times need to be in order. No event will occur before the lowest time or after the largest time.
# The code will interpolate the data points provided (in linear or in log-log space).
t = [0.0, 1.0e9, 2.0e9] # Can be any length.
R = [1.0, 3.0, 2.0] # This is only the shape of the DTD, as it will be re-normalized.
# Build the input DTD array
dtd = []
for i in range(0,len(t)):
dtd.append([t[i], R[i]])
# Add the DTD array in the delayed_extra_dtd array.
delayed_extra_dtd = [[dtd]]
# [[ ]] for the indexes for the number of sources (here 1) and metallicities (here 1)
# Define the total number of events per unit of Msun formed. This will normalize the DTD.
delayed_extra_dtd_norm = [[1.0e-1]]
# [[ ]] for the indexes for the number of sources (here 1) and metallicities (here 1)
# Define the yields path for the extra source
delayed_extra_yields = ['yield_tables/r_process_arnould_2007.txt']
# [ ] and not [[ ]] because the nb_Z is in the yields table as in SN Ia yields
# See yield_tables/sn1a_ivo12_stable_z.txt for an example of such yields template.
# Define the total mass ejected by an extra source
delayed_extra_yields_norm = [[1.0e-3]]
# Run SYGMA, one SSP with a total mass of 1 Msun at Z = 0.02
mgal = 1.0
s1 = sygma.sygma(iniZ=0.02, delayed_extra_dtd=delayed_extra_dtd, delayed_extra_dtd_norm=delayed_extra_dtd_norm,\
delayed_extra_yields=delayed_extra_yields, delayed_extra_yields_norm=delayed_extra_yields_norm, mgal=mgal,\
dt=1e8, special_timesteps=-1, tend=1.1*t[-1])
"""
Explanation: 2.1 One delayed extra source
End of explanation
"""
# Predicted number of events
N_pred = delayed_extra_dtd_norm[0][0] * mgal
# Predicted mass ejected by events
M_ej_pred = N_pred * delayed_extra_yields_norm[0][0]
# Calculated number of events
N_sim = sum(s1.delayed_extra_numbers[0])
# Calculated mass ejected by events
M_ej_sim = sum(s1.ymgal_delayed_extra[0][-1])
# Print the test
print ('The following numbers should be 1.0')
print (' Number of events (predicted/calculated):', N_pred / N_sim)
print (' Mass ejected by events (predicted/calculated):', M_ej_pred / M_ej_sim)
"""
Explanation: Let's test the total number of events and the total mass ejected by those events.
End of explanation
"""
%matplotlib nbagg
plt.plot(s1.history.age[1:], np.array(s1.delayed_extra_numbers[0])/s1.history.timesteps)
plt.xlim(0, 1.1*t[-1])
plt.xlabel('Time [yr]', fontsize=12)
plt.ylabel('Rate [event yr$^{-1}$]', fontsize=12)
"""
Explanation: Let's plot the DTD
End of explanation
"""
# Create the DTD and yields information for the second extra source
# =================================================================
# Event rate [yr^-1] as a function of time [yr].
t2 = [1.4e9, 1.6e9, 1.8e9]
R2 = [4.0, 1.0, 4.0]
# Build the input DTD array
dtd2 = []
for i in range(0,len(t2)):
dtd2.append([t2[i], R2[i]])
# Add the DTD array in the delayed_extra_dtd array.
delayed_extra_dtd = [[dtd],[dtd2]]
# Define the total number of events per unit of Msun formed. This will normalize the DTD.
delayed_extra_dtd_norm = [[1.0e-2],[5.0e-3]]
# Define the yields path for the extra source
delayed_extra_yields = ['yield_tables/r_process_arnould_2007.txt','yield_tables/r_process_arnould_2007.txt']
# Define the total mass ejected by an extra source
delayed_extra_yields_norm = [[1.0e-3],[2.0e-3]]
# Run SYGMA, one SSP with a total mass of 1 Msun at Z = 0.02
mgal = 1.0
s2 = sygma.sygma(iniZ=0.02, delayed_extra_dtd=delayed_extra_dtd, delayed_extra_dtd_norm=delayed_extra_dtd_norm,\
delayed_extra_yields=delayed_extra_yields, delayed_extra_yields_norm=delayed_extra_yields_norm, mgal=mgal,\
dt=1e8, special_timesteps=-1, tend=1.1*t[-1])
%matplotlib nbagg
plt.plot(s2.history.age[1:], np.array(s2.delayed_extra_numbers[0])/s2.history.timesteps)
plt.plot(s2.history.age[1:], np.array(s2.delayed_extra_numbers[1])/s2.history.timesteps)
plt.xlim(0, 1.1*t[-1])
plt.xlabel('Time [yr]', fontsize=12)
plt.ylabel('Rate [event yr$^{-1}$]', fontsize=12)
"""
Explanation: 2.2 Two delayed extra sources
End of explanation
"""
|
ML4DS/ML4all | R4.ML_Regression/Regression_ML_professor.ipynb | mit | # Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
import pylab
"""
Explanation: Parametric Model-Based regression
Notebook version: 1.3 (Sep 20, 2019)
Author: Jesús Cid-Sueiro (jesus.cid@uc3m.es)
Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Changes: v.1.0 - First version, expanding some cells from the Bayesian Regression
notebook
v.1.1 - Python 3 version.
v.1.2 - Revised presentation.
v.1.3 - Updated index notation
Pending changes: * Include regression on the stock data
End of explanation
"""
X = np.array([0.15, 0.41, 0.53, 0.80, 0.89, 0.92, 0.95])
s = np.array([0.09, 0.16, 0.63, 0.44, 0.55, 0.82, 0.95])
"""
Explanation: A quick note on the mathematical notation
In this notebook we will make extensive use of probability distributions. In general, we will use capital letters
${\bf X}$, $S$, $E$, ..., to denote random variables, and lower-case letters ${\bf x}$, $s$, $\epsilon$, ..., to denote the values they can take.
In general, we will use the letter $p$ for probability density functions (pdf). When necessary, we will use capital subindices to make the random variable explicit. For instance, $p_{{\bf X}, S}({\bf x}, s)$ would be the joint pdf of random variables ${\bf X}$ and $S$ at values ${\bf x}$ and $s$, respectively.
However, to avoid a notation overload, we will omit subindices when they are clear from the context. For instance, we will use $p({\bf x}, s)$ instead of $p_{{\bf X}, S}({\bf x}, s)$.
1. Model-based parametric regression
1.1. The regression problem
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, $\{({\bf x}_k, s_k)\}_{k=0}^{K-1}$, is available.
The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples.
1.2. The underlying model assumption
Many regression algorithms are grounded on the idea that all samples from the training set have been generated
independently by some common stochastic process.
<img src="figs/data_model.png" width=180>
If $p({\bf x}, s)$ were known, we could apply estimation theory to estimate $s$ for a given ${\bf x}$ using $p$. For instance, we could apply any of the following classical estimates:
Maximum A Posterior (MAP): $$\hat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x})$$
Minimum Mean Square Error (MSE): $$\hat{s}_{\text{MSE}} = \mathbb{E}\{S \mid {\bf x}\} = \int s \, p(s \mid {\bf x}) \, ds$$
Note that, since these estimators depend on $p(s |{\bf x})$, knowing the posterior distribution of the target variable is enough, and we do not need to know the joint distribution $p({\bf x}, s)$.
More importantly, note that if we knew the underlying model, we would not need the data in ${\cal D}$ to make predictions on new data.
Exercise 1:
Assume the target variable $s$ is a scaled noisy version of the input variable $x$:
$$
s = 2 x + \epsilon
$$
where $\epsilon$ is a Gaussian noise variable with zero mean and unit variance, which does not depend on $x$.
Compute the target model $p(s| x)$
Compute prediction $\hat{s}_\text{MAP}$ for an arbitrary input $x$
Compute prediction $\hat{s}_\text{MSE}$ for an arbitrary input $x$
Compute prediction $\hat{s}_\text{MSE}$ for input $x=4$
Solution:
Since $\epsilon$ is Gaussian, so is $s$ for a given $x$, with mean
$$
\mathbb{E}\{S \mid x\} = \mathbb{E}\{2x \mid x\} + \mathbb{E}\{\epsilon \mid x\} = 2x + 0 = 2x
$$
and variance
$$
\text{Var}\{S \mid x\} = \text{Var}\{2x \mid x\} + \text{Var}\{\epsilon \mid x\} = 0 + 1 = 1
$$
therefore
$$
p(s| x ) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac12(s-2x)^2\right)
$$
The MAP estimate is
$$
\hat{s}_\text{MAP} = \arg\max_{s} p(s \mid x) = 2x
$$
Since the MSE estimate is the conditional mean, which has already been computed, we have $\hat{s}_\text{MSE}= 2x$
The prediction is $\hat{s}_\text{MSE}= 2 \cdot 4 = 8$
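A quick simulation confirms this value (a sanity-check sketch added here, not part of the original exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
x = 4.0
samples = 2 * x + rng.standard_normal(1_000_000)  # s = 2x + eps, eps ~ N(0, 1)
print(round(samples.mean(), 2))                   # close to s_MSE = 2*4 = 8
```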
1.3. Model-based regression
In practice, the underlying model is usually unknown.
Model based-regression methods exploit the idea of using the training data to estimate the posterior distribution $p(s|{\bf x})$ and then apply estimation theory to make predictions.
<img src="figs/ModelBasedReg.png" width=280>
1.4. Parametric model-based regression
In some cases, we may have partial knowledge about the underlying model. In this notebook we will assume that $p$ belongs to a parametric family of distributions $p(s|{\bf x},{\bf w})$, where ${\bf w}$ is some unknown parameter.
Exercise 2:
Assume the target variable $s$ is a scaled noisy version of the input variable $x$:
$$
s = w x + \epsilon
$$
where $\epsilon$ is a Gaussian noise variable with zero mean and unit variance, which does not depend on $x$. Assume that $w$ is known.
1. Compute the target model $p(s| x, w)$
2. Compute prediction $\hat{s}_\text{MAP}$ for an arbitrary input $x$
3. Compute prediction $\hat{s}_\text{MSE}$ for an arbitrary input $x$
Solution:
As in Exercise 1, since $\epsilon$ is Gaussian, so is $s$ for a given $x$, with mean
$$
\mathbb{E}\{S \mid x, w\} = w x
$$
and variance
$$
\text{Var}\{S \mid x, w\} = 1
$$
therefore
$$
p(s| x, w ) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac12(s-wx)^2\right)
$$
The MAP estimate is
$$
\hat{s}_\text{MAP} = \arg\max_{s} p(s \mid x, w) = w x
$$
Since the MSE estimate is the conditional mean, which has already been computed, we have $\hat{s}_\text{MSE}= wx$
We will use the training data to estimate ${\bf w}$
<img src="figs/ParametricReg.png" width=300>
The estimation of ${\bf w}$ from a given dataset $\mathcal{D}$ is the goal of the following sections
2. Maximum Likelihood parameter estimation.
The ML (Maximum Likelihood) principle is well-known in statistics and can be stated as follows: take the value of the parameter to be estimated (in our case, ${\bf w}$) that best explains the given observations (in our case, the training dataset $\mathcal{D}$). Mathematically, this can be expressed as follows:
$$
\hat{\bf w}_{\text{ML}} = \arg \max_{\bf w} p(\mathcal{D} \mid {\bf w})
$$
Exercise 3:
All samples in dataset ${\cal D} = \{(x_k, s_k), k=0,\ldots,K-1\}$ satisfy
$$
s_k = w \cdot x_k + \epsilon_k
$$
where $\epsilon_k$ are i.i.d. (independent and identically distributed) Gaussian noise random variables with zero mean and unit variance, which do not depend on $x_k$.
Compute the ML estimate, $\hat{w}_{\text{ML}}$, of $w$.
Solution:
From Exercise 2,
$$
p\left(s_k \mid x_k, w \right)
= \frac{1}{\sqrt{2\pi}}\exp\left(-\frac12\left(s_k-wx_k\right)^2\right)
$$
Since the noise variables are i.i.d., we have
$$
p(\mathcal{D} \mid w)
= p\left(s_0,\ldots, s_{K-1}\mid x_{0}, \ldots, x_{K-1}, w \right)
p\left(x_0,\ldots, x_{K-1}\mid w \right) \\
\qquad \quad = \prod_{k=0}^{K-1} \frac{1}{\sqrt{2\pi}}
\exp\left(-\frac12\left(s_k-wx_k\right)^2\right)
p\left(x_0,\ldots, x_{K-1} \right)
\\
\qquad \quad = \frac{1}{(2\pi)^{\frac{K}{2}}}
\exp\left(-\frac12 \sum_{k=0}^{K-1} \left(s_k-wx_k\right)^2\right)
p\left(x_0,\ldots, x_{K-1} \right)
$$
Therefore:
$$
\hat{w}_{\text{ML}} = \arg \max_{w} p(\mathcal{D} \mid w)
= \arg \min_{w} \sum_{k=0}^{K-1} \left(s_k-wx_k\right)^2
$$
Differentiating with respect to $w$:
$$
- 2 \sum_{k=0}^{K-1} \left(s_k-\hat{w}_\text{ML} x_k\right) x_k = 0
$$
that is
$$
\hat{w}_\text{ML} = \frac{\sum_{k=0}^{K-1} s_k x_k}
{\sum_{k=0}^{K-1} x_k^2}
$$
Exercise 4:
The inputs and the targets from a dataset ${\cal D}$ have been stored in the following Python arrays:
End of explanation
"""
# <SOL>
plt.figure()
plt.scatter(X, s)
plt.xlabel('x')
plt.ylabel('s')
plt.show()
# </SOL>
"""
Explanation: 4.1. Represent a scatter plot of the data points
End of explanation
"""
# wML = <FILL IN>
wML = np.sum(X*s) / np.sum(X*X)
print("The ML estimate is {}".format(wML))
"""
Explanation: 4.2. Compute the ML estimate
End of explanation
"""
sigma_eps = 1
K = len(s)
wGrid = np.arange(-0.5, 2, 0.01)
p = []
for w in wGrid:
d = s - X*w
# p.append(<FILL IN>)
p.append((1.0/(np.sqrt(2*np.pi)*sigma_eps))**K * np.exp(-np.dot(d, d) / (2*sigma_eps**2)))
# Compute the likelihood for the ML parameter wML
# d = <FILL IN>
d = s-X*wML
# pML = [<FILL IN>]
pML = [(1.0/(np.sqrt(2*np.pi)*sigma_eps))**K * np.exp(-np.dot(d, d) / (2*sigma_eps**2))]
# Plot the likelihood function and the optimal value
plt.figure()
plt.plot(wGrid, p)
plt.stem([wML], pML)
plt.xlabel('$w$')
plt.ylabel('Likelihood function')
plt.show()
"""
Explanation: 4.3. Plot the likelihood as a function of parameter $w$ along the interval $-0.5\le w \le 2$, verifying that the ML estimate takes the maximum value.
End of explanation
"""
xgrid = np.arange(0, 1.2, 0.01)
# sML = <FILL IN>
sML = wML * xgrid
plt.figure()
plt.scatter(X, s)
# plt.plot(<FILL IN>)
plt.plot(xgrid, sML)
plt.xlabel('x')
plt.ylabel('s')
plt.axis('tight')
plt.show()
"""
Explanation: 4.4. Plot the prediction function on top of the data scatter plot
End of explanation
"""
X = np.array([0.15, 0.41, 0.53, 0.80, 0.89, 0.92, 0.95])
s = np.array([0.09, 0.16, 0.63, 0.44, 0.55, 0.82, 0.95])
"""
Explanation: 2.1. Model assumptions
In order to solve exercise 4 we have taken advantage of the statistical independence of the noise components. Some independence assumptions are required in general to compute the ML estimate in other scenarios.
In order to estimate ${\bf w}$ from the training data in a mathematically rigorous and compact form, let us group the target variables into a vector
$$
{\bf s} = \left(s_0, \dots, s_{K-1}\right)^\top
$$
and the input vectors into a matrix
$$
{\bf X} = \left({\bf x}_0, \dots, {\bf x}_{K-1}\right)^\top
$$
We will make the following assumptions:
A1. All samples in ${\cal D}$ have been generated by the same distribution, $p({\bf x}, s \mid {\bf w})$
A2. Input variables ${\bf x}$ do not depend on ${\bf w}$. This implies that
$$
p({\bf X} \mid {\bf w}) = p({\bf X})
$$
A3. Targets $s_{0},\ldots, s_{K-1}$ are statistically independent, given ${\bf w}$ and the inputs ${\bf x}_0,\ldots, {\bf x}_{K-1}$, that is:
$$
p({\bf s} \mid {\bf X}, {\bf w}) = \prod_{k=0}^{K-1} p(s_k \mid {\bf x}_k, {\bf w})
$$
Since ${\cal D} = ({\bf X}, {\bf s})$, we can write
$$p(\mathcal{D}|{\bf w})
= p({\bf s}, {\bf X}|{\bf w})
= p({\bf s} | {\bf X}, {\bf w}) p({\bf X}|{\bf w})
$$
Using assumption A2,
$$
p(\mathcal{D}|{\bf w})
= p({\bf s} | {\bf X}, {\bf w}) p({\bf X})
$$
and, finally, using assumption A3, we can express the estimation problem as the computation of
\begin{align}
\hat{\bf w}_{\text{ML}}
&= \arg \max_{\bf w} p({\bf s} \mid {\bf X},{\bf w}) \\
&= \arg \max_{\bf w} \prod_{k=0}^{K-1} p(s_k \mid {\bf x}_k, {\bf w}) \\
&= \arg \max_{\bf w} \sum_{k=0}^{K-1}\log p(s_k \mid {\bf x}_k, {\bf w})
\end{align}
Any of the last three terms can be used to optimize ${\bf w}$. The sum in the last term is usually called the log-likelihood function, $L({\bf w})$, whereas the product in the previous line is simply referred as the likelihood function.
2.2. Summary.
Let's summarize what we need to do in order to design a regression algorithm based on ML estimation:
Assume a parametric data model $p(s| {\bf x},{\bf w})$
Using the data model and the i.i.d. assumption, compute $p({\bf s}| {\bf X},{\bf w})$.
Find an expression for ${\bf w}_{\text{ML}}$
Assuming ${\bf w} = {\bf w}_{\text{ML}}$, compute the MAP or the minimum MSE estimate of $s$ given ${\bf x}$.
3. ML estimation for a Gaussian model.
3.1. Step 1: The Gaussian generative model
Let us assume that the target variables $s_k$ in dataset $\mathcal{D}$ are given by
$$
s_k = {\bf w}^\top {\bf z}_k + \varepsilon_k
$$
where ${\bf z}_k$ is the result of some transformation of the inputs, ${\bf z}_k = T({\bf x}_k)$, and $\varepsilon_k$ are i.i.d. instances of a Gaussian random variable with mean zero and variance $\sigma_\varepsilon^2$, i.e.,
$$
p_E(\varepsilon) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon}
\exp\left(-\frac{\varepsilon^2}{2\sigma_\varepsilon^2}\right)
$$
Assuming that the noise variables are independent of ${\bf x}$ and ${\bf w}$, then, for a given ${\bf x}$ and ${\bf w}$, the target variable is Gaussian with mean ${\bf w}^\top {\bf z}$ and variance $\sigma_\varepsilon^2$
$$
p(s|{\bf x}, {\bf w}) = p_E(s-{\bf w}^\top{\bf z}) =
\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}
\exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)
$$
3.2. Step 2: Likelihood function
Now we need to compute the likelihood function $p({\bf s}, {\bf X} | {\bf w})$. If the samples are i.i.d. we can write
$$
p({\bf s} \mid {\bf X}, {\bf w})
= \prod_{k=0}^{K-1} p(s_k \mid {\bf x}_k, {\bf w})
= \prod_{k=0}^{K-1} \frac{1}{\sqrt{2\pi}\sigma_\varepsilon}
\exp\left(-\frac{\left(s_k-{\bf w}^\top{\bf z}_k\right)^2}{2\sigma_\varepsilon^2}\right) \\
= \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K
\exp\left(-\sum_{k=0}^{K-1} \frac{\left(s_k-{\bf w}^\top{\bf z}_k\right)^2}{2\sigma_\varepsilon^2}\right)
$$
Finally, grouping variables ${\bf z}_k$ in
$${\bf Z} = \left({\bf z}_0, \dots, {\bf z}_{K-1}\right)^\top$$
we get
$$
p({\bf s}| {\bf X}, {\bf w})
= \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K
\exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right)
$$
3.3. Step 3: ML estimation.
The <b>maximum likelihood</b> solution is then given by:
$$
{\bf w}_\text{ML} = \arg \max_{\bf w} p({\bf s} \mid {\bf X}, {\bf w}) = \arg \min_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2
$$
Note that $\|{\bf s} - {\bf Z}{\bf w}\|^2$ is the sum of the squared prediction errors (Sum of Squared Errors, SSE) for all samples in the dataset. This is also called the *Least Squares* (LS) solution.
The LS solution can be easily computed by differentiation,
$$
\nabla_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2\Bigg|_{{\bf w} = {\bf w}_\text{ML}}
= - 2 {\bf Z}^\top{\bf s} + 2 {\bf Z}^\top{\bf Z} {\bf w}_{\text{ML}}
= {\bf 0}
$$
and it is equal to
$$
{\bf w}_\text{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}
$$
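As a quick numerical sanity check of the LS formula (a standalone sketch with synthetic data; the variable names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
s = Z @ w_true + 0.1 * rng.standard_normal(200)

# Normal equations: w_ML = (Z^T Z)^{-1} Z^T s  (solved, rather than inverted)
w_ml = np.linalg.solve(Z.T @ Z, Z.T @ s)
print(np.round(w_ml, 2))  # close to w_true
```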
3.4. Step 4: Prediction function.
The last step consists of computing an estimate of $s$ by assuming that the true value of the weight parameters is ${\bf w}_\text{ML}$. In particular, the minimum MSE estimate is
$$
\hat{s}_\text{MSE} = \mathbb{E}\{s \mid {\bf x},{\bf w}_\text{ML}\}
$$
Knowing that, given ${\bf x}$ and ${\bf w}$, $s$ is normally distributed with mean ${\bf w}^\top {\bf z}$, we can write
$$
\hat{s}_\text{MSE} = {\bf w}_\text{ML}^\top {\bf z}
$$
Exercise 5:
Assume that the targets in the one-dimensional dataset given by
End of explanation
"""
sigma_eps = 0.3
"""
Explanation: have been generated by the polynomial Gaussian model
$$
s = w_0 + w_1 x + w_2 x^2 + \epsilon
$$
(i.e., with ${\bf z} = T(x) = (1, x, x^2)^\intercal$) with noise variance
End of explanation
"""
# Compute the extended input matrix Z
nx = len(X)
# Z = <FILL IN>
Z = np.hstack((np.ones((nx, 1)), X[:,np.newaxis], X[:,np.newaxis]**2))
# Compute the ML estimate using linalg.lstsq from Numpy.
# wML = <FILL IN>
wML = np.linalg.lstsq(Z, s)[0]
print(wML)
"""
Explanation: 5.1. Compute the ML estimate.
End of explanation
"""
K = len(s)
# Compute the likelihood for the ML parameter wML
# d = <FILL IN>
d = s - np.dot(Z, wML)
# LwML = [<FILL IN>]
LwML = - K/2*np.log(2*np.pi*sigma_eps**2) - np.dot(d, d) / (2*sigma_eps**2)
print(LwML)
"""
Explanation: 5.2. Compute the value of the log-likelihood function for ${\bf w}={\bf w}_\text{ML}$.
End of explanation
"""
xgrid = np.arange(0, 1.2, 0.01)
nx = len(xgrid)
# Compute the input matrix for the grid data in x
# Z = <FILL IN>
Z = np.hstack((np.ones((nx, 1)), xgrid[:,np.newaxis], xgrid[:,np.newaxis]**2))
# sML = <FILL IN>
sML = np.dot(Z, wML)
plt.figure()
plt.scatter(X, s)
# plt.plot(<FILL IN>)
plt.plot(xgrid, sML)
plt.xlabel('x')
plt.ylabel('s')
plt.axis('tight')
plt.show()
"""
Explanation: 5.3. Plot the prediction function over the data scatter plot
End of explanation
"""
K = len(s)
wGrid = np.arange(0, 6, 0.01)
p = []
Px = np.prod(X)
xs = np.dot(X,s)
for w in wGrid:
# p.append(<FILL IN>)
p.append((w**K)*Px*np.exp(-w*xs))
plt.figure()
# plt.plot(<FILL IN>)
plt.plot(wGrid, p)
plt.xlabel('$w$')
plt.ylabel('Likelihood function')
plt.show()
"""
Explanation: Exercise 6:
Assume the dataset $\mathcal{D} = \{(x_k, s_k), k=0,\ldots, K-1\}$ contains i.i.d. samples from a distribution with posterior density given by
$$
p(s \mid x, w) = w x \exp(- w x s), \qquad s\ge0, \,\, x\ge 0, \,\, w\ge 0
$$
6.1. Determine an expression for the likelihood function
Solution:
<SOL>
The likelihood function is
$$
p({\bf s}|w, {\bf X})
= \prod_{k=0}^{K-1} w x_k \exp(- w x_k s_k)
= w^K \left(\prod_{k=0}^{K-1} x_k\right) \exp\left(- w \sum_{k=0}^{K-1} x_k s_k\right)
$$
</SOL>
6.2. Draw the likelihood function for the dataset in Exercise 4 in the range $0\le w\le 6$.
End of explanation
"""
# wML = <FILL IN>
wML = float(K) / xs
print(wML)
"""
Explanation: 6.3. Determine the maximum likelihood coefficient, $w_\text{ML}$.
(Hint: you can maximize the log-likelihood function instead of the likelihood function in order to simplify the differentiation)
Solution:
<SOL>
Applyng the logarithm to the likelihood function we get
$$
\log p({\bf s}|{\bf w}, {\bf X})
= K\log w + \sum_{k=0}^{K-1} \log\left(x_k\right) - w {\bf X}^\top {\bf s}
$$
which is maximized for
$$
w_\text{ML} = \frac{K}{{\bf X}^\top {\bf s}}
$$
</SOL>
6.4. Compute $w_\text{ML}$ for the dataset in Exercise 4
End of explanation
"""
xgrid = np.arange(0.1, 1.2, 0.01)
# sML = <FILL IN>
sML = 1 / (wML * xgrid)
plt.figure()
plt.scatter(X, s)
# plt.plot(<FILL IN>)
plt.plot(xgrid, sML)
plt.xlabel('x')
plt.ylabel('s')
plt.axis('tight')
plt.show()
"""
Explanation: 6.5. Assuming $w = w_\text{ML}$, compute the prediction function based on the estimate $\hat{s}_\text{MSE}$
Solution:
<SOL>
$$
\hat{s}_{\text{MSE}} = \mathbb{E}\{s \mid x, w_\text{ML}\} = \int_0^\infty s \, w_\text{ML} x \exp(-w_\text{ML} x s) \, ds = \frac{1}{w_\text{ML} x}
$$
</SOL>
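The conditional mean can also be checked by sampling (a sketch added for illustration; note that, for fixed $x$ and $w$, $s$ follows an exponential distribution with rate $wx$):

```python
import numpy as np

rng = np.random.default_rng(4)
w, x = 2.0, 3.0
# For fixed x and w, the density w*x*exp(-w*x*s) is exponential with scale 1/(w*x)
samples = rng.exponential(scale=1.0 / (w * x), size=1_000_000)
print(round(samples.mean(), 3))  # close to 1/(w*x) = 1/6
```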
6.6. Plot the prediction function obtained in ap. 6.5, and compare it with the linear predictor in exercise 4
End of explanation
"""
|
oditorium/blog | iPython/MCPricing2-CallLognorm.ipynb | agpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: iPython Cookbook - Monte Carlo Pricing II - Call (Lognormal)
Pricing a call option with Monte Carlo (Lognormal model)
End of explanation
"""
strike = 100
mat = 1
forward = 100
vol = 0.3
"""
Explanation: Those are our option and market parameters: the exercise price of the option strike, the forward price of the underlying security forward and its volatility vol (as the model is lognormal, the volatility is a percentage number; e.g. 0.20 = 20%)
End of explanation
"""
def call(k=100):
def payoff(spot):
if spot > k:
return spot - k
else:
return 0
return payoff
payoff = call(k=strike)
"""
Explanation: We now define our payoff function using a closure: the variable payoff represents a function with one parameter spot with the strike k being frozen at whatever value it had when the outer function call was called to set payoff
End of explanation
"""
from scipy.stats import norm
def bscall(fwd=100,strike=100,sig=0.1,mat=1):
lnfs = log(fwd/strike)
sig2t = sig*sig*mat
sigsqrt = sig*sqrt(mat)
d1 = (lnfs + 0.5 * sig2t)/ sigsqrt
d2 = (lnfs - 0.5 * sig2t)/ sigsqrt
fv = fwd * norm.cdf (d1) - strike * norm.cdf (d2)
#print "d1 = %f (N = %f)" % (d1, norm.cdf (d1))
#print "d2 = %f (N = %f)" % (d2, norm.cdf (d2))
return fv
#bscall(fwd=100, strike=100, sig=0.1, mat=1)
"""
Explanation: We also define an analytic function for calculation the price of a call using the Black Scholes formula, allowing us to benchmark our results
End of explanation
"""
N = 10000
z = np.random.standard_normal((N))
#z
"""
Explanation: We now generate a set of Standard Gaussian variables $z$ as a basis for our simulation...
End of explanation
"""
x = forward * exp(- 0.5 * vol * vol * mat + vol * sqrt(mat) * z)
min(x), max(x), mean(x)
"""
Explanation: ...and transform it into a lognormal variable with the right mean and log standard deviation, i.e. a variable that is distributed according to $LN(F,\sigma\sqrt{T})$. Specifically, to transform a Standard Gaussian $Z$ into a lognormal $X$ with the above parameters we use the following formula
$$
X = F \times \exp ( -0.5 \sigma^2 T + \sigma \sqrt{T} Z )
$$
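The drift correction $-\frac12\sigma^2 T$ ensures the simulated spots average back to the forward; a quick standalone check (independent of the draws used in the cells above):

```python
import numpy as np

rng = np.random.default_rng(2)
F, sigma, T = 100.0, 0.3, 1.0
z = rng.standard_normal(1_000_000)
spots = F * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)
print(round(spots.mean(), 1))  # close to the forward F = 100
```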
End of explanation
"""
def trim_xvals(a):
a1 = np.zeros(len(a)-1)
for idx in range(0,len(a)-1):
#a1[idx] = 0.5*(a[idx]+a[idx+1])
a1[idx] = a[idx]
return a1
hg0=np.histogram(x, bins=50)
xvals0 = trim_xvals(hg0[1])
fwd1 = mean(x)
print ("forward = %f" % (fwd1))
plt.bar(xvals0,hg0[0], width=0.5*(xvals0[1]-xvals0[0]))
plt.title('forward distribution')
plt.xlabel('forward')
plt.ylabel('occurrences')
plt.show()
"""
Explanation: We first look at the histogram of the spot prices $x$ (the function trim_xvals simply deals with the fact that histogram returns the starting and the ending point of each bin, i.e. overall one point too many)
End of explanation
"""
po = list(map(payoff,x))
fv = mean(po)
#po
"""
Explanation: We now determine the payoff values from our draws of the final spot price. Note that we need to use the map command rather than simply writing po = payoff(x). The reason for this is that this latter form is not compatible with the if statement in our payoff function. We also already compute the forward value of the option, which is simply the average payoff over all simulations.
End of explanation
"""
hg = np.histogram(po,bins=50)
xvals = trim_xvals(hg[1])
plt.bar(xvals,hg[0], width=0.9*(xvals[1]-xvals[0]))
plt.title('payout distribution')
plt.xlabel('payout')
plt.ylabel('occurrences')
plt.show()
"""
Explanation: Now we produce the histogram of the payoffs
End of explanation
"""
x = (forward+1) * exp(- 0.5 * vol * vol * mat + vol * sqrt(mat) * z)
po = list(map(payoff,x))
fv_plus = mean(po)
x = (forward-1) * exp(- 0.5 * vol * vol * mat + vol * sqrt(mat) * z)
po = list(map(payoff,x))
fv_minus = mean(po)
x = forward * exp(- 0.5 * (vol+0.01) * (vol+0.01) * mat + (vol+0.01) * sqrt(mat) * z)
po = list(map(payoff,x))
fv_volp = mean(po)
x = forward * exp(- 0.5 * vol * vol * (mat-0.1) + vol * sqrt(mat-0.1) * z)
po = list(map(payoff,x))
fv_timep = mean(po)
print ("Strike = %f" % strike)
print ("Maturity = %f" % mat)
print ("Forward = %f" % forward)
print ("Volatility = %f" % vol)
print ("FV = %f" % fv)
print (" check = %f" % bscall(fwd=forward, strike=strike, sig=vol, mat=mat))
print ("Delta = %f" % ((fv_plus - fv_minus)/2))
print ("Gamma = %f" % ((fv_plus + fv_minus - 2 * fv)))
print ("Theta = %f" % ((fv_timep - fv)))
print ("Vega = %f" % ((fv_volp - fv)))
"""
Explanation: In the next step we compute our "Greeks", i.e. a number of derivatives of the forward value with respect to the underlying parameters. What is crucial here is that those derivatives are calculated on the same draw of random numbers $z$, otherwise the Monte Carlo sampling error will dwarf the signal. The sensitivities we compute are for increasing / decreasing the forward by one currency unit (for Delta and Gamma), increasing the volatility by one percentage point (for Vega), and decreasing the time to maturity by 0.1y (for Theta)
End of explanation
"""
import sys
print(sys.version)
"""
Explanation: Licence and version
(c) Stefan Loesch / oditorium 2014; all rights reserved
(license)
End of explanation
"""
|
palrogg/foundations-homework | Data_and_databases/Homework_4_Paul_Ronga.ipynb | mit | numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
"""
Explanation: Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
"""
numbers = [int(i) for i in numbers_str.split(',')] # replace 'None' with an expression, as described above
max(numbers)
"""
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
"""
sorted(numbers)[-10:]
"""
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
"""
sorted([i for i in numbers if i % 3 == 0])
"""
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
"""
from math import sqrt
[sqrt(i) for i in numbers if i < 100]
"""
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
"""
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
"""
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
"""
[i['name'] for i in planets if i['diameter'] > 4]
"""
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
"""
sum([i['mass'] for i in planets])
"""
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
"""
[i['name'] for i in planets if 'giant' in i['type']]
"""
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
"""
[i['name'] for i in sorted(planets, key=lambda planet: planet['moons'])]
"""
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
End of explanation
"""
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
"""
Explanation: Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
"""
[line for line in poem_lines if re.search(r'\b\w{4}\s\w{4}\b', line)]
"""
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
"""
[line for line in poem_lines if re.search(r'\b\w{5}\W*$', line)]
"""
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
"""
all_lines = " ".join(poem_lines)
"""
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
"""
re.findall(r'I\s(.*?)\s', all_lines)
"""
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
"""
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
"""
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
"""
menu = []
for item in entrees:
dictitem = {}
dictitem['name'] = re.search(r'(.*)\s\$', item).group(1)  # group(1) is the first capture group; group(0) is the whole match
dictitem['price'] = float(re.search(r'\d{1,2}\.\d{2}', item).group())
dictitem['vegetarian'] = bool(re.match(r'.*v$', item))
menu.append(dictitem)
menu
"""
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/c37ac181bfe2eb2f1fa69c3fab30417d/mne_cov_power.ipynb | bsd-3-clause | # Author: Denis A. Engemann <denis-alexander.engemann@inria.fr>
# Luke Bloy <luke.bloy@gmail.com>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse_cov
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
"""
Explanation: Compute source power estimate by projecting the covariance with MNE
We can apply the MNE inverse operator to a covariance matrix to obtain
an estimate of source power. This is computationally more efficient than first
estimating the source timecourses and then computing their power. This
code is based on the code from :footcite:Sabbagh2020 and has been useful to
correct for individual field spread using source localization in the context of
predictive modeling.
References
.. footbibliography::
End of explanation
"""
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_empty_room.crop(0, 30) # cropped just for speed
raw_empty_room.info['bads'] = ['MEG 2443']
raw_empty_room.add_proj(raw.info['projs'])
noise_cov = mne.compute_raw_covariance(raw_empty_room, method='shrunk')
del raw_empty_room
"""
Explanation: Compute empty-room covariance
First we compute an empty-room covariance, which captures noise from the
sensors and environment.
End of explanation
"""
raw.pick(['meg', 'stim', 'eog']).load_data().filter(4, 12)
raw.info['bads'] = ['MEG 2443']
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
tmin, tmax = -0.2, 0.5
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
proj=True, picks=('meg', 'eog'), baseline=None,
reject=reject, preload=True, decim=5, verbose='error')
del raw
"""
Explanation: Epoch the data
End of explanation
"""
base_cov = mne.compute_covariance(
epochs, tmin=-0.2, tmax=0, method='shrunk', verbose=True)
data_cov = mne.compute_covariance(
epochs, tmin=0., tmax=0.2, method='shrunk', verbose=True)
fig_noise_cov = mne.viz.plot_cov(noise_cov, epochs.info, show_svd=False)
fig_base_cov = mne.viz.plot_cov(base_cov, epochs.info, show_svd=False)
fig_data_cov = mne.viz.plot_cov(data_cov, epochs.info, show_svd=False)
"""
Explanation: Compute and plot covariances
In addition to the empty-room covariance above, we compute two additional
covariances:
Baseline covariance, which captures signals not of interest in our
analysis (e.g., sensor noise, environmental noise, physiological
artifacts, and also resting-state-like brain activity / "noise").
Data covariance, which captures our activation of interest (in addition
to noise sources).
End of explanation
"""
evoked = epochs.average().pick('meg')
evoked.drop_channels(evoked.info['bads'])
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag')
noise_cov.plot_topomap(evoked.info, 'grad', title='Noise')
data_cov.plot_topomap(evoked.info, 'grad', title='Data')
data_cov.plot_topomap(evoked.info, 'grad', noise_cov=noise_cov,
title='Whitened data')
"""
Explanation: We can also look at the covariances using topomaps, here we just show the
baseline and data covariances, followed by the data covariance whitened
by the baseline covariance:
End of explanation
"""
# Read the forward solution and compute the inverse operator
fname_fwd = meg_path / 'sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
# make an MEG inverse operator
info = evoked.info
inverse_operator = make_inverse_operator(info, fwd, noise_cov,
loose=0.2, depth=0.8)
"""
Explanation: Apply inverse operator to covariance
Finally, we can construct an inverse using the empty-room noise covariance:
End of explanation
"""
stc_data = apply_inverse_cov(data_cov, evoked.info, inverse_operator,
nave=len(epochs), method='dSPM', verbose=True)
stc_base = apply_inverse_cov(base_cov, evoked.info, inverse_operator,
nave=len(epochs), method='dSPM', verbose=True)
"""
Explanation: Project our data and baseline covariance to source space:
End of explanation
"""
stc_data /= stc_base
brain = stc_data.plot(subject='sample', subjects_dir=subjects_dir,
clim=dict(kind='percent', lims=(50, 90, 98)),
smoothing_steps=7)
"""
Explanation: And visualize power relative to the baseline:
End of explanation
"""
|
benkoo/fast_ai_coursenotes | deeplearning1/nbs/lesson1.ipynb | apache-2.0 | %matplotlib inline
import keras.backend as K
K.set_image_dim_ordering('th')
"""
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle website, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always wants to use this when using jupyter notebook:
End of explanation
"""
#path = "data/dogscats/"
path = "data/dogscats/sample/"
"""
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
"""
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
"""
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
"""
from imp import reload
import utils; reload(utils)
from utils import plots
"""
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
Note: In 'utils.py', we need to change "import cPickle as pickle" to "import _pickle as pickle" (for Python 3 compatibility).
End of explanation
"""
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
from imp import reload
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
"""
vgg = Vgg16()
"""
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
"""
batches = vgg.get_batches(path+'train', batch_size=4)
"""
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
"""
imgs,labels = next(batches)
"""
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
"""
plots(imgs, titles=labels)
"""
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
End of explanation
"""
vgg.predict(imgs, True)
"""
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
"""
vgg.classes[:4]
"""
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
End of explanation
"""
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
"""
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
"""
vgg.finetune(batches)
"""
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
"""
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
"""
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
"""
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
"""
FILES_PATH = 'http://www.platform.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
"""
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
"""
classes[:5]
"""
Explanation: Here's a few examples of the categories we just imported:
End of explanation
"""
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
"""
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
"""
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
"""
Explanation: ...and here's the fully-connected definition.
End of explanation
"""
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
return x[:, ::-1] # reverse channel axis rgb->bgr
"""
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
"""
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
"""
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
"""
model = VGG_16()
"""
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
"""
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
"""
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
"""
batch_size = 4
"""
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
"""
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
"""
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
"""
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
"""
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
"""
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
"""
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation
"""
|
jtwalsh0/methods | .ipynb_checkpoints/Statistics-checkpoint.ipynb | mit | %%latex
\begin{align*}
f_X(X=x) &= cx^2, 0 \leq x \leq 2 \\
1 &= c\int_0^2 x^2 dx \\
&= c[\frac{1}{3}x^3 + d]_0^2 \\
&= c[\frac{8}{3} + d - d] \\
&= c[\frac{8}{3}] \\
f_X(X=x) &= \frac{3}{8}x^2, 0 \leq x \leq 2
\end{align*}
u = np.random.uniform(size=100000)
x = 2 * u**(1./3)  # inverse CDF: x = 2u^(1/3)
df = pd.DataFrame({'x':x})
print df.describe()
ggplot(aes(x='x'), data=df) + geom_histogram()
"""
Explanation: Inversion sampling example
First find the normalizing constant:
$$
\begin{align}
f_X(X=x) &= cx^2, 0 \leq x \leq 2 \\
1 &= c\int_0^2 x^2 dx \\
&= c[\frac{1}{3}x^3 + d]_0^2 \\
&= c[\frac{8}{3} + d - d] \\
&= c[\frac{8}{3}] \\
f_X(X=x) &= \frac{3}{8}x^2, 0 \leq x \leq 2
\end{align}
$$
Next find the cumulative distribution function:
* $F_X(X=x) = \int_0^x \frac{3}{8}x^2dx$
* $=\frac{3}{8}[\frac{1}{3}x^3 + d]_0^x$
* $=\frac{3}{8}[\frac{1}{3}x^3 + d - d]$
* $=\frac{1}{8}x^3$
We can randomly generate values from a standard uniform distribution and set equal to the CDF. Solve for $x$. Plug the randomly generated values into the equation and plot the histogram or density of $x$ to get the shape of the distribution:
* $u = \frac{1}{8}x^3$
* $x^3 = 8u$
* $x = 2u^{\frac{1}{3}}$
End of explanation
"""
x = np.random.uniform(size=10000)
y = np.random.uniform(size=10000)
"""
Explanation: Joint Distribution
Find the normalizing constant:
* $f_{X,Y}(X=x,Y=y) = c(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
* $1 = \int_0^2 \int_0^2 c(2x + y) dy dx$
* $ = c\int_0^2 [2xy + \frac{1}{2}y^2 + d]_0^2 dx$
* $ = c\int_0^2 [4x + \frac{1}{2}4 + d - d] dx$
* $ = c\int_0^2 (4x + 2) dx$
* $ = c[2x^2 + 2x + d]_0^2$
* $ = c[2(4) + 2(2) + d - d]$
* $ = 12c$
* $c = \frac{1}{12}$
* $f_{X,Y}(X=x,Y=y) = \frac{1}{12}(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
End of explanation
"""
u = np.random.uniform(size=100000)
x = (-1 + (1 + 24*u)**.5) / 2
df = pd.DataFrame({'x':x})
ggplot(aes(x='x'), data=df) + geom_histogram()
"""
Explanation: Find the marginal distribution:
* $f_{X,Y}(X=x,Y=y) = \frac{1}{12}(2x + y), 0 \leq x \leq 2, 0 \leq y \leq 2$
* $f_X(X=x) = \int_0^2 \frac{1}{12}(2x + y) dy$
* $ = \frac{1}{12}[2xy + \frac{1}{2}y^2 + d]_0^2$
* $ = \frac{1}{12}[4x + 2 + d - d]$
* $ = \frac{4x + 2}{12}$
* $ = \frac{2x + 1}{6}$
Inversion sampling example:
* $F_X(X=x) = \int_0^x \dfrac{2x+1}{6}dx$
* $= \frac{1}{6}[x^2 + x + d]_0^x$
* $= \frac{x(x + 1)}{6}$
* $u = \frac{x^2 + x}{6}$
* $0 = x^2 + x - 6u$
* $x = \frac{-1 \pm \sqrt{1 + 4 \times 6u}}{2}$
End of explanation
"""
|
marxav/hello-world | ann_101_numpy_step_by_step.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: <a href="https://colab.research.google.com/github/marxav/hello-world-python/blob/master/ann_101_numpy_step_by_step.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Import Python Librairies
End of explanation
"""
X_train = np.array([1.0,])
Y_train = np.array([-2.0,])
"""
Explanation: Prepare the data
End of explanation
"""
ANN_ARCHITECTURE = [
{"input_dim": 1, "output_dim": 2, "activation": "relu"},
{"input_dim": 2, "output_dim": 2, "activation": "relu"},
{"input_dim": 2, "output_dim": 1, "activation": "none"},
]
PSEUDO_RANDOM_PARAM_VALUES = {
'W1': np.array([[ 0.01],
[-0.03]]),
'b1': np.array([[ 0.02],
[-0.04]]),
'W2': np.array([[ 0.05, -0.06 ],
[-0.07, 0.08]]),
'b2': np.array([[ 0.09],
[-0.10]]),
'W3': np.array([[-0.11, -0.12]]),
'b3': np.array([[-0.13]])
}
def relu(Z):
return np.maximum(0,Z)
def relu_backward(dA, Z):
dZ = np.array(dA, copy = True)
dZ[Z <= 0] = 0;
return dZ;
"""
Explanation: Build the artificial neural-network
End of explanation
"""
def single_layer_forward_propagation(A_prev, W_curr, b_curr, activation):
# calculation of the input value for the activation function
Z_curr = np.dot(W_curr, A_prev) + b_curr
# selection of activation function
if activation == "none":
return Z_curr, Z_curr
elif activation == "relu":
activation_func = relu
else:
raise Exception('Non-supported activation function')
# return of calculated activation A and the intermediate Z matrix
return activation_func(Z_curr), Z_curr
def full_forward_propagation(X, params_values, ann_architecture):
# creating a temporary memory to store the information needed for a backward step
memory = {}
# X vector is the activation for layer 0
A_curr = X
# iteration over network layers
for idx, layer in enumerate(ann_architecture):
# we number network layers starting from 1
layer_idx = idx + 1
# transfer the activation from the previous iteration
A_prev = A_curr
# extraction of the activation function for the current layer
activ_function_curr = layer["activation"]
# extraction of W for the current layer
W_curr = params_values["W" + str(layer_idx)]
# extraction of b for the current layer
b_curr = params_values["b" + str(layer_idx)]
# calculation of activation for the current layer
A_curr, Z_curr = single_layer_forward_propagation(A_prev, W_curr, b_curr, activ_function_curr)
# saving calculated values in the memory
memory["A" + str(idx)] = A_prev
memory["Z" + str(layer_idx)] = Z_curr
# return of prediction vector and a dictionary containing intermediate values
return A_curr, memory
def get_cost_value(Ŷ, Y):
# this cost function works for 1-dimension only
# to do: use a quadratic function instead
cost = Ŷ - Y
return np.squeeze(cost)
"""
Explanation: Single layer forward propagation step
$$\boldsymbol{Z}^{[l]} = \boldsymbol{W}^{[l]} \cdot \boldsymbol{A}^{[l-1]} + \boldsymbol{b}^{[l]}$$
$$\boldsymbol{A}^{[l]} = g^{[l]}(\boldsymbol{Z}^{[l]})$$
End of explanation
"""
def single_layer_backward_propagation(dA_curr, W_curr, b_curr, Z_curr, A_prev, activation, layer, debug=False):
# end of BP1 or BP2
if activation == "none": # i.e. no σ in the layer
dZ_curr = dA_curr
else: # i.e. σ in the layer
if activation == "relu":
backward_activation_func = relu_backward
else:
raise Exception('activation function not supported.')
# calculation of the activation function derivative
dZ_curr = backward_activation_func(dA_curr, Z_curr)
if debug:
print('Step_4: layer',layer,'dZ=', dZ_curr.tolist())
# BP3: derivative of the matrix W
dW_curr = np.dot(dZ_curr, A_prev.T) # BP3
if debug:
# tolist() allows printing a numpy array on a single debug line
print('Step_4: layer',layer,'dW=dZ.A_prev.T=', dZ_curr.tolist(), '.', A_prev.T.tolist())
print(' dW=', dW_curr.tolist())
# BP4: derivative of the vector b
db_curr = np.sum(dZ_curr, axis=1, keepdims=True) # BP4
if debug:
print('Step_4: layer',layer,'db=', db_curr.tolist())
# beginning of BP2: error (a.k.a. delta) at the ouptut of matrix A_prev
# but without taking into account the derivating of the activation function
# which will be done after, in the other layer (cf. "end of BP2")
dA_prev = np.dot(W_curr.T, dZ_curr)
if debug:
print('Step_4: layer',layer,'dA_prev=W.T.dZ=', W_curr.T.tolist(), '.', dZ_curr.tolist())
print(' dA_prev=', dA_prev.tolist())
return dA_prev, dW_curr, db_curr
def full_backward_propagation(Ŷ, cost, memory, params_values, ann_architecture, debug=False):
grads_values = {}
# number of examples
m = Ŷ.shape[1]
# initiation of gradient descent algorithm
    # i.e. compute ∇C (beginning of BP1)
dA_prev = cost.reshape(Ŷ.shape)
for layer_idx_prev, layer in reversed(list(enumerate(ann_architecture))):
# we number network layers from 1
layer_idx_curr = layer_idx_prev + 1
# extraction of the activation function for the current layer
activ_function_curr = layer["activation"]
dA_curr = dA_prev
A_prev = memory["A" + str(layer_idx_prev)]
Z_curr = memory["Z" + str(layer_idx_curr)]
W_curr = params_values["W" + str(layer_idx_curr)]
b_curr = params_values["b" + str(layer_idx_curr)]
dA_prev, dW_curr, db_curr = single_layer_backward_propagation(
dA_curr, W_curr, b_curr, Z_curr, A_prev, activ_function_curr, layer_idx_curr, debug)
grads_values["dW" + str(layer_idx_curr)] = dW_curr
grads_values["db" + str(layer_idx_curr)] = db_curr
return grads_values
"""
Explanation: Figure: The four main formula of backpropagation at each layer. For more detail refer to http://neuralnetworksanddeeplearning.com/chap2.html
End of explanation
"""
def update(params_values, grads_values, ann_architecture, learning_rate, m):
# iteration over network layers
for layer_idx, layer in enumerate(ann_architecture, 1):
params_values["W" + str(layer_idx)] -= learning_rate * grads_values["dW" + str(layer_idx)] / m
params_values["b" + str(layer_idx)] -= learning_rate * grads_values["db" + str(layer_idx)] / m
return params_values;
def train(X, Y, ann_architecture, params_values, learning_rate, debug=False, callback=None):
# initiation of neural net parameters
# initiation of lists storing the history
# of metrics calculated during the learning process
cost_history = []
# performing calculations for subsequent iterations
Ŷ, memory = full_forward_propagation(X, params_values, ann_architecture)
if debug:
print('Step_2: memory=%s', memory)
print('Step_2: Ŷ=', Ŷ)
# calculating metrics and saving them in history (just for future information)
cost = get_cost_value(Ŷ, Y)
if debug:
print('Step_3: cost=%.5f' % cost)
cost_history.append(cost)
# step backward - calculating gradient
grads_values = full_backward_propagation(Ŷ, cost, memory, params_values, ann_architecture, debug)
#print('grads_values:',grads_values)
# updating model state
m = X.shape[0] # m is number of samples in the batch
params_values = update(params_values, grads_values, ann_architecture, learning_rate, m)
if debug:
print('Step_5: params_values=', params_values)
return params_values
X_train = X_train.reshape(X_train.shape[0], 1)
Y_train = Y_train.reshape(Y_train.shape[0], 1)
"""
Explanation: For each $l = L, L-1, \ldots, 2$:
* update the weights according to the rule $w^l \rightarrow w^l-\frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$
* update the biases according to the rule $b^l \rightarrow b^l-\frac{\eta}{m} \sum_x \delta^{x,l}$
End of explanation
"""
debug = True
# Training
ann_architecture = ANN_ARCHITECTURE
param_values = PSEUDO_RANDOM_PARAM_VALUES.copy()
if debug:
print('X_train:', X_train)
print('Y_train:', Y_train)
print('ann_architecture:', ANN_ARCHITECTURE)
# implementation of the stochastic gradient descent
EPOCHS = 2
for epoch in range(EPOCHS):
if debug:
print('##### EPOCH %d #####' % epoch)
print('Step_0: param_values:', param_values)
samples_per_batch = 1
for i in range(int(X_train.shape[0]/samples_per_batch)):
si = i * samples_per_batch
sj = (i + 1) * samples_per_batch
if debug:
print('Step_1: X_train[%d,%d]=%s' % (si, sj, X_train[si:sj]))
learning_rate = 0.01
param_values = train(
np.transpose(X_train[si:sj]),
np.transpose(Y_train[si:sj]),
ann_architecture,
param_values,
learning_rate,
debug)
"""
Explanation: Train the artificial neural-network model
End of explanation
"""
|
Diyago/Machine-Learning-scripts | DEEP LEARNING/NLP/LSTM RNN/Sentiment pytorch/Sentiment_RNN.ipynb | apache-2.0 | import numpy as np
# read data from text files
with open('data/reviews.txt', 'r') as f:
reviews = f.read()
with open('data/labels.txt', 'r') as f:
labels = f.read()
print(reviews[:1000])
print()
print(labels[:20])
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis.
Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the sequence of words.
Here we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative.
<img src="assets/reviews_ex.png" width=40%>
Network Architecture
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=40%>
First, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.
After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells. The LSTM cells will add recurrent connections to the network and give us the ability to include information about the sequence of words in the movie review data.
Finally, the LSTM outputs will go to a sigmoid output layer. We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted sentiment values between 0 and 1.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg).
Load in and visualize the data
End of explanation
"""
from string import punctuation
# get rid of punctuation
reviews = reviews.lower() # lowercase, standardize
all_text = ''.join([c for c in reviews if c not in punctuation])
# split by new lines and spaces
reviews_split = all_text.split('\n')
all_text = ' '.join(reviews_split)
# create a list of words
words = all_text.split()
words[:30]
"""
Explanation: Data pre-processing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. Here are the processing steps we'll want to take:
* We'll want to get rid of periods and extraneous punctuation.
* Also, you might notice that the reviews are delimited with newline characters \n. To deal with those, I'm going to split the text into each review using \n as the delimiter.
* Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
# feel free to use this import
from collections import Counter
## Build a dictionary that maps words to integers
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
## use the dict to tokenize each review in reviews_split
## store the tokenized reviews in reviews_ints
reviews_ints = []
for review in reviews_split:
reviews_ints.append([vocab_to_int[word] for word in review.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
# stats about vocabulary
print('Unique words: ', len((vocab_to_int))) # should ~ 74000+
print()
# print tokens in first review
print('Tokenized review: \n', reviews_ints[:1])
"""
Explanation: Test your code
As a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review.
End of explanation
"""
# 1=positive, 0=negative label conversion
labels_split = labels.split('\n')
encoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively, and place those in a new list, encoded_labels.
End of explanation
"""
# outlier review stats
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: Removing Outliers
As an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:
1. Getting rid of extremely long or short reviews; the outliers
2. Padding/truncating the remaining data so that we have reviews of the same length.
Before we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training.
End of explanation
"""
print('Number of reviews before removing outliers: ', len(reviews_ints))
## remove any reviews/labels with zero length from the reviews_ints list.
# get indices of any reviews with length 0
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
# remove 0-length reviews and their labels
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
encoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])
print('Number of reviews after removing outliers: ', len(reviews_ints))
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.
Exercise: First, remove any reviews with zero length from the reviews_ints list and their corresponding label in encoded_labels.
End of explanation
"""
def pad_features(reviews_ints, seq_length):
''' Return features of review_ints, where each review is padded with 0's
or truncated to the input seq_length.
'''
# getting the correct rows x cols shape
features = np.zeros((len(reviews_ints), seq_length), dtype=int)
# for each review, I grab that review and
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_length]
return features
# Test your implementation!
seq_length = 200
features = pad_features(reviews_ints, seq_length=seq_length)
## test statements - do not change - ##
assert len(features)==len(reviews_ints), "Your features should have as many rows as reviews."
assert len(features[0])==seq_length, "Each feature row should contain seq_length values."
# print first 10 values of the first 30 reviews
print(features[:30,:10])
"""
Explanation: Padding sequences
To deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some seq_length, we'll pad with 0s. For reviews longer than seq_length, we can truncate them to the first seq_length words. A good seq_length, in this case, is 200.
Exercise: Define a function that returns an array features that contains the padded data, of a standard size, that we'll pass to the network.
* The data should come from review_ints, since we want to feed integers to the network.
* Each row should be seq_length elements long.
* For reviews shorter than seq_length words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128].
* For reviews longer than seq_length, use only the first seq_length words as the feature vector.
As a small example, if the seq_length=10 and an input review is:
[117, 18, 128]
The resultant, padded sequence should be:
[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]
Your final features array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified seq_length.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
split_frac = 0.8
## split data into training, validation, and test data (features and labels, x and y)
split_idx = int(len(features)*0.8)
train_x, remaining_x = features[:split_idx], features[split_idx:]
train_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]
test_idx = int(len(remaining_x)*0.5)
val_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]
val_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]
## print out the shapes of your resultant feature data
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets.
* You'll need to create sets for the features and the labels, train_x and train_y, for example.
* Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9.
* Whatever data is left will be split in half to create the validation and testing data.
End of explanation
"""
import torch
from torch.utils.data import TensorDataset, DataLoader
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
valid_data = TensorDataset(torch.from_numpy(val_x), torch.from_numpy(val_y))
test_data = TensorDataset(torch.from_numpy(test_x), torch.from_numpy(test_y))
# dataloaders
batch_size = 50
# make sure the SHUFFLE your training data
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)
test_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)
# obtain one batch of training data
dataiter = iter(train_loader)
sample_x, sample_y = next(dataiter)
print('Sample input size: ', sample_x.size()) # batch_size, seq_length
print('Sample input: \n', sample_x)
print()
print('Sample label size: ', sample_y.size()) # batch_size
print('Sample label: \n', sample_y)
"""
Explanation: Check your work
With train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
DataLoaders and Batching
After creating training, test, and validation data, we can create DataLoaders for this data by following two steps:
1. Create a known format for accessing our data, using TensorDataset which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.
2. Create DataLoaders and batch our training, validation, and test Tensor datasets.
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
train_loader = DataLoader(train_data, batch_size=batch_size)
This is an alternative to creating a generator function for batching our data into full batches.
End of explanation
"""
# First checking if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')
import torch.nn as nn
class SentimentRNN(nn.Module):
"""
The RNN model that will be used to perform Sentiment analysis.
"""
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
"""
Initialize the model by setting up the layers.
"""
super(SentimentRNN, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=drop_prob, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(0.3)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, x, hidden):
"""
Perform a forward pass of our model on some input and hidden state.
"""
batch_size = x.size(0)
# embeddings and lstm_out
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# sigmoid function
sig_out = self.sig(out)
# reshape to be batch_size first
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1] # get last batch of labels
# return last sigmoid output and hidden state
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
Explanation: Sentiment Network with PyTorch
Below is where you'll define the network.
<img src="assets/network_diagram.png" width=40%>
The layers are as follows:
1. An embedding layer that converts our word tokens (integers) into embeddings of a specific size.
2. An LSTM layer defined by a hidden_state size and number of layers
3. A fully-connected output layer that maps the LSTM layer outputs to a desired output_size
4. A sigmoid activation layer which turns all outputs into a value 0-1; return only the last sigmoid output as the output of this network.
The Embedding Layer
We need to add an embedding layer because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it for only dimensionality reduction, and let the network learn the weights.
The LSTM Layer(s)
We'll create an LSTM to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.
Most of the time, your network will have better performance with more layers; typically between 2-3. Adding more layers allows the network to learn really complex relationships.
Exercise: Complete the __init__, forward, and init_hidden functions for the SentimentRNN model class.
Note: init_hidden should initialize the hidden and cell state of an lstm layer to all zeros, and move those state to GPU, if available.
End of explanation
"""
# Instantiate the model w/ hyperparams
vocab_size = len(vocab_to_int)+1 # +1 for the 0 padding + our word tokens
output_size = 1
embedding_dim = 400
hidden_dim = 256
n_layers = 2
net = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net)
"""
Explanation: Instantiate the network
Here, we'll instantiate the network. First up, defining the hyperparameters.
vocab_size: Size of our vocabulary or the range of values for our input, word tokens.
output_size: Size of our desired output; the number of class scores we want to output (pos/neg).
embedding_dim: Number of columns in the embedding lookup table; size of our embeddings.
hidden_dim: Number of units in the hidden layers of our LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
n_layers: Number of LSTM layers in the network. Typically between 1-3
Exercise: Define the model hyperparameters.
End of explanation
"""
# loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
# training params
epochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing
counter = 0
print_every = 100
clip=5 # gradient clipping
# move model to GPU, if available
if(train_on_gpu):
net.cuda()
net.train()
# train for some number of epochs
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
# batch loop
for inputs, labels in train_loader:
counter += 1
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), labels.float())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output.squeeze(), labels.float())
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses)))
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.
We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. BCELoss, or Binary Cross Entropy Loss, applies cross entropy loss to a single value between 0 and 1.
We also have some data and training hyparameters:
lr: Learning rate for our optimizer.
epochs: Number of times to iterate through the training dataset.
clip: The maximum gradient value to clip at (to prevent exploding gradients).
End of explanation
"""
# Get test data loss and accuracy
test_losses = [] # track loss
num_correct = 0
# init hidden state
h = net.init_hidden(batch_size)
net.eval()
# iterate over test data
for inputs, labels in test_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# get predicted outputs
output, h = net(inputs, h)
# calculate loss
test_loss = criterion(output.squeeze(), labels.float())
test_losses.append(test_loss.item())
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze()) # rounds to the nearest integer
# compare predictions to true label
correct_tensor = pred.eq(labels.float().view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
num_correct += np.sum(correct)
# -- stats! -- ##
# avg test loss
print("Test loss: {:.3f}".format(np.mean(test_losses)))
# accuracy over all test data
test_acc = num_correct/len(test_loader.dataset)
print("Test accuracy: {:.3f}".format(test_acc))
"""
Explanation: Testing
There are a few ways to test your network.
Test data performance: First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.
Inference on user-generated data: Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. Looking at new, user input data like this, and predicting an output label, is called inference.
End of explanation
"""
# negative test review
test_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. This movie had bad acting and the dialogue was slow.'
from string import punctuation
def tokenize_review(test_review):
test_review = test_review.lower() # lowercase
# get rid of punctuation
test_text = ''.join([c for c in test_review if c not in punctuation])
# splitting by spaces
test_words = test_text.split()
# tokens
test_ints = []
test_ints.append([vocab_to_int[word] for word in test_words])
return test_ints
# test code and generate tokenized review
test_ints = tokenize_review(test_review_neg)
print(test_ints)
# test sequence padding
seq_length=200
features = pad_features(test_ints, seq_length)
print(features)
# test conversion to tensor and pass into your model
feature_tensor = torch.from_numpy(features)
print(feature_tensor.size())
def predict(net, test_review, sequence_length=200):
net.eval()
# tokenize review
test_ints = tokenize_review(test_review)
# pad tokenized sequence
seq_length=sequence_length
features = pad_features(test_ints, seq_length)
# convert to tensor to pass into your model
feature_tensor = torch.from_numpy(features)
batch_size = feature_tensor.size(0)
# initialize hidden state
h = net.init_hidden(batch_size)
if(train_on_gpu):
feature_tensor = feature_tensor.cuda()
# get the output from the model
output, h = net(feature_tensor, h)
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze())
# printing output value, before rounding
print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))
# print custom response
if(pred.item()==1):
print("Positive review detected!")
else:
print("Negative review detected.")
# positive test review
test_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'
# call function
seq_length=200 # good to use the length that was trained on
predict(net, test_review_neg, seq_length)
"""
Explanation: Inference on a test review
You can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly!
Exercise: Write a predict function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!
* You can use any functions that you've already defined or define any helper functions you want to complete predict, but it should just take in a trained net, a text review, and a sequence length.
End of explanation
"""
|
numeristical/introspective | examples/Calibration_Example_ICU_MIMIC.ipynb | mit | # "pip install ml_insights" in terminal if needed
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import ml_insights as mli
%matplotlib inline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import roc_auc_score, log_loss, brier_score_loss
from sklearn.base import clone
"""
Explanation: Probability Calibration using ML-Insights
On an Example Mortality Model Using MIMIC ICU Data*
This workbook is intended to demonstrate why probability calibration may be useful, and how to do it using the prob_calibration_function in the ML-Insights package.
We build a random forest model, demonstrate that using the vote percentage as a probability is not well-calibrated, and then show how to use an independent validation set and the prob_calibration_function to properly calibrate it so that accurate probabilities can be obtained.
*MIMIC-III, a freely accessible critical care database. Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, and Mark RG. Scientific Data (2016).
https://mimic.physionet.org
End of explanation
"""
# Load dataset derived from the MMIC database
lab_aug_df = pd.read_csv("data/lab_vital_icu_table.csv")
lab_aug_df.head(10)
X = lab_aug_df.loc[:,['aniongap_min', 'aniongap_max',
'albumin_min', 'albumin_max', 'bicarbonate_min', 'bicarbonate_max',
'bilirubin_min', 'bilirubin_max', 'creatinine_min', 'creatinine_max',
'chloride_min', 'chloride_max',
'hematocrit_min', 'hematocrit_max', 'hemoglobin_min', 'hemoglobin_max',
'lactate_min', 'lactate_max', 'platelet_min', 'platelet_max',
'potassium_min', 'potassium_max', 'ptt_min', 'ptt_max', 'inr_min',
'inr_max', 'pt_min', 'pt_max', 'sodium_min', 'sodium_max', 'bun_min',
'bun_max', 'wbc_min', 'wbc_max','sysbp_max', 'sysbp_mean', 'diasbp_min', 'diasbp_max', 'diasbp_mean',
'meanbp_min', 'meanbp_max', 'meanbp_mean', 'resprate_min',
'resprate_max', 'resprate_mean', 'tempc_min', 'tempc_max', 'tempc_mean',
'spo2_min', 'spo2_max', 'spo2_mean']]
y = lab_aug_df['hospital_expire_flag']
# Impute the median in each column to replace NA's
median_vec = [X.iloc[:,i].median() for i in range(len(X.columns))]
for i in range(len(X.columns)):
X.iloc[:,i].fillna(median_vec[i],inplace=True)
"""
Explanation: In the next few cells, we load in some data, inspect it, select columns for our features and outcome (mortality) and fill in missing values with the median of that column.
End of explanation
"""
X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size=0.2, random_state=942)
X_train, X_val, y_train, y_val = train_test_split(X_train_val, y_train_val, test_size=0.25, random_state=942)
"""
Explanation: Now we divide the data into training, validation, and test sets. The training set will be used to fit the model, the validation set will be used to calibrate the probabilities, and the test set will be used to evaluate the performance. We use a 60-20-20 split (achieved by first splitting 80/20 and then splitting the 80% portion 75/25).
End of explanation
"""
rfmodel1 = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample', random_state=942, n_jobs=-1 )
rfmodel1.fit(X_train,y_train)
val_res = rfmodel1.predict_proba(X_val)[:,1]
test_res_uncalib = rfmodel1.predict_proba(X_test)[:,1]
"""
Explanation: Next, we fit a Random Forest model to our training data. Then we use that model to predict "probabilities" on our validation and test sets.
I use quotes on "probabilities" because these numbers, which are the percentage of trees that voted "yes", are better understood as mere scores. A higher value should generally indicate a higher probability of mortality. However, there is no reason to expect these to be well-calibrated probabilities. The fact that, say, 60% of the trees voted "yes" on a particular case does not mean that that case has a 60% probability of mortality.
We will demonstrate this empirically later.
End of explanation
"""
# Side by side histograms showing scores of positive vs negative cases
fig, axis = plt.subplots(2,2, figsize = (8,8))
ax=axis.flatten()
countvec0_val = ax[0].hist(val_res[np.where(y_val==0)],bins=20,range=[0,1]);
countvec1_val = ax[1].hist(val_res[np.where(y_val==1)],bins=20,range=[0,1]);
countvec0_test = ax[2].hist(test_res_uncalib[np.where(y_test==0)],bins=20,range=[0,1]);
countvec1_test = ax[3].hist(test_res_uncalib[np.where(y_test==1)],bins=20,range=[0,1]);
"""
Explanation: The histograms below show, for the validation set (top row) and the test set (bottom row), the distribution of the random forest vote percentage for the negative cases (survivors, left side) and the positive cases (mortalities, right side). As we would expect, the distribution for mortalities is shifted to the right. We store the "count vectors" so that we can calculate some empirical probabilities. We use 20 equal-size bins.
End of explanation
"""
list(zip(countvec0_val[0],countvec1_val[0]))
"""
Explanation: Next, we see the numerical counts in each bin for the validation set.
For example, in the first entry, we see that for the entries that had a vote percentage between 0 and .05, there were 4968 survivors and 62 mortalities. For those with a vote percentage between .05 and .10, there were 2450 survivors and 138 mortalities, and so on. This allows us to calculate some empirical probabilities and visually see how well (or poorly) calibrated the vote percentage is.
End of explanation
"""
emp_prob_vec_val = countvec1_val[0]/(countvec0_val[0]+countvec1_val[0])
list(zip([(i/20, (i+1)/20) for i in range(20)],emp_prob_vec_val))
"""
Explanation: Using the counts above, we can calculate empirical probabilities for each bin. Below, we can see the endpoints of each bin and the empirical probability of mortality for entries in that bin. So for example, for entries with a vote percentage between .45 and .5, we can see that the empirical probability of mortality was .756 (59/78), which suggests that the vote percentage is not a well-calibrated probability.
End of explanation
"""
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(np.linspace(0.05,1,20)-.025,emp_prob_vec_val,'ro')
"""
Explanation: To demonstrate this more visually, a well-calibrated probability should have the points falling (roughly) on the line x=y. However, as we see in the plot below, most of the points lie above that line. This suggests that (in particular for entries with vote percentage >.4) the true probability is considerably higher than the vote percentage.
End of explanation
"""
calibrate_rf = mli.prob_calibration_function(y_val,val_res)
"""
Explanation: So how do we fix this?! We can use the validation set to create a calibration function. This function would map our uncalibrated score into a proper probability. The ml-insights package has a function called prob_calibration_function which does just that. Give it the scores and the true 0/1 values, and it will return a calibration function. This calibration function can then be applied to the model output to yield a well-calibrated probability.
End of explanation
"""
plt.plot(np.linspace(0,1,101),calibrate_rf(np.linspace(0,1,101)))
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(np.linspace(0.05,1,20)-.025,emp_prob_vec_val,'ro')
"""
Explanation: The function takes a little bit of time to run, primarily because it is doing cross-validation on the regularization parameter C (to balance fitting the current data with generalizing well). There are additional settings on the function that can be applied to speed up this process, if desired.
As you can see below, the resulting function is a nice cubic function that fits the data well without over-fitting.
End of explanation
"""
log_loss(y_test,test_res_uncalib)
test_res_calib=calibrate_rf(test_res_uncalib)
log_loss(y_test,test_res_calib)
brier_score_loss(y_test,test_res_uncalib)
brier_score_loss(y_test,test_res_calib)
"""
Explanation: Now let's demonstrate that we actually have more accurate probabilities after calibration than we did before. We will use the log_loss metric to determine which performs better on the test set, with or without calibration.
End of explanation
"""
emp_prob_vec_test = countvec1_test[0]/(countvec0_test[0]+countvec1_test[0])
plt.plot(np.linspace(0,1,101),calibrate_rf(np.linspace(0,1,101)))
#plt.plot(np.linspace(.025,.975,20),emp_prob_vec_val,'ro')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'y+')
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
rfm_calib = mli.SplineCalibratedClassifierCV(rfmodel1, cv='prefit')
rfm_calib.fit(X_val,y_val)
test_res_calib_alt = rfm_calib.predict_proba(X_test)[:,1]
plt.hist(test_res_calib_alt - test_res_calib)
"""
Explanation: As you can see, there is a significant improvement after calibration. (Lower loss is better)
This next plot shows the empirical probabilities of the test set data (that was not seen by the calibration process).
You can see that the curve fits the data pretty well, though not as well as it fits the validation set (since it was trained on the validation set).
(Note: the warning below just reflects the points where the counts in both bins were 0.)
End of explanation
"""
calibrate_rf_L2 = mli.prob_calibration_function(y_val,val_res,method='ridge')
test_res_calib_L2=calibrate_rf_L2(test_res_uncalib)
log_loss(y_test,test_res_calib_L2)
brier_score_loss(y_test,test_res_calib_L2)
"""
Explanation: The default calibration uses a logistic regression fitting procedure, which is best if your metric of interest is log-loss. If brier score is the metric of interest, it is better to use method='ridge' since that will use an L2 (mean-squared error) loss function as the calibration metric to optimize.
End of explanation
"""
rfm_calib = mli.SplineCalibratedClassifierCV(rfmodel1, cv='prefit')
rfm_calib.fit(X_val,y_val)
test_res_calib_direct = rfm_calib.predict_proba(X_test)[:,1]
log_loss(y_test,test_res_calib_direct)
brier_score_loss(y_test,test_res_calib_direct)
"""
Explanation: As you can see, using the L2 calibration improves (slightly) on the brier score loss, while doing (significantly) worse on the log-loss. In general, the default logistic method is advised for calibrating probabilities.
End of explanation
"""
new_C_vec = np.linspace(100,10000,101)
new_C_vec
calibrate_rf_new = mli.prob_calibration_function(y_val, val_res, reg_param_vec=new_C_vec)
"""
Explanation: Changing the Regularization Search
At the heart of the spline-fitting component of the calibration is a Penalized Logistic Regression, driven by the LogisticRegressionCV function in sklearn. This function searches over a range of "C" values, where C is a parameter that controls the regularization (shrinkage) of the coefficients.
By default, prob_calibration_function searches over 43 values, logarithmically spaced between 10^-4 and 10^10 (in numpy language, reg_param_vec = 10**np.linspace(-4,10,43)). However, you have the option to specify your own reg_param_vec vector of C values to try in the cross-validation.
If you recall previously, when we ran
calibrate_rf = mli.prob_calibration_function(y_val, val_res)
We got a message:
Trying 43 values of C between 0.0001 and 10000000000.0
Best value found C = [ 1000.]
If we wish to optimize more fully, we might want to try a range of points between 100 and 10000 for C.
Below we show how to do this.
End of explanation
"""
test_res_calib_new=calibrate_rf_new(test_res_uncalib)
log_loss(y_test,test_res_calib_new)
"""
Explanation: Note that the run-time of the calibration depends on the number of C values that are tried. In general, if you want to save some time, you can specify a range with fewer points in the reg_param_vec. For a more thorough search (which takes more time) you can specify more points.
End of explanation
"""
from sklearn.calibration import CalibratedClassifierCV
clf_isotonic = CalibratedClassifierCV(rfmodel1, method='isotonic',cv="prefit")
clf_isotonic.fit(X_val, y_val)
prob_pos_isotonic = clf_isotonic.predict_proba(X_test)[:, 1]
log_loss(y_test,prob_pos_isotonic)
brier_score_loss(y_test,prob_pos_isotonic)
"""
Explanation: We see that, in this case, the log_loss was slightly improved from our previous calibration.
Existing Sklearn Calibration Functionality
Note, sklearn has a CalibratedClassifierCV function, but it does not seem to work as well. Here we will use it in the same fashion as we did above. That is, we assume we have a model that is fit already and then use the additional data to calibrate (this is why we use the option cv='prefit'). Unlike ml-insights, the sklearn functionality creates a new model which integrates the calibration rather than keeping the model and calibration separate.
End of explanation
"""
plt.plot(np.linspace(0,1,101),calibrate_rf(np.linspace(0,1,101)))
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_val,'ro')
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(test_res_uncalib,prob_pos_isotonic,'c.')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'g+')
"""
Explanation: The log loss and brier score loss are actually worse (higher) than for the uncalibrated version, and considerably worse than for the calibrated method using ml-insights.
End of explanation
"""
clf_sigmoid = CalibratedClassifierCV(rfmodel1, method='sigmoid',cv="prefit")
clf_sigmoid.fit(X_val, y_val)
prob_pos_sigmoid = clf_sigmoid.predict_proba(X_test)[:, 1]
log_loss(y_test,prob_pos_sigmoid)
brier_score_loss(y_test,prob_pos_sigmoid)
"""
Explanation: From the plot above, it seems to give more of a "step function" calibration rather than a smooth, spline-like curve. It also seems to be off quite a bit when it predicts a 0.2 probability and the actual (empirical) probability is around 0.05.
End of explanation
"""
#plt.plot(np.linspace(.025,.975,20),emp_prob_vec_val,'ro')
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(test_res_uncalib,prob_pos_sigmoid,'c.')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'g+')
plt.plot(np.linspace(0,1,101),calibrate_rf(np.linspace(0,1,101)))
"""
Explanation: Again, the results are not very reassuring.
End of explanation
"""
rfm = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample', random_state=942, n_jobs=-1 )
rfm_cv = mli.SplineCalibratedClassifierCV(rfm)
rfm_cv.fit(X_train_val,y_train_val)
test_res_uncalib_cv = rfm_cv.uncalibrated_classifier.predict_proba(X_test)[:,1]
#test_res_calib_cv = calibration_function(test_res_uncalib_cv)
test_res_calib_cv = rfm_cv.predict_proba(X_test)[:,1]
#log_loss(y_test,test_res_uncalib_cv)
#brier_score_loss(y_test,test_res_uncalib_cv)
log_loss(y_test,test_res_calib_cv)
brier_score_loss(y_test,test_res_calib_cv)
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'g+')
plt.plot(np.linspace(0,1,101),calibrate_rf(np.linspace(0,1,101)))
plt.plot(np.linspace(0,1,101),rfm_cv.calib_func(np.linspace(0,1,101)))
rfm = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample', random_state=942)
rfm_cv_L2 = mli.SplineCalibratedClassifierCV(rfm, method='ridge')
rfm_cv_L2.fit(X_train_val,y_train_val)
test_res_calib_cv_L2 = rfm_cv_L2.predict_proba(X_test)[:,1]
log_loss(y_test,test_res_calib_cv_L2)
brier_score_loss(y_test,test_res_calib_cv_L2)
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'g+')
plt.plot(np.linspace(0,1,101),rfm_cv.calib_func(np.linspace(0,1,101)),'g')
plt.plot(np.linspace(0,1,101),rfm_cv_L2.calib_func(np.linspace(0,1,101)),'r')
clf_isotonic_xval = CalibratedClassifierCV(rfm, method='isotonic', cv=5)
clf_isotonic_xval.fit(X_train_val,y_train_val)
prob_pos_isotonic_xval = clf_isotonic_xval.predict_proba(X_test)[:, 1]
log_loss(y_test,prob_pos_isotonic_xval)
brier_score_loss(y_test,prob_pos_isotonic_xval)
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(test_res_uncalib_cv,prob_pos_isotonic_xval,'c.')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'g+')
#plt.plot(np.linspace(0,1,101),calibration_function(np.linspace(0,1,101)),'g')
clf_sigmoid_xval = CalibratedClassifierCV(rfm, method='sigmoid', cv=5)
clf_sigmoid_xval.fit(X_train_val,y_train_val)
prob_pos_sigmoid_xval = clf_sigmoid_xval.predict_proba(X_test)[:, 1]
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(test_res_uncalib_cv,prob_pos_sigmoid,'c.')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'g+')
#plt.plot(np.linspace(0,1,101),calibrate_rf(np.linspace(0,1,101)))
#plt.plot(np.linspace(0,1,101),calibration_function(np.linspace(0,1,101)),'g')
log_loss(y_test,prob_pos_sigmoid_xval)
brier_score_loss(y_test,prob_pos_sigmoid_xval)
"""
Explanation: Above, we see that the sigmoid option assumes a rather strict parametric form, and thus fits badly if the reality is not of that shape.
Cross Validated Calibration
One disadvantage to the above approach is that the model uses one set of data for training and another set for calibration. As a result, neither step benefits from the full size of available data. Below we outline the ML-Insights capabilities to do a cross-validated training and calibration, and compare it to the existing sklearn functionality.
End of explanation
"""
|
mastertrojan/Udacity | tv-script-generation/.ipynb_checkpoints/dlnd_tv_script_generation-checkpoint.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(sorted_vocab)}
int_to_vocab = {v: k for k, v in vocab_to_int.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# avoid shadowing the built-in name `dict`
token_dict = {'.': '||Period||', ',': '||Comma||', '"': '||Quotation_Mark||', ';': '||Semicolon||',
'!': '||Exclamation_Mark||', '?': '||Question_Mark||', '(': '||Left_Parentheses||',
')': '||Right_Parentheses||', '--': '||Dash||', '\n': '||Return||'}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
Input = tf.placeholder(tf.int32, [None, None], name='input')
Targets = tf.placeholder(tf.int32, [None, None])
LearningRate = tf.placeholder(tf.float32)
return Input, Targets, LearningRate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.7)
# note: stacking more LSTM layers raised the training loss here, so a single layer is used
cell = tf.contrib.rnn.MultiRNNCell([drop] * 1)
#initial state with all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
#initial_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
# TODO: Implement Function
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
"""
# use rnn_size as the embedding dimension
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
# activation_fn defaults to relu, so it must be set to None for a linear output layer
logits = tf.contrib.layers.fully_connected(outputs, vocab_size,
weights_initializer=tf.truncated_normal_initializer(mean=0.0,stddev=0.1),
biases_initializer=tf.zeros_initializer(), activation_fn=None)
# TODO: Implement Function
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
#do the slice
slice_size = batch_size * seq_length
# TODO: Implement Function
#divide batches by slice
n_batches = int(len(int_text) / slice_size)
#do the numpy!
x_data = np.array(int_text[: n_batches * slice_size])
y_data = np.array(int_text[1: n_batches * slice_size + 1])
x_batches = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
return np.asarray(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
    return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
    return np.random.choice(list(int_to_vocab.values()), 1, p=probabilities)[0]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
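A minimal, reproducible sketch of probability-weighted word picking (the function and vocabulary names here are illustrative; a seeded generator makes the draw deterministic):

```python
import numpy as np

def pick_word_sketch(probabilities, int_to_vocab, seed=0):
    rng = np.random.default_rng(seed)
    word_ids = np.array(sorted(int_to_vocab))
    return int_to_vocab[rng.choice(word_ids, p=probabilities)]

vocab = {0: 'homer', 1: 'moe', 2: 'barney'}
print(pick_word_sketch(np.array([0.0, 1.0, 0.0]), vocab))  # moe
```

With all the probability mass on one id the pick is deterministic, which makes the behavior easy to test; in practice the distribution comes from the softmax output.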
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
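The token-replacement step at the end of the cell can be illustrated in isolation (with a made-up token_dict; the real one comes from the preprocessing step):

```python
token_dict = {'.': '||Period||', '\n': '||Return||'}
script = 'hey ||period|| ||return|| hi ||period||'
for key, token in token_dict.items():
    script = script.replace(' ' + token.lower(), key)
script = script.replace('\n ', '\n')
print(repr(script))  # 'hey.\nhi.'
```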
End of explanation
"""
|
rasbt/python-machine-learning-book | code/bonus/scikit-model-to-json.ipynb | mit | %load_ext watermark
%watermark -a 'Sebastian Raschka' -v -d -p scikit-learn,numpy,scipy
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
"""
Explanation: Sebastian Raschka, 2016
https://github.com/rasbt/python-machine-learning-book
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
%matplotlib inline
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
from mlxtend.plotting import plot_decision_regions
iris = load_iris()
y, X = iris.target, iris.data[:, [0, 2]] # only use 2 features
lr = LogisticRegression(C=100.0,
class_weight=None,
dual=False,
fit_intercept=True,
intercept_scaling=1,
max_iter=100,
multi_class='multinomial',
n_jobs=1,
penalty='l2',
random_state=1,
solver='newton-cg',
tol=0.0001,
verbose=0,
warm_start=False)
lr.fit(X, y)
plot_decision_regions(X=X, y=y, clf=lr, legend=2)
plt.xlabel('sepal length')
plt.ylabel('petal length')
plt.show()
"""
Explanation: Bonus Material - Scikit-learn Model Persistence using JSON
In many situations, it is desirable to store away a trained model for future use. These situations where we want to persist a model could be the deployment of a model in a web application, for example, or scientific reproducibility of our experiments.
I wrote a little bit about serializing scikit-learn models using pickle in the context of the web applications that we developed in chapter 8. Also, you can find an excellent tutorial section on scikit-learn's website. Honestly, I would say that pickling Python objects via the pickle, dill or joblib modules is probably the most convenient approach to model persistence. However, pickling Python objects can sometimes be a little bit problematic, for example, deserializing a model in Python 3.x that was originally pickled in Python 2.7.x and vice versa. Also, pickle offers different protocols (currently the protocols 0-4), which are not necessarily backwards compatible.
Thus, to prepare for the worst case scenario -- corrupted pickle files or version incompatibilities -- there's at least one other (a little bit more tedious) way to model persistence using JSON.
JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language. [Source: http://www.json.org]
One of the advantages of JSON is that it is a human-readable format. So, if push comes to shove, we should still be able to read the parameter files and model coefficients "manually" and assign these values to the respective scikit-learn estimator or build our own model to reproduce scientific results.
Let's see how that works ... First, let us train a simple logistic regression classifier on Iris:
End of explanation
"""
lr.get_params()
"""
Explanation: Luckily, we don't have to retype or copy & paste all the estimator parameters manually if we want to store them away. To get a dictionary of these parameters, we can simply use the handy "get_params" method:
End of explanation
"""
import json
with open('./sckit-model-to-json/params.json', 'w', encoding='utf-8') as outfile:
json.dump(lr.get_params(), outfile)
"""
Explanation: Storing them in JSON format is easy, we simply import the json module from Python's standard library and dump the dictionary to a file:
End of explanation
"""
with open('./sckit-model-to-json/params.json', 'r', encoding='utf-8') as infile:
print(infile.read())
"""
Explanation: When we read the file, we can see that the JSON file is just a 1-to-1 copy of our Python dictionary in text format:
End of explanation
"""
attrs = [i for i in dir(lr) if i.endswith('_') and not i.endswith('__')]
print(attrs)
attr_dict = {i: getattr(lr, i) for i in attrs}
"""
Explanation: Now, the trickier part is to identify the "fit" parameters of the estimator, i.e., the parameters of our logistic regression model. However, in practice it's actually pretty straightforward to figure it out by heading over to the respective documentation page: Just look out for the "attributes" in the "Attribute" section that have a trailing underscore (thanks, scikit-learn team, for the beautifully thought-out API!). In the case of logistic regression, we are interested in the weights .coef_, the bias unit .intercept_, and the classes_ and n_iter_ attributes.
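The trailing-underscore convention can be demonstrated on a dummy object standing in for a fitted estimator (attribute names here are illustrative):

```python
class DummyEstimator:
    pass

est = DummyEstimator()
est.coef_ = [[0.1, 0.2]]   # learned attributes end with a single underscore
est.n_iter_ = 5
est.penalty = 'l2'         # constructor parameters do not

fit_attrs = [a for a in dir(est) if a.endswith('_') and not a.endswith('__')]
print(sorted(fit_attrs))   # ['coef_', 'n_iter_']
```

The double-underscore check keeps Python's dunder attributes out of the list.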
End of explanation
"""
import numpy as np
for k in attr_dict:
if isinstance(attr_dict[k], np.ndarray):
attr_dict[k] = attr_dict[k].tolist()
"""
Explanation: In order to serialize NumPy arrays to JSON, we need to cast the arrays to (nested) Python lists first; however, it's not much of a hassle thanks to the tolist method. (Also, consider saving the attributes to separate JSON files, e.g., intercept.json and coef.json, for clarity.)
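For instance, a coefficient array (values made up here) survives the list round trip unchanged:

```python
import json
import numpy as np

coef = np.array([[1.5, -2.0, 0.25]])
payload = json.dumps({'coef_': coef.tolist()})
restored = np.array(json.loads(payload)['coef_'])
print(np.array_equal(restored, coef))  # True
```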
End of explanation
"""
with open('./sckit-model-to-json/attributes.json', 'w', encoding='utf-8') as outfile:
json.dump(attr_dict,
outfile,
separators=(',', ':'),
sort_keys=True,
indent=4)
"""
Explanation: Now, we are ready to dump our "attribute dictionary" to a JSON file:
End of explanation
"""
with open('./sckit-model-to-json/attributes.json', 'r', encoding='utf-8') as infile:
print(infile.read())
"""
Explanation: If everything went fine, our JSON file should look like this -- in plaintext format:
End of explanation
"""
import codecs
import json
obj_text = codecs.open('./sckit-model-to-json/params.json', 'r', encoding='utf-8').read()
params = json.loads(obj_text)
obj_text = codecs.open('./sckit-model-to-json/attributes.json', 'r', encoding='utf-8').read()
attributes = json.loads(obj_text)
"""
Explanation: With similar ease, we can now use json's loads method to read the data back from the ".json" files and re-assign them to Python objects. (Imagine the following happens in a new Python session.)
End of explanation
"""
%matplotlib inline
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
from mlxtend.plotting import plot_decision_regions
import numpy as np
iris = load_iris()
y, X = iris.target, iris.data[:, [0, 2]] # only use 2 features
lr = LogisticRegression()
lr.set_params(**params)
for k in attributes:
if isinstance(attributes[k], list):
setattr(lr, k, np.array(attributes[k]))
else:
setattr(lr, k, attributes[k])
plot_decision_regions(X=X, y=y, clf=lr, legend=2)
plt.xlabel('sepal length')
plt.ylabel('petal length')
plt.show()
"""
Explanation: Finally, we just need to initialize a default LogisticRegression estimator, feed it the desired parameters via the set_params method, and reassign the other attributes using Python's built-in setattr (don't forget to recast the Python lists to NumPy arrays, though!):
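The reassembly step on its own, with a plain object standing in for the estimator (attribute values are made up):

```python
import numpy as np

attributes = {'coef_': [[0.5, -1.0]], 'classes_': [0, 1, 2], 'n_iter_': 7}

class Estimator:  # stand-in for a default LogisticRegression
    pass

est = Estimator()
for k, v in attributes.items():
    setattr(est, k, np.array(v) if isinstance(v, list) else v)

print(est.coef_.shape, est.n_iter_)  # (1, 2) 7
```

Lists become NumPy arrays again; scalar attributes like n_iter_ are assigned as-is.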
End of explanation
"""
|
seansweeney/NEARC-2017 | Updating AGOL item metadata_with_outputs.ipynb | unlicense | from arcgis.gis import GIS
from getpass import getpass
from IPython.display import display
"""
Explanation: Import the GIS module and other needed Python modules
The IPython.display module has some helper functions that the Python API takes advantage of for displaying objects like item details and maps in the notebook.
End of explanation
"""
# Get username and password
username = input('Username: ')
password = getpass(prompt='Password: ')
# Connect to portal
gis = GIS("https://arcgis.com/", username, password)
"""
Explanation: Create the GIS object and point it to AGOL
End of explanation
"""
user = gis.users.get(username)
user
"""
Explanation: Test the connection
The output here is an example of the Python API taking advantage of IPython.display.
End of explanation
"""
title = input("Feature class to search for: ")
items = gis.content.search(query="title:'" + title + "' AND owner:" + username, item_type="Feature Service")
print(type(items), len(items))
print(type(items[0]))
item = items[0]
item
item.tags
"""
Explanation: Get the item that you want to update
Portals allow users to store and share a variety of items. Each item has a type, such as Web Map or Feature Service, and a set of type keywords that provide additional information about the characteristics of the type.
See Items and item types for more information.
End of explanation
"""
# First set up some variables for input to the *update* method.
thumbnail_path = "c:/temp/Hospitals.JPG"
tags = list(item.tags)
tags.append("health")
item_properties = {"snippet": "Location of Cambridge hospitals.",
"title": "Cambridge Hospitals",
"tags": ','.join(tags),
"accessinformation": "City of Cambridge GIS",
"licenseInfo": "License Info"
}
# Then perform the update
item.update(item_properties, thumbnail=thumbnail_path)
item
# Get the updated *item*
items[0].tags
"""
Explanation: Update the metadata
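A standalone sketch of the tag handling from the cell above (values are made up): the item properties dictionary carries tags as a single comma-joined string.

```python
tags = ['gis', 'cambridge']     # hypothetical existing item tags
tags.append('health')
item_properties = {'title': 'Cambridge Hospitals',
                   'tags': ','.join(tags)}
print(item_properties['tags'])  # gis,cambridge,health
```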
End of explanation
"""
|
minireference/noBSLAnotebooks | chapter02_linearity_intuition.ipynb | mit | # setup SymPy
from sympy import *
init_printing()
x, y, z, t = symbols('x y z t')
alpha, beta = symbols('alpha beta')
"""
Explanation: 2/ Linearity
End of explanation
"""
b, m = symbols('b m')
def f(x):
return m*x
f(1)
f(2)
f(1+2)
f(1) + f(2)
expand(f(x+y)) == f(x) + f(y)
"""
Explanation: Simplest linear function
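With a concrete slope, the two defining properties of linearity can be checked numerically (plain Python, no SymPy needed):

```python
m = 3

def f(x):
    return m * x

print(f(1 + 2) == f(1) + f(2))  # additivity: True
print(f(5 * 2) == 5 * f(2))     # homogeneity: True
```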
End of explanation
"""
m_1, m_2 = symbols('m_1 m_2')
def T(vec):
"""A function that takes a 2D vector and returns a number."""
return m_1*vec[0] + m_2*vec[1]
u_1, u_2 = symbols('u_1 u_2')
u = Matrix([u_1,u_2])
v_1, v_2 = symbols('v_1 v_2')
v = Matrix([v_1,v_2])
T(u)
T(v)
T(u) + T(v)
expand( T(u+v) )
simplify( T(alpha*u + beta*v) - alpha*T(u) - beta*T(v) )
"""
Explanation: What about vector inputs?
End of explanation
"""
m_11, m_12, m_21, m_22 = symbols('m_11 m_12 m_21 m_22')
def T(vec):
"""A linear transformations R^2 --> R^2."""
out_1 = m_11*vec[0] + m_12*vec[1]
out_2 = m_21*vec[0] + m_22*vec[1]
return Matrix([out_1, out_2])
T(u)
T(v)
T(u+v)
"""
Explanation: Linear transformations
A linear transformation is a function that takes vectors as inputs and produces vectors as outputs:
$$
T: \mathbb{R}^n \to \mathbb{R}^m.
$$
See page 136 in v2.2 of the book.
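The same check works numerically for a transformation from R^2 to R^2 given by a concrete matrix:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def T(v):
    return M @ v

u = np.array([1.0, 0.0])
v = np.array([0.0, 2.0])
print(np.allclose(T(u + v), T(u) + T(v)))   # True
print(np.allclose(T(2.5 * u), 2.5 * T(u)))  # True
```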
End of explanation
"""
def T_impl(vec):
"""A linear transformations implemented as matrix-vector product."""
M_T = Matrix([[m_11, m_12],
[m_21, m_22]])
return M_T*vec
T_impl(u)
"""
Explanation: Linear transformations as matrix-vector products
See page 133 in v2.2 of the book.
End of explanation
"""
|
julienchastang/unidata-python-workshop | notebooks/Bonus/What to do when things go wrong.ipynb | mit | while = 1
"""
Explanation: <div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>What to do when things go wrong</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="https://regmedia.co.uk/2015/06/15/silicon-valley-fire.jpg" alt="Server on fire" style="height: 200px;"></div>
This session is designed as both a workshop and group discussion. We will examine basic failures in Python and how they manifest, followed by a discussion of what sort of errors and failure cases plague you, the audience. We will try to identify things that we (Unidata) can do differently to make the software development/usage process easier for our community. We will also pool our python knowledge/experiences to try to identify common issues and potential solutions.
This is useful because, eventually, problems will arise when programming with Python. Common issues include:
Syntax Errors
Runtime Errors
Unexpected Results
Identifying and correcting these issues can be difficult. However, there are strategies for preventing and mitigating many of the problems a developer may encounter.
Syntax Errors
Syntax errors usually occur when python is translated from source code into byte code. This is typically identified by a message similar to:
~~~
SyntaxError: invalid syntax
~~~
Common syntax errors include:
Using a reserved python keyword for a variable name.
End of explanation
"""
for i in range(1,10) print('Hello world')
"""
Explanation: Missing colon at the end of for, while, if, def statements.
End of explanation
"""
for i in range(1,10):
x = 1 + i
print(x)
"""
Explanation: Incorrect indentation. Don't mix spaces and tabs!
End of explanation
"""
print("Some python issues are easier to find than others ')
"""
Explanation: Mismatched quotation marks.
End of explanation
"""
print("Missing brackets can be difficult" , range(1,10)
print("to find. probably not these ones however.")
"""
Explanation: Unclosed brackets
End of explanation
"""
x = int(input("Please enter a number: "))
print(x)
while True:
try:
x = int(input("Please enter a number:"))
break
except ValueError:
print("oops!")
"""
Explanation: Runtime errors
Once the program is syntactically correct, it can run and now nothing will go wrong! Until it does. There are multiple ways runtime errors may manifest:
Nothing happens.
The program hangs.
Infinite loop or recursion.
Debugging Runtime errors in Python with GDB
It's possible to debug Python using gdb. It requires that the Python debugging extensions be installed.
Running gdb interactively:
$ gdb python
....
(gdb) run <programname>.py <arguments>
Running gdb Automatically:
$ gdb -ex r --args python <programname>.py <arguments>
The latter case runs the program until it exits, segfaults or execution is stopped.
You can get a stack trace or python stack trace via:
(gdb) bt
or
(gdb) py-bt
Handling exceptions with try/except
Handling exceptions may be done using try/except statements in Python.
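A minimal pattern worth keeping in mind — catch only the exception you expect and fall back gracefully (function name here is illustrative):

```python
def parse_int(text):
    try:
        return int(text)
    except ValueError:
        return None  # only ValueError is handled; anything else propagates

print(parse_int('42'))    # 42
print(parse_int('oops'))  # None
```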
End of explanation
"""
class MyInputError(Exception):
"""Exception raised for errors in the input.
Attributes:
message -- explanation of the error
"""
def __init__(self,*args,**kwargs):
Exception.__init__(self,*args,**kwargs)
self.dError = "Internal Error 10"
try:
raise MyInputError("MyInputError")
raise AssertionError("Sample Assertion Error")
except MyInputError as err:
print('InputError caught:',err)
print('dError:',err.dError)
except AssertionError as err:
print('Assertion error caught:',err)
?type
"""
Explanation: User-defined exceptions
Developers can create thir own exceptions by creating new exception classes derived from the Exception class.
End of explanation
"""
try:
    pass  # replace `pass` with the raise below to exercise the except branch
    # raise MyInputError("New error, test finally")
except MyInputError as err:
print('MyInputError:',err)
print('dError:',err.dError)
finally:
print('Graceful Cleanup: Goodbye, world!')
"""
Explanation: Defining Clean-up Actions
The try statement has an optional finally clause which lets you define a clean-up action; this is code executed whether an exception has occurred or not.
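A small sketch showing that the finally block runs on both the normal and the exceptional path:

```python
events = []

def run(fail):
    try:
        if fail:
            raise ValueError('boom')
        events.append('ok')
    except ValueError:
        events.append('caught')
    finally:
        events.append('cleanup')  # runs either way

run(False)
run(True)
print(events)  # ['ok', 'cleanup', 'caught', 'cleanup']
```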
End of explanation
"""
|
danielbarter/personal_website_code | blog_notebooks/mnist/helm_mnist.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style({"font.family" : ["Serif"]})
"""
Explanation: Let's check out MNIST
End of explanation
"""
# tensorflow mnist downloader
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: Many of the algorithms used in statistics and machine learning are not coordinate independent. For this reason, pre-processing is often required before fitting a model. Luckily, mnist is already in pristine condition, so no pre-processing is required. Moreover, the official binary files are really easy to parse, so I will just use the tensorflow mnist downloader.
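When pre-processing is needed, a typical coordinate-dependent fix is standardization — rescale each feature to zero mean and unit variance — sketched here on toy data:

```python
import numpy as np

X = np.array([[1.0, 200.0],
              [3.0, 400.0]])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std.tolist())  # [[-1.0, -1.0], [1.0, 1.0]]
```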
End of explanation
"""
print("training feature size :", mnist.train.images.shape)
print("validation feature size :", mnist.validation.images.shape)
print("test feature size :", mnist.test.images.shape)
sns.heatmap(mnist.train.images[0].reshape(28,28)
,xticklabels=False
,yticklabels=False);
print("label :", mnist.train.labels[0])
"""
Explanation: the python object mnist contains a training set with 55000 images, a validation set with 5000 images and a test set with 10000 images.
End of explanation
"""
linear_softmax_mle = tf.Graph()
with linear_softmax_mle.as_default():
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.random_normal([784, 10],stddev=0.1))
b = tf.Variable(tf.random_normal([10],stddev=0.1))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: We are going to use some Discriminative models to try and predict the label from the image. I wrote a blog post about the math behind this.
Model 1 : Linear function > softmax. MLE. 2 hours.
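The softmax step on its own, in NumPy (shifting by the max for numerical stability — a standard trick, not specific to this model):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(round(float(p.sum()), 6), int(p.argmax()))  # 1.0 0
```

The outputs are positive and sum to one, so they can be read as class probabilities.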
End of explanation
"""
def train_linear_softmax_mle():
with tf.Session(graph = linear_softmax_mle) as sess:
cross_entropy_list = []
sess.run(tf.global_variables_initializer())
for step in range(1000):
x_data,y_data = mnist.train.next_batch(100)
_,l = sess.run([train,cross_entropy]
,feed_dict = {x : x_data, y_ : y_data}
)
cross_entropy_list.append(l)
validation_accuracy = sess.run(accuracy
,feed_dict = {x : mnist.validation.images
, y_ : mnist.validation.labels
}
)
mle = sess.run(W)
print("validation accuracy = %f" % validation_accuracy)
fig, (ax1, ax2) = plt.subplots(ncols=2,figsize=(15,6))
sns.heatmap(mle,xticklabels=False,yticklabels=False,ax=ax1)
ax1.set_title("heatmap for W")
ax2.plot(np.arange(0,1000),np.array(cross_entropy_list))
ax2.set_title("cross entropy")
plt.show()
train_linear_softmax_mle()
"""
Explanation: The mnist.train object has a next_batch method whose source can be found here. It lets us sample without replacement from the feature,label pairs. Perfect for stochastic gradient descent. I don't want my laptop to melt!
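The idea behind next_batch can be sketched with a tiny cyclic batcher (a simplification — the real implementation also shuffles and tracks epochs):

```python
import numpy as np

class MiniBatcher:
    def __init__(self, X, y):
        self.X, self.y, self.pos = X, y, 0

    def next_batch(self, n):
        x_b = self.X[self.pos:self.pos + n]
        y_b = self.y[self.pos:self.pos + n]
        self.pos = (self.pos + n) % len(self.X)  # wrap around
        return x_b, y_b

b = MiniBatcher(np.arange(10), np.arange(10) * 2)
x1, y1 = b.next_batch(4)
print(x1.tolist(), y1.tolist())  # [0, 1, 2, 3] [0, 2, 4, 6]
```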
End of explanation
"""
nonlinear_softmax_mle = tf.Graph()
with nonlinear_softmax_mle.as_default():
x = tf.placeholder(tf.float32, [None, 784])
W1 = tf.Variable(tf.random_normal([784, 100],stddev=0.1))
b1 = tf.Variable(tf.random_normal([100]))
mid = tf.nn.relu(tf.matmul(x,W1) + b1)
W2 = tf.Variable(tf.random_normal([100,10],stddev=0.1))
b2 = tf.Variable(tf.random_normal([10],stddev=0.1))
y = tf.nn.softmax(tf.matmul(mid, W2) + b2)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def train_nonlinear_softmax_mle():
with tf.Session(graph = nonlinear_softmax_mle) as sess:
cross_entropy_list = []
sess.run(tf.global_variables_initializer())
for step in range(1000):
x_data,y_data = mnist.train.next_batch(100)
_,l = sess.run([train,cross_entropy]
,feed_dict = {x : x_data, y_ : y_data}
)
cross_entropy_list.append(l)
validation_accuracy = sess.run(accuracy
,feed_dict = {x : mnist.validation.images
, y_ : mnist.validation.labels
}
)
mle_1 = sess.run(W1)
mle_2 = sess.run(W2)
print("validation accuracy = %f" % validation_accuracy)
fig, (ax1, ax2) = plt.subplots(ncols=2,figsize=(15,6))
sns.heatmap(mle_2,xticklabels=False,yticklabels=False,ax=ax1)
ax2.plot(np.arange(0,1000),np.array(cross_entropy_list))
ax2.set_title("cross entropy")
ax1.set_title("heatmap for W2")
train_nonlinear_softmax_mle()
"""
Explanation: Model 2 : Linear Function > RELU > Linear Function > softmax. MLE. 1 hour.
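The only new ingredient in Model 2 is the ReLU non-linearity between the two affine layers — elementwise max(0, z):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

print(relu(np.array([-2.0, 0.0, 3.0])).tolist())  # [0.0, 0.0, 3.0]
```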
End of explanation
"""
|
EVS-ATMOS/cmac2.0 | notebooks/DDLobes.ipynb | bsd-3-clause | import pyart
import gzip
from matplotlib import pyplot as plt
from matplotlib import rcParams
from scipy import ndimage
import shutil, os
from datetime import timedelta, datetime
import numpy as np
import tempfile
import glob
import re
from copy import deepcopy
from IPython.display import Image, display
import math
%matplotlib inline
import pyproj
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
"""
Explanation: Dual Doppler lobe plotter
DD lobe plotter. Needs py-ART Grid file since DD lobes are calculated in radar relative coordinates.
Based on code created by Scott Collis, Johnathan Helmus, Zachary Sherman, and myself
End of explanation
"""
def dms_to_decimal(degrees, minutes, seconds):
if(degrees > 0):
return degrees+minutes/60+seconds/3600
else:
return degrees-minutes/60-seconds/3600
# NOTE: this version of get_bca is superseded by the lon/lat version defined below
def get_bca(grid):
berr_origin = [-12960.1,-23091.1]
x,y = np.meshgrid(grid.x['data'], grid.y['data'])
a = np.sqrt(np.multiply(x,x)+np.multiply(y,y))
b = np.sqrt(pow(x-berr_origin[0],2)+pow(y-berr_origin[1],2))
c = np.sqrt(berr_origin[0]*berr_origin[0]+berr_origin[1]*berr_origin[1])
theta_1 = np.arccos(x/a)
    theta_2 = np.arccos((x-berr_origin[0])/b)
return np.arccos((a*a+b*b-c*c)/(2*a*b))
# Gets beam crossing angle over 2D grid centered over Radar 1.
# grid_x, grid_y are cartesian coordinates from pyproj.Proj (or basemap)
def get_bca(rad1_lon, rad1_lat,
rad2_lon, rad2_lat,
grid_lon, grid_lat):
# Beam crossing angle needs cartesian coordinates
p = ccrs.PlateCarree()
p = p.as_geocentric()
rad1 = p.transform_points(ccrs.PlateCarree().as_geodetic(),
np.array(rad1_lon),
np.array(rad1_lat))
rad2 = p.transform_points(ccrs.PlateCarree().as_geodetic(),
np.array(rad2_lon),
np.array(rad2_lat))
grid_lon, grid_lat = np.meshgrid(grid_lon, grid_lat)
grid = p.transform_points(ccrs.PlateCarree().as_geodetic(),
grid_lon,
grid_lat,
np.zeros(grid_lon.shape))
# Create grid with Radar 1 in center
x = grid[:,:,0]-rad1[0,0]
y = grid[:,:,1]-rad1[0,1]
rad2 = rad2 - rad1
a = np.sqrt(np.multiply(x,x)+np.multiply(y,y))
b = np.sqrt(pow(x-rad2[0,0],2)+pow(y-rad2[0,1],2))
c = np.sqrt(rad2[0,0]*rad2[0,0]+rad2[0,1]*rad2[0,1])
theta_1 = np.arccos(x/a)
    theta_2 = np.arccos((x-rad2[0,0])/b)
return np.arccos((a*a+b*b-c*c)/(2*a*b))
def scale_bar(ax, length, location=(0.5, 0.05), linewidth=3):
"""
ax is the axes to draw the scalebar on.
location is center of the scalebar in axis coordinates ie. 0.5 is the middle of the plot
length is the length of the scalebar in km.
linewidth is the thickness of the scalebar.
"""
#Projection in metres, need to change this to suit your own figure
utm = ccrs.UTM(14)
#Get the extent of the plotted area in coordinates in metres
x0, x1, y0, y1 = ax.get_extent(utm)
#Turn the specified scalebar location into coordinates in metres
sbcx, sbcy = x0 + (x1 - x0) * location[0], y0 + (y1 - y0) * location[1]
#Generate the x coordinate for the ends of the scalebar
bar_xs = [sbcx - length * 500, sbcx + length * 500]
#Plot the scalebar
ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k', linewidth=linewidth)
#Plot the scalebar label
ax.text(sbcx, sbcy, str(length) + ' km', transform=utm,
horizontalalignment='center', verticalalignment='bottom')
"""
Explanation: Helper functions used below: convert coordinates from degrees/minutes/seconds to decimal degrees, compute the beam crossing angle between two radars over a lon/lat grid, and draw a map scale bar.
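The dms_to_decimal helper defined above can be exercised directly — restated here so the example is self-contained, using the AMF site coordinates that appear later:

```python
def dms_to_decimal(degrees, minutes, seconds):
    if degrees > 0:
        return degrees + minutes / 60 + seconds / 3600
    return degrees - minutes / 60 - seconds / 3600

print(round(dms_to_decimal(36, 29, 29.4), 4))    # 36.4915
print(round(dms_to_decimal(-97, 35, 37.68), 4))  # -97.5938
```

Note that for negative (west/south) coordinates the minutes and seconds are subtracted, so the magnitude still grows away from zero.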
End of explanation
"""
def plot_dd_lobes(radar1_loc, radar2_loc, radar1_name, radar2_name):
ax = plt.axes(projection=ccrs.PlateCarree())
# Amf locations
i5 = [dms_to_decimal(-97, 35, 37.68), dms_to_decimal(36, 29, 29.4)]
i4 = [dms_to_decimal(-97, 21, 49.32), dms_to_decimal(36, 34, 44.4)]
grid_lon = np.arange(radar1_loc[0]-1.5, radar1_loc[0]+1.5, 0.01)
grid_lat = np.arange(radar1_loc[1]-1.5, radar1_loc[1]+1.5, 0.01)
bca = get_bca(radar1_loc[0], radar1_loc[1],
radar2_loc[0], radar2_loc[1],
grid_lon, grid_lat)
lon_gridded, lat_gridded = np.meshgrid(grid_lon, grid_lat)
# Create a feature for States/Admin 1 regions at 1:50m from Natural Earth
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
SOURCE = 'Natural Earth'
LICENSE = 'public domain'
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(states_provinces, edgecolor='gray')
ax.set_xticks(grid_lon[::int(len(grid_lon)/5)], crs=ccrs.PlateCarree())
ax.set_yticks(grid_lat[::int(len(grid_lon)/5)], crs=ccrs.PlateCarree())
lon_formatter = LongitudeFormatter(zero_direction_label=True)
lat_formatter = LatitudeFormatter()
ax.xaxis.set_major_formatter(lon_formatter)
ax.yaxis.set_major_formatter(lat_formatter)
plt.contour(lon_gridded, lat_gridded,
bca,
levels=[math.pi/6, 5*math.pi/6],
linewidths=2,
transform=ccrs.PlateCarree())
plt.annotate('i4',
xy=(i4[0]+0.02, i4[1]+0.01),
fontweight='bold',
fontsize=8,
transform=ccrs.PlateCarree())
plt.annotate('i5',
xy=(i5[0]+0.02, i5[1]+0.01),
fontweight='bold',
fontsize=8,
transform=ccrs.PlateCarree())
    plt.plot(i4[0], i4[1], marker='d',
             linewidth=1, color='k')
    plt.plot(i5[0], i5[1], marker='d',
             linewidth=1, color='k')
scale_bar(ax, 20, location=(0.1, 0.9),)
ax.coastlines(resolution='10m')
ax.stock_img()
plt.xlim((grid_lon[0]-0.4, grid_lon[-1]+0.4))
plt.ylim((grid_lat[0]-0.4, grid_lat[-1]+0.4))
"""
Explanation: Grid plotting code
This code plots the dual-Doppler lobes (where the beam crossing angle is between 30 and 150 degrees) for a pair of radars on a map, together with the AMF site locations.
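The lobes drawn above are level sets of the beam crossing angle, which get_bca computes via the law of cosines. A quick sanity check on a single (hypothetical, planar) point: if the two radar-to-point distances and the radar separation form a 3-4-5 right triangle, the crossing angle is 90 degrees.

```python
import numpy as np

# a, b: distances from the grid point to each radar; c: radar separation
a, b, c = 3.0, 4.0, 5.0
bca = np.arccos((a * a + b * b - c * c) / (2 * a * b))
print(round(float(np.degrees(bca)), 1))  # 90.0
```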
End of explanation
"""
# Amf locations
i5 = [dms_to_decimal(-97, 35, 37.68), dms_to_decimal(36, 29, 29.4)]
i4 = [dms_to_decimal(-97, 21, 49.32), dms_to_decimal(36, 34, 44.4)]
plt.figure(figsize=(8,10))
plot_dd_lobes(i4, i5, 'i4', 'i5')
plt.title('XSAPR DD lobes')
"""
Explanation: Plot DD lobes for XSAPR i4 and i5
End of explanation
"""
|
DJCordhose/ai | notebooks/md/4-keras-tensorflow-nn.ipynb | mit | import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import pandas as pd
print(pd.__version__)
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
import keras
print(keras.__version__)
"""
Explanation: Neural Networks with TensorFlow and Keras
End of explanation
"""
df = pd.read_csv('./insurance-customers-1500.csv', sep=';')
y=df['group']
df.drop('group', axis='columns', inplace=True)
X = df.values  # df.as_matrix() was removed in newer pandas versions
df.describe()
"""
Explanation: First Step: Load Data and disassemble for our purposes
We need a few more data point samples for this approach
End of explanation
"""
# ignore this, it is just technical code
# should come from a lib, consider it to appear magically
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])
cmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])
font_size=25
def meshGrid(x_data, y_data):
h = 1 # step size in the mesh
x_min, x_max = x_data.min() - 1, x_data.max() + 1
y_min, y_max = y_data.min() - 1, y_data.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title="", mesh=True, fixed=None, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
plt.figure(figsize=(20,10))
if clf and mesh:
grid_X = np.array(np.c_[yy.ravel(), xx.ravel()])
if fixed:
fill_values = np.full((len(grid_X), 1), fixed)
grid_X = np.append(grid_X, fill_values, axis=1)
Z = clf.predict(grid_X)
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
if print:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k')
plt.xlabel(x_label, fontsize=font_size)
plt.ylabel(y_label, fontsize=font_size)
plt.title(title, fontsize=font_size)
if fname:
plt.savefig(fname)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# a tiny bit of feature engineering
from keras.utils.np_utils import to_categorical
num_categories = 3
y_train_categorical = to_categorical(y_train, num_categories)
y_test_categorical = to_categorical(y_test, num_categories)
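For intuition, to_categorical simply one-hot encodes the integer class labels; an equivalent numpy sketch (not Keras' actual implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    # each integer label selects the matching row of an identity matrix
    return np.eye(num_classes)[np.asarray(labels)]

print(one_hot([0, 2, 1], 3))
```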
from keras.layers import Input
from keras.layers import Dense
from keras.models import Model
from keras.layers import Dropout
inputs = Input(name='input', shape=(3, ))
x = Dense(100, name='hidden1', activation='relu')(inputs)
x = Dense(100, name='hidden2', activation='relu')(x)
predictions = Dense(3, name='softmax', activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)
# loss function: http://cs231n.github.io/linear-classify/#softmax
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
%time model.fit(X_train, y_train_categorical, epochs=500, batch_size=100)
train_loss, train_accuracy = model.evaluate(X_train, y_train_categorical, batch_size=100)
train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test_categorical, batch_size=100)
test_accuracy
"""
Explanation: Second Step: Deep Learning as Alchemy
End of explanation
"""
kms_per_year = 20
plotPrediction(model, X_test[:, 1], X_test[:, 0],
'Age', 'Max Speed', y_test,
fixed = kms_per_year,
title="Test Data Max Speed vs Age with Prediction, 20 km/year")
kms_per_year = 50
plotPrediction(model, X_test[:, 1], X_test[:, 0],
'Age', 'Max Speed', y_test,
fixed = kms_per_year,
title="Test Data Max Speed vs Age with Prediction, 50 km/year")
kms_per_year = 5
plotPrediction(model, X_test[:, 1], X_test[:, 0],
'Age', 'Max Speed', y_test,
fixed = kms_per_year,
title="Test Data Max Speed vs Age with Prediction, 5 km/year")
"""
Explanation: Look at all the different shapes for different kilometers per year
now we have three dimensions, so we need to set one to a certain number
End of explanation
"""
|
chinapnr/python_study | Python 基础课程/Python Basic Lesson 06 - 随机数.ipynb | gpl-3.0 | import random
# random.choice(sequence): the sequence argument is any ordered type.
# random.choice picks one random element from the sequence.
print(random.choice(range(1,100)))
# pick a random element from a list
list1 = ['a', 'b', 'c']
print(random.choice(list1))
# random.sample()
# 创建指定范围内指定个数的整数随机数
print(random.sample(range(1,100), 10))
print(random.sample(range(1,10), 5))
# what happens if the requested sample size is larger than the population?
# print(random.sample(range(1,10), 15))
# random.randint(a, b) generates a random integer within the given range.
# a is the lower bound and b the upper bound; the result n satisfies a <= n <= b
print(random.randint(1,100))
# random.randrange([start], stop[, step])
# picks a random element from the set range(start, stop, step).
print(random.randrange(1,10))
# run this a few times and note which values can occur
print(random.randrange(1,10,3))
# random.random() generates a random float n with 0 <= n < 1.0
print(random.random())
# random.uniform(a, b)
# generates a random float within the given range; one argument is the upper bound, the other the lower bound.
# If a < b, the result n satisfies a <= n <= b; if a > b, then b <= n <= a.
print(random.uniform(1,100))
print(random.uniform(50,10))
# random.shuffle(x[, random])
# shuffles the elements of a list in place
a = [12, 23, 1, 5, 87]
random.shuffle(a)
print(a)
# random.sample(sequence, k)
# returns a random k-length sample from the sequence; sample does not modify the original sequence.
print(random.sample(range(10),5))
print(random.sample(range(10),7))
"""
Explanation: Lesson 6
v1.1, 2020.4.5, edited by David Yi
Topics of this session
Random numbers
Exercises: a number-guessing game, etc.
Random numbers
Random numbers mean different things in different fields and play a very important role in cryptography and communications.
Python's random-number module is random. Its main functions are listed below; the examples show how they work.
random.choice() picks a random element from a sequence
random.sample() draws a given number of random values from a given range
random.random() generates a random float between 0 and 1
random.uniform() generates a random float within a given range
random.randint() generates a random integer within a given range
random.shuffle() shuffles the elements of a list
End of explanation
"""
# guess the number: the human guesses
# simple version
import random
a = random.randint(1,1000)
print('Now you can guess...')
guess_mark = True
while guess_mark:
user_number =int(input('please input number:'))
if user_number > a:
print('too big')
if user_number < a:
print('too small')
if user_number == a:
print('bingo!')
guess_mark = False
# guess the number: the human guesses
# record the guessing process
import random
# record every number the player guessed
user_number_list = []
# record how many guesses were made
user_guess_count = 0
a = random.randint(1,100)
print('Now you can guess...')
guess_mark = True
# main loop
while guess_mark:
user_number =int(input('please input number:'))
user_number_list.append(user_number)
user_guess_count += 1
if user_number > a:
print('too big')
if user_number < a:
print('too small')
if user_number == a:
print('bingo!')
print('your guess number list:', user_number_list)
print('you try times:', user_guess_count)
guess_mark = False
# guess the number: the human guesses
# count the attempts and show a different prompt after more than 4 tries
import random
# record every number the player guessed
user_number_list = []
# record how many guesses were made
user_guess_count = 0
a = random.randint(1,100)
print('Now you can guess...')
guess_mark = True
# main loop
while guess_mark:
if 0 <= user_guess_count <= 4:
user_number =int(input('please input number:'))
if 4 < user_guess_count <= 100:
user_number =int(input('try harder, please input number:'))
user_number_list.append(user_number)
user_guess_count += 1
if user_number > a:
print('too big')
if user_number < a:
print('too small')
if user_number == a:
print('bingo!')
print('your guess number list:', user_number_list)
print('you try times:', user_guess_count)
guess_mark = False
"""
Explanation: Exercise
A number-guessing game
End of explanation
"""
from fishbase.fish_random import *
# These card numbers only follow the format rules: they pass basic bank-card number validation but do not actually exist
# generate a random bank card number
print(gen_random_bank_card())
# generate a random Bank of China debit card number
print(gen_random_bank_card('中国银行', 'CC'))
# generate a random Bank of China credit card number
print(gen_random_bank_card('中国银行', 'DC'))
from fishbase.fish_random import *
# generate fake ID-card numbers that follow the standard segment layout and check digit
# specify the region code of the ID card
print(gen_random_id_card('310000'))
# additionally specify an age
print(gen_random_id_card('310000', age=70))
# specify both age and gender
print(gen_random_id_card('310000', age=30, gender='00'))
# generate a list of results
print(gen_random_id_card(age=30, gender='01', result_type='LIST'))
"""
Explanation: Generating more complex random content.
See the random section of the fishbase Python package we developed: https://fishbase.readthedocs.io/en/latest/fish_random.html
fish_random.gen_random_address(zone): given a province division code, returns a random address in that province
fish_random.get_random_areanote(zone): given a province division code, returns the name of a random district within it
fish_random.gen_random_bank_card([…]): generates a random card number for the given bank name
fish_random.gen_random_company_name(): generates a random company name
fish_random.gen_random_float(minimum, maximum): returns a random float in the given closed interval; limited by the precision of random.random, at most 15 digits of precision are supported
fish_random.gen_random_id_card([zone, …]): generates a random ID-card number for the given province code, gender or age
fish_random.gen_random_mobile(): generates a random mobile-phone number
fish_random.gen_random_name([family_name, …]): returns a random person's name, optionally constrained by family name, gender and length
fish_random.gen_random_str(min_length, …): returns a random string of the given length with the given prefix/suffix and character classes
End of explanation
"""
# guess the number: the machine guesses
low = 0
high = 1000
guess_ok_mark = False
while not guess_ok_mark:
    cur_guess = int((low + high) / 2)
    print('I guess:', cur_guess)
    human_answer = input('Please tell me big or small:')
    if human_answer == 'big':
        high = cur_guess - 1  # the target is below the current guess
    if human_answer == 'small':
        low = cur_guess + 1  # the target is above the current guess
    if human_answer == 'ok':
        print('HAHAHA')
        guess_ok_mark = True
"""
Explanation: Rewrite the guessing game so that the machine guesses, adjusting its strategy each round based on the player's feedback
End of explanation
"""
|
saezlab/kinact | doc/KSEA_example.ipynb | gpl-3.0 | # Import useful libraries
import numpy as np
import pandas as pd
# Import required libraries for data visualisation
import matplotlib.pyplot as plt
import seaborn as sns
# Import the package
import kinact
# Magic
%matplotlib inline
"""
Explanation: Protocol for Kinase-Substrate Enrichment Analysis (KSEA)
This IPython notebook accompanies the chapter 'Phosphoproteomics-based profiling of kinase activities in cancer cell' in the book 'Methods of Molecular Biology: Cancer Systems Biology' from Springer, 2016.
The script aims to demonstrate the methodology of KSEA, to facilitate grasping the operations performed in the provided code, and to enable reproduction of the implementation in other programming languages where required.
End of explanation
"""
# import data
data_fc, data_p_value = kinact.get_example_data()
# import prior knowledge
adj_matrix = kinact.get_kinase_targets()
print(data_fc.head())
print()
print(data_p_value.head())
# Perform ksea using the Mean method
score, p_value = kinact.ksea.ksea_mean(data_fc=data_fc['5min'].dropna(),
interactions=adj_matrix,
mP=data_fc['5min'].values.mean(),
delta=data_fc['5min'].values.std())
print(pd.DataFrame({'score': score, 'p_value': p_value}).head())
# Perform ksea using the Alternative Mean method
score, p_value = kinact.ksea.ksea_mean_alt(data_fc=data_fc['5min'].dropna(),
p_values=data_p_value['5min'],
interactions=adj_matrix,
mP=data_fc['5min'].values.mean(),
delta=data_fc['5min'].values.std())
print(pd.DataFrame({'score': score, 'p_value': p_value}).head())
# Perform ksea using the Delta method
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'].dropna(),
p_values=data_p_value['5min'],
interactions=adj_matrix)
print(pd.DataFrame({'score': score, 'p_value': p_value}).head())
"""
Explanation: Quick Start
End of explanation
"""
# Read data
data_raw = pd.read_csv('../kinact/data/deGraaf_2014_jurkat.csv', sep=',', header=0)
# Filter for those p-sites that were matched ambiguously
data_reduced = data_raw[~data_raw['Proteins'].str.contains(';')]
# Create identifier for each phosphorylation site, e.g. P06239_S59 for the Serine 59 in the protein Lck
data_reduced.loc[:, 'ID'] = data_reduced['Proteins'] + '_' + data_reduced['Amino acid'] + \
data_reduced['Positions within proteins']
data_indexed = data_reduced.set_index('ID')
# Extract only relevant columns
data_relevant = data_indexed[[x for x in data_indexed if x.startswith('Average')]]
# Rename columns
data_relevant.columns = [x.split()[-1] for x in data_relevant]
# Convert abundaces into fold changes compared to control (0 minutes after stimulation)
data_fc = data_relevant.sub(data_relevant['0min'], axis=0)
data_fc.drop('0min', axis=1, inplace=True)
# Also extract the p-values for the fold changes
data_p_value = data_indexed[[x for x in data_indexed if x.startswith('p value') and x.endswith('vs0min')]]
data_p_value.columns = [x.split('_')[-1].split('vs')[0] + 'min' for x in data_p_value]
data_p_value = data_p_value.astype('float') # Excel saved the p-values as strings, not as floating point numbers
print(data_fc.head())
print(data_p_value.head())
"""
Explanation: 1. Loading the data
In order to perform the described kinase enrichment analysis, we load the data into a Pandas DataFrame. Here, we use the data from <em>de Graaf et al., 2014</em> for demonstration of KSEA. The data is available as supplemental material to the article online under http://mcponline.org/content/13/9/2426/suppl/DC1. The dataset of interest can be found in the Supplemental Table 2.
When downloading the dataset from the internet, it will be provided as an Excel spreadsheet. For the use in this script, it will have to be saved as a csv file, using the 'Save As' function in Excel.
In the accompanying github repository, we will provide an already processed csv-file together with the code for KSEA.
End of explanation
"""
# Read data
ks_rel = pd.read_csv('../kinact/data/PhosphoSitePlus.txt', sep='\t')
# The data from the PhosphoSitePlus database is not provided as a comma-separated value file (csv);
# instead, a tab (\t) delimits the individual cells
# Restrict the data on interactions in the organism of interest
ks_rel_human = ks_rel.loc[(ks_rel['KIN_ORGANISM'] == 'human') & (ks_rel['SUB_ORGANISM'] == 'human')]
# Create p-site identifier of the same format as before
ks_rel_human.loc[:, 'psite'] = ks_rel_human['SUB_ACC_ID'] + '_' + ks_rel_human['SUB_MOD_RSD']
# Create adjacency matrix (links between kinases (columns) and p-sites (rows) are indicated with a 1, 0 otherwise)
ks_rel_human.loc[:, 'value'] = 1
adj_matrix = pd.pivot_table(ks_rel_human, values='value', index='psite', columns='GENE', fill_value=0)
print(adj_matrix.head())
print(adj_matrix.sum(axis=0).sort_values(ascending=False).head())
"""
Explanation: 2. Import prior-knowledge kinase-substrate relationships from PhosphoSitePlus
In the following example, we use the data from the PhosphoSitePlus database, which can be downloaded here: http://www.phosphosite.org/staticDownloads.action.
Note that the downloaded file contains a disclaimer at the top, which has to be removed before the file can be used as described below.
End of explanation
"""
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'],
p_values=data_p_value['5min'],
interactions=adj_matrix,
)
print(pd.DataFrame({'score': score, 'p_value': p_value}).head())
# Calculate the KSEA scores for all data with the ksea_mean method
activity_mean = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std())[0]
for c in data_fc})
activity_mean = activity_mean[['5min', '10min', '20min', '30min', '60min']]
print(activity_mean.head())
# Calculate the KSEA scores for all data with the ksea_mean method, using the median
activity_median = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std(), median=True)[0]
for c in data_fc})
activity_median = activity_median[['5min', '10min', '20min', '30min', '60min']]
print(activity_median.head())
# Calculate the KSEA scores for all data with the ksea_mean_alt method
activity_mean_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std())[0]
for c in data_fc})
activity_mean_alt = activity_mean_alt[['5min', '10min', '20min', '30min', '60min']]
print(activity_mean_alt.head())
# Calculate the KSEA scores for all data with the ksea_mean method, using the median
activity_median_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std(),
median=True)[0]
for c in data_fc})
activity_median_alt = activity_median_alt[['5min', '10min', '20min', '30min', '60min']]
print(activity_median_alt.head())
# Calculate the KSEA scores for all data with the ksea_delta method
activity_delta = pd.DataFrame({c: kinact.ksea.ksea_delta(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix)[0]
for c in data_fc})
activity_delta = activity_delta[['5min', '10min', '20min', '30min', '60min']]
print(activity_delta.head())
sns.set(context='poster', style='ticks')
sns.heatmap(activity_mean_alt, cmap=sns.blend_palette([sns.xkcd_rgb['amber'],
sns.xkcd_rgb['almost black'],
sns.xkcd_rgb['bright blue']],
as_cmap=True))
plt.show()
"""
Explanation: 3. KSEA
3.1 Quick start for KSEA
Together with this tutorial, we will provide an implementation of KSEA as custom Python functions. As an example, the use of these functions on the dataset by de Graaf et al. could look like this.
End of explanation
"""
kinase='CSNK2A1'
df_plot = pd.DataFrame({'mean': activity_mean.loc[kinase],
'delta': activity_delta.loc[kinase],
'mean_alt': activity_mean_alt.loc[kinase]})
df_plot['time [min]'] = [5, 10, 20, 30, 60]
df_plot = pd.melt(df_plot, id_vars='time [min]', var_name='method', value_name='activity score')
g = sns.FacetGrid(df_plot, col='method', sharey=False, size=3, aspect=1)
g = g.map(sns.pointplot, 'time [min]', 'activity score')
plt.subplots_adjust(top=.82)
plt.show()
"""
Explanation: In de Graaf et al., they associated (amongst others) the Casein kinase II alpha (CSNK2A1) with higher activity after prolonged stimulation with prostaglandin E2. Here, we plot the activity scores of CSNK2A1 for all three methods of KSEA, which are in good agreement.
End of explanation
"""
data_condition = data_fc['60min'].copy()
p_values = data_p_value['60min']
kinase = 'CDK1'
substrates = adj_matrix[kinase].replace(0, np.nan).dropna().index
detected_p_sites = data_fc.index
intersect = list(set(substrates).intersection(detected_p_sites))
"""
Explanation: 3.2. KSEA in detail
In the following, we show in detail the computations that are carried out inside the provided functions. Let us concentrate on a single condition (60 minutes after stimulation with prostaglandin E2) and a single kinase (CDK1).
End of explanation
"""
mS = data_condition.loc[intersect].mean()
mP = data_fc.values.mean()
m = len(intersect)
delta = data_fc.values.std()
z_score = (mS - mP) * np.sqrt(m) * 1/delta
from scipy.stats import norm
p_value_mean = norm.sf(abs(z_score))
print(mS, p_value_mean)
"""
Explanation: 3.2.1. Mean method
End of explanation
"""
cut_off = -np.log10(0.05)
set_alt = data_condition.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()
mS_alt = set_alt.mean()
z_score_alt = (mS_alt - mP) * np.sqrt(len(set_alt)) * 1/delta
p_value_mean_alt = norm.sf(abs(z_score_alt))
print(mS_alt, p_value_mean_alt)
"""
Explanation: 3.2.2. Alternative Mean method
End of explanation
"""
cut_off = -np.log10(0.05)
score_delta = len(data_condition.loc[intersect].where((data_condition.loc[intersect] > 0) &
(p_values.loc[intersect] > cut_off)).dropna()) -\
len(data_condition.loc[intersect].where((data_condition.loc[intersect] < 0) &
(p_values.loc[intersect] > cut_off)).dropna())
M = len(data_condition)
n = len(intersect)
N = len(np.where(p_values.loc[adj_matrix.index.tolist()] > cut_off)[0])
from scipy.stats import hypergeom
hypergeom_dist = hypergeom(M, n, N)
p_value_delta = hypergeom_dist.pmf(len(p_values.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()))
print(score_delta, p_value_delta)
"""
Explanation: 3.2.3. Delta Method
End of explanation
"""
|
yl565/statsmodels | examples/notebooks/glm.ipynb | bsd-3-clause | %matplotlib inline
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
from scipy import stats
from matplotlib import pyplot as plt
"""
Explanation: Generalized Linear Models
End of explanation
"""
print(sm.datasets.star98.NOTE)
"""
Explanation: GLM: Binomial response data
Load data
In this example, we use the Star98 dataset which was taken with permission
from Jeff Gill (2000) Generalized linear models: A unified approach. Codebook
information can be obtained by typing:
End of explanation
"""
data = sm.datasets.star98.load()
data.exog = sm.add_constant(data.exog, prepend=False)
"""
Explanation: Load the data and add a constant to the exogenous (independent) variables:
End of explanation
"""
print(data.endog[:5,:])
"""
Explanation: The dependent variable is N by 2 (Success: NABOVE, Failure: NBELOW):
End of explanation
"""
print(data.exog[:2,:])
"""
Explanation: The independent variables include all the other variables described above, as
well as the interaction terms:
End of explanation
"""
glm_binom = sm.GLM(data.endog, data.exog, family=sm.families.Binomial())
res = glm_binom.fit()
print(res.summary())
"""
Explanation: Fit and summary
End of explanation
"""
print('Total number of trials:', data.endog[0].sum())
print('Parameters: ', res.params)
print('T-values: ', res.tvalues)
"""
Explanation: Quantities of interest
End of explanation
"""
means = data.exog.mean(axis=0)
means25 = means.copy()
means25[0] = stats.scoreatpercentile(data.exog[:,0], 25)
means75 = means.copy()
means75[0] = lowinc_75per = stats.scoreatpercentile(data.exog[:,0], 75)
resp_25 = res.predict(means25)
resp_75 = res.predict(means75)
diff = resp_75 - resp_25
"""
Explanation: First differences: We hold all explanatory variables constant at their means and manipulate the percentage of low income households to assess its impact on the response variables:
End of explanation
"""
print("%2.4f%%" % (diff*100))
"""
Explanation: The interquartile first difference for the percentage of low income households in a school district is:
End of explanation
"""
nobs = res.nobs
y = data.endog[:,0]/data.endog.sum(1)
yhat = res.mu
"""
Explanation: Plots
We extract information that will be used to draw some interesting plots:
End of explanation
"""
from statsmodels.graphics.api import abline_plot
fig, ax = plt.subplots()
ax.scatter(yhat, y)
line_fit = sm.OLS(y, sm.add_constant(yhat, prepend=True)).fit()
abline_plot(model_results=line_fit, ax=ax)
ax.set_title('Model Fit Plot')
ax.set_ylabel('Observed values')
ax.set_xlabel('Fitted values');
"""
Explanation: Plot yhat vs y:
End of explanation
"""
fig, ax = plt.subplots()
ax.scatter(yhat, res.resid_pearson)
ax.hlines(0, 0, 1)
ax.set_xlim(0, 1)
ax.set_title('Residual Dependence Plot')
ax.set_ylabel('Pearson Residuals')
ax.set_xlabel('Fitted values')
"""
Explanation: Plot yhat vs. Pearson residuals:
End of explanation
"""
from scipy import stats
fig, ax = plt.subplots()
resid = res.resid_deviance.copy()
resid_std = stats.zscore(resid)
ax.hist(resid_std, bins=25)
ax.set_title('Histogram of standardized deviance residuals');
"""
Explanation: Histogram of standardized deviance residuals:
End of explanation
"""
from statsmodels import graphics
graphics.gofplots.qqplot(resid, line='r')
"""
Explanation: QQ Plot of Deviance Residuals:
End of explanation
"""
print(sm.datasets.scotland.DESCRLONG)
"""
Explanation: GLM: Gamma for proportional count response
Load data
In the example above, we printed the NOTE attribute to learn about the
Star98 dataset. Statsmodels datasets ships with other useful information. For
example:
End of explanation
"""
data2 = sm.datasets.scotland.load()
data2.exog = sm.add_constant(data2.exog, prepend=False)
print(data2.exog[:5,:])
print(data2.endog[:5])
"""
Explanation: Load the data and add a constant to the exogenous variables:
End of explanation
"""
glm_gamma = sm.GLM(data2.endog, data2.exog, family=sm.families.Gamma())
glm_results = glm_gamma.fit()
print(glm_results.summary())
"""
Explanation: Fit and summary
End of explanation
"""
nobs2 = 100
x = np.arange(nobs2)
np.random.seed(54321)
X = np.column_stack((x,x**2))
X = sm.add_constant(X, prepend=False)
lny = np.exp(-(.03*x + .0001*x**2 - 1.0)) + .001 * np.random.rand(nobs2)
"""
Explanation: GLM: Gaussian distribution with a noncanonical link
Artificial data
End of explanation
"""
gauss_log = sm.GLM(lny, X, family=sm.families.Gaussian(sm.families.links.log))
gauss_log_results = gauss_log.fit()
print(gauss_log_results.summary())
"""
Explanation: Fit and summary
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.3/tutorials/distance.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: Distance
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
print(b.get_parameter(qualifier='distance', context='system'))
print(b.get_parameter(qualifier='t0', context='system'))
"""
Explanation: Relevant Parameters
The 'distance' parameter lives in the 'system' context and is simply the distance between the center of the coordinate system and the observer (at t0)
End of explanation
"""
b.add_dataset('orb', times=np.linspace(0,3,101), dataset='orb01')
b.set_value('distance', 1.0)
b.run_compute(model='dist1')
b.set_value('distance', 2.0)
b.run_compute(model='dist2')
afig, mplfig = b['orb01'].plot(y='ws', show=True, legend=True)
"""
Explanation: Influence on Orbits (Positions)
The distance has absolutely NO effect on the synthetic orbit as the origin of the orbit's coordinate system is such that the barycenter of the system is at 0,0,0 at t0.
To demonstrate this, let's create an 'orb' dataset and compute models at both 1 m and 2 m and then plot the resulting synthetic models.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
"""
Explanation: Influence on Light Curves (Fluxes)
Fluxes are, however, affected by distance exactly as you'd expect: as the inverse of distance squared.
To illustrate this, let's add an 'lc' dataset and compute synthetic fluxes at 1 and 2 m.
End of explanation
"""
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.,0.])
b.set_value('distance', 1.0)
b.run_compute(model='dist1', overwrite=True)
b.set_value('distance', 2.0)
b.run_compute(model='dist2', overwrite=True)
"""
Explanation: To make things easier to compare, let's disable limb darkening
End of explanation
"""
afig, mplfig = b['lc01'].plot(show=True, legend=True)
"""
Explanation: Since we doubled the distance from 1 to 2 m, we expect the entire light curve at 2 m to be divided by 4 (note the y-scales on the plots below).
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01'])
b.set_value('distance', 1.0)
b.run_compute(model='dist1', overwrite=True)
b.set_value('distance', 2.0)
b.run_compute(model='dist2', overwrite=True)
print("dist1 abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='dist1')))
print("dist2 abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='dist2')))
print("dist1 intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='dist1')))
print("dist2 intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='dist2')))
"""
Explanation: Note that 'pblum' is defined such that a (spherical, non-eclipsed, non-limb darkened) star with a pblum of 4pi will contribute a flux of 1.0 at 1.0 m (the default distance).
For more information, see the pblum tutorial
Influence on Meshes (Intensities)
Distance does not affect the intensities stored in the mesh (including those in relative units). In other words, like third light, distance only scales the fluxes.
NOTE: this is different than pblums which DO affect the relative intensities. Again, see the pblum tutorial for more details.
To see this we can run both of our distances again and look at the values of the intensities in the mesh.
End of explanation
"""
|
yogeshVU/matplotlib_apps | MatPlotLib.ipynb | mit | %matplotlib inline
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-3, 3, 0.001)
plt.plot(x, norm.pdf(x))
plt.show()
"""
Explanation: MatPlotLib Basics
Draw a line graph
End of explanation
"""
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
"""
Explanation: Multiple Plots on One Graph
End of explanation
"""
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.savefig('C:\\Users\\anirban\\Dropbox\\DataScience\\DataScience\\MyPlot.png', format='png')
"""
Explanation: Save it to a File
End of explanation
"""
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
"""
Explanation: Adjust the Axes
End of explanation
"""
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
"""
Explanation: Add a Grid
End of explanation
"""
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'g-.')
plt.show()
"""
Explanation: Change Line Types and Colors
End of explanation
"""
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.xlabel('Greebles')
plt.ylabel('Probability')
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.legend(['Sneetches', 'Gacks'], loc=4)
plt.show()
"""
Explanation: Labeling Axes and Adding a Legend
End of explanation
"""
plt.xkcd()
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])
data = np.ones(100)
data[70:] -= np.arange(30)
plt.annotate(
'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10))
plt.plot(data)
plt.xlabel('time')
plt.ylabel('my overall health')
"""
Explanation: XKCD Style :)
End of explanation
"""
# Remove XKCD mode:
plt.rcdefaults()
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
explode = [0.2, 0, 0.0, 0, 0]
labels = ['India', 'United States', 'Russia', 'China', 'Europe']
plt.pie(values, colors= colors, labels=labels, explode = explode)
plt.title('Student Locations')
plt.show()
"""
Explanation: Pie Chart
End of explanation
"""
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
plt.bar(range(0,5), values, color= colors)
plt.show()
"""
Explanation: Bar Chart
End of explanation
"""
from pylab import randn
X = randn(500)
Y = randn(500)
plt.scatter(X,Y)
plt.show()
"""
Explanation: Scatter Plot
End of explanation
"""
incomes = np.random.normal(27000, 15000, 10000)
plt.hist(incomes, 50)
plt.show()
"""
Explanation: Histogram
End of explanation
"""
uniformSkewed = np.random.rand(100) * 100 - 40
high_outliers = np.random.rand(10) * 50 + 100
low_outliers = np.random.rand(10) * -50 - 100
data = np.concatenate((uniformSkewed, high_outliers, low_outliers))
plt.boxplot(data)
plt.show()
"""
Explanation: Box & Whisker Plot
Useful for visualizing the spread & skew of data.
The red line represents the median of the data, and the box represents the bounds of the 1st and 3rd quartiles.
So, half of the data exists within the box.
The dotted-line "whiskers" indicate the range of the data - except for outliers, which are plotted outside the whiskers. Outliers are 1.5X or more the interquartile range.
This example below creates uniformly distributed random numbers between -40 and 60, plus a few outliers above 100 and below -100:
End of explanation
"""
import numpy as np
axes = plt.axes()
axes.grid()
plt.xlabel('time')
plt.ylabel('age')
X = np.random.randint(low=0, high =24, size =100)
Y = np.random.randint(low=0, high =100, size =100)
plt.scatter(X,Y)
plt.show()
np.cov(X, Y)  # NumPy arrays have no .cov() method; use np.cov instead
"""
Explanation: Activity
Try creating a scatter plot representing random data on age vs. time spent watching TV. Label the axes.
End of explanation
"""
|
otavio-r-filho/AIND-Deep_Learning_Notebooks | batch-norm/Batch_Normalization_Exercises.ipynb | mit |
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
"""
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def fully_connected(prev_layer, num_units):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
"""
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
"""
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 2x2 on layers whose depth is divisible by 3, and strides of 1x1 on all other layers. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
"""
"""
DO NOT MODIFY THIS CELL
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
"""
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
"""
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
"""
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None, use_bias=False)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
"""
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool, name='is_training')
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs,
labels: batch_ys,
is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
"""
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
prev_shape = prev_layer.get_shape().as_list()
weights = tf.Variable(tf.random_normal([int(prev_shape[-1]), num_units], stddev=0.05))
layer = tf.matmul(prev_layer, weights)
beta = tf.Variable(tf.zeros([num_units]))
gamma = tf.Variable(tf.ones([num_units]))
pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
pop_variance = tf.Variable(tf.zeros([num_units]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
decay = 0.99
batch_mean, batch_variance = tf.nn.moments(layer, [0])
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
batch_norm_layer = (layer - batch_mean) / tf.sqrt(batch_variance + epsilon)
return batch_norm_layer
def batch_norm_inference():
batch_norm_layer = (layer - pop_mean) / tf.sqrt(pop_variance + epsilon)
return batch_norm_layer
layer = tf.cond(is_training, batch_norm_training, batch_norm_inference)
layer = gamma * layer + beta
layer = tf.nn.relu(layer)
return layer
"""
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
"""
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
beta = tf.Variable(tf.zeros([out_channels]))
gamma = tf.Variable(tf.ones([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
pop_variance = tf.Variable(tf.zeros([out_channels]), trainable=False)
#bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
epsilon = 1e-3
def batch_norm_training():
decay = 0.99
batch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)
train_mean = tf.assign(pop_mean, batch_mean * decay + pop_mean * (1 - decay))
train_variance = tf.assign(pop_variance, batch_variance * decay + pop_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
batch_norm_layer = (conv_layer - batch_mean) / tf.sqrt(batch_variance + epsilon)
return batch_norm_layer
def batch_norm_inference():
batch_norm_layer = (conv_layer - pop_mean) / tf.sqrt(pop_variance + epsilon)
return batch_norm_layer
conv_layer = tf.cond(is_training, batch_norm_training, batch_norm_inference)
conv_layer = gamma * conv_layer + beta
#conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
# Setting initial conditions
layer = tf.placeholder(tf.float32, [None, 28, 28, 1])
print('Simulating parameter propagation over 4 steps:')
for layer_i in range(1,5):
strides = 2 if layer_i % 3 == 0 else 1
input_shape = layer.get_shape().as_list()
in_channels = layer.get_shape().as_list()[3]
out_channels = layer_i*4
weights = tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05)
layer = tf.nn.conv2d(layer, weights, strides=[1,strides, strides, 1], padding='SAME')
print('-----------------------------------------------')
print('strides:{0}'.format(strides))
print('Input layer shape:{0}'.format(input_shape))
print('in_channels:{0}'.format(in_channels))
print('out_channels:{0}'.format(out_channels))
print('Truncated normal output:{0}'.format(weights))
"""
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
"""
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation
"""
|
AnyBody-Research-Group/AnyPyTools | docs/slides/Automate your AnyBody simulations.ipynb | mit |
from anypytools import AnyPyProcess
app = AnyPyProcess( )
macrolist = [
'load "Knee.any"',
'classoperation Main.MyParameter "Set Value" --value="10"',
'operation Main.MyStudy.Kinematics',
'run',
'exit'
]
app.start_macro(macrolist);
"""
Explanation: <img src="https://avatars.githubusercontent.com/u/1136538?s=70&v=4" align="right"><br><br>
Using AnyBody from Python
We will use a small open source library to help us.
AnyPyTools
Handle all interaction with the console application
Create macros programmatically
Run simulations in parallel
How to get started
<br>
Requirements:
A Python installation
The AnyPyTools library
<br><br>
<img src="anaconda_download.png" alt="Download Anaconda" width="30%" align="right" border="5">
The easy way:
Install a Conda Python distribution.
- https://www.anaconda.com/products/individual
Has all the important scientific python packages
More stuff can be installed using the build-in package-manager (conda)
c:\>conda config --add channels conda-forge
c:\>conda install anypytools
Running the console application from Python
<br>
A simple example
<br>
End of explanation
"""
macrolist = [['load "Knee.any"',
'operation Main.MyStudy.Kinematics',
'run',
'exit'],
['load "Knee.any"',
'operation Main.MyStudy.InverseDynamics',
'run',
'exit']]
app.start_macro(macrolist);
"""
Explanation: Running multiple macros
<br>
End of explanation
"""
many_macros = [['load "Knee.any"',
'classoperation Main.MyParameter "Set Value" --value="10"',
'operation Main.MyStudy.Kinematics',
'run',
'exit']]*40
many_macros
"""
Explanation: Better performance with parallelization
End of explanation
"""
app = AnyPyProcess(num_processes = 1)
app.start_macro(many_macros);
"""
Explanation: Better performance with parallelization
First sequentially
End of explanation
"""
app = AnyPyProcess(num_processes = 10)
app.start_macro(many_macros);
"""
Explanation: Then with parallelization
End of explanation
"""
from anypytools import AnyPyProcess
app = AnyPyProcess( )
macrolist = [['load "Knee.any"',
'operation Main.MyStudy.InverseDynamics',
'run',
'classoperation Main.MyStudy.Output.MaxMuscleActivity "Dump"',
'exit'],
['load "Knee.any"',
'operation Main.MyStudy.InverseDynamics',
'run',
'classoperation Main.MyStudy.Output.MaxMuscleActivity "Dump"',
'exit']]
results = app.start_macro(macrolist)
results
"""
Explanation: Getting data back from AnyBody
<br>
Usual approach: AnyOutputFile, Save all output data (HDF5 file)
- :( Difficult to concatenate data across simulations.
- :( Impractical if we only need a few variables.
There must be a better way.
...there is...
The console application has a class operation we can use
classoperation <AnyScript_variable_name> "Dump"
AnyPyTools will automatically grab any data or error from AnyBodyCon
End of explanation
"""
%matplotlib inline
from matplotlib.pyplot import plot
max_muscle_activity = results[0]['Main.MyStudy.Output.MaxMuscleActivity']
plot(max_muscle_activity);
"""
Explanation: Plotting the data
End of explanation
"""
results[0]['Main.MyStudy.Output.MaxMuscleActivity'].shape
results['Main.MyStudy.Output.MaxMuscleActivity'].shape
results['MaxMuscleActivity'].shape
"""
Explanation: Output behaves like the default Python list and dictionary types, but with extra convenience functionality:
End of explanation
"""
macrolist = ['load "Knee.any"',
'operation Main.MyStudy.Kinematic ',
'run',
'exit']
from anypytools import AnyPyProcess
app = AnyPyProcess( )
result = app.start_macro(macrolist);
result['ERROR']
"""
Explanation: Handling Errors
<br>
AnyPyTools will also catch Errors...
Here is a macro with a misspelled operation:
End of explanation
"""
%pycat "large_macro.anymcr"
from anypytools.macro_commands import Load, OperationRun, Dump, SetValue
Load("Knee.any")
SetValue('Main.Model.Parameter1', 100.3)
macrolist = [ Load('Knee.any'),
SetValue('Main.MyParameter', 10),
OperationRun('Main.MyStudy.InverseDynamics'),
Dump('Main.MyStudy.Output.MaxMuscleActivity')]
macrolist
from anypytools import AnyPyProcess
app = AnyPyProcess()
app.start_macro(macrolist);
"""
Explanation: Creating macros programmatically
<br>
Why would we want to generate macro commands?
Individual macros can be really long and complex
Parameter and sensitivity studies may require thousands of different macros
End of explanation
"""
from anypytools import AnyMacro
macrolist = AnyMacro( [ SetValue('Main.MyModel.MyParameter',8)] )
macrolist.number_of_macros = 2
macrolist
parameter_list = [2.2, 2.5, 2.7, 2.9, 3.1]
macrolist = [ SetValue('Main.MyModel.MyParameter',parameter_list)]
AnyMacro(macrolist, number_of_macros=5)
"""
Explanation: Creating many macros
We need a small helper class, AnyMacro, to wrap our macro list.
End of explanation
"""
patella_len = [0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08]
macro = [Load('Knee.any'),
SetValue('Main.MyModel.PatellaLigament.DriverPos', patella_len ),
OperationRun('Main.MyStudy.InverseDynamics'),
Dump('Main.MyStudy.Output.Abscissa.t'),
Dump('Main.MyModel.PatellaLigament.DriverPos'),
Dump('Main.MyStudy.Output.MaxMuscleActivity')]
parameter_study_macro = AnyMacro(macro, number_of_macros= 7 )
parameter_study_macro
from anypytools import AnyPyProcess
app = AnyPyProcess()
output = app.start_macro(parameter_study_macro)
%matplotlib inline
from matplotlib.pyplot import plot, title, xlabel, legend, ylabel
for data in output:
maxact = data['Main.MyStudy.Output.MaxMuscleActivity']
time = data['Main.MyStudy.Output.Abscissa.t']
ligament_len = data['Main.MyModel.PatellaLigament.DriverPos'][0]
plot(time, maxact, label = str(100* ligament_len)+' cm' )
title('Effect of changing patella tendon length')
xlabel('Time steps')
ylabel('Max muscle activity')
legend(bbox_to_anchor=(1.05, 1), loc=2);
"""
Explanation: A simple parameter study
<br>
<img src="https://github.com/AnyBody-Research-Group/AnyPyTools/blob/master/docs/Tutorial/knee.gif?raw=true" alt="Drawing" align="Right"
width=160 />
Combine everything in a parameter study
- Vary patella tendon length in toy model (2 cm to 8 cm)
Observe the effect on maximum muscle activity.
End of explanation
"""
from anypytools import AnyPyProcess
from anypytools.macro_commands import Load, OperationRun
app = AnyPyProcess(num_processes = 3)
macro = [Load("main.any"),
OperationRun('Main.Study.InverseDynamics') ]
macro
app.start_macro(macro, search_subdirs= "model[1-9].*main.any" );
"""
Explanation: Batch processing
<br>
The approach depends on how the AnyBody model is structured.
Setup with a single main file:
Load -> Modify parameters -> run -> save results
Every macro is different
Setup with multiple main files:
Each main file defines its own parameters
All macros are the same; the main files live in different folders
<img src="batch_process_folder_model.png" alt="Drawing" align="Left" />
<img src="batch_process_mainfile.png" alt="Drawing" align="Right" />
End of explanation
"""
import seaborn as sns
sns.set_context('talk')
sns.set_style('whitegrid');
%matplotlib inline
from matplotlib.pyplot import plot, title, xlabel, legend, ylabel
"""
Explanation: Other possiblities
<br>
<img src="MonteCarlo.svg"
style="height: 250px; margin: 0px 20px 0px 0px; float: right; border: 2px solid gray;"/>
Monte Carlo simulations
Latin hypercube sampling
Using external optimizers
Possible topics for a webcast in the fall.
For now... read the tutorial:
goo.gl/F6mCHC
<img src="https://dl.dropboxusercontent.com/u/1683635/store/tutorial.png"
style="height: 250px; margin: 0px 20px 0px 0px; float: middle; border: 2px solid gray;"/>
from scipy.stats.distributions import norm
from anypytools import AnyPyProcess, AnyMacro
from anypytools.macro_commands import Load, SetValue_random, OperationRun
app = AnyPyProcess( )
macro = AnyMacro(
Load( "Knee.any"),
SetValue_random('Main.MyModel.MyParameter',
norm(0.1, 0.04)),
OperationRun('Main.MyStudy.InverseDynamics') )
app.start_macro(macro.create_macros_MonteCarlo(1000));
End of explanation
"""
|
tleonhardt/CodingPlayground | dataquest/SQL_and_Databases/SQLite_Relations.ipynb | mit |
import sqlite3
# Conect to nominations.db
conn = sqlite3.connect('../data/nominations.db')
# Return the schema using "pragma table_info()"
query = "pragma table_info(nominations);"
schema = conn.execute(query).fetchall()
schema
# Return the first 10 rows using the SELECT and LIMIT statements
query = "SELECT * FROM nominations LIMIT 10;"
first_ten = conn.execute(query).fetchall()
# Since both schema and first_ten are lists, use a for loop to iterate
for item in schema:
print(item)
for row in first_ten:
print(row)
"""
Explanation: Introduction to the Data
n this project, we will walk through how to normalize our single table into multiple tables and how to create relations between them.
The Academy Awards, also known as the Oscars, is an annual awards ceremony hosted to recognize the achievements in the film industry. There are many different awards categories and the members of the academy vote every year to decide which artist or film should get the award. Each row in our data represents a nomination for an award. Recall that our database file, nominations.db, contains just the nominations table. This table has the following schema:
* Year - the year of the awards ceremony, integer type.
* Category - the category of award the nominee was nominated for, text type.
* Nominee - the person nominated for the award, text type.
* Movie - the movie the nominee participated in, text type.
* Character - the name of the character the nominee played, text type.
* Won - if this nominee won the award, integer type.
Let's now set up our environment and spend some time getting familiar with the data before we start normalizing it.
End of explanation
"""
# Create the ceremonies table
create_ceremonies = "create table ceremonies (id integer primary key, year integer, host text);"
conn.execute(create_ceremonies)
# Create the list of tuples, years_hosts
years_hosts = [(2010, "Steve Martin"),
(2009, "Hugh Jackman"),
(2008, "Jon Stewart"),
(2007, "Ellen DeGeneres"),
(2006, "Jon Stewart"),
(2005, "Chris Rock"),
(2004, "Billy Crystal"),
(2003, "Steve Martin"),
(2002, "Whoopi Goldberg"),
(2001, "Steve Martin"),
(2000, "Billy Crystal"),
]
# Use the Connection method executemany() to insert the values
insert_query = "insert into ceremonies (Year, Host) values (?,?);"
conn.executemany(insert_query, years_hosts)
# Verify that the ceremonies table was created and populated correctly
print(conn.execute("select * from ceremonies limit 10;").fetchall())
print(conn.execute("pragma table_info(ceremonies);").fetchall())
"""
Explanation: Creating the Ceremonies Table
Let's now add information on the host for each awards ceremony. Instead of adding a Host column to the nominations table and having lots of redundant data, we'll create a separate table called ceremonies which contains data specific to the ceremony itself.
Let's create a ceremonies table that contains the Year and Host for each ceremony and then set up a one-to-many relationship between ceremonies and nominations. In this screen, we'll focus on creating the ceremonies table and inserting the data we need and in the next guided step, we'll focus on setting up the one-to-many relationship.
The ceremonies table will contain 3 fields:
* id - unique identifier for each row, integer type.
* Year - the year of the awards ceremony, integer type.
* Host - the host of the awards ceremony, text type.
Before we can create and insert into the ceremonies table, we need to look up the host for each ceremony from 2000 to 2010. While we could represent each row as a tuple and write a SQL query with an INSERT statement to add each row to the ceremonies table, this is incredibly cumbersome.
The Python sqlite3 library comes with an executemany method that lets us easily mass-insert records into a table. The executemany method requires the records we want to insert to be represented as a list of tuples. We then just need to write a single INSERT query with placeholder elements and specify that we want the list of tuples to be dropped into the query.
Let's first create the list of tuples representing the data we want inserted and then we'll walk through the placeholder query we need to write. We'll skip over creating the ceremonies table for now since we've explored how to create a table earlier in the course.
We then need to write the INSERT query with placeholder values. Instead of having specific values in the query string, we use a question mark (?) to act as a placeholder in the values section of the query:
insert_query = "INSERT INTO ceremonies (Year, Host) VALUES (?,?);"
Since the placeholder elements (?) will be replaced by the values in years_hosts, you need to make sure the number of question marks matches the length of each tuple in years_hosts. Since each tuple has 2 elements, we need to have 2 question marks as the placeholder elements. We don't need to specify values for the id column since it's a primary key column. When inserting values, recall that SQLite automatically creates a unique primary key for each row.
We then call the executemany method and pass in insert_query as the first parameter and years_hosts as the second parameter:
conn.executemany(insert_query, years_hosts)
End of explanation
"""
# Turn on foreign key constraints
conn.execute("PRAGMA foreign_keys = ON;")
"""
Explanation: Foreign Key Constraints
Since we'll be creating relations using foreign keys, we need to turn on foreign key constraints. By default, if you insert a row into a table that contains one or multiple foreign key columns, the record will be successfully inserted even if the foreign key reference is incorrect.
For example, since the ceremonies table only contains the id values 1 to 10, inserting a row into nominations while specifying that the ceremony_id value be 11 will work and no error will be returned. This is problematic because if we try to actually join that row with the ceremonies table, the result set will be empty since the id value 11 doesn't map to any row in the ceremonies table. To prevent us from inserting rows with nonexistent foreign key values, we need to turn on foreign key constraints by running the following query:
PRAGMA foreign_keys = ON;
The above query needs to be run every time you connect to a database where you'll be inserting foreign keys. Whenever you try inserting a row into a table containing foreign key(s), SQLite will query the linked table to make sure that foreign key value exists. If it does, the transaction will continue as expected. If it doesn't, then an error will be returned and the transaction won't go through.
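As a quick, self-contained sanity check (using throwaway in-memory tables rather than our nominations.db), this is what the constraint looks like in action:

```python
import sqlite3

demo = sqlite3.connect(":memory:")
demo.execute("PRAGMA foreign_keys = ON;")
demo.execute("CREATE TABLE ceremonies (id INTEGER PRIMARY KEY, year INTEGER);")
demo.execute("CREATE TABLE noms (id INTEGER PRIMARY KEY, "
             "ceremony_id INTEGER REFERENCES ceremonies(id));")
demo.execute("INSERT INTO ceremonies (id, year) VALUES (1, 2010);")

demo.execute("INSERT INTO noms (ceremony_id) VALUES (1);")  # valid reference: inserted
try:
    demo.execute("INSERT INTO noms (ceremony_id) VALUES (11);")  # no ceremony with id 11
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the FOREIGN KEY constraint rejects the row
```

The first insert goes through because ceremony 1 exists; the second raises sqlite3.IntegrityError instead of silently inserting a dangling reference.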
End of explanation
"""
# Write and run the query to create the nominations_two table
create_nominations_two = '''create table nominations_two
(id integer primary key,
category text,
nominee text,
movie text,
character text,
won text,
ceremony_id integer,
foreign key(ceremony_id) references ceremonies(id));
'''
conn.execute(create_nominations_two)
# Write and run the query that returns the records from nominations
nom_query = '''
select ceremonies.id as ceremony_id, nominations.category as category,
nominations.nominee as nominee, nominations.movie as movie,
nominations.character as character, nominations.won as won
from nominations
inner join ceremonies
on nominations.year == ceremonies.year
;
'''
joined_nominations = conn.execute(nom_query).fetchall()
# Write a placeholder insert query that can insert values into nom2
insert_nominations_two = '''insert into nominations_two
(ceremony_id, category, nominee, movie, character, won)
values (?,?,?,?,?,?);
'''
# Use the Connection method executemany() to insert records
conn.executemany(insert_nominations_two, joined_nominations)
# Verify your work by returning the first 5 rows from nominations_two
print(conn.execute("select * from nominations_two limit 5;").fetchall())
"""
Explanation: Setting Up One-To-Many
The next step is to remove the Year column from nominations and add a new column, ceremony_id, that contains the foreign key reference to the id column in the ceremonies table. Unfortunately, we can't remove columns from an existing table in SQLite or change its schema. The goal of SQLite is to create an incredibly lightweight, open source database that contains a common, but reduced, set of features. While this has allowed SQLite to become the most popular database in the world, SQLite doesn't have the ability to heavily modify an existing table to keep the code base lightweight.
The only alterations we can make to an existing table are renaming it or adding a new column. This means that we can't just remove the Year column from nominations and add the ceremony_id column. We need to instead:
* create a new table nominations_two with the schema we want,
* populate nominations_two with the records we want,
* delete the original nominations table,
* rename nominations_two to nominations.
For nominations_two, we want the following schema:
* id: primary key, integer,
* category: text,
* nominee: text,
* movie: text,
* character: text,
* won: text,
* ceremony_id: foreign key reference to id column from ceremonies.
First, we need to select all the records from the original nominations table with the columns we want and use an INNER JOIN to add the id field from ceremonies for each row:
SELECT nominations.category, nominations.nominee, nominations.movie, nominations.character, nominations.won, ceremonies.id
FROM nominations
INNER JOIN ceremonies ON
nominations.year == ceremonies.year
;
Then we can write the placeholder insert query we need to insert these records into nominations_two. Let's create and populate the nominations_two table in this screen and we'll work through the rest in the next screen.
End of explanation
"""
# Write and run the query that deletes the nominations table
drop_nominations = "drop table nominations;"
conn.execute(drop_nominations)
# Write and run the query that renames nominations_two to nominations
rename_nominations_two = "alter table nominations_two rename to nominations;"
conn.execute(rename_nominations_two)
"""
Explanation: Deleting and Renaming Tables
We now need to delete the nominations table since we'll be using the nominations_two table moving forward. We can use the DROP TABLE statement to drop the original nominations table.
Once we drop this table, we can use the ALTER TABLE statement to rename nominations_two to nominations. Here's what the syntax looks like for that statement:
ALTER TABLE [current_table_name]
RENAME TO [future_table_name]
End of explanation
"""
# Create the movies table
create_movies = "create table movies (id integer primary key,movie text);"
conn.execute(create_movies)
# Create the actors table
create_actors = "create table actors (id integer primary key,actor text);"
conn.execute(create_actors)
# Create the movie_actors join table
create_movies_actors = '''create table movies_actors (id INTEGER PRIMARY KEY,
movie_id INTEGER references movies(id), actor_id INTEGER references actors(id));
'''
conn.execute(create_movies_actors)
"""
Explanation: Creating a Join Table
Creating a join table is no different than creating a regular one. To create the movies_actors join table we need to declare both of the foreign key references when specifying the schema:
CREATE TABLE movies_actors (
id INTEGER PRIMARY KEY,
movie_id INTEGER REFERENCES movies(id),
actor_id INTEGER REFERENCES actors(id)
);
End of explanation
"""
insert_movies = "insert into movies (movie) select distinct movie from nominations;"
insert_actors = "insert into actors (actor) select distinct nominee from nominations;"
conn.execute(insert_movies)
conn.execute(insert_actors)
print(conn.execute("select * from movies limit 5;").fetchall())
print(conn.execute("select * from actors limit 5;").fetchall())
"""
Explanation: Populating the movies and actors tables
End of explanation
"""
pairs_query = "select movie,nominee from nominations;"
movie_actor_pairs = conn.execute(pairs_query).fetchall()
join_table_insert = "insert into movies_actors (movie_id, actor_id) values ((select id from movies where movie == ?),(select id from actors where actor == ?));"
conn.executemany(join_table_insert,movie_actor_pairs)
print(conn.execute("select * from movies_actors limit 5;").fetchall())
# Close the database
conn.close()
"""
Explanation: Populating a join table
End of explanation
"""
|
DesignSafe-CI/adama_example | notebooks/Demo.ipynb | mit | cd demo
"""
Explanation: Adama example for DesignSafe-CI
This is an example of building an Adama service.
We use the Haiti Earthquake Database and we construct a couple of web services from the data hosted at https://nees.org/dataview/spreadsheet/haiti.
Setting up
The code for these services is in the directory demo:
End of explanation
"""
%load services/haiti/metadata.yml
"""
Explanation: We will construct two web services that will return the data in JSON format:
haiti: lets us query the database by building,
haiti_images: lets us retrieve the set of images for each building.
Each Adama service consists of two pieces of information:
the metadata that describes the service,
and the actual code.
This is an example of the metadata for the haiti service:
End of explanation
"""
%load services/haiti/main.py
"""
Explanation: The code for the service looks very simple:
End of explanation
"""
import adamalib
# your credentials here:
TOKEN = '474af9d41c8ecc873191ea97153857'
adama = adamalib.Adama('https://api.araport.org/community/v0.3',
token=TOKEN)
"""
Explanation: Interacting with Adama
To interact with Adama, we create an object with our credentials:
End of explanation
"""
adama.status
"""
Explanation: Now the adama object is connected to the server. We can check that the server is up:
End of explanation
"""
import services.haiti.main
services.haiti.main.search({'building': 'A001'}, adama)
"""
Explanation: Hooray!
Testing the new services locally
The services we are going to register in Adama can be tested first locally:
End of explanation
"""
adama['designsafe-dev'].services
"""
Explanation: Registering the services in Adama
Now we are ready to register the services in Adama.
We'll use the namespace designsafe-dev for testing purposes:
End of explanation
"""
import services.haiti.main # the haiti service
import services.haiti_images.main # the haiti_images service
haiti = adama['designsafe-dev'].services.add(services.haiti.main)
haiti_images = adama['designsafe-dev'].services.add(services.haiti_images.main)
haiti, haiti_images
"""
Explanation: To register a service, we just import its code and add it to the previous list
It may take a minute or two... ☕️
End of explanation
"""
haiti = adama['designsafe-dev'].haiti
haiti_images = adama['designsafe-dev'].haiti_images
"""
Explanation: The services are registered and can be accessed in https://araport.org.
Using the services
Now that the services are registered in Adama, and the Python objects haiti and haiti_images are connected to the remote services, we can use them as regular objects.
Let's recall. The web services are:
End of explanation
"""
x = haiti.search(building='A001')
# pretty print it
DataFrame(x[0].items())
"""
Explanation: Let's test the services. Note that now the code will be executed remotely in the Adama server:
End of explanation
"""
from IPython.display import Image
Image(data=haiti_images.search(building='A001', image=1).content)
"""
Explanation: We can display an image using the haiti_images service:
End of explanation
"""
full = haiti.list()
columns = ['Building', 'Latitude', 'Longitude', 'Priority Index [%]']
buildings = DataFrame([[row[col] for col in columns] for row in full if row['Latitude']],
columns=columns).convert_objects(convert_numeric=True)
buildings
"""
Explanation: Let's display the table of buildings together with their geographical coordinates and the priority index:
End of explanation
"""
plt.hist(buildings['Priority Index [%]'], bins=50);
"""
Explanation: We can operate on the data, for example doing a histogram of the priority index:
End of explanation
"""
import services.map_tools
services.map_tools.map_init()
services.map_tools.map_display(DataFrame.as_matrix(buildings)[:,:3],
token=TOKEN)
"""
Explanation: Let's display the buildings on a map, loading the images on demand:
End of explanation
"""
|
intel-analytics/BigDL | apps/ray/parameter_server/sharded_parameter_server.ipynb | apache-2.0 | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import ray
import time
"""
Explanation: This notebook is adapted from:
https://github.com/ray-project/tutorial/tree/master/examples/sharded_parameter_server.ipynb
Sharded Parameter Servers
GOAL: The goal of this exercise is to use actor handles to implement a sharded parameter server example for distributed asynchronous stochastic gradient descent.
Before doing this exercise, make sure you understand the concepts from the exercise on Actor Handles.
Parameter Servers
A parameter server is simply an object that stores the parameters (or "weights") of a machine learning model (this could be a neural network, a linear model, or something else). It exposes two methods: one for getting the parameters and one for updating the parameters.
In a typical machine learning training application, worker processes will run in an infinite loop that does the following:
1. Get the latest parameters from the parameter server.
2. Compute an update to the parameters (using the current parameters and some data).
3. Send the update to the parameter server.
The workers can operate synchronously (that is, in lock step), in which case distributed training with multiple workers is algorithmically equivalent to serial training with a larger batch of data. Alternatively, workers can operate independently and apply their updates asynchronously. The main benefit of asynchronous training is that a single slow worker will not slow down the other workers. The benefit of synchronous training is that the algorithm behavior is more predictable and reproducible.
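Stripped of Ray, the loop described above fits in a few lines of plain Python. This is only a serial sketch to fix ideas, using the same toy update rule that appears in the cells below:

```python
# A minimal single-process sketch of the parameter server + worker loop (no Ray).
class ParameterServer(object):
    def __init__(self, dim):
        self.parameters = [0.0] * dim

    def get_parameters(self):
        return list(self.parameters)

    def update_parameters(self, update):
        for i, u in enumerate(update):
            self.parameters[i] += u

def worker_step(ps):
    parameters = ps.get_parameters()               # 1. get the latest parameters
    update = [1e-3 * p + 1.0 for p in parameters]  # 2. compute an update
    ps.update_parameters(update)                   # 3. send the update back

ps = ParameterServer(4)
for _ in range(3):
    worker_step(ps)
print(ps.parameters)  # each entry ends up near 3.003
```

With Ray, the only real change is that ParameterServer becomes an actor and the worker loop runs as a remote task, so many workers can run this loop concurrently.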
End of explanation
"""
from bigdl.dllib.nncontext import init_spark_on_local, init_spark_on_yarn
import numpy as np
import os
hadoop_conf_dir = os.environ.get('HADOOP_CONF_DIR')
if hadoop_conf_dir:
sc = init_spark_on_yarn(
hadoop_conf=hadoop_conf_dir,
conda_name=os.environ.get("ZOO_CONDA_NAME", "zoo"), # The name of the created conda-env
num_executors=2,
executor_cores=4,
executor_memory="2g",
driver_memory="2g",
driver_cores=1,
extra_executor_memory_for_ray="3g")
else:
sc = init_spark_on_local(cores = 8, conf = {"spark.driver.memory": "2g"})
# It may take a while to distribute the local environment, including Python and Java, to the cluster
import ray
from bigdl.orca.ray import OrcaRayContext
ray_ctx = OrcaRayContext(sc=sc, object_store_memory="4g")
ray_ctx.init()
#ray.init(num_cpus=30, include_webui=False, ignore_reinit_error=True)
"""
Explanation: Init SparkContext
End of explanation
"""
dim = 10
@ray.remote
class ParameterServer(object):
def __init__(self, dim):
self.parameters = np.zeros(dim)
def get_parameters(self):
return self.parameters
def update_parameters(self, update):
self.parameters += update
ps = ParameterServer.remote(dim)
"""
Explanation: A simple parameter server can be implemented as a Python class in a few lines of code.
EXERCISE: Make the ParameterServer class an actor.
End of explanation
"""
@ray.remote
def worker(ps, dim, num_iters):
for _ in range(num_iters):
# Get the latest parameters.
parameters = ray.get(ps.get_parameters.remote())
# Compute an update.
update = 1e-3 * parameters + np.ones(dim)
# Update the parameters.
ps.update_parameters.remote(update)
# Sleep a little to simulate a real workload.
time.sleep(0.5)
# Test that worker is implemented correctly. You do not need to change this line.
ray.get(worker.remote(ps, dim, 1))
# Start two workers.
worker_results = [worker.remote(ps, dim, 100) for _ in range(2)]
"""
Explanation: A worker can be implemented as a simple Python function that repeatedly gets the latest parameters, computes an update to the parameters, and sends the update to the parameter server.
End of explanation
"""
print(ray.get(ps.get_parameters.remote()))
"""
Explanation: As the worker tasks are executing, you can query the parameter server from the driver and see the parameters changing in the background.
End of explanation
"""
@ray.remote
class ParameterServerShard(object):
def __init__(self, sharded_dim):
self.parameters = np.zeros(sharded_dim)
def get_parameters(self):
return self.parameters
def update_parameters(self, update):
self.parameters += update
total_dim = (10 ** 8) // 8  # This works out to 100MB (we have 12.5 million
                            # float64 values, which are each 8 bytes).
num_shards = 2 # The number of parameter server shards.
assert total_dim % num_shards == 0, ('In this exercise, the number of shards must '
'perfectly divide the total dimension.')
# Start some parameter servers.
ps_shards = [ParameterServerShard.remote(total_dim // num_shards) for _ in range(num_shards)]
assert hasattr(ParameterServerShard, 'remote'), ('You need to turn ParameterServerShard into an '
'actor (by using the ray.remote keyword).')
"""
Explanation: Sharding a Parameter Server
As the number of workers increases, the volume of updates being sent to the parameter server will increase. At some point, the network bandwidth into the parameter server machine or the computation done by the parameter server may be a bottleneck.
Suppose you have $N$ workers and $1$ parameter server, and suppose each of these is an actor that lives on its own machine. Furthermore, suppose the model size is $M$ bytes. Then sending all of the parameters from the workers to the parameter server will mean that $N * M$ bytes in total are sent to the parameter server. If $N = 100$ and $M = 10^8$, then the parameter server must receive ten gigabytes, which, assuming a network bandwidth of 10 gigabits per second, would take 8 seconds. This would be prohibitive.
On the other hand, if the parameters are sharded (that is, split) across $K = 100$ parameter servers, each living on a separate machine, then each parameter server needs to receive only 100 megabytes, which can be done in 80 milliseconds. This is much better.
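The back-of-the-envelope numbers above can be checked in a couple of lines (all values are the assumed figures from the text, not measurements):

```python
# Transfer-time arithmetic for the unsharded vs. sharded parameter server.
n_workers = 100
model_bytes = 10 ** 8              # M = 10^8 bytes per worker update
bandwidth = 10 * 10 ** 9 / 8.0     # 10 gigabits/s expressed in bytes/s

unsharded = n_workers * model_bytes / bandwidth        # one server receives everything
sharded = (n_workers * model_bytes / 100) / bandwidth  # split across 100 shards

print("one server: %.1f s, per shard: %.3f s" % (unsharded, sharded))
```

This reproduces the 8 seconds vs. 80 milliseconds comparison from the paragraph above.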
EXERCISE: The code below defines a parameter server shard class. Modify this class to make ParameterServerShard an actor. We will need to revisit this code soon and increase num_shards.
End of explanation
"""
@ray.remote
def worker_task(total_dim, num_iters, *ps_shards):
# Note that ps_shards are passed in using Python's variable number
# of arguments feature. We do this because currently actor handles
# cannot be passed to tasks inside of lists or other objects.
for _ in range(num_iters):
# Get the current parameters from each parameter server.
parameter_shards = [ray.get(ps.get_parameters.remote()) for ps in ps_shards]
assert all([isinstance(shard, np.ndarray) for shard in parameter_shards]), (
'The parameter shards must be numpy arrays. Did you forget to call ray.get?')
# Concatenate them to form the full parameter vector.
parameters = np.concatenate(parameter_shards)
assert parameters.shape == (total_dim,)
# Compute an update.
update = np.ones(total_dim)
# Shard the update.
update_shards = np.split(update, len(ps_shards))
# Apply the updates to the relevant parameter server shards.
for ps, update_shard in zip(ps_shards, update_shards):
ps.update_parameters.remote(update_shard)
# Test that worker_task is implemented correctly. You do not need to change this line.
ray.get(worker_task.remote(total_dim, 1, *ps_shards))
"""
Explanation: The code below implements a worker that does the following.
1. Gets the latest parameters from all of the parameter server shards.
2. Concatenates the parameters together to form the full parameter vector.
3. Computes an update to the parameters.
4. Partitions the update into one piece for each parameter server.
5. Applies the right update to each parameter server shard.
End of explanation
"""
num_workers = 4
# Start some workers. Try changing various quantities and see how the
# duration changes.
start = time.time()
ray.get([worker_task.remote(total_dim, 5, *ps_shards) for _ in range(num_workers)])
print('This took {} seconds.'.format(time.time() - start))
"""
Explanation: EXERCISE: Experiment by changing the number of parameter server shards, the number of workers, and the size of the data.
NOTE: Because these processes are all running on the same machine, network bandwidth will not be a limitation and sharding the parameter server will not help. To see the difference, you would need to run the application on multiple machines. There are still regimes where sharding a parameter server can help speed up computation on the same machine (by parallelizing the computation that the parameter server processes have to do). If you want to see this effect, you should implement a synchronous training application. In the asynchronous setting, the computation is staggered and so speeding up the parameter server usually does not matter.
End of explanation
"""
|
ykakihara/experiments | tech-circle9/chainer-natual-language-processing.ipynb | mit | import time
import math
import sys
import pickle
import copy
import os
import re
import numpy as np
from chainer import cuda, Variable, FunctionSet, optimizers
import chainer.functions as F
"""
Explanation: Introduction
Chainer is a framework that makes it easy to implement neural networks.
Here we apply a neural network to the field of language.
In this notebook you will build a language model.
A language model predicts which words are likely to follow a given word.
There are several kinds of language models, so we introduce them briefly here:
n-gram language model
A model built by simply counting up words; conceptually it is close to the frequency of each word in the data.
Neural language model
Maps each word's one-hot dictionary vector into a latent vector space and trains a neural network to predict the next word.
Recurrent neural language model
The basic algorithm is the same as the neural language model, but by feeding previously used words back into the input it can learn a language model that takes context into account. Unlike the plain neural language model, it can also exploit older information.
Below, we use Chainer to walk through the whole procedure, from preparing the data to actually building, training, and evaluating a language model:
Importing the libraries
Initial settings
Data input
[Defining the recurrent neural language model](#リカレントニューラル言語モデル設定(ハンズオン))
Settings before training
Parameter updates
Predicting text
If you want to use a GPU, the steps are summarized here:
Explaining the sample code for building a recurrent neural language model with Chainer
1. Importing the libraries
Language processing with Chainer makes use of a number of libraries.
End of explanation
"""
#-------------Explain7 in the Qiita-------------
n_epochs = 30
n_units = 625
batchsize = 100
bprop_len = 10
grad_clip = 0.5
data_dir = "data_hands_on"
checkpoint_dir = "cv"
#-------------Explain7 in the Qiita-------------
"""
Explanation: The main libraries we import are:
numpy: a library for complex numerical work such as matrix computation
chainer: the Chainer framework itself
2. Initial settings
The following are configured here:
* number of training epochs: n_epochs
* number of units in the neural network: n_units
* number of examples used per step of stochastic gradient descent: batchsize
* length of the word sequence used for truncated backpropagation: bprop_len
* threshold used for gradient clipping: grad_clip
* location of the training data: data_dir
* output location for the models: checkpoint_dir
End of explanation
"""
# input data
#-------------Explain1 in the Qiita-------------
def source_to_words(source):
line = source.replace("\n", " ").replace("\t", " ")
for spacer in ["(", ")", "{", "}", "[", "]", ",", ";", ":", "++", "!", "$", '"', "'"]:
line = line.replace(spacer, " " + spacer + " ")
words = [w.strip() for w in line.split()]
return words
def load_data():
vocab = {}
print ('%s/angular.js'% data_dir)
source = open('%s/angular_full_remake.js' % data_dir, 'r').read()
words = source_to_words(source)
freq = {}
dataset = np.ndarray((len(words),), dtype=np.int32)
for i, word in enumerate(words):
if word not in vocab:
vocab[word] = len(vocab)
freq[word] = 0
dataset[i] = vocab[word]
freq[word] += 1
print('corpus length:', len(words))
print('vocab size:', len(vocab))
return dataset, words, vocab, freq
#-------------Explain1 in the Qiita-------------
if not os.path.exists(checkpoint_dir):
os.mkdir(checkpoint_dir)
train_data, words, vocab, freq = load_data()
for f in ["frequent", "rarely"]:
print("{0} words".format(f))
print(sorted(freq.items(), key=lambda i: i[1], reverse=True if f == "frequent" else False)[:50])
"""
Explanation: 3. Data input
The processing that reads the downloaded training file into the program is wrapped in functions.
Unlike ordinary data, strings first need to be turned into numeric vectors.
The training data is read in as text.
The source code is split into words by the source_to_words function so it can be handled word by word.
An array is allocated to hold the word data.
The data is stored as a dictionary keyed by word, with a sequential vocabulary id as the value, and those ids are registered in the dataset array.
The training data, the number of words, and the vocabulary size are obtained,
and each of these is kept as array data.
End of explanation
"""
#-------------Explain2 in the Qiita-------------
class CharRNN(FunctionSet):
"""
This part defines the neural network.
From top to bottom: the input one-hot dictionary vector is embedded into the
number of hidden-layer units, then the input-to-hidden and hidden-to-hidden
connections of the first layer are set up.
The same is done for the second layer, and the output layer maps back to the
vocabulary size.
The initial parameters are set randomly between -0.08 and 0.08.
"""
def __init__(self, n_vocab, n_units):
"""
(This describes the forward pass implemented in forward_one_step below.)
The forward-pass input and answer are wrapped in Variable and passed in.
The input layer uses the embed defined above.
The hidden-layer input uses the l1_x defined above, passing dropout and the
previous hidden state as arguments.
The layer-1 cell state and h1_in are passed to lstm.
Layer 2 is written the same way; the output layer is defined without a state.
Each state is kept so it can be used as input on subsequent steps.
The output label is compared with the answer label, and the loss and the new
state are returned.
"""
super(CharRNN, self).__init__(
embed = F.EmbedID(n_vocab, n_units),
l1_x = F.Linear(n_units, 4*n_units),
l1_h = F.Linear(n_units, 4*n_units),
l2_h = F.Linear(n_units, 4*n_units),
l2_x = F.Linear(n_units, 4*n_units),
l3 = F.Linear(n_units, n_vocab),
)
for param in self.parameters:
param[:] = np.random.uniform(-0.08, 0.08, param.shape)
def forward_one_step(self, x_data, y_data, state, train=True, dropout_ratio=0.5):
"""
(This describes the predict method defined below.)
predict is this forward pass with the dropout calls removed, written as a
separate method for prediction. F.dropout has a train argument and does
nothing when train is False, so we could switch that argument between
training and prediction instead, but here the two are written separately to
keep the distinction explicit.
"""
x = Variable(x_data, volatile=not train)
t = Variable(y_data, volatile=not train)
h0 = self.embed(x)
h1_in = self.l1_x(F.dropout(h0, ratio=dropout_ratio, train=train)) + self.l1_h(state['h1'])
c1, h1 = F.lstm(state['c1'], h1_in)
h2_in = self.l2_x(F.dropout(h1, ratio=dropout_ratio, train=train)) + self.l2_h(state['h2'])
c2, h2 = F.lstm(state['c2'], h2_in)
y = self.l3(F.dropout(h2, ratio=dropout_ratio, train=train))
state = {'c1': c1, 'h1': h1, 'c2': c2, 'h2': h2}
return state, F.softmax_cross_entropy(y, t)
def predict(self, x_data, state):
x = Variable(x_data, volatile=True)
h0 = self.embed(x)
h1_in = self.l1_x(h0) + self.l1_h(state['h1'])
c1, h1 = F.lstm(state['c1'], h1_in)
h2_in = self.l2_x(h1) + self.l2_h(state['h2'])
c2, h2 = F.lstm(state['c2'], h2_in)
y = self.l3(h2)
state = {'c1': c1, 'h1': h1, 'c2': c2, 'h2': h2}
return state, F.softmax(y)
"""
Initializes the state.
"""
def make_initial_state(n_units, batchsize=100, train=True):
return {name: Variable(np.zeros((batchsize, n_units), dtype=np.float32),
volatile=not train)
for name in ('c1', 'h1', 'c2', 'h2')}
#-------------Explain2 in the Qiita-------------
"""
Explanation: 4. Defining the recurrent neural language model (hands-on)
This is where the RNNLM (recurrent neural language model) is set up.
You are free to change the model in this part.
The goal here is to grasp what is distinctive about a recurrent neural language model.
EmbedID performs a matrix transformation that turns the sparse one-hot vectors into dense vectors. It converts the dictionary data into data with as many dimensions as there are input units (a mapping into a latent vector space).
The output is four times the unit count because the LSTM uses the input together with the input gate, output gate, and forget gate. If you want to know more about the LSTM's clever tricks, see:
http://www.slideshare.net/nishio/long-shortterm-memory
h1_in = self.l1_x(F.dropout(h0, ratio=dropout_ratio, train=train)) + self.l1_h(state['h1']) realizes the LSTM recurrence by feeding the hidden state kept from the previous step back into the hidden layer.
F.dropout expresses how strongly units are pruned by dropout while past information is retained, which suppresses overfitting. For more on dropout, see:
http://olanleed.hatenablog.com/entry/2013/12/03/010945
c1, h1 = F.lstm(state['c1'], h1_in) is the clever LSTM trick that lets the recurrent net learn nicely without its memory collapsing. See the LSTM link above for details.
return state, F.softmax_cross_entropy(y, t) is where the predicted word and the actual word are compared and the loss is updated. The softmax function is used because it lets each output be decided with all inputs to the layer before the output layer taken into account, which is why it is generally used for the output-layer computation.
A predict method is also implemented: given input data and a state, it returns the next word and the new state.
The function that initializes the model state is also defined here.
Now code up the cell below!!!!
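A tiny illustration of what the embedding step does (plain Python, no Chainer; the matrix W is made up): an embedding layer is just a lookup table, and multiplying a one-hot vector by a weight matrix selects a single row of it.

```python
# Toy embedding lookup: one-hot times W equals picking a row of W.
n_vocab, n_units = 4, 3
W = [[0.1 * (r * n_units + c) for c in range(n_units)] for r in range(n_vocab)]

def embed(word_id):
    return W[word_id]  # the dense vector for this word

def one_hot_matmul(word_id):
    onehot = [1.0 if i == word_id else 0.0 for i in range(n_vocab)]
    return [sum(onehot[r] * W[r][c] for r in range(n_vocab)) for c in range(n_units)]

print(embed(2))                        # row 2 of W
print(one_hot_matmul(2) == embed(2))   # → True
```

EmbedID does this lookup efficiently (and learns W), instead of materializing the sparse one-hot vectors.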
End of explanation
"""
# Prepare RNNLM model
model = CharRNN(len(vocab), n_units)
optimizer = optimizers.RMSprop(lr=2e-3, alpha=0.95, eps=1e-8)
optimizer.setup(model.collect_parameters())
"""
Explanation: Preparing the RNNLM (setting up the recurrent neural language model)
The recurrent neural language model we defined is instantiated here.
RMSprop is used as the optimization method:
http://qiita.com/skitaoka/items/e6afbe238cd69c899b2a
RMSprop adds to a weight when the gradient is negative and subtracts from it when the gradient is positive, and it controls how strongly this addition or subtraction is applied. It is basically designed so that the steeper the gradient becomes, the more gently the update is performed. alpha is the parameter that decays the influence of past gradients, lr is the parameter that scales the influence of the gradient, and eps is introduced to avoid division by zero.
The initial parameters were set between -0.08 and 0.08 in the model definition above.
End of explanation
"""
whole_len = train_data.shape[0]
jump = whole_len // batchsize
epoch = 0
start_at = time.time()
cur_at = start_at
state = make_initial_state(n_units, batchsize=batchsize)
accum_loss = Variable(np.zeros((), dtype=np.float32))
cur_log_perp = np.zeros(())
"""
Explanation: 5. Settings before training
Get the size of the training data
Set the jump width (training does not proceed strictly sequentially)
Initialize the perplexity accumulator to 0
Record the start time
Set the initial state as the current state
Initialize the state
Initialize the loss to 0
End of explanation
"""
N = 50
for i in range(int(jump * n_epochs)):
#-------------Explain4 in the Qiita-------------
x_batch = np.array([train_data[(jump * j + i) % whole_len]
for j in range(batchsize)])
y_batch = np.array([train_data[(jump * j + i + 1) % whole_len]
for j in range(batchsize)])
state, loss_i = model.forward_one_step(x_batch, y_batch, state, dropout_ratio=0.7)
accum_loss += loss_i
cur_log_perp += loss_i.data.reshape(())
if (i + 1) % bprop_len == 0: # Run truncated BPTT
now = time.time()
cur_at = now
#print('{}/{}, train_loss = {}, time = {:.2f}'.format((i + 1)/bprop_len, jump, accum_loss.data / bprop_len, now-cur_at))
optimizer.zero_grads()
accum_loss.backward()
accum_loss.unchain_backward() # truncate
accum_loss = Variable(np.zeros((), dtype=np.float32))
optimizer.clip_grads(grad_clip)
optimizer.update()
if (i + 1) % N == 0:
perp = math.exp(cuda.to_cpu(cur_log_perp) / N)
print('iter {} training perplexity: {:.2f} '.format(i + 1, perp))
fn = ('%s/charrnn_epoch_%i.chainermodel' % (checkpoint_dir, epoch))
pickle.dump(copy.deepcopy(model).to_cpu(), open(fn, 'wb'))
cur_log_perp.fill(0)
if (i + 1) % jump == 0:
epoch += 1
#-------------Explain4 in the Qiita-------------
sys.stdout.flush()
"""
Explanation: 6. Parameter updates (mini-batch training)
Training is performed with mini-batches.
x_batch = np.array([train_data[(jump * j + i) % whole_len] for j in range(batchsize)]) is a little tricky, so picture the data as one long array of characters.
The index j, multiplied by jump (the size of the whole data set divided by the batch size), moves each read position forward by one batch-sized stride, while the index i steps through the characters within that stride.
The modulo by whole_len wraps (jump * j + i) back to the start of the array when it would run past the end of the data.
* y_batch = np.array([train_data[(jump * j + i + 1) % whole_len] for j in range(batchsize)]) supplies the character one position ahead of x as the training target.
* state, loss_i = model.forward_one_step(x_batch, y_batch, state, dropout_ratio=0.5) computes the loss and the new state; the dropout ratio, which helps prevent overfitting, can also be set here.
* if (i + 1) % bprop_len == 0 controls how many past characters are kept when backpropagating. The larger bprop_len is, the more history is retained, but memory usage grows, so it must be set to a value appropriate for the task.
* For details on bprop_len, see truncated backpropagation through time (truncated BPTT).
* optimizer.clip_grads(grad_clip) applies gradient clipping, which keeps the gradient magnitudes bounded and stabilizes training.
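The index arithmetic described above can be checked on a toy array (the sizes here are made up for illustration; `whole_len`, `jump`, `batchsize`, and `i` mirror the variables in the training loop):

```python
import numpy as np

train_data = np.arange(20)      # toy "corpus" of 20 character ids
batchsize = 4
whole_len = train_data.shape[0]
jump = whole_len // batchsize   # each of the 4 read positions starts jump=5 apart

i = 0
x_batch = np.array([train_data[(jump * j + i) % whole_len] for j in range(batchsize)])
y_batch = np.array([train_data[(jump * j + i + 1) % whole_len] for j in range(batchsize)])
# x_batch -> [ 0  5 10 15]: one read position per jump-spaced stripe
# y_batch -> [ 1  6 11 16]: the next character for each read position
```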
End of explanation
"""
# load model
#-------------Explain6 in the Qiita-------------
model = pickle.load(open("cv/charrnn_epoch_0.chainermodel", 'rb'))
#-------------Explain6 in the Qiita-------------
n_units = model.embed.W.shape[1]
"""
Explanation: 7. Generating text
Prediction uses a trained model in two steps:
Choose the model to load.
Predict a character sequence.
To change which model is used for prediction, edit the model path in the code in this notebook.
The trained models are saved in the cv folder; there may not be many of them yet, but take a look.
End of explanation
"""
# initialize generator
state = make_initial_state(n_units, batchsize=1, train=False)
index = np.random.randint(0, len(vocab), 1)[0]
ivocab = {v:k for k, v in vocab.items()}
sampling_range = 5
for i in range(1000):
if ivocab[index] in ["}", ";"]:
sys.stdout.write(ivocab[index] + "\n")
else:
sys.stdout.write(ivocab[index] + " ")
#-------------Explain7 in the Qiita-------------
state, prob = model.predict(np.array([index], dtype=np.int32), state)
#index = np.argmax(prob.data)
index = np.random.choice(prob.data.argsort()[0,-sampling_range:][::-1], 1)[0]
#-------------Explain7 in the Qiita-------------
print("")
"""
Explanation: state, prob = model.predict(prev_char, state) obtains the predicted probabilities and the updated state; the state is kept because it is fed into the next prediction.
index = np.argmax(cuda.to_cpu(prob.data)) would pick the predicted character: cuda.to_cpu(prob.data) gives the probability weight of each character, and the index of the most probable one is returned. (The code above instead samples randomly from the top sampling_range candidates.)
sys.stdout.write(ivocab[index] + " ") prints the predicted character.
prev_char = np.array([index], dtype=np.int32) stores the previous character for use in the next prediction.
End of explanation
"""
|
keflavich/pyspeckit | examples/AmmoniaLevelPopulation.ipynb | mit | # This is a test to show what happens if you add lines vs. computing a single optical depth per channel
from pyspeckit.spectrum.models.ammonia_constants import (line_names, freq_dict, aval_dict, ortho_dict,
voff_lines_dict, tau_wts_dict)
import numpy as np
from astropy import constants
from astropy import units as u
import pylab as pl
linename = 'oneone'
xarr_v = (np.linspace(-25,25,1000)*u.km/u.s)
xarr = xarr_v.to(u.GHz, u.doppler_radio(freq_dict['oneone']*u.Hz))
tauprof = np.zeros(xarr.size)
true_prof = np.zeros(xarr.size)
width = 0.1
xoff_v = 0
ckms = constants.c.to(u.km/u.s).value
pl.figure(figsize=(12,12))
pl.clf()
for ii,tau_tot in enumerate((0.001, 0.1, 1, 10,)):
tau_dict = {'oneone':tau_tot}
voff_lines = np.array(voff_lines_dict[linename])
tau_wts = np.array(tau_wts_dict[linename])
lines = (1-voff_lines/ckms)*freq_dict[linename]/1e9
tau_wts = tau_wts / (tau_wts).sum()
nuwidth = np.abs(width/ckms*lines)
nuoff = xoff_v/ckms*lines
    # tau array; reset both the optical depth and the summed profile for each tau_tot
    tauprof = np.zeros(len(xarr))
    true_prof = np.zeros(len(xarr))
for kk,nuo in enumerate(nuoff):
tauprof_ = (tau_dict[linename] * tau_wts[kk] *
np.exp(-(xarr.value+nuo-lines[kk])**2 /
(2.0*nuwidth[kk]**2)))
tauprof += tauprof_
true_prof += (1-np.exp(-tauprof_))
ax = pl.subplot(4,1,ii+1)
ax.plot(xarr_v, 1 - np.exp(-tauprof), label=str(tau_tot), zorder=20, linewidth=1)
ax.plot(xarr_v, true_prof, label=str(tau_tot), alpha=0.7, linewidth=2)
ax.plot(xarr_v, true_prof-(1-np.exp(-tauprof)) - tau_tot/20, linewidth=1)
pl.title(str(tau_tot))
"""
Explanation: Notes on the Ammonia model
Exact equation from the source code:
population_upperstate = lin_ntot * orthoparafrac * partition/(Z.sum())
tau_dict[linename] = (population_upperstate /
(1. + np.exp(-h*frq/(kb*tkin) ))*ccms**2 /
(8*np.pi*frq**2) * aval *
(1-np.exp(-h*frq/(kb*tex))) /
(width/ckms*frq*np.sqrt(2*np.pi)) )
\begin{equation}
\tau = N_{tot} g_{opr} Z_{upper} \frac{A_{ij} c^2}{8\pi\nu^2}
\left(1-\exp{ \frac{-h \nu}{k_B T_{ex}} } \right)
\left(1+\exp{\frac{-h \nu}{k_B T_K}}\right)
\left((2\pi)^{1/2} \nu \sigma_\nu / c\right)^{-1}
\end{equation}
Equation 16 from Rosolowsky et al 2008:
$$N(1,1) = \frac{8 \pi k \nu_0^2}{h c^3} \frac{1}{A_{1,1}} \sqrt{2\pi}\sigma_\nu (T_{ex}-T_{bg})\tau$$
Rearranges to:
$$\tau = N(1,1) \frac{h c^3}{8\pi k \nu_0^2} A_{1,1} \frac{1}{\sqrt{2 \pi} \sigma_\nu} \left(T_{ex}-T_{bg}\right)^{-1}$$
Equation A4 of Friesen et al 2009:
$$N(1,1) = \frac{8\pi\nu^2}{c^2} \frac{g_1}{g_2} \frac{1}{A_{1,1}} \frac{1+\exp\left(-h\nu_0/k_B T_{ex}\right)}{1-\exp\left(-h \nu_0/k_B T_{ex}\right)} \int \tau(\nu) d\nu$$
Equation 98 of Mangum & Shirley 2015:
$$N_{tot} = \frac{3 h}{8 \pi \mu^2 R_i} \frac{J_u(J_u+1)}{K^2}
\frac{Q_{rot}}{g_J g_K g_I} \frac{\exp\left(E_u/k_B T_{ex}\right)}{\exp\left(h \nu/k_B T_{ex}\right) - 1}
\left[\frac{\int T_R dv}{f\left(J_\nu(T_{ex})-J_\nu(T_B)\right) }\right]$$
From Scratch
$$\tau_\nu = \int \alpha_\nu ds$$
$$\alpha_\nu = \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left(1-\frac{g_l n_u}{g_u n_l}\right) \phi_\nu$$
Excitation temperature:
$$T_{ex} \equiv \frac{h\nu_0/k_b}{\ln \frac{n_l g_u}{n_u g_l} } $$
$\nu_0$ = rest frequency of the line
Rearranges to:
$$ \frac{n_l g_u}{n_u g_l} = \exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)$$
Boltzman distribution:
$$ \frac{n_u}{n_l} = \frac{g_u}{g_l} \exp\left(\frac{-h \nu_0}{k_B T}\right)$$
where T is a thermal equilibrium temperature
Rearranges to:
$$ 1-\frac{n_u g_l}{n_l g_u} = 1-\exp\left(\frac{-h \nu_0}{k_B T}\right)$$
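These two relations can be checked numerically: populate the two levels with a Boltzmann distribution at an assumed temperature, then recover that temperature from the definition of $T_{ex}$. The frequency below is the NH3 (1,1) rest frequency; the equal statistical weights are illustrative:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J / K
nu0 = 23.6944955e9   # NH3 (1,1) rest frequency, Hz
g_u = g_l = 1.0      # inversion doublet statistical weights (illustrative)

def tex_from_pops(n_u, n_l):
    # T_ex = (h nu0 / k_B) / ln(n_l g_u / (n_u g_l))
    return (h * nu0 / kB) / math.log(n_l * g_u / (n_u * g_l))

T = 15.0                                            # assumed thermal temperature, K
n_u = (g_u / g_l) * math.exp(-h * nu0 / (kB * T))   # Boltzmann ratio with n_l = 1
assert abs(tex_from_pops(n_u, 1.0) - T) < 1e-9      # T_ex recovers T
```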
Column Density
$$N_u \equiv \int n_u ds$$
$$N_l \equiv \int n_l ds$$
Starting to substitute previous equations into each other:
$$\tau_\nu d\nu= \alpha_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left(1-\frac{g_l n_u}{g_u n_l}\right) \phi_\nu d\nu$$
$$\frac{g_u}{g_l}N_l = N_u\exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)$$
First substitution is the Boltzmann distribution, with $T_{ex}$ for T
$$\int \tau_\nu d\nu = \int \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right] \phi_\nu d\nu $$
Second is the $N_l$ - $N_u$ relation:
$$\int \tau_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} A_{ul} N_u\left[\exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)\right] \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T}\right) \right] \int \phi_\nu d\nu $$
Then some simplification:
$$\int \tau_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} A_{ul} N_u \left[ \exp\left(\frac{h \nu_0}{k_B T}\right) - 1 \right] \int \phi_\nu d\nu $$
$$A_{ul} = \frac{64\pi^4\nu_0^3}{3 h c^3} \left|\mu_{lu}\right|^2$$
Becomes, via some manipulation, equation 29 of Mangum & Shirley 2015:
$$N_u = \frac{3 h c}{8\pi^3 \nu \left|\mu_{lu}\right|^2} \left[\exp\left(\frac{h\nu}{k_B T_{ex}}\right) -1\right]^{-1} \int \tau_\nu d\nu$$
where I have used $T_{ex}$ instead of $T$ here because that is one of the substitutions invoked (quietly) in their derivation. There is some sleight-of-hand regarding assuming $N_l = n_l$ that essentially assumes $T_{ex}$ is constant along the line of sight, but that is fine.
(Equation 30 is the same as this one, but with $dv$ instead of $d\nu$ units)
Solve for tau again (because that's what's implemented in the code):
$$\mathrm{"tau"} = \int \tau_\nu d\nu = N_u \frac{
c^2 A_{ul}}{8\pi\nu_0^2} \left[\exp\left(\frac{h\nu}{k_B T_{ex}}\right) -1\right] $$
The key difference from Erik's derivation is that this is $N_u$, but he has defined $N_{(1,1)}= N_u + N_l$.
So, we get $N_l$ the same way as above:
$$N_l = \frac{8\pi\nu_0^2}{c^2} \frac{g_l}{g_u} A_{ul}^{-1} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right]^{-1} \int \tau d\nu$$
$$N_l = \frac{3 h c}{8 \pi^3 \nu \left|\mu_{lu}\right|^2} \frac{g_l}{g_u} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right]^{-1} \int \tau d\nu$$
Added together:
$$N_u + N_l = \frac{3 h c}{8 \pi^3 \nu \left|\mu_{lu}\right|^2} \frac{\frac{g_l}{g_u} +\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)}{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} \int \tau d\nu$$
We can solve that back for tau, which is what Erik has done:
$$\int \tau d\nu = (N_u + N_l) \frac{8 \pi^3 \nu \left|\mu_{lu}\right|^2}{3 h c} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {\frac{g_l}{g_u} +\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
$$=(N_u + N_l) \frac{g_u}{g_l}\frac{8 \pi^3 \nu \left|\mu_{lu}\right|^2}{3 h c} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
$$=(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
now identical to Erik's equation.
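As a numerical sanity check (with an assumed $T_{ex}$ and illustrative values for $A_{ul}$ and $N_u$), the $N_u$-only form and the $(N_u + N_l)$ form give the same integrated $\tau$ when the level populations follow a Boltzmann distribution at $T_{ex}$:

```python
import math

h = 6.62607015e-34   # J s
kB = 1.380649e-23    # J / K
c = 2.99792458e8     # m / s

nu0 = 23.6944955e9   # NH3 (1,1) rest frequency, Hz
A_ul = 1.7e-7        # illustrative Einstein A for the (1,1) inversion, s^-1
g_u = g_l = 1.0
Tex = 8.0            # assumed excitation temperature, K
N_u = 1e17           # assumed upper-state column, m^-2

x = h * nu0 / (kB * Tex)

# integrated tau from N_u alone (Mangum & Shirley eqn 29, rearranged)
tau_u = N_u * c**2 * A_ul / (8 * math.pi * nu0**2) * (math.exp(x) - 1)

# N_l from the Boltzmann relation at Tex, then tau from N_u + N_l (the final form above)
N_l = N_u * (g_l / g_u) * math.exp(x)
tau_tot = ((N_u + N_l) * (g_u / g_l) * A_ul * c**2 / (8 * math.pi * nu0**2)
           * (1 - math.exp(-x)) / (1 + (g_u / g_l) * math.exp(-x)))

assert abs(tau_u - tau_tot) / tau_u < 1e-9
```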
This is actually a problem, because $N_u$ is related to $N_{tot}$ via the partition function, but there is some double-counting going on if we try to relate $N_{(1,1)}$ to $N_{tot}$ with the same equation.
So, to reformulate the equations in pyspeckit using the appropriate values, we want to use both the partition function (calculated using $T_{kin}$) and $N_u$.
Eqn 31:
$$N_u = N_{tot} \frac{g_u}{Q_{rot}} \exp\left(\frac{-E_u}{k_B T_{kin}}\right)$$
is implemented correctly in pyspeckit:
population_upperstate = lin_ntot * orthoparafrac * partition/(Z.sum())
where partition is
$$Z_i(\mathrm{para}) = (2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{kin}}\right]$$
$$Z_i(\mathrm{ortho}) = 2(2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{kin}}\right]$$
...so I'm assuming (haven't checked) that $E_u = h (B_0 J (J+1) + (C_0-B_0)J^2)$
Note that the leading "2" above cancels out in the Z/sum(Z), so it doesn't matter if it's right or not. I suspect, though, that the 2 belongs in front of both the para and ortho states, but it should be excluded for the J=0 case.
An aside by Erik Rosolowsky
(Note May 16, 2018: I believe this was incorporated into the above analysis)
EWR: The above equation is problematic because it relates the total column density to the $(J,J)$ state which is the equivalent of the $N_{(1,1)}$ term. In the notation above $N_{(1,1)} = N_u + N_l$, so to get this right, you need to consider the inversion transition splitting on top of the total energy of the state so that
$$ E_u = h (B_0 J (J+1) + (C_0-B_0)J^2) + \Delta E_{\mathrm{inv}}, g_u = 1 $$
and
$$ E_l = h (B_0 J (J+1) + (C_0-B_0)J^2) - \Delta E_{\mathrm{inv}}, g_l = 1 $$
or, since the splitting is small compared to the rotational energy (1 K compared to > 20 K), then
$$Z_J \approx 2 (2J + 1) \exp\left[ \frac{ -h (B_0 J (J+1) + (C_0-B_0)J^2)}{k_B T_{\mathrm{rot}}}\right]$$
where the leading 2 accounts for the internal inversion states. Since this 2 appears in all the terms, it cancels out in the sum. Note that I have also changed the $T_{\mathrm{kin}}$ to $T_{\mathrm{rot}}$ since these two aren't the same and it is the latter which establishes the level populations.
Returning to the above, I would then suggest
$$N_{(J,J)} = N_{tot} \frac{Z_J}{\sum_j Z_j} $$
Is the treatment of optical depth correct?
May 16, 2018: https://github.com/pyspeckit/pyspeckit/blob/725746f517e9bdcc22b83f4f9d6c9b8666e0a99e/pyspeckit/spectrum/models/ammonia.py
In this version, we compute the optical depth with the code:
```
for kk,nuo in enumerate(nuoff):
    tauprof_ = (tau_dict[linename] * tau_wts[kk] *
                np.exp(-(xarr.value+nuo-lines[kk])**2 /
                        (2.0*nuwidth[kk]**2)))
if return_components:
components.append(tauprof_)
tauprof += tauprof_
```
The total tau is normalized such that $\Sigma_{hf}\tau_{hf} = \tau_{tot}$ for each line, i.e., the hyperfine $\tau$s sum to the tau value specified for the line.
The question Nico raised is, should we be computing the synthetic spectrum as $1-e^{\Sigma(\tau_{hf,\nu})}$ or $\Sigma(1-e^{\tau_{hf,\nu}})$?
The former is correct: we only have one optical depth per frequency bin. It doesn't matter what line the optical depth comes from.
End of explanation
"""
from astropy import units as u
from astropy import constants
freq = 23*u.GHz
def tau_wrong(tkin, tex):
return (1-np.exp(-constants.h * freq/(constants.k_B*tkin)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
def tau_right(tex):
return (1-np.exp(-constants.h * freq/(constants.k_B*tex)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
tkin = np.linspace(5,40,101)*u.K
tex = np.linspace(5,40,100)*u.K
grid = np.array([[tau_wrong(tk,tx)/tau_right(tx) for tx in tex] for tk in tkin])
%matplotlib inline
import pylab as pl
pl.imshow(grid, cmap='hot', extent=[5,40,5,40])
pl.xlabel("Tex")
pl.ylabel("Tkin")
pl.colorbar()
pl.contour(tex, tkin, grid, levels=[0.75,1,1/0.75], colors=['w','w','k'])
"""
Explanation: Below are numerical checks for accuracy
Some numerical checks: How bad was the use of Tkin instead of Tex in the $\tau$ equation?
$$(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
End of explanation
"""
def nunlnu_error(Tkin):
return 1+np.exp(-constants.h * freq / (constants.k_B * Tkin))
pl.plot(tkin.value, nunlnu_error(tkin))
"""
Explanation: So the error could be 50%-700% over a somewhat reasonable range. That's bad, and it affects the temperature estimates. However, the effect on temperature estimates should be pretty small, since each line will be affected in the same way. The biggest effect will be on the column density.
But, is this error at all balanced by the double-counting problem?
Because we were using the partition function directly, it's not obvious. I was assuming that we were using the equation with $N_u$ as the leader, but we were using $N_u+N_l$. i.e., I was using this equation:
$$\int \tau d\nu =(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
but with $N_u$ in place of $N_u + N_l$.
The magnitude of the error can therefore be estimated by computing $(N_u+N_l)/N_u = 1 + \frac{N_l}{N_u}$.
We can use the Boltzmann distribution to compute this error, then:
$$ \frac{n_u}{n_l} = \frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T}\right)$$
End of explanation
"""
from pyradex import Radex
from astropy import constants, units as u
R = Radex(species='p-nh3', column=1e13, collider_densities={'pH2':1e4}, temperature=20)
tbl = R(collider_densities={'ph2': 1e4}, temperature=20, column=1e13)
tbl[8:10]
# we're comparing the upper states since these are the ones that are emitting photons
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
trot
tbl['Tex'][8:10].mean()
"""
Explanation: So we were always off by a factor very close to 2. The relative values of $\tau$ should never have been affected by this issue.
It will be more work to determine exactly how much the T_K and column estimates were affected.
New work in May 2016: T_{rot}
Comparing Trot and Tkin. If we start with the equation that governs level populations,
$$N_u = N_{tot} \frac{g_u}{Q_{rot}} \exp\left(\frac{-E_u}{k_B T_{kin}}\right)$$
we get
$$N_u / N_l = \frac{g_u}{g_l} \exp\left(\frac{-E_u}{k_B T_{kin}} + \frac{E_l}{k_B T_{kin}}\right)$$
where we really mean $T_{rot}$ instead of $T_{kin}$ here as long as we're talking about just two levels. This gives us a definition
$$T_{rot} = \left(\frac{E_l-E_u}{k_B}\right)\left[\ln\left(\frac{N_u g_l}{N_l g_u}\right)\right]^{-1}$$
which is the rotational temperature for a two-level system... which is just a $T_{ex}$, but governing non-radiatively-coupled levels.
So, for example, if we want to know $T_{rot}$ for the 2-2 and 1-1 lines at $n=10^4$ and $T_{kin}=20$ K:
End of explanation
"""
dT_oneone = -(constants.h * u.Quantity(tbl['frequency'][8], u.GHz)/constants.k_B).to(u.K)
print("delta-T for 1-1_upper - 1-1_lower: {0}".format(dT_oneone))
tex = (dT_oneone *
np.log((tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[8]) /
(tbl['lowerlevelpop'][8] * R.upperlevel_statisticalweight[8]))**-1
)
print("Excitation temperature computed is {0} and should be {1}".format(tex.to(u.K), tbl['Tex'][8]))
"""
Explanation: Pause here
$T_{rot} = 60$ K for $T_{kin}=25$ K? That doesn't seem right. Is it possible RADEX is doing something funny with level populations?
ERIK I SOLVED IT
I had left out the $^{-1}$ in the code. Oops!
End of explanation
"""
T0=tbl['upperstateenergy'][9]-tbl['upperstateenergy'][8]
T0
def tr_swift(tk, T0=T0):
return tk*(1+tk/T0 * np.log(1+0.6*np.exp(-15.7/tk)))**-1
"""
Explanation: Moving on: comparison to Swift et al 2005
Swift et al 2005 eqn A6
$$T_R = T_K \left[ 1 + \frac{T_K}{T_0} \ln \left[1+0.6\exp\left( -15.7/T_K \right)\right] \right]^{-1}$$
where $T_0=41.18$ K
End of explanation
"""
tr_swift(20, T0=-41.18)
tr_swift(20, T0=41.18)
tr_swift(20, T0=41.5)
def trot_radex(column=1e13, density=1e4, tkin=20):
tbl = R(collider_densities={'ph2': density}, temperature=tkin, column=column)
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
return trot
"""
Explanation: Note that the approximation "works" - gets something near 20 - for positive or negative values of T0 (but see below)
End of explanation
"""
trot_radex(tkin=20)
def tex_radex(column=1e13, density=1e4, tkin=20, lineno=8):
""" used in tests below """
tbl = R(collider_densities={'ph2': density}, temperature=tkin, column=column)
return tbl[lineno]['Tex']
%matplotlib inline
import pylab as pl
cols = np.logspace(12,15)
trots = [trot_radex(column=c).to(u.K).value for c in cols]
pl.semilogx(cols, trots)
pl.hlines(tr_swift(20), cols.min(), cols.max(), color='k')
pl.xlabel("Column")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
densities = np.logspace(3,9)
trots = [trot_radex(density=n).to(u.K).value for n in densities]
pl.semilogx(densities, trots)
pl.hlines(tr_swift(20), densities.min(), densities.max(), color='k')
pl.xlabel("Volume Density")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
"""
Explanation: RADEX suggests that the positive T0 value is the correct one (the negative one appeared correct when incorrectly indexed statistical weights were being used)
End of explanation
"""
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
pl.plot(temperatures, trots)
# wrong pl.plot(temperatures, tr_swift(temperatures, T0=-41.18), color='k')
pl.plot(temperatures, tr_swift(temperatures, T0=41.18), color='r')
pl.xlabel("Temperatures")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
temperatures = np.linspace(5,40,50)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
pl.plot(temperatures, np.abs(trots-tr_swift(temperatures, T0=41.18))/trots)
pl.xlabel("Temperatures")
pl.ylabel("$(T_{rot}(\mathrm{RADEX}) - T_{rot}(\mathrm{Swift}))/T_{rot}(\mathrm{RADEX})$")
"""
Explanation: This is the plot that really convinces me that the positive (red curve) value of T0 is the appropriate value to use for this approximation
End of explanation
"""
from pyspeckit.spectrum.models.tests import test_ammonia
from pyspeckit.spectrum.models import ammonia
"""
Explanation: Tests of cold_ammonia reproducing pyspeckit ammonia spectra
End of explanation
"""
tkin = 20*u.K
trot = trot_radex(tkin=tkin)
print(trot)
spc = test_ammonia.make_synthspec(lte=False, tkin=None, tex=6.66, trot=trot.value, lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
print("For Tkin={1} -> Trot={2}, pyspeckit's cold_ammonia fitter got:\n{0}".format(spc.specfit.parinfo, tkin, trot))
spc.specfit(fittype='cold_ammonia', guesses=[22.80, 6.6, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
bestfit_coldammonia_temperature = spc.specfit.parinfo[0]
print("The best fit cold ammonia temperature is {0} for an input T_rot={1}".format(bestfit_coldammonia_temperature, trot))
"""
Explanation: Test 1: Use a constant excitation temperature for all lines
End of explanation
"""
tex11 = tex_radex(tkin=tkin, lineno=8)
tex22 = tex_radex(tkin=tkin, lineno=9)
print("tex11={0}, tex22={1} for tkin={2}, trot={3}".format(tex11,tex22,tkin,trot))
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
print("For Tkin={1} -> Trot={2}, pyspeckit's cold_ammonia fitter got:\n{0}"
.format(spc.specfit.parinfo, tkin, trot))
print("The best fit cold ammonia temperature is {0} for an input T_rot={1}"
.format(bestfit_coldammonia_temperature, trot))
"""
Explanation: Test 2: Use a different (& appropriate) tex for each level in the input model spectrum
If we use the exact tex for each line in the input model, in principle, the resulting fitted temperature should be more accurate. However, at present, it looks dramatically incorrect
End of explanation
"""
tkin = 20*u.K
trot = trot_radex(tkin=tkin)
dT0=41.18
print(tkin * (1 + (tkin.value/dT0)*np.log(1 + 0.6*np.exp(-15.7/tkin.value)))**-1)
print("tkin={0} trot={1} tex11={2} tex22={3}".format(tkin, trot, tex11, tex22))
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc_666 = test_ammonia.make_synthspec(lte=False, tkin=None,
tex=6.66,
trot=trot.value,
lines=['oneone','twotwo'])
# this one is guaranteed different because tex = trot
spc_cold = test_ammonia.make_synthspec_cold(tkin=tkin.value,
lines=['oneone','twotwo'])
spc[0].plotter(linewidth=3, alpha=0.5)
spc_666[0].plotter(axis=spc[0].plotter.axis, clear=False, color='r', linewidth=1, alpha=0.7)
spc_cold[0].plotter(axis=spc[0].plotter.axis, clear=False, color='b', linewidth=1, alpha=0.7)
"""
Explanation: Test 3: compare cold_ammonia to "normal" ammonia model to see why they differ
In a previous iteration of the ammonia model, there was a big (and incorrect) difference between the synthetic spectra from ammonia and cold_ammonia. This is now something of a regression test for that error, which turned out to be from yet another incorrect indexing of the degeneracy.
End of explanation
"""
spc[0].data.max(), spc_666[0].data.max()
spc[1].plotter()
spc_666[1].plotter(axis=spc[1].plotter.axis, clear=False, color='r')
spc_cold[1].plotter(axis=spc[1].plotter.axis, clear=False, color='b')
"""
Explanation: The red and black look too different to me; they should differ only by a factor of (tex11-6.66)/6.66 or so. Instead, they differ by a factor of 5-6.
End of explanation
"""
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
tex11s = np.array([tex_radex(tkin=t, lineno=8) for t in temperatures])
tex22s = np.array([tex_radex(tkin=t, lineno=9) for t in temperatures])
pl.plot(trots, tex11s)
pl.plot(trots, tex22s)
#pl.plot(tr_swift(temperatures), color='k')
pl.ylabel("$T_{ex}$")
pl.xlabel("$T_{rot} (2-2)/(1-1)$")
"""
Explanation: RADEX analysis: T_rot vs T_kin vs T_ex
End of explanation
"""
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
tex11s = np.array([tex_radex(tkin=t, lineno=8) for t in temperatures])
tex22s = np.array([tex_radex(tkin=t, lineno=9) for t in temperatures])
pl.plot(trots, tex11s/tex22s)
#pl.plot(tr_swift(temperatures), color='k')
pl.ylabel("$T_{ex} (2-2)/(1-1)$")
pl.xlabel("$T_{rot} (2-2)/(1-1)$")
"""
Explanation: Apparently there are some discreteness problems but the ratio changes very little.
End of explanation
"""
from pyspeckit.spectrum.models.tests import test_ammonia
test_ammonia.test_ammonia_parlimits()
test_ammonia.test_ammonia_parlimits_fails()
test_ammonia.test_cold_ammonia()
test_ammonia.test_self_fit()
"""
Explanation: run pyspeckit tests
End of explanation
"""
temperatures = np.array((10,15,20,25,30,35,40))
recovered_tkin = {}
recovered_column = {}
for tkin in temperatures:
tbl = R(collider_densities={'ph2': 1e4}, temperature=tkin, column=1e13)
tex11 = tbl['Tex'][8]
tex22 = tbl['Tex'][9]
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
recovered_tkin[tkin] = spc.specfit.parinfo['tkin0'].value
recovered_column[tkin] = spc.specfit.parinfo['ntot0'].value
pl.xlabel("$T_K$")
pl.ylabel("Fitted $T_K$ from cold_ammonia")
pl.plot(recovered_tkin.keys(), recovered_tkin.values(), 'o')
pl.plot(temperatures, temperatures)
pl.xlabel("$T_K$")
pl.ylabel("$|T_K-T_{fit}|/T_K$")
inp = np.array(list(recovered_tkin.keys()), dtype='float')
rslt = np.array(list(recovered_tkin.values()), dtype='float')
pl.plot(inp, np.abs(rslt-inp)/rslt, 'o')
"""
Explanation: More extensive (& expensive) tests: recovered Tkin
1. Check the recovered temperature as a function of input temperature using RADEX to simulate "real" data
End of explanation
"""
pl.xlabel("$N(NH_3)$")
pl.ylabel("Fitted $N(NH_3)$ from cold_ammonia")
pl.plot(recovered_column.keys(), recovered_column.values(), 'o')
pl.plot(temperatures, temperatures*0+13)
"""
Explanation: 2. Check the recovery as a function of column density
End of explanation
"""
|
abhi1509/deep-learning | intro-to-rnns/Anna_KaRNNa_Exercises.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
    # use integer division (//) to drop any partial batch at the end
    n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches*characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:,n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:,:-1], y[:,-1] = x[:,1:], x[:,0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
```python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
```
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here; type out the solution code yourself.
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.
    
    Arguments
    ---------
    keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
    lstm_size: Size of the hidden layers in the LSTM cells
    num_layers: Number of LSTM layers
    batch_size: Batch size
    '''
    ### Build the LSTM Cell
    def build_cell(lstm_size, keep_prob):
        # Use a basic LSTM cell
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        
        # Add dropout to the cell outputs
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop
    
    # Stack up multiple LSTM layers, for deep learning.
    # MultiRNNCell expects a list of distinct cell objects (TensorFlow >= 1.1)
    cell = tf.contrib.rnn.MultiRNNCell(
        [build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
    
    # Create an initial state of all zeros
    initial_state = cell.zero_state(batch_size, tf.float32)
    
    return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.
    
    Arguments
    ---------
    lstm_output: List of output tensors from the LSTM layer
    in_size: Size of the input tensor, for example, size of the LSTM cells
    out_size: Size of this softmax layer
    '''
    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # Concatenate lstm_output over axis 1 (the columns)
    seq_output = 
    # Reshape seq_output to a 2D tensor with lstm_size columns
    x = 
    
    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        # Create the weight and bias variables here
        softmax_w = 
        softmax_b = 
    
    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = 
    
    # Use softmax to get the probabilities for predicted characters
    out = 
    
    return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
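The concatenate-then-reshape step can be sketched with NumPy on toy sizes; the variable names below are illustrative only:

```python
import numpy as np

N, M, L = 4, 3, 5  # batch size, sequence steps, LSTM units (toy values)
# Stand-in for the LSTM output: one (N, L) array per sequence step
lstm_output = [np.random.rand(N, L) for _ in range(M)]
# Concatenate over axis 1 (the columns), giving shape (N, M*L)
seq_output = np.concatenate(lstm_output, axis=1)
# Reshape to a 2D tensor with L columns: one row per sequence and step
x = seq_output.reshape(-1, L)
```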
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.
    
    Arguments
    ---------
    logits: Logits from final fully connected layer
    targets: Targets for supervised learning
    lstm_size: Number of LSTM hidden units
    num_classes: Number of classes in targets
    '''
    # One-hot encode targets and reshape to match logits, one row per sequence per step
    y_one_hot = 
    y_reshaped = 
    
    # Softmax cross entropy loss
    loss = 
    
    return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M * N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M * N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
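As a sanity check on the math (this is a NumPy sketch, not the TensorFlow implementation), the one-hot encoding followed by the mean softmax cross-entropy looks roughly like:

```python
import numpy as np

def softmax_xent_mean(logits, targets, num_classes):
    # One-hot encode the integer targets: shape (rows, C)
    y = np.eye(num_classes)[targets]
    # Numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Cross entropy per row, then the mean over all rows
    return -(y * np.log(p)).sum(axis=1).mean()
```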
Exercise: Implement the loss calculation in the function below.
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
    
    Arguments:
    loss: Network loss
    learning_rate: Learning rate for optimizer
    grad_clip: Threshold for clipping the global gradient norm
    '''
    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))
    
    return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we clip the gradients: `tf.clip_by_global_norm` rescales all the gradients whenever their combined (global) norm exceeds the threshold `grad_clip`, so the update never grows overly large. Then we use an AdamOptimizer for the learning step.
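A simplified NumPy analogue of what global-norm clipping does (a sketch of the idea, not the actual `tf.clip_by_global_norm` implementation):

```python
import numpy as np

def clip_by_global_norm_sketch(grads, clip_norm):
    # Global norm across every gradient array in the list
    global_norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    # Rescale only when the global norm exceeds the threshold
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm
```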
End of explanation
"""
class CharRNN:
    
    def __init__(self, num_classes, batch_size=64, num_steps=50, 
                 lstm_size=128, num_layers=2, learning_rate=0.001, 
                 grad_clip=5, sampling=False):
        
        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1
        
        tf.reset_default_graph()
        
        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = 
        
        # Build the LSTM cell
        cell, self.initial_state = 
        
        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = 
        
        # Run each sequence step through the RNN with tf.nn.dynamic_rnn 
        outputs, state = 
        self.final_state = state
        
        # Get softmax predictions and logits
        self.prediction, self.logits = 
        
        # Loss and optimizer (with gradient clipping)
        self.loss = 
        self.optimizer = 
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
"""
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Typically larger is better; the network will learn more long-range dependencies, but it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
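For a rough feel of the parameter count, here is a back-of-the-envelope estimate for this kind of character RNN (my own approximation; it ignores framework implementation details, so treat the numbers as ballpark figures):

```python
def lstm_param_count(num_classes, lstm_size, num_layers):
    # Each LSTM layer has 4 gates, each with an (input + hidden) x hidden
    # weight matrix plus a hidden-sized bias
    first = 4 * ((num_classes + lstm_size) * lstm_size + lstm_size)
    rest = (num_layers - 1) * 4 * ((2 * lstm_size) * lstm_size + lstm_size)
    # Final fully connected softmax layer: weights plus biases
    softmax = lstm_size * num_classes + num_classes
    return first + rest + softmax
```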
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss,
                                                 model.final_state,
                                                 model.optimizer],
                                                feed_dict=feed)
            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))
            
            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
    
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c

def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        for c in prime:
            x = np.zeros((1, 1))
            x[0, 0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)
        
        c = pick_top_n(preds, len(vocab))
        samples.append(int_to_vocab[c])
        
        for i in range(n_samples):
            x[0, 0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)
            
            c = pick_top_n(preds, len(vocab))
            samples.append(int_to_vocab[c])
    
    return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
grananqvist/TDA602_ApplicationIPS | Plots_technical_background.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
"""
Explanation: The code in this notebook is entirely for making example plots to use in the technical background of the report
This notebook doesn't have any relation to the overall firewall project
End of explanation
"""
xmin, xmax = -7, 5
n = 50
np.random.seed(0)
X = np.random.normal(size=n)
y = (X > 0).astype(np.float)
X[X >= 0] *= 4
X[X >= 0] += 2
X[X < 0] -= 2
# plot separated
f, axarr = plt.subplots(1,2, sharey=True)
f.set_figheight(5)
f.set_figwidth(12)
'''
Figure 0
'''
for t, marker, c in zip([0.0, 1.0], "ox", "gb"):
    # plot each class on its own to get different colored markers
    axarr[0].scatter(X[y == t],
                     np.zeros(len(X[y == t])),
                     marker=marker,
                     c=c)
X_test = np.linspace(-7, 10, 300)
axarr[0].set_ylabel('y')
axarr[0].set_xlabel('X')
axarr[0].set_xticks(range(-7, 10))
axarr[0].set_yticks([0, 0.5, 1])
axarr[0].set_ylim(-.25, 1.25)
axarr[0].set_xlim(-7, 10)
axarr[0].legend(('class 0', 'class 1'),
loc="lower right", fontsize='small')
'''
Figure 1
'''
for t, marker, c in zip([0.0, 1.0], "ox", "gb"):
    # plot each class on its own to get different colored markers
    axarr[1].scatter(X[y == t],
                     y[y == t],
                     marker=marker,
                     c=c)
#plt.scatter(X, y, c=y,color='black', zorder=20)
X_test = np.linspace(-7, 10, 300)
def model(x):
    return 1 / (1 + np.exp(-x))
loss = model(X_test)
axarr[1].plot(X_test, loss, color='red', linewidth=3)
axarr[1].set_ylabel('y')
axarr[1].set_xlabel('X')
axarr[1].set_xticks(range(-7, 10))
axarr[1].set_yticks([0, 0.5, 1])
axarr[1].set_ylim(-.25, 1.25)
axarr[1].set_xlim(-7, 10)
axarr[1].legend(('Logistic Regression Model', 'class 0', 'class 1'),
loc="lower right", fontsize='small')
plt.show()
f.savefig('images/report_images/logistic.png', bbox_inches='tight')
"""
Explanation: Logistic regression
End of explanation
"""
'''
clear
clc
load('d2.mat');
hold on
gscatter(X(:,1),X(:,2),Y,'rb','x+',6);
SVMstruct = svmtrain(X,Y,'boxconstraint',1,'autoscale',false,'kernel_function','RBF');
% Make a grid of values to classify the entire space
x1_axis = linspace(min(X(:,1)), max(X(:,1)), 1000)';
x2_axis = linspace(min(X(:,2)), max(X(:,2)), 1000)';
[x1_space, x2_space] = meshgrid(x1_axis, x2_axis);
for i = 1:size(x1_space, 2)
point_in_space = [x1_space(:, i), x2_space(:, i)];
class(:, i) = svmclassify(SVMstruct, point_in_space);
end
% Plot the SVM boundary
hold on
contour(x1_space, x2_space, class, [0 0], 'k');
legend('-1','1','RBF boundary');
xlabel('x1');
ylabel('x2');
title('Data points and decision boundary');
hold off;
'''
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
def get2Grams(payload_obj):
    '''Divides a string into 2-grams
    
    Example: input - payload: "<script>"
             output- ["<s","sc","cr","ri","ip","pt","t>"]
    '''
    payload = str(payload_obj)
    ngrams = []
    # stop at len-1 so the final 2-gram ("t>") is included, as in the docstring
    for i in range(0, len(payload) - 1):
        ngrams.append(payload[i:i+2])
    return ngrams
classifier = pickle.load( open("data/tfidf_2grams_randomforest.p", "rb"))
def injection_test(inputs):
    variables = inputs.split('&')
    values = [variable.split('=')[1] for variable in variables]
    print(values)
    return 'MALICIOUS' if classifier.predict(values).sum() > 0 else 'NOT_MALICIOUS'
injection_test('var1=<rip cookie')
def get1Grams(payload_obj):
    '''Divides a string into 1-grams
    
    Example: input - payload: "<script>"
             output- ["<","s","c","r","i","p","t",">"]
    '''
    payload = str(payload_obj)
    ngrams = []
    # iterate over every character so the final ">" is included, as in the docstring
    for i in range(0, len(payload)):
        ngrams.append(payload[i:i+1])
    return ngrams
def get2Grams(payload_obj):
    '''Divides a string into 2-grams
    
    Example: input - payload: "<script>"
             output- ["<s","sc","cr","ri","ip","pt","t>"]
    '''
    payload = str(payload_obj)
    ngrams = []
    # stop at len-1 so the final 2-gram ("t>") is included, as in the docstring
    for i in range(0, len(payload) - 1):
        ngrams.append(payload[i:i+2])
    return ngrams
def get3Grams(payload_obj):
    '''Divides a string into 3-grams
    
    Example: input - payload: "<script>"
             output- ["<sc","scr","cri","rip","ipt","pt>"]
    '''
    payload = str(payload_obj)
    ngrams = []
    # stop at len-2 so the final 3-gram ("pt>") is included, as in the docstring
    for i in range(0, len(payload) - 2):
        ngrams.append(payload[i:i+3])
    return ngrams
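# The three helpers above can be generalized into a single n-gram splitter.
# getNGrams is a hypothetical name, shown here only for illustration.
def getNGrams(payload_obj, n):
    payload = str(payload_obj)
    return [payload[i:i + n] for i in range(len(payload) - n + 1)]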
classifierz = pickle.load( open("data/trained_classifiers.p", "rb"))[['accuracy','sensitivity','specificity','auc','conf_matrix','params']]
from IPython.display import display
import pandas as pd
display(classifierz)
classifierz.to_csv('data/classifiers_result_table.csv',encoding='UTF-8')
"""
Explanation: Support vector machine
This is matlab code on how the image demonstrating linear vs RBF kernel was made (svm_kernel.png)
End of explanation
"""
|
vivekec/datascience | tutorials/python/Ipython files/Seaborn - 1. Introduction.ipynb | gpl-3.0 | # Collective data
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

def sinplot(flip=1):
    x = np.linspace(0, 14, 100)
    for i in range(1, 7):
        plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)
# Individual data
data = np.random.normal(size=(20, 6)) + np.arange(6) / 2
"""
Explanation: 1. Controlling figure aesthetics
Let us generate some data to work with.
End of explanation
"""
# axis parameters
sns.axes_style()
sns.set_style(style='whitegrid',
              rc={'font.sans-serif': 'Helvetica'})  # axis parameters can be passed in argument rc
sns.boxplot(data=data)
# removing top and right axis splines using despline
sns.set_style('white')
sns.boxplot(data=data)
sns.despine() # despine arguments can control which side to be removed
sns.boxplot(data=data)
sns.despine(trim=True)
# axis_style helps to make temporarily changes when used with 'with'
with sns.axes_style('darkgrid'):
    sns.violinplot(data=data)
"""
Explanation: Figure style
functions to practice -
* set_style()
* axes_style()
End of explanation
"""
sns.set_style('white') # user-defined
sns.set() # resetting all style parameters to default
sns.violinplot(data=data) # verifying
"""
Explanation: Resetting seaborn style parameters to default using set()
End of explanation
"""
col = sns.color_palette(palette='hls', n_colors=10)
sns.set_palette(col)
sns.violinplot(data=data)
"""
Explanation: 2. Choosing color palettes
End of explanation
"""
# Colors can also be passed in the hex format like: #fff
sns.set_palette(sns.color_palette(["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]))
sns.violinplot(data=data)
# Sequential color maps
sns.set_palette(sns.color_palette("Blues"))
sns.violinplot(data=data)
# Sequential cubehelix color maps -
# helpful to preserve the info when plots printed in black and white (useful for color-blinds)
sns.set_palette(sns.cubehelix_palette(reverse=True, light=0.95, dark=0.2, hue=0.5,
rot=0.6, # an arbitrary value between -1 and 1
start=2)) # an arbitrary value between 0 and 3
sns.violinplot(data=data)
# light palette
sns.set_palette(sns.light_palette((0.22, 0.85, 0.125)))
sns.violinplot(data=data)
# dark palette
sns.set_palette(sns.dark_palette((0.22, 0.85, 0.125)))
sns.violinplot(data=data)
# divergent palette
sns.set_palette(sns.diverging_palette(8, 220, n=6))  # First two arguments are anchor hues in degrees (0-359)
sns.violinplot(data=data)
"""
Explanation: Possible values to be passed inside color palette can be viewed here
Possible values are: Accent, Accent_r, Blues, Blues_r, BrBG, BrBG_r, BuGn, BuGn_r, BuPu, BuPu_r, CMRmap, CMRmap_r, Dark2, Dark2_r, GnBu, GnBu_r, Greens, Greens_r, Greys, Greys_r, OrRd, OrRd_r, Oranges, Oranges_r, PRGn, PRGn_r, Paired, Paired_r, Pastel1, Pastel1_r, Pastel2, Pastel2_r, PiYG, PiYG_r, PuBu, PuBuGn, PuBuGn_r, PuBu_r, PuOr, PuOr_r, PuRd, PuRd_r, Purples, Purples_r, RdBu, RdBu_r, RdGy, RdGy_r, RdPu, RdPu_r, RdYlBu, RdYlBu_r, RdYlGn, RdYlGn_r, Reds, Reds_r, Set1, Set1_r, Set2, Set2_r, Set3, Set3_r, Spectral, Spectral_r, Wistia, Wistia_r, YlGn, YlGnBu, YlGnBu_r, YlGn_r, YlOrBr, YlOrBr_r, YlOrRd, YlOrRd_r, afmhot, afmhot_r, autumn, autumn_r, binary, binary_r, bone, bone_r, brg, brg_r, bwr, bwr_r, cividis, cividis_r, cool, cool_r, coolwarm, coolwarm_r, copper, copper_r, cubehelix, cubehelix_r, flag, flag_r, gist_earth, gist_earth_r, gist_gray, gist_gray_r, gist_heat, gist_heat_r, gist_ncar, gist_ncar_r, gist_rainbow, gist_rainbow_r, gist_stern, gist_stern_r, gist_yarg, gist_yarg_r, gnuplot, gnuplot2, gnuplot2_r, gnuplot_r, gray, gray_r, hot, hot_r, hsv, hsv_r, icefire, icefire_r, inferno, inferno_r, jet, jet_r, magma, magma_r, mako, mako_r, nipy_spectral, nipy_spectral_r, ocean, ocean_r, pink, pink_r, plasma, plasma_r, prism, prism_r, rainbow, rainbow_r, rocket, rocket_r, seismic, seismic_r, spring, spring_r, summer, summer_r, tab10, tab10_r, tab20, tab20_r, tab20b, tab20b_r, tab20c, tab20c_r, terrain, terrain_r, viridis, viridis_r, vlag, vlag_r, winter, winter_r
End of explanation
"""
|
steven-murray/halomod | devel/fix_angular_cf.ipynb | mit | from halomod import AngularCF
import halomod
halomod.__version__
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Fix Amplitude of Angular CF
There was a bug reported that AngularCF is returning really large values for the tracer correlation function, specifically for v1.6.0 of halomod. This notebook tries to find out why, and fix it.
End of explanation
"""
acf = AngularCF(z=0.475, zmin=0.45, zmax=0.5)
plt.plot(acf.theta * 180/np.pi, acf.angular_corr_gal)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r"$\theta$ / deg")
plt.ylabel('Angular CF')
"""
Explanation: The problem illustrated
We will try to roughly reproduce the results of Blake+2008:
<img src="blake-angular-cf.png">
End of explanation
"""
acf.mean_tracer_den * 1e4
"""
Explanation: Note that this is pretty much 1000x times what Blake+ got, but does have a similar shape.
Let's go a bit deeper and look at the exact parameters used in Blake+:
<img src="blake-parameters.png">
End of explanation
"""
acf.hod_model = 'Zheng05'
acf.hod_params = {'M_min': 12.98, 'M_0':-10, 'M_1':14.09, 'sig_logm':0.21, 'alpha':1.57}
acf.mean_tracer_den * 1e4, acf.bias_effective_tracer, acf.mass_effective
"""
Explanation: OK, this is awful. Let's try specify the HOD more accurately:
End of explanation
"""
plt.plot(acf.theta * 180/np.pi, acf.angular_corr_gal)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r"$\theta$ / deg")
plt.ylabel('Angular CF')
"""
Explanation: This is much closer. Now let's try the ACF:
End of explanation
"""
r = np.logspace(-3, 2.1, 500)
plt.plot(r, acf.corr_auto_tracer_fnc(r))
plt.xscale('log')
plt.yscale('log')
"""
Explanation: It's even worse than before!
Let's plot the standard correlation function:
End of explanation
"""
from halomod.integrate_corr import angular_corr_gal
from astropy.cosmology import Planck15  # used in p1 below; also imported in a later cell

angular_corr_gal(
    theta=np.array([1e-3]),
    xi=acf.corr_auto_tracer_fnc,
    p1=lambda x: 0.7 * np.ones_like(x) / (Planck15.comoving_distance(0.5).value - Planck15.comoving_distance(0.45).value),
    p_of_z=False,
    zmin=0.45,
    zmax=0.5,
    logu_min=-6,
    logu_max=2.1,
    unum=1000,
    znum=500
)
from astropy.cosmology import Planck15
Planck15.comoving_distance(0.45)
Planck15.comoving_distance(0.5)
np.linspace(0,1,2)
"""
Explanation: This is of a similar magnitude to that found in Blake.
Try using low-level angular_corr_gal
End of explanation
"""
|
empet/geom_modeling | Catmull-Rom-splines.ipynb | bsd-2-clause | from IPython.display import Image
Image(filename='Imag/Catmull-Rom-curve.png')
"""
Explanation: Catmull-Rom splines
Definition of this class of curves
The Catmull-Rom interpolation problem defined in [Catmull, E. and R. Rom, A Class of Local Interpolationg Splines, in Barnhill R.E. and R.F. Riesenfeld (eds.), Computer Aided Geometric Design, Academic Press, New York, 1974.] is formulated as follows:
Given $n+1$ points, $P_0, P_1, \ldots, P_n$, $n\geq 3$, find a piecewise cubic curve, parameterized by
$c:[1, n-1]\to\mathbb{R}^d$ (usually, d=2,3) such
that $c(i)=P_{i}$, for $i\in\{1,2,\ldots, n-1\}$, and the tangent vector at each interpolatory point, $P_i$, $i=\overline{1, n-1}$, to be collinear with the vector $\overrightarrow{P_{i-1}P_{i+1}}$:
$$\vec{\dot{c}}(i)=s\overrightarrow{P_{i-1}P_{i+1}},
$$ where $s\in(0,1]$. Usually one takes s=0.5.
Hence given $n+1$ points, $P_0, P_1, \ldots, P_n$, a CR curve interpolates only the points $P_1, P_2, \ldots, P_{n-1}$.
The first and the last point influence only the direction of tangent at $P_1$, respectively, $P_{n-1}$.
Since the distance between two consecutive knots, $t_i=i$, $t_{i+1}=i+1$ is constant, such a curve is called uniform parameterized Catmull-Rom curve.
Being a cubic above each interval $[i, i+1]$, an arc of CR curve on such an interval can be defined
as a Bézier curve of control points ${\bf b}_0, {\bf b}_1, {\bf b}_2, {\bf b}_3$,
with
${\bf b}_0=P_i, \; {\bf b}_3=P_{i+1}$.
But the tangent at end points, ${\bf b}_0$ and ${\bf b}_3$, of a Bézier curve, is
$3\overrightarrow{{\bf b}_0{\bf b}_1}$, respectively $3\overrightarrow{{\bf b}_2{\bf b}_3}$.
Thus we get:
$$ {\bf b}_1=P_i+s(P_{i+1}-P_{i-1})/3, \quad {\bf b}_2=P_{i+1}-s(P_{i+2}-P_{i})/3$$
Below we illustrate the above stated properties of a CR curve, defined by 7 points. Attached to the third arc, joining the points $P_3, P_4$,
is the Bézier control polygon defining that arc of curve. The tangent at $P_2$ is obviously parallel to
$\overrightarrow{P_1P_3}$.
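A small NumPy sketch of evaluating one uniform CR arc through these Bézier control points (the function name and the default $s=0.5$ are illustrative):

```python
import numpy as np

def catmull_rom_segment(P0, P1, P2, P3, t, s=0.5):
    """Point at t in [0, 1] on the CR arc joining P1 and P2 (uniform knots)."""
    P0, P1, P2, P3 = map(np.asarray, (P0, P1, P2, P3))
    # Bezier control points derived from the tangent conditions above
    b0, b3 = P1, P2
    b1 = P1 + s * (P2 - P0) / 3.0
    b2 = P2 - s * (P3 - P1) / 3.0
    # Evaluate the cubic Bezier in Bernstein form
    u = 1.0 - t
    return u**3 * b0 + 3 * u**2 * t * b1 + 3 * u * t**2 * b2 + t**3 * b3
```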
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Catmull-Rom interpolating curves can be evaluated
at a point using a recursive scheme similar to the de Boor algorithm for B-spline curves.
Usually a cubic spline curve is $C^2$ at knots. Although the CR curves are only $C^1$ at each knot,
they are sometimes called Catmull Rom spline curves, due to the similarities of the algorithms of
evaluation, and other common properties.
Let us point out the drawback of the uniform parameterization, that led to defining CR splines with
non-uniform knots.
A $C^1$-parameterization, $c:[a,b]\to\mathbb{R}^d$, of a curve can be interpreted as defining the motion of the point $c(t)$ along the trace of $c$, during the time interval from $a$ to $b$.
When the parameterization of a CR curve is uniform, the point $c(t)$ spends the same amount of time on each arc of ends $P_i, P_{i+1}$, $i=1,2, ...n-1$, irrespective of the distance between $P_i, P_{i+1}$.
If the distance between two data points is large, the point $c(t)$
moves with high speed. If the next two interpolatory points, $P_{i+1}, P_{i+2}$, are closer together, then the moving point overshoots, since it cannot change its speed abruptly.
Barry and Goldman [P. J. Barry, R. N. Goldman, A recursive evaluation algorithm for a class of Catmull–Rom splines, SIGGRAPH Computer Graphics, 22(4):199-204, 1988] extended the definition of uniform parameterized CR splines to curves whose parameterization incorporates the
distance between any two adjacent interpolatory points.
Namely, given the points $P_0, P_1, \ldots, P_n$, one defines a knot sequence, $t_0, t_1, \ldots, t_{n}$ as follows:
$$\begin{array}{lll}t_0&=&0\\
t_i&=&t_{i-1}+||\overrightarrow{P_{i-1}P_i}||^\alpha, \quad \alpha\in[0,1]\end{array}$$
A $C^1$-piecewise cubic, that interpolates the given points, i.e. $c(t_i)=P_i$, $i=1,2, \ldots, n-1$, is defined at any $u\in[t_i, t_{i+1}]$ through a recursive scheme in which are involved only four points, $P_{i-1}, P_i, P_{i+1}, P_{i+2}$
and the knots $t_{i-1}, t_i, t_{i+1}, t_{i+2}$:
$$P_j^r=(1-\omega^{r-1}_j)P^{r-1}_j+\omega^{r-1}_jP^{r-1}_{j+1}, \quad r=1,2,\quad j=i-1, i, i+1, i+2,
$$
with $\omega^r_j(u)=\displaystyle\frac{u-t_j}{t_{j+1+r}-t_j}$. The superscript $r$ points out the level of recursion. $P_j^0=P_j$.
The last two points, $P^2_{i-1}$, $P^2_{i}$, are finally interpolated linearly, to get $c(u)$, the point on the CR curve corresponding to the parameter $u\in[t_i, t_{i+1}]$:
$$c(u)=\left(1-\displaystyle\frac{u-t_i}{t_{i+1}-t_i}\right) P^2_{i-1}+\displaystyle\frac{u-t_i}{t_{i+1}-t_i} P^2_{i}$$
We note that in the first two steps, the points $P_j^r$, $r=1,2$, are computed via de Boor algorithm for B-splines, while in the last step, $c(u)$ is computed according to the Neville algorithm for evaluating Lagrange polynomials.
The parameter $\alpha$ in the definition of knots controls the geometry of the interpolating CR curve.
For $\alpha=0.5$ the corresponding CR curve is called centripetal Catmull-Rom curve, for $\alpha=1$, chordal Catmull
Rom curve, while for $\alpha=0$ we get the initial uniform parameterized CR spline.
For $\alpha$ close to $0$ the corresponding CR curve can exhibit singular points (self-intersections or cusps) within short curve segments (with small distance between $P_i, P_{i+1}$).
In [C Yuksel, S Schaefer, J Keyser,
Parameterization and Applications of Catmull-Rom Curves,
Computer Aided Design, 43, 7, 2011] it was deduced that the optimal parameter that corresponds to CR splines with no singular points is $\alpha=0.5$.
Moreover, a centripetal CR curve follows the interpolatory points more tightly than CR curves corresponding to $\alpha$ close to 0 or to 1.
That is why centripetal Catmull-Rom splines are now widely used in interactive generation of interpolating curves.
Python implementation of the Catmull-Rom spline generation
Instead of the above mixture of the de Boor and Neville algorithms, we generate each segment of the CR curve
as a Bézier curve.
Over each interval $[t_i, t_{i+1}]$, the CR curve is a cubic polynomial curve, hence it can be defined as
a Bézier curve, parameterized over $[0,1]$. Theoretically, the standard polynomial parameterization
is obtained by applying (using a Computer Algebra System) the two steps of the de Boor algorithm defined above, and a step of the Neville algorithm.
Then this parameterization is converted to one in which the polynomial components are expressed
in Bernstein basis, to get the Bézier form of that polynomial segment of curve.
The expression of the Bézier control points as combinations of four consecutive interpolatory
points, $P_{0}, P_1, P_{2}, P_{3}$, involved in the definition of a CR segment, is given in the function ctrl_bezier(P, d). d is a list of values involved in knot definition, $d_j=||\overrightarrow{P_{j}P_{j+1}}||^\alpha$, $j=0,1,2$.
End of explanation
"""
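For reference, the Barry-Goldman recursive scheme described above can also be evaluated directly; the sketch below only illustrates the pyramid of affine combinations and is not the implementation used in this notebook:

```python
import numpy as np

def barry_goldman(P, t, u):
    """Evaluate one CR segment via the Barry-Goldman pyramid.
    P: four consecutive points P[0..3]; t: the four knots t[0..3];
    u must lie in [t[1], t[2]]."""
    P = [np.asarray(p, dtype=float) for p in P]
    # level 1: de Boor-like affine combinations of adjacent points
    A1 = (t[1] - u) / (t[1] - t[0]) * P[0] + (u - t[0]) / (t[1] - t[0]) * P[1]
    A2 = (t[2] - u) / (t[2] - t[1]) * P[1] + (u - t[1]) / (t[2] - t[1]) * P[2]
    A3 = (t[3] - u) / (t[3] - t[2]) * P[2] + (u - t[2]) / (t[3] - t[2]) * P[3]
    # level 2
    B1 = (t[2] - u) / (t[2] - t[0]) * A1 + (u - t[0]) / (t[2] - t[0]) * A2
    B2 = (t[3] - u) / (t[3] - t[1]) * A2 + (u - t[1]) / (t[3] - t[1]) * A3
    # final Neville (Lagrange) step
    return (t[2] - u) / (t[2] - t[1]) * B1 + (u - t[1]) / (t[2] - t[1]) * B2
```

At $u=t_1$ and $u=t_2$ the pyramid collapses to $P_1$, respectively $P_2$, which checks the interpolation property.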
def knot_interval(i_pts, alpha=0.5, closed=False):
if len(i_pts)<4:
raise ValueError('CR-curves need at least 4 interpolatory points')
#i_pts is the list of interpolatory points P[0], P[1], ... P[n]
if closed:
i_pts+=[i_pts[0], i_pts[1], i_pts[2]]
i_pts=np.array(i_pts)
dist=np.linalg.norm(i_pts[1:, :]-i_pts[:-1,:], axis=1)
return dist**alpha
def ctrl_bezier(P, d):
#Associate to 4 consecutive interpolatory points and the corresponding three d-values,
#the Bezier control points
    if len(P) != 4 or len(d) != 3:
        raise ValueError('ctrl_bezier needs 4 points and 3 knot intervals')
P=np.array(P)
bz=[0]*4
bz[0]=P[1]
bz[1]=(d[0]**2*P[2]-d[1]**2*P[0] +(2*d[0]**2+3*d[0]*d[1]+d[1]**2)*P[1])/(3*d[0]*(d[0]+d[1]))
bz[2]=(d[2]**2*P[1]-d[1]**2*P[3] +(2*d[2]**2+3*d[2]*d[1]+d[1]**2)*P[2])/(3*d[2]*(d[1]+d[2]))
bz[3]=P[2]
return bz
def Bezier_curve(bz, nr=100):
#implements the de Casteljau algorithm to compute nr points on a Bezier curve
t=np.linspace(0,1, nr)
N=len(bz)
points=[]# the list of points to be computed on the Bezier curve
for i in range(nr):#for each parameter t[i] evaluate a point on the Bezier curve
#via De Casteljau algorithm
aa=np.copy(bz)
for r in range(1,N):
aa[:N-r,:]=(1-t[i])*aa[:N-r,:]+t[i]*aa[1:N-r+1,:]# convex combination
points.append(aa[0,:])
return points
def Catmull_Rom(i_pts, alpha=0.5, closed=False):
#returns the list of points computed on the interpolating CR curve
#i_pts the list of interpolatory points P[0], P[1], ...P[n]
curve_pts=[]#the list of all points to be computed on the CR curve
d=knot_interval(i_pts, alpha=alpha, closed=closed)
for k in range(len(i_pts)-3):
cb=ctrl_bezier(i_pts[k:k+4], d[k:k+3])
curve_pts.extend(Bezier_curve(cb, nr=100))
return np.array(curve_pts)
"""
Explanation: The function knot_interval computes the values
$d_j=||\overrightarrow{P_{j}P_{j+1}}||^\alpha$, $j=\overline{0, n-1}$. If we want to define a closed curve, then we extend the list of points $P_0, P_1, \ldots, P_n$ with $P_0, P_1, P_2$:
End of explanation
"""
P=[[-0.72, -0.3], [0,0], [1., 0.8], [1.1, 0.5], [2.7, 1.2], [3.4, 0.27]]
curve0=Catmull_Rom(P, alpha=0, closed=False)
curve1=Catmull_Rom(P, alpha=0.5, closed=False)
curve2=Catmull_Rom(P, alpha=1.0, closed=False)
xp, yp=zip(*P)
fig=plt.figure(figsize=(9,6))
plt.plot(curve0[:,0], curve0[:,1], 'r', xp, yp, 'go' )
plt.plot(curve1[:,0], curve1[:,1], 'g')
plt.plot(curve2[:,0], curve2[:,1], 'b')
"""
Explanation: Now we give a list of points and define the CR curves corresponding to $\alpha=0, 0.5, 1$:
End of explanation
"""
#enable interactivity in notebook:
%matplotlib notebook
def curve_plot(i_pts, alpha=0.5, closed=False):
curve_pts=Catmull_Rom(i_pts, alpha, closed=closed)
# plot the interpolating curve
plt.plot(curve_pts[:,0], curve_pts[:,1], 'b')
class catrom(object):
def __init__(self, alpha=0.5, closed=False):
self.interp_pts=[] # list of interpolating points
self.alpha=alpha
self.closed=closed# boolean value
def callback(self, event): #select interpolating points with left mouse button click
if event.button==1 and event.inaxes:
x,y = event.xdata, event.ydata
self.interp_pts.append([x,y])
plt.plot(x, y, 'bo')
elif event.button==3: #press right button to plot the curve
curve_plot(self.interp_pts, self.alpha, self.closed)
plt.draw()
else: pass
def caxis(self):#define axes for plot
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.set_xlim(0,10)
ax.set_ylim(0,10)
ax.grid('on')
ax.set_autoscale_on(False)
fig.canvas.mpl_connect('button_press_event', self.callback)
curve=catrom()
curve.caxis()
cv=catrom(closed=True)
cv.caxis()
cv2=catrom(alpha=0)
cv2.caxis()
"""
Explanation: The green curve is the centripetal CR spline; it interpolates only the 4 central of the 6 given points, and it follows
those interpolatory points closely.
In order to get more insights into CR curves properties we give the possibility to generate them interactively.
Choose the interpolatory points, by clicking the left mouse button at the desired position. When the right button is pressed, the corresponding curve is generated:
End of explanation
"""
def tangentBezier(bz,t):
#bz is the list of Bezier control points and t is value in [0,1]
if t<0 or t>1:
        raise ValueError('The parameter t must be in [0,1]')
a=np.copy(bz)
N=a.shape[0]
for r in range(1,N-1):
a[:N-r,:]=(1-t)*a[:N-r,:]+t*a[1:N-r+1,:]
return (len(bz)-1)*(a[1,:]-a[0,:])
"""
Explanation: The advantage of defining a segment of a CR curve as a Bézier curve, rather than via the de Boor-plus-Neville algorithm, is that we can compute the tangent direction at any point of the curve.
Namely, if $r:[0,1]\to\mathbb{R}^d$ is the Bézier parameterization of a segment of CR curve, then
the tangent vector at the point corresponding to the parameter $t\in[0,1]$ is $\vec{\dot{r}}(t)=3({\bf b}_1^{2}(t)-{\bf b}_0^2(t))$, where ${\bf b}^2_0, {\bf b}^2_1$ are the points computed at the second level of recursion of the de Casteljau scheme (for details see this notebook):
End of explanation
"""
from IPython.display import HTML
HTML('<iframe src=https://plot.ly/~empet/13914/ width=900 height=600></iframe>')
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: The above functions are independent of the dimension of the space of points. They work well on points in any space $\mathbb{R}^d$, $d\geq 2$.
Here is a Plotly plot of a (2,3)-torus knot resulting from a centripetal Catmull-Rom interpolation of a set of points on a torus:
End of explanation
"""
|
dtamayo/rebound | ipython_examples/Horizons.ipynb | gpl-3.0 | import rebound
sim = rebound.Simulation()
sim.add("Sun")
## Other examples:
# sim.add("Venus")
# sim.add("399")
# sim.add("Europa")
# sim.add("NAME=Ida")
# sim.add("Pluto")
# sim.add("NAME=Pluto")
sim.status()
"""
Explanation: Adding particles using NASA JPL Horizons system
REBOUND can add particles to simulations by obtaining ephemerides from NASA's powerful HORIZONS database. HORIZONS supports many different options, and we will certainly not try to cover everything here. This is meant to serve as an introduction to the basics, beyond what's in Churyumov-Gerasimenko.ipynb. If you catch any errors, or would either like to expand on this documentation or improve REBOUND's HORIZONS interface (rebound/horizons.py), please do fork the repository and send us a pull request.
Adding particles
When we add particles by passing a string, REBOUND queries the HORIZONS database and takes the first dataset HORIZONS offers. For the Sun, moons, and small bodies, this will typically return the body itself. For planets, it will return the barycenter of the system (for moonless planets like Venus it will say barycenter but there is no distinction). If you want the planet specifically, you have to use, e.g., "NAME=Pluto" rather than "Pluto". In all cases, REBOUND will print out the name of the HORIZONS entry it's using.
You can also add bodies using their integer NAIF IDs: NAIF IDs. Note that because of the number of small bodies (asteroids etc.) we have discovered, this convention only works for large objects. For small bodies, instead use "NAME=name" (see the SMALL BODIES section in the HORIZONS Documentation).
End of explanation
"""
sim.add("NAME=Ida")
print(sim.particles[-1]) # Ida before setting the mass
sim.particles[-1].m = 2.1e-14 # Setting mass of Ida in Solar masses
print(sim.particles[-1]) # Ida after setting the mass
"""
Explanation: Currently, HORIZONS does not have any mass information for solar system bodies. rebound/horizons.py has a hard-coded list provided by Jon Giorgini (10 May 2015) that includes the planets, their barycenters (total mass of planet plus moons), and the largest moons. If REBOUND doesn't find the corresponding mass for an object from this list (like for the asteroid Ida above), it will print a warning message. If you need the body's mass for your simulation, you can set it manually, e.g. (see Units.ipynb for an overview of using different units):
End of explanation
"""
sim = rebound.Simulation()
date = "2005-06-30 15:24" # You can also use Julian Days. For example: date = "JD2458327.500000"
sim.add("Venus")
sim.add("Venus", date=date)
sim.status()
"""
Explanation: Time
By default, REBOUND queries HORIZONS for objects' current positions. Specifically, it caches the current time the first time you call rebound.add, and gets the corresponding ephemeris. All subsequent calls to rebound.add will then use that initial cached time to make sure you get a synchronized set of ephemerides.
You can also explicitly pass REBOUND the time at which you would like the particles ephemerides:
End of explanation
"""
sim = rebound.Simulation()
date = "2005-06-30 15:24" # You can also use Julian Days. For example: date = "JD2458327.500000"
sim.add("Venus", date=date)
sim.add("Earth")
sim.status()
"""
Explanation: We see that the two Venus positions are different. The first call cached the current time, but since the second call specified a date, it overrode the default. Any time you pass a date, it will overwrite the default cached time, so:
End of explanation
"""
sim = rebound.Simulation()
sim.add("Sun")
sim.status()
"""
Explanation: would set up a simulation with Venus and Earth, all synchronized to 2005-06-30 15:24. All dates should either be passed in the format Year-Month-Day Hour:Minute or in JDxxxxxxx.xxxx for a date in Julian Days.
REBOUND takes these absolute times to the nearest minute, since at the level of seconds you have to worry about exactly what time system you're using, and small additional perturbations probably start to matter. For reference HORIZONS interprets all times for ephemerides as Coordinate (or Barycentric Dynamical) Time.
Reference Frame
REBOUND queries for particles' positions and velocities relative to the Sun:
End of explanation
"""
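Dates in the JDxxxxxxx.xxxx form can be produced from a calendar date with the standard Fliegel-Van Flandern formula; the helper below is a hypothetical convenience for building such strings, not part of REBOUND's API:

```python
def julian_day_number(year, month, day):
    """Julian Day Number at 12:00 (noon) for a Gregorian calendar date,
    via the Fliegel-Van Flandern integer formula."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

# e.g. "JD{}.500000".format(julian_day_number(2005, 6, 30) - 1)
# corresponds to midnight at the start of 2005-06-30
```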
|
SheffieldML/notebook | GPy/config.ipynb | bsd-3-clause | # This is the default configuration file for GPy
# Do not edit this file.
# For machine specific changes (i.e. those specific to a given installation) edit GPy/installation.cfg
# For user specific changes edit $HOME/.gpy_user.cfg
[parallel]
# Enable openmp support. This speeds up some computations, depending on the number
# of cores available. Setting up a compiler with openmp support can be difficult on
# some platforms, hence by default it is off.
openmp=False
[anaconda]
# if you have an anaconda python installation please specify it here.
installed = False
location = None
MKL = False # set this to true if you have the MKL optimizations installed
"""
Explanation: GPy Configuration Settings
The GPy default configuration settings are stored in a file in the main GPy directory called defaults.cfg. These settings should not be changed in this file. The file gives you an overview of what the configuration settings are for the install.
End of explanation
"""
# This is the local installation configuration file for GPy
[parallel]
openmp=True
"""
Explanation: Machine Dependent Options
Each installation of GPy also creates an installation.cfg file. This file should include any installation specific settings for your GPy installation. For example, if a particular machine is set up to run OpenMP then the installation.cfg file should contain
End of explanation
"""
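The layered precedence described here (defaults, then installation, then user settings) can be reproduced with Python's `configparser`, which lets values from later files override earlier ones; this only sketches the mechanism and is not GPy's actual loader:

```python
import configparser
import os

def load_layered_config(paths):
    """Read config files in order; values in later files override
    earlier ones, and missing files are silently skipped."""
    cfg = configparser.ConfigParser()
    cfg.read(paths)  # configparser applies files left to right
    return cfg

# hypothetical usage mirroring the precedence GPy documents:
# cfg = load_layered_config(['defaults.cfg', 'installation.cfg',
#                            os.path.expanduser('~/.gpy_user.cfg')])
# openmp = cfg.getboolean('parallel', 'openmp', fallback=False)
```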
|
ES-DOC/esdoc-jupyterhub | notebooks/bnu/cmip6/models/sandbox-3/landice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-3', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
fschueler/incubator-systemml | projects/breast_cancer/MachineLearning.ipynb | apache-2.0 | %load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import matplotlib.pyplot as plt
import numpy as np
from pyspark.sql.functions import col, max
import systemml # pip3 install systemml
from systemml import MLContext, dml
plt.rcParams['figure.figsize'] = (10, 6)
ml = MLContext(sc)
"""
Explanation: Predicting Breast Cancer Proliferation Scores with Apache Spark and Apache SystemML
Machine Learning
Setup
End of explanation
"""
# Settings
size=64
grayscale = True
c = 1 if grayscale else 3
p = 0.01
tr_sample_filename = os.path.join("data", "train_{}_sample_{}{}.parquet".format(p, size, "_grayscale" if grayscale else ""))
val_sample_filename = os.path.join("data", "val_{}_sample_{}{}.parquet".format(p, size, "_grayscale" if grayscale else ""))
train_df = sqlContext.read.load(tr_sample_filename)
val_df = sqlContext.read.load(val_sample_filename)
train_df, val_df
tc = train_df.count()
vc = val_df.count()
tc, vc, tc + vc
train_df.select(max(col("__INDEX"))).show()
train_df.groupBy("tumor_score").count().show()
val_df.groupBy("tumor_score").count().show()
"""
Explanation: Read in train & val data
End of explanation
"""
# Note: Must use the row index column, or X may not
# necessarily correspond correctly to Y
X_df = train_df.select("__INDEX", "sample")
X_val_df = val_df.select("__INDEX", "sample")
y_df = train_df.select("__INDEX", "tumor_score")
y_val_df = val_df.select("__INDEX", "tumor_score")
X_df, X_val_df, y_df, y_val_df
"""
Explanation: Extract X and Y matrices
End of explanation
"""
script = """
# Scale images to [-1,1]
X = X / 255
X_val = X_val / 255
X = X * 2 - 1
X_val = X_val * 2 - 1
# One-hot encode the labels
num_tumor_classes = 3
n = nrow(y)
n_val = nrow(y_val)
Y = table(seq(1, n), y, n, num_tumor_classes)
Y_val = table(seq(1, n_val), y_val, n_val, num_tumor_classes)
"""
outputs = ("X", "X_val", "Y", "Y_val")
script = dml(script).input(X=X_df, X_val=X_val_df, y=y_df, y_val=y_val_df).output(*outputs)
X, X_val, Y, Y_val = ml.execute(script).get(*outputs)
X, X_val, Y, Y_val
"""
Explanation: Convert to SystemML Matrices
Note: This allows for reuse of the matrices on multiple
subsequent script invocations with only a single
conversion. Additionally, since the underlying RDDs
backing the SystemML matrices are maintained, any
caching will also be maintained.
End of explanation
"""
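The DML preprocessing above (pixel scaling plus the `table(seq(1, n), y, ...)` one-hot encoding) has a direct NumPy analogue; a minimal sketch, assuming 1-based class labels as in the DML script:

```python
import numpy as np

def preprocess(X, y, num_classes=3):
    """Scale pixel values from [0, 255] to [-1, 1] and one-hot encode
    1-based labels, mirroring the DML script above."""
    X_scaled = np.asarray(X, dtype=float) / 255.0 * 2 - 1
    y = np.asarray(y, dtype=int)
    Y = np.zeros((len(y), num_classes))
    Y[np.arange(len(y)), y - 1] = 1  # shift 1-based labels to 0-based columns
    return X_scaled, Y
```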
# script = """
# # Trigger conversions and caching
# # Note: This may take a while, but will enable faster iteration later
# print(sum(X))
# print(sum(Y))
# print(sum(X_val))
# print(sum(Y_val))
# """
# script = dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val)
# ml.execute(script)
"""
Explanation: Trigger Caching (Optional)
Note: This will take a while and is not necessary, but doing it
once will speed up the training below. Otherwise, the cost of
caching will be spread across the first full loop through the
data during training.
End of explanation
"""
# script = """
# write(X, "data/X_"+p+"_sample_binary", format="binary")
# write(Y, "data/Y_"+p+"_sample_binary", format="binary")
# write(X_val, "data/X_val_"+p+"_sample_binary", format="binary")
# write(Y_val, "data/Y_val_"+p+"_sample_binary", format="binary")
# """
# script = dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, p=p)
# ml.execute(script)
"""
Explanation: Save Matrices (Optional)
End of explanation
"""
script = """
source("softmax_clf.dml") as clf
# Hyperparameters & Settings
lr = 1e-2 # learning rate
mu = 0.9 # momentum
decay = 0.999 # learning rate decay constant
batch_size = 50
epochs = 500
log_interval = 1
n = 200 # sample size for overfitting sanity check
# Train
[W, b] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], lr, mu, decay, batch_size, epochs, log_interval)
"""
outputs = ("W", "b")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs)
W, b = ml.execute(script).get(*outputs)
W, b
"""
Explanation: Softmax Classifier
Sanity Check: Overfit Small Portion
End of explanation
"""
script = """
source("softmax_clf.dml") as clf
# Hyperparameters & Settings
lr = 5e-7 # learning rate
mu = 0.5 # momentum
decay = 0.999 # learning rate decay constant
batch_size = 50
epochs = 1
log_interval = 10
# Train
[W, b] = clf::train(X, Y, X_val, Y_val, lr, mu, decay, batch_size, epochs, log_interval)
"""
outputs = ("W", "b")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs)
W, b = ml.execute(script).get(*outputs)
W, b
"""
Explanation: Train
End of explanation
"""
script = """
source("softmax_clf.dml") as clf
# Eval
probs = clf::predict(X, W, b)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, W, b)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val, W=W, b=b).output(*outputs)
loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs)
loss, acc, loss_val, acc_val
"""
Explanation: Eval
End of explanation
"""
script = """
source("convnet.dml") as clf
# Hyperparameters & Settings
lr = 1e-2 # learning rate
mu = 0.9 # momentum
decay = 0.999 # learning rate decay constant
lambda = 0 #5e-04
batch_size = 50
epochs = 300
log_interval = 1
dir = "models/lenet-cnn/sanity/"
n = 200 # sample size for overfitting sanity check
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, dir)
"""
outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size)
.output(*outputs))
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = ml.execute(script).get(*outputs)
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
"""
Explanation: LeNet-like ConvNet
Sanity Check: Overfit Small Portion
End of explanation
"""
script = """
source("convnet.dml") as clf
dir = "models/lenet-cnn/hyperparam-search/"
# TODO: Fix `parfor` so that it can be efficiently used for hyperparameter tuning
j = 1
while(j < 2) {
#parfor(j in 1:10000, par=6) {
# Hyperparameter Sampling & Settings
lr = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # learning rate
mu = as.scalar(rand(rows=1, cols=1, min=0.5, max=0.9)) # momentum
decay = as.scalar(rand(rows=1, cols=1, min=0.9, max=1)) # learning rate decay constant
lambda = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # regularization constant
batch_size = 50
epochs = 1
log_interval = 10
trial_dir = dir + "j/"
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, trial_dir)
# Eval
#probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
#[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
# Save hyperparams
str = "lr: " + lr + ", mu: " + mu + ", decay: " + decay + ", lambda: " + lambda + ", batch_size: " + batch_size
name = dir + accuracy_val + "," + j #+","+accuracy+","+j
write(str, name)
j = j + 1
}
"""
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, C=c, Hin=size, Win=size))
ml.execute(script)
"""
Explanation: Hyperparameter Search
End of explanation
"""
script = """
source("convnet.dml") as clf
# Hyperparameters & Settings
lr = 0.00205 # learning rate
mu = 0.632 # momentum
decay = 0.99 # learning rate decay constant
lambda = 0.00385
batch_size = 50
epochs = 1
log_interval = 10
dir = "models/lenet-cnn/train/"
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, dir)
"""
outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size)
.output(*outputs))
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = ml.execute(script).get(*outputs)
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
"""
Explanation: Train
End of explanation
"""
script = """
source("convnet.dml") as clf
# Eval
probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size,
Wc1=Wc1, bc1=bc1,
Wc2=Wc2, bc2=bc2,
Wc3=Wc3, bc3=bc3,
Wa1=Wa1, ba1=ba1,
Wa2=Wa2, ba2=ba2)
.output(*outputs))
loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs)
loss, acc, loss_val, acc_val
"""
Explanation: Eval
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/quantum/tutorials/barren_plateaus.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install tensorflow==2.4.1
"""
Explanation: Barren plateaus
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/quantum/tutorials/barren_plateaus"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/quantum/tutorials/barren_plateaus.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/quantum/tutorials/barren_plateaus.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/quantum/tutorials/barren_plateaus.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table>
This tutorial covers the results of <a href="https://www.nature.com/articles/s41467-018-07090-4" class="external">McClean, 2019</a>, which show that not just any quantum neural network structure will learn well. In particular, you will see that a certain large family of random quantum circuits does not serve well as quantum neural networks, because its gradients vanish almost everywhere. In this example you won't train a model for a specific learning problem; the focus is on the simpler problem of understanding the behavior of the gradients.
Setup
End of explanation
"""
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
"""
Explanation: Install TensorFlow Quantum:
End of explanation
"""
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
np.random.seed(1234)
"""
Explanation: Now, import TensorFlow and the module dependencies:
End of explanation
"""
def generate_random_qnn(qubits, symbol, depth):
"""Generate random QNN's with the same structure from McClean et al."""
circuit = cirq.Circuit()
for qubit in qubits:
circuit += cirq.ry(np.pi / 4.0)(qubit)
for d in range(depth):
# Add a series of single qubit rotations.
for i, qubit in enumerate(qubits):
random_n = np.random.uniform()
random_rot = np.random.uniform() * 2.0 * np.pi if i != 0 or d != 0 else symbol
if random_n > 2. / 3.:
# Add a Z.
circuit += cirq.rz(random_rot)(qubit)
elif random_n > 1. / 3.:
# Add a Y.
circuit += cirq.ry(random_rot)(qubit)
else:
# Add an X.
circuit += cirq.rx(random_rot)(qubit)
# Add CZ ladder.
for src, dest in zip(qubits, qubits[1:]):
circuit += cirq.CZ(src, dest)
return circuit
generate_random_qnn(cirq.GridQubit.rect(1, 3), sympy.Symbol('theta'), 2)
"""
Explanation: 1. Summary
Random quantum circuits with many blocks that look like this ($R_{P}(\theta)$ is a random Pauli rotation):<br> <img src="./images/barren_2.png" width="700">
Where $f(x)$ is defined as the expectation value of $Z_{a}Z_{b}$ for any qubits $a$ and $b$, there is the problem that the mean of $f'(x)$ is very close to 0 and does not vary much. You will see this below:
2. Generating random circuits
The construction from the paper is straightforward to follow. The following code implements a simple function that generates random quantum circuits, sometimes referred to as quantum neural networks (QNNs), with a given depth on a set of qubits:
End of explanation
"""
def process_batch(circuits, symbol, op):
"""Compute the variance of a batch of expectations w.r.t. op on each circuit that
contains `symbol`. Note that this method sets up a new compute graph every time it is
called so it isn't as performant as possible."""
# Setup a simple layer to batch compute the expectation gradients.
expectation = tfq.layers.Expectation()
# Prep the inputs as tensors
circuit_tensor = tfq.convert_to_tensor(circuits)
values_tensor = tf.convert_to_tensor(
np.random.uniform(0, 2 * np.pi, (n_circuits, 1)).astype(np.float32))
# Use TensorFlow GradientTape to track gradients.
with tf.GradientTape() as g:
g.watch(values_tensor)
forward = expectation(circuit_tensor,
operators=op,
symbol_names=[symbol],
symbol_values=values_tensor)
# Return the spread (standard deviation) of gradients across all circuits.
grads = g.gradient(forward, values_tensor)
grad_var = tf.math.reduce_std(grads, axis=0)
return grad_var.numpy()[0]
"""
Explanation: The authors investigate the gradient of a single parameter $\theta_{1,1}$. Let's follow along by placing a sympy.Symbol in the circuit where $\theta_{1,1}$ would be. Since the authors do not analyze the statistics for any other symbols in the circuit, let's replace them with random values right away.
3. Running the circuits
Generate a few of these circuits along with an observable to test the claim that the gradients don't vary much. First, generate a batch of random circuits. Choose a random ZZ observable and batch-compute the gradients and variance using TensorFlow Quantum.
3.1 Batch variance computation
Let's write a helper function that computes the variance of the gradient of a given observable over a batch of circuits:
End of explanation
"""
n_qubits = [2 * i for i in range(2, 7)]  # Ranges studied in paper are between 2 and 24.
depth = 50 # Ranges studied in paper are between 50 and 500.
n_circuits = 200
theta_var = []
for n in n_qubits:
# Generate the random circuits and observable for the given n.
qubits = cirq.GridQubit.rect(1, n)
symbol = sympy.Symbol('theta')
circuits = [
generate_random_qnn(qubits, symbol, depth) for _ in range(n_circuits)
]
op = cirq.Z(qubits[0]) * cirq.Z(qubits[1])
theta_var.append(process_batch(circuits, symbol, op))
plt.semilogy(n_qubits, theta_var)
plt.title('Gradient Variance in QNNs')
plt.xlabel('n_qubits')
plt.ylabel('$\\partial \\theta$ variance')
plt.show()
"""
Explanation: 3.2 Setup and run
Choose the number of random circuits to generate, along with their depth and the number of qubits they act on. Then plot the results.
End of explanation
"""
def generate_identity_qnn(qubits, symbol, block_depth, total_depth):
"""Generate random QNN's with the same structure from Grant et al."""
circuit = cirq.Circuit()
# Generate initial block with symbol.
prep_and_U = generate_random_qnn(qubits, symbol, block_depth)
circuit += prep_and_U
# Generate dagger of initial block without symbol.
U_dagger = (prep_and_U[1:])**-1
circuit += cirq.resolve_parameters(
U_dagger, param_resolver={symbol: np.random.uniform() * 2 * np.pi})
for d in range(total_depth - 1):
# Get a random QNN.
prep_and_U_circuit = generate_random_qnn(qubits, np.random.uniform() * 2 * np.pi, block_depth)
# Remove the state-prep component
U_circuit = prep_and_U_circuit[1:]
# Add U
circuit += U_circuit
# Add U^dagger
circuit += U_circuit**-1
return circuit
generate_identity_qnn(cirq.GridQubit.rect(1, 3), sympy.Symbol('theta'), 2, 2)
"""
Explanation: This plot shows that for quantum machine learning problems, you can't simply guess a random QNN ansatz and hope for the best. Some structure must be present in the model circuit in order for gradients to vary enough for learning to be possible.
4. Heuristics
An interesting heuristic from <a href="https://arxiv.org/pdf/1903.05076.pdf" class="external">Grant, 2019</a> allows one to start off close to random, but not quite. Using the same circuits as McClean et al., the authors propose a different initialization technique for the classical control parameters in order to avoid barren plateaus. The technique starts some layers with totally random control parameters, but in the layers immediately following, chooses parameters such that the initial transformation made by the first few layers is undone. The authors call this an identity block.
The advantage of this heuristic is that by changing just a single parameter, all other blocks outside of the current block remain the identity, and the gradient signal comes through much stronger than before. This allows the user to pick and choose which variables and blocks to modify in order to get a strong gradient signal. The heuristic does not prevent the user from falling into a barren plateau during the training phase (and it restricts fully simultaneous updates); it only guarantees that you start outside of a plateau.
4.1 Constructing a new QNN
Now construct a function to generate identity block QNNs. This implementation differs slightly from the one in the paper. For now, it is enough to make the gradient behavior of a single parameter consistent with McClean et al., so some simplifications can be made.
To generate an identity block and train the model, you generally want $U1(\theta_{1a}) U1(\theta_{1b})^{\dagger}$ and not $U1(\theta_1) U1(\theta_1)^{\dagger}$. Initially $\theta_{1a}$ and $\theta_{1b}$ are the same angles, but they are learned independently. Otherwise, you will always get the identity, even after training. The choice of the number of identity blocks is empirical. The deeper the block, the smaller the variance in the middle of the block, but at the start and end of the block the variance of the parameter gradients should be large.
End of explanation
"""
block_depth = 10
total_depth = 5
heuristic_theta_var = []
for n in n_qubits:
# Generate the identity block circuits and observable for the given n.
qubits = cirq.GridQubit.rect(1, n)
symbol = sympy.Symbol('theta')
circuits = [
generate_identity_qnn(qubits, symbol, block_depth, total_depth)
for _ in range(n_circuits)
]
op = cirq.Z(qubits[0]) * cirq.Z(qubits[1])
heuristic_theta_var.append(process_batch(circuits, symbol, op))
plt.semilogy(n_qubits, theta_var)
plt.semilogy(n_qubits, heuristic_theta_var)
plt.title('Heuristic vs. Random')
plt.xlabel('n_qubits')
plt.ylabel('$\\partial \\theta$ variance')
plt.show()
"""
Explanation: 4.2 Comparison
Here you can see that the heuristic does help keep the gradient variance from vanishing as quickly:
End of explanation
"""
|