| repo_name | path | license | cells | types |
|---|---|---|---|---|
| llscm0202/BIGDATA2017 | ATIVIDADE2/Lab1_Rev_Python_Numpy_AlgeLin.ipynb | gpl-3.0 |
[
"Lab 1: Review of Basic Concepts\nIn this notebook we will review the basics of the Python programming language, Linear Algebra, the NumPy library, and Functional Programming concepts.\nThe first part covers the syntax differences between Python and Java or C/C++ and ways to optimize code. The second part presents the basics of the NumPy library and how to use it to apply Linear Algebra concepts. Finally, the third part introduces Functional Programming concepts such as Lambda Expressions and Higher-Order Functions.\nTo navigate the notebook, run each cell with the SHIFT-ENTER shortcut; this executes the instructions of the In cell and shows the result in the corresponding Out cell. All variables created in a cell can be accessed in every subsequent cell.\nExercise cells start with the comment # EXERCICIO and the code to be completed is marked with <COMPLETAR> comments.\n In this notebook: \nPart 1: Python\nPart 2: NumPy\nPart 3: Functional Programming\nPart 1: Python\n (1a) Variable Declarations \nVariable types in Python are determined dynamically by the interpreter. There is no need to declare them.",
"x = 10 # x é um inteiro\nprint type(x)\nprint \"oi\"\n\nx = 1.3 # x é um ponto flutuante\nprint type(x)\n\nx = \"Ola\" # x é uma string\nprint type(x)\n\nx = [1, 5, 10] # x é uma lista\nprint type(x)",
"(1b) Indentation \nIn Python, indentation plays the role of braces in delimiting the block of statements of a function, conditional, or loop. The start of a block is marked by the \":\" character.",
"x = 10\n\nfor i in range(20):\n # Início da repetição For\n x = x + 1\n if x%2 == 0:\n # Instrução se condição verdadeira\n x = x + 1\n else:\n # Instrução se a condição for falsa\n x = x + 2\n # Fim do bloco de repetição For\nprint x # Isso está fora do for! Boa Ideia!",
"(1c) Functions \nPython functions can take as many inputs as needed and return multiple outputs. A function declaration is preceded by the def keyword.",
"def Soma(x,y):\n return x+y\n\ndef Mult(x,y):\n return x*y\n\ndef SomaMult(x,y):\n return x+y, x*y # múltiplas saídas separadas por vírgula\n\nprint Soma(10,2), Mult(10,2), SomaMult(10,2)\n\n# O retorno de múltiplas saídas podem ser atribuídas diretamente para múltiplas variáveis\nw,z = SomaMult(10,2)\nprint w, z",
"(1d) Special Types \nBesides the basic variable types, Python provides lists (list), tuples (tuple), and dictionaries (dict).\nLists and tuples aggregate values, not necessarily of the same type. The difference between them is that lists are mutable (they can be changed) while tuples are immutable. Both are indexed starting from index 0.\nDictionaries are associative arrays that map a key (of any type) to a value (also of any type).",
"lista = [1, 2, True, \"palavra\"]\ntupla = (1, 2, True, \"palavra\")\n\nlista[1] = 3.0\nprint lista\n\ntupla[1] = 3.0 # Vai dar erro!\n\n# range(n) gera uma lista de valores entre 0 e n-1\n# len(lista) retorna o tamanho de uma lista\ndef DobraValores(lista):\n for i in range(len(lista)):\n lista[i] = lista[i]*2\n return lista\n\nlista = [1,2,3,4]\nlista2 = DobraValores(lista)\n\nprint lista, lista2 # As listas são passadas como referência para as funções\n\ndicionario = { \"Ana\":12, \"Joao\":13, \"Jose\":17 } # declaração inicial do dicionário, pode ser {} para dic. vazio\nprint dicionario[\"Ana\"] # acesso ao elemento pela chave entre colchetes\n\ndicionario[\"Maria\"] = 11 # podemos alterar ou inserir um novo elemento\n\nprint dicionario\n\nprint \"As chaves do dicionário são: \", dicionario.keys()\nprint \"Os valores do dicionário são: \", dicionario.values()",
"(1e) Iterators \nTo iterate over a list, tuple, or dictionary we can use the in keyword. Combined with the for statement, it visits each element in turn. The same keyword can also be used to check whether an element is contained in a list.",
"lista = range(10) # gera a lista [0,..,9]\nprint 8 in lista, 12 in lista\n\nfor x in lista:\n print x",
"(1f) Generators and List Comprehensions \nPython allows a more concise (and optimized) syntax for building a new list according to a specific rule:\n[ funcao(x) for x in listaGeradora ]",
"# Jeito tradicional, mas não otimizado\nlistaOriginal = [1,2,3,4,5,6,7,8,9]\nlistaQuadrada = []\nfor x in listaOriginal:\n listaQuadrada.append(x*x)\n\nprint listaQuadrada\n\n# Através do List Comprehension\nlistaQuadrada = [ x*x for x in listaOriginal ]\nprint listaQuadrada",
"When we need to work with very large lists, but without random access to their elements, we can use generators.\nA generator defines the instructions for producing a sequence, computing each element only when it is requested.",
"listaQuadrada = ( x*x for x in listaOriginal )\nprint listaQuadrada # os elementos ainda não foram calculados\n\nfor x in listaQuadrada:\n print x # a cada iteração apenas o próximo elemento é calculado, a lista não existe na memória",
"(1g) Files \nFiles are read with the open() command, which returns a file handle; we can then use a for loop to read each line of the file iteratively as a string.",
"import os.path\n\ncaminho = os.path.join('Data','Aula01') # garante o uso correto de / ou \\\\ para diretórios\narquivo = os.path.join(caminho,'exemplo.txt')\n\nf = open(arquivo)\nfor linha in f:\n print linha\nf.close() ",
"Part 2: NumPy\nNumPy is a Python library for working with arrays. It provides abstractions for using arrays as vectors and matrices, and it is optimized for speed and memory efficiency. The basic NumPy type is the ndarray, a fixed-size multidimensional array whose elements all share a single type.\n (2a) Scalar Multiplication \nFor this exercise, create an ndarray containing the elements [1, 2, 3] and multiply that array by 5. Use np.array() to create the array; note that one of the possible parameters of this function is a Python list. Scalar multiplication can be done with the * operator.\nNote that if you create an array from a Python list, you get a one-dimensional array, which is equivalent to a vector.",
"# Como convenção importaremos a biblioteca numpy como np\nimport numpy as np\n\n# EXERCICIO\n# Crie uma array numpy com os valores 1, 2, 3\narraySimples = np.arange(1,4,1)#<COMPLETAR>\n# Faça o produto escalar multiplicando a array por 5\nvezesCinco = arraySimples*5#<COMPLETAR>\nprint arraySimples\nprint vezesCinco\n\n# TESTE do exercício (2a)\nassert np.all(vezesCinco == [5, 10, 15]), 'valor incorreto para vezesCinco'\nprint \"Correto!\"",
"(2b) Element-wise Multiplication and Dot Product \nElement-wise multiplication is computed as: $$ \\mathbf{x} \\odot \\mathbf{y} = \\begin{bmatrix} x_1 y_1 \\\\ x_2 y_2 \\\\ \\vdots \\\\ x_n y_n \\end{bmatrix} $$\nAnd the dot product of two vectors of the same length $ n $: $$ \\mathbf{w} \\cdot \\mathbf{x} = \\sum_{i=1}^n w_i x_i $$\nIn some books you will also see $ \\mathbf{w} \\cdot \\mathbf{x} $ written as $ \\mathbf{w}^\\top \\mathbf{x} $.\nThe NumPy array type supports both operations: using the * operator on two vectors or matrices performs element-wise multiplication. For the dot product you can use either np.dot() or ndarray.dot(). E.g., given vectors $x$ and $y$, you can write np.dot(x,y) or x.dot(y).",
"# EXERCICIO\n# A função np.arange(inicio,fim,passo) cria uma lista iniciando em inicio, terminando antes do fim seguindo passo\nu = np.arange(0, 5, .5) # np.array([0,0.5,1.0,...,4.5])\nv = np.arange(5, 10, .5)\n\nelementoAelemento = u*v#<COMPLETAR>\nprodInterno = u.dot(v)#<COMPLETAR>\nprint 'u: {0}'.format(u)\nprint 'v: {0}'.format(v)\nprint '\\nelementoAelemento\\n{0}'.format(elementoAelemento)\nprint '\\nprodInterno\\n{0}'.format(prodInterno)\n\n# TESTE do exercício (2b)\nassert np.all(elementoAelemento == [ 0., 2.75, 6., 9.75, 14., 18.75, 24., 29.75, 36., 42.75]), \"Valores incorretos para elementoAelemento\"\nprint \"Primeiro teste OK\"\nassert prodInterno==183.75, \"Valor incorreto para prodInterno\"\nprint \"Segundo teste OK\"",
"(2c) Matrix Multiplication \nMatrix multiplication is defined by:\n$$ [\\mathbf{X} \\mathbf{Y}]_{i,j} = \\sum_{r=1}^n \\mathbf{X}_{i,r} \\mathbf{Y}_{r,j} $$\nNote that the number of columns of the first matrix must equal the number of rows of the second matrix, denoted by $ n $.\nIn NumPy we use np.matrix() when we want to create a matrix from Python lists. With this type we can use the * operator for matrix multiplication, np.multiply() for element-wise multiplication, np.matrix.transpose() or .T for the transpose, and np.linalg.inv() for the inverse of a square matrix.",
"# EXERCICIO\nfrom numpy.linalg import inv # agora podemos utilizar o comando inv() sem preceder com np.linalg\n\n# Criar uma matriz com listas de listas\nA = np.matrix([[1,2,3,4],[5,6,7,8]])\nprint 'A:\\n{0}'.format(A)\n\n# Imprima a matriz transposta\nprint '\\nA transposta:\\n{0}'.format(A.T)#<COMPLETAR>)\n\n# Multiplique A por sua Transposta\nAAt = A*A.T #<COMPLETAR>\nprint '\\nAAt:\\n{0}'.format(AAt)\n\n# Inverta AAt com o comando inv()\nAAtInv = inv(AAt)#<COMPLETAR>\nprint '\\nAAtInv:\\n{0}'.format(AAtInv)\n\n# Mostre que a matriz vezes sua inversa é a identidade\n# .round(n) arredonda os valores para n casas decimais\nprint '\\nAAtInv * AAt:\\n{0}'.format((AAt*AAtInv).round(4))#<COMPLETAR>).round(4))\n\n# TESTE do exercício (2c)\nassert np.all(AAt == np.matrix([[30, 70], [70, 174]])), \"Valores incorretos para AAt\"\nprint \"Primeiro teste OK\"\nassert np.allclose(AAtInv, np.matrix([[0.54375, -0.21875], [-0.21875, 0.09375]])), \"Valor incorreto para AAtInv\"\nprint \"Segundo teste OK\"",
"(2d) Slices \nIn NumPy vectors and matrices we can select subsets of values during indexing. E.g.:\nv[:10] selects the first 10 elements\nv[2:] selects the elements from the third position onward\nv[-5:] returns the last 5 elements\nv[:-5] returns the elements from the start up to the fifth-from-last\nv[1:3] returns elements 1 and 2",
"# EXERCICIO\natributos = np.array([1, 2, 3, 4])\nprint 'atributos:\\n{0}'.format(atributos)\n\n# Crie uma array com os 3 últimos elementos de atributos\nultTres = atributos[-3:]#[<COMPLETAR>]\n\nprint '\\nÚltimos três:\\n{0}'.format(ultTres)\n\n# TEST do exercício (2d)\nassert np.all(ultTres == [2, 3, 4]), \"Valores incorretos para ultTres\"\nprint \"Teste OK\"",
"Part 3: Functional Programming \n (3a) Anonymous (Lambda) Functions \nA lambda function/expression is used to define simple functions consisting of a single statement. Just use the lambda keyword followed by the list of input parameters, then : and the expression to be evaluated. For example, lambda x, y: x + y is an anonymous function that computes the sum of two values.\nLambda expressions produce a function when interpreted by Python. They are useful when we need to apply a simple function to many elements of a list.\nTo learn more about lambdas: Lambda Functions, Lambda Tutorial, and Python Functions.\nIn the exercise below, create a lambda function that multiplies a value by 10 and assign it to the designated variable.",
"# EXERCICIO\n# Lembre-se que: \"lambda x, y: x + y\" cria uma função que adiciona dois valores\nmult10 = lambda x: x*10#<COMPLETAR>\nprint mult10(5)\n\n# Note that the function still shows its name as <lambda>\nprint '\\n', mult10\n\nassert mult10(10)==100, \"Função incorreta\"\nprint \"Teste OK\"",
"Lambda functions have restrictions on the expression they compute. The expression cannot contain print or the += increment, for example.\nThe input parameters, however, can be of any type, including tuples and lists.",
"p1 = (1,3)\np2 = (3,7)\n\neuclidiana2D = lambda (x0,y0), (x1,y1): np.sqrt(((x0-x1)**2) + ((y0-y1)**2)) # sqrt é a raíz quadrada\nprint euclidiana2D(p1,p2)",
"(3b) Functional Logic \nIn the functional paradigm we work with immutable data; that is, there is no concept of variables: once a value is bound to a name, that value cannot change.",
"# Lógica não-funcional\na = 0\ndef inc():\n global a\n a = a + 1\n\n# Lógica funcional\ndef incFn(a):\n return a+1",
"(3c) Higher-Order Functions \nFor this reason, the use of loops (for, while) is discouraged; they are replaced by recursion and higher-order functions. A higher-order function is a function that takes one or more functions as parameters or returns a function.",
"# Função para somar 3 valores\ndef Soma3(a,b,c):\n return a+b+c\n\n# Função que soma apenas dois valores\ndef Soma2(a,b):\n return a+b\n\n# Soma 3 poderia ser criado a partir de Soma2:\nSoma3Fn = lambda a,b,c: Soma2(Soma2(a,b),c)",
"This kind of function helps produce declarative code, in which the code itself explains what is being done.\nAn interesting example is building a function that returns another function.",
"# Cria uma função que calcula a eq. do segundo grau no formato ax^2 + bx + c\ndef Eq2grau(a,b,c):\n def f(x):\n return a*x**2 + b*x + c\n return f\n\nf = Eq2grau(10,2,1)\nprint f(10)\n\n# EXERCICIO\n\n# Escreva uma função Soma(x) que retorna uma função que recebe um valor y e soma ao x.\ndef Soma(x):\n def SomaY(y):\n return x+y\n return SomaY\n #<COMPLETAR>\n\nSoma2 = lambda a,b: Soma(a)(b)\nSoma3 = lambda a,b,c: Soma(Soma(a)(b))(c)\n\nprint Soma2(1,3), Soma3(1,2,3)\n\nassert Soma3(1,2,3)==6, \"Erro na função\"\nprint \"Ok\"",
"(3d) Map, Reduce, Filter \nThese three functions are used for transforming lists in the functional paradigm. They take as parameters a function f and a list l.\nMap: applies the function to each element of the list, producing a new list\nReduce: applies the function cumulatively to pairs of elements of the list, returning a single aggregate value at the end\nFilter: produces a new list containing the elements of l for which applying f returns True\nFor the next exercises we will use the FuncionalW class to create a syntax similar to the one we will use with Spark.",
"class FuncionalW(object):\n def __init__(self, data):\n self.data = data\n def map(self, function):\n \"\"\"Call `map` on the items in `data` using the provided `function`\"\"\"\n return FuncionalW(map(function, self.data))\n def reduce(self, function):\n \"\"\"Call `reduce` on the items in `data` using the provided `function`\"\"\"\n return reduce(function, self.data)\n def filter(self, function):\n \"\"\"Call `filter` on the items in `data` using the provided `function`\"\"\"\n return FuncionalW(filter(function, self.data))\n def __eq__(self, other):\n return (isinstance(other, self.__class__)\n and self.__dict__ == other.__dict__)\n def __getattr__(self, name): return getattr(self.data, name)\n def __getitem__(self, k): return self.data.__getitem__(k)\n def __repr__(self): return 'FuncionalW({0})'.format(repr(self.data))\n def __str__(self): return 'FuncionalW({0})'.format(str(self.data))\n\n# Exemplo de Map\n\n# Criaremos uma lista\nlista = FuncionalW(range(10))\n\n# Criar uma função a ser aplicada nessa lista\nf = lambda x: x*x\n\n# Programação Imperativa\nresultado1 = FuncionalW([])\nfor x in lista:\n resultado1.append(f(x))\nprint \"Resultado: {}\".format(resultado1) \n\n# Funcional\nprint \"Resultado usando Map: {}\".format(lista.map(f))\n\n# Exemplo de Reduce\n\n# Criaremos uma lista\nlista = FuncionalW(range(1,10))\n\n# Criar uma função a ser aplicada nessa lista\nf = lambda x,y: x*y\n\n# Programação Imperativa\nprodutoria = 1\nfor x in lista:\n produtoria = f(produtoria,x)\nprint \"Resultado: {}\".format(produtoria) \n\n# Funcional\nprint \"Resultado usando Reduce: {}\".format(lista.reduce(f))\n\n# EXERCICIO\n\ndataset = FuncionalW(range(10))\n\n# Multiplique cada elemento por 5\nmapResult = dataset.map(lambda x:x*5)#<COMPLETAR>\n# Filtre eliminando os elementos ímpares\n# No Python \"x % 2\" é o resultado do resto da divisão de x por 2\nfilterResult = dataset.filter(lambda x: x%2==0)#<COMPLETAR>\n# Some os elementos\nreduceResult = dataset.reduce(lambda x,y: x+y)#<COMPLETAR>\n\nprint 'mapResult: {0}'.format(mapResult)\nprint '\\nfilterResult: {0}'.format(filterResult)\nprint '\\nreduceResult: {0}'.format(reduceResult)\n\nassert mapResult == FuncionalW([0, 5, 10, 15, 20, 25, 30, 35, 40, 45]),\"Valor incorreto para mapResult\"\nprint \"Teste 1 OK\"\n\nassert filterResult == FuncionalW([0, 2, 4, 6, 8]), \"Valor incorreto para filterResult\"\nprint \"Teste 2 OK\"\n\nassert reduceResult == 45, \"Valor incorreto para reduceResult\"\nprint \"Teste 3 OK\"",
"To shorten the code and improve readability, we can compose the functions in sequence",
"dataset = FuncionalW(range(10))\n\nSoma = (dataset\n .map(lambda x: x*5)\n .filter(lambda x: x%2==0)\n .reduce(lambda x,y: x+y)\n )\nprint Soma\n\n# EXERCICIO\n\n# split() divide a string em palavras\nTexto = FuncionalW(\"Esse texto tem varias palavras cada linha tem palavras escritas palavras palavras palavras Esse texto esta escrito\".split())\n\n# Vamos fazer uma contagem da palavra 'palavras' no texto\n\n# Crie uma função lambda que recebe duas entradas e retorna se são iguais ou não\nIgual = lambda x,y:x==y #<COMPLETAR>\n\n# Crie uma função lambda que utiliza a função Igual para detectar se a entrada é igual a palavra 'palavras'\nDetectaPalavra = lambda x: Igual(x,\"palavras\")#<COMPLETAR>\n\n# 1) Filtre as palavras iguais a 'palavras'\n# 2) Mapeie todos os elementos para o valor 1\n# 3) Reduza para a somatória\ncontagem = (Texto\n .filter(DetectaPalavra)#<COMPLETAR>\n .map(lambda x:1)#<COMPLETAR>\n .reduce(lambda x,y: x+y) #<COMPLETAR>\n )\nprint(contagem)\n#print \"Existem {0} ocorrências de 'palavras'\".format(contagem)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
| steinam/teacher | jup_notebooks/data-science-ipython-notebooks-master/matplotlib/04.03-Errorbars.ipynb | mit |
[
"<!--BOOK_INFORMATION-->\n<img align=\"left\" style=\"padding-right:10px;\" src=\"figures/PDSH-cover-small.png\">\nThis notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.\nThe text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!\nNo changes were made to the contents of this notebook from the original.\n<!--NAVIGATION-->\n< Simple Scatter Plots | Contents | Density and Contour Plots >\nVisualizing Errors\nFor any scientific measurement, accurate accounting for errors is nearly as important, if not more important, than accurate reporting of the number itself.\nFor example, imagine that I am using some astrophysical observations to estimate the Hubble Constant, the local measurement of the expansion rate of the Universe.\nI know that the current literature suggests a value of around 71 (km/s)/Mpc, and I measure a value of 74 (km/s)/Mpc with my method. Are the values consistent? The only correct answer, given this information, is this: there is no way to know.\nSuppose I augment this information with reported uncertainties: the current literature suggests a value of around 71 $\\pm$ 2.5 (km/s)/Mpc, and my method has measured a value of 74 $\\pm$ 5 (km/s)/Mpc. Now are the values consistent? That is a question that can be quantitatively answered.\nIn visualization of data and results, showing these errors effectively can make a plot convey much more complete information.\nBasic Errorbars\nA basic errorbar can be created with a single Matplotlib function call:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')\nimport numpy as np\n\nx = np.linspace(0, 10, 50)\ndy = 0.8\ny = np.sin(x) + dy * np.random.randn(50)\n\nplt.errorbar(x, y, yerr=dy, fmt='.k');",
"Here the fmt is a format code controlling the appearance of lines and points, and has the same syntax as the shorthand used in plt.plot, outlined in Simple Line Plots and Simple Scatter Plots.\nIn addition to these basic options, the errorbar function has many options to fine-tune the outputs.\nUsing these additional options you can easily customize the aesthetics of your errorbar plot.\nI often find it helpful, especially in crowded plots, to make the errorbars lighter than the points themselves:",
"plt.errorbar(x, y, yerr=dy, fmt='o', color='black',\n ecolor='lightgray', elinewidth=3, capsize=0);",
"In addition to these options, you can also specify horizontal errorbars (xerr), one-sided errorbars, and many other variants.\nFor more information on the options available, refer to the docstring of plt.errorbar.\nContinuous Errors\nIn some situations it is desirable to show errorbars on continuous quantities.\nThough Matplotlib does not have a built-in convenience routine for this type of application, it's relatively easy to combine primitives like plt.plot and plt.fill_between for a useful result.\nHere we'll perform a simple Gaussian process regression, using the Scikit-Learn API (see Introducing Scikit-Learn for details).\nThis is a method of fitting a very flexible non-parametric function to data with a continuous measure of the uncertainty.\nWe won't delve into the details of Gaussian process regression at this point, but will focus instead on how you might visualize such a continuous error measurement:",
"from sklearn.gaussian_process import GaussianProcess\n\n# define the model and draw some data\nmodel = lambda x: x * np.sin(x)\nxdata = np.array([1, 3, 5, 6, 8])\nydata = model(xdata)\n\n# Compute the Gaussian process fit\ngp = GaussianProcess(corr='cubic', theta0=1e-2, thetaL=1e-4, thetaU=1E-1,\n random_start=100)\ngp.fit(xdata[:, np.newaxis], ydata)\n\nxfit = np.linspace(0, 10, 1000)\nyfit, MSE = gp.predict(xfit[:, np.newaxis], eval_MSE=True)\ndyfit = 2 * np.sqrt(MSE) # 2*sigma ~ 95% confidence region",
"We now have xfit, yfit, and dyfit, which sample the continuous fit to our data.\nWe could pass these to the plt.errorbar function as above, but we don't really want to plot 1,000 points with 1,000 errorbars.\nInstead, we can use the plt.fill_between function with a light color to visualize this continuous error:",
"# Visualize the result\nplt.plot(xdata, ydata, 'or')\nplt.plot(xfit, yfit, '-', color='gray')\n\nplt.fill_between(xfit, yfit - dyfit, yfit + dyfit,\n color='gray', alpha=0.2)\nplt.xlim(0, 10);",
"Note what we've done here with the fill_between function: we pass an x value, then the lower y-bound, then the upper y-bound, and the result is that the area between these regions is filled.\nThe resulting figure gives a very intuitive view into what the Gaussian process regression algorithm is doing: in regions near a measured data point, the model is strongly constrained and this is reflected in the small model errors.\nIn regions far from a measured data point, the model is not strongly constrained, and the model errors increase.\nFor more information on the options available in plt.fill_between() (and the closely related plt.fill() function), see the function docstring or the Matplotlib documentation.\nFinally, if this seems a bit too low level for your taste, refer to Visualization With Seaborn, where we discuss the Seaborn package, which has a more streamlined API for visualizing this type of continuous errorbar.\n<!--NAVIGATION-->\n< Simple Scatter Plots | Contents | Density and Contour Plots >"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
| drvinceknight/cfm | assets/assessment/mock/assignment.ipynb | mit |
[
"Computing for Mathematics - Mock individual coursework\nThis jupyter notebook contains questions that will resemble the questions in your individual coursework.\nImportant Do not delete the cells containing: \n```\nBEGIN SOLUTION\nEND SOLUTION\n```\nwrite your solution attempts in those cells.\nIf you would like to submit this notebook:\n\nChange the name of the notebook from main to: <student_number>. For example, if your student number is c1234567 then change the name of the notebook to c1234567.\nWrite all your solution attempts in the correct locations;\nSave the notebook (File>Save As);\nFollow the instructions given in class/email to submit.\n\nQuestion 1\nOutput the evaluation of the following expressions exactly.\na. \\(\\frac{(9a^2bc^4) ^ {\\frac{1}{2}}}{6ab^{\\frac{3}{2}}c}\\)",
"### BEGIN SOLUTION\n\n\n### END SOLUTION",
"b. \\((2 ^ {\\frac{1}{2}} + 2) ^ 2 - 2 ^ {\\frac{5}{2}}\\)",
"### BEGIN SOLUTION\n\n\n### END SOLUTION",
"\\((\\frac{1}{8}) ^ {\\frac{4}{3}}\\)",
"### BEGIN SOLUTION\n\n\n### END SOLUTION",
"Question 2\nWrite a function expand that takes a given mathematical expression and returns the expanded expression.",
"def expand(expression):\n ### BEGIN SOLUTION\n\n\n### END SOLUTION",
"Question 3\nThe matrix \\(D\\) is given by \\(D = \\begin{pmatrix} 1 & 2 & a \\\\ 3 & 1 & 0 \\\\ 1 & 1 & 1 \\end{pmatrix}\\) where \\(a\\ne 2\\).\na. Create a variable D which has value the matrix \\(D\\).",
"### BEGIN SOLUTION\n\n\n### END SOLUTION",
"b. Create a variable D_inv with value the inverse of \\(D\\).",
"### BEGIN SOLUTION\n\n\n### END SOLUTION",
"c. Using D_inv output the solution of the following system of equations:\n\\[\n\\begin{array}{r}\n x + 2y + 4z = 3\\\\\n 3x + y = 4\\\\\n x + y + z = 1\n\\end{array}\n\\]",
"### BEGIN SOLUTION\n\n\n### END SOLUTION",
"Question 4\nDuring a game of frisbee between a handler and their dog the handler chooses to randomly select if they throw using a backhand or a forehand: 25% of the time they will throw a backhand.\nBecause of the way their dog chooses to approach a flying frisbee they catch it with the following probabilities:\n\n80% of the time when it is thrown using a backhand\n90% of the time when it is thrown using a forehand\n\na. Write a function sample_experiment() that simulates a given throw and returns the throw type (as a string with value \"backhand\" or \"forehand\") and whether it was caught (as a boolean: either True or False).",
"import random\n\n\ndef sample_experiment():\n \"\"\"\n Returns the throw type and whether it was caught\n \"\"\"\n ### BEGIN SOLUTION\n\n\n### END SOLUTION\n return throw, caught",
"b. Using 1,000,000 samples create a variable probability_of_catch which has value an estimate for the probability of the frisbee being caught.",
"### BEGIN SOLUTION\n\n\n### END SOLUTION",
"c. Using the above, create a variable probability_of_forehand_given_drop which has value an estimate for the probability of the frisbee being thrown with a forehand given that it was not caught.",
"### BEGIN SOLUTION\n\n\n### END SOLUTION"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
| ES-DOC/esdoc-jupyterhub | notebooks/nuist/cmip6/models/sandbox-3/ocean.ipynb | gpl-3.0 |
[
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: NUIST\nSource ID: SANDBOX-3\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:34\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nuist', 'sandbox-3', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momemtum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tpin3694/tpin3694.github.io
|
machine-learning/handling_outliers.ipynb
|
mit
|
[
"Title: Handling Outliers\nSlug: handling_outliers \nSummary: How to handling outliers for machine learning in Python. \nDate: 2016-09-06 12:00\nCategory: Machine Learning\nTags: Preprocessing Structured Data\nAuthors: Chris Albon\n<a alt=\"Handling outliers\" href=\"https://machinelearningflashcards.com\">\n <img src=\"handling_outliers/Outlier_print.png\" class=\"flashcard center-block\">\n</a>\nPreliminaries",
"# Load library\nimport pandas as pd",
"Create Data",
"# Create DataFrame\nhouses = pd.DataFrame()\nhouses['Price'] = [534433, 392333, 293222, 4322032]\nhouses['Bathrooms'] = [2, 3.5, 2, 116]\nhouses['Square_Feet'] = [1500, 2500, 1500, 48000]\n\nhouses",
"Option 1: Drop",
"# Drop observations greater than some value\nhouses[houses['Bathrooms'] < 20]",
"Option 2: Mark",
"# Load library\nimport numpy as np\n\n# Create feature based on boolean condition\nhouses['Outlier'] = np.where(houses['Bathrooms'] < 20, 0, 1)\n\n# Show data\nhouses",
"Option 3: Rescale",
"# Log feature\nhouses['Log_Of_Square_Feet'] = [np.log(x) for x in houses['Square_Feet']]\n\n# Show data\nhouses"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
smorton2/think-stats
|
code/chap06ex.ipynb
|
gpl-3.0
|
[
"Examples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"from __future__ import print_function, division\n\n%matplotlib inline\n\nimport numpy as np\n\nimport brfss\n\nimport thinkstats2\nimport thinkplot",
"I'll start with the data from the BRFSS again.",
"df = brfss.ReadBrfss(nrows=None)",
"Here are the mean and standard deviation of female height in cm.",
"female = df[df.sex==2]\nfemale_heights = female.htm3.dropna()\nmean, std = female_heights.mean(), female_heights.std()\nmean, std",
"NormalPdf returns a Pdf object that represents the normal distribution with the given parameters.\nDensity returns a probability density, which doesn't mean much by itself.",
"pdf = thinkstats2.NormalPdf(mean, std)\npdf.Density(mean + std)",
"thinkplot provides Pdf, which plots the probability density with a smooth curve.",
"thinkplot.Pdf(pdf, label='normal')\nthinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])",
"Pdf provides MakePmf, which returns a Pmf object that approximates the Pdf.",
"pmf = pdf.MakePmf()\nthinkplot.Pmf(pmf, label='normal')\nthinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])",
"If you have a Pmf, you can also plot it using Pdf, if you have reason to think it should be represented as a smooth curve.",
"thinkplot.Pdf(pmf, label='normal')\nthinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])",
"Using a sample from the actual distribution, we can estimate the PDF using Kernel Density Estimation (KDE).\nIf you run this a few times, you'll see how much variation there is in the estimate.",
"thinkplot.Pdf(pdf, label='normal')\n\nsample = np.random.normal(mean, std, 500)\nsample_pdf = thinkstats2.EstimatedPdf(sample, label='sample')\nthinkplot.Pdf(sample_pdf, label='sample KDE')\nthinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])",
"Moments\nRaw moments are just sums of powers.",
"def RawMoment(xs, k):\n return sum(x**k for x in xs) / len(xs)",
"The first raw moment is the mean. The other raw moments don't mean much.",
"RawMoment(female_heights, 1), RawMoment(female_heights, 2), RawMoment(female_heights, 3)\n\ndef Mean(xs):\n return RawMoment(xs, 1)\n\nMean(female_heights)",
"The central moments are powers of distances from the mean.",
"def CentralMoment(xs, k):\n mean = RawMoment(xs, 1)\n return sum((x - mean)**k for x in xs) / len(xs)",
"The first central moment is approximately 0. The second central moment is the variance.",
"CentralMoment(female_heights, 1), CentralMoment(female_heights, 2), CentralMoment(female_heights, 3)\n\ndef Var(xs):\n return CentralMoment(xs, 2)\n\nVar(female_heights)",
"The standardized moments are ratios of central moments, with powers chosen to make the dimensions cancel.",
"def StandardizedMoment(xs, k):\n var = CentralMoment(xs, 2)\n std = np.sqrt(var)\n return CentralMoment(xs, k) / std**k",
"The third standardized moment is skewness.",
"StandardizedMoment(female_heights, 1), StandardizedMoment(female_heights, 2), StandardizedMoment(female_heights, 3)\n\ndef Skewness(xs):\n return StandardizedMoment(xs, 3)\n\nSkewness(female_heights)",
"Normally a negative skewness indicates that the distribution has a longer tail on the left. In that case, the mean is usually less than the median.",
"def Median(xs):\n cdf = thinkstats2.Cdf(xs)\n return cdf.Value(0.5)",
"But in this case the mean is greater than the median, which indicates skew to the right.",
"Mean(female_heights), Median(female_heights)",
"Because the skewness is based on the third moment, it is not robust; that is, it depends strongly on a few outliers. Pearson's median skewness is more robust.",
"def PearsonMedianSkewness(xs):\n median = Median(xs)\n mean = RawMoment(xs, 1)\n var = CentralMoment(xs, 2)\n std = np.sqrt(var)\n gp = 3 * (mean - median) / std\n return gp",
"Pearson's skewness is positive, indicating that the distribution of female heights is slightly skewed to the right.",
"PearsonMedianSkewness(female_heights)",
"Birth weights\nLet's look at the distribution of birth weights again.",
"import first\n\nlive, firsts, others = first.MakeFrames()",
"Based on KDE, it looks like the distribution is skewed to the left.",
"birth_weights = live.totalwgt_lb.dropna()\npdf = thinkstats2.EstimatedPdf(birth_weights)\nthinkplot.Pdf(pdf, label='birth weight')\nthinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PDF')",
"The mean is less than the median, which is consistent with left skew.",
"Mean(birth_weights), Median(birth_weights)",
"And both ways of computing skew are negative, which is consistent with left skew.",
"Skewness(birth_weights), PearsonMedianSkewness(birth_weights)",
"Adult weights\nNow let's look at adult weights from the BRFSS. The distribution looks skewed to the right.",
"adult_weights = df.wtkg2.dropna()\npdf = thinkstats2.EstimatedPdf(adult_weights)\nthinkplot.Pdf(pdf, label='Adult weight')\nthinkplot.Config(xlabel='Adult weight (kg)', ylabel='PDF')",
"The mean is greater than the median, which is consistent with skew to the right.",
"Mean(adult_weights), Median(adult_weights)",
"And both ways of computing skewness are positive.",
"Skewness(adult_weights), PearsonMedianSkewness(adult_weights)",
"Exercises\nThe distribution of income is famously skewed to the right. In this exercise, we’ll measure how strong that skew is.\nThe Current Population Survey (CPS) is a joint effort of the Bureau of Labor Statistics and the Census Bureau to study income and related variables. Data collected in 2013 is available from http://www.census.gov/hhes/www/cpstables/032013/hhinc/toc.htm. I downloaded hinc06.xls, which is an Excel spreadsheet with information about household income, and converted it to hinc06.csv, a CSV file you will find in the repository for this book. You will also find hinc2.py, which reads this file and transforms the data.\nThe dataset is in the form of a series of income ranges and the number of respondents who fell in each range. The lowest range includes respondents who reported annual household income “Under \\$5000.” The highest range includes respondents who made “\\$250,000 or more.”\nTo estimate mean and other statistics from these data, we have to make some assumptions about the lower and upper bounds, and how the values are distributed in each range. hinc2.py provides InterpolateSample, which shows one way to model this data. It takes a DataFrame with a column, income, that contains the upper bound of each range, and freq, which contains the number of respondents in each frame.\nIt also takes log_upper, which is an assumed upper bound on the highest range, expressed in log10 dollars. The default value, log_upper=6.0 represents the assumption that the largest income among the respondents is $10^6$, or one million dollars.\nInterpolateSample generates a pseudo-sample; that is, a sample of household incomes that yields the same number of respondents in each range as the actual data. It assumes that incomes in each range are equally spaced on a log10 scale.",
"def InterpolateSample(df, log_upper=6.0):\n \"\"\"Makes a sample of log10 household income.\n\n Assumes that log10 income is uniform in each range.\n\n df: DataFrame with columns income and freq\n log_upper: log10 of the assumed upper bound for the highest range\n\n returns: NumPy array of log10 household income\n \"\"\"\n # compute the log10 of the upper bound for each range\n df['log_upper'] = np.log10(df.income)\n\n # get the lower bounds by shifting the upper bound and filling in\n # the first element\n df['log_lower'] = df.log_upper.shift(1)\n df.loc[0, 'log_lower'] = 3.0\n\n # plug in a value for the unknown upper bound of the highest range\n df.loc[41, 'log_upper'] = log_upper\n \n # use the freq column to generate the right number of values in\n # each range\n arrays = []\n for _, row in df.iterrows():\n vals = np.linspace(row.log_lower, row.log_upper, row.freq)\n arrays.append(vals)\n\n # collect the arrays into a single sample\n log_sample = np.concatenate(arrays)\n return log_sample\n\n\nimport hinc\nincome_df = hinc.ReadData()\n\nlog_sample = InterpolateSample(income_df, log_upper=6.0)\n\nlog_cdf = thinkstats2.Cdf(log_sample)\nthinkplot.Cdf(log_cdf)\nthinkplot.Config(xlabel='Household income (log $)',\n ylabel='CDF')\n\nsample = np.power(10, log_sample)\n\ncdf = thinkstats2.Cdf(sample)\nthinkplot.Cdf(cdf)\nthinkplot.Config(xlabel='Household income ($)',\n ylabel='CDF')",
"Compute the median, mean, skewness and Pearson’s skewness of the resulting sample. What fraction of households report a taxable income below the mean? How do the results depend on the assumed upper bound?",
"# Solution goes here\n\n# Solution goes here\n\n# Solution goes here",
"All of this is based on an assumption that the highest income is one million dollars, but that's certainly not correct. What happens to the skew if the upper bound is 10 million?\nWithout better information about the top of this distribution, we can't say much about the skewness of the distribution."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
KIPAC/StatisticalMethods
|
tutorials/cepheids.ipynb
|
gpl-2.0
|
[
"Tutorial: Cepheid Period-Luminosity Data\nCepheids are stars whose brightness oscillates with a stable period, the logarithm of which appears to be strongly correlated with their mean absolute magnitude. Traditionally, these stars form part of the \"distance ladder\" establishing the relationship between cosmological redshift and distance, i.e. the Hubble expansion rate. The absolute luminosity for a given period is measured in our Galaxy, or one very nearby, where the distance can be determined by geometric paralax; measured fluxes and periods can then be used to estimate the distance to larger numbers of more distant galaxies where other distance proxies such as supernova luminosities are calibrated. This procedure requires the period-luminosity relation to be universal (the same relation must hold in every galaxy).\nWe'll be using this data set to explore some aspects of hierarchical models, which are a natural way to handle data for populations of sources. In this notebook, we'll get familiar with a particular data set (from Riess et al. 2011), and think about what kind of model might be appropriate.",
"exec(open('tbc.py').read()) # define TBC and TBC_above\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport dill",
"Exploring the data\nThe data tables we need have been extracted from the paper itself and can be downloaded using these links:\n* R11redsh.dat\n* R11ceph.dat\nThe first of these text tables lists the redshifts for a sample of galaxies, while the second contains measurements of a number of Cepheids in each galaxy. Take a look at each - there are more verbose comments at the top explaining what each column means in the files themselves.",
"TBC() # edit path as needed\n# data_path = '../ignore/' \n\n# columns: id gal m merr P logO_H bias\ncepheids = np.loadtxt(data_path + 'R11ceph.dat')\n\n# columns: gal z\ngalaxies = np.loadtxt(data_path + 'R11redsh.dat')",
"This cell reorganizes the data into a dictionary indexed by galaxy (NGC number). We also go ahead and compute the base-10 log of the period for each Cepheid.",
"ngc_numbers = [int(g) for g in galaxies[:,0]]\ndata = {int(x[0]):{'z':x[1]} for x in galaxies}\nfor g in ngc_numbers:\n j = np.where(cepheids[:,1] == g)[0]\n data[g]['id'] = np.array([int(i) for i in cepheids[j,0]])\n data[g]['Ngal'] = len(data[g]['id'])\n data[g]['m'] = cepheids[j,2]\n data[g]['merr'] = cepheids[j,3]\n data[g]['P'] = cepheids[j,4]\n data[g]['logO_H'] = cepheids[j,5]\n data[g]['bias'] = cepheids[j,6]\n data[g]['logP'] = np.log10(data[g]['P'])",
"Have a look at the data structure for one of the galaxies:",
"data[ngc_numbers[0]]",
"Let's quickly plot the period vs. apparent magnitude. Smaller magnitudes mean brighter sources, so we reverse the Y axis.",
"plt.rcParams['figure.figsize'] = (7.0, 5.0)\nfor g in ngc_numbers:\n plt.errorbar(data[g]['logP'], data[g]['m'], yerr=data[g]['merr'], fmt='none');\nplt.xlabel('log10 period/days', fontsize=14);\nplt.ylabel('apparent magnitude', fontsize=14);\nplt.gca().invert_yaxis();",
"Of course, the apparent magnitude (related to the observed flux) is not a good basis for comparing Cepheid populations in galaxies at different distances from us. We'll want to convert this to an intrinsic property, the luminosity or absolute magnitude.\nApparent magnitude, $m$, is converted to absolute magnitude, $M$, via the luminosity distance, $d_\\mathrm{L}(z)$:\n$m = M + 5 \\log_{10}\\left(\\frac{d_\\mathrm{L}(z)}{\\mathrm{Mpc}}\\right) + 25$.\nThis is the silly astronomer's version of the relationship between flux, luminosity and distance, $F \\propto L/d_\\mathrm{L}^2$.\nThe host galaxies in the R11 sample are all very nearby (check the maximum redshift read in above), so let's assume that distance is proportional to redshift $z$ via Hubble's law:\n$d_\\mathrm{L}(z) = \\frac{cz}{H_0}$,\nwhere $c = 3\\times10^5 {\\rm ~km~s}^{-1}$ and $H_0 = 70 {\\rm ~km~s}^{-1} {\\rm Mpc}^{-1}$. Note that having these constants correct is not important for accounting for the relative distance to each galaxy, i.e. getting their period-luminosity scatterplots to line up.\nGo ahead and add entries for the luminosity distance (scalar) and Cepheid absolute magnitudes (array) to the dictionary for each galaxy.",
"c = 3.0e5 # km/s\nH0 = 70.0 # km/s/Mpc\n\nTBC()\n# for g in ngc_numbers:\n# data[g]['dL'] = ...\n# data[g]['M'] = ...",
"Note that, since we're assuming a perfectly known, additive translation from $m$ to $M$, merr describes the uncertainty either $m$ or $M$.\nLet's see whether the Cepheids for each galaxy line up when plotting the absolute magnitudes:",
"plt.rcParams['figure.figsize'] = (7.0, 5.0)\nfor g in ngc_numbers:\n plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');\nplt.xlabel('log10 period/days', fontsize=14);\nplt.ylabel('absolute magnitude', fontsize=14);\nplt.gca().invert_yaxis();",
"Checkpoint: Pretty much! You should see a fairly clear trend.\nThe data also include log(O/H), a measure of the metallicity of each star. Let's see how that looks vs. the magnitude:",
"plt.rcParams['figure.figsize'] = (7.0, 5.0)\nfor g in ngc_numbers:\n plt.errorbar(data[g]['logO_H'], data[g]['M'], yerr=data[g]['merr'], fmt='none');\nplt.xlabel('log10 O/H', fontsize=14);\nplt.ylabel('absolute magnitude', fontsize=14);\nplt.gca().invert_yaxis();",
"We would have to check in the paper whether the metallicity is expected to have an influence on the Cepheid luminosity or period (or, indeed, whether some metallicity-based correction has already been applied). Let's keep things relatively simple for this example by neglecting any of that, and focusing only on the supplied periods and magntitudes.\nSimilarly, the final column, bias is something that has already been applied by R11. If we were specialists we might second guess this, but for our purposes we can leave it alone.\nOne last thing, which can be seen in the plots above already, is that the distribution of periods is nonuniform, and varies from galaxy to galaxy. Let's look at period histograms for each galaxy, because we can.",
"plt.rcParams['figure.figsize'] = (16.0, 12.0)\nfig, ax = plt.subplots(3,4)\nfor i,g in enumerate(ngc_numbers):\n j = np.unravel_index(i,ax.shape)\n ax[j].hist(data[g]['logP'], label='NGC'+str(g));\n ax[j].set_xlabel('log10 period/days', fontsize=14);\n ax[j].set_ylabel('frequency', fontsize=14);\n ax[j].legend();",
"The most obvious explanation for this variation is a selection effect due to the frequency with which each galaxy was observed - we can't have measured periods much shorter or longer than the observation cadence with any fidelity.\nDefining a model\nIt's standard to fit a linear relation between absolute magnitude and log-period, which translates to a power-law between luminosity and period. That looks broadly reasonable, based on the plots above.\nUltimately, we'll be interested in whether the period-luminosity relations of the different galaxies are indeed universal, meaning that exactly the same relationship holds for ech galaxy. So, let's consider that the parameters of those relations may not be identical:\n$M = a_i + b_i \\log_{10}P$, with $i$ indexing the host galaxies.\nHowever, we wouldn't expect the intercepts and slopes for different galaxies to be entirely unrelated to one another either. This is one aspect that makes the problem hierarchical - there are properties of each galaxy ($a_i$ and $b_i$) that we believe should follow some parent distribution, even if they aren't identical.\nSimilarly, at a lower level, we've assumed that the Cepheids within each host follow the scaling relation above. However, we should allow for the possibility that they don't follow the relation exactly; there may be some intrinsic scatter that makes the true magnitudes deviate from the linear relation above, even without accounting for measurement errors. This is another level of the hierarchy - there is a property of every Cepheid in a given host ($M$) that follows a parent distribution specific to that host (some to-be-specified scatter distribution about the linear relation above).\nAs for the measured quantities, for the apparent magnitudes we are given estimates and error bars (as is common). 
With nothing else to go on, we'd normally assume these are Gaussian uncertainties, and normally this should be accurate enough for background-subtracted fluxes measured from many photons. We are given no information about uncertainty in the periods, so we'll have to assume those are precise. (If they aren't, this might contribute to any intrinsic scatter we see.) Similarly, we'll take the redshifts to be precise, as measuring them to high accuracy and precision should be straightforward for such nearby galaxies. The fact that the galaxies are so nearby also means that we don't really need to worry about modeling uncertainties in the conversion from absolute to apparent magnitudes (the linear approximation of $d_\mathrm{L}$ above).\nTo summarize our model:\n* We have precise periods and Gaussian distributed apparent magnitudes for a number of Cepheids in different galaxies.\n* Each galaxy's redshift is given precisely, allowing us to compute a distance and convert between apparent and absolute magnitude with negligible uncertainty.\n* Within a given galaxy, Cepheid magnitudes and log-periods are assumed to follow an overall linear trend, with some intrinsic scatter.\n* The parameters of the line and scatter may not be identical from galaxy to galaxy, but we'll assume they are drawn from some to-be-specified parent distributions.\nWe'll get more specific about what PDFs to use later, but this should be enough to draw a nice, hierarchical PGM for this problem. Do so.\n\nYour PGM here\n\nFinishing up\nLet's save ourselves having to repeat the data handling above by saving this session.",
"TBC() # edit path as desired\n# dill.dump_session('../ignore/cepheids.db')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
albangaignard/BioLinkedData-HandsOn
|
sparql_backup.ipynb
|
mit
|
[
"Prise en main SPARQL\nExercice 1 : DESCRIBE\nLister les informations relatives à l'identifiant ENSG00000139618 (Ensembl).",
"from SPARQLWrapper import SPARQLWrapper, JSON, XML\n\n# Set SPARQL endpoint\nendpoint = \"http://sparql.wikipathways.org/\"\n\nsparql = SPARQLWrapper(endpoint)\n\n# Set prefix\nprefix = \"\"\"\nPREFIX wp: <http://vocabularies.wikipathways.org/wp#>\nPREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\nPREFIX dcterms: <http://purl.org/dc/terms/>\n\"\"\"\n\n# Query\nquery = \"\"\"\nDESCRIBE <http://identifiers.org/wikipathways/WP43_r84102>\n\"\"\"\n\nsparql.setQuery(prefix + query)\nsparql.setReturnFormat(\"csv\")\n\nresults = sparql.query().convert()\n\n# print results\nprint(str(results))",
"Exercice 2 : SELECT AND FILTER \nLister les gènes \"CYP\".",
"# Query\nquery = \"\"\"\nSELECT DISTINCT str(?label) as ?geneProduct\nWHERE {\n ?geneProduct a wp:GeneProduct . \n ?geneProduct rdfs:label ?label .\n\n FILTER regex(str(?label), \"CYP\"). \n}\n\"\"\"\n\nsparql.setQuery(prefix + query)\nsparql.setReturnFormat(JSON)\n\nresults = sparql.query().convert()\n\n# print results\nfor result in results[\"results\"][\"bindings\"]:\n print result['geneProduct']['value']",
"Exercice 3 \nLister les pathways qui ont le gène \"CYP\".",
"# Query\nquery = \"\"\"\nSELECT DISTINCT ?pathway str(?label) as ?geneProduct\nWHERE {\n ?geneProduct a wp:GeneProduct . \n ?geneProduct rdfs:label ?label .\n ?geneProduct dcterms:isPartOf ?pathway .\n ?pathway a wp:Pathway .\n \n FILTER regex(str(?label), \"CYP\"). \n}\n\"\"\"\n\nsparql.setQuery(prefix + query)\nsparql.setReturnFormat(JSON)\n\nresults = sparql.query().convert()\n\n# print results\nfor result in results[\"results\"][\"bindings\"]:\n print result['pathway']['value'] + \"\\t\" + result['geneProduct']['value']",
"Exercice 4 : OPTIONAL\nA partir de la requête précédente, lister les identifiants EntrezGene des gènes \"CYP\".",
"query = \"\"\"\nSELECT DISTINCT ?pathway str(?label) as ?geneProduct ?idEG\nWHERE {\n ?geneProduct a wp:GeneProduct . \n OPTIONAL{?geneProduct wp:bdbEntrezGene ?idEG .}\n ?geneProduct rdfs:label ?label .\n ?geneProduct dcterms:isPartOf ?pathway .\n ?pathway a wp:Pathway .\n \n FILTER regex(str(?label), \"CYP\"). \n}\n\"\"\"\n\nsparql.setQuery(prefix + query)\nsparql.setReturnFormat(JSON)\n\nresults = sparql.query().convert()\n\n# print results\nfor result in results[\"results\"][\"bindings\"]:\n if (typeOf(result['idEG']['value'] != undefined)) :\n print result['pathway']['value'] + \"\\t\" + result['geneProduct']['value'] + \"\\t\" + result['idEG']['value']"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gmckercher/2015-TTU-SWC
|
pythonic-data-analysis/pythonic-data-analysis.ipynb
|
mit
|
[
"Pythonic Data Analysis\nOne of the nice aspects of Python is that it contains many powerful tools built into the language itself:\n- Iteration and unpacking\n- dictionaries\n- first-class functions\n- filepath manipulation (standard library)\nIn this section we'll look at how to use these concepts to write simple, yet powerful data analysis scripts.\nBasic\nLet's start by opening a time series from the GFS to play with.",
"from netCDF4 import Dataset, num2date\ndata = Dataset('data/model-gfs.nc', 'r')\n\n# Convert the array of time numbers to datetimes\ntime_var = data['time']\ntime = num2date(time_var[:], time_var.units).squeeze()",
"Now let's make a basic time series plot with matplotlib.",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\n# Just a single panel of time vs. Temperature_isobaric, which was taken from 1000mb\nfig, ax = plt.subplots(1, 1, figsize=(12, 6))\nax.plot(time, data.variables['Temperature_isobaric'][:].squeeze(), 'r-')\nax.grid()",
"Let's work a bit harder and add axis labels, set up some better date ticking, and format the grid lines to go with the new ticking so that we can more easily denote the days.",
"fig, ax = plt.subplots(1, 1, figsize=(12, 6))\nax.plot(time, data.variables['Temperature_isobaric'][:].squeeze(), 'r-', linewidth=2)\n\n# Add x/y axis labels with a bigger font\nlabel_font = dict(size=16)\nax.set_xlabel('Forecast time (UTC)', fontdict=label_font)\nax.set_ylabel('Temperature', fontdict=label_font)\n\n# Set the x-axis to do major ticks on the days and label them like 'Jul 20'\nfrom matplotlib.dates import DateFormatter, DayLocator, HourLocator\nax.xaxis.set_major_locator(DayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%h %d'))\n\n# Set up minor ticks with the hours 6, 12, 18 written as '18Z'\nax.xaxis.set_minor_locator(HourLocator(range(6, 24, 6)))\nax.xaxis.set_minor_formatter(DateFormatter('%HZ'))\n\n# Highlight the major x-axis grid lines with a thicker, dashed line\nax.grid(axis='x', linestyle='--', color='#666699', linewidth=1.0)\nax.grid(which='minor', axis='x')\nax.grid(axis='y')",
"Containers and iteration\nPython's built-in lists and tuples facilitate easy ways of looping over groups of data. One useful utility is zip, which iterates over multiple lists at the same time and gives tuples of the elements.",
"vals = [1, 2, 3, 4]\nnames = ['one', 'two', 'three', 'four']\nz = list(zip(names, vals))",
"Coincidentally, zip can also be used to unzip, using the ability of Python to pass a sequence as a set of positional arguments:",
"list(zip(*z))",
"Another useful part of the containers is unpacking, which allows assignment directly from a container",
"one, two, three, four = vals\nprint(three)",
"This same functionality enables multiple return values from functions",
"import math\ndef polar_to_cartesian(r, th):\n x = r * math.cos(th)\n y = r * math.sin(th)\n return x, y\n\nX,Y = polar_to_cartesian(2, math.pi / 3)\nprint(X, Y)",
"You can put those together and iterate over multiple lists, and unpack those items into individual variables:",
"for n, v in zip(names, vals):\n print('%s == %d' % (n, v))",
"Now let's add Relative_humidity_isobaric to our plot on a new subplot. But we need to do this in a loop without duplicating code. To do so we need:\n- A list of variables to plot\n- A way to convert the variable from the file to an axis label\n- A list of plot formats",
"# This is just to keep the different cells in the notebook from rehashing this\ndef set_defaults(ax):\n # Set the x-axis to do major ticks on the days and label them like 'Jul 20'\n from matplotlib.dates import DateFormatter, DayLocator, HourLocator\n ax.xaxis.set_major_locator(DayLocator())\n ax.xaxis.set_major_formatter(DateFormatter('%h %d'))\n\n # Set up minor ticks with the hours 6, 12, 18 written as '18Z'\n ax.xaxis.set_minor_locator(HourLocator(range(6, 24, 6)))\n ax.xaxis.set_minor_formatter(DateFormatter('%HZ'))\n\n # Highlight the major x-axis grid lines with a thicker, dashed line\n ax.grid(axis='x', linestyle='--', color='#666699', linewidth=1.0)\n ax.grid(which='minor', axis='x')\n ax.grid(axis='y')\n \n # Normal x-axis label\n ax.set_xlabel('Forecast time (UTC)', fontdict=dict(size=16))\n\n# This creates a figure and 2 subplots (axes is a list 2 Axes objects)\nfig, axes = plt.subplots(1, 2, figsize=(18, 6))\n\n# What should we loop over?\n",
"Dictionaries\nDictionaries are another powerful language feature that allow you to create arbitrary mappings between two sets of things (key -> value). They can be abused, certainly, but they give programmers the ability to create simple data structures on the fly.",
"# From before, we can now create a way to map a integer value to its\n# english representation\nnum_map = dict(zip(vals, names))\nprint(num_map)",
"Of course, the values can themselves be dictionaries:",
"states = dict(Colorado={'abbreviation': 'CO', 'capitol': 'Denver', 'notes': 'Home!'},\n Oklahoma={'abbreviation': 'OK', 'capitol': 'Oklahoma City', 'flat': True},\n Kansas={'abbreviation': 'KS', 'capitol': 'Topeka', 'flat': True})\n\nprint(states['Oklahoma']['abbreviation'])",
"Dictionaries can also be used to pass a set of keyword arguments:",
"def print_states(Colorado, **the_rest):\n print(Colorado['abbreviation'], 'is', Colorado['notes'])\n for state, info in the_rest.items():\n print('The capitol of %s is %s' % (state, info['capitol']))\n\nprint_states(**states)",
"Now let's add a third panel with the Temperature_surface. This should have the same plotting style as Temperature, but without duplicating the style information. It might also be nice to set the title of the axes with the variable name for clarity.\nHint: We already have name, that might make a nice key\nHint #2: plot takes a bunch of keyword arguments",
"# Should probably specify plotting styles here\n\nfig, axes = plt.subplots(1, 3, figsize=(18, 6))\n\n# Now need to loop over the subplots",
"First class functions\nFunctions in Python can be passed around and used, just like any other object. This makes it very easy to pull out bits of functionality and re-use them.",
"# Function using our dictionary from earlier\ndef to_english(i):\n return num_map[i]\n\nto_english(2)",
"So let's sort our list of numbers using their english representation",
"vals\n\nvals.sort(key=to_english)\nvals",
"Let's convert the temperature on our plots from Kelvin to Farenheit. Since functions are first-class objects, we can stuff them into dictionaries as well....",
"# Conversion functions, and some structure to use them...\n\nfig, axes = plt.subplots(1, 3, figsize=(18, 6))\n\n# Loop over the subplots",
"A bit more advanced\nLet's now go ahead and put both temperature variables on the same plot. We'll need to change to loop over a list of varnames for each plot. We'll also need to label each plot with the level surface vs. isobaric, and add a legend. This will also require separating out the plot styles across the type and level.",
"# Need some styles here\n\n# Now only two subplots\nfig, axes = plt.subplots(1, 2, figsize=(18, 6))\n\n# Looping over each subplot\n",
"Filepath manipulation\nThe Python standard library has a lot of utilties for manipulating paths. One such utility is the glob module, which makes it easy to find files matching a UNIX shell-style file wildcard:",
"import glob\nglob.glob('data/*.nc')",
"The os.path module provides functions that make it easy to manipulate paths, making munging such information trivial. For instance, to get the base filename from those paths above:",
"import os.path\nfilenames = [os.path.basename(fname) for fname in glob.glob('data/*.nc')]\nfilenames",
"And even remove the file extensions:",
"[os.path.splitext(name)[0] for name in filenames]",
"Can we use this to process all files beginning with model- in the data/ directory? Let's start by just collecting the NetCDF4 datasets into a dictionary with the model name (the part after model-, but before .nc) capitalized as the key.",
"# Create empty dict where we'll store stuff\nmodel_data = dict()\n\n# Glob for file and loop\n",
"Now we need to update the plotting to loop over the model datasets. Note that we need to do the time conversion step from the beginning inside this loop, so that each dataset has it done. The subplots should have their title set based on the model. It might also be nice to use consistent y-axis limits for the variables.",
"# ?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io
|
sql/sort_by_multiple_columns.ipynb
|
mit
|
[
"Title: Sort By Multiple Columns \nSlug: sort_by_multiple_columns\nSummary: Sort By Multiple Columns in SQL. \nDate: 2017-01-16 12:00\nCategory: SQL\nTags: Basics\nAuthors: Chris Albon \nNote: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you have not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL, your version might have some differences in syntax.\nFor more, check out Learning SQL by Alan Beaulieu.",
"# Ignore\n%load_ext sql\n%sql sqlite://\n%config SqlMagic.feedback = False",
"Create Data",
"%%sql\n\n-- Create a table of criminals\nCREATE TABLE criminals (pid, name, age, sex, city, minor);\nINSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);\nINSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);\nINSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'San Francisco', 0);\nINSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'San Francisco', 0);\nINSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Petaluma', 0);",
"Sort By Ascending Age And Then Alphabetically By Name",
"%%sql\n\n-- Select all unique\nSELECT distinct *\n\n-- From the criminals table\nFROM criminals\n\n-- Sort by ascending age\nORDER BY age ASC, name"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
spectralDNS/shenfun
|
binder/Shenfun-introduction.ipynb
|
bsd-2-clause
|
[
"<center>\nShenfun - High-Performance Computing platform for the Spectral Galerkin method\n<div><img src=\"https://rawcdn.githack.com/spectralDNS/spectralutilities/f3419a3e6c40dad55be5dcca51f6e0e21713dd90/figures/Chebyshev_Polynomials_of_the_First_Kind.svg\" width=\"300\"></div>\n\n<div class=\"sl-block\" style=\"height: auto; width: 600px;\">\n <div>\n <p><center style=\"font-size:1.2em\">Professor Mikael Mortensen</p>\n <p><center>Department of Mathematics, University of Oslo</p>\n <p><center>Presented at the International Conference on Scientific Computing and Applications (ICSCA), Xiamen, China, 28/5 - 2019</p>\n </div>\n</div>\n\nShenfun - facts\n\nShenfun is named in honour of <strong>Professor Jie Shen</strong> for his seminal work on the spectral Galerkin method:-) \nShenfun is a high performance computing platform for solving partial differential equations (PDEs) with the spectral Galerkin method (with numerical integration).\nShenfun has been run with 65,000 processors on a Cray XC40.\nShenfun is a high-level <strong>Python</strong> package originally developed for pseudo-spectral turbulence simulations.\n\n<img src=\"https://rawcdn.githack.com/spectralDNS/spectralutilities/473129742f0b5f8d57e8c647809272c0ced99a45/movies/RB_200k_small.png\" style=\"float:left\" width=\"300\"> <img src=\"https://rawcdn.githack.com/spectralDNS/spectralutilities/473129742f0b5f8d57e8c647809272c0ced99a45/movies/isotropic_cropped.gif\" style=\"float:right\" width=\"200\"> \n<p style=\"clear: both;\">",
"from shenfun import *\nprint('hello world')",
"The Spectral Galerkin method\nAdvantages\n\nAccuracy (spectral)\nSparsity - very low memory use for coefficient matrices \nEfficiency - Very fast linear algebra solvers\nRobustness (round-off errors) - condition numbers much lower than for collocation methods\n\nDisadvantages\n\nNo complex grids. Structured tensor product grids only\nNonlinear terms must be implemented explicitly (pseudospectral)\nGenerally considered more difficult to implement since we solve equations in spectral space\n\nThe Spectral Galerkin method (in a nutshell)\napproximates solutions $u(x)$ using global <strong>trial</strong> functions $\\phi_k(x)$ and unknown expansion coefficients $\\hat{u}_k$\n$$\nu(x) = \\sum_{k=0}^{N-1}\\hat{u}_k \\phi_k(x)\n$$\nMultidimensional solutions are formed from outer products of 1D bases\n$$\nu(x, y) = \\sum_{k=0}^{N_0-1}\\sum_{l=0}^{N_1-1}\\hat{u}{kl} \\phi{kl}(x, y)\\quad \\text{ or }\\quad\nu(x, y, z) = \\sum_{k=0}^{N_0-1}\\sum_{l=0}^{N_1-1} \\sum_{m=0}^{N_2-1}\\hat{u}{klm} \\phi{klm}(x, y, z)\n$$\nwhere, for example\n$$\n\\begin{align}\n\\phi_{kl}(x, y) &= T_k(x) L_l(y)\\\n\\phi_{klm}(x, y, z) &= T_k(x) L_l(y) \\exp(\\text{i}mz)\n\\end{align}\n$$\n$T_k$ and $L_k$ are Chebyshev and Legendre polynomials.\nThe Spectral Galerkin method\nsolves PDEs, like Poisson's equation\n\\begin{align}\n\\nabla^2 u(x) &= f(x), \\quad x \\in [-1, 1] \\\nu(\\pm 1) &= 0\n\\end{align}\nusing variational forms by the <strong>method of weighted residuals</strong>. I.e., multiply PDE by a test function $v$ and integrate over the domain. For Poisson this leads to the problem:\nFind $u \\in H^1_0$ such that \n$$(\\nabla u, \\nabla v)_w^N = -(f, v)_w^N \\quad \\forall v \\in H^1_0$$\nHere $(u, v)_w^{N}$ is a weighted inner product and $v(=\\phi_j)$ is a <strong>test</strong> function. 
Note that test and trial functions are the same for the Galerkin method.\nWeighted inner products\nThe weighted inner product is defined as\n$$\n (u, v)_w = \\int_{\\Omega} u \\overline{v} w \\, d\\Omega,\n$$\nwhere $w(\\mathbf{x})$ is a weight associated with the chosen basis (different bases have different weights). The overline represents a complex conjugate (for Fourier).\n$\\Omega$ is a tensor product domain spanned by the chosen 1D bases.\nIn Shenfun quadrature is used for the integrals\n1D with Chebyshev basis:\n$$\n(u, v)_w^N = \\sum_{i=0}^{N-1} u(x_i) v(x_i) \\omega_i \\approx \\int_{-1}^1 \\frac{u v}{\\sqrt{1-x^2}} \\, {dx},\n$$\nwhere $\\{\\omega_i\\}_{i=0}^{N-1}$ are the quadrature weights associated with the chosen basis and quadrature rule. The associated quadrature points are denoted as $\\{x_i\\}_{i=0}^{N-1}$. \n2D with mixed Chebyshev-Fourier:\n$$\n(u, v)_w^N = \\int_{-1}^1\\int_{0}^{2\\pi} \\frac{u \\overline{v}}{2\\pi\\sqrt{1-x^2}} \\, {dxdy} \\approx \\sum_{i=0}^{N_0-1}\\sum_{j=0}^{N_1-1} u(x_i, y_j) \\overline{v}(x_i, y_j) \\omega^{(x)}_i \\omega_j^{(y)} ,\n$$\nSpectral Galerkin solution procedure\n\nChoose function space(s) satisfying the correct boundary conditions\nTransform PDEs to variational forms using inner products\nAssemble variational forms and solve resulting linear algebra systems\n\nOrthogonal bases\n<p style=\"margin-bottom:1cm;\">\n\n| Family | Basis | Domain |\n| :---: | :---: | :---: |\n| Chebyshev | $$\\{T_k\\}_{k=0}^{N-1}$$ | $$[-1, 1]$$ |\n| Legendre | $$\\{L_k\\}_{k=0}^{N-1}$$ | $$[-1, 1]$$ |\n| Fourier | $$\\{\\exp(\\text{i}kx)\\}_{k=-N/2}^{N/2-1}$$| $$[0, 2\\pi]$$ |\n| Hermite | $$\\{H_k\\}_{k=0}^{N-1}$$ | $$[-\\infty, \\infty]$$|\n| Laguerre | $$\\{La_k\\}_{k=0}^{N-1}$$ | $$[0, \\infty]$$ |",
"from shenfun import *\nN = 8\nC = FunctionSpace(N, 'Chebyshev', quad='GC', domain=[-2, 2])\nL = FunctionSpace(N, 'Legendre')\nx, w = C.points_and_weights()\nprint(L.points_and_weights())",
"Jie Shen's bases with Dirichlet bcs\n<p style=\"margin-bottom:1cm;\">\n\n\n| family | Basis | Boundary condition |\n|-----------|-----------------------|----------|\n| Chebyshev | $$\\{T_k-T_{k+2}\\}_{k=0}^{N-3}$$ | $$u(\\pm 1) = 0$$ |\n| Legendre | $$\\{L_k-L_{k+2}\\}_{k=0}^{N-3}$$ | $$u(\\pm 1) = 0$$ |\n| Hermite | $$\\exp(-x^2)\\{H_k\\}_{k=0}^{N-1}$$ | $$u(\\pm \\infty) = 0$$ |\n| Laguerre | $$\\exp(-x/2)\\{La_k-La_{k+1}\\}_{k=0}^{N-2}$$| $$u(0) = u(\\infty) = 0$$ |",
"C0 = FunctionSpace(N, 'Chebyshev', bc=(0, 0))\nL0 = FunctionSpace(N, 'Legendre', bc=(0, 0))\nH0 = FunctionSpace(N, 'Hermite')\nLa = FunctionSpace(N, 'Laguerre', bc=(0, None))",
"Shen's bases with Neumann $u'(\\pm 1) = 0$\n<p style=\"margin-bottom:1cm;\">\n\n| family | Basis |\n|-----------|-----------------------|\n| Chebyshev | $$\\left\\{T_k-\\frac{k^2}{(k+2)^2}T_{k+2}\\right\\}_{k=0}^{N-3}$$ | \n| Legendre | $$\\left\\{L_k-\\frac{k(k+1)}{(k+2)(k+3)}L_{k+2}\\right\\}_{k=0}^{N-3}$$ |",
"CN = FunctionSpace(N, 'Chebyshev', bc={'left': {'N': 0}, 'right': {'N': 0}})\nLN = FunctionSpace(N, 'Legendre', bc={'left': {'N': 0}, 'right': {'N': 0}})",
"Shen's biharmonic bases $u(\\pm 1) = u'(\\pm 1) = 0$\n<p style=\"margin-bottom:1cm;\">\n\n| family | Basis |\n|-----------| :-----------------: |\n| Chebyshev | $$\\left\\{T_k-\\frac{2(k+2)}{k+3}T_{k+2}+\\frac{k+1}{k+3} T_{k+4}\\right\\}_{k=0}^{N-5}$$ | \n| Legendre | $$\\left\\{L_k-\\frac{2(2k+5)}{(2k+7)}L_{k+2}+\\frac{2k+3}{2k+7}L_{k+4}\\right\\}_{k=0}^{N-5}$$ |",
"CB = FunctionSpace(N, 'Chebyshev', bc=(0, 0, 0, 0))\nLB = FunctionSpace(N, 'Legendre', bc=(0, 0, 0, 0))",
"Multidimensional tensor product spaces\n<p style=\"margin-bottom:0.5cm;\">\n\n$$\n\\begin{align}\nL_0 &= \\{L_k(x)-L_{k+2}(x)\\}_{k=0}^{N-3} \\\\\nC_0 &= \\{T_k(x)-T_{k+2}(x)\\}_{k=0}^{N-3} \\\\\nL_1 &= \\{L_l(y)\\}_{l=0}^{N-1} \\\\\nLL(x, y) &= L_0(x) \\times L_1(y) \\\\\nCL(x, y) &= C_0(x) \\times L_1(y)\n\\end{align}\n$$",
"L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))\nC0 = FunctionSpace(N, 'Chebyshev', bc=(0, 0))\nL1 = FunctionSpace(N, 'Legendre')\nLL = TensorProductSpace(comm, (L0, L1)) # comm is MPI.COMM_WORLD\nCL = TensorProductSpace(comm, (C0, L1))\nV = VectorSpace(LL) # For vector valued functions\nf = Array(LL)\nprint(f.shape)",
"Challenge 1\nCreate a 3-dimensional tensor product space for a scalar $u$, with problem described by\n$$ \n\begin{align}\n\Omega &= [-1, 1]\times[-1, 1] \times [0, 2\pi] \\\nu(\pm 1, y, z) &= 0 \\\nu'(x, \pm 1, z) &= 0 \\\nu(x, y, 2\pi) &= u(x, y, 0)\n\end{align}\n$$",
"L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))\nL1 = FunctionSpace(N, 'Chebyshev', bc={'left': ('N', 0), 'right': ('N', 0)})\nF2 = FunctionSpace(N, 'Fourier', dtype='d')\nT = TensorProductSpace(comm, (L0, L1, F2))\nf = Array(T)\nprint(f.dtype)\nhelp(project)",
"Operators in shenfun\nAct on instances of a TestFunction, TrialFunction or Function\n\ndiv\ngrad\ncurl\nDx (partial derivative)\n\nAssembly\n\nproject\ninner",
"L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))\nL1 = FunctionSpace(N, 'Legendre')\nu = TrialFunction(L0)\nv = TestFunction(L0)\nuh = Function(L0)\ng = Array(L0)\ndu = grad(u) # vector valued expression\nh = div(du) # scalar valued expression\n\nA = inner(Dx(u, 0, 3), v)\nprint(A.diags().todense())",
"The shenfun Function represents the solution\nuh = Function(L0)\n$$\nu_h(x) = \sum_{k=0}^{N-1} \hat{u}_k \phi_{k}(x)\n$$\nThe function evaluated for all quadrature points, $\{x_j\}_{j=0}^{N-1}$, is an Array\nuj = Array(L0)\nThere is a (fast) backward transform for moving from Function to Array, and a forward transform to go the other way.",
"uj = Array(L0)\nuj = uh.backward(uj)\nuh = uj.forward(uh)",
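The backward/forward pair can be mimicked in plain NumPy for an orthogonal Chebyshev basis, using the discrete orthogonality of $T_k$ at the Chebyshev-Gauss points. This is only a sketch; shenfun's transforms are fast versions of the same idea.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

N = 8
k = np.arange(N)
x = np.cos(np.pi * (2 * k + 1) / (2 * N))       # Chebyshev-Gauss points
T = np.array([cheb.chebval(x, np.eye(N)[m])
              for m in range(N)])               # T[m, i] = T_m(x_i)

uhat = np.zeros(N)
uhat[3] = 1.0                                   # "Function": coefficients, here T_3
uj = cheb.chebval(x, uhat)                      # backward: coefficients -> mesh values

scale = np.where(k == 0, 1.0 / N, 2.0 / N)      # c_k/N from discrete orthogonality
uhat2 = scale * (T @ uj)                        # forward: mesh values -> coefficients
```

The round trip recovers the original coefficients exactly for any polynomial of degree below $N$.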
"Projections\nProject $g(\mathbf{x})$ to $V$:\nFind $u$ in $V$ such that:\n$$(u, v)_w = (Ig, v)_w \quad \text{for} \, v \in V $$\nwhere $Ig$ is $\{g(x_j)\}_{j=0}^{N-1}$, i.e., $g(x)$ evaluated on the quadrature mesh.\nWorks if $g(x)$ is \n\nan Array, which is exactly a Function evaluated on the mesh\nan expression involving a Function, like div(grad(uh))\na sympy expression, like sin(x)",
"dudx = project(Dx(uh, 0, 1), L1) # Compute du/dx\nwh = project(uj, L1)\nimport sympy as sp\nx, y = sp.symbols('x,y')\nws = project(sp.sin(4*x), L1)",
"Challenge 2\nCreate a shenfun Function that corresponds to the Chebyshev polynomial $T_2(x)=2x^2-1$\nCompute the derivative of this Function using project and Dx and verify that it equals $4x$.\nChallenge 3\nCreate a sympy function\n$$f(x, y) = \sin(4x)\cos(3y)$$\nProject $f(x, y)$ to an orthogonal 2D Chebyshev space. Then project $\nabla f$ to a vector tensor product space. Compare the results with those obtained by using sympy to compute the derivatives.\nImplementation details",
"F0 = FunctionSpace(N, 'F', dtype='D')\nF1 = FunctionSpace(N, 'F', dtype='D')\nF2 = FunctionSpace(N, 'F', dtype='d')\nFF = TensorProductSpace(comm, (F0, F1, F2))\nuh = Function(FF)\nua = Array(FF)\nua[:] = np.random.random(ua.shape)\nuh = ua.forward(uh)\ndu = div(grad(uh))\nV = VectorSpace(FF)\nvv = Function(V)\nc = curl(vv)\nch = project(c, V)",
"Implementation matches mathematics\n<p style=\"margin-bottom:1cm;\">\n\n$$\nA = (\\nabla u, \\nabla v)_w^N\n$$",
"A = inner(grad(u), grad(v))\n\ndict(A)\n\nprint(A.diags().todense())",
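The diagonality of this stiffness matrix can be checked without shenfun. For Shen's Legendre-Dirichlet basis $\phi_k = L_k - L_{k+2}$, the entries $a_{km} = \int_{-1}^1 \phi_k' \phi_m' \, dx$ form a diagonal matrix with $a_{kk} = 4k+6$. A sketch using Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Stiffness matrix a_{km} = int_{-1}^{1} phi_k' phi_m' dx for phi_k = L_k - L_{k+2}
n = 6
xq, wq = leg.leggauss(2 * n + 2)               # quadrature exact for these degrees

def dphi(k):
    c = np.zeros(k + 3)
    c[k], c[k + 2] = 1.0, -1.0                 # coefficients of L_k - L_{k+2}
    return leg.legval(xq, leg.legder(c))

A_np = np.array([[np.sum(dphi(k) * dphi(m) * wq) for m in range(n)]
                 for k in range(n)])           # diagonal with A_np[k, k] = 4k + 6
```

The diagonality follows from $\phi_k' = -(2k+3)L_{k+1}$ and the orthogonality of the Legendre polynomials.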
"A diagonal stiffness matrix!\nComplete Poisson solver with error verification in 1D",
"# Solve Poisson's equation\nimport matplotlib.pyplot as plt\nfrom sympy import symbols, sin, cos, lambdify\nfrom shenfun import * \n\n# Use sympy to compute manufactured solution\nx, y = symbols(\"x,y\")\nue = sin(6*np.pi*x)*(1-x**2) # `ue` is the manufactured solution\nfe = ue.diff(x, 2) # `fe` is Poisson's right hand side for `ue`\n\nSD = FunctionSpace(20, 'L', bc=(0, 0))\nu = TrialFunction(SD)\nv = TestFunction(SD)\n\nb = inner(v, Array(SD, buffer=fe)) # Array is initialized with `fe`\nA = inner(v, div(grad(u)))\n\nuh = Function(SD)\nuh = A.solve(b, uh) # Very fast solver due to Jie Shen\nprint(uh.backward()-Array(SD, buffer=ue))\n\nplt.plot(SD.mesh(), uh.backward())",
"2D - still closely matching mathematics",
"L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))\nF1 = FunctionSpace(N, 'Fourier', dtype='d')\nTP = TensorProductSpace(comm, (L0, F1))\nu = TrialFunction(TP)\nv = TestFunction(TP)\nA = inner(grad(u), grad(v))\n\nprint(A)",
"?\nA is a list of two TPMatrix objects???\nTPMatrix is a Tensor Product matrix\nA TPMatrix is the outer product of smaller matrices (2 in 2D, 3 in 3D etc). \nConsider the inner product:\n$$\n\begin{align}\n(\nabla u, \nabla v)_w &= \frac{1}{2\pi}\int_{-1}^{1}\int_{0}^{2\pi} \left(\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}\right) \cdot \left(\frac{\partial \overline{v}}{\partial x}, \frac{\partial \overline{v}}{\partial y}\right) {dxdy} \\\n(\nabla u, \nabla v)_w &= \int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial x}\frac{\partial \overline{v}}{\partial x} \frac{dxdy}{2\pi} + \int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial y}\frac{\partial \overline{v}}{\partial y} \frac{dxdy}{2\pi}\n\end{align}\n$$\nwhich, like A, is a sum of two terms. These two terms are the two TPMatrixes returned by inner above.\nNow each one of these two terms can be written as the outer product of two smaller matrices. \nConsider the first, inserting for test and trial functions\n$$\n\begin{align}\nv &= \phi_{kl} = (L_k(x)-L_{k+2}(x))\exp(\text{i}ly) \\\nu &= \phi_{mn}\n\end{align}\n$$\nThe first term becomes\n$$\n\small\n\begin{align}\n\int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial x}\frac{\partial \overline{v}}{\partial x} \frac{dxdy}{2\pi} &= \underbrace{\int_{-1}^1 \frac{\partial (L_m-L_{m+2})}{\partial x}\frac{\partial (L_k-L_{k+2})}{\partial x} {dx}}_{a_{km}} \underbrace{\int_{0}^{2\pi} \exp(iny) \exp(-ily) \frac{dy}{2\pi}}_{\delta_{ln}} \\\n &= a_{km} \delta_{ln}\n\end{align}\n$$\nand the second\n$$\n\small\n\begin{align}\n\int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial y}\frac{\partial \overline{v}}{\partial y} \frac{dxdy}{2\pi} &= \underbrace{\int_{-1}^1 (L_m-L_{m+2})(L_k-L_{k+2}) {dx}}_{b_{km}} \underbrace{\int_{0}^{2\pi} ln \exp(iny) \exp(-ily)\frac{dy}{2\pi}}_{l^2\delta_{ln}} \\\n &= l^2 b_{km} \delta_{ln}\n\end{align}\n$$\nAll in all:\n$$\n(\nabla u, \nabla v)_w = \left(a_{km} \delta_{ln} + l^2 b_{km} \delta_{ln}\right)\n$$",
"A = inner(grad(u), grad(v)) # <- list of two TPMatrices\nprint(A[0].mats)\nprint('Or as dense matrices:')\nfor mat in A[0].mats:\n print(mat.diags().todense())\n\nprint(A[1].mats[0].diags().todense())\n#print(A[1].scale) # l^2",
"3D Poisson (with MPI and Fourier x 2)",
"from sympy import symbols, sin, cos, lambdify\nfrom shenfun import *\n\n# Use sympy to compute manufactured solution\nx, y, z = symbols(\"x,y,z\")\nue = (cos(4*x) + sin(2*y) + sin(4*z))*(1-x**2)\nfe = ue.diff(x, 2) + ue.diff(y, 2) + ue.diff(z, 2)\n\nC0 = FunctionSpace(32, 'Chebyshev', bc=(0, 0))\nF1 = FunctionSpace(32, 'Fourier', dtype='D')\nF2 = FunctionSpace(32, 'Fourier', dtype='d')\nT = TensorProductSpace(comm, (C0, F1, F2))\nu = TrialFunction(T)\nv = TestFunction(T)\n\n# Assemble left and right hand\nf_hat = inner(v, Array(T, buffer=fe))\nA = inner(v, div(grad(u)))\n\n# Solve\nsolver = chebyshev.la.Helmholtz(*A) # Very fast solver due to Jie Shen\nu_hat = Function(T)\nu_hat = solver(f_hat, u_hat)\nassert np.linalg.norm(u_hat.backward()-Array(T, buffer=ue)) < 1e-12\nprint(u_hat.shape)",
"Contour plot of slice with constant y",
"X = T.local_mesh()\nua = u_hat.backward()\nplt.contourf(X[2][0, 0, :], X[0][:, 0, 0], ua[:, 2], 100)\nplt.colorbar()",
"Run with MPI distribution of arrays\nHere we would normally run from a bash shell\n<p style=\"margin-bottom:0.5cm;\">\n\n<div style=\"color:black\"> <strong>[bash shell] mpirun -np 4 python poisson3D.py </strong> </div>\n\nBut since we are in a Jupyter notebook, let's actually do this from Python in a live cell :-)",
"import subprocess\nsubprocess.check_output('mpirun -np 4 python poisson3D.py', shell=True)",
"Note that Fourier bases are especially attractive because of features easily handled with MPI:\n- diagonal matrices\n- fast transforms\n\nNonlinearities (convolution)\nAll treated with pseudo-spectral techniques\n$$\n\\begin{align}\n\\hat{w}_k &= \\widehat{u^2}_k\n\\end{align}\n$$\nThat is, transform Functions to real space Arrays, perform the nonlinear operation there and transform the nonlinear product back to spectral space (to a Function).\n3/2-rule or 2/3-rule is possible for dealiasing with Fourier. Not for the remaining bases.",
"uj = Array(SD)\n#uj[:] = np.random.random(uj.shape)\nuj = uh.backward(uj)\nwh = Function(SD)\nwh = SD.forward(uj*uj, wh)",
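For Fourier the pseudo-spectral product can be written out with plain FFTs (a sketch independent of shenfun): with $u = \cos x$, the product is $u^2 = \tfrac{1}{2} + \tfrac{1}{2}\cos 2x$, so $\hat{w}_0 = 1/2$ and $\hat{w}_{\pm 2} = 1/4$.

```python
import numpy as np

# Pseudo-spectral evaluation of w = u^2 with FFTs,
# for Fourier coefficients u = sum_k uhat_k exp(ikx)
N = 16
x = 2 * np.pi * np.arange(N) / N
uhat = np.fft.fft(np.cos(x)) / N       # forward transform of u = cos(x)
u = np.fft.ifft(N * uhat).real         # backward transform to the mesh
what = np.fft.fft(u * u) / N           # forward transform of the pointwise product
# what[0] = 0.5 and what[2] = what[-2] = 0.25, i.e. u^2 = 1/2 + cos(2x)/2
```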
"Challenge 4\nImplement nonlinear solver for\n<p style=\"margin-bottom:1cm;\">\n\n$$\n\begin{align}\n\nabla^2 u(x) + \alpha |\nabla u(x)|^2 &= -1, \quad x \in [-1, 1]\\\nu(\pm 1) = 0 &\text{ and } \alpha \in \mathbb{R^+}\n\end{align}\n$$\n\nNote that if $\alpha=0$ then obviously $u(x) = a_0+a_1x-0.5x^2$. This solution can be realized with basis function $T_0(x)-T_2(x)$, which equals $2(1-x^2)$, and which is the first basis function in the space\n\n$$\nV = \text{span}\{T_k-T_{k+2}\}_{k=0}^{N-3}\n$$\n\nIn other words, for $u$ in $V$ the solution is $\hat{u}_0 = 0.25$ and $\hat{u}_k = 0$ for all $k=1, 2, \ldots$.\n\n$$\n\begin{align}\nu(x) &= \sum_{k=0}^{N-3}\hat{u}_k (T_k-T_{k+2}) = 0.25(1-(2x^2-1)) \\\\\n &= 0.5(1-x^2)\n\end{align}\n$$\n\nThis is a good initial guess to find the nonlinear solution when $\alpha>0$! What is a good initial guess for Legendre? You need to iterate to find the solution when $\alpha > 0$!\n\n# Mixed tensor product spaces\n\nSolve several equations simultaneously\n\n- Coupled equations\n- Block matrices and vectors\n- Tensor spaces of vectors, like velocity $u \in [\mathbb{R}^3]^3$\n\n# Stokes equations\n### lid-driven cavity - coupled solver\n<p style=\"margin-bottom:0.25cm;\">\n\n$$\n\begin{align*}\n\nabla^2 \mathbf{u} - \nabla p &= \mathbf{f} \quad \text{in } \Omega, \quad \quad \Omega = [-1, 1]\times[-1, 1]\\\\ \n\nabla \cdot \mathbf{u} &= h \quad \text{in } \Omega \\\\ \n\int_{\Omega} p dx &= 0 \\\\\n\mathbf{u}(\pm 1, y) = \mathbf{u}(x, -1) = (0, 0) &\text{ and }\mathbf{u}(x, 1) = (1, 0) \text{ or } ((1-x^2)(1+x^2), 0)\n\end{align*}\n$$\n\nGiven appropriate spaces $V$ and $Q$ a variational form reads: find $(\mathbf{u}, p) \in V \times Q$ such that \n\n$$\n\begin{equation}\na((\mathbf{u}, p), (\mathbf{v}, q)) = L((\mathbf{v}, q)) \quad \forall (\mathbf{v}, q) \in V \times Q\n\end{equation}\n$$\n\nwhere bilinear and linear forms are, 
respectively\n\n$$\n\\begin{equation}\n a((\\mathbf{u}, p), (\\mathbf{v}, q)) = \\int_{\\Omega} (\\nabla^2 \\mathbf{u} - \\nabla p) \\cdot {\\mathbf{v}} \\, dx_w + \\int_{\\Omega} \\nabla \\cdot \\mathbf{u} \\, {q} \\, dx_w,\n\\end{equation}\n$$\n$$\n\\begin{equation} \n L((\\mathbf{v}, q)) = \\int_{\\Omega} \\mathbf{f} \\cdot {\\mathbf{v}}\\, dx_w + \\int_{\\Omega} h {q} \\, dx_w\n\\end{equation}\n$$\n\nUsing integration by parts for Legendre\n\n$$\n\\begin{equation}\n a((\\mathbf{u}, p), (\\mathbf{v}, q)) = -\\int_{\\Omega} \\nabla \\mathbf{u} \\cdot \\nabla{\\mathbf{v}} \\, dx_w + \\int_{\\Omega} \\nabla \\cdot \\mathbf{v} \\, {p} \\, dx_w + \\int_{\\Omega} \\nabla \\cdot \\mathbf{u} \\, {q} \\, dx_w,\n\\end{equation}\n$$\n\n# Implementation of spaces, basis functions",
"N = (40, 40)\nfamily = 'Legendre'\nD0X = FunctionSpace(N[0], 'Legendre', bc=(0, 0))\n#D1Y = FunctionSpace(N[1], 'Legendre', bc=(1, 0)) # Regular lid\nD1Y = FunctionSpace(N[1], 'Legendre', bc=(0, (1-x)**2*(1+x)**2)) # Regularized lid\nD0Y = FunctionSpace(N[1], 'Legendre', bc=(0, 0))\nPX = FunctionSpace(N[0], 'Legendre')\nPY = FunctionSpace(N[1], 'Legendre')\n\n# All required spaces\nV1 = TensorProductSpace(comm, (D0X, D1Y))\nV0 = TensorProductSpace(comm, (D0X, D0Y))\nQ = TensorProductSpace(comm, (PX, PY), modify_spaces_inplace=True)\nV = VectorSpace([V1, V0])\nW = CompositeSpace([V0, V0])\nVQ = CompositeSpace([V, Q])\n\nPX.slice = lambda: slice(0, PX.N-2)\nPY.slice = lambda: slice(0, PY.N-2)\n\n# All required test and trial functions\nup = TrialFunction(VQ)\nvq = TestFunction(VQ)\nu, p = up\nv, q = vq",
"Implementation Stokes - matrices and solve",
"# Assemble matrices\nA = inner(grad(v), -grad(u))\nG = inner(div(v), p)\nD = inner(q, div(u))\n\n# Create Block matrix\nsol = la.BlockMatrixSolver(A+G+D)\n\n# Functions to hold solution and rhs\nup_hat = Function(VQ).set_boundary_dofs()\nfh_hat = Function(VQ)\n\n# Solve Stokes problem. Note constraint for pressure\nup_hat = sol(fh_hat, u=up_hat, constraints=((2, 0, 0),)) \n\n# Move solution to Array in real space\nup = up_hat.backward()\nu_, p_ = up\n\nX = Q.local_mesh(True)\nplt.quiver(X[0], X[1], u_[0], u_[1])",
"Sparsity pattern of block matrix\n$$\nM =\n  \begin{bmatrix}\n  A[0]+A[1] & 0 & G[0] \\\n  0 & A[2]+A[3] & G[1] \\\n  D[0] & D[1] & 0\n  \end{bmatrix}\n$$",
"%matplotlib notebook\nplt.figure(figsize=(6,4))\nplt.spy(sol.mat.diags(), markersize=0.5)",
"Block matrix\n$$\nM =\n  \begin{bmatrix}\n  A[0]+A[1] & 0 & G[0] \\\n  0 & A[2]+A[3] & G[1] \\\n  D[0] & D[1] & 0\n  \end{bmatrix}\n$$\nwhere $D = G^T$ for the Legendre basis, making $M$ symmetric. For Chebyshev $M$ will not be symmetric.\nSolver through scipy.sparse.linalg\nFor Navier-Stokes of the lid-driven cavity, see https://github.com/spectralDNS/shenfun/blob/master/demo/NavierStokesDrivenCavity.py\n<center>\nThank you for your time😃\n<img src=\"https://rawcdn.githack.com/spectralDNS/spectralutilities/473129742f0b5f8d57e8c647809272c0ced99a45/movies/RB_200k_small.png\" style=\"float:left\" width=\"300\">\n<img src=\"https://cdn.jsdelivr.net/gh/spectralDNS/spectralutilities@master/movies/Re2000_5.gif\" width=\"400\" height=\"133\" style=\"float:right\" alt=\"Channel flow\"/>\n<p style=\"clear: both;\">\n\n<img src=\"https://rawcdn.githack.com/spectralDNS/spectralutilities/473129742f0b5f8d57e8c647809272c0ced99a45/movies/isotropic_cropped.gif\" style=\"float:left\" width=\"250\">\n\n<img src=\"https://cdn.jsdelivr.net/gh/spectralDNS/spectralutilities@master/movies/KHmovie_3.gif\" width=\"400\" height=\"200\" style=\"float:right\" alt=\"Kelvin Helmholtz instability\"/>\n\n<p style=\"clear: both;\">"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
karlnapf/shogun
|
doc/ipython-notebooks/computer_vision/Scene_classification.ipynb
|
bsd-3-clause
|
[
"Visual Categorization with Bags of Keypoints in Shogun\nBy Abhijeet Kislay (GitHub ID: <a href='https://github.com/kislayabhi'>kislayabhi</a>) as a GSoC'14 project under Kevin Hughes (GitHub ID: <a href='https://github.com/pickle27'>pickle27</a>)\nThis notebook is about performing Object Categorization using <a href=\"http://en.wikipedia.org/wiki/Scale-invariant_feature_transform\">SIFT</a> descriptors of keypoints as features, and <a href=\"http://en.wikipedia.org/wiki/Support_vector_machine\">SVM</a>s to predict the category of the object present in the image. Shogun's <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html\">K-Means clustering </a> is employed for generating the <a href=\"http://en.wikipedia.org/wiki/Bag-of-words_model_in_computer_vision\">bag of keypoints</a> and its <a href=\"http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm\">k-nearest neighbours</a> <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKNN.html\">module</a> is extensively used to construct the feature vectors. \nBackground\nThis notebook presents a bag of keypoints approach to visual categorization. A bag of keypoints corresponds to a histogram of the number of occurrences of particular image patterns in a given image. The main advantages of the method are its simplicity, its computational efficiency and its invariance to affine transformations, as well as occlusion, lighting and intra-class variations.\nStrategy\n1. Compute (SIFT) descriptors at keypoints in all the template images and pool all of them together\nSIFT\nSIFT extracts keypoints and computes their descriptors. 
It requires the following steps to be done:\n* Scale-space Extrema Detection: Difference of Gaussian (DOG) are used to search for local extrema over scale and space.\n* Keypoint Localization: Once potential keypoints are found, we refine them by eliminating low-contrast keypoints and edge keypoints.\n* Orientation Assignment: Now an orientation is assigned to each keypoint to achieve invariance to image rotation.\n* Keypoint Descriptor: Now a keypoint descriptor is created. A total of 128 elements are available for each keypoint.\nTo get more details about SIFT in OpenCV, do read OpenCV python documentation <a href=\"http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html\">here</a>.\nOpenCV has a nice API for using SIFT. Let's see what we are looking at:",
"#import Opencv library\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\ntry:\n import cv2\nexcept ImportError:\n print \"You must have OpenCV installed\"\n exit(1)\n\n#check the OpenCV version\ntry:\n v=cv2.__version__\n assert (tuple(map(int,v.split(\".\")))>(2,4,2))\nexcept (AssertionError, ValueError):\n print \"Install newer version of OpenCV than 2.4.2, i.e from 2.4.3\"\n exit(1)\n \nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom shogun import *\nimport shogun as sg\n\n# get the list of all jpg images from the path provided\nimport os\ndef get_imlist(path):\n return [[os.path.join(path,f) for f in os.listdir(path) if (f.endswith('.jpg') or f.endswith('.png'))]]\n\n#Use the following function when reading an image through OpenCV and displaying through plt.\ndef showfig(image, ucmap):\n #There is a difference in pixel ordering in OpenCV and Matplotlib.\n #OpenCV follows BGR order, while matplotlib follows RGB order.\n if len(image.shape)==3 :\n b,g,r = cv2.split(image) # get b,g,r\n image = cv2.merge([r,g,b]) # switch it to rgb\n imgplot=plt.imshow(image, ucmap)\n imgplot.axes.get_xaxis().set_visible(False)\n imgplot.axes.get_yaxis().set_visible(False)",
"We try to construct the vocabulary from a set of template images. It is a set of three general images belonging to the categories of car, plane and train. \nOpenCV also provides the cv2.drawKeypoints() function which draws small circles at the locations of keypoints. If you pass the flag cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS to it, it will draw a circle with the size of the keypoint and even show its orientation. See the example below.",
"plt.rcParams['figure.figsize'] = 17, 4\nfilenames=get_imlist(os.path.join(SHOGUN_DATA_DIR, 'SIFT/template/'))\nfilenames=np.array(filenames)\n\n# for keeping all the descriptors from the template images\ndescriptor_mat=[]\n\n# initialise OpenCV's SIFT\nsift=cv2.SIFT()\nfig = plt.figure()\nplt.title('SIFT detected Keypoints')\nplt.xticks(())\nplt.yticks(())\nfor image_no in xrange(3):\n img=cv2.imread(filenames[0][image_no])\n img=cv2.resize(img, (500, 300), interpolation=cv2.INTER_AREA)\n gray=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n gray=cv2.equalizeHist(gray)\n \n #detect the SIFT keypoints and the descriptors.\n kp, des=sift.detectAndCompute(gray,None)\n # store the descriptors.\n descriptor_mat.append(des)\n # here we draw the keypoints\n img=cv2.drawKeypoints(img, kp, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)\n fig.add_subplot(1, 3, image_no+1)\n showfig(img, None)",
"2. Group similar descriptors into an arbitrary number of clusters.\nWe take all the descriptors that we got from the three images above and find similarity in between them.\nHere, similarity is decided by <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CEuclideanDistance.html\">Euclidean distance</a> between the 128-element SIFT descriptors. Similar descriptors are clustered into k number of groups. This can be done using Shogun's <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html\">KMeans class</a>. These clusters are called bags of keypoints or visual words and they collectively represent the vocabulary of the program. Each cluster has a cluster center, which can be thought of as the representative descriptor of all the descriptors belonging to that cluster. These cluster centers can be found using the <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html#a5d8a09aeadada018747786a5470d3653\">get_cluster_centers()</a> method.\nTo perform clustering into k groups, we define the get_similar_descriptors() function below.",
"def get_similar_descriptors(k, descriptor_mat):\n\n descriptor_mat=np.double(np.vstack(descriptor_mat))\n descriptor_mat=descriptor_mat.T\n\n #initialize KMeans in Shogun \n sg_descriptor_mat_features=features(descriptor_mat)\n\n #EuclideanDistance is used for the distance measurement.\n distance = sg.distance('EuclideanDistance')\n distance.init(sg_descriptor_mat_features, sg_descriptor_mat_features)\n\n #group the descriptors into k clusters.\n kmeans=KMeans(k, distance)\n kmeans.train()\n\n #get the cluster centers.\n cluster_centers=(kmeans.get_cluster_centers())\n \n return cluster_centers\n\ncluster_centers=get_similar_descriptors(100, descriptor_mat)",
"3. Now, compute training data for the SVM classifiers.\nSince we have already constructed the vocabulary, our next step is to generate viable feature vectors which can be used to represent each training image so that we can use them for multiclass classification later in the code. \n\n\nWe begin by computing SIFT descriptors for each training image.\n\n\nFor each training image, associate each of its descriptors with one of the clusters in the vocabulary. The simplest way to do this is by using the k-Nearest Neighbour approach. This can be done using Shogun's <a href=\"http://shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CKNN.html\">KNN class</a>. The <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CEuclideanDistance.html\">Euclidean distance</a> measure is used here for finding out the neighbours.\n\n\nMake a histogram from this association. This histogram has as many bins as there are clusters in the vocabulary. Each bin counts how many descriptors in the training image are associated with the cluster corresponding to that bin. Intuitively, this histogram describes the image in the visual words of the vocabulary, and is called the bag of visual words descriptor of the image. \n\n\nIn short, we approximate each training image by a k-element vector. This can be utilized to train any multiclass classifier.\nFirst, let us see a few training images",
"# name of all the folders together\nfolders=['cars','planes','trains']\ntraining_sample=[]\nfor folder in folders:\n #get all the training images from a particular class \n filenames=get_imlist(os.path.join(SHOGUN_DATA_DIR, 'SIFT/%s'%folder))\n for i in xrange(10):\n temp=cv2.imread(filenames[0][i])\n training_sample.append(temp)\n\nplt.rcParams['figure.figsize']=21,16\nfig=plt.figure()\nplt.xticks(())\nplt.yticks(())\nplt.title('10 training images for each class')\nfor image_no in xrange(30):\n fig.add_subplot(6,5, image_no+1)\n showfig(training_sample[image_no], None)",
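Before the real pipeline, the bag-of-visual-words histogram described in step 3 can be sketched with plain NumPy on hypothetical toy data (random "descriptors" and "visual words" stand in for real SIFT output and KMeans cluster centers):

```python
import numpy as np

# Toy bag-of-visual-words histogram (hypothetical data, not real SIFT descriptors):
# assign each descriptor to its nearest cluster center and count per cluster.
rng = np.random.RandomState(0)
cluster_centers_toy = rng.rand(5, 128)   # 5 visual words, 128-dim like SIFT
descriptors_toy = rng.rand(20, 128)      # 20 descriptors from one image

# squared Euclidean distance of every descriptor to every center
d2 = ((descriptors_toy[:, None, :] - cluster_centers_toy[None, :, :]) ** 2).sum(axis=2)
assignment = d2.argmin(axis=1)           # nearest visual word per descriptor
histogram = np.bincount(assignment, minlength=5)
```

The resulting 5-bin histogram is the image's bag-of-words descriptor; the Shogun code below does the same with KMeans centers and KNN assignment.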
"We here define get_sift_training() function to get all the SIFT descriptors present in all the training images.",
"def get_sift_training():\n \n # name of all the folders together\n folders=['cars','planes','trains']\n \n folder_number=-1\n des_training=[]\n \n for folder in folders:\n folder_number+=1\n\n #get all the training images from a particular class \n filenames=get_imlist(os.path.join(SHOGUN_DATA_DIR, 'SIFT/%s'%folder))\n filenames=np.array(filenames)\n \n des_per_folder=[]\n for image_name in filenames[0]:\n img=cv2.imread(image_name)\n\n # carry out normal preprocessing routines\n gray= cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n gray=cv2.resize(gray, (500, 300), interpolation=cv2.INTER_AREA)\n gray=cv2.equalizeHist(gray)\n\n #get all the SIFT descriptors for an image\n _, des=sift.detectAndCompute(gray, None)\n des_per_folder.append(des)\n \n des_training.append(des_per_folder)\n return des_training\n\ndescriptor_training=get_sift_training()",
"We define the compute_training_data() function which returns the training data required for multiclass classification in the later stages.\nInputs are:\n\n\nk=number of clusters\n\n\ncluster_centers=descriptors that are approximate form of all descriptors belonging to a particular cluster\n\n\ndescriptors=SIFT descriptors of the training images that are obtained from the above function.",
"def compute_training_data(k, cluster_centers, descriptors):\n \n # a list to hold histograms of all the training images\n all_histograms=[]\n # labels for all of the test images\n final_labels=[]\n # to hold the cluster number a descriptor belong to\n cluster_labels=[]\n\n #initialize a KNN in Shogun\n dist = sg.distance('EuclideanDistance')\n labels=MulticlassLabels(np.double(range(k)))\n knn=KNN(1, dist, labels)\n\n #Target descriptors are the cluster_centers that we got earlier. \n #All the descriptors of an image are matched against these for \n #calculating the histogram.\n sg_cluster_centers=features(cluster_centers)\n knn.train(sg_cluster_centers)\n\n # name of all the folders together\n folders=['cars','planes','trains']\n folder_number=-1\n\n for folder in folders:\n folder_number+=1\n\n #get all the training images from a particular class \n filenames=get_imlist(os.path.join(SHOGUN_DATA_DIR, 'SIFT/%s'%folder))\n\n for image_name in xrange(len(filenames[0])):\n \n des=descriptors[folder_number][image_name]\n \n #Shogun works in a way in which columns are samples and rows are features.\n #Hence we need to transpose the observation matrix\n des=(np.double(des)).T\n sg_des=features(np.array(des))\n\n #find all the labels of cluster_centers that are nearest to the descriptors present in the current image. \n cluster_labels=(knn.apply_multiclass(sg_des)).get_real_vector('labels')\n\n histogram_per_image=[]\n for i in xrange(k):\n #find the histogram for the current image\n histogram_per_image.append(sum(cluster_labels==i))\n\n all_histograms.append(np.array(histogram_per_image))\n final_labels.append(folder_number)\n\n # we now have the training features(all_histograms) and labels(final_labels) \n all_histograms=np.array(all_histograms)\n final_labels=np.array(final_labels)\n return all_histograms, final_labels, knn\n\nall_histograms, final_labels, knn=compute_training_data(100, cluster_centers, descriptor_training)",
"We have to solve a multiclass classification problem here. In Shogun these are implemented in: <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMulticlassMachine.html\">MulticlassMachine</a> \n4. We train a one-vs-all SVM for each category of object using the training data:\nThe following function returns a trained <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMNPSVM.html\">GMNPSVM</a>, which is a true Multiclass SVM in Shogun employing the one-vs-rest approach.\nInputs are:\n* all_histograms=Can be thought of as the feature vectors for all images on which the SVM has to be trained.\n* final_labels=The labels corresponding to the above-mentioned feature vectors",
"def train_svm(all_histograms, final_labels):\n \n # we will use GMNPSVM class of Shogun for one vs rest multiclass classification\n obs_matrix=np.double(all_histograms.T)\n sg_features=features(obs_matrix)\n sg_labels=MulticlassLabels(np.double(final_labels))\n kernel=LinearKernel(sg_features, sg_features)\n C=1\n gsvm=GMNPSVM(C, kernel, sg_labels)\n _=gsvm.train(sg_features)\n return gsvm\n\ngsvm=train_svm(all_histograms, final_labels)",
"5. Now, classify using the trained SVM:\nFirst, let us see all the test images",
"# Lets see the testing images\ntesting_sample=[]\n#get all the testing images \nfilenames=get_imlist(os.path.join(SHOGUN_DATA_DIR, 'SIFT/test_image/'))\nfor i in xrange(len(filenames[0])):\n temp=cv2.imread(filenames[0][i])\n testing_sample.append(temp)\n\nplt.rcParams['figure.figsize']=20,8\nfig=plt.figure()\nplt.xticks(())\nplt.yticks(())\nplt.title('Test Images')\nfor image_no in xrange(len(filenames[0])):\n fig.add_subplot(3,8, image_no+1)\n showfig(testing_sample[image_no], None)",
"We define the function get_sift_testing() which returns all the descriptors present in the testing images.",
"def get_sift_testing():\n filenames=get_imlist(os.path.join(SHOGUN_DATA_DIR, 'SIFT/test_image/'))\n filenames=np.array(filenames)\n des_testing=[]\n for image_name in filenames[0]:\n result=[]\n #read the test image\n img=cv2.imread(image_name)\n\n #follow the normal preprocessing routines \n gray= cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n gray=cv2.resize(gray, (500, 300), interpolation=cv2.INTER_AREA)\n gray=cv2.equalizeHist(gray)\n\n #compute all the descriptors of the test images\n _, des=sift.detectAndCompute(gray, None)\n des_testing.append(des)\n return des_testing\n\ndescriptor_testing=get_sift_testing()",
"In the following classify_svm() function, we use the trained GMNPSVM for classifying the test images. It returns the predictions from our trained SVM.",
"def classify_svm(k, knn, des_testing):\n \n # a list to hold histograms of all the test images\n all_histograms=[]\n filenames=get_imlist(os.path.join(SHOGUN_DATA_DIR, 'SIFT/test_image/'))\n \n for image_name in xrange(len(filenames[0])):\n \n result=[]\n des=des_testing[image_name]\n \n #Shogun works in a way in which columns are samples and rows are features.\n #Hence we need to transpose the observation matrix\n des=(np.double(des)).T\n sg_des=features(np.array(des))\n\n #cluster all the above found descriptors into the vocabulary\n cluster_labels=(knn.apply_multiclass(sg_des)).get_real_vector('labels')\n\n #get the histogram for the current test image\n histogram=[]\n for i in xrange(k):\n histogram.append(sum(cluster_labels==i))\n \n all_histograms.append(np.array(histogram))\n\n all_histograms=np.double(np.array(all_histograms))\n all_histograms=all_histograms.T\n sg_testfeatures=features(all_histograms)\n return gsvm.apply(sg_testfeatures).get_labels()\n\npredicted=classify_svm(100, knn, descriptor_testing)\nprint \"the predicted labels for k=100 are as follows: \"\nprint predicted",
"6.\nSelecting the classifier that gives the best overall classification accuracy with respect to the number of clusters (k):\nWe define the function create_conf_matrix() which creates the confusion matrix. \nInputs are:\n* expected=The actual labels which the test images belong to\n* predicted=The output of our SVM\n* n_classes=number of classes (here 3, i.e. cars, trains and planes)",
"def create_conf_matrix(expected, predicted, n_classes):\n m = [[0] * n_classes for i in range(n_classes)]\n for pred, exp in zip(predicted, expected):\n m[exp][int(pred)] += 1\n return np.array(m)",
"Form the expected list. \n\n0 represents cars\n1 represents planes\n2 represents trains",
"import re\nfilenames=get_imlist(os.path.join(SHOGUN_DATA_DIR, 'SIFT/test_image/'))\n# get the formation of the files, later to be used for calculating the confusion matrix\nformation=([int(''.join(x for x in filename if x.isdigit())) for filename in filenames[0]])\n \n# associate them with the correct labels by making a dictionary\nkeys=range(len(filenames[0]))\n\nvalues=[0,1,0,2,1,0,1,0,0,0,1,2,2,2,2,1,1,1,1,1]\nlabel_dict=dict(zip(keys, values))\n\n# the following list holds the actual labels\nexpected=[]\nfor i in formation:\n expected.append(label_dict[i-1])",
"We extend all the steps that we did for k=100 to few other values of k and check their accuracies with respect to the expected labels. Alongside, we also draw their respective confusion matrix.",
"best_k=1\nmax_accuracy=0\n\nfor k in xrange(1,5):\n k=100*k\n \n # step 2\n cluster_centers=get_similar_descriptors(k, descriptor_mat)\n \n # step 3\n all_histograms, final_labels, knn=compute_training_data(k, cluster_centers, descriptor_training)\n \n # step 4\n gsvm=train_svm(all_histograms, final_labels)\n \n # step 5\n predicted=classify_svm(k, knn, descriptor_testing)\n accuracy=sum(predicted==expected)*100/float(len(expected))\n print \"for a k=%d, accuracy is %d%%\"%(k, accuracy)\n \n #step 6\n m=create_conf_matrix(expected, predicted, 3)\n\n if accuracy>max_accuracy:\n best_k=k\n max_accuracy=accuracy\n best_prediction=predicted\n \n print \"confusion matrix for k=%d\"%k\n print m",
"From all the above k's we choose the one which has the best accuracy. Number of k's can be extended further to enhance the overall accuracy.\nTest images along with their predicted labels are shown below for the most optimum value of k:",
"plt.rcParams['figure.figsize']=20,8\nfig=plt.figure()\nfor image_no in xrange(len(filenames[0])):\n fig.add_subplot(3,8, image_no+1)\n plt.title('pred. class: '+folders[int(best_prediction[image_no])])\n showfig(testing_sample[image_no], None)",
"Conclusion\nHere we have presented a simple but novel approach to generic visual categorization using feature vectors constructed from clustered descriptors of image patches.\nReferences:\n\n\nVisual Categorization with Bags of Keypoints by Gabriella Csurka, Christopher R. Dance, Lixin Fan, Jutta Willamowski, Cédric Bray\n\n\nDistinctive Image Features from Scale-Invariant Keypoints by David G. Lowe\n\n\nPractical OpenCV by Samarth Brahmbhatt, University of Pennsylvania"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
roebius/deeplearning1_keras2
|
nbs/statefarm.ipynb
|
apache-2.0
|
[
"Enter State Farm",
"from __future__ import division, print_function\n%matplotlib inline\n#path = \"data/state/\"\npath = \"data/state/sample/\"\nfrom importlib import reload # Python 3\nimport utils; reload(utils)\nfrom utils import *\nfrom IPython.display import FileLink\n\nbatch_size=64",
"Setup batches",
"batches = get_batches(path+'train', batch_size=batch_size)\nval_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)\nsteps_per_epoch = int(np.ceil(batches.samples/batch_size))\nvalidation_steps = int(np.ceil(val_batches.samples/(batch_size*2)))\n\n(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)",
"Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)",
"trn = get_data(path+'train')\nval = get_data(path+'valid')\n\nsave_array(path+'results/val.dat', val)\nsave_array(path+'results/trn.dat', trn)\n\nval = load_array(path+'results/val.dat')\ntrn = load_array(path+'results/trn.dat')",
"Re-run sample experiments on full dataset\nWe should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models.\nSingle conv layer",
"def conv1(batches):\n model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Conv2D(32,(3,3), activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Conv2D(64,(3,3), activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Flatten(),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\n\n model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])\n model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches, \n validation_steps=validation_steps)\n model.optimizer.lr = 0.001\n model.fit_generator(batches, steps_per_epoch, epochs=4, validation_data=val_batches, \n validation_steps=validation_steps)\n return model\n\nmodel = conv1(batches)",
"Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.\nData augmentation",
"gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = conv1(batches)\n\nmodel.optimizer.lr = 0.0001\nmodel.fit_generator(batches, steps_per_epoch, epochs=15, validation_data=val_batches, \n validation_steps=validation_steps)",
"I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation.\nFour conv/pooling pairs + dropout\nUnfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.",
"gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)\n\nmodel = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Conv2D(32,(3,3), activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Conv2D(64,(3,3), activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Conv2D(128,(3,3), activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Flatten(),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(10, activation='softmax')\n ])\n\nmodel.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches, \n validation_steps=validation_steps)\n\nmodel.optimizer.lr=0.001\n\nmodel.fit_generator(batches, steps_per_epoch, epochs=10, validation_data=val_batches, \n validation_steps=validation_steps)\n\nmodel.optimizer.lr=0.00001\n\nmodel.fit_generator(batches, steps_per_epoch, epochs=10, validation_data=val_batches, \n validation_steps=validation_steps)",
"This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however...\nImagenet conv features\nSince we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)",
"vgg = Vgg16()\nmodel=vgg.model\nlast_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]\nconv_layers = model.layers[:last_conv_idx+1]\n\nconv_model = Sequential(conv_layers)\n\n(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)\n\ntest_batches = get_batches(path+'test', batch_size=batch_size*2, shuffle=False)\n\nconv_feat = conv_model.predict_generator(batches, int(np.ceil(batches.samples/batch_size)))\nconv_val_feat = conv_model.predict_generator(val_batches, int(np.ceil(val_batches.samples/(batch_size*2))))\nconv_test_feat = conv_model.predict_generator(test_batches, int(np.ceil(test_batches.samples/(batch_size*2))))\n\nsave_array(path+'results/conv_val_feat.dat', conv_val_feat)\nsave_array(path+'results/conv_test_feat.dat', conv_test_feat)\nsave_array(path+'results/conv_feat.dat', conv_feat)\n\nconv_feat = load_array(path+'results/conv_feat.dat')\nconv_val_feat = load_array(path+'results/conv_val_feat.dat')\nconv_val_feat.shape",
"Batchnorm dense layers on pretrained conv layers\nSince we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers.",
"def get_bn_layers(p):\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dropout(p/2),\n Dense(128, activation='relu'),\n BatchNormalization(),\n Dropout(p/2),\n Dense(128, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(10, activation='softmax')\n ]\n\np=0.8\n\nbn_model = Sequential(get_bn_layers(p))\nbn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])\n\nbn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=1, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.optimizer.lr=0.01\n\nbn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=2, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.save_weights(path+'models/conv8.h5')",
"Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.\nPre-computed data augmentation + dropout\nWe'll use our usual data augmentation parameters:",
"gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nda_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)",
"We use those to create a dataset of convolutional features 5x bigger than the training set.",
"da_conv_feat = conv_model.predict_generator(da_batches, 5*int(np.ceil((da_batches.samples)/(batch_size))), workers=3)\n\nsave_array(path+'results/da_conv_feat2.dat', da_conv_feat)\n\nda_conv_feat = load_array(path+'results/da_conv_feat2.dat')",
"Let's include the real training data as well in its non-augmented form.",
"da_conv_feat = np.concatenate([da_conv_feat, conv_feat])",
"Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.",
"da_trn_labels = np.concatenate([trn_labels]*6)",
"Based on some experiments the previous model works well, with bigger dense layers.",
"def get_bn_da_layers(p):\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dropout(p),\n Dense(256, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(256, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(10, activation='softmax')\n ]\n\np=0.8\n\nbn_model = Sequential(get_bn_da_layers(p))\nbn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])",
"Now we can train the model as usual, with pre-computed augmented data.",
"bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=1, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.optimizer.lr=0.01\n\nbn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=4, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.optimizer.lr=0.0001\n\nbn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=4, \n validation_data=(conv_val_feat, val_labels))",
"Looks good - let's save those weights.",
"bn_model.save_weights(path+'models/da_conv8_1.h5')",
"Pseudo labeling\nWe're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set.\nTo do this, we simply calculate the predictions of our model...",
"val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)",
"...concatenate them with our training labels...",
"comb_pseudo = np.concatenate([da_trn_labels, val_pseudo])\n\ncomb_feat = np.concatenate([da_conv_feat, conv_val_feat])",
"...and fine-tune our model using that data.",
"bn_model.load_weights(path+'models/da_conv8_1.h5')\n\nbn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=1, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=4, \n validation_data=(conv_val_feat, val_labels))\n\nbn_model.optimizer.lr=0.00001\n\nbn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=4, \n validation_data=(conv_val_feat, val_labels))",
"That's a distinct improvement - even although the validation set isn't very big. This looks encouraging for when we try this on the test set.",
"bn_model.save_weights(path+'models/bn-ps8.h5')",
"Submit\nWe'll find a good clipping amount using the validation set, prior to submitting.",
"def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)\n\nval_preds = bn_model.predict(conv_val_feat, batch_size=batch_size*2)\n\nnp.mean(keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval())\n\nconv_test_feat = load_array(path+'results/conv_test_feat.dat')\n\npreds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)\n\nsubm = do_clip(preds,0.93)\n\nsubm_name = path+'results/subm.gz'\n\nclasses = sorted(batches.class_indices, key=batches.class_indices.get)\n\nsubmission = pd.DataFrame(subm, columns=classes)\nsubmission.insert(0, 'img', [a[4:] for a in test_filenames])\nsubmission.head()\n\nsubmission.to_csv(subm_name, index=False, compression='gzip')\n\nFileLink(subm_name)",
"This gets 0.534 on the leaderboard.\nThe \"things that didn't really work\" section\nYou can safely ignore everything from here on, because they didn't really help.\nFinetune some conv layers too",
"#for l in get_bn_layers(p): conv_model.add(l) # this choice would give a weight shape error\nfor l in get_bn_da_layers(p): conv_model.add(l) # ... so probably this is the right one\n\nfor l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]):\n l2.set_weights(l1.get_weights())\n\nfor l in conv_model.layers: l.trainable =False\n\nfor l in conv_model.layers[last_conv_idx+1:]: l.trainable =True\n\ncomb = np.concatenate([trn, val])\n\n # not knowing what the experiment was about, added this to avoid a shape match error with comb using gen_t.flow\ncomb_pseudo = np.concatenate([trn_labels, val_pseudo])\n\ngen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04, \n shear_range=0.03, channel_shift_range=10, width_shift_range=0.08)\n\nbatches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size)\n\nval_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)\n\nconv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy'])\n\nconv_model.fit_generator(batches, steps_per_epoch, epochs=1, validation_data=val_batches, \n validation_steps=validation_steps)\n\nconv_model.optimizer.lr = 0.0001\n\nconv_model.fit_generator(batches, steps_per_epoch, epochs=3, validation_data=val_batches, \n validation_steps=validation_steps)\n\nfor l in conv_model.layers[16:]: l.trainable =True\n\n#- added compile instruction in order to avoid Keras 2.1 warning message\nconv_model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n\nconv_model.optimizer.lr = 0.00001\n\nconv_model.fit_generator(batches, steps_per_epoch, epochs=8, validation_data=val_batches, \n validation_steps=validation_steps)\n\nconv_model.save_weights(path+'models/conv8_ps.h5')\n\n#conv_model.load_weights(path+'models/conv8_da.h5') # conv8_da.h5 was not saved in this notebook\n\nval_pseudo = conv_model.predict(val, batch_size=batch_size*2)\n\nsave_array(path+'models/pseudo8_da.dat', val_pseudo)",
"Ensembling",
"drivers_ds = pd.read_csv(path+'driver_imgs_list.csv')\ndrivers_ds.head()\n\nimg2driver = drivers_ds.set_index('img')['subject'].to_dict()\n\ndriver2imgs = {k: g[\"img\"].tolist() \n for k,g in drivers_ds[['subject', 'img']].groupby(\"subject\")}\n\n# It seems this function is not used in this notebook\ndef get_idx(driver_list):\n return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list]\n\n# drivers = driver2imgs.keys() # Python 2\ndrivers = list(driver2imgs) # Python 3\n\nrnd_drivers = np.random.permutation(drivers)\n\nds1 = rnd_drivers[:len(rnd_drivers)//2]\nds2 = rnd_drivers[len(rnd_drivers)//2:]\n\n# The following cells seem to require some preparation code not included in this notebook\nmodels=[fit_conv([d]) for d in drivers]\nmodels=[m for m in models if m is not None]\n\nall_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models])\navg_preds = all_preds.mean(axis=0)\navg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1)\n\nkeras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()\n\nkeras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jorisvandenbossche/DS-python-data-analysis
|
notebooks/pandas_06_data_cleaning.ipynb
|
bsd-3-clause
|
[
"<p><font size=\"6\"><b> Pandas: Methods for data cleaning</b></font></p>\n\n\n© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons",
"import pandas as pd\n\nimport numpy as np\nimport matplotlib.pyplot as plt",
"A number of Pandas functions are useful when cleaning up raw data and converting it to a data set ready for analysis and visualisation. In this notebook a selection of methods are introduced:\n\ndrop\nrename\nreplace\nexplode\ndrop_duplicates/duplicates\nastype\nunique\n.str.-methods\n\nNote: Working with missing values is tackled in a dedicated notebook pandas_07_missing_values.\nWe showcase using a dirty example data:",
"countries = pd.DataFrame({'county name': ['Belgium', 'Flance', 'Germany', 'Netherlands', ['United Kingdom', 'Germany']],\n 'population': [11.3, 64.3, 81.3, 16.9, 64.9],\n 'area': [30510, 671308, 357050, 41526, [244820, np.nan]],\n 'capital': ['Brussels', ' Paris ', 'Barlin', 'Amsterdam', 'London']})\ncountries",
"drop\nDrop columns (or rows) by name (this can also be achieved by selecting the columns you want to keep, but if you only want to drop a few columns, drop() is easier). Specify a list of column names to drop:",
"countries.drop(columns=[\"area\", \"capital\"])",
"rename\nUse a dict with the dictionary keys the old column/index name and the dictionary values the new column/index name:",
"countries = countries.rename(columns={\"county name\": \"country\"})",
"replace\nReplace values in a column. Different inputs can be used. The most basic one is providing a value to_replace and a new value:",
"countries[\"capital\"].replace(\"Barlin\", \"Berlin\")",
"Similar to rename, one can use a dict with the dictionary keys the old data and the dictionary values the new data:",
"countries = countries.replace({\"Barlin\": \"Berlin\", \"Flance\": \"France\"})\ncountries",
"explode\nexplode multiple values in a cell to individual records (rows). Not regularly required, but very powerful when in case:",
"countries_exploded = countries.explode([\"country\", \"area\"])\ncountries_exploded",
"drop_duplicates\nChecking duplicate values in a data set with duplicated or remove duplicate values with drop_duplicates:",
"countries_exploded.duplicated(subset=[\"country\"])\n\ncountries_exploded = countries_exploded.drop_duplicates(subset=[\"country\"], keep=\"first\").copy() # More on this copy later\ncountries_exploded",
"astype\nPandas read functions might not always use the most appropriate data type for each of the columns. Converting them to a different data type can also improve the memory usage of the DataFrame (e.g. int16 versus float64). The astype method supports data type conversion to both Numpy data types as well as Pandas specific data types.",
"countries_exploded.dtypes\n\ncountries_exploded[\"area\"] = countries_exploded[\"area\"].astype(int)\n\ncountries_exploded.dtypes",
"unique\nWorking with larger data sets, knowing which values are in a column:",
"countries_exploded[\"capital\"].unique()",
".str.-methods\nNoticed the redundant spaces in the capital column? \nWhereas replace could work for this specific case (it also accepts regular expressions):\npython\ncountries_exploded[\"capital\"].replace(r\"^\\s+|\\s+\", \"\", regex=True)\nPandas provides a set of convenient string methods to handle these (element-wise) cleaning of strings, each of them accessible with the .str. accessor (e.g. str.split, str.startswith, removeprefix):",
"countries_exploded[\"capital\"] = countries_exploded[\"capital\"].str.strip()\ncountries_exploded[\"capital\"].unique()",
"<div class=\"alert alert-info\">\n\n__INFO__\n\nFor an overview of the available `.str.`-methods, see https://pandas.pydata.org/docs/user_guide/text.html#method-summary\n\n</div>\n\nExercises: application on a real dataset\nFor these exercises, we will use data of road casualties in Belgium in 2020 made available by statbel. The metadata is available as well as a reference. The data contains the number of victims due to road causalities:\n\nMS_VCT: Number of victims\nMS_VIC_OK: Number of uninjured\nMS_SLY_INJ: Number of slightly injured\nMS_SERLY_INJ: Number of severely injured\nMS_MORY_INJ: Number of mortally injured\nMS_DEAD: Number of dead\nMS_DEAD_30_DAYS: Number of dead after 30 days\n\nTogether with metadata about date and time, the victim and road type, light conditions, location,...\nPandas can load the data directly from the zip-file :",
"casualties_raw = pd.read_csv(\"./data/TF_ACCIDENTS_VICTIMS_2020.zip\", \n compression='zip', \n sep=\"|\", \n low_memory=False)\ncasualties_raw.head()",
"<div class=\"alert alert-info\">\n\n__INTERMEZZO - display options__\n\nPandas provides a number of configurable settings to display data, for example `display.max_rows`, `display.precision` and `display.max_columns`. When exploring a new data set, adjusting the `display.max_columns` setting is of particular interest to be able to scroll the full data set.\n\nSee https://pandas.pydata.org/docs/user_guide/options.html#options-and-settings for the documentation and an [overview of the available settings](https://pandas.pydata.org/docs/user_guide/options.html#available-options)\n\n</div>",
"pd.options.display.max_columns = 45\n\ncasualties_raw.head()",
"Whereas the data is already well organised and structured, some adjustments are required to support further analysis:\n\nCombine the day and hour into a single datetime-aware data type.\nClean up the column names.\nMetadata is always provided both in Dutch and French.\n...\n\nLet's apply the cleaning methods to clean up the data in the next set of exercises.\n<div class=\"alert alert-success\">\n\n**EXERCISE 1**\n\nRemove all the `_FR` metadata columns from the `casualties_raw` data set and assign the result to a new variable `casualties_nl`. Use the `column_names_with_fr` variable derive in the next cell to remove the columns.\n\n<details><summary>Hints</summary>\n\n- Remove columns with the `drop()` method. The method works with one or more column names.\n- Make sure to explicitly set the `columns=` parameter.\n\n__NOTE__ The `column_names_with_fr` variable is created using the `df.columns` attribute of the DataFrame:\n- Instead of enlisting the column names manually, a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) - a _feature of standard Python_ - is used to select the columns names ending on `_FR`.\n- Within the list comprehension, the [`endswith()`](https://docs.python.org/3/library/stdtypes.html#str.endswith) standard string method is used to check if a column name ends on `_FR`. \n- ! Pandas also provides the `.str.endswith()` method, but this is for the data values inside a DataFrame. In this exercise we want to adjust the column names itself. \n\n</details>\n\n</div>",
"column_names_with_fr = [col for col in casualties_raw.columns if col.endswith(\"_FR\")]\ncolumn_names_with_fr\n\n# %load _solutions/pandas_06_data_cleaning1.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 2**\n\nA number of the remaining metadata columns names have the `TX_` and the `_DESCR_NL` in the column name. Clean up these column names by removing the `TX_` at the start and the `_DESCR_NL` at the end of the column names using the helper function `clean_column_name` defined in the next cell. Update the `casualties_nl` variable, assign the result to the variable `casualties`.\n\n<details><summary>Hints</summary>\n\n- Use the `rename` method and apply the mapping on the `columns`.\n- The input of the `rename` method van be a dictionary or a function. Use the `clean_column_name` as the function to rename the columns. \n- Make sure to explicitly set the columns= parameter. \n\n__NOTE__ The function `clean_column_name` takes as input a string and returns the string after removing the prefix and suffix. \n\n- The pandas method `rename` applies this function to each column name individually. \n- `removeprefix()` and `removesuffix()` are [Python string methods](https://docs.python.org/3/library/stdtypes.html#string-methods) to remove start/trailing characters if present.\n\n</details>\n\n</div>",
"def clean_column_name(name):\n return name.removeprefix(\"TX_\").removesuffix(\"_DESCR_NL\")\n\n# %load _solutions/pandas_06_data_cleaning2.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 3**\n\nCheck the unique values of the `SEX` column.\n\nBased on the the values, create a mapping dictionary to replace the values with the english version (`\"male\", \"female\"`). Use `None` for the unknown values (`Onbekend` in Dutch). Apply the mapping to overwrite the values in the `SEX` column with the new value.\n\n<details><summary>Hints</summary>\n\n- Create the mapping by hand and define a `dict`.\n- Use the `replace()` method to update the values of the `SEX` column.\n\n</details>\n\n</div>",
"# %load _solutions/pandas_06_data_cleaning3.py\n\n# %load _solutions/pandas_06_data_cleaning4.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 4**\n\nCheck the unique values of the `DT_HOUR` column. Which of the data values is used as _not a number_ (not known)? Verify the amount of records that with the `DT_HOUR` not known.\n\nA check with the data provider confirmed that the record(s) with value 99 did actually happen at 9 AM and are a typo instead of _not a number_ replacement value. Replace the 99 values with the real hour of the day in the `DT_HOUR` column.\n\n<details><summary>Hints</summary>\n\n- The number `99` is not a valid hour of the day and used as not a number data point.\n- Only one data record has an unknown hour of the day. \n\n</details>\n\n</div>",
"# %load _solutions/pandas_06_data_cleaning5.py\n\n# %load _solutions/pandas_06_data_cleaning6.py\n\n# %load _solutions/pandas_06_data_cleaning7.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 5**\n\nThe day (`DT_DAY`) and hour (`DT_HOUR`) are two separate columns instead of a single `datetime` data type column. \n\n- Check the data types of the `DT_DAY` and `DT_HOUR` columns.\n- Combine the two columns into a single column (using _string concatenation_) and use the `pd.to_datetime` function to convert the combined column (call the column `\"datetime\"`).\n\n<details><summary>Hints</summary>\n\n- The data type of columns is available as the `dtypes` attribute.\n- String concatenation is done element-wise in Pandas using the `+` operator. Do not forget to convert the `DT_HOUR` column into a `str` column using `astype`.\n- Without adding a minute level, the datetime conversion of `pd.to_datetime` won't work. Add `\":00\"` to provide minutes as well.\n\n</details>\n\n</div>",
"# %load _solutions/pandas_06_data_cleaning8.py\n\n# %load _solutions/pandas_06_data_cleaning9.py\n\n# %load _solutions/pandas_06_data_cleaning10.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 6**\n\nFor columns consistiong of a limited number of categories with (_ordinal data_) or without a logical order, Pandas has a specific data type: `Categorical`. An example in the data set is the `DAY_OF_WEEK` (from `Monday` -> `Sunday`). \n\nFor this conversion, the `astype` is not sufficient. Use the `pd.Categorical` function (check the documentation) to create a new column `week_day` with the week days defined as a Categorical variable. Use Monday as the first day of the week and make sure the categories are ordered.\n\n<details><summary>Hints</summary>\n\n- See [Pandas categorical info](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#object-creation) for more information\n- Use `ordered=True` to define ordered data. \n\n</details>\n\n</div>",
"# Conversion to english weekday names\ncasualties[\"DAY_OF_WEEK\"] = casualties[\"DAY_OF_WEEK\"].replace({\"maandag\": \"Monday\", \n \"dinsdag\": \"Tuesday\", \n \"woensdag\": \"Wednesday\", \n \"donderdag\": \"Thursday\", \n \"vrijdag\": \"Friday\", \n \"zaterdag\": \"Saturday\", \n \"zondag\": \"Sunday\"})\n\n# %load _solutions/pandas_06_data_cleaning11.py\n\n# %load _solutions/pandas_06_data_cleaning12.py",
"<div class=\"alert alert-success\">\n\n**(OPTIONAL) EXERCISE 7**\n\nIn the `AGE_CLS` column, the age is formatted as `X tot Y jaar` (i.e. _x till y year_). Remove the Dutch description and convert the data into a format `Y - Y` to define the age classes. \n\nUse the string methods as much as possible. The `Onbekend`, ` ` (empty string) and `75 jaar en meer` data values can be done by direct replacement into `None`, `None` and `> 75` respectively.\n\n<details><summary>Hints</summary>\n\n- Use the `.str.replace()` (note the difference with the Pandas `replace()` method) and the `str.removesuffix()` methods to convert the data format.\n- Add an additional `str.strip` to get rid of the spaces and the 'unknown' number of spaces in the empty string case.\n- Using the `replace()` method with a dictionary just works for the remaining two values: `{\"Onbekend\": None, \"75 jaar en meer\": \">75\"}`. It will leave other values (not specified in the dictionary) as is.\n\n</details>\n\n</div>",
"# %load _solutions/pandas_06_data_cleaning13.py\n\n# verify outcome\ncasualties[\"AGE_CLS\"].unique()",
"<div class=\"alert alert-success\">\n\n**(OPTIONAL) EXERCISE 8**\n\nThe data set contains the number of victims. The link with the individual accidents is not available in the current data set and multiple records/rows of the data set can belong to a single accident. \n\nWe can expect that records with the same day, hour, municipality , light condition, road type and build up area are probably linked to the same accident. Try to estimate the number of accidents.\n\n<details><summary>Hints</summary>\n\n- This exercise is a special case of the `drop_duplicates` method. When we drop duplicate records when `\"DT_DAY\", \"DT_HOUR\", \"CD_MUNTY_REFNIS\", \"BUILD_UP_AREA\",\"LIGHT_COND\", \"ROAD_TYPE\"` are all the same, we have an estimate on the number of accidents.\n- Use the `subset` parameter to define a specific set of column names.\n\n</details>\n\n</div>",
"# %load _solutions/pandas_06_data_cleaning14.py\n\n# %load _solutions/pandas_06_data_cleaning15.py"
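The accident-count estimate from the hints can be sketched on a toy frame (column values invented for illustration): rows that agree on all six accident keys are assumed to describe the same accident, so `drop_duplicates` with a `subset` collapses them.

```python
import pandas as pd

# Toy stand-in: the first two rows share every accident key,
# so they are treated as one accident.
records = pd.DataFrame({
    "DT_DAY": ["2021-01-01", "2021-01-01", "2021-01-02"],
    "DT_HOUR": [8, 8, 17],
    "CD_MUNTY_REFNIS": [44021, 44021, 11002],
    "BUILD_UP_AREA": ["inside", "inside", "outside"],
    "LIGHT_COND": ["day", "day", "night"],
    "ROAD_TYPE": ["regional", "regional", "motorway"],
})

keys = ["DT_DAY", "DT_HOUR", "CD_MUNTY_REFNIS",
        "BUILD_UP_AREA", "LIGHT_COND", "ROAD_TYPE"]
n_accidents = len(records.drop_duplicates(subset=keys))
print(n_accidents)  # 2
```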
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Jackie789/JupyterNotebooks
|
3.1.3+KNN+RegressionWithJackiesModel.ipynb
|
gpl-3.0
|
[
"import pandas as pd\nimport numpy as np\nimport scipy\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nmusic = pd.DataFrame()\nmusic['duration'] = [184, 134, 243, 186, 122, 197, 294, 382, 102, 264, \n 205, 110, 307, 110, 397, 153, 190, 192, 210, 403,\n 164, 198, 204, 253, 234, 190, 182, 401, 376, 102]\nmusic['loudness'] = [18, 34, 43, 36, 22, 9, 29, 22, 10, 24, \n 20, 10, 17, 51, 7, 13, 19, 12, 21, 22,\n 16, 18, 4, 23, 34, 19, 14, 11, 37, 42]\nmusic['bpm'] = [ 105, 90, 78, 75, 120, 110, 80, 100, 105, 60,\n 70, 105, 95, 70, 90, 105, 70, 75, 102, 100,\n 100, 95, 90, 80, 90, 80, 100, 105, 70, 65]",
"KNN Regression\nSo far we've introduced KNN as a classifier, meaning it assigns observations to categories or assigns probabilities to the various categories. However, KNN is also a reasonable algorithm for regression. It's a simple extension of what we've learned before and just as easy to implement.\nEverything's the Same\nSwitching KNN to regression is a simple process. In our previous models, each of the $k$ observations voted for a category. As a regression they vote instead for a value. Then instead of taking the most popular response, the algorithm averages all of the votes. If you have weights you perform a weighted average.\nIt's really that simple.\nLet's go over a quick example just to confirm your understanding.\nLet's stick with the world of music. Instead of trying to classify songs as rock or jazz, let's take the same data with an additional column: beats per minute, or BPM. Can we train our model to predict BPM?\nFirst let's try to predict just in terms of loudness, as this will be easier to represent graphically.",
"from sklearn import neighbors\n\n# Build our model.\nknn = neighbors.KNeighborsRegressor(n_neighbors=10)\nX = pd.DataFrame(music.loudness)\nY = music.bpm\nknn.fit(X, Y)\n\n# Set up our prediction line.\nT = np.arange(0, 50, 0.1)[:, np.newaxis]\n\n# Trailing underscores are a common convention for a prediction.\nY_ = knn.predict(T)\n\nplt.scatter(X, Y, c='k', label='data')\nplt.plot(T, Y_, c='g', label='prediction')\nplt.legend()\nplt.title('K=10, Unweighted')\nplt.show()\n\n# Run the same model, this time with weights.\nknn_w = neighbors.KNeighborsRegressor(n_neighbors=10, weights='distance')\nX = pd.DataFrame(music.loudness)\nY = music.bpm\nknn_w.fit(X, Y)\n\n# Set up our prediction line.\nT = np.arange(0, 50, 0.1)[:, np.newaxis]\n\nY_ = knn_w.predict(T)\n\nplt.scatter(X, Y, c='k', label='data')\nplt.plot(T, Y_, c='g', label='prediction')\nplt.legend()\nplt.title('K=10, Weighted')\nplt.show()",
"Notice how it seems like the weighted model grossly overfits to the data points. It is interesting that it oscillates around the datapoints. This is because the decay in weight happens so quickly.\nValidating KNN\nNow validating KNN, whether a regression or a classifier, is pretty much exactly the same as evaluating other classifiers or regressions. Cross validation is still tremendously valuable. You can do holdouts. You even still get an $R^2$ value for the regression.\nWhy don't we validate that overfitting of the previous model with some k-fold cross validation? The test statistic given by this model is $R^2$, which measures the same thing as in linear regression.",
"from sklearn.model_selection import cross_val_score\nscore = cross_val_score(knn, X, Y, cv=5)\nprint(\"Unweighted Accuracy: %0.2f (+/- %0.2f)\" % (score.mean(), score.std() * 2))\nscore_w = cross_val_score(knn_w, X, Y, cv=5)\nprint(\"Weighted Accuracy: %0.2f (+/- %0.2f)\" % (score_w.mean(), score_w.std() * 2))",
"First let me state that these two models are fantastically awful. There doesn't seem to be much of a relationship. It's all very poor. However the increased variance in the weighted model is interesting.\nWhy don't you add the other feature and mess around with $k$ and weighting to see if you can do any better than we've done so far?",
"## Your model here.\n\nfrom scipy import stats\nknn = neighbors.KNeighborsRegressor(n_neighbors=4)\n# Run the same model, this time with weights.\nknn_w = neighbors.KNeighborsRegressor(n_neighbors=4, weights='distance')\nX = pd.DataFrame({\n 'loudness': music.loudness,\n 'duration': music.duration\n})\nY = music.bpm\nknn_w.fit(X, Y)\nknn.fit(X,Y)\n\n\nfrom sklearn.model_selection import cross_val_score\nscore = cross_val_score(knn, X, Y, cv=10)\nprint(\"Unweighted Accuracy: %0.2f (+/- %0.2f)\" % (score.mean(), score.std() * 2))\nscore_w = cross_val_score(knn_w, X, Y, cv=10)\nprint(\"Weighted Accuracy: %0.2f (+/- %0.2f)\" % (score_w.mean(), score_w.std() * 2))\nprint(score_w)\n\n# search for an optimal value of K for KNN\nk_range = range(1,20)\nk_scores = []\nfor k in k_range: \n knn_w = neighbors.KNeighborsRegressor(n_neighbors = k, weights = 'distance')\n scores = cross_val_score(knn_w, X, Y, cv = 10)\n k_scores.append(scores.mean())\nprint(k_scores)\n\n# plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis)\nplt.plot(k_range, k_scores)\nplt.xlabel(\"Value of K for KNN\")\nplt.ylabel(\"Cross-Validated Accuracy\")\nplt.title(\"Weighted\")\nplt.show()\n\nprint(\"The optimal value for k (weighted by distance) is k={}\".format(k_scores.index(max(k_scores))+1))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
steinam/teacher
|
jup_notebooks/datenbanken/Versicherung_11FI3_On_Paper-Copy1.ipynb
|
mit
|
[
"Versicherung on Paper",
"%load_ext sql\n\n%sql mysql://steinam:steinam@localhost/versicherung_complete",
"Required: a list of the manufacturers' countries without duplicates (3 pts)",
"%%sql \n-- my solution\n\nselect distinct(Land) from Fahrzeughersteller;\n\n%%sql\n-- your solution\nselect fahrzeughersteller.Land \nfrom fahrzeughersteller \ngroup by fahrzeughersteller.Land \n; ",
"List all vehicle types and the number of vehicles of each type, but only if more than 2 vehicles of that type exist. Sort the output by vehicle type. (4 pts)",
"%%sql\n-- my solution\nselect fahrzeugtyp.Bezeichnung, count(fahrzeug.iD) as Anzahl\nfrom fahrzeugtyp left join fahrzeug\non fahrzeugtyp.id = fahrzeug.fahrzeugtyp_id\ngroup by fahrzeugtyp.bezeichnung\nhaving Anzahl > 2\n\n%%sql\nselect *, (select count(*) from fahrzeug where fahrzeug.fahrzeugtyp_id = fahrzeugtyp.id) as Fahrzeuge \nfrom fahrzeugtyp \nhaving Fahrzeuge > 2\norder by fahrzeugtyp.bezeichnung;\n\n\n\n",
"Determine the last and first names of the employees, including the department name, whose department is located in Dortmund or Bochum.",
"%%sql\n-- meine Lösung\n\n-- select ID from Abteilung where Abteilung.Ort = 'Dortmund' or abteilung.Ort = 'Bochum'\n\n\nselect Name, vorname, Bezeichnung, Abteilung.ID, Mitarbeiter.Abteilung_ID, Abteilung.Ort from Mitarbeiter inner join Abteilung \non Mitarbeiter.Abteilung_ID = Abteilung.ID \nwhere Abteilung.Ort in('Dortmund', 'Bochum')\norder by Name\n\n\n%%sql\n-- deine Lösung\nselect mitarbeiter.Name, mitarbeiter.Vorname, \n (select abteilung.bezeichnung from abteilung where abteilung.id = mitarbeiter.abteilung_id) as Abteilung, \n (select abteilung.ort from abteilung where abteilung.id = mitarbeiter.abteilung_id) as Standort \nfrom mitarbeiter having Standort = \"Dortmund\" or Standort = \"Bochum\";\n\n\n",
"For each vehicle manufacturer (the ID is sufficient) and each year, find the smallest and largest damage amount. \nIf possible, also include the difference between the two values in the respective result set. Otherwise, write a separate SQL statement for this task. (5 pts)",
"%%sql\n\n-- meine Lösung\nselect fahrzeughersteller.id, year(datum) as Jahr, min(zuordnung_sf_fz.schadenshoehe), max(zuordnung_sf_fz.Schadenshoehe), \n(max(zuordnung_sf_fz.schadenshoehe) - min(zuordnung_sf_fz.schadenshoehe)) as Differenz\nfrom fahrzeughersteller left join fahrzeugtyp \n on fahrzeughersteller.id = fahrzeugtyp.hersteller_ID\ninner join fahrzeug on fahrzeugtyp.id = fahrzeug.fahrzeugtyp_id\n inner join zuordnung_sf_fz\n on fahrzeug.id = zuordnung_sf_fz.fahrzeug_id\n inner join schadensfall on schadensfall.id = zuordnung_sf_fz.schadensfall_id\ngroup by fahrzeughersteller.id, year(datum)\n\n%%sql\n-- redigierte Version von Wortmann geht\n\nselect \n\nfahrzeughersteller.Name,\n(select min(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz \n\twhere zuordnung_sf_fz.fahrzeug_id in(\n\t\tselect fahrzeug.id from fahrzeug \n\t\twhere fahrzeug.fahrzeugtyp_id in(\n\t\t\tselect fahrzeugtyp.id from fahrzeugtyp \n\t\t\twhere fahrzeugtyp.hersteller_id = fahrzeughersteller.id \n\t\t)\n\t)\n) as Kleinste,\n(select max(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz \n\twhere zuordnung_sf_fz.fahrzeug_id in(\n\t\tselect fahrzeug.id from fahrzeug \n\t\twhere fahrzeug.fahrzeugtyp_id in(\n\t\t\tselect fahrzeugtyp.id from fahrzeugtyp \n\t\t\twhere fahrzeugtyp.hersteller_id = fahrzeughersteller.id \n\t\t)\n\t)\n) as `Groesste`\nfrom fahrzeughersteller;",
"Show all employees and their license plates who drive an Opel as a company car. \n (4 pts)",
"%%sql\nselect Mitarbeiter.Name, dienstwagen.Kennzeichen\nfrom Mitarbeiter inner join dienstwagen\non mitarbeiter.id = dienstwagen.Mitarbeiter_id\ninner join fahrzeugtyp \n on dienstwagen.fahrzeugtyp_Id = fahrzeugtyp.id\n inner join fahrzeughersteller\n on fahrzeugtyp.hersteller_id = fahrzeughersteller.id\nwhere Fahrzeughersteller.NAme = 'Opel'\n\n\n\n%%sql\nselect * from mitarbeiter \nwhere mitarbeiter.id in(\n\tselect dienstwagen.mitarbeiter_id from dienstwagen \n\twhere \n\t\tdienstwagen.mitarbeiter_id = mitarbeiter.id \n\t\tand dienstwagen.fahrzeugtyp_id in(\n\t\t\tselect fahrzeugtyp.id from fahrzeugtyp\n\t\t\twhere fahrzeugtyp.hersteller_id in(\n\t\t\t\tselect fahrzeughersteller.id from fahrzeughersteller\n\t\t\t\twhere fahrzeughersteller.name = \"Opel\"\n\t\t\t)\n\t\t)\n)",
"Which vehicles have caused damages whose total damage amount is higher than the average damage amount? (5 pts)",
"%%sql\n\nselect fahrzeug.kennzeichen, sum(schadenshoehe)\nfrom fahrzeug inner join zuordnung_sf_fz\non fahrzeug.id = zuordnung_sf_fz.fahrzeug_id\ngroup by fahrzeug.kennzeichen\nhaving sum(schadenshoehe) > (select avg(schadenshoehe) from zuordnung_sf_fz)\n\n%%sql\n\n-- deine Lösung Wortmann\n/*\nselect * from fahrzeug having fahrzeug.id in(\n\tselect zuordnung_sf_zf.fahrzeugtyp_id from zuordnung_sf_zf \n\twhere zuordnung_sf_zf.schadenhoehe > ((select sum(zuordnung_sf_zf.schadenhoehe) from zuordnung_sf_zf)) / (select count(*) from zuordnung_sf_zf))\n\n\n*/\n\nselect * from fahrzeug having fahrzeug.id in(\n\tselect zuordnung_sf_fz.fahrzeug_id from zuordnung_sf_fz \n\twhere zuordnung_sf_fz.schadenshoehe > ((select sum(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz)) / (select count(*) from zuordnung_sf_fz))\n\n",
"Which employees are older than the average age of the employees? (4 pts)",
"%%sql\n\nselect Mitarbeiter.Name, Mitarbeiter.Geburtsdatum\nfrom Mitarbeiter\nwhere Geburtsdatum < (select avg(Geburtsdatum) from Mitarbeiter ma) \norder by Mitarbeiter.Name\n \n\n\n%%sql\n-- geht auch\nselect ma.Name, ma.Geburtsdatum\nfrom Mitarbeiter ma\nwhere (now() - ma.Geburtsdatum) < (now() - (select avg(geburtsdatum) from mitarbeiter))\norder by ma.Name; \n\n%%sql\n-- deine Lösung Wortmann\n\nselect * from mitarbeiter \n having mitarbeiter.geburtsdatum < (select sum(mitarbeiter.geburtsdatum) from mitarbeiter) / (select count(*) from mitarbeiter)\n "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jellis18/enterprise
|
docs/_static/notebooks/usage.ipynb
|
mit
|
[
"Usage",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nfrom __future__ import division\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom enterprise.pulsar import Pulsar\nimport enterprise.signals.parameter as parameter\nfrom enterprise.signals import utils\nfrom enterprise.signals import signal_base\nfrom enterprise.signals import selections\nfrom enterprise.signals.selections import Selection\nfrom tests.enterprise_test_data import datadir\nfrom enterprise.signals import white_signals\nfrom enterprise.signals import gp_signals\nfrom enterprise.signals import deterministic_signals",
"Setting up the Pulsar object\nenterprise uses a specific Pulsar object to store all of the relevant pulsar information (i.e. TOAs, residuals, error bars, flags, etc) from the timing package. Eventually enterprise will support both PINT and tempo2; however, for the moment it only supports tempo2 through the libstempo package. This object is then used to initialize Signals that define the generative model for the pulsar residuals. This is in keeping with the overall enterprise philosophy that the pulsar data should be as loosely coupled as possible to the pulsar model.\nBelow we initialize a pulsar class with NANOGrav B1855+09 data by passing it the par and tim file.",
"# pulsar file information\nparfiles = datadir + '/B1855+09_NANOGrav_11yv0.gls.par'\ntimfiles = datadir + '/B1855+09_NANOGrav_11yv0.tim'\n\npsr = Pulsar(parfiles, timfiles)",
"Parameters\nIn enterprise signal parameters are set by specifying a prior distribution (i.e., Uniform, Normal, etc.). Below we will give an example of this functionality.",
"# lets define an efac parameter with a uniform prior from [0.5, 5]\nefac = parameter.Uniform(0.5, 5)",
"This is an abstract parameter class in that it is not yet initialized. It is equivalent to defining the class via the standard nomenclature class efac(object)... The parameter is then initialized via a name. This way, a single parameter class can be initialized for multiple signal parameters with different names (i.e. EFAC per observing backend, etc). Once the parameter is initialized, you have access to many useful methods.",
"# initialize efac parameter with name \"efac_1\"\nefac1 = efac('efac_1')\n\n# return parameter name\nprint(efac1.name)\n\n# get the pdf at a point (the log pdf is accessed with get_logpdf)\nprint(efac1.get_pdf(1.3), efac1.get_logpdf(1.3))\n\n# return 5 samples from this prior distribution\nprint(efac1.sample(size=5))",
"Set up a basic pulsar noise model\nFor our basic noise model we will use standard EFAC, EQUAD, and ECORR white noise with a powerlaw red noise parameterized by an amplitude and spectral index. Using the methods described above we define our parameters for our noise model below.",
"# white and red noise parameters with uniform priors\nefac = parameter.Uniform(0.5, 5)\nlog10_equad = parameter.Uniform(-10, -5)\nlog10_ecorr = parameter.Uniform(-10, -5)\nlog10_A = parameter.Uniform(-18, -12)\ngamma = parameter.Uniform(1, 7)",
"White noise signals\nWhite noise signals are straightforward to intialize",
"# EFAC, EQUAD, and ECORR signals\nef = white_signals.MeasurementNoise(efac=efac)\neq = white_signals.EquadNoise(log10_equad=log10_equad)\nec = gp_signals.EcorrBasisModel(log10_ecorr=log10_ecorr)",
"Again, these are abstract classes that will be initialized when passed a Pulsar object. This, again, makes for ease of use when constructing pulsar signal models in that these classes are created on the fly and can be re-initialized with different pulsars.\nRed noise signals\nRed noise signals are handled somewhat differently than other signals in that we do not create the class by passing the parameters directly. Instead we use the Function factory (creates a class, not an instance) to set the red noise PSD used (i.e. powerlaw, spectrum, broken, etc). This allows the user to define custom PSDs with no extra coding overhead other than the PSD definition itself.",
"# Use Function object to set power-law red noise with uniform priors\npl = parameter.Function(utils.powerlaw, log10_A=log10_A, gamma=gamma)\n\n# red noise signal using Fourier GP\nrn = gp_signals.FourierBasisGP(spectrum=pl, components=30)",
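To illustrate how little a custom PSD requires, here is a hypothetical flat-spectrum PSD written as a plain function of frequency; the exact signature enterprise expects may differ, so treat this purely as a sketch of the idea:

```python
import numpy as np

def flat_psd(f, log10_A=-14):
    """Hypothetical custom PSD: the same power at every frequency."""
    return (10 ** log10_A) ** 2 * np.ones_like(f)

# Evaluate on a few illustrative frequencies (Hz).
freqs = np.linspace(1e-9, 1e-7, 5)
psd = flat_psd(freqs, log10_A=-14)
print(psd.shape)  # (5,)
```

A function like this could, in principle, be passed through the same Function factory in place of `utils.powerlaw`; only the PSD definition itself changes.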
"Here we have defined a power-law function class that will be initialized when the red noise class is initialized. The red noise signal model is then a powerlaw red noise process modeled via a Fourier basis Gaussian Process with 30 components. \nLinear timing model\nWe must include the timing model in all of our analyses. In this case we treat it as a Gaussian process with very large variances. Thus, this is equivalent to marginalizing over the linear timing model coefficients assuming uniform priors. In enterprise this is set up via:",
"# timing model as GP (no parameters)\ntm = gp_signals.TimingModel()",
"Initializing the model\nNow that we have all of our signals defined we can define our total noise model as the sum of all of the components and initialize it by passing the pulsar object to that combined signal class. Is that awesome or what!",
"# create combined signal class with some metaclass magic\ns = ef + ec + eq + rn + tm\n\n# initialize model with pulsar object\npm = s(psr)\n\n# print out the parameter names and priors\npm.params"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/supplemental/solutions/autoencoder.ipynb
|
apache-2.0
|
[
"Convolutional Autoencoder on MNIST dataset\nLearning Objective\n1. Build an autoencoder architecture (consisting of an encoder and decoder) in Keras\n2. Define the loss using the reconstructive error\n3. Define a training step for the autoencoder using tf.GradientTape()\n4. Train the autoencoder on the MNIST dataset\nIntroduction\nThis notebook demonstrates how to build and train a convolutional autoencoder.\nAutoencoders consist of two models: an encoder and a decoder.\n<img src=\"../assets/autoencoder2.png\" width=\"600\">\nIn this notebook we'll build an autoencoder to recreate MNIST digits. The following animation shows a series of images produced by the autoencoder as it was trained for 100 epochs. The images increasingly resemble handwritten digits as the autoencoder learns to reconstruct the original images.\n<img src=\"../assets/autoencoder.gif\">\nImport TensorFlow and other libraries",
"from __future__ import absolute_import, division, print_function\n\nimport glob\nimport imageio\nimport os\nimport PIL\nimport time\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\nfrom tensorflow.keras import layers\nfrom IPython import display",
"Next, we'll define some of the environment variables we'll use in this notebook. Note that we are setting the EMBED_DIM to be 64. This is the dimension of the latent space for our autoencoder.",
"np.random.seed(1)\ntf.random.set_seed(1)\nBATCH_SIZE = 128\nBUFFER_SIZE = 60000\nEPOCHS = 60\nLR = 1e-2\nEMBED_DIM = 64 # intermediate_dim",
"Load and prepare the dataset\nFor this notebook, we will use the MNIST dataset to train the autoencoder. The encoder will map the handwritten digits into the latent space, to force a lower dimensional representation and the decoder will then map the encoding back.",
"(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()\n\ntrain_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')\ntrain_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]",
"Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.",
"# Batch and shuffle the data\ntrain_dataset = tf.data.Dataset.from_tensor_slices(train_images)\ntrain_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\ntrain_dataset = train_dataset.prefetch(BATCH_SIZE*4)\n\ntest_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')\ntest_images = (test_images - 127.5) / 127.5 # Normalize the images to [-1, 1]",
"Create the encoder and decoder models\nBoth our encoder and decoder models will be defined using the Keras Sequential API.\nThe Encoder\nThe encoder uses tf.keras.layers.Conv2D layers to map the image into a lower-dimensional latent space. We will start with an image of size 28x28x1 and then use convolution layers to map into a final Dense layer.\nExercise. Complete the code below to create the CNN-based encoder model. Your model should have input_shape to be 28x28x1 and end with a final Dense layer the size of embed_dim.",
"#TODO 1.\ndef make_encoder(embed_dim):\n model = tf.keras.Sequential(name=\"encoder\")\n model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',\n input_shape=[28, 28, 1]))\n model.add(layers.LeakyReLU())\n model.add(layers.Dropout(0.3))\n\n model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))\n model.add(layers.LeakyReLU())\n model.add(layers.Dropout(0.3))\n\n model.add(layers.Flatten())\n model.add(layers.Dense(embed_dim))\n assert model.output_shape == (None, embed_dim)\n \n return model",
"The Decoder\nThe decoder uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from the latent space. We will start with a Dense layer with the same input shape as embed_dim, then upsample several times until you reach the desired image size of 28x28x1.\nExercise. Complete the code below to create the decoder model. Start with a Dense layer that takes as input a tensor of size embed_dim. Use tf.keras.layers.Conv2DTranspose over multiple layers to upsample so that the final layer has shape 28x28x1 (the shape of our original MNIST digits).\nHint: Experiment with using BatchNormalization or different activation functions like LeakyReLU.",
"#TODO 1.\ndef make_decoder(embed_dim):\n model = tf.keras.Sequential(name=\"decoder\")\n model.add(layers.Dense(embed_dim, use_bias=False,\n input_shape=(embed_dim,)))\n model.add(layers.Dense(6272, use_bias=False,\n input_shape=(embed_dim,)))\n model.add(layers.BatchNormalization())\n model.add(layers.LeakyReLU())\n\n model.add(layers.Reshape((7, 7, 128)))\n\n model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), \n padding='same', use_bias=False))\n model.add(layers.BatchNormalization())\n model.add(layers.LeakyReLU())\n\n model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2),\n padding='same', use_bias=False))\n model.add(layers.BatchNormalization())\n model.add(layers.LeakyReLU())\n\n model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same',\n use_bias=False, activation='tanh'))\n assert model.output_shape == (None, 28, 28, 1)\n \n return model",
"Finally, we stitch the encoder and decoder models together to create our autoencoder.",
"ae_model = tf.keras.models.Sequential([make_encoder(EMBED_DIM), make_decoder(EMBED_DIM)])",
"Using .summary() we can have a high-level summary of the full autoencoder model as well as the individual encoder and decoder. Note how the shapes of the tensors mirror each other as data is passed through the encoder and then the decoder.",
"ae_model.summary()\n\nmake_encoder(EMBED_DIM).summary()\n\nmake_decoder(EMBED_DIM).summary()",
"Next, we define the loss for our autoencoder model. The loss we will use is the reconstruction error. This loss is similar to the MSE loss we've commonly use for regression. Here we are applying this error pixel-wise to compare the original MNIST image and the image reconstructed from the decoder.",
"#TODO 2.\ndef loss(model, original):\n reconstruction_error = tf.reduce_mean(\n tf.square(tf.subtract(model(original), original)))\n return reconstruction_error",
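The reconstruction error above is just a pixel-wise mean squared error; a tiny NumPy check with invented 2x2 "images" shows what the formula computes:

```python
import numpy as np

# Toy "original" and "reconstructed" images (values in [-1, 1] like the MNIST setup).
original = np.array([[0.0, 1.0], [1.0, 0.0]])
reconstructed = np.array([[0.5, 1.0], [1.0, 0.5]])

# Same quantity as tf.reduce_mean(tf.square(tf.subtract(...))) above.
mse = np.mean((reconstructed - original) ** 2)
print(mse)  # 0.125
```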
"Optimizer for the autoencoder\nNext we define the optimizer for model, specifying the learning rate.",
"optimizer = tf.keras.optimizers.SGD(learning_rate=LR)",
"Save checkpoints\nThis notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.",
"checkpoint_dir = \"./ae_training_checkpoints\"\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\ncheckpoint = tf.train.Checkpoint(optimizer=optimizer, model=ae_model)",
"Define the training loop\nNext, we define the training loop for training our autoencoder. The train step will use tf.GradientTape() to keep track of gradient steps through training.\nExercise.\nComplete the code below to define the training loop for our autoencoder. Notice the use of tf.function below. This annotation causes the function train_step to be \"compiled\". The train_step function takes as input a batch of images and passes them through the ae_model. The gradient is then computed on the loss against the ae_model output and the original image. In the code below, you should\n- define ae_gradients. This is the gradient of the autoencoder loss with respect to the variables of the ae_model.\n- create the gradient_variables by assigning each ae_gradient computed above to its respective training variable.\n- apply the gradient step using the optimizer",
"#TODO 3.\n@tf.function\ndef train_step(images):\n with tf.GradientTape() as tape:\n loss_value = loss(ae_model, images)\n\n ae_gradients = tape.gradient(loss_value, ae_model.trainable_variables)\n gradient_variables = zip(ae_gradients, ae_model.trainable_variables)\n\n optimizer.apply_gradients(gradient_variables)",
"We use the train_step function above to define training of our autoencoder. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.",
"def train(dataset, epochs):\n for epoch in range(epochs):\n start = time.time()\n\n for image_batch in dataset:\n train_step(image_batch)\n\n # Produce images for the GIF as we go\n display.clear_output(wait=True)\n generate_and_save_images(ae_model,\n epoch + 1,\n test_images[:16, :, :, :])\n\n # Save the model every 5 epochs\n if (epoch + 1) % 5 == 0:\n checkpoint.save(file_prefix=checkpoint_prefix)\n\n print ('Time for epoch {} is {} sec'.format(\n epoch + 1, time.time()-start))\n\n # Generate after the final epoch\n display.clear_output(wait=True)\n generate_and_save_images(ae_model,\n epochs,\n test_images[:16, :, :, :])",
"Generate and save images.\nWe'll use a small helper function to generate images and save them.",
"def generate_and_save_images(model, epoch, test_input):\n # Notice `training` is set to False.\n # This is so all layers run in inference mode (batchnorm).\n predictions = model(test_input, training=False)\n\n fig = plt.figure(figsize=(4,4))\n\n for i in range(predictions.shape[0]):\n plt.subplot(4, 4, i+1)\n pixels = predictions[i, :, :] * 127.5 + 127.5\n pixels = np.array(pixels, dtype='float')\n pixels = pixels.reshape((28,28))\n plt.imshow(pixels, cmap='gray')\n plt.axis('off')\n\n plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))\n plt.show()",
"Let's see how our model performs before any training. We'll take as input the first 16 digits of the MNIST test set. Right now they just look like random noise.",
"generate_and_save_images(ae_model, 4, test_images[:16, :, :, :])",
"Train the model\nCall the train() method defined above to train the autoencoder model.\nWe'll print the resulting images as training progresses. At the beginning of the training, the decoded images look like random noise. As training progresses, the model outputs will look increasingly better. After about 50 epochs, they resemble MNIST digits. This may take about one or two minutes / epoch",
"#TODO 4.\ntrain(train_dataset, EPOCHS)",
"Create a GIF\nLastly, we'll create a gif that shows the progression of our produced images through training.",
"# Display a single image using the epoch number\ndef display_image(epoch_no):\n return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))\n\ndisplay_image(EPOCHS)\n\nanim_file = 'autoencoder.gif'\n\nwith imageio.get_writer(anim_file, mode='I') as writer:\n filenames = glob.glob('image_at_epoch_*.png')\n filenames = sorted(filenames)\n last = -1\n for i,filename in enumerate(filenames):\n frame = 2*(i**0.5)\n if round(frame) > round(last):\n last = frame\n else:\n continue\n image = imageio.imread(filename)\n writer.append_data(image)\n image = imageio.imread(filename)\n writer.append_data(image)\n\nimport IPython\nif IPython.version_info > (6,2,0,''):\n display.Image(filename=anim_file)",
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
matthewzimmer/carnd-behavioral-cloning
|
basic_training.ipynb
|
mit
|
[
"import numpy as np\nimport pandas as pd\nimport json\nimport uuid\nimport os\nimport random\nimport cv2\nimport math\n\nfrom scipy import misc\nfrom keras.layers import Convolution2D, Activation, MaxPooling2D, Dropout, Flatten, Dense, ZeroPadding2D, Lambda, ELU, BatchNormalization\nfrom keras.models import Sequential, model_from_json\nfrom keras.optimizers import Adam\nfrom sklearn.model_selection import train_test_split",
"Training data was collected in the Self-Driving Car simulator on Mac OS using a Playstation 3 console controller.\nRecording Measurement class\nTo simplify accessing each measurement from the original CSV, I've encapsulated each row in a special class.\nIt serves the following purposes:\n\nStrips whitespace off the left, center, and right camera image paths.\nCasts the original absolute path of each camera image to a relative path.\n This adds reassurance the image will load on any computer.\nProvides a convenient #is_valid_measurement method which encapsulates pertinent logic to ensure data quality \n is satisfactory.\nProvides convenience getter methods for left, center, and right camera images. They're essentially designed to \n lazily instantiate (once) the actual image array at the time the method is invoked.",
"class RecordingMeasurement:\n \"\"\"\n A representation of a vehicle's state at a point in time while driving \n around a track during recording.\n \n Features available are:\n \n left_camera_view - An image taken by the LEFT camera.\n center_camera_view - An image taken by the CENTER camera.\n right_camera_view - An image taken by the RIGHT camera.\n steering_angle - A normalized steering angle in the range -1 to 1.\n speed - The speed in which the vehicle was traveling at measurement time.\n \n \n This class serves the following purposes:\n\n 1. Provides convenience getter methods for left, center and camera images. \n In an effort to reduce memory footprint, they're essentially designed \n to lazily instantiate (once) the actual image array at the time the \n method is invoked.\n \n 2. Strips whitespace off the left, center, and right camera image paths.\n \n 3. Casts the original absolute path of each camera image to a relative path.\n This adds reassurance the image will load on any computer.\n \n 4. 
Provides a convenient #is_valid_measurment method which encapsulates \n pertinent logic to ensure data quality is satisfactory.\n \n \"\"\"\n def __init__(self, measurement_data):\n self.measurement_data = measurement_data\n \n self.steering_angle = round(float(measurement_data['steering']), 4)\n self.speed = round(float(measurement_data['speed']), 4)\n \n l = measurement_data['left'].strip()\n c = measurement_data['center'].strip()\n r = measurement_data['right'].strip()\n \n # cast absolute path to relative path to be environment agnostic\n l, c, r = [('./IMG/' + os.path.split(file_path)[1]) for file_path in (l, c, r)]\n\n self.left_camera_view_path = l\n self.center_camera_view_path = c\n self.right_camera_view_path = r\n \n def is_valid_measurement(self):\n \"\"\"\n Return true if the original center image is available to load.\n \"\"\"\n return os.path.isfile(self.center_camera_view_path)\n \n def left_camera_view(self):\n \"\"\"\n Lazily instantiates the left camera view image at first call.\n \"\"\"\n if not hasattr(self, '__left_camera_view'):\n self.__left_camera_view = self.__load_image(self.left_camera_view_path)\n return self.__left_camera_view\n \n def center_camera_view(self):\n \"\"\"\n Lazily instantiates the center camera view image at first call.\n \"\"\"\n if not hasattr(self, '__center_camera_view'):\n self.__center_camera_view = self.__load_image(self.center_camera_view_path)\n return self.__center_camera_view\n \n def right_camera_view(self):\n \"\"\"\n Lazily instantiates the right camera view image at first call.\n \"\"\"\n if not hasattr(self, '__right_camera_view'):\n self.__right_camera_view = self.__load_image(self.right_camera_view_path)\n return self.__right_camera_view\n \n def __load_image(self, imagepath):\n image_array = None\n if os.path.isfile(imagepath):\n image_array = misc.imread(imagepath)\n else:\n print('File Not Found: {}'.format(imagepath))\n return image_array\n \n def __str__(self):\n results = []\n 
results.append('Image paths:')\n results.append('')\n results.append(' Left camera path: {}'.format(self.left_camera_view_path))\n results.append(' Center camera path: {}'.format(self.center_camera_view_path))\n results.append(' Right camera path: {}'.format(self.right_camera_view_path))\n results.append('')\n results.append('Additional features:')\n results.append('')\n results.append(' Steering angle: {}'.format(self.steering_angle))\n results.append(' Speed: {}'.format(self.speed))\n return '\\n'.join(results)",
"Image Preprocessor\nAlgorithm used to preprocess an image prior to feeding it into the network for training and prediction.",
"def preprocess_image(image_array, output_shape=(40, 80), colorspace='yuv'):\n \"\"\"\n Reminder: \n \n Source image shape is (160, 320, 3)\n \n Our preprocessing algorithm consists of the following steps:\n \n 1. Converts BGR to YUV colorspace. \n \n This allows us to leverage luminance (Y channel - brightness - black and white representation), \n and chrominance (U and V - blue–luminance and red–luminance differences respectively)\n \n 2. Crops top 31.25% portion and bottom 12.5% portion. \n The entire width of the image is preserved.\n \n This allows the model to generalize better to unseen roadways since we crop \n artifacts such as trees, buildings, etc. above the horizon. We also clip the \n hood from the image.\n \n 3. Finally, I allow users of this algorithm the ability to specify the shape of the final image via \n the output_shape argument.\n \n Once I've cropped the image, I resize it to the specified shape using the INTER_AREA \n interpolation algorithm as it is the best choice to preserve original image features.\n \n See `Scaling` section in OpenCV documentation:\n \n http://docs.opencv.org/trunk/da/d6e/tutorial_py_geometric_transformations.html\n \"\"\"\n # convert image to another colorspace\n if colorspace == 'yuv':\n image_array = cv2.cvtColor(image_array, cv2.COLOR_BGR2YUV)\n elif colorspace == 'hsv':\n image_array = cv2.cvtColor(image_array, cv2.COLOR_BGR2HSV)\n elif colorspace == 'rgb':\n image_array = cv2.cvtColor(image_array, cv2.COLOR_BGR2RGB)\n \n\n # [y1:y2, x1:x2]\n # \n # crops top 31.25% portion and bottom 12.5% portion\n # \n # The entire width of the image is preserved\n image_array = image_array[50:140, 0:320]\n \n # Let's blur the image to smooth out some of the artifacts\n kernel_size = 5 # Must be an odd number (3, 5, 7...)\n image_array = cv2.GaussianBlur(image_array, (kernel_size, kernel_size), 0)\n\n # resize image to output_shape\n image_array = cv2.resize(image_array, (output_shape[1], output_shape[0]), 
interpolation=cv2.INTER_AREA)\n\n return image_array\n",
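The crop fractions quoted above can be sanity-checked numerically. The sketch below is standalone (the 160×320 array is a synthetic stand-in for a simulator frame, not real data) and confirms that slicing rows 50:140 removes the top 31.25% and bottom 12.5% of the image:

```python
import numpy as np

# Synthetic stand-in for a (160, 320, 3) simulator frame.
frame = np.zeros((160, 320, 3), dtype=np.uint8)

# The same slice preprocess_image applies: keep rows 50..139, full width.
cropped = frame[50:140, 0:320]

top_removed = 50 / 160.0              # 0.3125 -> top 31.25%
bottom_removed = (160 - 140) / 160.0  # 0.125  -> bottom 12.5%

print(cropped.shape)  # (90, 320, 3)
```

The retained band is 90 rows tall, so the resize to (40, 80) later roughly preserves the cropped region's aspect ratio.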
"Track 1 Training Dataset class",
"class Track1Dataset:\n \"\"\"\n Parses driving_log.csv and constructs training, validation and test datasets corresponding to\n measurements taken at various points in time while recording on track 1.\n\n * X_train - A set of examples used for learning, that is to fit the parameters [i.e., weights] of the\n classifier.\n\n * X_val - A set of examples used to tune the hyperparameters [i.e., architecture, not weights] of a\n classifier, for example to choose the number of hidden units in a neural network.\n\n * X_test - A set of examples used only to assess the performance [generalization] of a fully-specified\n classifier.\n\n * y_train, y_val, y_test - The steering angle corresponding to their respective X features.\n \"\"\"\n\n DRIVING_LOG_PATH = './driving_log.csv'\n\n def __init__(self, validation_split_percentage=0.2, test_split_percentage=0.05):\n self.X_train = []\n self.X_val = []\n self.X_test = []\n\n self.y_train = []\n self.y_val = []\n self.y_test = []\n\n self.dataframe = None\n self.headers = []\n self.__loaded = False\n\n self.__load(validation_split_percentage=validation_split_percentage,\n test_split_percentage=test_split_percentage)\n\n assert self.__loaded == True, 'The dataset was not loaded. 
Perhaps driving_log.csv is missing.'\n\n def __load(self, validation_split_percentage, test_split_percentage):\n \"\"\"\n Splits the training data into a validation and test dataset.\n\n * X_train - A set of examples used for learning, that is to fit the parameters [i.e., weights] of the classifier.\n\n * X_val - A set of examples used to tune the hyperparameters [i.e., architecture, not weights] of a\n classifier, for example to choose the number of hidden units in a neural network.\n\n * X_test - A set of examples used only to assess the performance [generalization] of a fully-specified\n classifier.\n\n * y_train, y_val, y_test - The steering angle corresponding to their respective X features.\n \"\"\"\n if not self.__loaded:\n X_train, y_train, headers, df = [], [], [], None\n\n # read in driving_log.csv and construct the\n # initial X_train and y_train before splitting\n # it into validation and testing sets.\n if os.path.isfile(self.DRIVING_LOG_PATH):\n df = pd.read_csv(self.DRIVING_LOG_PATH)\n headers = list(df.columns.values)\n for index, measurement_data in df.iterrows():\n measurement = RecordingMeasurement(measurement_data=measurement_data)\n X_train.append(measurement)\n y_train.append(measurement.steering_angle)\n self.__loaded = True\n\n # generate the validation set\n X_train, X_val, y_train, y_val = train_test_split(\n X_train,\n y_train,\n test_size=validation_split_percentage,\n random_state=0)\n\n X_train, y_train, X_val, y_val = np.array(X_train), np.array(y_train, dtype=np.float32), \\\n np.array(X_val), np.array(y_val, dtype=np.float32)\n\n # generate the test set\n X_train, X_test, y_train, y_test = train_test_split(\n X_train,\n y_train,\n test_size=test_split_percentage,\n random_state=0)\n\n X_train, y_train, X_test, y_test = np.array(X_train), np.array(y_train, dtype=np.float32), \\\n np.array(X_test), np.array(y_test, dtype=np.float32)\n\n self.X_train = X_train\n self.X_val = X_val\n self.X_test = X_test\n\n self.y_train = y_train\n 
self.y_val = y_val\n self.y_test = y_test\n\n self.dataframe = df\n self.headers = headers\n\n def batch_generator(self, X, Y, label, num_epochs, batch_size=32, output_shape=(160, 320), flip_images=True,\n classifier=None, colorspace='yuv'):\n \"\"\"\n A custom batch generator with the main goal of reducing memory footprint\n on computers and GPUs with limited memory space.\n\n Infinitely yields `batch_size` elements from the X and Y datasets.\n\n During batch iteration, this algorithm randomly flips the image\n and steering angle to reduce bias towards a specific steering angle/direction.\n \"\"\"\n population = len(X)\n counter = 0\n _index_in_epoch = 0\n _tot_epochs = 0\n batch_size = min(batch_size, population)\n batch_count = int(math.ceil(population / batch_size))\n\n assert X.shape[0] == Y.shape[0], 'X and Y size must be identical.'\n\n print('Batch generating against the {} dataset with population {} and shape {}'.format(label, population,\n X.shape))\n while True:\n counter += 1\n print('batch gen iter {}'.format(counter))\n for i in range(batch_count):\n start_i = _index_in_epoch\n _index_in_epoch += batch_size\n if _index_in_epoch >= population:\n # Save the classifier to support manual early stoppage\n if classifier is not None:\n classifier.save()\n print(' sampled entire population. 
reshuffling deck and resetting all counters.')\n perm = np.arange(population)\n np.random.shuffle(perm)\n X = X[perm]\n Y = Y[perm]\n start_i = 0\n _index_in_epoch = batch_size\n _tot_epochs += 1\n end_i = _index_in_epoch\n\n X_batch = []\n y_batch = []\n\n for j in range(start_i, end_i):\n steering_angle = Y[j]\n measurement = X[j]\n center_image = measurement.center_camera_view()\n if center_image is not None:\n image = preprocess_image(center_image, output_shape=output_shape, colorspace=colorspace)\n\n # Here I throw in a random image flip to reduce bias towards\n # a specific direction/steering angle.\n if flip_images and random.random() > 0.5:\n X_batch.append(np.fliplr(image))\n y_batch.append(-steering_angle)\n else:\n X_batch.append(image)\n y_batch.append(steering_angle)\n\n yield np.array(X_batch), np.array(y_batch)\n\n def __str__(self):\n results = []\n results.append('{} Stats:'.format(self.__class__.__name__))\n results.append('')\n results.append(' [Headers]')\n results.append('')\n results.append(' {}'.format(self.headers))\n results.append('')\n results.append('')\n results.append(' [Shapes]')\n results.append('')\n results.append(' training features: {}'.format(self.X_train.shape))\n results.append(' training labels: {}'.format(self.y_train.shape))\n results.append('')\n results.append(' validation features: {}'.format(self.X_val.shape))\n results.append(' validation labels: {}'.format(self.y_val.shape))\n results.append('')\n results.append(' test features: {}'.format(self.X_test.shape))\n results.append(' test labels: {}'.format(self.y_test.shape))\n results.append('')\n results.append(' [Dataframe sample]')\n results.append('')\n results.append(str(self.dataframe.head(n=5)))\n return '\\n'.join(results)\n",
"Instantiates the Track 1 training dataset, prints details about the object, then prints the first 5 rows of the dataframe.",
"dataset = Track1Dataset(validation_split_percentage=0.2, test_split_percentage=0.05)\nprint(dataset)",
"Feature Plotting\nAfter plotting the steering, throttle, brake, and speed features, we can see that the vast majority of driving was done between 27.5 and 31 MPH.\nFurthermore, the brake, steering, and throttle features were most often 0, hence the large reddish bar at zero for all features.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\ndataset.dataframe.plot.hist(alpha=0.5)",
"Steering Histogram Plot\nAfter plotting a histogram of the steering feature for the entire dataset, we can see that significantly more recorded driving was done with the steering angle between 0.01 and 0.25. We can also observe that the majority of recorded driving was done with the steering angle between -0.25 and 0.25. Very little recorded data goes beyond a +/- 0.5 steering angle.",
"dataset.dataframe['steering'].plot.hist(alpha=0.5)",
"Steering Line Plot\nAfter plotting a standard line graph (honestly, I don't know the official name of this graph) of the steering feature for the entire dataset, the widest curves appear to be predominantly to the right, while smoother, more consistent left curves are observed throughout the entire recording session.\nFurthermore, I postulate the abrupt right-turn spikes with large gaps are indicative of recovery training from the right lane line moving toward the center of the lane.",
"dataset.dataframe['steering'].plot(alpha=0.5)",
"Explore the features\nHere I sample the 0th feature and print some statistics about the 0th RecordingMeasurement.",
"print('Center camera view shape:\\n\\n{}\\n'.format(dataset.X_train[0].center_camera_view().shape))\nprint(dataset.X_train[0])",
"Here I randomize the training data and inject the first 10 measurements into the batch generator. Note each item in X_train is an instance of the RecordingMeasurement class.",
"perm = np.arange(len(dataset.X_train))\nnp.random.shuffle(perm) \noutput_shape = (40, 80, 3)\ngenerator = dataset.batch_generator(\n colorspace='yuv',\n X=dataset.X_train[0:10], \n Y=dataset.y_train[0:10], \n output_shape=output_shape, \n label='batch feature exploration', \n num_epochs=1, \n batch_size=10\n)",
"Visualize batch features\nHere I've plotted the first 10 randomly selected images from the batch of images.\nI've plotted the original image in the YUV colorspace as well as each channel Y, U and V.\nOf RGB, HSV, and YUV, the YUV colorspace captured the most intuitive mapping from image pixels representing a curved road to a steering angle.",
"from zimpy.plot.image_plotter import ImagePlotter\n\n# Grab the first 10 items from the training set and\n# pull one batch from the generator\nX_batch, y_batch = next(generator)\n# print(X_batch.shape)\n# print(y_batch.shape)\n\n# Cast to string so they render nicely in graph\ny_batch = [str(x) for x in y_batch]\n\nImagePlotter.plot_images(X_batch, y_batch, rows=2, columns=5)\nImagePlotter.plot_images(X_batch[:,:,:,0], y_batch, rows=2, columns=5)\nImagePlotter.plot_images(X_batch[:,:,:,1], y_batch, rows=2, columns=5)\nImagePlotter.plot_images(X_batch[:,:,:,2], y_batch, rows=2, columns=5)",
"Network Architecture\nBaseNetwork is the base class for all network implementations. It contains the necessary plumbing to save trained network data, load previously trained model and weights, and declares the abstract #fit and #build_model methods which all subclasses must implement.",
"class BaseNetwork:\n \n WEIGHTS_FILE_NAME = 'model_final.h5'\n MODEL_FILE_NAME = 'model_final.json'\n \n def __init__(self):\n self.uuid = uuid.uuid4()\n self.model = None\n self.weights = None\n\n def fit(self, X_train, y_train, X_val, y_val, nb_epoch=2, batch_size=32, samples_per_epoch=None, output_shape=(40, 80, 3)):\n raise NotImplementedError\n \n def build_model(self, input_shape, output_shape, learning_rate=0.001, dropout_prob=0.1, activation='relu'):\n raise NotImplementedError\n\n def save(self):\n print('Saved {} model.'.format(self.__class__.__name__))\n self.__persist()\n\n def __persist(self):\n save_dir = os.path.join(os.path.dirname(__file__))\n weights_save_path = os.path.join(save_dir, self.WEIGHTS_FILE_NAME)\n model_save_path = os.path.join(save_dir, self.MODEL_FILE_NAME)\n\n if not os.path.exists(save_dir):\n os.makedirs(save_dir)\n\n self.model.save_weights(weights_save_path)\n with open(model_save_path, 'w') as outfile:\n json.dump(self.model.to_json(), outfile)\n\n def __str__(self):\n results = []\n if self.model is not None:\n results.append(self.model.summary())\n return '\\n'.join(results)",
"Track1 extends BaseNetwork.\nIt implements a simple 4-layer convolutional neural network topped by 4 fully connected layers, with 10% dropout applied after flattening the data and again after the first fully connected layer.",
"class Track1(BaseNetwork):\n def fit(self, model, dataset, nb_epoch=2, batch_size=32, samples_per_epoch=None, output_shape=(40, 80, 3)):\n # Fit the model leveraging the custom \n # batch generator baked into the \n # dataset itself.\n history = model.fit_generator(\n dataset.batch_generator(\n X=dataset.X_train, \n Y=dataset.y_train, \n label='train set', \n num_epochs=nb_epoch,\n batch_size=batch_size,\n output_shape=output_shape,\n classifier=self\n ),\n nb_epoch=nb_epoch,\n samples_per_epoch=len(dataset.X_train),\n verbose=2,\n validation_data=dataset.batch_generator(\n X=dataset.X_val, \n Y=dataset.y_val,\n label='validation set',\n num_epochs=nb_epoch,\n batch_size=batch_size,\n output_shape=output_shape\n )\n )\n \n print(history.history)\n self.save()\n \n def build_model(self, input_shape, output_shape, learning_rate=0.001, dropout_prob=0.1, activation='relu'):\n \"\"\"\n Initial zero-mean normalization input layer.\n A 4-layer deep neural network with 4 fully connected layers at the top.\n ReLU activation used on each convolution layer.\n Dropout of 10% (default) used after initially flattening after convolution layers.\n Dropout of 10% (default) used after first fully connected layer.\n \n Adam optimizer with 0.001 learning rate (default) used in this network.\n Mean squared error loss function was used since this is a regression problem and MSE is \n quite common and robust for regression analysis.\n \"\"\"\n model = Sequential()\n model.add(Lambda(lambda x: x / 255 - 0.5,\n input_shape=input_shape,\n output_shape=output_shape))\n model.add(Convolution2D(24, 5, 5, border_mode='valid', activation=activation))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n model.add(Convolution2D(36, 5, 5, border_mode='valid', activation=activation))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n model.add(Convolution2D(48, 5, 5, border_mode='same', activation=activation))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n model.add(Convolution2D(64, 3, 3, border_mode='same', 
activation=activation))\n model.add(Flatten())\n model.add(Dropout(dropout_prob))\n model.add(Dense(1024, activation=activation))\n model.add(Dropout(dropout_prob))\n model.add(Dense(100, activation=activation))\n model.add(Dense(50, activation=activation))\n model.add(Dense(10, activation=activation))\n model.add(Dense(1, init='normal'))\n\n optimizer = Adam(lr=learning_rate)\n model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])\n self.model = model\n model.summary()\n return model",
"Instantiate the classifier",
"output_shape=(40, 80, 3)\n\nclf = Track1()\nmodel = clf.build_model(\n input_shape=output_shape, \n output_shape=output_shape, \n learning_rate=0.001, \n dropout_prob=0.1, \n activation='relu'\n)",
"Train the network",
"if False:\n clf.fit(\n model, \n dataset, \n nb_epoch=2, \n batch_size=32\n )"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maxhutch/thesis-notebooks
|
Compare Fingers.ipynb
|
gpl-3.0
|
[
"Figure 1\nStart by loading some boilerplate: matplotlib, numpy, scipy, json, functools, and a convenience class.",
"%matplotlib inline\nimport matplotlib\nmatplotlib.rcParams['figure.figsize'] = (10.0, 8.0)\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.interpolate import interp1d, InterpolatedUnivariateSpline\nfrom scipy.optimize import bisect\nimport json\nfrom functools import partial\nclass Foo: pass",
"And some more specialized dependencies:\n 1. Slict provides a convenient slice-able dictionary interface\n 2. Chest is an out-of-core dictionary that we'll hook directly to a globus remote using...\n 3. glopen is an open-like context manager for remote globus files",
"from chest import Chest\nfrom slict import CachedSlict\nfrom glopen import glopen, glopen_many",
"Configuration for this figure.",
"config = Foo()\nconfig.name0 = \"HighAspect/HA_base/HA_base\"\nconfig.name1 = \"HighAspect/HA_visc/HA_visc\"\n#config.arch_end = \"maxhutch#alpha-admin/~/pub/\"\nconfig.arch_end = \"alcf#dtn_mira/projects/alpha-nek/\"",
"Open a chest located on a remote globus endpoint and load a remote json configuration file.",
"c0 = Chest(path = \"{:s}-results\".format(config.name0),\n open = partial(glopen, endpoint=config.arch_end),\n open_many = partial(glopen_many, endpoint=config.arch_end))\nsc0 = CachedSlict(c0)\nwith glopen(\n \"{:s}.json\".format(config.name0), mode='r',\n endpoint = config.arch_end,\n ) as f:\n params0 = json.load(f)\n\nc1 = Chest(path = \"{:s}-results\".format(config.name1),\n open = partial(glopen, endpoint=config.arch_end),\n open_many = partial(glopen_many, endpoint=config.arch_end))\nsc1 = CachedSlict(c1)\nwith glopen(\n \"{:s}.json\".format(config.name1), mode='r',\n endpoint = config.arch_end,\n ) as f:\n params1 = json.load(f)",
"We want to plot the spike depth, which is the 'H' field in the chest.\nChests can prefetch lists of keys more quickly than individual ones, so we'll prefetch the keys we want.",
"c0.prefetch(sc0[:,'H'].full_keys())\nc0.prefetch(sc0[:,'w_max_z'].full_keys())\nc1.prefetch(sc1[:,'H'].full_keys())\nc1.prefetch(sc1[:,'w_max_z'].full_keys())",
"Plot the bubble height, the 'H' keys, vs. time.\nUse a spline to compute the derivative.",
"spl0 = InterpolatedUnivariateSpline(sc0[:,'H'].keys(), sc0[:,'H'].values(), k=3)\ndHdt0 = spl0.derivative()\nTs0 = np.linspace(sc0[:,'H'].keys()[0], sc0[:,'H'].keys()[-1], 1000)\nVb0 = -dHdt0(Ts0)\nVm0 = np.array([np.max(sc0[t,'w_max_z']) for t in sc0[:,'w_max_z'].keys()])\nVc0 = np.array([sc0[t,'w_max_z'][sc0[t,'w_max_z'].shape[0]/2] for t in sc0[:,'w_max_z'].keys()])\ntheory0 = np.sqrt(\n params0[\"atwood\"] * params0[\"g\"] * params0[\"extent_mesh\"][0] / np.pi \n + (2.*np.pi*params0[\"viscosity\"] / params0[\"extent_mesh\"][0])**2\n ) - (2.*np.pi*params0[\"viscosity\"] / params0[\"extent_mesh\"][0]) \n\nspl1 = InterpolatedUnivariateSpline(sc1[:,'H'].keys(), sc1[:,'H'].values(), k=3)\ndHdt1 = spl1.derivative()\nTs1 = np.linspace(sc1[:,'H'].keys()[0], sc1[:,'H'].keys()[-1], 1000)\nVb1 = -dHdt1(Ts1)\nVm1 = np.array([np.max(sc1[t,'w_max_z']) for t in sc1[:,'w_max_z'].keys()])\nVc1 = np.array([sc1[t,'w_max_z'][sc1[t,'w_max_z'].shape[0]/2] for t in sc1[:,'w_max_z'].keys()])\ntheory1 = np.sqrt(\n params1[\"atwood\"] * params1[\"g\"] * params1[\"extent_mesh\"][0] / np.pi \n + (2.*np.pi*params1[\"viscosity\"] / params1[\"extent_mesh\"][0])**2\n ) - (2.*np.pi*params1[\"viscosity\"] / params1[\"extent_mesh\"][0]) \n\nfig, axs = plt.subplots(2,1, sharex=True)\n\naxs[0].plot(Ts0, -spl0(Ts0));\naxs[0].plot(Ts1, -spl1(Ts1));\n#axs[0].plot(sc[:,'H'].keys(), -np.array(sc[:,'H'].values()), 'yo');\naxs[0].set_ylabel('Depth (m)')\n\naxs[1].plot(Ts0, Vb0, 'b-');\naxs[1].plot(Ts1, Vb1, 'b-');\naxs[1].plot(sc0[:,'w_max_z'].keys(), Vm0, 'g-');\naxs[1].plot(sc1[:,'w_max_z'].keys(), Vm1, 'g-');\naxs[1].plot(sc0[:,'w_max_z'].keys(), Vc0, 'r-');\naxs[1].plot(sc1[:,'w_max_z'].keys(), Vc1, 'r-');\naxs[1].plot([Ts0[0],Ts0[-1]], [theory0, theory0], 'k--');\naxs[1].plot([Ts1[0],Ts1[-1]], [theory1, theory1], 'k--');\naxs[1].set_ylabel('Velocity (scaled)')\naxs[1].set_xlabel('Time (s)');\nplt.savefig('Figure1.eps')\n\n%install_ext 
http://raw.github.com/jrjohansson/version_information/master/version_information.py \n%load_ext version_information \n%version_information numpy, matplotlib, slict, chest, glopen, globussh"
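The spline-derivative trick used above — fit a spline to the sampled depth H(t), then differentiate the spline to recover the front velocity — can be checked on synthetic data where the derivative is known analytically. Here H(t) = -t², so dH/dt = -2t (the data is made up for the check, not taken from the chest):

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

t = np.linspace(0.0, 2.0, 50)
H = -t**2  # synthetic depth history with a known analytic derivative

spl = InterpolatedUnivariateSpline(t, H, k=3)
dHdt = spl.derivative()

# A cubic interpolating spline reproduces quadratic data exactly,
# so the spline derivative matches the analytic -2t.
print(np.allclose(dHdt(t), -2.0 * t, atol=1e-6))
```

On real, noisy samples the derivative amplifies noise, which is why the figure evaluates the spline on a dense `np.linspace` rather than differencing the raw samples.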
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google-research/google-research
|
using_dl_to_annotate_protein_universe/Using_Deep_Learning_to_Annotate_the_Protein_Universe.ipynb
|
apache-2.0
|
[
"```\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n```\nThis code supports the publication \"Using Deep Learning to Annotate the Protein Universe\".\npreprint link\nNote: We recommend you enable a free GPU by going:\n\nRuntime → Change runtime type → Hardware Accelerator: GPU\n\nSet-up\nImports",
"import json\nimport numpy as np\nimport tensorflow.compat.v1 as tf\n\n# Suppress noisy log messages.\nfrom tensorflow.python.util import deprecation\ndeprecation._PRINT_DEPRECATION_WARNINGS = False",
"Library functions: convert sequence to one-hot array (input to model)",
"AMINO_ACID_VOCABULARY = [\n 'A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R',\n 'S', 'T', 'V', 'W', 'Y'\n]\ndef residues_to_one_hot(amino_acid_residues):\n \"\"\"Given a sequence of amino acids, return one hot array.\n\n Supports ambiguous amino acid characters B, Z, and X by distributing evenly\n over possible values, e.g. an 'X' gets mapped to [.05, .05, ... , .05].\n\n Supports rare amino acids by appropriately substituting. See\n normalize_sequence_to_blosum_characters for more information.\n\n Supports gaps and pads with the '.' and '-' characters; which are mapped to\n the zero vector.\n\n Args:\n amino_acid_residues: string. consisting of characters from\n AMINO_ACID_VOCABULARY\n\n Returns:\n A numpy array of shape (len(amino_acid_residues),\n len(AMINO_ACID_VOCABULARY)).\n\n Raises:\n ValueError: if sparse_amino_acid has a character not in the vocabulary + X.\n \"\"\"\n to_return = []\n normalized_residues = amino_acid_residues.replace('U', 'C').replace('O', 'X')\n for char in normalized_residues:\n if char in AMINO_ACID_VOCABULARY:\n to_append = np.zeros(len(AMINO_ACID_VOCABULARY))\n to_append[AMINO_ACID_VOCABULARY.index(char)] = 1.\n to_return.append(to_append)\n elif char == 'B': # Asparagine or aspartic acid.\n to_append = np.zeros(len(AMINO_ACID_VOCABULARY))\n to_append[AMINO_ACID_VOCABULARY.index('D')] = .5\n to_append[AMINO_ACID_VOCABULARY.index('N')] = .5\n to_return.append(to_append)\n elif char == 'Z': # Glutamine or glutamic acid.\n to_append = np.zeros(len(AMINO_ACID_VOCABULARY))\n to_append[AMINO_ACID_VOCABULARY.index('E')] = .5\n to_append[AMINO_ACID_VOCABULARY.index('Q')] = .5\n to_return.append(to_append)\n elif char == 'X':\n to_return.append(\n np.full(len(AMINO_ACID_VOCABULARY), 1. 
/ len(AMINO_ACID_VOCABULARY)))\n elif char == _PFAM_GAP_CHARACTER:\n to_return.append(np.zeros(len(AMINO_ACID_VOCABULARY)))\n else:\n raise ValueError('Could not one-hot code character {}'.format(char))\n return np.array(to_return)\n\ndef _test_residues_to_one_hot():\n expected = np.zeros((3, 20))\n expected[0, 0] = 1. # Amino acid A\n expected[1, 1] = 1. # Amino acid C\n expected[2, :] = .05 # Amino acid X\n\n actual = residues_to_one_hot('ACX')\n np.testing.assert_allclose(actual, expected)\n_test_residues_to_one_hot()\n\ndef pad_one_hot_sequence(sequence: np.ndarray,\n target_length: int) -> np.ndarray:\n \"\"\"Pads one hot sequence [seq_len, num_aas] in the seq_len dimension.\"\"\"\n sequence_length = sequence.shape[0]\n pad_length = target_length - sequence_length\n if pad_length < 0:\n raise ValueError(\n 'Cannot set a negative amount of padding. Sequence length was {}, target_length was {}.'\n .format(sequence_length, target_length))\n pad_values = [[0, pad_length], [0, 0]]\n return np.pad(sequence, pad_values, mode='constant')\n\ndef _test_pad_one_hot():\n input_one_hot = residues_to_one_hot('ACX')\n expected = np.array(input_one_hot.tolist() + np.zeros((4, 20)).tolist())\n actual = pad_one_hot_sequence(input_one_hot, 7)\n\n np.testing.assert_allclose(expected, actual)\n_test_pad_one_hot()",
"Download model and vocabulary",
"# Get a TensorFlow SavedModel\n!wget -qN https://storage.googleapis.com/brain-genomics-public/research/proteins/pfam/models/single_domain_per_sequence_zipped_models/seed_random_32.0/5356760.tar.gz\n# unzip\n!tar xzf 5356760.tar.gz\n# Get the vocabulary for the model, which tells you which output index means which family\n!wget https://storage.googleapis.com/brain-genomics-public/research/proteins/pfam/models/single_domain_per_sequence_zipped_models/trained_model_pfam_32.0_vocab.json\n\n# Find the unzipped path\n!ls *5356760*",
"Load the model into TensorFlow",
"sess = tf.Session()\ngraph = tf.Graph()\n\nwith graph.as_default():\n saved_model = tf.saved_model.load(sess, ['serve'], 'trn-_cnn_random__random_sp_gpu-cnn_for_random_pfam-5356760')",
"Load tensors for class confidence prediction",
"class_confidence_signature = saved_model.signature_def['confidences']\nclass_confidence_signature_tensor_name = class_confidence_signature.outputs['output'].name\n\nsequence_input_tensor_name = saved_model.signature_def['confidences'].inputs['sequence'].name\nsequence_lengths_input_tensor_name = saved_model.signature_def['confidences'].inputs['sequence_length'].name",
"Predict Pfam label for domain",
"# Run inference\nhemoglobin = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHFDLSHGSAQVKGHGKKVADALTNAVAHVDDMPNALSALSDLHAHKLRVDPVNFKLLSHCLLVTLAAHLPAEFTPAVHASLDKFLASVSTVLTSKYR'\nglobin_domain = hemoglobin[6:107] # 0-indexed, right inclusive because of the way slices in python work\n\n# If you want to put in different proteins (other than hemoglobin), you\n# can run this cell multiple times. Simply replace the variable \"hemoglobin\"\n# with your desired protein domain.\n\n# The first run of this cell will be slower; the subsequent runs will be fast.\n# This is because on the first run, the TensorFlow XLA graph is compiled, and\n# then is reused.\nwith graph.as_default():\n confidences_by_class = sess.run(\n class_confidence_signature_tensor_name,\n {\n # Note that this function accepts a batch of sequences which\n # can speed up inference when running on many sequences.\n sequence_input_tensor_name: [residues_to_one_hot(globin_domain)],\n sequence_lengths_input_tensor_name: [len(globin_domain)],\n }\n )\n\nnp.array([residues_to_one_hot(globin_domain)]).shape\n\nconfidences_by_class",
"Map the model's prediction to a Pfam family accession",
"# Load vocab\nwith open('trained_model_pfam_32.0_vocab.json') as f:\n vocab = json.loads(f.read())\n\n# Find what the most likely class is\nnp.argmax(confidences_by_class)\n\nvocab[8505] # PF00042 is family Globin",
"If you want to predict for a bunch of sequences, you can run inference on a batch instead of one-by-one to make it faster",
"hemoglobin = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHFDLSHGSAQVKGHGKKVADALTNAVAHVDDMPNALSALSDLHAHKLRVDPVNFKLLSHCLLVTLAAHLPAEFTPAVHASLDKFLASVSTVLTSKYR'\nglobin_domain = hemoglobin[6:107] # 0-indexed, right inclusive because of the way slices in python work\n\n# Coronavirus spike glycoprotein S2 (PF01601)\ncovid_spike_protein_domain = \"NSIAIPTNFTISVTTEILPVSMTKTSVDCTMYICGDSTECSNLLLQYGSFCTQLNRALTGIAVEQDKNTQEVFAQVKQIYKTPPIKDFGGFNFSQILPDPSKPSKRSFIEDLLFNKVTLADAGFIKQYGDCLGDIAARDLICAQKFNGLTVLPPLLTDEMIAQYTSALLAGTITSGWTFGAGAALQIPFAMQMAYRFNGIGVTQNVLYENQKLIANQFNSAIGKIQDSLSSTASALGKLQDVVNQNAQALNTLVKQLSSNFGAISSVLNDILSRLDKVEAEVQIDRLITGRLQSLQTYVTQQLIRAAEIRASANLAATKMSECVLGQSKRVDFCGKGYHLMSFPQSAPHGVVFLHVTYVPAQEKNFTTAPAICHDGKAHFPREGVFVSNGTHWFVTQRNFYEPQIITTDNTFVSGNCDVVIGIVNNTVYDPLQPELDSFKEELDKYFKNHTSPDVDLGDISGINASVVNIQKEIDRLNEVAKNLNESLIDLQELGKYEQYIKWPWYIWLGFIAGLIAIVMVTIM\"\n\n# Concatenate and pad sequence inputs\none_hot_sequence_inputs = [\n residues_to_one_hot(globin_domain),\n residues_to_one_hot(covid_spike_protein_domain),\n]\n\nmax_len_within_batch = max(len(globin_domain), len(covid_spike_protein_domain))\npadded_sequence_inputs = [pad_one_hot_sequence(s, max_len_within_batch)\n for s in one_hot_sequence_inputs]\n\n# The first run of this cell will be slower; the subsequent runs will be fast.\n# This is because on the first run, the TensorFlow XLA graph is compiled, and\n# then is reused.\nwith graph.as_default():\n confidences_by_class = sess.run(\n class_confidence_signature_tensor_name,\n {\n sequence_input_tensor_name: padded_sequence_inputs,\n sequence_lengths_input_tensor_name: [\n len(globin_domain),\n len(covid_spike_protein_domain)\n ],\n })\n\nvocab[np.argmax(confidences_by_class[0])] # 0th element is for hemoglobin; PF00042 is family Globin\n\nvocab[np.argmax(confidences_by_class[1])] # 1th element is for covid; PF01601 is Coronavirus spike glycoprotein S2",
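Zero-padding every sequence to the longest one in the batch (as `pad_one_hot_sequence` does above) generalizes to any batch size. A minimal sketch, using a hypothetical `pad_batch` helper that is not part of the published code:

```python
import numpy as np

def pad_batch(one_hots):
    """Stack variable-length (len_i, 20) one-hot arrays into a single
    (batch, max_len, 20) tensor, zero-padding shorter sequences at the end."""
    max_len = max(a.shape[0] for a in one_hots)
    return np.stack([
        np.pad(a, [[0, max_len - a.shape[0]], [0, 0]], mode='constant')
        for a in one_hots
    ])

batch = pad_batch([np.ones((3, 20)), np.ones((5, 20))])
print(batch.shape)  # (2, 5, 20)
```

Passing the true lengths alongside (as `sequence_length` in the feed dict) lets the model ignore the padded positions.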
"Compute embedding of domain",
"embedding_signature = saved_model.signature_def['pooled_representation']\nembedding_signature_tensor_name = embedding_signature.outputs['output'].name\n\n# The first run of this cell will be slower; the subsequent runs will be fast.\n# This is because on the first run, the TensorFlow XLA graph is compiled, and\n# then is reused.\nwith graph.as_default():\n embedding = sess.run(\n embedding_signature_tensor_name,\n {\n # Note that this function accepts a batch of sequences which\n # can speed up inference when running on many sequences.\n sequence_input_tensor_name: [residues_to_one_hot(globin_domain)],\n sequence_lengths_input_tensor_name: [len(globin_domain)],\n }\n )\n\n# Shape of embedding is (# seqs in batch, number of features in embedding space)\nembedding.shape"
] |
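Once a pooled embedding has been computed for each domain, the vectors can be compared directly; cosine similarity is a common choice. A minimal pure-Python sketch, using short made-up placeholder vectors rather than real model outputs:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings; a real model embedding has many more dimensions.
emb_globin = [0.2, 0.9, -0.4]
emb_spike = [0.1, -0.3, 0.8]

print(cosine_similarity(emb_globin, emb_globin))  # identical vectors -> 1.0 (up to rounding)
print(cosine_similarity(emb_globin, emb_spike))
```

In practice you would feed the `embedding` rows returned by the session run into a function like this, one row per input sequence.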
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
miykael/nipype_tutorial
|
notebooks/introduction_showcase.ipynb
|
bsd-3-clause
|
[
"Nipype Showcase\nWhat's all the hype about Nipype? Is it really that good? Short answer: Yes!\nLong answer: ... well, let's consider a very simple fMRI preprocessing workflow that just performs:\n1. slice time correction\n2. motion correction\n3. smoothing\nPreparing the preprocessing workflow\nFirst, we need to import the main Nipype tools: Node and Workflow",
"from nipype import Node, Workflow",
"Now, we can import the interfaces that we want to use for the preprocessing.",
"from nipype.interfaces.fsl import SliceTimer, MCFLIRT, Smooth",
"Next, we will put the three interfaces into a node and define the specific input parameters.",
"# Initiate a node to correct for slice wise acquisition\nslicetimer = Node(SliceTimer(index_dir=False,\n interleaved=True,\n time_repetition=2.5),\n name=\"slicetimer\")\n\n# Initiate a node to correct for motion\nmcflirt = Node(MCFLIRT(mean_vol=True,\n save_plots=True),\n name=\"mcflirt\")\n\n# Initiate a node to smooth functional images\nsmooth = Node(Smooth(fwhm=4), name=\"smooth\")",
"After creating the nodes, we can now create the preprocessing workflow.",
"preproc01 = Workflow(name='preproc01', base_dir='.')",
"Now, we can put all the nodes into this preprocessing workflow. We specify the data flow / execution flow of the workflow by connecting the corresponding nodes to each other.",
"preproc01.connect([(slicetimer, mcflirt, [('slice_time_corrected_file', 'in_file')]),\n (mcflirt, smooth, [('out_file', 'in_file')])])",
"To better understand what we did we can write out the workflow graph and visualize it directly in this notebook.",
"preproc01.write_graph(graph2use='orig')\n\n# Visualize graph\nfrom IPython.display import Image\nImage(filename=\"preproc01/graph_detailed.png\")",
"Run the workflow on one functional image\nNow, that we've created a workflow, let's run it on a functional image.\nFor this, we first need to specify the input file of the very first node, i.e. the slicetimer node.",
"slicetimer.inputs.in_file = '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'",
"To show off Nipype's parallelization power, let's run the workflow in parallel, on 5 processors and let's show the execution time:",
"%time preproc01.run('MultiProc', plugin_args={'n_procs': 5})",
"Conclusion\nNice, the whole execution took ~2min. But wait... The parallelization didn't really help.\nThat's true, but only because there was no opportunity to run this workflow in parallel: each node depends on the output of the previous node.\nResults of preproc01\nSo, what did we get? Let's look at the output folder preproc01:",
"!tree preproc01 -I '*js|*json|*pklz|_report|*.dot|*html'",
"Rerunning of a workflow\nNow, for fun. Let's run the workflow again, but let's change the fwhm value of the Gaussian smoothing kernel to 2.",
"smooth.inputs.fwhm = 2",
"And let's run the workflow again.",
"%time preproc01.run('MultiProc', plugin_args={'n_procs': 5})",
"Conclusion\nInteresting, now it only took ~15s to execute the whole workflow again. What happened?\nAs you can see from the log above, Nipype didn't execute the two nodes slicetimer and mcflirt again. That's because their input values didn't change from the last execution. The preproc01 workflow therefore only had to rerun the node smooth.\nRunning a workflow in parallel\nOk, ok... Rerunning a workflow again is faster. That's nice and all, but I want more. You spoke of parallel execution!\nWe saw that the preproc01 workflow takes about ~2min to execute completely. So, if we ran the workflow on five functional images, it should take about ~10min total. This, of course, assumes the execution is done sequentially. Now, let's see how long it takes if we run it in parallel.",
"# First, let's copy/clone 'preproc01'\npreproc02 = preproc01.clone('preproc02')\npreproc03 = preproc01.clone('preproc03')\npreproc04 = preproc01.clone('preproc04')\npreproc05 = preproc01.clone('preproc05')",
"We now have five different preprocessing workflows. If we want to run them in parallel, we can put them all in another workflow.",
"metaflow = Workflow(name='metaflow', base_dir='.')\n\n# Now we can add the five preproc workflows to the bigger metaflow\nmetaflow.add_nodes([preproc01, preproc02, preproc03,\n preproc04, preproc05])",
"Note: We now have a workflow (metaflow), that contains five other workflows (preproc0?), each of them containing three nodes.\nTo better understand this, let's visualize this metaflow.",
"# As before, let's write the graph of the workflow\nmetaflow.write_graph(graph2use='flat')\n\n# And visualize the graph\nfrom IPython.display import Image\nImage(filename=\"metaflow/graph_detailed.png\")",
"Ah... so now we can see that the metaflow has potential for parallelization. So let's put it to the test",
"%time metaflow.run('MultiProc', plugin_args={'n_procs': 5})",
"This time we can see that Nipype uses all available processors.\nAnd if all went well, the total execution time should still be around ~2min.\nThat's why Nipype is so amazing. The days of opening multiple SPMs, FSLs, AFNIs etc. are past!\nResults of metaflow",
"!tree metaflow -I '*js|*json|*pklz|_report|*.dot|*html'"
] |
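Nipype skips slicetimer and mcflirt on the rerun because each node's inputs are hashed and compared against the previous execution. A toy illustration of that content-hash caching idea (this is not Nipype's actual implementation):

```python
import hashlib
import json

cache = {}

def run_node(name, func, **inputs):
    # Key on the node name plus its inputs; reuse the cached result when unchanged.
    key = hashlib.sha256(json.dumps([name, inputs], sort_keys=True).encode()).hexdigest()
    if key not in cache:
        cache[key] = func(**inputs)
    return cache[key]

first = run_node("smooth", lambda fwhm: "smoothed(fwhm=%s)" % fwhm, fwhm=4)
rerun = run_node("smooth", lambda fwhm: "smoothed(fwhm=%s)" % fwhm, fwhm=4)    # cache hit
changed = run_node("smooth", lambda fwhm: "smoothed(fwhm=%s)" % fwhm, fwhm=2)  # input changed -> rerun
print(first, rerun, changed)
```

Changing `fwhm` from 4 to 2 invalidates only the smooth step, which is exactly why the second `preproc01.run` above finished in ~15s.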
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kdestasio/online_brain_intensive
|
nipype_tutorial/notebooks/basic_joinnodes.ipynb
|
gpl-2.0
|
[
"<img src=\"../static/images/joinnode.png\" width=\"240\">\nJoinNode\nA JoinNode has the opposite effect of iterables. Where iterables split up the execution workflow into many different branches, a JoinNode merges them back into one node. For a more detailed explanation, check out JoinNode, synchronize and itersource from the main homepage.\nSimple example\nLet's consider the very simple example depicted at the top of this page:\n```python\nfrom nipype import Node, JoinNode, Workflow\n\n# Specify fake input node A\na = Node(interface=A(), name=\"a\")\n\n# Iterate over fake node B's input 'in_file'\nb = Node(interface=B(), name=\"b\")\nb.iterables = ('in_file', [file1, file2])\n\n# Pass results on to fake node C\nc = Node(interface=C(), name=\"c\")\n\n# Join forked execution workflow in fake node D\nd = JoinNode(interface=D(),\n             joinsource=\"b\",\n             joinfield=\"in_files\",\n             name=\"d\")\n\n# Put everything into a workflow as usual\nworkflow = Workflow(name=\"workflow\")\nworkflow.connect([(a, b, [('subject', 'subject')]),\n                  (b, c, [('out_file', 'in_file')]),\n                  (c, d, [('out_file', 'in_files')]),\n                  ])\n```\nAs you can see, setting up a JoinNode is rather simple. The only differences to a normal Node are the joinsource and the joinfield. joinsource specifies from which node the information to join is coming and the joinfield specifies the input field of the JoinNode where the information to join will be entering the node.\nMore realistic example\nLet's consider another example where we have one node that iterates over 3 different numbers and generates random numbers. Another node joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the Function interface to do something with those numbers, before we spit them out again.",
"from nipype import JoinNode, Node, Workflow\nfrom nipype.interfaces.utility import Function, IdentityInterface\n\ndef get_data_from_id(id):\n \"\"\"Generate a random number based on id\"\"\"\n import numpy as np\n return id + np.random.rand()\n\ndef merge_and_scale_data(data2):\n \"\"\"Scale the input list by 1000\"\"\"\n import numpy as np\n return (np.array(data2) * 1000).tolist()\n\n\nnode1 = Node(Function(input_names=['id'],\n output_names=['data1'],\n function=get_data_from_id),\n name='get_data')\nnode1.iterables = ('id', [1, 2, 3])\n\nnode2 = JoinNode(Function(input_names=['data2'],\n output_names=['data_scaled'],\n function=merge_and_scale_data),\n name='scale_data',\n joinsource=node1,\n joinfield=['data2'])\n\nwf = Workflow(name='testjoin')\nwf.connect(node1, 'data1', node2, 'data2')\neg = wf.run()\n\nwf.write_graph(graph2use='exec')\nfrom IPython.display import Image\nImage(filename='graph_detailed.dot.png')",
"Now, let's look at the input and output of the joinnode:",
"res = [node for node in eg.nodes() if 'scale_data' in node.name][0].result\nres.outputs\n\nres.inputs",
"Extending to multiple nodes\nWe extend the workflow by using three nodes. Note that even in this workflow, the joinsource corresponds to the node containing iterables and the joinfield corresponds to the input port of the JoinNode that aggregates the iterable branches. As before, the graph below shows how the execution process is set up.",
"def get_data_from_id(id):\n import numpy as np\n return id + np.random.rand()\n\ndef scale_data(data2):\n import numpy as np\n return data2\n\ndef replicate(data3, nreps=2):\n return data3 * nreps\n\nnode1 = Node(Function(input_names=['id'],\n output_names=['data1'],\n function=get_data_from_id),\n name='get_data')\nnode1.iterables = ('id', [1, 2, 3])\n\nnode2 = Node(Function(input_names=['data2'],\n output_names=['data_scaled'],\n function=scale_data),\n name='scale_data')\n\nnode3 = JoinNode(Function(input_names=['data3'],\n output_names=['data_repeated'],\n function=replicate),\n name='replicate_data',\n joinsource=node1,\n joinfield=['data3'])\n\nwf = Workflow(name='testjoin')\nwf.connect(node1, 'data1', node2, 'data2')\nwf.connect(node2, 'data_scaled', node3, 'data3')\neg = wf.run()\n\nwf.write_graph(graph2use='exec')\nImage(filename='graph_detailed.dot.png')"
] |
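Stripped of the workflow machinery, the iterables + JoinNode pattern above is a fork (one branch per iterable value) followed by a gather into a single node. A plain-Python sketch with deterministic stand-ins for the Function interfaces:

```python
def get_data_from_id(i):
    # Stand-in for the iterated node: one output per branch (deterministic here).
    return i + 0.5

def merge_and_scale_data(values):
    # Stand-in for the JoinNode: receives every branch output as one list.
    return [v * 1000 for v in values]

branches = [get_data_from_id(i) for i in [1, 2, 3]]  # fork: one branch per id
joined = merge_and_scale_data(branches)              # join: single downstream node
print(joined)  # [1500.0, 2500.0, 3500.0]
```

The `joinfield` plays the role of the `values` parameter here: it is the single input port that receives all branch outputs at once.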
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mlamoureux/PIMS_YRC
|
Using_Python.ipynb
|
mit
|
[
"Using Python in Jupyter\nThis is a typical notebook in Jupyter.\nIt is organized as a series of cells. Each cell could contain text, code, or some raw format (Raw NBConvert). \nYou can select which type of code you want to run. For this notebook, we are using Python 3. \nYou could also choose to use Python 2, Julia, or R. Additional languages can be installed.\nRunning your cells\nYou edit a cell by clicking on it and typing as you wish.\nTo run it (either code, or to format text), you must hit \"Shift-Enter\" on your keyboard.\nUsing Latex and such\nThe text is formatted using something called the Markdown language. This language is useful for simple formatting like making lists, adding bold face or italics, and using basic Latex syntax.\nA list is formed using dashes at the start of each line. Hit \"Shift-return\" to get the formatted version\n- list item one\n- list item two\n- and so on.\nItalics are formed with single asterix, bold with double. Like this and like this.\nLatex works fine, except you don't have access to fancy packages. Use single dollar signs for inline, double for display.\nThis is an integral formula inline $\\sin(x) = \\int_0^x \\cos(t) dt$, while here it is again in display mode:\n$$\\sin(x) = \\int_0^x \\cos(t) dt.$$\nCreating code\nThis is as simple as typing some legal Python code into a cell, and hitting \"Shift-Enter\" to run it. Or use the menu commands. \nPython knows about integers, floating point numbers, even complex numbers like $1+2j$:",
"2+2\n\n2/3\n\n(1+2j)*(2+3j)",
"Python knows about strings, and can manipulate them.",
"\"This is a string\"\n\n\"This is a string\" + \"added to another string\"",
"You can define functions in Python. Notice that it is not fussy about declaring the type of the arguments. (Unlike C or Fortran.)",
"def doubleIt(x):\n return x+x\n\ndoubleIt(2)\n\ndoubleIt(2.0)\n\ndoubleIt(\"Hi there \")",
"You can define for loops, and while loops, using a simple syntax. Notice the colon, the indenting, and the fact the loop definition ends with a blank line. That's all part of the syntax.",
"for i in {1,2,3}:\n print(i)\n\ni = 0 \nwhile i<5:\n i = i+1\n print(i)",
"You can run Python code that you saved on disk as a file. For instance, in this folder we have a text file called manystring.py. The code looks like this:\ns = 'x'\nwhile len(s)<10:\n    print(s)\n    s = s+'x'\nTo run it, use the magic command called %run",
"%run manystring\n",
"If you want to do something more complicated, you will have to load in various packages and functions within them.\nFor instance, to compute a sine function, you need to use Numerical Python, also known as Numpy.",
"from numpy import sin\nsin(.1)",
"You may find it convenient to import everything from numpy; then you have access to lots of useful functions, arrays, etc.",
"from numpy import *\n[sin(.1),cos(.1),tan(.1),arcsin(.1),arccos(.1),arctan(.1)]",
"Namespaces\nActually, for big projects, you should not use the import-star command, as the many function names may clash with other function definitions in your code, or other packages.\nIt is considered safer to import the functions with the name of the package attached to them, or an appropriate abbreviation. Like this:",
"import numpy as np\n[np.sin(.1),np.cos(.1),np.tan(.1),np.arcsin(.1),np.arccos(.1),np.arctan(.1)]",
"Wrap up.\nAt this point, you now know how to access Python through the Jupyter notebook. To do some serious coding, you need to learn more Python. This workshop is not specifically about Python, so we will stop here.\nOther things to explore in the Jupyter Notebook.\nCheck these out:\n- File menu, for saving and downloading copies of your notebook.\n- Edit menu allows you to cut, copy, paste entire cells and groups of cells.\n- View menu, to decrease the clutter on your screen.\n- Insert menu, to add cells to a notebook where you need them.\n- Cell menu, lets you run a bunch of cells at once.\n- Kernel menu, to restart when necessary, or even change programming languages.\n- Help menu, yay!\nYou can also use the notebook as a Beamer or Powerpoint replacement."
] |
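The doubleIt example above works because Python resolves + at run time, so any type that defines addition with itself is accepted. A quick sketch of how far that duck typing goes:

```python
def double_it(x):
    # No declared types: any value that supports + with itself works.
    return x + x

print(double_it(2))        # 4
print(double_it(2.0))      # 4.0
print(double_it("Hi "))    # Hi Hi
print(double_it([1, 2]))   # [1, 2, 1, 2] -- lists concatenate too
```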
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cbare/Etudes
|
python/Performance analysis of mean implementations.ipynb
|
apache-2.0
|
[
"import random\nimport timeit\nfrom collections.abc import Sequence\nimport pandas as pd\n\n%matplotlib inline\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"On a hunch, I made two separate cases in this implementation of mean. I wanted to support any iterable, which the second case does. But if the input is a Sequence that supports len(), we can use the built-in sum function directly.",
"def mean(a):\n if isinstance(a, Sequence):\n return sum(a)/float(len(a))\n else:\n s = n = 0\n for x in a:\n s += x\n n += 1\n return s/float(n)",
"1st case",
"mean([random.random() for i in range(100000)])",
"2nd case",
"mean(random.random() for i in range(100000))",
"But, is it really worth it to have a separate implementation of the mean function for Sequences? Let's try it and see.",
"def mean_loop(a):\n s = n = 0\n for x in a:\n s += x\n n += 1\n return s/float(n)\n\ndef mean_seq(a):\n return sum(a)/float(len(a))\n\nn=1000\nsizes = [1000,2000,3000,5000,10000,20000,50000,70000,100000]\ncases = [('loop', 'mean_loop(a)'),\n ('sum', 'mean_seq(a)'),\n ('if', 'mean(a)')]\ndf = pd.DataFrame(index=sizes, columns=('sizes',)+tuple(key for key,cmd in cases))\ndf.sizes = sizes",
"Now, time a bunch of runs of the mean function on sequences of different sizes. This takes 30 seconds or so.",
"for size in sizes:\n    a = tuple(random.random() for i in range(size))\n    for key, cmd in cases:\n        t = timeit.timeit(cmd, number=n, globals=globals())\n        df.loc[size, key] = t  # DataFrame.set_value was removed in newer pandas\ndf\n\nax = sns.regplot(x='sizes', y='loop', data=df, label='loop')\nax = sns.regplot(x='sizes', y='sum', data=df, label='sum')\nax = sns.regplot(x='sizes', y='if', data=df, label='if')\nplt.ylabel('seconds')\nplt.xlabel('sequence length')\nplt.title('Running time to find mean of a sequence {} times'.format(n))\nplt.legend(loc='upper left')",
"For non-humongous sizes of sequences, both implementations will be practically instantaneous. But, we do see a substantial speedup percentage-wise, so why not get it if we can. 😁"
] |
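As a sanity check for the comparison above, the two implementations must agree wherever both apply, while only the loop version accepts a plain generator:

```python
def mean_loop(a):
    # Single-pass mean: works for any iterable, even one without len().
    s = n = 0
    for x in a:
        s += x
        n += 1
    return s / float(n)

def mean_seq(a):
    # Sequence-only mean: relies on len() being available.
    return sum(a) / float(len(a))

data = [0.5, 1.5, 2.0, 4.0]
print(mean_loop(data), mean_seq(data))      # both give 2.0
print(mean_loop(x * x for x in range(4)))   # 3.5 -- generators only work with the loop
```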
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pylablanche/gcForest
|
gcForest_tuto.ipynb
|
mit
|
[
"<h1>gcForest Algorithm</h1>\n\n<p>The gcForest algorithm was suggested in Zhou and Feng 2017 ( https://arxiv.org/abs/1702.08835 , refer to this paper for technical details) and I provide here a python3 implementation of this algorithm.<br>\nI chose to adopt the scikit-learn syntax for ease of use and hereafter I present how it can be used.</p>",
"from GCForest import gcForest\nfrom sklearn.datasets import load_iris, load_digits\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score",
"<h2>Before starting, a word about sizes.</h2>\n<p>*Note* : I recommend that the reader look at this section with the original paper next to the computer to see what I am talking about.</p>\n<p>The main technical problem in the present gcForest implementation so far is the memory usage when slicing the input data.\nA naive calculation can actually give you an idea of the number and sizes of objects the algorithm will be dealing with.</p>\n<p>Starting with a dataset of $N$ samples of size $[l,L]$ and with $C$ classes, the initial \"size\" is:<br><br>\n$S_{D} = N.l.L$</p>\n<p>**Slicing Step**<br>\nIf my window is of size $[w_l,w_L]$ and the chosen strides are $[s_l,s_L]$ then the number of slices per sample is :<br>\n<br>\n$n_{slices} = \\left(\\frac{l-w_l}{s_l}+1\\right)\\left(\\frac{L-w_L}{s_L}+1\\right)$<br><br>\nObviously the size of a slice is $w_l.w_L$ hence the total size of the sliced data set is :<br><br>\n$S_{sliced} = N.w_l.w_L.\\left(\\frac{l-w_l}{s_l}+1\\right)\\left(\\frac{L-w_L}{s_L}+1\\right)$<br>\nThis is when the memory consumption is at its peak.</p>\n<p>**Class Vector after Multi-Grain Scanning**<br>\nNow all slices are fed to the random forest to generate *class vectors*.\nThe number of class vectors per random forest per window per sample is simply equal to the number of slices given to the random forest $n_{cv}(w) = n_{slices}(w)$.\nHence, if we have $N_{RF}$ random forests per window the size of a class vector is (recall we have $N$ samples and $C$ classes):<br><br>\n$S_{cv}(w) = N.n_{cv}(w).N_{RF}.C$<br><br>\nAnd finally the total size of the Multi-Grain Scanning output will be:<br><br>\n$S_{mgs} = N.\\sum_{w} N_{RF}.C.n_{cv}(w)$\n</p>\n<p>This short calculation is just meant to give you an idea of the data processing during the Multi-Grain Scanning phase. The actual memory consumption depends on the format given (aka float, int, double, etc.) 
and it might be worth looking at it carefully when dealing with large datasets.</p>\n\n<h2>Iris example</h2>\n\n<p>The iris data set is actually not a very good example as the gcForest algorithm is better suited for time series and images where information can be found at different scales in one sample.<br>\nNonetheless it is still an easy way to test the method.</p>",
"# loading the data\niris = load_iris()\nX = iris.data\ny = iris.target\nX_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33)",
"<p>First calling and training the algorithm.\nA specificity here is the presence of the 'shape_1X' keyword to specify the shape of a single sample.\nI have added it as pictures fed to the machinery might not be square.<br>\nObviously it is not very relevant for the iris data set but still, it has to be defined.</p>\n<p>**New in version 0.1.3** : possibility to directly use an int as shape_1X for sequence data.</p>",
"gcf = gcForest(shape_1X=4, window=2, tolerance=0.0)\ngcf.fit(X_tr, y_tr)",
"<p>Now checking the prediction for the test set:<p>",
"pred_X = gcf.predict(X_te)\nprint(pred_X)\n\n# evaluating accuracy\naccuracy = accuracy_score(y_true=y_te, y_pred=pred_X)\nprint('gcForest accuracy : {}'.format(accuracy))",
"<h2>Digits Example</h2>\n<p>A much better example is the digits data set containing images of hand written digits.\nThe scikit data set can be viewed as a mini-MNIST for training purpose.</p>",
"# loading the data\ndigits = load_digits()\nX = digits.data\ny = digits.target\nX_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4)",
"<p> ... training gcForest ... (can take some time...) </p>",
"gcf = gcForest(shape_1X=[8,8], window=[4,6], tolerance=0.0, min_samples_mgs=10, min_samples_cascade=7)\ngcf.fit(X_tr, y_tr)",
"<p> ... and predicting classes ... </p>",
"pred_X = gcf.predict(X_te)\nprint(pred_X)\n\n# evaluating accuracy\naccuracy = accuracy_score(y_true=y_te, y_pred=pred_X)\nprint('gcForest accuracy : {}'.format(accuracy))",
"<h2>Saving Models to Disk</h2>\n<p>You probably don't want to re-train your classifier every day especially if you're using it on large data sets.\nFortunately there is a very easy way to save and load models to disk using ```sklearn.externals.joblib```</p>\n<p>__Saving model:__</p>",
"from sklearn.externals import joblib\njoblib.dump(gcf, 'gcf_model.sav')",
"<p>__Loading model__:</p>",
"gcf = joblib.load('gcf_model.sav')",
"<h2>Using mg-scanning and cascade_forest Separately</h2>\n<p>As the Multi-Grain scanning and the cascade forest modules are quite independent it is possible to use them separately.<br>\nIf a target `y` is given the code automatically uses it for training; otherwise it recalls the last trained Random Forests to slice the data.</p>",
"gcf = gcForest(shape_1X=[8,8], window=5, min_samples_mgs=10, min_samples_cascade=7)\nX_tr_mgs = gcf.mg_scanning(X_tr, y_tr)\n\nX_te_mgs = gcf.mg_scanning(X_te)",
"<p>It is now possible to use the mg_scanning output as input for cascade forests using different parameters. Note that the cascade forest module does not directly return predictions but probability predictions from each Random Forest in the last layer of the cascade. Hence the need to first take the mean of the output and then find the max.</p>",
"import numpy as np\n\ngcf = gcForest(tolerance=0.0, min_samples_mgs=10, min_samples_cascade=7)\n_ = gcf.cascade_forest(X_tr_mgs, y_tr)\n\npred_proba = gcf.cascade_forest(X_te_mgs)\ntmp = np.mean(pred_proba, axis=0)\npreds = np.argmax(tmp, axis=1)\naccuracy_score(y_true=y_te, y_pred=preds)\n\ngcf = gcForest(tolerance=0.0, min_samples_mgs=20, min_samples_cascade=10)\n_ = gcf.cascade_forest(X_tr_mgs, y_tr)\n\npred_proba = gcf.cascade_forest(X_te_mgs)\ntmp = np.mean(pred_proba, axis=0)\npreds = np.argmax(tmp, axis=1)\naccuracy_score(y_true=y_te, y_pred=preds)",
"<h3>Skipping mg_scanning</h3>\n<p>It is also possible to directly use the cascade forest and skip the multi grain scanning step.</p>",
"gcf = gcForest(tolerance=0.0, min_samples_cascade=20)\n_ = gcf.cascade_forest(X_tr, y_tr)\n\npred_proba = gcf.cascade_forest(X_te)\ntmp = np.mean(pred_proba, axis=0)\npreds = np.argmax(tmp, axis=1)\naccuracy_score(y_true=y_te, y_pred=preds)"
] |
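The repeated mean-then-argmax step above (average the per-forest class probabilities, then pick the most probable class per sample) can be sketched without gcForest, using toy probabilities:

```python
# Toy per-forest class-probability outputs: 2 forests x 3 samples x 2 classes.
pred_proba = [
    [[0.8, 0.2], [0.4, 0.6], [0.3, 0.7]],
    [[0.6, 0.4], [0.2, 0.8], [0.4, 0.6]],
]

n_forests = len(pred_proba)
n_samples = len(pred_proba[0])
n_classes = len(pred_proba[0][0])

# Mean over forests (what np.mean(pred_proba, axis=0) does).
mean_proba = [
    [sum(forest[i][c] for forest in pred_proba) / n_forests for c in range(n_classes)]
    for i in range(n_samples)
]

# Argmax over classes per sample (what np.argmax(tmp, axis=1) does).
preds = [max(range(n_classes), key=lambda c: row[c]) for row in mean_proba]
print(preds)  # [0, 1, 1]
```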
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
seg/2016-ml-contest
|
geoLEARN/Submission_4_OVR_RF.ipynb
|
apache-2.0
|
[
"Facies classification using Random Forest\nContest entry by <a href=\\\"https://geolern.github.io/index.html#\\\">geoLEARN</a>:\n<a href=\\\"https://github.com/mablou\\\">Martin Blouin</a>, <a href=\\\"https://github.com/lperozzi\\\">Lorenzo Perozzi</a> and <a href=\\\"https://github.com/Antoine-Cate\\\">Antoine Caté</a> <br>\nin collaboration with <a href=\\\"http://ete.inrs.ca/erwan-gloaguen\\\">Erwan Gloaguen</a>\nOriginal contest notebook by Brendon Hall, Enthought\nIn this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).\nThe dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a Random Forest model to classify facies types. \nExploring the dataset\nFirst, we import and examine the dataset used to train the classifier.",
"###### Importing all used packages\n%matplotlib inline\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nimport seaborn as sns\n\nfrom pandas import set_option\n# set_option(\"display.max_rows\", 10)\npd.options.mode.chained_assignment = None\n\n###### Import packages needed for the make_vars functions\nimport Feature_Engineering as FE\n\n##### import stuff from scikit learn\nfrom sklearn.ensemble import RandomForestClassifier,RandomForestRegressor\nfrom sklearn.model_selection import KFold, cross_val_score,LeavePGroupsOut, LeaveOneGroupOut, cross_val_predict\nfrom sklearn.metrics import confusion_matrix, make_scorer, f1_score, accuracy_score, recall_score, precision_score\nfrom sklearn.multiclass import OneVsOneClassifier,OneVsRestClassifier\n\nfilename = '../facies_vectors.csv'\ntraining_data = pd.read_csv(filename)\ntraining_data.head()\n\ntraining_data.describe()",
"PE imputation: we use a random forest regressor to fill missing PE values",
"X_train = training_data[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']][training_data.PE.notnull()]\ny_train = training_data['PE'][training_data.PE.notnull()]\nX_fit = training_data[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']][training_data.PE.isnull()]\nClr = RandomForestRegressor(n_estimators=100)\nClr.fit(X_train, y_train)\ny_predict = Clr.predict(X_fit)\ntraining_data.loc[training_data.PE.isnull(), 'PE'] = y_predict\ntraining_data.describe()",
"A complete description of the dataset is given in the Original contest notebook by Brendon Hall, Enthought. A total of four measured rock properties and two interpreted geological properties are given as raw predictor variables for the prediction of the \"Facies\" class.\nFeature engineering\nAs stated in our previous submission, we believe that feature engineering has a high potential for increasing classification success. A strategy for building new variables is explained below.\nThe dataset is distributed along a series of drillholes intersecting a stratigraphic sequence. Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, current strength, detrital input, ...). Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to another environment.\nThus, there is a relationship between neighbouring samples, and the distribution of the data along drillholes can be as important as data values for predicting facies.\nA series of new variables (features) are calculated and tested below to help represent the relationship of neighbouring samples and the overall texture of the data along drillholes. 
These variables are:\n\ndetail and approximation coefficients at various levels of two wavelet transforms (using two types of Daubechies wavelets);\nmeasures of the local entropy with variable observation windows;\nmeasures of the local gradient with variable observation windows;\nrolling statistical calculations (i.e., mean, standard deviation, min and max) with variable observation windows;\nratios between marine and non-marine lithofacies with different observation windows;\ndistances from the nearest marine or non-marine occurrence uphole and downhole.\n\nFunctions used to build these variables are located in the Feature Engineering python script.\nAll the data exploration work related to the conception and study of these variables is not presented here.",
"##### cD From wavelet db1\ndwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n levels=[1, 2, 3, 4], wavelet='db1')\n\n##### cA From wavelet db1\ndwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n levels=[1, 2, 3, 4], wavelet='db1')\n\n##### cD From wavelet db3\ndwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n levels=[1, 2, 3, 4], wavelet='db3')\n\n##### cA From wavelet db3\ndwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n levels=[1, 2, 3, 4], wavelet='db3')\n\n##### From entropy\nentropy_df = FE.make_entropy_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n l_foots=[2, 3, 4, 5, 7, 10])\n\n###### From gradient\ngradient_df = FE.make_gradient_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n dx_list=[2, 3, 4, 5, 6, 10, 20])\n\n##### From rolling average\nmoving_av_df = FE.make_moving_av_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n windows=[1, 2, 5, 10, 20])\n\n##### From rolling standard deviation\nmoving_std_df = FE.make_moving_std_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n windows=[3 , 4, 5, 7, 10, 15, 20])\n\n##### From rolling max\nmoving_max_df = FE.make_moving_max_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n windows=[3, 4, 5, 7, 10, 15, 20])\n\n##### From rolling min\nmoving_min_df = FE.make_moving_min_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n windows=[3 , 4, 5, 7, 10, 15, 20])\n\n###### From rolling NM/M ratio\nrolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=training_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200])\n\n###### From distance to NM and M, up 
and down\ndist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=training_data)\ndist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=training_data)\ndist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=training_data)\ndist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=training_data)\n\nlist_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df,\n entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df,\n rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df]\ncombined_df = training_data\nfor var_df in list_df_var:\n temp_df = var_df\n combined_df = pd.concat([combined_df,temp_df],axis=1)\ncombined_df.replace(to_replace=np.nan, value='-1', inplace=True)\nprint (combined_df.shape)\ncombined_df.head(5)",
"Building a prediction model from these variables\nA Random Forest model is built here to test the effect of these new variables on predictive power. Algorithm parameters have been tuned to take into account the non-stationarity of the training and testing sets, using the LeaveOneGroupOut cross-validation strategy. The size of individual tree leaves and nodes has been increased to the maximum possible without significantly increasing the variance, so as to reduce the bias of the prediction.\nBox plots for a series of scores obtained through cross-validation are presented below.\nCreate predictor and target arrays",
"X = combined_df.iloc[:, 4:]\ny = combined_df['Facies']\ngroups = combined_df['Well Name']",
"Estimation of validation scores from this tuning",
"Cl = RandomForestClassifier(n_estimators=100, max_features=0.1, min_samples_leaf=25,\n min_samples_split=50, class_weight='balanced', random_state=42, n_jobs=-1)\n\nOVR = OneVsRestClassifier(Cl,n_jobs=-1)\n\nmethods = [Cl, OVR]\nmethod_list = ['RF submission 3','One vs Rest']\n\nlpgo = LeavePGroupsOut(n_groups=2)\n\nscores = []\n\nfor method in methods:\n \n cv=lpgo.split(X, y, groups)\n validated = cross_val_score(method, X, y, scoring=\"f1_weighted\", cv=cv, n_jobs=-1)\n scores.append(validated)\n \nscores = np.array(scores)\nscores = np.swapaxes(scores, 0, 1)\nscores = pd.DataFrame(data=scores, columns=method_list)\n\nsns.set_style('white')\nfig,ax = plt.subplots(figsize=(8,6))\nsns.boxplot(data=scores,whis=1.5)\nplt.xlabel('scoring parameters')\nplt.ylabel('score')\nplt.title('Classification scores for tuned parameters');",
"Applying the classification model to test data",
"filename = '../validation_data_nofacies.csv'\ntest_data = pd.read_csv(filename)\ntest_data.head(5)\n\n##### cD From wavelet db1\ndwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n levels=[1, 2, 3, 4], wavelet='db1')\n\n##### cA From wavelet db1\ndwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n levels=[1, 2, 3, 4], wavelet='db1')\n\n##### cD From wavelet db3\ndwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n levels=[1, 2, 3, 4], wavelet='db3')\n\n##### cA From wavelet db3\ndwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n levels=[1, 2, 3, 4], wavelet='db3')\n\n##### From entropy\nentropy_df = FE.make_entropy_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n l_foots=[2, 3, 4, 5, 7, 10])\n\n###### From gradient\ngradient_df = FE.make_gradient_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n dx_list=[2, 3, 4, 5, 6, 10, 20])\n\n##### From rolling average\nmoving_av_df = FE.make_moving_av_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n windows=[1, 2, 5, 10, 20])\n\n##### From rolling standard deviation\nmoving_std_df = FE.make_moving_std_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n windows=[3 , 4, 5, 7, 10, 15, 20])\n\n##### From rolling max\nmoving_max_df = FE.make_moving_max_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n windows=[3, 4, 5, 7, 10, 15, 20])\n\n##### From rolling min\nmoving_min_df = FE.make_moving_min_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],\n windows=[3 , 4, 5, 7, 10, 15, 20])\n\n###### From rolling NM/M ratio\nrolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=test_data, windows=[5, 10, 15, 20, 30, 
50, 75, 100, 200])\n\n###### From distance to NM and M, up and down\ndist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=test_data)\ndist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=test_data)\ndist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=test_data)\ndist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=test_data)\n\ncombined_test_df = test_data\nlist_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df,\n entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df,\n rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df]\nfor var_df in list_df_var:\n temp_df = var_df\n combined_test_df = pd.concat([combined_test_df,temp_df],axis=1)\ncombined_test_df.replace(to_replace=np.nan, value='-99999', inplace=True)\n\nX_test = combined_test_df.iloc[:, 3:]\n\nprint (combined_test_df.shape)\ncombined_test_df.head(5)\n\ntest_pred_df = combined_test_df[['Well Name', 'Depth']]\n\nfor i in range(100):\n Cl = RandomForestClassifier(n_estimators=100, max_features=0.1, min_samples_leaf=25,\n min_samples_split=50, class_weight='balanced', n_jobs=-1,random_state=i)\n OVR = OneVsRestClassifier(Cl,n_jobs=-1)\n OVR.fit(X, y)\n y_test = OVR.predict(X_test)\n y_test = pd.DataFrame(y_test, columns=['Predicted Facies #' + str(i)])\n test_pred_df = pd.concat([test_pred_df, y_test], axis=1)\ntest_pred_df.head()",
"Exporting results of 100 predictions",
"test_pred_df.to_pickle('Prediction_submission_4_OVR_RF.pkl')\n\ntest_pred_df.shape"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
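The facies notebook above validates with `LeavePGroupsOut` so that whole wells are held out together rather than mixing samples from the same well across folds. A minimal, self-contained sketch of that strategy on synthetic data (the array shapes, group labels, and forest size here are stand-ins of my own, not the original well logs):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeavePGroupsOut, cross_val_score

# Synthetic stand-in for the well-log features: 3 "wells", 2 classes
rng = np.random.RandomState(0)
X = rng.normal(size=(90, 5))
y = rng.randint(0, 2, size=90)
groups = np.repeat(['well_A', 'well_B', 'well_C'], 30)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
lpgo = LeavePGroupsOut(n_groups=2)  # hold out every pair of wells as a test set

scores = cross_val_score(clf, X, y, groups=groups,
                         scoring='f1_weighted', cv=lpgo)
print(len(scores))  # one score per left-out pair of wells: C(3,2) = 3
```

With 3 groups and `n_groups=2`, the splitter yields C(3,2) = 3 folds, each training on a single well; the notebook's real run uses many more wells and therefore many more folds.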
matousc89/Python-Adaptive-Signal-Processing-Handbook
|
notebooks/padasip_adaptive_filters_basics.ipynb
|
mit
|
[
"Padasip Adaptive Filters Basics - Noise Cancellation, System Identification and Signal Prediction\nThis tutorial explains how to perform various tasks with adaptive filters through several examples.\nLet's start by importing padasip. In the following examples we will also use numpy and matplotlib.",
"from __future__ import print_function\nimport numpy as np\nimport matplotlib.pylab as plt\n\nimport padasip as pa\n\n%matplotlib inline \nplt.style.use('ggplot') # nicer plots\nnp.random.seed(52102) # always use the same random seed to make results comparable\n%config InlineBackend.print_figure_kwargs = {}",
"Construction of Input Vectors (Input Matrix) from a Time Series\nLet us consider a time series $\\textbf{u}$",
"u = np.arange(0,10,1)\nu",
"If the series is only an input of the adaptive filter, the input matrix can be a so-called Delay Embedded Matrix (DEM). The input matrix can then be created simply as",
"x = pa.input_from_history(u, 4)\nx",
"In this matrix, every row represents an input vector for one sample. If you want to extend the input matrix with a bias, then",
"n = 4\nx = pa.input_from_history(u, n, bias=True)\nx",
"The length of the input matrix (number of rows) is\n$N - n + 1$, where $N$ is the length of the time series, $n$ is the desired number of historical values in one row of the input matrix, so:",
"N = len(u)\nn = 4\nN - n + 1",
"is equal to",
"len(x)",
"Noise Cancellation (Example with RLS Filter)\nConsider a sine-like signal $\\textbf{u}$ contaminated with noise $\\textbf{v}$. This results in a measured signal $\\textbf{d}$. The goal is the prediction/reconstruction of signal $\\textbf{u}$ from signal $\\textbf{d}$ alone. For the evaluation of noise cancellation performance we will use the <a href=\"https://en.wikipedia.org/wiki/Mean_squared_error\">MSE</a> (mean-squared-error).",
"# signals creation: u, v, d\nN = 5000\nn = 10\nu = np.sin(np.arange(0, N/10., N/50000.))\nv = np.random.normal(0, 1, N)\nd = u + v\n\n# filtering\nx = pa.input_from_history(d, n)[:-1]\nd = d[n:]\nu = u[n:]\nf = pa.filters.FilterRLS(mu=0.9, n=n)\ny, e, w = f.run(d, x)\n\n# error estimation\nMSE_d = np.dot(u-d, u-d) / float(len(u))\nMSE_y = np.dot(u-y, u-y) / float(len(u))\n\n# results\nplt.figure(figsize=(12.5,6))\nplt.plot(u, \"r:\", linewidth=4, label=\"original\")\nplt.plot(d, \"b\", label=\"noisy, MSE: {}\".format(MSE_d))\nplt.plot(y, \"g\", label=\"filtered, MSE: {}\".format(MSE_y))\nplt.xlim(N-100,N)\nplt.legend()\nplt.tight_layout()\nplt.show()",
"According to this result, the adaptive filter can decrease the MSE caused by noise.\nSystem Identification (Example with NLMS Filter)\nConsider a system output contaminated with noise $\\textbf{v}$\n$d(k) = 2 x_1(k) + 0.1 x_2(k) - 4 x_3(k) + 0.5 x_4(k) + v(k)$\nwhere $x_i(k)$ is the $i$-th channel (column) in the $k$-th sample (row) of the input matrix $\\textbf{X}$ (random numbers in this example) - in this case, the input of the system is the same as the input of the filter. The noise $\\textbf{v}$ is Gaussian white noise with zero mean and a standard deviation of 0.1. Because the scale of the data is small, we can use a high learning rate $\\mu=0.5$",
"# creation of x and d\nN = 700\nx = np.random.random((N, 4))\nv = np.random.normal(0, 1, N) * 0.1\nd = 2*x[:,0] + 0.1*x[:,1] - 4*x[:,2] + 0.5*x[:,3] + v\n\n# identification\nf = pa.filters.FilterNLMS(mu=0.5, n=4)\ny, e, w = f.run(d, x)\n\n# show results\nplt.figure(figsize=(12.5,9))\nplt.subplot(211);plt.title(\"Adaptation\");plt.xlabel(\"Number of iteration [-]\")\nplt.plot(d,\"b\", label=\"d - target\")\nplt.plot(y,\"g\", label=\"y - output\")\nplt.xlim(0, N)\nplt.legend()\n\nplt.subplot(212); plt.title(\"Filter error\"); plt.xlabel(\"Number of iteration [-]\")\nplt.plot(pa.misc.logSE(e),\"r\", label=\"Squared error [dB]\");plt.legend()\nplt.xlim(0, N)\nplt.tight_layout()\nplt.show()\nprint(\"And the resulting coefficients are: {}\".format(w[-1]))",
"Prediction (Example with RLS Filter)\nConsider measured data produced by the following system\n$d(k) = a_1 u(k) + a_2 u(k-1) + a_3 u(k-2) + a_4 u(k-3)$\nFor $d(0)$ we need initial conditions $[u(-3), u(-2), u(-1), u(0)]$. In other words, for $u$ of size $100$ the maximal size of $d$ is $97$.",
"# creation of u, x and d\nN = 100\nu = np.random.random(N)\nd = np.zeros(N)\nfor k in range(3, N):\n d[k] = 2*u[k] + 0.1*u[k-1] - 4*u[k-2] + 0.5*u[k-3]\nd = d[3:]\n\n# identification\nx = pa.input_from_history(u, 4)\nf = pa.filters.FilterRLS(mu=0.1, n=4)\ny, e, w = f.run(d, x)\n\n# show results\nplt.figure(figsize=(12.5,9))\nplt.subplot(211);plt.title(\"Adaptation\");plt.xlabel(\"Number of iteration [-]\")\nplt.plot(d,\"b\", label=\"d - target\")\nplt.plot(y,\"g\", label=\"y - output\");plt.legend()\nplt.subplot(212);plt.title(\"Filter error\");plt.xlabel(\"Number of iteration [-]\")\nplt.plot(pa.misc.logSE(e),\"r\", label=\"Squared error [db]\");plt.legend()\nplt.tight_layout()\nplt.show()",
"From the above figure it is possible to see that the prediction error decreases with new samples. At the end, the error is really low."
] |
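The `pa.input_from_history` call above builds a delay-embedded matrix; the same structure can be reproduced with plain numpy. This is a sketch (the helper name `delay_embed` is mine), useful for checking the $N - n + 1$ row count stated in the notebook:

```python
import numpy as np

def delay_embed(u, n):
    """Stack n consecutive samples of u into each row (oldest sample first)."""
    u = np.asarray(u)
    return np.column_stack([u[i:len(u) - n + 1 + i] for i in range(n)])

u = np.arange(10)
x = delay_embed(u, 4)
print(x.shape)       # (7, 4): N - n + 1 rows, matching the formula above
print(x[0], x[-1])   # [0 1 2 3] ... [6 7 8 9]
```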
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
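What `FilterNLMS` does per sample in the identification example above is the normalized LMS update $w \leftarrow w + \mu \, e \, x / (\epsilon + x^T x)$. A from-scratch sketch on the same kind of synthetic system (the function name and the $\epsilon$ value are my choices, not padasip internals):

```python
import numpy as np

def nlms_identify(X, d, mu=0.5, eps=1e-6):
    """Identify d(k) ~ w . x(k) with the normalized LMS update."""
    N, n = X.shape
    w = np.zeros(n)
    for k in range(N):
        x = X[k]
        e = d[k] - w @ x                    # a priori error
        w = w + mu * e * x / (eps + x @ x)  # step normalized by input power
    return w

rng = np.random.RandomState(1)
X = rng.uniform(size=(700, 4))
d = X @ np.array([2.0, 0.1, -4.0, 0.5]) + 0.1 * rng.normal(size=700)
w = nlms_identify(X, d)
print(w)  # approximately [2, 0.1, -4, 0.5]
```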
par2/lamana
|
docs/gettingstarted.ipynb
|
bsd-3-clause
|
[
"# Hidden TimeStamp\nimport time, datetime\nst = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')\nprint('Last Run: {}'.format(st))\n\n# Run this cell only once\nfrom IPython.display import clear_output\nimport pandas as pd\n\n%cd ../\nclear_output()\n\npd.set_option('display.max_columns', 9)\npd.set_option('precision', 4)",
"Getting Started\nIf you have already looked through the quick view section and want to make one of those plots, this guide will assist you.\nJupyter and LamAna (recommended)\nLamAna was developed with visualization in mind. \nLamAna works best with the Jupyter Notebook 4.0+. The Jupyter Notebook is a powerful, browser-based analytical tool that integrates with many computer languages and runs computations in separate cells. We use Jupyter and Python to make plots using a matplotlib backend.\nIf you installed Anaconda, you have Jupyter installed already. Simply run the following, open a new notebook and import LamAna:\n> jupyter notebook\n\n\nLamAna from the Command Line\nIf visualization is not important to you, you can alternatively run calculations and export data from the command line. You can invoke either python or ipython for an interactive prompt.\n\nWe will now explore how the user can input data and generate plots using the distributions module.\nUser Setup\nFirst we must input loading parameters and material properties. Second, we must invoke a selected laminate theory. The former requires knowledge of the specimen dimensions, the material properties and the loading configuration. For illustration, a schematic of the laminate loading parameters is provided below.\n\nA table is provided defining the illustrated parameters. These loading parameters are coded in a Python dictionary called load_params. \n| Parameter | Units (SI) | Definition |\n| :-------: | :--------: | :--------- |\n| P | N | applied load |\n| R | m | specimen radius | \n| a | m | support radius |\n| b | m | piston radius |\n| r | m | radial distance from central loading |\n| p | - | graphical points or DataFrame rows per layer |\nUser-Defined Parameters\nSample code is provided for setting up geometric dimensions, loading parameters and material properties.",
"# SETUP -----------------------------------------------------------------------\n\nimport lamana as la\n\n# For plotting in Jupyter\n%matplotlib inline \n\n# Build dicts for loading parameters and material properties \nload_params = {\n 'P_a': 1, # applied load \n 'R': 12e-3, # specimen radius\n 'a': 7.5e-3, # support radius \n 'p': 4, # points/layer\n 'r': 2e-4, # radial distance from center loading\n}\n\nmat_props = {\n 'HA': [5.2e10, 0.25], # modulus, Poissions ratio\n 'PSu': [2.7e9, 0.33], \n}\n\n# Build a list of geometry strings.\n# Layer convention: 'outer - [{inner...-...}_i] - middle'\ngeos1 = ['400-400-400', '400-200-800', '400-350-500'] # eq. total thickness\ngeos2 = ['400-[400]-400', '400-[200,100]-800'] # eq. outer thickness \n#------------------------------------------------------------------------------",
"Generate Data in 3 Lines\nWith the loading and material information, we can make stress distribution plots to define (reusable) test cases by implementing 3 simple steps. \n\nInstantiate a Feature object with loading and material parameters. This makes a user Case object. \napply() a laminate theory model\nplot() the Case\n\nWithin seconds, you can build a case and simultaneously plot stress distributions for an indefinite number of laminates that vary in composition and dimension.",
"case1 = la.distributions.Case(load_params, mat_props) # instantiate\ncase1.apply(geos1, model='Wilson_LT') # apply \ncase1.plot() # plot ",
"Other Attributes\nA case stores all of the laminate data for a particular set of parameters in two forms: a Python dictionary and a pandas DataFrame (see the tutorial for details). Once a case is built, there are several convenient built-in attributes for accessing this data for further analysis and plotting it as matplotlib plots. These tools are powerful and open source.\n```python\n# Case Attributes\ncase.geometries # geometry object\ncase.total # total laminate thickness (all) \ncase.inner # layer thickness (all) \ncase.total_inner # total layer type (all) \ncase.total_inner[0] # slicing\ncase.total_inner_i # total inner layers\ncase1.snapshot # list of all geometry stacks (unique layers)\ncase1.frames # list all DataFrames (all layers)\n```\nExtensible\nLamAna is extensible. Users can define custom or modified models based on laminate theory and apply these models to cases (see the Theories section for more details).\nWe can perform separate analyses by building different cases and applying different models (default model: \"Wilson_LT\" for circular disks in biaxial flexure).\n```python\n# Classical Laminate Theory\ncase2 = la.distributions.Case(load_params, mat_props) # instantiate \ncase2.apply(geos2, model='Classic_LT') # apply model\ncase2.plot()\n\n# Custom Biaxial Flexure Model\ncase3 = la.distributions.Case(load_params, mat_props) # instantiate \ncase3.apply(geos2, model='Wilson_LT') # custom model\ncase3.plot()\n```"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
simpleblob/ml_algorithms_stepbystep
|
algo_example_NN_perceptron.ipynb
|
mit
|
[
"import pandas as pd\nimport numpy as np\nfrom random import randint\nfrom pandas import Series,DataFrame\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport math\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.rcParams['font.size'] = 14",
"Make a random dataset.\nSource: http://scikit-learn.org/stable/auto_examples/datasets/plot_random_dataset.html",
"from sklearn.datasets import make_classification\n\nplt.title(\"Two informative features, one cluster per class\", fontsize='small')\nX, Y = make_classification(n_samples=200, n_features=2, n_redundant=0, n_informative=2,\n n_clusters_per_class=1, class_sep=0.8)\nplt.scatter(X[:, 0], X[:, 1], marker='o', c=Y)",
"Single Layer Perceptron (feed-forward NN, no backpropagation)\nSource: https://en.wikipedia.org/wiki/Perceptron\nBinary classification where the predicted value is\n$$ \nf(x) =\n\\begin{cases}\n1 & \\text{if } w \\cdot x + b \\gt 0 \\\\\n0 & \\text{otherwise}\n\\end{cases} \n$$\nLater we will use the bias (\"b\") as weight_0, where its input is always activated.\nThe resulting effect is that the bias will shift the decision boundary, whereas the other weights will change its shape. Here's an example of more complicated boundary shapes.\nUpdate the weights \"w\" by iterating over each input vector $x_i$ (with $j$ indexing the weights):\n$$ w_j(t+1) = w_j(t) + (y_i - f(x_i)) \\cdot x_{i,j} $$\nThe loss function is a simple mean of absolute errors\n$$ L = \\dfrac{1}{m}\\sum_{i=1}^m \\bigl|\\;f(x_i) - y_i\\;\\bigr| $$",
"def f_func(w,x):\n result = w.dot(x.transpose())\n return np.where(result>0,1,0)\n\ndef loss_func(f,y):\n return np.average(np.abs(f - y))\n\n#initialize everything.\nw = np.array([0.01 , 0.01, 0.01]) # including bias as weight (w0 in the first column)\nXb = np.insert(X,0,1,axis=1) #add bias input, always activated\n\n#run till convergence\ny_pred = f_func(w,Xb)\ncurr_loss = loss_func(y_pred,Y)\nloss = []\nloss.append(curr_loss)\n\nfor i in range(0,100):\n #iterate over each j input vector\n for j in range(0,Y.shape[0]): \n y_pred[j] = f_func(w,Xb[j])\n w = w + (Y[j] - y_pred[j])*Xb[j]\n #evaluate aggregate loss function \n y_pred = f_func(w,Xb)\n curr_loss = loss_func(y_pred,Y)\n loss.append(curr_loss)\n #print 'curr={},prev={},diff={}'.format(curr_loss,loss[-2],loss[-2]-curr_loss)\n if (i>10) and ((loss[-2] - curr_loss) < 10**-5): #stopping criterion\n print 'stop at {}'.format(i)\n break\n\nprint w\nplt.figure()\nplt.xlabel('no. of run')\nplt.ylabel('loss function')\nsns.tsplot(loss)\n\nfrom sklearn.metrics import confusion_matrix\ncm_mat = confusion_matrix(Y,y_pred)\nprint cm_mat.T\ndf_temp = pd.DataFrame(cm_mat.flatten()[np.newaxis].T,columns = ['values'])\nplt.figure(figsize = (6,4),dpi=600)\nsns.heatmap(cm_mat.T, cbar=True ,annot=True, fmt=',.0f')\nplt.xlabel('Truth')\nplt.ylabel('Predicted')\n\n#plot decision boundary\n\nplt.scatter(X[:, 0], X[:, 1], marker='o', c=Y)\n\nx_line = np.array([np.min(X[:, 0]),np.max(X[:, 0])]) #this is \"x0\"\ny_line = -(w[0] + w[1]*x_line)/w[2] #plot y_line based on where it makes \"x1\" becomes zero\nplt.plot(x_line,y_line)\nplt.show()\nprint y_line\n\nf1 = np.matrix([1,2,3])\ny1 = np.matrix([5,5.5,6.0])\nprint loss_func(f1,y1)\nprint (4+3.5+3)/3\n\nY.shape[0]\n\nXb[1]"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
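The hand-rolled perceptron above can be cross-checked against scikit-learn's `Perceptron`, which implements the same update rule and exposes the bias and weights separately. A short sketch (the generator parameters mirror the notebook's; the random seeds and stopping settings are my own):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron

# Same kind of 2-feature toy problem as above
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_informative=2, n_clusters_per_class=1,
                           class_sep=0.8, random_state=0)

clf = Perceptron(max_iter=100, tol=1e-5, random_state=0)
clf.fit(X, y)
print(clf.intercept_, clf.coef_)  # bias and weights, i.e. w0 and (w1, w2) above
print(clf.score(X, y))            # training accuracy
```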
MiroK/seminar-julia
|
praha17/python_examples.ipynb
|
mit
|
[
"Comparing Python and Julia\nThis is a brute-force attempt at solving Project Euler problem 14 with Python. The implementation is such that it is one-to-one with the corresponding Julia code.",
"# Collatz\ndef collatz_chain(n):\n 'Compute the Collatz chain for number n.'\n k = 1\n while n > 1:\n n = 3*n+1 if (n % 2) else n >> 1\n k += 1\n # print n\n return k\n\ndef solve_euler(stop):\n 'Which of the number [1, stop) has the longest Collatz chain.'\n n, N, N_max = 1, 0, 0 \n while n < stop:\n value = collatz_chain(n)\n if value > N_max:\n N = n\n N_max = value\n n += 1\n return N, N_max",
"The Python code here is slow, so we would rather not execute it multiple times as the %timeit magic does.",
"import time\n\nN = 1000000\nt0 = time.time()\nans = solve_euler(N)\nt1 = time.time() - t0\nans, t1",
"Next we consider the problem of drawing a Julia fractal. In Python we try to implement a rather clever, heavily vectorized algorithm",
"import numpy as np\n\n# Adopted from https://thesamovar.wordpress.com/2009/03/22/fast-fractals-with-python-and-numpy/\ndef julia(x, y, c):\n X, Y = np.meshgrid(x, y)\n Z = X + complex(0, 1)*Y\n del X, Y\n \n C = c*np.ones(Z.shape, dtype=complex)\n img = 80*np.ones(C.shape, dtype=int)\n # We will shrink Z, C inside the loop if certain point is found unbounded\n ix, iy = np.mgrid[0:Z.shape[0], 0:Z.shape[1]]\n Z, C, ix, iy = map(lambda mat: mat.flatten(), (Z, C, ix, iy))\n\n for i in xrange(80):\n if not len(Z): break\n np.multiply(Z, Z, Z) # z**2 + c\n np.add(Z, C, Z)\n rem = abs(Z) > 2.0 # Unbounded - definite color\n img[ix[rem], iy[rem]] = i + 1\n rem = ~rem # Bounded - keep for next round\n Z = Z[rem] # Update variables for next round\n ix, iy = ix[rem], iy[rem]\n C = C[rem]\n return img\n\ncs = (complex(-0.06, 0.67), complex(0.279, 0), complex(-0.4, 0.6), complex(0.285, 0.01))\n\nx = np.arange(-1.5, 1.5, 0.002)\ny = np.arange(1, -1, -0.002)\n\nJs = []\n# Evaluate fractal generation\nt0 = time.time()\nfor c in cs:\n Js.append(julia(x, y, c))\nt1 = time.time() - t0\nprint 'Generated in %.4f s' % t1\nprint 'Image size %d x %d' % Js[0].shape\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfor J in Js:\n plt.figure(figsize=(12, 8))\n plt.imshow(J, cmap=\"viridis\", extent=[-1.5, 1.5, -1, 1])\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
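The brute-force Euler solver above recomputes every chain from scratch; memoizing chain lengths is the usual speedup, since chains constantly revisit already-seen values. A Python 3 sketch (the caching strategy and the 100000 bound are my own, not from the notebook, which uses 1000000):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def collatz_len(n):
    """Number of terms in the Collatz chain starting at n (counting n and 1)."""
    if n == 1:
        return 1
    nxt = 3 * n + 1 if n % 2 else n // 2
    return 1 + collatz_len(nxt)

# Which number below 100000 has the longest chain?
best = max(range(1, 100000), key=collatz_len)
print(best, collatz_len(best))
```

The recursion depth is bounded by the longest chain encountered (a few hundred terms here), so no recursion-limit adjustment is needed at this scale.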
dafrie/lstm-load-forecasting
|
notebooks/1_entsoe_forecast_only.ipynb
|
mit
|
[
"Model Category 1: Using the ENTSO-E forecast only\nThe first model category will just use the currently available ENTSO-E forecast and try to create a better forecast in terms of mean absolute error.\nModel category specific configuration\nThese parameters are model category specific",
"# Model category name used throughout the subsequent analysis\nmodel_cat_id = \"01\"\n\n# Which features from the dataset should be loaded:\n# ['all', 'actual', 'entsoe', 'weather_t', 'weather_i', 'holiday', 'weekday', 'hour', 'month']\nfeatures = ['actual', 'entsoe']\n\n# LSTM Layer configuration\n# ========================\n# Stateful True or false\nlayer_conf = [ True, True, True ]\n# Number of neurons per layer\ncells = [[ 5, 10, 20, 30, 50, 75, 100, 125, 150 ], [0, 10, 20, 50], [0, 10, 15, 20]]\n# Regularization per layer\ndropout = [0, 0.1, 0.2]\n# Size of how many samples are used for one forward/backward pass\nbatch_size = [8]\n# In a sense this is the output neuron dimension, or how many timesteps the neuron should output. Currently not implemented, defaults to 1.\ntimesteps = [1]",
"Module imports",
"import os\nimport sys\nimport math\nimport itertools\nimport datetime as dt\nimport pytz\nimport time as t\nimport numpy as np\nimport pandas as pd\nfrom pandas import read_csv\nfrom pandas import datetime\nfrom numpy import newaxis\n\nimport matplotlib as mpl\n\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\nfrom statsmodels.tsa import stattools\nfrom tabulate import tabulate\n\nimport math\nimport keras as keras\nfrom keras import backend as K\nfrom keras.models import Sequential\nfrom keras.layers import Activation, Dense, Dropout, LSTM\nfrom keras.callbacks import TensorBoard\nfrom keras.utils import np_utils\nfrom keras.models import load_model\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics import mean_squared_error, mean_absolute_error\n\nfrom IPython.display import HTML\nfrom IPython.display import display\n%matplotlib notebook\nmpl.rcParams['figure.figsize'] = (9,5)\n\n# Import custom module functions\nmodule_path = os.path.abspath(os.path.join('../'))\nif module_path not in sys.path:\n sys.path.append(module_path)\n\nfrom lstm_load_forecasting import data, lstm",
"Overall configuration\nThese parameters are later used, but shouldn't have to change between different model categories (model 1-5)",
"# Directory with dataset\npath = os.path.join(os.path.abspath(''), '../data/fulldataset.csv')\n\n# Splitdate for train and test data. As the TBATS and ARIMA benchmark needs 2 full cycle of all seasonality, needs to be after jan 01. \nloc_tz = pytz.timezone('Europe/Zurich')\nsplit_date = loc_tz.localize(dt.datetime(2017,2,1,0,0,0,0))\n\n# Validation split percentage\nvalidation_split = 0.2\n# How many epochs in total\nepochs = 30\n# Set verbosity level. 0 for only per model, 1 for progress bar...\nverbose = 0\n\n# Dataframe containing the relevant data from training of all models\nresults = pd.DataFrame(columns=['model_name', 'config', 'dropout',\n 'train_loss', 'train_rmse', 'train_mae', 'train_mape', \n 'valid_loss', 'valid_rmse', 'valid_mae', 'valid_mape', \n 'test_rmse', 'test_mae', 'test_mape',\n 'epochs', 'batch_train', 'input_shape',\n 'total_time', 'time_step', 'splits'\n ])\n# Early stopping parameters\nearly_stopping = True\nmin_delta = 0.006\npatience = 2",
"Preparation and model generation\nNecessary preliminary steps and then the generation of all possible models based on the settings at the top of this notebook.",
"# Generate output folders and files\nres_dir = '../results/notebook_' + model_cat_id + '/'\nplot_dir = '../plots/notebook_' + model_cat_id + '/'\nmodel_dir = '../models/notebook_' + model_cat_id + '/'\nos.makedirs(res_dir, exist_ok=True)\nos.makedirs(model_dir, exist_ok=True)\noutput_table = res_dir + model_cat_id + '_results_' + t.strftime(\"%Y%m%d\") + '.csv'\ntest_output_table = res_dir + model_cat_id + '_test_results' + t.strftime(\"%Y%m%d\") + '.csv'\n\n# Generate model combinations\nmodels = []\nmodels = lstm.generate_combinations(\n model_name=model_cat_id + '_', layer_conf=layer_conf, cells=cells, dropout=dropout, \n batch_size=batch_size, timesteps=[1])",
"Loading the data:",
"# Load data and prepare for standardization\ndf = data.load_dataset(path=path, modules=features)\ndf_scaled = df.copy()\ndf_scaled = df_scaled.dropna()\n\n# Get all float type columns and standardize them\nfloats = [key for key in dict(df_scaled.dtypes) if dict(df_scaled.dtypes)[key] in ['float64']]\nscaler = StandardScaler()\nscaled_columns = scaler.fit_transform(df_scaled[floats])\ndf_scaled[floats] = scaled_columns\n\n# Split in train and test dataset\ndf_train = df_scaled.loc[(df_scaled.index < split_date )].copy()\ndf_test = df_scaled.loc[df_scaled.index >= split_date].copy()\n\n# Split in features and label data\ny_train = df_train['actual'].copy()\nX_train = df_train.drop('actual', 1).copy()\ny_test = df_test['actual'].copy()\nX_test = df_test.drop('actual', 1).copy()",
"Running through all generated models\nNote: Depending on the above settings, this can take a very long time!",
"start_time = t.time()\nfor idx, m in enumerate(models):\n stopper = t.time()\n print('========================= Model {}/{} ========================='.format(idx+1, len(models)))\n print(tabulate([['Starting with model', m['name']], ['Starting time', datetime.fromtimestamp(stopper)]],\n tablefmt=\"jira\", numalign=\"right\", floatfmt=\".3f\"))\n try:\n # Creating the Keras Model\n model = lstm.create_model(layers=m['layers'], sample_size=X_train.shape[0], batch_size=m['batch_size'], \n timesteps=m['timesteps'], features=X_train.shape[1])\n # Training...\n history = lstm.train_model(model=model, mode='fit', y=y_train, X=X_train, \n batch_size=m['batch_size'], timesteps=m['timesteps'], epochs=epochs, \n rearrange=False, validation_split=validation_split, verbose=verbose, \n early_stopping=early_stopping, min_delta=min_delta, patience=patience)\n\n # Write results\n min_loss = np.min(history.history['val_loss'])\n min_idx = np.argmin(history.history['val_loss'])\n min_epoch = min_idx + 1\n \n if verbose > 0:\n print('______________________________________________________________________')\n print(tabulate([['Minimum validation loss at epoch', min_epoch, 'Time: {}'.format(t.time()-stopper)],\n ['Training loss & MAE', history.history['loss'][min_idx], history.history['mean_absolute_error'][min_idx] ], \n ['Validation loss & mae', history.history['val_loss'][min_idx], history.history['val_mean_absolute_error'][min_idx] ],\n ], tablefmt=\"jira\", numalign=\"right\", floatfmt=\".3f\"))\n print('______________________________________________________________________')\n \n \n result = [{'model_name': m['name'], 'config': m, 'train_loss': history.history['loss'][min_idx], 'train_rmse': 0,\n 'train_mae': history.history['mean_absolute_error'][min_idx], 'train_mape': 0,\n 'valid_loss': history.history['val_loss'][min_idx], 'valid_rmse': 0, \n 'valid_mae': history.history['val_mean_absolute_error'][min_idx],'valid_mape': 0, \n 'test_rmse': 0, 'test_mae': 0, 'test_mape': 0, 
'epochs': '{}/{}'.format(min_epoch, epochs), 'batch_train':m['batch_size'],\n 'input_shape':(X_train.shape[0], timesteps, X_train.shape[1]), 'total_time':t.time()-stopper, \n 'time_step':0, 'splits':str(split_date), 'dropout': m['layers'][0]['dropout']\n }]\n results = results.append(result, ignore_index=True)\n \n # Saving the model and weights\n model.save(model_dir + m['name'] + '.h5')\n \n # Write results to csv\n results.to_csv(output_table, sep=';')\n \n #if not os.path.isfile(output_table):\n #results.to_csv(output_table, sep=';')\n #else: # else it exists so append without writing the header\n # results.to_csv(output_table,mode = 'a',header=False, sep=';')\n \n K.clear_session()\n import tensorflow as tf\n tf.reset_default_graph()\n \n # Shouldn't catch all errors, but for now...\n except BaseException as e:\n print('=============== ERROR {}/{} ============='.format(idx+1, len(models)))\n print(tabulate([['Model:', m['name']], ['Config:', m]], tablefmt=\"jira\", numalign=\"right\", floatfmt=\".3f\"))\n print('Error: {}'.format(e))\n result = [{'model_name': m['name'], 'config': m, 'train_loss': str(e)}]\n results = results.append(result, ignore_index=True)\n results.to_csv(output_table,sep=';')\n continue\n ",
"Model selection based on the validation MAE\nSelect the top 5 models based on the Mean Absolute Error in the validation data:\nhttp://scikit-learn.org/stable/modules/model_evaluation.html#mean-absolute-error",
"# Number of the selected top models \nselection = 5\n# If run in the same instance not necessary. If run on the same day, then just use output_table\nresults_fn = res_dir + model_cat_id + '_results_' + '20170616' + '.csv'\n\nresults_csv = pd.read_csv(results_fn, delimiter=';')\ntop_models = results_csv.nsmallest(selection, 'valid_mae')",
"Evaluate top 5 models",
"# Init test results table\ntest_results = pd.DataFrame(columns=['Model name', 'Mean absolute error', 'Mean squared error'])\n\n# Init empty predictions\npredictions = {}\n\n# Loop through models\nfor index, row in top_models.iterrows():\n filename = model_dir + row['model_name'] + '.h5'\n model = load_model(filename)\n batch_size = int(row['batch_train'])\n \n # Calculate scores\n loss, mae = lstm.evaluate_model(model=model, X=X_test, y=y_test, batch_size=batch_size, timesteps=1, verbose=verbose)\n \n # Store results\n result = [{'Model name': row['model_name'], \n 'Mean squared error': loss, 'Mean absolute error': mae\n }]\n test_results = test_results.append(result, ignore_index=True)\n \n # Generate predictions\n model.reset_states()\n model_predictions = lstm.get_predictions(model=model, X=X_test, batch_size=batch_size, timesteps=timesteps[0], verbose=verbose)\n \n # Save predictions\n predictions[row['model_name']] = model_predictions\n \n K.clear_session()\n import tensorflow as tf\n tf.reset_default_graph()\n \n\ntest_results = test_results.sort_values('Mean absolute error', ascending=True)\ntest_results = test_results.set_index(['Model name'])\n\nif not os.path.isfile(test_output_table):\n test_results.to_csv(test_output_table, sep=';')\nelse: # else it exists so append without writing the header\n test_results.to_csv(test_output_table,mode = 'a',header=False, sep=';')\n\nprint('Test dataset performance of the best {} (out of {} tested models):'.format(min(selection, len(models)), len(models)))\nprint(tabulate(test_results, headers='keys', tablefmt=\"grid\", numalign=\"right\", floatfmt=\".3f\"))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
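The model-selection step in the cells above ranks validated models with pandas' `nsmallest`. A minimal, self-contained sketch of that idiom, using a hypothetical results table (the column names `model_name` and `valid_mae` mirror the notebook; the values are invented):

```python
import pandas as pd

# Hypothetical validation results, standing in for the notebook's results CSV
results = pd.DataFrame({
    'model_name': ['lstm_a', 'lstm_b', 'lstm_c', 'lstm_d'],
    'valid_mae': [0.42, 0.17, 0.35, 0.21],
})

# nsmallest keeps the rows with the lowest validation MAE,
# already sorted ascending -- the same call the notebook uses
top_models = results.nsmallest(2, 'valid_mae')
print(top_models['model_name'].tolist())  # → ['lstm_b', 'lstm_d']
```

Because `nsmallest` returns the rows already sorted by the target column, the later `sort_values('Mean absolute error')` on the test results serves the same purpose for the final table.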
DBWangGroupUNSW/COMP9318
|
L7 - Multinomial and Bernoulli Naive Bayes Classifiers.ipynb
|
mit
|
[
"Predict Spam SMS using Naive Bayes Classifier\nImport Modules",
"import pandas as pd\nimport numpy as np\nfrom sklearn.naive_bayes import MultinomialNB,BernoulliNB\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.cross_validation import KFold\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"Data Preprocessing\nWe use the SMS Spam Collection Data Set from UCI, which contains 5574 SMS messages, 747 of which are spam and the rest ham.",
"data = pd.read_csv('./asset/SMSSpamCollection.txt', sep='\\t')\npd.set_option(\"display.max_colwidth\",999)\ndata.head()",
"We use the count of each token (word) in an SMS as its features. \nFirst we convert each SMS into features.",
"def get_freq_of_tokens(sms):\n tokens = {}\n for token in sms.split(' '):\n if token not in tokens:\n tokens[token] = 1\n else:\n tokens[token] += 1\n return tokens\n\nprint(get_freq_of_tokens(data.iloc[0].text))\n\nfeatures_and_labels = []\nfor index in range(len(data)):\n features_and_labels.append((get_freq_of_tokens(data.iloc[index].text), data.iloc[index].category))",
"We then transform the features into a sparse matrix.",
"encoder = LabelEncoder()\nvectorizer = DictVectorizer(dtype=float, sparse=True)\nX, y = list(zip(*features_and_labels))\nX = vectorizer.fit_transform(X)\n # print(X)\n # print(vectorizer.feature_names_)\ny = encoder.fit_transform(y)\n\nX",
"Use the first SMS (i.e., X[0]) to show how to get the feature names and values.",
"arr = X[0].toarray()\nfor i in range(len(arr[0])):\n if arr[0][i] > 0:\n print('{}:{}'.format(vectorizer.feature_names_[i], arr[0][i]))",
"Build Multinomial Naive Bayes Classifier\nalpha is the additive (Laplace/Lidstone) smoothing parameter. Its default value is 1.0 -- we will see that this works pretty well for our problem later.",
"nb = MultinomialNB(alpha=1)\nnb.fit(X, y)\nnb.score(X, y)",
"Let's use cross validation to explore how alpha affects the performance of the classifier",
"n_folds = 10\nkf = KFold(n=X.shape[0], n_folds=n_folds, shuffle=True, random_state=42)\n\ndef test_Multinomial_NB(train_X, train_y, test_X, test_y, alpha=1, debug = False):\n nb = MultinomialNB(alpha=alpha)\n nb.fit(train_X, train_y)\n train_error = nb.score(train_X, train_y)\n test_error = nb.score(test_X, test_y)\n if debug:\n print('training error:\\t{}'.format(train_error))\n print('testing error:\\t{}'.format(test_error))\n return train_error, test_error\n\ndef cv_MultinomialNB(alpha = 1.0):\n train_error_total = 0\n test_error_total = 0\n for train, test in kf:\n train_X = X[train]\n test_X = X[test]\n train_y = y[train]\n test_y = y[test]\n train_error, test_error = test_Multinomial_NB(train_X, train_y, test_X, test_y, alpha)\n train_error_total += train_error\n test_error_total += test_error\n return train_error_total/n_folds, test_error_total/n_folds\n# print('===================')\n# print('avg. training error:\\t{}'.format(train_error_total/n_folds))\n# print('avg. testing error:\\t{}'.format(test_error_total/n_folds))\n\ndef cv_plot_MultinomialNB():\n cv_res = []\n rng = sorted([0.5] + list(range(0, 10)))\n for i in rng:\n train_error, test_error = cv_MultinomialNB(i)\n cv_res.append([i, train_error, test_error])\n cv_res_arr = np.array(cv_res)\n plt.figure(figsize=(16,9)) \n plt.title('Error vs. alpha') \n plot_train, = plt.plot(cv_res_arr[:,0], cv_res_arr[:,1], label='training')\n plot_test, = plt.plot(cv_res_arr[:,0], cv_res_arr[:,2], label='testing')\n plt.legend(handles=[plot_train, plot_test])\n plt.ylim((min(min(cv_res_arr[:,1]), min(cv_res_arr[:,2])) - 0.01, max(max(cv_res_arr[:,1]), max(cv_res_arr[:,2]))+0.01))\n plt.xticks(rng) \n\ncv_plot_MultinomialNB()",
"Bernoulli Naive Bayes Classifier\nNote that since we have the count matrix for words in the vocabulary, we can binarize it to be used in a Bernoulli NB classifier.",
"nb = BernoulliNB(alpha=1, binarize=0.0) # every value >0.0 will be binarized to 1\nnb.fit(X, y)\nnb.score(X, y)\n\ndef test_Bernoulli_NB(train_X, train_y, test_X, test_y, alpha=1, debug = False):\n nb = BernoulliNB(alpha=alpha)\n nb.fit(train_X, train_y)\n train_error = nb.score(train_X, train_y)\n test_error = nb.score(test_X, test_y)\n if debug:\n print('training error:\\t{}'.format(train_error))\n print('testing error:\\t{}'.format(test_error))\n return train_error, test_error\n\ndef cv_BernoulliNB(alpha = 1.0):\n train_error_total = 0\n test_error_total = 0\n for train, test in kf:\n train_X = X[train]\n test_X = X[test]\n train_y = y[train]\n test_y = y[test]\n train_error, test_error = test_Bernoulli_NB(train_X, train_y, test_X, test_y, alpha)\n train_error_total += train_error\n test_error_total += test_error\n return train_error_total/n_folds, test_error_total/n_folds\n# print('===================')\n# print('avg. training error:\\t{}'.format(train_error_total/n_folds))\n# print('avg. testing error:\\t{}'.format(test_error_total/n_folds))\n\ndef cv_plot_BernoulliNB():\n cv_res = []\n rng = sorted([0.5] + list(range(0, 10)))\n for i in rng:\n train_error, test_error = cv_BernoulliNB(i)\n cv_res.append([i, train_error, test_error])\n cv_res_arr = np.array(cv_res)\n plt.figure(figsize=(16,9)) \n plt.title('Error vs. alpha')\n plot_train, = plt.plot(cv_res_arr[:,0], cv_res_arr[:,1], label='training')\n plot_test, = plt.plot(cv_res_arr[:,0], cv_res_arr[:,2], label='testing')\n plt.legend(handles=[plot_train, plot_test])\n plt.ylim((min(min(cv_res_arr[:,1]), min(cv_res_arr[:,2])) - 0.01, max(max(cv_res_arr[:,1]), max(cv_res_arr[:,2]))+0.01))\n plt.xticks(rng)\n\ncv_plot_BernoulliNB()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
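The notebook's pipeline — token-count dicts fed through `DictVectorizer`, then `MultinomialNB` with Laplace smoothing — can be reproduced end to end on a toy corpus. The four messages and labels below are invented for illustration:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented token-count dicts, shaped like the output of get_freq_of_tokens above
docs = [
    {'free': 2, 'winner': 1, 'call': 1},  # spam-like
    {'see': 1, 'you': 1, 'tonight': 1},   # ham-like
    {'free': 1, 'prize': 1, 'call': 2},   # spam-like
    {'are': 1, 'you': 1, 'ok': 1},        # ham-like
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

vectorizer = DictVectorizer(sparse=True)
X = vectorizer.fit_transform(docs)

nb = MultinomialNB(alpha=1.0).fit(X, labels)  # alpha=1.0: Laplace smoothing

# Reuse the fitted vectorizer so the new message maps onto the same vocabulary
new_sms = vectorizer.transform([{'free': 1, 'call': 1}])
print(nb.predict(new_sms))  # → [1], i.e. classified as spam
```

Reusing the fitted `DictVectorizer` for new messages matters: calling `fit_transform` again would build a different vocabulary and break the column alignment.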
jsub10/MLCourse
|
Notebooks/Learning-Curves.ipynb
|
mit
|
[
"Learning Curves\n ACKNOWLEDGEMENT \nCode in this notebook is modified from Jake VanderPlas (https://jakevdp.github.io/PythonDataScienceHandbook/06.00-figure-code.html#Bias-Variance-Tradeoff)\nFor Jake's explanation of learning curves, see https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html\nSet up the functions and libraries",
"import numpy as np\n\n# Simulate a dataset containing one feature (and one target)\n# The feature values are contained in X\n# The target values are contained in y\ndef make_data(N=100, err=0.8, rseed=1):\n # randomly sample the data\n rng = np.random.RandomState(rseed)\n X = rng.rand(N, 1) ** 2\n y = 10 - 1. / (X.ravel() + 0.1)\n if err > 0:\n y += err * rng.randn(N)\n return X, y\n\n# For plotting\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\n\n# For building models and measuring their performance\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.metrics import mean_squared_error as mse\nfrom sklearn.model_selection import train_test_split as tt_split\n\ndef PolynomialRegression(degree=2, **kwargs):\n return make_pipeline(PolynomialFeatures(degree),\n LinearRegression(**kwargs))",
"Generate a dataset and fit it with a high bias and a high variance model",
"# Create the complete dataset - total size = N\nX, y = make_data(N=100, rseed=42)\n\n# Split the dataset into training and test data\nX_train, X_test, y_train, y_test = tt_split(X, y, test_size=0.2, random_state=42)\n# Keep the test data the same throughout\n\n# Get the first num_datapoints from the training dataset\ndef get_training_slice(num_datapoints):\n xTrainSlice = X_train[0:num_datapoints]\n yTrainSlice = y_train[0:num_datapoints]\n return xTrainSlice, yTrainSlice\n\n# Test it out\nxTrainSlice, yTrainSlice = get_training_slice(3)\nxTrainSlice, yTrainSlice\n\n# Train a model with num_datapoints rows of the training dataset\ndef train_models(num_datapoints):\n # Get the slice of the training data on which the model is trained\n xTrainSlice, yTrainSlice = get_training_slice(num_datapoints)\n \n # Train the model on the slice of training data\n # The high bias model is a polynomial of degree 1\n model_hb = PolynomialRegression(1).fit(xTrainSlice, yTrainSlice)\n # The high variance model is a polynomial of degree 20\n model_hv = PolynomialRegression(20).fit(xTrainSlice, yTrainSlice)\n \n return xTrainSlice, yTrainSlice, model_hb, model_hv\n\n# Test it out\nxTrainSlice, yTrainSlice, model_hb, model_hv = train_models(10)\nxTrainSlice, yTrainSlice, model_hb, model_hv;\n# Quick Check on Performance\n# MSE for the high-bias model\nprint(\"MSE for the high-bias model: {}\".format(mse(y_test, model_hb.predict(X_test))))\n# MSE for the high-variance model\nprint(\"MSE for the high-variance model: {}\".format(mse(y_test, model_hv.predict(X_test))))\n\n# For models trained on num_datapoints rows of the training dataset, show fit and performance\n# on the rows of the training dataset and the entire test dataset\ndef plot_prediction_performance(num_datapoints):\n \n xTrainSlice, yTrainSlice, model_hb, model_hv = train_models(num_datapoints)\n \n # A set of points on the x axis for plotting purposes\n xfit = np.linspace(-0.1, 1.0, 1000)[:, None]\n\n fig, ax = 
plt.subplots(1, 2, figsize=(16, 8))\n fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)\n \n # High-Bias Model: Scatter plot of the training dataset slice\n ax[0].scatter(xTrainSlice.ravel(), yTrainSlice, s=40, c='blue')\n # Plot of the predictions trained on the training dataset slice \n ax[0].plot(xfit.ravel(), model_hb.predict(xfit), color='red')\n ax[0].axis([-0.1, 1.0, -2, 14])\n ax[0].set_title('High-Bias Model\\n' + '# Training Datapoints = ' + str(num_datapoints) + \"\\n\" + '# Test Datapoints = ' + str(len(y_test)), size=14)\n ax[0].scatter(X_test.ravel(), y_test, s=40, c='green')\n ax[0].text(0.02, 0.98, \"training data $MSE$ = {0:.2f}\".format(mse(yTrainSlice, model_hb.predict(xTrainSlice))),ha='left', va='top', transform=ax[0].transAxes, size=14, color='blue')\n ax[0].text(0.02, 0.91, \"test data $MSE$ = {0:.2f}\".format(mse(y_test, model_hb.predict(X_test))),ha='left', va='top', transform=ax[0].transAxes, size=14, color='green')\n \n # High-Variance Model: Scatter plot of the training dataset slice\n ax[1].scatter(xTrainSlice.ravel(), yTrainSlice, s=40, c='blue')\n # Plot of the predictions trained on the training dataset slice \n ax[1].plot(xfit.ravel(), model_hv.predict(xfit), color='red')\n ax[1].axis([-0.1, 1.0, -2, 14])\n ax[1].set_title('High-Variance Model\\n' + '# Training Datapoints = ' + str(num_datapoints) + \"\\n\" + '# Test Datapoints = ' + str(len(y_test)), size=14)\n ax[1].scatter(X_test.ravel(), y_test, s=40, c='green')\n ax[1].text(0.02, 0.98, \"training data $MSE$ = {0:.2f}\".format(mse(yTrainSlice, model_hv.predict(xTrainSlice))),ha='left', va='top', transform=ax[1].transAxes, size=14, color='blue')\n ax[1].text(0.02, 0.91, \"test data $MSE$ = {0:.2f}\".format(mse(y_test, model_hv.predict(X_test))),ha='left', va='top', transform=ax[1].transAxes, size=14, color='green')",
"Prediction Performance as Training Dataset Size Increases",
"plot_prediction_performance(2)\n\nplot_prediction_performance(5)\n\nplot_prediction_performance(10)\n\nplot_prediction_performance(20)\n\nplot_prediction_performance(50)\n\nplot_prediction_performance(80)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
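The notebook builds its learning curves by hand via `get_training_slice`; scikit-learn also ships a `learning_curve` helper that does the slicing and cross-validation in one call. A sketch on the same synthetic data-generating process (the degree-2 polynomial and the train-size fractions are arbitrary choices here):

```python
import numpy as np
from sklearn.model_selection import learning_curve
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Same synthetic data-generating process as make_data above
rng = np.random.RandomState(1)
X = rng.rand(100, 1) ** 2
y = 10 - 1. / (X.ravel() + 0.1) + 0.8 * rng.randn(100)

model = make_pipeline(PolynomialFeatures(2), LinearRegression())
sizes, train_scores, val_scores = learning_curve(
    model, X, y, train_sizes=[0.2, 0.5, 1.0], cv=5,
    scoring='neg_mean_squared_error')

# One row per training-set size, one column per CV fold
print(sizes)                     # absolute training-set sizes used
print(-val_scores.mean(axis=1))  # mean validation MSE at each size
```

With `cv=5` on 100 samples each training fold has 80 points, so the fractions 0.2, 0.5, 1.0 become 16, 40, and 80 training examples.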
bspalding/research_public
|
lectures/drafts/Graphic presentation of data.ipynb
|
apache-2.0
|
[
"Histogram\nBy Evgenia \"Jenny\" Nitishinskaya and Delaney Granizo-Mackenzie\nNotebook released under the Creative Commons Attribution 4.0 License.\n\nA histogram displays a frequency distribution using bars. It lets us quickly see where most of the observations are clustered. The height of each bar represents the number of observations that lie in each interval.",
"import numpy as np\nimport matplotlib.pyplot as plt\n\n# Get returns data for S&P 500\nstart = '2014-01-01'\nend = '2015-01-01'\nspy = get_pricing('SPY', fields='price', start_date=start, end_date=end).pct_change()[1:]\n\n# Plot a histogram using 20 bins\nfig = plt.figure(figsize = (16, 7))\n_, bins, _ = plt.hist(spy, 20)\nlabels = ['%.3f' % a for a in bins] # Reduce precision so labels are legible\nplt.xticks(bins, labels)\nplt.xlabel('Returns')\nplt.ylabel('Number of Days')\nplt.title('Frequency distribution of S&P 500 returns, 2014');",
"The graph above shows, for example, that the daily returns on the S&P 500 were between 0.010 and 0.013 on 10 of the days in 2014. Note that we are completely discarding the dates corresponding to these returns.\nAn alternative way to display the data would be using a cumulative distribution function, in which the height of a bar represents the number of observations that lie in that bin or in one of the previous ones. This graph is always nondecreasing since you cannot have a negative number of observations. The choice of graph depends on the information you are interested in.",
"# Example of a cumulative histogram\nfig = plt.figure(figsize = (16, 7))\n_, bins, _ = plt.hist(spy, 20, cumulative='True')\nlabels = ['%.3f' % a for a in bins]\nplt.xticks(bins, labels)\nplt.xlabel('Returns')\nplt.ylabel('Number of Days')\nplt.title('Cumulative distribution of S&P 500 returns, 2014');",
"Scatter plot\nA scatter plot is useful for visualizing the relationship between two data sets. We use two data sets which have some sort of correspondence, such as the date on which the measurement was taken. Each point represents two corresponding values from the two data sets. However, we don't plot the date that the measurements were taken on.",
"# Get returns data for some security\nasset = get_pricing('MSFT', fields='price', start_date=start, end_date=end).pct_change()[1:]\n\n# Plot the asset returns vs S&P 500 returns\nplt.scatter(asset, spy)\nplt.xlabel('MSFT')\nplt.ylabel('SPY')\nplt.title('Returns in 2014');",
"Line graph\nA line graph can be used when we want to track the development of the y value as the x value changes. For instance, when we are plotting the price of a stock, showing it as a line graph instead of just plotting the data points makes it easier to follow the price over time. This necessarily involves \"connecting the dots\" between the data points.",
"spy.plot()\nplt.ylabel('Returns');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
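The plotting cells above rely on Quantopian's `get_pricing`, which is unavailable outside that platform. The same histogram mechanics can be checked with synthetic returns (the normal-distribution parameters below are arbitrary stand-ins):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Synthetic daily returns standing in for get_pricing(...).pct_change()
rng = np.random.RandomState(0)
returns = rng.normal(0.0005, 0.01, 252)  # roughly one trading year

# Ordinary histogram: every observation falls in exactly one of the 20 bins
counts, bins, _ = plt.hist(returns, 20)
print(int(counts.sum()))  # → 252

# Cumulative histogram: bar heights are nondecreasing by construction
cum_counts, _, _ = plt.hist(returns, 20, cumulative=True)
print(bool(np.all(np.diff(cum_counts) >= 0)))  # → True
```

Note that `cumulative=True` takes a boolean; the notebook's `cumulative='True'` happens to work only because any non-empty string is truthy.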
takanory/pymook-samplecode
|
column/column.ipynb
|
mit
|
[
"Handy standard library modules and third-party packages\n\nJupyter Notebook with the column's code executed\n\nPython's batteries included: the standard library\nsys and os: system parameters and the OS",
"import os\nos.mkdir('spam') # create a directory\nos.chdir('spam') # change into the directory\nos.getcwd() # get the current working directory\n\nos.chdir('..') # go back to the original directory\n\nimport sys\nsys.exit(1) # exit abnormally with exit status 1",
"time and datetime: dates and times",
"import time\nlocaltime = time.localtime() # get the current time as local time\ntime.strftime('%Y-%m-%d %H:%M:%S', localtime) # format the datetime and return it\n\nimport datetime\nnow = datetime.datetime.now() # get the current datetime\nnow.isoformat() # return the current datetime in ISO format\n\nnextyear = datetime.datetime(2018, 1, 1) # build a datetime for January 1, 2018\ndelta = nextyear - now\ndelta.days # confirm there are 261 days until next year",
"math and random: mathematical functions and random numbers",
"import math\nprint(math.pi) # the constant π\nprint(math.sqrt(2)) # square root\n\nimport random\nprint(random.randint(1, 1000)) # return an integer in the given range (1 to 1000)\ndata = ['spam', 'ham', 'eggs', 'foo', 'bar', 'baz']\nprint(random.choice(data)) # pick a random element\nrandom.shuffle(data) # shuffle the order of the elements\nprint(data)",
"itertools: iterator-building functions",
"import itertools\nfor x in itertools.chain('ABC', 'DEF'): # chain the inputs into a single sequence\n print(x)\n\nfor x in itertools.combinations('ABCD', 2): # return combinations of length 2\n print(x)",
"shutil: high-level file operations",
"import shutil\nshutil.copytree('src_dir', 'dst_dir') # copy the files under src_dir to dst_dir\nshutil.rmtree('src_dir') # delete all the files under src_dir",
"json: JSON encoder and decoder",
"import json\ndata = {'spam': 'SPAM', 'ham': True, 'eggs': None} # define data as a dict\njson.dumps(data) # convert it to a JSON-formatted string",
"Third-party packages that make Python even more powerful\n\nRun the following commands in a terminal to create a venv environment and install the packages\n\n$ python3.6 -m venv env\n$ source env/bin/activate\n(env) $ pip install python-dateutil\n(env) $ pip install requests\n(env) $ pip install beautifulsoup4\n(env) $ pip install pillow\ndateutil: powerful extensions for date and time handling",
"import dateutil.parser\ndateutil.parser.parse(\"Fri Apr 14 20:34:20 UTC 2017\") # convert a date string to a datetime\n\ndateutil.parser.parse(\"2017-04-14 20:34:20 JST\")\n\ndateutil.parser.parse(\"Apr-14-2017\")",
"Requests: an easy-to-use HTTP client",
"import requests\n\nr = requests.get('http://connpass.com/api/v1/event/?keyword=python') # access the given URL\nprint(r.status_code, r.headers['content-type']) # get the status and content type\nfor event in r.json()['events'][:3]: # take the first 3 events\n print(event['title'], event['started_at']) # print each event's title and start time",
"BeautifulSoup 4: a parser for HTML and XML files",
"from bs4 import BeautifulSoup\nimport requests\nr = requests.get('http://docs.python.jp/3/library/index.html')\nsoup = BeautifulSoup(r.content, 'html.parser') # parse the HTML\ntoctree = soup.find('div', 'toctree-wrapper') # get the div element whose class is toctree-wrapper\nlinks = toctree.find_all('a') # get all the a tags\nprint(len(links)) # get the number of links\nfor link in links[:3]: # show the first 3\n print(link.text)",
"Pillow: an image-processing library",
"from PIL import Image\nimage = Image.open('sample.jpg') # load the image\nhalf = (image.size[0] // 2, image.size[1] // 2)\nhalf_image = image.resize(half, Image.ANTIALIAS) # shrink the image to half size\nhalf_image.save('sample-half.jpg') # save the shrunken image\nrotate_image = image.transpose(Image.ROTATE_90) # rotate the image 90 degrees\nrotate_image.save('sample-rotate.png') # save the rotated image in PNG format"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
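The `json` cell above only encodes; decoding with `json.loads` completes the round trip. A minimal check (note that Python's `None` becomes JSON `null` and back):

```python
import json

data = {'spam': 'SPAM', 'ham': True, 'eggs': None}  # same dict as in the cell above
s = json.dumps(data)       # dict -> JSON string
restored = json.loads(s)   # JSON string -> dict
print(restored == data)    # → True
print(s)                   # → {"spam": "SPAM", "ham": true, "eggs": null}
```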
include/pandas
|
notebooks/example_Running Code.ipynb
|
bsd-3-clause
|
[
"Running Code\nFirst and foremost, the IPython Notebook is an interactive environment for writing and running code. IPython is capable of running code in a wide range of languages. However, this notebook, and the default kernel in IPython 2.0, runs Python code.\nCode cells allow you to enter and run Python code\nRun a code cell using Shift-Enter or pressing the <button class='btn btn-default btn-xs'><i class=\"icon-play fa fa-play\"></i></button> button in the toolbar above:",
"a = 10\n\nprint(a)",
"There are two other keyboard shortcuts for running code:\n\nAlt-Enter runs the current cell and inserts a new one below.\nCtrl-Enter run the current cell and enters command mode.\n\nManaging the IPython Kernel\nCode is run in a separate process called the IPython Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button> button in the toolbar above.",
"import time\ntime.sleep(10)",
"If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via\nctypes to segfault the Python interpreter:",
"import sys\nfrom ctypes import CDLL\n# This will crash a Linux or Mac system\n# equivalent calls can be made on Windows\ndll = 'dylib' if sys.platform == 'darwin' else 'so.6'\nlibc = CDLL(\"libc.%s\" % dll) \nlibc.time(-1) # BOOM!!",
"Cell menu\nThe \"Cell\" menu has a number of menu items for running code in different ways. These include:\n\nRun and Select Below\nRun and Insert Below\nRun All\nRun All Above\nRun All Below\n\nRestarting the kernel\nThe kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. This is done by clicking on the <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button> in the toolbar above.\nsys.stdout and sys.stderr\nThe stdout and stderr streams are displayed as text in the output area.",
"print(\"hi, stdout\")\n\nfrom __future__ import print_function\nprint('hi, stderr', file=sys.stderr)",
"Output is asynchronous\nAll output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end.",
"import time, sys\nfor i in range(8):\n print(i)\n time.sleep(0.5)",
"Large outputs\nTo better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double-click on the active area to the left of the output:",
"for i in range(50):\n print(i)",
"Beyond a certain point, output will scroll automatically:",
"for i in range(500):\n print(2**i - 1)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
samgoodgame/sf_crime
|
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_20_1213.ipynb
|
mit
|
[
"Kaggle San Francisco Crime Classification\nBerkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore\nEnvironment and Data",
"# Import relevant libraries:\nimport time\nimport numpy as np\nimport pandas as pd\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import preprocessing\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import log_loss\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import svm\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n# Import Meta-estimators\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.ensemble import BaggingClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\n# Import Calibration tools\nfrom sklearn.calibration import CalibratedClassifierCV\n\n# Set random seed and format print output:\nnp.random.seed(0)\nnp.set_printoptions(precision=3)",
"DDL to construct table for SQL transformations:\nsql\nCREATE TABLE kaggle_sf_crime (\ndates TIMESTAMP, \ncategory VARCHAR,\ndescript VARCHAR,\ndayofweek VARCHAR,\npd_district VARCHAR,\nresolution VARCHAR,\naddr VARCHAR,\nX FLOAT,\nY FLOAT);\nGetting training data into a locally hosted PostgreSQL database:\nsql\n\\copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER;\nSQL Query used for transformations:\nsql\nSELECT\n category,\n date_part('hour', dates) AS hour_of_day,\n CASE\n WHEN dayofweek = 'Monday' then 1\n WHEN dayofweek = 'Tuesday' THEN 2\n WHEN dayofweek = 'Wednesday' THEN 3\n WHEN dayofweek = 'Thursday' THEN 4\n WHEN dayofweek = 'Friday' THEN 5\n WHEN dayofweek = 'Saturday' THEN 6\n WHEN dayofweek = 'Sunday' THEN 7\n END AS dayofweek_numeric,\n X,\n Y,\n CASE\n WHEN pd_district = 'BAYVIEW' THEN 1\n ELSE 0\n END AS bayview_binary,\n CASE\n WHEN pd_district = 'INGLESIDE' THEN 1\n ELSE 0\n END AS ingleside_binary,\n CASE\n WHEN pd_district = 'NORTHERN' THEN 1\n ELSE 0\n END AS northern_binary,\n CASE\n WHEN pd_district = 'CENTRAL' THEN 1\n ELSE 0\n END AS central_binary,\n CASE\n WHEN pd_district = 'BAYVIEW' THEN 1\n ELSE 0\n END AS pd_bayview_binary,\n CASE\n WHEN pd_district = 'MISSION' THEN 1\n ELSE 0\n END AS mission_binary,\n CASE\n WHEN pd_district = 'SOUTHERN' THEN 1\n ELSE 0\n END AS southern_binary,\n CASE\n WHEN pd_district = 'TENDERLOIN' THEN 1\n ELSE 0\n END AS tenderloin_binary,\n CASE\n WHEN pd_district = 'PARK' THEN 1\n ELSE 0\n END AS park_binary,\n CASE\n WHEN pd_district = 'RICHMOND' THEN 1\n ELSE 0\n END AS richmond_binary,\n CASE\n WHEN pd_district = 'TARAVAL' THEN 1\n ELSE 0\n END AS taraval_binary\nFROM kaggle_sf_crime;\nLoading the data, version 2, with weather features to improve performance: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. 
Will be run by Isabell in final notebook with correct files she needs)\nWe seek to add features to our models that will improve performance with respect to our desired performance metric. There is evidence that there is a correlation between weather patterns and crime, with some experts even arguing for a causal relationship between weather and crime [1]. More specifically, a 2013 paper published in Science showed that higher temperatures and extreme rainfall led to large increases in conflict. Given this strong evidence that weather influences crime, we see it as a candidate for additional features to improve the performance of our classifiers. Weather data was gathered from (insert source). Certain features from this data set were incorporated into the original crime data set in order to add features that were hypothesized to improve performance. These features included (insert what we eventually include).",
"#data_path = \"./data/train_transformed.csv\"\n\n#df = pd.read_csv(data_path, header=0)\n#x_data = df.drop('category', 1)\n#y = df.category.as_matrix()\n\n########## Adding the date back into the data\n#import csv\n#import time\n#import calendar\n#data_path = \"./data/train.csv\"\n#dataCSV = open(data_path, 'rt')\n#csvData = list(csv.reader(dataCSV))\n#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\n#allData = csvData[1:]\n#dataCSV.close()\n\n#df2 = pd.DataFrame(allData)\n#df2.columns = csvFields\n#dates = df2['Dates']\n#dates = dates.apply(time.strptime, args=(\"%Y-%m-%d %H:%M:%S\",))\n#dates = dates.apply(calendar.timegm)\n#print(dates.head())\n\n#x_data['secondsFromEpoch'] = dates\n#colnames = x_data.columns.tolist()\n#colnames = colnames[-1:] + colnames[:-1]\n#x_data = x_data[colnames]\n##########\n\n########## Adding the weather data into the original crime data\n#weatherData1 = \"./data/1027175.csv\"\n#weatherData2 = \"./data/1027176.csv\"\n#dataCSV = open(weatherData1, 'rt')\n#csvData = list(csv.reader(dataCSV))\n#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\n#allWeatherData1 = csvData[1:]\n#dataCSV.close()\n\n#dataCSV = open(weatherData2, 'rt')\n#csvData = list(csv.reader(dataCSV))\n#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\n#allWeatherData2 = csvData[1:]\n#dataCSV.close()\n\n#weatherDF1 = pd.DataFrame(allWeatherData1)\n#weatherDF1.columns = csvFields\n#dates1 = weatherDF1['DATE']\n#sunrise1 = weatherDF1['DAILYSunrise']\n#sunset1 = weatherDF1['DAILYSunset']\n\n#weatherDF2 = pd.DataFrame(allWeatherData2)\n#weatherDF2.columns = csvFields\n#dates2 = weatherDF2['DATE']\n#sunrise2 = weatherDF2['DAILYSunrise']\n#sunset2 = weatherDF2['DAILYSunset']\n\n#functions for processing the sunrise and sunset times of each day\n#def 
get_hour_and_minute(milTime):\n # hour = int(milTime[:-2])\n # minute = int(milTime[-2:])\n # return [hour, minute]\n\n#def get_date_only(date):\n# return time.struct_time(tuple([date[0], date[1], date[2], 0, 0, 0, date[6], date[7], date[8]]))\n\n#def structure_sun_time(timeSeries, dateSeries):\n# sunTimes = timeSeries.copy()\n# for index in range(len(dateSeries)):\n# sunTimes[index] = time.struct_time(tuple([dateSeries[index][0], dateSeries[index][1], dateSeries[index][2], timeSeries[index][0], timeSeries[index][1], dateSeries[index][5], dateSeries[index][6], dateSeries[index][7], dateSeries[index][8]]))\n# return sunTimes\n\n#dates1 = dates1.apply(time.strptime, args=(\"%Y-%m-%d %H:%M\",))\n#sunrise1 = sunrise1.apply(get_hour_and_minute)\n#sunrise1 = structure_sun_time(sunrise1, dates1)\n#sunrise1 = sunrise1.apply(calendar.timegm)\n#sunset1 = sunset1.apply(get_hour_and_minute)\n#sunset1 = structure_sun_time(sunset1, dates1)\n#sunset1 = sunset1.apply(calendar.timegm)\n#dates1 = dates1.apply(calendar.timegm)\n\n#dates2 = dates2.apply(time.strptime, args=(\"%Y-%m-%d %H:%M\",))\n#sunrise2 = sunrise2.apply(get_hour_and_minute)\n#sunrise2 = structure_sun_time(sunrise2, dates2)\n#sunrise2 = sunrise2.apply(calendar.timegm)\n#sunset2 = sunset2.apply(get_hour_and_minute)\n#sunset2 = structure_sun_time(sunset2, dates2)\n#sunset2 = sunset2.apply(calendar.timegm)\n#dates2 = dates2.apply(calendar.timegm)\n\n#weatherDF1['DATE'] = dates1\n#weatherDF1['DAILYSunrise'] = sunrise1\n#weatherDF1['DAILYSunset'] = sunset1\n#weatherDF2['DATE'] = dates2\n#weatherDF2['DAILYSunrise'] = sunrise2\n#weatherDF2['DAILYSunset'] = sunset2\n\n#weatherDF = pd.concat([weatherDF1,weatherDF2[32:]],ignore_index=True)\n\n# Starting off with some of the easier features to work with-- more to come here . . . 
still in beta\n#weatherMetrics = weatherDF[['DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed', \\\n# 'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY', 'DAILYSunrise', 'DAILYSunset']]\n#weatherMetrics = weatherMetrics.convert_objects(convert_numeric=True)\n#weatherDates = weatherMetrics['DATE']\n#'DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed',\n#'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY'\n#timeWindow = 10800 #3 hours\n#hourlyDryBulbTemp = []\n#hourlyRelativeHumidity = []\n#hourlyWindSpeed = []\n#hourlySeaLevelPressure = []\n#hourlyVisibility = []\n#dailySunrise = []\n#dailySunset = []\n#daylight = []\n#test = 0\n#for timePoint in dates:#dates is the epoch time from the kaggle data\n# relevantWeather = weatherMetrics[(weatherDates <= timePoint) & (weatherDates > timePoint - timeWindow)]\n# hourlyDryBulbTemp.append(relevantWeather['HOURLYDRYBULBTEMPF'].mean())\n# hourlyRelativeHumidity.append(relevantWeather['HOURLYRelativeHumidity'].mean())\n# hourlyWindSpeed.append(relevantWeather['HOURLYWindSpeed'].mean())\n# hourlySeaLevelPressure.append(relevantWeather['HOURLYSeaLevelPressure'].mean())\n# hourlyVisibility.append(relevantWeather['HOURLYVISIBILITY'].mean())\n# dailySunrise.append(relevantWeather['DAILYSunrise'].iloc[-1])\n# dailySunset.append(relevantWeather['DAILYSunset'].iloc[-1])\n# daylight.append(1.0*((timePoint >= relevantWeather['DAILYSunrise'].iloc[-1]) and (timePoint < relevantWeather['DAILYSunset'].iloc[-1])))\n #if timePoint < relevantWeather['DAILYSunset'][-1]:\n #daylight.append(1)\n #else:\n #daylight.append(0)\n \n# if test%100000 == 0:\n# print(relevantWeather)\n# test += 1\n\n#hourlyDryBulbTemp = pd.Series.from_array(np.array(hourlyDryBulbTemp))\n#hourlyRelativeHumidity = pd.Series.from_array(np.array(hourlyRelativeHumidity))\n#hourlyWindSpeed = pd.Series.from_array(np.array(hourlyWindSpeed))\n#hourlySeaLevelPressure = pd.Series.from_array(np.array(hourlySeaLevelPressure))\n#hourlyVisibility = 
pd.Series.from_array(np.array(hourlyVisibility))\n#dailySunrise = pd.Series.from_array(np.array(dailySunrise))\n#dailySunset = pd.Series.from_array(np.array(dailySunset))\n#daylight = pd.Series.from_array(np.array(daylight))\n\n#x_data['HOURLYDRYBULBTEMPF'] = hourlyDryBulbTemp\n#x_data['HOURLYRelativeHumidity'] = hourlyRelativeHumidity\n#x_data['HOURLYWindSpeed'] = hourlyWindSpeed\n#x_data['HOURLYSeaLevelPressure'] = hourlySeaLevelPressure\n#x_data['HOURLYVISIBILITY'] = hourlyVisibility\n#x_data['DAILYSunrise'] = dailySunrise\n#x_data['DAILYSunset'] = dailySunset\n#x_data['Daylight'] = daylight\n\n#x_data.to_csv(path_or_buf=\"C:/MIDS/W207 final project/x_data.csv\")\n##########\n\n# Impute missing values with mean values:\n#x_complete = x_data.fillna(x_data.mean())\n#X_raw = x_complete.as_matrix()\n\n# Scale the data between 0 and 1:\n#X = MinMaxScaler().fit_transform(X_raw)\n\n# Shuffle data to remove any underlying pattern that may exist:\n#shuffle = np.random.permutation(np.arange(X.shape[0]))\n#X, y = X[shuffle], y[shuffle]\n\n# Separate training, dev, and test data:\n#test_data, test_labels = X[800000:], y[800000:]\n#dev_data, dev_labels = X[700000:800000], y[700000:800000]\n#train_data, train_labels = X[:700000], y[:700000]\n\n#mini_train_data, mini_train_labels = X[:75000], y[:75000]\n#mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]\n#labels_set = set(mini_dev_labels)\n#print(labels_set)\n#print(len(labels_set))\n#print(train_data[:10])",
"Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.",
"data_path = \"/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/x_data_3.csv\"\ndf = pd.read_csv(data_path, header=0)\nx_data = df.drop('category', 1)\ny = df.category.as_matrix()\n\n# Impute missing values with mean values:\nx_complete = x_data.fillna(x_data.mean())\nX_raw = x_complete.as_matrix()\n\n# Scale the data between 0 and 1:\nX = MinMaxScaler().fit_transform(X_raw)\n\n# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:\nnp.random.seed(0)\nshuffle = np.random.permutation(np.arange(X.shape[0]))\nX, y = X[shuffle], y[shuffle]\n\n# Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare\n# crimes from the data for quality issues.\nX_minus_trea = X[np.where(y != 'TREA')]\ny_minus_trea = y[np.where(y != 'TREA')]\nX_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]\ny_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]\n\n# Separate training, dev, and test data:\ntest_data, test_labels = X_final[800000:], y_final[800000:]\ndev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]\ntrain_data, train_labels = X_final[100000:700000], y_final[100000:700000]\ncalibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]\n\n# Create mini versions of the above sets\nmini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]\nmini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]\nmini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]\n\n# Create list of the crime type labels. 
This will act as the \"labels\" parameter for the log loss functions that follow\ncrime_labels = list(set(y_final))\ncrime_labels_mini_train = list(set(mini_train_labels))\ncrime_labels_mini_dev = list(set(mini_dev_labels))\ncrime_labels_mini_calibrate = list(set(mini_calibrate_labels))\nprint(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))\n\n#print(len(train_data),len(train_labels))\n#print(len(dev_data),len(dev_labels))\n#print(len(mini_train_data),len(mini_train_labels))\n#print(len(mini_dev_data),len(mini_dev_labels))\n#print(len(test_data),len(test_labels))\n#print(len(mini_calibrate_data),len(mini_calibrate_labels))\n#print(len(calibrate_data),len(calibrate_labels))",
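The split above is only reproducible because `np.random.seed(0)` is re-run immediately before the permutation; a minimal numpy sketch of that property:

```python
import numpy as np

# Re-seeding before np.random.permutation makes the shuffle deterministic,
# so train/dev/test membership is stable across notebook runs.
def seeded_shuffle(n_rows, seed=0):
    np.random.seed(seed)
    return np.random.permutation(np.arange(n_rows))

first = seeded_shuffle(10)
second = seeded_shuffle(10)
```

Without the re-seed, each call would produce a different permutation, and the subsets (and hence model scores) would drift between runs.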
"Sarah's School data that we may still get to work as features: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)",
"### Read in zip code data\n#data_path_zip = \"./data/2016_zips.csv\"\n#zips = pd.read_csv(data_path_zip, header=0, sep ='\\t', usecols = [0,5,6], names = [\"GEOID\", \"INTPTLAT\", \"INTPTLONG\"], dtype ={'GEOID': int, 'INTPTLAT': float, 'INTPTLONG': float})\n#sf_zips = zips[(zips['GEOID'] > 94000) & (zips['GEOID'] < 94189)]\n\n### Mapping longitude/latitude to zipcodes\n#def dist(lat1, long1, lat2, long2):\n# return np.sqrt((lat1-lat2)**2+(long1-long2)**2)\n# return abs(lat1-lat2)+abs(long1-long2)\n#def find_zipcode(lat, long): \n# distances = sf_zips.apply(lambda row: dist(lat, long, row[\"INTPTLAT\"], row[\"INTPTLONG\"]), axis=1)\n# return sf_zips.loc[distances.idxmin(), \"GEOID\"]\n#x_data['zipcode'] = 0\n#for i in range(0, 1):\n# x_data['zipcode'][i] = x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)\n#x_data['zipcode']= x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)\n\n\n### Read in school data\n#data_path_schools = \"./data/pubschls.csv\"\n#schools = pd.read_csv(data_path_schools,header=0, sep ='\\t', usecols = [\"CDSCode\",\"StatusType\", \"School\", \"EILCode\", \"EILName\", \"Zip\", \"Latitude\", \"Longitude\"], dtype ={'CDSCode': str, 'StatusType': str, 'School': str, 'EILCode': str,'EILName': str,'Zip': str, 'Latitude': float, 'Longitude': float})\n#schools = schools[(schools[\"StatusType\"] == 'Active')]\n\n### Find the closest school\n#def dist(lat1, long1, lat2, long2):\n# return np.sqrt((lat1-lat2)**2+(long1-long2)**2)\n\n#def find_closest_school(lat, long): \n# distances = schools.apply(lambda row: dist(lat, long, row[\"Latitude\"], row[\"Longitude\"]), axis=1)\n# return min(distances)\n#x_data['closest_school'] = x_data_sub.apply(lambda row: find_closest_school(row['y'], row['x']), axis=1)\n",
"Formatting to meet Kaggle submission standards: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)",
"# The Kaggle submission format requires listing the ID of each example.\n# This is to remember the order of the IDs after shuffling\n#allIDs = np.array(list(df.axes[0]))\n#allIDs = allIDs[shuffle]\n\n#testIDs = allIDs[800000:]\n#devIDs = allIDs[700000:800000]\n#trainIDs = allIDs[:700000]\n\n# Extract the column names for the required submission format\n#sampleSubmission_path = \"./data/sampleSubmission.csv\"\n#sampleDF = pd.read_csv(sampleSubmission_path)\n#allColumns = list(sampleDF.columns)\n#featureColumns = allColumns[1:]\n\n# Extracting the test data for a baseline submission\n#real_test_path = \"./data/test_transformed.csv\"\n#testDF = pd.read_csv(real_test_path, header=0)\n#real_test_data = testDF\n\n#test_complete = real_test_data.fillna(real_test_data.mean())\n#Test_raw = test_complete.as_matrix()\n\n#TestData = MinMaxScaler().fit_transform(Test_raw)\n\n# Here we remember the ID of each test data point, in case we ever decide to shuffle the test data for some reason\n#testIDs = list(testDF.axes[0])",
"Generate baseline prediction probabilities from MNB classifier and store in a .csv file (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)",
"# Generate a baseline MNB classifier and make it return prediction probabilities for the actual test data\n#def MNB():\n# mnb = MultinomialNB(alpha = 0.0000001)\n# mnb.fit(train_data, train_labels)\n# print(\"\\n\\nMultinomialNB accuracy on dev data:\", mnb.score(dev_data, dev_labels))\n# return mnb.predict_proba(dev_data)\n#MNB()\n\n#baselinePredictionProbabilities = MNB()\n\n# Place the resulting prediction probabilities in a .csv file in the required format\n# First, turn the prediction probabilties into a data frame\n#resultDF = pd.DataFrame(baselinePredictionProbabilities,columns=featureColumns)\n# Add the IDs as a final column\n#resultDF.loc[:,'Id'] = pd.Series(testIDs,index=resultDF.index)\n# Make the 'Id' column the first column\n#colnames = resultDF.columns.tolist()\n#colnames = colnames[-1:] + colnames[:-1]\n#resultDF = resultDF[colnames]\n# Output to a .csv file\n# resultDF.to_csv('result.csv',index=False)",
"Note: the code above will shuffle data differently every time it's run, so model accuracies will vary accordingly.",
"## Data sub-setting quality check-point\nprint(train_data[:1])\nprint(train_labels[:1])\n\n# Modeling quality check-point with MNB--fast model\n\ndef MNB():\n mnb = MultinomialNB(alpha = 0.0000001)\n mnb.fit(train_data, train_labels)\n print(\"\\n\\nMultinomialNB accuracy on dev data:\", mnb.score(dev_data, dev_labels))\n \nMNB()",
"Defining Performance Criteria\nAs determined by the Kaggle submission guidelines, the performance criteria metric for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, and squared error.\n(Describe each performance metric and a domain in which it is preferred. Give Pros/Cons if able)\n\n\nMulti-class Log Loss:\n\n\nAccuracy:\n\n\nF-score:\n\n\nLift:\n\n\nROC Area:\n\n\nAverage precision\n\n\nPrecision/Recall break-even point:\n\n\nSquared-error:\n\n\nModel Prototyping\nWe will start our classifier and feature engineering process by looking at the performance of various classifiers with default parameter settings in predicting labels on the mini_dev_data:",
"def model_prototype(train_data, train_labels, eval_data, eval_labels):\n knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels)\n bnb = BernoulliNB(alpha=1, binarize = 0.5).fit(train_data, train_labels)\n mnb = MultinomialNB().fit(train_data, train_labels)\n log_reg = LogisticRegression().fit(train_data, train_labels)\n neural_net = MLPClassifier().fit(train_data, train_labels)\n random_forest = RandomForestClassifier().fit(train_data, train_labels)\n decision_tree = DecisionTreeClassifier().fit(train_data, train_labels)\n support_vm_step_one = svm.SVC(probability = True)\n support_vm = support_vm_step_one.fit(train_data, train_labels)\n \n models = [knn, bnb, mnb, log_reg, neural_net, random_forest, decision_tree, support_vm]\n for model in models:\n eval_prediction_probabilities = model.predict_proba(eval_data)\n eval_predictions = model.predict(eval_data)\n print(model, \"Multi-class Log Loss:\", log_loss(y_true = eval_labels, y_pred = eval_prediction_probabilities, labels = crime_labels_mini_dev), \"\\n\\n\")\n\nmodel_prototype(mini_train_data, mini_train_labels, mini_dev_data, mini_dev_labels)",
"Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier\nHere we seek to optimize the performance of our classifiers in a three-step, dynamnic engineering process. \n1) Feature addition\nWe previously added components from the weather data into the original SF crime data as new features. We will not repeat work done in our initial submission, where our training dataset did not include these features. For comparision with respoect to how the added features improved our performance with respect to log loss, please refer back to our initial submission.\nWe can have Kalvin expand on exactly what he did here.\n2) Hyperparameter tuning\nEach classifier has parameters that we can engineer to further optimize performance, as opposed to using the default parameter values as we did above in the model prototyping cell. This will be specific to each classifier as detailed below.\n3) Model calibration\nWe can calibrate the models via Platt Scaling or Isotonic Regression to attempt to improve their performance.\n\n\nPlatt Scaling: ((brief explanation of how it works))\n\n\nIsotonic Regression: ((brief explanation of how it works))\n\n\nFor each classifier, we can use CalibratedClassifierCV to perform probability calibration with isotonic regression or sigmoid (Platt Scaling). The parameters within CalibratedClassifierCV that we can adjust are the method ('sigmoid' or 'isotonic') and cv (cross-validation generator). As we will already be training our models before calibration, we will only use cv = 'prefit'. Thus, in practice the cross-validation generator will not be a modifiable parameter for us.\nK-Nearest Neighbors\nHyperparameter tuning:\nFor the KNN classifier, we can seek to optimize the following classifier parameters: n-neighbors, weights, and the power parameter ('p').",
"list_for_ks = []\nlist_for_ws = []\nlist_for_ps = []\nlist_for_log_loss = []\n\ndef k_neighbors_tuned(k,w,p):\n tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)\n list_for_ks.append(this_k)\n list_for_ws.append(this_w)\n list_for_ps.append(this_p)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with KNN and k,w,p =\", k,\",\",w,\",\", p, \"is:\", working_log_loss)\n\nk_value_tuning = [i for i in range(1,5002,500)]\nweight_tuning = ['uniform', 'distance']\npower_parameter_tuning = [1,2]\n\nstart = time.clock()\nfor this_k in k_value_tuning:\n for this_w in weight_tuning:\n for this_p in power_parameter_tuning:\n k_neighbors_tuned(this_k, this_w, this_p)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For KNN the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds')",
"Model calibration:\nHere we will calibrate the KNN classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The \"method\" parameter can be set to \"sigmoid\" or to \"isotonic\", corresponding to Platt Scaling and to Isotonic Regression respectively.",
"list_for_ks = []\nlist_for_ws = []\nlist_for_ps = []\nlist_for_ms = []\nlist_for_log_loss = []\n\ndef knn_calibrated(k,w,p,m):\n tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)\n ccv = CalibratedClassifierCV(tuned_KNN, method = m, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_ks.append(this_k)\n list_for_ws.append(this_w)\n list_for_ps.append(this_p)\n list_for_ms.append(this_m)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n print(\"Multi-class Log Loss with KNN and k,w,p =\", k,\",\",w,\",\",p,\",\",m,\"is:\", working_log_loss)\n\nk_value_tuning = ([i for i in range(1,21,1)] + [j for j in range(25,51,5)] + [k for k in range(55,22000,1000)])\nweight_tuning = ['uniform', 'distance']\npower_parameter_tuning = [1,2]\nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_k in k_value_tuning:\n for this_w in weight_tuning:\n for this_p in power_parameter_tuning:\n for this_m in methods:\n knn_calibrated(this_k, this_w, this_p, this_m)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For KNN the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss], 'm =', list_for_ms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds')",
"Comments on results for Hyperparameter tuning and Calibration for KNN:\nWe see that the best log loss we achieve for KNN is with _ neighbors, _ weights, and _ power parameter.\nWhen we add-in calibration, we see that the the best log loss we achieve for KNN is with _ neighbors, _ weights, _ power parameter, and _ calibration method.\n(Further explanation here?)\nMultinomial, Bernoulli, and Gaussian Naive Bayes\nHyperparameter tuning: Bernoulli Naive Bayes\nFor the Bernoulli Naive Bayes classifier, we seek to optimize the alpha parameter (Laplace smoothing parameter) and the binarize parameter (threshold for binarizing of the sample features). For the binarize parameter, we will create arbitrary thresholds over which our features, which are not binary/boolean features, will be binarized.",
"list_for_as = []\nlist_for_bs = []\nlist_for_log_loss = []\n\ndef BNB_tuned(a,b):\n bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = bnb_tuned.predict_proba(mini_dev_data)\n list_for_as.append(this_a)\n list_for_bs.append(this_b)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with BNB and a,b =\", a,\",\",b,\"is:\", working_log_loss)\n\nalpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]\nbinarize_thresholds_tuning = [1e-20, 1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]\n\nstart = time.clock()\nfor this_a in alpha_tuning:\n for this_b in binarize_thresholds_tuning:\n BNB_tuned(this_a, this_b)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For BNB the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'binarization threshold =', list_for_bs[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') ",
"Model calibration: BernoulliNB\nHere we will calibrate the BNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The \"method\" parameter can be set to \"sigmoid\" or to \"isotonic\", corresponding to Platt Scaling and to Isotonic Regression respectively.",
"list_for_as = []\nlist_for_bs = []\nlist_for_ms = []\nlist_for_log_loss = []\n\ndef BNB_calibrated(a,b,m):\n bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = bnb_tuned.predict_proba(mini_dev_data)\n ccv = CalibratedClassifierCV(bnb_tuned, method = m, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_as.append(this_a)\n list_for_bs.append(this_b)\n list_for_ms.append(this_m)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with BNB and a,b,m =\", a,\",\", b,\",\", m, \"is:\", working_log_loss)\n\nalpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]\nbinarize_thresholds_tuning = [1e-20, 1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]\nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_a in alpha_tuning:\n for this_b in binarize_thresholds_tuning:\n for this_m in methods:\n BNB_calibrated(this_a, this_b, this_m)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For BNB the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'binarization threshold =', list_for_bs[index_best_logloss], 'method = ', list_for_ms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') ",
"Hyperparameter tuning: Multinomial Naive Bayes\nFor the Multinomial Naive Bayes classifer, we seek to optimize the alpha parameter (Laplace smoothing parameter).",
"list_for_as = []\nlist_for_log_loss = []\n\ndef MNB_tuned(a):\n mnb_tuned = MultinomialNB(alpha = a).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities =mnb_tuned.predict_proba(mini_dev_data)\n list_for_as.append(this_a)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with BNB and a =\", a, \"is:\", working_log_loss)\n\nalpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]\n\nstart = time.clock()\nfor this_a in alpha_tuning:\n MNB_tuned(this_a)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For MNB the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') \n",
"Model calibration: MultinomialNB\nHere we will calibrate the MNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The \"method\" parameter can be set to \"sigmoid\" or to \"isotonic\", corresponding to Platt Scaling and to Isotonic Regression respectively.",
"list_for_as = []\nlist_for_ms = []\nlist_for_log_loss = []\n\ndef MNB_calibrated(a,m):\n mnb_tuned = MultinomialNB(alpha = a).fit(mini_train_data, mini_train_labels)\n ccv = CalibratedClassifierCV(mnb_tuned, method = m, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_as.append(this_a)\n list_for_ms.append(this_m)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with MNB and a =\", a, \"and m =\", m, \"is:\", working_log_loss)\n\nalpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]\nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_a in alpha_tuning:\n for this_m in methods:\n MNB_calibrated(this_a, this_m)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For MNB the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'and method =', list_for_ms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') ",
"Tuning: Gaussian Naive Bayes\nFor the Gaussian Naive Bayes classifier there are no inherent parameters within the classifier function to optimize, but we will look at our log loss before and after adding noise to the data that is hypothesized to give it a more normal (Gaussian) distribution, which is required by the GNB classifier.",
"def GNB_pre_tune():\n gnb_pre_tuned = GaussianNB().fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities =gnb_pre_tuned.predict_proba(mini_dev_data)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n print(\"Multi-class Log Loss with pre-tuned GNB is:\", working_log_loss)\n\nGNB_pre_tune()\n \ndef GNB_post_tune():\n # Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes\n # adding noise can improve performance by making the data more normal:\n mini_train_data_noise = np.random.rand(mini_train_data.shape[0],mini_train_data.shape[1])\n modified_mini_train_data = np.multiply(mini_train_data,mini_train_data_noise) \n gnb_with_noise = GaussianNB().fit(modified_mini_train_data,mini_train_labels)\n dev_prediction_probabilities =gnb_with_noise.predict_proba(mini_dev_data)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n print(\"Multi-class Log Loss with tuned GNB via addition of noise to normalize the data's distribution is:\", working_log_loss)\n \nGNB_post_tune()",
"Model calibration: GaussianNB\nHere we will calibrate the GNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The \"method\" parameter can be set to \"sigmoid\" or to \"isotonic\", corresponding to Platt Scaling and to Isotonic Regression respectively.",
"list_for_ms = []\nlist_for_log_loss = []\n\ndef GNB_calibrated(m):\n # Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes\n # adding noise can improve performance by making the data more normal:\n mini_train_data_noise = np.random.rand(mini_train_data.shape[0],mini_train_data.shape[1])\n modified_mini_train_data = np.multiply(mini_train_data,mini_train_data_noise) \n gnb_with_noise = GaussianNB().fit(modified_mini_train_data,mini_train_labels)\n ccv = CalibratedClassifierCV(gnb_with_noise, method = m, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_ms.append(this_m)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with tuned GNB via addition of noise to normalize the data's distribution and after calibration is:\", working_log_loss, 'with calibration method =', m)\n \nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_m in methods:\n GNB_calibrated(this_m)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For GNB the best log loss with tuning and calibration is',list_for_log_loss[index_best_logloss], 'with method =', list_for_ms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') ",
"Logistic Regression\nHyperparameter tuning:\nFor the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')\nModel calibration:\nSee above\nDecision Tree\nHyperparameter tuning:\nFor the Decision Tree classifier, we can seek to optimize the following classifier parameters: \n\ncriterion: The function to measure the quality of a split; can be either Gini impurity \"gini\" or information gain \"entropy\"\nsplitter: The strategy used to choose the split at each node; can be either \"best\" to choose the best split or \"random\" to choose the best random split\nmin_samples_leaf: The minimum number of samples required to be at a leaf node\nmax_depth: The maximum depth of trees. If default \"None\" then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples\nclass_weight: The weights associated with classes; can be \"None\" giving all classes weight of one, or can be \"balanced\", which uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data\nmax_features: The number of features to consider when looking for the best split; can be \"int\", \"float\" (percent), \"auto\", \"sqrt\", or \"None\"\n\nOther adjustable parameters include:\n- min_samples_split: The minimum number of samples required to split an internal node; can be an integer or a float (percentage and ceil as the minimum number of samples for each node)\n- min_weight_fraction_leaf: The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node; default = 0\n- max_leaf_nodes: Grosw a tree with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. 
If \"None\" then unlimited number of leaf nodes is used.\n- min_impurity_decrease: A node will be split if this split induces a decrease of the impurity greater than or equal to the min_impurity_decrease value. Default is zero. \nSetting min_samples_leaf to approximately 1% of the data points can stop the tree from inappropriately classifying outliers, which can help to improve accuracy (unsure if significantly improves MCLL) [].",
"list_for_cs = []\nlist_for_ss = []\nlist_for_mds = []\nlist_for_mss = []\nlist_for_cws = []\nlist_for_fs = []\nlist_for_log_loss = []\n\ndef DT_tuned(c,s,md,ms,cw,f):\n tuned_DT = DecisionTreeClassifier(criterion=c, splitter=s, max_depth=md, min_samples_leaf=ms, max_features=f, class_weight=cw).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = tuned_DT.predict_proba(mini_dev_data)\n list_for_cs.append(this_c)\n list_for_ss.append(this_s)\n list_for_mds.append(this_md)\n list_for_mss.append(this_ms)\n list_for_cws.append(this_cw)\n list_for_fs.append(this_f)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with DT and c,s,md,ms,cw,f =\", c,\",\",s,\",\", md,\",\",ms,\",\",cw,\",\",f,\"is:\", working_log_loss)\n\ncriterion_tuning = ['gini', 'entropy']\nsplitter_tuning = ['best', 'random']\nmax_depth_tuning = ([None,6,7,8,9,10,11,12,13,14,15,16,17,18,19])\nmin_samples_leaf_tuning = [x + 1 for x in [i for i in range(0,int(0.091*len(mini_train_data)),100)]]\nclass_weight_tuning = [None, 'balanced']\nmax_features_tuning = ['auto', 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]\n\nstart = time.clock()\nfor this_c in criterion_tuning:\n for this_s in splitter_tuning:\n for this_md in max_depth_tuning:\n for this_ms in min_samples_leaf_tuning:\n for this_cw in class_weight_tuning:\n for this_f in max_features_tuning:\n DT_tuned(this_c, this_s, this_md, this_ms, this_cw, this_f)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For DT the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with criterion =', list_for_cs[index_best_logloss], 'splitter =', list_for_ss[index_best_logloss], 'max_depth =', list_for_mds[index_best_logloss], 'min_samples_leaf =', list_for_mss[index_best_logloss], 'class_weight =', list_for_cws[index_best_logloss], 'max_features 
=', list_for_fs[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds')",
"Model calibration:\nSee above",
"list_for_cs = []\nlist_for_ss = []\nlist_for_mds = []\nlist_for_mss = []\nlist_for_cws = []\nlist_for_fs = []\nlist_for_cms = []\nlist_for_log_loss = []\n\ndef DT_calibrated(c,s,md,ms,cw,f,cm):\n tuned_DT = DecisionTreeClassifier(criterion=c, splitter=s, max_depth=md, min_samples_leaf=ms, max_features=f, class_weight=cw).fit(mini_train_data, mini_train_labels)\n ccv = CalibratedClassifierCV(tuned_DT, method = cm, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_cs.append(this_c)\n list_for_ss.append(this_s)\n list_for_mds.append(this_md)\n list_for_mss.append(this_ms)\n list_for_cws.append(this_cw)\n list_for_fs.append(this_f)\n list_for_cms.append(this_cm)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n print(\"Multi-class Log Loss with DT and c,s,md,ms,cw,f =\", c,\",\",s,\",\", md,\",\",ms,\",\",cw,\",\",f,\",\",cm,\"is:\", working_log_loss)\n\ncriterion_tuning = ['gini', 'entropy']\nsplitter_tuning = ['best', 'random']\nmax_depth_tuning = ([None,6,7,8,9,10,11,12,13,14,15,16,17,18,19])\nmin_samples_leaf_tuning = [x + 1 for x in [i for i in range(0,int(0.091*len(mini_train_data)),100)]]\nclass_weight_tuning = [None, 'balanced']\nmax_features_tuning = ['auto', 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]\nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_c in criterion_tuning:\n for this_s in splitter_tuning:\n for this_md in max_depth_tuning:\n for this_ms in min_samples_leaf_tuning:\n for this_cw in class_weight_tuning:\n for this_f in max_features_tuning:\n for this_cm in methods:\n DT_calibrated(this_c, this_s, this_md, this_ms, this_cw, this_f, this_cm)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For DT the best log loss with hyperparameter tuning and calibration 
is',list_for_log_loss[index_best_logloss], 'with criterion =', list_for_cs[index_best_logloss], 'splitter =', list_for_ss[index_best_logloss], 'max_depth =', list_for_mds[index_best_logloss], 'min_samples_leaf =', list_for_mss[index_best_logloss], 'class_weight =', list_for_cws[index_best_logloss], 'max_features =', list_for_fs[index_best_logloss], 'and calibration method =', list_for_cms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds')",
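The deeply nested loops above enumerate a full Cartesian grid; itertools.product expresses the same enumeration flat, and lets us count the grid before launching an expensive search. A sketch with toy-sized grids:

```python
from itertools import product

criteria = ['gini', 'entropy']   # toy grids, much smaller than those above
splitters = ['best', 'random']
depths = [None, 6, 12]

grid = list(product(criteria, splitters, depths))
n_combos = len(grid)             # 2 * 2 * 3 = 12 combinations
```

For the real grids above, the same count runs into the tens of thousands, which is worth knowing before the loop starts.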
"Support Vector Machines (Kalvin)\nHyperparameter tuning:\nFor the SVM classifier, we can seek to optimize the following classifier parameters: C (penalty parameter C of the error term), kernel ('linear', 'poly', 'rbf', 'sigmoid', or 'precomputed')\nSee source [2] for parameter optimization in SVM\nModel calibration:\nSee above\nNeural Nets (Sarah)\nHyperparameter tuning:\nFor the Neural Networks MLP classifier, we can seek to optimize the following classifier parameters: hidden_layer_sizes, activation ('identity', 'logistic', 'tanh', 'relu'), solver ('lbfgs', 'sgd', 'adam'), alpha, learning_rate ('constant', 'invscaling', 'adaptive')",
"### All the work from Sarah's notebook:\n\nimport theano\nfrom theano import tensor as T\nfrom theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams\nprint (theano.config.device) # We're using CPUs (for now)\nprint (theano.config.floatX )# Should be 64 bit for CPUs\n\nnp.random.seed(0)\n\nfrom IPython.display import display, clear_output\n\nnumFeatures = train_data[1].size\nnumTrainExamples = train_data.shape[0]\nnumTestExamples = test_data.shape[0]\nprint ('Features = %d' %(numFeatures))\nprint ('Train set = %d' %(numTrainExamples))\nprint ('Test set = %d' %(numTestExamples))\n\nclass_labels = list(set(train_labels))\nprint(class_labels)\nnumClasses = len(class_labels)\n\n### Binarize the class labels\n\ndef binarizeY(data):\n binarized_data = np.zeros((data.size,39))\n for j in range(0,data.size):\n feature = data[j]\n i = class_labels.index(feature)\n binarized_data[j,i]=1\n return binarized_data\n\ntrain_labels_b = binarizeY(train_labels)\ntest_labels_b = binarizeY(test_labels)\nnumClasses = train_labels_b[1].size\n\nprint ('Classes = %d' %(numClasses))\nprint ('\\n', train_labels_b[:5, :], '\\n')\nprint (train_labels[:10], '\\n')\n\n###1) Parameters\nnumFeatures = train_data.shape[1]\n\nnumHiddenNodeslayer1 = 50\nnumHiddenNodeslayer2 = 30\n\nw_1 = theano.shared(np.asarray((np.random.randn(*(numFeatures, numHiddenNodeslayer1))*0.01)))\nw_2 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer1, numHiddenNodeslayer2))*0.01)))\nw_3 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer2, numClasses))*0.01)))\nparams = [w_1, w_2, w_3]\n\n\n###2) Model\nX = T.matrix()\nY = T.matrix()\n\nsrng = RandomStreams()\ndef dropout(X, p=0.):\n if p > 0:\n X *= srng.binomial(X.shape, p=1 - p)\n X /= 1 - p\n return X\n\ndef model(X, w_1, w_2, w_3, p_1, p_2, p_3):\n return T.nnet.softmax(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(X, p_1), w_1)),p_2), w_2)),p_3),w_3))\ny_hat_train = model(X, w_1, w_2, w_3, 0.2, 
0.5,0.5)\ny_hat_predict = model(X, w_1, w_2, w_3, 0., 0., 0.)\n\n### (3) Cost function\ncost = T.mean(T.nnet.categorical_crossentropy(y_hat_train, Y))\n\n### (4) Objective (and solver)\n\nalpha = 0.01\ndef backprop(cost, w):\n grads = T.grad(cost=cost, wrt=w)\n updates = []\n for wi, grad in zip(w, grads):\n updates.append([wi, wi - grad * alpha])\n return updates\n\nupdate = backprop(cost, params)\ntrain = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)\ny_pred = T.argmax(y_hat_predict, axis=1)\npredict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)\n\nminiBatchSize = 10\n\ndef gradientDescent(epochs):\n for i in range(epochs):\n for start, end in zip(range(0, len(train_data), miniBatchSize), range(miniBatchSize, len(train_data), miniBatchSize)):\n cc = train(train_data[start:end], train_labels_b[start:end])\n clear_output(wait=True)\n print ('%d) accuracy = %.4f' %(i+1, np.mean(np.argmax(test_labels_b, axis=1) == predict(test_data))) )\n\ngradientDescent(50)\n\n### How do we decide how many epochs to use? An epoch here is one full pass over the training data.\n### Plot the cost for each of the 50 iterations and see how much it declines: if it is still clearly decreasing,\n### run more iterations; if the curve is flattening out, you can stop",
"Model calibration:\nSee above\nRandom Forest (Sam, possibly in AWS)\nHyperparameter tuning:\nFor the Random Forest classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forest), max_features, max_depth, min_samples_leaf, bootstrap (whether or not bootstrap samples are used when building trees), oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)\nModel calibration:\nSee above\nMeta-estimators\nAdaBoost Classifier\nHyperparameter tuning:\nThere are no major changes that we seek to make in the AdaBoostClassifier with respect to default parameter values.\nAdaboosting each classifier:\nWe will run the AdaBoostClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.\nBagging Classifier\nHyperparameter tuning:\nFor the Bagging meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of base estimators in the ensemble), max_samples, max_features, bootstrap (whether or not bootstrap samples are used when building trees), bootstrap_features (whether features are drawn with replacement), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)\nBagging each classifier:\nWe will run the BaggingClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.\nGradient Boosting Classifier\nHyperparameter tuning:\nFor the Gradient Boosting meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of boosting stages), max_depth, min_samples_leaf, and max_features\nGradient Boosting each classifier:\nWe will run the GradientBoostingClassifier with loss = 'deviance' (as loss = 'exponential' uses the AdaBoost algorithm) on each different classifier from 
above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.\nFinal evaluation on test data",
"# Here we will likely use Pipeline and GridSearchCV in order to find the overall classifier with optimized Multi-class Log Loss.\n# This will be the last step after all attempts at feature addition, hyperparameter tuning, and calibration are completed\n# and the corresponding performance metrics are gathered.\n",
"References\n1) Hsiang, Solomon M. and Burke, Marshall and Miguel, Edward. \"Quantifying the Influence of Climate on Human Conflict\". Science, Vol 341, Issue 6151, 2013 \n2) Huang, Cheng-Lung. Wang, Chieh-Jen. \"A GA-based feature selection and parameters optimization for support vector machines\". Expert Systems with Applications, Vol 31, 2006, p 231-240\n3) https://gallery.cortanaintelligence.com/Experiment/Evaluating-and-Parameter-Tuning-a-Decision-Tree-Model-1",
"A = [n for n in range(1,21,1)]+([i for i in range(25,50,5)])\nprint(A)\n\nA = int(float(1.1))\nprint(A)\n\nprint([i for i in range(0,int(0.031*len(mini_train_data)),100)])\n\n# note: 1 + [1, 2, 3] raises a TypeError; both operands must be lists to concatenate\nA = [1] + [1, 2, 3]\nprint(A)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
andim/evolimmune
|
fig2/figure2.ipynb
|
mit
|
[
"Figure 2: Drawing a phase diagram of immune strategies\nThis notebook contains the data analysis and visualization code needed to reproduce Figure 2. The figure explores optimal immune strategies as a function of the characteristic time and frequency of pathogens.\nPrerequisite: produce numerical data\nTo generate the data needed for the following plot type\nmake run\n\nfollowed by\nmake agg\n\nThis will generate three data files phases.npz, pienvcut.npz and tauenvcut.npz which contain respectively numerical results about the position of various phase boundaries, optimal parameters along a cut for fixed $\\pi_{\\rm env}$, and optimal parameters along a cut for fixed $\\tau_{\\rm env}$.\nImport a number of packages that we will need in the following.",
"import sys\nsys.path.append('../lib/')\nimport numpy as np\nimport pandas as pd\npd.options.mode.chained_assignment = None\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm, lines, transforms, gridspec, ticker\n%matplotlib inline\nimport palettable\nimport shapely.geometry\nimport shapely.ops\nimport plotting\nimport evolimmune\nimport analysis\n%load_ext autoreload\n%autoreload 2\nplt.style.use(['paper'])\neps = 1e-8",
"Read in data about the position of phase boundaries.",
"df = analysis.loadnpz('data/phases.npz')",
"The following gives a summary of the data we just have read in. The columns with a single unique entries correspond to parameters set to a fixed value throughout the different optimizations.",
"analysis.intelligent_describe(df, nunique=10)",
"Deduce phases from boundaries between strategies",
"polygons = evolimmune.polygons_from_boundaries(df, yconv=evolimmune.to_tau)\nphases = evolimmune.phases_from_polygons(polygons)\ncomplete = polygons['complete']\nc = phases['c']\na = phases['a']\npa = phases['p']\none = phases['o']\ni = phases['i']\nmix = phases['m']\nqpos = shapely.ops.cascaded_union([c, mix, i])\nstrategies = [a, pa, one, i, mix, c]\nstrategies_s = ['a', 'p', 'o', 'i', 'm', 'c']\nstrategies_s_long = [r'adaptive', 'proto-\\nadaptive', r'innate', 'innate\\nbet\\nhedging', r'mixed', r'CRISPR-like']",
"Define some common plotting parameters",
"commontextkwargs = dict(ha='center', va='center')\nphaselabelkwargs = dict(fontsize='large')\nphaselabelkwargs.update(commontextkwargs)\nymargin = 0.05\nymin, ymax = evolimmune.to_tau(df.aenv.min()), evolimmune.to_tau(df.aenv.max())\npad = 0.25\nblack = matplotlib.rcParams['text.color']\ngrey = 'grey'\ncolors = np.asarray(palettable.colorbrewer.qualitative.Set3_6.mpl_colors)[[4, 0, 2, 3, 5, 1]]\nlinecolors = dict(zip(('pup', 'p', 'cconstitutive', 'q', 'tau', 'tau1'), palettable.colorbrewer.qualitative.Dark2_6.mpl_colors))",
"Define helper functions for plotting and labelling phases",
"def plot_phases(ax, ylimmax, patchkwargs=dict()):\n for i, s in enumerate(strategies):\n try:\n ax.add_patch(analysis.shapely_to_mpl(s, ec='None', fc=colors[i], **patchkwargs))\n except:\n pass\n ax.set_xlim(0, 1)\n ax.set_ylim(ymin, ymax)\n ax.set_yscale('log')\n plotting.despine(ax, spines='all')\n\nsx = dict(a=0.18, p=0.54, o=0.87, i=0.87, m=0.65, c=0.35)\nsy = dict(a=0.5, p=0.3, o=0.3, i=3.5, m=4.2, c=8.0)\ndef label_phases(ax, textkwargs):\n for i, s in enumerate(strategies):\n ax.text(sx[strategies_s[i]], sy[strategies_s[i]],\n plotting.latexboldmultiline(strategies_s_long[i]), **textkwargs)",
"Produce stand alone phase diagram (subfig A)",
"fig, ax = plt.subplots(ncols=1)\nplot_phases(ax, ymax)\nlabel_phases(ax, phaselabelkwargs)\nax.yaxis.set_major_formatter(ticker.ScalarFormatter())\nl = ax.set_xlabel(r'frequency $\\pi_{\\rm env} = \\alpha / (\\alpha+\\beta)$', verticalalignment='top')\nl.set_position((l.get_position()[0]-0.02, l.get_position()[1]))\nax.set_ylabel(r'characteristic time $\\tau_{\\rm env} = -1/\\ln(1-\\alpha-\\beta)$')\nfig.tight_layout()",
"Define function for plotting phase diagram of strategic choices",
"def boundaryplots(axes, ylimmax=1.0, ylimmin=0.0):\n baseboundarykwargs = dict(lw=1.0)\n boundarykwargs = dict(color=black)\n boundarykwargs.update(baseboundarykwargs)\n patchkwargs = dict(fill=None, linewidth=0.0, alpha=1.0)\n analysis.plot_interior_boundary(axes[0], pa, ylimmax=ylimmax, ylimmin=ylimmin, **boundarykwargs)\n analysis.plot_interior_boundary(axes[0], a, ylimmax=ylimmax, ylimmin=ylimmin, **boundarykwargs)\n textkwargs = dict(color=linecolors['cconstitutive'], transform=axes[0].transAxes, **commontextkwargs)\n axes[0].text(0.53, 0.2, r'{\\setlength{\\jot}{-2pt}\\begin{gather*}\\bm{c_{\\rm con.}}\\\\\\bm{<}\\\\\\bm{c_{\\rm def.}}\\end{gather*}}', **textkwargs)\n axes[0].text(0.2, 0.35, r'{\\setlength{\\jot}{-2pt}\\begin{gather*}\\bm{c_{\\rm con.}}\\\\\\bm{=}\\\\\\bf{min}\\end{gather*}}', **textkwargs)\n axes[0].text(0.67, 0.72, r'{\\setlength{\\jot}{-2pt}\\begin{gather*}\\bm{c_{\\rm constitutive}}\\\\\\bm{=}\\\\\\bm{c_{\\rm defense}}\\end{gather*}}', **textkwargs)\n\n analysis.plot_interior_boundary(axes[1], qpos, ylimmax=ylimmax, ylimmin=ylimmin, **boundarykwargs)\n textkwargs = dict(fontsize='medium', color=linecolors['q'], transform=axes[1].transAxes, **commontextkwargs)\n axes[1].text(0.67, 0.82, r'$\\bm{q > 0}$', **textkwargs)\n axes[1].text(0.37, 0.23, r'$\\bm{q = 0}$', **textkwargs)\n \n analysis.plot_interior_boundary(axes[2], c, ylimmax=ylimmax, ylimmin=ylimmin, color=black, **baseboundarykwargs)\n analysis.plot_interior_boundary(axes[2], shapely.ops.cascaded_union((c, mix)), ylimmax=ylimmax, ylimmin=ylimmin, color=black, **baseboundarykwargs)\n text = axes[2].text(0.5, 0.2, r'$\\bm{p > 0}$', fontsize='large', color=linecolors['p'], transform=axes[2].transAxes, **commontextkwargs)\n textbox = plotting.box_from_text(text)\n axes[2].add_patch(analysis.shapely_to_mpl(complete-c-shapely.geometry.box(*textbox.flatten()),\n hatch=r'////', ec=linecolors['p'], **patchkwargs))\n text = axes[2].text(0.34, 0.89, r'$\\bm{p_{\\rm uptake} > 0}$', 
color=linecolors['pup'], transform=axes[2].transAxes, **commontextkwargs)\n textbox = plotting.box_from_text(text)\n axes[2].add_patch(analysis.shapely_to_mpl(shapely.ops.cascaded_union((c, mix))-shapely.geometry.box(*textbox.flatten()),\n hatch=r'\\\\\\\\', ec=linecolors['pup'], **patchkwargs))",
"Produce stand alone phase diagram of strategic choices (subfig B)",
"fig, axes = plt.subplots(ncols=3, figsize=(4.0, 1.5))\nfor ax in axes:\n plot_phases(ax, ymax, patchkwargs=dict(alpha=0.3))\n ax.set_xticks([])\n ax.set_yticks([])\nboundaryplots(axes, ylimmax=ymax, ylimmin=ymin)\nfig.tight_layout()",
"Define functions to plot optimal parameter values along cuts of constant $\\pi_{\\rm env}$ or $\\tau_{\\rm env}$.",
"def cutaxes(ax, label, color=black, labelkwargs=dict(), ymax=1.0, twin=False, yticklabels=None):\n plotting.despine(ax, spines='all')\n #yticks = [0.0, ymax/2, ymax]\n yticks = [0.0, ymax]\n ax.set_yticks(yticks)\n if yticklabels is None:\n yticklabels = ['%g' % yticks[0], '', '%g' % yticks[-1]]\n yticklabels = [yticklabels[0], yticklabels[-1]]\n ax.set_yticklabels(yticklabels)\n # add an invisible point to fool margin calculation\n ax.plot([0.0], [ymax], alpha=0.0)\n if twin:\n x = 1.05\n else:\n x = -.08\n ax.text(x, 0.5, label, color=color, transform=ax.transAxes, **ylabelkwargs)\n ax.margins(y=ymargin)\n for tl in ax.get_yticklabels():\n tl.set_color(color)\n return ax\n\ndef shade_according_to_phase(ax, phases, colors, x=None, y=None, transform=None, vspankwargs=dict()):\n if transform is None:\n transform = lambda val: val\n if x is not None:\n line = shapely.geometry.LineString(((x, 0), (x, 100)))\n if y is not None:\n line = shapely.geometry.LineString(((0, y), (1, y)))\n for i, phase in enumerate(phases):\n intersections = phase.boundary.intersection(line)\n if isinstance(intersections, shapely.geometry.MultiPoint):\n x0, y0 = intersections[0].xy\n x0, y0 = x0[0], y0[0]\n x1, y1 = intersections[1].xy\n x1, y1 = x1[0], y1[0]\n if x is not None:\n from_, to = y0, y1\n if y is not None:\n from_, to = x0, x1\n ax.axvspan(transform(from_), transform(to), color=colors[i], **vspankwargs)",
"Read in data for cuts",
"dftauenvcut = analysis.loadnpz('data/tauenvcut.npz')\ndftauenvcut.p = pd.to_numeric(dftauenvcut.p, errors='coerce')\nevolimmune.derived_quantities(dftauenvcut)\ndftauenvcut_dict = dict((aenv, dfg) for aenv, dfg in dftauenvcut.groupby('aenv'))\naenvcuts = [key for key in dftauenvcut_dict][::-1]\ndfpienvcut = analysis.loadnpz('data/pienvcut.npz')\ndfpienvcut.p = pd.to_numeric(dfpienvcut.p, errors='coerce')\nevolimmune.derived_quantities(dfpienvcut)\n\ndfpienvcut = dfpienvcut[dfpienvcut.pi<0.9999]\naxtau = plt.gca()\naxtau.plot(dfpienvcut.tauenv, dfpienvcut.tau1, '-', c=linecolors['tau1'], label='present')\naxtau.plot(dfpienvcut.tauenv, dfpienvcut.tau, '-', c=linecolors['tau'], label='absent')\naxtau.legend(loc='best', title='pathogen')\nplotting.despine(axtau)\naxtau.set_xlabel(r'$\\tau_{\\rm env}$')\naxtau.set_ylabel(r'$\\tau$')",
"Put it all together and produce the final figure",
"fig = plt.figure(figsize=(7.0, 4.5))\ngsleft = gridspec.GridSpec(2, 3, height_ratios=[2.9, 1]) \n\naxphase = fig.add_subplot(gsleft[0, :])\nplot_phases(axphase, ymax)\nlabel_phases(axphase, phaselabelkwargs)\naxphase.yaxis.set_major_formatter(ticker.ScalarFormatter())\nl = axphase.set_xlabel(r'frequency $\\pi_{\\rm env} = \\alpha / (\\alpha+\\beta)$', verticalalignment='top')\nl.set_position((l.get_position()[0]-0.02, l.get_position()[1]))\naxphase.set_ylabel(r'characteristic time $\\tau_{\\rm env} = -1/\\ln(1-\\alpha-\\beta)$')\n\nsmallaxes = []\nfor counter in range(3):\n ax = fig.add_subplot(gsleft[1, counter])\n smallaxes.append(ax)\nfor ax in smallaxes:\n plot_phases(ax, ymax, patchkwargs=dict(alpha=0.5))\n ax.set_xticks([])\n ax.set_yticks([])\nsmallaxes[0].set_title('adaptability')\nsmallaxes[1].set_title('heritability')\nsmallaxes[2].set_title('acquisition mode')\n# layout needs to be done before boundaryplots are drawn in order for cutouts to work\ngsleft.tight_layout(fig, rect=(0, 0, 0.62, 1), h_pad=1.0, pad=pad, w_pad=0.5)\nboundaryplots(smallaxes, ylimmax=ymax, ylimmin=ymin)\n\nplotkwargs = dict()\nvspankwargs = dict(alpha=0.5, ec='none')\nylabelkwargs = dict(rotation=90, ha='center', va='center', fontsize='medium')\nswitchingmax = 0.1\n\ngsright = gridspec.GridSpec(2, 1) \n\n#### upper constant characteristic time cut panel ####\npienvcut = 0.7\ndftauenvcut_sub = dftauenvcut_dict[aenvcuts[0]]\ndftauenvcut_sub.sort_values(by='pienv', inplace=True)\naxuppercut = fig.add_subplot(gsright[0])\nshade_according_to_phase(axuppercut, strategies, colors, y=evolimmune.to_tau(aenvcuts[0]), vspankwargs=vspankwargs)\ncutaxes(axuppercut, r'$p_{\\rm uptake}$', color=linecolors['pup'], labelkwargs=ylabelkwargs,\n ymax=switchingmax, twin=False)\naxuppercut.plot(dftauenvcut_sub.pienv, dftauenvcut_sub.pup, '-', c=linecolors['pup'], **plotkwargs)\naxuppercut.set_xlabel('')\naxuppercut.grid(axis='y')\nplt.setp(axuppercut.get_xticklabels(), visible=False)\naxuppercuttwin = 
cutaxes(axuppercut.twinx(), r'$p$', color=linecolors['p'], labelkwargs=ylabelkwargs, ymax=switchingmax, twin=True)\naxuppercuttwin.plot(dftauenvcut_sub.pienv, dftauenvcut_sub.p, '-', c=linecolors['p'], **plotkwargs)\n\n#### lower constant characteristic time cut panel ####\ndftauenvcut_sub2 = dftauenvcut_dict[aenvcuts[1]]\ndftauenvcut_sub2.sort_values(by='pienv', inplace=True)\naxlowercut = fig.add_subplot(gsright[1], sharex=axuppercut)\ncutaxes(axlowercut, r'$c_{\\rm constitutive}$', color=linecolors['cconstitutive'], labelkwargs=ylabelkwargs,\n ymax=1.0, twin=False, yticklabels=['min', '', 'max'])\nx, y = plotting.jumpify(dftauenvcut_sub2.pienv, dftauenvcut_sub2.cconstitutive, threshold=0.2)\naxlowercut.plot(x, y, '-', c=linecolors['cconstitutive'], **plotkwargs)\naxlowercut.set_xlabel(r'$\\pi_{\\rm env}$')\naxlowercut.margins(y=ymargin)\naxlowercuttwin = cutaxes(axlowercut.twinx(), r'$q$', color=linecolors['q'], labelkwargs=ylabelkwargs, ymax=switchingmax, twin=True)\nshade_according_to_phase(axlowercuttwin, strategies, colors, y=evolimmune.to_tau(aenvcuts[1]), vspankwargs=vspankwargs)\naxlowercuttwin.grid(axis='y')\nx, y = plotting.jumpify(dftauenvcut_sub2.pienv, dftauenvcut_sub2.q, threshold=0.02)\naxlowercuttwin.plot(x, y, '-', c=linecolors['q'], **plotkwargs)\n# cconstitutive line on top of q line\naxlowercut.set_zorder(axlowercuttwin.get_zorder()+1)\naxlowercut.patch.set_visible(False)\n\n#### frequency cut panel ####\ndfpienvcut.sort_values(by='tauenv', inplace=True)\ndfpienvcut = dfpienvcut[dfpienvcut.pi<0.9999]\ngstau = gridspec.GridSpec(1, 1) \naxtau = fig.add_subplot(gstau[0])\naxtau.plot(dfpienvcut.tauenv, dfpienvcut.tau1, '-', c=linecolors['tau1'], label='present')\naxtau.plot(dfpienvcut.tauenv, dfpienvcut.tau, '-', c=linecolors['tau'], label='absent')\nlims = [0, 8]\naxtau.set_xlim(*lims)\nw = (lims[1]-lims[0])*0.5\naxtau.set_ylim(lims[0]-ymargin*w, lims[1]+ymargin*w)\naxtau.set_xlabel(r'$\\tau_{\\rm 
env}$')\naxtau.set_ylabel(r'$\\tau$')\naxtau.legend(loc='best', title='pathogen')\naxtau.locator_params(nbins=6)\nshade_according_to_phase(axtau, strategies, colors, x=pienvcut, vspankwargs=vspankwargs)\nplotting.despine(axtau, spines='all')\n\n#### make link to cuts ####\n# plot lines corresponding to plots on the right\nvlinekwargs=dict(color=grey, alpha=0.4, lw=0.8)\naxphase.vlines([pienvcut], [0], [lims[1]], **vlinekwargs)\nfor aenv in aenvcuts:\n axphase.axhline(evolimmune.to_tau(aenv), **vlinekwargs)\noffset = -.15\narrowprops = dict(edgecolor=grey, facecolor='w', arrowstyle='-|>',\n mutation_scale=8, shrinkA=0, clip_on=False)\ntransformB = transforms.blended_transform_factory(axlowercut.transAxes, axphase.transData)\naxphase.annotate(\"\", xy=(1.0, evolimmune.to_tau(aenvcuts[0])), xycoords=('axes fraction', 'data'),\n xytext=(offset, evolimmune.to_tau(aenvcuts[0])), textcoords=transformB,\n arrowprops=arrowprops)\ntransformB = transforms.blended_transform_factory(axuppercut.transAxes, axphase.transData)\naxphase.annotate(\"\", xy=(1.0, evolimmune.to_tau(aenvcuts[1])), xycoords=('axes fraction', 'data'),\n xytext=(-0.15, evolimmune.to_tau(aenvcuts[1])), textcoords=transformB,\n arrowprops=arrowprops)\ntransformB = transforms.blended_transform_factory(axtau.transAxes, axphase.transAxes)\naxphase.annotate(\"\", xy=(pienvcut, 0.0), xycoords=('data', 'axes fraction'),\n xytext=(-0.15, -0.08), textcoords=transformB,\n arrowprops=dict(connectionstyle='angle,angleA=180,angleB=-90,rad=0', **arrowprops))\nfor ax in [axlowercut, axuppercut, axtau]:\n line = lines.Line2D([offset, offset], [0.0, 1.0], color=grey, transform=ax.transAxes,\n linewidth=matplotlib.rcParams['patch.linewidth'], clip_on=False)\n ax.add_line(line)\n\n#### overall layout ####\ngsright.tight_layout(fig, rect=(0.66, 0.45, 1.0, 1), h_pad=0.5, pad=pad)\ngstau.tight_layout(fig, rect=(0.66, 0.0, 0.975, 0.43), pad=pad)\nsubfiglabelkwargs = dict(fontsize='large', verticalalignment='top', 
horizontalalignment='left')\naxphase.text(0.005, 1.0, r'\\textbf{A}',\n transform=transforms.blended_transform_factory(fig.transFigure, axphase.transAxes),\n **subfiglabelkwargs)\nsmallaxes[0].text(0.005, 1.15, r'\\textbf{B}',\n transform=transforms.blended_transform_factory(fig.transFigure, smallaxes[0].transAxes),\n **subfiglabelkwargs)#\naxuppercut.text(0.63, 1.0, r'\\textbf{C}',\n transform=transforms.blended_transform_factory(fig.transFigure, axuppercut.transAxes),\n **subfiglabelkwargs)\naxuppercut.text(0.63, 1.0, r'\\textbf{D}',\n transform=transforms.blended_transform_factory(fig.transFigure, axlowercut.transAxes),\n **subfiglabelkwargs)\naxtau.text(0.63, 1.0, r'\\textbf{E}',\n transform=transforms.blended_transform_factory(fig.transFigure, axtau.transAxes),\n **subfiglabelkwargs)\n\nfig.savefig('figure2.pdf')\nfig.savefig('figure2.svg')",
"Optimal immune strategies as a function of the frequency and characteristic time of pathogens.\n(A) Distinct optimal immune strategies emerge for different statistics of appearance of the pathogens. Each phase is characterized by the value of parameters indicated in panel B and named after a known immune system that has similar characteristics (the name 'adaptive' is after the vertebrate immune system).\n(B) The different phases of immunity are defined by the values of parameters along three main axes: adaptability (constitutive cost $c_{\\rm constitutive}$), heritability ($1-q$) and mode of acquisition ($p$ and $p_{\\rm uptake}$). \n(C) and (D) Optimal parameters as a function of $\\pi_{\\rm env}$ for $\\tau_{\\rm env} = 12$ (C) and $\\tau_{\\rm env} = 0.8$ (D). For slowly varying environments (C), rare pathogens are best targeted by CRISPR-like uptake of protection, while frequent pathogens are best dealt with by spontaneous acquisition of protection, with a crossover in-between where both co-exist. For faster varying environments (D), the constitutive cost invested in the protection goes from negligible to maximal as the pathogen frequency increases. When it is maximal, the best strategy transitions from bet-hedging ($q>0$) to a full protection of the population ($q=0$).\n(E) The correlation times of protection in absence of the pathogen, $\\tau = -1/\\ln(1 - p - q)$, and in its presence, $\\tau = -1/\\ln(1 - p - p_{\\rm uptake} - q)$, are shown for $\\pi_{\\rm env} = 0.7$ as a function of $\\tau_{\\rm env}$. Both increase with the correlation time of the pathogen. In this figure, an infinite population size is assumed and the following parameters are used: $c_{\\rm infection} = 3; \\; c_{\\rm constitutive} = \\left(1.8 - c_{\\rm defense}\\right) / \\left(c_{\\rm defense} - 0.2\\right); \\; c_{\\rm uptake}(p_{\\rm uptake}) = 0.1\\times p_{\\rm uptake} + p_{\\rm uptake}^2$ (see Fig. S2 for other choices)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mdiaz236/DeepLearningFoundations
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
mit
|
[
"TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)",
"import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n words = list(set(text))\n vocab_to_int = {word: i for (i, word) in enumerate(words)}\n int_to_vocab = {i: word for (i, word) in enumerate(words)}\n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n return {\n '.': '||period||',\n ',': '||comma||',\n '\"': '||quotation_mark||',\n ';': '||semicolon||',\n '!': '||exclamation_mark||',\n '?': '||question_mark||',\n '(': '||left_parentheses||',\n ')': '||right_parentheses||',\n '--': '||dash||',\n '\\n': '||return||'\n }\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)",
"def get_inputs():\n    \"\"\"\n    Create TF Placeholders for input, targets, and learning rate.\n    :return: Tuple (input, targets, learning rate)\n    \"\"\"\n    input_placeholder = tf.placeholder(tf.int32, [None, None], name='input')\n    targets_placeholder = tf.placeholder(tf.int32, [None, None], name='targets')\n    learning_rate_placeholder = tf.placeholder(tf.float32, name='learning_rate')\n    return input_placeholder, targets_placeholder, learning_rate_placeholder\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The RNN size should be set using rnn_size\n- Initialize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)",
"def get_init_cell(batch_size, rnn_size):\n    \"\"\"\n    Create an RNN Cell and initialize it.\n    :param batch_size: Size of batches\n    :param rnn_size: Size of RNNs\n    :return: Tuple (cell, initial state)\n    \"\"\"\n    lstm_layers = 2\n    # Build a separate BasicLSTMCell per layer so the layers do not share weights.\n    multi = tf.contrib.rnn.MultiRNNCell(\n        [tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) for _ in range(lstm_layers)])\n    initial_state = multi.zero_state(batch_size, tf.float32)\n    initial_state = tf.identity(initial_state, \"initial_state\")\n    return multi, initial_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.",
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
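Under the hood, tf.nn.embedding_lookup is just row indexing into the embedding matrix. A plain-NumPy sketch (with made-up values, purely for illustration) makes that concrete:

```python
import numpy as np

vocab_size, embed_dim = 5, 3
# A toy embedding matrix: row i is the vector for word id i.
embedding = np.arange(vocab_size * embed_dim, dtype=float).reshape(vocab_size, embed_dim)
input_data = np.array([[0, 2], [4, 1]])  # a batch of word-id sequences

# Equivalent of tf.nn.embedding_lookup(embedding, input_data):
embed = embedding[input_data]
print(embed.shape)  # (2, 2, 3): batch, sequence, embedding dimension
```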
"Build RNN\nYou created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.\n- Build the RNN using tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final state in the following tuple (Outputs, FinalState)",
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n final_state = tf.identity(final_state, \"final_state\")\n return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)",
"def build_nn(cell, rnn_size, input_data, vocab_size):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :return: Tuple (Logits, FinalState)\n \"\"\"\n embed_data = get_embed(input_data, vocab_size, rnn_size)\n outputs, final_state = build_rnn(cell, embed_data)\n logits = tf.contrib.layers.fully_connected(\n outputs, num_outputs=vocab_size,\n weights_initializer = tf.contrib.layers.xavier_initializer(\n uniform=False, seed=None, dtype=tf.float32), \n activation_fn=None)\n return logits, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n  # First Batch\n  [\n    # Batch of Input\n    [[ 1  2  3], [ 7  8  9]],\n    # Batch of targets\n    [[ 2  3  4], [ 8  9 10]]\n  ],\n# Second Batch\n  [\n    # Batch of Input\n    [[ 4  5  6], [10 11 12]],\n    # Batch of targets\n    [[ 5  6  7], [11 12 13]]\n  ]\n]\n```",
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n n_batches = int(len(int_text) / (batch_size * seq_length))\n\n # Drop the last few characters to make only full batches\n xdata = np.array(int_text[: n_batches * batch_size * seq_length])\n ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])\n\n x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)\n y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)\n\n return np.array(list(zip(x_batches, y_batches)))\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
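To check that the reshape-and-split scheme matches the worked example above, the same logic can be run standalone in NumPy (`batch_sketch` is a hypothetical mirror of get_batches, not the graded implementation):

```python
import numpy as np

def batch_sketch(int_text, batch_size, seq_length):
    # Same reshape/split idea as get_batches: keep only full batches,
    # shift the targets by one word, then split along the time axis.
    n_batches = len(int_text) // (batch_size * seq_length)
    xdata = np.array(int_text[:n_batches * batch_size * seq_length])
    ydata = np.array(int_text[1:n_batches * batch_size * seq_length + 1])
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
    return np.array(list(zip(x_batches, y_batches)))

demo = batch_sketch(list(range(1, 16)), 2, 3)
print(demo.shape)    # (2, 2, 2, 3)
print(demo[0][0])    # [[1 2 3], [7 8 9]], matching the worked example
```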
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches between progress printouts.",
"# Number of Epochs\nnum_epochs = 50\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 1000\n# Sequence Length\nseq_length = 10\n# Learning Rate\nlearning_rate = .01\n# Show stats for every n number of batches\nshow_every_n_batches = 13\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Save Parameters\nSave seq_length and save_dir for generating a new TV script.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)",
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n InputTensor = loaded_graph.get_tensor_by_name(\"input:0\")\n InitialStateTensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n FinalStateTensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n ProbsTensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n \n return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Choose Word\nImplement the pick_word() function to select the next word using probabilities.",
"def pick_word(probabilities, int_to_vocab):\n    \"\"\"\n    Pick the next word in the generated text\n    :param probabilities: Probabilities of the next word\n    :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n    :return: String of the predicted word\n    \"\"\"\n    # Greedily pick the word id with the highest predicted probability.\n    return int_to_vocab[np.argmax(probabilities)]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
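The greedy argmax choice above tends to produce repetitive scripts. A common alternative, sketched here (not part of the graded solution), is to sample the next word in proportion to its predicted probability:

```python
import numpy as np

def sample_word(probabilities, int_to_vocab):
    # Draw a word id at random, weighted by the network's probabilities.
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]

vocab = {0: 'homer', 1: 'moe', 2: 'barney'}
print(sample_word(np.array([0.0, 1.0, 0.0]), vocab))  # always 'moe' here
```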
"Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.",
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned at the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DS-100/sp17-materials
|
sp17/hw/hw2/hw2_solution.ipynb
|
gpl-3.0
|
[
"Homework 2: Language in the 2016 Presidential Election\nPopular figures often have help managing their media presence. In the 2016 election, Twitter was an important communication medium for every major candidate. Many Twitter posts posted by the top two candidates were actually written by their aides. You might wonder how this affected the content or language of the tweets.\nIn this assignment, we'll look at some of the patterns in tweets by the top two candidates, Clinton and Trump. We'll start with Clinton.\nAlong the way, you'll get a first look at Pandas. Pandas is a Python package that provides a DataFrame data structure similar to the datascience package's Table, which you might remember from Data 8. DataFrames are a bit harder to use than Tables, but they provide more advanced functionality and are a standard tool for data analysis in Python.\nSome of the analysis in this assignment is based on a post by David Robinson. Feel free to read the post, but do not copy from it! David's post is written in the R programming language, which is a favorite of many data analysts, especially academic statisticians. Once you're done with your analysis, you may find it interesting to see whether R is easier to use for this task.\nTo start the assignment, run the cell below to set up some imports and the automatic tests.",
"import math\nimport numpy as np\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n!pip install -U okpy\nfrom client.api.notebook import Notebook\nok = Notebook('hw2.ok')",
"Getting the dataset\nSince we'll be looking at Twitter data, we need to download the data from Twitter!\nTwitter provides an API for downloading tweet data in large batches. The tweepy package makes it fairly easy to use.\nQuestion 0\nInstall tweepy, if you don't already have it. (Be sure to activate your Conda environment for the class first. Then run pip install tweepy.)\nThere are instructions on using tweepy here, but we will give you example code.\nTwitter requires you to have authentication keys to access their API. To get your keys, you'll have to sign up as a Twitter developer.\nQuestion 1\nFollow these instructions to get your keys:\n\nCreate a Twitter account. You can use an existing account if you have one.\nUnder account settings, add your phone number to the account.\nCreate a Twitter developer account. Attach it to your Twitter account.\nOnce you're logged into your developer account, create an application for this assignment. You can call it whatever you want, and you can write any URL when it asks for a web site.\nOn the page for that application, find your Consumer Key and Consumer Secret.\nOn the same page, create an Access Token. Record the resulting Access Token and Access Token Secret.\nEdit the file keys.json and replace the placeholders with your keys. Don't turn in that file.\n\nI AM AN IMPORTANT NOTE. DO NOT SKIP ME.\nIf someone has your authentication keys, they can access your Twitter account and post as you! So don't give them to anyone, and don't write them down in this notebook. The usual way to store sensitive information like this is to put it in a separate file and read it programmatically. That way, you can share the rest of your code without sharing your keys. That's why we're asking you to put your keys in keys.json for this assignment.\nI AM A SECOND IMPORTANT NOTE.\nTwitter limits developers to a certain rate of requests for data. 
If you make too many requests in a short period of time, you'll have to wait awhile (around 15 minutes) before you can make more. So carefully follow the code examples you see and don't rerun cells without thinking. Instead, always save the data you've collected to a file. We've provided templates to help you do that.\nIn the example below, we have loaded some tweets by @BerkeleyData. Run it, inspect the output, and read the code.",
"ds_tweets_save_path = \"BerkeleyData_recent_tweets.pkl\"\nfrom pathlib import Path\n# Guarding against attempts to download the data multiple\n# times:\nif not Path(ds_tweets_save_path).is_file():\n import json\n # Loading your keys from keys.json (which you should have filled\n # in in question 1):\n with open(\"keys.json\") as f:\n keys = json.load(f)\n\n import tweepy\n\n # Authenticating:\n auth = tweepy.OAuthHandler(keys[\"consumer_key\"], keys[\"consumer_secret\"])\n auth.set_access_token(keys[\"access_token\"], keys[\"access_token_secret\"])\n api = tweepy.API(auth)\n\n # Getting as many recent tweets by @BerkeleyData as Twitter will let us have:\n example_tweets = list(tweepy.Cursor(api.user_timeline, id=\"BerkeleyData\").items())\n\n # Saving the tweets to a file as \"pickled\" objects:\n with open(ds_tweets_save_path, \"wb\") as f:\n import pickle\n pickle.dump(example_tweets, f)\n\n# Re-loading the results:\nwith open(ds_tweets_save_path, \"rb\") as f:\n import pickle\n example_tweets = pickle.load(f)\n\n# Looking at one tweet object, which has type Status:\nexample_tweets[0]\n# You can try something like this:\n# import pprint; pprint.pprint(vars(example_tweets[0]))\n# ...to get a more easily-readable view.",
"Question 2\nWrite code to download all the recent tweets by Hillary Clinton (@HillaryClinton). Follow our example code if you wish. Write your code in the form of four functions matching the documentation provided. (You may define additional functions as helpers.) Once you've written your functions, you can run the subsequent cell to download the tweets.",
"def load_keys(path):\n \"\"\"Loads your Twitter authentication keys from a file on disk.\n \n Args:\n path (str): The path to your key file. The file should\n be in JSON format and look like this (but filled in):\n {\n \"consumer_key\": \"<your Consumer Key here>\",\n \"consumer_secret\": \"<your Consumer Secret here>\",\n \"access_token\": \"<your Access Token here>\",\n \"access_token_secret\": \"<your Access Token Secret here>\"\n }\n \n Returns:\n dict: A dictionary mapping key names (like \"consumer_key\") to\n key values.\"\"\"\n import json\n with open(path) as f:\n return json.load(f)\n\ndef download_recent_tweets_by_user(user_account_name, keys):\n \"\"\"Downloads tweets by one Twitter user.\n\n Args:\n user_account_name (str): The name of the Twitter account\n whose tweets will be downloaded.\n keys (dict): A Python dictionary with Twitter authentication\n keys (strings), like this (but filled in):\n {\n \"consumer_key\": \"<your Consumer Key here>\",\n \"consumer_secret\": \"<your Consumer Secret here>\",\n \"access_token\": \"<your Access Token here>\",\n \"access_token_secret\": \"<your Access Token Secret here>\"\n }\n\n Returns:\n list: A list of Status objects, each representing one tweet.\"\"\"\n import tweepy\n\n # Authenticating:\n auth = tweepy.OAuthHandler(keys[\"consumer_key\"], keys[\"consumer_secret\"])\n auth.set_access_token(keys[\"access_token\"], keys[\"access_token_secret\"])\n api = tweepy.API(auth)\n\n return list(tweepy.Cursor(api.user_timeline, id=user_account_name).items())\n\ndef save_tweets(tweets, path):\n \"\"\"Saves a list of tweets to a file in the local filesystem.\n \n This function makes no guarantee about the format of the saved\n tweets, **except** that calling load_tweets(path) after\n save_tweets(tweets, path) will produce the same list of tweets\n and that only the file at the given path is used to store the\n tweets. 
(That means you can implement this function however\n you want, as long as saving and loading works!)\n\n Args:\n tweets (list): A list of tweet objects (of type Status) to\n be saved.\n path (str): The place where the tweets will be saved.\n\n Returns:\n None\"\"\"\n with open(path, \"wb\") as f:\n import pickle\n pickle.dump(tweets, f)\n \ndef load_tweets(path):\n \"\"\"Loads tweets that have previously been saved.\n \n Calling load_tweets(path) after save_tweets(tweets, path)\n will produce the same list of tweets.\n \n Args:\n path (str): The place where the tweets were be saved.\n\n Returns:\n list: A list of Status objects, each representing one tweet.\"\"\"\n with open(path, \"rb\") as f:\n import pickle\n return pickle.load(f)\n\n# When you are done, run this cell to load @HillaryClinton's tweets.\n# Note the function get_tweets_with_cache. You may find it useful\n# later.\n\ndef get_tweets_with_cache(user_account_name, keys_path):\n \"\"\"Get recent tweets from one user, loading from a disk cache if available.\n \n The first time you call this function, it will download tweets by\n a user. 
Subsequent calls will not re-download the tweets; instead\n they'll load the tweets from a save file in your local filesystem.\n All this is done using the functions you defined in the previous cell.\n This has benefits and drawbacks that often appear when you cache data:\n \n +: Using this function will prevent extraneous usage of the Twitter API.\n +: You will get your data much faster after the first time it's called.\n -: If you really want to re-download the tweets (say, to get newer ones,\n or because you screwed up something in the previous cell and your\n tweets aren't what you wanted), you'll have to find the save file\n (which will look like <something>_recent_tweets.pkl) and delete it.\n \n Args:\n user_account_name (str): The Twitter handle of a user, without the @.\n keys_path (str): The path to a JSON keys file in your filesystem.\n \"\"\"\n save_path = \"{}_recent_tweets.pkl\".format(user_account_name)\n from pathlib import Path\n if not Path(save_path).is_file():\n keys = load_keys(keys_path)\n tweets = download_recent_tweets_by_user(user_account_name, keys)\n save_tweets(tweets, save_path)\n return load_tweets(save_path)\n\nclinton_tweets = get_tweets_with_cache(\"HillaryClinton\", \"keys.json\")\n\n# If everything is working properly, this should print out\n# a Status object (a single tweet). clinton_tweets should\n# contain around 3000 tweets.\nclinton_tweets[0]\n\n_ = ok.grade('q02')\n_ = ok.backup()",
"Exploring the dataset\nTwitter gives us a lot of information about each tweet, not just its text. You can read the full documentation here. Look at one tweet to get a sense of the information we have available.\nQuestion 3\nWhich fields contain:\n1. the actual text of a tweet,\n2. the time when the tweet was posted, and\n3. the source (device and app) from which the tweet was posted?\nTo answer the question, write functions that extract each field from a tweet. (Each one should take a single Status object as its argument.)",
"def extract_text(tweet):\n return tweet.text #SOLUTION\n\ndef extract_time(tweet):\n return tweet.created_at #SOLUTION\n\ndef extract_source(tweet):\n return tweet.source #SOLUTION\n\n_ = ok.grade('q03')\n_ = ok.backup()",
"Question 4\nAre there any other fields you think might be useful in identifying the true author of an @HillaryClinton tweet? (If you're reading the documentation, consider whether fields are actually present often enough in the data to be useful.)\nSOLUTION: Some possible answers: retweet_count or favorite_count might be useful if we think tweets by the candidate herself are retweeted or favorited more often. coordinates might be useful if we can identify some pattern in the aides' or candidate's locations (for example, if the aides always tweet from the same campaign office building, which Hillary rarely visits). quoted_status might be useful if aides are more likely to quote other tweets than the candidate herself.\nBuilding a Pandas table\nJSON (and the Status object, which is just Tweepy's translation of the JSON produced by the Twitter API to a Python object) is nice for transmitting data, but it's not ideal for analysis. The data will be easier to work with if we put them in a table.\nTo create an empty table in Pandas, write:",
"import pandas as pd\ndf = pd.DataFrame()",
"(pd is the standard abbreviation for Pandas.)\nNow let's make a table with useful information in it. To add a column to a DataFrame called df, write:\ndf['column_name'] = some_list_or_array\n\n(This page is a useful reference for many of the basic operations in Pandas. You don't need to read it now, but it might be helpful if you get stuck.)\nQuestion 5\nWrite a function called make_dataframe. It should take as its argument a list of tweets like clinton_tweets and return a Pandas DataFrame. The DataFrame should contain columns for all the fields in question 3 and any fields you listed in question 4. Use the field names as the names of the corresponding columns.",
"def make_dataframe(tweets):\n \"\"\"Make a DataFrame from a list of tweets, with a few relevant fields.\n \n Args:\n tweets (list): A list of tweets, each one a Status object.\n \n Returns:\n DataFrame: A Pandas DataFrame containing one row for each element\n of tweets and one column for each relevant field.\"\"\"\n df = pd.DataFrame() #SOLUTION\n df['text'] = [extract_text(t) for t in tweets] #SOLUTION\n df['created_at'] = [extract_time(t) for t in tweets] #SOLUTION\n df['source'] = [extract_source(t) for t in tweets] #SOLUTION\n return df",
"Now you can run the next line to make your DataFrame.",
"clinton_df = make_dataframe(clinton_tweets)\n\n# The next line causes Pandas to display all the characters\n# from each tweet when the table is printed, for more\n# convenient reading. Comment it out if you don't want it.\npd.set_option('display.max_colwidth', 150)\n\nclinton_df.head()\n\n_ = ok.grade('q05')\n_ = ok.backup()",
"Tweetsourcing\nNow that the preliminaries are done, we can do what we set out to do: Try to distinguish between Clinton's own tweets and her aides'.\nQuestion 6\nCreate a plot showing how many tweets came from each kind of source. For a real challenge, try using the Pandas documentation and Google to figure out how to do this. Otherwise, hints are provided.\nHint: Start by grouping the data by source. df['source'].value_counts() will create an object called a Series (which is like a table that contains exactly 2 columns, where one column is called the index). You can create a version of that Series that's sorted by source (in this case, in alphabetical order) by calling sort_index() on it.\nHint 2: To generate a bar plot from a Series s, call s.plot.barh(). You can also use matplotlib's plt.barh, but it's a little bit complicated to use.",
"clinton_df['source'].value_counts().sort_index().plot.barh(); #SOLUTION",
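The value_counts / sort_index pattern used above works on any pandas Series; a tiny self-contained example (with made-up sources) shows the counts it produces:

```python
import pandas as pd

df = pd.DataFrame({'source': ['TweetDeck', 'Twitter Web Client',
                              'TweetDeck', 'TweetDeck']})
# value_counts tallies each category; sort_index orders them alphabetically.
counts = df['source'].value_counts().sort_index()
print(counts['TweetDeck'])  # 3
```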
"You should find that most tweets come from TweetDeck.\nQuestion 7\nFilter clinton_df to examine some tweets from TweetDeck and a few from the next-most-used platform. From examining only a few tweets (say 10 from each category), can you tell whether Clinton's personal tweets are limited to one platform?\nHint: If df is a DataFrame and filter_array is an array of booleans of the same length, then df[filter_array] is a new DataFrame containing only the rows in df corresponding to True values in filter_array.",
"# Do your analysis, then write your conclusions in a brief comment.\ntweetdeck = clinton_df[clinton_df['source'] == 'TweetDeck']\ntwc = clinton_df[clinton_df['source'] == 'Twitter Web Client']\nimport numpy as np\ndef rounded_linspace(start, stop, count):\n import numpy as np\n return np.linspace(start, stop, count, endpoint=False).astype(int)\nprint(tweetdeck.iloc[rounded_linspace(0, tweetdeck.shape[0], 10)]['text'])\nprint(twc.iloc[rounded_linspace(0, twc.shape[0], 10)]['text'])\n\n# It does look like Twitter Web Client is used more for retweeting,\n# but it's not obvious which tweets are by Hillary.",
"When in doubt, read...\nCheck Hillary Clinton's Twitter page. It mentions an easy way to identify tweets by the candidate herself. All other tweets are by her aides.\nQuestion 8\nWrite a function called is_clinton that takes a tweet (in JSON) as its argument and returns True for personal tweets by Clinton and False for tweets by her aides. Use your function to create a column called is_personal in clinton_df.\nHint: You might find the string method endswith helpful.",
"def is_clinton(tweet):\n \"\"\"Distinguishes between tweets by Clinton and tweets by her aides.\n \n Args:\n tweet (Status): One tweet.\n \n Returns:\n bool: True if the tweet is written by Clinton herself.\"\"\"\n return extract_text(tweet).endswith(\"-H\") #SOLUTION\n\nclinton_df['is_personal'] = [is_clinton(t) for t in clinton_tweets] #SOLUTION",
"Now we have identified Clinton's personal tweets. Let us return to our analysis of sources and see if there was any pattern we could have found.\nYou may recall that Tables from Data 8 have a method called pivot, which is useful for cross-classifying a dataset on two categorical attributes. DataFrames support a more complicated version of pivoting. The cell below pivots clinton_df for you.",
"# This cell is filled in for you; just run it and examine the output.\ndef pivot_count(df, vertical_column, horizontal_column):\n \"\"\"Cross-classifies df on two columns.\"\"\"\n pivoted = pd.pivot_table(df[[vertical_column, horizontal_column]], index=[vertical_column], columns=[horizontal_column], aggfunc=len, fill_value=0)\n return pivoted.rename(columns={False: \"False\", True: \"True\"})\n\nclinton_pivoted = pivot_count(clinton_df, 'source', 'is_personal')\nclinton_pivoted",
"Do Clinton and her aides have different \"signatures\" of tweet sources? That is, for each tweet they send, does Clinton send tweets from each source with roughly the same frequency as her aides? It's a little hard to tell from the pivoted table alone.\nQuestion 9\nCreate a visualization to facilitate that comparison.\nHint: df.plot.barh works for DataFrames, too. But think about what data you want to plot.",
"clinton_pivoted[\"aides proportion\"] = clinton_pivoted['False'] / sum(clinton_pivoted['False'])\nclinton_pivoted[\"clinton proportion\"] = clinton_pivoted['True'] / sum(clinton_pivoted['True'])\nclinton_pivoted[[\"aides proportion\", \"clinton proportion\"]].plot.barh();",
"You should see that there are some differences, but they aren't large. Do we need to worry that the differences (or lack thereof) are just \"due to chance\"?\nStatistician Ani argues as follows:\n\n\"The tweets we see are not a random sample from anything. We have simply gathered every tweet by @HillaryClinton from the last several months. It is therefore meaningless to compute, for example, a confidence interval for the rate at which Clinton used TweetDeck. We have calculated exactly that rate from the data we have.\"\n\nStatistician Belinda responds:\n\n\"We are interested in whether Clinton and her aides behave differently in general with respect to Twitter client usage in a way that we could use to identify their tweets. It's plausible to imagine that the tweets we see are a random sample from a huge unobserved population of all the tweets Clinton and her aides might send. We must worry about error due to random chance when we draw conclusions about this population using only the data we have available.\"\n\nQuestion 10\nWhat position would you take on this question? Choose a side and give one (brief) argument for it, or argue for some third position.\nSOLUTION: Here is an argument for Belinda's position. Imagine that Clinton had tweeted only 5 times. Then we would probably not think we could come to a valid conclusion about her behavior patterns. So there is a distinction between the data and an underlying parameter that we're trying to learn about. However, this does not mean it's reasonable to use methods (like the simple bootstrap) that assume the data are a simple random sample from the population we're interested in.\nQuestion 11\nAssume you are convinced by Belinda's argument. Perform a statistical test of the null hypothesis that the Clinton and aide tweets' sources are all independent samples from the same distribution (that is, that the differences we observe are \"due to chance\"). 
Briefly describe the test methodology and report your results.\nHint: If you need a refresher, this section of the Data 8 textbook from Fall 2016 covered this kind of hypothesis test.\nHint 2: Feel free to use datascience.Table to answer this question. However, it will be advantageous to learn how to do it with numpy alone. In our solution, we used some numpy functions you might not be aware of: np.append, np.random.permutation, np.bincount, and np.count_nonzero. We have provided the function expand_counts, which should help you solve a tricky problem that will arise.",
"# Use this cell to perform your hypothesis test.\n\ndef expand_counts(source_counts):\n \"\"\"Blow up a list/array of counts of categories into an array of\n individuals matching the counts. For example, we can generate\n a list of 2 individuals of type 0, 4 of type 1, and 1 of type 3\n as follows:\n \n >>> expand_counts([2, 4, 0, 1])\n array([0, 0, 1, 1, 1, 1, 3])\"\"\"\n return np.repeat(np.arange(len(source_counts)), source_counts)\n\ndef tvd(a, b):\n return .5*sum(np.abs(a/sum(a) - b/sum(b)))\n\ndef test_difference_in_distributions(sample0, sample1, num_trials):\n num_sources = len(sample0)\n individuals0 = expand_counts(sample0)\n individuals1 = expand_counts(sample1)\n count0 = len(individuals0)\n count1 = len(individuals1)\n all_individuals = np.append(individuals0, individuals1)\n def simulate_under_null():\n permuted_pool = np.random.permutation(all_individuals)\n simulated_sample0 = np.bincount(permuted_pool[:count0], minlength=num_sources)\n simulated_sample1 = np.bincount(permuted_pool[count0:], minlength=num_sources)\n return tvd(simulated_sample0, simulated_sample1)\n \n actual_tvd = tvd(sample0, sample1)\n simulated_tvds = np.array([simulate_under_null() for _ in range(num_trials)])\n return np.count_nonzero(simulated_tvds > actual_tvd) / num_trials\n\np_value = test_difference_in_distributions(clinton_pivoted['True'], clinton_pivoted['False'], 100000)\nprint(\"P-value: {:.6f}\".format(p_value))",
"SOLUTION: We simulated many times under the null hypothesis by pooling the data and permuting the sources. We found a P-value around .04%, so we have very strong evidence against the null hypothesis that Clinton and her aides tweet from the same distribution of sources. It's important to note that strong evidence that the difference is not zero (which we have found) is very different from evidence that the difference is large (which we have not found). The next question demonstrates this.\nQuestion 12\nSuppose you sample a random @HillaryClinton tweet and find that it is from the Twitter Web Client. Your visualization in question 9 should show you that Clinton tweets from this source about twice as frequently as her aides do, so you might imagine it's reasonable to predict that the tweet is by Clinton. But what is the probability that the tweet is by Clinton? (You should find a relatively small number. Clinton's aides tweet much more than she does. So even though there is a difference in their tweet source usage, it would be difficult to classify tweets this way.)\nHint: Bayes' rule is covered in this section of the Data 8 textbook.",
"probability_clinton = clinton_pivoted.loc['Twitter Web Client']['True'] / sum(clinton_pivoted.loc['Twitter Web Client']) #SOLUTION\nprobability_clinton\n\n_ = ok.grade('q12')\n_ = ok.backup()",
"Another candidate\nOur results so far aren't Earth-shattering. Clinton uses different Twitter clients at slightly different rates than her aides.\nNow that we've categorized the tweets, we could of course investigate their contents. A manual analysis (also known as \"reading\") might be interesting, but it is beyond the scope of this course. And we'll have to wait a few more weeks before we can use a computer to help with such an analysis.\nInstead, let's repeat our analysis for Donald Trump.\nQuestion 13\nDownload the tweet data for Trump (@realDonaldTrump), and repeat the steps through question 6 to create a table called trump_df.",
"trump_tweets = get_tweets_with_cache(\"realDonaldTrump\", \"keys.json\") #SOLUTION\ntrump_df = make_dataframe(trump_tweets) #SOLUTION\n\ntrump_df.head()",
"Question 14\nMake a bar chart of the sources of Trump tweets.",
"trump_df['source'].value_counts().sort_index().plot.barh(); #SOLUTION",
"You should find two major sources of tweets.\nIt is reported (for example, in this Gawker article) that Trump himself uses an Android phone (a Samsung Galaxy), while his aides use iPhones. But Trump has not confirmed this. Also, he has reportedly switched phones since his inauguration! How might we verify whether this is a way to identify his tweets?\nA retweet is a tweet that replies to (or simply repeats) a tweet by another user. Twitter provides several mechanisms for this, as explained in this article. However, Trump has an unusual way of retweeting: He simply adds the original sender's name to the original message, puts everything in quotes, and then adds his own comments at the end.\nFor example, this is a tweet by user @melissa7889:\n @realDonaldTrump @JRACKER33 you should run for president!\n\nHere is Trump's retweet of this, from 2013:\n \"@melissa7889: @realDonaldTrump @JRACKER33 you should run for president!\" Thanks,very nice!\n\nSince 2015, the usual way of retweeting this message, and the method used by Trump's aides (but not Trump himself), would have been:\n Thanks,very nice! RT @melissa7889: @realDonaldTrump @JRACKER33 you should run for president!\n\nQuestion 15\nWrite a function to identify Trump-style retweets, and another function to identify the aide-style retweets. Then, use them to create a function called tweet_type that takes a tweet as its argument and returns values \"Trump retweet\", \"Aide retweet\", and \"Not a retweet\" as appropriate. Use your function to add a 'tweet_type' column to trump_df.\nHint: Try the string method startswith and the Python keyword in.",
"def is_trump_style_retweet(tweet_text):\n \"\"\"Returns True if tweet_text looks like a Trump-style retweet.\"\"\"\n return tweet_text.startswith('\"@')\n \ndef is_aide_style_retweet(tweet_text):\n \"\"\"Returns True if tweet_text looks like an aide-style retweet.\"\"\"\n return \"RT @\" in tweet_text\n\ndef tweet_type(tweet_text):\n \"\"\"Returns \"Trump retweet\", \"Aide retweet\", or \"Not a retweet\"\n as appropriate.\"\"\"\n if is_trump_style_retweet(tweet_text):\n return \"Trump retweet\"\n elif is_aide_style_retweet(tweet_text):\n return \"Aide retweet\"\n return \"Not a retweet\"\n\ntrump_df['tweet_type'] = [tweet_type(t) for t in trump_df['text']]\n\ntrump_df\n\n_ = ok.grade('q15')\n_ = ok.backup()",
"Question 16\nCross-classify @realDonaldTrump tweets by source and by tweet_type into a table called trump_pivoted.\nHint: We did something very similar after question 7. You don't need to write much new code for this.",
"trump_pivoted = pivot_count(trump_df, 'source', 'tweet_type') #SOLUTION\ntrump_pivoted\n\n_ = ok.grade('q16')\n_ = ok.backup()",
"Question 17\nDoes the cross-classified table show evidence against the hypothesis that Trump and his advisors tweet from roughly the same sources? Again assuming you agree with Statistician Belinda, run an hypothesis test in the next cell to verify that there is a difference in the relevant distributions. Then use the subsequent cell to describe your methodology and results. Are there any important caveats?",
"test_difference_in_distributions(trump_pivoted['Aide retweet'], trump_pivoted['Trump retweet'], 100000) #SOLUTION",
"SOLUTION: We eliminated the non-retweets and performed a test for a difference in categorical distributions as we did for Clinton. As should obvious from the table, there is a difference! (We find a P-value of 0, though this is approximate, and the true P-value is merely extremely close to 0.) One small caveat is that we are looking only at retweets. It's plausible that people behave differently when retweeting - maybe they find one device or app more convenient for retweets. A bigger caveat is that we don't just care about there being any difference, but that the difference is large. This is obvious from looking at the table - Trump almost never retweets from an iPhone and his aides never retweet from an Android phone. (Since we care about magnitudes, it would be useful to create confidence intervals for the chances of Trump and his aides tweeting from various devices. With a dataset this large, they would be narrow.)\nWe are really interested in knowing whether we can classify @realDonaldTrump tweets on the basis of the source. Just knowing that there is a difference in source distributions isn't nearly enough. Instead, we would like to claim something like this: \"@realDonaldTrump tweets from Twitter for Android are generally authored by Trump himself. Other tweets are generally authored by his aides.\"\nQuestion 18\nIf you use bootstrap methods to compute a confidence interval for the proportion of Trump aide retweets from Android phones in \"the population of all @realDonaldTrump retweets,\" you will find that the interval is [0, 0]. That's because there are no retweets from Android phones by Trump aides in our dataset. Is it reasonable to conclude from this that Trump aides definitely never tweet from Android phones?\nSOLUTION: No, the bootstrap is misleading in this case. If we'd seen 1 million retweets by Trump aides, it might be okay to make this conclusion. 
But we have seen only 177, so the conclusion seems a bit premature.\nSubmitting your assignment\nFirst, run the next cell to run all the tests at once.",
"_ = ok.grade_all()",
"Now, run this code in your terminal to make a\ngit commit\nthat saves a snapshot of your changes in git. The last line of the cell\nruns git push, which will send your work to your personal Github repo.\nNote: Don't add and commit your keys.json file! git add -A will do that, but the code we've written below won't.\n# Tell git to commit your changes to this notebook\ngit add sp17/hw/hw2/hw2.ipynb\n\n# Tell git to make the commit\ngit commit -m \"hw2 finished\"\n\n# Send your updates to your personal private repo\ngit push origin master\n\nFinally, we'll submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https://okpy.org/cal/data100/sp17/.",
"# Now, we'll submit to okpy\n_ = ok.submit()",
"Congratulations, you're done!\nWe've only scratched the surface of this dataset. Twitter is a rich source of data about language and social interaction, and not only for political figures. Now you know how to access it!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Brunel-Visualization/Brunel
|
python/examples/Whiskey.ipynb
|
apache-2.0
|
[
"Whiskey Data\nThis data set contains data on a small number of whiskies",
"import pandas as pd\nfrom numpy import log, abs, sign, sqrt\nimport brunel\n\nwhiskey = pd.read_csv(\"data/whiskey.csv\")\n\nprint('Data on whiskies:', ', '.join(whiskey.columns))",
"Summaries\nShown below are the following charts:\n\nA treemap display for each whiskey, broken down by country and category. The cells are colored by the rating, with lower-rated whiskies in blue, and higher-rated in reds. Missing data for ratings show as black.\nA filtered chart allowing you to select whiskeys based on price and category\nA line chart showing the relationship between age and rating. A simple treemap of categories is linked to this chart\nA bubble chart of countries linked to a heatmap of alcohol level (ABV) by rating",
"%%brunel data('whiskey') x(country, category) color(rating) treemap label(name:3) tooltip(#all) \n style('.label {font-size:7pt}') legends(none)\n:: width=900, height=600\n\n%%brunel data('whiskey') bubble color(rating:red) sort(rating) size(abv) label(name:6) tooltip(#all) filter(price, category) \n :: height=500\n\n%%brunel data('whiskey')\n line x(age) y(rating) mean(rating) using(interpolate) label(country) split(country) \n bin(age:8) color(#selection) legends(none) |\n treemap x(category) interaction(select) size(#count) color(#selection) legends(none) sort(#count:ascending) bin(category:9)\n tooltip(country) list(country) label(#count) style('.labels .label {font-size:14px}')\n:: width=900\n\n%%brunel data('whiskey')\n bubble label(country:3) bin(country) size(#count) color(#selection) sort(#count) interaction(select) tooltip(name) list(name) legends(none) at(0,10,60,100)\n | x(abv) y(rating) color(#count:blue) legends(none) bin(abv:8) bin(rating:5) style('symbol:rect; stroke:none; size:100%') \n interaction(select) label(#selection) list(#selection) at(60,15,100,100) tooltip(rating, abv,#count) legends(none) \n | bar label(brand:70) list(brand) at(0,0, 100, 10) axes(none) color(#selection) legends(none) interaction(filter)\n:: width=900, height=600",
"Some Analysis\nHere we use the sci-kit decision tree regression tool to predict the price of a whiskey given its age, rating and ABV value.\nWe transform the output for plotting purposes, but note that the tooltips give the original data",
"from sklearn import tree\nD = whiskey[['Name', 'ABV', 'Age', 'Rating', 'Price']].dropna()\nX = D[ ['ABV', 'Age', 'Rating'] ]\ny = D['Price']\nclf = tree.DecisionTreeRegressor(min_samples_leaf=4)\nclf.fit(X, y)\nD['Predicted'] = clf.predict(X)\nf = D['Predicted'] - D['Price']\nD['Diff'] = sqrt(abs(f)) * sign(f)\nD['LPrice'] = log(y)\n%brunel data('D') y(diff) x(LPrice) tooltip(name, price, predicted, rating) color(rating) :: width=700",
"Simple Linked Charts\nClick on a bar to see the proportions of Whiskey categories per country",
"%%brunel data('whiskey') \n bar x(country) y(#count) interaction(select) color(#selection) | \n bar color(category) y(#count) percent(#count) polar stack label(category) legends(none) interaction(filter) tooltip(#count,category)\n:: width=900, height=300"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
CalPolyPat/phys202-2015-work
|
assignments/assignment12/FittingModelsEx02.ipynb
|
mit
|
[
"Fitting Models Exercise 2\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt",
"Fitting a decaying oscillation\nFor this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays:\n\ntdata: an array of time values\nydata: an array of y values\ndy: the absolute uncertainties (standard deviations) in y\n\nYour job is to fit the following model to this data:\n$$ y(t) = A e^{-\\lambda t} \\cos{\\omega t + \\delta} $$\nFirst, import the data using NumPy and make an appropriately styled error bar plot of the raw data.",
"data=np.load(\"decay_osc.npz\")\nT=data[\"tdata\"]\nY=data[\"ydata\"]\ndy=data[\"dy\"]\nf=plt.figure(figsize=(15,10))\nplt.errorbar(T,Y,yerr=dy,fmt='o');\n\nassert True # leave this to grade the data import and raw data plot",
"Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters:\n\nPrint the parameters estimates and uncertainties.\nPlot the raw and best fit model.\nYou will likely have to pass an initial guess to curve_fit to get a good fit.\nTreat the uncertainties in $y$ as absolute errors by passing absolute_sigma=True.",
"par_est=[0,0,0,0]\npopt, popc = opt.curve_fit(lambda t,A,lam,ome,delt:A*np.exp(-lam*t)*np.cos(ome*t+delt), T, Y, par_est, sigma=dy, absolute_sigma=True)\nprint(list(zip(popt, np.diag(popc))))\nf=plt.figure(figsize=(15,10))\nplt.errorbar(T,Y,yerr=dy,fmt='o')\nTx=np.linspace(0,20, 1000)\nplt.plot(Tx,popt[0]*np.exp(-popt[1]*Tx)*np.cos(-popt[2]*Tx+popt[3]));\n\nassert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
utensil/julia-playground
|
py/galgebra_gr_metrics.ipynb
|
mit
|
[
"from spacetimeengine import *\n\nfrom galgebra.printer import Format, GaLatexPrinter\n\nFormat()\nfrom galgebra.ga import Ga\nfrom galgebra.mv import ONE, ZERO, HALF\n\nfrom IPython.display import display, Math\n\ndef show(x):\n if isinstance(x, list):\n for item in x:\n display(Math(GaLatexPrinter.latex(item)))\n else:\n display(Math(GaLatexPrinter.latex(x)))\n\ndef build_ga_from_solution(solution, norm=False):\n [metric, coordinate_set, _index_config, cosmological_constant] = solution\n return Ga('', g=metric, coords=coordinate_set, norm=norm)\n\ndef dot_basis_r_basis(ga):\n return [ga.dot(ga.basis[i], ga.r_basis[i]) for i in ga.n_range]\n\ndef gg(ga):\n return simplify(ga.g * ga.g_inv)\n\ndef conv_christoffel_symbols(cf):\n return permutedims(Array(cf), (2, 0, 1))\n\ndef show_christoffel_symbols(ga):\n if ga.connect_flg:\n display(conv_christoffel_symbols(ga.Christoffel_symbols(mode=1)))\n display(conv_christoffel_symbols(ga.Christoffel_symbols(mode=2)))",
"Minkowski Spacetime Metric",
"minkowski = build_ga_from_solution(Solution().minkowski())\n\nshow(minkowski.mv())\n\nminkowski.g\n\nminkowski.e_sq\n\ndot_basis_r_basis(minkowski)\n\ngg(minkowski)",
"Kerr-Debney Metric\n$$\ng=\\left[ \\begin{array}{cccc}{0} & {0} & {-e^{-z}} & {0} \\ {0} & {\\frac{u^{2} e^{4 z}}{2}} & {0} & {0} \\ {-e^{-z}} & {0} & {12 e^{-2 z}} & {u e^{-z}} \\ {0} & {0} & {u e^{-z}} & {\\frac{u^{2}}{2}}\\end{array}\\right]\n$$",
"g4coords = (u, x, y, z) = symbols(\"u x y z\")\ng = [\n [0, 0, -exp(-z), 0],\n [0, HALF * u ** 2 * exp(4 * z), 0, 0],\n [-exp(-z), 0, 12 * exp(-2 * z), u * exp(-z)],\n [0, 0, u * exp(-z), HALF * u ** 2],\n]\ng4 = build_ga_from_solution([g, g4coords, None, 0])\n\nshow(g4.mv())\n\ng4.g\n\ng4.e_sq\n\ndot_basis_r_basis(g4)\n\ngg(g4)\n\nshow_christoffel_symbols(g4)",
"Schwarzschild Metric\nThe classic black hole solution. Uncharged and rotationally stationary.",
"schwarzschild = build_ga_from_solution(Solution().schwarzschild())\n\nshow(schwarzschild.mv())\n\nschwarzschild.g\n\nschwarzschild.e_sq\n\ndot_basis_r_basis(schwarzschild)\n\ngg(schwarzschild)\n\nshow_christoffel_symbols(schwarzschild)",
"Einstein-Rosen Bridge Metric\nThe most famous wormhole solution.",
"einstein_rosen_bridge = build_ga_from_solution(Solution().einstein_rosen_bridge())\n\nshow(einstein_rosen_bridge.mv())\n\neinstein_rosen_bridge.g\n\neinstein_rosen_bridge.e_sq\n\ndot_basis_r_basis(einstein_rosen_bridge)\n\ngg(einstein_rosen_bridge)\n\nshow_christoffel_symbols(einstein_rosen_bridge)",
"Weak Field Approximation",
"wfa = build_ga_from_solution(Solution().weak_field_approximation())\n\nshow(wfa.mv())\n\nwfa.g\n\nshow(wfa.mvr())\n\nwfa.e_sq\n\ndot_basis_r_basis(wfa)\n\ngg(wfa)\n\nwfa.connect_flg\n\nshow_christoffel_symbols(wfa)",
"Friedmann Lemaitre Robertson Walker metric\nThe spacetime for an expanding universe.",
"flrw = build_ga_from_solution(Solution().friedmann_lemaitre_robertson_walker())\n\nshow(flrw.mv())\n\nflrw.g\n\nflrw.e_sq\n\ndot_basis_r_basis(flrw)\n\ngg(flrw)",
"Hypersphere Metric\nA metric which describes a spacetime where the cosmic time is assigned to the meaning of a 4D hypersphere radius. The essential idea behind this spacetime is that the \"3+1\" dimensionality commonly referenced in physics can be meaningfully mapped to the \"3+1\" dimensionality associated with a hypersphere; by the \"3\" angular coordinates and the \"1\" radial coordinate.",
"hypersphere = build_ga_from_solution(Solution().hypersphere())\n\nshow(hypersphere.mv())\n\nhypersphere.g\n\nhypersphere.e_sq\n\ndot_basis_r_basis(hypersphere)\n\ngg(hypersphere)\n\nshow_christoffel_symbols(hypersphere)",
"Kerr Metric",
"# Modified from https://github.com/spacetimeengineer/spacetimeengine/blob/master/spacetimeengine/src/solutions.py\ndef kerr_metric():\n \"\"\"\n Description\n ===========\n Returns Kerr metric.\n \"\"\"\n\n # Index configuration for the metric\n index_config = \"dd\"\n x0, x1, x2, x3 = symbols('x0 x1 x2 x3')\n # Reference to the coordiante system.\n coordinate_set = [x0, x1, x2, x3]\n # Cosmological constant.\n cosmological_constant = 0\n a, J, M, c, G, r, delt, sigm = symbols('a J M c G r Delta Sigma')\n a = (J/(M*c))\n rs = (2*G*M/(c**2))\n sigm = (x1**2 + (J/(M*c))**2 * cos(x2)**2)\n delt = (r**2 - x0 * (2*G*M/(c**2)) + (J/(M*c))**2)\n # Metric solution.\n metric = Matrix([\n [ (1 - rs * x1 / (x1**2 + (J/(M*c))**2 * cos(x2)**2) ), 0, 0, (2*G*M/(c**2))*x1*(J/(M*c))*sin(x2)**2 / (x1**2 + (J/(M*c))**2 * cos(x2)**2) ], \n [ 0, -1 * ( (r**2 - x0 * (2*G*M/(c**2)) + (J/(M*c))**2) / (x1**2 + (J/(M*c))**2 * cos(x2)**2) ), 0, 0 ], \n [ 0, 0, -1 * (x1**2 + (J/(M*c))**2 * cos(x2)**2), 0 ],\n [ (2*G*M/(c**2))*x1*(J/(M*c))*sin(x2)**2 / (x1**2 + (J/(M*c))**2 * cos(x2)**2), 0, 0, -1 * (x1**2 + (J/(M*c))**2 + (rs*x1*(J/(M*c))**2/(x1**2 + (J/(M*c))**2 * cos(x2)**2))*sin(x2))*sin(x2) ]\n ])\n\n # An array detailing the solution.\n solution_array = [ metric, coordinate_set, index_config, cosmological_constant ]\n return solution_array\n\n# kerr = build_ga_from_solution(kerr_metric())\n# FIXME takes too long"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sanger-pathogens/Roary
|
contrib/roary_plots/roary_plots.ipynb
|
gpl-3.0
|
[
"Roary pangenome plots\n<h6><a href=\"javascript:toggle()\" target=\"_self\">Toggle source code</a></h6>",
"# Plotting imports\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set_style('white')\n\n# Other imports\nimport os\nimport pandas as pd\nimport numpy as np\nfrom Bio import Phylo",
"parSNP tree\nAny other valid newick file is fine, if the tip labels is the same as in the gene_presence_absence matrix from roary.",
"t = Phylo.read('parsnp.tree', 'newick')\n\n# Max distance to create better plots\nmdist = max([t.distance(t.root, x) for x in t.get_terminals()])",
"Roary",
"# Load roary\nroary = pd.read_table('gene_presence_absence.csv',\n sep=',',\n low_memory=False)\n# Set index (group name)\nroary.set_index('Gene', inplace=True)\n# Drop the other info columns\nroary.drop(list(roary.columns[:13]), axis=1, inplace=True)\n\n# Transform it in a presence/absence matrix (1/0)\nroary.replace('.{2,100}', 1, regex=True, inplace=True)\nroary.replace(np.nan, 0, regex=True, inplace=True)\n\n# Sort the matrix by the sum of strains presence\nidx = roary.sum(axis=1).sort_values(ascending=False).index\nroary_sorted = roary.loc[idx]\n\n# Pangenome frequency plot\nplt.figure(figsize=(7, 5))\n\nplt.hist(roary.sum(axis=1), roary.shape[1],\n histtype=\"stepfilled\", alpha=.7)\n\nplt.xlabel('Number of genomes')\nplt.ylabel('Number of genes')\n\nsns.despine(left=True,\n bottom=True)\n\n# Sort the matrix according to tip labels in the tree\nroary_sorted = roary_sorted[[x.name for x in t.get_terminals()]]\n\n# PLot presence/absence matrix against the tree\nwith sns.axes_style('whitegrid'):\n fig = plt.figure(figsize=(17, 10))\n\n ax1=plt.subplot2grid((1,40), (0, 10), colspan=30)\n a=ax1.imshow(roary_sorted.T, cmap=plt.cm.Blues,\n vmin=0, vmax=1,\n aspect='auto',\n interpolation='none',\n )\n ax1.set_yticks([])\n ax1.set_xticks([])\n ax1.axis('off')\n\n ax = fig.add_subplot(1,2,1)\n ax=plt.subplot2grid((1,40), (0, 0), colspan=10, facecolor='white')\n\n fig.subplots_adjust(wspace=0, hspace=0)\n\n ax1.set_title('Roary matrix\\n(%d gene clusters)'%roary.shape[0])\n\n Phylo.draw(t, axes=ax, \n show_confidence=False,\n label_func=lambda x: None,\n xticks=([],), yticks=([],),\n ylabel=('',), xlabel=('',),\n xlim=(-0.01,mdist+0.01),\n axis=('off',),\n title=('parSNP tree\\n(%d strains)'%roary.shape[1],), \n )\n\n# Plot the pangenome pie chart\nplt.figure(figsize=(10, 10))\n\ncore = roary[roary.sum(axis=1) == roary.shape[1]].shape[0]\nsoftcore = roary[(roary.sum(axis=1) < roary.shape[1]) &\n (roary.sum(axis=1) >= roary.shape[1]*0.95)].shape[0]\nshell = 
roary[(roary.sum(axis=1) < roary.shape[1]*0.95) &\n (roary.sum(axis=1) >= roary.shape[1]*0.15)].shape[0]\ncloud = roary[roary.sum(axis=1) < roary.shape[1]*0.15].shape[0]\n\ntotal = roary.shape[0]\n\ndef my_autopct(pct):\n val=int(pct*total/100.0)\n return '{v:d}'.format(v=val)\n\na=plt.pie([core, softcore, shell, cloud],\n labels=['core\\n(%d strains)'%roary.shape[1],\n 'soft-core\\n(%d <= strains < %d)'%(roary.shape[1]*.95,\n roary.shape[1]),\n 'shell\\n(%d <= strains < %d)'%(roary.shape[1]*.15,\n roary.shape[1]*.95),\n 'cloud\\n(strains < %d)'%(roary.shape[1]*.15)],\n explode=[0.1, 0.05, 0.02, 0], radius=0.9,\n colors=[(0, 0, 1, float(x)/total) for x in (core, softcore, shell, cloud)],\n autopct=my_autopct)"
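The pie chart above buckets each gene cluster by how many of the N strains carry it: core (all N), soft-core (at least 0.95·N), shell (at least 0.15·N), and cloud (the rest). The same bucketing on a toy list of presence counts (numbers invented for illustration):

```python
n_strains = 20
# Invented number of strains carrying each of ten gene clusters.
gene_presence = [20, 20, 19, 18, 15, 10, 5, 3, 1, 1]

core     = sum(1 for c in gene_presence if c == n_strains)
softcore = sum(1 for c in gene_presence if n_strains * 0.95 <= c < n_strains)
shell    = sum(1 for c in gene_presence if n_strains * 0.15 <= c < n_strains * 0.95)
cloud    = sum(1 for c in gene_presence if c < n_strains * 0.15)

# Every gene cluster falls into exactly one bucket.
assert core + softcore + shell + cloud == len(gene_presence)
print(core, softcore, shell, cloud)  # 2 1 5 2
```

Because the four conditions partition the range of counts, the bucket sizes always sum to the total number of gene clusters, mirroring the `total` used by the pie chart's `my_autopct`.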
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
coolharsh55/advent-of-code
|
2016/python3/Day02.ipynb
|
mit
|
[
"Day 2: Bathroom Security\nauthor: Harshvardhan Pandit\nlicense: MIT\nlink to problem statement\nYou arrive at Easter Bunny Headquarters under cover of darkness. However, you left in such a rush that you forgot to use the bathroom! Fancy office buildings like this one usually have keypad locks on their bathrooms, so you search the front desk for the code.\n\"In order to improve security,\" the document you find says, \"bathroom codes will no longer be written down. Instead, please memorize and follow the procedure below to access the bathrooms.\"\nThe document goes on to explain that each button to be pressed can be found by starting on the previous button and moving to adjacent buttons on the keypad: U moves up, D moves down, L moves left, and R moves right. Each line of instructions corresponds to one button, starting at the previous button (or, for the first line, the \"5\" button); press whatever button you're on at the end of each line. If a move doesn't lead to a button, ignore it.\nYou can't hold it much longer, so you decide to figure out the code as you walk to the bathroom. You picture a keypad like this:\n1 2 3\n4 5 6\n7 8 9\n\nSuppose your instructions are:\nULL\nRRDDD\nLURDL\nUUUUD\n\nYou start at \"5\" and move up (to \"2\"), left (to \"1\"), and left (you can't, and stay on \"1\"), so the first button is 1.\nStarting from the previous button (\"1\"), you move right twice (to \"3\") and then down three times (stopping at \"9\" after two moves and ignoring the third), ending up with 9.\nContinuing from \"9\", you move left, up, right, down, and left, ending with 8.\nFinally, you move up four times (stopping at \"2\"), then down once, ending with 5.\nSo, in this example, the bathroom code is 1985.\nYour puzzle input is the instructions from the document you found at the front desk. What is the bathroom code?\nSolution logic\nThis is quite similar to the previous problem. We have a grid that we traverse. 
In this case, it is a fixed grid, with only 9 points. For simplicity, we can select 7 as the origin so that all points lie in the first quadrant (all values are positive). Every movement (left, right, up, down) can be mapped into a movement along the axis.\nAlternatively, we can start at 1 (0,0) as this allows some simple math to get the position of the number using mod and division. To get the number from the position, we do (y * 3 + x + 1).\nWhich gives us:\nNUMBER 7-origin 1-origin (y * 3) (x + 1) (y * 3 + x + 1)\n 1 (0,2) (0,0) 0 1 1\n 2 (1,2) (1,0) 0 2 2\n 3 (2,2) (2,0) 0 3 3\n 4 (0,1) (0,1) 3 1 4\n 5 (1,1) (1,1) 3 2 5\n 6 (2,1) (2,1) 3 3 6\n 7 (0,0) (0,2) 6 1 7\n 8 (1,0) (1,2) 6 2 8\n 9 (2,0) (2,2) 6 3 9\n\nSYMBOL DIRECTION X-AXIS Y-AXIS\nL LEFT -1 0\nR RIGHT +1 0\nU UP 0 -1\nD DOWN 0 +1\n\nTo prevent the values from going out of bounds, we cap each co-ordinate: a move that would push a value below 0 or above 2 is ignored.\nDIRECTION CAP FUNCTION\nUP 0 (min value is 0) y < 0 ? 0 : y\nDOWN 2 (max value is 2) y > 2 ? 2 : y\nLEFT 0 (min value is 0) x < 0 ? 0 : x\nRIGHT 2 (max value is 2) x > 2 ? 2 : x\n\nAlgorithm:\n - Set starting position to number 5 (1,1)\n - For each line of input, do the following (each line yields one digit)\n - Tokenise each character, and identify the direction\n - Apply the direction mask to the position and apply the capping function\n - At the end of the line, append the resulting number to the code (as a string)\nGet number from co-ordinate\nApply the function to get the number",
"def numpad_number_from_point(point):\n return str(point.y * 3 + point.x + 1)",
"Point\nSince each movement is along some direction, we go about this in a very pretentious and academic way. By declaring classes and objects and functions for everything. The end logic is a clean layout of the solution.\nThe Point class has attributes x and y. We use setter and getter to set the caps on these values.",
"class Point:\n def __init__(self, x=0, y=0):\n self._x, self._y = x, y\n \n @property\n def x(self):\n return self._x\n \n @property\n def y(self):\n return self._y\n \n @x.setter\n def x(self, value):\n if 0 <= value <= 2:\n self._x = value\n \n @y.setter\n def y(self, value):\n if 0 <= value <= 2:\n self._y = value \n \n def __str__(self):\n return '(%s, %s)' % (self._x, self._y)\n ",
"Directions\nWe use a dictionary to store the directions and the masks they represent. Each mask is a namedtuple having attributes x and y.",
"from collections import namedtuple\nMask = namedtuple('Mask', ('x', 'y'))\ndirections = {\n 'U': Mask(0, -1),\n 'D': Mask(0, 1),\n 'L': Mask(-1, 0),\n 'R': Mask(1, 0),\n}",
"Solve\ncode holds the input code, and is a string.",
"code = ''",
"point is the point we are moving on the graph",
"point = Point(1, 1)",
"Reading in the input from file",
"with open('../inputs/day02.txt', 'r') as f:\n data = f.readlines()",
"Define a function that will parse the number from each line of input. We supply it the line.",
"def calculate_number(line):\n for char in line:\n mask = directions[char]\n point.x += mask.x\n point.y += mask.y\n return numpad_number_from_point(point)\n\nfor line in data:\n code += calculate_number(line.strip())",
"Answer lies in the variable code\nPart Two\nYou finally arrive at the bathroom (it's a several minute walk from the lobby so visitors can behold the many fancy conference rooms and water coolers on this floor) and go to punch in the code. Much to your bladder's dismay, the keypad is not at all like you imagined it. Instead, you are confronted with the result of hundreds of man-hours of bathroom-keypad-design meetings:\n 1\n 2 3 4\n5 6 7 8 9\n A B C\n D\n\nYou still start at \"5\" and stop when you're at an edge, but given the same instructions as above, the outcome is very different:\nYou start at \"5\" and don't move at all (up and left are both edges), ending at 5.\nContinuing from \"5\", you move right twice and down three times (through \"6\", \"7\", \"B\", \"D\", \"D\"), ending at D.\nThen, from \"D\", you move five more times (through \"D\", \"B\", \"C\", \"C\", \"B\"), ending at B.\nFinally, after five more moves, you end at 3.\nSo, given the actual keypad layout, the code would be 5DB3.\nUsing the same instructions in your puzzle input, what is the correct bathroom code?\nSolution logic\nSince the keypad design has changed, we need to change our approach as well. In the new keypad, the grid is no longer symmetrical, so we cannot apply the mod logic to this keypad. The simplest approach would be to define each point in a dictionary with their co-ordinates, starting with 7 as the origin. This would allow the points to span in all four quadrants. Then we need to change the setters to accomodate the new co-ordinates, and finally, the numpad function to get the correct number using the dictionary.\nNumbers\nThe numbers dictionary holds the co-ordinates for each key on the numpad. At the end, we can simply lookup the co-ordinate points and get the corresponding number. The number_keys allows the easier declaration of syntax, which is then inverted using the numbers_values dictionary.",
"numbers_keys = {\n '1': Mask(0, 2),\n '2': Mask(-1, 1),\n '3': Mask(0, 1),\n '4': Mask(1, 1),\n '5': Mask(-2, 0),\n '6': Mask(-1, 0),\n '7': Mask(0, 0),\n '8': Mask(1, 0),\n '9': Mask(2, 0),\n 'A': Mask(-1, -1),\n 'B': Mask(0, -1),\n 'C': Mask(1, -1),\n 'D': Mask(0, -2)\n}\nnumbers_values = {v:k for k,v in numbers_keys.items()}",
"Point\nWe redefine the Point class with updated setters for x and y. The new setters do nothing; the actual update is done through the xy method, which takes in both values and checks whether they describe a valid key on the keypad before assigning them.",
"class Point:\n def __init__(self, x=0, y=0):\n self._x, self._y = x, y\n \n @property\n def x(self):\n return self._x\n \n @property\n def y(self):\n return self._y\n \n @x.setter\n def x(self, value):\n pass\n \n @y.setter\n def y(self, value):\n pass\n \n def xy(self, x, y):\n if (\n (-2 <= x <= 2 and y == 0) or\n (-1 <= x <= 1 and -1 <= y <= 1) or\n (x == 0 and -2 <= y <= 2)):\n \n self._x, self._y = x, y \n \n def __str__(self):\n return '(%s, %s)' % (self._x, self._y)\n \n def __repr__(self):\n return '(%s, %s)' % (self._x, self._y)\n ",
"Calculate Number\nThe updated method uses the new xy method to assign the values. Since we assume point to be global (in this scope), we directly use it.",
"def calculate_number(line):\n for char in line:\n mask = directions[char]\n point.xy(point.x + mask.x, point.y + mask.y)",
"keys holds the characters of the keypad code",
"keys = []",
"Directions\nWe must also update the directions, as we have changed the order of the numbers on the keypad. This involves reversing the sign of the UP and DOWN directions.",
"directions = {\n 'U': Mask(0, 1),\n 'D': Mask(0, -1),\n 'L': Mask(-1, 0),\n 'R': Mask(1, 0),\n}",
"Point\nThe starting point for this problem is 5, which we retrieve using the dictionary.",
"point = Point(*numbers_keys['5'])",
"Keypad code\nWe run the keypad code over each line and store the result in keys.",
"for line in data:\n calculate_number(line.strip())\n keys.append(numbers_values[Mask(point.x, point.y)])\n\nanswer = ''.join(keys)",
"== END =="
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Upward-Spiral-Science/the-vat
|
Code/inferential_simulation_AL.ipynb
|
apache-2.0
|
[
"Inferential Simulation\nBy: Adam Li\nAssumptions:\nWe want to create a statistical model for our data and simulate a prospective power analysis. We take our dataset, ${x1_i, ..., x24_i}$ that are drawn i.i.d. from $F_{x1,...,x24}$, which belongs to a family of distributions. \nOur 24 variables represent the 24 different proteins being expressed and measured for a set of 6 metrics ${f_0,...,f_5}$\nFormalize A Test:\n$$H_o: $$\n$$H_a: $$\nDescribe Test Statistic:\nHow are we getting a test statistic for our null and alternative hypothesis test? \nSample from Null and Alternative Distribution:\nThe choice of the magnitude of the null/alternative differences will affect the power because it affects the overlap of the distributions. If you can't analytically formulate your distribution for test statistics, then you will need to somehow get an estimate of your distributions instead. You can simulate samples many many times.\nReferences:\n* http://scikit-learn.org/stable/modules/generated/sklearn.cluster.MiniBatchKMeans.html#sklearn.cluster.MiniBatchKMeans\n* http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html",
"# Import Necessary Libraries\nimport numpy as np\nimport os, csv, json\n\nfrom matplotlib import *\nfrom matplotlib import pyplot as plt\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\nimport scipy\nimport scipy.spatial.distance\nimport scipy.stats\nimport itertools\nfrom sklearn.decomposition import PCA\nimport skimage.measure\nfrom sklearn.cluster import KMeans, MiniBatchKMeans\n\n\n# pretty charting\nimport seaborn as sns\nsns.set_palette('muted')\nsns.set_style('darkgrid')\n\n%matplotlib inline\n\n#### RUN AT BEGINNING AND TRY NOT TO RUN AGAIN - TAKES WAY TOO LONG ####\n# write new list_of_features to new txt file\ncsvfile = \"../data_normalized/shortenedFeatures_normalized.txt\"\n\n# load in the feature data\nlist_of_features = []\nwith open(csvfile) as file:\n for line in file:\n inner_list = [float(elt.strip()) for elt in line.split(',')]\n \n # create list of features\n list_of_features.append(inner_list)\n\n# convert to a numpy matrix\nlist_of_features = np.array(list_of_features)\n\n# ###### RUN TO SAVE NORMALIZED SHORTENED FEATURES #########\n# num_rows, num_cols = np.shape(list_of_features)\n\n# # initialize normalize_features matrix\n# normalize_features = np.empty((num_rows,num_cols))\n\n# # normalize all features (columns)\n# for i in range(0, num_cols):\n# col_max = max(list_of_features[:,i])\n# buff = list_of_features[:,i]/col_max\n# normalize_features[:,i] = buff\n\n# # write new list_of_features to new txt file\n# csvfile = \"data_normalized/shortenedFeatures_normalized.txt\"\n\n# #Assuming res is a flat list\n# with open(csvfile, \"w\") as output:\n# # write to new file the data\n# writer = csv.writer(output, lineterminator='\\n')\n# for row in range(0, len(normalize_features)):\n# writer.writerow(normalize_features[row,:])\n \n# ########## DON'T RUN AGAIN\n\n# Compute feature correlation matrix\nR = np.corrcoef(list_of_features,rowvar=0)\n# R_normalize = np.corrcoef(normalize_features, rowvar=0)\nfig = plt.figure(figsize=(10,10))\nim = plt.imshow(R, cmap=plt.get_cmap('jet'), interpolation='none')\nplt.title(\"Correlation plot of all features f0, f1, f2, f3\", fontsize=20)\n\nplt.xticks(np.arange(12,96, 24), ['f0', 'f1', 'f2', 'f3'], fontsize=20)\nplt.yticks(np.arange(12,96, 24), ['f0', 'f1', 'f2', 'f3'], fontsize=20)\nax = plt.gca()\nax.grid(False)\nxmin = ax.get_xlim()\nvmin, vmax = plt.gci().get_clim()\ndivider = make_axes_locatable(ax)\ncax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\nplt.colorbar(im, cax=cax)\n\nyline = np.arange(0, 96, 24)\nfor i in range(0, len(yline)):\n plt.axhline(y=yline[i], color='k')\n plt.axvline(x=yline[i], color='k')\n\ntemp = np.corrcoef(list_of_features,rowvar=0)\ntemp = temp < 0.6\nbuff = R\nbuff[temp] = 0\nfig = plt.figure(figsize=(10,10))\nim = plt.imshow(buff, cmap=plt.get_cmap('jet'), interpolation='none')\nplt.title(\"Correlation plot of all features f0, f1, f2, f3\", fontsize=20)\n# plt.colorbar()\nplt.xticks(np.arange(12,96, 24), ['f0', 'f1', 'f2', 'f3'], fontsize=20)\nplt.yticks(np.arange(12,96, 24), ['f0', 'f1', 'f2', 'f3'], fontsize=20)\nax = plt.gca()\nax.grid(False)\nxmin = ax.get_xlim()\nvmin, vmax = plt.gci().get_clim()\ndivider = make_axes_locatable(ax)\ncax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\nplt.colorbar(im, cax=cax)\n\nyline = np.arange(0, 96, 24)\nfor i in range(0, len(yline)):\n plt.axhline(y=yline[i], color='w')\n plt.axvline(x=yline[i], color='w')\n\n# Run k-means and plot\ny_pred = KMeans(n_clusters=10).fit_predict(list_of_features)\n\n\ntest = MiniBatchKMeans(n_clusters=50).fit(list_of_features)\n\ntest2 = MiniBatchKMeans(n_clusters=50).fit(list_of_features.T)\n\n# compute distances between each centroid\ncentroids = test.cluster_centers_\nprint centroids.shape\ndistances = scipy.spatial.distance.pdist(centroids)\ndistances = scipy.spatial.distance.squareform(distances)\n\n# Plot the distances\nfig = plt.figure(figsize=(10,10))\nplt.imshow(distances, cmap=plt.get_cmap('jet'), interpolation='none')\nplt.title(\"Distance plot of all features Clusters\")\nplt.colorbar()",
"Inferential Simulation part 2\nFormal Simulated Inference\n\nDefine F (i.e. your model and assumptions)\nFormalize test\nDescribe test statistic\nA) Sample data from F∈ℱ0\nB) Sample data from F∈ℱA\nPlot power vs n (i.e. prospective power analysis)\nApply to data\n\nAssumptions:\nWe want to create a statistical model for our data and simulate a prospective power analysis. We take our dataset, ${x1_i, ..., x24_i}$ that are drawn i.i.d. from $F_{x1,...,x24}$, which belongs to a family of distributions. \nOur 24 variables represent the 24 different proteins being expressed and measured for a set of 6 metrics ${f_0,...,f_5}$\n\n\nOur other data is <x,y,z> coordinates of each row in our feature matrix (e.g. each synapse, or measurement that we have). One hypothesis that we can test initially, as a sanity check, is whether our data is uniformly distributed in space. \nFor each row i in normalized location data matrix D(x,y,z):\n$D_i(x), D_i(y), D_i(z) \\in U$, where U represents a uniform distribution [0,1]\nOur alternative would be that they do not.\n\n\nFormalize A Test:\n$$H_o: D_i \\in U \\ \\forall \\ i \\in {x,y,z}$$\n$$H_a: \\exists i \\in {x,y,z} \\ s.t. D_i \\not \\in U$$\nDescribe Test Statistic:\nHow are we getting a test statistic for our null and alternative hypothesis test? \nThe Kolmogorov-Smirnov test is a nonparametric test of the equality of continuous 1-D probability distributions, used to compare a sample with a reference distribution. It quantifies a distance between the empirical distribution of the sample and the cdf of the reference distribution (in this case, the uniform distribution).\nThe empirical distribution function $F_n$ for n iid observations is:\n $$F_n(x) = \\frac{1}{n} * \\sum_{i=1}^n(I_{[-\\infty,x]} \\ (X_i))$$\nwhere $I_{[-\\infty,x]} \\ (X_i) $ is the indicator function $= \\cases {{1 \\ if \\ X_i \\leqslant x}, \\ {0 \\ otherwise}}$\nThe Kolmogorov-Smirnov statistic is $D_n = \\sup_{x}|F_n(x) - F(x)|$ and $F(x)$ is a given distribution we want to test against, namely the uniform distribution.\nSample from Null and Alternative Distribution:\nThe choice of the magnitude of the null/alternative differences will affect the power because it affects the overlap of the distributions. If you can't analytically formulate your distribution for test statistics, then you will need to somehow get an estimate of your distributions instead. You can simulate samples many many times.\nSTEP 4A Sample Data From Null",
"# define number of subjects per class\nnp.random.seed(123456789) # for reproducibility, set random seed\n\nS = np.array((4, 6, 8, 10, 14, 18, 20, 26, 30, 40,\n 50, 60, 70, 80, 100, 120, 150, 200, 250,\n 300, 400, 500, 750, 1000, 1500, 2000,\n 3000, 5000))\nalpha = 0.05\nN = 50 # number of samples at each iteration\n\npow_null = np.array((), dtype=np.dtype('float64'))\npowks_null = np.array((), dtype=np.dtype('float64'))\npowks2_null = np.array((), dtype=np.dtype('float64'))\n\n# compute this statistic for various sizes of datasets\nfor s in S:\n # compute this many times for each operating point to get average\n pval = np.array((), dtype=np.dtype('float64')) \n kspval = np.array((), dtype=np.dtype('float64')) \n ks2pval = np.array((), dtype=np.dtype('float64')) \n \n for _ in itertools.repeat(None,N):\n g0 = np.random.uniform(0, 1, s) # (null)\n g1 = np.random.uniform(0, 1, s) # null\n \n # compute Kolmogorov-test statistic on generated data\n ks2test_stat = scipy.stats.ks_2samp(g0, g1)\n ks2pval = np.append(ks2pval, ks2test_stat.pvalue)\n \n kstest_stat = scipy.stats.kstest(g0, 'uniform', args=(0,1))\n kspval = np.append(kspval, kstest_stat.pvalue)\n \n # Wilcoxon\n test_stat = scipy.stats.wilcoxon(g0, g1)\n pval = np.append(pval, test_stat.pvalue)\n \n \n \n # record fraction of rejections (power) at this operating point\n pow_null = np.append(pow_null, np.sum(1.0*(pval < alpha))/N)\n \n powks_null = np.append(powks_null, np.sum(1.0*(kspval < alpha))/N)\n powks2_null = np.append(powks2_null, np.sum(1.0*(ks2pval < alpha))/N)",
"STEP 4B Sample Data From Alternative",
"pow_alt = np.array((), dtype=np.dtype('float64'))\npowks_alt = np.array((), dtype=np.dtype('float64'))\npowks2_alt = np.array((), dtype=np.dtype('float64'))\n\n# compute this statistic for various sizes of datasets\nfor s in S:\n\n # compute this many times for each operating point to get average\n pval = np.array((), dtype=np.dtype('float64')) \n kspval = np.array((), dtype=np.dtype('float64')) \n ks2pval = np.array((), dtype=np.dtype('float64')) \n \n for _ in itertools.repeat(None,N):\n g0 = np.random.uniform(0.2, 0.9, s) # alternative sample\n g1 = np.random.uniform(0, 1, s) # uniform reference\n \n # compute Kolmogorov-test statistic on generated data\n ks2test_stat = scipy.stats.ks_2samp(g0, g1)\n ks2pval = np.append(ks2pval, ks2test_stat.pvalue)\n \n kstest_stat = scipy.stats.kstest(g0, 'uniform', args=(0,1))\n kspval = np.append(kspval, kstest_stat.pvalue)\n \n # Wilcoxon\n test_stat = scipy.stats.wilcoxon(g0, g1)\n pval = np.append(pval, test_stat.pvalue)\n \n # record fraction of rejections (power) at this operating point\n pow_alt = np.append(pow_alt, np.sum(1.0*(pval < alpha))/N)\n powks_alt = np.append(powks_alt, np.sum(1.0*(kspval < alpha))/N)\n powks2_alt = np.append(powks2_alt, np.sum(1.0*(ks2pval < alpha))/N)",
"STEP 5 Plot Power vs. n On Null Set",
"# Plot power vs. n for the null and alternative sets\nfig = plt.figure(figsize=(10,5))\nplt.subplot(311)\nplt.scatter(S, pow_null, hold=True, label='null')\nplt.scatter(S, pow_alt, color='green', hold=True, label='alt')\nplt.xscale('log')\nplt.xlabel('number of samples')\nplt.ylabel('power')\nplt.title('Strength of Uniform Classification with the Wilcoxon test')\nplt.axhline(alpha, color='red', linestyle='--', label='alpha')\nplt.legend(loc=5)\nplt.show()\n\n# Plot power vs. n for the null and alternative sets\nfig = plt.figure(figsize=(10,5))\nplt.subplot(312)\nplt.scatter(S, powks_null, hold=True, label='null')\nplt.scatter(S, powks_alt, color='green', hold=True, label='alt')\nplt.xscale('log')\nplt.xlabel('number of samples')\nplt.ylabel('power')\nplt.title('Strength of Uniform Classification with the KS one-sample test')\nplt.axhline(alpha, color='red', linestyle='--', label='alpha')\nplt.legend(loc=5)\nplt.show()\n\n# Plot power vs. n for the null and alternative sets\nfig = plt.figure(figsize=(10,5))\nplt.subplot(313)\nplt.scatter(S, powks2_null, hold=True, label='null')\nplt.scatter(S, powks2_alt, color='green', hold=True, label='alt')\nplt.xscale('log')\nplt.xlabel('number of samples')\nplt.ylabel('power')\nplt.title('Strength of Uniform Classification with the KS two-sample test')\nplt.axhline(alpha, color='red', linestyle='--', label='alpha')\nplt.legend(loc=5)\nplt.show()",
"STEP 6 Actual Data\nNow we run our test statistics on our actual location normalized data.\nRun 01A, or 01B to see the analysis done on either normalized, or regular data.",
"# 01A\n\n#### RUN AT BEGINNING AND TRY NOT TO RUN AGAIN - TAKES WAY TOO LONG ####\n# read in normalized data locations\ncsvfile = \"data_normalized/locations_normalized.txt\"\n\n# load in the location data\nlist_of_locations = []\nwith open(csvfile) as file:\n for line in file:\n inner_list = [float(elt.strip()) for elt in line.split(',')]\n \n # create list of locations\n list_of_locations.append(inner_list)\n\n# convert to a numpy matrix\nlist_of_locations = np.array(list_of_locations)\n\n# 01B\n\n#### RUN AT BEGINNING AND TRY NOT TO RUN AGAIN - TAKES WAY TOO LONG ####\n#### this is only to run for non-normalized to see if we get the same results\ncsvfile = \"../data/original data/synapsinR_7thA.tif.Pivots.txt\"\n\n# load in the location data\nlist_of_locations = []\nwith open(csvfile) as file:\n for line in file:\n inner_list = [float(elt.strip()) for elt in line.split(',')]\n \n # create list of locations\n list_of_locations.append(inner_list)\n\n# convert to a numpy matrix\nlist_of_locations = np.array(list_of_locations)\n\n# run Kolmogorov-Smirnov Test \nx_locations = list_of_locations[:,0]\ny_locations = list_of_locations[:,1]\nz_locations = list_of_locations[:,2]\n\n# KS 1 sample test; args are (loc, scale) for scipy.stats.uniform,\n# so the reference range is [min, min + scale]\ntest_x = scipy.stats.kstest(x_locations, 'uniform', args=(min(x_locations),max(x_locations)-min(x_locations)))\ntest_y = scipy.stats.kstest(y_locations, 'uniform', args=(min(y_locations),max(y_locations)-min(y_locations)))\ntest_z = scipy.stats.kstest(z_locations, 'uniform', args=(min(z_locations),max(z_locations)-min(z_locations)))\n\ng0 = np.random.uniform(0, 1, len(x_locations)) # (null)\n\n# ks_2sample test\n# test_x = scipy.stats.ks_2samp(x_locations, g0)\n# test_y = scipy.stats.ks_2samp(y_locations, g0)\n# test_z = scipy.stats.ks_2samp(z_locations, g0)\n\nprint test_x\nprint test_y\nprint test_z",
"Plotting Data\nNow that we have finished the Kolmogorov-Smirnov test on our data, we see that all three columns of our normalized location data do not come from a uniform distribution. \nWe want to plot our data, but be wary that although it looks uniform, we reject that hypothesis with sufficiently high power.",
"fig = plt.figure(figsize=(10,20))\nfig.add_subplot(311)\nplt.hist(x_locations)\nplt.title(\"X Locations Histogram\", fontsize=20)\nplt.xlabel(\"Locations\", fontsize=20)\nplt.ylabel(\"Frequency\", fontsize=20)\n\nfig.add_subplot(312)\nplt.hist(y_locations)\nplt.title(\"Y Locations Histogram\", fontsize=20)\nplt.xlabel(\"Locations\", fontsize=20)\nplt.ylabel(\"Frequency\", fontsize=20)\n\nfig.add_subplot(313)\nplt.hist(z_locations)\nplt.title(\"Z Locations Histogram\", fontsize=20)\nplt.xlabel(\"Locations\", fontsize=20)\nplt.ylabel(\"Frequency\", fontsize=20)\nplt.tight_layout",
"Discussion\nAfter simulating the Kolmogorov-Smirnov test statistic, the plot shows the rejection rate under the null increasing towards 1 — does that mean we end up rejecting the null incorrectly all the time? \nIt is shown here that the Kolmogorov one-sample, two-sample, or Wilcoxon signed-rank test can be used to test whether our data comes from a uniform distribution on [0,1]. Even against alternative hypotheses close to uniform [0,1] (e.g. [0.2, 0.9]), the tests can have sufficiently high power with the number of samples we have. \nTherefore, when we reject our null hypotheses, we see that our data is not uniformly distributed in space, even though it looks like it is when we plot our histograms."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ktakagaki/kt-2015-DSPHandsOn
|
MedianFilter/Python/07. Real Data/Comparison between DTW and fast DTW.ipynb
|
gpl-2.0
|
[
"Threshold, Dynamic Time Warping\nDW (2016.01.04)",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.signal import medfilt\nimport gitInformation\nfrom neo.io.neuralynxio import NeuralynxIO\nimport quantities as pq\nimport sklearn\nfrom scipy.interpolate import Rbf\nimport fastdtw\nimport dtw  # needed for the normal-DTW comparison below\n\n% matplotlib inline\n\ngitInformation.printInformation()\n\n# Session folder with all needed neuralynx files\nsessionfolder = 'C:\\\\Users\\\\Dominik\\\\Documents\\\\GitRep\\\\kt-2015-DSPHandsOn\\\\MedianFilter\\\\Python\\\\07. Real Data'\n\n# Load the files with all the data and store them as an np.array\nNIO = NeuralynxIO(sessiondir = sessionfolder, cachedir = sessionfolder)\nblock = NIO.read_block()\nseg = block.segments[0]\nanalogsignal = seg.analogsignalarrays[0]\n\ncsc = analogsignal.magnitude\n\nplt.figure(figsize=(30,7))\nplt.plot(csc)\n\n# Filter the data with a median filter\nfiltered = medfilt(csc,45)\nnew_data = csc-filtered\n\nplt.figure(figsize=(30,10))\nplt.plot(csc, color = 'cornflowerblue')\nplt.plot(filtered, color = 'g', lw = 1.5)\nplt.plot(new_data, color = 'r')\n\n# Automatic threshold calculation\nthreshold = 5*np.median(abs(new_data)/0.6745)\n\n# Declaring counter and dead time.\n# Dead time: once the threshold is reached, we wait a fixed number of\n# samples (here 16) until the threshold can be activated again\ncount = -1\ncount2 = 0\ntimer = 0\n# Dictionary with all thresholded shapes\nthresholds = {}\n# Get the value in the new_data array:\nfor i in new_data:\n # Increment the counter (counter = position in the array)\n count += 1\n if i >= threshold:\n # check the thresholded window if some values are bigger than 0.00005\n temp = [i for i in new_data[count -6 : count + 18] if i >= 0.00005]\n # If no values are bigger than 0.00005 and the dead time is zero,\n # save the window in the dictionary \n if len(temp) == 0 and timer == 0:\n # set the timer to 16, so 16 samples will be passed\n timer = 16\n # increment count2, for the array name\n count2 += 1\n thresholds[\"spike{0}\".format(count2)] = new_data[count -6 : count + 18]\n elif timer > 0: \n # Decrement the timer.\n timer -= 1\n else: \n pass\n\n# Transform the thresholded shapes into an array\nthresholds_array = np.zeros((24,len(thresholds)))\ncount = -1\nfor o in thresholds:\n count += 1\n thresholds_array[:,count] = thresholds[o]\n\nx = np.arange(24)\nx_new = np.linspace(0,24,240)\n#Interpolate each spike with a cubic RBF function\nthresholds_interp = np.zeros((len(x_new),len(thresholds_array[1,:])))\nfor o in range(len(thresholds_array[1,:])):\n newfunc = Rbf(x, thresholds_array[:,o], function = 'cubic')\n thresholds_interp[:,o] = newfunc(x_new)\n\nthresholds_norm = thresholds_array/float(thresholds_array.max())\n\ncount = -1\nfor o in range(len(thresholds_interp[0,:])):\n count += 1\n fig = plt.figure(1)\n plt.axis([0,25,-0.78, 1.0])\n plt.plot( thresholds_norm[:,o], color = 'black', linewidth = 0.4)\n plt.xlabel(str(count))\n\ntemplate1 = thresholds_norm[:,29]\ntemplate2 = thresholds_norm[:,31]\ntemplate3 = thresholds_norm[:,75]\ntemplate4 = thresholds_norm[:,124]\ntemplate5 = thresholds_norm[:,175]\n\ntemplates = np.zeros((1,24))\n\n#templates[0,:] = template1\ntemplates[0,:] = template1\n#templates[2,:] = template3\n#templates[3,:] = template4\n#templates[4,:] = template5\n\nplt.plot(templates[0,:])\n#plt.plot(templates[1,:])\n#plt.plot(templates[2,:])\n#plt.plot(templates[3,:])\n#plt.plot(templates[4,:])\n#plt.plot(templates[5,:])",
"Comparison between fastDTW and normal DTW",
"label = np.zeros(len(thresholds_norm[0,:]))\ndef fastDtw(thresholds_norm, template1, label):\n count = 0\n # Go through all detected windows and compare them with the template\n for k in range(len(thresholds_norm[0,:])): \n dist = fastdtw.fastdtw(template1,thresholds_norm[:,k])\n dist = dist[0]\n # If the distance between both templates is smaller than 1.6, it's a match\n if dist < 1.6:\n label[k] = 1 \n else:\n pass\n return label\n\nlabel1 = np.zeros(len(thresholds_norm[0,:]))\ndef normalDtw(thresholds_norm, template1, label1):\n count = 0\n for k in range(len(thresholds_norm[0,:])):\n temp = thresholds_norm[:,k]\n o = template1.reshape(-1,1)\n dist, cost, acc, path = dtw.dtw(o, temp, dist=lambda o, temp: np.linalg.norm(o - temp, ord=1))\n \n if dist < 0.03:\n label1[k] = 1 \n else:\n pass\n\nimport time\nstart_time = time.time()\nfastDtw(thresholds_norm, template1, label)\nprint(\"--- %s seconds ---\" % (time.time() - start_time))\n\nimport time\nstart_time = time.time()\nnormalDtw(thresholds_norm, template1, label1)\nprint(\"--- %s seconds ---\" % (time.time() - start_time))",
"Fast DTW is about 3 times faster than the normal DTW. Using an interpolation with 10 times more samples would make the algorithm 16 times slower than normal.",
"for i in range(len(label1)):\n if label1[i] == 1:\n plt.plot(thresholds_array[:,i], color = 'black')\n\nfor i in range(len(label)):\n if label[i] == 1:\n plt.plot(thresholds_array[:,i], color = 'black')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
joelagnel/lisa
|
ipynb/examples/trace_analysis/TraceAnalysis_IdleStates.ipynb
|
apache-2.0
|
[
"Trace Analysis Examples\nIdle States Residency Analysis\nThis notebook shows the features provided by the idle state analysis module. It will be necessary to collect the following events:\n\ncpu_idle, to filter out intervals of time in which the CPU is idle\nsched_switch, to recognise tasks on kernelshark\n\nDetails on idle state profiling are given in Per-CPU/Per-Cluster Idle State Residency Profiling below.",
"import logging\nfrom conf import LisaLogging\nLisaLogging.setup()\n\n%matplotlib inline\n\nimport os\n\n# Support to access the remote target\nfrom env import TestEnv\n\n# Support to access cpuidle information from the target\nfrom devlib import *\n\n# Support to configure and run RTApp based workloads\nfrom wlgen import RTA, Ramp\n\n# Support for trace events analysis\nfrom trace import Trace\n\n# DataFrame support\nimport pandas as pd\nfrom pandas import DataFrame\n\n# Trappy (plots) support\nfrom trappy import ILinePlot\nfrom trappy.stats.grammar import Parser",
"Target Configuration\nThe target configuration is used to describe and configure your test environment.\nYou can find more details in examples/utils/testenv_example.ipynb.\nOur target is a Juno R0 development board running Linux.",
"# Setup a target configuration\nmy_conf = {\n \n # Target platform and board\n \"platform\" : 'linux',\n \"board\" : 'juno',\n \n # Target board IP/MAC address\n \"host\" : '192.168.0.1',\n \n # Login credentials\n \"username\" : 'root',\n \"password\" : 'juno',\n \n \"results_dir\" : \"IdleAnalysis\",\n \n # RTApp calibration values (comment to let LISA do a calibration run)\n \"rtapp-calib\" : {\n \"0\": 318, \"1\": 125, \"2\": 124, \"3\": 318, \"4\": 318, \"5\": 319\n },\n \n # Tools required by the experiments\n \"tools\" : ['rt-app', 'trace-cmd'],\n \"modules\" : ['bl', 'cpufreq', 'cpuidle'],\n \"exclude_modules\" : ['hwmon'],\n \n # FTrace events to collect for all the tests configuration which have\n # the \"ftrace\" flag enabled\n \"ftrace\" : {\n \"events\" : [\n \"cpu_idle\",\n \"sched_switch\"\n ],\n \"buffsize\" : 10 * 1024,\n },\n}\n\n# Initialize a test environment\nte = TestEnv(my_conf, wipe=False, force_new=True)\ntarget = te.target\n\n# We're going to run quite a heavy workload to try and create short idle periods.\n# Let's set the CPU frequency to max to make sure those idle periods exist\n# (otherwise at a lower frequency the workload might overload the CPU\n# so it never went idle at all)\nte.target.cpufreq.set_all_governors('performance')",
"Workload configuration and execution\nDetailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.\nThis experiment:\n- Runs a periodic RT-App workload, pinned to CPU 1, that ramps down from 80% to 10% over 7.5 seconds\n- Uses perturb_cpus to ensure 'cpu_idle' events are present in the trace for all CPUs\n- Triggers and collects ftrace output",
"cpu = 1\ndef experiment(te):\n\n # Create RTApp RAMP task\n rtapp = RTA(te.target, 'ramp', calibration=te.calibration())\n rtapp.conf(kind='profile',\n params={\n 'ramp' : Ramp(\n start_pct = 80,\n end_pct = 10,\n delta_pct = 5,\n time_s = 0.5,\n period_ms = 5,\n cpus = [cpu]).get()\n })\n\n # FTrace the execution of this workload\n te.ftrace.start()\n # Momentarily wake all CPUs to ensure cpu_idle trace events are present from the beginning\n te.target.cpuidle.perturb_cpus()\n rtapp.run(out_dir=te.res_dir)\n te.ftrace.stop()\n\n # Collect and keep track of the trace\n trace_file = os.path.join(te.res_dir, 'trace.dat')\n te.ftrace.get_trace(trace_file)\n\n # Dump platform descriptor\n te.platform_dump(te.res_dir)\n\nexperiment(te)",
"Parse trace and analyse data",
"# Base folder where tests folder are located\nres_dir = te.res_dir\nlogging.info('Content of the output folder %s', res_dir)\n!tree {res_dir}\n\ntrace = Trace(te.platform, res_dir, events=my_conf['ftrace']['events'])",
"Per-CPU Idle State Residency Profiling\nIt is possible to get the residency in each idle state of a CPU or a cluster with the following commands:",
"# Idle state residency for CPU 3\nCPU=3\nstate_res = trace.data_frame.cpu_idle_state_residency(CPU)\nstate_res",
"For the translation between the idle value and its description:",
"DataFrame(data={'value': state_res.index.values,\n 'name': [te.target.cpuidle.get_state(i, cpu=CPU) for i in state_res.index.values]})",
"The IdleAnalysis module provides methods for plotting residency data:",
"ia = trace.analysis.idle\n\n# Actual time spent in each idle state\nia.plotCPUIdleStateResidency([1,2])\n\n# Percentage of time spent in each idle state\nia.plotCPUIdleStateResidency([1,2], pct=True)",
"CPU idle state over time\nTake a look at the target's idle states:",
"te.target.cpuidle.get_states()",
"Now use trappy to plot the idle state of a single CPU over time. Higher is deeper: the plot is at -1 when the CPU is active, 0 for WFI, 1 for CPU sleep, etc.\nWe should see that as the workload ramps down and the idle periods become longer, the idle states used become deeper.",
"p = Parser(trace.ftrace, filters = {'cpu_id': cpu})\nidle_df = p.solve('cpu_idle:state')\nILinePlot(idle_df, column=cpu, drawstyle='steps-post').view()",
"Examine idle period lengths\nLet's get a DataFrame showing the length of each idle period on the CPU and the index of the cpuidle state that was entered.",
"def get_idle_periods(df):\n series = df[cpu]\n series = series[series.shift() != series].dropna()\n if series.iloc[0] == -1:\n series = series.iloc[1:]\n\n idles = series.iloc[0::2] \n wakeups = series.iloc[1::2]\n if len(idles) > len(wakeups):\n idles = idles.iloc[:-1]\n else:\n wakeups = wakeups.iloc[:-1]\n\n lengths = pd.Series((wakeups.index - idles.index), index=idles.index)\n return pd.DataFrame({\"length\": lengths, \"state\": idles})",
"Make a scatter plot of the length of idle periods against the state that was entered. We should see that for long idle periods, deeper states were entered (i.e. we should see a positive correlation between the X and Y axes).",
"lengths = get_idle_periods(idle_df)\nlengths.plot(kind='scatter', x='length', y='state')",
"Draw a histogram of the length of idle periods shorter than 10ms in which the CPU entered cpuidle state 2.",
"df = lengths[(lengths['state'] == 2) & (lengths['length'] < 0.010)]\ndf.hist(column='length', bins=50)",
"Per-cluster Idle State Residency",
"# Idle state residency for CPUs in the big cluster\ntrace.data_frame.cluster_idle_state_residency('big')\n\n# Actual time spent in each idle state for CPUs in the big and LITTLE clusters\nia.plotClusterIdleStateResidency(['big', 'LITTLE'])\n\n# Percentage of time spent in each idle state for CPUs in the big and LITTLE clusters\nia.plotClusterIdleStateResidency(['big', 'LITTLE'], pct=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
luofan18/deep-learning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
[
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.",
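The normalization described above reduces to a single vectorized division; here is a minimal standalone sketch, independent of the notebook's helper and test modules:

```python
import numpy as np

def normalize(x):
    """Scale 8-bit image data from [0, 255] to [0, 1], keeping the shape."""
    x = np.asarray(x, dtype=np.float32)
    return x / 255.0  # dividing by 255 (the maximum pixel value) keeps 1.0 reachable
```

Dividing by 255 rather than 256 keeps the upper endpoint of the range inclusive, as the spec requires.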
"def normalize(x):\n    \"\"\"\n    Normalize a list of sample image data in the range of 0 to 1\n    : x: List of image data. The image shape is (32, 32, 3)\n    : return: Numpy array of normalized data\n    \"\"\"\n    # TODO: Implement Function\n    arrays = []\n    for x_ in x:\n        array = np.array(x_)\n        arrays.append(array)\n    # divide by 255. (the maximum pixel value) so the range [0, 1] is inclusive\n    return np.stack(arrays, axis=0) / 255.\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.",
"def one_hot_encode(x):\n    \"\"\"\n    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n    : x: List of sample Labels\n    : return: Numpy array of one-hot encoded labels\n    \"\"\"\n    # TODO: Implement Function\n    # class_num = np.array(x).max()\n    class_num = 10\n    num = len(x)\n    out = np.zeros((num, class_num))\n    for i in range(num):\n        # labels already run from 0 to 9, so they index the columns directly\n        out[i, x[i]] = 1\n    return out\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
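As the "don't reinvent the wheel" hint above suggests, the same encoding also falls out of indexing an identity matrix. A hypothetical alternative sketch (not the notebook's graded implementation):

```python
import numpy as np

def one_hot_encode_eye(x, n_classes=10):
    """Row i of the identity matrix is exactly the one-hot vector for label i."""
    return np.eye(n_classes)[np.asarray(x)]
```

Because the identity matrix is fixed, the same label always maps to the same encoding between calls, which is the consistency property the exercise asks for.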
"Randomize Data\nAs you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. 
Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.",
"import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n # print ('image_shape')\n # print (image_shape)\n shape = (None, )\n shape = shape + image_shape\n # print ('shape')\n # print (shape)\n inputs = tf.placeholder(tf.float32, shape=shape, name='x')\n # print ('inputs')\n # print (inputs)\n return inputs\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n shape = (None, )\n shape = shape + (n_classes, )\n return tf.placeholder(tf.float32, shape=shape, name='y')\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, name='keep_prob')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.",
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, maxpool=True):\n    \"\"\"\n    Apply convolution then max pooling to x_tensor\n    :param x_tensor: TensorFlow Tensor\n    :param conv_num_outputs: Number of outputs for the convolutional layer\n    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer\n    :param conv_strides: Stride 2-D Tuple for convolution\n    :param pool_ksize: kernel size 2-D Tuple for pool\n    :param pool_strides: Stride 2-D Tuple for pool\n    :param maxpool: Apply max pooling after the convolution if True\n    : return: A tensor that represents convolution and max pooling of x_tensor\n    \"\"\"\n    # TODO: Implement Function\n    input_channel = x_tensor.get_shape().as_list()[-1]\n    weights_size = conv_ksize + (input_channel,) + (conv_num_outputs,)\n    conv_strides = (1,) + conv_strides + (1,)\n    pool_ksize = (1,) + pool_ksize + (1,)\n    pool_strides = (1,) + pool_strides + (1,)\n    \n    weights = tf.Variable(tf.random_normal(weights_size, stddev=0.01))\n    biases = tf.Variable(tf.zeros(conv_num_outputs))\n    out = tf.nn.conv2d(x_tensor, weights, conv_strides, padding='SAME')\n    out = out + biases\n    out = tf.nn.relu(out)\n    if maxpool:\n        out = tf.nn.max_pool(out, pool_ksize, pool_strides, padding='SAME')\n    return out\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
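For intuition about what pool_ksize=(2, 2) and pool_strides=(2, 2) do, here is a pure-NumPy toy version of 2x2 max pooling on a single-channel image (an illustration only, not the tf.nn.max_pool call used above):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 over a 2-D array (valid region only)."""
    h, w = x.shape
    x = x[:h // 2 * 2, :w // 2 * 2]               # drop any odd remainder rows/cols
    # group into (row_block, 2, col_block, 2) and take the max of each 2x2 block
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[1, 2, 5, 0],
                [3, 4, 1, 1],
                [0, 0, 9, 2],
                [7, 0, 0, 0]])
pooled = max_pool_2x2(img)  # each 2x2 block collapses to its maximum
```

Each output pixel keeps only the strongest activation of its 2x2 window, halving the spatial resolution.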
"Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def flatten(x_tensor):\n    \"\"\"\n    Flatten x_tensor to (Batch Size, Flattened Image Size)\n    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n    : return: A tensor of size (Batch Size, Flattened Image Size).\n    \"\"\"\n    # TODO: Implement Function\n    num, height, width, channel = tuple(x_tensor.get_shape().as_list())\n    new_shape = (-1, height * width * channel)\n    # print ('new_shape')\n    # print (new_shape)\n    return tf.reshape(x_tensor, new_shape)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n num, dim = x_tensor.get_shape().as_list()\n weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs)))\n biases = tf.Variable(tf.zeros(num_outputs))\n return tf.nn.relu(tf.matmul(x_tensor, weights) + biases)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.",
"def output(x_tensor, num_outputs):\n    \"\"\"\n    Apply an output layer to x_tensor using weight and bias\n    : x_tensor: A 2-D tensor where the first dimension is batch size.\n    : num_outputs: The number of output that the new tensor should be.\n    : return: A 2-D tensor where the second dimension is num_outputs.\n    \"\"\"\n    # TODO: Implement Function\n    num, dim = x_tensor.get_shape().as_list()\n    # pass the scale as stddev explicitly; the second positional argument\n    # of tf.random_normal is the mean, not the standard deviation\n    weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs)))\n    biases = tf.Variable(tf.zeros(num_outputs))\n    return tf.matmul(x_tensor, weights) + biases\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.",
"def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_ksize3 = (3, 3)\n conv_ksize1 = (1, 1)\n conv_ksize5 = (5, 5)\n conv_ksize7 = (7, 7)\n conv_strides1 = (1, 1)\n conv_strides2 = (2, 2)\n pool_ksize = (2, 2)\n pool_strides = (2, 2)\n channels = [32,128,512,512]\n # L = 4\n out = x\n # 6 layers\n # for i in range(int(L / 4)):\n out = conv2d_maxpool(out, channels[0], conv_ksize7, conv_strides1, pool_ksize, pool_strides, maxpool=True)\n out = conv2d_maxpool(out, channels[1], conv_ksize5, conv_strides1, pool_ksize, pool_strides, maxpool=True)\n out = conv2d_maxpool(out, channels[2], conv_ksize3, conv_strides1, pool_ksize, pool_strides, maxpool=True)\n # out = conv2d_maxpool(out, channels[3], conv_ksize5, conv_strides2, pool_ksize, pool_strides, maxpool=True)\n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n out = flatten(out)\n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n # by remove this fully connected layer can improve performance\n out = fully_conn(out, 256) \n \n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n out = tf.nn.dropout(out, keep_prob)\n out = output(out, 10)\n \n # TODO: return output\n return out\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build 
the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
"Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.",
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n feed_dict = {keep_prob: keep_probability, x: feature_batch, y: label_batch}\n session.run(optimizer, feed_dict=feed_dict)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.",
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n    \"\"\"\n    Print information about loss and validation accuracy\n    : session: Current TensorFlow session\n    : feature_batch: Batch of Numpy image data\n    : label_batch: Batch of Numpy label data\n    : cost: TensorFlow cost function\n    : accuracy: TensorFlow accuracy function\n    \"\"\"\n    # TODO: Implement Function\n    # prints loss, train_accuracy and val_accuracy;\n    # train_accuracy is included to help spot overfitting, and\n    # val_accuracy is computed in batches to limit memory use\n    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n    train_accuracy = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n    \n    batch = feature_batch.shape[0]\n    num_valid = valid_features.shape[0]\n    val_accuracy = 0\n    for i in range(0, num_valid, batch):\n        end_i = i + batch\n        if end_i > num_valid:\n            end_i = num_valid\n        batch_accuracy = session.run(accuracy, feed_dict={\n            x: valid_features[i:end_i], y: valid_labels[i:end_i], keep_prob: 1.0})\n        batch_accuracy *= (end_i - i)\n        val_accuracy += batch_accuracy\n    val_accuracy /= num_valid\n    print ('loss is {}, train_accuracy is {}, val_accuracy is {}'.format(loss, train_accuracy, val_accuracy))",
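The validation loop above is just a sample-weighted average of per-batch accuracies; the arithmetic can be checked in isolation (a sketch with made-up numbers, independent of TensorFlow):

```python
import numpy as np

def weighted_accuracy(batch_accs, batch_sizes):
    """Sample-weighted mean of per-batch accuracies."""
    a = np.asarray(batch_accs, dtype=float)
    n = np.asarray(batch_sizes, dtype=float)
    return float((a * n).sum() / n.sum())

# 8 of 10 correct plus 1 of 2 correct is 9 of 12 correct overall
overall = weighted_accuracy([0.8, 0.5], [10, 2])
```

Weighting by batch size matters because the last batch can be smaller than the rest; a plain mean of batch accuracies would over-weight it.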
"Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout",
"# TODO: Tune Parameters\nepochs = 10\nbatch_size = 128\nkeep_probability = 0.8",
"Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)",
"Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, 
random_test_predictions)\n\n\ntest_model()",
"Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/td2a_eco/td2a_eco_regressions_lineaires.ipynb
|
mit
|
[
"Linear Regressions with Statsmodels and Scikit-Learn\nThere are many ways to run linear regressions in Python. \nWe picked two of them, statsmodels and scikit-learn. Each library has its own strengths and weaknesses; one is more data-science oriented while the other is aimed more at economists. \n(inspired by the page Linear Regression with Statsmodels and Scikit-Learn)\nFor instance, statsmodels gives you the regression table directly, whereas with scikit-learn it is less immediate. \nWe start by loading all the packages used in this notebook:",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()\n\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import stats\nimport seaborn as sns\nimport statsmodels.api as sm\nfrom sklearn import linear_model",
"We will use the famous Iris dataset, which can be loaded from the seaborn library.\nFor each flower, we have measurements of the length and width of its sepals and petals.\nTo see the difference in pictures: Sépale.",
"iris = sns.load_dataset(\"iris\")\niris.head()\n\nsns.pairplot(iris, hue=\"species\");",
"These plots show a strong relationship between petal length and petal width. \nWe run a single-variable regression of width on length with seaborn. \nSingle-variable regression with seaborn",
"sns.lmplot(x=\"petal_length\", y=\"petal_width\", data=iris);",
"While the plot clearly shows that a relationship exists, we do not learn much else from it. \nRegression using scikit-learn",
"X = iris[[\"petal_length\"]]\ny = iris[\"petal_width\"]\n\n# Fit the model\nmodel = linear_model.LinearRegression()\nresults = model.fit(X, y)\n\nprint(results.intercept_, results.coef_)",
"The best linear approximation is therefore y=a+bx with \n- a=−0.363075521319\n- b=0.41575542\nBut in terms of presenting results, this is a bit limited... No standard errors, no $R^2$, etc. (at least not automatically) \nRegression using statsmodels\nNote: X and y are swapped in the model specification for this library.",
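The fitted line y = a + bx can be reproduced for any data with a one-line NumPy fit; a generic check on noise-free synthetic data (not the iris values):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = -0.36 + 0.42 * x            # synthetic data with known coefficients
b, a = np.polyfit(x, y, 1)      # np.polyfit returns coefficients highest power first: (slope, intercept)
```

On noise-free data the fit recovers the coefficients exactly, which makes this a handy sanity check against either library's output.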
"model = sm.OLS(y, X)\nresults = model.fit()\n# With statsmodels, the output looks very much like R's\nprint(results.summary())",
"If you look more closely, you will notice that the coefficients of the two regressions are close but not equal. A precision problem? No...\nBe careful with statsmodels: it does not add an intercept ($\\beta_0$) by itself, whereas scikit-learn does.",
"X = iris[\"petal_length\"]\nX = np.vander(X, 2)  # here we add the intercept: np.vander appends a column of ones\ny = iris[\"petal_width\"]\nmodel = sm.OLS(y, X)\nresults = model.fit()\nprint(results.summary())",
"There we go: the scikit-learn and statsmodels coefficients are now almost identical (in fact, beyond the 10th decimal place they still differ slightly...).\nLooking at the $R^2$, this regression seems rather good (then again, saying that an iris's petal length and width are correlated is hardly surprising...).\nBut let's see whether we can go further, for example by adding the species of the iris as categorical variables (added as indicator variables, or dummies).",
"dummies = pd.get_dummies(iris[\"species\"])\niris = pd.concat([iris, dummies], axis=1)\niris.head()",
"Multiple linear regression with statsmodels",
"X = iris[[\"petal_length\", \"setosa\", \"versicolor\", \"virginica\"]]\nX = sm.add_constant(X)  # another way to add a constant\ny = iris[\"petal_width\"]\n\nmodel = sm.OLS(y, X)\nresults = model.fit()\nprint(results.summary())",
"With this version, the $R^2$ improves, but our estimator is less precise...\nTo go further, many examples are available here: \n\nOLS\nGLS\nQuantile regression"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
scottprahl/miepython
|
docs/01_basics.ipynb
|
mit
|
[
"Mie Basics\nScott Prahl\nApril 2021\nIf miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)",
"#!pip install --user miepython\n\nimport importlib.resources\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ntry:\n import miepython\n\nexcept ModuleNotFoundError:\n print('miepython not installed. To install, uncomment and run the cell above.')\n print('Once installation is successful, rerun this cell again.')",
"Index of Refraction and Size Parameter\nWhen a monochromatic plane wave is incident on a sphere, it scatters and absorbs light depending on the properties of the light and sphere. If the sphere is in a vacuum, then the complex index of refraction of the sphere is\n$$\nm_\\mathrm{vac}= m_\\mathrm{re}- j\\,m_\\mathrm{im}\n$$\nThe factor $m_\\mathrm{im}=\\kappa$ is the index of absorption or the index of attenuation. \nThe non-dimensional sphere size parameter for a sphere in a vacuum is\n$$\nx_\\mathrm{vac} = \\frac{2\\pi r }{\\lambda_\\mathrm{vac}}\n$$\nwhere $r$ is the radius of the sphere and $\\lambda_\\mathrm{vac}$ is the wavelength of the light in a vacuum.\nIf the sphere is in a non-absorbing environment with real index $n_\\mathrm{env}$ then the Mie scattering formulas can still be used, but the index of refraction of the sphere is now \n$$\nm= \\frac{m_\\mathrm{re}- j\\,m_\\mathrm{im}}{ n_\\mathrm{env}}\n$$\nThe wavelength in the sphere size parameter should be the wavelength of the plane wave in the environment, thus\n$$\nx = \\frac{2\\pi r } {\\lambda_\\mathrm{vac}/n_\\mathrm{env}}\n$$\nSign Convention\nThe sign of the imaginary part of the index of refraction in miepython is assumed negative (as shown above). This convention is standard for atmospheric science and follows that of van de Hulst.\nAbsorption Coefficient\nThe imaginary part of the refractive index is a non-dimensional representation of light absorption. 
This can be seen by writing out the equation for a monochromatic, planar electric field\n$$\n\\mathcal{E}(z,t) = \\mathcal{E}_0 e^{j (k z - \\omega t)}\n$$\nwhere $k$ is the complex wavenumber \n$$\nk=k_\\mathrm{re}- j\\,k_\\mathrm{im}=2\\pi {{m_\\mathrm{re}}\\over{\\lambda_\\mathrm{vac}}}-2\\pi j \\frac{m_\\mathrm{im}}{\\lambda_\\mathrm{vac}}\n$$\nThus\n$$\n\\mathcal{E}(z,t) = \\mathcal{E}_0 e^{-k_\\mathrm{im}z}e^{j (k_\\mathrm{re} z - \\omega t)}\n$$\nand the corresponding time-averaged irradiance $E(z)$ is\n$$\nE(z) = {1\\over2} c\\epsilon |\\mathcal{E}|^2 = E_0 \\exp(-2k_\\mathrm{im}z) = E_0 \\exp(-\\mu_a z)\n$$\nand therefore\n$$\n\\mu_a = 2k_\\mathrm{im} = 4\\pi\\cdot \\frac{ m_\\mathrm{im}}{ \\lambda_\\mathrm{vac}}\n$$\nThus the imaginary part of the index of refraction is basically just the absorption coefficient measured in wavelengths.",
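The final relation $\mu_a = 4\pi m_\mathrm{im}/\lambda_\mathrm{vac}$ is easy to evaluate numerically (the index and wavelength below are illustrative values, not measured data):

```python
import numpy as np

m_im = 0.002       # imaginary part of the refractive index (illustrative)
lam_vac = 0.7086   # vacuum wavelength in microns (illustrative)
mu_a = 4 * np.pi * m_im / lam_vac   # absorption coefficient in 1/micron
```

Because the wavelength is in microns, the resulting absorption coefficient comes out in inverse microns.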
"miepython.mie_S1_S2(1.507-0.002j , 0.7086 , np.array([-1.0],dtype=float))\n\nmiepython.mie_S1_S2(1.507-0.002j , 0.7086 , -1)",
"Complex Refractive Index of Water\nLet's import and plot some data from the M.S. Thesis of D. Segelstein, \"The Complex Refractive Index of Water\",\nUniversity of Missouri — Kansas City, (1981) to get some sense of the complex index of refraction. The imaginary part shows absorption peaks at 3 and 6 microns, as well as the broad peak starting at 10 microns.",
"#import the Segelstein data\n#h2o = np.genfromtxt('http://omlc.org/spectra/water/data/segelstein81_index.txt', delimiter='\\t', skip_header=4)\n\nnname = \"data/segelstein81_index.txt\"\nref = importlib.resources.files('miepython').joinpath(nname)\nh2o = np.genfromtxt(ref, delimiter='\\t', skip_header=4)\n\nh2o_lam = h2o[:,0]\nh2o_mre = h2o[:,1]\nh2o_mim = h2o[:,2]\n\n#plot it\nplt.plot(h2o_lam,h2o_mre)\nplt.plot(h2o_lam,h2o_mim*3)\nplt.plot((1,15),(1.333,1.333))\nplt.xlim((1,15))\nplt.ylim((0,1.8))\nplt.xlabel('Wavelength (microns)')\nplt.ylabel('Refractive Index')\nplt.annotate(r'$m_\\mathrm{re}$', xy=(3.4,1.5))\nplt.annotate(r'$m_\\mathrm{im}\\,\\,(3\\times)$', xy=(3.4,0.5))\nplt.annotate(r'$m_\\mathrm{re}=1.333$', xy=(10,1.36))\n\nplt.title('Infrared Complex Refractive Index of Water')\n\nplt.show()\n\n# import the Johnson and Christy data for gold\n# https://refractiveindex.info/tmp/data/main/Au/Johnson.txt\n\nnname = \"data/Johnson.txt\"\nref = importlib.resources.files('miepython').joinpath(nname)\nau = np.genfromtxt(ref, delimiter='\\t')\n\n# data is stacked so need to rearrange\nN = len(au)//2\nau_lam = au[1:N,0]\nau_mre = au[1:N,1]\nau_mim = au[N+1:,1]\n\nplt.scatter(au_lam,au_mre,s=1,color='blue')\nplt.scatter(au_lam,au_mim,s=1,color='red')\nplt.xlim((0.2,2))\nplt.xlabel('Wavelength (microns)')\nplt.ylabel('Refractive Index')\nplt.annotate(r'$m_\\mathrm{re}$', xy=(1.0,0.5),color='blue')\nplt.annotate(r'$m_\\mathrm{im}$', xy=(1.0,8),color='red')\n\nplt.title('Complex Refractive Index of Gold')\n\nplt.show()",
"The Absorption Coefficient of Water",
"mua = 4*np.pi* h2o_mim/h2o_lam\n\nplt.plot(h2o_lam,mua)\nplt.xlim((0.1,20))\nplt.ylim((0,1.5))\nplt.xlabel('Wavelength (microns)')\nplt.ylabel('Absorption Coefficient (1/micron)')\n\nplt.title('Water')\n\nplt.show()",
"Size Parameters\nSize Parameter $x$\nThe sphere size relative to the wavelength is called the size \nparameter $x$ \n$$\nx = 2\\pi {r/\\lambda}\n$$\nwhere $r$ is the radius of the sphere.",
"N=500\nm=1.5\nx = np.linspace(0.1,20,N) # also in microns\n\nqext, qsca, qback, g = miepython.mie(m,x)\n\nplt.plot(x,qsca)\nplt.xlabel(\"Sphere Size Parameter x\")\nplt.ylabel(\"bScattering Efficiency\")\nplt.title(\"index of refraction m=1.5\")\nplt.show()",
"Size Parameter $\\rho$\nThe value $\\rho$ is also sometimes used to facilitate comparisons for spheres with different indicies of refraction\n$$\n\\rho = 2x(m-1)\n$$\nNote that when $m=1.5$ and therefore $\\rho=x$. \nAs can be seen in the graph below, the scattering for spheres with different indicies of refraction pretty similar when plotted against $\\rho$, but no so obvious when plotted against $x$",
"N=500\nm=1.5\nrho = np.linspace(0.1,20,N) # also in microns\n\nm = 1.5\nx15 = rho/2/(m-1)\nqext, sca15, qback, g = miepython.mie(m,x15)\n\nm = 1.1\nx11 = rho/2/(m-1)\nqext, sca11, qback, g = miepython.mie(m,x11)\n\nf, (ax1, ax2) = plt.subplots(1, 2, sharey=True)\n\nax1.plot(rho,sca11,color='blue')\nax1.plot(rho,sca15,color='red')\nax1.set_xlabel(r\"Size parameter $\\rho$\")\nax1.set_ylabel(\"Scattering Efficiency\")\nax1.annotate('m=1.5', xy=(10,3.3), color='red')\nax1.annotate('m=1.1', xy=(8,1.5), color='blue')\n\nax2.plot(x11,sca11,color='blue')\nax2.plot(x15,sca15,color='red')\nax2.set_xlabel(r\"Size parameter $x$\")\nax2.annotate('m=1.5', xy=(5,4), color='red')\nax2.annotate('m=1.1', xy=(40,1.5), color='blue')\n\nplt.show()",
"Embedded spheres\nThe short answer is that everything just scales. \nSpecifically, divide the index of the sphere $m$ by the index of the surrounding material to get a relative index $m'$\n$$\nm' =\\frac{m}{n_\\mathrm{surroundings}}\n$$\nThe wavelength in the surrounding medium $\\lambda'$ is also altered\n$$\n\\lambda' = \\frac{\\lambda_\\mathrm{vacuum}}{n_\\mathrm{surroundings}}\n$$\nThus, the relative size parameter $x'$ becomes\n$$\nx' = \\frac{2 \\pi r} {\\lambda'}= \\frac{2 \\pi r n_\\mathrm{surroundings}}{ \\lambda_\\mathrm{vacuum}}\n$$\nScattering calculations for an embedded sphere uses $m'$ and $x'$ instead of $m$ and $x$.\nIf the spheres are air ($m=1$) bubbles in water ($m=4/3$), then the relative index of refraction will be about \n$$\nm' = m/n_\\mathrm{water} \\approx 1.0/(4/3) = 3/4 = 0.75\n$$",
"N=500\nm=1.0\nr=500 # nm\nlambdaa = np.linspace(300,800,N) # also in nm\n\nmwater = 4/3 # rough approximation\nmm = m/mwater\nxx = 2*np.pi*r*mwater/lambdaa\n\nqext, qsca, qback, g = miepython.mie(mm,xx)\n\nplt.plot(lambdaa,qsca)\nplt.xlabel(\"Wavelength (nm)\")\nplt.ylabel(\"Scattering Efficiency\")\nplt.title(\"One micron diameter air bubbles in water\")\nplt.show()",
"or just use ez_mie(m, d, lambda0, n_env)",
"m_sphere = 1.0\nn_water = 4/3\nd = 1000 # nm\nlambda0 = np.linspace(300,800) # nm\n\nqext, qsca, qback, g = miepython.ez_mie(m_sphere, d, lambda0, n_water)\n\nplt.plot(lambda0,qsca)\nplt.xlabel(\"Wavelength (nm)\")\nplt.ylabel(\"Scattering Efficiency\")\nplt.title(\"One micron diameter air bubbles in water\")\nplt.show()",
"Multiple scatterers\nThis will eventually turn into a description of the scattering coefficient.",
"m = 1.5\nx = np.pi/3\ntheta = np.linspace(-180,180,1800)\nmu = np.cos(theta/180*np.pi)\ns1,s2 = miepython.mie_S1_S2(m,x,mu)\nscat = 5*(abs(s1)**2+abs(s2)**2)/2 #unpolarized scattered light\n\nN=13\nxx = 3.5 * np.random.rand(N, 1) - 1.5\nyy = 5 * np.random.rand(N, 1) - 2.5\n\nplt.scatter(xx,yy,s=40,color='red')\nfor i in range(N):\n plt.plot(scat*np.cos(theta/180*np.pi)+xx[i],scat*np.sin(theta/180*np.pi)+yy[i],color='red')\n\nplt.plot([-5,7],[0,0],':k')\n\nplt.annotate('incoming\\nirradiance', xy=(-4.5,-2.3),ha='left',color='blue',fontsize=14)\nfor i in range(6):\n y0 = i -2.5\n plt.annotate('',xy=(-1.5,y0),xytext=(-5,y0),arrowprops=dict(arrowstyle=\"->\",color='blue'))\n\nplt.annotate('unscattered\\nirradiance', xy=(3,-2.3),ha='left',color='blue',fontsize=14)\nfor i in range(6):\n y0 = i -2.5\n plt.annotate('',xy=(7,y0),xytext=(3,y0),arrowprops=dict(arrowstyle=\"->\",color='blue',ls=':'))\n\n#plt.annotate('scattered\\nspherical\\nwave', xy=(0,1.5),ha='left',color='red',fontsize=16)\n#plt.annotate('',xy=(2.5,2.5),xytext=(0,0),arrowprops=dict(arrowstyle=\"->\",color='red'))\n#plt.annotate(r'$\\theta$',xy=(2,0.7),color='red',fontsize=14)\n#plt.annotate('',xy=(2,2),xytext=(2.7,0),arrowprops=dict(connectionstyle=\"arc3,rad=0.2\", arrowstyle=\"<->\",color='red'))\n\nplt.xlim(-5,7)\nplt.ylim(-3,3)\nplt.axis('off')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dnc1994/MachineLearning-UW
|
ml-classification/module-2-linear-classifier-solution.ipynb
|
mit
|
[
"Predicting sentiment from product reviews\nThe goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.\nIn this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.\n\nUse SFrames to do some feature engineering\nTrain a logistic regression model to predict the sentiment of product reviews.\nInspect the weights (coefficients) of a trained logistic regression model.\nMake a prediction (both class and probability) of sentiment for a new product review.\nGiven the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.\nInspect the coefficients of the logistic regression model and interpret their meanings.\nCompare multiple logistic regression models.\n\nLet's get started!\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create.",
"from __future__ import division\nimport graphlab\nimport math\nimport string",
"Data preperation\nWe will use a dataset consisting of baby product reviews on Amazon.com.",
"products = graphlab.SFrame('amazon_baby.gl/')",
"Now, let us see a preview of what the dataset looks like.",
"products",
"Build the word count vector for each review\nLet us explore a specific example of a baby product.",
"products[269]",
"Now, we will perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string functionality.\nTransform the reviews into word-counts.\n\nAside. In this notebook, we remove all punctuations for the sake of simplicity. A smarter approach to punctuations would preserve phrases such as \"I'd\", \"would've\", \"hadn't\" and so forth. See this page for an example of smart handling of punctuations.",
"def remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\nreview_without_puctuation = products['review'].apply(remove_punctuation)\nproducts['word_count'] = graphlab.text_analytics.count_words(review_without_puctuation)",
"Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.",
"products[269]['word_count']",
"Extract sentiments\nWe will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.",
"products = products[products['rating'] != 3]\nlen(products)",
"Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.",
"products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)\nproducts",
"Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).\nSplit data into training and test sets\nLet's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.",
"train_data, test_data = products.random_split(.8, seed=1)\nprint len(train_data)\nprint len(test_data)",
"Train a sentiment classifier with logistic regression\nWe will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain same results as everyone else.\nNote: This line may take 1-2 minutes.",
"sentiment_model = graphlab.logistic_classifier.create(train_data,\n target = 'sentiment',\n features=['word_count'],\n validation_set=None)\n\nsentiment_model",
"Aside. You may get an warning to the effect of \"Terminated due to numerical difficulties --- this model may not be ideal\". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.\nNow that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:",
"weights = sentiment_model.coefficients\nweights.column_names()",
"There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment. \nFill in the following block of code to calculate how many weights are positive ( >= 0). (Hint: The 'value' column in SFrame weights must be positive ( >= 0)).",
"num_positive_weights = len(weights[weights['value'] >= 0])\nnum_negative_weights = len(weights[weights['value'] < 0])\n\nprint \"Number of positive weights: %s \" % num_positive_weights\nprint \"Number of negative weights: %s \" % num_negative_weights",
"Quiz question: How many weights are >= 0?\nMaking predictions with logistic regression\nNow that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.",
"sample_test_data = test_data[10:13]\nprint sample_test_data['rating']\nsample_test_data",
"Let's dig deeper into the first row of the sample_test_data. Here's the full review:",
"sample_test_data[0]['review']",
"That review seems pretty positive.\nNow, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.",
"sample_test_data[1]['review']",
"We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:\n$$\n\\mbox{score}_i = \\mathbf{w}^T h(\\mathbf{x}_i)\n$$ \nwhere $h(\\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range [-inf, inf].",
"scores = sentiment_model.predict(sample_test_data, output_type='margin')\nprint scores",
"Predicting sentiment\nThese scores can be used to make class predictions as follows:\n$$\n\\hat{y} = \n\\left{\n\\begin{array}{ll}\n +1 & \\mathbf{w}^T h(\\mathbf{x}_i) > 0 \\\n -1 & \\mathbf{w}^T h(\\mathbf{x}_i) \\leq 0 \\\n\\end{array} \n\\right.\n$$\nUsing scores, write code to calculate $\\hat{y}$, the class predictions:\nRun the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.",
"print \"Class predictions according to GraphLab Create:\" \nprint sentiment_model.predict(sample_test_data)",
"Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create.\nProbability predictions\nRecall from the lectures that we can also calculate the probability predictions from the scores using:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))}.\n$$\nUsing the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].\nCheckpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.",
"print \"Class predictions according to GraphLab Create:\" \nprint sentiment_model.predict(sample_test_data, output_type='probability')",
"Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?\nFind the most positive (and negative) review\nWe now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.\nUsing the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the \"most positive reviews.\"\nTo calculate these top-20 reviews, use the following steps:\n1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)\n2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)",
"test_data['prediction'] = sentiment_model.predict(test_data, output_type='probability')\n\ntop_20_positive = test_data.topk('prediction', k=20)\ntop_20_positive.print_rows(num_rows=20)",
"Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]\nNow, let us repeat this excercise to find the \"most negative reviews.\" Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.",
"top_20_negative = test_data.topk('prediction', k=20, reverse=True)\ntop_20_negative.print_rows(num_rows=20)",
"Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]\nCompute accuracy of the classifier\nWe will now evaluate the accuracy of the trained classifer. Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{# correctly classified examples}}{\\mbox{# total examples}}\n$$\nThis can be computed as follows:\n\nStep 1: Use the trained model to compute class predictions (Hint: Use the predict method)\nStep 2: Count the number of data points when the predicted class labels match the ground truth labels (called true_labels below).\nStep 3: Divide the total number of correct predictions by the total number of data points in the dataset.\n\nComplete the function below to compute the classification accuracy:",
"def get_classification_accuracy(model, data, true_labels):\n # First get the predictions\n ## YOUR CODE HERE\n predictions = model.predict(data)\n \n # Compute the number of correctly classified examples\n ## YOUR CODE HERE\n num_correct = sum(predictions == true_labels)\n\n # Then compute accuracy by dividing num_correct by total number of examples\n ## YOUR CODE HERE\n accuracy = num_correct / len(data)\n \n return accuracy",
"Now, let's compute the classification accuracy of the sentiment_model on the test_data.",
"get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])",
"Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).\nQuiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?\nLearn another classifier with fewer words\nThere were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subet of words that occur in the reviews. For this assignment, we selected a 20 words to work with. These are:",
"significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves', \n 'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed', \n 'work', 'product', 'money', 'would', 'return']\n\nlen(significant_words)",
"For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.",
"train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)\ntest_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)",
"Let's see what the first example of the dataset looks like:",
"train_data[0]['review']",
"The word_count column had been working with before looks like the following:",
"print train_data[0]['word_count']",
"Since we are only working with a subet of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.",
"print train_data[0]['word_count_subset']",
"Train a logistic regression model on a subset of data\nWe will now build a classifier with word_count_subset as the feature and sentiment as the target.",
"simple_model = graphlab.logistic_classifier.create(train_data,\n target = 'sentiment',\n features=['word_count_subset'],\n validation_set=None)\nsimple_model",
"We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.",
"get_classification_accuracy(simple_model, test_data, test_data['sentiment'])",
"Now, we will inspect the weights (coefficients) of the simple_model:",
"simple_model.coefficients",
"Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.",
"simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)",
"Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?",
"len(simple_model.coefficients[simple_model.coefficients['value'] > 0])",
"Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?",
"simple_model.coefficients[simple_model.coefficients['value'] >= 0]\n\nweights[weights['index'] == 'old']",
"Comparing models\nWe will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.\nFirst, compute the classification accuracy of the sentiment_model on the train_data:",
"get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])",
"Now, compute the classification accuracy of the simple_model on the train_data:",
"get_classification_accuracy(simple_model, train_data, train_data['sentiment'])",
"Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?\nNow, we will repeat this excercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:",
"get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])",
"Next, we will compute the classification accuracy of the simple_model on the test_data:",
"get_classification_accuracy(simple_model, test_data, test_data['sentiment'])",
"Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?\nBaseline: Majority class prediction\nIt is quite common to use the majority class classifier as the a baseline (or reference) model for comparison with your classifier model. The majority classifier model predicts the majority class for all data points. At the very least, you should healthily beat the majority class classifier, otherwise, the model is (usually) pointless.\nWhat is the majority class in the train_data?",
"num_positive = (train_data['sentiment'] == +1).sum()\nnum_negative = (train_data['sentiment'] == -1).sum()\nprint num_positive\nprint num_negative",
"Now compute the accuracy of the majority class classifier on test_data.\nQuiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).",
"print float((test_data['sentiment'] == +1).sum()) / len(test_data)",
"Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ricklupton/sankeyview
|
docs/cookbook/scale.ipynb
|
mit
|
[
"Setting the scale\nThis recipe demonstrates how the scale of the Sankey diagram is set.\nBy default the scale is calculated for each diagram to achieve a certain whitespace-to-flow ratio within the height that is given. But in some cases, you may want to set the scale explicitly.\nFor demonstration, the CSV data is written directly in the cell below -- in practice you would want to load data a file.",
"import pandas as pd\nfrom io import StringIO\n\nflows = pd.read_csv(StringIO(\"\"\"\nyear,source,target,value\n2020,A,B,10\n2025,A,B,20\n\"\"\"))\n\nflows\n\nfrom floweaver import *\n\n# Set the default size to fit the documentation better.\nsize = dict(width=100, height=100,\n margins=dict(left=20, right=20, top=10, bottom=10))\n\nnodes = {\n 'A': ProcessGroup(['A']),\n 'B': ProcessGroup(['B']),\n}\n\nbundles = [\n Bundle('A', 'B'),\n]\n\nordering = [['A'], ['B']]\n\nsdd = SankeyDefinition(nodes, bundles, ordering)",
"If we draw the flow for the year 2020 and the year 2025 separately, they appear the same:",
"w1 = weave(sdd, flows.query('year == 2020')).to_widget(**size)\nw1\n\nw2 = weave(sdd, flows.query('year == 2025')).to_widget(**size)\nw2",
"But in fact they have different scales:",
"w1.scale, w2.scale",
"The units of the scale are units-of-value per pixel.\nIf we draw the Sankeys again while setting the scale, we can see that the flow indeed has changed between years:",
"SCALE = 2.0\n\nfrom ipywidgets import HBox\n\nw1 = weave(sdd, flows.query('year == 2020')).to_widget(**size)\nw2 = weave(sdd, flows.query('year == 2025')).to_widget(**size)\n\nw1.scale = w2.scale = SCALE\n\nHBox([w1, w2])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wikistat/Ateliers-Big-Data
|
CatsVSDogs/Atelier-keras-CatsVSDogs.ipynb
|
mit
|
[
"<center>\n<a href=\"http://www.insa-toulouse.fr/\" ><img src=\"http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg\" style=\"float:left; max-width: 120px; display: inline\" alt=\"INSA\"/></a> \n<a href=\"http://wikistat.fr/\" ><img src=\"http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg\" style=\"max-width: 250px; display: inline\" alt=\"Wikistat\"/></a>\n<a href=\"http://www.math.univ-toulouse.fr/\" ><img src=\"http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg\" width=400, style=\"float:right; display: inline\" alt=\"IMT\"/> </a>\n</center>\nAteliers: Technologies des grosses data\nReconnaissance d'images: cats vs. dogs\nTranfert d'apprentissage avec <a href=\"https://www.tensorflow.org/\"><img src=\"https://avatars0.githubusercontent.com/u/15658638?s=200&v=4\" width=100, style=\"display: inline\" alt=\"TensorFlow\"/></a> tensorflow et <a href=\"https://keras.io/\"><img src=\"https://s3.amazonaws.com/keras.io/img/keras-logo-2018-large-1200.png\" width=250, style=\"display: inline\" alt=\"Keras\"/></a>\nRésumé\nApprentissage d'un réseau convolutionnel élémentaire puis utilisation de réseaux pré-entrainés (VGG16, InceptionV3) sur la base ImageNet afin de résoudre un autre exemple de reconnaissance d'images. Utilisation de Keras pour piloter la librairie tensorFlow. Comparaison des performances des réseaux et des environnements de calcul CPU et GPU.\nIntroduction\nObjectifs\nLa reconnaissance d'images a franchi une étape majeure en 2012. L'empilement de couches de neurones, dont certaines convolutionnelles, ont conduit à des algorithmes nettement plus performants en reconnaissance d'image, traitement du langage naturel, et à l'origine d'un battage médiatique considérable autour de l'apprentissage épais ou deep learning. Néanmoins, apprendre un réseau profond comportant des milions de paramètres nécessite une base d'apprentissage excessivement volumineuse (e.g. ImageNet) avec des millions d'images labellisées. 
\nL'apprentissage s'avère donc très couteux en temps de calcul, même avec des technologies adaptées (GPU). Pour résoudre ce problème il est possible d'utiliser des réseaux pré-entrainés. Ces réseaux possèdent une structure particulière, établie de façon heuristique dans différents départements de recherche (Microsoft: Resnet, Google: Inception V3, Facebook: ResNet) avant d'être ajustés sur des banques d'images publiques telles que ImageNet. \nLa stratégie de ce transfert d'apprentissage consiste à exploiter la connaissance acquise sur un problème de classification général pour l’appliquer à un problème particulier.\nLa librairie Keras permet de construire de tels réseaux en utlisant relativement simplement l'environnement tensorFlow de Google à partir de programmes récrits en Python. De plus Keras permet d'utiliser les performances d'une carte GPU afin d'atteindre des performances endant possible ce transfert d'apprentissage, même avec des réseaux complexes.\nL'objectif de ce tutoriel est de montrer les capacités du transfert d'apprentissage permettant de résoudre des problèmes complexes avec des moyens de calcul modestes. Néanmoins, une carte GPU est vivement conseillé.\nCe tutoriel est en grande partie inspiré du blog de François Chollet à l'initiative de Keras.\nEnvironnement matériel et logiciel\nKeras et tensorFlow s'installent simplement à partir de la distribution Anaconda de Python.",
"# Utils\nimport sys\nimport os\nimport shutil\nimport time\nimport pickle\nimport numpy as np\n\n# Deep Learning Librairies\nimport tensorflow as tf\nimport keras.preprocessing.image as kpi\nimport keras.layers as kl\nimport keras.optimizers as ko\nimport keras.backend as k\nimport keras.models as km\nimport keras.applications as ka\n\n# Visualisaiton des données\nfrom matplotlib import pyplot as plt",
"La commande suivante permet de verifier qu'une carte GPU est bien disponible sur la machine utilisée. Si c'est le cas et si Keras a bien été installé dans la configuration GPU (c'est généralement le cas dans l'environement virtuel GPU d'Anaconda), deux options vont apparaitre, une CPU et une GPU. La configuration GPU sera alors automatiquement utilisée.",
"from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())\n\nMODE = \"GPU\" if \"GPU\" in [k.device_type for k in device_lib.list_local_devices()] else \"CPU\"\nprint(MODE)",
"Prise en charge des données\nStructure des données\nLes données originales peuvent être téléchargées à partir du site kaggle.\nL'ensemble d'apprentissage contient 25.000 images. C'est beaucoup trop pour des machines usuelles à moins de se montrer très patient. Aussi, deux sous-échantillons d'apprentissage ont été créés et disposés dans le dépôt.\n\n100 images de chats et 100 images de chiens plus un échantillon de validation consitué de 40 images de chats et 40 images de chien. \n1000 images de chats et 1000 images de chiens plus un échantillon de validation consitué de 400 images de chats et 400 images de chien.\n\nPour utiliser certaines fonctionnalités de Keras, les données doivent être organisées selon une abrorescence précise. Les fichiers appartenant à une même classe doivent être dans un même dossier. \ndata_dir\n└───subsample/\n│ └───train/\n│ │ └───cats/\n│ │ │ │ cat.0.jpg\n│ │ │ │ cat.1.jpg\n│ │ │ │ ...\n│ │ └───dogs/\n│ │ │ │ dog.0.jpg\n│ │ │ │ dog.1.jpg\n│ │ │ │ ...\n│ └───test/\n│ │ └───cats/\n│ │ │ │ cat.1000.jpg\n│ │ │ │ cat.1000.jpg\n│ │ │ │ ...\n│ │ └───dogs/\n│ │ │ │ dog.1000.jpg\n│ │ │ │ dog.1000.jpg\n│ │ │ │ ...\nN.B. Des sous-échantillons plus importants créés à partir des données originales doivent être enregistrés en respectant scrupuleusement cette structure.\nCréation d'un jeu d'apprentissage et de validation\nSpécifier le chemin du dossier contenant les données, si ce n'est pas le répertoire courant, ainsi que les tailles des échantillons d'apprentissage et de validation.",
"data_dir = '' # chemin d'accès aux données\n\nN_train = 200 #2000\nN_val = 80 #800\n\ndata_dir_sub = data_dir+'subsample_%d_Ntrain_%d_Nval' %(N_train, N_val)",
"Illustration des données\nLa fonction load_img permet de charger une image comme une image PIL.",
"img = kpi.load_img(data_dir_sub+'/train/cats/cat.1.jpg') # this is a PIL image\nimg",
"La fonction img_to_array génére un array numpy a partir d'une image PIL .",
"x = kpi.img_to_array(img) \nplt.imshow(x/255, interpolation='nearest')\nplt.show()",
"Pré-traitements\nLes images du jeu de données sont de dimensions différentes :",
"x_0 = kpi.img_to_array(kpi.load_img(data_dir_sub+\"/train/cats/cat.0.jpg\"))\nx_1 = kpi.img_to_array(kpi.load_img(data_dir_sub+\"/train/cats/cat.1.jpg\"))\nx_0.shape, x_1.shape",
"Or les images doivent être de même dimensions pour être utilisée dans un même réseau.\nLa fonction ImageDataGeneratorde Keras permet de remédier à ce problème.\nPlus généralement cette fonction applique un certain nombre de traitements (transformation, normalisation) aléatoires sur les images de sorte que le modèle n'apprenne jamais deux fois la même image.\nQuelques arguments de cette fonction:\n* rotation_range: Un interval représentant les degrés possibles de rotation de l'image,\n* width_shift and height_shift: intervales au sein desquels les données peuvent être translatées horizontalement ou verticalement, \n* rescale: Une valeur par lequelle les données sont multipliées,\n* shear_range: Transvection,\n* zoom_range: Permet des zoom au sein d'une image,\n* horizontal_flip: Inverse aléatoirement des images selon l'axe horizontal,\n* fill_mode: La strategie adoptée pour combler les pixels manquants après une transformation.",
"datagen = kpi.ImageDataGenerator(\n rotation_range=40,\n width_shift_range=0.2,\n height_shift_range=0.2,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n fill_mode='nearest')",
"La commande .flow() genere de nouveaux exemples à partir de l'image originale et les sauve dans le dossier spécifié dans save_to_dir.\nOn force l'arrêt de cette génération après huits images générées.",
"img_width = 150\nimg_height = 150\n\nimg = kpi.load_img(data_dir_sub+\"/train/cats/cat.1.jpg\") # this is a PIL image\nx = kpi.img_to_array(img) \nx_ = x.reshape((1,) + x.shape)\n\nif not(os.path.isdir(data_dir_sub+\"/preprocessing_example\")):\n os.mkdir(data_dir_sub+\"/preprocessing_example\")\n\n i = 0\n for batch in datagen.flow(x_, batch_size=1,save_to_dir=data_dir_sub+\"/preprocessing_example\", save_prefix='cat', save_format='jpeg'):\n i += 1\n if i > 7:\n break ",
"Illustration of the transformed images.",
"X_list=[]\nfor f in os.listdir(data_dir_sub+\"/preprocessing_example\"):\n X_list.append(kpi.img_to_array(kpi.load_img(data_dir_sub+\"/preprocessing_example/\"+f)))\n\n\nfig=plt.figure(figsize=(16,8))\nfig.patch.set_alpha(0)\nax = fig.add_subplot(3,3,1)\nax.imshow(x/255, interpolation=\"nearest\")\nax.set_title(\"Original image\")\nfor i,xt in enumerate(X_list):\n ax = fig.add_subplot(3,3,i+2)\n ax.imshow(xt/255, interpolation=\"nearest\")\n ax.set_title(\"Random transformation %d\" %(i+1))\nplt.tight_layout()\nplt.savefig(\"cats_transformation.png\", dpi=100, bbox_to_anchor=\"tight\", facecolor=fig.get_facecolor())\nplt.show()",
"Image classification with deep learning\nFirst, we fix the number of epochs and the batch size so that these two parameters are shared by the different methods we are going to test. \nA few rules of thumb for choosing these parameters:\n\nepochs: start with a relatively small number of epochs (2, 3) to gauge the computation time on your machine, then increase it accordingly.\nbatch_size: the batch size is the number of samples processed at each iteration within an epoch. \n\nImportant: with Keras, when the data are produced by a generator (see above), the batch size must be a divisor of the sample size. Otherwise the algorithm behaves abnormally, without necessarily producing an error message.",
"epochs = 10\nbatch_size=20",
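The divisor rule above can be checked mechanically before launching a run; the small helper below is an illustration added for this lesson, not part of the original notebook:

```python
def check_batch_size(n_samples, batch_size):
    """Return True if batch_size evenly divides the sample count.

    With Keras generators, a batch size that does not divide the
    sample size produces truncated final batches, which can lead to
    the silent misbehaviour described above.
    """
    return n_samples % batch_size == 0

# batch_size=20 is valid for both sample sizes used in this notebook
assert check_batch_size(200, 20)   # N_train
assert check_batch_size(80, 20)    # N_val
assert not check_batch_size(200, 30)
```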
"Convolutional network\nFirst, we build our own convolutional neural network.\nData generation\nWe define two ImageDataGenerator objects:\n\ntrain_datagen: for training, where various transformations are applied, as above\nvalid_datagen: for validation, where only a rescale transformation is applied so as not to distort the data.\n\nIt is also important to define the size to which our images will be resized. Here we choose an image size of 150x150.",
"# this is the augmentation configuration we will use for training\ntrain_datagen = kpi.ImageDataGenerator(\n rescale=1./255,\n)\n\n# this is the augmentation configuration we will use for testing:\n# only rescaling\nvalid_datagen = kpi.ImageDataGenerator(rescale=1./255)\n\n# this is a generator that will read pictures found in\n# subfolders of 'data/train', and indefinitely generate\n# batches of augmented image data\ntrain_generator = train_datagen.flow_from_directory(\n data_dir_sub+\"/train/\", # this is the target directory\n target_size=(img_width, img_height), \n batch_size=batch_size,\n class_mode='binary') # since we use binary_crossentropy loss, we need binary labels\n\n# this is a similar generator, for validation data\nvalidation_generator = valid_datagen.flow_from_directory(\n data_dir_sub+\"/validation/\",\n target_size=(img_width, img_height),\n batch_size=batch_size,\n class_mode='binary')",
"Model definition\nThe model consists of 3 convolution blocks, each made of:\n\nA Conv2D layer\nA ReLU Activation layer\nA MaxPooling2D layer\n\nFollowed by:\n* A Flatten layer, converting the feature maps into a 1D vector. \n* A Dense (fully connected) layer\n* A ReLU Activation layer\n* A Dropout layer\n* A Dense layer of size 1 followed by a sigmoid Activation for binary classification\nWe use the binary_crossentropy loss function to train the model.",
"model_conv = km.Sequential()\nmodel_conv.add(kl.Conv2D(32, (3, 3), input_shape=(img_width, img_height, 3), data_format=\"channels_last\"))\nmodel_conv.add(kl.Activation('relu'))\nmodel_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))\n\nmodel_conv.add(kl.Conv2D(32, (3, 3)))\nmodel_conv.add(kl.Activation('relu'))\nmodel_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))\n\nmodel_conv.add(kl.Conv2D(64, (3, 3)))\nmodel_conv.add(kl.Activation('relu'))\nmodel_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))\n\nmodel_conv.add(kl.Flatten()) # this converts our 3D feature maps to 1D feature vectors\nmodel_conv.add(kl.Dense(64))\nmodel_conv.add(kl.Activation('relu'))\nmodel_conv.add(kl.Dropout(0.5))\nmodel_conv.add(kl.Dense(1))\nmodel_conv.add(kl.Activation('sigmoid'))\n\nmodel_conv.compile(loss='binary_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy'])\n\n\nmodel_conv.summary()",
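As a back-of-the-envelope check of the summary printed above, the parameter count of a Conv2D layer follows the formula (kernel_h · kernel_w · in_channels + 1) · filters; the helper below is a hand calculation added for illustration, not Keras output:

```python
def conv2d_params(kernel_h, kernel_w, in_channels, filters):
    # Each filter holds kernel_h*kernel_w weights per input channel,
    # plus one bias term.
    return (kernel_h * kernel_w * in_channels + 1) * filters

# First block: 32 filters of 3x3 over the 3 RGB channels
assert conv2d_params(3, 3, 3, 32) == 896
# Second block: 32 filters of 3x3 over the 32 previous feature maps
assert conv2d_params(3, 3, 32, 32) == 9248
```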
"Training",
"ts = time.time()\nmodel_conv.fit_generator(train_generator, steps_per_epoch=N_train // batch_size, epochs=epochs,\n validation_data=validation_generator,validation_steps=N_val // batch_size)\nte = time.time()\nt_learning_conv_simple_model = te-ts\nprint(\"Learning time for %d epochs : %d seconds\"%(epochs,t_learning_conv_simple_model))\nmodel_conv.save(data_dir_sub+'/'+MODE+'_models_convolutional_network_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))",
"Prediction",
"ts = time.time()\nscore_conv_val = model_conv.evaluate_generator(validation_generator, N_val // batch_size, verbose=1)\nscore_conv_train = model_conv.evaluate_generator(train_generator, N_train // batch_size, verbose=1)\nte = time.time()\nt_prediction_conv_simple_model = te-ts\nprint('Train accuracy:', score_conv_train[1])\nprint('Validation accuracy:', score_conv_val[1])\nprint(\"Time Prediction: %.2f seconds\" %t_prediction_conv_simple_model )",
"Q Comment on the training and validation prediction scores. Compare them with the results of the last training epoch. What do you observe? Is this normal?\nExercise Re-run this model, adding more random transformations in the image generator during training. What do you notice?\nPre-trained network: VGG16\nIn this part we will see two ways of using a pre-trained model:\n\nFirst, we use the model to extract features from the images, which are then fed into a \"classical\" convolutional network. These features are the result of the transformations applied to our images by the different convolution blocks. \nSecond, we plug the \"classical\" model we built directly onto the pre-trained model. This assembled model is then fine-tuned on the last convolution block.\n\nIllustration of the network\n\nExtracting new features\nDownloading the model weights\nThe first time you call the VGG16 application, the download of the weights starts automatically and they are stored in your home directory: \"~/.keras/models\".\nWe use the model with the include_top=False option, i.e. we do not download the final fully connected classifier block. \nThe summary function shows the structure described above.",
"model_VGG16_without_top = ka.VGG16(include_top=False, weights='imagenet')\nmodel_VGG16_without_top.summary()",
"Creating the features\nWe apply the 5 blocks of the VGG16 model to the images of our training and validation samples.\nThis operation can be costly, so we save these features to files in order to perform it only once.\nIf the files already exist, the features are loaded from them; otherwise they are created.",
"features_train_path = data_dir_sub+'/features_train.npy' \nfeatures_validation_path = data_dir_sub+'/features_validation.npy' \n\nif os.path.isfile(features_train_path) and os.path.isfile(features_validation_path):\n print(\"Load Features\")\n features_train = np.load(open(features_train_path, \"rb\"))\n features_validation = np.load(open(features_validation_path, \"rb\"))\n \nelse:\n print(\"Generate Features\")\n datagen = kpi.ImageDataGenerator(rescale=1. / 255)\n\n generator = datagen.flow_from_directory(\n data_dir_sub+\"/train\",\n target_size=(img_width, img_height),\n batch_size=batch_size,\n class_mode=None, # this means our generator will only yield batches of data, no labels\n shuffle=False) \n features_train = model_VGG16_without_top.predict_generator(generator, N_train // batch_size, verbose = 1)\n # save the output as a Numpy array\n np.save(open(features_train_path, 'wb'), features_train)\n\n \n generator = datagen.flow_from_directory(\n data_dir_sub+\"/validation\",\n target_size=(img_width, img_height),\n batch_size=batch_size,\n class_mode=None,\n shuffle=False)\n features_validation = model_VGG16_without_top.predict_generator(generator, N_val // batch_size, verbose = 1)\n # save the output as a Numpy array\n np.save(open(features_validation_path, 'wb'), features_validation)\n ",
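The compute-once-then-cache pattern used above is independent of Keras; a minimal standard-library sketch (the file name and computation below are illustrative):

```python
import os
import pickle

def cached(path, compute):
    """Load a pickled result from path, or compute it and save it."""
    if os.path.isfile(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute()
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result

# The expensive computation runs only on the first call
path = "demo_cache.pkl"
if os.path.exists(path):
    os.remove(path)
first = cached(path, lambda: sum(range(10)))   # computed and saved
second = cached(path, lambda: -1)              # cache hit: lambda ignored
assert first == second == 45
os.remove(path)
```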
"Building a classical neural network\nWe build a \"classical\" neural network, identical to the second part of the previous network. \nWarning: the first layer of this network (Flatten) must be configured to accept data with the dimensions of the features generated above.",
"model_VGG_fcm = km.Sequential()\nmodel_VGG_fcm.add(kl.Flatten(input_shape=features_train.shape[1:]))\nmodel_VGG_fcm.add(kl.Dense(64, activation='relu'))\nmodel_VGG_fcm.add(kl.Dropout(0.5))\nmodel_VGG_fcm.add(kl.Dense(1, activation='sigmoid'))\n\nmodel_VGG_fcm.compile(optimizer='rmsprop',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\nmodel_VGG_fcm.summary()",
"Training",
"# Create the label vectors\n\ntrain_labels = np.array([0] * int((N_train/2)) + [1] * int((N_train/2)))\nvalidation_labels = np.array([0] * int((N_val/2)) + [1] * int((N_val/2)))\n\nts = time.time()\nmodel_VGG_fcm.fit(features_train, train_labels,\n epochs=epochs,\n batch_size=batch_size,\n validation_data=(features_validation, validation_labels))\nte = time.time()\nt_learning_VGG_fcm = te-ts",
"Q Comment on the performance of this new model.\nWe also save the weights of this model in order to reuse them in the next part.",
"model_VGG_fcm.save_weights(data_dir_sub+'/weights_model_VGG_fully_connected_model_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))",
"Prediction",
"ts = time.time()\nscore_VGG_fcm_val = model_VGG_fcm.evaluate(features_validation, validation_labels)\nscore_VGG_fcm_train = model_VGG_fcm.evaluate(features_train, train_labels)\nte = time.time()\nt_prediction_VGG_fcm = te-ts\nprint('Train accuracy:', score_VGG_fcm_train[1])\nprint('Validation accuracy:', score_VGG_fcm_val[1])\nprint(\"Time Prediction: %.2f seconds\" %t_prediction_VGG_fcm)",
"Fine-tuning the VGG16 network\nIn the previous part, we configured a neural network block able to take as input the features produced by the transformations of the first 5 convolution blocks of the VGG16 model. \nIn this part, we will plug this block directly onto the five convolution blocks of the VGG16 model, so that the model can be fine-tuned by iterating both over the convolution blocks and over our neural network block.\n\nCreating the model\nWe first download the VGG16 model, as before. \nThis time, however, the model will be trained directly; it will not be used only to generate features. We therefore have to pass the size of the images it will be given as a parameter.",
"# build the VGG16 network\nmodel_VGG16_without_top = ka.VGG16(include_top=False, weights='imagenet', input_shape=(150,150,3))\nprint('Model loaded.')",
"We add to the VGG model the neural network block built earlier to exploit the generated features. \nTo do so, we build the block as before, then load into it the weights obtained during the previous training.",
"# build a classifier model to put on top of the convolutional model\ntop_model = km.Sequential()\ntop_model.add(kl.Flatten(input_shape=model_VGG16_without_top.output_shape[1:]))\ntop_model.add(kl.Dense(64, activation='relu'))\ntop_model.add(kl.Dropout(0.5))\ntop_model.add(kl.Dense(1, activation='sigmoid'))\n\n# note that it is necessary to start with a fully-trained\n# classifier, including the top classifier,\n# in order to successfully do fine-tuning\n\ntop_model.load_weights(data_dir_sub+'/weights_model_VGG_fully_connected_model_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))\n",
"Finally, we assemble the two parts of the model.",
"# add the model on top of the convolutional base\nmodel_VGG_LastConv_fcm = km.Model(inputs=model_VGG16_without_top.input, outputs=top_model(model_VGG16_without_top.output))\n\nmodel_VGG_LastConv_fcm.summary()",
"Freezing the first 4 convolution blocks\nIn practice, and to keep the computation time reasonable, we will fine-tune only the last convolution block of the model, block 5 (layers 16 to 19 in the summary of the previous model), together with the neural network block we added. \nTo do so, we \"freeze\" the first 15 layers of the model so that their parameters are not optimized during training.",
"for layer in model_VGG_LastConv_fcm.layers[:15]:\n layer.trainable = False",
"Generate Data",
"# prepare data augmentation configuration\ntrain_datagen = kpi.ImageDataGenerator(\n rescale=1. / 255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True)\n\ntest_datagen = kpi.ImageDataGenerator(rescale=1. / 255)\n\ntrain_generator = train_datagen.flow_from_directory(\n data_dir_sub+\"/train/\",\n target_size=(img_height, img_width),\n batch_size=batch_size,\n class_mode='binary')\n\nvalidation_generator = test_datagen.flow_from_directory(\n data_dir_sub+\"/validation/\",\n target_size=(img_height, img_width),\n batch_size=batch_size,\n class_mode='binary')\n",
"Training",
"model_VGG_LastConv_fcm.compile(loss='binary_crossentropy',\n optimizer=ko.SGD(lr=1e-4, momentum=0.9),\n metrics=['accuracy'])\n\n\n# fine-tune the model\nts = time.time()\nmodel_VGG_LastConv_fcm.fit_generator(\n train_generator,\n steps_per_epoch=N_train // batch_size,\n epochs=epochs,\n validation_data=validation_generator,\n validation_steps=N_val // batch_size)\nte = time.time()\nt_learning_VGG_LastConv_fcm = te-ts\n",
"Prediction",
"ts = time.time()\nscore_VGG_LastConv_fcm_val = model_VGG_LastConv_fcm.evaluate_generator(validation_generator, N_val // batch_size)\nscore_VGG_LastConv_fcm_train = model_VGG_LastConv_fcm.evaluate_generator(train_generator, N_train // batch_size)\n\nte = time.time()\nt_prediction_VGG_LastConv_fcm = te-ts\nprint('Train accuracy:', score_VGG_LastConv_fcm_train[1])\nprint('Validation accuracy:', score_VGG_LastConv_fcm_val[1])\nprint(\"Time Prediction: %.2f seconds\" %t_prediction_VGG_LastConv_fcm)",
"Other models\nKeras provides a number of other pre-trained models:\n\nXception\nVGG16\nVGG19\nResNet50\nInceptionV3\nInceptionResNetV2\nMobileNet\n\nSome have a much more complex structure, notably InceptionV3. You can easily replace the ka.VGG16 function with another one (e.g. ka.InceptionV3) to test the performance and complexity of these different models.\nExercise You can repeat the previous manipulations with other pre-trained models, taking the time to study their architecture.\nExercise You can also re-run these trainings on a larger dataset, building a new one from the original data. \nApplying these exercises to the challenge data is strongly encouraged :)\nPrediction on the Kaggle test set\nLet us now see how our network performs on a sample of the Kaggle test dataset.",
"data_dir_test = data_dir+'test/'\nN_test = len(os.listdir(data_dir_test+\"/test\"))\n\ntest_datagen = kpi.ImageDataGenerator(rescale=1. / 255)\n\ntest_generator = test_datagen.flow_from_directory(\n data_dir_test,\n #data_dir_sub+\"/train/\",\n target_size=(img_height, img_width),\n batch_size=batch_size,\n class_mode=None,\n shuffle=False)\n\ntest_prediction = model_VGG_LastConv_fcm.predict_generator(test_generator, N_test // batch_size)\n\nimages_test = [data_dir_test+\"/test/\"+k for k in os.listdir(data_dir_test+\"/test\")][:9]\nx_test = [kpi.img_to_array(kpi.load_img(image_test))/255 for image_test in images_test] # this is a PIL image\n\nfig = plt.figure(figsize=(10,10))\nfor k in range(9):\n ax = fig.add_subplot(3,3,k+1)\n ax.imshow(x_test[k], interpolation='nearest')\n pred = test_prediction[k]\n if pred >0.5:\n title = \"Probability for dog : %.1f\" %(pred*100)\n else:\n title = \"Probability for cat : %.1f\" %((1-pred)*100)\n ax.set_title(title)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gully/adrasteia
|
notebooks/adrasteia_03-01_get_tgas_data.ipynb
|
mit
|
[
"Gaia\nReal data!\ngully\nSept 14, 2016\nOutline:\n\nBatch download the data\nCompare TGAS and GaiaSource\nExplore\n\nImport these first-- I auto import them every time!:",
"#! cat /Users/gully/.ipython/profile_default/startup/start.ipy\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%config InlineBackend.figure_format = 'retina'\n%matplotlib inline",
"1. Batch download the data",
"import os\n\nfor i in range(16):\n fn = 'http://cdn.gea.esac.esa.int/Gaia/tgas_source/csv/TgasSource_000-000-{:03d}.csv.gz'.format(i)\n executable = 'wget '+fn\n print(executable)\n os.system(executable) ## Uncomment to actually download\n\n#! mv Tgas* ../data\n\n#! gzip -d ../data/Tgas*\n\n! ls ../data/Tgas*",
"Compare to a Gaia full catalog source (download from previous notebook or manually):",
"#! wget http://cdn.gea.esac.esa.int/Gaia/gaia_source/csv/GaiaSource_000-000-000.csv.gz\n#! mv GaiaSource_000-000-000.csv.gz ../data/\n\n! ls ../data/GaiaSource*",
"2. Compare TGAS and GaiaSource files",
"import pandas as pd\n\n%time t000 = pd.read_csv('../data/TgasSource_000-000-000.csv')\n\n%time g000 = pd.read_csv('../data/GaiaSource_000-000-000.csv')\n\nset(t000.columns) - set(g000.columns)",
"TGAS is just the subset with parallaxes available, while Gaia source has only positions and magnitudes, but for a billion sources:\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\"><a href=\"https://twitter.com/hashtag/GaiaDR1?src=hash\">#GaiaDR1</a> details: 1billion stars w/ position+magnitude; 2million stars w/ pos+mag+parallax+proper motion; 3194 variable stars; 2152 quasars</p>— ESA Science (@esascience) <a href=\"https://twitter.com/esascience/status/776001201511010304\">September 14, 2016</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>",
"len(t000), len(g000)\n\np_i = t000.parallax == t000.parallax\ntp000 = t000[p_i]\np_i.sum()\n\np_i = g000.parallax == g000.parallax\ngp000 = g000[p_i]\np_i.sum()\n\nsns.set_color_codes()",
"For a single file, TGAS covers much more area. The file sizes are capped at 40 MB.",
"plt.plot(tp000.ra[0:2], tp000.dec[0:2], 'b.', label='TGAS') # Hack to get bold labels\nplt.plot(gp000.ra[0:2], gp000.dec[0:2], 'r.', label='Gaia Source')\nplt.plot(tp000.ra.values, tp000.dec.values, 'b.', alpha=0.1)\nplt.plot(gp000.ra.values, gp000.dec.values, 'r.', alpha=0.1)\nplt.legend(loc='lower left')",
"3. Explore!",
"df_list = []",
"This takes a finite amount of RAM, but should be fine for modern laptops.",
"for i in range(16):\n df_list.append(pd.read_csv('../data/TgasSource_000-000-{:03d}.csv'.format(i)))\n\ntt = pd.concat(df_list, ignore_index=True)\n\nt000.shape\n\ntt.shape\n\nlen(tt.source_id.unique())",
"So 2.05+ million sources with 59 \"features\" or columns of metadata.",
"plt.plot(tt.parallax, tt.parallax_error, '.', alpha=0.005)\nplt.xscale('log')\n\nbins = np.arange(-50, 200, 3)\nsns.distplot(tt.parallax, bins=bins,kde=False)\nplt.yscale('log')\n\nsns.distplot(tt.parallax_error)\n\nbins = np.arange(0, 160, 2)\nsns.distplot(tt.astrometric_n_obs_ac, bins=bins, kde=False)\nsns.distplot(tt.astrometric_n_bad_obs_ac, bins=bins, kde=False)\nsns.distplot(tt.astrometric_n_good_obs_ac, bins=bins, kde=False)\n\nsns.distplot(tt.phot_g_mean_mag)\n\nbins = np.arange(0,40,1)\nsns.distplot(tt.matched_observations, bins=bins,kde=False)\n\ntt.iloc[0]\n\ngi = tt.astrometric_delta_q == tt.astrometric_delta_q\nbins= np.arange(0,500, 5)\nsns.distplot(tt.astrometric_delta_q[gi], bins=bins, kde=False)\nplt.yscale('log')\n\ntt.phot_variable_flag.unique()\n\nvi = tt.phot_variable_flag == 'VARIABLE'\n\nvi.sum(), len(vi)",
"Only one variable star in the entire TGAS sample, about what you'd expect.",
"tt[vi]",
"The end."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
miklevin/pipulate
|
examples/LESSON08_Selecting-by-Label-with-loc.ipynb
|
mit
|
[
"Pandas DataFrame.loc()\ndf.loc() is mostly about the df.loc('row') but\nColon-comma makes your d.loc(:, 'column') show.\n\nSo, the API for .loc() is mostly just this:\ndf.loc[row-label, col-label]\n\nSo for the very common simple task of selecting columns by label, you need to know some special Pythonic magic: \ndf.loc[:, col-label]\n\nThis colon is Python shorthand for a slice from beginning to end, or in other words \"all rows\". This is the single most confusing part of the Pandas API, and it is becoming more and more important because the old ways of selecting rows are deprecated.",
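The bare colon is ordinary Python slice syntax, not a pandas invention; stripped of pandas it behaves like this (a plain-Python illustration):

```python
# Inside square brackets, a bare `:` becomes slice(None, None, None)
letters = ["a", "b", "c", "d"]
everything = slice(None)            # what `:` means
assert letters[everything] == letters[:] == ["a", "b", "c", "d"]

# df.loc[:, 'col'] hands a (slice, label) tuple to the indexer:
# "all rows" on the first axis, a single label on the second.
key = (slice(None), "col")
assert key[0] == slice(None, None, None)
```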
"import pandas as pd\n\npd.set_option('display.max_columns', 500)\n\ndef a1_notation(n):\n string = \"\"\n while n > 0:\n n, remainder = divmod(n - 1, 26)\n string = chr(65 + remainder) + string\n return string\n\n# First we create a 30 x 30 DataFrame with both row and column labels.\nalist = list(range(1, 31))\nA1_list = [a1_notation(x) for x in alist]\nA1_list_plus = [\"%s-label\" % a1_notation(x) for x in alist]\ndf = pd.DataFrame([alist for aline in alist], columns=A1_list, index=A1_list_plus)\ndf\n\n# Using a single row-label with .loc() returns the row as a Pandas Series.\ndf.loc['A-label']",
"Let's Talk about Rows",
"# Using a row-label and a column-label in .loc() returns the contents of the cell.\ndf.loc['A-label', 'AD']\n\n# Using a single row-label INSIDE A LIST in .loc() returns that row as a DataFrame.\ndf.loc[['A-label']]\n\n# Using a list of row-labels in .loc() returns those specific rows as a DataFrame.\ndf.loc[['A-label', 'C-label', 'E-label', 'G-label', 'I-label']]\n\n# Using row-labels with slice notation in .loc() returns that range of rows as a DataFrame.\ndf.loc['A-label':'G-label']\n\n# Dropping the first row-label in slice notation in .loc() indicates starting at top of DataFrame.\ndf.loc[:'G-label']\n\n# Dropping the last row-label in slice notation in .loc() indicates ending at bottom of DataFrame.\ndf.loc['W-label':]\n\n# Dropping both row-labels in slice notation in .loc() selects the entire DataFrame (top to bottom).\ndf.loc[:]\n\n# As with iloc, there is a comma in the interface which separates row arguments from column arguments.\ndf.loc[:, ]",
"Let's Talk about Columns",
"# If we use just a single label-name after the comma (like using an integer in iloc) we get a Pandas Series.\ndf.loc[:, 'A']\n\n# Putting a label-name in a list after the comma returns a DataFrame.\ndf.loc[:, ['A']]\n\n# This is how we perform the very common task of selecting specific columns by label.\ndf.loc[:, ['P', 'A', 'N', 'D', 'A', 'S']]\n\n# Label-based slice definitions can be used in place of lists.\ndf.loc[:, 'P':'U']\n\n# After the comma you can leave off the ending slice value for \"to the end\".\ndf.loc[:, 'P':]\n\n# After the comma you can leave off the beginning slice value for \"from beginning\".\ndf.loc[:, :'N']",
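One behaviour worth calling out explicitly: unlike Python's positional slicing, `.loc` label slices include their endpoint. A minimal check on a toy frame (assuming pandas is importable):

```python
import pandas as pd

toy = pd.DataFrame({"A": [1, 2], "B": [3, 4], "C": [5, 6]},
                   index=["r1", "r2"])

# Single label after the comma -> Series; label in a list -> DataFrame
assert list(toy.loc[:, "A"]) == [1, 2]
assert toy.loc[:, ["A"]].shape == (2, 1)

# Label slices are inclusive of the right endpoint: 'A':'B' keeps B
assert list(toy.loc[:, "A":"B"].columns) == ["A", "B"]
```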
"Rows & Columns",
"# This is how we get the intersection of label-based slice definitions for rows and columns.\ndf.loc[:'N-label', :'N']\n\n# We can use lists of labels to define intersections. If each list has one item, we get a 1-cell DataFrame\ndf.loc[['N-label'], ['N']]\n\n# If you're going for a single value, there's not much sense in asking for a DataFrame to be returned.\ndf.loc['N-label', 'N']\n\n# We can use references (variable names) to make list-intersection requests more readable.\nrows = ['P-label', 'A-label', 'N-label', 'D-label', 'A-label', 'S-label']\ncols = ['P', 'A', 'N', 'D', 'A', 'S']\ndf.loc[rows, cols]\n\n# We can use list comprehension and the enumerate and modulus functions to list every other column. \n[(i, x)[1] for i, x in enumerate(list(df.columns)) if i%2]\n\n# This is how we select every other column by label name.\ndf.loc[:, [(i, x)[1] for i, x in enumerate(list(df.columns)) if i%2]]\n\n# We generate a warning and get columns full of NaN when addressing nonexistent columns\ndf.loc[:,['foo', 'B', 'A', 'R']]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
martinjrobins/hobo
|
examples/sampling/transformation-with-and-without-jacobian.ipynb
|
bsd-3-clause
|
[
"Need for Jacobian adjustment in parameter transformation\nThis example illustrates the importance of including a Jacobian term when sampling from a transformed parameter space (if you want to obtain samples on the untransformed space, that is).\nWe will show the difference between sampling under parameter transformation with and without Jacobian adjustment, and how not using Jacobian adjustment gives an incorrect result.",
"import pints\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Set some random seed so this notebook can be reproduced\nnp.random.seed(10)",
"Before we get started\nWe will use a 1-dimensional beta distribution as an illustration:\n$$f(x; a, b) = \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} x^{a-1} (1-x)^{b-1},$$\nwith $a=2$ and $b=2$.",
"from scipy.stats import beta\n\nclass BetaLogPDF(pints.LogPDF):\n def n_parameters(self):\n return 1\n\n def __call__(self, x):\n if 0. < x[0] < 1.:\n return np.log(beta.pdf(x, a=2, b=2))[0]\n else:\n return -np.inf\n\nlog_pdf = BetaLogPDF()\n\n# Generate some initial positions\nx0 = np.random.uniform([0], [1], size=(4, 1))",
"We try to sample it with MCMC.",
"# Create an adaptive covariance MCMC routine\nmcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)\n\n# Set maximum number of iterations\nmcmc.set_max_iterations(10000)\n\n# Disable logging\nmcmc.set_log_to_screen(False)\n\n# Number of chains\nnum_chains = 4\n\n# Run!\nprint('Running...')\nchains = mcmc.run()\nprint('Done!')\n\n# Discard warm-up\nchains = [chain[1000:] for chain in chains]",
"Now we inspect the MCMC samples and how it compares to the analytical form of the beta distribution.",
"plt.hist(chains[0], bins=15, density=True, label='samples')\nx = np.linspace(0, 1, 100)\nplt.plot(x, beta.pdf(x, a=2, b=2), label='true')\nplt.legend()\nplt.xlabel(r'$x$')\nplt.ylabel('PDF')\nplt.show()",
"So far so good: we are able to sample from the original space $x \\in [0, 1]$. Let's call this the \"model space\", where staying within $0$ and $1$ is meaningful for some problem of interest.\nTransformation\nNow we consider a transformation. We will use the logit (or log-odds) transformation:\n$$y = \\text{logit}(x) = \\log\\left(\\frac{x}{1-x}\\right),$$\nwhich transforms the constrained model parameter $x$ to an unconstrained search space $y$.",
"transformation = pints.LogitTransformation(1)",
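Stripped of pints, the logit transform and its inverse are one-liners; this plain-Python sketch mirrors what `to_search`/`to_model` do for a single parameter:

```python
import math

def logit(x):
    """Model space (0, 1) -> unconstrained search space."""
    return math.log(x / (1.0 - x))

def inv_logit(y):
    """Search space -> model space (the logistic/sigmoid function)."""
    return 1.0 / (1.0 + math.exp(-y))

# Round trip: to_search followed by to_model recovers the value
for x in (0.1, 0.5, 0.9):
    assert abs(inv_logit(logit(x)) - x) < 1e-12

assert logit(0.5) == 0.0  # the midpoint maps to zero
```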
"We will now compare two transformations of the beta distribution: without and with Jacobian adjustment.\nA naive transformation of the PDF without Jacobian adjustment, and how it goes wrong\nThe first is the naive transformation, in which we simply wrap the model and transform the parameters without applying any Jacobian adjustment.",
"class NaiveTransformedLogPDF(pints.LogPDF):\n \"\"\"Transforming LogPDF without Jacobian adjustment\"\"\"\n def __init__(self, log_pdf, transform):\n self._log_pdf = log_pdf\n self._transform = transform\n \n def n_parameters(self):\n return self._log_pdf.n_parameters()\n \n def __call__(self, y):\n # Transform from search space y back to model space x\n x = self._transform.to_model(y)\n # Then we call the log-pdf in the model space x\n return self._log_pdf(x)\n\nnaive_trans_log_pdf = NaiveTransformedLogPDF(log_pdf, transformation)\n\n# Transform the initial position to search space\ny0 = [transformation.to_search(x) for x in x0]",
"We then sample this naive transformed beta distribution using MCMC in the search space $y$.",
"# Create an adaptive covariance MCMC routine\nmcmc = pints.MCMCController(\n naive_trans_log_pdf, # Naive transformation without Jacobian\n 4,\n y0, # Input is in the search space y\n method=pints.HaarioBardenetACMC)\n\n# Set maximum number of iterations\nmcmc.set_max_iterations(10000)\n\n# Disable logging\nmcmc.set_log_to_screen(False)\n\n# Number of chains\nnum_chains = 4\n\n# Run!\nprint('Running...')\nnaive_chains_in_y = mcmc.run()\nprint('Done!')\n\n# Discard warm-up\nnaive_chains_in_y = [naive_chain[1000:] for naive_chain in naive_chains_in_y]",
"Let's have a look at what the samples look like in the search space $y$ first.",
"plt.hist(naive_chains_in_y[0], bins=15, density=True)\nplt.xlabel(r'$y$')\nplt.ylabel('PDF')\nplt.show()",
"OK, not quite sure what that means; $y$ is the search-space parameter, which we are not interested in!\nLet's transform these samples straight back to the model space $x$ and hope for the best...",
"# Transform the samples from y back to x\nnaive_chains_in_x = np.zeros(naive_chains_in_y[0].shape)\nfor i, y in enumerate(naive_chains_in_y[0]):\n naive_chains_in_x[i, :] = transformation.to_model(y)\n\nplt.hist(naive_chains_in_x, bins=15, density=True, label='samples')\nx = np.linspace(0, 1, 100)\nplt.plot(x, beta.pdf(x, a=2, b=2), label='true')\nplt.legend()\nplt.xlabel(r'$x$')\nplt.ylabel('PDF (without Jacobian adjustment)')\nplt.show()",
"The sampled distribution has gone horribly wrong using this naive transformation without Jacobian adjustment!!!\nTransforming PDF using Pints (with Jacobian adjustment)\nThis time we will use pints.TransformedLogPDF which will handle all the necessary correction and adjustment for the transformation!",
"pints_trans_log_pdf = pints.TransformedLogPDF(log_pdf, transformation)",
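The adjustment `pints.TransformedLogPDF` applies can be written out by hand. For the logit transform the inverse map is the sigmoid, whose derivative is dx/dy = x(1 - x), so the transformed log-density gains a log x + log(1 - x) term; the sketch below uses an unnormalized Beta(2, 2) density for simplicity:

```python
import math

def inv_logit(y):
    return 1.0 / (1.0 + math.exp(-y))

def log_beta22(x):
    # Unnormalized Beta(2, 2) log-density: log x + log(1 - x) + const
    return math.log(x) + math.log(1.0 - x)

def transformed_log_pdf(y):
    x = inv_logit(y)
    log_jacobian = math.log(x) + math.log(1.0 - x)  # log|dx/dy|
    return log_beta22(x) + log_jacobian

# At y = 0 (x = 0.5) the Jacobian term shifts the naive value by log(0.25)
naive = log_beta22(inv_logit(0.0))
adjusted = transformed_log_pdf(0.0)
assert abs((adjusted - naive) - math.log(0.25)) < 1e-12
```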
"We will repeat the same procedure as before: sampling in the search space $y$.",
"# Create an adaptive covariance MCMC routine\nmcmc = pints.MCMCController(\n pints_trans_log_pdf, # Transformation using pints\n 4,\n y0, # Again in search space y\n method=pints.HaarioBardenetACMC)\n\n# Set maximum number of iterations\nmcmc.set_max_iterations(10000)\n\n# Disable logging\nmcmc.set_log_to_screen(False)\n\n# Number of chains\nnum_chains = 4\n\n# Run!\nprint('Running...')\npints_chains_in_y = mcmc.run()\nprint('Done!')\n\n# Discard warm-up\npints_chains_in_y = [pints_chain[1000:] for pints_chain in pints_chains_in_y]\n\nplt.hist(pints_chains_in_y[0], bins=15, density=True)\nplt.xlabel(r'$y$')\nplt.ylabel('PDF')\nplt.show()\n\n# Transform the samples from y back to x\npints_chains_in_x = np.zeros(pints_chains_in_y[0].shape)\nfor i, y in enumerate(pints_chains_in_y[0]):\n pints_chains_in_x[i, :] = transformation.to_model(y)\n\nplt.hist(pints_chains_in_x, bins=15, density=True, label='samples')\nx = np.linspace(0, 1, 100)\nplt.plot(x, beta.pdf(x, a=2, b=2), label='true')\nplt.legend()\nplt.xlabel(r'$x$')\nplt.ylabel('PDF (with Jacobian adjustment)')\nplt.show()",
"This looks very much like the correct beta distribution when we use the transform provided by Pints!\nRemarks\nThis example simply tries to show the importance of the Jacobian adjustment when applying parameter transformation for PDFs (if you are interested in the original untransformed parameters)! For practical use, all the pints.Transformation classes can be directly passed to the Controllers as shown in another example notebook."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dsacademybr/PythonFundamentos
|
Cap03/Notebooks/DSA-Python-Cap03-03-While.ipynb
|
gpl-3.0
|
[
"<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 3</font>\nDownload: http://github.com/dsacademybr",
"# Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())",
"While",
"# Usando o loop while para imprimir os valores de 0 a 9\ncounter = 0\nwhile counter < 10:\n    print(counter)\n    counter = counter + 1\n\n# Também é possível usar a cláusula else para encerrar o loop while\nx = 0\n\nwhile x < 10:\n    print ('O valor de x nesta iteração é: ', x)\n    print (' x ainda é menor que 10, somando 1 a x')\n    x += 1\n\nelse:\n    print ('Loop concluído!')",
"Pass, Break, Continue",
"counter = 0\nwhile counter < 100:\n if counter == 4:\n break\n else:\n pass\n print(counter)\n counter = counter + 1\n\nfor verificador in \"Python\":\n if verificador == \"h\":\n continue\n print(verificador)",
"While e For juntos",
"for i in range(2,30):\n j = 2\n counter = 0\n while j < i:\n if i % j == 0:\n counter = 1\n j = j + 1\n else:\n j = j + 1\n \n if counter == 0:\n print(str(i) + \" é um número primo\")\n counter = 0\n else:\n counter = 0",
"Fim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
herruzojm/udacity-deep-learning
|
transfer-learning/.ipynb_checkpoints/Transfer_Learning-checkpoint.ipynb
|
mit
|
[
"Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.\ngit clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")",
"Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.",
"import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()",
"ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). 
To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)",
"import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]",
"Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).",
"batch_size = 100\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n \n # TODO: Build the vgg network here\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n codes_batch = \n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)",
"Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.",
"# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))",
"Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.",
"labels_vecs = # Your one-hot encoded labels array here",
"Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.",
"train_x, train_y = \nval_x, val_y = \ntest_x, test_y = \n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)",
"If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them is a 4096D vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.",
"inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\n# TODO: Classifier layers and operations\n\nlogits = # output layer logits\ncost = # cross entropy loss\n\noptimizer = # training optimizer\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.",
"def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y",
"Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!",
"saver = tf.train.Saver()\nwith tf.Session() as sess:\n \n # TODO: Your training code here\n saver.save(sess, \"checkpoints/flowers.ckpt\")",
"Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.",
"with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread",
"Below, feel free to choose images and see how the trained classifier predicts the flowers in them.",
"test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nwith tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Kaggle/learntools
|
notebooks/sql/raw/tut4.ipynb
|
apache-2.0
|
[
"Introduction\nSo far, you've learned how to use several SQL clauses. For instance, you know how to use SELECT to pull specific columns from a table, along with WHERE to pull rows that meet specified criteria. You also know how to use aggregate functions like COUNT(), along with GROUP BY to treat multiple rows as a single group. \nNow you'll learn how to change the order of your results using the ORDER BY clause, and you'll explore a popular use case by applying ordering to dates. To illustrate what you'll learn in this tutorial, we'll work with a slightly modified version of our familiar pets table.\n\nORDER BY\nORDER BY is usually the last clause in your query, and it sorts the results returned by the rest of your query.\nNotice that the rows are not ordered by the ID column. We can quickly remedy this with the query below.\n\nThe ORDER BY clause also works for columns containing text, where the results show up in alphabetical order.\n\nYou can reverse the order using the DESC argument (short for 'descending'). The next query sorts the table by the Animal column, where the values that are last in alphabetic order are returned first.\n\nDates\nNext, we'll talk about dates, because they come up very frequently in real-world databases. There are two ways that dates can be stored in BigQuery: as a DATE or as a DATETIME. \nThe DATE format has the year first, then the month, and then the day. It looks like this:\nYYYY-[M]M-[D]D\n\nYYYY: Four-digit year\n[M]M: One or two digit month\n[D]D: One or two digit day\n\nSo 2019-01-10 is interpreted as January 10, 2019.\nThe DATETIME format is like the date format ... but with time added at the end.\nEXTRACT\nOften you'll want to look at part of a date, like the year or the day. You can do this with EXTRACT. 
We'll illustrate this with a slightly different table, called pets_with_date.\n\nThe query below returns two columns, where column Day contains the day corresponding to each entry in the Date column from the pets_with_date table: \n\nSQL is very smart about dates, and we can ask for information beyond just extracting part of the cell. For example, this query returns one column with just the week in the year (between 1 and 53) for each date in the Date column: \n\nYou can find all the functions you can use with dates in BigQuery in this documentation under \"Date and time functions\". \nExample: Which day of the week has the most fatal motor accidents?\nLet's use the US Traffic Fatality Records database, which contains information on traffic accidents in the US where at least one person died.\nWe'll investigate the accident_2015 table. Here is a view of the first few rows. (We have hidden the corresponding code. To take a peek, click on the \"Code\" button below.)",
"#$HIDE_INPUT$\nfrom google.cloud import bigquery\n\n# Create a \"Client\" object\nclient = bigquery.Client()\n\n# Construct a reference to the \"nhtsa_traffic_fatalities\" dataset\ndataset_ref = client.dataset(\"nhtsa_traffic_fatalities\", project=\"bigquery-public-data\")\n\n# API request - fetch the dataset\ndataset = client.get_dataset(dataset_ref)\n\n# Construct a reference to the \"accident_2015\" table\ntable_ref = dataset_ref.table(\"accident_2015\")\n\n# API request - fetch the table\ntable = client.get_table(table_ref)\n\n# Preview the first five lines of the \"accident_2015\" table\nclient.list_rows(table, max_results=5).to_dataframe()",
"Let's use the table to determine how the number of accidents varies with the day of the week. Since:\n- the consecutive_number column contains a unique ID for each accident, and\n- the timestamp_of_crash column contains the date of the accident in DATETIME format,\nwe can:\n- EXTRACT the day of the week (as day_of_week in the query below) from the timestamp_of_crash column, and\n- GROUP BY the day of the week, before we COUNT the consecutive_number column to determine the number of accidents for each day of the week.\nThen we sort the table with an ORDER BY clause, so the days with the most accidents are returned first.",
"# Query to find out the number of accidents for each day of the week\nquery = \"\"\"\n SELECT COUNT(consecutive_number) AS num_accidents, \n EXTRACT(DAYOFWEEK FROM timestamp_of_crash) AS day_of_week\n FROM `bigquery-public-data.nhtsa_traffic_fatalities.accident_2015`\n GROUP BY day_of_week\n ORDER BY num_accidents DESC\n \"\"\"",
"As usual, we run it as follows:",
"# Set up the query (cancel the query if it would use too much of \n# your quota, with the limit set to 1 GB)\nsafe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**9)\nquery_job = client.query(query, job_config=safe_config)\n\n# API request - run the query, and convert the results to a pandas DataFrame\naccidents_by_day = query_job.to_dataframe()\n\n# Print the DataFrame\naccidents_by_day",
"Notice that the data is sorted by the num_accidents column, where the days with more traffic accidents appear first.\nTo map the numbers returned for the day_of_week column to the actual day, you might consult the BigQuery documentation on the DAYOFWEEK function. It says that it returns \"an integer between 1 (Sunday) and 7 (Saturday), inclusively\". So, in 2015, most fatal motor accidents in the US occurred on Sunday and Saturday, while the fewest happened on Tuesday.\nYour Turn\nORDER BY can make your results easier to interpret. Try it yourself."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dpiparo/ROOTaaS
|
NoteBooks/Howto-inherit-C++-classes-from-python.ipynb
|
lgpl-3.0
|
[
"Inheriting C++ classes from Python\nIn this exercise we would like to demonstrate the inheritance of a C++ class from Python. The base class in C++ is Base and we want to have a specialization of it in Python, PyDerived",
"from ROOTaaS.iPyROOT import ROOT",
"To bridge between the C++ and Python virtual functions we need to define a proper C++ class that will forward the virtual function calls to the Python object. The only trick here is that this C++ class needs to have a constructor to receive the self from the Python object for use afterwards.\nThe implementation is done completely in C++ with the next block.",
"%%dcl\n#include \"TObject.h\"\n#include \"Python.h\"\n#include <iostream>\n#include <stdexcept>\nvoid call_python_method(PyObject* self, const char* method)\n{\n    // check arguments\n    if ( 0 == self || 0 == method ) { throw std::runtime_error(\"Invalid Python object and method\"); }\n    // call Python\n    PyObject* r = PyObject_CallMethod(self, const_cast<char*>(method), const_cast<char*>(\"\"));\n    if ( 0 == r ) { PyErr_Print(); return;}\n    // release used objects\n    Py_XDECREF( r ) ;\n    //\n    return;\n}\n\nclass Base {\n  public:\n    Base() {}\n    virtual ~Base() {}\n    virtual void Foo() { std::cout << \"base::Foo\" << std::endl; }\n    void CallFoo() { this->Foo(); }\n};\n\nclass PyBase : public Base {\n  public:\n    PyBase(PyObject* self) : fSelf(self) {}\n    virtual ~PyBase() {}\n    virtual void Foo() { call_python_method(fSelf,\"Foo\"); }\n  private:\n    PyObject* fSelf;\n};\n",
"Now we define the python PyDerived class that inherits from the just declared C++ PyBase",
"class PyDerived(ROOT.PyBase):\n def __init__(self): ROOT.PyBase.__init__(self, self)\n def Foo(self): print 'Python::Foo'\n",
"Now that we have the two classes defined (Base and PyDerived) we exercise calling the Foo method.",
"o = ROOT.Base()\no.CallFoo()\nd = PyDerived()\nd.CallFoo()",
"If we put the objects in a std::vector<Base*> we can exercise true polymorphic access from the C++ side",
"v = ROOT.vector('Base*')()\nv.push_back(o)\nv.push_back(d)\nfor i in v : i.Foo()",
"Notice that the printout order is not correct (there is a bug in the current ROOTaaS implementation) because the Python output somehow overtakes the C++ generated output."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jamesjia94/BIDMach
|
tutorials/NVIDIA/.ipynb_checkpoints/GeneralDNNregression-checkpoint.ipynb
|
bsd-3-clause
|
[
"General-purpose prediction with DNNs\nMulti-layer deep networks are powerful predictors and often outperform classical models like kernel-SVMs and gradient-boosted trees. Here we'll apply a simple multi-layer network to classification of Higgs Boson data. \nLet's load BIDMat/BIDMach",
"import BIDMat.{CMat,CSMat,DMat,Dict,IDict,Image,FMat,FND,GDMat,GMat,GIMat,GSDMat,GSMat,HMat,IMat,Mat,SMat,SBMat,SDMat}\nimport BIDMat.MatFunctions._\nimport BIDMat.SciFunctions._\nimport BIDMat.Solvers._\nimport BIDMat.JPlotting._\nimport BIDMach.Learner\nimport BIDMach.models.{FM,GLM,KMeans,KMeansw,ICA,LDA,LDAgibbs,Model,NMF,RandomForest,SFA,SVD}\nimport BIDMach.networks.{Net}\nimport BIDMach.datasources.{DataSource,MatSource,FileSource,SFileSource}\nimport BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}\nimport BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}\nimport BIDMach.causal.{IPTW}\n\nMat.checkMKL\nMat.checkCUDA\nMat.setInline\nif (Mat.hasCUDA > 0) GPUmem",
"And define the root directory for this dataset.",
"val dir = \"/code/BIDMach/data/uci/Higgs/parts/\"",
"Constructing a deep network Learner\nThe \"Net\" class is the parent class for Deep networks. By defining a learner, we also configure a datasource, an optimization method, and possibly a regularizer.",
"val (mm, opts) = Net.learner(dir+\"data%03d.fmat.lz4\", dir+\"label%03d.fmat.lz4\")",
"The next step is to define the network to run. First we set some options:",
"opts.hasBias = true; // Include additive bias in linear layers\nopts.links = iones(1,1); // The link functions specify output loss, 1= logistic\nopts.nweight = 1e-4f // weight for normalization layers",
"Now we define the network itself. We use the function \"dnodes3\" which builds a stack of layers in repeating groups of three: linear, non-linear and normalization layers. The non-linearity is configurable. The arguments to the function are:\n* depth:Int the number of layers,\n* width:Int the number of units in the first hidden layer (not the input layer which is set by the data source)\n* taper:Float the decrease (multiplicative) in width of each hidden layer from the first\n* ntargs:Int how many targets to predict\n* opts:Opts the options above\n* nonlin:Int the type of non-linear layer, 1=tanh, 2=sigmoid, 3=rectifying, 4=softplus",
"opts.nodeset = Net.dnodes3(12, 500, 0.6f, 1, opts, 2);",
"Here's the source for dnodes3. It creates a \"nodeset\" or flow graph for the network. The nodeSet is \"nodes\" and can be accessed like an array. nodes(i) is set to a node whose input is nodes(i-1) etc. \n<pre>\n  def dnodes3(depth0:Int, width:Int, taper:Float, ntargs:Int, opts:Opts, nonlin:Int = 1):NodeSet = {\n    val depth = (depth0/3)*3;    // Round down to a multiple of 3 nodes \n    val nodes = new NodeSet(depth);\n    var w = width;\n    nodes(0) = new InputNode;\n    for (i <- 1 until depth-2) {\n      if (i % 3 == 1) {\n        nodes(i) = new LinNode{inputs(0)=nodes(i-1); outdim=w; hasBias=opts.hasBias; aopts=opts.aopts};\n        w = (taper*w).toInt;\n      } else if (i % 3 == 2) {\n        nonlin match {\n          case 1 => nodes(i) = new TanhNode{inputs(0)=nodes(i-1)};\n          case 2 => nodes(i) = new SigmoidNode{inputs(0)=nodes(i-1)};\n          case 3 => nodes(i) = new RectNode{inputs(0)=nodes(i-1)};\n          case 4 => nodes(i) = new SoftplusNode{inputs(0)=nodes(i-1)};\n        }\n      } else {\n        nodes(i) = new NormNode{inputs(0)=nodes(i-1); targetNorm=opts.targetNorm; weight=opts.nweight};\n      }\n    }\n    nodes(depth-2) = new LinNode{inputs(0)=nodes(depth-3); outdim=ntargs; hasBias=opts.hasBias; aopts=opts.aopts};\n    nodes(depth-1) = new GLMNode{inputs(0)=nodes(depth-2); links=opts.links};\n    nodes;\n  }\n</pre>\n\nTuning Options\nHere follow some tuning options",
"opts.nend = 10 // The last file number in the datasource\nopts.npasses = 5 // How many passes to make over the data \nopts.batchSize = 200 // The minibatch size\nopts.evalStep = 511 // Count of minibatches between eval steps\n\nopts.lrate = 0.01f; // Learning rate\nopts.texp = 0.4f; // Time exponent for ADAGRAD",
"You invoke the learner the same way as before.",
"mm.train",
"Now lets extract the model and use it to predict labels on a held-out sample of data.",
"val model = mm.model.asInstanceOf[Net]\n\nval ta = loadFMat(dir + \"data%03d.fmat.lz4\" format 10);\nval tc = loadFMat(dir + \"label%03d.fmat.lz4\" format 10);\n\nval (nn,nopts) = Net.predictor(model, ta);\nnopts.batchSize=10000",
"Let's run the predictor",
"nn.predict",
"To evaluate, we extract the predictions as a floating matrix, and then compute a ROC curve with them. The mean of this curve is the AUC (Area Under the Curve).",
"val pc = FMat(nn.preds(0))\nval rc = roc(pc, tc, 1-tc, 1000);\nmean(rc)\n\nplot(rc)\n",
"Not Bad! This net gives competitive performance on the Kaggle challenge for Higgs Boson classification. \nTuning\nThis net can be optimized in a variety of ways. Try adding an extra block of layers (you should increment the net depth by 3) and re-running. You may need to restart the notebook."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cnrm-cerfacs/cmip6/models/sandbox-2/ocnbgchem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: CNRM-CERFACS\nSource ID: SANDBOX-2\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:52\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-2', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.N\nList of all diagnostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe transport scheme if different from that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList which sediments are specified from boundary conditions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList which sediments are specified from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry*\n6.1. CO2 Exchange Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Dissolved Organic Matter\nDissolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particles\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nIf prognostic, describe if a particle size spectrum is used to represent distribution of particles in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> DIC Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maniacalbrain/Caves-of-Qud-analysis
|
2. Ploting my Caves of Qud data.ipynb
|
mit
|
[
"Plotting my Caves of Qud data\nHaving previously cleaned the data, it's now time to pull it into a pandas DataFrame and get some information from it.",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%pylab inline\n\ncol_names = [\"Name\", \"End Time\", \"Game End Time\", \"Enemy\", \"x hit\", \"Damage\", \"Weapon\", \"PV\", \"Pos Dam\", \"Score\", \"Turns\", \"Zones\", \"Storied Items\", \"Artifact\"]\n\n#read in the data from the text file, treating any run of tabs (\"\\t+\") as the column separator. \nqud = pd.read_csv(\"Cleaned_Qud_HighScores_1.txt\", sep=r\"\\t+\", names = col_names, engine='python')\nqud.head()\n\n#Dropping these two values\nprint qud.iloc[53].values #Forgot to name my character. Decided to quit by attacking either Mehmet or Warden Ualraig\nprint qud.iloc[54].values #Took one step and Ualraig wasted Mehmet. Walked around for a while but quit as I could no longer start the \"What's Eating The Watervine?\" mission\n\n#As these are my two lowest scores I can then set the dataframe to be rows 0 to 53 (does not include 53)\nqud = qud[0:53]\nqud.tail(3)",
"Further Cleaning\nWith the data now pulled into a dataframe there is still a small bit of cleaning to do. Below are three functions: one to convert the End Time of each game to a datetime, one to pull the game month from the Game End Time (I left the day in since I might later check whether I'm more likely to die early or late in the month) and one to clean up the Artifact column",
"import re\nfrom datetime import datetime\nfrom time import strptime\n\ndef convert_to_date(date_in):\n    date_search = re.search(\"(\\w{6,9}),\\s*(\\w{3,9})\\s*(\\d{2}),\\s*(\\d{4})\\s*(\\d{1,2}):(\\d{2}):(\\d{2})\\s*(\\w{2})\", date_in)\n    #date_search.group(1) = Day as word(ie Sunday), 2 = Month as word (ie August), 3 = day of month, 4 = year, 5 = hour, 6 = minute, 7 = second, 8 = AM or PM\n    \n    #In End Time hour is expressed from 1 to 12, ie 1 AM or 1 PM. The below code converts that to 0 to 23, ie 1 or 13\n    hour = int(date_search.group(5))\n\n    if date_search.group(8) == \"PM\":\n        if hour == 12:\n            pass\n        else:\n            hour += 12\n    \n    if date_search.group(8) == \"AM\":\n        if hour == 12:\n            hour = 0\n    \n    \n    #Create a datetime. strptime is used to take the first 3 letters of the Month as word and get the int value for that month, ie August = Aug, is month 8 of 12\n    new_date = datetime(int(date_search.group(4)), strptime(date_search.group(2)[:3], \"%b\").tm_mon, int(date_search.group(3)), hour, int(date_search.group(6)), int(date_search.group(7)))\n    \n    return new_date\n\nqud[\"End Time\"] = qud[\"End Time\"].apply(convert_to_date)\n\n#Pull the month out of Game Time\n\ndef convert_game_month(date_in):\n    date_search = re.search(\"of\\s*((\\w*\\s*)*)\", date_in)\n    return date_search.group(1)\n\nqud[\"Game End Time\"] = qud[\"Game End Time\"].apply(convert_game_month)\n\ndef clean_artifacts(artifact):\n    x_search = re.search(\"(x\\d+)\", artifact) #remove multipliers like \"x2\"\n    if x_search != None:\n        artifact = artifact.replace(x_search.group(0), \"\").strip()\n    \n    mul_search = re.search(\"((-?\\d+\\s*\\d+d\\d+)+)\", artifact) #removes pv and possible weapon damage like \"2 1d3\"\n    if mul_search != None:\n        artifact = artifact.replace(mul_search.group(0), \"\").strip()\n    \n    artifact = artifact.replace(\"->\", \"\").replace(\"<>\", \"\").strip() #removes -> and <> which should be empty from previous cleaning\n    \n    cell_search = re.search(\"(\\[(\\w*\\s*)*\\])\", artifact) #removes [no cell], [shotgun shell] etc\n    if cell_search != None:\n        artifact = artifact.replace(cell_search.group(0), \"\").strip()\n    \n    digit_search = re.search(\"((\\d+\\s*)+)\", artifact) #removes any remaining digits such as av dv ie 2 4\n    if digit_search != None:\n        artifact = artifact.replace(digit_search.group(0), \"\").strip()\n    \n    return artifact\n\nqud[\"Artifact\"] = qud[\"Artifact\"].apply(clean_artifacts)\nqud.head() #print new, clean dataframe",
"Sorting by date and by score\nWith the End Time now cleaned up and converted to a datetime the entire dataframe can be sorted on this column, giving the dataframe in order of my earliest game to my most recent game. I can then print off my highscore progression and after sorting the dataframe by score I can print my 5 highest scores.",
"sorted_qud = qud.sort([\"End Time\"]).reset_index(drop = True) #Sort by End Time, reset the index and drop the old index\nhighscore = -10000\nprint \"Highscore Progression\" #Game Number, Name, Date, Score\nfor score in sorted_qud[\"Score\"]:\n if int(score) > highscore:\n highscore = int(score)\n print \"%d %s %s %d\" % (\n int(sorted_qud.index[sorted_qud[\"Score\"] == score][0])+ 1, #the index value of the game + 1. My first game is at index 0 so add 1 and this becomes game 1\n sorted_qud[\"Name\"][sorted_qud[\"Score\"] == score].tolist()[0], #Character's name\n sorted_qud[\"End Time\"][sorted_qud[\"Score\"] == score].tolist()[0], #End Time of game\n int(score) #Score\n )\nprint \"\\n\" \nprint \"Highest Scores\"\nsorted_scores = qud.sort([\"Score\"], ascending = False).reset_index(drop = True) #sort by score\nfor i in range(5):\n print sorted_scores[\"Name\"].iloc[i], sorted_scores[\"End Time\"].iloc[i], sorted_scores[\"Score\"].iloc[i] #print Name, End Time and Score for first 5 rows",
"Plotting my score data\nThere are now a number of plots I can build using the data I have pulled down. Two simple scatter plots can be made, one plotting my score against the number of turns taken in each game and another plotting my score against the number of zones visited. Using the sklearn library I can fit a linear regression line to each plot and also use it to predict the score of my current game.\nUsing the dataframe sorted on date I can plot a bar for the score in each game and then plot a 5 game simple moving average. Red vertical lines are added for patch updates to see if my score is affected by these. The lines represent the 4th, 8th and 15/21st of August (I didn't play any games between the 15th and 21st) and I remember it took me a while to get to grips with the game after the 21st of August patch. I blame Ctesiphus.\nThe final plot is a histogram of my highscores. Not impressive.",
"from sklearn import linear_model\n\nfig = plt.figure(figsize = (20,10))\np1 = fig.add_subplot(221) # 2x2, plot 1 (top left)\np2 = fig.add_subplot(222) # 2x2, plot 2 (top right)\np3 = fig.add_subplot(223) # 2x2, plot 3 (bottom left)\np4 = fig.add_subplot(224) # 2x2, plot 4 (bottom right)\n\n#Turns to Score\np1.scatter(qud[\"Turns\"], qud[\"Score\"], color=\"green\") #Turns on x axis, score on y axis, color green (this is Qud after all)\n\nX = np.array(qud[\"Turns\"]).reshape(len(qud),1) #variable X is an np.array of the turns, len(qud) rows, 1 column\ny= np.array(qud[\"Score\"]).reshape(len(qud),1) #variable y is an np.array of the scores, len(qud) rows, 1 column\n\nturns_score = linear_model.LinearRegression()\nturns_score.fit(X, y) #fit turns and score using linear regression\n\n#plot a line with turns on the x axis and predicted score for that many turns from the linear regression model on the y axis\np1.plot(qud[\"Turns\"], turns_score.predict(X), color=\"red\") \np1.set_title(\"Score per Turn\")\np1.set_xlabel(\"Turns\")\np1.set_ylabel(\"Score\")\np1.axis('tight')\n\n#Zones to Score\np2.scatter(qud[\"Zones\"], qud[\"Score\"], color=\"green\")\nX= np.array(qud[\"Zones\"]).reshape(len(qud),1) #Update X to be an np.array of zones, y stays as score above\n\nzones_score = linear_model.LinearRegression()\nzones_score.fit(X, y) #fit zones to score\n\n#plot a line with zones on the x axis and predicted score for that many zones from the linear regression model on the y axis\np2.plot(qud[\"Zones\"], zones_score.predict(X), color=\"red\")\np2.set_title(\"Score per Zone\")\np2.set_xlabel(\"Zones\")\np2.set_ylabel(\"Score\")\np2.axis('tight')\n\n#using the sorted-by-date dataframe, plot a bar chart of the scores; sorted_qud.index.values starts at 0, not 1\np3.bar(sorted_qud.index.values, sorted_qud[\"Score\"], color=\"green\")\np3.plot(pd.rolling_mean(sorted_qud[\"Score\"].values, window=5, min_periods=1), color=\"red\", linewidth=2) #plot a 5 game simple moving average\n\np3.set_title(\"5 Game Moving Average\")\np3.set_xlabel(\"Game (Vertical lines represent patches: Aug 4, Aug 8, Aug 15/21)\")\np3.set_ylabel(\"Score\")\np3.axis('tight')\n#These numbers are taken manually from looking at the dataframe and seeing when was the first game I played on/after each patch release\np3.axvline(24, color = \"red\", linewidth = 2) #first game on/after Aug 4th\np3.axvline(27, color = \"red\", linewidth = 2) #first game on/after Aug 8th\np3.axvline(29, color = \"red\", linewidth = 2) #first game on/after Aug 15th and 21st\n\n#Histogram. Depressing\np4.hist(qud[\"Score\"], bins = 50);\np4.axis('tight')\np4.set_title(\"Score Frequency\")\np4.set_xlabel(\"Score (50 bins)\")\np4.set_ylabel(\"Frequency\")\n\n\nplt.tight_layout()",
"Using the linear regression models I can now get the coefficient, intercept and root mean square error for the score per turn line and the score per zone line. Also, as the number of turns is displayed when a game is saved, I can calculate how many points my current save game is worth. However, I would not expect this figure to be very accurate due to the small number of data points available, and I'm not willing to have my character die to check how right the figure is!\nIn linear regression y = a + Xb, where a is the intercept and b is the coefficient. So using the data below:\nscore = -2171.04135919 + Turns(1.21948537419)\nThe root mean square error is found by taking the square root of the mean_squared_error of the score compared to the predicted score.\nRearranging the figures gives the equation turns = (score - intercept)/coefficient (since the intercept is negative, subtracting it is the same as adding its absolute value, as the code does), which allows me to predict the number of turns needed for 100,000 points and 1,000,000 points. I have a bit of work to do yet! (I expect these numbers to change substantially as the model gets more data points in the high game points range; over half the data points are in the minus range at the moment)",
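Inverting the fitted line can be sketched as below; the intercept and coefficient here are illustrative round-offs, not the exact figures printed by the next cell:

```python
import math

intercept = -2171.04  # illustrative, roughly the fitted intercept
coef = 1.2195         # illustrative, roughly the fitted coefficient

def turns_for_score(target):
    # Invert score = intercept + coef * turns and round up to whole turns
    return int(math.ceil((target - intercept) / coef))

print(turns_for_score(100000))
```

Because the intercept is negative, subtracting it adds its absolute value, matching the notebook's own calculation.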
"from sklearn.metrics import mean_squared_error\nfrom math import sqrt\nprint \"For Score Per Turn\"\nprint \"Total turns multiplied by the coefficient plus the intercept = my score\"\nprint \"Coefficient: \", turns_score.coef_[0][0]\nprint \"Intercept: \", turns_score.intercept_[0]\nprint \"RMSE: \", sqrt(mean_squared_error(y, turns_score.predict(np.array(qud[\"Turns\"]).reshape(len(qud),1))))\nprint \"Predicted score from my current game (59924 turns): \", int(turns_score.predict(59924)[0][0])\nprint \"Turns needed for 100,000 points: \", int(math.ceil(((100000 + abs(turns_score.intercept_))/turns_score.coef_)[0][0]))\nprint \"Turns needed for 1,000,000 points: \", int(math.ceil(((1000000 + abs(turns_score.intercept_))/turns_score.coef_)[0][0]))\n\nprint \"For Score Per Zone\"\nprint \"Total zones visited multiplied by the coefficient plus the intercept = my score\"\nprint \"Coefficient: \", zones_score.coef_[0][0]\nprint \"Intercept \", zones_score.intercept_[0]\nprint \"RMSE: \", sqrt(mean_squared_error(y, zones_score.predict(np.array(qud[\"Zones\"]).reshape(len(qud),1))))",
"A look at some of the remaining data\nI loaded a lot of data into the dataframe but have only looked at End Time, score, zones and turns. As time goes on and I get more entries I may be able to do more with the following bits of data.",
"#Each month mentioned in the Game End Time\ngame_months = qud[\"Game End Time\"]\nprint np.unique(game_months)\nprint len(np.unique(game_months))\n\n#Use groupby to find most mentioned month, ie the month I have died most in. Nivvun Ut is the very first month...\nqud['Game End Time'].groupby(qud['Game End Time']).count().order(ascending = False)\n\n#Use group by to find the most advanced artifact I held when I died. Lots of no artifacts and lots of artifacts awarded for finishing the first 2 missions in Joppa\nqud['Artifact'].groupby(qud['Artifact']).count().order(ascending = False)",
"I have done slightly more work with the Enemy column. First I print it off as is but then make some changes. \nDeaths from bleeding were always the result of young ivory. In my early games I took hemophilia as a defect, which basically meant instant death at low toughness levels and no bandages when I started bleeding, so \"bleeding\" and \"young ivory\" are combined into a single group.\nDeaths from scalding steam were always the result of fire ants setting the water around me on fire and I, like an idiot, taking a step forward. Only once was there another reason...I set the water on fire with my Flaming Hands and walked into it. Since then I have only used Freezing Hands.\nAll snapjaws, including the two faction leaders, are put into a single category.\n\"Wahmahcalcalit\", \"Umchuum\" and \"Duhmahcaluhcal\" are evidence my character was suffering confusion when he died; these were 3 wizard faction leaders.\nI could do further grouping. Chute crab and king crab could be combined, the two named Snapjaws could be put in with the ogre instead to form a Named or Faction Leader category, dawnglider could be added to the fire ant/scalding steam category, etc.",
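The same regrouping can be expressed as a lookup dict, which is easier to extend than chained if statements; only a hypothetical subset of the names is mapped here:

```python
# Hypothetical subset of the regrouping described above
GROUPS = {
    'Wahmahcalcalit': 'wizard',
    'Umchuum': 'wizard',
    'Duhmahcaluhcal': 'wizard',
    'snapjaw scavenger': 'snapjaw',
    'snapjaw hunter': 'snapjaw',
    'bleeding': 'young ivory/bleeding',
    'young ivory': 'young ivory/bleeding',
    'scalding steam': 'fire ant/scalding steam',
    'fire ant': 'fire ant/scalding steam',
}

def regroup(name):
    name = name.strip()
    return GROUPS.get(name, name)  # unknown names pass through unchanged

print([regroup(n) for n in ['Umchuum', ' bleeding ', 'chute crab']])
```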
"qud['Enemy'].groupby(qud['Enemy']).count().order(ascending = False)\n\n#create a list called enemies, add new values to it, convert to a dataframe and groupby name\nenemies = qud[\"Enemy\"].tolist()\nfor i in range(len(enemies)):\n name = enemies[i].strip()\n if name in [\"Wahmahcalcalit\", \"Umchuum\", \"Duhmahcaluhcal\"]:\n enemies[i] = \"wizard\"\n if name in [\"snapjaw scavenger\", \"napjaw scavenger\", \"snapjaw hunter\", \"Groubuubu-wof-wofuz, the stalwart Snapjaw Tot-eater\", \"Ruf-ohoubub, the stalwart Snapjaw Bear-baiter\"]:\n enemies[i] = \"snapjaw\"\n if name in [\"young ivory\", \"bleeding\"]:\n enemies[i] = \"young ivory/bleeding\" \n if name in [\"scalding steam\", \"fire ant\"]:\n enemies[i] = \"fire ant/scalding steam\"\n\nenemy_df = pd.DataFrame(enemies, columns=[\"Name\"])\nenemy_df['Name'].groupby(enemy_df['Name']).count().order(ascending = False)",
"There are a number of highscores that have an empty line where the death description should be and these are marked unknown. Also, others will just read \"died from bleeding\". Here bleeding is added as both the enemy name and the weapon.\nBelow is a look at the weapons that have killed my characters. Rending mandibles. Giant pseudopod. The things I've seen.",
"qud['Weapon'].groupby(qud['Weapon']).count().order(ascending = False)",
"Below is the complete dataframe sorted by date",
"sorted_qud"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JorisBolsens/PYNQ
|
Pynq-Z1/notebooks/examples/pmod_grove_light.ipynb
|
bsd-3-clause
|
[
"Grove Light Sensor 1.1\nThis example shows how to use the Grove Light Sensor v1.1. You will also see how to plot a graph using matplotlib.\nThe Grove Light Sensor produces an analog signal which requires an ADC.\nThe Grove Light Sensor, PYNQ Grove Adapter, and Grove I2C ADC are used for this example. \nWhen the ambient light intensity increases, the resistance of the LDR or photoresistor will decrease. This means that the output signal from this module will be HIGH in bright light, and LOW in the dark. Values for the sensor range from ~5.0 (bright) to >35.0 (dark).\n1. Load overlay\nDownload the base overlay.",
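As a rough illustration of that value range, a reading could be bucketed into qualitative levels; the helper and its cut-off points below are assumptions for illustration, not part of the PYNQ API:

```python
def light_level(value):
    # Cut-offs are assumptions based on the quoted ~5.0 (bright)
    # to >35.0 (dark) range
    if value < 10.0:
        return 'bright'
    if value < 35.0:
        return 'dim'
    return 'dark'

print(light_level(6.2))
```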
"from pynq import Overlay\nOverlay(\"base.bit\").download()",
"2. Read single luminance value\nNow read from the Grove Light sensor which is connected to the ADC. In this example, the PYNQ Grove Adapter is connected to PMODA interface on the board.\nThe Grove I2C ADC is used as a bridge between G4 on the Grove Adapter and the Grove Light Sensor.",
"from pynq.iop import Grove_Light\nfrom pynq.iop import PMODA\nfrom pynq.iop import PMOD_GROVE_G4\n\nlgt = Grove_Light(PMODA, PMOD_GROVE_G4)\n\nsensor_val = lgt.read()\nprint(sensor_val)",
"3. Plot the light intensity over time\nThis Python code will do multiple light measurements over a 10 second period. \nTo change the light intensity, cover and uncover the light sensor. In typical ambient light, there is no need to provide an external light source, as the sensor is already reading at full scale.",
"import time\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nlgt.set_log_interval_ms(100)\nlgt.start_log()\n# Change input during this time\ntime.sleep(10)\nr_log = lgt.get_log()\n\nplt.plot(range(len(r_log)), r_log, 'ro')\nplt.title('Grove Light Plot')\nmin_r_log = min(r_log)\nmax_r_log = max(r_log)\nplt.axis([0, len(r_log), min_r_log, max_r_log])\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tkurfurst/deep-learning
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
mit
|
[
"TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]\n\ntext[:100]",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)",
"import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
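One possible sketch of the lookup tables (an illustration, not the graded solution), assigning ids by descending word frequency:

```python
from collections import Counter

def create_lookup_tables_sketch(text):
    # text: the script already split into a list of words
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {i: word for i, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: i for i, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables_sketch(['a', 'b', 'a', 'c', 'a', 'b'])
print(v2i)
```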
"Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n return None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
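A sketch of what the token dictionary could look like; the exact token spellings are one possible choice, as long as each is unambiguous and contains no spaces:

```python
def token_lookup_sketch():
    # Token spellings are an illustrative choice, not the only valid one
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '--': '||Dash||',
        '\n': '||Return||',
    }

print(len(token_lookup_sketch()))
```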
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"Build the Neural Network\nYou'll build the components necessary to build an RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)",
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n return None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The RNN size should be set using rnn_size\n- Initialize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)",
"def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.",
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Build RNN\nYou created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final state in the following tuple (Outputs, FinalState)",
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)",
"def build_nn(cell, rnn_size, input_data, vocab_size):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.",
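A NumPy sketch of this batching scheme (an illustration, not the graded solution); it reproduces the worked example above, including the wrap-around target:

```python
import numpy as np

def get_batches_sketch(int_text, batch_size, seq_length):
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    # Drop the words that cannot fill a complete batch
    xdata = np.array(int_text[:n_batches * words_per_batch])
    # Targets are the inputs shifted left; the last wraps to the first word
    ydata = np.roll(xdata, -1)
    x = xdata.reshape(batch_size, -1)
    y = ydata.reshape(batch_size, -1)
    return np.array(list(zip(np.split(x, n_batches, axis=1),
                             np.split(y, n_batches, axis=1))))

batches = get_batches_sketch(list(range(1, 21)), 3, 2)
print(batches.shape)
```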
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.",
"# Number of Epochs\nnum_epochs = None\n# Batch Size\nbatch_size = None\n# RNN Size\nrnn_size = None\n# Sequence Length\nseq_length = None\n# Learning Rate\nlearning_rate = None\n# Show stats for every n number of batches\nshow_every_n_batches = None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Save Parameters\nSave seq_length and save_dir for generating a new TV script.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)",
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n return None, None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Choose Word\nImplement the pick_word() function to select the next word using probabilities.",
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
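One common approach (a sketch, not the graded solution) is to sample from the probability distribution rather than always taking the most likely word, which tends to make the generated script loop:

```python
import numpy as np

def pick_word_sketch(probabilities, int_to_vocab):
    # Sample a word id according to the predicted probabilities
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]

print(pick_word_sketch(np.array([0.0, 1.0]), {0: 'moe', 1: 'homer'}))
```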
"Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.",
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bayesimpact/bob-emploi
|
data_analysis/notebooks/datasets/imt/market_score_api_dataset.ipynb
|
gpl-3.0
|
[
"Author: Marie Laure, marielaure@bayesimpact.org\nIMT Market Score from API\nThe IMT dataset provides regional statistics about different jobs. Here we are interested in the market score (called by Pôle Emploi the tension ratio; the name is a bit misleading, as a big tension ratio means plenty of jobs...). It corresponds to a ratio of the average number of weekly open offers to the average number of weekly applications per 10 candidates. This value is provided among others (e.g. number of offers in the last week, number of applications in the last week...) in the \"statistics on offers and demands\" subset of the IMT dataset.\nPreviously, we retrieved IMT data by scraping the IMT website. As an exploratory step, we are interested in the sanity of the API-based data and in identifying putative additional information provided only by the API.\nThe dataset can be retrieved with the following command (it takes ~15 minutes):\ndocker-compose run --rm data-analysis-prepare make data/imt/market_score.csv\nData Sanity\nLoading and General View\nFirst let's load the csv file:",
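Under one reading of that definition, the market score is the number of weekly offers per 10 weekly applications. A hedged sketch (the function and numbers are illustrative, not Pôle Emploi's published formula):

```python
def tension_ratio(weekly_offers, weekly_applications):
    # Offers per 10 candidates; Pôle Emploi leaves the real ratio
    # undefined below a reporting threshold
    if weekly_applications == 0:
        return None
    return 10 * weekly_offers / weekly_applications

print(tension_ratio(30, 60))
```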
"import os\nfrom os import path\n\nimport pandas as pd\nimport seaborn as _\n\nDATA_FOLDER = os.getenv('DATA_FOLDER')\n\nmarket_statistics = pd.read_csv(path.join(DATA_FOLDER, 'imt/market_score.csv'))\nmarket_statistics.head()",
"Wow! Tons of columns! There is a lot of information on whether a job is seasonal, i.e. shows a peak in offers in a particular month. Seasonal is defined as having twice as many offers as the monthly average (calculated over a year), with this pattern seen in two subsequent years. Because we are not interested in seasonality here, we'll remove at least the per-month data (12 columns).",
"to_remove = [name for name in market_statistics.columns if 'SEASONAL_' in name]\nmarket_statistics.drop(to_remove, axis=1, inplace=True)\nmarket_statistics.sort_values(['ROME_PROFESSION_CARD_CODE', 'AREA_CODE']).head()",
"OK. Some values are missing for the market score; the documentation states that the ratio is undefined when offers and demands are below 30.\nHow many missing values do we have for the tension ratio here?",
"market_statistics.TENSION_RATIO.notnull().describe()",
"Data is missing for 87% of the lines!\nLines represent data for an Area x Rome job group. So how many lines should we expect? First, how many areas, area types and job groups do we have?",
"market_statistics[['ROME_PROFESSION_CARD_CODE', 'AREA_CODE', 'AREA_TYPE_CODE']].describe()",
"Oh! Look at the job groups... even the most recent ROME job groups are here! Good job Pôle Emploi! There are 4 area types (consistent with the documentation) and 509 areas.\nBecause some areas may be labelled with multiple area types, let's see how many area x area type combinations we have here.",
"pd.concat([market_statistics['AREA_TYPE_CODE'], market_statistics['AREA_CODE']]).nunique()",
"A little bit more than the 509 unique area names, thus confirming that some area names describe more than one area type.\nWith this in mind, we would expect 513 x 532 = 272916 lines if there were information on each job in each area. Hmmm... That's not the case and we have ~9.5% of the expected lines missing.\nFor the remaining 32013 lines (~11.7% of the expected lines) with market score data, what is the distribution of these scores?",
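The share of expected lines that carry a score can be checked directly from the figures above:

```python
expected_lines = 272916   # factors as 513 area/area-type combinations x 532 job groups
lines_with_score = 32013
print(round(100 * lines_with_score / expected_lines, 1))
```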
"market_with_score = market_statistics[market_statistics.TENSION_RATIO.notnull()]\nmarket_with_score.TENSION_RATIO.describe()",
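Back to the expected-line arithmetic from above — it can be verified directly (assuming the 513 unique area x area-type values counted earlier and the 532 job groups):

```python
combos = 513        # unique area x area-type values counted above
jobs = 532          # ROME job groups
expected = combos * jobs
print(expected)     # 272916

with_score = 32013  # lines carrying a market score
share = with_score / float(expected)
print(round(100 * share, 1))  # ~11.7 (% of expected lines)
```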
"On the subset with market score information, the score is usually between 3 and 8, which is not super reassuring from a candidate's point of view... we should remember that this corresponds to the number of offers per 10 candidates. In the end, most of the time there is less than 1 offer per candidate.\nHowever, in some markets (area/job) we can find extreme values (the max is 1664 offers for 10 persons). How many of these extreme/unexpected values can we find, and to which jobs and areas do they correspond?",
"market_with_score[market_with_score.TENSION_RATIO > 50].TENSION_RATIO.hist();\nmarket_with_score[market_with_score.TENSION_RATIO > 50]\\\n .sort_values('TENSION_RATIO', ascending=False)\\\n [['TENSION_RATIO', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'SEASONAL']].head()",
"The 1664 offers for 10 persons observed above appears to be a real outlier. However, it corresponds to a seasonal job and may be linked to a specific place recruiting tons of people (mall, resort...). Note that it sounds like a great idea to apply to be a baker in Arles!\nWe noticed that the AREA_TYPE_NAME variable can cover multiple values. Can we say more about this?",
"market_statistics.AREA_TYPE_NAME.value_counts()",
"This dataset has multiple granularity layers: we have information at the department (\"Département\") level, the region level, and for the whole country! \nFor one job, can we have observations for multiple areas? Let's try for butchers in the \"Lyon\" area; the department is Rhône and the region Auvergne-Rhône-Alpes.",
"market_statistics[\n (market_statistics.AREA_NAME == 'LYON CENTRE') &\\\n (market_statistics.ROME_PROFESSION_CARD_NAME == 'Boucherie')]\\\n [['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]\n\nmarket_statistics[\n (market_statistics.AREA_NAME == 'RHONE') &\\\n (market_statistics.ROME_PROFESSION_CARD_NAME == 'Boucherie')]\\\n [['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]\n\nmarket_statistics[\n (market_statistics.AREA_NAME == 'AUVERGNE-RHONE-ALPES') &\\\n (market_statistics.ROME_PROFESSION_CARD_NAME == 'Boucherie')]\\\n [['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]",
"Good! We have info for all of these.\nLet's go a little bit more general. How many jobs do we have here?",
"market_statistics.ROME_PROFESSION_CARD_CODE.nunique()",
"How many of these are represented in each area? If we have data for every job in every area, we expect to have 532 jobs in each area.",
"area_romes = market_statistics.groupby(['AREA_TYPE_CODE', 'AREA_CODE']).ROME_PROFESSION_CARD_NAME.size()\narea_romes.hist();",
"For some areas, we have missing jobs. They could be missing because some jobs have 0 offers, 0 applications etc... \nCan we find some of these zero values in the dataset?",
"market_statistics[(market_statistics.NB_APPLICATION_END_MONTH == 0) &\\\n (market_statistics.NB_OFFER_END_MONTH == 0) &\\\n (market_statistics.NB_OFFER_LAST_WEEK == 0) &\\\n (market_statistics.NB_APPLICATION_LAST_WEEK == 0)].head()",
"There are jobs with zeros and NAs. So the missing values and the zeros are probably different things. We couldn't find any information about this in the documentation.\nIs there an area level (except the whole country) for which we have info for all job groups?",
"department_romes = market_statistics[market_statistics.AREA_TYPE_CODE == 'D'].\\\n groupby('AREA_NAME').ROME_PROFESSION_CARD_NAME.size()\ndepartment_romes.hist();",
"Arf... Almost... A couple of departments have some jobs not represented.\nLet's see with the regions.",
"region_romes = market_statistics[market_statistics.AREA_TYPE_CODE == 'R'].\\\n groupby('AREA_NAME').ROME_PROFESSION_CARD_NAME.size()\nregion_romes.hist();",
"Nothing is perfect! But most of the regions have information for all jobs.\nLet's have a look at an area for which there are fewer jobs than expected (532).\nFirst, what are the areas with fewer than 532 job groups?",
"area_romes = area_romes.to_frame()\narea_romes = area_romes.reset_index(['AREA_TYPE_CODE', 'AREA_CODE'])\narea_romes.columns = ['AREA_TYPE_CODE', 'AREA_CODE', 'jobgroups']\ndepartment_romes = area_romes[area_romes.AREA_TYPE_CODE == 'D']\ndepartment_romes.sort_values('jobgroups').head(10)",
"Overseas territories (97X area codes) and Corsica (2X area codes) are the areas with the highest number of missing job groups.\nConclusion\nThis dataset seems quite clean, even if:\n- There is little information on market score\n- Some areas have missing jobs, and this does not seem to be related to the lines with zeros...\nHowever, the multiple granularity layers seem consistent with each other.\nComparison with Scraped Data\nLet's compare these data with the ones that are now (2017/09/14) online.\nFor a nurse in the department \"Yonne\", there is no value.",
"market_statistics[\n (market_statistics.AREA_NAME == 'YONNE') &\\\n (market_statistics.ROME_PROFESSION_CARD_CODE == 'J1502')]\\\n [['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]",
"Same here!\nWhat about a plumber in the Cher department? The website announces 3 offers for 10 people.",
"market_statistics[\n (market_statistics.AREA_NAME == 'CHER') &\\\n (market_statistics.ROME_PROFESSION_CARD_CODE == 'F1603')]\\\n [['AREA_TYPE_NAME', 'ROME_PROFESSION_CARD_NAME', 'AREA_NAME', 'TENSION_RATIO']]",
"Same story here. Yippee! \nLet's have a look at the areas with missing jobs. As an example we'll look at the Haute-Saône department (code 70).",
"haute_saone_jobs = market_statistics[market_statistics.AREA_CODE == '70'].ROME_PROFESSION_CARD_NAME.unique()\nmarket_statistics[-market_statistics.ROME_PROFESSION_CARD_NAME.isin(haute_saone_jobs)].\\\n ROME_PROFESSION_CARD_NAME.unique()",
"Online, the \"Films d'animation et effets spéciaux\" values are specified as insufficient data.\nConclusion\nScraped data and data provided by the API are similar. \nA previous overview of the scraped data observed a market score median of 4 jobs for 10 candidates. Here we observe it at 5 jobs per 10 candidates.\nGeneral Conclusion\nThis dataset gives various information on job offers and demands at multiple area levels (from very local to national). It covers all 532 job groups. We noticed that some jobs are missing at certain area levels, and that this seems different from zero information. We should go back to Pôle Emploi with this question.\nWhen focusing only on the market score, we observed a majority of missing values, probably due to the low average number of offers and demands.\nNevertheless, compared to the scraped data, we now have access to both finer and larger area types (\"Country\" and \"Bassin\" in addition to \"Department\" and \"Région\"). Furthermore, the data is clean enough that we recommend:\n- Switching to the API data and dropping the scraped data\n- Exploring the data at different levels"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GSimas/EEL7045
|
Aula 9.1 - Capacitores.ipynb
|
mit
|
[
"Jupyter Notebook developed by Gustavo S.S.\n\n\"In science the credit goes to the man who convinces the world, not to the man to whom the idea first occurs\" - Francis Darwin\n\nCapacitors and Inductors\nIn contrast with a resistor, which spends or dissipates energy irreversibly, an inductor or a capacitor stores or releases energy (i.e., it has a memory).\nCapacitor\nA capacitor is a passive element designed to store energy in its electric field. A capacitor consists of two conducting plates separated by an insulator (or dielectric).\n\nWhen a voltage source v is connected to the capacitor, as in Figure 6.2, the source deposits a positive charge q on one plate and a negative charge -q on the other. The capacitor is said to store the electric charge. The amount of charge stored, represented by q, is directly proportional to the applied voltage v, so that:\n\\begin{align}\n{\\Large q = Cv}\n\\end{align}\nCapacitance is the ratio of the charge on one plate of a capacitor to the voltage difference between the two plates, measured in farads (F). Although the capacitance C of a capacitor is the ratio of the charge q per plate to the applied voltage v, it does not depend on q or v; it depends on the physical dimensions of the capacitor:\n\\begin{align}\n{\\Large C = \\epsilon \\frac{A}{d}}\n\\end{align}\nwhere A is the area of each plate, d is the distance between the plates and ε is the electric permittivity of the dielectric material between the plates.\n\nTo obtain the current-voltage relationship of the capacitor, we use:\n\\begin{align}\n{\\Large i = C \\frac{dv}{dt}}\n\\end{align}\nCapacitors that satisfy the equation above are said to be linear. For a nonlinear capacitor, the plot of the current-voltage relationship is not a straight line. Although some capacitors are nonlinear, most are linear.\n\nVoltage-current relationship:\n\\begin{align}\n{\\Large v(t) = \\frac{1}{C} \\int_{t_0}^{t} i(\\tau)d\\tau + v(t_0)}\n\\end{align}\nThe instantaneous power delivered to the capacitor is:\n\\begin{align}\n{\\Large p = vi = Cv \\frac{dv}{dt}}\n\\end{align}\nThe energy stored in the capacitor is:\n\\begin{align}\n{\\Large w = \\int_{-\\infty}^{t} p(\\tau)d\\tau = C \\int_{-\\infty}^{t} v \\frac{dv}{d\\tau}d\\tau = C \\int_{v(-\\infty)}^{v(t)} v\\,dv = \\frac{1}{2} Cv^2}\n\\end{align}\nWe note that v(-∞) = 0, because the capacitor was uncharged at t = -∞. Hence:\n\\begin{align}\n{\\Large w = \\frac{1}{2} Cv^2 = \\frac{q^2}{2C}}\n\\end{align}\nwhich represents the energy stored in the electric field between the plates of the capacitor. This energy can be retrieved, since an ideal capacitor cannot dissipate energy. In fact, the word capacitor derives from this element's capacity to store energy in an electric field.\n\n\nA capacitor is an open circuit to DC.\n\n\nThe voltage on a capacitor cannot change abruptly.\n\n\nThe ideal capacitor does not dissipate energy; it absorbs power from the circuit when storing energy in its field and returns previously stored energy when delivering power to the circuit.\n\n\nA real, nonideal capacitor has a parallel leakage resistance, as shown in the model of Figure 6.8. The leakage resistance may be as high as 100 MΩ and can be neglected for most practical applications.\n\n\n\nExample 6.1\na. Calculate the charge stored on a 3 pF capacitor with 20 V across it.\nb. Find the energy stored in the capacitor.",
"print(\"Example 6.1\")\n\nC = 3*(10**(-12))\nV = 20\nq = C*V\n\nprint(\"Stored charge:\",q,\"C\")\n\nw = q**2/(2*C)\n\nprint(\"Stored energy:\",w,\"J\")",
"Practice Problem 6.1\nWhat is the voltage across a 4.5 µF capacitor if the charge on one plate is 0.12 mC? How much energy is stored?",
"print(\"Practice Problem 6.1\")\n\nC = 4.5*10**-6\nq = 0.12*10**-3\nV = q/C\n\nprint(\"Capacitor voltage:\",V,\"V\")\n\nw = q**2/(2*C)\n\nprint(\"Stored energy:\",w,\"J\")",
"Example 6.2\nThe voltage across a 5 µF capacitor is:\nv(t) = 10 cos(6000t) V\nCalculate the current through it.",
"print(\"Example 6.2\")\n\nimport numpy as np\nfrom sympy import *\n\nC = 5*10**-6\nt = symbols('t')\nv = 10*cos(6000*t)\ni = C*diff(v,t)\n\nprint(\"Current through the capacitor:\",i,\"A\")",
"Practice Problem 6.2\nIf a 10 µF capacitor is connected to a voltage source with:\nv(t) = 75 sin(2000t) V\ndetermine the current through the capacitor.",
"print(\"Practice Problem 6.2\")\n\nC = 10*10**-6\nv = 75*sin(2000*t)\ni = C * diff(v,t)\n\nprint(\"Current:\",i,\"A\")",
"Example 6.3\nDetermine the voltage across a 2 µF capacitor if the current through it is:\ni(t) = 6e^(-3000t) mA\nAssume that the initial capacitor voltage is zero.",
"print(\"Example 6.3\")\n\nC = 2*10**-6\n\ni = 6*exp(-3000*t)*10**-3\nv = integrate(i,(t,0,t))\nv = v/C\n\nprint(\"Capacitor voltage:\",v,\"V\")",
"Practice Problem 6.3\nThe current through a 100 µF capacitor is:\ni(t) = 50 sin(120πt) mA. \nCalculate the voltage across it at t = 1 ms and t = 5 ms. Take v(0) = 0.",
"print(\"Practice Problem 6.3\")\n\nC = 100*10**-6\ni = 50*sin(120*np.pi*t)*10**-3\n\nv = integrate(i,(t,0,0.001))\nv = v/C\n\nprint(\"Capacitor voltage at t = 1 ms:\",v,\"V\")\n\nv = integrate(i,(t,0,0.005))\nv = v/C\n\nprint(\"Capacitor voltage at t = 5 ms:\",v,\"V\")",
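As a cross-check without sympy, the same integral can be evaluated numerically (a sketch using the trapezoidal rule; the reference values follow from the closed form v(t) = (0.05/(120πC))(1 − cos 120πt)):

```python
import math

C = 100e-6                                          # 100 uF
i = lambda t: 50e-3 * math.sin(120 * math.pi * t)   # current in amperes

def v_cap(t_end, steps=20000):
    """v(t_end) = (1/C) * integral_0^t_end i(tau) dtau with v(0) = 0,
    evaluated with the trapezoidal rule."""
    h = t_end / steps
    s = 0.5 * (i(0) + i(t_end)) + sum(i(k * h) for k in range(1, steps))
    return s * h / C

print(round(v_cap(0.001), 4))   # ~0.0931 V (93.1 mV)
print(round(v_cap(0.005), 3))   # ~1.736 V
```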
"Example 6.4\nDetermine the current through a 200 µF capacitor whose voltage is shown in Figure 6.9.",
"print(\"Example 6.4\")\n\n#v(t) = 50t, 0<t<1\n#v(t) = 100 - 50t, 1<t<3\n#v(t) = -200 + 50t, 3<t<4\n#v(t) = 0, otherwise\n\nC = 200*10**-6\n\nv1 = 50*t\nv2 = 100 - 50*t\nv3 = -200 + 50*t\n\ni1 = C*diff(v1,t)\ni2 = C*diff(v2,t)\ni3 = C*diff(v3,t)\n\nprint(\"Current for 0<t<1:\",i1,\"A\")\nprint(\"Current for 1<t<3:\",i2,\"A\")\nprint(\"Current for 3<t<4:\",i3,\"A\")",
"Practice Problem 6.4\nAn initially uncharged 1 mF capacitor has the current shown in Figure 6.11 through it. Calculate the voltage across it at t = 2 ms and t = 5 ms.",
"print(\"Practice Problem 6.4\")\n\nC = 1*10**-3\n\ni = 50*t*10**-3\nv = integrate(i,(t,0,0.002))\nv = v/C\n\nprint(\"Voltage at t=2ms:\",v,\"V\")\n\ni = 100*10**-3\nv = integrate(i,(t,0,0.005))\nv = v/C\n\nprint(\"Voltage at t=5ms:\",v,\"V\")",
"Example 6.5\nObtain the energy stored in each capacitor in Figure 6.12a under DC conditions.",
"print(\"Example 6.5\")\n\nC1 = 2*10**-3\nC2 = 4*10**-3\n\nI1 = (6*10**-3)*(3000)/(3000 + 2000 + 4000) #current through the 2k resistor\nVc1 = I1*2000 # voltage across cap1 = voltage across the 2k resistor\nwc1 = (C1*Vc1**2)/2\n\nprint(\"Energy in Capacitor 1:\",wc1,\"J\")\n\nVc2 = I1*4000\nwc2 = (C2*Vc2**2)/2\n\nprint(\"Energy in Capacitor 2:\",wc2,\"J\")",
"Practice Problem 6.5\nUnder DC conditions, find the energy stored in the capacitors in Figure 6.13.",
"print(\"Practice Problem 6.5\")\n\nC1 = 20*10**-6\nC2 = 30*10**-6\nVf = 50 #source voltage\nReq = 1000 + 3000 + 6000\n\nVc1 = Vf*(3000+6000)/Req\nVc2 = Vf*3000/Req\n\nwc1 = (C1*Vc1**2)/2\nwc2 = (C2*Vc2**2)/2\n\nprint(\"Energy in Capacitor 1:\",wc1,\"J\")\nprint(\"Energy in Capacitor 2:\",wc2,\"J\")",
"Series and Parallel Capacitors\nParallel\nThe equivalent capacitance of N parallel-connected capacitors is the sum of their individual capacitances.\n\\begin{align}\n{\\Large C_{eq} = C_1 + C_2 + ... + C_N = \\sum_{i=1}^{N} C_i}\n\\end{align}\n\nSeries\nThe equivalent capacitance of series-connected capacitors is the reciprocal of the sum of the reciprocals of the individual capacitances.\n\\begin{align}\n{\\Large \\frac{1}{C_{eq}} = \\frac{1}{C_1} + \\frac{1}{C_2} + ... + \\frac{1}{C_N}}\n\\end{align}\n\\begin{align}\n{\\Large C_{eq} = \\left(\\sum_{i=1}^{N} \\frac{1}{C_i}\\right)^{-1}}\n\\end{align}\n\nFor 2 capacitors:\n\\begin{align}\n{\\Large C_{eq} = \\frac{C_1 C_2}{C_1 + C_2}}\n\\end{align}\nExample 6.6\nFind the equivalent capacitance seen between terminals a-b of the circuit in Figure 6.16.",
"print(\"Example 6.6\")\n\nu = 10**-6 #definition of micro\nCeq1 = (20*u*5*u)/((20 + 5)*u)\nCeq2 = Ceq1 + 6*u + 20*u\nCeq3 = (Ceq2*60*u)/(Ceq2 + 60*u)\n\nprint(\"Equivalent capacitance:\",Ceq3,\"F\")",
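The series/parallel reductions done step by step above can be packaged as two small helper functions (a sketch; capacitances in farads):

```python
def c_parallel(*caps):
    """Equivalent capacitance of parallel capacitors: the sum of the values."""
    return sum(caps)

def c_series(*caps):
    """Equivalent capacitance of series capacitors: the reciprocal of the
    sum of reciprocals."""
    return 1.0 / sum(1.0 / c for c in caps)

u = 1e-6  # micro
# Example 6.6 again: (20u in series with 5u), in parallel with 6u and 20u,
# all in series with 60u
ceq = c_series(c_parallel(c_series(20*u, 5*u), 6*u, 20*u), 60*u)
print(ceq)  # ~2e-05 F (20 uF), matching the step-by-step result above
```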
"Practice Problem 6.6\nFind the equivalent capacitance at the terminals of the circuit in Figure 6.17.",
"print(\"Practice Problem 6.6\")\n\nCeq1 = (60*u*120*u)/((60 + 120)*u)\nCeq2 = 20*u + Ceq1\nCeq3 = 50*u + 70*u\nCeq4 = (Ceq2 * Ceq3)/(Ceq2 + Ceq3)\n\nprint(\"Equivalent capacitance:\",Ceq4,\"F\")",
"Example 6.7\nFor the circuit in Figure 6.18, determine the voltage across each capacitor.",
"print(\"Example 6.7\")\n\nm = 10**-3\nVf = 30\n\nCeq1 = 40*m + 20*m\nCeq2 = 1/(1/(20*m) + 1/(30*m) + 1/(Ceq1))\n\nprint(\"Equivalent capacitance:\",Ceq2,\"F\")\n\nq = Ceq2*Vf\n\nv1 = q/(20*m)\nv2 = q/(30*m)\nv3 = Vf - v1 - v2\n\nprint(\"Voltage v1:\",v1,\"V\")\nprint(\"Voltage v2:\",v2,\"V\")\nprint(\"Voltage v3:\",v3,\"V\")",
"Practice Problem 6.7\nFind the voltage across each capacitor in Figure 6.20.",
"print(\"Practice Problem 6.7\")\n\nVf = 90\n\nCeq1 = (30*u * 60*u)/(30*u + 60*u)\nCeq2 = Ceq1 + 20*u\nCeq3 = (40*u * Ceq2)/(40*u + Ceq2)\n\nprint(\"Equivalent capacitance:\",Ceq3,\"F\")\n\nq1 = Ceq3*Vf\n\nv1 = q1/(40*u)\nv2 = Vf - v1\n\nq3 = Ceq1*v2\n\nv3 = q3/(60*u)\nv4 = q3/(30*u)\n\nprint(\"Voltage v1:\",v1,\"V\")\nprint(\"Voltage v2:\",v2,\"V\")\nprint(\"Voltage v3:\",v3,\"V\")\nprint(\"Voltage v4:\",v4,\"V\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
BrainIntensive/OnlineBrainIntensive
|
resources/matplotlib/Examples/boxplots.ipynb
|
mit
|
[
"Sebastian Raschka\nback to the matplotlib-gallery at https://github.com/rasbt/matplotlib-gallery",
"%load_ext watermark\n\n%watermark -u -v -d -p matplotlib,numpy",
"<font size=\"1.5em\">More info about the %watermark extension</font>",
"%matplotlib inline",
"<br>\n<br>\nBoxplots in matplotlib\nSections\n\n\nSimple Boxplot\n\n\nBlack and white Boxplot\n\n\nHorizontal Boxplot\n\n\nFilled and cylindrical boxplots\n\n\nBoxplots with custom fill colors\n\n\nViolin plots\n\n\n<br>\n<br>\n<br>\n<br>\n\n<br>\n<br>\nSimple Boxplot\n[back to top]",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nall_data = [np.random.normal(0, std, 100) for std in range(1, 4)]\n\nfig = plt.figure(figsize=(8,6))\n\nplt.boxplot(all_data, \n notch=False, # box instead of notch shape \n sym='rs', # red squares for outliers\n vert=True) # vertical box alignment\n\nplt.xticks([y+1 for y in range(len(all_data))], ['x1', 'x2', 'x3'])\nplt.xlabel('measurement x')\nt = plt.title('Box plot')\nplt.show()",
"<br>\n<br>\nBlack and white Boxplot\n[back to top]",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nall_data = [np.random.normal(0, std, 100) for std in range(1, 4)]\n\nfig = plt.figure(figsize=(8,6))\n\nbplot = plt.boxplot(all_data, \n notch=False, # box instead of notch shape \n sym='rs', # red squares for outliers\n vert=True) # vertical box alignment\n\nplt.xticks([y+1 for y in range(len(all_data))], ['x1', 'x2', 'x3'])\nplt.xlabel('measurement x')\n\nfor components in bplot.keys():\n for line in bplot[components]:\n line.set_color('black') # black lines\n\nt = plt.title('Black and white box plot')\n\nplt.show()",
"<br>\n<br>\nHorizontal Boxplot\n[back to top]",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nall_data = [np.random.normal(0, std, 100) for std in range(1, 4)]\n\nfig = plt.figure(figsize=(8,6))\n\nplt.boxplot(all_data, \n notch=False, # box instead of notch shape \n sym='rs', # red squares for outliers\n vert=False) # horizontal box alignment\n\nplt.yticks([y+1 for y in range(len(all_data))], ['x1', 'x2', 'x3'])\nplt.ylabel('measurement x')\nt = plt.title('Horizontal Box plot')\nplt.show()",
"<br>\n<br>\nFilled and cylindrical boxplots\n[back to top]",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nall_data = [np.random.normal(0, std, 100) for std in range(1, 4)]\n\nfig = plt.figure(figsize=(8,6))\n\nplt.boxplot(all_data, \n notch=True, # notch shape \n sym='bs', # blue squares for outliers\n vert=True, # vertical box alignment\n patch_artist=True) # fill with color\n\nplt.xticks([y+1 for y in range(len(all_data))], ['x1', 'x2', 'x3'])\nplt.xlabel('measurement x')\nt = plt.title('Box plot')\nplt.show()",
"<br>\n<br>\nBoxplots with custom fill colors\n[back to top]",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nall_data = [np.random.normal(0, std, 100) for std in range(1, 4)]\n\nfig = plt.figure(figsize=(8,6))\n\nbplot = plt.boxplot(all_data, \n notch=False, # box instead of notch shape \n vert=True, # vertical box alignment\n patch_artist=True) # fill with color\n\ncolors = ['pink', 'lightblue', 'lightgreen']\nfor patch, color in zip(bplot['boxes'], colors):\n patch.set_facecolor(color)\n\nplt.xticks([y+1 for y in range(len(all_data))], ['x1', 'x2', 'x3'])\nplt.xlabel('measurement x')\nt = plt.title('Box plot')\nplt.show()",
"<br>\n<br>\nViolin plots\n[back to top]\nViolin plots are closely related to Tukey's (1977) box plots but add useful information such as the distribution of the sample data (density trace). \nViolin plots were added in matplotlib 1.4.",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nfig, axes = plt.subplots(nrows=1,ncols=2, figsize=(12,5))\n\nall_data = [np.random.normal(0, std, 100) for std in range(6, 10)]\n\n#fig = plt.figure(figsize=(8,6))\n\naxes[0].violinplot(all_data,\n showmeans=False,\n showmedians=True \n )\naxes[0].set_title('violin plot')\n\naxes[1].boxplot(all_data,\n )\naxes[1].set_title('box plot')\n\n# adding horizontal grid lines \nfor ax in axes:\n ax.yaxis.grid(True) \n ax.set_xticks([y+1 for y in range(len(all_data))], )\n ax.set_xlabel('xlabel')\n ax.set_ylabel('ylabel')\n\nplt.setp(axes, xticks=[y+1 for y in range(len(all_data))], \n xticklabels=['x1', 'x2', 'x3', 'x4'],\n ) \n \nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
HrantDavtyan/Data_Scraping
|
Week 3/W3_Readin_files_1.ipynb
|
apache-2.0
|
[
"Reading and Writing files\nThere is a built-in function in Python which is used for opening files. The function is called open() and takes two arguments:\n\nthe filename (required argument),\nthe opening mode (optional argument).\n\nThe opening mode tells Python for what purpose to open the file. The possible values for that argument are:\n\n\"r\" - Read mode, which is used when the file is only being read; this is the default value of the argument if it is not explicitly declared,\n\"w\" - Write mode, which is used to edit and write new information to the file (any existing file with the same name will be erased when this mode is activated),\n\"a\" - Append mode, which is used to add new data to the end of the file; that is, new information is automatically appended to the end,\n\"r+\" - Special read and write mode, which is used to handle both actions when working with a file\n\nThe open() function works closely with the close() function, which tells Python to close the opened file.\nLet's, for example, write the lyrics of John Lennon's \"Imagine\" song to a new text file. Then that file will be used for reading purposes.\nPart 1: Writing to txt files",
"lyrics = \"Imagine all the people, \\nliving life in peace... \\n\\tJohn Lennon\"\nprint(lyrics)\n\nour_file = open(\"imagine_lyrics.txt\",\"w\")",
"So now a new text file titled \"imagine_lyrics\" was created for writing purposes (note the file extension .txt, which had to be included). We can now use the write() function to write text to our new file. We will write the lyrics there.",
"our_file.write(\"Imagine by John Lennon\")\n\nour_file.write(lyrics)",
"As mentioned above, once the actions with the opened file are complete, you may close it.",
"our_file.close()",
"Part 2: Reading from .txt files\nOnce the file already exists, we can now read it back to Python. Again, we must open it, but in read only mode this time.",
"lyrics_imported = open(\"imagine_lyrics.txt\",\"r\")\n\nprint(lyrics_imported)",
"As you can see above, the file was successfully opened, but printing it gives only some info about the file object, and not the content of the file. For that reason, the file opened in read-only mode should also be read, using the built-in read() function.",
"lyrics_imported.read()\n\nprint lyrics_imported.read()",
"If you give an integer as an argument to the read() function, it will return only that many characters from the beginning of the file (e.g. read(7) would return \"Imagine\", while read(4) would return \"Imag\").\nOne thing you may have already noticed about the read() function is that it returns the whole text file as a single string. Yet, you might also be interested in reading the file line by line. For that purpose, the readlines() function can come in handy.",
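The chunked-reading behaviour of read(n) can be shown in a self-contained sketch (it creates a small throwaway file first):

```python
# Create a small file so the example is self-contained
with open("demo_read.txt", "w") as f:
    f.write("Imagine all the people")

f = open("demo_read.txt", "r")
first = f.read(7)    # 'Imagine'
second = f.read(4)   # ' all' -- reading continues where the previous read stopped
f.close()
print(first, "|", second)
```

Note that consecutive read(n) calls do not restart from the beginning: the second call returns the next 4 characters.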
"lyrics_imported.readlines()",
"Once a file has been read, the file pointer sits at the end of the file, so reading it again returns nothing. In order to read it again, we have to open it once again.",
"lyrics_imported = open(\"imagine_lyrics.txt\",\"r\")\n\nlyrics_imported.readlines()",
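Reopening works; alternatively, the file pointer can be rewound with seek(0) instead (a self-contained sketch with a throwaway file):

```python
with open("demo_seek.txt", "w") as f:
    f.write("line one\nline two\n")

f = open("demo_seek.txt", "r")
first = f.readlines()
again = f.readlines()   # [] -- the file pointer is already at the end
f.seek(0)               # rewind the pointer to the beginning of the file
rewound = f.readlines()
f.close()
print(again, first == rewound)
```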
"As you can see, not only did it read each line separately, but the output we received has the type list. We can check this if we save the result in a new variable.",
"lyrics_imported = open(\"imagine_lyrics.txt\",\"r\")\nlyrics_list = lyrics_imported.readlines()\n\ntype(lyrics_list)",
"Fine, now we are sure that the output is a list, so we can perform different actions on it.",
"lyrics_list[0]\n\nlyrics_list[1]",
"Once everything is done, we should remember to close the file.",
"lyrics_imported.close()",
"Final note: besides the readlines() function there is also the singular version called readline(), which returns a single line from the text file (each call returns the next line; an integer argument limits how many characters of that line are returned).\nPart 3: automatically closing files\nOne of the main drawbacks of the above approach is that you should always remember to close the files once your actions are completed. A good shortcut is to write all content to a new variable and close the file immediately. For that purpose a very handy statement called with exists, which closes the file by itself. Let's try it out.",
"with open(\"imagine_lyrics.txt\",\"r\") as data_imported:\n data = data_imported.readlines()",
"As can be seen above, no close function was used, as the with statement automatically closes the file. Another important thing to note is that the data_imported variable was a temporary name used for the imported file to perform actions on. So we wrote its content to a new global variable (called data) that can be used further for analysis.",
"print data\n\nprint data_imported"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
madhurilalitha/Python-Projects
|
AnomaliesTwitterText/anomalies_in_tweets.ipynb
|
mit
|
[
"\"Detection of anomalous tweets using supervised outlier techniques\"\nImporting the Dependencies and Loading the Data",
"import nltk\nimport pandas as pd\nimport numpy as np\n\ndata = pd.read_csv(\"original_train_data.csv\", header = None,delimiter = \"\\t\", quoting=3,names = [\"Polarity\",\"TextFeed\"])\n\n#Data Visualization\ndata.head()",
"Data Preparation\nData preparation with the available data: I made the combination such that the classes are highly imbalanced, making it apt for an anomaly detection problem.",
"data_positive = data.loc[data[\"Polarity\"]==1]\ndata_negative = data.loc[data[\"Polarity\"]==0]\nanomaly_data = pd.concat([data_negative.sample(n=10),data_positive,data_negative.sample(n=10)])\nanomaly_data.Polarity.value_counts()\n\n#Number of words per sentence\nprint (\"No of words for sentence in train data\",np.mean([len(s.split(\" \")) for s in anomaly_data.TextFeed]))",
"Data pre-processing - text analytics to create a corpus\n1) Converting text to a matrix of token counts [Bag of words]\n Stemming - lowercasing, removing stop-words, removing punctuation and reducing words to their lexical roots \n2) The stemmer and tokenizer (which removes non-letters) are created by ourselves. These are passed as parameters to CountVectorizer of sklearn.\n3) Extracting important words and using them as input to the classifier\n\nFeature Engineering",
"import re\nfrom sklearn.feature_extraction.text import CountVectorizer\nnltk.download('punkt')  # download only the tokenizer models used by nltk.word_tokenize\nfrom nltk.stem.porter import PorterStemmer\n\n''' this code is taken from\nhttp://www.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html\n'''\n# a widely used stemmer\nstemmer = PorterStemmer() \n\ndef stem_tokens(tokens, stemmer):\n stemmed = []\n for item in tokens:\n stemmed.append(stemmer.stem(item))\n return stemmed\n\ndef tokenize(text):\n # remove non letters\n text = re.sub(\"[^a-zA-Z]\", \" \", text)\n # tokenize\n tokens = nltk.word_tokenize(text)\n # stem\n stems = stem_tokens(tokens, stemmer)\n return stems\n",
"The below implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.\nNote: I am not using frequencies(TfidTransformer, apt for longer documents) because the text size is small and can be dealt with occurences(CountVectorizer).",
"#Max_Features selected as 90 - can be changed for a better trade-off\nvector_data = CountVectorizer(\n analyzer = 'word',\n tokenizer = tokenize,\n lowercase = True,\n stop_words = 'english',\n max_features = 90\n)",
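To make the occurrence counting concrete, here is the same idea in plain Python (a simplified sketch: no stemming, stop-word removal or max_features cap):

```python
import re
from collections import Counter

docs = ["I love this movie", "I hate hate this movie"]

def tokenize_simple(doc):
    """Lowercase and keep only alphabetic tokens."""
    return re.findall(r"[a-z]+", doc.lower())

# Vocabulary = sorted set of all tokens; one count row per document
vocab = sorted({w for d in docs for w in tokenize_simple(d)})
matrix = [[Counter(tokenize_simple(d))[w] for w in vocab] for d in docs]
print(vocab)   # ['hate', 'i', 'love', 'movie', 'this']
print(matrix)  # [[0, 1, 1, 1, 1], [2, 1, 0, 1, 1]]
```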
"Fit_Transform:\n 1) Fits the model and learns the vocabulary\n 2) Transforms the data into feature vectors",
"#using only the \"Text Feed\" column to build the features\nfeatures = vector_data.fit_transform(anomaly_data.TextFeed.tolist())\n#converting the data into an array\nfeatures = features.toarray()\nfeatures.shape\n\n#printing the words in the vocabulary\nvocab = vector_data.get_feature_names()\nprint (vocab)\n\n# Sum up the counts of each vocabulary word\ndist = np.sum(features, axis=0)\n \n# For each, print the vocabulary word and the number of times it \n# appears in the data set\na = zip(vocab,dist)\nprint (list(a))",
"Train-Test Split",
"from sklearn.cross_validation import train_test_split\n#80:20 ratio\nX_train, X_test, y_train, y_test = train_test_split(\n features, \n anomaly_data.Polarity,\n train_size=0.80, \n random_state=1234)\n\nprint (\"Training data - positive and negative values\")\nprint (pd.value_counts(pd.Series(y_train)))\nprint (\"Testing data - positive and negative values\")\nprint (pd.value_counts(pd.Series(y_test)))",
"A text's polarity depends on which words appear in that text, discarding any grammar or word order but keeping multiplicity.\n1) All the above text processing kept the same entries in our dataset\n2) Instead of being defined by a whole text, they are now defined by a series of counts of the most frequent words in our whole corpus.\n3) These vectors are used as features to train a classifier.\nTraining the model",
"from sklearn.svm import SVC\nclf = SVC()\nclf.fit(X=X_train,y=y_train)\n\nwclf = SVC(class_weight={0: 20})\nwclf.fit(X=X_train,y=y_train)\n\ny_pred = clf.predict(X_test)\ny_pred_weighted = wclf.predict(X_test)\n\nfrom sklearn.metrics import classification_report\nprint (\"Basic SVM metrics\")\nprint(classification_report(y_test, y_pred))\nprint (\"Weighted SVM metrics\")\nprint(classification_report(y_test, y_pred_weighted))\n\nfrom sklearn.metrics import confusion_matrix\nprint (\"Basic SVM Confusion Matrix\")\nprint (confusion_matrix(y_test, y_pred))\nprint (\"Weighted SVM Confusion Matrix\")\nprint (confusion_matrix(y_test, y_pred_weighted))\n\n\ntn, fp, fn, tp = confusion_matrix(y_test, y_pred_weighted).ravel()\n(tn, fp, fn, tp)",
"Interpretation:\nAs seen from the above procedure, we have to perform cost-sensitive learning using weighting methods (adding more weight to the anomaly class) to deal with anomalies",
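The weighting trade-off noted above is easiest to read off precision and recall computed directly from the confusion-matrix cells (a sketch with hypothetical counts, mirroring the (tn, fp, fn, tp) unpacking used earlier):

```python
def precision_recall(tn, fp, fn, tp):
    """Precision and recall for the positive (anomaly) class, computed
    from confusion-matrix cells."""
    precision = tp / float(tp + fp) if (tp + fp) else 0.0
    recall = tp / float(tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts: a weighted model typically trades precision for recall
p, r = precision_recall(tn=95, fp=5, fn=3, tp=2)
print(round(p, 3), round(r, 3))  # 0.286 0.4
```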
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
evanmiltenburg/python-for-text-analysis
|
Chapters-colab/Chapter_03_Strings.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_03_Strings.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"%%capture\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip\n\n!unzip Data.zip -d ../\n!unzip images.zip -d ./\n!unzip Extra_Material.zip -d ../\n\n!rm Data.zip\n!rm Extra_Material.zip\n!rm images.zip",
"Chapter 3 - Strings\nThis notebook uses code snippets and explanations from this course.\nIn this notebook, we will focus on the datatype strings. The first thing you learned was printing a simple sentence: \"Hello, world!\" This sentence, as any other text, was stored by Python as a string. Here are some reasons why strings are important:\n\nText is usually represented as a string. Text analysis is the focus of our course, so we will be dealing with strings a lot. \nStrings are also used when reading in files: We tell python which file to open by giving it a filepath, e.g. '../Data/books/HuckFinn.txt'. Don't worry about this for now, we will explain it in block 3.\n\nAt the end of this chapter, you will be able to:\n\ndefine strings and understand their internal representation\nunderstand strings as sequences\nuse character indices for string slicing\ncombine strings through printing, concatenation and insertion\ncompare strings using comparison operators and the in operator\nunderstand strings as immutable objects\nwork with and understand string methods\nunderstand the difference between args and kwargs\n\nIf you want to learn more about these topics, you might find the following links useful:\n\nDocumentation: String methods\nDocumentation: Literal String Interpolation (f-strings)\nExplanation: Strings\nExplanation: F-strings\nVideo: Strings - working with text data\nVideo: Strings\nVideo: String Indexing and Slicing\n\nIf you have questions about this chapter, please contact us (cltl.python.course@gmail.com).\n1. Defining and representing strings\nA string is a sequence of letters/characters which together form a whole (for instance a word, sentence or entire text). In Python, a string is a type of object for which the value is enclosed by single or double quotes. Let's define a few of them:",
"# Here are some strings:\nstring_1 = \"Hello, world!\"\nstring_2 = 'I ❤️ cheese' # If you are using Python 2, your computer will not like this.\nstring_3 = '1,2,3,4,5,6,7,8,9'",
"There is no difference in declaring a string with single or double quotes. However, if your string contains a quote symbol it can lead to errors if you try to enclose it with the same quotes.",
"# Run this cell to see the error generated by the following line.\nrestaurant = 'Wendy's'",
"In the example above the error indicates that there is something wrong with the letter s. This is because the single quote closes the string we started, and anything after that is unexpected.\nTo solve this we can enclose the string in double quotes, as follows:",
"restaurant = \"Wendy's\"\n# Similarly, we can enclose a string containing double quotes with single quotes:\nquotes = 'Using \"double\" quotes enclosed by a single quote.'",
"We can also use the escape character \"\\\" in front of the quote, which will tell Python not to treat this specific quote as the end of the string.",
"restaurant = 'Wendy\\'s'\nprint(restaurant)\nrestaurant = \"Wendy\\\"s\"\nprint(restaurant)",
"1.1 Multi-line strings\nStrings in Python can also span across multiple lines, which can be useful for when you have a very long string, or when you want to format the output of the string in a certain way. This can be achieved in two ways:\n\nWith single or double quotes, where we manually indicate that the rest of the string continues on the next line with a backslash.\nWith three single or double quotes.\n\nWe will first demonstrate how this would work when you use one double or single quote.",
"# This example also works with single-quotes.\nlong_string = \"A very long string\\n\\\ncan be split into multiple\\n\\\nsentences by appending a newline symbol\\n\\\nto the end of the line.\"\n\nprint(long_string)",
"The \\n or newline symbol indicates that we want to start the rest of the text on a new line in the string, the following \\ indicates that we want the string to continue on the next line of the code. This difference can be quite hard to understand, but best illustrated with an example where we do not include the \\n symbol.",
"long_string = \"A very long string \\\ncan be split into multiple \\\nsentences by appending a backslash \\\nto the end of the line.\"\n\nprint(long_string)",
"As you can see, Python now interprets this example as a single line of text. If we use the recommended way in Python to write multiline strings, with triple double or single quotes, you will see that the \\n or newline symbol is automatically included.",
"long_string = \"\"\"A very long string\ncan also be split into multiple \nsentences by enclosing the string\nwith three double or single quotes.\"\"\"\n\nprint(long_string)\n\nprint()\n\nanother_long_string = '''A very long string\ncan also be split into multiple \nsentences by enclosing the string\nwith three double or single quotes.'''\n\nprint(another_long_string)",
"What will happen if you remove the backslash characters in the example? Try it out in the cell below.",
"long_string = \"A very long string\\\ncan be split into multiple\\\nsentences by appending a backslash\\\nto the end of the line.\"\n\nprint(long_string)",
"1.2 Internal representation: using repr()\nAs we have seen above, it is possible to make strings that span multiple lines. Here are two ways to do so:",
"multiline_text_1 = \"\"\"This is a multiline text, so it is enclosed by triple quotes.\nPretty cool stuff!\nI always wanted to type more than one line, so today is my lucky day!\"\"\"\nmultiline_text_2 = \"This is a multiline text, so it is enclosed by triple quotes.\\nPretty cool stuff!\\nI always wanted to type more than one line, so today is my lucky day!\"\nprint(multiline_text_1)\nprint() # this just prints an empty line\nprint(multiline_text_2)",
"Internally, these strings are equally represented. We can check that with the double equals sign, which checks if two objects are the same:",
"print(multiline_text_1 == multiline_text_2)",
"So from this we can conclude that multiline_text_1 has the same hidden characters (in this case \\n, which stands for 'new line') as multiline_text_2. You can show that this is indeed true by using the built-in repr() function (which gives you the Python-internal representation of an object).",
"# Show the internal representation of multiline_text_1.\nprint(repr(multiline_text_1))\nprint(repr(multiline_text_2))",
"Another hidden character that is often used is \\t, which represents tabs:",
"colors = \"yellow\\tgreen\\tblue\\tred\"\nprint(colors)\nprint(repr(colors))",
"2. Strings as sequences\n2.1 String indices\nStrings are simply sequences of characters. Each character in a string therefore has a position, which can be referred to by the index number of the position. The index numbers start at 0 and then increase to the length of the string. You can also start counting backwards using negative indices. The following table shows all characters of the sentence \"Sandwiches are yummy\" in the first row. The second row and the third row show respectively the positive and negative indices for each character:\n| Characters | S | a | n | d | w | i | c | h | e | s | | a | r | e | | y | u | m | m | y |\n|----------------|---|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|----|----|----|\n| Positive index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |\n| Negative index |-20|-19|-18|-17|-16|-15|-14|-13|-12|-11|-10|-9|-8|-7|-6|-5|-4|-3|-2|-1|\nYou can access the characters of a string as follows:",
"my_string = \"Sandwiches are yummy\"\nprint(my_string[1]) \nprint(my_string[-1]) ",
"Length: Python has a built-in function called len() that lets you compute the length of a sequence. It works like this:",
"number_of_characters = len(my_string)\nprint(number_of_characters) # Note that spaces count as characters too!",
"2.2 Slicing and indices applied to strings\nBesides using single indices we can also extract a range from a string:",
"my_string = \"Sandwiches are yummy\"\nprint(my_string[1:4]) ",
"This is called string slicing. So how does this notation work?\npython\nmy_string[i] # Get the character at index i.\nmy_string[start:end] # Get the substring starting at 'start' and ending *before* 'end'.\nmy_string[start:end:stepsize] # Get all characters starting from 'start', ending before 'end', \n # with a specific step size.\nYou can also leave parts out:\npython\nmy_string[:i] # Get the substring starting at index 0 and ending just before i.\nmy_string[i:] # Get the substring starting at i and running all the way to the end.\nmy_string[::i] # Get a string going from start to end with step size i.\nYou can also have negative step size. my_string[::-1] is the idiomatic way to reverse a string.\n\nTip: Slicing and accessing values via indices is very useful and can be applied to other python objects, which have a fixed sequence, such as lists (we will see how in the subsequent notebooks). Try to understand what is going on with string slicing - it will be very helpful in the rest of the course! \nDo you know what the following statements will print?",
"print(my_string[1:4]) \n\nprint(my_string[1:4:1]) \n\nprint(my_string[11:14])\n\nprint(my_string[15:])\n\nprint(my_string[:9])\n\nprint('cow'[::2])\n\nprint('cow'[::-2])\n\n# a fun trick to reverse sequences:\nprint(my_string[::-1])\n\n# You can do something similar with lists (you don't have to understand this in detail now - but we'll show you an\n# example already, so you've seen it):\n\nmy_list = ['a', 'bunch', 'of', 'words']\nprint(my_list[3])\nprint(my_list[2:4])\nprint(my_list[-1])",
"3. Immutability\nThe mutability of an object refers to whether an object can change or not. Strings are immutable, meaning that they cannot be changed. It is possible to create a new string-object based on the old one, but we cannot modify the existing string-object. The cells below demonstrate this.",
"# This is fine, because we are creating a new string. The old one remains unchanged:\nfruit = 'guanabana'\nisland = fruit[:5] \nprint(island, 'island')\nprint(fruit, 'fruit')\n\n# This works because we are creating a new string and overwriting our old one\nfruit = fruit[5:] + 'na' \nprint(fruit)\n\n# This attempt to change the ending into `aan' does not work because now we are trying to change an existing string\nfruit[4:5] = 'an' \nprint(fruit)\n\n# We could do this with a list though (don't worry about this yet - it is just meant to show the contrast)\n\nfruits = ['cherry', 'blueberry', 'banana']\nfruits[2:3] = ['rasperry', 'kiwi']\nfruits\n\n# If we want to modify a string by exchanging characters, we need to do:\nfruit = fruit[:4] + 'an'\nprint(fruit)",
"The reasons for why strings are immutable are beyond the scope of this notebook. Just remember that if you want to modify a string, you need to overwrite the entire string, and you cannot modify parts of it by using individual indices.\n4. Comparing strings\nIn Python it is possible to use comparison operators (as used in conditional statements) on strings. These operators are: \n\n== ('is the same as') \n!= ('is not the same as')\n< ('is smaller than')\n<= ('is the same as or smaller than')\n> ('is greater than')\n>= ('is the same as or greater than')\n\nAttention \n'=' is used to assign a value to a variable whereas '==' is used to compare two values. If you get errors in comparisons, check if you used the correct operator. \nSome of these symbols are probably familiar to you from your math classes. Most likely, you have used them before to compare numbers. However, we can also use them to compare strings! \nThere are a number of things we have to know about python when comparing strings:\n\nString comparison is always case-sensitive \nInternally, characters are represented as numerical values, which can be ranked. You can use the smaller than/greater than operators to put words in lexicographical order. This is similar to the alphabetical order you would use with a dictionary, except that all the uppercase letters come before all the lowercase letters (so first A, B, C, etc. and then a, b, c, etc.)\n\nHint: In practice, you will often use == and !=. The 'greater than' and 'smaller than' operators are used in sorting algorithms (e.g. to sort a list of strings in alphabetical order), but you will hardly ever use them directly to compare strings.",
"print('a' == 'a')\nprint('a' != 'b')\nprint('a' == 'A') # string comparison is case-sensitive\nprint('a' < 'b') # alphabetical order\nprint('A' < 'a') # uppercase comes before lowercase\nprint('B' < 'a') # uppercase comes before lowercase\nprint()\nprint('orange' == 'Orange')\nprint('orange' > 'Orange')\nprint('orange' < 'Orange')\nprint('orange' > 'banana')\nprint('Orange' > 'banana')",
"Another way of comparing strings is to check whether a string is part of another string, which can be done using the in operator. It returns True if the string contains the relevant substring, and False if it doesn't. These two values (True and False) are called boolean values, or booleans for short. We'll talk about them in more detail later. Here are some examples to try (can you predict what will happen before running them?):",
"\"fun\" in \"function\"\n\n\"I\" in \"Team\"\n\n\"am\" in \"Team\"\n\n\"App\" in \"apple\" # Capitals are not the same as lowercase characters!\n\n\"apple\" in \"apple\" \n\n\"applepie\" in \"apple\"",
"5. Printing, concatenating and inserting strings\nYou will often find yourself concatenating and printing combinations of strings. Consider the following examples:",
"print(\"Hello\", \"World\")\nprint(\"Hello \" + \"World\")",
"Even though they may look similar, there are two different things happening here. Simply said: the plus in the expression is doing concatenation, but the comma is not doing concatenation.\nThe 'print()' function, which we have seen many times now, will print as strings everything in a comma-separated sequence of expressions to your screen, and it will separate the results with single blanks by default. Note that you can mix types: anything that is not already a string is automatically converted to its string representation.",
"number = 5\nprint(\"I have\", number, \"apples\")",
"String concatenation, on the other hand, happens when we merge two strings into a single object using the + operator. No single blanks are inserted, and you cannot concatenate mixed types. So, if you want to merge a string and an integer, you will need to convert the integer to a string.",
"number = 5\nprint(\"I have \" + str(number) + \" apples\")",
"Optionally, we can assign the concatenated string to a variable:",
"my_string = \"I have \" + str(number) + \" apples\"\nprint(my_string)",
"In addition to using + to concatenate strings, we can also use the multiplication sign * in combination with an integer for repeating strings (note that we again need to add a blank after 'apples' if we want it to be inserted):",
"my_string = \"apples \" * 5\nprint(my_string)",
"The difference between \",\" and \"+\" when printing and concatenating strings can be confusing at first. Have a look at these examples to get a better sense of their differences.",
"print(\"Hello\", \"World\")\n\nprint(\"Hello\" + \"World\")\n\nprint(\"Hello \" + \"World\")\n\nprint(5, \"eggs\")\n\nprint(str(5), \"eggs\")\n\nprint(5 + \" eggs\")\n\nprint(str(5) + \" eggs\")\n\ntext = \"Hello\" + \"World\"\nprint(text)\nprint(type(text))\n\ntext = \"Hello\", \"World\"\nprint(text)\nprint(type(text))",
"5.1 Using f-strings\nWe can imagine that string concatenation can get rather confusing and unreadable if we have more variables. Consider the following example:",
"name = \"Pia\"\nage = 26\ncountry = \"Austria\"\nresidence = \"The Netherlands\"\n\nintroduction = \"Hello. My name is \" + name + \". I'm \" + str(age) + \" years old and I'm from \" + country + \\\n\", but I live in \"+ residence +'.'\nprint(introduction)",
"Luckily, there is a way to make the code a lot more easy to understand and nicely formatted. In Python, you can use a\nstring formatting mechanism called Literal String Interpolation. Strings that are formatted using this mechanism are called f-strings, after the leading character used to denote such strings, and standing for \"formatted strings\". It works as follows:",
"name=\"Pia\" \nage=26\ncountry=\"Austria\"\nresidence = \"The Netherlands\"\nintroduction = f\"Hello. My name is {name}. I'm {age} years old and I'm from {country}, but I live in {residence}.\"\nintroduction",
"We can even do cool stuff like this with f-strings:",
"text = f\"Soon, I'm turning {age+1} years.\"\nprint(text)",
"Other formatting methods that you may come across include %-formatting and str.format(), but we recommend that you use f-strings because they are the most intuitive.\nUsing f-strings can be extremely useful if you're dealing with a lot of data you want to modify in a similar way. Suppose you want to create many new files containing data and name them according to a specific system. You can create a kind of template name and then fill in specific information using variables. (More about files later.)\n6. String methods\nA method is a function that is associated with an object. For example, the string-method lower() turns a string into all lowercase characters, and the string method upper() makes strings uppercase. You can call this method using the dot-notation as shown below:",
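The three mechanisms mentioned here can be compared side by side. All of them build the same string; the f-string is the most readable:

```python
age = 26

s1 = "I am %d years old." % age        # old %-formatting
s2 = "I am {} years old.".format(age)  # str.format()
s3 = f"I am {age} years old."          # f-string (recommended)

print(s1 == s2 == s3)  # all three produce the same string
```

f-strings also evaluate arbitrary expressions inside the braces, which is why they were used for `{age+1}` in the earlier example.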
"string_1 = 'Hello, world!'\nprint(string_1) # The original string.\nprint(string_1.lower()) # Lowercased.\nprint(string_1.upper())# Uppercased.",
"6.1 Learning about methods\nSo how do you find out what kind of methods an object has? There are two options:\n\nRead the documentation. See here for the string methods.\nUse the dir() function, which returns a list of method names (as well as attributes of the object). If you want to know what a specific method does, use the help() function.\n\nRun the code below to see what the output of dir() looks like. \nThe method names that start and end with double underscores ('dunder methods') are Python-internal. They are what makes general methods like len() work (len() internally calls the string.__len__() function), and cause Python to know what to do when you, for example, use a for-loop with a string.\nThe other method names indicate common and useful methods.",
"# Run this cell to see all methods for strings\ndir(str)",
"If you'd like to know what one of these methods does, you can just use help() (or look it up online):",
"help(str.upper)",
"It's important to note that string methods only return the result. They do not change the string itself.",
"x = 'test' # Defining x.\ny = x.upper() # Using x.upper(), assigning the result to variable y.\nprint(y) # Print y.\nprint(x) # Print x. It is unchanged.",
"Below we illustrate some of the string methods. Try to understand what is happening. Use the help() function to find more information about each of these methods.",
"# Find out more about each of the methods used below by changing the name of the method\nhelp(str.strip)\n\ns = ' Humpty Dumpty sat on the wall '\nprint(s)\ns = s.strip() \nprint(s)\n\nprint(s.upper())\nprint(s.lower())\n\nprint(s.count(\"u\"))\nprint(s.count(\"U\"))\n\nprint(s.find('sat'))\nprint(s.find('t', 12))\nprint(s.find('q', 12))\n\nprint(s.replace('sat on', 'fell off'))\n\nwords = s.split() # This returns a list, which we will talk about later.\nfor word in words: # But you can iterate over each word in this manner\n print(word.capitalize())\n\nprint('-'.join(words))",
"Exercises\nExercise 1:\nCan you identify and explain the errors in the following lines of code? Correct them please!",
"print(\"A message\").\nprint(\"A message')\nprint('A message\"')",
"Exercise 2:\nCan you print the following? Try using both positive and negative indices.\n\nthe letter 'd' in my_string \nthe letter 'c' in my_string",
"my_string = \"Sandwiches are yummy\"\n# your code here",
"Can you print the following? Try using both positive and negative indices.\n\nmake a new string containing your first name and print its first letter\nprint the number of letters in your name",
"# your code here",
"Exercise 3:\nCan you print all a's in the word 'banana'?",
"# your code here",
"Can you print 'banana' in reverse ('ananab')?",
"# your code here",
"Can you exchange the first and last characters in my_string ('aananb')? Create a new variable new_string to store your result.",
"my_string = \"banana\"\nnew_string = # your code here",
"Exercise 4:\nFind a way to fix the spacing problem below keeping the \"+\".",
"name = \"Bruce Banner\"\nalterego = \"The Hulk\"\ncolour = \"Green\"\ncountry = \"USA\"\n\nprint(\"His name is\" + name + \"and his alter ego is\" + alterego + \n \", a big\" + colour + \"superhero from the\" + country + \".\")",
"How would you print the same sentence using \",\"?",
"name = \"Bruce Banner\"\nalterego = \"The Hulk\"\ncolour = \"Green\"\ncountry = \"USA\"\n\nprint(\"His name is\" + name + \"and his alter ego is\" + alterego + \n \", a big\" + colour + \"superhero from the\" + country + \".\")",
"Can you rewrite the code below using an f-string?",
"name = \"Bruce Banner\"\nalterego = \"The Hulk\"\ncolour = \"green\"\ncountry = \"the USA\"\nbirth_year = 1969\ncurrent_year = 2017\n\nprint(\"His name is \" + name + \" and his alter ego is \" + alterego + \n \", a big \" + colour + \" superhero from \" + country + \". He was born in \" + str(birth_year) + \n \", so he must be \" + str(current_year - birth_year - 1) + \" or \" + str(current_year - birth_year) + \n \" years old now.\")",
"Exercise 5:\nReplace all a's by o's in 'banana' using a string method.",
"my_string = \"banana\"\n# your code here",
"Remove all spaces in the sentence using a string method.",
"my_string = \"Humpty Dumpty sat on the wall\"\n# your code here",
"What do the methods lstrip() and rstrip() do? Try them out below.",
"# find out what lstrip() and rstrip() do",
"What do the methods startswith() and endswith() do? Try them out below.",
"# find out what startswith() and endswith() do"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.15/_downloads/plot_resample.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Resampling data\nWhen performing experiments where timing is critical, a signal with a high\nsampling rate is desired. However, having a signal with a much higher sampling\nrate than is necessary needlessly consumes memory and slows down computations\noperating on the data.\nThis example downsamples from 600 Hz to 100 Hz. This achieves a 6-fold\nreduction in data size, at the cost of an equal loss of temporal resolution.",
"# Authors: Marijn van Vliet <w.m.vanvliet@gmail.com>\n#\n# License: BSD (3-clause)\n\nfrom matplotlib import pyplot as plt\n\nimport mne\nfrom mne.datasets import sample",
"Setting up data paths and loading raw data (skip some data for speed)",
"data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nraw = mne.io.read_raw_fif(raw_fname).crop(120, 240).load_data()",
"Since downsampling reduces the timing precision of events, we recommend\nfirst extracting epochs and downsampling the Epochs object:",
"events = mne.find_events(raw)\nepochs = mne.Epochs(raw, events, event_id=2, tmin=-0.1, tmax=0.8, preload=True)\n\n# Downsample to 100 Hz\nprint('Original sampling rate:', epochs.info['sfreq'], 'Hz')\nepochs_resampled = epochs.copy().resample(100, npad='auto')\nprint('New sampling rate:', epochs_resampled.info['sfreq'], 'Hz')\n\n# Plot a piece of data to see the effects of downsampling\nplt.figure(figsize=(7, 3))\n\nn_samples_to_plot = int(0.5 * epochs.info['sfreq']) # plot 0.5 seconds of data\nplt.plot(epochs.times[:n_samples_to_plot],\n epochs.get_data()[0, 0, :n_samples_to_plot], color='black')\n\nn_samples_to_plot = int(0.5 * epochs_resampled.info['sfreq'])\nplt.plot(epochs_resampled.times[:n_samples_to_plot],\n epochs_resampled.get_data()[0, 0, :n_samples_to_plot],\n '-o', color='red')\n\nplt.xlabel('time (s)')\nplt.legend(['original', 'downsampled'], loc='best')\nplt.title('Effect of downsampling')\nmne.viz.tight_layout()",
"When resampling epochs is unwanted or impossible, for example when the data\ndoesn't fit into memory or your analysis pipeline doesn't involve epochs at\nall, the alternative approach is to resample the continuous data. This\ncan only be done on loaded or pre-loaded data.",
"# Resample to 300 Hz\nraw_resampled = raw.copy().resample(300, npad='auto')",
"Because resampling also affects the stim channels, some trigger onsets might\nbe lost in this case. While MNE attempts to downsample the stim channels in\nan intelligent manner to avoid this, the recommended approach is to find\nevents on the original data before downsampling.",
"print('Number of events before resampling:', len(mne.find_events(raw)))\n\n# Resample to 100 Hz (generates warning)\nraw_resampled = raw.copy().resample(100, npad='auto')\nprint('Number of events after resampling:',\n len(mne.find_events(raw_resampled)))\n\n# To avoid losing events, jointly resample the data and event matrix\nevents = mne.find_events(raw)\nraw_resampled, events_resampled = raw.copy().resample(\n 100, npad='auto', events=events)\nprint('Number of events after resampling:', len(events_resampled))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
musketeer191/job_analytics
|
.ipynb_checkpoints/job_post_cluster-checkpoint.ipynb
|
gpl-3.0
|
[
"Clustering Job Posts\nGiven a job title (in an industry), we would like to group its job posts into clusters by their similarity in topic distributions. The clusters may reveal some interesting insights (e.g. trends) about the job title.",
"import my_util as my_util\nimport cluster_skill_helpers as cluster_skill_helpers\nfrom cluster_skill_helpers import *\n\nimport random as rd\n\nglobal doc_topic_distr\n\nHOME_DIR = 'd:/larc_projects/job_analytics/'\nSKILL_DAT = HOME_DIR + 'data/clean/skill_cluster/' \nSKILL_RES = HOME_DIR + 'results/' + 'skill_cluster/new/'\nJOB_PROF = SKILL_RES + 'job_prof/'\n\ndf = pd.read_csv(SKILL_DAT + 'filter_doc_index.csv')\nprint df.shape\ndf.set_index('index', inplace=True)\n\nwith(open(SKILL_RES + 'doc_20topic_distr.mtx', 'r')) as f:\n doc_topic_distr = mmread(f)\n\n# reload(my_util)\n# from my_util import *\nreload(cluster_skill_helpers)\nfrom cluster_skill_helpers import *\n\ndef changeTitle(df, t1='Teacher, Kindergarten', t2='Kindergarten Teacher'):\n n_post = df.shape[0]\n idx = [i for i in range(n_post) if df.iloc[i].title == t1]\n out_df = df.copy()\n out_df.title.iloc[idx] = t2\n return out_df\n\ndef vizPair(i, sim_df, labels, abbv_title=''):\n fig = vizDists4Pair(sim_df.iloc[i], df, doc_topic_distr, labels)\n fig.savefig(SKILL_RES + 'fig/{}_p{}.pdf'.format(abbv_title, i+1))\n plt.show(); plt.close()\n \ndef calSimScores(job_title='Research Fellow', industry=None, df=df, out_fmt='data_frame'): # 'Education'\n posts = df[(df['title'] == job_title) & (df['industry'] == industry)] if industry is not None else df[df['title'] == job_title]\n# posts = rmBadPosts(posts, job_title)\n n_post = posts.shape[0]\n if n_post > 100: posts = posts.sample(100)\n n_post = posts.shape[0]\n# print('{} in {}: {} posts'.format(job_title, industry, n_post))\n return pairwiseSim(posts, doc_topic_distr, out_fmt, verbose=False)\n\ndef consistency(job_title, industry, save_sim=False, abbv_job='', abbv_industry=''):\n '''\n @brief: calculate consistency score of given job title in given industry as avg of job post sims\n @param: save_sim=True if want to save the sims\n '''\n sims = calSimScores(job_title, industry)\n if save_sim:\n fname = JOB_PROF + 'consistency/{}_{}_sims.csv'.format(abbv_industry, abbv_job)\n sims.to_csv(fname, index=False)\n \n cscore = round(sims['topic_sim'].mean(), 3)\n return cscore\n\ndef cScoreAtRow(row):\n '''\n @return: consistency score of pair (job_title, industry) in the given row\n '''\n count = row.name\n if (count % 100 == 0): print('{} pairs and counting...'.format(count))\n job_title, industry = row['title'], row['industry']\n sims = calSimScores(job_title, industry)\n cscore = round(sims['topic_sim'].mean(), 3)\n return cscore\n\ndef simScore(t1, t2):\n print('{} vs. {}'.format(t1, t2))\n posts1 = df[df.title == t1]; posts2 = df[df.title == t2]\n## Rm lousy posts with too few skills from both sets\n# posts1 = rmBadPosts(posts1, t1)\n# posts2 = rmBadPosts(posts2, t2)\n \n## Sample for efficiency (if too many posts)\n n1, n2 = posts1.shape[0], posts2.shape[0]\n if n1 > 100: posts1 = posts1.sample(100)\n if n2 > 100: posts2 = posts2.sample(100)\n \n if (n1 > 0) and (n2 > 0): # needed to avoid empty sets if bad posts are removed\n res = crossSimScores(posts1, posts2, doc_topic_distr, verbose=False)\n topic_sim = round(res['topic_sim'].mean(), 3)\n return topic_sim # return res\n # print('Topic similarity score bw {} and {}: {}'.format(t1, t2, topic_sim))\n return np.nan\n\ndef AF_clustering(posts, job_title, sim_mat):\n \n af = cluster.AffinityPropagation(affinity='precomputed').fit(sim_mat) # preference=-50,\n \n cluster_centers_indices = af.cluster_centers_indices_\n n_clusters_ = len(cluster_centers_indices)\n n_post = posts.shape[0]\n print('# posts to be clustered by Affinity Propagation model: {}'.format(n_post))\n print('Estimated number of clusters: %d' %n_clusters_)\n \n # Representatives (cluster centers)\n reps = posts.iloc[cluster_centers_indices]\n reps.to_csv(JOB_PROF + 'clusters/{}_reps.csv'.format(job_title), index=False)\n \n # Retrieve labels of posts\n res = posts.copy()\n res['af_label'] = af.labels_\n return res.sort_values('af_label')\n\ndef plotCluster(c, job_title, cluster_res): # figsize=(12,6)\n posts = cluster_res.query('af_label == {}'.format(c))\n n_post = posts.shape[0]\n if (n_post % 2 == 1):\n print('n_post={} is odd number, drop 1 post'.format(n_post)); n_post -= 1\n posts = posts.iloc[1:]\n\n w = 12; h = 3*n_post/4 if n_post >= 8 else 6\n fig = vizTopicDists(posts, doc_topic_distr, figsize=(w, h))\n i=c+1; title = 'Topic distribution of {} posts in {}th cluster'.format(job_title, i)\n fig.suptitle(title, fontsize=20)\n fig.savefig(SKILL_RES + 'fig/c{}_{}.pdf'.format(i, job_title))\n return fig\n\ngood_df = df.query('n_skill >= 10')\ngood_df.shape[0]",
"We need to get basic stats of job titles to understand more about them. Given a title, we need to know:\n\nHow many posts there are for the title in the whole dataset\navg n_skill in the posts\nn_employer having the title",
"stats = getTitleStats(df)\nstats.describe().round(1)\nstats.to_csv(SKILL_RES + 'stats.csv', index=False)\n\nk = 20; fname = SKILL_RES + 'lda/{}_topics.csv'.format(k)\ndoc_topic_distr = topic_distr[k]\ntopic_df = pd.read_csv(fname) \nlabels = map(str.upper, topic_df['label'])",
"Range of Job Post Similarity Scores\nWe will look at distribution of topic similarity scores of certain samples in data. We investigate how the distribution varies with n_topic and sample size.",
"with(open(SKILL_RES + 'doc_topic_distr.mtx', 'r')) as f:\n doc_15topic_distr = mmread(f)\n\nwith(open(SKILL_RES + 'doc_20topic_distr.mtx', 'r')) as f:\n doc_20topic_distr = mmread(f)\n\nwith(open(SKILL_RES + 'doc_30topic_distr.mtx', 'r')) as f:\n doc_30topic_distr = mmread(f)\n\nprint(doc_15topic_distr.shape)\nprint(doc_20topic_distr.shape)\nprint(doc_30topic_distr.shape)\n\n# Allow us to loop over doc-topic dists wrt diff no. of topics\ntopic_distr = {15: doc_15topic_distr, 20: doc_20topic_distr, 30: doc_30topic_distr}\n\nrd.seed(1234567) # time()\nsize = 500; posts = good_df.sample(size)\n\nsims_15 = pairwiseSim(posts, doc_15topic_distr)\nsims_20 = pairwiseSim(posts, doc_20topic_distr)\nsims_30 = pairwiseSim(posts, doc_30topic_distr)\n\nsims_15.topic_sim.describe().round(3)\nsims_20.topic_sim.describe().round(3)\nsims_30.topic_sim.describe().round(3)\n\nsims_15.sort_values('topic_sim', inplace=True)\nvizPair(0, sims_15)\nvizPair(1, sims_15)\n\nmedium = sims_15.query('0.7 < topic_sim and topic_sim <= 0.8')\nmedium.sort_values('topic_sim', inplace=True)\n\nvizPair(0, medium, abbv_title='medium')\n\nlast = medium.shape[0]-1\nvizPair(last, medium, abbv_title='medium')\n\nlarge = sims_15.query('topic_sim > 0.8')\nvizPair(0, large, abbv_title='large')\n\nn_pair = sims_15.shape[0]; last = n_pair - 1\nvizPair(last, sims_15)\n\nsims = {15: sims_15, 20: sims_20, 30: sims_30}\nks = [15, 20, 30]\n\nfor k in ks:\n fig = plotSimDists(sims[k])\n fig.suptitle('Sample size: {} posts'.format(size), fontsize=20)\n fname = SKILL_RES + 'fig/sim_dists_{}topics.pdf'.format(k)\n# fig.set_tight_layout(True)\n fig.savefig(fname)\n plt.show(); plt.close()",
"The plots show that topic similarity distribution $sim_{topic}$ changes gradually with the number of topics $k$ and with sample size. For a sample of size 300, we have\n+ $k=15$: $sim_{topic}$ is distributed in the range $ [0.63, 0.93] $\n+ $k=20$: $sim_{topic}$ is distributed in the range $ [0.62, 0.92] $\n+ $k=30$: $sim_{topic}$ is distributed in the range $ [0.6, 0.9] $\nSimilarity of Job Posts and Consistency Score\nWe will look at consistency score of a given job title in a given industry. First, let's see how many pairs (job title, industry) we have and get stats for the pairs.",
"df.head(1)\n\nby_job_and_industry = df.groupby(['title', 'industry'])\n\nagg_df = by_job_and_industry.agg({'job_id': len, 'employer_id': 'nunique', 'n_skill': 'mean'})\n\nagg_df = agg_df.rename(columns={'job_id': 'n_post', 'employer_id': 'n_employer', \n 'n_skill': 'avg_n_skill'})\nagg_df = agg_df.reset_index()\nagg_df.describe().round(1)\n\nagg_df.sort_values('n_post', inplace=True)\n\nagg_df.head(10)",
"First, analyze tuples (job title, industry) with $\\ge 100 $ posts.",
"res = agg_df.query('n_post >= 100').copy()\nprint('# pairs to analyze: %d' %res.shape[0])\n\ncScoreAtRow(res.iloc[0])\n\nres['cscore'] = res.apply(cScoreAtRow, axis=1)\n\nres.cscore.describe().round(3)\n\nres = res.sort_values('cscore', ascending=False)\n\nres.head()\n\nres.tail()\n\nres.to_csv(JOB_PROF + 'cscore_jobs_100posts.csv', index=False)",
"Distribution of c-scores",
"def vizCScores(res):\n fig = plt.figure(figsize=(6,5))\n plt.hist(res.cscore)\n avg, std = round(res.cscore.mean(), 3), round(res.cscore.std(), 3)\n xl = 'Consistency score' + r'$(\\mu = {}, \\sigma = {})$'.format(avg, std)\n plt.xlabel(xl, fontsize=16); \n plt.ylabel('Count', fontsize=16)\n plt.grid(True)\n return fig\n\nres = agg_df.query('2 <= n_post < 100')\nres.reset_index(inplace=True)\n\n_ = agg_df.query('2 <= n_post')\nprint('# pairs with at least 2 posts: %d' % _.shape[0])\n\ndel res['index']\nres.head()\n\nres['cscore'] = res.apply(cScoreAtRow, axis=1)\n\nres_55 = pd.read_csv(JOB_PROF + 'cscore_jobs_100posts.csv')\nres = pd.concat([res, res_55])\nres.to_csv(JOB_PROF + 'cscore_all.csv', index=False)\n\nfig = vizCScores(res)\nfig.savefig(JOB_PROF + 'cscore_dist.pdf')\nplt.show(); plt.close()\n\nres = res.sort_values('cscore', ascending=False)",
"Analyze cases where cscore is 1 (posts are 100% consistent):",
"res.query('cscore == 1')",
"As these cases have only 2, 3 posts, they are very likely to be re-posts. Let's see:",
"def checkRepost(in_df):\n for i in range(in_df.shape[0]):\n row = in_df.iloc[i]\n title = row['title']; industry = row['industry']\n docs = set(df[(df.title == title) & (df.industry == industry)]['doc'])\n if (len(docs) == 1):\n print (i, True) \n else: print(docs)\n\ntmp = res.query('cscore == 1')\ncheckRepost(tmp)",
"Gotcha:\nAll of them are re-posts, some with a bit editing (cases 5, 7 and 17).",
"res.query('cscore < 1').head()\n\ncheckRepost(res.query('cscore < 1').head())",
"Niche vs. General Job Titles\nInfoCom industry",
"info_df = agg_df.query('industry == \"Information and Communications\"')\n\ninfo_top50 = info_df.iloc[range(50)]\ninfo_top50['cscore'] = info_top50['title'].apply(consistency, industry='Information and Communications')\ninfo_top50.to_csv(JOB_PROF + 'consistency/infocom_cscore.csv', index=False)\n\ninfo_top50.cscore.describe()\n\ninfo_top50.sort_values('cscore', ascending=False, inplace=True)\n\ninfo_top50.head()\n\ninfo_top50.tail()",
"Financial and Insurance Activities",
"fin_df = agg_df.query('industry == \"Financial and Insurance Activities\"')\n\nfin_top50 = fin_df.iloc[range(50)]\nfin_top50['cscore'] = fin_top50['title'].apply(consistency, industry='Financial and Insurance Activities')\nfin_top50.to_csv(JOB_PROF + 'consistency/fin50_cscore.csv', index=False)\n\nfin_top50.cscore.describe()\n\nfin_top50 = fin_top50.sort_values('cscore', ascending=False)\n\nfin_top50.head()\n\nfin_top50.tail()\n\nfin_res = fin_df.query('n_post >= 2').copy()\nprint('# cscores to be computed: %d' %fin_res.shape[0])\nfin_res['cscore'] = fin_res['title'].apply(consistency, industry='Financial and Insurance Activities')\nfin_res.to_csv(JOB_PROF + 'consistency/fin_cscore.csv', index=False)\n\nfin_res.cscore.describe()\n\n# Plot dist of cscore\n",
"Entertainment",
"ent_df = agg_df.query('industry == \"Arts, Entertainment and Recreation\"')\nent_res = ent_df.query('n_post >= 2').copy()\nent_res['cscore'] = ent_res['title'].apply(consistency, industry='Arts, Entertainment and Recreation')\nent_res.to_csv(JOB_PROF + 'consistency/art_cscore.csv', index=False)\n\nent_res.cscore.describe()",
"Senior Software Engineer vs. Marketing Executive (Wholesale and Retail Trade)\nDenote Senior Software Engineer as SSE, Administrative Assistant as AA, and Marketing Executive as ME.",
"sse_cscore = consistency('Senior Software Engineer', 'Wholesale and Retail Trade', abbv_job='sse', abbv_industry='wholesale')\naa_cscore = consistency('Administrative Assistant', 'Wholesale and Retail Trade', abbv_job='aa', abbv_industry='wholesale')\nme_sims = consistency('Marketing Executive', 'Wholesale and Retail Trade', abbv_job='me', abbv_industry='wholesale')",
"Vice President vs. Information Technology Specialist in Financial and Insurance Activities",
"agg_df.query('industry == \"Financial and Insurance Activities\"').head()",
"Software Engineer in Finance",
"fin_se = calSimScores('Software Engineer', 'Financial and Insurance Activities', df)\n\nprint fin_se.topic_sim.describe().round(2)\nprint fin_se.skill_sim.describe().round(2)\n\nfin_se = fin_se.sort_values('topic_sim', ascending=False) \n# del fin_se['index']\n\nfin_se.head()\n\nfin_se.head().to_csv(SKILL_RES + 'fin_se_top5.csv', index=False)\n\nfin_se = fin_se.sort_values('skill_sim', ascending=False)\nfin_se.head()\n\nnp.corrcoef(fin_se.skill_sim, fin_se.topic_sim)[0,1]",
"Plot cluster dists of the first 5 pairs:",
"posts = getPostsInPairs(fin_se.head())\nfig = vizTopicDists(posts, doc_topic_distr, figsize=(12, 6))\nplt.savefig(SKILL_RES + 'fig/fin_se_top5.pdf')\nplt.show(); plt.close()\n\nfin_se.tail().to_csv(SKILL_RES + 'fin_se_bottom5.csv', index=False)",
"Manager in Finance",
"fin_man = calSimScores('Manager', 'Financial and Insurance Activities', df)\n\nprint fin_man.topic_sim.describe().round(2)\nprint fin_man.skill_sim.describe().round(2)\n\nfin_man = fin_man.sort_values('topic_sim', ascending=False); del fin_man['index']\n\nfin_man.head().to_csv(SKILL_RES + 'fin_man_top5.csv', index=False)\nfin_man.tail().to_csv(SKILL_RES + 'fin_man_bottom5.csv', index=False)\n\nposts = getPostsInPairs(fin_man.tail(), df)\n\ntop5 = fin_man.query('employer1 != employer2 and skill_sim <= 0.8').head()\n\nfig = vizTopicDists(posts, doc_topic_distr, figsize=(12, 6))\nplt.savefig(SKILL_RES + 'fig/fin_man_bottom5.pdf')\nplt.show(); plt.close()",
"Research Fellow",
"rf_sims = calSimScores(job_title='Research Fellow', industry='Education', df=df)\n\nprint rf_sims.topic_sim.describe().round(3)\nprint rf_sims.skill_sim.describe().round(2)\n\nrf_sims = rf_sims.sort_values('topic_sim', ascending=False)\nrf_sims.head()",
"Clustering Job Posts\nIn this section, we will try the following clustering models which can work directly on precomputed similarity matrix. \n+ Affinity Propagation (AP): can learn number of clusters from data\n+ Spectral Clustering (SC): need to preset number of clusters, which can be guessed using AP result.\nAfter training each model, we will analyze the returned clusters by:\n+ metrics (from sklearn.metrics) such as homogeneity, completeness or mutual information\n+ manually looking at representative posts in each cluster. The representative posts are provided by AP as cluster centers.",
"import sklearn.cluster as cluster\n\n# dir to store results\nJOB_PROF = SKILL_RES + 'job_prof/'",
"Affinity Propagation\n\nSoftware Engineer:",
"se_sims = calSimScores('Software Engineer', df=df, out_fmt='matrix_topic_sim') # 'Financial and Insurance Activities',\n\nse_posts = df[df.title == 'Software Engineer']\nse_cluster = AF_clustering(se_posts, job_title='se', sim_mat=se_sims)\n\nse_cluster.groupby('af_label').size().sort_values(ascending=False)\n\nfig = plotCluster(0, job_title='SE',cluster_res=se_cluster)\n# plt.show(); plt.close()\n\nfor i in range(9):\n fig = plotCluster(i, job_title='SE',cluster_res=se_cluster)\n\nfig = plotCluster(10, job_title='SE',cluster_res=se_cluster)\nplt.show(); plt.close()\n\nfig = plotCluster(22, job_title='SE', cluster_res=se_cluster, figsize=(12, 6))\nplt.show(); plt.close()",
"Comparing Different Job Titles\nWe compare job titles to see if their job posts are consistent (thru the topic sim distribution among the job posts).",
"se_sims_df = calSimScores('Software Engineer')\n\nse_sims_df.sort_values('topic_sim', ascending=False, inplace=True)\nse_sims_df.head()\n\nse_sims_df.query('skill_sim < 1 and employer1 != employer2').head()",
"Software Developer:",
"dev_titles = set([s for s in df.title if ('Developer' in s)])\ndev_titles\n\ndev_posts = df[(df.title == 'Software Developer') & (df.industry == 'Financial and Insurance Activities')]\nprint('# posts of Software Developer in Finance: %d' %dev_posts.shape[0])\n\nsd_sims_df = calSimScores('Software Developer')\n\nsd_sims_df.topic_sim.describe().round(3)",
"Result: The similarity of posts in Software Developer are also high with a mean of 0.83 .\n\nManager:",
"man_posts = df[(df.title == 'Manager')] # (df.industry == 'Financial and Insurance Activities')\nman_sims = calSimScores('Manager', industry=None, df=df, out_fmt='matrix_topic_sim')\n\nman_cluster = AF_clustering(man_posts, job_title='Manager', sim_mat=man_sims)\n\nman_cluster.groupby('af_label').size().sort_values(ascending=False)\n\nfor c in [7, 9, 11, 14, 22]:\n plotCluster(c, job_title='fin_man', cluster_res=man_cluster)\n\nplt.close('all')\n\nman_sim_df = calSimScores('Manager')",
"Associate:",
"assoc_sim_df = calSimScores('Associate')\n\nassoc_sim_df.topic_sim.describe().round(3)\n\nfig = plotSimDists(assoc_sim_df, job_title='Associate')\nfig.savefig(SKILL_RES + 'fig/assoc_sim_dists.pdf')\nplt.show(); plt.close()\n\nse_sims_df.topic_sim.describe().round(3)\n\nman_sim_df.topic_sim.describe().round(3)\n\nplt.close('all')\n\nfig = plotSimDists(sim_df=se_sims_df, job_title='Software Engineer')\nplt.savefig(SKILL_RES + 'fig/se_sim_dists.pdf')\nplt.show(); plt.close()\n\nfig = plotSimDists(sim_df=man_sim_df, job_title='Manager')\nfig.savefig(SKILL_RES + 'fig/man_sim_dists.pdf')\nplt.show(); plt.close()",
"Research Fellow",
"rf_sim_mat = calSimScores(job_title='Research Fellow', industry='Education', df=df, out_fmt='matrix_topic_sim')\n\nrf_posts = df[(df.title=='Research Fellow') & (df.industry == 'Education')]\nprint('# posts of Research Fellow: %d' %rf_posts.shape[0])\n\nrf_cluster = AF_clustering(rf_posts, job_title='rf', sim_mat=rf_sim_mat)\n\nrf_cluster.groupby('af_label').size().sort_values(ascending=False)\n\nrf_c0 = plotCluster(0, job_title='RF', cluster_res=rf_cluster)\nrf_c1 = plotCluster(1, job_title='RF', cluster_res=rf_cluster)\nplt.close('all')",
"Find \"Synonym\" Job Titles\n\nSoftware Engineer vs. Software Developer",
"se_and_sd = simScore('Software Engineer', 'Software Developer')\n\nse_and_sd = se_and_sd.sort_values('topic_sim', ascending=False)\nse_and_sd.reset_index(inplace=True); del se_and_sd['index']\nse_and_sd.head()\n\nfig = plotSimDists(se_and_sd, 'SE_and_SD')\nfig.savefig(SKILL_RES + 'fig/se_and_sd_sims.pdf')\nplt.show(); plt.close()\n\nplt.close('all')\n\nvizPair(0, se_and_sd, abbv_title='se_vs_sd')\n\nlast = se_and_sd.shape[0] - 1\nvizPair(last, se_and_sd, abbv_title='se_vs_sd')",
"Software Engineer vs. Manager",
"se_and_man = simScore('Software Engineer', 'Manager')\n\nse_and_man.topic_sim.describe().round(3)\n\nfig = plotSimDists(se_and_man)\nfig.savefig(SKILL_RES + 'fig/se_and_man_sims.pdf')\nplt.show(); plt.close()",
"Spectral Clustering",
"spectral = cluster.SpectralClustering(n_clusters=2, eigen_solver='arpack', affinity=\"precomputed\")",
"SE in Finance:",
"spectral.fit(fin_se_sims)\n\nfin_se_posts = df[(df.title == 'Software Engineer') & (df.industry == 'Financial and Insurance Activities')]\nfin_se_posts['cluster'] = spectral.labels_\nfin_se_posts = fin_se_posts.sort_values('cluster')",
"Manager in Finance:",
"spectral.fit(fin_man_sims)\n\nfin_man_posts = df[(df.title == 'Manager') & (df.industry == 'Financial and Insurance Activities')]\n\nfin_man_posts['cluster'] = spectral.labels_\nfin_man_posts = fin_man_posts.sort_values('cluster')",
"Evaluation using the Frameworks from SkillsFuture\nIn this section, we use the skill frameworks from SkillsFuture (SF) as a source to evaluate our topic model as well as our proposed consistency score. Currently available frameworks are for 3 sectors: (i) Hotel and Accomodation services, (ii) Early Childcare and Education, and (iii) Precision Engineering. Given a job title t in one of the three sectors, we proceed by the following steps.\n\nObtain an exhaustive list of synonym titles for t\nUsing the titles to retrieve posts for t\nMeasure the similarity among the retrieved posts\nCategorize the skills in these posts using the SF framework for t",
"df = df[-df.title.isnull()]\n# standardize employer_name\ndf.employer_name = df.employer_name.apply(lambda s: s.replace('PTE LTD', 'PTE. LTD.').replace('PTE. LIMITED', 'PTE. LTD.')\n .replace('PRE-SCHOOL', 'PRESCHOOL') )\n\ndf.to_csv(SKILL_DAT + 'filter_doc_index.csv')",
"Early Childhood Care and Education (ECCE)\nPre-School Teachers (PST)\nThe list of titles for PST may be formed by looking at the titles from certain pre-schools in SG. We tried with the top pre-schools obtained from https://skoolopedia.com/preschool-singapore-2015-infographic/. First try return empty results! Checking with employer name in data shows that we need to append 'PTE. LTD.' to school names. We then added schools found in data.",
"keys = map(str.upper, ['PreSchool', 'Skool', 'CHILDCARE', 'Kid', 'toddler', 'infant'])\n\nguessed_preschools = set([s for s in df.employer_name if found(keys, s)])\nprint('# guessed preschools: %d' %len(guessed_preschools))\n\nguessed_preschools\n\n# ['Shaws CDLC', 'childfirst', 'kiddiWinkie', 'little footprints', 'brighton montessori', 'posso', 'little skool-house', \n# 'little village', 'mulberry', 'learning vision', 'Star Learners', 'global eduhub', 'sunflower childcare', 'frobel']\n \npreschool_keys = ['E-BRIDGE', 'ETONHOUSE PRESCHOOL', 'MINDCHAMPS', 'LECLARE', \"Pat's Schoolhouse\", \n 'SKOOL4KIDZ', 'VIVA KIDS', 'JUST KIDS LEARNING PLACE', 'BIBINOGS KIDS ACADEMY', \n 'CREATIVE LAND CHILDCARE', 'Lorna Whiston', \n 'Carpe Diem', 'Crestar', 'nurture edu', 'safari house']\npreschool_keys = map(str.upper, preschool_keys)\n\npreschools = [s for s in df.employer_name if found(preschool_keys, s)]\npreschool_posts = df[df.employer_name.isin(preschools)]\nprint('# posts from preschool employers: %d, distributed as follows:' %preschool_posts.shape[0])\nsortDown(preschool_posts.groupby('employer_name').size())\n\ntitles = set(preschool_posts['title'])\ntitles",
"Among the titles, we can only find 2 titles for pre-school teacher [Child Care Teacher, Pre-Primary Education Teacher]. The reason is because the set of preschools are not exhaustive. How to fix this?\nAnother way to search for titles of PST is to look at the job titles for Teacher and manually narrow down to Pre-school Teacher as follow.",
"idx = [i for i,s in enumerate(df.title) if ('Teacher' in s)]\nteacher_df = df.iloc[idx]\n\nprint('# posts of titles containing kw Teacher: %d' %teacher_df.shape[0])\n\nteacher_stat = getTitleStats(teacher_df)\nteacher_stat.to_csv(SKILL_RES + 'pst/teachers.csv', index=False)",
"Based on this, we guessed the following titles for PST.",
"cc_teachers = ['Pre-School Teacher', 'Kindergarten Teacher', 'Child Care Teacher', 'Pre-Primary Education Teacher',\n 'Teacher, Kindergarten', 'Teacher, Child Care', 'Day Care Teacher']\n\nteacher_stat.describe().round(1)\n\nteacher_stat[teacher_stat.title.isin(cc_teachers)]",
"Seed set of PST",
"pst_posts = df[df.title == 'Pre-School Teacher']\npst_posts.to_csv(SKILL_RES + 'pst/posts.csv', index=False)\n\npst_sims = pairwiseSim(pst_posts, doc_topic_distr)\n\nprint pst_sims.topic_sim.describe().round(3)\npst_sims.skill_sim.describe().round(2)\n\nfig = plotSimDists(pst_sims, sci_fmt=False)\n# fig.suptitle('Pre-School Teacher (13 posts)', fontsize=20)\nfig.savefig(SKILL_RES + 'fig/pst_sims.pdf')\nplt.show(); plt.close()\n\npst_sims.query('skill_sim >= 0.6')\n\npst_sims.sort_values('topic_sim', ascending=False, inplace=True)\npst_sims.to_csv(SKILL_RES + 'pst/pst_sims.csv', index=False)\n\npst_sims = pst_sims.query('skill_sim < 0.6')\nvizPair(0, pst_sims, labels, abbv_title='pst')\n\nlast = pst_sims.shape[0] - 1\nvizPair(last, pst_sims, labels, abbv_title='pst')",
"Relevant titles for PST",
"# employers having PST positions\npst_employers = np.unique(pst_posts.employer_name)\nprint('# PST employers: %d' %len(pst_employers))\n# pst_employers",
"Get other titles from the PST employers:",
"posts_of_pst_employers = df[df.employer_name.isin(pst_employers)]\nprint('# posts of PST employers: {}'.format(posts_of_pst_employers.shape[0]))\n\nother_titles_df = getTitleStats(posts_of_pst_employers).query('title != \"Pre-School Teacher\"') # n_post > 1\nother_titles = other_titles_df['title']\n\nteachers = teacher_stat.title",
"Titles from PST employers will not include all Teacher titles.",
"# set(teachers).difference(other_titles)",
"Similarity scores of relevant titles with PST",
"rel_titles = set(teachers.append(other_titles))\nrel_posts = df[df.title.isin(rel_titles)]\n\n# merge diff versions of some titles\nrel_posts = changeTitle(rel_posts, 'Teacher, Kindergarten', 'Kindergarten Teacher')\nrel_posts = changeTitle(rel_posts, 'Teacher, Child Care', 'Child Care Teacher')\n\nrel_titles = set(rel_posts.title)\nprint('# relevant titles: %d' %len(rel_titles))\nprint('# titles retrieved by kw teacher: {}'.format(len(teachers)))\nprint('# titles retrieved by PST employers: {}'.format(len(other_titles)))\n\nres = getTitleStats(rel_posts)\n\nres['topic_sim_with_pst'] = res['title'].apply(simScore, t2='Pre-School Teacher')\n\nres.sort_values('topic_sim_with_pst', ascending=False, inplace=True)\nres.avg_n_skill = res.avg_n_skill.round(1)\n\nres.to_csv(SKILL_RES + 'pst/sims.csv', index=False)\n\nres.describe().round(3)",
"The description shows us the following:\n+ totally $128$ job titles are compared against PST\n+",
"res.query('n_post >= 6').describe().round(3)\n\nres.query('n_post >= 6').head(10)",
"Why 'Speech and Drama Teacher' is very similar to PST?",
"df[df.title == 'Speech and Drama Teacher'].iloc[0].doc\n\nres.tail()\n\ntmp = teacher_df.query('title == \"Student Teacher\"')\nprint tmp.iloc[0]['employer_name']\nprint tmp.iloc[0]['doc']",
"Measure the similarity among the retrieved posts:",
"cc_sims = pairwiseSim(cc_df, doc_topic_distr)\n\nprint('# pairs: %d' %cc_sims.shape[0])\n\n# del cc_sims['index']\ncc_sims.topic_sim.describe().round(3)\n\nfig = plotSimDists(cc_sims, 'Pre-School Teacher')\nfig.savefig(SKILL_RES + 'fig/pst_sims.pdf')\nplt.show(); plt.close()\n\ncc_sims = cc_sims.sort_values('skill_sim', ascending=False)\n\ncc_sims.query('skill_sim >= 0.8').to_csv(SKILL_RES + 'job_prof/pst_variants.csv', index=False)\n\ncc_sims.query('(employer1 != employer2) and (topic_sim >= 0.9) and (skill_sim < 0.8)')\n\nniwa_df = cc_sims.query('employer1 == \"NIWA SCHOOLHOUSE\"')\nprint(niwa_df.shape[0])\n\nniwa_df.head()\n\ntmp = niwa_df.head()\nplt.close('all')\n\nvizPair(0, niwa_df)\n\nvizPair(1, niwa_df)",
"Result: The topic similarity scores are very high with a mean value of 0.99!\n\nCategorize the skills in these posts using the SF framework for PST:\n\nThe SF framework for PST has 10 categories of skills (listed below). When we manually labeled the top-100 popular skills using the categories, we detected another group of skills, namely language skills e.g. chinese, mandarin. We labeled this group as Language Skill. We also found several skills which cannot go into any categories, we thus create Others group for these skills. In short, we have 12 following categories of skills.",
"categs = {'c-dev': 'Child Development', 'ped': 'Curriculum & Pedagogy', \n 'env': 'Learning environment', 'rel' : 'Interaction & Relationships', \n 'safety' : 'Health, Safety & Nutrition', 'family & com' : 'Family & Community partnerships', \n 'prof dev' : 'Professional Mastery', 'prof val' : 'Professional Values & Ethics', \n 'teamwork' : 'Teamwork & Collaboration', 'plan' : 'Visioning & Planning', \n 'lang': 'Language Skill', 'others' : 'Others'}\n\nskill_sets = map(lambda s: set(s.split(',')), cc_df.occur_skills)\ncc_skills = unionAll(skill_sets)\nprint('# skills in child care posts: %d' %len(cc_skills))\n\nfreqs = [freq(sk, skill_sets) for sk in cc_skills]\ncc_skill_df = pd.DataFrame({'skill': list(cc_skills), 'freq': freqs})\n\n# cc_skill_df.sort_values('freq', ascending=False).to_csv(SKILL_RES + 'cc_skills.csv', index=False)",
"The final result is follows.",
"cc_skill_df = pd.read_csv(SKILL_RES + 'cc_skills.csv')\n\n# top 100 skills\ntop_100 = pd.DataFrame(cc_skill_df.head(100))\ntop_100['Category'] = top_100.label.apply(lambda x: categs[x])\nprint('Distribution of categories among top-100 skills')\nsortDown(top_100.groupby('Category').size())\n\ntop_100['skill_freq'] = top_100.skill + ' (' + map(str, top_100.freq) + ')'\ntop_100.head(1)\n\ntmp = top_100.groupby('Category').agg({'skill_freq': joinEntries, 'skill': len})\ntmp = tmp.reset_index().rename(columns = {'skill_freq': 'skills (freq)', 'skill': 'n_skill'})\ntmp\n\ntmp.to_csv(JOB_PROF + 'cc_skill_categ.csv', index=False)\n\nprint [sk for sk in cc_skills if ('child' in sk)]\nprint [sk for sk in cc_skills if ('curriculum' in sk)]",
"Pinnacle Leader\nHotel and Accomodation Services (HAS)\n\nObtain an exhaustive list of job titles available for hotels, hostels in SG:",
"# query employers in HAS in ds\nhotel_kws = map(str.upper, ['hotel', 'hostel', 'motel', 'lodging', 'resort'])\nnames = [s for s in df.employer_name if found(hotel_kws, s)]\nhotels = pd.DataFrame({'name': names}).drop_duplicates()\n\nprint('# employers in HAS: %d' %hotels.shape[0])\n\nhotels.to_csv(SKILL_RES + 'hotels.csv', index=False)\n\n# query all posts of the employers\nhas_posts = df[df.employer_name.isin(hotels)]\nprint('# posts in HAS: %d, distributed as follows:' %has_posts.shape[0])\n# sortDown(has_posts.groupby('employer_name').size())\n\n# query possible job titles for the employers in HAS\nhas_title = set(has_posts.title)\nprint('# titles in the HAS employers: %d' %len(has_title))\n\nhas_title_stats = stats[stats.title.isin(has_title)]\nhas_title_stats.to_csv(SKILL_RES + 'has_title_stats.csv', index=False)\n\nhas_title_stats.describe().round(1)",
"Housekeeping (HK) track",
"HK_DIR = SKILL_RES + 'job_prof/hk/'\n\n# based on __keywords__ in titles suggested by SF\nhk_kw = ['Housekeeper', 'Housekeeping Attendant', 'Room']\nhk_titles = set([t for t in has_posts.title if found(hk_kw, t)])\nhk_titles",
"Only 'Housekeeper (Hotels and Other Establishments)' and 'Housekeeping Attendant' are exact match for HK track. So we only keep the two titles for HK. We then change the former to the latter s.t. we have a consistent title for posts of HK.",
"df = changeTitle(df, 'Housekeeper (Hotels and Other Establishments)', 'Housekeeping Attendant')\n\nhk_kw = ['Housekeep']\nhk_titles = set([t for t in df.title if found(hk_kw, t)])\nhk_titles",
"As HK Manager and HK Supervisor are similar, we merge them together.",
"df = changeTitle(df, 'Housekeeping Manager', 'Housekeeping Supervisor')",
"After the merging, we need to re-comp the statistics for job titles as the stats change.",
"stats = getTitleStats(df)\nstats.to_csv(SKILL_RES + 'stats.csv', index=False)\n\nhk_titles = set([t for t in df.title if found(hk_kw, t)])\nhk_titles\n\n_ = df[df.title.isin(hk_titles)]\nprint('# posts: %d' %_.shape[0])",
"Seed set for HK",
"hk_posts = df[df.title == 'Housekeeping Attendant']\nprint('# HK posts: %d' %hk_posts.shape[0])",
"Measure the similarity among HK posts:",
"hk_sims = pairwiseSim(hk_posts, doc_topic_distr)\nhk_sims.topic_sim.describe().round(2)",
"Again, the topic similarity is high with a mean value of $0.89$.",
"fig = plotSimDists(hk_sims, sci_fmt=False)\nfig.set_tight_layout(True)\nfig.savefig(SKILL_RES + 'fig/HK/hk_sims.pdf')\nplt.show(); plt.close()\n\nhk_sims = hk_sims.sort_values('topic_sim', ascending=False)\n\nvizPair(0, hk_sims, labels, abbv_title='hk')",
"The two posts above are from hotels of the same group: Park Hotel Group.",
"vizPair(1, hk_sims, labels, abbv_title='hk')",
"This case a job agency reposted the job for the employer.",
"vizPair(2, hk_sims, labels, abbv_title='hk')\n\nlast = hk_sims.shape[0]-1\nvizPair(last, hk_sims, labels, abbv_title='hk')",
"Relevant titles for HK\n\nFrom employers of HK positions:",
"hk_employers = hk_posts.employer_name.drop_duplicates()\nhk_employers.to_csv(HK_DIR + 'employers.csv', index=False)\n\nprint('# employers having HK positions: %d' %len(hk_employers))\n\n# posts from HK employers\nposts_of_hk_employers = df[df.employer_name.isin(hk_employers)]\nprint('# posts from HK employers: %d' %posts_of_hk_employers.shape[0])\n\ntitles_from_hk_employers = set(posts_of_hk_employers['title'])\nprint('# titles from HK employers: %d' %len(titles_from_hk_employers))\n\nrel_titles = titles_from_hk_employers.union(hk_titles)\nprint('# relevant titles: %d' %len(rel_titles))\n\nhk_titles.difference(titles_from_hk_employers)",
"As we already cover a large number of relevant titles. We may not need to retrieve more titles based on key words.",
"# + Contains keywords relevant to HK:",
"But first let us look at basic stats of the titles.",
"rel_title_stats = stats[stats.title.isin(rel_titles)]\nrel_title_stats.describe().round(1)\n\nrel_title_stats.to_csv(HK_DIR + 'rel_title_stats.csv', index=False)",
"From the summary, we see that:\n\n25% of titles have <= 8 post\n50% of titles have <= 22 posts.",
"hk_title_stats = stats[stats.title.isin(hk_titles)]\nhk_title_stats",
"As 512 titles is too many, we need to narrow down. We want to keep all the titles with kw 'Housekeep', except the last one. \nCombining the stats of HK titles with that of relevant titles, we have 2 options for the filtering threshold $\\theta$:\n\n$\\theta = 22$: median of n_post of all relevant titles.\n$\\theta = 38$: filter out more titles but still can keep both HK Supervisor and HK Attendant.",
"thetas = [22, 38]\nfor t in thetas:\n _ = rel_title_stats.query('n_post >= {}'.format(t))\n print('# titles to be compared if threshold is {}: {}'.format(t, _.shape[0]))",
"Similarity with relevant titles",
"def sims2RelTitles(min_post):\n print('Picked titles with at least {} posts'.format(min_post))\n res = rel_title_stats.query('n_post >= {}'.format(min_post))\n res['topic_sim_with_hk'] = res['title'].apply(simScore, t2='Housekeeping Attendant')\n\n res = res.round(2)\n res = res.sort_values('topic_sim_with_hk', ascending=False)\n\n res.reset_index(inplace=True); del res['index']\n return res\n\nres_38 = sims2RelTitles(min_post=38)\n\nres_38.head(10).to_csv(HK_DIR + 'top_sim_titles.csv', index=False)\n\nres_38.head(10)\n\nres_38.describe().round(2)\n\n# limit to titles with > 22 posts.\n_ = rel_title_stats.query('n_post > 22')\nprint('# titles with > 22 posts: %d' %_.shape[0])\n\nres = rel_title_stats.query('n_post > 22')\nres['topic_sim_with_hk'] = res['title'].apply(simScore, t2='Housekeeping Attendant')\n\nres = res.round(2)\nres = res.sort_values('topic_sim_with_hk', ascending=False)\nres.head(10)\n\nres.to_csv(HK_DIR + 'sims_to_hk.csv', index=False)",
"Categorize the skills in these posts using the SF framework for HK track:",
"skill_df = pd.DataFrame({'skill': c.keys(), 'freq': c.values()})\nskill_df = skill_df.sort_values('freq', ascending=False)\n\nhk_skills = skillFreq(hk_posts)\nprint('# skills in HK posts: %d' %hk_skills.shape[0])\nhk_skills.head()\n\nhk_skills.to_csv(SKILL_RES + 'job_prof/hk_skills.csv', index=False)",
"Precision Engineering - Technician Track\n\nFind an exhaustive list of job titles for Technician:",
"# Machinist/Technician are suggested by SF\ntech_kw = ['Machinist', 'Technician']\ntech_titles = [t for t in df.title if found(tech_kw, t)]\nc = Counter(tech_titles)\n\ntech_titles = pd.DataFrame({'title': c.keys(), 'n_post': c.values()}).sort_values('n_post', ascending=False)\n\ntech_titles.n_post.describe().round(1)\n\ntech_titles = tech_titles.query('n_post > 10')\nprint('# titles: %d' %tech_titles.shape[0])\n\ntech_titles.tail()",
"Using the titles to retrieve posts for Technician:",
"tech_posts = df[df.title.isin(tech_titles.title)]\nprint('# posts for Technician: %d' %tech_posts.shape[0])\n\ngetTitleStats(tech_posts).to_csv(SKILL_RES + 'job_prof/tech_titles.csv', index=False)\n\nsortDown(tech_posts.groupby('title').size())",
"Measure the similarity among the retrieved posts:",
"tech_sims = pairwiseSim(tech_posts, doc_topic_distr)\n\n# plot dists of the sims\nfig = plotSimDists(tech_sims, 'Technician jobs')\nfig.savefig(SKILL_RES + 'fig/tech_jobs_sim.pdf')\nplt.show(); plt.close()\n\ntech_sims.skill_sim.describe().round(2)\n\ntech_sims.query('skill_sim == 1')\n\ntech_sims.topic_sim.describe().round(2)\n\ntech_sims = tech_sims.sort_values('topic_sim', ascending=False)\n\nn_pair = tech_sims.shape[0]; last = n_pair - 1\nvizPair(last, tech_sims)",
"Categorize the skills in these posts using the SF framework for Technician:\n\nFlagging Variants/Branches of an Employer",
"fm_posts = df[df.title == 'Fashion Merchandiser']\nfm_sims = pairwiseSim(fm_posts, doc_topic_distr)\n\nfm = fm_sims.sort_values('skill_sim', ascending=False)\nfm.head().to_csv(SKILL_RES + 'job_prof/fm_variants.csv', index=False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gear/motifwalk
|
motifwalk/motifs/motif-test.ipynb
|
mit
|
[
"import graph_tool as gt\nfrom graph_tool.all import *\nimport motifwalk as mw\n\nfrom motifwalk import utils as u\n\nfrom motifwalk.motifs import Motif\n\nfrom motifwalk.motifs import *\n\nfrom motifwalk.motifs import analysis\n\nanalysis.count_motif()\n\ng = random_graph(1000, lambda: (5,5))\nresult = motifs(g, m.num_vertices(), motif_list=[m], return_maps=True)\n\nmo, count, vmap = result\n\nlen(vmap[0])\n\np = vmap[0][15]\n\ngraph = p.get_graph()\n\ngt.draw.graph_draw(graph, output_size=(100,100))\n\np[0]\n\np[1]\n\np[2]\n\ng.get_out_edges(151)\n\ng.get_out_edges(383)\n\n\ng.get_out_edges(562)\n\nm3_7.gt_motif.get_edges()\n\np.get_graph().get_edges()\n\np[0]\n\ng.get_out_edges(151)\n\np.get_array()\n\nm = m3_9.gt_motif\ngt.draw.graph_draw(m, output_size=(200,200))\n\nresult = motifs(g, m.num_vertices(), motif_list=[m], return_maps=True)\n\nmo, count, vmap = result\n\np = vmap[0][39]\n\np.get_array()\n\nm.get_edges()\n\nre = motifs(g, k=4, return_maps=True)\n\nlen(re)\n\nlen(re[1])\n\ngt.draw.graph_draw(re[0][43], output_size=(100,200))\ngt.draw.graph_draw(re[0][42], output_size=(100,200))\ngt.draw.graph_draw(re[0][40], output_size=(100,200))\ngt.draw.graph_draw(re[0][41], output_size=(100,200))\n\nre[0][3].get_edges()[:,0:2]\n\n[i for i in map(tuple, re[0][3].get_edges()[:,0:2])]\n\nfor i, motifs in enumerate(re[0]):\n arr = str([i for i in map(tuple, motifs.get_edges()[:,0:2])])\n s = \"m4_{} = Motif({}, is_directed=True, name='m4_{}')\".format(i, arr, i)\n print(s)\n\nla = Graph()\n\nna = nx.Graph()\n\nna.size()\n\nfrom itertools import combinations\n\n[i for i in combinations(p.get_array(), 2)]\n\ndef construct_motif_graph(graph_container, vertex_maps=None, motif=None):\n \"\"\"Construct and return an undirected gt graph containing\n motif relationship. TODO: Add anchors nodes\n\n Parameters:\n graph_container - GraphContainer - Store the original network\n vertex_map - list - contains PropertyMap that maps to vertices in motif\n\n Returns:\n m_graph - gt.Graph - Undirected graph for motif co-occurrence\n \"\"\"\n if motif is not None and motif.anchors is not None:\n print(\"Warning: TODO refactor anchor code.\")\n # graph_tool.Graph\n m_graph = Graph(directed=False)\n if vertex_maps is None:\n _, _, vertex_maps = count_motif(graph_container, motif)\n for prop in vertex_maps:\n edges = [i for i in combinations(prop.get_array(), 2)]\n m_graph.add_edge_list(edges)\n return m_graph\n\nla = construct_motif_graph(None, vertex_maps=re[2][0])\n\ngraph_draw(m4_0.gt_motif, output_size=(100,100))\n\ngraph_draw(m3_5.gt_motif, output_size=(100,100))\n\nm3_5_r = motifs(g,k=len(m3_5.gt_motif.get_vertices()),motif_list=[m3_5.gt_motif],return_maps=True)\n\nm3_5_r[2][0][0].get_array()\n\ng.get_out_edges(216)\n\ng.get_out_edges(938)\n\nre = m3_5_r[2][0][0]\n\nre.shrink_to_fit()\n\nre[0]",
"Using isomorphism doesn't help",
"re = isomorphism(re.get_graph(), m3_5.gt_motif, isomap=True)\n\nre[1][2]\n\nre = m3_5_r[2][0][0]\n\ngraph_draw(re.get_graph(), output_size=(100,100))\n\nre.get_graph().get_edges()\n\nre[0]\n\nre[1]\n\nre[2]\n\ng.get_out_edges(216)\n\nre.get_graph().get_edges()\n\nre[0], re[1], re[2]\n\nfor i in _:\n print(g.get_out_edges(i))",
"Since the Property map doesn't map exactly to the node id in the main graph, I have to use the induced subgraphs.",
"# select some vertices\nvfilt = g.new_vertex_property('bool');\nvfilt[216] = True\nvfilt[756] = True\nvfilt[938] = True\n\nsub = GraphView(g, vfilt)\n\nka = isomorphism(sub, m3_5.gt_motif, isomap=True)\n\nka[1][216], ka[1][756], ka[1][938]\n\n[i for i in [216, 756, 938] if ka[1][i] in {0,1}]"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
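The `construct_motif_graph` cell in the motifwalk row above builds a motif co-occurrence graph by connecting every pair of vertices that appear together in a motif instance. Stripped of the graph-tool specifics, that core step can be sketched with the standard library alone (the function name and tuple-based instances here are illustrative, not the motifwalk API):

```python
from itertools import combinations


def motif_cooccurrence_edges(vertex_maps):
    """Return the undirected edge set linking every pair of vertices
    that co-occur in at least one motif instance."""
    edges = set()
    for instance in vertex_maps:
        # each motif instance contributes a clique over its vertices
        for u, v in combinations(sorted(instance), 2):
            edges.add((u, v))
    return edges


# two triangle instances sharing vertex 2
instances = [(0, 1, 2), (2, 3, 4)]
print(sorted(motif_cooccurrence_edges(instances)))
# [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
```

Each instance is reduced to a clique, so vertices that participate in many motifs become hubs of the resulting co-occurrence graph.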
HazyResearch/snorkel
|
tutorials/cdr/CDR_Tutorial_3.ipynb
|
apache-2.0
|
[
"Chemical-Disease Relation (CDR) Tutorial\nIn this example, we'll be writing an application to extract mentions of chemical-induced-disease relationships from Pubmed abstracts, as per the BioCreative CDR Challenge. This tutorial will show off some of the more advanced features of Snorkel, so we'll assume you've followed the Intro tutorial.\nLet's start by reloading from the last notebook.",
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport numpy as np\nfrom snorkel import SnorkelSession\n\nsession = SnorkelSession()\n\nfrom snorkel.models import candidate_subclass\n\nChemicalDisease = candidate_subclass('ChemicalDisease', ['chemical', 'disease'])\n\ntrain = session.query(ChemicalDisease).filter(ChemicalDisease.split == 0).all()\ndev = session.query(ChemicalDisease).filter(ChemicalDisease.split == 1).all()\ntest = session.query(ChemicalDisease).filter(ChemicalDisease.split == 2).all()\n\nprint('Training set:\\t{0} candidates'.format(len(train)))\nprint('Dev set:\\t{0} candidates'.format(len(dev)))\nprint('Test set:\\t{0} candidates'.format(len(test)))",
"Part V: Training an LSTM extraction model\nIn the intro tutorial, we automatically featurized the candidates and trained a linear model over these features. Here, we'll train a more complicated model for relation extraction: an LSTM network. You can read more about LSTMs here or here. An LSTM is a type of recurrent neural network and automatically generates a numerical representation for the candidate based on the sentence text, so there is no need to featurize explicitly as in the intro tutorial. LSTMs take longer to train, and Snorkel doesn't currently support hyperparameter searches for them. We'll train a single model here, but feel free to try out other parameter sets. Just make sure to use the development set - and not the test set - for model selection.\nNote: Again, training for more epochs than below will greatly improve performance; try it out!",
"from snorkel.annotations import load_marginals\ntrain_marginals = load_marginals(session, split=0)\n\nfrom snorkel.annotations import load_gold_labels\nL_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)\n\nfrom snorkel.learning.pytorch import LSTM\n\ntrain_kwargs = {\n 'lr': 0.01,\n 'embedding_dim': 100,\n 'hidden_dim': 100,\n 'n_epochs': 20,\n 'dropout': 0.5,\n 'rebalance': 0.25,\n 'print_freq': 5,\n 'seed': 1701\n}\n\nlstm = LSTM(n_threads=None)\nlstm.train(train, train_marginals, X_dev=dev, Y_dev=L_gold_dev, **train_kwargs)",
"Scoring on the test set\nFinally, we'll evaluate our performance on the blind test set of 500 documents. We'll load labels similar to how we did for the development set, and use the score function of our extraction model to see how we did.",
"from load_external_annotations import load_external_labels\nload_external_labels(session, ChemicalDisease, split=2, annotator='gold')\nL_gold_test = load_gold_labels(session, annotator_name='gold', split=2)\nL_gold_test\n\nlstm.score(test, L_gold_test)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mykespb/jupyters
|
tester1.ipynb
|
mit
|
[
"Test file 01\nPlay with python3 and math modules",
"2**1000\n\nimport datetime\n\nd = datetime.date.today()\nd.year, d.month, d.day",
"A reworked example from the scipy, numpy, and matplotlib documentation",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import special, optimize\n\nf = lambda x: -special.jv(3, x)\n\nsol = optimize.minimize(f, 1.0)\n\nx = np.linspace(0, 10, 5000)\nx\n\nplt.plot(x, special.jv(3, x), '-', sol.x, -sol.fun, 'o')",
"The average salary in the country is rising!\n(employees, the boss)",
"salaries = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 11.])\nsalaries\n\nnp.mean(salaries), np.median(salaries)",
"If the subsistence minimum is 2, then officially everyone is happy, but in reality almost everyone is dead."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
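The salary cells in the row above are a compact demonstration of why the median is the more robust location statistic: one boss's large salary drags the mean up while the median stays put. The same computation can be done with the standard library alone, without numpy:

```python
from statistics import mean, median

# nine employees earning 1 and one boss earning 11
salaries = [1, 1, 1, 1, 1, 1, 1, 1, 1, 11.0]

print(mean(salaries))    # 2.0 -- pulled up by the single outlier
print(median(salaries))  # 1.0 -- unaffected by the outlier
```

So if the subsistence minimum is 2, the mean says everyone is fine while the median tells the real story.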
tpin3694/tpin3694.github.io
|
machine-learning/detect_edges.ipynb
|
mit
|
[
"Title: Detect Edges\nSlug: detect_edges\nSummary: How to detect edges images using OpenCV in Python. \nDate: 2017-09-11 12:00\nCategory: Machine Learning\nTags: Preprocessing Images \nAuthors: Chris Albon\nPreliminaries",
"# Load image\nimport cv2\nimport numpy as np\nfrom matplotlib import pyplot as plt",
"Load image",
"# Load image as greyscale\nimage_gray = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)",
"Detect Edges",
"# Calculate median intensity\nmedian_intensity = np.median(image_gray)\n\n# Set thresholds to be one standard deviation above and below median intensity\nlower_threshold = int(max(0, (1.0 - 0.33) * median_intensity))\nupper_threshold = int(min(255, (1.0 + 0.33) * median_intensity))\n\n# Apply canny edge detector\nimage_canny = cv2.Canny(image_gray, lower_threshold, upper_threshold)",
"View Edges",
"# Show image\nplt.imshow(image_canny, cmap='gray'), plt.axis(\"off\")\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
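The threshold rule in the edge-detection cells above, one 33% band around the median intensity clamped to the valid 8-bit range, is a common auto-Canny heuristic. Just the threshold computation can be sketched without OpenCV (the function name here is illustrative):

```python
from statistics import median


def auto_canny_thresholds(pixels, sigma=0.33):
    """Return (lower, upper) Canny thresholds one `sigma` band
    around the median pixel intensity, clamped to [0, 255]."""
    m = median(pixels)
    lower = int(max(0, (1.0 - sigma) * m))
    upper = int(min(255, (1.0 + sigma) * m))
    return lower, upper


print(auto_canny_thresholds([50, 100, 150, 200, 250]))  # median 150 -> (100, 199)
```

The resulting pair would then be passed straight to `cv2.Canny(image_gray, lower, upper)` as in the notebook.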
mne-tools/mne-tools.github.io
|
0.23/_downloads/51cca4c9f4bd40623cb6bfa890e2eb4b/20_erp_stats.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Visualising statistical significance thresholds on EEG data\nMNE-Python provides a range of tools for statistical hypothesis testing\nand the visualisation of the results. Here, we show a few options for\nexploratory and confirmatory tests - e.g., targeted t-tests, cluster-based\npermutation approaches (here with Threshold-Free Cluster Enhancement);\nand how to visualise the results.\nThe underlying data comes from :footcite:DufauEtAl2015; we contrast long vs.\nshort words. TFCE is described in :footcite:SmithNichols2009.",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import ttest_ind\n\nimport mne\nfrom mne.channels import find_ch_adjacency, make_1020_channel_selections\nfrom mne.stats import spatio_temporal_cluster_test\n\nnp.random.seed(0)\n\n# Load the data\npath = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'\nepochs = mne.read_epochs(path)\nname = \"NumberOfLetters\"\n\n# Split up the data by the median length in letters via the attached metadata\nmedian_value = str(epochs.metadata[name].median())\nlong_words = epochs[name + \" > \" + median_value]\nshort_words = epochs[name + \" < \" + median_value]",
"If we have a specific point in space and time we wish to test, it can be\nconvenient to convert the data into Pandas Dataframe format. In this case,\nthe :class:mne.Epochs object has a convenient\n:meth:mne.Epochs.to_data_frame method, which returns a dataframe.\nThis dataframe can then be queried for specific time windows and sensors.\nThe extracted data can be submitted to standard statistical tests. Here,\nwe conduct t-tests on the difference between long and short words.",
"time_windows = ((.2, .25), (.35, .45))\nelecs = [\"Fz\", \"Cz\", \"Pz\"]\nindex = ['condition', 'epoch', 'time']\n\n# display the EEG data in Pandas format (first 5 rows)\nprint(epochs.to_data_frame(index=index)[elecs].head())\n\nreport = \"{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}\"\nprint(\"\\nTargeted statistical test results:\")\nfor (tmin, tmax) in time_windows:\n long_df = long_words.copy().crop(tmin, tmax).to_data_frame(index=index)\n short_df = short_words.copy().crop(tmin, tmax).to_data_frame(index=index)\n for elec in elecs:\n # extract data\n A = long_df[elec].groupby(\"condition\").mean()\n B = short_df[elec].groupby(\"condition\").mean()\n\n # conduct t test\n t, p = ttest_ind(A, B)\n\n # display results\n format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,\n df=len(epochs.events) - 2, t_val=t, p=p)\n print(report.format(**format_dict))",
"Absent specific hypotheses, we can also conduct an exploratory\nmass-univariate analysis at all sensors and time points. This requires\ncorrecting for multiple tests.\nMNE offers various methods for this; amongst them, cluster-based permutation\nmethods allow deriving power from the spatio-temporal correlation structure\nof the data. Here, we use TFCE.",
"# Calculate adjacency matrix between sensors from their locations\nadjacency, _ = find_ch_adjacency(epochs.info, \"eeg\")\n\n# Extract data: transpose because the cluster test requires channels to be last\n# In this case, inference is done over items. In the same manner, we could\n# also conduct the test over, e.g., subjects.\nX = [long_words.get_data().transpose(0, 2, 1),\n short_words.get_data().transpose(0, 2, 1)]\ntfce = dict(start=.2, step=.2)\n\n# Calculate statistical thresholds\nt_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(\n X, tfce, adjacency=adjacency,\n n_permutations=100) # a more standard number would be 1000+\nsignificant_points = cluster_pv.reshape(t_obs.shape).T < .05\nprint(str(significant_points.sum()) + \" points selected by TFCE ...\")",
"The results of these mass univariate analyses can be visualised by plotting\n:class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image)\nand masking points for significance.\nHere, we group channels by Regions of Interest to facilitate localising\neffects on the head.",
"# We need an evoked object to plot the image to be masked\nevoked = mne.combine_evoked([long_words.average(), short_words.average()],\n weights=[1, -1]) # calculate difference wave\ntime_unit = dict(time_unit=\"s\")\nevoked.plot_joint(title=\"Long vs. short words\", ts_args=time_unit,\n topomap_args=time_unit) # show difference wave\n\n# Create ROIs by checking channel labels\nselections = make_1020_channel_selections(evoked.info, midline=\"12z\")\n\n# Visualize the results\nfig, axes = plt.subplots(nrows=3, figsize=(8, 8))\naxes = {sel: ax for sel, ax in zip(selections, axes.ravel())}\nevoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,\n mask=significant_points, show_names=\"all\", titles=None,\n **time_unit)\nplt.colorbar(axes[\"Left\"].images[-1], ax=list(axes.values()), shrink=.3,\n label=\"µV\")\n\nplt.show()",
"References\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
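The targeted tests in the notebook above delegate to scipy's `ttest_ind`, which by default computes the equal-variance (pooled) two-sample t statistic. To make the formula explicit, that statistic can be sketched in plain Python (a simplified illustration, not a replacement for scipy, which also supplies the p-value):

```python
from statistics import mean, variance


def t_statistic(a, b):
    """Two-sample t statistic with pooled variance (the
    equal-variance form scipy.stats.ttest_ind uses by default)."""
    na, nb = len(a), len(b)
    # pooled variance over na + nb - 2 degrees of freedom
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5


print(round(t_statistic([1, 2, 3, 4], [3, 4, 5, 6]), 4))  # about -2.1909
```

The degrees of freedom, `len(a) + len(b) - 2`, match the `df=len(epochs.events) - 2` reported in the notebook's output string.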
mne-tools/mne-tools.github.io
|
0.20/_downloads/81308ca6ca6807326a79661c989cfcba/plot_make_report.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Make an MNE-Report with a Slider\nIn this example, MEG evoked data are plotted in an html slider.",
"# Authors: Teon Brooks <teon.brooks@gmail.com>\n# Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\nfrom mne.report import Report\nfrom mne.datasets import sample\nfrom mne import read_evokeds\nfrom matplotlib import pyplot as plt\n\n\ndata_path = sample.data_path()\nmeg_path = data_path + '/MEG/sample'\nsubjects_dir = data_path + '/subjects'\nevoked_fname = meg_path + '/sample_audvis-ave.fif'",
"Do standard folder parsing (this can take a couple of minutes):",
"report = Report(image_format='png', subjects_dir=subjects_dir,\n info_fname=evoked_fname, subject='sample',\n raw_psd=False) # use False for speed here\nreport.parse_folder(meg_path, on_error='ignore', mri_decim=10)",
"Add a custom section with an evoked slider:",
"# Load the evoked data\nevoked = read_evokeds(evoked_fname, condition='Left Auditory',\n baseline=(None, 0), verbose=False)\nevoked.crop(0, .2)\ntimes = evoked.times[::4]\n# Create a list of figs for the slider\nfigs = list()\nfor t in times:\n figs.append(evoked.plot_topomap(t, vmin=-300, vmax=300, res=100,\n show=False))\n plt.close(figs[-1])\nreport.add_slider_to_section(figs, times, 'Evoked Response',\n image_format='png') # can also use 'svg'\n\n# Save the report\nreport.save('my_report.html', overwrite=True)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NAU-CFL/Python_Learning_Source
|
reference_notebooks/Notes-03.ipynb
|
mit
|
[
"Conditionals and Recursion\nModulus Operator\nThe modulus operator works on integers and yields the remainder when the first operand is divided by the second. In Python, the modulus operator is a percent sign (%). The syntax is the same as for other operators:",
"quotient = 7 / 3\nprint(format(quotient, '.2f'))\n\nremainder = 7 % 3\nprint(remainder)",
"The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another:\nif x % y is zero, then x is divisible by y.\n\nAlso, you can extract the right-most digit or digits from a number. \nx % 10 yields the right-most digit of x (in base 10). Similarly x % 100 yields the last two digits.\n\nBoolean Expressions\nA boolean expression is an expression that is either true or false. The following examples use the operator ==, which compares two operands and produces True if they are equal and False otherwise:",
"5 == 5\n\n5 == 6",
"The True and False are special values that belong to the type bool; they are not strings:",
"type(True), type(False)",
"The == operator is one of the relational operators; the others are:\n x != y # x is not equal to y\n x > y # x is greater than y\n x < y # x is less than y\n x >= y # x is greater than or equal to y\n x <= y # x is less than or equal to y\nAlthough these operations are probably familiar to you, the Python symbols are different from the mathematical symbols. A common error is to use a single equal sign (=) instead of a double equal sign (==). Remember that = is an assignment operator and == is a relational operator. There is no such thing as =< or =>.\nLogical Operators\nThere are three logical operators: and, or, and not. The semantics (meaning) of these operators is similar to their meaning in English. For example, x > 0 and x < 10 is true only if x is greater than 0 and less than 10.\nn%2 == 0 or n%3 == 0 is true if either of the conditions is true, that is, if the number is divisible by 2 or 3.\n\nConditional Execution\nIn order to write useful programs, we almost always need the ability to check conditions and change the behavior of the program accordingly. Conditional statements give us this ability. The simplest form is the if statement:\nPython\nif x > 0:\n print('x is positive')\nThe boolean expression after if is called the condition. If it is true, then the indented statement gets executed. If not, nothing happens.\nThe if statements have the same structure as function definitions: a header followed by an indented body. Statements like this are called compound statements.\nThere is no limit on the number of statements that can appear in the body, but there has to be at least one. Occasionally, it is useful to have a body with no statements (usually as a place keeper for code you haven’t written yet). In that case, you can use the pass statement, which does nothing.\nPython\nif x < 0:\n pass # need to handle negative values!\nAlternative Execution\nA second form of the if statement is alternative execution, in which there are two possibilities and the condition determines which one gets executed. The syntax looks like this:\nPython\nif x%2 == 0:\n print('x is even')\nelse:\n print('x is odd')\nIf the remainder when x is divided by 2 is 0, then we know that x is even, and the program displays a message to that effect. If the condition is false, the second set of statements is executed. Since the condition must be true or false, exactly one of the alternatives will be executed. The alternatives are called branches, because they are branches in the flow of execution.\nChained Conditionals\nSometimes there are more than two possibilities and we need more than two branches. One way to express a computation like that is a chained conditional:",
"x = 10\ny = 9\nif x < y:\n print('x is less than y')\nelif x > y:\n print('x is greater than y')\nelse:\n print('x and y are equal')",
"The elif is an abbreviation of “else if.” Again, exactly one branch will be executed. There is no limit on the number of elif statements. If there is an else clause, it has to be at the end, but there doesn’t have to be one.\nPython\nif choice == 'a':\n draw_a()\nelif choice == 'b':\n draw_b()\nelif choice == 'c':\n draw_c()\nEach condition is checked in order. If the first is false, the next is checked, and so on. If one of them is true, the corresponding branch executes, and the statement ends. Even if more than one condition is true, only the first true branch executes.\nNested Conditionals\nOne conditional can also be nested within another. We could have written the trichotomy example like this:",
"x = 18\ny = 20\nif x == y:\n print('x and y are equal')\nelse:\n if x < y:\n print('x is less than y')\n else:\n print('x is greater than y')",
"The outer conditional contains two branches. The first branch contains a simple statement. The second branch contains another if statement, which has two branches of its own. Those two branches are both simple statements, although they could have been conditional statements as well.\nAlthough the indentation of the statements makes the structure apparent, nested conditionals become difficult to read very quickly. In general, it is a good idea to avoid them when you can.\nLogical operators often provide a way to simplify nested conditional statements. For example, we can rewrite the following code using a single conditional:",
"x = 8\nif 0 < x:\n if x < 10:\n print('x is a positive single-digit number.')",
"The print statement is executed only if we make it past both conditionals, so we can get the same effect with the and operator:",
"if 0 < x and x < 10:\n print('x is a positive single-digit number.')",
"Recursion\nIt is legal for one function to call another; it is also legal for a function to call itself. It may not be obvious why that is a good thing, but it turns out to be one of the most magical things a program can do. For example, look at the following function:",
"def countdown(n):\n if n <= 0:\n print('Blastoff!')\n else:\n print(n)\n countdown(n-1)",
"If n is 0 or negative, it outputs the word “Blastoff!” Otherwise, it outputs n and then calls a function named countdown (itself), passing n-1 as an argument.",
"countdown(3)",
"A function that calls itself is recursive; the process is called recursion.\nAs another example, we can write a function that prints a string n times.",
"def print_n(s, n):\n if n <= 0:\n return\n print(s)\n print_n(s, n-1)",
"If n <= 0 the return statement exits the function. The flow of execution immediately returns to the caller, and the remaining lines of the function are not executed.\nThe rest of the function is similar to countdown: if n is greater than 0, it displays s and then calls itself to display s n - 1 additional times. So the number of lines of output is 1 + (n - 1), which adds up to n.\nFor simple examples like this, it is probably easier to use a for loop. But we will see examples later that are hard to write with a for loop and easy to write with recursion, so it is good to start early.\nKeyboard Input\nThe programs we have written so far are a bit rude in the sense that they accept no input from the user. They just do the same thing every time.\nPython 3 provides a built-in function called input that gets input from the keyboard.\nWhen this function is called, the program stops and waits for the user to type something. When the user presses Return or Enter, the program resumes and input returns what the user typed as a string.",
"text = input()\n\nprint(text)",
"Before getting input from the user, it is a good idea to print a prompt telling the user what\nto input. input can take a prompt as an argument:",
"name = input('What...is your name?\\n')\n\nprint(name)",
"The sequence \\n at the end of the prompt represents a newline, which is a special character that causes a line break. That’s why the user’s input appears below the prompt.",
"prompt = 'What...is the airspeed velocity of an unladen swallow?\\n'\nspeed = input(prompt)\n\nint(speed)",
"But if the user types something other than a string of digits, you get an error:",
"speed = input(prompt)\n\nint(speed)",
"Exercises\nExercise 1: Fermat’s Last Theorem says that there are no positive integers a, b, and c such that \n$$\na^n + b^n = c^n\n$$\nfor any values of n greater than 2.\n1.\nWrite a function named check_fermat that takes four parameters (a, b, c, n) and that checks to see if Fermat’s theorem holds. If n is greater than 2 and it turns out to be true that\n$$\na^n + b^n = c^n\n$$\nthe program should print, “Holy smokes, Fermat was wrong!” Otherwise the program should print, “No, that doesn’t work.”\n2.\nWrite a function that prompts the user to input values for a, b, c and n, converts them to integers, and uses check_fermat to check whether they violate Fermat’s theorem.\nExercise 2: If you are given three sticks, you may or may not be able to arrange them in a triangle. For example, if one of the sticks is 12 inches long and the other two are one inch long, it is clear that you will not be able to get the short sticks to meet in the middle. For any three lengths, there is a simple test to see if it is possible to form a triangle:\nIf any of the three lengths is greater than the sum of the other two, then you cannot form a triangle. Otherwise, you can. (If the sum of two lengths equals the third, they form what is called a “degenerate” triangle.)\n\nWrite a function named is_triangle that takes three integers as arguments, and that prints either “Yes” or “No,” depending on whether you can or cannot form a triangle from sticks with the given lengths.\nWrite a function that prompts the user to input three stick lengths, converts them to integers, and uses is_triangle to check whether sticks with the given lengths can form a triangle."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
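One possible solution sketch for the two exercises that close the notebook above. The functions here return their message instead of printing it, a deliberate deviation from the exercise statement that makes them easier to test; wrapping the return value in print recovers the requested behavior:

```python
def check_fermat(a, b, c, n):
    """Exercise 1: would a, b, c, n violate Fermat's Last Theorem?"""
    if n > 2 and a**n + b**n == c**n:
        return "Holy smokes, Fermat was wrong!"
    return "No, that doesn't work."


def is_triangle(x, y, z):
    """Exercise 2: a triangle is impossible only if one length
    exceeds the sum of the other two (degenerate counts as Yes)."""
    if x > y + z or y > x + z or z > x + y:
        return "No"
    return "Yes"


print(check_fermat(3, 4, 5, 2))  # No, that doesn't work. (theorem only concerns n > 2)
print(is_triangle(3, 4, 5))      # Yes
print(is_triangle(1, 1, 12))     # No
```

The second half of each exercise (prompting the user and converting with int) would simply wrap these functions with calls to input.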
ucsd-ccbb/jupyter-genomics
|
notebooks/networkAnalysis/network_viz_interactive_widget/network_viz_interactive_widget.ipynb
|
mit
|
[
"This notebook will create an interactive widget to illustrate network interconnections",
"import numpy as np\nfrom pandas import *\nimport networkx as nx\nimport matplotlib.pyplot as plt\n \n% matplotlib inline",
"Import real gene network (from macular degeneration experiment)\nAlso import the measured differential expression levels\nNOTE: change paths/filenames in this cell to apply network visualizer to other datasets. Network format from genemania (e.g. columns are 'Entity 1', 'Entity 2', 'Weight', 'Network_group', 'Networks')\nNOTE: if no fold change analysis desired, set diff_exp_analysis=False",
"# set file name and directory: network format from GeneMania\n\n\nfilename = 'ayyagari_mass_spec_MD_network/09232015_ayyagari_mass_spec_genemania_network.txt' \n\nmd_network = read_csv(filename, sep='\\t', header=6)\nmd_network.columns = ['Entity 1','Entity 2', 'Weight','Network_group','Networks']\n\nprint(md_network.head())\n\n# also import differential expression\ndiff_exp_analysis = True\nif diff_exp_analysis:\n md_diff_exp = read_csv('ayyagari_mass_spec_MD_network/09232015_ayyagari_mass_spec_fold_changes.csv',sep=',')\n md_diff_exp.index = md_diff_exp['GeneSymbol']\n \n # drop empty columns\n md_diff_exp = md_diff_exp.dropna(axis='columns',how='all')\n\n md_diff_exp.columns = ['GeneSymbol','FoldChange'] # rename columns to be more universal\n print('\\n \\n Differential expression (fold change): \\n')\n print(md_diff_exp.FoldChange[1:20])\n \n\n\n",
"Section: Implement heat diffusion (HotNet2)\nNote- these functions moved to plot_network module\nCreate sample network\nCreate a random network using networkx, for use in testing widget",
"G_rand = nx.connected_watts_strogatz_graph(200,4,.1)\n#G_rand = nx.random_graphs.powerlaw_cluster_graph(100,10,.2)\nedges = G_rand.edges()\nnumedges = len(G_rand.edges())\nfor e in range(numedges):\n edge_temp = edges[e]+(np.random.rand(),)\n edges[e] = edge_temp\n\nnodes = G_rand.nodes()\nG_rand.add_weighted_edges_from(edges)\n\n\n# rename nodes to strings for compatibility with widget\nstring_nodes = [str(i) for i in nodes]\nnode_map = dict(zip(nodes,string_nodes))\nG_rand = nx.relabel_nodes(G_rand,node_map,copy=False)\n\n\n\n# use md_network to build adjacency matrix\nG_MD = nx.Graph()\nG_MD.add_nodes_from(list(md_network['Entity 1']))\nprint(G_MD.nodes())\nprint('number of nodes = '+ str(len(G_MD.nodes())))\n\n# add some edges\nedge_list = zip(list(md_network['Entity 1']),list(md_network['Entity 2']))\nG_MD.add_edges_from(edge_list)\n\nprint('number of edges = ' + str(len(G_MD.edges())))\n\n\n# create version with weighted edges\nG_MD_w = nx.Graph()\nG_MD_w.add_nodes_from(G_MD.nodes())\nedge_list_w = zip(list(md_network['Entity 1']),list(md_network['Entity 2']),list(md_network['Weight']))\n \nG_MD_w.add_weighted_edges_from(edge_list_w)\n\nimport imp\nimport plot_network\nimp.reload(plot_network)\n\nfrom IPython.html.widgets import interact\nfrom IPython.html import widgets\nimport matplotlib.colorbar as cb\nimport seaborn as sns\nimport community\n\n# import network plotting module\nfrom plot_network import *\n\nGtest = nx.Graph()\n\n# select whether you want to analyze real data or simulated network\nnet_type = 'data'\n\nif net_type == 'data':\n Gtest.add_nodes_from(G_MD_w.nodes())\n Gtest.add_edges_from(G_MD_w.edges(data=True))\n \n # prep border colors\n nodes = Gtest.nodes()\n gene_list = md_diff_exp['GeneSymbol'].unique()\n \n if diff_exp_analysis:\n diff_exp = Series(md_diff_exp.FoldChange)\n genes_intersect = np.intersect1d(gene_list,nodes)\n border_cols = Series(index=nodes)\n\n for i in genes_intersect:\n if diff_exp[i]=='Unmeasured':\n border_cols[i] = np.nan\n else:\n border_cols[i] = diff_exp[i] \n else: # if no differential expression data\n border_cols = [None]\n\nelif net_type == 'random':\n Gtest.add_nodes_from(G_rand.nodes())\n Gtest.add_edges_from(G_rand.edges(data=True))\n \n nodes = Gtest.nodes()\n \n # make border colors array of ones\n border_cols = np.ones(len(nodes))\n border_cols = Series(border_cols,index=nodes)\n \n # make other interaction type\n node_0_list = [u for (u,v) in Gtest.edges()]\n node_1_list = [v for (u,v) in Gtest.edges()]\n \n NGlist = ['Other' for i in range(len(Gtest.edges()))]\n \n md_network = DataFrame({'Entity 1':node_0_list,'Entity 2': node_1_list,'Weight':np.ones(len(Gtest.edges())),\n 'Network_group':NGlist})\n\n\nnumnodes = len(Gtest)\n\n# make these three global to feed into widget\nglobal Gtest\n\nglobal border_cols\n\nglobal md_network\n\ndef plot_network_shell(focal_node_name,edge_thresh=.5,network_algo='spl',map_degree=True, plot_border_col=False, draw_shortest_paths=True,\n coexpression=True, colocalization=True, other=False,physical_interactions=False,\n predicted_interactions=False,shared_protein_domain=False):\n \n # this is the main plotting function, called from plot_network module\n fig = plot_network(Gtest, border_cols, md_network,\n focal_node_name, edge_thresh, network_algo, map_degree, plot_border_col, draw_shortest_paths,\n coexpression, colocalization, other, physical_interactions, predicted_interactions, shared_protein_domain)\n\n\n return fig\n\n# threshold slider parameters\nif net_type=='data':\n min_thresh = np.min(md_network['Weight'])\n max_thresh = np.max(md_network['Weight']/50)\nelse:\n min_thresh = 0\n max_thresh = 1\n \nthresh_step = (max_thresh-min_thresh)/1000.0\n\ninteract(plot_network_shell, focal_node_name=list(np.sort(nodes)),\n edge_thresh=widgets.FloatSliderWidget(min=min_thresh,max=max_thresh,step=thresh_step,value=min_thresh,description='edge threshold'),\n network_algo = ['community','clustering_coefficient','hotnet2','pagerank','spl']);\n",
"To do: implement effective distance\n(From Brockmann, Helbing: The Hidden Geometry of Complex, Network-Driven Contagion Phenomena)\nCalculate the effective distance as:\n\nPij is the fraction of connections going from j to i\nPij = Aij / sum_i(Aij)\ndij = 1 - log(Pij)\nD = all shortest paths under the edge lengths dij\n\nAlso to do:\n\nImplement alternative module/clustering mechanisms\nImplement force-directed edge bundling\nAdd an animation component (e.g. heat spreading from a source)\nHow do network measures change with node removal? Can we identify 'bottleneck' regions? Which is the best node/edge to remove?\nCalculate the network backbone (http://www.pnas.org/content/106/16/6483.abstract)",
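The effective-distance recipe above can be sketched with the standard library alone; `effective_distance` is a hypothetical helper name (it is not part of this notebook or of networkx), and the column-normalization of P follows my reading of the Brockmann & Helbing convention, so treat both as assumptions:

```python
import heapq
import math

def effective_distance(A):
    """All-pairs effective distance for a weighted adjacency matrix A.

    P[i][j] = A[i][j] / sum_i A[i][j]   (fraction of j's outflow going to i;
                                         a column-normalization assumption)
    d[i][j] = 1 - log(P[i][j])          (effective length of the step j -> i)
    D[s][t] = shortest path from s to t over the lengths d (Dijkstra).
    """
    n = len(A)
    col = [sum(A[i][j] for i in range(n)) for j in range(n)]
    # adjacency list: from node j you can step to node i at cost 1 - log(P[i][j])
    steps = [[] for _ in range(n)]
    for j in range(n):
        for i in range(n):
            if A[i][j] > 0 and col[j] > 0:
                steps[j].append((i, 1.0 - math.log(A[i][j] / col[j])))
    D = [[math.inf] * n for _ in range(n)]
    for s in range(n):
        D[s][s] = 0.0
        pq = [(0.0, s)]
        while pq:
            dist, u = heapq.heappop(pq)
            if dist > D[s][u]:
                continue  # stale queue entry
            for v, w in steps[u]:
                if dist + w < D[s][v]:
                    D[s][v] = dist + w
                    heapq.heappush(pq, (dist + w, v))
    return D

# tiny path graph 0 - 1 - 2 with unit weights
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
D = effective_distance(A)
```

On this toy graph the step away from an endpoint is certain (P = 1, so d = 1), while each step away from the middle node splits its outflow (P = 0.5, so d = 1 - log 0.5).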
"HEBim = plt.imread('hierarchical_edge-bundling3.png')\n#plt.figure(figsize=(12,12))\n#plt.imshow(HEBim);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
thalesians/tsa
|
src/jupyter/python/foundations/linear-algebra-2.ipynb
|
apache-2.0
|
[
"",
"# Copyright (c) Thalesians Ltd, 2018-2019. All rights reserved\n# Copyright (c) Paul Alexander Bilokon, 2018-2019. All rights reserved\n# Author: Paul Alexander Bilokon <paul@thalesians.com>\n# Version: 2.0 (2019.04.19)\n# Previous versions: 1.0 (2018.08.03)\n# Email: education@thalesians.com\n# Platform: Tested on Windows 10 with Python 3.6",
"Linear algebra — part ii: matrices\nMotivation\nIn data science, machine learning (ML), and artificial intelligence (AI), we usually deal not with single numbers but with multivariate (i.e. containing multiple elements or entries) lists of numbers — mathematically speaking, vectors, — and multivariate tables of numbers — mathematically speaking, matrices. Therefore we solve multivariate equations, apply multivariate calculus to find optima of multivariate functions, etc.\nThe branch of mathematics that studies vectors, matrices, and related mathematical objects is called linear algebra. It is one of the most practically useful areas of mathematics in applied work and a prerequisite for data science, machine learning (ML), and artificial intelligence (AI).\nObjectives\n\nTo consider matrices.\nTo introduce the transpose of a matrix.\nTo demonstrate that matrices, like vectors, form a vector space.\nTo introduce matrix multiplication and consider four different views of matrix multiplication.\nTo introduce the identity matrix.\nTo introduce linear systems.\nTo introduce matrix inverses.\nTo demonstrate that matrices \"work\" on vectors as linear operators.\nTo introduce the notion of a determinant.\nTo consider in more detail the homogeneous and nonhomogeneous linear systems.\nTo consider when a square matrix is invertible and introduce the invertible matrix theorem.\nTo introduce gradients (grad), Jacobians, and Hessians.\n\nSo that plots can appear inline in our Jupyter notebook, we should issue the \"magic\" command",
"%matplotlib inline",
"We also import several Python libraries that we use in this Jupyter notebook:",
"import math\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D",
"Matrices\nVectors can be further generalized — to matrices. Instead of lists of numbers we could consider tables of numbers. E.g., we could have a two-by-three matrix (one with two rows and three columns):\n$$\\mathbf{A} = \\begin{pmatrix} 3.5 & 7 & 8.3 \\ 5.73 & 0 & -5 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3}.$$\nVectors are matrices too. By default, we think of vectors as column vectors, i.e.\n$$\\begin{pmatrix} -3 \\ 3 \\ 5 \\end{pmatrix} \\in \\mathbb{R}^{3 \\times 1},$$\nalthough we may also deal with row vectors,\n$$\\begin{pmatrix} -3 & 3 & 5 \\end{pmatrix} \\in \\mathbb{R}^{1 \\times 3}.$$\nMatrices can be implemented in Python as two-dimensional NumPy arrays. In particular, we can represent the $2 \\times 3$ matrix $\\mathbf{A}$, which we have just introduced, as",
"A = np.array([[3.5, 7., 8.3], [5.73, 0., -5.]])",
"In data science, we usually deal with tables of observations. Such as this table from the real estate valuation dataset from the paper\nYeh, I. C., & Hsu, T. K. (2018). Building real estate valuation models with comparative approach through case-based reasoning. Applied Soft Computing, 65, 260-271\nwhich can be found on https://archive.ics.uci.edu/ml/datasets/Real+estate+valuation+data+set",
"import pandas as pd\ndf = pd.DataFrame({\n 'transaction date': [2012.917, 2012.917, 2013.583, 2013.500, 2012.833], 'house age': [32.0, 19.5, 13.3, 13.3, 5.0],\n 'distance to the nearest MRT station': [84.87882, 306.59470, 561.98450, 561.98450, 390.56840],\n 'number of convenience stores': [10, 9, 5, 5, 5],\n 'latitude': [24.98298, 24.98034, 24.98746, 24.98746, 24.97937],\n 'longitude': [121.54024, 121.53951, 121.54391, 121.54391, 121.54245],\n 'house price per unit area': [37.9, 42.2, 47.3, 54.8, 43.1]\n}, columns=[\n 'transaction date', 'house age', 'distance to the nearest MRT station', 'number of convenience stores',\n 'latitude', 'longitude', 'house price per unit area'\n])\ndf",
"It is natural to represent such tables as matrices.\nWhile we usually use Pandas DataFrames to represent such tables in Python, as we just did above, the raw data, which is used for calculations, is usually represented as NumPy arrays. We can obtain the NumPy array representing the matrix behind the above table with",
"df.values",
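As a quick aside, the raw matrix view makes row and column access straightforward; the sketch below hard-codes the first two rows of the table above as a plain NumPy array:

```python
import numpy as np

# the first two observations from the real estate table above,
# columns in the same order as the DataFrame
X = np.array([
    [2012.917, 32.0, 84.87882, 10, 24.98298, 121.54024, 37.9],
    [2012.917, 19.5, 306.59470, 9, 24.98034, 121.53951, 42.2],
])

X.shape   # (observations, features)
X[0, :]   # the first observation (a row)
X[:, 1]   # the 'house age' column
```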
"Transposes\nThe transpose of a $p \\times q$ matrix $\\mathbf{A}$ is the $q \\times p$ matrix $\\mathbf{A}^{\\intercal}$ such that its $(i, j)$-th element is equal to the $(j, i)$-th element of $\\mathbf{A}$:\n$$(\\mathbf{A}^{\\intercal})_{i, j} = (\\mathbf{A})_{j, i}, \\quad 1 \\leq i \\leq q, 1 \\leq j \\leq p.$$\nThus, if\n$$\\mathbf{A} = \\begin{pmatrix} 3.5 & 7 & 8.3 \\ 5.73 & 0 & -5 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3},$$\nthen\n$$\\mathbf{A}^{\\intercal} = \\begin{pmatrix} 3.5 & 5.73 \\ 7 & 0 \\ 8.3 & -5 \\end{pmatrix} \\in \\mathbb{R}^{3 \\times 2}.$$\nThus we can think of the transpose of a matrix as a kind of flip, in which its rows become columns (and vice versa).\nIt is easy to obtain the transpose of a matrix in NumPy:",
"A = np.array([[3.5, 7., 8.3], [5.73, 0., -5.]])\nA\n\nA.T",
"A matrix that is equal to its transpose is called symmetric. It goes without saying that a symmetric matrix must be square, i.e. the number of rows in that matrix must equal the number of columns.\nThe matrix\n$$\\mathbf{A} = \\begin{pmatrix} 3.5 & 7 & 8.3 \\ 5.73 & 0 & -5 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3},$$\nwhich we introduced above, is neither square, nor, of course, is it symmetric.\nThe matrix\n$$\\mathbf{B} = \\begin{pmatrix} 18 & 4 \\ -11 & 22 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 2}$$\nis square, but it is not symmetric.\nThe matrix\n$$\\mathbf{C} = \\begin{pmatrix} 18 & 4 \\ 4 & 22 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 2}$$\nis a square, symmetric matrix. It is clearly equal to its transpose, $\\mathbf{C}^{\\intercal}$.\nMatrices are a vector space\nTwo matrices with the same number of rows and the same number of columns can be added, elementwise, just like vectors.\nFor example, if\n$$\\mathbf{A} = \\begin{pmatrix} 3.5 & 7 & 8.3 \\ 5.73 & 0 & -5 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3}, \\quad \\mathbf{B} = \\begin{pmatrix} 4 & 18 & 22 \\ -3 & 5.7 & 9.3 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3},$$\nthen\n$$\\mathbf{A} + \\mathbf{B} = \\begin{pmatrix} 3.5 + 4 & 7 + 18 & 8.3 + 22 \\ 5.73 + (-3) & 0 + 5.7 & -5 + 9.3 \\end{pmatrix} = \\begin{pmatrix} 7.5 & 25 & 30.3 \\ 2.73 & 5.7 & 4.3 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3}.$$\nWe can conveniently add up matrices in Python using the habitual $+$ operator if the matrices are represented as NumPy arrays:",
"A = np.array([[3.5, 7., 8.3], [5.73, 0., -5.]])\n\nB = np.array([[4., 18., 22.], [-3., 5.7, 9.3]])\n\nA + B",
"Also, just like vectors, matrices can be multiplied (elementwise) by scalars.\nFor example, if\n$$\\mathbf{A} = \\begin{pmatrix} 3.5 & 7 & 8.3 \\ 5.73 & 0 & -5 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3},$$\nthen\n$$3 \\cdot \\mathbf{A} = 3 \\cdot \\begin{pmatrix} 3.5 & 7 & 8.3 \\ 5.73 & 0 & -5 \\end{pmatrix} = \\begin{pmatrix} 3 \\cdot 3.5 & 3 \\cdot 7 & 3 \\cdot 8.3 \\ 3 \\cdot 5.73 & 3 \\cdot 0 & 3 \\cdot (-5) \\end{pmatrix} = \\begin{pmatrix} 10.5 & 21 & 24.9 \\ 17.19 & 0 & -15 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3}.$$",
"3. * A",
"Taken with the operations of matrix addition and multiplication by scalars, the $p \\times q$ matrices form a vector space, which we can denote by $\\mathbb{R}^{p \\times q}$, just as we denote the corresponding set.\nExercise\nShow that, if $\\mathbf{A}, \\mathbf{B} \\in \\mathbb{R}^{p \\times q}$, then\n$$(\\mathbf{A} + \\mathbf{B})^{\\intercal} = \\mathbf{A}^{\\intercal} + \\mathbf{B}^{\\intercal}.$$\nExercise\nShow that, if $\\mathbf{A} \\in \\mathbb{R}^{p \\times p}$, then\n$$\\frac{1}{2} (\\mathbf{A} + \\mathbf{A}^{\\intercal})$$\nis symmetric.\nIn numerical analysis this trick is often employed when the matrix in an algorithm needs to be symmetric, but isn't quite symmetric (this may happen after several iterations of the algorithm). We can then \"symmetrize\" it:",
"A = np.array([[9., -5., 11.], [-5., 25., 4.], [12., 5., 36.]])\nA\n\n.5 * (A + A.T)",
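We can confirm programmatically that the symmetrized result equals its own transpose; a small sketch using np.allclose:

```python
import numpy as np

A = np.array([[9., -5., 11.], [-5., 25., 4.], [12., 5., 36.]])
S = .5 * (A + A.T)

# a matrix is symmetric iff it equals its transpose
is_symmetric = np.allclose(S, S.T)
```

The original A fails the same check, which is exactly why the symmetrization trick is needed.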
"Matrix multiplication\nAlthough multiplication of matrices by matrices is not one of the operations under which matrices form a vector space, it is no less important than matrix addition and multiplication by scalars.\nMatrix multiplication forms the backbone of many machine learning and scientific computing algorithms.\nWe can only multiply two matrices if their sizes are compatible: the number of columns of the first matrix must equal the number of rows of the second.\nThe product of two matrices, $\\mathbf{A} \\in \\mathbb{R}^{p \\times q}$ and $\\mathbf{B} \\in \\mathbb{R}^{q \\times r}$, is given by the matrix $\\mathbf{A}\\mathbf{B} \\in \\mathbb{R}^{p \\times r}$, whose $(i, j)$th element, $1 \\leq i \\leq p$, $1 \\leq j \\leq r$, is given by\n$$(\\mathbf{A}\\mathbf{B})_{i, j} = \\sum_{k=1}^q \\mathbf{A}_{i, k} \\mathbf{B}_{k, j}.$$\nConsider the matrices\n$$\\mathbf{A} = \\begin{pmatrix} 3.7 & 7 & 8.3 \\ 5.73 & 0 & -5 \\end{pmatrix} \\in \\mathbb{R}^{2 \\times 3}, \\quad \\mathbf{B} = \\begin{pmatrix} 10.3 & 5.3 \\ 7.8 & 0 \\ -3.57 & 9.3 \\end{pmatrix} \\in \\mathbb{R}^{3 \\times 2}.$$\nIn NumPy, matrix multiplication is implemented using np.dot:",
"A = np.array([[3.7, 7., 8.3], [5.73, 0., -5.]])\nB = np.array([[10.3, 5.3], [7.8, 0.0], [-3.57, 9.3]])\nnp.dot(A, B)",
"There are different ways of looking at matrix multiplication.\nFirst view of matrix multiplication\nWe can think of the matrix product by considering the individual elements of the result:\n$$\\mathbf{A}\\mathbf{B} = \\begin{pmatrix} 3.7 \\cdot 10.3 + 7 \\cdot 7.8 + 8.3 \\cdot (-3.57) & 3.7 \\cdot 5.3 + 7 \\cdot 0 + 8.3 \\cdot 9.3 \\ 5.73 \\cdot 10.3 + 0 \\cdot 7.8 + (-5) \\cdot (-3.57) & 5.73 \\cdot 5.3 + 0 \\cdot 0 + (-5) \\cdot 9.3 \\end{pmatrix}.$$\nNotice that the individual elements are given by\n$$(\\mathbf{A}\\mathbf{B})_{1,1} = \\begin{pmatrix} 3.7 & 7 & 8.3 \\end{pmatrix} \\begin{pmatrix} 10.3 \\ 7.8 \\ -3.57 \\end{pmatrix},$$\n$$(\\mathbf{A}\\mathbf{B})_{1,2} = \\begin{pmatrix} 3.7 & 7 & 8.3 \\end{pmatrix} \\begin{pmatrix} 5.3 \\ 0 \\ 9.3 \\end{pmatrix},$$\n$$(\\mathbf{A}\\mathbf{B})_{2,1} = \\begin{pmatrix} 5.73 & 0 & -5 \\end{pmatrix} \\begin{pmatrix} 10.3 \\ 7.8 \\ -3.57 \\end{pmatrix},$$\n$$(\\mathbf{A}\\mathbf{B})_{2,2} = \\begin{pmatrix} 5.73 & 0 & -5 \\end{pmatrix} \\begin{pmatrix} 5.3 \\ 0 \\ 9.3 \\end{pmatrix},$$\ni.e. by the dot products of the row vectors of $\\mathbf{A}$ with the column vectors of $\\mathbf{B}$.\nIncidentally, if we regard the vectors $\\mathbf{u}, \\mathbf{v} \\in \\mathbb{R}^k$, $k = 2, 3, 4, \\ldots$, as column vectors, then the dot product can be written as a matrix product\n$$\\langle \\mathbf{u}, \\mathbf{v} \\rangle = \\mathbf{u}^{\\intercal} \\mathbf{v},$$\nsince the transpose of a column vector is a row vector.\nThe transpose is on the \"inside\" of this product, a good mnemonic for remembering that the dot product is also known as the inner product. The result is $1 \\times 1$, so corresponds to a scalar.\nSecond view of matrix multiplication\nThere is another way of looking at this product. 
Notice that the first column of the product $\\mathbf{A}\\mathbf{B}$ is given by a linear combination of the columns of $\\mathbf{A}$, the weights being given by the values in the first column of $\\mathbf{B}$:\n$$(\\mathbf{A}\\mathbf{B})_{:,1} = 10.3 \\cdot \\begin{pmatrix} 3.7 \\ 5.73 \\end{pmatrix} + 7.8 \\cdot \\begin{pmatrix} 7 \\ 0 \\end{pmatrix} + (-3.57) \\cdot \\begin{pmatrix} 8.3 \\ -5 \\end{pmatrix}.$$\nSimilarly, the second column of $\\mathbf{A}\\mathbf{B}$ is given by a linear combination of the columns of $\\mathbf{A}$, the weights being given by the values in the second column of $\\mathbf{B}$:\n$$(\\mathbf{A}\\mathbf{B})_{:,2} = 5.3 \\cdot \\begin{pmatrix} 3.7 \\ 5.73 \\end{pmatrix} + 0 \\cdot \\begin{pmatrix} 7 \\ 0 \\end{pmatrix} + 9.3 \\cdot \\begin{pmatrix} 8.3 \\ -5 \\end{pmatrix}.$$\nThird view of matrix multiplication\nAlso notice that the first row of the product $\\mathbf{A}\\mathbf{B}$ is given by a linear combination of the rows of $\\mathbf{B}$, the weights being given by the values in the first row of $\\mathbf{A}$:\n$$(\\mathbf{A}\\mathbf{B})_{1,:} = 3.7 \\cdot \\begin{pmatrix} 10.3 & 5.3 \\end{pmatrix} + 7 \\cdot \\begin{pmatrix} 7.8 & 0 \\end{pmatrix} + 8.3 \\cdot \\begin{pmatrix} -3.57 & 9.3 \\end{pmatrix}.$$\nSimilarly, the second row of the product $\\mathbf{A}\\mathbf{B}$ is given by a linear combination of the rows of $\\mathbf{B}$, the weights being given by the values in the second row of $\\mathbf{A}$:\n$$(\\mathbf{A}\\mathbf{B})_{2,:} = 5.73 \\cdot \\begin{pmatrix} 10.3 & 5.3 \\end{pmatrix} + 0 \\cdot \\begin{pmatrix} 7.8 & 0 \\end{pmatrix} + (-5) \\cdot \\begin{pmatrix} -3.57 & 9.3 \\end{pmatrix}.$$\nNotice that matrix multiplication is not, in general, commutative: $\\mathbf{A}\\mathbf{B} \\neq \\mathbf{B}\\mathbf{A}$. In this it differs significantly from scalar multiplication.",
"C = np.array([[3., 5.], [0., -7.]])\nD = np.array([[0., -8.], [10., 12.]])\nnp.dot(C, D)\n\nnp.dot(D, C)",
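The first three views described above are easy to check numerically for these matrices: each entry of AB is a dot product of a row of A with a column of B, each column of AB is a combination of the columns of A, and each row of AB is a combination of the rows of B. A sketch:

```python
import numpy as np

A = np.array([[3.7, 7., 8.3], [5.73, 0., -5.]])
B = np.array([[10.3, 5.3], [7.8, 0.], [-3.57, 9.3]])
AB = np.dot(A, B)

# first view: (AB)[0, 0] is the dot product of row 0 of A and column 0 of B
view1 = np.dot(A[0, :], B[:, 0])

# second view: column 0 of AB is a combination of the columns of A,
# weighted by the entries of column 0 of B
view2 = B[0, 0] * A[:, 0] + B[1, 0] * A[:, 1] + B[2, 0] * A[:, 2]

# third view: row 0 of AB is a combination of the rows of B,
# weighted by the entries of row 0 of A
view3 = A[0, 0] * B[0, :] + A[0, 1] * B[1, :] + A[0, 2] * B[2, :]
```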
"Fourth view of matrix multiplication\nWhereas the inner product of two column vectors $\\mathbf{u}, \\mathbf{v} \\in \\mathbb{R}^{k \\times 1}$ is 1-by-1, so a scalar, the outer product, defined as $\\mathbf{u}\\mathbf{v}^{\\intercal}$, is a $k \\times k$ matrix. It is essentially a \"multiplication table\": its $(i, j)$th entry is given by $u_i \\cdot v_j$.\nThe fourth view regards matrix multiplication as a sum of outer products of the columns of $\\mathbf{A}$ with the rows of $\\mathbf{B}$:\n$$\\mathbf{A}\\mathbf{B} = \\begin{pmatrix} 3.7 \\ 5.73 \\end{pmatrix} \\begin{pmatrix} 10.3 & 5.3 \\end{pmatrix} + \\begin{pmatrix} 7 \\ 0 \\end{pmatrix} \\begin{pmatrix} 7.8 & 0 \\end{pmatrix} + \\begin{pmatrix} 8.3 \\ -5 \\end{pmatrix} \\begin{pmatrix} -3.57 & 9.3 \\end{pmatrix}.$$\nExercise\nWhile matrix multiplication is not commutative, it is associative:\n$$\\mathbf{A}(\\mathbf{B}\\mathbf{C}) = (\\mathbf{A}\\mathbf{B})\\mathbf{C}$$\n(we are assuming that the matrices are compatible, as required). Prove the associativity of matrix multiplication.\nExercise\nProve that\n$$(\\mathbf{A}\\mathbf{B})^{\\intercal} = \\mathbf{B}^{\\intercal}\\mathbf{A}^{\\intercal}.$$\nExercise\nProve that, if $\\mathbf{A} \\in \\mathbb{R}^{n \\times n}$, and $\\mathbf{x}, \\mathbf{y} \\in \\mathbb{R}^{n \\times 1}$, then\n$$\\mathbf{x}^{\\intercal} \\mathbf{A} \\mathbf{y} = \\sum_{j=1}^n \\sum_{k=1}^n A_{j,k} x_j y_k.$$\nWe sometimes refer to the above as the inner product induced by $\\mathbf{A}$.\nExercise\nLet $\\mathbf{A} \\in \\mathbb{R}^{n \\times n}$ and $\\mathbf{e}_1, \\mathbf{e}_2, \\ldots, \\mathbf{e}_n$ be the standard basis. Show that $A_{jk} = \\mathbf{e}_j^{\\intercal} \\mathbf{A} \\mathbf{e}_k$.\nThe identity matrix\nThe special square matrix $\\mathbf{I}_n \\in \\mathbb{R}^{n \\times n}$ with ones in the diagonal and zeros everywhere else is called the identity matrix. 
When multiplied by any compatible matrix on the left or on the right it gives the same matrix:\n$$\\mathbf{I}_n \\mathbf{A} = \\mathbf{A}, \\quad \\mathbf{B} \\mathbf{I}_n = \\mathbf{B}.$$\nIn NumPy, we can use np.eye to \"manufacture\" an identity matrix:",
"I2 = np.eye(2)\nI2\n\nC = np.array([[3., 5.], [0., -7.]])\nnp.dot(I2, C)\n\nnp.dot(C, I2)\n\nI3 = np.eye(3)\nI3\n\nI4 = np.eye(4)\nI4",
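Returning to the fourth view of matrix multiplication, the product can be rebuilt as a sum of outer products of the columns of A with the rows of B; np.outer makes this a short sketch:

```python
import numpy as np

A = np.array([[3.7, 7., 8.3], [5.73, 0., -5.]])
B = np.array([[10.3, 5.3], [7.8, 0.], [-3.57, 9.3]])

# sum over k of (column k of A) times (row k of B)
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
```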
"Linear systems\nRecall that in the section on linear combinations we tried to obtain an arbitrary vector $\\mathbf{b}$ as a linear combination of two other vectors, $\\mathbf{u} = \\begin{pmatrix} 4 \\ 6 \\end{pmatrix}$ and $\\mathbf{v} = \\begin{pmatrix} 5 \\ 3 \\end{pmatrix}$. In other words, we tried to find such $x_1$ and $x_2$ that\n$$x_1 \\mathbf{u} + x_2 \\mathbf{v} = \\mathbf{b}.$$\nWe found that finding $x_1$ and $x_2$ amounts to solving a system of linear equations, a linear system.\nIf we write $\\mathbf{u}$ and $\\mathbf{v}$ as columns of a matrix, $\\mathbf{A}$, we can express this linear system as a matrix equation,\n$$\\mathbf{A}\\mathbf{x} = \\mathbf{b},$$\nwhere $\\mathbf{x} = \\begin{pmatrix} x_1 \\ x_2 \\end{pmatrix}$.\nThe solution of linear systems is one of the most ubiquitous tasks in scientific computing, including machine learning.\nIt can be performed using straightforward algorithms, such as Gauss-Jordan elimination.\nIn Python, we can use np.linalg.solve(A, b) to solve linear systems:",
"A = np.array([[4., 5.], [6., 3.]])\nb = np.array([-7., 3.])\nnp.linalg.solve(A, b)",
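We can check the solution by substituting it back into the system:

```python
import numpy as np

A = np.array([[4., 5.], [6., 3.]])
b = np.array([-7., 3.])
x = np.linalg.solve(A, b)

# A x should reproduce b (up to floating-point rounding)
residual = np.dot(A, x) - b
```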
"Inverses\nThe inverse of a square matrix $\\mathbf{A} \\in \\mathbb{R}^{n \\times n}$ is another square matrix $\\mathbf{A}^{-1} \\in \\mathbb{R}^{n \\times n}$, such that\n$$\\mathbf{A} \\mathbf{A}^{-1} = \\mathbf{A}^{-1} \\mathbf{A} = \\mathbf{I}_n.$$",
"C = np.array([[3., 5.], [0., -7.]])\nCinv = np.linalg.inv(C)\nCinv\n\nnp.dot(C, Cinv)\n\nnp.dot(Cinv, C)",
"Note that, in the above example, solving the linear system amounts to inverting the matrix, since\n$$\\mathbf{A} \\mathbf{x} = \\mathbf{b}$$\nimplies\n$$\\mathbf{A}^{-1} \\mathbf{A} \\mathbf{x} = \\mathbf{A}^{-1} \\mathbf{b},$$\ni.e.\n$$\\mathbf{x} = \\mathbf{A}^{-1} \\mathbf{b}.$$",
"A = np.array([[4., 5.], [6., 3.]])\nb = np.array([-7., 3.])\nnp.linalg.solve(A, b)\n\nnp.linalg.inv(A)\n\nnp.dot(np.linalg.inv(A), b)",
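A practical aside (standard numerical-analysis advice, not specific to this notebook): production code rarely forms the inverse explicitly, because np.linalg.solve, which factorizes A instead, is cheaper and numerically more stable. Both routes agree on this small example:

```python
import numpy as np

A = np.array([[4., 5.], [6., 3.]])
b = np.array([-7., 3.])

x_solve = np.linalg.solve(A, b)      # preferred: factorize and back-substitute
x_inv = np.dot(np.linalg.inv(A), b)  # explicit inverse: fine for illustration
```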
"Matrices as linear operators\nWe can think of scalars as working (operating) on vectors when we multiply vectors by these scalars.",
"v = np.array([3., 5.])\nminus_half_v = -.5 * v\nthree_v = 3 * v\nminus_two_v = -2. * v\n\nplt.plot(0, 0, 'o')\nplt.annotate('$O$', [0, 0])\nplt.arrow(0, 0, v[0], v[1], shape='full', head_width=1, length_includes_head=True)\nplt.annotate('$v$', v)\nplt.arrow(0, 0, minus_half_v[0], minus_half_v[1], shape='full', head_width=1, length_includes_head=True)\nplt.annotate('$-\\\\frac{1}{2} v$', minus_half_v)\nplt.arrow(0, 0, three_v[0], three_v[1], shape='full', head_width=1, length_includes_head=True)\nplt.annotate('$3 v$', three_v)\nplt.arrow(0, 0, minus_two_v[0], minus_two_v[1], shape='full', head_width=1, length_includes_head=True)\nplt.annotate('$-2 v$', minus_two_v)\nplt.axis([-15.5, 15.5, -15.5, 15.5])\nplt.gca().set_aspect('equal', adjustable='box')",
"Note that scalars can only rescale vectors (hence the name: \"scalar\"). If the scalar is negative, the direction of the vector is \"flipped\" but remains along the same line.\nMatrices also operate on vectors. For this reason they are sometimes referred to as operators.\nConsider, for example, the two-dimensional rotation matrix that rotates the vector through the angle of $\\theta$:\n$$\\mathbf{A} = \\begin{pmatrix} \\cos \\theta & - \\sin \\theta \\ \\sin \\theta & \\cos \\theta \\end{pmatrix}:$$",
"angle = math.pi / 3. # In radians; equal to 60 degrees\nA = np.array([[math.cos(angle), -math.sin(angle)], [math.sin(angle), math.cos(angle)]])\nA\n\nA_v = np.dot(A, v)\nA_v\n\nplt.plot(0, 0, 'o')\nplt.annotate('$O$', [0, 0])\nplt.arrow(0, 0, v[0], v[1], shape='full', head_width=1, length_includes_head=True)\nplt.annotate('$v$', v)\nplt.arrow(0, 0, A_v[0], A_v[1], shape='full', head_width=1, length_includes_head=True)\nplt.annotate('$Av$', A_v)\nplt.axis([-15.5, 15.5, -15.5, 15.5])\nplt.gca().set_aspect('equal', adjustable='box')",
"Matrices aren't just any operators, they are linear operators, because, for any scalars $\\alpha, \\beta$ and (suitable dimension) vectors $\\mathbf{u}, \\mathbf{v}$,\n$$\\mathbf{A}(\\alpha \\mathbf{u} + \\beta \\mathbf{v}) = \\alpha \\mathbf{A} \\mathbf{u} + \\beta \\mathbf{A} \\mathbf{v}.$$\nThe integral, $\\int$, the sum, $\\sum$, and the probabilistic expectation, $\\mathbb{E}$, are also linear operators.",
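Linearity is easy to sanity-check numerically for a particular matrix and a particular pair of vectors (a check, not a proof); the vectors and scalars below are made up for illustration:

```python
import math
import numpy as np

angle = math.pi / 3.
A = np.array([[math.cos(angle), -math.sin(angle)],
              [math.sin(angle), math.cos(angle)]])

u = np.array([3., 5.])
v = np.array([-1., 2.])
alpha, beta = 2., -3.

lhs = np.dot(A, alpha * u + beta * v)             # A(alpha u + beta v)
rhs = alpha * np.dot(A, u) + beta * np.dot(A, v)  # alpha A u + beta A v
```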
"B = 2. * A\nB_v = np.dot(B, v)\n\nplt.plot(0, 0, 'o')\nplt.annotate('$O$', [0, 0])\nplt.arrow(0, 0, v[0], v[1], shape='full', head_width=1, length_includes_head=True)\nplt.annotate('$v$', v)\nplt.arrow(0, 0, B_v[0], B_v[1], shape='full', head_width=1, length_includes_head=True)\nplt.annotate('$Bv$', B_v)\nplt.axis([-15.5, 15.5, -15.5, 15.5])\nplt.gca().set_aspect('equal', adjustable='box')",
"The determinant\nThe determinant of a matrix $\\mathbf{A}$, denoted $\\det \\mathbf{A}$ or $|\\mathbf{A}|$, is a scalar value that can be computed from the elements of a square matrix and encodes certain properties of the linear transformation described by the matrix.\nIn the case of a 2-by-2 matrix\n$$\\mathbf{A} = \\begin{pmatrix} a & b \\ c & d \\end{pmatrix}$$\nthe determinant is given by\n$$|\\mathbf{A}| = ad - bc.$$\nThe determinant can be thought of as the volume scaling factor of the linear transformation described by the matrix.\nIn particular, our rotation matrix",
"angle = math.pi / 3. # In radians; equal to 60 degrees\nA = np.array([[math.cos(angle), -math.sin(angle)], [math.sin(angle), math.cos(angle)]])\nA",
"preserves the volume, so",
"np.linalg.det(A)",
"whereas the matrix given by",
"2. * A",
"quadruples it:",
"np.linalg.det(2. * A)",
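Two quick checks, sketched below: the 2-by-2 formula |A| = ad - bc agrees with np.linalg.det, and scaling an n-by-n matrix by c scales the determinant by c^n (which is why doubling the 2-by-2 rotation matrix quadrupled its determinant):

```python
import numpy as np

# 2-by-2 formula: |A| = ad - bc
A = np.array([[18., 4.], [-11., 22.]])
det_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
det_numpy = np.linalg.det(A)

# scaling: det(c A) = c**n * det(A) for an n-by-n matrix
c, n = 3., 2
det_scaled = np.linalg.det(c * A)
```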
"The determinant is zero if and only if the matrix has linearly dependent columns (equivalently, linearly dependent rows):",
"A = np.array([[1., 0., 1.], [2., 1., 0.], [0., 0., 2.]])\nA\n\nnp.linalg.det(A)",
"whereas",
"A = np.array([[1., 2., 1.], [2., 4., 0.], [0., 0., 2.]])\nA\n\nnp.linalg.det(A)",
"More on linear equations\nConsider again one of the most basic and important problems in linear algebra — finding such a vector $\\mathbf{x} \\in \\mathbb{R}^{n \\times 1}$ that\n$$\\mathbf{A} \\mathbf{x} = \\mathbf{b},$$\nwhere $\\mathbf{A} \\in \\mathbb{R}^{m \\times n}$ and $\\mathbf{b} \\in \\mathbb{R}^{m \\times 1}$, i.e., the solution of a linear system.\nRemember the four views of matrix multiplication? One of them regards the product $\\mathbf{A}\\mathbf{x}$ as a linear combination of the columns of $\\mathbf{A}$. Thus if we write\n$$\\mathbf{A} = \\begin{pmatrix} | & | & & | \\ \\mathbf{a}_{:,1} & \\mathbf{a}_{:,2} & \\cdots & \\mathbf{a}_{:,n} \\ | & | & & | \\end{pmatrix}, \\quad \\mathbf{x} = \\begin{pmatrix} x_1 \\ x_2 \\ \\vdots \\ x_n \\end{pmatrix},$$\nthen $\\mathbf{A}\\mathbf{x} = \\mathbf{b}$ can be written as\n$$x_1 \\mathbf{a}_{:,1} + x_2 \\mathbf{a}_{:,2} + \\ldots + x_n \\mathbf{a}_{:,n} = \\mathbf{b}.$$\nThus, for this equation to have a solution, $\\mathbf{b}$ must be expressible as a linear combination of the columns of $\\mathbf{A}$.\nIt is easy to come up with examples where this is not possible. For example, consider\n$$\\mathbf{A} = \\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\end{pmatrix}, \\quad \\mathbf{b} = \\begin{pmatrix} 0 \\ 0 \\ 1 \\end{pmatrix}.$$\nNo linear combination of the columns of $\\mathbf{A}$ gives $\\mathbf{b}$, so the equation $\\mathbf{A}\\mathbf{x} = \\mathbf{b}$ has no solutions.\nThe columns of $\\mathbf{A}$ generate a vector space whose elements are precisely their linear combinations. This vector space is called the column space of $\\mathbf{A}$, denoted $\\text{col}(\\mathbf{A})$, and it is a subspace of $\\mathbb{R}^m$. 
The equation $\\mathbf{A}\\mathbf{x} = \\mathbf{b}$ has solutions if and only if $\\mathbf{b}$ is in $\\text{col}(\\mathbf{A})$.\nNow consider the following example:\n$$\\mathbf{A} = \\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \\end{pmatrix}, \\quad \\mathbf{b} = \\begin{pmatrix} 0 \\ 5 \\ 0 \\end{pmatrix}.$$\nNow $\\mathbf{b}$ is clearly in the column space of $\\mathbf{A}$. For example, we can express it as five times the second column of $\\mathbf{A}$, so\n$$\\mathbf{x} = \\begin{pmatrix} 0 \\ 5 \\ 0 \\end{pmatrix}$$\nis a solution. However, we could also express it as $\\frac{5}{2}$ times the third column of $\\mathbf{A}$, so\n$$\\mathbf{x} = \\begin{pmatrix} 0 \\ 0 \\ 5/2 \\end{pmatrix}$$\nis also a solution. In fact, a suitable mixture of these two solutions will also be a solution. For example,\n$$\\mathbf{x} = \\frac{1}{4} \\begin{pmatrix} 0 \\ 5 \\ 0 \\end{pmatrix} + \\frac{3}{4} \\begin{pmatrix} 0 \\ 0 \\ 5/2 \\end{pmatrix} = \\begin{pmatrix} 0 \\ \\frac{5}{4} \\ \\frac{15}{8} \\end{pmatrix}$$\nis a solution:",
"A = np.array([[1., 0., 0.], [0., 1., 2.], [0., 0., 0.]])\nx = np.array([[0.], [5./4.], [15./8.]])\nnp.dot(A, x)",
"In fact, we could take any $w_1, w_2$ such that $w_1 + w_2 = 1$ and obtain a solution\n$$\\mathbf{x} = w_1 \\begin{pmatrix} 0 \\ 5 \\ 0 \\end{pmatrix} + w_2 \\begin{pmatrix} 0 \\ 0 \\ 5/2 \\end{pmatrix},$$\nso there are infinitely many solutions:",
"A = np.array([[1., 0., 0.], [0., 1., 2.], [0., 0., 0.]])\nw1 = -2.\nw2 = 1. - w1\nx = w1 * np.array([[0.], [5.], [0.]]) + w2 * np.array([[0.], [0.], [5./2.]])\nnp.dot(A, x)",
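When b does not lie in the column space of A, no exact solution exists; a common practical fallback is the least-squares solution, which minimizes the norm of Ax - b. A sketch with np.linalg.lstsq on the earlier no-solution example:

```python
import numpy as np

A = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
b = np.array([0., 0., 1.])

# rcond=None selects the modern default cutoff for small singular values
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# the best we can do: A x reproduces the part of b lying in col(A)
approx = np.dot(A, x)
```

Here the component of b along the third axis is unreachable, so the least-squares fit simply drops it, leaving a residual of norm 1.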
"Thus we have seen that $\\mathbf{A}\\mathbf{x} = \\mathbf{b}$ may have a single solution, infinitely many solutions, or no solutions at all.\nThe homogeneous system\nThe linear system $\\mathbf{A}\\mathbf{x} = \\mathbf{0}$ is known as a homogeneous system — whereas the linear system $\\mathbf{A}\\mathbf{x} = \\mathbf{b}$ with $\\mathbf{b} \\neq \\mathbf{0}$ is described as nonhomogeneous.\nClearly, for $\\mathbf{A}\\mathbf{x} = \\mathbf{0}$ to have any nontrivial solutions ($\\mathbf{x} = \\mathbf{0}$ is always a solution, a trivial solution), $A$ must have linearly dependent columns.\nThe solutions $\\mathbf{x}$ of the homogeneous system form a vector space, known as the nullspace or kernel.\nThe dimension of the column space of $\\mathbf{A}$ is known as the rank of $\\mathbf{A}$, whereas the dimension of the nullspace of $\\mathbf{A}$ is known as the nullity of $\\mathbf{A}$.\nIf $\\mathbf{A}$ is an $m$-by-$n$ matrix, with $n$ columns, then, according to the rank-nullity theorem,\n$$\\text{rank}(\\mathbf{A}) + \\text{nullity}(\\mathbf{A}) = n.$$\nIt can also be shown that the dimension of the vector space generated by the rows of $\\mathbf{A}$ — the row space of $\\mathbf{A}$ — is equal to the dimension of the column space of $\\mathbf{A}$ and is therefore also $\\text{rank}(\\mathbf{A})$.\nLet us again consider the example\n$$\\mathbf{A} = \\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \\end{pmatrix}, \\quad \\mathbf{b} = \\begin{pmatrix} 0 \\ 5 \\ 0 \\end{pmatrix}.$$\nThe dimension of the column space of this matrix $\\mathbf{A}$ is two. Since this matrix does not have three linearly independent columns, it is not full rank.\nThus $\\text{rank}(\\mathbf{A}) = 2$ and, by rank-nullity theorem, $\\text{nullity}(\\mathbf{A}) = 3 - 2 = 1$.\nSince $\\text{nullity}(\\mathbf{A}) > 0$, the equation $\\mathbf{A}\\mathbf{x} = \\mathbf{0}$ has a nontrivial solution. 
Indeed, one such solution is given by\n$$\\mathbf{x} = \\begin{pmatrix} 0 \\ 2 \\ -1 \\end{pmatrix}.$$\nAll scalar multiples of $\\mathbf{x}$ are also solutions. In fact, since $\\text{nullity}(\\mathbf{A}) = 1$ this gives us all solutions of $\\mathbf{A}\\mathbf{x} = \\mathbf{0}$ (otherwise there would be a basis for the nullspace of $\\mathbf{A}$ with two elements).\nThe invertible matrix theorem\nNot all square $n$-by-$n$ matrices have inverses. The ones that do, are called invertible or nonsingular, or nondegenerate, whereas the ones that don't — noninvertible, or singular, or degenerate.\nFor example,",
"A = np.array([[1., 0., 1.], [2., 1., 0.], [0., 0., 2.]])\nA",
"is invertible:",
"np.linalg.inv(A)",
"so its effect on some vector, say,",
"v = np.array([[3.], [5.], [7.]])\n\nnp.dot(A, v)",
"can be reversed by $\\mathbf{A}^{-1}$:",
"np.dot(np.linalg.inv(A), np.dot(A, v))",
"Whereas",
"A = np.array([[1., 2., 1.], [2., 4., 0.], [0., 0., 2.]])\nA",
"is singular. np.linalg.inv will raise an error if we try to apply it to this matrix.\nYou may have noticed that this matrix has linearly dependent columns. This is one way of seeing that this matrix is singular.\nIn fact, the invertible matrix theorem tells us that the following are all equivalent. Let $\\mathbf{A}$ be a square $n$-by-$n$ matrix:\n* $\\mathbf{A}$ is invertible, that is $\\mathbf{A}$ has an inverse, $\\mathbf{A}^{-1}$, (is nonsingular, is nondegenerate).\n* $\\det \\mathbf{A} \\neq 0$.\n* $\\mathbf{A}$ has full rank; that is $\\text{rank}(\\mathbf{A}) = n$.\n* The equation $\\mathbf{A}\\mathbf{x} = \\mathbf{0}$ has only the trivial solution $\\mathbf{x} = \\mathbf{0}$.\n* The nullspace (also known as kernel) of $\\mathbf{A}$ is trivial, that is, it contains only the zero vector.\n* The nullity of $\\mathbf{A}$ is zero.\n* The equation $\\mathbf{A}\\mathbf{x} = \\mathbf{b}$ has exactly one solution for each $\\mathbf{b} \\in \\mathbb{R}^n$.\n* The columns of $\\mathbf{A}$ are linearly independent.\n* The columns of $\\mathbf{A}$ span $\\mathbb{R}^n$.\n* The column space of $\\mathbf{A}$ is $\\mathbb{R}^n$.\n* The columns of $\\mathbf{A}$ form a basis for $\\mathbb{R}^n$.\n* The linear transformation mapping $\\mathbf{x}$ to $\\mathbf{A}\\mathbf{x}$ is a bijection from $\\mathbb{R}^n$ to $\\mathbb{R}^n$.\n* The transpose of $\\mathbf{A}$, $\\mathbf{A}^{\\intercal}$, is an invertible matrix.\n* The number zero is not an eigenvalue of $\\mathbf{A}$.\nGradients, Jacobians, and Hessians\nAt school we mostly study single-valued, univariate functions $f: \\mathbb{R} \\rightarrow \\mathbb{R}$, such as $f: x \\mapsto x^2$:",
"xs = np.linspace(-10., 10., 100)\nplt.plot(xs, [x*x for x in xs]);",
"A function is single-valued if its result is a scalar. We restrict ourselves to real numbers, so for us this will mean $f: \\cdot \\rightarrow \\mathbb{R}$.\nA function is multivalued if its result is a two- or higher-dimensional vector, $f: \\cdot \\rightarrow \\mathbb{R}^m$, $m > 1$.\nA function is univariate if its argument is a scalar, $f: \\mathbb{R} \\rightarrow \\cdot$.\nA function is multivariate if its argument is a two- or higher-dimensional vector, $f: \\mathbb{R}^n \\rightarrow \\cdot$, $n > 1$.\n\nMatrices and vectors help us define multivalued, multivariate functions, e.g. if $\\mathbf{x} \\in \\mathbb{R}^n$, $\\mathbf{A} \\in \\mathbb{R}^{m \\times n}$, $\\mathbf{b} \\in \\mathbb{R}^m$,\n$$f(\\mathbf{x}) = \\mathbf{A}\\mathbf{x} + \\mathbf{b},$$\nwe have a $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ function. Functions of this form are sometimes called linear functions.\nNot all multivalued and multivariate functions are of this form, but many are. And, if $m = 1$, we still get a single-valued function.\nDifferentiation can be generalized to higher dimensions. 
Just as we know that single-valued, univariate functions, such as\n$$f(x) = x^2$$\nhave derivatives\n$$\\frac{d}{dx} f(x) = 2x$$\nand second derivatives\n$$\\frac{d^2}{dx^2} f(x) = 2,$$\nmultivariate functions $f: \\mathbb{R}^n \\rightarrow \\cdot$ have partial derivatives\n$$\\frac{\\partial}{\\partial x_1} f(\\mathbf{x}), \\ldots, \\frac{\\partial}{\\partial x_n} f(\\mathbf{x})$$\nand partial second derivatives\n$$\\frac{\\partial^2}{\\partial x_1^2} f(\\mathbf{x}), \\frac{\\partial^2}{\\partial x_1 x_2} f(\\mathbf{x}), \\ldots, \\frac{\\partial^2}{\\partial x_1 x_n} f(\\mathbf{x}), \\frac{\\partial^2}{\\partial x_2 x_1} f(\\mathbf{x}), \\frac{\\partial^2}{\\partial x_2^2} f(\\mathbf{x}), \\frac{\\partial^2}{\\partial x_2 x_3} f(\\mathbf{x}) \\ldots, \\frac{\\partial^2}{\\partial x_n^2} f(\\mathbf{x}).$$\nThese are themselves functions (unless evaluated at a specific point, $\\mathbf{x}_0$, in which case they become scalar or vector values), and if $f$ is $m$-valued, then so are they.\nFor single-valued functions, the vector of first derivatives is known as the gradient or grad:\n$$\\nabla f (\\mathbf{x}) = \\begin{pmatrix} \\frac{\\partial}{\\partial x_1} f(\\mathbf{x}) \\ \\vdots \\ \\frac{\\partial}{\\partial x_n} f(\\mathbf{x}) \\end{pmatrix}.$$\nFor multivalued functions, instead of the column vector gradient, one usually works with the Jacobian:\n$$J (\\mathbf{x}) = \\begin{pmatrix} \\frac{\\partial}{\\partial x_1} f_1(\\mathbf{x}) & \\cdots & \\frac{\\partial}{\\partial x_n} f_1(\\mathbf{x}) \\ \\vdots & \\ddots & \\vdots \\ \\frac{\\partial}{\\partial x_1} f_m(\\mathbf{x}) & \\cdots & \\frac{\\partial}{\\partial x_n} f_m(\\mathbf{x}) \\end{pmatrix}.$$\nNotice that, for single-valued functions, the Jacobian is a row vector, a transpose of the gradient.\nAlso, for single-valued functions, it makes sense to define the matrix of second derivatives, the so-called Hessian:\n$$H (\\mathbf{x}) = \\begin{pmatrix} \\frac{\\partial^2}{\\partial x_1^2} 
f(\\mathbf{x}) & \\cdots & \\frac{\\partial^2}{\\partial x_1 \\partial x_n} f(\\mathbf{x}) \\ \\vdots & \\ddots & \\vdots \\ \\frac{\\partial^2}{\\partial x_n \\partial x_1} f(\\mathbf{x}) & \\cdots & \\frac{\\partial^2}{\\partial x_n^2} f(\\mathbf{x}) \\end{pmatrix}.$$\nJust as the first and second derivatives of a single-valued, univariate function can help us locate and classify its optimum, we use the gradient of a multivariate function to locate its stationary points and the Hessian to classify them (minimum, maximum, saddle point).\nJust as we can apply the (univariate) Newton-Leibniz differential calculus to find the first and second derivatives, we can use matrix calculus to deal with multivalued/multivariate functions. Its rules can be found on Wikipedia: https://en.wikipedia.org/wiki/Matrix_calculus\nBibliography\n\nAn excellent, deeper introduction to linear algebra can be found in Professor Gilbert Strang's video lectures for the 18.06 Linear Algebra course at MIT: https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/\nThe supporting textbook for that course is Introduction to Linear Algebra, 5th edition, by Gilbert Strang.\nA more recent version of this book updated for data science and deep learning is Linear Algebra and Learning from Data, by Gilbert Strang, published in 2019.\nAnother good text on linear algebra is Linear Algebra, 3rd edition, by John B. Fraleigh and Raymond A. Beauregard.\nAlso recommended is Schaum's Outline of Linear Algebra, 6th edition, by Seymour Lipschutz and Marc Lipson.\nFinally, we recommend getting hold of M2N1 — Numerical Analysis lecture notes by Brad Baxter, which, in addition to theory, contain some useful exercises: http://www.cato.tzo.com/brad/M2N1/m2n1_notes.pdf"
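The gradient and Hessian definitions above can be checked numerically with central finite differences. The sketch below is illustrative only (the quadratic test function, the matrix values, and the step sizes `h` are my choices, not from the text); for $f(\mathbf{x}) = \mathbf{x}^T \mathbf{A} \mathbf{x} + \mathbf{b}^T \mathbf{x}$ the analytic gradient is $(\mathbf{A} + \mathbf{A}^T)\mathbf{x} + \mathbf{b}$ and the Hessian is $\mathbf{A} + \mathbf{A}^T$:

```python
import numpy as np

def grad_fd(f, x, h=1e-5):
    # Central-difference approximation of the gradient of a scalar function f at x.
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian_fd(f, x, h=1e-4):
    # Finite-difference Hessian: differentiate the gradient once more, column by column.
    n = x.size
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros_like(x, dtype=float)
        e[j] = h
        H[:, j] = (grad_fd(f, x + e) - grad_fd(f, x - e)) / (2 * h)
    return H

A = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([1.0, -1.0])
f = lambda x: x @ A @ x + b @ x   # f(x) = x^T A x + b^T x

x0 = np.array([0.5, -1.5])
# Analytic results: gradient = (A + A^T) x + b, Hessian = A + A^T.
print(np.allclose(grad_fd(f, x0), (A + A.T) @ x0 + b, atol=1e-4))   # True
print(np.allclose(hessian_fd(f, x0), A + A.T, atol=1e-2))           # True
```

Central differences are exact for quadratics up to rounding error, which makes this a convenient sanity check when deriving gradients by the matrix calculus rules referenced above.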
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
psiq/gdsfactory
|
notebooks/05_sparameters.ipynb
|
mit
|
[
"Sparameters\ngdsfactory provides a Lumerical FDTD interface to calculate Sparameters.\nBy default, a separate repo, gdslib, stores the Sparameters.\nYou can chain the Sparameters to solve larger circuits using a circuit solver such as:\n\nLumerical INTERCONNECT\nsimphony (open source)\n\nIf the Sparameters already exist in gdslib, you can access them very quickly.",
"# NBVAL_SKIP\nimport pp\n\npp.sp.plot(pp.c.mmi1x2(), keys=['S23m', 'S13m'], logscale=True)",
"gdsfactory can also compute the Sparameters of a component that has not been simulated before.",
"# NBVAL_SKIP\nimport pp\n\ncs = [pp.c.coupler_ring(gap=gap, bend_radius=bend_radius) for gap in [0.15, 0.2, 0.3] for bend_radius in [5, 10]]\n\nfor c in cs:\n pp.show(c)\n print(c)\n pp.sp.write(c)",
"To debug a simulation you can create a Lumerical session yourself, pass it to the simulator, and use the run=False flag.",
"# NBVAL_SKIP\nimport lumapi\ns = lumapi.FDTD()\nc = pp.c.waveguide()\npp.sp.write(c, run=False, session=s)",
"By default gdsfactory uses the generic layermap for a 220 nm thick silicon layer.\nYou can also define your components with a different material, thickness, or GDS layermap.",
"# NBVAL_SKIP\nlayer2material = {\n (2, 0): \"sin\",\n}\n\nlayer2nm = {(2, 0): 400}\n\nc = pp.c.waveguide(layer=(2,0))\nsp = pp.sp.write(c, layer2nm=layer2nm, layer2material=layer2material, session=s)\npp.qp(c)\npp.sp.plot(c)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/mri/cmip6/models/sandbox-2/seaice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: MRI\nSource ID: SANDBOX-2\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:19\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mri', 'sandbox-2', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variables in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specified for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involve flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOn which grid is sea ice horizontally discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution is used and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Lolcroc/AI
|
ML1/lab3_yves.ipynb
|
gpl-3.0
|
[
"Save this file as studentid1_studentid2_lab#.ipynb\n(Your student-id is the number shown on your student card.)\nE.g. if you work with 3 people, the notebook should be named:\n12301230_3434343_1238938934_lab1.ipynb.\nThis will be parsed by a regexp, so please double check your filename.\nBefore you turn this problem in, please make sure everything runs correctly. First, restart the kernel (in the menubar, select Kernel$\\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\\rightarrow$Run All).\nMake sure you fill in any place that says YOUR CODE HERE or \"YOUR ANSWER HERE\", as well as your names and email addresses below.",
"NAME = \"\"\nNAME2 = \"\"\nNAME3 = \"\"\nEMAIL = \"\"\nEMAIL2 = \"\"\nEMAIL3 = \"\"",
"Lab 3: Gaussian Processes and Support Vector Machines\nMachine Learning 1, September 2017\nNotes on implementation:\n\nYou should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact your teaching assistant.\nPlease write your answers right below the questions.\nAmong the first lines of your notebook should be \"%pylab inline\". This imports all required modules, and your plots will appear inline.\nRefer to last week's lab notes, i.e. http://docs.scipy.org/doc/, if you are unsure about what function to use. There are different correct ways to implement each problem!\nuse the provided test boxes to check if your answers are correct",
"%pylab inline\nplt.rcParams[\"figure.figsize\"] = [20,10]",
"Part 1: Gaussian Processes\nFor Part 1 we will refer to Bishop sections 6.4.2 and 6.4.3. You may also want to refer to Rasmussen's Gaussian Process text which is available online at http://www.gaussianprocess.org/gpml/chapters/ and especially to the project found at https://www.automaticstatistician.com/index/ by Ghahramani for some intuition about GPs. To understand Gaussian processes, it is highly recommended to understand how marginal, partitioned Gaussian distributions can be converted into conditional Gaussian distributions. This is covered in Bishop 2.3 and summarized in Eqns 2.94-2.98.\n$\\newcommand{\\bt}{\\mathbf{t}}$\n$\\newcommand{\\bx}{\\mathbf{x}}$\n$\\newcommand{\\by}{\\mathbf{y}}$\n$\\newcommand{\\bw}{\\mathbf{w}}$\n$\\newcommand{\\ba}{\\mathbf{a}}$\nPeriodic Data\nWe will use the same data generating function that we used previously for regression.",
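The partitioned-Gaussian identity referenced above (Bishop Eqns 2.94-2.98) can be checked numerically before diving into GPs. The sketch below is illustrative only — the function name `conditional_gaussian` and the example numbers are ours, not part of the lab:

```python
import numpy as np

# Partitioned Gaussian (Bishop Eqns 2.94-2.98): with x = (x_a, x_b),
#   mu = (mu_a, mu_b),  Sigma = [[S_aa, S_ab], [S_ba, S_bb]],
# the conditional p(x_a | x_b) is Gaussian with
#   mean = mu_a + S_ab S_bb^{-1} (x_b - mu_b)
#   cov  = S_aa - S_ab S_bb^{-1} S_ba

def conditional_gaussian(mu, Sigma, idx_a, idx_b, x_b):
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    S_aa = Sigma[np.ix_(idx_a, idx_a)]
    S_ab = Sigma[np.ix_(idx_a, idx_b)]
    S_bb = Sigma[np.ix_(idx_b, idx_b)]
    gain = S_ab @ np.linalg.inv(S_bb)            # S_ab S_bb^{-1}
    mean = mu[idx_a] + gain @ (x_b - mu[idx_b])  # Eqn 2.96
    cov = S_aa - gain @ S_ab.T                   # Eqn 2.97 (Sigma symmetric)
    return mean, cov

# Bivariate example: unit variances, correlation 0.8, observe x_b = 1.
mean, cov = conditional_gaussian([0, 0], [[1, 0.8], [0.8, 1]],
                                 idx_a=[0], idx_b=[1], x_b=np.array([1.0]))
# mean = 0.8 and variance = 1 - 0.8**2 = 0.36: observing x_b shifts the
# mean and shrinks the variance -- exactly what conditioning a GP on
# training points does to the predictive distribution.
```
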
"def true_mean_function(x):\n return np.cos(2*pi*(x+1))\n\ndef add_noise(y, sigma):\n return y + sigma*np.random.randn(len(y))\n\ndef generate_t(x, sigma):\n return add_noise(true_mean_function(x), sigma)\n\n\nsigma = 0.2\nbeta = 1.0 / pow(sigma, 2)\nN_test = 100\n\nx_test = np.linspace(-1, 1, N_test) \nmu_test = np.zeros(N_test)\ny_test = true_mean_function(x_test)\nt_test = add_noise(y_test, sigma)\n\nplt.plot( x_test, y_test, 'b-', lw=2)\nplt.plot( x_test, t_test, 'go')\nplt.show()",
"1. Sampling from the Gaussian process prior (30 points)\nWe will implement Gaussian process regression using the kernel function in Bishop Eqn. 6.63. \n1.1 k_n_m( xn, xm, thetas ) (5 points)\nTo start, implement function k_n_m(xn, xm, thetas) that takes scalars $x_n$ and $x_m$, and a vector of $4$ thetas, and computes the kernel function Bishop Eqn. 6.63 (10 points). NB: usually the kernel function will take $D$ by $1$ vectors, but since we are using a univariate problem, this makes things easier.",
"def k_n_m(xn, xm, thetas):\n # YOUR CODE HERE\n raise NotImplementedError()\n \n return k",
"1.2 computeK( X1, X2, thetas ) (10 points)\nEqn 6.60 is the marginal distribution of mean output of $N$ data vectors: $p(\\mathbf{y}) = \\mathcal{N}(0, \\mathbf{K})$. Notice that the expected mean function is $0$ at all locations, and that the covariance is a $N$ by $N$ kernel matrix $\\mathbf{K}$. Write a function computeK(x1, x2, thetas)\nthat computes the kernel matrix. Use k_n_m as part of an inner loop (of course, there are more efficient ways of computing the kernel function making better use of vectorization, but that is not necessary) (5 points).",
"def computeK(x1, x2, thetas):\n # YOUR CODE HERE\n raise NotImplementedError()\n \n return K\n\n### Test your function\nx1 = [0, 1, 2]\nx2 = [1, 2, 3, 4]\nthetas = [1, 2, 3, 4]\nK = computeK(x1, x2, thetas)\n\nassert K.shape == (len(x1), len(x2)), \"the shape of K is incorrect\"\n",
"1.3 Plot function samples (15 points)\nNow sample mean functions at the x_test locations for the theta values in Bishop Figure 6.5, make a figure with a 2 by 3 subplot and make sure the title reflects the theta values (make sure everything is legible). In other words, sample $\\by_i \\sim \\mathcal{N}(0, \\mathbf{K}_{\\theta})$. Make use of numpy.random.multivariate_normal(). On your plots include the expected value of $\\by$ with a dashed line and fill_between 2 standard deviations of the uncertainty due to $\\mathbf{K}$ (the diagonal of $\\mathbf{K}$ is the variance of the model uncertainty) (15 points).",
"# YOUR CODE HERE\nraise NotImplementedError()\n",
"2. Predictive distribution (35 points)\nSo far we have sampled mean functions from the prior. We can draw actual data $\\bt$ two ways. The first way is generatively, by first sampling $\\by | \\mathbf{K}$, then sampling $\\bt | \\by, \\beta$ (Eqns 6.60 followed by 6.59). The second way is to integrate over $\\by$ (the mean draw) and directly sample $\\bt | \\mathbf{K}, \\beta$ using Eqn 6.61. This is the generative process for $\\bt$. Note that we have not specified a distribution over inputs $\\bx$; this is because Gaussian processes are conditional models. Because of this we are free to generate locations $\\bx$ when playing around with the GP; obviously a dataset will give us input-output pairs.\nOnce we have data, we are interested in the predictive distribution (note: the prior is the predictive distribution when there is no data). Consider the joint distribution for $N+1$ targets, given by Eqn 6.64. Its covariance matrix is composed of block components $C_N$, $\\mathbf{k}$, and $c$. The covariance matrix $C_N$ for $\\bt_N$ is $C_N = \\mathbf{K}_N + \\beta^{-1}\\mathbf{I}_N$. We have just made explicit the size $N$ of the matrix; $N$ is the number of training points. The kernel vector $\\mathbf{k}$ is a $N$ by $1$ vector of kernel function evaluations between the training input data and the test input vector. The scalar $c$ is a kernel evaluation at the test input.\n2.1 gp_predictive_distribution(...) (10 points)\nWrite a function gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C=None) that computes Eqns 6.66 and 6.67, except allow for an arbitrary number of test points (not just one) and now the kernel matrix is for training data. By having C as an optional parameter, we can avoid computing it more than once (for this problem it is unimportant, but for real problems this is an issue). The function should compute $\\mathbf{C}$, $\\mathbf{k}$, and return the mean, variance and $\\mathbf{C}$. 
Do not forget: the computeK function computes $\\mathbf{K}$, not $\\mathbf{C}$! (10 points)",
"def gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C=None):\n # YOUR CODE HERE\n raise NotImplementedError()\n \n return mean_test, var_test, C\n\n### Test your function\nN = 2\ntrain_x = np.linspace(-1, 1, N)\ntrain_t = 2*train_x\ntest_N = 3\ntest_x = np.linspace(-1, 1, test_N) \ntheta = [1, 2, 3, 4]\nbeta = 25\ntest_mean, test_var, C = gp_predictive_distribution(train_x, train_t, test_x, theta, beta, C=None)\n\nassert test_mean.shape == (test_N,), \"the shape of mean is incorrect\"\nassert test_var.shape == (test_N, test_N), \"the shape of var is incorrect\"\nassert C.shape == (N, N), \"the shape of C is incorrect\"\n\nC_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]])\n_, _, C_out = gp_predictive_distribution(train_x, train_t, test_x, theta, beta, C=C_in)\n\nassert np.allclose(C_in, C_out), \"C is not reused!\"\n",
"2.2 gp_log_likelihood(...) (10 points)\nTo learn the hyperparameters, we would need to compute the log-likelihood of the training data. Implicitly, this is conditioned on the value setting for $\\mathbf{\\theta}$. Write a function gp_log_likelihood(x_train, t_train, theta, C=None, invC=None, beta=None), where C and invC can be stored and reused. It should return the log-likelihood, C and invC (10 points)",
"def gp_log_likelihood(x_train, t_train, theta, beta, C=None, invC=None):\n # YOUR CODE HERE\n raise NotImplementedError()\n \n return lp, C, invC\n\n### Test your function\nN = 2\ntrain_x = np.linspace(-1, 1, N)\ntrain_t = 2 * train_x\ntheta = [1, 2, 3, 4]\nbeta = 25\nlp, C, invC = gp_log_likelihood(train_x, train_t, theta, beta, C=None, invC=None)\n \nassert lp < 0, \"the log-likelihood should smaller than 0\"\nassert C.shape == (N, N), \"the shape of var is incorrect\"\nassert invC.shape == (N, N), \"the shape of C is incorrect\"\n\nC_in = np.array([[0.804, -0.098168436], [-0.098168436, 0.804]])\n_, C_out, _ = gp_log_likelihood(train_x, train_t, theta, beta, C=C_in, invC=None)\n\nassert np.allclose(C_in, C_out), \"C is not reused!\"\n\ninvC_in = np.array([[1.26260453, 0.15416407], [0.15416407, 1.26260453]])\n_, _, invC_out = gp_log_likelihood(train_x, train_t, theta, beta, C=None, invC=invC_in)\n\nassert np.allclose(invC_in, invC_out), \"invC is not reused!\"\n",
"2.3 Plotting (10 points)\nRepeat the 6 plots above, but this time conditioned on the training points. Use the periodic data generator to create 2 training points where x is sampled uniformly between $-1$ and $1$. For these plots, feel free to use the provided function \"gp_plot\". Make sure you put the parameters in the title and this time also the log-likelihood. Try to understand the two types of uncertainty! If you do not use gp_plot(...), please add a fill between for the model and target noise. (10 points)",
"def gp_plot( x_test, y_test, mean_test, var_test, x_train, t_train, theta, beta ):\n # x_test: \n # y_test: the true function at x_test\n # mean_test: predictive mean at x_test\n # var_test: predictive covariance at x_test \n # t_train: the training values\n # theta: the kernel parameters\n # beta: the precision (known)\n \n # the reason for the manipulation is to allow plots separating model and data stddevs.\n std_total = np.sqrt(np.diag(var_test)) # includes all uncertainty, model and target noise \n std_model = np.sqrt(std_total**2 - 1.0/beta) # remove data noise to get model uncertainty in stddev\n std_combo = std_model + np.sqrt(1.0/beta) # add stddev (note: not the same as full)\n \n plt.plot(x_test, y_test, 'b', lw=3)\n plt.plot(x_test, mean_test, 'k--', lw=2)\n plt.fill_between(x_test, mean_test+2*std_combo,mean_test-2*std_combo, color='k', alpha=0.25)\n plt.fill_between(x_test, mean_test+2*std_model,mean_test-2*std_model, color='r', alpha=0.25)\n plt.plot(x_train, t_train, 'ro', ms=10)\n \n\n# YOUR CODE HERE\nraise NotImplementedError()\n",
"2.4 More plotting (5 points)\nRepeat the 6 plots above, but this time conditioned on a new set of 10 training points. (5 points)",
"# YOUR CODE HERE\nraise NotImplementedError()\n",
"Part 2: Support Vector Machines (45 points)\nAs seen in Part 1: Gaussian Processes, one of the significant limitations of many such algorithms is that the kernel function $k(\\bx_n , \\bx_m)$ must be evaluated for all possible pairs $\\bx_n$ and $\\bx_m$ of training points, which can be computationally infeasible during training and can lead to excessive computation times when making predictions for new data points.\nIn Part 2: Support Vector Machines, we shall look at kernel-based algorithms that have sparse solutions, so that predictions for new inputs depend only on the kernel function evaluated at a subset of the training data points.\n2.1 Generating a linearly separable dataset (15 points)\na) (5 points) First of all, we are going to create our own 2D toy dataset $X$. The dataset will consist of two i.i.d. subsets $X_1$ and $X_2$, each sampled from a multivariate Gaussian distribution,\n\\begin{align}\nX_1 \\sim &\\mathcal{N}(\\mu_1, \\Sigma_1)\\\n&\\text{ and }\\\nX_2 \\sim &\\mathcal{N}(\\mu_2, \\Sigma_2).\n\\end{align}\nIn the following, $X_1$ will have $N_1=20$ samples and a mean $\\mu_1=(1,1)$. $X_2$ will have $N_2=30$ samples and a mean $\\mu_2=(3,3)$.\nPlot the two subsets in one figure, and choose two colors to indicate which sample belongs to which subset. In addition, you should choose $\\Sigma_1$ and $\\Sigma_2$ in a way that the two subsets become linearly separable. (Hint: What form does the covariance matrix take for an i.i.d. dataset?)",
"# YOUR CODE HERE\n# raise NotImplementedError()\nnp.random.seed(1)\nplt.rcParams[\"figure.figsize\"] = [10,10]\n\n# Cov should be diagonal (independency) and have the same values (identical), i.e. a*I.\ndef create_X(mean, sig, N):\n return np.random.multivariate_normal(mean, sig * np.identity(2), N)\n\nm1 = [1, 1]; m2 = [3, 3]\ns1 = 1/2; s2 = 1/2\nN1 = 20; N2 = 30\n\nX1 = create_X(m1, s1, N1)\nX2 = create_X(m2, s2, N2)\n\nplt.figure()\nplt.axis('equal')\nplt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o')\nplt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o')\nplt.show()",
"b) (10 points) In the next step we will combine the two datasets X_1, X_2 and generate a vector t containing the labels. Write a function create_X_and_t(X1, X2) that returns the combined data set X and the corresponding target vector t.",
"def create_X_and_t(X1, X2):\n # YOUR CODE HERE\n # raise NotImplementedError()\n X1_len = X1.shape[0]\n X2_len = X2.shape[0]\n \n X = np.vstack((X1, X2))\n t = np.hstack((-np.ones(X1_len), np.ones(X2_len)))\n \n # Shuffle data?\n indices = np.arange(X1_len + X2_len)\n np.random.shuffle(indices)\n \n return X[indices], t[indices]\n\n### Test your function\ndim = 2\nN1_test = 2\nN2_test = 3\nX1_test = np.arange(4).reshape((N1_test, dim))\nX2_test = np.arange(6).reshape((N2_test, dim))\nX_test, t_test = create_X_and_t(X1_test, X2_test)\n\nassert X_test.shape == (N1_test + N2_test, dim), \"the shape of X is incorrect\"\nassert t_test.shape == (N1_test + N2_test,), \"the shape of t is incorrect\"\n",
"2.2 Finding the support vectors (15 points)\nFinally we are going to use an SVM to obtain the decision boundary for which the margin is maximized. We have to solve the optimization problem\n\\begin{align}\n\\arg \\min_{\\bw, b} \\frac{1}{2} \\lVert \\bw \\rVert^2,\n\\end{align}\nsubject to the constraints\n\\begin{align}\nt_n(\\bw^T \\phi(\\bx_n) + b) \\geq 1, n = 1,...,N.\n\\end{align}\nIn order to solve this constrained optimization problem, we introduce Lagrange multipliers $a_n \\geq 0$. We obtain the dual\nrepresentation of the maximum margin problem in which we maximize\n\\begin{align}\n\\sum_{n=1}^N a_n - \\frac{1}{2}\\sum_{n=1}^N\\sum_{m=1}^N a_n a_m t_n t_m k(\\bx_n, \\bx_m),\n\\end{align}\nwith respect to $\\ba$, subject to the constraints\n\\begin{align}\na_n &\\geq 0, n=1,...,N,\\\n\\sum_{n=1}^N a_n t_n &= 0.\n\\end{align}\nThis takes the form of a quadratic programming problem in which we optimize a quadratic function of $\\ba$ subject to a set of inequality constraints.\na) (5 points) In this example we will use a linear kernel $k(\\bx, \\bx') = \\bx^T\\bx'$. Write a function computeK(X) that computes the kernel matrix $K$ for the 2D dataset $X$.",
"def computeK(X):\n # YOUR CODE HERE\n # raise NotImplementedError()\n K = np.dot(X, X.T).astype('float')\n return K\n\ndim = 2\nN_test = 3\nX_test = np.arange(6).reshape((N_test, dim))\nK_test = computeK(X_test)\n\nassert K_test.shape == (N_test, N_test)\n",
"Next, we will rewrite the dual representation so that we can make use of computationally efficient vector-matrix multiplication. The objective becomes\n\\begin{align}\n\\min_{\\ba} \\frac{1}{2} \\ba^T K' \\ba - 1^T\\ba,\n\\end{align}\nsubject to the constraints\n\\begin{align}\na_n &\\geq 0, n=1,...,N,\\\n\\bt^T\\ba &= 0.\n\\end{align}\nwhere\n\\begin{align}\nK'_{nm} = t_n t_m k(\\bx_n, \\bx_m),\n\\end{align}\nand in the special case of a linear kernel function,\n\\begin{align}\nK'_{nm} = t_n t_m k(\\bx_n, \\bx_m) = k(t_n \\bx_n, t_m \\bx_m).\n\\end{align}\nTo solve the quadratic programming problem we will use a Python module called cvxopt. You first have to install the module in your virtual environment (you have to activate it first), using the following command:\nconda install -c conda-forge cvxopt\nThe quadratic programming solver can be called as\ncvxopt.solvers.qp(P, q[, G, h[, A, b[, solver[, initvals]]]])\nThis solves the following problem,\n\\begin{align}\n\\min_{\\bx} \\frac{1}{2} \\bx^T P \\bx + q^T\\bx,\n\\end{align}\nsubject to the constraints,\n\\begin{align}\nG\\bx &\\leq h,\\\nA\\bx &= b.\n\\end{align}\nAll we need to do is to map our formulation to the cvxopt interface.\nb) (10 points) Write a function compute_multipliers(X, t) that solves the quadratic programming problem using the cvxopt module and returns the Lagrange multiplier for every sample in the dataset.",
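As a sanity check before mapping to cvxopt, the dual can be maximized by hand on a tiny symmetric two-point problem. This worked example is ours, not part of the assignment; the grid search stands in for the QP solver:

```python
import numpy as np

# Two 1-D training points: x = -1 with t = -1, and x = +1 with t = +1.
x = np.array([-1.0, 1.0])
t = np.array([-1.0, 1.0])

K = np.outer(x, x)               # linear kernel k(xn, xm) = xn * xm
Kp = np.outer(t, t) * K          # K'_{nm} = tn tm k(xn, xm); all entries 1

# The constraint sum_n a_n t_n = 0 forces a_1 = a_2 = a here, so the dual
# sum(a) - 0.5 a^T K' a collapses to 2a - 0.5 * a**2 * Kp.sum() = 2a - 2a^2.
a_grid = np.linspace(0.0, 2.0, 2001)
dual = 2 * a_grid - 0.5 * a_grid**2 * Kp.sum()
a_star = a_grid[np.argmax(dual)]           # maximum at a = 0.5

a_vec = np.array([a_star, a_star])
w = np.sum(a_vec * t * x)                  # w = sum_n a_n t_n x_n = 1
b = np.mean(t - w * x)                     # b = 0, so the boundary is x = 0
# Both points sit exactly on the margin: t_n (w x_n + b) = 1.
```
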
"import cvxopt\n\ndef compute_multipliers(X, t):\n # YOUR CODE HERE\n # raise NotImplementedError()\n K = computeK(np.dot(np.diag(t), X))\n q = cvxopt.matrix(-np.ones_like(t, dtype='float'))\n G = cvxopt.matrix(np.diag(-np.ones_like(t, dtype='float')))\n A = cvxopt.matrix(t).T\n h = cvxopt.matrix(np.zeros_like(t, dtype='float'))\n b = cvxopt.matrix(0.0)\n \n P = cvxopt.matrix(K)\n sol = cvxopt.solvers.qp(P, q, G, h, A, b)\n a = np.array(sol['x'])\n return a\n\n### Test your function\ndim = 2\nN_test = 3\nX_test = np.arange(6).reshape((N_test, dim))\nt_test = np.array([-1., 1., 1.])\na_test = compute_multipliers(X_test, t_test)\n\nassert a_test.shape == (N_test, 1)\n",
"2.3 Plot support vectors (5 points)\nNow that we have obtained the lagrangian multipliers $\\ba$, we use them to find our support vectors. Repeat the plot from 2.1, this time use a third color to indicate which samples are the support vectors.",
"# YOUR CODE HERE\n# raise NotImplementedError()\nnp.random.seed(420)\n\nX, t = create_X_and_t(X1, X2)\na_opt = compute_multipliers(X, t)\n\nsv_ind = np.nonzero(np.around(a_opt[:, 0]))\n\nX_sv = X[sv_ind]\nt_sv = t[sv_ind]\na_sv = a_opt[sv_ind]\n\nplt.figure()\nplt.axis('equal')\nplt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o')\nplt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o')\nplt.scatter(X_sv[:, 0], X_sv[:, 1], s=200, facecolors='none', edgecolors='lime', linewidth=3)\nplt.show()\n",
"2.4 Plot the decision boundary (10 Points)\nThe decision boundary is fully specified by a (usually very small) subset of training samples, the support vectors. Make use of\n\\begin{align}\n\\bw &= \\sum_{n=1}^N a_n t_n \\mathbf{\\phi}(\\bx_n)\\\nb &= \\frac{1}{N_S}\\sum_{n \\in S} (t_n - \\sum_{m \\in S} a_m t_m k(\\bx_n, \\bx_m)),\n\\end{align}\nwhere $S$ denotes the set of indices of the support vectors, to calculate the slope and intercept of the decision boundary. Generate a last plot that contains the two subsets, support vectors and decision boundary.",
"# YOUR CODE HERE\n# raise NotImplementedError()\n\nw_opt = np.squeeze(np.dot(a_opt.T, np.dot(np.diag(t), X)))\nK_sv = computeK(X_sv)\nN_sv = np.size(sv_ind)\n\natk_sv = np.dot(a_sv.T * t_sv, K_sv)\nb = np.sum(t_sv - atk_sv)/N_sv\n\nx_lim = np.array([1, 4])\ny_lim = (-w_opt[0] * x_lim - b) / w_opt[1]\n\nplt.figure()\nplt.axis('equal')\nplt.scatter(X1[:, 0], X1[:, 1], c='b', marker='o')\nplt.scatter(X2[:, 0], X2[:, 1], c='r', marker='o')\nplt.scatter(X_sv[:, 0], X_sv[:, 1], s=200, facecolors='none', edgecolors='lime', linewidth=3)\nplt.plot(x_lim, y_lim, c='black')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cliburn/sta-663-2017
|
notebook/04_Strings.ipynb
|
mit
|
[
"Strings\nThe process of cleaning data for analysis often requires working with text, for example, to correct typos, convert to standard nomenclature and resolve ambiguous labels. In some statistical fields that deal with (say) processing electronic medical records, information science or recommendations based on user feedback, text must be processed before analysis - for example, by converting to a bag of words.\nWe will use a whimsical example to illustrate Python tools for munging text data using string methods and regular expressions. Finally, we will see how to format text data for reporting. \nGet \"Through the Looking Glass\"",
"import requests\n\ntry:\n with open('looking_glass.txt') as f:\n text = f.read()\nexcept IOError:\n url = 'http://www.gutenberg.org/cache/epub/12/pg12.txt'\n res = requests.get(url)\n text = res.text\n with open('looking_glass.txt', 'w') as f:\n f.write(str(text))",
"Slice to get Jabberwocky",
"start = text.find('JABBERWOCKY')\n\ntext[start:start+2000]\n\nend = text.find('It seems very pretty', start)\n\npoem = text[start:end]\npoem\n\nprint(poem)\n\nprint(poem.title())\n\npoem.count('the')\n\nprint(poem.replace('the', 'XXX'))",
"Find palindromic words in poem if any",
"poem = poem.lower()\n\nimport string\nstring.punctuation\n\npoem = poem.translate(dict.fromkeys(map(ord, string.punctuation)))\npoem\n\nwords = poem.split()\nwords[:10]\n\ndef is_palindrome(word):\n return word == word[::-1]\n\n{word for word in words if is_palindrome(word)}",
"Top 10 most frequent words",
"import collections\n\npoem_counter = collections.Counter(words)\n\npoem_counter.most_common(10)",
"Words that appear exactly twice.",
"[(k, v) for (k, v) in poem_counter.items() if v==2]",
"Trigrams\nAll possible sequences of 3 words in the poem.",
"list(zip(words[:], words[1:], words[2:]))[:10]\n\nimport itertools as it\n\ndef window(x, n):\n    \"\"\"Sliding window of size n from iterable x.\"\"\"\n    s = (it.islice(x, i, None) for i in range(n))\n    return zip(*s)\n\nlist(window(words, 3))[:10]",
"Find words in poem that are over-represented",
"book = text\n\nbook = book.lower().translate(dict.fromkeys(map(ord, string.punctuation)))\n\nbook_counter = collections.Counter(book.split())\n\nn = sum(book_counter.values())\nbook_freqs = {k: v/n for k, v in book_counter.items()}\n\nn = sum(poem_counter.values())\nstats = [(k, v, book_freqs.get(k,0)*n) for k, v in poem_counter.items()]\n\nfrom pandas import DataFrame\n\ndf = DataFrame(stats, columns = ['word', 'observed', 'expected'])\n\ndf['score'] = (df.observed-df.expected)**2/df.expected\n\ndf = df.sort_values(['score'], ascending=False)\ndf.head(n=10)",
"Encode and decode poem using a Caesar cipher",
"print(poem)",
"Encoding",
"def encode(text, k):\n table = dict(zip(map(ord, string.ascii_lowercase), \n string.ascii_lowercase[k:] + string.ascii_lowercase[:k]))\n return text.translate(table)\n\ncipher = encode(poem, 2)\nprint(cipher)",
"Decoding",
"recovered = encode(cipher, -2)\nprint(recovered)",
"Using Regular Expressions\n\nRegular Expression HOWTO\nTest your Python regular expressions\n\nFind all words with a sequence of two or more identical letters e.g. \"look\"",
"import re\n\nregex = re.compile(r'(\\w*(\\w)\\2+\\w*)', re.IGNORECASE | re.MULTILINE)\n\nfor match in regex.finditer(poem):\n print(match.group(2), match.group(1))",
"Convert the identical sequences to uppercase",
"def f(match):\n word, letter = match.groups()\n return word.replace(letter, letter.upper())\n\nprint(regex.sub(f, poem))",
"Natural language processing\nIf you intend to perform statistical analysis on natural language, you should probably use NLTK to pre-process the text instead of using string methods and regular expressions. For example, a simple challenge is to first parse the paragraph below into sentences, then parse each sentence into words.\nParagraph from random Pubmed abstract.",
"para = \"\"\"When compared with the control group no significant associations were found for the NS-PEecl group after adjustment of confounding variables. For the S-PEecl group, antiβ2GP1 IgG (OR 16.91, 95% CI 3.71-77.06) was associated, as well as age, obesity, smoking and multiparity. Antiβ2GP1-domain I IgG were associated with aCL, antiβ2GP1 and aPS/PT IgG in the three groups. aPS/PT IgG were associated with aCL IgG, and aPS/PT IgM were associated with aCL and antiβ2GP1 IgM in the three groups CONCLUSION: S-PEecl is a distinct entity from NS-PEecl and is mainly associated with the presence of antiβ2GP1 IgG. Antiβ2GP1 domain I correlate with other aPL IgG tests, and aPS/PT may be promising in patients in which LA tests cannot be interpreted.\"\"\"",
"Naive splitting of sentences as anything separated by ., ! or ?",
"sep = re.compile(r'[\\?\\!\\.]')\n\nss = sep.split(para)\n\nfor i, s in enumerate(ss, 1):\n print(i, ':', s, end='\\n\\n')",
"Using NLTK",
"import nltk\n\nss_nltk = nltk.sent_tokenize(para)\n\nfor i, s in enumerate(ss_nltk, 1):\n print(i, ':', s, end='\\n\\n')",
"Naive parsing of the second sentence into words",
"s = ss_nltk[1]\ns\n\n# remove punctuation and split on white space\ntable = dict.fromkeys(map(ord, string.punctuation))\ns.translate(table).split()",
"Using NLTK",
"text = nltk.word_tokenize(s)\ntext",
"NLTK is a rich library for natural language processing\nSee http://www.nltk.org for details.\nTag tokens with part-of-speech labels",
"tagged_text = nltk.pos_tag(text)\ntagged_text\n\ns",
"A simplistic way to pick up nouns",
"[w for w, t in tagged_text if t.startswith('N')]",
"String formatting\nSelection",
"import math\n\nstuff = ('bun', 'shoe', ['bee', 'door'], 2, math.pi, 0.05)\n\n'One: {}, Two {}'.format(*stuff)\n\n'One: {0}, Two {1}'.format(*stuff)\n\n'One: {1}, Two {1}'.format(*stuff)\n\n'One: {0}, Two {2[1]}'.format(*stuff)",
"Formatting",
"'One: {0:^10s}, Two {1:_>15s}'.format(*stuff)\n\n'One: {3}, Two {4}'.format(*stuff)\n\n'One: {3:+10d}, Two {4:.4f}'.format(*stuff)\n\n'One: {3:04d}, Two {4:.4g}'.format(*stuff)\n\n'One: {3:.4e}, Two {4:.4e}'.format(*stuff)\n\n'One: {5:.2%}, Two {5:f}'.format(*stuff)",
"Old style formatting",
"'%s, %s, %a, %d, %.4f, %.2f' % stuff",
"Formatting numpy arrays",
"import numpy as np\n\nx = np.arange(1, 13).reshape(3,4)\nx\n\nnp.set_printoptions(formatter={'int': lambda x: '%8.2f' % x})\n\nx\n\nnp.set_printoptions()\n\nx"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jo-tez/aima-python
|
knowledge_FOIL.ipynb
|
mit
|
[
"KNOWLEDGE\nThe knowledge module covers Chapter 19: Knowledge in Learning from Stuart Russell's and Peter Norvig's book Artificial Intelligence: A Modern Approach.\nExecute the cell below to get started.",
"from knowledge import *\n\nfrom notebook import pseudocode, psource",
"CONTENTS\n\nOverview\nInductive Logic Programming (FOIL)\n\nOVERVIEW\nLike the learning module, this chapter focuses on methods for generating a model/hypothesis for a domain; however, unlike the learning chapter, here we use prior knowledge to help us learn from new experiences and to find a proper hypothesis.\nFirst-Order Logic\nUsually knowledge in this field is represented as first-order logic, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called goal predicate, for new examples given a hypothesis. We learn this hypothesis by infering knowledge from some given examples.\nRepresentation\nIn this module, we use dictionaries to represent examples, with keys being the attribute names and values being the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions.\nFor example, say we want to predict if an animal (cat or dog) will take an umbrella given whether or not it rains or the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example:\n{'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}\nA hypothesis can be the following:\n[{'Species': 'Cat'}]\nwhich means an animal will take an umbrella if and only if it is a cat.\nConsistency\nWe say that an example e is consistent with an hypothesis h if the assignment from the hypothesis for e is the same as e['GOAL']. If the above example and hypothesis are e and h respectively, then e is consistent with h since e['Species'] == 'Cat'. 
For e = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}, the example is no longer consistent with h, since the value assigned to e is False while e['GOAL'] is True.\nInductive Logic Programming (FOIL)\nInductive logic programming (ILP) combines inductive methods with the power of first-order representations, concentrating in particular on the representation of hypotheses as logic programs. The general knowledge-based induction problem is to solve the entailment constraint: <br> <br>\n$ Background ∧ Hypothesis ∧ Descriptions \\vDash Classifications $\nfor the unknown $Hypothesis$, given the $Background$ knowledge described by $Descriptions$ and $Classifications$.\nThe first approach to ILP works by starting with a very general rule and gradually specializing\nit so that it fits the data. <br> \nThis is essentially what happens in decision-tree learning, where a\ndecision tree is gradually grown until it is consistent with the observations. <br> To do ILP we\nuse first-order literals instead of attributes, and the $Hypothesis$ is a set of clauses (set of first order rules, where each rule is similar to a Horn clause) instead of a decision tree. <br>\nThe FOIL algorithm learns new rules, one at a time, in order to cover all given positive and negative examples. <br>\nMore precisely, FOIL contains an inner and an outer while loop. <br>\n- outer loop: <font color='blue'>(function foil()) </font> add rules until all positive examples are covered. <br>\n (each rule is a conjunction of literals, which are chosen inside the inner loop)\n\ninner loop: <font color ='blue'>(function new_clause()) </font> add new literals until no negative examples are covered, while some positive examples are still covered. <br>\n\nIn each iteration, we select/add the most promising literal, according to an estimate of its utility. 
<font color ='blue'>(function new_literal()) </font> <br>\n\n\nThe evaluation function to estimate the utility of adding literal $L$ to a rule $R$ is <font color ='blue'>(function gain()) </font> : \n\n\n$$ FoilGain(L,R) = t \\big( \\log_2{\\frac{p_1}{p_1+n_1}} - \\log_2{\\frac{p_0}{p_0+n_0}} \\big) $$\n where: \n $p_0: \\text{is the number of positive bindings of rule R } \\\\ n_0: \\text{is the number of negative bindings of R} \\\\ p_1: \\text{is the number of positive bindings of rule R'}\\\\ n_1: \\text{is the number of negative bindings of R'}\\\\ t: \\text{is the number of positive bindings of rule R that are still covered after adding literal L to R}$\n\n\n\nCalculate the extended examples for the chosen literal <font color ='blue'>(function extend_example()) </font> <br>\n (the set of examples created by extending example with each possible constant value for each new variable in literal)\n\n\nFinally, the algorithm returns a disjunction of first-order rules (= conjunction of literals)",
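The gain formula above can be computed directly; the function below is an illustrative sketch (the binding counts passed in are hypothetical, and the notebook's own gain() may differ in interface):

```python
import math

def foil_gain(p0, n0, p1, n1, t):
    """FOIL gain of extending rule R (p0 pos / n0 neg bindings)
    to R' (p1 pos / n1 neg bindings); t positives of R survive."""
    return t * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

# A literal that removes most negatives while keeping 6 positives
# yields a positive gain; one that changes nothing yields zero gain.
print(foil_gain(p0=8, n0=8, p1=6, n1=1, t=6))
print(foil_gain(p0=4, n0=4, p1=4, n1=4, t=4))
```

Intuitively, the gain rewards literals that raise the fraction of positive bindings while preserving as many of the rule's positive bindings as possible.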
"psource(FOIL_container)",
"Example Family\nSuppose we have the following family relations:\n<br>\n\n<br>\nGiven some positive and negative examples of the relation 'Parent(x,y)', we want to find a set of rules that satisfies all the examples. <br>\nA definition of Parent is $Parent(x,y) \\Leftrightarrow Mother(x,y) \\lor Father(x,y)$, which is the result that we expect from the algorithm.",
"A, B, C, D, E, F, G, H, I, x, y, z = map(expr, 'ABCDEFGHIxyz')\n\nsmall_family = FOIL_container([expr(\"Mother(Anne, Peter)\"),\n expr(\"Mother(Anne, Zara)\"),\n expr(\"Mother(Sarah, Beatrice)\"),\n expr(\"Mother(Sarah, Eugenie)\"),\n expr(\"Father(Mark, Peter)\"),\n expr(\"Father(Mark, Zara)\"),\n expr(\"Father(Andrew, Beatrice)\"),\n expr(\"Father(Andrew, Eugenie)\"),\n expr(\"Father(Philip, Anne)\"),\n expr(\"Father(Philip, Andrew)\"),\n expr(\"Mother(Elizabeth, Anne)\"),\n expr(\"Mother(Elizabeth, Andrew)\"),\n expr(\"Male(Philip)\"),\n expr(\"Male(Mark)\"),\n expr(\"Male(Andrew)\"),\n expr(\"Male(Peter)\"),\n expr(\"Female(Elizabeth)\"),\n expr(\"Female(Anne)\"),\n expr(\"Female(Sarah)\"),\n expr(\"Female(Zara)\"),\n expr(\"Female(Beatrice)\"),\n expr(\"Female(Eugenie)\"),\n])\n\ntarget = expr('Parent(x, y)')\n\nexamples_pos = [{x: expr('Elizabeth'), y: expr('Anne')},\n {x: expr('Elizabeth'), y: expr('Andrew')},\n {x: expr('Philip'), y: expr('Anne')},\n {x: expr('Philip'), y: expr('Andrew')},\n {x: expr('Anne'), y: expr('Peter')},\n {x: expr('Anne'), y: expr('Zara')},\n {x: expr('Mark'), y: expr('Peter')},\n {x: expr('Mark'), y: expr('Zara')},\n {x: expr('Andrew'), y: expr('Beatrice')},\n {x: expr('Andrew'), y: expr('Eugenie')},\n {x: expr('Sarah'), y: expr('Beatrice')},\n {x: expr('Sarah'), y: expr('Eugenie')}]\nexamples_neg = [{x: expr('Anne'), y: expr('Eugenie')},\n {x: expr('Beatrice'), y: expr('Eugenie')},\n {x: expr('Mark'), y: expr('Elizabeth')},\n {x: expr('Beatrice'), y: expr('Philip')}]\n\n# run the FOIL algorithm \nclauses = small_family.foil([examples_pos, examples_neg], target)\nprint (clauses)\n",
"Indeed the algorithm returned the rule: \n<br>$Parent(x,y) \\Leftrightarrow Mother(x,y) \\lor Father(x,y)$\nSuppose that we have some positive and negative results for the relation 'GrandParent(x,y)' and we want to find a set of rules that satisfies the examples. <br>\nOne possible set of rules for the relation $Grandparent(x,y)$ could be: <br>\n\n<br>\nOr, if $Background$ included the sentence $Parent(x,y) \\Leftrightarrow [Mother(x,y) \\lor Father(x,y)]$ then: \n$$Grandparent(x,y) \\Leftrightarrow \\exists \\: z \\quad Parent(x,z) \\land Parent(z,y)$$",
"target = expr('Grandparent(x, y)')\n\nexamples_pos = [{x: expr('Elizabeth'), y: expr('Peter')},\n {x: expr('Elizabeth'), y: expr('Zara')},\n {x: expr('Elizabeth'), y: expr('Beatrice')},\n {x: expr('Elizabeth'), y: expr('Eugenie')},\n {x: expr('Philip'), y: expr('Peter')},\n {x: expr('Philip'), y: expr('Zara')},\n {x: expr('Philip'), y: expr('Beatrice')},\n {x: expr('Philip'), y: expr('Eugenie')}]\nexamples_neg = [{x: expr('Anne'), y: expr('Eugenie')},\n {x: expr('Beatrice'), y: expr('Eugenie')},\n {x: expr('Elizabeth'), y: expr('Andrew')},\n {x: expr('Elizabeth'), y: expr('Anne')},\n {x: expr('Elizabeth'), y: expr('Mark')},\n {x: expr('Elizabeth'), y: expr('Sarah')},\n {x: expr('Philip'), y: expr('Anne')},\n {x: expr('Philip'), y: expr('Andrew')},\n {x: expr('Anne'), y: expr('Peter')},\n {x: expr('Anne'), y: expr('Zara')},\n {x: expr('Mark'), y: expr('Peter')},\n {x: expr('Mark'), y: expr('Zara')},\n {x: expr('Andrew'), y: expr('Beatrice')},\n {x: expr('Andrew'), y: expr('Eugenie')},\n {x: expr('Sarah'), y: expr('Beatrice')},\n {x: expr('Mark'), y: expr('Elizabeth')},\n {x: expr('Beatrice'), y: expr('Philip')}, \n {x: expr('Peter'), y: expr('Andrew')}, \n {x: expr('Zara'), y: expr('Mark')},\n {x: expr('Peter'), y: expr('Anne')},\n {x: expr('Zara'), y: expr('Eugenie')}, ]\n\nclauses = small_family.foil([examples_pos, examples_neg], target)\n\nprint(clauses)",
"Indeed the algorithm returned the rule: \n<br>$Grandparent(x,y) \\Leftrightarrow \\exists \\: v \\: \\: Parent(x,v) \\land Parent(v,y)$\nExample Network\nSuppose that we have the following directed graph and we want to find a rule that describes the reachability between two nodes (Reach(x,y)). <br>\nSuch a rule could be recursive, since y can be reached from x if and only if there is a sequence of adjacent nodes from x to y: \n$$ Reach(x,y) \\Leftrightarrow \\begin{cases} \n Conn(x,y), \\: \\text{(if there is a directed edge from x to y)} \\\n \\lor \\quad \\exists \\: z \\quad Reach(x,z) \\land Reach(z,y) \\end{cases}$$",
"\"\"\"\nA H\n|\\ /|\n| \\ / |\nv v v v\nB D-->E-->G-->I\n| / |\n| / |\nvv v\nC F\n\"\"\"\nsmall_network = FOIL_container([expr(\"Conn(A, B)\"),\n expr(\"Conn(A ,D)\"),\n expr(\"Conn(B, C)\"),\n expr(\"Conn(D, C)\"),\n expr(\"Conn(D, E)\"),\n expr(\"Conn(E ,F)\"),\n expr(\"Conn(E, G)\"),\n expr(\"Conn(G, I)\"),\n expr(\"Conn(H, G)\"),\n expr(\"Conn(H, I)\")])\n\n\ntarget = expr('Reach(x, y)')\nexamples_pos = [{x: A, y: B},\n {x: A, y: C},\n {x: A, y: D},\n {x: A, y: E},\n {x: A, y: F},\n {x: A, y: G},\n {x: A, y: I},\n {x: B, y: C},\n {x: D, y: C},\n {x: D, y: E},\n {x: D, y: F},\n {x: D, y: G},\n {x: D, y: I},\n {x: E, y: F},\n {x: E, y: G},\n {x: E, y: I},\n {x: G, y: I},\n {x: H, y: G},\n {x: H, y: I}]\nnodes = {A, B, C, D, E, F, G, H, I}\nexamples_neg = [example for example in [{x: a, y: b} for a in nodes for b in nodes]\n if example not in examples_pos]\nclauses = small_network.foil([examples_pos, examples_neg], target)\n\nprint(clauses)",
"The algorithm produced something close to the recursive rule: \n $$ Reach(x,y) \\Leftrightarrow [Conn(x,y)] \\: \\lor \\: [\\exists \\: z \\: \\: Reach(x,z) \\, \\land \\, Reach(z,y)]$$\nThis happened because the example set is small."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
oscar6echo/ezhc
|
demo_ezhc_themes_and_global_options.ipynb
|
mit
|
[
"%load_ext autoreload\n%autoreload 2\n\nimport os\nimport ezhc as hc\n\n%version_information ipython, ezhc ",
"Themes\n\nDirectly available in ezhc",
"th = hc.Theme()\nth.info()",
"Global Options\n\nCan be accessed through the GlobalOptions class\nSee the Highcharts documentation to see which options are available\nTo suppress the output of inject() set verbose=False",
"hc_global = hc.GlobalOptions({'lang': {'decimalPoint': '.', 'thousandsSep': ','}})\nhc_global.inject(verbose=True)",
"Themes as global options\n\nAs described in the documentation it can be convenient to separate the styling of charts from the data\nThis can be done through the GlobalOptions class as show here",
"hc_global = hc.GlobalOptions(th.themes['dark-unica'])\nhc_global.inject(verbose=True)\n\ng = hc.Highcharts()\ng.title.text = 'Solar Employment Growth by Sector, 2010-2016'\ng.subtitle.text = 'Source: thesolarfoundation.com'\ng.yAxis.title.text = 'Number of Employees'\ng.legend = {'layout': 'vertical', 'align': 'right', 'verticalAlign': 'middle'}\ng.exporting = False\ng.chart.width = '800'\n\ng.plotOptions.series = {'label': {'connectorAllowed': False}, 'pointStart': 2010}\ng.series = [{'name': 'Installation', 'data': [43934.3, 52503.1, 57177.2, 69658.8, 97031.9, 119931, 137133.5, 154175.6]},\n {'name': 'Manufacturing', 'data': [24916, 24064, 29742, 29851, 32490, 30282, 38121, 40434]},\n {'name': 'Sales & Distribution', 'data': [11744, 17722, 16005, 19771, 20185, 24377, 32147, 39387]},\n {'name': 'Project Development', 'data': [None, None, 7988, 12169, 15112, 22452, 34400, 34227]},\n {'name': 'Other', 'data': [12908, 5948, 8105, 11248, 8989, 11816, 18274, 18111]}\n ]\ng.plot()\n\nhc_global.reset(verbose=True)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
billzhao1990/CS231n-Spring-2017
|
assignment2/FullyConnectedNets.ipynb
|
mit
|
[
"Fully-Connected Neural Nets\nIn the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.\nIn this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:\n```python\ndef layer_forward(x, w):\n \"\"\" Receive inputs x and weights w \"\"\"\n # Do some computations ...\n z = # ... 
some intermediate value\n # Do some more computations ...\n out = # the output\ncache = (x, w, z, out) # Values we need to compute gradients\nreturn out, cache\n```\nThe backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:\n```python\ndef layer_backward(dout, cache):\n \"\"\"\n Receive derivative of loss with respect to outputs and cache,\n and compute derivative with respect to inputs.\n \"\"\"\n # Unpack cache values\n x, w, z, out = cache\n# Use values in cache to compute derivatives\n dx = # Derivative of loss with respect to x\n dw = # Derivative of loss with respect to w\nreturn dx, dw\n```\nAfter implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.\nIn addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.",
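To make the pattern concrete, here is a minimal NumPy sketch of an affine layer following this forward/backward convention. Treat it only as an illustration of the API shape, not as the graded implementation that belongs in cs231n/layers.py:

```python
import numpy as np

def affine_forward(x, w, b):
    """Flatten each input to a row, apply x @ w + b, cache inputs."""
    out = x.reshape(x.shape[0], -1).dot(w) + b
    cache = (x, w, b)
    return out, cache

def affine_backward(dout, cache):
    """Backprop the upstream gradient dout through the affine layer."""
    x, w, b = cache
    dx = dout.dot(w.T).reshape(x.shape)
    dw = x.reshape(x.shape[0], -1).T.dot(dout)
    db = dout.sum(axis=0)
    return dx, dw, db

x = np.random.randn(4, 3, 2)   # N=4 inputs of shape (3, 2)
w = np.random.randn(6, 5)
b = np.random.randn(5)
out, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(np.ones_like(out), cache)
print(out.shape, dx.shape)     # (4, 5) (4, 3, 2)
```

Note how the cache returned by the forward pass carries exactly the values the backward pass needs, so the two functions can be snapped together freely with other layers.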
"# As usual, a bit of setup\nfrom __future__ import print_function\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in list(data.items()):\n print(('%s: ' % k, v.shape))",
"Affine layer: forward\nOpen the file cs231n/layers.py and implement the affine_forward function.\nOnce you are done you can test your implementation by running the following:",
"# Test the affine_forward function\n\nnum_inputs = 2\ninput_shape = (4, 5, 6)\noutput_dim = 3\n\ninput_size = num_inputs * np.prod(input_shape)\nweight_size = output_dim * np.prod(input_shape)\n\nx = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)\nw = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)\nb = np.linspace(-0.3, 0.1, num=output_dim)\n\nout, _ = affine_forward(x, w, b)\ncorrect_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],\n [ 3.25553199, 3.5141327, 3.77273342]])\n\n# Compare your output with ours. The error should be around 1e-9.\nprint('Testing affine_forward function:')\nprint('difference: ', rel_error(out, correct_out))",
"Affine layer: backward\nNow implement the affine_backward function and test your implementation using numeric gradient checking.",
"# Test the affine_backward function\nnp.random.seed(231)\nx = np.random.randn(10, 2, 3)\nw = np.random.randn(6, 5)\nb = np.random.randn(5)\ndout = np.random.randn(10, 5)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)\n\n_, cache = affine_forward(x, w, b)\ndx, dw, db = affine_backward(dout, cache)\n\n# The error should be around 1e-10\nprint('Testing affine_backward function:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))",
"ReLU layer: forward\nImplement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:",
"# Test the relu_forward function\n\nx = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)\n\nout, _ = relu_forward(x)\ncorrect_out = np.array([[ 0., 0., 0., 0., ],\n [ 0., 0., 0.04545455, 0.13636364,],\n [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])\n\n# Compare your output with ours. The error should be around 5e-8\nprint('Testing relu_forward function:')\nprint('difference: ', rel_error(out, correct_out))",
"ReLU layer: backward\nNow implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:",
"np.random.seed(231)\nx = np.random.randn(10, 10)\ndout = np.random.randn(*x.shape)\n\ndx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)\n\n_, cache = relu_forward(x)\ndx = relu_backward(dout, cache)\n\n# The error should be around 3e-12\nprint('Testing relu_backward function:')\nprint('dx error: ', rel_error(dx_num, dx))",
"\"Sandwich\" layers\nThere are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.\nFor now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:",
"from cs231n.layer_utils import affine_relu_forward, affine_relu_backward\nnp.random.seed(231)\nx = np.random.randn(2, 3, 4)\nw = np.random.randn(12, 10)\nb = np.random.randn(10)\ndout = np.random.randn(2, 10)\n\nout, cache = affine_relu_forward(x, w, b)\ndx, dw, db = affine_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)\n\nprint('Testing affine_relu_forward:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))",
"Loss layers: Softmax and SVM\nYou implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.\nYou can make sure that the implementations are correct by running the following:",
"np.random.seed(231)\nnum_classes, num_inputs = 10, 50\nx = 0.001 * np.random.randn(num_inputs, num_classes)\ny = np.random.randint(num_classes, size=num_inputs)\n\ndx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)\nloss, dx = svm_loss(x, y)\n\n# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9\nprint('Testing svm_loss:')\nprint('loss: ', loss)\nprint('dx error: ', rel_error(dx_num, dx))\n\ndx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)\nloss, dx = softmax_loss(x, y)\n\n# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8\nprint('\\nTesting softmax_loss:')\nprint('loss: ', loss)\nprint('dx error: ', rel_error(dx_num, dx))",
"Two-layer network\nIn the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.\nOpen the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.",
"np.random.seed(231)\nN, D, H, C = 3, 5, 50, 7\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=N)\n\nstd = 1e-3\nmodel = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)\n\nprint('Testing initialization ... ')\nW1_std = abs(model.params['W1'].std() - std)\nb1 = model.params['b1']\nW2_std = abs(model.params['W2'].std() - std)\nb2 = model.params['b2']\nassert W1_std < std / 10, 'First layer weights do not seem right'\nassert np.all(b1 == 0), 'First layer biases do not seem right'\nassert W2_std < std / 10, 'Second layer weights do not seem right'\nassert np.all(b2 == 0), 'Second layer biases do not seem right'\n\nprint('Testing test-time forward pass ... ')\nmodel.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)\nmodel.params['b1'] = np.linspace(-0.1, 0.9, num=H)\nmodel.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)\nmodel.params['b2'] = np.linspace(-0.9, 0.1, num=C)\nX = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T\nscores = model.loss(X)\ncorrect_scores = np.asarray(\n [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],\n [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],\n [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])\nscores_diff = np.abs(scores - correct_scores).sum()\nassert scores_diff < 1e-6, 'Problem with test-time forward pass'\n\nprint('Testing training loss (no regularization)')\ny = np.asarray([0, 5, 1])\nloss, grads = model.loss(X, y)\ncorrect_loss = 3.4702243556\nassert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'\n\nmodel.reg = 1.0\nloss, grads = model.loss(X, y)\ncorrect_loss = 26.5948426952\nassert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'\n\nfor reg in [0.0, 0.7]:\n print('Running numeric gradient check with reg = ', reg)\n model.reg = reg\n loss, grads = model.loss(X, y)\n\n for name in sorted(grads):\n f = 
lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))",
"Solver\nIn the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.\nOpen the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.",
"model = TwoLayerNet()\nsolver = None\n\n##############################################################################\n# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #\n# 50% accuracy on the validation set. #\n##############################################################################\nmodel.reg = 1e-2\n\nsolver = Solver(model, data, update_rule='sgd',\n optim_config={\n 'learning_rate': 2e-3,\n },\n lr_decay=0.9,\n num_epochs=10, batch_size=100,\n print_every=100)\nsolver.train()\n\nprint('The best accuracy on the validation set is ', solver.best_val_acc)\n##############################################################################\n# END OF YOUR CODE #\n##############################################################################\n\n# Run this cell to visualize training loss and train / val accuracy\n\nplt.subplot(2, 1, 1)\nplt.title('Training loss')\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('Iteration')\n\nplt.subplot(2, 1, 2)\nplt.title('Accuracy')\nplt.plot(solver.train_acc_history, '-o', label='train')\nplt.plot(solver.val_acc_history, '-o', label='val')\nplt.plot([0.5] * len(solver.val_acc_history), 'k--')\nplt.xlabel('Epoch')\nplt.legend(loc='lower right')\nplt.gcf().set_size_inches(15, 12)\nplt.show()",
"Multilayer network\nNext you will implement a fully-connected network with an arbitrary number of hidden layers.\nRead through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.\nImplement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.\nInitial loss and gradient check\nAs a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?\nFor gradient checking, you should expect to see errors around 1e-6 or less.",
"np.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64)\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))",
"As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.",
"# TODO: Use a three-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 1e-2\nlearning_rate = 1e-2\nmodel = FullyConnectedNet([100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()",
"Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.",
"# TODO: Use a five-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nlearning_rate = 1e-3\nweight_scale = 1e-1\nmodel = FullyConnectedNet([100, 100, 100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()",
"Inline question:\nDid you notice anything about the comparative difficulty of training the three-layer net vs training the five-layer net?\nAnswer:\n\nThe five-layer net is more sensitive to the weight initialization scale and the learning rate\n\nUpdate rules\nSo far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.\nSGD+Momentum\nStochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.\nOpen the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.",
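The momentum rule itself is only a couple of lines. A hedged sketch of the classic velocity form (variable names chosen to mirror, but not copy, the optim.py API):

```python
import numpy as np

def sgd_momentum_sketch(w, dw, v, learning_rate=1e-3, momentum=0.9):
    # Integrate the gradient into a velocity, then step along the velocity
    v = momentum * v - learning_rate * dw
    w = w + v
    return w, v

w = np.zeros(3)
v = np.zeros(3)
dw = np.ones(3)
w, v = sgd_momentum_sketch(w, dw, v, learning_rate=0.1, momentum=0.9)
```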
"from cs231n.optim import sgd_momentum\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nv = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-3, 'velocity': v}\nnext_w, _ = sgd_momentum(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],\n [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],\n [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],\n [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])\nexpected_velocity = np.asarray([\n [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],\n [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],\n [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],\n [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])\n\nprint('next_w error: ', rel_error(next_w, expected_next_w))\nprint('velocity error: ', rel_error(expected_velocity, config['velocity']))",
"Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.",
"num_train = 4000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nsolvers = {}\n\nfor update_rule in ['sgd', 'sgd_momentum']:\n print('running with ', update_rule)\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate':1e-2,\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print()\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in list(solvers.items()):\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"RMSProp and Adam\nRMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.\nIn the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.\n[1] Tijmen Tieleman and Geoffrey Hinton. \"Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.\" COURSERA: Neural Networks for Machine Learning 4 (2012).\n[2] Diederik Kingma and Jimmy Ba, \"Adam: A Method for Stochastic Optimization\", ICLR 2015.",
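As a sketch of the "running average of second moments" idea (illustrative, not the file's implementation), the core of RMSProp can be written as:

```python
import numpy as np

def rmsprop_sketch(w, dw, cache, learning_rate=1e-2, decay_rate=0.99, eps=1e-8):
    # Keep an exponential running average of squared gradients,
    # then normalize each step by its square root
    cache = decay_rate * cache + (1 - decay_rate) * dw**2
    w = w - learning_rate * dw / (np.sqrt(cache) + eps)
    return w, cache

w = np.zeros(2)
cache = np.zeros(2)
dw = np.array([1.0, -1.0])
w, cache = rmsprop_sketch(w, dw, cache)
```

Adam adds a first-moment running average of the gradient plus bias-correction terms on top of the same idea.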
"# Test RMSProp implementation; you should see errors less than 1e-7\nfrom cs231n.optim import rmsprop\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\ncache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'cache': cache}\nnext_w, _ = rmsprop(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],\n [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],\n [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],\n [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])\nexpected_cache = np.asarray([\n [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],\n [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],\n [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],\n [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])\n\nprint('next_w error: ', rel_error(expected_next_w, next_w))\nprint('cache error: ', rel_error(expected_cache, config['cache']))\n\n# Test Adam implementation; you should see errors around 1e-7 or less\nfrom cs231n.optim import adam\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nm = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\nv = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}\nnext_w, _ = adam(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],\n [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],\n [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],\n [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])\nexpected_v = np.asarray([\n [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],\n [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],\n [ 0.59414753, 0.58362676, 0.57311152, 
0.56260183, 0.55209767,],\n [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])\nexpected_m = np.asarray([\n [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],\n [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],\n [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],\n [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])\n\nprint('next_w error: ', rel_error(expected_next_w, next_w))\nprint('v error: ', rel_error(expected_v, config['v']))\nprint('m error: ', rel_error(expected_m, config['m']))",
"Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:",
"learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}\nfor update_rule in ['adam', 'rmsprop']:\n print('running with ', update_rule)\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': learning_rates[update_rule]\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print()\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in list(solvers.items()):\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Train a good model!\nTrain the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.\nIf you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.\nYou might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.",
"best_model = None\n################################################################################\n# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #\n# find batch normalization and dropout useful. Store your best model in the #\n# best_model variable. #\n################################################################################\nbest_val_acc = -1\n\nfor learning_rate in [1e-2]:\n for d_o in [0.75]:\n \n model = FullyConnectedNet([500],\n weight_scale=1e-1, dtype=np.float64, dropout=d_o, use_batchnorm=True)\n\n solver = Solver(model, data,\n print_every=100, num_epochs=10, batch_size=100,\n update_rule='adam',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\n solver.train()\n\n if solver.best_val_acc > best_val_acc:\n best_val_acc = solver.best_val_acc\n best_model = model\n\n print('learning_rate: %f, drop_out: %f, val_acc: %f, best_acc: %f' %\n (learning_rate, d_o, solver.best_val_acc, best_val_acc))\n################################################################################\n# END OF YOUR CODE #\n################################################################################",
"Test your model\nRun your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.",
"y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)\ny_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)\nprint('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())\nprint('Test set accuracy: ', (y_test_pred == data['y_test']).mean())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fonnesbeck/PyMC3_Oslo
|
notebooks/c. Bayesian Neural Network.ipynb
|
cc0-1.0
|
[
"Variational Inference: Bayesian Neural Networks\n(c) 2016 by Thomas Wiecki\nOriginal blog post: http://twiecki.github.io/blog/2016/06/01/bayesian-deep-learning/\nCurrent trends in Machine Learning\nThere are currently three big trends in machine learning: Probabilistic Programming, Deep Learning and \"Big Data\". Inside of PP, a lot of innovation is in making things scale using Variational Inference. In this blog post, I will show how to use Variational Inference in PyMC3 to fit a simple Bayesian Neural Network. I will also discuss how bridging Probabilistic Programming and Deep Learning can open up very interesting avenues to explore in future research.\nProbabilistic Programming at scale\nProbabilistic Programming allows very flexible creation of custom probabilistic models and is mainly concerned with insight and learning from your data. The approach is inherently Bayesian so we can specify priors to inform and constrain our models and get uncertainty estimation in form of a posterior distribution. Using MCMC sampling algorithms we can draw samples from this posterior to very flexibly estimate these models. PyMC3 and Stan are the current state-of-the-art tools to construct and estimate these models. One major drawback of sampling, however, is that it's often very slow, especially for high-dimensional models. That's why more recently, variational inference algorithms have been developed that are almost as flexible as MCMC but much faster. Instead of drawing samples from the posterior, these algorithms instead fit a distribution (e.g. normal) to the posterior turning a sampling problem into an optimization problem. ADVI -- Automatic Differentiation Variational Inference -- is implemented in PyMC3 and Stan, as well as a new package called Edward which is mainly concerned with Variational Inference. 
\nUnfortunately, when it comes to traditional ML problems like classification or (non-linear) regression, Probabilistic Programming often plays second fiddle (in terms of accuracy and scalability) to more algorithmic approaches like ensemble learning (e.g. random forests or gradient boosted regression trees).\nDeep Learning\nNow in its third renaissance, deep learning has been making headlines repeatedly by dominating almost any object recognition benchmark, kicking ass at Atari games, and beating the world-champion Lee Sedol at Go. From a statistical point of view, Neural Networks are extremely good non-linear function approximators and representation learners. While mostly known for classification, they have been extended to unsupervised learning with AutoEncoders and in all sorts of other interesting ways (e.g. Recurrent Networks, or MDNs to estimate multimodal distributions). Why do they work so well? No one really knows as the statistical properties are still not fully understood.\nA large part of the innovation in deep learning is the ability to train these extremely complex models. This rests on several pillars:\n* Speed: utilizing the GPU allowed for much faster processing.\n* Software: frameworks like Theano and TensorFlow allow flexible creation of abstract models that can then be optimized and compiled to CPU or GPU.\n* Learning algorithms: training on sub-sets of the data -- stochastic gradient descent -- allows us to train these models on massive amounts of data. 
Techniques like drop-out avoid overfitting.\n* Architectural: A lot of innovation comes from changing the input layers, like for convolutional neural nets, or the output layers, like for MDNs.\nBridging Deep Learning and Probabilistic Programming\nOn one hand we have Probabilistic Programming which allows us to build rather small and focused models in a very principled and well-understood way to gain insight into our data; on the other hand we have deep learning which uses many heuristics to train huge and highly complex models that are amazing at prediction. Recent innovations in variational inference allow probabilistic programming to scale model complexity as well as data size. We are thus at the cusp of being able to combine these two approaches to hopefully unlock new innovations in Machine Learning. For more motivation, see also Dustin Tran's recent blog post.\nWhile this would allow Probabilistic Programming to be applied to a much wider set of interesting problems, I believe this bridging also holds great promise for innovations in Deep Learning. Some ideas are:\n* Uncertainty in predictions: As we will see below, the Bayesian Neural Network informs us about the uncertainty in its predictions. I think uncertainty is an underappreciated concept in Machine Learning as it's clearly important for real-world applications. But it could also be useful in training. For example, we could train the model specifically on samples it is most uncertain about.\n* Uncertainty in representations: We also get uncertainty estimates of our weights which could inform us about the stability of the learned representations of the network.\n* Regularization with priors: Weights are often L2-regularized to avoid overfitting, this very naturally becomes a Gaussian prior for the weight coefficients. 
We could, however, imagine all kinds of other priors, like spike-and-slab to enforce sparsity (this would be more like using the L1-norm).\n* Transfer learning with informed priors: If we wanted to train a network on a new object recognition data set, we could bootstrap the learning by placing informed priors centered around weights retrieved from other pre-trained networks, like GoogLeNet. \n* Hierarchical Neural Networks: A very powerful approach in Probabilistic Programming is hierarchical modeling that allows pooling of things that were learned on sub-groups to the overall population (see my tutorial on Hierarchical Linear Regression in PyMC3). Applied to Neural Networks, in hierarchical data sets, we could train individual neural nets to specialize on sub-groups while still being informed about representations of the overall population. For example, imagine a network trained to classify car models from pictures of cars. We could train a hierarchical neural network where a sub-neural network is trained to tell apart models from only a single manufacturer. The intuition being that all cars from a certain manufacturer share certain similarities so it would make sense to train individual networks that specialize on brands. However, due to the individual networks being connected at a higher layer, they would still share information with the other specialized sub-networks about features that are useful to all brands. Interestingly, different layers of the network could be informed by various levels of the hierarchy -- e.g. early layers that extract visual lines could be identical in all sub-networks while the higher-order representations would be different. The hierarchical model would learn all that from the data.\n* Other hybrid architectures: We can more freely build all kinds of neural networks. 
For example, Bayesian non-parametrics could be used to flexibly adjust the size and shape of the hidden layers to optimally scale the network architecture to the problem at hand during training. Currently, this requires costly hyper-parameter optimization and a lot of tribal knowledge.\nBayesian Neural Networks in PyMC3\nGenerating data\nFirst, let's generate some toy data -- a simple binary classification problem that's not linearly separable.",
"%matplotlib inline\nimport pymc3 as pm\nimport theano.tensor as T\nimport theano\nimport sklearn\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('white')\nfrom sklearn import datasets\nfrom sklearn.preprocessing import scale\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.datasets import make_moons\n\nX, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)\nX = scale(X)\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)\n\nfig, ax = plt.subplots()\nax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')\nax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')\nsns.despine(); ax.legend()\nax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');",
"Model specification\nA neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.",
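Before the probabilistic version below, it can help to see the deterministic forward pass the model encodes: tanh through two hidden layers, then a sigmoid output. A plain-numpy sketch with made-up weights (the priors over these weight arrays are what the PyMC3 model adds):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(10, 2)      # 10 points, 2 input features
w1 = rng.randn(2, 5)      # input -> hidden layer 1
w2 = rng.randn(5, 5)      # hidden layer 1 -> hidden layer 2
w_out = rng.randn(5)      # hidden layer 2 -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

act1 = np.tanh(X.dot(w1))
act2 = np.tanh(act1.dot(w2))
p_class1 = sigmoid(act2.dot(w_out))  # per-point probability of class 1
```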
"# Trick: Turn inputs and outputs into shared variables. \n# It's still the same thing, but we can later change the values of the shared variable \n# (to switch in the test-data later) and pymc3 will just use the new data. \n# Kind-of like a pointer we can redirect.\n# For more info, see: http://deeplearning.net/software/theano/library/compile/shared.html\nann_input = theano.shared(X_train)\nann_output = theano.shared(Y_train)\n\nn_hidden = 5\n\n# Initialize random weights between each layer\ninit_1 = np.random.randn(X.shape[1], n_hidden)\ninit_2 = np.random.randn(n_hidden, n_hidden)\ninit_out = np.random.randn(n_hidden)\n \nwith pm.Model() as neural_network:\n # Weights from input to hidden layer\n weights_in_1 = pm.Normal('w_in_1', 0, sd=1, \n shape=(X.shape[1], n_hidden), \n testval=init_1)\n \n # Weights from 1st to 2nd layer\n weights_1_2 = pm.Normal('w_1_2', 0, sd=1, \n shape=(n_hidden, n_hidden), \n testval=init_2)\n \n # Weights from hidden layer to output\n weights_2_out = pm.Normal('w_2_out', 0, sd=1, \n shape=(n_hidden,), \n testval=init_out)\n \n # Build neural-network using tanh activation function\n act_1 = T.tanh(T.dot(ann_input, \n weights_in_1))\n act_2 = T.tanh(T.dot(act_1, \n weights_1_2))\n act_out = T.nnet.sigmoid(T.dot(act_2, \n weights_2_out))\n \n # Binary classification -> Bernoulli likelihood\n out = pm.Bernoulli('out', \n act_out,\n observed=ann_output)",
"That's not so bad. The Normal priors help regularize the weights. Usually we would add a constant b to the inputs but I omitted it here to keep the code cleaner.\nVariational Inference: Scaling model complexity\nWe could now just run an MCMC sampler like NUTS which works pretty well in this case but as I already mentioned, this will become very slow as we scale our model up to deeper architectures with more layers.\nInstead, we will use the brand-new ADVI variational inference algorithm which was recently added to PyMC3. This is much faster and will scale better. Note that this is a mean-field approximation so we ignore correlations in the posterior.",
"%%time\n\nwith neural_network:\n # Run ADVI which returns posterior means, standard deviations, and the evidence lower bound (ELBO)\n v_params = pm.variational.advi(n=50000)",
"< 40 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU.\nAs samples are more convenient to work with, we can very quickly draw samples from the variational posterior using sample_vp() (this is just sampling from Normal distributions, so not at all the same as MCMC):",
"with neural_network:\n trace = pm.variational.sample_vp(v_params, draws=5000)",
"Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.",
"plt.plot(v_params.elbo_vals)\nplt.ylabel('ELBO')\nplt.xlabel('iteration')",
"Now that we trained our model, lets predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).",
"# Replace shared variables with testing set\nann_input.set_value(X_test)\nann_output.set_value(Y_test)\n\n# Create posterior predictive samples\nppc = pm.sample_ppc(trace, model=neural_network, samples=500)\n\n# Use probability of > 0.5 to assume prediction of class 1\npred = ppc['out'].mean(axis=0) > 0.5\n\nfig, ax = plt.subplots()\nax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])\nax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')\nsns.despine()\nax.set(title='Predicted labels in testing set', xlabel='X', ylabel='Y');\n\nprint('Accuracy = {}%'.format((Y_test == pred).mean() * 100))",
"Hey, our neural network did all right!\nLets look at what the classifier has learned\nFor this, we evaluate the class probability predictions on a grid over the whole input space.",
"grid = np.mgrid[-3:3:100j,-3:3:100j]\ngrid_2d = grid.reshape(2, -1).T\ndummy_out = np.ones(grid.shape[1], dtype=np.int8)\n\nann_input.set_value(grid_2d)\nann_output.set_value(dummy_out)\n\n# Create posterior predictive samples\nppc = pm.sample_ppc(trace, model=neural_network, samples=500)",
"Probability surface",
"cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)\nfig, ax = plt.subplots(figsize=(10, 6))\ncontour = ax.contourf(*grid, ppc['out'].mean(axis=0).reshape(100, 100), cmap=cmap)\nax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])\nax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')\ncbar = plt.colorbar(contour, ax=ax)\n_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');\ncbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');",
"Uncertainty in predicted value\nSo far, everything I showed we could have done with a non-Bayesian Neural Network. The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. Here is what that looks like:",
"cmap = sns.cubehelix_palette(light=1, as_cmap=True)\nfig, ax = plt.subplots(figsize=(10, 6))\ncontour = ax.contourf(*grid, ppc['out'].std(axis=0).reshape(100, 100), cmap=cmap)\nax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])\nax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')\ncbar = plt.colorbar(contour, ax=ax)\n_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');\ncbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');",
"We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples from that high-uncertainty region.\nMini-batch ADVI: Scaling data size\nSo far, we have trained our model on all data at once. Obviously this won't scale to something like ImageNet. Moreover, training on mini-batches of data (stochastic gradient descent) avoids local minima and can lead to faster convergence.\nFortunately, ADVI can be run on mini-batches as well. It just requires some setting up:",
"# Set back to original data to retrain\nann_input.set_value(X_train)\nann_output.set_value(Y_train)\n\n# Tensors and RV that will be using mini-batches\nminibatch_tensors = [ann_input, ann_output]\nminibatch_RVs = [out]\n\n# Generator that returns mini-batches in each iteration\ndef create_minibatch(data):\n rng = np.random.RandomState(0)\n \n while True:\n # Return random data samples of set size 50 each iteration\n ixs = rng.randint(len(data), size=50)\n yield data[ixs]\n\nminibatches = [\n create_minibatch(X_train), \n create_minibatch(Y_train),\n]\n\ntotal_size = len(Y_train)",
"While the above might look a bit daunting, I really like the design. Especially the fact that you define a generator allows for great flexibility. In principle, we could just pull from a database there and not have to keep all the data in RAM. \nLet's pass those to advi_minibatch():",
"%%time\n\nwith neural_network:\n # Run advi_minibatch\n v_params = pm.variational.advi_minibatch(\n n=50000, minibatch_tensors=minibatch_tensors, \n minibatch_RVs=minibatch_RVs, minibatches=minibatches, \n total_size=total_size, learning_rate=1e-2, epsilon=1.0\n )\n\nwith neural_network: \n trace = pm.variational.sample_vp(v_params, draws=5000)\n\nplt.plot(v_params.elbo_vals)\nplt.ylabel('ELBO')\nplt.xlabel('iteration')\nsns.despine()",
"As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster.\nFor fun, we can also look at the trace. The point is that we also get uncertainty of our Neural Network weights.",
"pm.traceplot(trace);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pycrystem/pycrystem
|
doc/demos/03 Reference Standards - Dimension Calibrations - Rotation Calibrations.ipynb
|
gpl-3.0
|
[
"Calibrations\nThis notebook demonstrates how to calibrate real and reciprocal space coordinates of scanning electron diffraction data. Calibrations include correcting the diffraction pattern for lens distortions and determining the rotation between the scan and diffraction planes based on data acquired from reference standards.\nThis functionality has been checked to run in pyxem-0.11.0 (May 2020). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues\nContents\n\n<a href='#ini'> Load Data & Initialize Generator</a>\n<a href='#dis'> Determine Lens Distortions</a>\n<a href='#cal'> Determine Real & Reciprocal Space Calibrations</a>\n<a href='#rot'> Determine Real & Reciprocal Space Rotation</a> \n\nImport pyxem, required libraries and pyxem modules",
"%matplotlib inline\nimport numpy as np\nimport pyxem as pxm\nimport hyperspy.api as hs\nfrom pyxem.libraries.calibration_library import CalibrationDataLibrary\nfrom pyxem.generators.calibration_generator import CalibrationGenerator",
"Download the data for this demo from here and put it in the directory with the notebooks:\nhttps://drive.google.com/drive/folders/1guzxUcHYNkB3CMClQ-Dhv9cCc1-N15Fj?usp=sharing\n<a id='ini'></a>\n1. Load Data & Initialize Generator\nLoad spatially averaged diffraction pattern from Au X-grating for distortion calibration",
"au_dpeg = hs.load('./data/03/au_xgrating_20cm.tif')\nau_dpeg.plot(vmax=1)",
"Load a VDF image of Au X-grating for scan pixel calibration",
"au_im = hs.load('./data/03/au_xgrating_100kX.hspy')\nau_im.plot()",
"Load spatially averaged diffraction pattern from MoO3 standard for rotation calibration",
"moo3_dpeg = hs.load('./data/03/moo3_20cm.tif')\nmoo3_dpeg.plot(vmax=1)",
"Load a VDF image of MoO3 standard for rotation calibration",
"moo3_im = hs.load('./data/03/moo3_100kX.tif')\nmoo3_im.plot()",
"Initialise a CalibrationGenerator with the CalibrationDataLibrary",
"#Calibration Standard can only be gold for now\ncal = CalibrationGenerator(diffraction_pattern=au_dpeg,\n grating_image=au_im)",
"<a id='dis'></a>\n2. Determine Lens Distortions\nLens distortions are assumed to be dominated by elliptical distortion due to the projector lens system. See, for example: https://www.sciencedirect.com/science/article/pii/S0304399105001087?via%3Dihub\nDistortion correction is based on measuring the ellipticity of a ring pattern obtained from an Au X-grating calibration standard in scanning mode.\nDetermine distortion correction matrix by ring fitting",
"cal.get_elliptical_distortion(mask_radius=10,\n scale=100, amplitude=1000,\n asymmetry=0.9,spread=2)",
"Obtain residuals before and after distortion correction and plot them to inspect; the aim is for any differences to be small and circularly symmetric",
"residuals = cal.get_distortion_residuals(mask_radius=10, spread=2)\nresiduals.plot(cmap='RdBu', vmax=0.04)",
"Plot distortion corrected diffraction pattern with adjustable reference circle for inspection",
"cal.plot_corrected_diffraction_pattern(vmax=0.1)",
"Check the affine matrix, which may be applied to other data",
"cal.affine_matrix",
"Inspect the ring fitting parameters",
"cal.ring_params",
"Calculate correction matrix and confirm that in this case it is equal to the affine matrix",
"cal.get_correction_matrix()",
"<a id='cal'></a>\n3. Determining Real & Reciprocal Space Scales\nDetermine the diffraction pattern calibration in reciprocal Angstroms per pixel",
"cal.get_diffraction_calibration(mask_length=30,\n linewidth=5)",
"Plot the calibrated diffraction data to check it looks about right",
"cal.plot_calibrated_data(data_to_plot='au_x_grating_dp',\n cmap='magma', vmax=0.1)",
"Plot the cross grating image data to define the line along which to take trace",
"cal.grating_image.plot()\nline = hs.roi.Line2DROI(x1=4.83957, y1=44.4148, x2=246.46, y2=119.159, linewidth=5.57199)\nline.add_widget(cal.grating_image)\n\ntrace = line(cal.grating_image)\ntrace = trace.as_signal1D(spectral_axis=0)\ntrace.plot()",
"Obtain the navigation calibration from the trace",
"cal.get_navigation_calibration(line_roi=line, x1=40.,x2=232.,\n n=3, xspace=500.)"
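The navigation calibration essentially divides a known grating spacing (the `xspace` argument, in nm) by the measured peak-to-peak distance of the trace in scan pixels. A self-contained numpy sketch with a synthetic trace (illustrative numbers only, not the Au X-grating data):

```python
import numpy as np

# Synthetic line trace across a grating: one intensity peak every 25 pixels.
x = np.arange(200)
trace = np.cos(2.0 * np.pi * x / 25.0)

# Locate peaks as simple local maxima.
peaks = [i for i in range(1, len(trace) - 1)
         if trace[i] > trace[i - 1] and trace[i] > trace[i + 1]]

pixel_spacing = float(np.mean(np.diff(peaks)))  # ~25 px between grating lines
known_spacing_nm = 500.0                        # grating period (cf. xspace above)
calibration = known_spacing_nm / pixel_spacing  # nm per scan pixel
print(calibration)  # → 20.0 for this synthetic trace
```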
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rashikaranpuria/Machine-Learning-Specialization
|
Clustering_&_Retrieval/Week5/5_lda_blank.ipynb
|
mit
|
[
"Latent Dirichlet Allocation for Text Data\nIn this assignment you will\n\napply standard preprocessing techniques on Wikipedia text data\nuse GraphLab Create to fit a Latent Dirichlet allocation (LDA) model\nexplore and interpret the results, including topic keywords and topic assignments for documents\n\nRecall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both \"Politics\" and \"World News\" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.\nWith this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document. \nNote to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.\nText Data Preprocessing\nWe'll start by importing our familiar Wikipedia dataset.",
"import graphlab as gl\nimport numpy as np\nimport matplotlib.pyplot as plt \n\n%matplotlib inline\n\n# import wiki data\nwiki = gl.SFrame('people_wiki.gl/')\nwiki",
"In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be. \nTherefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from GraphLab Create:",
"wiki_docs = gl.text_analytics.count_words(wiki['text'])\nwiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)",
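Under the hood these two preprocessing steps amount to counting tokens and dropping a fixed set of common words. A plain-Python sketch of the same idea, using a small illustrative stopword set rather than GraphLab's built-in list:

```python
from collections import Counter

STOPWORDS = {'the', 'a', 'an', 'and', 'of', 'in', 'to'}  # illustrative subset only

def bag_of_words(text):
    """Count words in `text`, excluding stopwords (cf. dict_trim_by_keys)."""
    counts = Counter(text.lower().split())
    return {w: c for w, c in counts.items() if w not in STOPWORDS}

print(bag_of_words('the cat and the dog in the house'))
# stopwords 'the', 'and', 'in' are trimmed; only content words remain
```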
"Model fitting and interpretation\nIn the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module.\nNote: This may take several minutes to run.",
"topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)",
"GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.",
"topic_model",
"It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will\n\nget the top words in each topic and use these to identify topic themes\npredict topic distributions for some example documents\ncompare the quality of LDA \"nearest neighbors\" to the NN output from the first assignment\nunderstand the role of model hyperparameters alpha and gamma\n\nLoad a fitted topic model\nThe method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization.\nIt is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model. \nWe recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.",
"topic_model = gl.load_model('lda_assignment_topic_model')",
"Identifying topic themes by top words\nWe'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA. \nIn the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme and that all the topics are relatively distinct.\nWe can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic.\nQuiz Question: Identify the top 3 most probable words for the first topic. \n Quiz Question: What is the sum of the probabilities assigned to the top 50 words in the 3rd topic?\nLet's look at the top 10 words for each topic to see if we can identify any themes:",
"topics = topic_model.get_topics()\ntopics_1 = topics[topics['topic']==0]\nprint topics_1[0:3]['word']\ntopics_2 = topics[topics['topic']==1]\n# print topics_2\ntopics_3 = topics[topics['topic']==2]\nprint topics_3[0:50]['score'].sum()\n\n[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]",
"We propose the following themes for each topic:\n\ntopic 0: Science and research\ntopic 1: Team sports\ntopic 2: Music, TV, and film\ntopic 3: American college and politics\ntopic 4: General politics\ntopic 5: Art and publishing\ntopic 6: Business\ntopic 7: International athletics\ntopic 8: Great Britain and Australia\ntopic 9: International music\n\nWe'll save these themes for later:",
"themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \\\n 'art and publishing','Business','international athletics','Great Britain and Australia','international music']",
"Measuring the importance of top words\nWe can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.\nWe'll do this with two visualizations of the weights for the top words in each topic:\n - the weights of the top 100 words, sorted by the size\n - the total weight of the top 10 words\nHere's a plot for the top 100 words by weight in each topic:",
"for i in range(10):\n plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])\nplt.xlabel('Word rank')\nplt.ylabel('Probability')\nplt.title('Probabilities of Top 100 Words in each Topic')",
"In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!\nNext we plot the total weight assigned by each topic to its top 10 words:",
"top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)]\n\nind = np.arange(10)\nwidth = 0.5\n\nfig, ax = plt.subplots()\n\nax.bar(ind-(width/2),top_probs,width)\nax.set_xticks(ind)\n\nplt.xlabel('Topic')\nplt.ylabel('Probability')\nplt.title('Total Probability of Top 10 Words in each Topic')\nplt.xlim(-0.5,9.5)\nplt.ylim(0,0.15)\nplt.show()",
"Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.\nFinally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all.\nTopic distributions for some example documents\nAs we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.\nWe'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition.\nTopic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignment variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. 
Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama:",
"obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])\npred1 = topic_model.predict(obama, output_type='probability')\npred2 = topic_model.predict(obama, output_type='probability')\nprint(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))",
"To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:",
"def average_predictions(model, test_document, num_trials=100):\n avg_preds = np.zeros((model.num_topics))\n for i in range(num_trials):\n avg_preds += model.predict(test_document, output_type='probability')[0]\n avg_preds = avg_preds/num_trials\n result = gl.SFrame({'topics':themes, 'average predictions':avg_preds})\n result = result.sort('average predictions', ascending=False)\n return result\n\nprint average_predictions(topic_model, obama, 100)\n\nbush = gl.SArray([wiki_docs[int(np.where(wiki['name']=='George W. Bush')[0])]])\nprint average_predictions(topic_model, bush, 100)\n\ngerrard = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Steven Gerrard')[0])]])\nprint average_predictions(topic_model, gerrard, 100)",
"Quiz Question: What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions.\nQuiz Question: What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions.\nComparing LDA to nearest neighbors for document retrieval\nSo far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations. \nIn this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment. \nWe'll start by creating the LDA topic distribution representation for each document:",
"wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')",
"Next we add the TF-IDF document representations:",
"wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])\nwiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])",
"For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model:",
"model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],\n method='brute_force', distance='cosine')\nmodel_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],\n method='brute_force', distance='cosine')",
"Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:",
"model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)\n\nmodel_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)\n\nRodriguez_nearest = model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)\nprint Rodriguez_nearest[Rodriguez_nearest['reference_label']=='Mariano Rivera']\n\nRodriguez_nearest_lda = model_lda_rep.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)\nprint Rodriguez_nearest_lda[Rodriguez_nearest_lda['reference_label']=='Mariano Rivera']\n\ntype(Rodriguez_nearest_lda['reference_label'])",
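The queries above return neighbors ordered nearest-first, so finding the rank of a particular neighbor is just a matter of locating its position in that ordered list, as the quiz hint suggests. A toy sketch with hypothetical names and ordering (not the actual query result):

```python
# Hypothetical query result: neighbor names ordered nearest-first,
# with the query document itself at rank 0 (distance 0).
neighbors = ['Alex Rodriguez', 'Derek Jeter', 'Mariano Rivera', 'David Ortiz']

# list.index returns the 0-based position of the first match.
k = neighbors.index('Mariano Rivera')
print(k)  # → 2
```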
"Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents. \nWith TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are \"close\" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada. \nOur LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be \"close\" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies.\nQuiz Question: Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.)\nQuiz Question: Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? 
(Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.)\nUnderstanding the role of LDA model hyperparameters\nFinally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic. \nIn the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document \"likes\" a topic (in the case of alpha) or how much each topic \"likes\" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences \"smoother\" over topics, and gamma makes the topic preferences \"smoother\" over words.\nOur goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model.\nQuiz Question: What was the value of alpha used to fit our original topic model? \nQuiz Question: What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses \"beta\" instead of \"gamma\" to refer to the hyperparameter that influences topic distributions over words.\nWe'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model:\n - tpm_low_alpha, a model trained with alpha = 1 and default gamma\n - tpm_high_alpha, a model trained with alpha = 50 and default gamma",
"tpm_low_alpha = gl.load_model('lda_low_alpha')\ntpm_high_alpha = gl.load_model('lda_high_alpha')",
"Changing the hyperparameter alpha\nSince alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.",
"a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1]\nb = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1]\nc = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1]\nind = np.arange(len(a))\nwidth = 0.3\n\ndef param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab):\n fig = plt.figure()\n ax = fig.add_subplot(111)\n\n b1 = ax.bar(ind, a, width, color='lightskyblue')\n b2 = ax.bar(ind+width, b, width, color='lightcoral')\n b3 = ax.bar(ind+(2*width), c, width, color='gold')\n\n ax.set_xticks(ind+width)\n ax.set_xticklabels(range(10))\n ax.set_ylabel(ylab)\n ax.set_xlabel(xlab)\n ax.set_ylim(0,ylim)\n ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param])\n\n plt.tight_layout()\n \nparam_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha',\n xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article')",
"Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.\nQuiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the low alpha model? Use the average results from 100 topic predictions.\nQuiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the high alpha model? Use the average results from 100 topic predictions.\nChanging the hyperparameter gamma\nJust as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models.\nNow we will consider the following two models:\n - tpm_low_gamma, a model trained with gamma = 0.02 and default alpha\n - tpm_high_gamma, a model trained with gamma = 0.5 and default alpha",
"del tpm_low_alpha\ndel tpm_high_alpha\ntpm_low_gamma = gl.load_model('lda_low_gamma')\ntpm_high_gamma = gl.load_model('lda_high_gamma')\n\na_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]\nb_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]\nc_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]\n\na_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]\nb_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]\nc_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]\n\nind = np.arange(len(a))\nwidth = 0.3\n \nparam_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',\n xlab='Topics (sorted by weight of top 100 words)', \n ylab='Total Probability of Top 100 Words')\n\nparam_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',\n xlab='Topics (sorted by weight of bottom 1000 words)',\n ylab='Total Probability of Bottom 1000 Words')",
"From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary.\nQuiz Question: For each topic of the low gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument).\nQuiz Question: For each topic of the high gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument).\nWe have now seen how the hyperparameters alpha and gamma influence the characteristics of our LDA topic model, but we haven't said anything about what settings of alpha or gamma are best. We know that these parameters are responsible for controlling the smoothness of the topic distributions for documents and word distributions for topics, but there's no simple conversion between smoothness of these distributions and quality of the topic model. In reality, there is no universally \"best\" choice for these parameters. Instead, finding a good topic model requires that we be able to both explore the output (as we did by looking at the topics and checking some topic predictions for documents) and understand the impact of hyperparameter settings (as we have in this section)."
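The `cdf_cutoff` computation the final quiz questions hint at can be sketched in plain numpy: sort a topic's word probabilities in descending order and count how many words are needed before the cumulative sum reaches the target mass. With toy distributions (not the real model's weights):

```python
import numpy as np

def words_for_mass(probs, mass=0.5):
    """Number of top-probability words whose weights sum to at least `mass`."""
    ordered = np.sort(np.asarray(probs, dtype=float))[::-1]
    return int(np.searchsorted(np.cumsum(ordered), mass)) + 1

peaked = [0.4, 0.2, 0.1, 0.1, 0.1, 0.1]   # low-gamma-like: mass concentrated
uniform = [0.2, 0.2, 0.2, 0.2, 0.2]       # high-gamma-like: mass spread out

print(words_for_mass(peaked))   # → 2  (0.4 + 0.2 >= 0.5)
print(words_for_mass(uniform))  # → 3
```

This matches the qualitative picture above: smoother (higher-gamma) topics need more words to cover the same probability mass.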
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Resly/pipeline
|
jupyterhub.ml/notebooks/spark/.ipynb_checkpoints/Deploy_SparkML_Airbnb_LinearRegression-checkpoint.ipynb
|
apache-2.0
|
[
"Step 0: Load Libraries, Data, and SparkSession",
"# You may need to Reconnect (more than Restart) the Kernel to pick up changes to these sett\nimport os\n\nmaster = '--master spark://127.0.0.1:47077'\nconf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m'\npackages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1'\njars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar'\npy_files = '--py-files /root/lib/jpmml.py'\n\nos.environ['PYSPARK_SUBMIT_ARGS'] = master \\\n + ' ' + conf \\\n + ' ' + packages \\\n + ' ' + jars \\\n + ' ' + py_files \\\n + ' ' + 'pyspark-shell'\n\nprint(os.environ['PYSPARK_SUBMIT_ARGS'])\n\nfrom pyspark.ml.linalg import Vectors\nfrom pyspark.ml.feature import VectorAssembler, StandardScaler\nfrom pyspark.ml.feature import OneHotEncoder, StringIndexer\nfrom pyspark.ml import Pipeline, PipelineModel\nfrom pyspark.ml.regression import LinearRegression\n\nfrom pyspark.sql import SparkSession\n\nspark = SparkSession.builder.getOrCreate()\n\ndf = spark.read.format(\"csv\") \\\n .option(\"inferSchema\", \"true\").option(\"header\", \"true\") \\\n .load(\"hdfs://127.0.0.1:39000/datasets/airbnb/airbnb.csv.bz2\")\n\ndf.registerTempTable(\"df\")\n\nprint(df.head())\n\nprint(df.count())",
"Step 1: Clean, Filter, and Summarize the Data",
"df_filtered = df.filter(\"price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null\")\n\ndf_filtered.registerTempTable(\"df_filtered\")\n\ndf_final = spark.sql(\"\"\"\n select\n id,\n city,\n case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 'DC', 'WA')\n then state\n else 'Other'\n end as state,\n space,\n cast(price as double) as price,\n cast(bathrooms as double) as bathrooms,\n cast(bedrooms as double) as bedrooms,\n room_type,\n host_is_super_host,\n cancellation_policy,\n cast(case when security_deposit is null\n then 0.0\n else security_deposit\n end as double) as security_deposit,\n price_per_bedroom,\n cast(case when number_of_reviews is null\n then 0.0\n else number_of_reviews\n end as double) as number_of_reviews,\n cast(case when extra_people is null\n then 0.0\n else extra_people\n end as double) as extra_people,\n instant_bookable,\n cast(case when cleaning_fee is null\n then 0.0\n else cleaning_fee\n end as double) as cleaning_fee,\n cast(case when review_scores_rating is null\n then 80.0\n else review_scores_rating\n end as double) as review_scores_rating,\n cast(case when square_feet is not null and square_feet > 100\n then square_feet\n when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0)\n then 350.0\n else 380 * bedrooms\n end as double) as square_feet\n from df_filtered\n\"\"\").persist()\n\ndf_final.registerTempTable(\"df_final\")\n\ndf_final.select(\"square_feet\", \"price\", \"bedrooms\", \"bathrooms\", \"cleaning_fee\").describe().show()\n\nprint(df_final.count())\n\nprint(df_final.schema)\n\n# Most popular cities\n\nspark.sql(\"\"\"\n select \n state,\n count(*) as ct,\n avg(price) as avg_price,\n max(price) as max_price\n from df_final\n group by state\n order by count(*) desc\n\"\"\").show()\n\n# Most expensive popular cities\n\nspark.sql(\"\"\"\n select \n city,\n count(*) as ct,\n avg(price) as avg_price,\n max(price) as max_price\n from df_final\n group by city\n 
order by avg(price) desc\n\"\"\").filter(\"ct > 25\").show()",
"Step 2: Define Continous and Categorical Features",
"continuous_features = [\"bathrooms\", \\\n \"bedrooms\", \\\n \"security_deposit\", \\\n \"cleaning_fee\", \\\n \"extra_people\", \\\n \"number_of_reviews\", \\\n \"square_feet\", \\\n \"review_scores_rating\"]\n\ncategorical_features = [\"room_type\", \\\n \"host_is_super_host\", \\\n \"cancellation_policy\", \\\n \"instant_bookable\", \\\n \"state\"]",
"Step 3: Split Data into Training and Validation",
"[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])",
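Spark's randomSplit assigns each row independently, so the 80/20 proportions are only approximate rather than exact. A numpy sketch of the same behavior (row count and seed chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows = 1000

# Each row is assigned independently: True -> training, False -> validation.
mask = rng.random(n_rows) < 0.8
n_train = int(mask.sum())
n_valid = n_rows - n_train

print(n_train, n_valid)  # roughly 800 / 200, not exactly
```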
"Step 4: Continous Feature Pipeline",
"continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol=\"unscaled_continuous_features\")\n\ncontinuous_feature_scaler = StandardScaler(inputCol=\"unscaled_continuous_features\", outputCol=\"scaled_continuous_features\", \\\n withStd=True, withMean=False)",
"Step 5: Categorical Feature Pipeline",
"categorical_feature_indexers = [StringIndexer(inputCol=x, \\\n outputCol=\"{}_index\".format(x)) \\\n for x in categorical_features]\n\ncategorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \\\n outputCol=\"oh_encoder_{}\".format(x.getOutputCol() )) \\\n for x in categorical_feature_indexers]",
"Step 6: Assemble our features and feature pipeline",
"feature_cols_lr = [x.getOutputCol() \\\n for x in categorical_feature_one_hot_encoders]\nfeature_cols_lr.append(\"scaled_continuous_features\")\n\nfeature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \\\n outputCol=\"features_lr\")",
"Step 7: Train a Linear Regression Model",
"linear_regression = LinearRegression(featuresCol=\"features_lr\", \\\n labelCol=\"price\", \\\n predictionCol=\"price_prediction\", \\\n maxIter=10, \\\n regParam=0.3, \\\n elasticNetParam=0.8)\n\nestimators_lr = \\\n [continuous_feature_assembler, continuous_feature_scaler] \\\n + categorical_feature_indexers + categorical_feature_one_hot_encoders \\\n + [feature_assembler_lr] + [linear_regression]\n\npipeline = Pipeline(stages=estimators_lr)\n\npipeline_model = pipeline.fit(training_dataset)\n\nprint(pipeline_model)",
"TODO: Step 8: Validate Linear Regression Model\nStep 9: Convert PipelineModel to PMML",
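Step 8 is left as a TODO above; a minimal sketch of what that validation could look like with Spark ML's `RegressionEvaluator` follows. It is not runnable standalone — it assumes the notebook's live `spark` session and the `pipeline_model` and `validation_dataset` variables defined in the earlier steps:

```
# Sketch only: requires an active SparkSession plus the pipeline_model /
# validation_dataset variables created in the previous steps.
from pyspark.ml.evaluation import RegressionEvaluator

predictions = pipeline_model.transform(validation_dataset)

evaluator = RegressionEvaluator(labelCol="price",
                                predictionCol="price_prediction",
                                metricName="rmse")
rmse = evaluator.evaluate(predictions)
r2 = evaluator.setMetricName("r2").evaluate(predictions)

print("RMSE: {}, R^2: {}".format(rmse, r2))
```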
"from jpmml import toPMMLBytes\n\npmmlBytes = toPMMLBytes(spark, training_dataset, pipeline_model)\n\nprint(pmmlBytes.decode(\"utf-8\"))",
"Deployment Option 1: Mutable Model Deployment\nDeploy New Model to Live, Running Model Server",
"import urllib.request\n\nupdate_url = 'http://<your-ip>:39040/update-pmml/pmml_airbnb'\n\nupdate_headers = {}\nupdate_headers['Content-type'] = 'application/xml'\n\nreq = urllib.request.Request(update_url, \\\n headers=update_headers, \\\n data=pmmlBytes)\n\nresp = urllib.request.urlopen(req)\n\nprint(resp.status) # Should return Http Status 200 \n\nimport urllib.parse\nimport json\n\n# Note: You will need to run this twice.\n# A fallback will trigger the first time. (bug)\nevaluate_url = 'http://<your-ip>:39040/evaluate-pmml/pmml_airbnb'\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\n\ninput_params = '{\"bathrooms\":2.0, \\\n \"bedrooms\":2.0, \\\n \"security_deposit\":175.00, \\\n \"cleaning_fee\":25.0, \\\n \"extra_people\":1.0, \\\n \"number_of_reviews\": 2.0, \\\n \"square_feet\": 250.0, \\\n \"review_scores_rating\": 2.0, \\\n \"room_type\": \"Entire home/apt\", \\\n \"host_is_super_host\": \"0.0\", \\\n \"cancellation_policy\": \"flexible\", \\\n \"instant_bookable\": \"1.0\", \\\n \"state\": \"CA\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = urllib.request.Request(evaluate_url, \\\n headers=evaluate_headers, \\\n data=encoded_input_params)\n\nresp = urllib.request.urlopen(req)\n\nprint(resp.read())",
"Model Server Dashboard\nFill in <your-ip> below, then copy/paste to your browser\nhttp://<your-ip>:47979/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F<your-ip>%3A39043%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F<your-ip>%3A39042%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F<your-ip>%3A39041%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F<your-ip>%3A39040%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D\nDeployment Option 2: Immutable Model Deployment\nSave Model to Disk",
"!mkdir -p /root/src/pmml/airbnb/\n\nwith open('/root/src/pmml/airbnb/pmml_airbnb.pmml', 'wb') as f:\n f.write(pmmlBytes)\n\n!ls /root/src/pmml/airbnb/pmml_airbnb.pmml",
"TODO: Trigger Airflow to Build New Docker Image (ie. via Github commit)\nLoad Test",
"!start-loadtest.sh $SOURCE_HOME/loadtest/RecommendationServiceStressTest-local-airbnb.jmx"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jplourenco/bokeh
|
examples/plotting/notebook/interact_numba.ipynb
|
bsd-3-clause
|
[
"Interactive Image Processing with Numba and Bokeh\nThis demo shows off how interactive image processing can be done in the notebook, using Numba for numerics, Bokeh for plotting, and IPython interactors for widgets. The demo runs entirely inside the IPython notebook, with no Bokeh server required.\nNumba must be installed in order to run this demo. To run, click Cell->Run All in the top menu, then scroll down to the individual examples and play around with their controls.",
"from __future__ import print_function, division\n\nfrom timeit import default_timer as timer\n\nfrom bokeh.plotting import figure, show, output_notebook\nfrom bokeh.models import GlyphRenderer, LinearColorMapper\nfrom numba import jit, njit\n\nfrom IPython.html.widgets import interact\nimport numpy as np\nimport scipy.misc\n\noutput_notebook()",
"Gaussian Blur\nThis first section demonstrates performing a simple Gaussian blur on an image. It presents the image, as well as a slider that controls how much blur is applied. Numba is used to compile the Python blur kernel, which is invoked when the user modifies the slider. \nNote: This simple example does not handle the edge case, so the edges of the image will remain unblurred as the slider is increased.",
"# smaller image\nimg_blur = (scipy.misc.ascent()[::-1,:]/255.0)[:250, :250].copy(order='C')\n\npalette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]\nwidth, height = img_blur.shape\np_blur = figure(x_range=(0, width), y_range=(0, height))\np_blur.image(image=[img_blur], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name='blur')\n\n@njit\ndef blur(outimg, img, amt):\n iw, ih = img.shape\n for i in range(amt, iw-amt):\n for j in range(amt, ih-amt):\n px = 0.\n for w in range(-amt//2, amt//2):\n for h in range(-amt//2, amt//2):\n px += img[i+w, j+h]\n outimg[i, j]= px/(amt*amt)\n\ndef update(i=0):\n level = 2*i + 1\n \n out = img_blur.copy()\n \n ts = timer()\n blur(out, img_blur, level)\n te = timer()\n print('blur takes:', te - ts)\n \n renderer = p_blur.select(dict(name=\"blur\", type=GlyphRenderer))\n ds = renderer[0].data_source\n ds.data['image'] = [out]\n ds.push_notebook()\n\nshow(p_blur)\n\ninteract(update, i=(0, 10))",
"3x3 Image Kernels\nMany image processing filters can be expressed as 3x3 matrices. This more sophisticated example demonstrates how Numba can be used to compile kernels for arbitrary 3x3 matrices, and then provides several predefined kernels for the user to experiment with. \nThe UI presents the image to process (along with a dropdown to select a different image) as well as a dropdown that lets the user select which kernel to apply. Additionally, there are sliders that permit adjustment of the bias and scale of the final greyscale image. \nNote: Right now, adjusting the scale and bias is not as efficient as possible, because the update function always also applies the kernel (even if it has not changed). A better implementation might have a class that keeps track of the current kernel and output image so that bias and scale can be applied by themselves.",
"@jit\ndef getitem(img, x, y):\n w, h = img.shape\n if x >= w:\n x = w - 1 - (x - w)\n if y >= h:\n y = h - 1 - (y - h)\n return img[x, y]\n \ndef filter_factory(kernel):\n ksum = np.sum(kernel)\n if ksum == 0:\n ksum = 1\n k9 = kernel / ksum\n \n @jit\n def kernel_apply(img, out, x, y):\n tmp = 0\n for i in range(3):\n for j in range(3):\n tmp += img[x+i-1, y+j-1] * k9[i, j]\n out[x, y] = tmp\n \n @jit\n def kernel_apply_edge(img, out, x, y):\n tmp = 0\n for i in range(3):\n for j in range(3):\n tmp += getitem(img, x+i-1, y+j-1) * k9[i, j]\n out[x, y] = tmp\n \n @jit\n def kernel_k9(img, out):\n # Loop through all internals\n for x in range(1, img.shape[0] -1):\n for y in range(1, img.shape[1] -1):\n kernel_apply(img, out, x, y)\n \n # Loop through all the edges\n for x in range(img.shape[0]):\n kernel_apply_edge(img, out, x, 0)\n kernel_apply_edge(img, out, x, img.shape[1] - 1)\n \n for y in range(img.shape[1]):\n kernel_apply_edge(img, out, 0, y)\n kernel_apply_edge(img, out, img.shape[0] - 1, y)\n \n return kernel_k9\n\naverage = np.array([\n [1, 1, 1],\n [1, 1, 1],\n [1, 1, 1],\n], dtype=np.float32)\n\nsharpen = np.array([\n [-1, -1, -1],\n [-1, 12, -1],\n [-1, -1, -1],\n], dtype=np.float32)\n\nedge = np.array([\n [ 0, -1, 0],\n [-1, 4, -1],\n [ 0, -1, 0],\n], dtype=np.float32)\n\nedge_h = np.array([\n [ 0, 0, 0],\n [-1, 2, -1],\n [ 0, 0, 0],\n], dtype=np.float32)\n\nedge_v = np.array([\n [0, -1, 0],\n [0, 2, 0],\n [0, -1, 0],\n], dtype=np.float32)\n\ngradient_h = np.array([\n [-1, -1, -1],\n [ 0, 0, 0],\n [ 1, 1, 1],\n], dtype=np.float32)\n\ngradient_v = np.array([\n [-1, 0, 1],\n [-1, 0, 1],\n [-1, 0, 1],\n], dtype=np.float32)\n\nsobol_h = np.array([\n [ 1, 2, 1],\n [ 0, 0, 0],\n [-1, -2, -1],\n], dtype=np.float32)\n\nsobol_v = np.array([\n [-1, 0, 1],\n [-2, 0, 2],\n [-1, 0, 1],\n], dtype=np.float32)\n \nemboss = np.array([ \n [-2, -1, 0],\n [-1, 1, 1],\n [ 0, 1, 2],\n], dtype=np.float32)\n\nkernels = {\n \"average\" : filter_factory(average),\n \"sharpen\" 
: filter_factory(sharpen),\n \"edge (both)\" : filter_factory(edge),\n \"edge (horizontal)\" : filter_factory(edge_h),\n \"edge (vertical)\" : filter_factory(edge_v),\n \"gradient (horizontal)\" : filter_factory(gradient_h),\n \"gradient (vertical)\" : filter_factory(gradient_v),\n \"sobol (horizontal)\" : filter_factory(sobol_h),\n \"sobol (vertical)\" : filter_factory(sobol_v),\n \"emboss\" : filter_factory(emboss),\n}\n\nimages = {\n \"ascent\" : np.copy(scipy.misc.ascent().astype(np.float32)[::-1, :]),\n \"face\" : np.copy(scipy.misc.face(gray=True).astype(np.float32)[::-1, :]),\n}\n\npalette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]\ncm = LinearColorMapper(palette=palette, low=0, high=256)\nwidth, height = images['ascent'].shape\np_kernel = figure(x_range=(0, width), y_range=(0, height))\np_kernel.image(image=[images['ascent']], x=[0], y=[0], dw=[width], dh=[height], color_mapper=cm, name=\"kernel\")\n\ndef update(image=\"ascent\", kernel_name=\"none\", scale=100, bias=0):\n global _last_kname\n global _last_out\n \n img_kernel = images.get(image)\n\n kernel = kernels.get(kernel_name, None)\n if kernel == None:\n out = np.copy(img_kernel)\n\n else:\n out = np.zeros_like(img_kernel)\n\n ts = timer()\n kernel(img_kernel, out)\n te = timer()\n print('kernel takes:', te - ts)\n\n out *= scale / 100.0\n out += bias\n print(out.min(), out.max())\n\n renderer = p_kernel.select(dict(name=\"kernel\", type=GlyphRenderer))\n ds = renderer[0].data_source\n ds.data['image'] = [out]\n ds.push_notebook()\n\nshow(p_kernel)\n\nknames = [\"none\"] + sorted(kernels.keys())\ninteract(update, image=[\"ascent\" ,\"face\"], kernel_name=knames, scale=(10, 100, 10), bias=(0, 255))",
"Wavelet Decomposition\nThis last example demonstrates a Haar wavelet decomposition using a Numba-compiled function. Play around with the slider to see different levels of decomposition of the image.",
"@njit\ndef wavelet_decomposition(img, tmp):\n \"\"\"\n Perform inplace wavelet decomposition on `img` with `tmp` as\n a temporarily buffer.\n\n This is a very simple wavelet for demonstration\n \"\"\"\n w, h = img.shape\n halfwidth, halfheight = w//2, h//2\n \n lefthalf, righthalf = tmp[:halfwidth, :], tmp[halfwidth:, :]\n \n # Along first dimension\n for x in range(halfwidth):\n for y in range(h):\n lefthalf[x, y] = (img[2 * x, y] + img[2 * x + 1, y]) / 2\n righthalf[x, y] = img[2 * x, y] - img[2 * x + 1, y]\n \n # Swap buffer\n img, tmp = tmp, img\n tophalf, bottomhalf = tmp[:, :halfheight], tmp[:, halfheight:]\n \n # Along second dimension\n for y in range(halfheight):\n for x in range(w):\n tophalf[x, y] = (img[x, 2 * y] + img[x, 2 * y + 1]) / 2\n bottomhalf[x, y] = img[x, 2 * y] - img[x, 2 * y + 1]\n \n return halfwidth, halfheight\n\nimg_wavelet = np.copy(scipy.misc.face(gray=True)[::-1, :])\n\npalette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]\nwidth, height = img_wavelet.shape\np_wavelet = figure(x_range=(0, width), y_range=(0, height))\np_wavelet.image(image=[img_wavelet], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name=\"wavelet\")\n\ndef update(level=0):\n\n out = np.copy(img_wavelet)\n tmp = np.zeros_like(img_wavelet)\n\n ts = timer()\n hw, hh = img_wavelet.shape\n while level > 0 and hw > 1 and hh > 1:\n hw, hh = wavelet_decomposition(out[:hw, :hh], tmp[:hw, :hh])\n level -= 1\n te = timer()\n print('wavelet takes:', te - ts)\n\n renderer = p_wavelet.select(dict(name=\"wavelet\", type=GlyphRenderer))\n ds = renderer[0].data_source\n ds.data['image'] = [out]\n ds.push_notebook()\n\nshow(p_wavelet)\n\ninteract(update, level=(0, 7))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jsnajder/StrojnoUcenje
|
notebooks/SU-2015-6-LinearniDiskriminativniModeli.ipynb
|
cc0-1.0
|
[
"Sveučilište u Zagrebu<br>\nFakultet elektrotehnike i računarstva\nMachine Learning\n<a href=\"http://www.fer.unizg.hr/predmet/su\">http://www.fer.unizg.hr/predmet/su</a>\nAcademic year 2015/2016\nNotebook 6: Linear Discriminative Models\n(c) 2015 Jan Šnajder\n<i>Version: 0.1 (2015-11-11)</i>",
"import scipy as sp\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\nimport pandas as pd\n%pylab inline",
"Contents:\n\n\nIntroduction\n\n\nGeneralized Linear Model\n\n\nGeometry of the Linear Model\n\n\nMulticlass Classification\n\n\nClassification by Regression\n\n\nGradient Descent\n\n\nPerceptron\n\n\nSummary\n\n\nIntroduction\n\nDiscriminative models: Unlike generative models, which model\n$\nP(\\mathcal{C}_j|\\mathbf{x})\\ \\propto\\ P(\\mathbf{x}|\\mathcal{C}_j) P(\\mathcal{C}_j)\n$, \nwe directly model:\nthe posterior class probability, $P(\\mathcal{C}_j|\\mathbf{x})$, or\n\nthe discriminant (classification) function $h(\\mathbf{x})$\n\n\nLinear models: The boundary is linear (a hyperplane):\n\n\n$$\nh(\\mathbf{x}) = \\mathbf{w}^\\intercal\\tilde{\\mathbf{x}}\n$$\n\n\nThe model is linear in the features $\\mathbf{x}$ $\\Rightarrow$ this yields a linear boundary in the <u>input\nspace</u>\n\n\nThe boundary is the hyperplane on which $h(\\mathbf{x})=0$ or $h(\\mathbf{x})=0.5$ (depending on the model)\n\n\nThe boundary is called the discriminant function or the decision boundary",
"def h(x, w): return sp.dot(x, w)\n\ndef plot_decision_boundary(h, boundary=0, margins=None):\n    x = linspace(-10, 10)\n    y = linspace(-10, 10)\n    X1, X2 = np.meshgrid(x, y)\n    XX = sp.dstack((sp.ones((50, 50)), X1, X2))\n    plt.contour(X1, X2, h(XX), colors='red', levels=[boundary])\n    if margins is not None:\n        CS = plt.contour(X1, X2, h(XX), colors=['gray', 'gray'], levels=[margins[0],margins[1]])\n        plt.clabel(CS, fontsize=9, inline=1)\n\nw = [-2, 1, -0.5]\nplot_decision_boundary(lambda x : h(x, w), margins=(-1,1))\n\nw = [2, -1, 0.5]\nplot_decision_boundary(lambda x : h(x, w), margins=(-1,1))\n\nw = [2, -1.5, 0.3]\nplot_decision_boundary(lambda x : h(x, w), margins=(-1,1))",
"Generalized Linear Model\n\n\nActivation function: a nonlinear function \n$$\nf:\\mathbb{R}\\to[0,1]\n$$ \nor\n$$\nf:\\mathbb{R}\\to[-1,1]\n$$\n\n\nGeneralized linear model (GLM):\n\n\n$$\n  h(\\mathbf{x}) = \\color{red}{f\\big(}\\mathbf{w}^\\intercal\\tilde{\\mathbf{x}}\\color{red}{\\big)}\n$$\n$\\Rightarrow$ Linear boundary in the input space (even though $f$ is nonlinear)\n$\\Rightarrow$ The model is nonlinear in the parameters (because $f$ is nonlinear)\n\nTypical activation functions:\nIdentity function (i.e., as if there were no mapping)\nSigmoid (logistic) function\nStep function",
"xs = sp.linspace(-5,5, 100)\nplt.plot(xs, xs);\n\ndef sigm(x): return 1 / (1 + sp.exp(-x))\n\nplt.plot(xs, sigm(xs));\n\nplt.plot(xs, sign(xs));",
"The choice of the function $f$ has no effect on the linearity of the boundary, since $f$ will obviously map equal input values $\\mathbf{w}^\\intercal\\mathbf{x}$ to equal output values $f(\\mathbf{w}^\\intercal\\mathbf{x})$",
"w = [-4, 2, -1]\nplot_decision_boundary(lambda x : h(x, w), margins=(-1,1))\n\nplot_decision_boundary(lambda x : sigm(sp.dot(x, w)), boundary=0.5, margins=(0.1,0.9))",
"As with regression, we can use a mapping\n $\\boldsymbol{\\phi}:\\mathbb{R}^n\\to\\mathbb{R}^m$ from the input\nspace into a feature space:\n$$\n  h(\\mathbf{x}) = f\\big(\\mathbf{w}^\\intercal\\mathbf{\\phi}(\\mathbf{x})\\big)\n$$\n$\\Rightarrow$ Linear boundary in the feature space\n$\\Rightarrow$ Nonlinear boundary in the input\nspace\n$\\Rightarrow$ The model is nonlinear in the parameters (because $f$ is nonlinear)\nExample:\n\nA mapping from a two-dimensional input space into a 5-dimensional feature space (plus the bias term)\n\n$$\n\\boldsymbol{\\phi}(\\mathbf{x}) = (1,x_1,x_2,x_1 x_2, x_1^2, x_2^2)\n$$",
"def h2(x, w): \n x2 = sp.dstack((x, x[:,:,1]*x[:,:,2], x[:,:,1]**2, x[:,:,2]**2))\n return sp.dot(x2, w)\n\nw = [-0.05 , -0.15 , -0.5 , 0.15 , -0.08 , 0.05]\nplot_decision_boundary(lambda x : h2(x, w), margins=[-1, 1])",
"Geometry of the Linear Model\n\n\n[Sketch]\n\n\nFor points $\\mathbf{x}_1$ and $\\mathbf{x}_2$ on the hyperplane:\n$$\nh(\\mathbf{x}_1)=h(\\mathbf{x}_2)=0\\quad \\Rightarrow\\quad \\mathbf{w}^\\intercal(\\mathbf{x}_1-\\mathbf{x}_2) = 0\n$$\n$\\Rightarrow$ $\\mathbf{w}$ is the normal of the hyperplane\n\n\nNB: In the next two points $\\mathbf{w}$ does not include $w_0$\n\n\nFor a point $\\mathbf{x}$ on the hyperplane:\n$$\n\\mathbf{w}^\\intercal\\mathbf{x} + w_0 = 0 \\quad\\Rightarrow\\quad\n\\frac{\\mathbf{w}^\\intercal\\mathbf{x}}{\\|\\mathbf{w}\\|}=-\\frac{w_0}{\\|\\mathbf{w}\\|}\n$$\n$\\Rightarrow$ the distance of the hyperplane from the origin is $-w_0/\\|\\mathbf{w}\\|$\n\n\nFor a point $\\mathbf{x}$ off the hyperplane:\n$$\n\\begin{align}\n\\mathbf{x} &= \\mathbf{x}_{\\bot} + d\\frac{\\mathbf{w}}{\\|\\mathbf{w}\\|}\\\\\n\\mathbf{w}^\\intercal\\mathbf{x} + w_0 &= \\mathbf{w}^\\intercal\\mathbf{x}_{\\bot} + w_0 + d \\frac{\\mathbf{w}^\\intercal\\mathbf{w}}{\\|\\mathbf{w}\\|}\\\\\nh(\\mathbf{x}) &= d \\|\\mathbf{w}\\|\n\\end{align}\n$$\n\n\n$\\Rightarrow$ the distance of the point $\\mathbf{x}$ from the hyperplane is $d=h(\\mathbf{x})/\\|\\mathbf{w}\\|$\n  * $h(\\mathbf{x}) > 0\\ \\Rightarrow\\ $ $\\mathbf{x}$ is on the side of the hyperplane in the direction of the normal $\\mathbf{w}$\n  * $h(\\mathbf{x}) < 0\\ \\Rightarrow\\ $ $\\mathbf{x}$ is on the opposite side of the hyperplane\n  * $h(\\mathbf{x}) = 0\\ \\Rightarrow\\ $ $\\mathbf{x}$ is on the hyperplane",
"w = sp.array([-4, 2, -1])\nX = sp.array([[1, 5, -1],\n [1, -5, 5]])\n\nplot_decision_boundary(lambda x : h(x, w), margins=(-1, 1))\nplt.scatter(X[:,1],X[:,2]);\n\nh(X[0], w)\n\nh(X[1], w)\n\nsp.linalg.norm(w[1:])\n\ndef distance(x,w): return sp.dot(x, w) / sp.linalg.norm(w[1:])\n\ndistance(X[0], w)\n\ndistance(X[1], w)\n\nw2 = w/10.0\n\nplot_decision_boundary(lambda x : h(x, w2), margins=(-1,1))\nplt.scatter(X[:,1],X[:,2]);\n\nh(X[0], w2)\n\nh(X[1], w2)\n\nsp.linalg.norm(w2[1:])\n\ndistance(X[0], w2)\n\ndistance(X[1], w2)",
"Multiclass Classification ($K>2$)\n\nOne-vs-one (OVO) scheme \n$K\\choose 2$ binary classifiers, separating all pairs of classes\n\nModel:\n$$\nh(\\mathbf{x})=\\mathrm{argmax}_i\\sum_{i\\neq j}\\mathrm{sgn}\\big(h_{ij}(\\mathbf{x})\\big)\n$$\nwhere $h_{ji}(\\mathbf{x})=- h_{ij}(\\mathbf{x})$\n\n\n[Sketch: OVO]\n\n\nOne-vs-rest (OVR, one-vs-all) scheme \n\n$K$ binary classifiers with confidence scores, one per class\n\nModel:\n$$\nh(\\mathbf{x}) = \\mathrm{argmax}_j\\ h_j(\\mathbf{x})\n$$\n\n\n[Sketch: OVR]\n\n\nThe advantage of OVR over OVO is that it needs fewer models; however, OVR easily leads to an unbalanced number of examples across classes",
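The OVO voting rule above can be sketched in plain NumPy; the pairwise classifiers here are hypothetical toy stand-ins, not taken from the notebook:

```python
import numpy as np

def ovo_predict(x, pairwise):
    """One-vs-one voting: pairwise[(i, j)] is a function whose sign
    votes for class i (positive) or class j (negative)."""
    K = 1 + max(max(p) for p in pairwise)
    votes = np.zeros(K)
    for (i, j), h_ij in pairwise.items():
        s = np.sign(h_ij(x))
        votes[i] += s
        votes[j] -= s
    return int(np.argmax(votes))

# Three toy linear classifiers on 2-d inputs (hypothetical weights)
pairwise = {
    (0, 1): lambda x: x[0] - x[1],        # class 0 vs class 1
    (0, 2): lambda x: x[0] + x[1] - 1.0,  # class 0 vs class 2
    (1, 2): lambda x: x[1] - 0.5,         # class 1 vs class 2
}

print(ovo_predict(np.array([2.0, 0.0]), pairwise))  # → 0
```

Each pairwise classifier casts one signed vote, and the class with the most net votes wins, matching the $\mathrm{argmax}$ formula above.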
"h1 = lambda x: h(x, [0, 2, 1])\nplot_decision_boundary(h1)\nh2 = lambda x: h(x, [-0.2, 0.7, -0.8])\nplot_decision_boundary(h2)\nh3 = lambda x: h(x, [-1.5, 0.1, 0.5])\nplot_decision_boundary(h3)\nplt.scatter(X[:,1],X[:,2]);\n\nprint h1(X[0]), h2(X[0]), h3(X[0])\n\nprint h1(X[1]), h2(X[1]), h3(X[1])\n\ndef ovr(x): return sp.argmax([h1(x), h2(x), h3(x)])\n\novr(X[0])\n\novr(X[1])\n\nx = linspace(-10, 10)\ny = linspace(-10, 10)\nX1, X2 = np.meshgrid(x, y)\nXX = sp.dstack((sp.ones((50, 50)), X1, X2))\n\nn, m, _ = shape(XX)\nYY = sp.zeros((n,m))\nfor i in range(0,n):\n for j in range(0,m):\n YY[i,j] = ovr(XX[i, j])\n\nplt.contourf(X1, X2, YY);",
"Classification by Regression\nReminder\n\n\n[Sketch]\n\n\nError function (empirical expectation of the squared loss):\n$$\nE(\\mathbf{w}|\\mathcal{D})=\\frac{1}{2}\n\\sum_{i=1}^N\\big(\\mathbf{w}^\\intercal\\boldsymbol{\\phi}(\\mathbf{x}^{(i)}) - y^{(i)}\\big)^2 = \n\\frac{1}{2}\n(\\boldsymbol\\Phi\\mathbf{w} - \\mathbf{y})^\\intercal\n(\\boldsymbol\\Phi\\mathbf{w} - \\mathbf{y})\n$$\n\n\nError minimizer:\n$$\n  \\mathbf{w} = (\\boldsymbol\\Phi^\\intercal\\boldsymbol\\Phi)^{-1}\\boldsymbol\\Phi^\\intercal\\mathbf{y}\n  = \\color{red}{\\boldsymbol\\Phi^{+}}\\mathbf{y}\n$$\n\n\nQ: How can we use regression for classification?\n\n\nBinary classification $(K=2)$\n\n\nIdea: a regression function $h(\\mathbf{x})$ that outputs $h(\\mathbf{x})=1$ for examples from class $\\mathcal{C}_1$ and $h(\\mathbf{x})=0$ for examples from class $\\mathcal{C}_2$ \n\n\nWe classify an example $\\mathbf{x}$ into the class $\\mathcal{C}_j$ for which $h(\\mathbf{x})$ is largest\n\n\nThe boundary between $\\mathcal{C}_1$ and $\\mathcal{C}_2$ is at $h(\\mathbf{x})=0.5$\n\n\nExample for $n=1$\n\n$h(x) = w_0 + w_1 x$\n\n[Sketch]\n\n\nExample for $n=2$\n\n$h(\\mathbf{x}) = w_0 + w_1 x_1 + w_2 x_2$\n\n[Sketch]\n\n\nAlternatively, we can train the model so that the target value for examples from class $\\mathcal{C}_2$ is $y=-1$\n\nThe discriminant function is then $h(\\mathbf{x})=0$\n\nMulticlass classification $(K>2)$\n\n\nOne-vs-rest (OVR) scheme\n\n\nWe train one model $h_j$ for each class $\\mathcal{C}_j$\n\n\n$$\nh_j(\\mathbf{x}) = \\mathbf{w}_j^\\intercal\\boldsymbol{\\phi}(\\mathbf{x})\n$$\n\nTraining examples $\\mathcal{D}=\\{(\\mathbf{x}^{(i)},\\color{red}{\\mathbf{y}^{(i)}})\\}_{i=1}^N$:\n\n\\begin{align}\n\\boldsymbol\\Phi = \n\\begin{pmatrix}\n     \\boldsymbol{\\phi}(\\mathbf{x}^{(1)})^\\intercal \\\\\n     \\boldsymbol{\\phi}(\\mathbf{x}^{(2)})^\\intercal \\\\\n\\vdots\\\\\n     \\boldsymbol{\\phi}(\\mathbf{x}^{(N)})^\\intercal \\\\\n\\end{pmatrix}_{N\\times (m+1)}\n&\n\\qquad\n  \\color{red}{\\mathbf{y}_j} = \n\\begin{pmatrix}\n  y_j^{(1)}\\\\\n  y_j^{(2)}\\\\\n  \\vdots\\\\\n  y_j^{(N)}\n\\end{pmatrix}_{N\\times 1}\n\\end{align}\n\n\n[Example] \n\n\nThe solution that minimizes the squared error:\n\\begin{equation}\n\\mathbf{w}_j = (\\boldsymbol\\Phi^\\intercal\\boldsymbol\\Phi)^{-1}\\boldsymbol\\Phi^\\intercal\\mathbf{y}_j = \\boldsymbol\\Phi^{+}\\mathbf{y}_j\n\\end{equation}\n\n\nModel (OVR):\n$$\nh(\\mathbf{x}) = \\mathrm{argmax}_j\\ \\mathbf{w}_j^\\intercal\\boldsymbol\\phi(\\mathbf{x})\n$$\n\n\nExample ($K=3$)\n\n\n\nAdvantages and disadvantages of classification by regression\n\nAdvantages:\nClosed-form solution\n\nA simple procedure\n\n\nDisadvantages:\n\nThe model outputs have no probabilistic interpretation ($h(\\mathbf{x})$ is not bounded to the interval $[0,1]$)\nNon-robustness: sensitivity to outliers (penalizing \"too correctly''\n  classified examples)\n\nBecause of this, in some cases: misclassification even though the examples are linearly separable\n\n\n[Example]\n\n\nGradient Descent\nPerceptron\nSummary\n\n\nDiscriminative linear models model the boundary between classes, but not the way the examples are generated\n\n\nA generalized linear model is a linear model with an activation function\n\n\nClassification by regression is a simple procedure, but it is not robust\n\n\nConvex optimization is important for machine learning because the loss function (and hence the error function) is typically convex\n\n\nIf error minimization has no closed-form solution but we can compute the gradient, we can apply gradient descent\n\n\nThe perceptron is a linear classification model that uses gradient descent to minimize an approximation of the number of misclassifications\n\n\nThe perceptron does not converge for linearly inseparable problems, while for linearly separable ones the solution depends on the initialization and the order of the examples\n\n\nNeither regression nor the perceptron gives a probabilistic output"
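The closed-form OVR least-squares solution above can be sketched with NumPy's pseudo-inverse; the data and features here are hypothetical toy values ($\phi(x) = (1, x)$):

```python
import numpy as np

# Toy 1-d data: class 0 around -2, class 1 around +2 (hypothetical data)
X = np.array([-2.5, -2.0, -1.5, 1.5, 2.0, 2.5])
labels = np.array([0, 0, 0, 1, 1, 1])

# Design matrix with a bias column: phi(x) = (1, x)
Phi = np.column_stack([np.ones_like(X), X])

# One-hot targets: Y[i, j] = 1 iff example i belongs to class j
Y = np.eye(2)[labels]

# w_j = pinv(Phi) @ y_j, computed for every class at once
W = np.linalg.pinv(Phi) @ Y

# OVR prediction: argmax_j of w_j^T phi(x)
def predict(x):
    return int(np.argmax(np.array([1.0, x]) @ W))

print(predict(-3.0), predict(3.0))  # → 0 1
```

Each column of `W` is one per-class regressor, and prediction picks the class whose regressor scores highest, matching the $\mathrm{argmax}$ model above.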
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hunterherrin/phys202-2015-work
|
days/day08/Display.ipynb
|
mit
|
[
"Display of Rich Output\nIn Python, objects can declare their textual representation using the __repr__ method.",
"class Ball(object):\n pass\n\nb = Ball()\nb.__repr__()\n\nprint(b)",
"Overriding the __repr__ method:",
"class Ball(object):\n def __repr__(self):\n return 'TEST'\n\nb = Ball()\n\nprint(b)",
"IPython expands on this idea and allows objects to declare other, rich representations including:\n\nHTML\nJSON\nPNG\nJPEG\nSVG\nLaTeX\n\nA single object can declare some or all of these representations; all of them are handled by IPython's display system.\nBasic display imports\nThe display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations.",
"from IPython.display import display",
"A few points:\n\nCalling display on an object will send all possible representations to the Notebook.\nThese representations are stored in the Notebook document.\nIn general the Notebook will use the richest available representation.\n\nIf you want to display a particular representation, there are specific functions for that:",
"from IPython.display import (\n display_pretty, display_html, display_jpeg,\n display_png, display_json, display_latex, display_svg\n)",
"Images\nTo work with images (JPEG, PNG) use the Image class.",
"from IPython.display import Image\n\ni = Image(filename='./ipython-image.png')\ndisplay(i)",
"Returning an Image object from an expression will automatically display it:",
"i",
"An image can also be displayed from raw data or a URL.",
"Image(url='http://python.org/images/python-logo.gif')",
"HTML\nPython objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.",
"from IPython.display import HTML\n\ns = \"\"\"<table>\n<tr>\n<th>Header 1</th>\n<th>Header 2</th>\n</tr>\n<tr>\n<td>row 1, cell 1</td>\n<td>row 1, cell 2</td>\n</tr>\n<tr>\n<td>row 2, cell 1</td>\n<td>row 2, cell 2</td>\n</tr>\n</table>\"\"\"\n\nh = HTML(s)\n\ndisplay(h)",
"You can also use the %%html cell magic to accomplish the same thing.",
"%%html\n<table>\n<tr>\n<th>Header 1</th>\n<th>Header 2</th>\n</tr>\n<tr>\n<td>row 1, cell 1</td>\n<td>row 1, cell 2</td>\n</tr>\n<tr>\n<td>row 2, cell 1</td>\n<td>row 2, cell 2</td>\n</tr>\n</table>\n\n%%html\n<style>\n#notebook {\n background-color: skyblue;\n font-family: times new roman;\n}\n</style>",
"You can remove the above styling by using \"Cell\"$\\rightarrow$\"Current Output\"$\\rightarrow$\"Clear\" with that cell selected.\nJavaScript\nThe Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd, as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.",
"from IPython.display import Javascript",
"Pass a string of JavaScript source code to the JavaScript object and then display it.",
"js = Javascript('alert(\"hi\")');\n\ndisplay(js)",
"The same thing can be accomplished using the %%javascript cell magic:",
"%%javascript\n\nalert(\"hi\");",
"Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs one of the d3.js examples.",
"Javascript(\n \"\"\"$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')\"\"\"\n)\n\n%%html\n<style type=\"text/css\">\n\ncircle {\n fill: rgb(31, 119, 180);\n fill-opacity: .25;\n stroke: rgb(31, 119, 180);\n stroke-width: 1px;\n}\n\n.leaf circle {\n fill: #ff7f0e;\n fill-opacity: 1;\n}\n\ntext {\n font: 10px sans-serif;\n}\n\n</style>\n\n%%javascript\n\n// element is the jQuery element we will append to\nvar e = element.get(0);\n \nvar diameter = 600,\n format = d3.format(\",d\");\n\nvar pack = d3.layout.pack()\n .size([diameter - 4, diameter - 4])\n .value(function(d) { return d.size; });\n\nvar svg = d3.select(e).append(\"svg\")\n .attr(\"width\", diameter)\n .attr(\"height\", diameter)\n .append(\"g\")\n .attr(\"transform\", \"translate(2,2)\");\n\nd3.json(\"./flare.json\", function(error, root) {\n var node = svg.datum(root).selectAll(\".node\")\n .data(pack.nodes)\n .enter().append(\"g\")\n .attr(\"class\", function(d) { return d.children ? \"node\" : \"leaf node\"; })\n .attr(\"transform\", function(d) { return \"translate(\" + d.x + \",\" + d.y + \")\"; });\n\n node.append(\"title\")\n .text(function(d) { return d.name + (d.children ? \"\" : \": \" + format(d.size)); });\n\n node.append(\"circle\")\n .attr(\"r\", function(d) { return d.r; });\n\n node.filter(function(d) { return !d.children; }).append(\"text\")\n .attr(\"dy\", \".3em\")\n .style(\"text-anchor\", \"middle\")\n .text(function(d) { return d.name.substring(0, d.r / 3); });\n});\n\nd3.select(self.frameElement).style(\"height\", diameter + \"px\");",
"Audio\nIPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.",
"from IPython.display import Audio\nAudio(\"./scrubjay.mp3\")",
"A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.\nFor instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs:",
"import numpy as np\nmax_time = 3\nf1 = 120.0\nf2 = 124.0\nrate = 8000.0\nL = 3\ntimes = np.linspace(0,L,rate*L)\nsignal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)\n\nAudio(data=signal, rate=rate)",
"Video\nMore exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:",
"from IPython.display import YouTubeVideo\nYouTubeVideo('sjfsUzECqK0')",
"External sites\nYou can even embed an entire page from another site in an iframe; for example this is IPython's home page:",
"from IPython.display import IFrame\nIFrame('https://ipython.org', width='100%', height=350)",
"Links to local files\nIPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object:",
"from IPython.display import FileLink, FileLinks\nFileLink('../Visualization/Matplotlib.ipynb')",
"Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.",
"FileLinks('./')",
"Rich output and nbviewer\nMuch of the power of the Notebook is that it enables users to share notebooks with each other using http://nbviewer.ipython.org, without installing IPython locally. As of IPython 2.0, notebooks rendered on nbviewer will display all output, including HTML and JavaScript. Furthermore, to provide a consistent JavaScript environment on the live Notebook and nbviewer, the following JavaScript libraries are loaded onto the nbviewer page, before the notebook and its output is displayed:\n\njQuery\nRequireJS\n\nLibraries such as mpld3 use these capabilities to generate interactive visualizations that work on nbviewer."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tsarouch/python_minutes
|
fit/.ipynb_checkpoints/gaussian_fit-checkpoint.ipynb
|
gpl-2.0
|
[
"The typical problem we have to solve:\nLets say we want to study the properties of a distribution e.g. physics process.\nTypically we see the histogram and we want to see to which class of distribution belongs to.\nwe can obtain this by fitting our data and checking the residuals (goodness of fit)",
"#Necessary imports\n\n# lib for numeric calculations\nimport numpy as np\n\n# standard lib for python plotting\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# seaborn lib for more option in DS\nimport seaborn as sns\n\n# so to obtain pseudo-random numbers\nimport random \n \n# fits a curve to data\nfrom scipy.optimize import curve_fit \n\n# Lets say that a physics process gives us a gaussian distribution with a\n# mean value somewhere between 3 nad 5\n# and a sigma with a value around 1\n# We do not know exactly their precise number and this is what we want to figure out with the fit.\nmu = random.randrange(3, 5, 1)\nsigma = 0.1 * random.randrange(8, 12, 1)\n\n# Create the data\ndata = np.random.normal(mu, sigma, size=10000)\n\nbin_values, bin_edges = np.histogram(data, density=False, bins=100)\nbin_centres = (bin_edges[:-1] + bin_edges[1:])/2\n\n# Define model function to be used to fit to the data above:\ndef gauss(x, *p):\n A, mu, sigma = p\n return A*np.exp(-(x-mu)**2/(2.*sigma**2))\n\n# p0 is the initial guess for the fitting coefficients (A, mu and sigma above)\np0 = [1., 0., 1.]\n\ncoeff, var_matrix = curve_fit(gauss, bin_centres, bin_values, p0=p0)\n\n# Get the fitted curve\nhist_fit = gauss(bin_centres, *coeff)\n\n#plt.plot(bin_centres, hist, label='Test data')\nplt.plot(bin_centres, hist_fit, label='Fitted data', color='red', linewidth=2)\n_ = plt.hist(data, bins=100, color='blue', alpha=.3)\n\n# Finally, lets get the fitting parameters, i.e. the mean and standard deviation:\nprint 'Fitted mean = ', coeff[1]\nprint 'Fitted standard deviation = ', coeff[2]\n\nplt.show()",
"Create the distribution and visualize it",
"# Lets say that a physics process gives us a gaussian distribution with a\n# mean value somewhere between 3 nad 5\n# while it has a broad sigma with a value around 1\nmu = random.randrange(3, 5, 1)\nsigma = 0.1 * random.randrange(8, 12, 1)\n\n# Lets create now the distribution\ndata = np.random.normal(mu, sigma, 1000)\n \n# Lets visualize it\n_ = plt.hist(data, bins=100, alpha=.5)",
"Fit a function to the distribution and obtain its properties",
"# get the binned data\n# density = True => histo integral = 1, normalized\nhist_data = np.histogram(data, density=True, bins=100) \n\n# Define model function to be used to fit to the data above, we assume here that we know that is a gaussian function:\ndef gauss(x, *p):\n A, mu, sigma = p\n return A * np.exp( - (x - mu) ** 2 / (2. * sigma ** 2))\n\n\n# get distribution information\nbin_edges = hist_data[1]\nbin_centres = (bin_edges[:-1] + bin_edges[1:]) / 2\nbin_values = hist_data[0]\n\n# p0 is the initial guess for the fitting coefficients (A, mu and sigma above)\np0 = [1., 4., 1.]\n\n# Fit the data\ncoeff, var_matrix = curve_fit(gauss, bin_centres, bin_values, p0=p0)\nprint \"The fit coeff. are:\", coeff\n\n# Obtain the final fit function\nhist_fit = gauss(bin_centres, *coeff)\n\nhist_data = plt.hist(data, bins=100, alpha = 0.5)\nplt.plot(bin_centres, hist_fit, label='Fitted data')\n\n\n\n#hist_data = plt.hist(s, bins=100)",
"Now do the fit",
"\n\n\n# Get the fitted curve\nhist_fit = gauss(bin_centres, *coeff)\n\n\n\nhist_data = plt.hist(s, bins=100, alpha = 0.5)\nplt.plot(bin_centres, hist_fit, label='Fitted data')\n\ncoeff"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SylvainCorlay/ipywidgets
|
docs/source/examples/Widget Asynchronous.ipynb
|
bsd-3-clause
|
[
"Index - Back\nAsynchronous Widgets\nThis notebook covers two scenarios where we'd like widget-related code to run without blocking the kernel from acting on other execution requests:\n\nPausing code to wait for user interaction with a widget in the frontend\nUpdating a widget in the background\n\nWaiting for user interaction\nYou may want to pause your Python code to wait for some user interaction with a widget from the frontend. Typically this would be hard to do since running Python code blocks any widget messages from the frontend until the Python code is done.\nWe'll do this in two approaches: using the event loop integration, and using plain generator functions.\nEvent loop integration\nIf we take advantage of the event loop integration IPython offers, we can have a nice solution using the async/await syntax in Python 3.\nFirst we invoke our asyncio event loop. This requires ipykernel 4.7 or later.",
"%gui asyncio",
"We define a new function that returns a future for when a widget attribute changes.",
"import asyncio\ndef wait_for_change(widget, value):\n future = asyncio.Future()\n def getvalue(change):\n # make the new value available\n future.set_result(change.new)\n widget.unobserve(getvalue, value)\n widget.observe(getvalue, value)\n return future",
"And we finally get to our function where we will wait for widget changes. We'll do 10 units of work, and pause after each one until we observe a change in the widget. Notice that the widget's value is available to us, since it is what the wait_for_change future has as a result.\nRun this function, and change the slider 10 times.",
"from ipywidgets import IntSlider, Output\nslider = IntSlider()\nout = Output()\n\nasync def f():\n for i in range(10):\n out.append_stdout('did work ' + str(i) + '\\n')\n x = await wait_for_change(slider, 'value')\n out.append_stdout('async function continued with value ' + str(x) + '\\n')\nasyncio.ensure_future(f())\n\nslider\n\nout",
"Generator approach\nIf you can't take advantage of the async/await syntax, or you don't want to modify the event loop, you can also do this with generator functions.\nFirst, we define a decorator which hooks a generator function up to widget change events.",
"from functools import wraps\ndef yield_for_change(widget, attribute):\n \"\"\"Pause a generator to wait for a widget change event.\n \n This is a decorator for a generator function which pauses the generator on yield\n until the given widget attribute changes. The new value of the attribute is\n sent to the generator and is the value of the yield.\n \"\"\"\n def f(iterator):\n @wraps(iterator)\n def inner():\n i = iterator()\n def next_i(change):\n try:\n i.send(change.new)\n except StopIteration as e:\n widget.unobserve(next_i, attribute)\n widget.observe(next_i, attribute)\n # start the generator\n next(i)\n return inner\n return f",
"Then we set up our generator.",
"from ipywidgets import IntSlider, VBox, HTML\nslider2=IntSlider()\n\n@yield_for_change(slider2, 'value')\ndef f():\n for i in range(10):\n print('did work %s'%i)\n x = yield\n print('generator function continued with value %s'%x)\nf()\n\nslider2",
"Modifications\nThe above two approaches both waited on widget change events, but can be modified to wait for other things, such as button event messages (as in a \"Continue\" button), etc.\nUpdating a widget in the background\nSometimes you'd like to update a widget in the background, allowing the kernel to also process other execute requests. We can do this with threads. In the example below, the progress bar will update in the background and will allow the main kernel to do other computations.",
"import threading\nfrom IPython.display import display\nimport ipywidgets as widgets\nimport time\nprogress = widgets.FloatProgress(value=0.0, min=0.0, max=1.0)\n\ndef work(progress):\n total = 100\n for i in range(total):\n time.sleep(0.2)\n progress.value = float(i+1)/total\n\nthread = threading.Thread(target=work, args=(progress,))\ndisplay(progress)\nthread.start()",
"Index - Back"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |