Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2
Lógica proposicional
"Poder-se-á definir a Lógica como a ciência das regras que legitimam
a utilização da palavra portanto." B. Ruyer in Logique.
Proposição
No caso das instruções if e while, a execução dum bloco de código está dependente da avaliação duma função proposicional (condição). Com o objectivo de estudar estas instruções e formalizar a noção de função proposicional começa-se por rever algumas noções de lógica proposicional e do cálculo de predicados.
Os elementos básicos da lógica são as proposições ou sentenças que se entendem como afirmações precisas. Na lógica clássica, que abordamos, a avaliação duma proposição é regida por dois princípios fundamentais
Step1: Conjunção
Sejam $p$ e $q$ proposições. A proposição "$p$ e $q$", denotada
$p\wedge q$, é a proposição que é verdadeira apenas quando $p$ e $q$
são ambas verdadeiras, caso contrário é falsa. A proposição $p\wedge q$
diz-se a **conjunção** de $p$ e $q$.
Assim, os valores lógicos das três proposições $p$, $q$, e $p\wedge
q$ estão relacionados pela tabela de verdade
Step2: Disjunção
Sejam p e q proposições. A proposição "$p$ ou $q$", denotada
p$\vee$q, é a proposição que é falsa apenas quando $p$ e $q$ são
ambas falsas, caso contrário é verdade. A proposição p$\vee$q
diz-se a disjunção de p e q.
A tabela de verdade de p $\vee$q toma assim a forma
Step3: Disjunção exclusiva
Para tornar a interpretação da disjunção independente do contexto definimos
Step4: Exercício
Step5: Bi-implicação
Sejam p e q proposições. A bi-condicional ou bi-implicação de p e q é a proposição p$\leftrightarrow$q que é verdadeira
quando p e q têm o mesmo valor lógico.
A tabela de verdade de p$\leftrightarrow$q toma assim a forma
Step6: Facilmente podemos mostrar que as proposições p$\leftrightarrow$q
e $(p\rightarrow q)\wedge(q\rightarrow p)$ têm os mesmos valores
lógicos, ou seja a proposição $(p\leftrightarrow q)\leftrightarrow
((p\rightarrow q)\wedge(q\rightarrow p))$ é sempre verdadeira.
(p | $\leftrightarrow$ | q) | $\leftrightarrow$ | ((p | $\rightarrow$ | q) | $\wedge$ | (q | $\rightarrow$ | p))
Step7: Exercício
Step8: Exercício
Step9: Exercício
Step10: Exercício
Step11: Exercício
Step12: Exercício
Step13: Exercício
Step14: Exercício
Python Code:
#
# Tabela da Negação
#
for p in [True,False]:
    print('not',p,"=", not p)
Explanation: Chapter 2
Lógica proposicional
"Poder-se-á definir a Lógica como a ciência das regras que legitimam
a utilização da palavra portanto." B. Ruyer in Logique.
Proposição
No caso das instruções if e while, a execução dum bloco de código está dependente da avaliação duma função proposicional (condição). Com o objectivo de estudar estas instruções e formalizar a noção de função proposicional começa-se por rever algumas noções de lógica proposicional e do cálculo de predicados.
Os elementos básicos da lógica são as proposições ou sentenças que se entendem como afirmações precisas. Na lógica clássica, que abordamos, a avaliação duma proposição é regida por dois princípios fundamentais:
- Princípio da não contradição - Uma proposição não pode ser simultaneamente verdadeira e falsa;
- Princípio do terceiro excluído - Uma proposição ou é verdadeira ou é falsa;
Por exemplo "1 é maior que 3" é uma proposição cujo valor lógico é o de
"falsidade" enquanto que "todos os triângulos têm três lados e três ângulos" é uma proposição cujo valor lógico é o de "verdade".
Por outro lado "x < 3" não é uma proposição (depende do valor que venha a ser atribuído à variável x) sendo denominada função proposicional.
Representam-se por letras (geralmente minúsculas) as proposições genéricas (ou variáveis proposicionais) e por 1 (ou V) e 0 (ou F) os valores lógicos de "verdade" e "falsidade", respectivamente.
A área da lógica que trata as proposições neste contexto é designada por cálculo proposicional ou lógica proposicional.
Proposição simples e proposição composta
Por vezes combinam-se várias proposições para obter proposições mais expressivas. Neste sentido, classificamos as proposições como simples (também denominadas atómicas) ou compostas (também denominadas moleculares).
As proposições simples apresentam apenas uma afirmação:
$p:$ $\sqrt{2}$ não é um número racional.
$q:$ existem mais números reais que inteiros.
$v:$ $1=2$.
$r:2+3>4$.
As proposições compostas são definidas por uma ou por mais do que uma
proposição, usando na sua formação operadores lógicos
(também designados de conectivas lógicas ou operadores para formação de proposições):
$x = 2$ e $y = 1$.
se $x > y$ então $y < x$.
não é verdade que $2+3>4$.
Conectivas lógicas
Em cálculo proposicional as proposições são geradas a partir de proposições simples, usando operadores para formação de proposições. Vamos tomar como sintacticamente válidas proposições compostas da forma:
não $p$,
$p$ e $q$,
$p$ ou $q$,
ou $p$ ou (exclusivo) $q$,
se $p$ então $q$,
$p$ se e só se $q$.
onde $p$ e $q$ são proposições (simples ou compostas). Nestes casos, em geral, pretende-se obter os valores lógicos das proposições compostas em função dos valores lógicos conhecidos das proposições mais simples que as compõem. Por forma a podermos formalizar a lógica e a avaliação de proposições, convencionamos a seguinte representação para os operadores sintácticos usados na formação de proposições:
Operações Lógicas | Símbolos | Notação | Significado
------------------|----------|---------|------------
Negação | $\neg$ ou $\sim$ | $\neg p$ | não p
Conjunção | $\wedge$ | $p \wedge q$ | p e q
Disjunção | $\vee$ | $p \vee q$ | p ou q
Disjunção exclusiva | $\oplus$ ou $\dot{\vee}$ | $p\oplus q$ | ou p ou (exclusivo) q
Implicação | $\rightarrow$ | $p\rightarrow q$ | se p então q
Bi-implicação | $\leftrightarrow$ | $p\leftrightarrow q$ | p se e só se q
Negação
Seja $p$ uma proposição. A afirmação "não se verifica que
p" é uma nova proposição, designada de negação de $p$. A
negação de $p$ é denotada por $\neg p$ ou $\sim p$. A proposição
$\neg p$ deve ler-se "não p" e é verdadeira se p é falsa. A proposição $\neg p$ é falsa se p é verdadeira.
É usual definir a interpretação dum operador lógico através de
tabelas do tipo:
$p$ | $\neg p$
:----:|:-------:
T | F
F | T
ou
$p$ | $\neg p$
:----:|:--------:
1 | 0
0 | 1
Estas tabelas são designadas por tabelas de verdade. Neste
caso, a tabela define completamente o operador negação, relacionando os
valores lógicos de p e $\neg p$.
Note que, em linguagem corrente, nem sempre se pode negar logicamente uma proposição
antepondo o advérbio "não" ao verbo da proposição; isto apenas se verifica nos casos mais simples.
Por exemplo: negar "Hoje é sábado." é afirmar "Hoje não é sábado".
Mas negar que "Todas as aves voam" é o mesmo que afirmar "não se verifica que todas as aves voam" o que é equivalente a afirmar que "Nem todas as aves voam" mas não é afirmar que "Todas as aves não voam".
Em linguagem Matemática, dado o rigor da interpretação das
designações usadas, o processo de negação fica simplificado. Por
exemplo, negar "5>2" é o mesmo que afirmar "$\neg$(5>2)" que é equivalente, por definição da relação >, a escrever "5$\leq$2". Assim como "5>2" é verdade, temos pela interpretação da negação que "$\neg$(5>2)" é falso.
End of explanation
#
# Tabela da conjunção
#
for p in [True,False]:
    for q in [True,False]:
        print(p,'and',q,'=', p and q)
Explanation: Conjunção
Sejam $p$ e $q$ proposições. A proposição "$p$ e $q$", denotada
$p\wedge q$, é a proposição que é verdadeira apenas quando $p$ e $q$
são ambas verdadeiras, caso contrário é falsa. A proposição $p\wedge q$
diz-se a **conjunção** de $p$ e $q$.
Assim, os valores lógicos das três proposições $p$, $q$, e $p\wedge
q$ estão relacionados pela tabela de verdade:
$p$ | $q$ | $p$ $\wedge$ $q$
:-----:|:----:|:--------:
V | V | V
V | F | F
F | V | F
F | F | F
Note que a tabela tem quatro linhas, uma por cada combinação
possível de valores de verdade para as proposições $p$ e $q$.
End of explanation
#
# Tabela da disjunção
#
for p in [True,False]:
    for q in [True,False]:
        print(p,'or',q,'=', p or q)
Explanation: Disjunção
Sejam p e q proposições. A proposição "$p$ ou $q$", denotada
p$\vee$q, é a proposição que é falsa apenas quando $p$ e $q$ são
ambas falsas, caso contrário é verdade. A proposição p$\vee$q
diz-se a disjunção de p e q.
A tabela de verdade de p $\vee$q toma assim a forma:
$p$ | $q$ | $p$ $\vee$ $q$
:------:|:-----:|:---------:
V | V | V
V | F | V
F | V | V
F | F | F
A conectiva ou é interpretada na versão inclusiva da
palavra "ou" em linguagem corrente. Note que, nas proposições seguintes, o "ou" tem significado inclusivo ou
significado exclusivo consoante o contexto de interpretação:
- O João pratica futebol ou natação.[ou ambas as coisas]
- Ele é do Sporting ou do Porto.[mas não as duas coisas]
End of explanation
#
# Tabela da disjunção exclusiva
#
for p in [True,False]:
    for q in [True,False]:
        if p!=q:
            print(p,'xor',q,'=', True)
        else:
            print(p,'xor',q,'=', False)
Explanation: Disjunção exclusiva
Para tornar a interpretação da disjunção independente do contexto definimos: A disjunção exclusiva
de p e q, denotada p$\oplus$q ou p$\dot{\vee}$q, é a
proposição que é verdade apenas quando, ou p é verdadeira ou q é
verdadeira, caso contrário é falsa.
A tabela de verdade de p$\oplus$q toma assim a forma:
$p$ | $q$ | $p$ $\oplus$ $q$
:------:|:-----:|:--------:
V | V | F
V | F | V
F | V | V
F | F | F
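A disjunção exclusiva pode exprimir-se através das outras conectivas: $p\oplus q \Leftrightarrow (p\vee q)\wedge\neg(p\wedge q)$. Um esboço mínimo em Python que confirma esta equivalência por força bruta:

```python
# Verifica, por força bruta, que p xor q equivale a (p or q) and not (p and q)
for p in [True, False]:
    for q in [True, False]:
        assert (p != q) == ((p or q) and not (p and q))
print('Equivalência verificada para todas as combinações.')
```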
End of explanation
#
# Tabela da implicação
#
for p in [True,False]:
    for q in [True,False]:
        if p and not q:
            print(p,'-->',q,'=',False)
        else:
            print(p,'-->',q,'=',True)
Explanation: Exercício:
Relacione o valor lógico das proposições $p$, $q$, $r$ e
$(p\wedge (\neg q))\oplus (r\vee p)$.
Exercício:
Indique os valores (de verdade ou falsidade) das seguintes afirmações:
- $3\leq 7$ e 4 é um número inteiro ímpar.
- $3\leq 7$ ou 4 é um número inteiro ímpar.
- 5 é ímpar ou divisível por 4.
Implicação
Sejam p e q proposições. A implicação p$\rightarrow$q é
a proposição que é falsa quando p é verdadeira e q é falsa, nos
outros casos é verdadeira.
A tabela de verdade de p$\rightarrow$q toma assim a forma:
$p$ | $q$ | $p$ $\rightarrow$ $q$
:------:|:-----:|:----------:
V | V | V
V | F | F
F | V | V
F | F | V
Numa proposição do tipo p$\rightarrow$q a proposição p recebe o
nome de hipótese (antecedente ou premissa) e a q chama-se
tese (conclusão ou consequente). A proposição p$\rightarrow$q também é muitas vezes designada por declaração
condicional. Estas designações são compatíveis com o uso da implicação em linguagem corrente; devemos, no entanto, notar que a tabela entra em conflito com a interpretação que fazemos da implicação: neste caso não se dirá "p implica q" quando se sabe a priori que p é falsa. Na interpretação que apresentamos para a implicação, ela é verdadeira sempre que "p" é falsa, independentemente do valor lógico de "q". Esta situação pode ilustrar-se com a implicação "se 1+1=1 então 2=3", que é verdadeira, uma vez que o antecedente é falso.
End of explanation
#
# Tabela da bi-implicação
#
for p in [True,False]:
    for q in [True,False]:
        if p==q:
            print(p,'<->',q,'=', True)
        else:
            print(p,'<->',q,'=', False)
Explanation: Bi-implicação
Sejam p e q proposições. A bi-condicional ou bi-implicação de p e q é a proposição p$\leftrightarrow$q que é verdadeira
quando p e q têm o mesmo valor lógico.
A tabela de verdade de p$\leftrightarrow$q toma assim a forma:
$p$ | $q$ | $p$ $\leftrightarrow$ $q$
:------:|:-----:|:----------:
V | V | V
V | F | F
F | V | F
F | F | V
A proposição p$\leftrightarrow$q deve ler-se "p se e só se q"
(abreviado por "p sse q") ou "p é condição necessária e
suficiente para q".
End of explanation
def imp(p,q):
    u''' imp(bool,bool)->bool
    Operador de implicação '''
    return not p or q

def biimp(p,q):
    u''' biimp(bool,bool)->bool
    Operador de bi-implicação'''
    return imp(p,q) and imp(q,p)

imp(False,True)
biimp(False,True)
Explanation: Facilmente podemos mostrar que as proposições p$\leftrightarrow$q
e $(p\rightarrow q)\wedge(q\rightarrow p)$ têm os mesmos valores
lógicos, ou seja a proposição $(p\leftrightarrow q)\leftrightarrow
((p\rightarrow q)\wedge(q\rightarrow p))$ é sempre verdadeira.
(p | $\leftrightarrow$ | q) | $\leftrightarrow$ | ((p | $\rightarrow$ | q) | $\wedge$ | (q | $\rightarrow$ | p))
:------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:
V | V | V | V | V | V | V | V | V | V | V
V | F | F | V | V | F | F | F | F | V | V
F | F | V | V | F | V | V | F | V | F | F
F | V | F | V | F | V | F | V | F | V | F
------|----|----|----|----|----|----|----|----|----|----
1 | 2 | 1 | 4 | 1 | 2 | 1 | 3 | 1 | 2 | 1
Exercício:
Suponhamos que p,q,r representam as seguintes sentenças:
$p:$"7 é um número inteiro par"
$q:3+1=4$
$r:$"24 é divisível por 8"
Escreva em linguagem simbólica as proposições
$3+1\neq 4$ e 24 é divisível por 8
não é verdade que 7 seja ímpar ou 3+1=4
se 3+1=4 então 24 não é divisível por 8
Escreva por palavras as sentenças
$p\vee(\neg q)$
$\neg(p\wedge q)$
$(\neg r)\vee (\neg q)$
Exercício:
Construir as tabelas de verdade das seguintes proposições:
1. $((p\rightarrow q)\wedge p)\rightarrow q$
1. $p\leftrightarrow(q\rightarrow r)$
1. $(p\wedge(\neg p))\rightarrow q$
1. $((p\vee r)\wedge(q\vee r))\wedge((\neg p)\vee (\neg r))$
1. $(p\wedge(q\vee r))\wedge (q\wedge (p\vee r))$
Exercício:
Quantas linhas tem a tabela de verdade de uma proposição com $n$ variáveis proposicionais?
Ordem de precedência das conectivas lógicas
Até aqui, temos usado parêntesis para definir a ordem de
aplicação dos operadores lógicos numa proposição composta. Por forma
a reduzir o número de parêntesis adoptamos a seguinte convenção: Sempre que numa expressão estiverem presentes várias operações lógicas, convenciona-se, na ausência de parêntesis, que as operações se efectuem na ordem seguinte:
1. a negação;
1. a conjunção e a disjunção;
1. a implicação e a bi-implicação.
Assim,
1. $p\rightarrow ((\neg p)\vee r)$ pode escrever-se $p\rightarrow \neg p\vee r$;
1. $(p\wedge (\neg q))\leftrightarrow c$ pode escrever-se $p\wedge \neg q\leftrightarrow c$;
1. $p\vee q\wedge \neg r \rightarrow p \rightarrow\neg q$ deve ser entendida como
$(((p\vee q)\wedge(\neg r))\rightarrow p) \rightarrow(\neg q)$.
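A convenção acima pode contrastar-se com a do Python, onde not tem precedência sobre and, e and sobre or (ao contrário do texto, em Python a conjunção e a disjunção não partilham o mesmo nível). Um esboço mínimo de verificação:

```python
# not > and > or em Python: not p or q and r agrupa como (not p) or (q and r)
for p in [True, False]:
    for q in [True, False]:
        for r in [True, False]:
            assert (not p or q and r) == ((not p) or (q and r))
```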
Tautologia
Chama-se tautologia (ou fórmula logicamente
verdadeira) a uma proposição que é verdadeira quaisquer que sejam os valores lógicos atribuídos às variáveis proposicionais que a compõem. Dito de outra forma, chama-se tautologia a uma proposição cuja coluna correspondente na tabela de verdade possui apenas Vs ou 1s. Exemplo duma tautologia é a proposição $p\vee(\neg p)$, designada de "Princípio do terceiro excluído".
A negação duma tautologia, ou seja uma proposição que é sempre falsa, diz-se uma contra-tautologia ou contradição. Se uma proposição não é nem uma tautologia nem uma contradição denomina-se por contingência.
Não deve confundir-se contradição com proposição falsa, assim como não deve confundir-se tautologia com proposição verdadeira. O facto de uma tautologia ser sempre verdadeira e uma contradição ser sempre falsa deve-se à sua forma lógica (sintaxe) e não ao significado que se lhes pode atribuir (semântica).
A tabela de verdade
mostra que $p\rightarrow(p\vee q)$ é uma tautologia, enquanto que $(p\rightarrow q)\wedge (p\wedge (\neg q))$ é uma contradição.
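Estas duas afirmações podem confirmar-se por força bruta em Python; um esboço, escrevendo a implicação como (not p) or q:

```python
# p -> (p ou q) é uma tautologia: verdadeira em todas as linhas da tabela
assert all((not p) or (p or q)
           for p in [True, False] for q in [True, False])

# (p -> q) e (p e não q) é uma contradição: falsa em todas as linhas
assert not any(((not p) or q) and (p and not q)
               for p in [True, False] for q in [True, False])
print('Tautologia e contradição confirmadas.')
```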
Exercício
Mostre que são tautologias:
1. $(\neg q\rightarrow \neg p)\leftrightarrow(p\rightarrow q)$
1. $(p\leftrightarrow q)\leftrightarrow ((p\rightarrow q)\wedge(q\rightarrow p))$
Exemplos de outras tautologias são apresentados abaixo.
Equivalências proposicionais
As proposições $p$ e $q$ dizem-se logicamente
equivalentes se $p\leftrightarrow q$ é uma tautologia. Por $p\equiv
q$ ou $p\Leftrightarrow q$ denotamos que $p$ e $q$ são logicamente
equivalentes.
Diz-se que a proposição $p$ implica logicamente a proposição $q$ se a veracidade da primeira arrastar necessariamente a veracidade da segunda, ou seja, se a proposição p$\rightarrow$q for uma tautologia.
$\neg q\rightarrow \neg p \Leftrightarrow p\rightarrow q$
$\neg$ | $q$ | $\rightarrow$ | $\neg$ | $p$
:-----------:|:-------:|:---------------:|:--------:|:-----:
F | V | V | F | V
V | F | F | F | V
F | V | V | V | F
V | F | V | V | F
-----------|-------|---------------|--------|-----
2 | 1 | 3 | 2 | 1
e
$p$ | $\rightarrow$ | $q$
:-----:|:-------------:|:----:
V | V | V
V | F | F
F | V | V
F | V | F
-------|---------------|------
1 | 2 | 1
$p\leftrightarrow q\Leftrightarrow (p\rightarrow q)\wedge(q\rightarrow p)$
($p$ | $\leftrightarrow$ | q) | $\leftrightarrow$ | (($p$ | $\rightarrow$ | $q$) | $\wedge$ | ($q$ | $\rightarrow$ | $p$))
:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:
V | V | V | V | V | V | V | V | V | V | V
V | F | F | V | V | F | F | F | F | V | V
F | F | V | V | F | V | V | F | V | F | F
F | V | F | V | F | V | F | V | F | V | F
----|----|----|----|----|----|----|----|----|----|----
1 | 2 | 1 | 4 | 1 | 2 | 1 | 3 | 1 | 2 | 1
Deste modo, a equivalência proposicional pode ser sempre verificada através duma tabela de verdade. Em particular, as proposições $p$ e $q$ são equivalentes se e só se as colunas, na tabela de verdade, que determinam os seu valores lógicos coincidirem.
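Esta verificação por comparação de colunas pode esboçar-se em Python (os nomes são ilustrativos):

```python
def equivalentes(f, g):
    # Compara as colunas finais das tabelas de verdade de f e g,
    # avaliando ambas em todas as combinações de valores lógicos.
    return all(f(p, q) == g(p, q)
               for p in [True, False] for q in [True, False])

# Lei de De Morgan: não (p ou q) equivale a (não p) e (não q)
assert equivalentes(lambda p, q: not (p or q),
                    lambda p, q: (not p) and (not q))
# Contra-exemplo: p ou q não equivale a p e q
assert not equivalentes(lambda p, q: p or q,
                        lambda p, q: p and q)
```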
Exercício
Mostre que são exemplos de equivalências proposicionais:
1. $\neg(p\vee \neg p) \Leftrightarrow p \wedge \neg p$
1. $\neg (p\vee q)\Leftrightarrow \neg p \wedge \neg q$
1. $\neg p\vee q \Leftrightarrow p \rightarrow q$
1. $p\vee(q\wedge r)\Leftrightarrow(p\vee q)\wedge(p\vee r)$
Exercício
Indique quais das sentenças seguintes são equivalentes:
1. $p\wedge(\neg q)$
1. $p\rightarrow q$
1. $\neg((\neg p)\vee q)$
1. $q\rightarrow(\neg q)$
1. $(\neg p)\vee q$
1. $\neg(p\rightarrow q)$
1. $p\rightarrow(\neg q)$
1. $(\neg p)\rightarrow (\neg q)$
Exercício
Mostre que cada uma das proposições que se seguem:
1. $(\neg p)\vee q$
1. $(\neg q)\rightarrow (\neg p)$
1. $\neg(p\wedge (\neg q))$
é equivalente a $p\rightarrow q$.
Exercício
Mostre que:
1. $p\vee(q\wedge r)$ não é logicamente equivalente a $(p\vee q)\wedge r$.
1. $p\vee (q\wedge r)$ é logicamente equivalente a $(p\vee q)\wedge (p\vee r)$.
1. $p\vee(\neg (q \vee r))$ é logicamente equivalente a $(p\vee(\neg q))\vee(\neg r)$
De seguida apresentamos exemplos de equivalências úteis para o que se segue (que podem ser verificadas através de tabelas de verdade):
Nome | Propriedade | Propriedade
-------------|----------------------|-------------------
Comutatividade | $p \wedge q \Leftrightarrow q \wedge p$ | $p \vee q \Leftrightarrow q \vee p$
Associativa| $(p\wedge q)\wedge r \Leftrightarrow p \wedge (q \wedge r)$ | $(p\vee q)\vee r \Leftrightarrow p \vee (q \vee r)$
Idempotência | $p\wedge p \Leftrightarrow p$ | $p\vee p \Leftrightarrow p$
Identidade | $p\wedge V\Leftrightarrow p$ | $p\vee F\Leftrightarrow p$
Dominância | $p\wedge F\Leftrightarrow F$ | $p\vee V\Leftrightarrow V$
Absorção | $p\wedge(p\vee r)\Leftrightarrow p$ |$p\vee(p\wedge r)\Leftrightarrow p$
Distributivas | $p\wedge(q\vee r)\Leftrightarrow(p\wedge q)\vee(p\wedge r)$ | $p\vee(q\wedge r)\Leftrightarrow(p\vee q)\wedge(p\vee r)$
Distributivas | $p\rightarrow(q\vee r)\Leftrightarrow(p\rightarrow q)\vee(p\rightarrow r)$ | $p\rightarrow(q\wedge r)\Leftrightarrow (p\rightarrow q)\wedge(p\rightarrow r)$
Leis de De Morgan | $\neg (p\wedge q)\Leftrightarrow \neg p \vee \neg q$ | $\neg (p\vee q)\Leftrightarrow \neg p \wedge \neg q$
Def. Implicação | $p\rightarrow q \Leftrightarrow \neg p \vee q$ | $p\rightarrow q\Leftrightarrow \neg(p\wedge\neg q)$
Def. Bi-condicional | $p\leftrightarrow q \Leftrightarrow (p\rightarrow q) \wedge (q \rightarrow p)$ | $p\leftrightarrow q \Leftrightarrow (\neg p \vee q) \wedge (\neg q \vee p)$
Negação | $\neg(\neg p)\Leftrightarrow p$ |
Contraposição | $p\rightarrow q \Leftrightarrow \neg q \rightarrow \neg p$|
Troca de premissas | $p\rightarrow (q\rightarrow r)\Leftrightarrow q\rightarrow (p\rightarrow r)$ |
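Algumas destas equivalências podem confirmar-se por força bruta; um esboço mínimo:

```python
B = [True, False]
# Leis de De Morgan
assert all((not (p and q)) == ((not p) or (not q)) for p in B for q in B)
assert all((not (p or q)) == ((not p) and (not q)) for p in B for q in B)
# Absorção: p e (p ou r) equivale a p
assert all((p and (p or r)) == p for p in B for r in B)
# Contraposição: p -> q equivale a (não q) -> (não p)
assert all(((not p) or q) == (q or (not p)) for p in B for q in B)
print('Leis verificadas.')
```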
As equivalências lógicas apresentadas na tabela anterior, podem ser usadas na determinação de equivalências lógicas adicionais. Isso porque, podemos numa proposição composta, substituir
proposições por proposições que lhes sejam equivalentes
sem que isso altere os valores de verdade da proposição original.
Por exemplo:
$$
\begin{array}{rcll}
\neg(p\vee(\neg p \wedge q)) & \Leftrightarrow & \neg p \wedge \neg(\neg p \wedge q) & \text{da segunda lei de De Morgan} \\
& \Leftrightarrow & \neg p \wedge [\neg(\neg p) \vee \neg q] & \text{da primeira lei de De Morgan} \\
& \Leftrightarrow & \neg p \wedge (p\vee \neg q) & \text{da lei da dupla negação} \\
& \Leftrightarrow & (\neg p \wedge p) \vee (\neg p \wedge \neg q) & \text{da segunda distributividade} \\
& \Leftrightarrow & F \vee (\neg p \wedge \neg q) & \text{já que } \neg p \wedge p \Leftrightarrow F \\
& \Leftrightarrow & \neg p \wedge \neg q & \text{da lei identidade}
\end{array}
$$
Donde podemos concluir que $\neg(p\vee(\neg p \wedge q))$ e $\neg p
\wedge \neg q$ são proposições logicamente equivalentes:
$$
\neg(p\vee(\neg p \wedge q)) \Leftrightarrow \neg p \wedge \neg q
$$
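A cadeia de equivalências acima pode confirmar-se por força bruta; um esboço:

```python
# Verifica que não(p ou (não p e q)) coincide com (não p) e (não q)
for p in [True, False]:
    for q in [True, False]:
        lado_esq = not (p or ((not p) and q))
        lado_dir = (not p) and (not q)
        assert lado_esq == lado_dir
print('Equivalência confirmada em todas as linhas da tabela.')
```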
Exercício
Simplifique as seguintes proposições:
1. $p\vee(q\wedge (\neg p))$
1. $\neg(p\vee(q\wedge(\neg r)))\wedge q$
1. $\neg((\neg p)\wedge(\neg q))$
1. $\neg((\neg p)\vee q)\vee(p\wedge(\neg r))$
1. $(p\wedge q)\vee (p\wedge (\neg q))$
1. $(p\wedge r)\vee((\neg r)\wedge (p\vee q))$
Exercício
Por vezes usa-se o símbolo $\downarrow$ para construir proposições compostas $p\downarrow q$ definidas por duas proposições $p$ e $q$, que é verdadeira quando e só quando $p$ e $q$ são simultaneamente falsas, e é falsa em todos os outros casos. A proposição $p\downarrow q$ lê-se "nem $p$ nem $q$".
1. Apresente a tabela de verdade de $p\downarrow q$.
1. Expresse $p\downarrow q$ em termos das conectivas $\wedge,\vee$ e $\neg$.
1. Determine proposições apenas definidas pela conectiva $\downarrow$ que sejam equivalentes a $\neg p$, $p\wedge q$ e $p\vee q$.
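Um esboço de solução para estas alíneas (o nome nor é ilustrativo); a conectiva $\downarrow$ corresponde a $\neg(p\vee q)$:

```python
def nor(p, q):
    '''Conectiva "nem p nem q": verdadeira apenas quando p e q são ambas falsas.'''
    return not (p or q)

# Tabela de verdade de p ↓ q
for p in [True, False]:
    for q in [True, False]:
        print(p, 'nor', q, '=', nor(p, q))

# Uma solução possível para exprimir ¬p, p∧q e p∨q apenas com ↓
assert all(nor(p, p) == (not p) for p in [True, False])
assert all(nor(nor(p, p), nor(q, q)) == (p and q)
           for p in [True, False] for q in [True, False])
assert all(nor(nor(p, q), nor(p, q)) == (p or q)
           for p in [True, False] for q in [True, False])
```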
Exercício
Expresse a proposição $p\leftrightarrow q$ usando apenas os símbolos $\wedge,\vee$ e $\neg$.
Considerações sobre a implicação
As duas primeiras linhas da tabela da implicação
$p$ | $q$ | $p\rightarrow q$
:-------:|:-------:|:------------:
V | V | V
V | F | F
F | V | V
F | F | V
não apresentam qualquer problema sob o ponto de vista intuitivo do senso comum. Quanto às duas últimas, qualquer outra escolha possível apresenta desvantagens sob o ponto de vista lógico, o que levou à escolha das soluções apresentadas, já que:
fazendo F na 3.ª linha e F na 4.ª linha, obtém-se a tabela da conjunção;
fazendo F na 3.ª linha e V na 4.ª linha, obtém-se a tabela da bi-implicação;
resta a possibilidade de fazer V na 3.ª linha e F na 4.ª linha, que também não serve, pois isso equivaleria a recusar a equivalência
$$
(p\rightarrow q)\Leftrightarrow(\neg q\rightarrow\neg p)
$$
que é uma equivalência aconselhável, já que a proposição "se o Pedro fala, existe" é (intuitivamente) equivalente à proposição "se o Pedro não existe, não fala". A aceitação desta equivalência impõe a tabela considerada para a implicação.
$\neg$ | $q$ | $\rightarrow$ | $\neg$ | $p$
:-------:|:-----:|:---------------:|:--------:|:-------:
F | V | V | F | V
V | F | F | F | V
F | V | V | V | F
V | F | V | V | F
-------|-----|---------------|--------|-------
2 | 1 | 3 | 2 | 1
e
$p$ | $\rightarrow$ | $q$
:----:|:---------------:|:-------:
V | V | V
V | F | F
F | V | V
F | V | F
----|---------------|-------
1 | 2 | 1
A partir duma implicação $r$ dada por $p\rightarrow q$ definem-se as
proposições:
1. $q\rightarrow p$, designada de recíproca da implicação $r$;
1. $\neg q\rightarrow \neg p$, designada por contra-recíproca de $r$;
1. $\neg p\rightarrow \neg q$, designada por inversa de $r$.
Observe-se que, embora a contra-recíproca seja equivalente à proposição original, o mesmo não acontece com a recíproca (e a inversa, que lhe é equivalente) o que se pode verificar através das respectivas tabelas de verdade.
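Estas observações podem confirmar-se por força bruta; um esboço (imp_ é um nome ilustrativo para a implicação material, escolhido para não colidir com a função imp usada mais adiante):

```python
def imp_(p, q):
    # Implicação material: falsa apenas quando p é verdadeira e q é falsa
    return (not p) or q

B = [True, False]
# A contra-recíproca é equivalente à implicação original
assert all(imp_(p, q) == imp_(not q, not p) for p in B for q in B)
# A recíproca não é equivalente: contra-exemplo com p=False, q=True
assert imp_(False, True) != imp_(True, False)
```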
Exercício
Determine:
1. a contra-recíproca de $(\neg p)\rightarrow q$
1. a inversa de $(\neg q)\rightarrow p$
1. a recíproca da inversa de $q\rightarrow (\neg p)$
1. a negação de $p\rightarrow (\neg q)$
Exercícios de python
Exercício:
Implemente os operadores de implicação e bi-implicação, através de funções
imp(bool,bool)->bool e biimp(bool,bool)->bool.
End of explanation
def TabelaP4():
    u''' TabelaP4()->
    tabela de (p->q)|h'''
    print('-'*29)
    print('p'.center(5)+'|'+'q'.center(5)+'|'+'h'.center(5)+'| (p->q)|h')
    print('-'*29)
    for p in [False,True]:
        for q in [False,True]:
            for h in [False,True]:
                aval = imp(p,q) or h
                print(str(p).center(5)+'|'+str(q).center(5)+'|'+str(h).center(5)+'|'+str(aval).center(10))

TabelaP4()
Explanation: Exercício:
Apresente as tabelas de verdade da implicação, da bi-implicação e da proposição $P4:(p\rightarrow q)\vee h$. Por exemplo, tal que
>>> TabelaP4()
-----------------------------
p | q | h | (p->q)|h
-----------------------------
False|False|False| True
False|False| True| True
False| True|False| True
False| True| True| True
True|False|False| False
True|False| True| True
True| True|False| True
True| True| True| True
End of explanation
def cab(lista):
    u''' cab(list)->
    Imprime cabeçalho de tabela'''
    print('-'*5*(len(lista)+1))
    for prop in lista[:-1]:
        print(prop.center(5)+'|', end='')
    print(lista[-1]) # imprime último elemento
    print('-'*5*(len(lista)+1))

cab(['p1','p2','imp(p1,p2)'])
Explanation: Exercício:
Defina a função
cab(list)->
em que dado uma lista de strings ['p1','p2','p3',...,'pn'], imprima o cabeçalho duma tabela de verdade. Por exemplo, tal que
>>> cab(['p1','p2','imp(p1,p2)'])
-------------------------
p1 | p2 | imp(p1,p2)
-------------------------
End of explanation
def linha(lista):
    u''' linha(list)->
    Imprime linha de tabela'''
    for prop in lista[:-1]:
        print(str(prop).center(5)+'|', end='')
    print(str(lista[-1])) # imprime último elemento

linha([True,False,True])
Explanation: Exercício:
Defina a função
linha(list)->
em que, dada uma lista de valores lógicos [v1,v2,...,vn], imprima uma linha 'v1|v2|...|vn' duma tabela de verdade, onde cada valor lógico ocupa uma string com 5 posições. Por exemplo, tal que
>>> linha([True,False,True])
True|False| True
End of explanation
def trad(exp):
    u''' trad(str)->str
    Tradução duma expressão proposicional codificada,
    usando os símbolos 0, 1, &, | e ~, numa expressão
    proposicional no Python usando False, True, and, or e not.
    '''
    exp = exp.replace('0','False')
    exp = exp.replace('1','True')
    exp = exp.replace('&',' and ')
    exp = exp.replace('|',' or ')
    exp = exp.replace('~',' not ')
    return exp

trad('(p&~(q|w))')
Explanation: Exercício:
Defina uma função trad(string)->string que faça a tradução duma expressão proposicional codificada, usando os símbolos 0, 1, &, | e ~, numa expressão proposicional no Python usando False, True, and, or e not. Por exemplo, tal que
>>> trad('(p&~(q|w))')
'(p and not (q or w))'
End of explanation
def Eval(exp, atrib):
    u''' Eval(string,list)->bool
    Avalia a expressão proposicional, na sintaxe do Python,
    associando a cada variável usada <var> o valor lógico <bool>.
    A associação entre variáveis e valores lógicos deve ser descrita
    por pares (<var>,<bool>) na lista que serve de argumento.
    '''
    # Nota: substituição textual simples; assume-se que nenhum nome de
    # variável é substring de outro nem das palavras 'True'/'False'.
    for var in atrib:
        exp = exp.replace(var[0],str(var[1]))
    return eval(exp)

Eval('not(p1 and p2) or p1',[('p1',True),('p2',False)])
Explanation: Exercício:
Defina a função
Eval(string,list)->bool
que avalia a expressão proposicional, na sintaxe do Python, associando a cada variável usada <var> o valor lógico <bool>. A associação entre variáveis e valores lógicos deve ser descrita por pares (<var>,<bool>) na lista que serve de argumento.
Eval('(p1 and not (p2 or p3))',[('p1',True),('p2',False),('p3',True)]) avalia '(True and not (False or True))'.
Por exemplo, tal que
>>> Eval('not(p1 and p2) or p1',[('p1',True),('p2',False)])
True
End of explanation
def binlist(nvar):
    u''' binlist(int)->
    lista em representação binária os números de 2**nvar-1 até 0
    '''
    for n in range(2**nvar-1,-1,-1):
        print(bin(n)[2:].rjust(nvar,'0'))

binlist(3)
Explanation: Exercício:
Represente, em representação binária, os números de $2^n-1$ até zero. Exemplo:
>>> binlist(3)
111
110
101
100
011
010
001
000
End of explanation
def tabela(exp,var):
    u''' tabela(str,list)->
    Imprime a tabela de verdade da proposição descrita pela string,
    assumindo que as suas variáveis estão na lista.
    USANDO: a linguagem proposicional de símbolos 0, 1, &, | e ~,
    mais as funções imp(bool,bool)->bool e biimp(bool,bool)->bool
    '''
    cab(var+[exp])
    nvar = len(var)
    for n in range(2**nvar-1,-1,-1):
        l = bin(n)[2:].rjust(nvar,'0')
        cont = 0
        lista = []
        vlog = []
        for v in var:
            lista.append((v,bool(int(l[cont]))))
            vlog.append(bool(int(l[cont])))
            cont = cont + 1
        linha(vlog + [Eval(trad(exp),lista)])

tabela('imp(u,q)|w',['u','q','w'])
Explanation: Exercício:
Usando as funções anteriores, defina uma função
tabela(string, list)->
que imprima a tabela de verdade da proposição $q$, descrita pela string, assumindo que as suas variáveis estão na lista $[p1,p2,...,pn]$. (USANDO: a linguagem proposicional de símbolos 0, 1, &, | e ~, mais as funções imp(bool,bool)->bool e biimp(bool,bool)->bool.)
Por exemplo, tal que
>>> tabela('imp(u,q)|w',['u','q','w'])
-------------------------
u | q | w |imp(u,q)|w
-------------------------
True| True| True|True
True| True|False|True
True|False| True|True
True|False|False|False
False| True| True|True
False| True|False|True
False|False| True|True
False|False|False|True
End of explanation
def tautologia(exp,var):
    u''' tautologia(str,list)->bool
    Verifica se a proposição descrita pela string é uma tautologia,
    assumindo que as suas variáveis estão descritas na lista.
    USANDO: a linguagem proposicional de símbolos 0, 1, &, | e ~,
    mais as funções imp(bool,bool)->bool e biimp(bool,bool)->bool
    '''
    sai = True
    nvar = len(var)
    for n in range(2**nvar-1,-1,-1):
        l = bin(n)[2:].rjust(nvar,'0')
        cont = 0
        lista = []
        for v in var:
            lista.append((v,bool(int(l[cont]))))
            cont = cont + 1
        # traduz primeiro a expressão para a sintaxe do Python,
        # tal como na função tabela
        sai = sai and bool(Eval(trad(exp),lista))
    return sai

tautologia('biimp(~q | w, imp(q,w))',['q','w'])
Explanation: Exercício:
Usando as funções anteriores, defina uma função
tautologia(string, list)->bool
que verifica se a proposição $q$, descrita pela string, é uma tautologia, assumindo que as suas variáveis estão descritas na lista $[p1,p2,...,pn]$. (USANDO: a linguagem proposicional de símbolos 0, 1, &, | e ~, mais as funções imp(bool,bool)->bool e biimp(bool,bool)->bool.) Por exemplo, tal que
>>> tautologia('biimp(~q | w, imp(q,w))',['q','w'])
True
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Part 9
Step1: Hyperparameter Optimization
Let's start by loading the HIV dataset. It classifies over 40,000 molecules based on whether they inhibit HIV replication.
Step2: Now let's train a model on it. We will use a MultitaskClassifier, which is just a stack of dense layers. But that still leaves a lot of options. How many layers should there be, and how wide should each one be? What dropout rate should we use? What learning rate?
These are called hyperparameters. The standard way to select them is to try lots of values, train each model on the training set, and evaluate it on the validation set. This lets us see which ones work best.
You could do that by hand, but usually it's easier to let the computer do it for you. DeepChem provides a selection of hyperparameter optimization algorithms, which are found in the dc.hyper package. For this example we'll use GridHyperparamOpt, which is the most basic method. We just give it a list of options for each hyperparameter and it exhaustively tries all combinations of them.
The lists of options are defined by a dict that we provide. For each of the model's arguments, we provide a list of values to try. In this example we consider three possible sets of hidden layers
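Under the hood, a grid search simply enumerates the Cartesian product of the option lists. A minimal sketch of that enumeration, independent of DeepChem (the parameter names below are illustrative, not necessarily DeepChem's exact argument names):

```python
import itertools

# Hypothetical hyperparameter grid (parameter names are illustrative)
params_dict = {
    'layer_sizes': [[500], [1000], [1000, 1000]],
    'dropouts': [0.2, 0.5],
    'learning_rate': [0.001, 0.0001],
}

# Every combination the grid search would try
keys = list(params_dict)
combos = [dict(zip(keys, values))
          for values in itertools.product(*(params_dict[k] for k in keys))]
print(len(combos))  # 3 * 2 * 2 = 12 combinations
```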
Step3: hyperparam_search() returns three arguments
Step4: We can see a few general patterns. Using two layers with the larger learning rate doesn't work very well. It seems the deeper model requires a smaller learning rate. We also see that 20% dropout usually works better than 50%. Once we narrow down the list of models based on these observations, all the validation scores are very close to each other, probably close enough that the remaining variation is mainly noise. It doesn't seem to make much difference which of the remaining hyperparameter sets we use, so let's arbitrarily pick a single layer of width 1000 and learning rate of 0.0001.
Early Stopping
There is one other important hyperparameter we haven't considered yet
Step5: Learning Rate Schedules
In the examples above we use a fixed learning rate throughout training. In some cases it works better to vary the learning rate during training. To do this in DeepChem, we simply specify a LearningRateSchedule object instead of a number for the learning_rate argument. In the following example we use a learning rate that decreases exponentially. It starts at 0.0002, then gets multiplied by 0.9 after every 1000 steps. | Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
Explanation: Tutorial Part 9: Advanced Model Training
In the tutorials so far we have followed a simple procedure for training models: load a dataset, create a model, call fit(), evaluate it, and call ourselves done. That's fine for an example, but in real machine learning projects the process is usually more complicated. In this tutorial we will look at a more realistic workflow for training a model.
Colab
This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine.
End of explanation
import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_hiv(featurizer='ECFP', split='scaffold')
train_dataset, valid_dataset, test_dataset = datasets
Explanation: Hyperparameter Optimization
Let's start by loading the HIV dataset. It classifies over 40,000 molecules based on whether they inhibit HIV replication.
End of explanation
params_dict = {
'n_tasks': [len(tasks)],
'n_features': [1024],
'layer_sizes': [[500], [1000], [1000, 1000]],
'dropouts': [0.2, 0.5],
'learning_rate': [0.001, 0.0001]
}
optimizer = dc.hyper.GridHyperparamOpt(dc.models.MultitaskClassifier)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
best_model, best_hyperparams, all_results = optimizer.hyperparam_search(
params_dict, train_dataset, valid_dataset, metric, transformers)
Explanation: Now let's train a model on it. We will use a MultitaskClassifier, which is just a stack of dense layers. But that still leaves a lot of options. How many layers should there be, and how wide should each one be? What dropout rate should we use? What learning rate?
These are called hyperparameters. The standard way to select them is to try lots of values, train each model on the training set, and evaluate it on the validation set. This lets us see which ones work best.
You could do that by hand, but usually it's easier to let the computer do it for you. DeepChem provides a selection of hyperparameter optimization algorithms, which are found in the dc.hyper package. For this example we'll use GridHyperparamOpt, which is the most basic method. We just give it a list of options for each hyperparameter and it exhaustively tries all combinations of them.
The lists of options are defined by a dict that we provide. For each of the model's arguments, we provide a list of values to try. In this example we consider three possible sets of hidden layers: a single layer of width 500, a single layer of width 1000, or two layers each of width 1000. We also consider two dropout rates (20% and 50%) and two learning rates (0.001 and 0.0001).
End of explanation
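GridHyperparamOpt's exhaustive sweep amounts to taking the Cartesian product of the option lists. A minimal sketch of that enumeration, independent of DeepChem (the dict below mirrors the varying hyperparameters above; the fixed single-option entries are omitted):

```python
from itertools import product

options = {
    'layer_sizes': [[500], [1000], [1000, 1000]],
    'dropouts': [0.2, 0.5],
    'learning_rate': [0.001, 0.0001],
}

keys = sorted(options)
combinations = [dict(zip(keys, values))
                for values in product(*(options[k] for k in keys))]

# 3 layer configurations x 2 dropouts x 2 learning rates = 12 models to train
print(len(combinations))  # 12
```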
all_results
Explanation: hyperparam_search() returns three arguments: the best model it found, the hyperparameters for that model, and a full listing of the validation score for every model. Let's take a look at the last one.
End of explanation
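Once the full score listing is available, picking the winner is just an argmax over the results. A sketch with made-up scores (the keys and numbers below are illustrative, not actual results from this run):

```python
# Hypothetical validation ROC AUC scores keyed by hyperparameter setting.
val_scores = {
    "layers=[500], dropout=0.2, lr=0.001": 0.759,
    "layers=[1000], dropout=0.2, lr=0.0001": 0.771,
    "layers=[1000, 1000], dropout=0.5, lr=0.001": 0.714,
}

# The best setting is simply the key with the highest score.
best_setting = max(val_scores, key=val_scores.get)
print(best_setting, val_scores[best_setting])
```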
model = dc.models.MultitaskClassifier(n_tasks=len(tasks),
n_features=1024,
layer_sizes=[1000],
dropouts=0.2,
learning_rate=0.0001)
callback = dc.models.ValidationCallback(valid_dataset, 1000, metric)
model.fit(train_dataset, nb_epoch=50, callbacks=callback)
Explanation: We can see a few general patterns. Using two layers with the larger learning rate doesn't work very well. It seems the deeper model requires a smaller learning rate. We also see that 20% dropout usually works better than 50%. Once we narrow down the list of models based on these observations, all the validation scores are very close to each other, probably close enough that the remaining variation is mainly noise. It doesn't seem to make much difference which of the remaining hyperparameter sets we use, so let's arbitrarily pick a single layer of width 1000 and learning rate of 0.0001.
Early Stopping
There is one other important hyperparameter we haven't considered yet: how long we train the model for. GridHyperparamOpt trains each for a fixed, fairly small number of epochs. That isn't necessarily the best number.
You might expect that the longer you train, the better your model will get, but that isn't usually true. If you train too long, the model will usually start overfitting to irrelevant details of the training set. You can tell when this happens because the validation set score stops increasing and may even decrease, while the score on the training set continues to improve.
Fortunately, we don't need to train lots of different models for different numbers of steps to identify the optimal number. We just train it once, monitor the validation score, and keep whichever parameters maximize it. This is called "early stopping". DeepChem's ValidationCallback class can do this for us automatically. In the example below, we have it compute the validation set's ROC AUC every 1000 training steps. If you add the save_dir argument, it will also save a copy of the best model parameters to disk.
End of explanation
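The bookkeeping behind early stopping is simple: evaluate periodically and remember the checkpoint with the best validation score. A sketch with a synthetic score sequence (not real model output):

```python
def early_stop_best(scores):
    # scores[i] is the validation metric after checkpoint i;
    # return the index and value of the best checkpoint seen.
    best_step, best_score = 0, float('-inf')
    for step, score in enumerate(scores):
        if score > best_score:
            best_step, best_score = step, score
    return best_step, best_score

# Typical overfitting curve: validation score rises, then degrades.
val_scores = [0.70, 0.74, 0.77, 0.78, 0.76, 0.75]
print(early_stop_best(val_scores))  # (3, 0.78)
```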
learning_rate = dc.models.optimizers.ExponentialDecay(0.0002, 0.9, 1000)
model = dc.models.MultitaskClassifier(n_tasks=len(tasks),
n_features=1024,
layer_sizes=[1000],
dropouts=0.2,
learning_rate=learning_rate)
model.fit(train_dataset, nb_epoch=50, callbacks=callback)
Explanation: Learning Rate Schedules
In the examples above we use a fixed learning rate throughout training. In some cases it works better to vary the learning rate during training. To do this in DeepChem, we simply specify a LearningRateSchedule object instead of a number for the learning_rate argument. In the following example we use a learning rate that decreases exponentially. It starts at 0.0002, then gets multiplied by 0.9 after every 1000 steps.
End of explanation |
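Assuming a staircase-style decay (the rate is multiplied by 0.9 once per full 1000 steps; DeepChem's exact interpolation may differ), the schedule's value at a given step works out as:

```python
def exponential_decay(step, initial_rate=0.0002, decay_rate=0.9, decay_steps=1000):
    # Staircase interpretation: the rate drops by a factor of decay_rate
    # each time another full decay_steps have elapsed.
    return initial_rate * decay_rate ** (step // decay_steps)

for s in (0, 1000, 2000, 5000):
    print(s, exponential_decay(s))
```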
6,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST end to end on Kubeflow on GKE
This example guides you through
Step1: Install the required libraries
Run the next cell to import the libraries required to train this model.
Step2: Wait for the message Configure docker credentials before moving on to the next cell.
Step3: Configure a Docker registry for Kubeflow Fairing
In order to build Docker images from your notebook, you need a Docker registry to store the images.
Below you set some variables specifying a Container Registry.
Kubeflow Fairing provides a utility function to guess the name of your GCP project.
Step4: Use Kubeflow Fairing to build the Docker image
This notebook uses Kubeflow Fairing's kaniko builder to build a Docker image that includes all your dependencies.
* You use kaniko because you want to be able to run pip to install dependencies.
* Kaniko gives you the flexibility to build images from Dockerfiles.
Step5: Run the next cell and wait until you see a message like Built image gcr.io/<your-project>/fairing-job/mnist
Step6: Create a Cloud Storage bucket
Run the next cell to create a Google Cloud Storage (GCS) bucket to store your models and other results.
Since this notebook is running in Python, the cell uses the GCS Python client libraries, but you can use the gsutil command line instead.
Step8: Distributed training
To train the model, this example uses TFJob to run a distributed training job. Run the next cell to set up the YAML specification for the job
Step9: Create the training job
To submit the training job, you could write the spec to a YAML file and then do kubectl apply -f {FILE}.
However, because you are running in a Jupyter notebook, you use the TFJob client.
* You run the TFJob in a namespace created by a Kubeflow profile.
* The namespace is the same as the namespace where you are running the notebook.
* Creating a profile ensures that the namespace is provisioned with service accounts and other resources needed for Kubeflow.
Step10: Check the job using kubectl
Above you used the Python SDK for TFJob to check the status. You can also use kubectl to get the status of your job.
The job conditions will tell you whether the job is running, succeeded or failed.
Step12: Get the training logs
There are two ways to get the logs for the training job
Step16: Deploy TensorBoard
The next step is to create a Kubernetes deployment to run TensorBoard.
TensorBoard will be accessible behind the Kubeflow IAP endpoint.
Step17: Set a variable defining your endpoint
Set endpoint to https
Step18: Access the TensorBoard UI
Run the cell below to find the endpoint for the TensorBoard UI.
Step19: Wait for the training job to finish
You can use the TFJob client to wait for the job to finish
Step23: Serve the model
Now you can deploy the model using TensorFlow Serving.
You need to create the following
Step27: Deploy the UI for the MNIST web app
Deploy the UI to visualize the MNIST prediction results.
This example uses a prebuilt and public Docker image for the UI.
Step28: Access the MNIST web UI
A reverse proxy route is automatically added to the Kubeflow IAP endpoint. The MNIST endpoint is | Python Code:
import logging
import os
import uuid
from importlib import reload
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
Explanation: MNIST end to end on Kubeflow on GKE
This example guides you through:
Taking an example TensorFlow model and modifying it to support distributed training.
Serving the resulting model using TFServing.
Deploying and using a web app that sends prediction requests to the model.
Requirements
You must be running Kubeflow 1.0 on Kubernetes Engine (GKE) with Cloud Identity-Aware Proxy (Cloud IAP). See the guide to deploying Kubeflow on GCP.
Run this notebook within your Kubeflow cluster. See the guide to setting up your Kubeflow notebooks.
Prepare model
There is a delta between existing distributed MNIST examples and what's needed to run well as a TFJob.
Basically, you must:
Add options in order to make the model configurable.
Use tf.estimator.train_and_evaluate to enable model exporting and serving.
Define serving signatures for model serving.
This tutorial provides a Python program that's already prepared for you: model.py.
Verify that you have a Google Cloud Platform (GCP) account
The cell below checks that this notebook was spawned with credentials to access GCP.
End of explanation
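The "add options" step amounts to exposing the training knobs as command-line flags, the same --tf-* flags the TFJob spec later passes to model.py. A minimal sketch (flag names taken from this notebook; the defaults are illustrative, not model.py's actual defaults):

```python
import argparse

def parse_args(argv=None):
    # Mirror the flags that the TFJob containers pass to /opt/model.py.
    parser = argparse.ArgumentParser(description='MNIST training options')
    parser.add_argument('--tf-model-dir', type=str, default='/tmp/model')
    parser.add_argument('--tf-export-dir', type=str, default='/tmp/export')
    parser.add_argument('--tf-train-steps', type=int, default=200)
    parser.add_argument('--tf-batch-size', type=int, default=100)
    parser.add_argument('--tf-learning-rate', type=float, default=0.01)
    return parser.parse_args(argv)

args = parse_args(['--tf-train-steps', '500', '--tf-batch-size', '64'])
print(args.tf_train_steps, args.tf_batch_size)  # 500 64
```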
import notebook_setup
reload(notebook_setup)
notebook_setup.notebook_setup()
Explanation: Install the required libraries
Run the next cell to import the libraries required to train this model.
End of explanation
import k8s_util
# Force a reload of Kubeflow. Since Kubeflow is a multi namespace module,
# doing the reload in notebook_setup may not be sufficient.
import kubeflow
reload(kubeflow)
from kubernetes import client as k8s_client
from kubernetes import config as k8s_config
from kubeflow.tfjob.api import tf_job_client as tf_job_client_module
from IPython.core.display import display, HTML
import yaml
Explanation: Wait for the message Configure docker credentials before moving on to the next cell.
End of explanation
from kubernetes import client as k8s_client
from kubernetes.client import rest as k8s_rest
from kubeflow import fairing
from kubeflow.fairing import utils as fairing_utils
from kubeflow.fairing.builders import append
from kubeflow.fairing.deployers import job
from kubeflow.fairing.preprocessors import base as base_preprocessor
# Setting up Google Container Registry (GCR) for storing output containers.
# You can use any Docker container registry instead of GCR.
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job'.format(GCP_PROJECT)
namespace = fairing_utils.get_current_k8s_namespace()
logging.info(f"Running in project {GCP_PROJECT}")
logging.info(f"Running in namespace {namespace}")
logging.info(f"Using Docker registry {DOCKER_REGISTRY}")
Explanation: Configure a Docker registry for Kubeflow Fairing
In order to build Docker images from your notebook, you need a Docker registry to store the images.
Below you set some variables specifying a Container Registry.
Kubeflow Fairing provides a utility function to guess the name of your GCP project.
End of explanation
# TODO(https://github.com/kubeflow/fairing/issues/426): We should get rid of this once the default
# Kaniko image is updated to a newer image than 0.7.0.
from kubeflow.fairing import constants
constants.constants.KANIKO_IMAGE = "gcr.io/kaniko-project/executor:v0.14.0"
from kubeflow.fairing.builders import cluster
# output_map is a map of extra files to add to the notebook.
# It is a map from source location to the location inside the context.
output_map = {
"Dockerfile.model": "Dockerfile",
"model.py": "model.py"
}
preprocessor = base_preprocessor.BasePreProcessor(
command=["python"], # The base class will set this.
input_files=[],
path_prefix="/app", # irrelevant since we aren't preprocessing any files
output_map=output_map)
preprocessor.preprocess()
Explanation: Use Kubeflow Fairing to build the Docker image
This notebook uses Kubeflow Fairing's kaniko builder to build a Docker image that includes all your dependencies.
* You use kaniko because you want to be able to run pip to install dependencies.
* Kaniko gives you the flexibility to build images from Dockerfiles.
End of explanation
# Use a Tensorflow image as the base image
# We use a custom Dockerfile
cluster_builder = cluster.cluster.ClusterBuilder(registry=DOCKER_REGISTRY,
base_image="", # base_image is set in the Dockerfile
preprocessor=preprocessor,
image_name="mnist",
dockerfile_path="Dockerfile",
pod_spec_mutators=[fairing.cloud.gcp.add_gcp_credentials_if_exists],
context_source=cluster.gcs_context.GCSContextSource())
cluster_builder.build()
logging.info(f"Built image {cluster_builder.image_tag}")
Explanation: Run the next cell and wait until you see a message like Built image gcr.io/<your-project>/fairing-job/mnist:<1234567>.
End of explanation
from google.cloud import storage
bucket = f"{GCP_PROJECT}-mnist"
client = storage.Client()
b = storage.Bucket(client=client, name=bucket)
if not b.exists():
logging.info(f"Creating bucket {bucket}")
b.create()
else:
logging.info(f"Bucket {bucket} already exists")
Explanation: Create a Cloud Storage bucket
Run the next cell to create a Google Cloud Storage (GCS) bucket to store your models and other results.
Since this notebook is running in Python, the cell uses the GCS Python client libraries, but you can use the gsutil command line instead.
End of explanation
train_name = f"mnist-train-{uuid.uuid4().hex[:4]}"
num_ps = 1
num_workers = 2
model_dir = f"gs://{bucket}/mnist"
export_path = f"gs://{bucket}/mnist/export"
train_steps = 200
batch_size = 100
learning_rate = .01
image = cluster_builder.image_tag
train_spec = f"""apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: {train_name}
spec:
  tfReplicaSpecs:
    Ps:
      replicas: {num_ps}
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
        spec:
          serviceAccount: default-editor
          containers:
          - name: tensorflow
            command:
            - python
            - /opt/model.py
            - --tf-model-dir={model_dir}
            - --tf-export-dir={export_path}
            - --tf-train-steps={train_steps}
            - --tf-batch-size={batch_size}
            - --tf-learning-rate={learning_rate}
            image: {image}
            workingDir: /opt
          restartPolicy: OnFailure
    Chief:
      replicas: 1
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
        spec:
          serviceAccount: default-editor
          containers:
          - name: tensorflow
            command:
            - python
            - /opt/model.py
            - --tf-model-dir={model_dir}
            - --tf-export-dir={export_path}
            - --tf-train-steps={train_steps}
            - --tf-batch-size={batch_size}
            - --tf-learning-rate={learning_rate}
            image: {image}
            workingDir: /opt
          restartPolicy: OnFailure
    Worker:
      replicas: 1
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
        spec:
          serviceAccount: default-editor
          containers:
          - name: tensorflow
            command:
            - python
            - /opt/model.py
            - --tf-model-dir={model_dir}
            - --tf-export-dir={export_path}
            - --tf-train-steps={train_steps}
            - --tf-batch-size={batch_size}
            - --tf-learning-rate={learning_rate}
            image: {image}
            workingDir: /opt
          restartPolicy: OnFailure
"""
Explanation: Distributed training
To train the model, this example uses TFJob to run a distributed training job. Run the next cell to set up the YAML specification for the job:
End of explanation
tf_job_client = tf_job_client_module.TFJobClient()
tf_job_body = yaml.safe_load(train_spec)
tf_job = tf_job_client.create(tf_job_body, namespace=namespace)
logging.info(f"Created job {namespace}.{train_name}")
Explanation: Create the training job
To submit the training job, you could write the spec to a YAML file and then do kubectl apply -f {FILE}.
However, because you are running in a Jupyter notebook, you use the TFJob client.
* You run the TFJob in a namespace created by a Kubeflow profile.
* The namespace is the same as the namespace where you are running the notebook.
* Creating a profile ensures that the namespace is provisioned with service accounts and other resources needed for Kubeflow.
End of explanation
!kubectl get tfjobs -o yaml {train_name}
Explanation: Check the job using kubectl
Above you used the Python SDK for TFJob to check the status. You can also use kubectl to get the status of your job.
The job conditions will tell you whether the job is running, succeeded or failed.
End of explanation
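Reading the conditions out of the returned status is straightforward; a sketch assuming the typical shape of `kubectl get tfjobs -o yaml` output (the status dict below is fabricated for illustration):

```python
def job_succeeded(status):
    # A TFJob is done when a condition of type Succeeded (or Failed)
    # has status "True"; the dict shape is assumed from kubectl output.
    for cond in status.get('conditions', []):
        if cond.get('type') == 'Succeeded' and cond.get('status') == 'True':
            return True
    return False

status = {'conditions': [
    {'type': 'Created', 'status': 'True'},
    {'type': 'Running', 'status': 'False'},
    {'type': 'Succeeded', 'status': 'True'},
]}
print(job_succeeded(status))  # True
```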
from urllib.parse import urlencode

for replica in ["chief", "worker", "ps"]:
    logs_filter = f"""resource.type="k8s_container"
labels."k8s-pod/tf-job-name" = "{train_name}"
labels."k8s-pod/tf-replica-type" = "{replica}"
resource.labels.container_name="tensorflow"
"""
    new_params = {'project': GCP_PROJECT,
                  # Logs for last 7 days
                  'interval': 'P7D',
                  'advancedFilter': logs_filter}
    query = urlencode(new_params)
    url = "https://console.cloud.google.com/logs/viewer?" + query
    display(HTML(f"Link to: <a href='{url}'>{replica} logs</a>"))
Explanation: Get the training logs
There are two ways to get the logs for the training job:
Using kubectl to fetch the pod logs. These logs are ephemeral; they will be unavailable when the pod is garbage collected to free up resources.
Using Stackdriver.
Kubernetes logs are automatically available in Stackdriver.
You can use labels to locate the logs for a specific pod.
In the cell below, you use labels for the training job name and process type to locate the logs for a specific pod.
Run the cell below to get a link to Stackdriver for your logs:
End of explanation
tb_name = "mnist-tensorboard"
tb_deploy = f"""apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mnist-tensorboard
  name: {tb_name}
  namespace: {namespace}
spec:
  selector:
    matchLabels:
      app: mnist-tensorboard
  template:
    metadata:
      labels:
        app: mnist-tensorboard
        version: v1
    spec:
      serviceAccount: default-editor
      containers:
      - command:
        - /usr/local/bin/tensorboard
        - --logdir={model_dir}
        - --port=80
        image: tensorflow/tensorflow:1.15.2-py3
        name: tensorboard
        ports:
        - containerPort: 80
"""
tb_service = f"""apiVersion: v1
kind: Service
metadata:
  labels:
    app: mnist-tensorboard
  name: {tb_name}
  namespace: {namespace}
spec:
  ports:
  - name: http-tb
    port: 80
    targetPort: 80
  selector:
    app: mnist-tensorboard
  type: ClusterIP
"""
tb_virtual_service = f"""apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {tb_name}
  namespace: {namespace}
spec:
  gateways:
  - kubeflow/kubeflow-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /mnist/{namespace}/tensorboard/
    rewrite:
      uri: /
    route:
    - destination:
        host: {tb_name}.{namespace}.svc.cluster.local
        port:
          number: 80
    timeout: 300s
"""
tb_specs = [tb_deploy, tb_service, tb_virtual_service]
k8s_util.apply_k8s_specs(tb_specs, k8s_util.K8S_CREATE_OR_REPLACE)
Explanation: Deploy TensorBoard
The next step is to create a Kubernetes deployment to run TensorBoard.
TensorBoard will be accessible behind the Kubeflow IAP endpoint.
End of explanation
endpoint = None
if endpoint:
logging.info(f"endpoint set to {endpoint}")
else:
    logging.info("Warning: you must set the endpoint variable in order to print out the URLs where you can access your web apps.")
Explanation: Set a variable defining your endpoint
Set endpoint to https://your-domain (with no slash at the end). Your domain typically has the following pattern: <your-kubeflow-deployment-name>.endpoints.<your-gcp-project>.cloud.goog. You can see your domain in the URL that you're using to access this notebook.
End of explanation
if endpoint:
vs = yaml.safe_load(tb_virtual_service)
path= vs["spec"]["http"][0]["match"][0]["uri"]["prefix"]
tb_endpoint = endpoint + path
display(HTML(f"TensorBoard UI is at <a href='{tb_endpoint}'>{tb_endpoint}</a>"))
Explanation: Access the TensorBoard UI
Run the cell below to find the endpoint for the TensorBoard UI.
End of explanation
tf_job = tf_job_client.wait_for_condition(train_name, expected_condition=["Succeeded", "Failed"], namespace=namespace)
if tf_job_client.is_job_succeeded(train_name, namespace):
logging.info(f"TFJob {namespace}.{train_name} succeeded")
else:
raise ValueError(f"TFJob {namespace}.{train_name} failed")
Explanation: Wait for the training job to finish
You can use the TFJob client to wait for the job to finish:
End of explanation
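Conceptually, wait_for_condition is a poll loop with a timeout. A generic sketch (not the actual TFJobClient implementation):

```python
import time

def wait_for(check, timeout_s=60.0, poll_s=1.0, clock=time.monotonic, sleep=time.sleep):
    # Poll check() until it returns a truthy value or the timeout expires.
    deadline = clock() + timeout_s
    while clock() < deadline:
        result = check()
        if result:
            return result
        sleep(poll_s)
    raise TimeoutError('condition not met within %.1fs' % timeout_s)

# Simulated job that "succeeds" on the third poll.
polls = iter([None, None, 'Succeeded'])
print(wait_for(lambda: next(polls), timeout_s=5.0, poll_s=0.0))  # Succeeded
```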
deploy_name = "mnist-model"
model_base_path = export_path
# The web UI defaults to mnist-service so if you change the name, you must
# change it in the UI as well.
model_service = "mnist-service"
deploy_spec = f"""apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mnist
  name: {deploy_name}
  namespace: {namespace}
spec:
  selector:
    matchLabels:
      app: mnist-model
  template:
    metadata:
      # TODO(jlewi): Right now we disable the istio sidecar because otherwise ISTIO rbac will prevent the
      # UI from sending RPCs to the server. We should create an appropriate ISTIO rbac authorization
      # policy to allow traffic from the UI to the model server.
      # https://istio.io/docs/concepts/security/#target-selectors
      annotations:
        sidecar.istio.io/inject: "false"
      labels:
        app: mnist-model
        version: v1
    spec:
      serviceAccount: default-editor
      containers:
      - args:
        - --port=9000
        - --rest_api_port=8500
        - --model_name=mnist
        - --model_base_path={model_base_path}
        - --monitoring_config_file=/var/config/monitoring_config.txt
        command:
        - /usr/bin/tensorflow_model_server
        env:
        - name: modelBasePath
          value: {model_base_path}
        image: tensorflow/serving:1.15.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 30
          periodSeconds: 30
          tcpSocket:
            port: 9000
        name: mnist
        ports:
        - containerPort: 9000
        - containerPort: 8500
        resources:
          limits:
            cpu: "4"
            memory: 4Gi
          requests:
            cpu: "1"
            memory: 1Gi
        volumeMounts:
        - mountPath: /var/config/
          name: model-config
      volumes:
      - configMap:
          name: {deploy_name}
        name: model-config
"""
service_spec = f"""apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/path: /monitoring/prometheus/metrics
    prometheus.io/port: "8500"
    prometheus.io/scrape: "true"
  labels:
    app: mnist-model
  name: {model_service}
  namespace: {namespace}
spec:
  ports:
  - name: grpc-tf-serving
    port: 9000
    targetPort: 9000
  - name: http-tf-serving
    port: 8500
    targetPort: 8500
  selector:
    app: mnist-model
  type: ClusterIP
"""
monitoring_config = f"""kind: ConfigMap
apiVersion: v1
metadata:
  name: {deploy_name}
  namespace: {namespace}
data:
  monitoring_config.txt: |-
    prometheus_config: {{
      enable: true,
      path: "/monitoring/prometheus/metrics"
    }}
"""
model_specs = [deploy_spec, service_spec, monitoring_config]
k8s_util.apply_k8s_specs(model_specs, k8s_util.K8S_CREATE_OR_REPLACE)
Explanation: Serve the model
Now you can deploy the model using TensorFlow Serving.
You need to create the following:
* A Kubernetes deployment.
* A Kubernetes service.
* (Optional) A configmap containing the Prometheus monitoring configuration.
End of explanation
ui_name = "mnist-ui"
ui_deploy = f"""apiVersion: apps/v1
kind: Deployment
metadata:
  name: {ui_name}
  namespace: {namespace}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mnist-web-ui
  template:
    metadata:
      labels:
        app: mnist-web-ui
    spec:
      containers:
      - image: gcr.io/kubeflow-examples/mnist/web-ui:v20190112-v0.2-142-g3b38225
        name: web-ui
        ports:
        - containerPort: 5000
      serviceAccount: default-editor
"""
ui_service = f"""apiVersion: v1
kind: Service
metadata:
  annotations:
  name: {ui_name}
  namespace: {namespace}
spec:
  ports:
  - name: http-mnist-ui
    port: 80
    targetPort: 5000
  selector:
    app: mnist-web-ui
  type: ClusterIP
"""
ui_virtual_service = f"""apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {ui_name}
  namespace: {namespace}
spec:
  gateways:
  - kubeflow/kubeflow-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /mnist/{namespace}/ui/
    rewrite:
      uri: /
    route:
    - destination:
        host: {ui_name}.{namespace}.svc.cluster.local
        port:
          number: 80
    timeout: 300s
"""
ui_specs = [ui_deploy, ui_service, ui_virtual_service]
k8s_util.apply_k8s_specs(ui_specs, k8s_util.K8S_CREATE_OR_REPLACE)
Explanation: Deploy the UI for the MNIST web app
Deploy the UI to visualize the MNIST prediction results.
This example uses a prebuilt and public Docker image for the UI.
End of explanation
if endpoint:
vs = yaml.safe_load(ui_virtual_service)
path= vs["spec"]["http"][0]["match"][0]["uri"]["prefix"]
ui_endpoint = endpoint + path
display(HTML(f"mnist UI is at <a href='{ui_endpoint}'>{ui_endpoint}</a>"))
Explanation: Access the MNIST web UI
A reverse proxy route is automatically added to the Kubeflow IAP endpoint. The MNIST endpoint is:
https://${KUBEFLOW_ENDPOINT}/mnist/${NAMESPACE}/ui/
where NAMESPACE is the namespace where you're running the Jupyter notebook.
End of explanation |
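Since the proxy path follows a fixed pattern, the full UI URL can be assembled directly; a sketch with placeholder values:

```python
def mnist_ui_url(endpoint, namespace):
    # endpoint is e.g. https://<deployment>.endpoints.<project>.cloud.goog
    return '{}/mnist/{}/ui/'.format(endpoint.rstrip('/'), namespace)

print(mnist_ui_url('https://kf.endpoints.my-project.cloud.goog', 'kubeflow-user'))
```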
6,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compare the Different Corpora
Corpora
Step1: Compare the fraction of emotional sentences per text
For the different corpora.
An emotional sentence is a sentence for which at least one HEEM label is predicted.
Step2: Compare the number of lines per text
For the different corpora.
Step3: Compare the average number of labels per sentence
For the different corpora
Step4: Compare the number of emotional sentences per text
For the different corpora. | Python Code:
# Render our plots inline
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
pd.set_option('display.mpl_style', 'default') # Make the graphs a bit prettier
plt.rcParams['figure.figsize'] = (16, 6)
# adjust to your local directories
embem_data_dir = '/home/jvdzwaan/data/embem/'
output_dir = '/home/jvdzwaan/data/tmp/'
# load data
def load_data(corpus, column_names, corpus_metadata, label_counts, body_parts, emotion_bodypart_pairs):
c = pd.read_csv(corpus, header=None, sep='\t', index_col=0, names=column_names)
md = pd.read_csv(corpus_metadata, index_col=0)
l = pd.read_csv(label_counts, index_col=0)
bp = pd.read_csv(body_parts, index_col=0)
ebp = pd.read_csv(emotion_bodypart_pairs, index_col=0)
return pd.concat([c, md, l, bp, ebp], axis=1)
corpus_big = load_data(os.path.join(embem_data_dir, 'corpus/corpus_big.csv'), ['id', 'year', 'genre', 'title', 'authors'],
os.path.join(embem_data_dir, 'dict/corpus_big_additional_metadata.csv'),
os.path.join(embem_data_dir, 'dict/corpus_big_label_counts.csv'),
os.path.join(embem_data_dir, 'dict/corpus_big_heem_expanded_body_parts.csv'),
os.path.join(embem_data_dir, 'dict/corpus_big_emotion_bodypart_pairs.csv'))
annotation = load_data(os.path.join(embem_data_dir, 'corpus/annotation_corpus.csv'), ['id', 'year', 'genre', 'title', 'authors'],
os.path.join(embem_data_dir, 'dict/annotation_additional_metadata.csv'),
os.path.join(embem_data_dir, 'dict/annotation_label_counts.csv'),
os.path.join(embem_data_dir, 'dict/annotation_heem_expanded_body_parts.csv'),
os.path.join(embem_data_dir, 'dict/annotation_emotion_bodypart_pairs.csv'))
ceneton = load_data(os.path.join(embem_data_dir, 'corpus/ceneton.csv'), ['id', 'year', 'genre', 'title', 'authors'],
os.path.join(embem_data_dir, 'dict/ceneton_additional_metadata.csv'),
os.path.join(embem_data_dir, 'dict/ceneton_label_counts.csv'),
os.path.join(embem_data_dir, 'dict/ceneton_heem_expanded_body_parts.csv'),
os.path.join(embem_data_dir, 'dict/ceneton_emotion_bodypart_pairs.csv'))
edbo = load_data(os.path.join(embem_data_dir, 'corpus/edbo.csv'), ['id', 'year', 'genre', 'title+author'],
os.path.join(embem_data_dir, 'dict/edbo_additional_metadata.csv'),
os.path.join(embem_data_dir, 'dict/edbo_label_counts.csv'),
os.path.join(embem_data_dir, 'dict/edbo_heem_expanded_body_parts.csv'),
os.path.join(embem_data_dir, 'dict/edbo_emotion_bodypart_pairs.csv'))
complete = pd.concat([annotation, corpus_big, ceneton, edbo]).fillna(0)
combined = pd.concat([corpus_big, ceneton, edbo]).fillna(0)
# Basic statistics
print '# texts'
print 'Corpus big:', len(corpus_big)
print 'Annotation:', len(annotation)
print 'Ceneton:', len(ceneton)
print 'EDBO:', len(edbo)
print 'Combined:', len(combined)
print 'Complete:', len(complete)
#combined
# number of texts per genre and period
print 'Number of texts per genre'
genres = complete.groupby('genre')
genres.size().plot(kind='bar')
print genres.size()
print 'Number of texts per period'
periods = complete.groupby('period')
periods.size().reindex(['renaissance', 'classicism', 'enlightenment']).plot(kind='bar')
print periods.size().reindex(['renaissance', 'classicism', 'enlightenment'])
print 'Number of texts per period'
df = pd.DataFrame({'count' : complete.groupby(['period', 'genre']).size()}).reset_index()
df = df.pivot(index='period', columns='genre', values='count')
df = df.fillna(0)
df = df.reindex(['renaissance', 'classicism', 'enlightenment'])
print df
df.plot(kind='bar')
print 'Number of texts per year'
years = complete.groupby('year')
#print years.size()
print 'Number of years for which 0 texts are available:', np.sum(years.size() == 0)
years.size().plot(marker='o')
print 'Number of texts per genre per year'
year2genre = pd.DataFrame({'count' : complete.groupby(['year', 'genre']).size()}).reset_index()
year2genre = year2genre.pivot(index='year', columns='genre', values='count')
year2genre = year2genre.fillna(0)
#print year2genre
year2genre.plot()
Explanation: Compare the Different Corpora
Corpora:
* Corpus Big
* Annotation
* Ceneton
* EDBO
End of explanation
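The per-genre and per-period tallies above are plain group counts; the same bookkeeping can be sketched without pandas using collections.Counter (the toy metadata below is illustrative, not the real corpus):

```python
from collections import Counter

texts = [
    {'genre': 'tragedy', 'period': 'renaissance'},
    {'genre': 'comedy', 'period': 'renaissance'},
    {'genre': 'tragedy', 'period': 'classicism'},
    {'genre': 'tragedy', 'period': 'classicism'},
]

# Equivalent of complete.groupby('genre').size()
by_genre = Counter(t['genre'] for t in texts)
# Equivalent of complete.groupby(['period', 'genre']).size()
by_period_genre = Counter((t['period'], t['genre']) for t in texts)

print(by_genre)                                    # Counter({'tragedy': 3, 'comedy': 1})
print(by_period_genre[('classicism', 'tragedy')])  # 2
```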
complete.loc[:, 'frac_emotional'] = complete.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1)
combined.loc[:, 'frac_emotional'] = combined.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1)
annotation.loc[:, 'frac_emotional'] = annotation.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1)
corpus_big.loc[:, 'frac_emotional'] = corpus_big.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1)
ceneton.loc[:, 'frac_emotional'] = ceneton.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1)
edbo.loc[:, 'frac_emotional'] = edbo.apply(lambda row: (row['#emotional']+0.0)/row['#lines'], axis=1)
data = [complete['frac_emotional'], combined['frac_emotional'], annotation['frac_emotional'], corpus_big['frac_emotional'], ceneton['frac_emotional'], edbo['frac_emotional']]
plt.boxplot(data)
plt.xticks([1,2,3,4,5,6],['Complete', 'Combined', 'Annotation', 'Corpus big','Ceneton','EDBO'])
plt.title('Fraction of emotional sentences in the different datasets');
from scipy import stats
import statsmodels.api as sm
f_val, p_val = stats.f_oneway(annotation['frac_emotional'], corpus_big['frac_emotional'], ceneton['frac_emotional'], edbo['frac_emotional'])
print "P value ANOVA: {:10.10f}\n".format(p_val)
annotation.loc[:, 'corpus'] = annotation.apply(lambda row: 'annotation', axis=1)
corpus_big.loc[:, 'corpus'] = corpus_big.apply(lambda row: 'corpus_big', axis=1)
ceneton.loc[:, 'corpus'] = ceneton.apply(lambda row: 'ceneton', axis=1)
edbo.loc[:, 'corpus'] = edbo.apply(lambda row: 'edbo', axis=1)
df = pd.concat([annotation, corpus_big, ceneton, edbo])
result = sm.stats.multicomp.pairwise_tukeyhsd(df.frac_emotional, df.corpus)
print(result.summary())
Explanation: Compare the fraction of emotional sentences per text
For the different corpora.
An emotional sentence is a sentence for which at least one HEEM label is predicted.
End of explanation
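The per-text statistic computed above reduces to a simple ratio; a sketch whose argument names mirror the DataFrame columns:

```python
def frac_emotional(n_emotional, n_lines):
    # Fraction of sentences with at least one predicted HEEM label.
    return (n_emotional + 0.0) / n_lines

print(frac_emotional(30, 120))  # 0.25
```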
data = [complete['#lines'], combined['#lines'], annotation['#lines'], corpus_big['#lines'], ceneton['#lines'], edbo['#lines']]
plt.boxplot(data)
plt.xticks([1,2,3,4,5,6],['Complete', 'Combined', 'Annotation', 'Corpus big','Ceneton','EDBO'])
plt.title('The number of lines per text in different datasets');
f_val, p_val = stats.f_oneway(annotation['#lines'], corpus_big['#lines'], ceneton['#lines'], edbo['#lines'])
print "P value ANOVA: {:10.10f}\n".format(p_val)
result = sm.stats.multicomp.pairwise_tukeyhsd(df.get('#lines'), df.corpus)
print(result.summary())
Explanation: Compare the number of lines per text
For the different corpora.
End of explanation
data = [complete['avg_labels'], combined['avg_labels'], annotation['avg_labels'], corpus_big['avg_labels'], ceneton['avg_labels'], edbo['avg_labels']]
plt.boxplot(data)
plt.xticks([1,2,3,4,5,6],['Complete', 'Combined', 'Annotation', 'Corpus big','Ceneton','EDBO']);
f_val, p_val = stats.f_oneway(complete['avg_labels'], combined['avg_labels'], annotation['avg_labels'], corpus_big['avg_labels'], ceneton['avg_labels'], edbo['avg_labels'])
print "P value ANOVA: {:10.10f}\n".format(p_val)
result = sm.stats.multicomp.pairwise_tukeyhsd(df.get('avg_labels'), df.corpus)
print(result.summary())
Explanation: Compare the average number of labels per sentence
For the different corpora
End of explanation
data = [complete['#emotional'], combined['#emotional'], annotation['#emotional'], corpus_big['#emotional'], ceneton['#emotional'], edbo['#emotional']]
plt.boxplot(data)
plt.xticks([1,2,3,4,5,6],['Complete', 'Combined', 'Annotation', 'Corpus big','Ceneton','EDBO']);
f_val, p_val = stats.f_oneway(annotation['#emotional'], corpus_big['#emotional'], ceneton['#emotional'], edbo['#emotional'])
print "P value ANOVA: {:10.10f}\n".format(p_val)
result = sm.stats.multicomp.pairwise_tukeyhsd(df.get('avg_labels'), df.corpus)
print(result.summary())
# load label names
import itertools
from embem.emotools.heem_utils import heem_emotion_labels, heem_body_part_labels
ebp_labels = ['{}_{}'.format(e, bp) for e, bp in list(itertools.product(heem_emotion_labels, heem_body_part_labels))]
def count_pairs(row):
#print row['Achterdocht_Arms']
#print row.index
return np.sum([row[p] for p in ebp_labels if p in row.index])
complete.loc[:, '#pairs'] = complete.apply(count_pairs, axis=1)
combined.loc[:, '#pairs'] = combined.apply(count_pairs, axis=1)
# Save datasets to file (for easy loading)
annotation.to_csv(os.path.join(output_dir, 'annotation.csv'))
corpus_big.to_csv(os.path.join(output_dir, 'corpus_big.csv'))
ceneton.to_csv(os.path.join(output_dir, 'ceneton.csv'))
edbo.to_csv(os.path.join(output_dir, 'edbo.csv'))
combined.to_csv(os.path.join(output_dir, 'combined.csv'))
complete.to_csv(os.path.join(output_dir, 'complete.csv'))
Explanation: Compare the number of emotional sentences per text
For the different corpora.
End of explanation |
6,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feedback on the homework projects
Can this code be written more simply while still doing exactly the same thing?
Step1: Yes, it can
Step2: And what about this one?
Step3: That one too
Step4: The smallest number
A verbose solution from the homework projects
Step5: Better, but still not optimal
Step6: A shorter and less demanding solution
Step7: N-gons in a row
A verbose solution from the homework projects
Step8: A shorter solution using a loop inside another loop
Step9: General remarks and advice
Always put imports on the first lines of the program, and only once per program.
Try to avoid star imports.
Don't import anything you won't use in the program.
The code doesn't have to be elegant as long as it works (at least for a start).
Comments are better and easier to write above or below the code instead of next to it, especially when the commented part of the code spans several lines.
Praise for using functions.
When you hand in a file with functions, you also need to call them within the file, otherwise nothing happens when it is run.
Martin thanks everyone who sped up the turtle.
Thanks for the PyBeer
for radek in range(4):
radek += 1
for value in range(radek):
print('X', end=' ')
print('')
Explanation: Feedback on the homework projects
Can this code be written more simply while still doing exactly the same thing?
End of explanation
for radek in range(1, 5):
print('X ' * radek)
Explanation: Yes, it can :-)
End of explanation
promenna = "X"
for j in range(5):
for i in promenna:
print(i, i, i, i, i)
Explanation: And what about this one?
End of explanation
for j in range(5):
print('X ' * 5)
Explanation: That one too
End of explanation
prve = input('Zadej cislo: ')
druhe = input('Zadej cislo: ')
tretie = input('Zadej cislo: ')
stvrte = input('Zadej cislo: ')
piate = input('Zadej cislo: ')
if prve<druhe and prve<tretie and prve<stvrte and prve<piate:
print(prve)
if druhe<prve and druhe<tretie and druhe<stvrte and druhe<piate:
print(druhe)
if tretie<prve and tretie<druhe and tretie<stvrte and tretie<piate:
print(tretie)
if stvrte<prve and stvrte<druhe and stvrte<tretie and stvrte<piate:
print(stvrte)
if piate<prve and piate<druhe and piate<tretie and piate<stvrte:
print(piate)
Explanation: The smallest number
A verbose solution from the homework projects
End of explanation
a = float(input('Prvni cislo: '))
b = float(input('Druhe cislo: '))
c = float(input('Treti cislo: '))
d = float(input('Ctrvte cislo: '))
e = float(input('Pate cislo: '))
m = a
for cislo in a, b, c, d, e:
if cislo < m:
m=cislo
print(m)
Explanation: Better, but still not optimal
End of explanation
minimum = None
for x in range(5):
    cislo = int(input('Zadej cislo: '))
    if minimum is None or cislo < minimum:
        minimum = cislo
print('Nejmensi zadane cislo je', minimum)
Explanation: A shorter and less demanding solution
End of explanation
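An even shorter variant (my addition, not part of the lesson): collect the inputs into a list and let the built-in min() do the comparison loop.

```python
def nejmensi(cisla):
    # The built-in min() replaces the manual comparison loop entirely.
    return min(cisla)

# With user input it would be: min(int(input('Zadej cislo: ')) for _ in range(5))
print(nejmensi([7, 3, 9, 1, 5]))  # 1
```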
from turtle import forward, shape, left, right, exitonclick, penup, pendown, back
# pentagon:
vnitrniuhel = 180*(1-(2/5))
vnejsiuhel= 180-vnitrniuhel
for x in range (5):
forward(200/5)
left(vnejsiuhel)
penup()
forward(100)
pendown()
# hexagon:
vnitrniuhel = 180*(1-(2/6))
vnejsiuhel= 180-vnitrniuhel
for x in range (6):
forward(200/6)
left(vnejsiuhel)
penup()
forward(100)
pendown()
# heptagon:
vnitrniuhel = 180*(1-(2/7))
vnejsiuhel= 180-vnitrniuhel
for x in range (7):
forward(200/7)
left(vnejsiuhel)
penup()
forward(100)
pendown()
# octagon:
vnitrniuhel = 180*(1-(2/8))
vnejsiuhel= 180-vnitrniuhel
for x in range (8):
forward(200/8)
left(vnejsiuhel)
exitonclick()
Explanation: N-gons in a row
A verbose solution from the homework projects
End of explanation
from turtle import forward, shape, left, right, exitonclick, penup, pendown, back
for n in range(5,9):
vnitrniuhel = 180*(1-(2/n))
vnejsiuhel= 180-vnitrniuhel
for x in range (n):
forward(200/n)
left(vnejsiuhel)
penup()
forward(100)
pendown()
exitonclick()
Explanation: A shorter solution using a loop inside another loop
End of explanation
# ##### ## #######
# ############################
# #############################
# #################################
# | |___________
# | ( ) ( ) ( ) |________ /
# | ) ( ) ( ) ( | / /
# | ( ) ( ) ( ) | / /
# | ) ( ) ( ) ( | / /
# | ( ) ( ) ( ) |____/ /
# | ) ( ) ( ) ( |_____/
# | (___) (___) (___) |
# | |
# |_____________________________|
Explanation: General remarks and advice
Always put imports on the first lines of the program, and only once per program.
Try to avoid star imports.
Don't import anything you won't use in the program.
The code doesn't have to be elegant as long as it works (at least for a start).
Comments are better and easier to write above or below the code instead of next to it, especially when the commented part of the code spans several lines.
Praise for using functions.
When you hand in a file with functions, you also need to call them within the file, otherwise nothing happens when it is run.
Martin thanks everyone who sped up the turtle.
Thanks for the PyBeer
End of explanation |
6,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Sheet Music Representations
Music can be represented in many different ways. The printed, visual form of a musical work is called a score or sheet music. For example, here is a sheet music excerpt from Mozart Piano Sonata No. 11 K. 331
Step1: Sheet music consists of notes. A note has several properties including pitch, timbre, loudness, and duration.
Pitch (Wikipedia) is a perceptual property that indicates how "high" or "low" a note sounds. Pitch is closely related to the fundamental frequency sounded by the note, although fundamental frequency is a physical property of the sound wave.
An octave (Wikipedia) is an interval between two notes where the higher note is twice the fundamental frequency of the lower note. For example, an A at 440 Hz and an A at 880 Hz are separated by one octave. Here are two Cs separated by one octave
Step2: A pitch class (Wikipedia) is the set of all notes that are an integer number of octaves apart. For example, the set of all Cs, {..., C1, C2, ...} is one pitch class, and the set of all Ds, {..., D1, D2, ...} is another pitch class. Here is the pitch class for C | Python Code:
ipd.SVG("https://upload.wikimedia.org/wikipedia/commons/2/27/MozartExcerptK331.svg")
ipd.YouTubeVideo('dP9KWQ8hAYk')
Explanation: ← Back to Index
Sheet Music Representations
Music can be represented in many different ways. The printed, visual form of a musical work is called a score or sheet music. For example, here is a sheet music excerpt from Mozart Piano Sonata No. 11 K. 331:
End of explanation
ipd.Image("https://upload.wikimedia.org/wikipedia/commons/a/a5/Perfect_octave_on_C.png")
Explanation: Sheet music consists of notes. A note has several properties including pitch, timbre, loudness, and duration.
Pitch (Wikipedia is a perceptual property that indicates how "high" or "low" a note sounds. Pitch is closely related to the fundamental frequency sounded by the note, although fundamental frequency is a physical property of the sound wave.
An octave (Wikipedia) is an interval between two notes where the higher note is twice the fundamental frequency of the lower note. For example, an A at 440 Hz and an A at 880 Hz are separated by one octave. Here are two Cs separated by one octave:
End of explanation
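The octave-doubling relationship can be checked numerically; a small sketch (my example, not from the notebook) assuming equal temperament with A4 = 440 Hz:

```python
def frequency(semitones_above_a4):
    # Equal temperament: each semitone multiplies the frequency by 2**(1/12),
    # so an octave (12 semitones) doubles it.
    return 440.0 * 2 ** (semitones_above_a4 / 12.0)

print(frequency(12))   # 880.0 -- one octave above A4
print(frequency(-12))  # 220.0 -- one octave below A4
```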
ipd.Image("https://upload.wikimedia.org/wikipedia/commons/thumb/9/98/Pitch_class_on_C.png/187px-Pitch_class_on_C.png")
Explanation: A pitch class (Wikipedia) is the set of all notes that are an integer number of octaves apart. For example, the set of all Cs, {..., C1, C2, ...} is one pitch class, and the set of all Ds, {..., D1, D2, ...} is another pitch class. Here is the pitch class for C:
End of explanation |
6,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SuchLinkedTrees
In the last article, we saw how to use SuchTree to probe the topology of
very large trees. In this article, we're going to look at the other component
of the package, SuchLinkedTrees.
If you are interested in studying how two groups of organisms interact (or,
rather, have interacted over evolutionary time), you will find yourself with
two trees of distinct groups of taxa that are linked by a matrix of
interaction observations. This is sometimes called a 'dueling trees' problem.
If the trees happen to have the same number of taxa, and the interaction
matrix happens to be a unit matrix, then you can compute the distance matrix
for each of your trees and use the
Mantel test to compare them.
However, this is a pretty special case. Hommola et al. describe a method that
extends the Mantel test in this paper here :
Step1: To get started, we need to initialize two trees and a table of observations
linking the taxa on the two trees.
Step2: To create a SuchLinkedTrees instance, you need two SuchTrees and a pandas
DataFrame, where the taxa in the first tree matches the DataFrame index, and the
taxa in the second tree matches the DataFrame columns.
This is a pretty large dataset, so it takes a bit of time to load.
Step3: To test for cospeciation, Hommola's method does the following
Step4: Not too bad. The algorithm went through ten iterations, placing ten blocks of
4096 pairs into each bucket before it converged on our stopping condition after
testing 2,621,440 pairs (about 0.2% of the possible pairs). Note that the
$p$-value we see here is not Hommola's $p$-value -- it doesn't include any
information about the topologies of the trees.
Let's see what the distribution of sampled distances looks like.
Step5: Well... that's... something? In this case, we are looking at the entire microbiome
of a complex of 14 host species that's about 10-20 million years old. Because
bacteria and archaea are older than that, we don't expect to see a meaningful
pattern of coevolution at the scale of the whole microbial community.
If we're looking for coevolution, we want to examine clades within the microbial
community. This is what SuchLinkedTrees is really designed to do.
SuchLinkedTrees has two functions that allow you to examine individual clades.
subset_a() takes the node id of an internal node of the first tree (usually, the
host organisms), and masks the data within the SuchLinkedTrees instance so that
that node behaves as the root. subset_b() does the same for the second tree
(usually, the guest organisms).
Step6: The observations are also masked so that distance calculations are constrained to within that
clade. The masking operation is extremely efficient, even for very large datasets.
Step7: So, all we need to do is iterate over the internal nodes of the microbe tree
(which we can get from SuchTree's get_internal_nodes() function), subset
the guest tree to that node, and apply Hommola's algorithm to the masked
SuchLinkedTrees instance.
I'm going to put some simple constraints based on clade size. You could also
use the average or total tree depth for each clade. It takes about an hour
to finish all 103,445 clades, so let's look at a random sample of 10,000 of
them.
Step8: Let's see what we've got!
Step9: Are there any clades that are big enough to to be interesting that
show a significant correlation above 0.6?
Step10: Cool. Let's go back and look at these in more detail.
Step11: Huh. Well, that looks a lot less interesting than I hoped. This is the
problem with correlation measures -- they don't test that the data obeys
their assumptions. In this case, we're using Pearson's $r$, which
assumes that the data from the two sources is normally distributed, which
this clearly is not. If you haven't seen this before, check out
Anscombe's quartet;
the gist of his argument is that it's not a good idea to apply any statistic
without examining the data graphically.
Let's have a look at the trees so we can get a better idea of why this is
broken. Unfortunately, I don't have a great way of pulling out the subtree
for plotting yet, so this will require some help from dendropy. | Python Code:
%pylab inline
%config InlineBackend.figure_format='retina'
from SuchTree import SuchTree, SuchLinkedTrees
import seaborn
import pandas
from scipy.cluster.hierarchy import ClusterWarning
from scipy.stats import pearsonr
import warnings
warnings.simplefilter( 'ignore', UserWarning )
Explanation: SuchLinkedTrees
In the last article, we saw how to use SuchTree to probe the topology of
very large trees. In this article, we're going to look at the other component
of the package, SuchLinkedTrees.
If you are interested in studying how two groups of organisms interact (or,
rather, have interacted over evolutionary time), you will find yourself with
two trees of distinct groups of taxa that are linked by a matrix of
interaction observations. This is sometimes called a 'dueling trees' problem.
If the trees happen to have the same number of taxa, and the interaction
matrix happens to be a unit matrix, then you can compute the distance matrix
for each of your trees and use the
Mantel test to compare them.
However, this is a pretty special case. Hommola et al. describe a method that
extends the Mantel test in this paper here :
A Permutation Test of Host–Parasite Cospeciation. Molecular Biology and Evolution, Vol. 26, No. 7. (01 July 2009), pp. 1457-1468, by Kerstin Hommola, Judith E. Smith, Yang Qiu, Walter R. Gilks
This is implemented in scikit-bio as hommola_cospeciation.
Unfortunately, the version in scikit-bio does not scale to very large trees,
and does not expose the computed distances for analysis. This is where
SuchLinkedTrees can help.
End of explanation
T1 = SuchTree( 'data/bigtrees/host.tree' )
T2 = SuchTree( 'data/bigtrees/guest.tree')
LK = pandas.read_csv( 'data/bigtrees/links.csv', index_col=0 )
print( 'host tree taxa : %d' % T1.n_leafs )
print( 'guest tree taxa : %d' % T2.n_leafs )
print( 'observation matrix : %d x %d' % LK.shape )
Explanation: To get started, we need to initialize two trees and a table of observations
linking the taxa on the two trees.
End of explanation
%time SLT = SuchLinkedTrees( T1, T2, LK )
n_links = sum(LK.apply(sum))
print( 'total observations : %d' % n_links )
print( 'observation pairs : %d' % int( ( n_links * ( n_links - 1 ) ) / 2 ) )
Explanation: To create a SuchLinkedTrees instance, you need two SuchTrees and a pandas
DataFrame, where the taxa in the first tree matches the DataFrame index, and the
taxa in the second tree matches the DataFrame columns.
This is a pretty large dataset, so it takes a bit of time to load.
End of explanation
%time result = SLT.sample_linked_distances( sigma=0.001, buckets=64, n=4096 )
result
print( 'sampled link pairs : %d' % len(result['TreeA']) )
print( 'Pearson\'s correlation : r=%f, p=%f' % pearsonr( result['TreeA'],
result['TreeB'] ) )
Explanation: To test for cospeciation, Hommola's method does the following :
calculate the patristic distance between the host taxa from the two observations
calculate the patristic distance between the guest taxa from the two observations
calculate the Pearson's correlation of the distance measures
Then, to calculate the significance of the correlation, it randomly
permutes the observation table and recalculates the distances and correlations.
A significance measure (a $p$ value) is estimated based on how likely
the correlation measure on unpermuted observations could belong to the set
of correlation measures on permuted observations.
For each correlation measure, we'd have to calculate 1,008,162,156 patristic distances
through each of the two trees. To calculate the significance, we would then need to permute
the observations and then repeat the process about 50 times. That's 100,816,215,600 tree
traversals!
How long would that take? In our previous example, we benchmarked 1,000,000 distance
calculations at about 14 seconds on a single thread. For this dataset, one correlation
measure would require about a thousand times as many lookups, so it should have a run
time of about four hours. With the significance test, that would be a little more than
one CPU-week. I suppose that's not impossible, but for large datasets like this, we
probably don't need an exhaustive search of every possible pair of observations to get
a fairly accurate correlation measure.
So, we're going to use SuchLinkedTrees.sample_linked_distances(),
which returns a representative sample of distances. It does this
by filling a user-specified number of buckets (default : 64) with distances
between randomly chosen observations. It stops when the standard deviation of
the standard deviation of the buckets falls below sigma (default : 0.001).
End of explanation
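The correlation at the heart of Hommola's test is plain Pearson's r; a pure-Python sketch of the statistic being permuted (illustrative only -- the code here uses scipy.stats.pearsonr):

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson product-moment correlation of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / float(n), sum(ys) / float(n)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly correlated
```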
df = pandas.DataFrame( { 'microbe tree distances' : result['TreeA'],
'host tree distances' : result['TreeB'] } )
seaborn.jointplot( 'microbe tree distances', 'host tree distances',
data=df, alpha=0.3, size=8 )
Explanation: Not too bad. The algorithm went through ten iterations, placing ten blocks of
4096 pairs into each bucket before it converged on our stopping condition after
testing 2,621,440 pairs (about 0.2% of the possible pairs). Note that the
$p$-value we see here is not Hommola's $p$-value -- it doesn't include any
information about the topologies of the trees.
Let's see what the distribution of sampled distances looks like.
End of explanation
SLT.subset_b_size
Explanation: Well... that's... something? In this case, we are looking at the entire microbiome
of a complex of 14 host species that's about 10-20 million years old. Because
bacteria and archaea are older than that, we don't expect to see a meaningful
pattern of coevolution at the scale of the whole microbial community.
If we're looking for coevolution, we want to examine clades within the microbial
community. This is what SuchLinkedTrees is really designed to do.
SuchLinkedTrees has two functions that allow you to examine individual clades.
subset_a() takes the node id of an internal node of the first tree (usually, the
host organisms), and masks the data within the SuchLinkedTrees instance so that
that node behaves as the root. subset_b() does the same for the second tree
(usually, the guest organisms).
End of explanation
SLT.subset_b(121)
SLT.subset_b_leafs
Explanation: The observations are also masked so that distance calculations are constrained to within that
clade. The masking operation is extremely efficient, even for very large datasets.
End of explanation
from pyprind import ProgBar
warnings.simplefilter( 'ignore', RuntimeWarning )
N = len( T2.get_internal_nodes() )
progbar = ProgBar( N, title='Chugging through microbiome data...' )
data = []
for n,nodeid in enumerate( T2.get_internal_nodes() ) :
SLT.subset_b( nodeid )
progbar.update()
if SLT.subset_b_size < 10 :
continue
if SLT.subset_n_links > 2500 :
continue
d = {}
d['name'] = 'clade_' + str(nodeid)
d['n_links'] = SLT.subset_n_links
d['n_leafs'] = SLT.subset_b_size
ld = SLT.linked_distances()
d['r'], d['p'] = pearsonr( ld['TreeA'], ld['TreeB'] )
data.append( d )
data = pandas.DataFrame( data ).dropna()
Explanation: So, all we need to do is iterate over the internal nodes of the microbe tree
(which we can get from SuchTree's get_internal_nodes() function), subset
the guest tree to that node, and apply Hommola's algorithm to the masked
SuchLinkedTrees instance.
I'm going to put some simple constraints based on clade size. You could also
use the average or total tree depth for each clade. It takes about an hour
to finish all 103,445 clades, so let's look at a random sample of 10,000 of
them.
End of explanation
data.head()
seaborn.jointplot( data.n_leafs, data.r, alpha=0.3, size=8 )
Explanation: Let's see what we've got!
End of explanation
data.loc[ ( data.r > 0.6 ) &
( data.n_leafs > 10 ) &
( data.n_links > 15 ) &
( data.p < 0.01 ) ]
Explanation: Are there any clades that are big enough to be interesting that
show a significant correlation above 0.6?
End of explanation
SLT.subset_b( 26971 )
ld = SLT.linked_distances()
seaborn.jointplot( ld['TreeA'], ld['TreeB'] )
Explanation: Cool. Let's go back and look at these in more detail.
End of explanation
from dendropy import Tree
from tempfile import NamedTemporaryFile
tmpfile1 = NamedTemporaryFile()
tmpfile2 = NamedTemporaryFile()
# invert the taxa : node_id map
# FIXME : I need a better interface for this, suggestions welcome
sfeal = dict( zip( SLT.TreeB.leafs.values(), SLT.TreeB.leafs.keys() ) )
subset_taxa = [ sfeal[i] for i in SLT.subset_b_leafs ]
guest_tree = Tree.get_from_path( 'data/bigtrees/guest.tree',
schema='newick',
preserve_underscores=True ) # Newick is the worst
subset_tree = guest_tree.extract_tree_with_taxa_labels( subset_taxa )
subset_tree.write_to_path( tmpfile1.name, schema='newick' )
LK[ subset_taxa ].to_csv( tmpfile2.name )
%load_ext rpy2.ipython
cladepath = tmpfile1.name
linkpath = tmpfile2.name
outpath = 'clade_26971.svg'
%%R -i cladepath -i linkpath -i outpath -w 800 -h 800 -u px
library("phytools")
library("igraph")
tr1 <- read.tree( "data/bigtrees/host.tree" )
tr2 <- read.tree( cladepath )
links <- read.csv( linkpath, row.names=1, stringsAsFactors = F )
im <- graph_from_incidence_matrix( as.matrix( links ) )
assoc <- as_edgelist( im )
obj <- cophylo( tr1, tr2, assoc=assoc )
svg( outpath, width = 10, height = 12 )
plot( obj )
Explanation: Huh. Well, that looks a lot less interesting than I hoped. This is the
problem with correlation measures -- they don't test that the data obeys
their assumptions. In this case, we're using Pearson's $r$, which
assumes that the data from the two sources is normally distributed, which
this clearly is not. If you haven't seen this before, check out
Anscombe's quartet;
the gist of his argument is that it's not a good idea to apply any statistic
without examining the data graphically.
Let's have a look at the trees so we can get a better idea of why this is
broken. Unfortunately, I don't have a great way of pulling out the subtree
for plotting yet, so this will require some help from dendropy.
End of explanation |
6,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 class="title">Text Classification and Clustering</h1>
<ul>
<li>Research Seminar Information Retrieval
<li>Humboldt-University Berlin
<li>2015-07-01
<li>Stefan Baerisch
</ul>
# Outline
- **Introduction** - What we are talking about and why
- **Machine Learning** - Shared foundations of *classification* and *clustering*
- **Properties of texts** - Representing texts and *feature engineering*
- **Clustering** - As an example of *unsupervised* machine learning, with *hierarchical clustering* and *k-means clustering* as examples
- **Classification** - As an example of *supervised* learning, with *k-nearest neighbors*, *naive Bayes*, and *decision trees* as examples
- **Wrap-up and outlook** - A look back, and topics that did not fit anywhere else
## What is machine learning?
* *Machine learning* is an automated procedure that makes statements about inputs on the basis of a *model*.
* Inputs can be texts, images, numbers, or speech.
* Outputs can be predictions of values, class memberships, or control commands.
* *Learning* because the model is trained, usually with example data.
Classification and clustering are two examples; numerous others exist.
## How is machine learning used?
* **Online advertising** - Deciding quickly, on the basis of abundant data (profile and current inputs), which advertising message has the highest chance of success.
* **Predictive maintenance** - For complex machines with telemetry: how can a problem be predicted?
* Predicting illnesses from search queries (Flu Trends) and pregnancies from purchase histories (Target)
* Speech recognition, image recognition, and translation
Machine learning is often used predictively.
## What is this talk and what is it good for?
- This is a very quick overview of classification and clustering, with some machine learning as background
- You should take away the following
Step1: Machine Learning
Models and their use, part 1
A model is a simplification for a specific purpose
One example is maps at different scales and with different representations
Every model is a compromise
Quality of the predictions
Complexity and comprehensibility
Runtime behavior
Implementation effort
Consequences of errors
Models and their use, part 2
Models are prepared, trained, and evaluated
This happens in an iterative process
All steps are important
<center>
<img class="logo" src="bilder/prozess.png" class="bspic" width=700 />
<span>Data science process. Adapted from Data Science for Business, O'Reilly Media, 2013</span>
</center>
Models and their use, part 3
Understand the problem The most important point, and organizationally the hardest
Understand the data What surprises are hiding in the data?
Prepare the data In practice often the most laborious step
Build the model
Choice of the method
Parameterization of the model
Execution of the training
Evaluate the model How good is the model on training data and on test data?
Deploy the model This can involve a technical reimplementation.
Features and values
Features are the properties of entities/documents that models work with
Features only come into being through extraction and transformation
Internally a conversion into numbers takes place; we can ignore this here
The number of features is the dimensionality of the data
Text is high-dimensional
Special properties of texts
Example for this talk
The webkb dataset consists of university web pages from the late 1990s
The original application was classifying pages by their type
Person, course, etc...
The data is available as HTML, ~8000 documents in total
Step2: A sample document
Step3: Feature extraction for texts
For the methods discussed here, texts must be brought into a suitable form
This covers technical as well as content-related preparation
Removing HTML tags
Splitting into terms or n-grams
Deleting irrelevant or disruptive content
Shrinking the vocabulary
Converting into a document-term matrix
Setting the term weights in the matrix through appropriate preprocessing
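The cleanup steps listed above can be sketched in a few lines (toy regex tokenizer and stop-word list; illustrative, not the actual webkb pipeline):

```python
import re

# Tiny stand-in stop-word list; real pipelines use much larger ones.
STOPWORDS = {'the', 'a', 'of', 'and'}

def tokenize(text):
    # Lowercase, split into alphabetic tokens, drop stop words.
    tokens = re.findall(r'[a-z]+', text.lower())
    return [t for t in tokens if t not in STOPWORDS]

print(tokenize('The Department of Computer Science'))  # ['department', 'computer', 'science']
```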
The first 1000 characters of the first document
Step4: Removing the HTML tags
Step5: Converting into tokens
Step6: Deleting irrelevant tokens by structure
Step7: Removing stop words
Step8: Exploring the data
Step9: Preparing the further steps
The classes are part of the path within the dataset
Step10: Document frequency and distribution
Document frequency is an interesting property in feature extraction
Terms that occur equally often in every document are worthless for classification and clustering
Here these can be HTML tags
Terms that occur in only very few documents can distort the training
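Document frequency itself is cheap to compute; a toy sketch on synthetic documents:

```python
from collections import Counter

# Each document is a list of tokens; count each term once per document.
docs = [['cat', 'dog'], ['cat', 'fish'], ['cat']]
doc_freq = Counter(term for doc in docs for term in set(doc))
print(doc_freq['cat'], doc_freq['dog'])  # 3 1
```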
Practical implementation and comparison
Depending on how the documents are prepared, the size of the vocabulary varies greatly
Step11: Example of adjusting the filtering by document frequency
Step12: Preparing the data
For the rest of the talk there are some default values
These are not necessarily good values; more checking would be needed
No filtering by term structure takes place
The resulting artifacts can be seen in what follows
Step13: Clustering
Clustering Prozess
Clustering ist Non Supervised, es werden keine Trainingslabels verwendet
<center>
<img class="logo" src="bilder/process3.png" class="bspic" width=800 />
</center>
Entfernung und Clusterkriterium
Die Entfernung von Dokumenten, z.B. im Vectorraummodell, ist häufig Grundlage von Clusterzuordnungen
<center>
<img src="bilder/clust_1.png" class="bspic" width=400 />
</center>
K-means clustering
K-means is a randomized, iterative process with the number of clusters chosen in advance
1. Pick K points as the initial cluster centers
2. Assign every point to the nearest cluster center
3. Update each cluster center to the mean of its points (hence "k-means")
4. Repeat steps 2-3 until the cluster assignments are stable
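The steps above can be sketched for one-dimensional points (a toy illustration; the talk's code uses `sklearn.cluster.MiniBatchKMeans` instead):

```python
# Toy k-means for 1-D points. 'centers' are the initial guesses;
# a fixed iteration count stands in for the convergence check.
def kmeans_1d(points, centers, iterations=10):
    clusters = []
    for _ in range(iterations):
        # step 2: assign each point to the nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # step 3: move each center to the mean of its cluster
        centers = [sum(c) / float(len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```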
K-means: an example
<center>
<img src="bilder/clust_2.png" class="bspic" width=700 />
</center>
Program to run the clustering
Step14: Displaying the most frequent terms per cluster
Step15: Visualizing clusters - basics
The elements of a cluster are points in a space
The position of these points can be displayed well in up to 3 dimensions
With more dimensions / features we must either combine or select
Here there are a good 28,000 features
Visualizing clusters - in practice
Step16: Hierarchical clustering
Hierarchical clustering - theory
Here, too, the distance between points is decisive.
The two closest points (or clusters) are merged repeatedly until only one cluster remains.
The advantage of the method is a nice visualization of how close the individual clusters are
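The repeated merging can be sketched with single-linkage distances on one-dimensional values (purely illustrative; the practical code below uses `scipy.cluster.hierarchy`):

```python
# Toy agglomerative clustering: repeatedly merge the two clusters
# whose closest members are nearest (single linkage), recording merges.
def agglomerate(points):
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        # find the pair of clusters with the smallest minimum distance
        best = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: min(abs(a - b)
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        i, j = best
        merged = clusters[i] + clusters[j]
        merges.append(sorted(merged))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges
```

The recorded merge order is exactly what a dendrogram visualizes.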
Hierarchical clustering - an example
<center>
<img src="bilder/clust_3.png" class="bspic" width=700 />
</center>
Hierarchical clustering - in practice
Step17: Classification
Classification - the process
Classification is a supervised process. Labels describe the class membership and are then used for prediction.
<center>
<img class="logo" src="bilder/process2.png" class="bspic" width=700 />
</center>
Classification - variants
Classification can be between two classes, between several classes, or hierarchical
<center>
<img class="logo" src="bilder/class_1_1.png" class="bspic" width=900 />
</center>
Classification - one-versus-all
One way to handle several classes is to compare against the documents of all the other classes combined
The best class is then the one that separates best from the combined rest class.
<center>
<img class="logo" src="bilder/class_2.png" class="bspic" width=700 />
</center>
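The one-versus-all decision itself is simple; a sketch with purely illustrative scorers (scikit-learn's `OneVsRestClassifier`, used later in the talk, wraps this idea around real binary classifiers):

```python
# Sketch: one binary scorer per class (trained against "the rest");
# the prediction is the class whose scorer gives the highest score.
def one_vs_all_predict(doc, scorers):
    # scorers: {class_name: function(doc) -> score vs. the rest}
    return max(scorers, key=lambda cls: scorers[cls](doc))

# toy scorers standing in for trained binary classifiers
scorers = {
    "course":  lambda d: d.count("exam"),
    "faculty": lambda d: d.count("professor"),
}
```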
Classification - probabilities
Besides the plain class membership, the probability of membership is often of interest
Above which membership probability is an action taken, e.g. a document counted as relevant?
<center>
<img class="logo" src="bilder/class_3.png" class="bspic" width=700 />
</center>
Preparing the data
Splitting into a validation set and a training set, each with documents and labels
Step18: Decision trees
Decision trees - the theory
The basic idea is to pick a feature that separates the documents as well as possible by label.
The procedure is then repeated for the resulting branches.
Decision trees - the implementation
Step19: The decision tree
<center>
<img src="tree.png" class="bspic" width=1000 />
</center>
Step20: Naive Bayes
Naive Bayes - the theory
The formula looks complex
Step21: Validation via probabilities - ROC curves
Step23: The end - summary and outlook
Key points
Classification and clustering for texts are widespread techniques
Easy to understand and apply in their basic form
A good implementation is not easy
There is a large number of choices among the methods
Understanding the data is important
Preparing the features can easily be 80% of the work
More to discover
<center>
<img class="logo" src="ml_map.png" class="bspic" width=700 />
</center> | Python Code:
# Baseline classifier: assigns every value the most frequent class.
from collections import Counter
model = Counter()
def fit(value, cls):
model.update([cls])
def predict(value):
return model.most_common(1)[0][0]
# Three calls to fit
fit("Banana","fruit")
fit("Apple","fruit")
fit("Bean","Vegetable")
predict("Bean")
Explanation: <h1 class="title">Text Classification and Clustering</h1>
<ul>
<li>Research Seminar Information Retrieval
<li>Humboldt-University Berlin
<li>2015-07-01
<li>Stefan Baerisch
</ul>
# Outline
- **Introduction** - What we are talking about and why
- **Machine learning** - Shared foundations of *classification* and *clustering*
- **Properties of texts** - Representing texts and *feature engineering*
- **Clustering** - As an example of *unsupervised* machine learning, with *hierarchical clustering* and *k-means clustering* as examples
- **Classification** - As an example of *supervised* learning, with *k-nearest neighbors*, *naive Bayes*, and *decision trees* as examples
- **Summary and outlook** - A look back, plus topics that did not fit elsewhere
## What is machine learning?
* *Machine learning* is an automated procedure that makes statements about inputs on the basis of a *model*.
* Inputs can be texts, images, numbers, or speech.
* Outputs can be predictions of values, class memberships, or control commands.
* It is called *learning* because the model is trained, usually with example data.
Classification and clustering are two examples. Numerous others exist.
## How is machine learning used?
* **Online advertising** - Deciding quickly, based on a large amount of data (profile and current inputs), which advertising message has the highest chance of success.
* **Predictive maintenance** - For complex machines with telemetry: how can a problem be predicted?
* Predicting illness from search queries (Flu Trends) and pregnancies from purchase history (Target)
* Speech recognition, image recognition, and translation
Machine learning is often used for prediction.
## What is this talk and what is it good for?
- This is a very quick overview of classification and clustering, with a bit of machine learning as background
- You should take away the following:
    - Classification and clustering are not complicated
    - There is a whole range of possible methods
    - Preparing the data is decisive for the results, especially for texts
## On structure and content
- I use English terms where they are more common.
- The presentation is available for download on [Github](https://github.com/stbaercom/150627_hu_slides).
- The example code is Python 2. I recommend [Anaconda](http://continuum.io/downloads)
- The test data is the _4 Universities Data Set_, available for download [here](http://www.cs.cmu.edu/afs/cs/project/theo-20/www/data/)
## Source code and practice
- Some slides contain source code. That is for those who are interested.
- For the non-programmers: just listen. It is less complicated than it looks
## A code example
End of explanation
!tree webkb | head -n15
Explanation: Machine Learning
Models and their use, part 1
A model is a simplification for a particular purpose
One example is maps at different scales and with different renderings
Every model is a compromise:
Quality of the predictions
Complexity and understandability
Runtime behavior
Implementation effort
Consequences of errors
Models and their use, part 2
Models are prepared, trained, and evaluated
This happens in an iterative process
All steps are important
<center>
<img class="logo" src="bilder/prozess.png" class="bspic" width=700 />
<span>The data science process. Adapted from Data Science for Business, O'Reilly Media, 2013</span>
</center>
Models and their use, part 3
Understand the problem The most important point, and organizationally the most difficult
Understand the data What surprises are there in the data?
Prepare the data In practice often the most labor-intensive part
Build the model
Choosing the method
Parameterizing the model
Running the training
Evaluate the model How good is the model on training data and on test data?
Deploy the model This may involve a technical reimplementation.
Features and values
Features are those properties of entities/documents that models work with
Features only come into being through extraction and transformation
Internally everything is converted into numbers; we can ignore that here
The number of features is the dimensionality of the data
Text is high-dimensional
Special properties of texts
The example for this talk
The webkb data set consists of university web pages from the end of the 1990s
The original application was classifying pages by type of page
(person, course, etc.)
The data is available as HTML, roughly 8,000 documents in total
End of explanation
filename = './webkb/course/cornell/http:^^cs.cornell.edu^Info^Courses^Current^CS415^CS414.html'
from IPython.display import IFrame
IFrame(filename, width=700, height=500)
Explanation: A sample document
End of explanation
txt_1 = open(filename).read()
txt_1[:1000]
Explanation: Feature extraction for texts
Texts must be brought into a suitable form for the methods discussed here
This involves both technical and content-related preparation
Removing HTML tags
Splitting into terms or n-grams
Deleting irrelevant or distracting content
Reducing the size of the vocabulary
Converting into a document-term matrix
Setting the term weights in the matrix through appropriate preprocessing
The first 1,000 characters of the first document
End of explanation
from bs4 import BeautifulSoup
txt_2 = BeautifulSoup(txt_1).get_text()
txt_2
Explanation: Removing the HTML tags
End of explanation
import nltk
tokens_1 = nltk.word_tokenize(txt_2)
print tokens_1
print len(tokens_1)
Explanation: Converting into tokens
End of explanation
import re
tokens_2 = [t.lower() for t in tokens_1 if not re.match(r"[^a-zA-Z]",t)]
print tokens_2
print len(tokens_2)
Explanation: Deleting irrelevant tokens by structure
End of explanation
stopwords_en = nltk.corpus.stopwords.words('english')
stopwords_txt = ['mime-version','content-type','text/html','content-length','last-modified']
tokens_3 = [t for t in tokens_2 if t not in stopwords_en + stopwords_txt]
print tokens_3
print len(tokens_3)
Explanation: Removing stop words
End of explanation
from collections import Counter
counter = Counter(tokens_3)
counter.most_common(10)
Explanation: Exploring the data
End of explanation
import glob
filenames = glob.glob("./webkb/*/*/http*")
print len(filenames)
print filenames[0]
print filenames[0].split("/")[2]
target_classes = [f.split("/")[2] for f in filenames]
print target_classes[:5]
Explanation: Preparing the next steps
The classes are part of the path within the data set
End of explanation
from bs4 import BeautifulSoup
import nltk
import re
import sklearn.feature_extraction
filenames = glob.glob("./webkb/*/*/http*")
def bs_tokenize(txt):
txt = BeautifulSoup(txt).get_text().lower()
return [t for t in nltk.word_tokenize(txt) if not re.match(r"[^a-z]",t)]
def prepare(filenames, **kwargs):
args = dict(input='filename',strip_accents = 'unicode', encoding='iso8859-1')
args.update(kwargs)
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(**args)
term_doc_mat = vectorizer.fit_transform(filenames)
return vectorizer,term_doc_mat
v1,tm1 = prepare(filenames[:200],max_df= 1.0, min_df = 1)
v2,tm2 = prepare(filenames[:200],tokenizer = bs_tokenize)
v3,tm3 = prepare(filenames[:200],max_df= 0.8, min_df = 0.1)
print "Number of terms in the dictionary, no filtering             %4i" % len(v1.vocabulary_)
print "Number of terms in the dictionary, filtered by pattern      %4i" % len(v2.vocabulary_)
print "Number of terms in the dictionary, filtered by doc. freq.   %4i" % len(v3.vocabulary_)
Explanation: Document frequency and preparation
Document frequency is an interesting property during feature extraction
Terms that occur equally often in every document are worthless for classification and clustering
Here these can be HTML tags
Terms that occur in only very few documents can distort the training
Practical implementation and comparison
Depending on how the documents are prepared, the size of the vocabulary varies considerably
End of explanation
import random
from IPython.html.widgets import interact
from IPython.html import widgets
TEMP = None
def prepare_interact_100(max_df,min_df):
global TEMP
cv,tm = prepare(filenames[:200],max_df= max_df, min_df = min_df)
print len(cv.vocabulary_)
current = set(cv.vocabulary_.keys())
if TEMP:
print [str(v) for v in list(TEMP.symmetric_difference(current))][:100]
TEMP = current
max_df_w = widgets.FloatSlider(min=0.1,max=1.0,step=0.05,value=1.0)
min_df_w = widgets.IntSlider(min=1,max=50,step=1,value=0)
interact(prepare_interact_100,max_df = max_df_w,min_df = min_df_w)
Explanation: Example of adjusting the filtering by document frequency
End of explanation
idf_vec,term_mat = prepare(filenames,max_df= 0.5, min_df = 4)
idf_vec, term_mat
Explanation: Preparing the data
For the rest of the talk a few default values are used
These are not necessarily good values; more evaluation would be needed
No filtering by term structure takes place
The resulting artifacts can be seen in what follows
End of explanation
import sklearn.cluster
import numpy as np
from collections import Counter
def get_kmeans(num_clusters,term_matrix):
kmeans = sklearn.cluster.MiniBatchKMeans(num_clusters)
kmeans.fit(term_matrix)
return kmeans
kmeans = get_kmeans(5,term_mat)
labels=kmeans.labels_
for v in Counter(labels).items():
print "Cluster %s has %4i Elements " % v
Explanation: Clustering
The clustering process
Clustering is unsupervised; no training labels are used
<center>
<img class="logo" src="bilder/process3.png" class="bspic" width=800 />
</center>
Distance and clustering criterion
The distance between documents, e.g. in the vector space model, is often the basis for cluster assignments
<center>
<img src="bilder/clust_1.png" class="bspic" width=400 />
</center>
K-means clustering
K-means is a randomized, iterative process with the number of clusters chosen in advance
1. Pick K points as the initial cluster centers
2. Assign every point to the nearest cluster center
3. Update each cluster center to the mean of its points (hence "k-means")
4. Repeat steps 2-3 until the cluster assignments are stable
K-means: an example
<center>
<img src="bilder/clust_2.png" class="bspic" width=700 />
</center>
Program to run the clustering
End of explanation
import pandas as pd
def top_terms(num_clusters,km, vectorizer):
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
acum = {}
for i in range(num_clusters):
for i2 in order_centroids[i, :20]:
acum.setdefault("Cluster %s Term" % i,[]).append(terms[i2])
return pd.DataFrame(acum)
top_terms = top_terms(5,kmeans,idf_vec)
top_terms
Explanation: Displaying the most frequent terms per cluster
End of explanation
from sklearn.decomposition import TruncatedSVD
from itertools import cycle
import matplotlib.pyplot as plt
def draw_cluster(num_clusters,term_mat,labels):
colors = 'rgbcmykw'
tsvd = TruncatedSVD()
tsvd.fit(term_mat)
two_dim = tsvd.transform(term_mat)
plt.figure(figsize=(12,10))
for i in range(num_clusters):
cluster_points = two_dim[np.where(labels == i)[0]]
c = colors[min(i,len(colors)-1)]
l = "Cluster %i" % i
plt.scatter(x = cluster_points[:,1], y = cluster_points[:,0], c=c, alpha=0.5,label=l)
plt.legend(loc=4)
plt.show()
draw_cluster(5, term_mat, kmeans.labels_)
Explanation: Visualizing clusters - basics
The elements of a cluster are points in a space
The position of these points can be displayed well in up to 3 dimensions
With more dimensions / features we must either combine or select
Here there are a good 28,000 features
Visualizing clusters - in practice
End of explanation
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import ward, dendrogram
from sklearn.decomposition import TruncatedSVD
def draw_hier_tree_2(term_matrix,dims = 50, docs = 500):
plt.figure(figsize=(12,10))
tsvd = TruncatedSVD(n_components=dims)
red_dim = tsvd.fit_transform(term_matrix[:docs])
dist = 1 - cosine_similarity(red_dim)
dend = dendrogram(ward(dist))
plt.tight_layout()
plt.show()
draw_hier_tree_2(term_mat)
Explanation: Hierarchical clustering
Hierarchical clustering - theory
Here, too, the distance between points is decisive.
The two closest points (or clusters) are merged repeatedly until only one cluster remains.
The advantage of the method is a nice visualization of how close the individual clusters are
Hierarchical clustering - an example
<center>
<img src="bilder/clust_3.png" class="bspic" width=700 />
</center>
Hierarchical clustering - in practice
End of explanation
from sklearn.cross_validation import train_test_split
splitted = train_test_split(term_mat,target_classes,test_size=0.25, random_state=42)
train_dtm, test_dtm, train_target, test_target = splitted
Explanation: Classification
Classification - the process
Classification is a supervised process. Labels describe the class membership and are then used for prediction.
<center>
<img class="logo" src="bilder/process2.png" class="bspic" width=700 />
</center>
Classification - variants
Classification can be between two classes, between several classes, or hierarchical
<center>
<img class="logo" src="bilder/class_1_1.png" class="bspic" width=900 />
</center>
Classification - one-versus-all
One way to handle several classes is to compare against the documents of all the other classes combined
The best class is then the one that separates best from the combined rest class.
<center>
<img class="logo" src="bilder/class_2.png" class="bspic" width=700 />
</center>
Classification - probabilities
Besides the plain class membership, the probability of membership is often of interest
Above which membership probability is an action taken, e.g. a document counted as relevant?
<center>
<img class="logo" src="bilder/class_3.png" class="bspic" width=700 />
</center>
Preparing the data
Splitting into a validation set and a training set, each with documents and labels
End of explanation
from sklearn import tree
def classify_tree(term_matrix, targets):
clf = tree.DecisionTreeClassifier(criterion="entropy",max_leaf_nodes = 100)
clf = clf.fit(term_matrix,targets)
return clf
clf = classify_tree(train_dtm.todense(), train_target)
import os
tree.export_graphviz(clf,feature_names = idf_vec.get_feature_names(), max_depth = 5 ,out_file='tree.dot')
!dot -Tpng tree.dot -o tree.png
Explanation: Decision trees
Decision trees - the theory
The basic idea is to pick a feature that separates the documents as well as possible by label.
The procedure is then repeated for the resulting branches.
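A common way to measure how well a feature separates the labels is information gain based on entropy. A small sketch (illustrative; scikit-learn computes this internally when `criterion="entropy"` is set, as in the code below):

```python
import math

# Shannon entropy of a list of class labels, in bits.
def entropy(labels):
    n = float(len(labels))
    counts = {l: labels.count(l) for l in set(labels)}
    return -sum((c / n) * math.log(c / n, 2) for c in counts.values())

# Information gain of a candidate split: entropy before minus the
# size-weighted entropy of the parts after splitting on one feature.
def information_gain(labels, split):
    n = float(len(labels))
    remainder = sum(len(part) / n * entropy(part) for part in split)
    return entropy(labels) - remainder
```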
Decision trees - the implementation
End of explanation
!open tree.png
Explanation: The decision tree
<center>
<img src="tree.png" class="bspic" width=1000 />
</center>
End of explanation
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix
from sklearn.cross_validation import train_test_split
splitted = train_test_split(term_mat,target_classes,test_size=0.25, random_state=42)
train_dtm, test_dtm, train_target, test_target = splitted
bayes_cls = MultinomialNB()
bayes_cls = bayes_cls.fit(train_dtm,train_target)
bayes_pred_cls = bayes_cls.predict(test_dtm)
cm = confusion_matrix(test_target, bayes_pred_cls)
classes = bayes_cls.classes_
def plot_confusion_matrix(cm, classes, cmap=plt.cm.Blues):
plt.figure(figsize=(12,10))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plot_confusion_matrix(cm,classes)
Explanation: Naive Bayes
Naive Bayes - the theory
The formula looks complex:
$$ P(c \mid d) \propto P(c) \prod_{1 \le k \le n_d} P(t_k \mid c) $$
The essential idea is that the class membership depends on two aspects:
- The probability of the class
- The multiplied probabilities, for each term, that the term appears in the class
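In practice the product is computed in log space with smoothing. A toy sketch of the scoring (illustrative names and counts, not `MultinomialNB` itself):

```python
import math

# Sketch of the formula above in log space with add-one smoothing:
# score(c) = log P(c) + sum over tokens of log P(t | c).
def nb_score(tokens, prior, term_counts, vocab_size):
    total = sum(term_counts.values())
    score = math.log(prior)
    for t in tokens:
        p = (term_counts.get(t, 0) + 1.0) / (total + vocab_size)
        score += math.log(p)
    return score
```

The predicted class is simply the one with the highest score.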
Naive Bayes - practice and evaluation
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import sklearn.naive_bayes
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
import warnings
warnings.filterwarnings('ignore')
def draw_roc(term_mat,target_classes):
classes = np.array(['project', 'course', 'other', 'student', 'faculty', 'department', 'staff'])
target_classes_bin = label_binarize(target_classes, classes=classes)
splitted = train_test_split(term_mat,target_classes_bin,test_size=0.25, random_state=42)
train_dtm, test_dtm, train_target, test_target = splitted
classifier = OneVsRestClassifier(sklearn.naive_bayes.MultinomialNB())
y_score = classifier.fit(train_dtm, train_target).predict_proba(test_dtm)
plt.figure()
plt.figure(figsize=(12,10))
for i in range(len(classes)):
fpr, tpr, _ = roc_curve(test_target[:, i], y_score[:, i])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='%s, area:%.2f' % (classes[i],roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc="lower right")
plt.show()
draw_roc(term_mat,target_classes)
Explanation: Validation via probabilities - ROC curves
End of explanation
from IPython.display import display, HTML
s = """
<style>
.rendered_html {
    font-family: "proxima-nova", helvetica;
    font-size: 120%;
    line-height: 1.3;
}
.rendered_html h1 {
    margin: 0.25em 0em 0.5em;
    color: #015C9C;
    text-align: center;
    line-height: 1.2;
    page-break-before: always;
}
.rendered_html h2 {
    margin: 1.1em 0em 0.5em;
    color: #26465D;
    line-height: 1.2;
}
.rendered_html h3 {
    margin: 1.1em 0em 0.5em;
    color: #002845;
    line-height: 1.2;
}
.rendered_html li {
    line-height: 1.2;
}
.prompt {
    font-size: 110%;
}
.CodeMirror-lines {
    font-size: 110%;
}
.output_area {
    font-size: 110%;
}
h1.bigtitle {
    margin: 4cm 1cm 4cm 1cm;
    font-size: 300%;
}
h3.point {
    font-size: 200%;
    text-align: center;
    margin: 2em 0em 2em 0em;
    color: #26465D;
}
.sbpic {
    margin: 10px 10px 10px 10px;
}
a.anchor-link {
    display: none;
}
h1.title {
    font-size: 250%;
}
.CodeMirror-code { width:100% !important; }
.container { width:100% !important; }
</style>
"""
display(HTML(s))
from IPython.html.services.config import ConfigManager
from IPython.utils.path import locate_profile
cm = ConfigManager(profile_dir=locate_profile(get_ipython().profile))
cm.update('livereveal', {
'width': 1024,
'height': 768,
})
Explanation: The end - summary and outlook
Key points
Classification and clustering for texts are widespread techniques
Easy to understand and apply in their basic form
A good implementation is not easy
There is a large number of choices among the methods
Understanding the data is important
Preparing the features can easily be 80% of the work
More to discover
<center>
<img class="logo" src="ml_map.png" class="bspic" width=700 />
</center>
End of explanation |
6,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data preprocessing
Here will download and subset NCEP reanalysis data, and read in files created from the DesInventar database. Then create a map showing the regions where disaster records are available, and how this relates to the resolution of the meteorological data.
Set up
Import needed packages
Step1: Specify region
For this exercise, using examples from India.
Step2: Set data
Disaster records
A spreadsheet of available data was obtained from the DesInventar website and then exported to .csv format. Both versions are available in the data repository. When pulling data from the website there can sometimes be little formatting issues, which we repair here. We also want to learn what span of years is covered by the database for our example country (India), so that we can save disk space by paring the reanalysis data down to the smallest possible file.
Step3: Reanalysis
Need to pull the reanalysis data from NCEP's online database. We pull the full global files at first, so that the data is available if we want to look at other regions of the world. This requires a lot of download time and storage space; the resulting minimally sized files are stored in the repository (others are deleted or moved to save disk space), so don't run these code blocks unless you need to change something about how the data is acquired or its final form (which means, yeah, you'll probably end up having to run the script).
Step4: Once have full data set can then subdivide to create individual files for different regions to reduce the run time when reading in data for individual regions.
Step5: Region masks
The way we arranged the analysis (which as you can see is a bit of an ad hoc, duct tape style procedure) requires masking out the individual districts, or rather the closest approximation of them possible using the low resolution, gridded reanalysis data.
The first step is creating a 'blanked' file of the region, where all the values are set to unity.
Step6: The actual mask files are made with a different script, writen in NCL The code here modifies the generic script based on what region we're interested in at the moment.
For some countries, e.g., Chile, the region labels in the shapefiles and the region labels in the heatwave database are not rendered the same (typically this has to do with how accented letters are notated), so some tweaking has to be done.
Step7: Drawing a map
Want to create a graphic to show that reports only exist for certain regions, and how the grid spacing of the meteorological fields imperfectly matches the actual region boundaries. We currently consider a grid cell informative about a political region as long as some part of the region boundary is within 50 km of the grid point (cell center). We played around with a few things before settling on this. The distance is fairly conservative, in that it tends toward including information from outside the region rather than excluding information from within, but it still keeps the evaluation reasonably fair by not evaluating against grid cells that contain only a minimal amount of the geographical region. Considering that most political boundaries are linked to geographical features or divides, if only a small fraction of a region extends into another grid cell, we would expect its weather to be more correlated with that of the cells covering the rest of the region than with that of the other area. An example of this can be seen for Uttar Pradesh (India), where a sliver of the region overlaps with a grid cell that is mostly representative of the Himalayas, so that cell is not considered when calculating the warm spell durations.
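The 50 km rule can be tested with a great-circle distance. A sketch under the assumption that region boundaries are available as lists of (lat, lon) vertices (function names here are illustrative, not part of the NCL masking script):

```python
import math

# Haversine great-circle distance between two (lat, lon) points, in km.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# A grid cell counts as informative if any boundary vertex lies
# within the cutoff distance of the cell center.
def cell_is_informative(grid_pt, boundary_pts, cutoff_km=50.0):
    return any(haversine_km(grid_pt[0], grid_pt[1], b[0], b[1]) <= cutoff_km
               for b in boundary_pts)
```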
Looking at the individual administrative regions requires working
with shape files. These are obtained from the Database of Global
Administrative Areas. | Python Code:
#--- Libraries
import pandas as pd # statistics packages
import numpy as np # linear algebra packages
import matplotlib.pyplot as plt # plotting routines
import seaborn as sns # more plotting routines
import shapefile # routines for using 'shapefiles'
import urllib # routines for interacting with websites
import subprocess # routines for calling external OS commands
from mpl_toolkits.basemap import Basemap # plotting routines for map making
from matplotlib import gridspec # plotting routines for multiple plots
from netCDF4 import Dataset # routines for interacting with NetCDF files
from matplotlib import cm # more plotting routines
from matplotlib.collections import LineCollection # more plotting routines
from cdo import * # routines for interacting with NetCDF files
cdo = Cdo() # via an external program
# place graphics in the notebook document
%matplotlib inline
Explanation: Data preprocessing
Here we will download and subset NCEP reanalysis data, and read in files created from the DesInventar database. Then we create a map showing the regions where disaster records are available, and how this relates to the resolution of the meteorological data.
Set up
Import needed packages
End of explanation
#--- Identify country for example
# label country
country = 'India'
# define bounding box for region
mlat = '0' ; Mlat = '40' ; mlon = '65' ; Mlon = '105'
Explanation: Specify region
For this exercise, using examples from India.
End of explanation
#--- Pull in data from DesInvetar records
# Read file of reported heatwaves (original spreadsheet)
heatwave_data = pd.read_csv('../data/Heatwaves_database.csv')
# repair region name with space before name
heatwave_data.loc[(heatwave_data.Region==' Tamil Nadu'),'Region'] = 'Tamil Nadu'
# list out the dates for example country (India)
india_dates = heatwave_data['Date (YMD)'][heatwave_data['Country'].isin(['India'])]
# find year of earliest entry
min_year = np.min([int(x.split('/')[0]) for x in india_dates])
# find year of latest entry
max_year = np.max([int(x.split('/')[0]) for x in india_dates])
Explanation: Set data
Disaster records
A spreadsheet of available data was obtained from the DesInventar website and then exported to .csv format. Both versions are available in the data repository. When pulling data from the website there can sometimes be little formatting issues, which we repair here. We also want to learn what span of years is covered by the database for our example country (India), so that we can save disk space by paring the reanalysis data down to the smallest possible file.
End of explanation
#---Download NetCDF files
# path to data directory for max/min daily temperatures
path_maxmin = 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface_gauss'
# path to data directory for 6hr temperature records
path_hourly = 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis/surface_gauss'
# loop through years
for yr in range(1948,2016) :
# write max 2meter temperature to new file
path = path_maxmin+'/tmax.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.max.daily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# write min 2meter temperature to new file
path = path_maxmin+'/tmin.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.min.daily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# write 2meter temperature at 6-hour intervals to new file
path = path_hourly+'/air.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.subdaily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# set data as single multiyear files
_ = cdo.mergetime(input='../data/t2m.max.daily.*.nc',output='../data/t2m.max.daily.nc')
_ = cdo.mergetime(input='../data/t2m.min.daily.*.nc',output='../data/t2m.min.daily.nc')
_ = cdo.mergetime(input='../data/t2m.subdaily.*.nc',output='../data/t2m.subdaily.nc')
Explanation: Reanalysis
Need to pull the reanalysis data from NCEP's online database. We pull the full global files at first, so that the data is available if we want to look at other regions of the world. This requires a lot of download time and storage space; the resulting minimally sized files are stored in the repository (others are deleted or moved to save disk space), so don't run these code blocks unless you need to change something about how the data is acquired or its final form (which means, yeah, you'll probably end up having to run the script).
End of explanation
#--- Create data files of region
# select region from min-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.min.daily.nc',
output='../data/'+country+'.t2m.min.daily.nc')
# select region from max-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.max.daily.nc',
output='../data/'+country+'.t2m.max.daily.nc')
# select region from hourly-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.subdaily.nc',
output='../data/'+country+'.t2m.subdaily.nc')
# create a daily mean temperature file
_ = cdo.daymean(input='../data/'+country+'.t2m.subdaily.nc',
output='../data/'+country+'.t2m.daily.nc')
#--- Trim time range of file to match disaster records
# list years in time range
years_in_record = ','.join([ str(x) for x in range(min_year,max_year+1) ])
# subset regional data
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.min.daily.nc',
output='../data/'+country+'.t2m.min.daily.subset.nc')
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.max.daily.nc',
output='../data/'+country+'.t2m.max.daily.subset.nc')
# _ = cdo.selyear(years_in_record,
# input='../data/'+country+'.t2m.subdaily.nc',
# output='../data/'+country+'.t2m.subdaily.subset.nc')
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.daily.nc',
output='../data/'+country+'.t2m.daily.subset.nc')
# retain base period file (needed for one of the heat wave definitions)
years = ','.join([ str(x) for x in range(1960,1991)])
_ = cdo.selyear(years,
input='../data/'+country+'.t2m.max.daily.nc',
output='../data/'+country+'basefile.nc')
Explanation: Once we have the full data set, we can then subdivide it to create individual files for different regions, reducing the run time when reading in data for an individual region.
End of explanation
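`cdo.sellonlatbox` expects the box argument as `'lon1,lon2,lat1,lat2'`, which is why the calls above join `[mlon,Mlon,mlat,Mlat]`. A small formatter makes those call sites less error-prone — a sketch only; `mlon`/`Mlon`/`mlat`/`Mlat` are defined elsewhere in the notebook:

```python
def lonlat_box(mlon, Mlon, mlat, Mlat):
    """Format a bounding box the way cdo.sellonlatbox expects it."""
    if float(mlon) > float(Mlon) or float(mlat) > float(Mlat):
        raise ValueError('box limits must be ordered min,max')
    return ','.join(str(v) for v in (mlon, Mlon, mlat, Mlat))

lonlat_box(70, 100, 5, 35)  # roughly the India window used in the mapping cell later
```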
#--- Create blank file for region
# write grid information to file
ofile = open('../data/ncep_grid.asc','w')
ofile.write('\n'.join(cdo.griddes(input='../data/'+country+'.t2m.daily.nc')))
ofile.close()
# create data file with all values set to 1
_ = cdo.const('1','../data/ncep_grid.asc',
output='../data/'+country+'.blank.ncepgrid.nc',
options='-f nc')
Explanation: Region masks
The way we arranged the analysis (which as you can see is a bit of an ad hoc, duct tape style procedure) requires masking out the individual districts, or rather the closest approximation of them possible using the low resolution, gridded reanalysis data.
The first step is creating a 'blanked' file of the region, where all the values are set to unity.
End of explanation
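Conceptually, what `cdo.const` produces is just a constant field of ones on the reanalysis grid. The NCEP/NCAR surface_gauss files are on a 192 x 94 Gaussian (T62) grid — consistent with the 1.875-degree longitude spacing (360/192) used later in the mapping code — so the blank file is equivalent to:

```python
# conceptual stand-in for the cdo.const output: a field of ones on the grid
nlon, nlat = 192, 94          # NCEP/NCAR T62 Gaussian grid (360/192 = 1.875 deg lon)
blank = [[1.0] * nlon for _ in range(nlat)]
```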
#--- Identify regions of interest
# make list of unique region names for country
regions = list( set(heatwave_data.Region.where(heatwave_data.Country==country)) )
# remove nans (from regions that arent in the selected country)
regions = [x for x in regions if str(x) != 'nan']
regions = [x.title() for x in regions]
if ( country == 'Chile') :
regions_shapefile = [u'Antofagasta',u'Araucan\xeda',
u'Ais\xe9n del General Carlos Ib\xe1\xf1ez del Campo',
u'Regi\xf3n Metropolitana de Santiago',
u'Magallanes y Ant\xe1rtica Chilena',
u"Libertador General Bernardo O'Higgins"]
else :
regions_shapefile = regions
#--- Create masks
# loop through regions
for i in range(len(regions)) :
# find the name of the region
reg = regions[i].title()
# find the name of the region as defined by the shapefile
reg_shapefile = regions_shapefile[i] #reg_shapefile = regions_shapefile[i].decode('utf-8')
# remove spaces
reg = reg.strip()
# report what's happening
print("Creating masking script for "+reg+", aka "+reg_shapefile)
# create NCL script from defualt file with name of region
with open('maskregions_'+"".join(country.split(" "))+'.ncl', 'r') as input_file, open('crMaskFile.ncl', 'w') as output_file:
# check lines for dummy line
for line in input_file :
if line.strip() == 'region = "STATE/PROVINCE"' :
# overwrite with region name
output_file.write(' region = "'+reg_shapefile.encode('utf-8')+'"\n')
else :
output_file.write(line)
# run NCL routine
print("Running masking script")
# subprocess.call(['/bin/bash','-i','-c','ncl crMaskFile.ncl'])
subprocess.call(['/bin/bash','-c','ncl crMaskFile.ncl'])
# create a file that masks the region
print("Renaming mask and copying to data folder.")
subprocess.call(['cp','mask.nc',"../data/"+"_".join(reg.split())+'.mask.nc'])
#--- Create single mask file showing all considered regions
# combine all the individual mask files
_ = cdo.add(input='../data/Orissa.mask.nc ../data/Uttar_Pradesh.mask.nc',
output='../data/tmp.nc')
_ = cdo.add(input='../data/tmp.nc ../data/Tamil_Nadu.mask.nc',
output='../data/India.masks.nc')
Explanation: The actual mask files are made with a different script, written in NCL. The code here modifies the generic script based on which region we're interested in at the moment.
For some countries, e.g., Chile, the region labels in the shapefiles and the region labels in the heatwave database are not rendered the same (typically this has to do with how accented letters are notated), so some tweaking has to be done.
End of explanation
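One generic way to reconcile accented shapefile labels with the plain-text database labels is to strip diacritics before comparing — an alternative to the hand-written Chile list above, shown here purely as a sketch:

```python
import unicodedata

def strip_accents(text):
    """Drop combining marks so 'Araucanía' compares equal to 'Araucania'."""
    decomposed = unicodedata.normalize('NFD', text)
    return ''.join(ch for ch in decomposed if unicodedata.category(ch) != 'Mn')

strip_accents('Región Metropolitana de Santiago')  # 'Region Metropolitana de Santiago'
```

This handles the accent mismatches, but not renamings (e.g. the long official name of Aisén), so a small manual mapping may still be needed for some regions.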
#--- Map regions of India used in this example
# read which regions are included in disaster database
regions = list(set(heatwave_data.loc[(heatwave_data.Country=='India'),'Region']))
# Create a map object
chart = Basemap(projection='lcc',resolution='c',
lat_0=20,lon_0=85,
llcrnrlat=5,urcrnrlat=35,
llcrnrlon=70,urcrnrlon=100)
# add geographic features
chart.shadedrelief()
# draw parallels and meridians.
chart.drawparallels(np.arange(-90.,91.,10.),labels=[False,True,True,False])
chart.drawmeridians(np.arange(-180.,181.,10.),labels=[True,False,False,True])
# add country outline
chart.readshapefile('../data/IND_adm0', 'IND0',drawbounds=True) ;
# add region outlines, for regions in data set
chart.readshapefile('../data/IND_adm1', 'IND1',drawbounds=False) ;
for info, shape in zip(chart.IND1_info, chart.IND1):
if info['NAME_1'] in regions :
x, y = zip(*shape)
chart.plot(x, y, marker=None,color=sns.xkcd_rgb['dusty orange'])
# load file of combined regional masks
ncfile = Dataset('../data/India.masks.nc')
# read mask data
rmask = ncfile.variables['region_mask'][:]
# get coordinates of data
lons = ncfile.variables['lon'][:]
lats = ncfile.variables['lat'][:]
# shift so that lines show grid box boundaries,
# rather than grid point locations
lons = lons - (1.875/2)
lats = lats + (1.9047/2)
# if in western hemisphere, need to label as
# "all the way round", rather than +/-
# lons = lons - 360
# set coordinates list as grid of locations
lons, lats = np.meshgrid(lons,lats)
# overlay region masks
chart.pcolormesh(lons,lats,rmask,shading='flat',latlon=True, alpha=0.2) ;
# save image
plt.savefig('../figures/india.png')
Explanation: Drawing a map
Want to create a graphic to show that reports only exist for certain regions, and how the grid spacing of the meteorological fields imperfectly matches the actual region boundaries. Have currently set things so that a grid cell is considered informative about the political region as long as some part of the region boundary is within 50 km of the grid point (cell center). Played around with a few things before settling on this. The distance is pretty conservative, in that it tends towards considering information from outside the region rather than excluding information from within, but it still keeps the evaluation "fair" by not evaluating against grid cells which contain only a minimal amount of the geographical region. Considering that most political boundaries are linked to geographical features/divides, if only a small fraction of the region extends into another grid cell, we would expect its weather to be more correlated with that shown by cells over the rest of the region than with that of this other area. An example of this can be seen for Uttar Pradesh (India), where a sliver of the region overlaps with a grid cell that is mostly representative of the Himalayas, so that cell is not considered when calculating the warm spell durations.
Looking at the individual administrative regions requires working
with shape files. These are obtained from the Database of Global
Administrative Areas.
End of explanation
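The 50 km grid-point criterion described above boils down to a great-circle distance test. A self-contained version is below — illustrative only, since the actual check lives in the NCL masking script:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cell_is_informative(grid_lat, grid_lon, boundary_points, max_km=50.0):
    """True if any region boundary point lies within max_km of the grid point."""
    return any(haversine_km(grid_lat, grid_lon, la, lo) <= max_km
               for la, lo in boundary_points)
```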
Description:
Chapter 3, Table 3
This notebook explains how I used the Harvard General Inquirer to streamline interpretation of a predictive model.
I'm italicizing the word "streamline" because I want to emphasize that I place very little weight on the Inquirer
Step1: Loading the General Inquirer.
This takes some doing, because the General Inquirer doesn't start out as a set of wordlists. I have to translate it into that form.
I start by loading an English dictionary.
Step2: The next stage is to translate the Inquirer. It begins as a table where word senses are row labels, and the Inquirer categories are columns (except for two columns at the beginning and two at the end). This is, by the way, the "basic spreadsheet" described at this site
Step3: Load model predictions about volumes
The next step is to create some vectors that store predictions about volumes. In this case, these are predictions about the probability that a volume is fiction, rather than biography.
Step4: And get the wordcounts themselves
This cell of the notebook is very short (one line), but it takes a lot of time to execute. There's a lot of file i/o that happens inside the function get_wordfreqs, in the FileCabinet module, which is invoked here. We come away with a dictionary of wordcounts, keyed in the first instance by volume ID.
Note that these are normalized frequencies rather than the raw integer counts we had in the analogous notebook in chapter 1.
Step5: Now calculate the representation of each Inquirer category in each doc
We normalize by the total wordcount for a volume.
This cell also takes a long time to run. I've added a counter so you have some confidence that it's still running.
Step6: Calculate correlations
Now that we have all the information, calculating correlations is easy. We iterate through Inquirer categories, in each case calculating the correlation between a vector of model predictions for docs, and a vector of category-frequencies for docs.
Step7: Load expanded names of Inquirer categories
The terms used in the inquirer spreadsheet are not very transparent. DAV for instance is "descriptive action verbs." BodyPt is "body parts." To make these more transparent, I have provided expanded names for many categories that turned out to be relevant in the book, trying to base my description on the accounts provided here
Step8: Print results
I print the top 12 correlations and the bottom 12, skipping categories that are drawn from the "Laswell value dictionary." The Laswell categories are very finely discriminated (things like "enlightenment gain" or "power loss"), and I have little faith that they're meaningful. I especially doubt that they could remain meaningful when the Inquirer is used crudely as a source of wordlists. | Python Code:
# some standard modules
import csv, os, sys
from collections import Counter
import numpy as np
from scipy.stats import pearsonr
# now a module that I wrote myself, located
# a few directories up, in the software
# library for this repository
sys.path.append('../../lib')
import FileCabinet as filecab
Explanation: Chapter 3, Table 3
This notebook explains how I used the Harvard General Inquirer to streamline interpretation of a predictive model.
I'm italicizing the word "streamline" because I want to emphasize that I place very little weight on the Inquirer: as I say in the text, "The General Inquirer has no special authority, and I have tried not to make it a load-bearing element of this argument."
To interpret a model, I actually spend a lot of time looking at lists of features, as well as predictions about individual texts. But to explain my interpretation, I need some relatively simple summary. Given real-world limits on time and attention, going on about lists of individual words for five pages is rarely an option. So, although wordlists are crude and arbitrary devices, flattening out polysemy and historical change, I am willing to lean on them rhetorically, where I find that they do in practice echo observations I have made in other ways.
I should also acknowledge that I'm not using the General Inquirer as it was designed to be used. The full version of this tool is not just a set of wordlists, it's a software package that tries to get around polysemy by disambiguating different word senses. I haven't tried to use it in that way: I think it would complicate my explanation, in order to project an impression of accuracy and precision that I don't particularly want to project. Instead, I have stressed that word lists are crude tools, and I'm using them only as crude approximations.
That said, how do I do it?
To start with, we'll load an array of modules. Some standard, some utilities that I've written myself.
End of explanation
# start by loading the dictionary
dictionary = set()
with open('../../lexicons/MainDictionary.txt', encoding = 'utf-8') as f:
reader = csv.reader(f, delimiter = '\t')
for row in reader:
word = row[0]
count = int(row[2])
if count < 10000:
continue
# that ignores very rare words
# we end up with about 42,700 common ones
else:
dictionary.add(word)
Explanation: Loading the General Inquirer.
This takes some doing, because the General Inquirer doesn't start out as a set of wordlists. I have to translate it into that form.
I start by loading an English dictionary.
End of explanation
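The frequency-threshold filtering above can be checked on a toy, in-memory stand-in for MainDictionary.txt. The three rows here are made up; the real file has the same tab-separated layout with the word in column 0 and a corpus count in column 2 (the middle column's meaning is not shown in this notebook):

```python
import csv, io

# hypothetical miniature of MainDictionary.txt: word <tab> (middle column) <tab> count
sample = 'the\tx\t500000\nzyzzyva\tx\t42\nriver\tx\t81234\n'
toy_dictionary = set()
for row in csv.reader(io.StringIO(sample), delimiter='\t'):
    if int(row[2]) >= 10000:   # same 10,000-occurrence cutoff as the cell above
        toy_dictionary.add(row[0])
print(sorted(toy_dictionary))  # ['river', 'the']
```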
inquirer = dict()
suffixes = dict()
suffixes['verb'] = ['s', 'es', 'ed', 'd', 'ing']
suffixes['noun'] = ['s', 'es']
allinquirerwords = set()
with open('../../lexicons/inquirerbasic.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
fields = reader.fieldnames[2:-2]
for field in fields:
inquirer[field] = set()
for row in reader:
term = row['Entry']
if '#' in term:
parts = term.split('#')
word = parts[0].lower()
sense = int(parts[1].strip('_ '))
partialsense = True
else:
word = term.lower()
sense = 0
partialsense = False
if sense > 1:
continue
# we're ignoring uncommon senses
pos = row['Othtags']
if 'Noun' in pos:
pos = 'noun'
elif 'SUPV' in pos:
pos = 'verb'
forms = {word}
if pos == 'noun' or pos == 'verb':
for suffix in suffixes[pos]:
if word + suffix in dictionary:
forms.add(word + suffix)
if pos == 'verb' and word.rstrip('e') + suffix in dictionary:
forms.add(word.rstrip('e') + suffix)
for form in forms:
for field in fields:
if len(row[field]) > 1:
inquirer[field].add(form)
allinquirerwords.add(form)
print('Inquirer loaded')
print('Total of ' + str(len(allinquirerwords)) + " words.")
Explanation: The next stage is to translate the Inquirer. It begins as a table where word senses are row labels, and the Inquirer categories are columns (except for two columns at the beginning and two at the end). This is, by the way, the "basic spreadsheet" described at this site:
http://www.wjh.harvard.edu/~inquirer/spreadsheet_guide.htm
I translate this into a dictionary where the keys are Inquirer categories, and the values are sets of words associated with each category.
But to do that, I have to do some filtering and expanding. Different senses of a word are broken out in the spreadsheet thus:
ABOUT#1
ABOUT#2
ABOUT#3
etc.
I need to separate the hashtag part. Also, because I don't want to allow rare senses of a word too much power, I ignore everything but the first sense of a word.
However, I also want to allow singular verb forms and plural nouns to count. So there's some code below that expands words by adding -s -ed, etc to the end. See the suffixes defined below for more details. Note that I use the English dictionary to determine which possible forms are real words.
End of explanation
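The suffix-expansion step in the cell above can be isolated into a small testable helper — same rules: try each suffix, for verbs also try the stem with a trailing 'e' removed, and keep only forms attested in the dictionary:

```python
def expand_forms(word, pos, dictionary, suffixes):
    """Return the inflected forms of `word` that appear in `dictionary`."""
    forms = {word}
    for suffix in suffixes.get(pos, []):
        if word + suffix in dictionary:
            forms.add(word + suffix)
        if pos == 'verb' and word.rstrip('e') + suffix in dictionary:
            forms.add(word.rstrip('e') + suffix)
    return forms

suffix_rules = {'verb': ['s', 'es', 'ed', 'd', 'ing'], 'noun': ['s', 'es']}
vocab = {'move', 'moves', 'moved', 'moving', 'mover'}
expand_forms('move', 'verb', vocab, suffix_rules)  # {'move', 'moves', 'moved', 'moving'}
```

Note that 'mover' is not generated, because '-er' is not among the suffix rules — the expansion is deliberately limited to simple verb and noun inflections.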
# the folder where wordcounts will live
# we're only going to load predictions
# that correspond to files located there
sourcedir = '../sourcefiles/'
docs = []
logistic = []
with open('../modeloutput/fullfiction.results.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
genre = row['realclass']
docid = row['volid']
if not os.path.exists(sourcedir + docid + '.tsv'):
continue
docs.append(row['volid'])
logistic.append(float(row['logistic']))
logistic = np.array(logistic)
numdocs = len(docs)
assert numdocs == len(logistic)
print("We have information about " + str(numdocs) + " volumes.")
Explanation: Load model predictions about volumes
The next step is to create some vectors that store predictions about volumes. In this case, these are predictions about the probability that a volume is fiction, rather than biography.
End of explanation
wordcounts = filecab.get_wordfreqs(sourcedir, '.tsv', docs)
Explanation: And get the wordcounts themselves
This cell of the notebook is very short (one line), but it takes a lot of time to execute. There's a lot of file i/o that happens inside the function get_wordfreqs, in the FileCabinet module, which is invoked here. We come away with a dictionary of wordcounts, keyed in the first instance by volume ID.
Note that these are normalized frequencies rather than the raw integer counts we had in the analogous notebook in chapter 1.
End of explanation
# Initialize empty category vectors
categories = dict()
for field in fields:
categories[field] = np.zeros(numdocs)
# Now fill them
for i, doc in enumerate(docs):
ctcat = Counter()
allcats = 0
for word, count in wordcounts[doc].items():
if word in dictionary:
allcats += count
if word not in allinquirerwords:
continue
for field in fields:
if word in inquirer[field]:
ctcat[field] += count
for field in fields:
categories[field][i] = ctcat[field] / (allcats + 0.00000001)
# Laplacian smoothing there to avoid div by zero, among other things.
# notice that, since these are normalized freqs, we need to use a very small decimal
# If these are really normalized freqs, it may not matter very much
# that we divide at all. The denominator should always be 1, more or less.
# But I'm not 100% sure about that.
if i % 100 == 1:
print(i, allcats)
Explanation: Now calculate the representation of each Inquirer category in each doc
We normalize by the total wordcount for a volume.
This cell also takes a long time to run. I've added a counter so you have some confidence that it's still running.
End of explanation
logresults = []
for inq_category in fields:
l = pearsonr(logistic, categories[inq_category])[0]
logresults.append((l, inq_category))
logresults.sort()
Explanation: Calculate correlations
Now that we have all the information, calculating correlations is easy. We iterate through Inquirer categories, in each case calculating the correlation between a vector of model predictions for docs, and a vector of category-frequencies for docs.
End of explanation
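For intuition about what `pearsonr(...)[0]` returns, here is the same statistic computed by hand on small synthetic vectors (the notebook itself uses scipy):

```python
import math

def pearson(x, y):
    """Plain-Python Pearson correlation (the first value pearsonr returns)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

pearson([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly correlated -> 1.0
pearson([1, 2, 3, 4], [8, 6, 4, 2])   # perfectly anti-correlated -> -1.0
```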
short2long = dict()
with open('../../lexicons/long_inquirer_names.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
short2long[row['short_name']] = row['long_name']
Explanation: Load expanded names of Inquirer categories
The terms used in the inquirer spreadsheet are not very transparent. DAV for instance is "descriptive action verbs." BodyPt is "body parts." To make these more transparent, I have provided expanded names for many categories that turned out to be relevant in the book, trying to base my description on the accounts provided here: http://www.wjh.harvard.edu/~inquirer/homecat.htm
We load these into a dictionary.
End of explanation
print('Printing the correlations of General Inquirer categories')
print('with the predicted probabilities of being fiction in allsubset2.csv:')
print()
print('First, top positive correlations: ')
print()
for prob, n in reversed(logresults[-15 : ]):
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
print()
print('Now, negative correlations: ')
print()
for prob, n in logresults[0 : 15]:
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
Explanation: Print results
I print the top 12 correlations and the bottom 12, skipping categories that are drawn from the "Laswell value dictionary." The Laswell categories are very finely discriminated (things like "enlightenment gain" or "power loss"), and I have little faith that they're meaningful. I especially doubt that they could remain meaningful when the Inquirer is used crudely as a source of wordlists.
End of explanation
Description:
Experience
Based on wordnet as ground truth, we tried to learn a classifier to detect antonymic relations between words (small != big / good != bad)
To do so, we will explore the cartesian product of
Step1: We can observe quite good f1-score on RandomForest with normalised projected cosine similarity.
Results are even better with non-bidirectional relations (bidi). This makes sense, since we can find several antonyms for one word | Python Code:
summaryDf = pd.DataFrame([extractSummaryLine(l) for l in open('../../data/learnedModel/anto/summary.txt').readlines()],
columns=['bidirectional', 'strict', 'clf', 'feature', 'post', 'precision', 'recall', 'f1'])
summaryDf.sort_values('f1', ascending=False)[:10]
Explanation: Experience
Based on wordnet as ground truth, we tried to learn a classifier to detect antonymic relations between words (small != big / good != bad)
To do so, we will explore the cartesian product of:
* simple / bidi: whether each adjective is considered to have only one antonym or not
* strict: try to compose missing concept
* randomForest / knn: knn lets us check whether there is anything consistent to learn; randomForest is a basic model, used as a first approach to learning the function
* feature: one of the features presented in the guided tour
* postFeature: any extra processing to apply to the feature extraction (like normalise)
We use 10-fold cross-validation.
Negative samples are generated by shuffling pairs.
Once you have downloaded the files, you can use this script to reproduce the experiment at home:
python experiment/trainAll_antoClf.py > ../data/learnedModel/anto/log.txt
Results
Here is the summary of the results we gathered.
You can find detailed reports in the logs.
End of explanation
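The negative sampling mentioned above (shuffling pairs) can be sketched as follows. The word pairs here are made up, and any shuffle that accidentally recreates a true pair is dropped:

```python
import random

def negative_pairs(pairs, seed=0):
    """Build fake (non-antonym) pairs by shuffling the second words."""
    rng = random.Random(seed)
    firsts = [a for a, b in pairs]
    seconds = [b for a, b in pairs]
    rng.shuffle(seconds)
    true_pairs = set(pairs)
    return [(a, b) for a, b in zip(firsts, seconds) if (a, b) not in true_pairs]

fakes = negative_pairs([('small', 'big'), ('good', 'bad'), ('hot', 'cold')])
```

Fixing the seed keeps the negative set reproducible across runs of the experiment.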
!python ../../toolbox/script/detailConceptPairClfError.py ../../data/voc/npy/wikiEn-skipgram.npy ../../data/learnedModel/anto/bidi__RandomForestClassifier_pCosSim_postNormalize.dill ../../data/wordPair/wordnetAnto.txt anto ../../data/wordPair/wordnetAnto_fake.txt notAnto
Explanation: We can observe quite good f1-score on RandomForest with normalised projected cosine similarity.
Results are even better with non-bidirectional relations (bidi). This makes sense, since we can find several antonyms for one word:
* small != big
* small != tall
Allowing concepts to be composed also seems to have a positive impact.
Study errors
Here are the details of:
* False positives - i.e., pairs considered antonyms but not included in wordnet
* False negatives - i.e., antonyms that were not detected
The false positives are especially interesting here...
End of explanation
Description:
Table of Contents
<a href='#motivation'>Motivation</a>
<a href='#constructor'>Constructing a dataset</a>
<a href='#attributes'>Attributes</a>
<a href='#access'>Accessing samples</a>
<a href='#iteration'>Iteration over samples</a>
<a href='#subsetselection'>Subset selection</a>
<a href='#serialization'>Saving/reloading a dataset (Serialization)</a>
<a href='#arithmetic'> Combining multiple datasets and arithmetic on useful subsets within datasets </a>
<a href='#portability'>Portability (e.g. with sklearn)</a>
<a id='motivation'></a>
Research data management for medical and machine learning datasets with pyradigm
Data structures provided by the pyradigm library are well suited to medical data, where each sample/subject/unit needs to be uniquely identified with a single identifier (denoted as subject ID or something similar depending on the field) that links multiple tables containing diverse types of data, such as extracted features and prediction targets for machine learning analyses, along with covariates and demographics, while also dealing with multiple data types (numerical, categorical, etc.).
Key-level correspondence across data, targets (either as labels 1 or 2, or as class names such as healthy and disease) and related tables (demographics and other meta data) is essential to maintain integrity of the datasets themselves as well as the subsequent analyses. This would help improve the provenance, as it is easy to encapsulate the relevant info and necessary history in the user-defined attribute and meta data features.
To provide a concrete example, let's look at how a machine learning dataset is handled traditionally.
You typically have a matrix X of size n x p (n samples and p features), and a target vector y containing the target values (numerical or categorical or multi-output arrays). These X and y serve as training (and test set) for a classifier (like SVM) to fit the data X to match y as accurately as possible.
Let's get a little more concrete, and produce toy data for X and y
Step1: Almost all the machine learning toolboxes take their input in this form
Step2: The only major difference between the above two data structures is the data type of target i.e. categorical/discrete vs. continuous values. The ClassificationDataset accepts only categorical values for its target, often as strings ('healthy', 'disease', 'cat', 'chair' etc), whereas the RegressionDataset allows continuous floating point numbers (e.g. age, temperature, salary, weight etc). Depending on the target data type, the analyses (esp. in machine learning) changes dramatically in terms of which predictive model is employed, how they are optimized and what performance metrics are considered etc.
Let's get started with a classification dataset, which can be instantiated as shown below. We also give it a simple description
Step3: These datasets have customised and convenient display methods, summarizing the important info. For example, just printing the instance shows its content summary (typically the count of samplets per target and a description, if any).
Step4: You can see the dataset has some description attached to it; however, we can also see it is empty. This can be verified in a boolean context as shown below
Step7: Let's add samples to this dataset which is when this dataset implementation becomes really handy. Before we do that, we will define some convenience routines defined to just illustrate a simple yet common use of this dataset.
Step8: So now we have IO routines to read the data for us. Let's define where the data will come from
Step9: This would obviously change for your applications, but this has sufficient properties to illustrate the point.
Let's look at what methods this dataset offers us
Step11: That's a lot of methods and attributes to use, organize and retrieve datasets.
So let's go through them by their usage sections.
Constructor
You can see there are a few methods such as .add_samplet(), .get_subset(), etc. The most often used method is .add_samplet(), which is key to constructing a pyradigm dataset.
To construct a dataset, one typically starts with a list of subject IDs to be added - we create a few random lists, each to be considered as a separate class
Step12: The dataset can be populated by adding all subjects belonging to a one class (referred to by cls_id here), done by adding one samplet at a time, using the .add_samplet() method. Let's go ahead and add some samplets based on id lists we just created.
Note
Step13: Nice. Isn't it?
So what's nice about this, you say? The simple fact that you are constructing a dataset as you read the data in its most elemental form (in the units of the dataset such as the subject ID in our neuroimaging application). You're done as soon as you're done reading the features from disk.
What's more - you can inspect the dataset in an intuitive manner, as shown below
Step14: Even better, right? No more coding of several commands to get the complete and concise sense of the dataset.
<a id='attributes'></a>
Convenient attributes
If you would like, you can always get more specific information, such as
Step15: If you'd like to take a look data inside for few subjects - shall we call it a glance?
Step16: We can control the number of items to glance, by passing a number to dataset.glance() method
Step17: Or you may be wondering what are the subject IDs in the dataset.. here they are
Step18: These datasets offer all convenient methods and attributes you need. Besides it is quite easy to extend them to fit your needs and improve your workflow.
<a id='access'></a>
Accessing samples
Thanks to its design, data for a given samplet 'M0299' can simply be obtained by
Step19: Like a Python dict, it raises an error if the key is not in the dataset
Step20: A more graceful handling would be to use dataset.get to control what value to be returned in case the requested id is not found in the dataset.
Step21: <a id='iteration'></a>
Iteration
Thanks to builtin iteration, we can easily iterate over all the samplets in the dataset
Step22: Did you see that? It's so intuitive and natural! Such a clean traversal of dataset.
Thanks to the choice of the OrderedDict() to represent the data, classes and labels underneath, the order of sample addition is retained. Hence the correspondence across samples in the dataset not only key-wise (by the sample id), but also index-wise.
<a id='transform'></a>
Subject-wise transform
Quite often, we are interested in computing some statistics on data for a given subject (such as mean, or ROI-wise median). Typically this requires a loop, with some computation and organizing it in a new dataset! A simple routine pattern of usage, but can't avoided if you are still fiddling with representing your dataset in medieval matrices!
Step23: we can easily traverse the dataset to check the result
Step24: As the transform accepts an arbitrary callable, we could do many more sophisticated things, such as access the subset of features e.g. cortical thickness for a particular region of interest (say posterior cingulate gyrus).
Step25: Using this "mask" function, we can easily obtain features for an ROI
Step26: We can verify that the new dataset does indeed have only 3 features, for the same subjects/classes
Step27: Let's make a bar plot with the just computed numbers
Step28: Remember as the original source of data was random, this has no units, property or meaning!
<a id='subsetselection'></a>
Subset selection
In addition to the structured way of obtaining the various properties of this dataset, this implementation really will come in handy when you have to slice and dice the dataset (with large number of classes and features) into smaller subsets (e.g. for binary classification). Let's see how we can retrieve the data for a single class
Step29: That's it, obtaining the data for a given class is a simple call away.
Now let's see what it looks like
Step30: Even with updated description automatically, to indicate its history. Let's see some data from controls
Step31: We can also query a random subset of samples for manual inspection or cross-validation purposes. For example
Step32: You can see which samplets were selected
Step33: You can verify that it is indeed random by issuing another call
Step34: Let's see how we can retrieve specific samples by their IDs (for which there are many use cases)
Step35: So as simple as that.
Cross-validation
If you would like to develop a variant of cross-validation, and need to obtain a random split of the dataset to obtain training and test sets, it is as simple as
Step36: This method returns two sets of sample ids corresponding to training set (which 50% of samples from all classes in the dataset) and the rest in test_set. Let's see what they have
Step37: We can also get a train/test split by specifying an exact number of subjects we would like from each class (e.g. when you would like to avoid class imbalance in the training set)
Step38: Let's see what the training set contains - we expect 3*3 =9 subjects
Step39: We can indeed verify that is the case, by creating a new smaller dataset from that list of ids and getting a summary
Step40: Another programmatic way to look into different classes is this
Step41: which returns all the classes that you could iterate over.
Using these two lists, we can easily obtain subset datasets, as illustrated below.
Step42: How about selecting a subset of features from all samples?
Step43: Great. Isn't it? You can also see the two-time-point history (initial subset in classes, followed by a subset in features).
<a id='serialization'></a>
Serialization
Once you have this dataset, you can save and load these trivially using your favourite serialization module. Let's do some pickling
Step44: That's it - it is saved.
Let's reload it from disk and make sure we can indeed retrieve it
Step45: We can check to see they are indeed one and the same
Step46: <a id='arithmetic'></a>
Dataset Arithmetic
You might wonder how can you combine two different types of features ( thickness and shape ) from the dataset. Piece of cake, see below ...
To concatenate two datasets, first we make a second dataset
Step47: How can you check if they are "functionally identical"? As in same keys, same data and classes for each key... Easy
Step48: Now let's try the arithmetic
Step49: Great. The add method recognized the identical set of keys and performed a horiz cat, as can be noticed by the twice the number of features in the combined dataset
Step50: We can also do some removal in similar fashion
Step51: Data structure is even producing a warning to let you know the resulting output would be empty! We can verify that
Step52: <a id='portability'></a>
Portability
This is all well and good. How does it interact with other packages out there, you might ask? It is as simple as you can imagine | Python Code:
import sys, os
import numpy as np
import matplotlib
%matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
n = 10 # number of samples
p = 3 # number of features
X = np.random.random([n, p]) # random data for illustration
y = [1]*5 + [2]*5 # random labels ...
np.set_printoptions(precision=2) # save some screen space
print('X : \n{}'.format(X))
print('y : \n{}'.format(y))
Explanation: Table of Contents
<a href='#motivation'>Motivation</a>
<a href='#constructor'>Constructing a dataset</a>
<a href='#attributes'>Attributes</a>
<a href='#access'>Accessing samples</a>
<a href='#iteration'>Iteration over samples</a>
<a href='#subsetselection'>Subset selection</a>
<a href='#serialization'>Saving/reloading a dataset (Serialization)</a>
<a href='#arithmetic'> Combining multiple datasets and arithmetic on useful subsets within datasets </a>
<a href='#portability'>Portability (e.g. with sklearn)</a>
<a id='motivation'></a>
Research data management for medical and machine learning datasets with pyradigm
Data structures provided by the pyradigm library are well suited for medical data, where each sample/subject/unit needs to be uniquely identified with a single identifier (denoted as subject ID or something similar depending on the field) that links multiple tables containing diverse types of data: features extracted and prediction targets for machine learning analyses, covariates and demographics, often mixing multiple data types (numerical, categorical etc).
Key-level correspondence across data, targets (either as labels 1 or 2, or as class names such as healthy and disease) and related tables (demographics and other meta data) is essential to maintain integrity of the datasets themselves as well as the subsequent analyses. This would help improve the provenance, as it is easy to encapsulate the relevant info and necessary history in the user-defined attribute and meta data features.
To provide a concrete example, let's look at how a machine learning dataset is handled traditionally.
You typically have a matrix X of size n x p (n samples and p features), and a target vector y containing the target values (numerical or categorical or multi-output arrays). These X and y serve as training (and test set) for a classifier (like SVM) to fit the data X to match y as accurately as possible.
Let's get a little more concrete, and produce toy data for X and y:
End of explanation
from pyradigm import ClassificationDataset as ClfDataset
from pyradigm import RegressionDataset as RegrDataset
Explanation: Almost all the machine learning toolboxes take their input in this form: X and y, regardless of the original source that produced these features in the first place.
This might be fine if all you ever wanted to do is to extract some features, do some machine learning one-time and dispose these features away!
But this is almost never the case!
Because it doesn't simply end there.
At a minimum, you often need to dissect the results and link them across different tables e.g. to figure out
* which samples are misclassified, that can only be queried with their identifiers and not simply their row indices in X?
* what are the characteristics of those samples?
* what targets/labels/classes do they belong to?
And all this info needs to be obtained
* without having to write lots of code connecting a few non-obvious links to disparate sources of data (numerical features X, and sample identifiers in a CSV file) to find the relevant info
* without having to track down who or which method originally produced these features
* how the previous colleague or student organized the whole dataset, if you haven't generated the features yourself from scratch
And if you are like me, you would be thinking about how would you organize your workflow such that the aforementioned tasks can be accomplished with ease.
This pyradigm data structure is the result of years of refinement to conquer the above complex workflow and related scenarios with ease. It always organizes the extracted features in a dictionary keyed by samplet ID, along with other important info such as target values, other attributes and associated meta-data. This, by definition, preserves the integrity of the data (it becomes impossible to incorrectly label samples, etc.).
With user-defined attributes, this data structure allows the capture of provenance info in a way that is meaningful to the user.
NOTE: we define a samplet to be a single row in X, to distinguish it from a sample, a term that is often used ambiguously to refer both to a single instance and to an entire "sample" / dataset.
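The core idea can be sketched in a few lines of plain Python: every piece of information is keyed by the same samplet ID. This is a simplified illustration of the concept, not pyradigm's actual implementation:

```python
# A stripped-down sketch of ID-keyed organization (illustrative only;
# pyradigm adds validation, provenance tracking, subset operations, etc.)
data    = {'sub001': [0.12, 0.48], 'sub002': [0.33, 0.91]}  # ID -> feature vector
targets = {'sub001': 'Control',    'sub002': 'Alzheimer'}   # ID -> class label

# Integrity by construction: any result for 'sub002' traces back to its
# features, label and meta data through the same key.
subj = 'sub002'
print(subj, targets[subj], data[subj])
```

Because rows are never addressed by bare integer indices, samples cannot be silently misaligned across tables.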
An example application is shown below, touching upon the following topics:
<a href='#motivation'>Motivation</a>
<a href='#constructor'>Constructing a dataset</a>
<a href='#attributes'>Attributes</a>
<a href='#access'>Accessing samples</a>
<a href='#iteration'>Iteration over samples</a>
<a href='#subsetselection'>Subset selection</a>
<a href='#serialization'>Saving/reloading a dataset (Serialization)</a>
<a href='#arithmetic'> Combining multiple datasets and arithmetic on useful subsets within datasets </a>
<a href='#portability'>Portability (e.g. with sklearn)</a>
Importing the necessary modules and our fancy class definition:
End of explanation
dataset = ClfDataset()
dataset.description = 'ADNI1: cortical thickness features from Freesurfer v6.0, QCed.'
Explanation: The only major difference between the above two data structures is the data type of the target, i.e. categorical/discrete vs. continuous values. The ClassificationDataset accepts only categorical values for its target, often as strings ('healthy', 'disease', 'cat', 'chair' etc), whereas the RegressionDataset allows continuous floating point numbers (e.g. age, temperature, salary, weight etc). Depending on the target data type, the analyses (esp. in machine learning) change dramatically in terms of which predictive model is employed, how they are optimized and what performance metrics are considered etc.
Let's get started with a classification dataset, which can be instantiated as shown below. We also give it a simple description:
End of explanation
dataset
Explanation: These datasets have customised and convenient display methods, summarizing the important info. For example, just printing the instance shows a content summary (typically the count of samplets per target, and a description if any).
End of explanation
bool(dataset)
Explanation: You can see the dataset has a description attached to it; however, we can also see that it is empty. This can be verified in a boolean context as shown below:
End of explanation
def read_thickness(path):
    """Dummy function to mimic a data reader."""
    # in your actual routine, this might be:
    # pysurfer.read_thickness(path).values()
    return np.random.random(2)
def get_features(work_dir, subj_id):
    """Returns the whole brain cortical thickness for a given subject ID."""
    # extension to identify the data file; this could be .curv, or anything else you choose
    ext_thickness = '.thickness'
    thickness = dict()
    for hemi in ['lh', 'rh']:
        path_thickness = os.path.join(work_dir, subj_id, hemi + ext_thickness)
        thickness[hemi] = read_thickness(path_thickness)
    # concatenating them to build a whole brain feature set
    thickness_wb = np.concatenate([thickness['lh'], thickness['rh']])
    return thickness_wb
Explanation: Let's add samples to this dataset, which is where this implementation becomes really handy. Before we do that, we define some convenience routines to illustrate a simple yet common use of this dataset.
End of explanation
work_dir = '/project/ADNI/FreesurferThickness_v4p3'
class_set = ['Control', 'Alzheimer', 'MCI']
class_sizes = [7, 8, 6]
Explanation: So now we have IO routines to read the data for us. Let's define where the data will come from:
End of explanation
[mm for mm in dir(dataset) if not mm.startswith('_') ]
Explanation: This would obviously change for your applications, but this has sufficient properties to illustrate the point.
Let's look at what methods this dataset offers us:
End of explanation
import random
random.seed(42)
def get_id_list(class_name, size=10):
    """Generates a random ID list."""
    return ['{}{:04d}'.format(class_name[0], np.random.randint(50*size)) for _ in range(size)]
Explanation: That's a lot of methods and attributes for building, organizing and retrieving datasets.
So let's go through them by their usage sections.
Constructor
You can see there few methods such as .add_samplet(), .get_subset() etc. The most often used method is the .add_samplet(), which is key to constructing a pyradigm dataset.
To construct a dataset, one typically starts with a list of subject IDs to be added - we create a few random lists, each to be considered as a separate class:
End of explanation
for class_index, class_id in enumerate(class_set):
print('Adding class {:>5}'.format(class_id))
target_list = get_id_list(class_id,class_sizes[class_index])
for subj_id in target_list:
print('\t reading subject {:>7}'.format(subj_id))
thickness_wb = get_features(work_dir, subj_id)
# adding the sample to the dataset
dataset.add_samplet(subj_id, thickness_wb, class_id)
Explanation: The dataset can be populated by adding all subjects belonging to a one class (referred to by cls_id here), done by adding one samplet at a time, using the .add_samplet() method. Let's go ahead and add some samplets based on id lists we just created.
Note: samplet here refers to a single row in the feature matrix X. This new term samplet is defined to distinguish individual row elements of X from X itself and to minimize confusion.
End of explanation
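In practice the IDs and labels often live in a CSV table. Below is a sketch of the same add_samplet loop driven by such a table; the column names and the read_features helper are hypothetical stand-ins, not part of pyradigm:

```python
import csv, io

# Hypothetical subject table; in practice this would come from open('subjects.csv')
table = io.StringIO("subject_id,diagnosis\nC0001,Control\nA0007,Alzheimer\n")

def read_features(subj_id):
    # stand-in for your actual feature reader (e.g. get_features above)
    return [0.0, 0.0, 0.0]

rows = list(csv.DictReader(table))
for row in rows:
    features = read_features(row['subject_id'])
    # with a pyradigm dataset in scope, this is where you would call:
    # dataset.add_samplet(row['subject_id'], features, row['diagnosis'])
    print(row['subject_id'], row['diagnosis'], len(features))
```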
dataset
Explanation: Nice. Isn't it?
So what's nice about this, you say? The simple fact that you are constructing a dataset as you read the data in its most elemental form (in the units of the dataset such as the subject ID in our neuroimaging application). You're done as soon as you're done reading the features from disk.
What's more - you can inspect the dataset in an intuitive manner, as shown below:
End of explanation
dataset.num_samplets
dataset.num_features
dataset.target_set
dataset.target_sizes
dataset.target_sizes['Control']
Explanation: Even better, right? No more coding of several commands to get the complete and concise sense of the dataset.
<a id='attributes'></a>
Convenient attributes
If you would like, you can always get more specific information, such as:
End of explanation
dataset.glance()
Explanation: If you'd like to take a look at the data inside for a few subjects - shall we call it a glance?
End of explanation
dataset.glance(2)
Explanation: We can control the number of items to glance at by passing a number to the dataset.glance() method:
End of explanation
dataset.samplet_ids
Explanation: Or you may be wondering what are the subject IDs in the dataset.. here they are:
End of explanation
dataset['M0022']
Explanation: These datasets offer all the convenient methods and attributes you need. Besides, it is quite easy to extend them to fit your needs and improve your workflow.
<a id='access'></a>
Accessing samples
Thanks to its design, data for a given samplet, say 'M0022', can simply be obtained by:
End of explanation
dataset['dlfjdjf']
Explanation: Like a Python dict, it raises an error if the key is not in the dataset:
End of explanation
dataset.get('dkfjd', np.nan)
Explanation: A more graceful handling would be to use dataset.get to control what value to be returned in case the requested id is not found in the dataset.
End of explanation
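The same .get pattern scales to batch lookups where you also want to know which IDs were missing; here is a sketch of that pattern on a plain dict stand-in for the dataset:

```python
import math

store = {'C0001': 1.5, 'A0007': 2.5}          # stand-in for a dataset
wanted = ['C0001', 'no_such_id', 'A0007']

found = {sid: store.get(sid, math.nan) for sid in wanted}
missing = [sid for sid in wanted if sid not in store]
print(found)     # missing IDs come back as nan instead of raising
print(missing)   # ['no_such_id']
```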
for samplet, features in dataset:
    print("{} : {:>10} : {}".format(samplet, dataset.targets[samplet], features))
Explanation: <a id='iteration'></a>
Iteration
Thanks to builtin iteration, we can easily iterate over all the samplets in the dataset:
End of explanation
mean_data = dataset.transform(np.mean)
mean_data.description = 'mean values per subject'
mean_data
Explanation: Did you see that? It's so intuitive and natural! Such a clean traversal of the dataset.
Thanks to the choice of the OrderedDict() to represent the data, classes and labels underneath, the order of sample addition is retained. Hence the correspondence across samples in the dataset holds not only key-wise (by the sample id), but also index-wise.
<a id='transform'></a>
Subject-wise transform
Quite often, we are interested in computing some statistics on the data for a given subject (such as the mean, or an ROI-wise median). Typically this requires a loop, with some computation and organizing of the results in a new dataset! A simple, routine pattern of usage, but one that can't be avoided if you are still fiddling with representing your dataset in medieval matrices! :)
If you organized your dataset in a pyradigm, such computation is trivial, thanks to the builtin implementation of the transform method. The mean value for each subject can be computed and organized in a new dataset with a single, intuitive line:
End of explanation
for samplet, val in mean_data:
print("{} : {:>10} : {:.3f}".format(samplet, mean_data.targets[samplet], val))
Explanation: we can easily traverse the dataset to check the result:
End of explanation
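Under the hood, transform amounts to exactly the per-samplet loop it replaces. A sketch of that pattern on plain dicts (illustrative only, not pyradigm's internal code):

```python
# What dataset.transform(func) saves you from writing by hand
data = {'s1': [1.0, 3.0], 's2': [2.0, 6.0]}   # ID -> feature vector

def mean(values):
    return sum(values) / len(values)

transformed = {sid: mean(feats) for sid, feats in data.items()}
print(transformed)   # {'s1': 2.0, 's2': 4.0}
```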
# let's make a toy function to return the indices for the ROI
def get_ROI_indices(x): return x[:3]
Explanation: As transform accepts an arbitrary callable, we can do many more sophisticated things, such as accessing a subset of features, e.g. cortical thickness for a particular region of interest (say the posterior cingulate gyrus).
End of explanation
pcg = dataset.transform(get_ROI_indices)
Explanation: Using this "mask" function, we can easily obtain features for an ROI
End of explanation
pcg
pcg.num_features
Explanation: We can verify that the new dataset does indeed have only 3 features, for the same subjects/classes:
End of explanation
data, lbl, keys = pcg.data_and_targets()
n, bins, patches = plt.hist(data)
Explanation: Let's make a bar plot with the just computed numbers:
End of explanation
ctrl = dataset.get_class('Control')
Explanation: Remember: as the original source of data was random, this has no units, properties or meaning!
<a id='subsetselection'></a>
Subset selection
In addition to the structured way of obtaining the various properties of this dataset, this implementation really will come in handy when you have to slice and dice the dataset (with large number of classes and features) into smaller subsets (e.g. for binary classification). Let's see how we can retrieve the data for a single class:
End of explanation
ctrl
Explanation: That's it, obtaining the data for a given class is a simple call away.
Now let's see what it looks like:
End of explanation
ctrl.glance(2)
Explanation: Even with updated description automatically, to indicate its history. Let's see some data from controls:
End of explanation
random_subset = dataset.random_subset(perc_in_class=0.3)
random_subset
Explanation: We can also query a random subset of samples for manual inspection or cross-validation purposes. For example:
End of explanation
random_subset.samplet_ids
Explanation: You can see which samplets were selected:
End of explanation
# supplying a new seed every time to ensure randomization
from datetime import datetime
dataset.random_subset(perc_in_class=0.3).samplet_ids
Explanation: You can verify that it is indeed random by issuing another call:
End of explanation
data = dataset.get_subset(dataset.samplet_ids[1:20])
data
Explanation: Let's see how we can retrieve specific samples by their IDs (for which there are many use cases):
End of explanation
train_set, test_set = dataset.train_test_split_ids( train_perc = 0.5)
Explanation: So as simple as that.
Cross-validation
If you would like to develop a variant of cross-validation, and need to obtain a random split of the dataset to obtain training and test sets, it is as simple as:
End of explanation
train_set, test_set
Explanation: This method returns two sets of sample ids, corresponding to the training set (which holds 50% of the samples from each class in the dataset), with the rest in test_set. Let's see what they have:
End of explanation
train_set, test_set = dataset.train_test_split_ids( count_per_class = 3)
Explanation: We can also get a train/test split by specifying an exact number of subjects we would like from each class (e.g. when you would like to avoid class imbalance in the training set):
End of explanation
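The count-per-class logic itself is straightforward. Here is a stdlib-only sketch of such a stratified split (the IDs below are hypothetical, and this is not pyradigm's internal code):

```python
import random

random.seed(42)   # for reproducibility of this sketch
ids_by_class = {'Control':   ['C1', 'C2', 'C3', 'C4', 'C5'],
                'Alzheimer': ['A1', 'A2', 'A3', 'A4']}
count_per_class = 3

train_ids, test_ids = [], []
for cls, ids in ids_by_class.items():
    shuffled = ids[:]            # shuffle a copy, keep the original intact
    random.shuffle(shuffled)
    train_ids += shuffled[:count_per_class]
    test_ids  += shuffled[count_per_class:]

print(len(train_ids), len(test_ids))   # 6 train (3 per class), 3 test
```

Taking a fixed count from each class, rather than a percentage, is what guards against class imbalance in the training set.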
train_set
Explanation: Let's see what the training set contains - we expect 3*3 = 9 subjects:
End of explanation
training_dataset = dataset.get_subset(train_set)
training_dataset
Explanation: We can indeed verify that is the case, by creating a new smaller dataset from that list of ids and getting a summary:
End of explanation
target_set, target_sizes = training_dataset.summarize()
target_set, target_sizes
Explanation: Another programmatic way to look into different classes is this:
End of explanation
dataset
binary_dataset = dataset.get_class(['Control','Alzheimer'])
binary_dataset
Explanation: which returns all the classes that you could iterate over.
Using these two lists, we can easily obtain subset datasets, as illustrated below.
End of explanation
binary_dataset.get_feature_subset(range(2))
Explanation: How about selecting a subset of features from all samples?
End of explanation
from pathlib import Path
out_file = Path('.') / 'Freesurfer_thickness_v4p3.PyradigmDataset.pkl'
binary_dataset.save(out_file)
Explanation: Great. Isn't it? You can also see the two-time-point history (initial subset in classes, followed by a subset in features).
<a id='serialization'></a>
Serialization
Once you have this dataset, you can save and load these trivially using your favourite serialization module. Let's do some pickling:
End of explanation
reloaded = ClfDataset(out_file) # another form of the constructor!
reloaded
Explanation: That's it - it is saved.
Let's reload it from disk and make sure we can indeed retrieve it:
End of explanation
binary_dataset == reloaded
Explanation: We can check to see they are indeed one and the same:
End of explanation
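The plain pickle module works just as well, if you prefer byte strings or custom file handling. A round-trip sketch on a dict stand-in (any picklable dataset object behaves the same way):

```python
import pickle

dataset_like = {'ids': ['s1', 's2'],
                'targets': {'s1': 'Control', 's2': 'MCI'}}
blob = pickle.dumps(dataset_like)     # or pickle.dump(obj, open(path, 'wb'))
roundtrip = pickle.loads(blob)
print(roundtrip == dataset_like)      # True: a faithful round trip
```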
dataset_two = ClfDataset(in_dataset=dataset) # yet another constructor: in its copy form!
Explanation: <a id='arithmetic'></a>
Dataset Arithmetic
You might wonder how you can combine two different types of features (thickness and shape) from the dataset. Piece of cake, see below ...
To concatenate two datasets, first we make a second dataset:
End of explanation
dataset_two == dataset
Explanation: How can you check if they are "functionally identical"? As in same keys, same data and classes for each key... Easy:
End of explanation
combined = dataset + dataset_two
Explanation: Now let's try the arithmetic:
End of explanation
combined
Explanation: Great. The add method recognized the identical set of keys and performed a horizontal concatenation, as can be seen from the doubled number of features in the combined dataset:
End of explanation
smaller = combined - dataset
Explanation: We can also do some removal in similar fashion:
End of explanation
bool(smaller)
Explanation: The data structure even produces a warning to let you know the resulting output would be empty! We can verify that:
End of explanation
from sklearn import svm
clf = svm.SVC(gamma=0.001, C=100.)
data_matrix, target, sample_ids = binary_dataset.data_and_targets()
clf.fit(data_matrix, target)
Explanation: <a id='portability'></a>
Portability
This is all well and good. How does it interact with other packages out there, you might ask? It is as simple as you can imagine:
End of explanation |
6,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ATM 623
Step1: Contents
The observed seasonal cycle from NCEP Reanalysis data
Analytical toy model of the seasonal cycle
Exploring the amplitude of the seasonal cycle with an EBM
The seasonal cycle for a planet with 90º obliquity
<a id='section1'></a>
1. The observed seasonal cycle from NCEP Reanalysis data
Look at the observed seasonal cycle in the NCEP reanalysis data.
Read in the necessary data from the online server.
The catalog is here
Step2: Make two maps
Step3: Make a contour plot of the zonal mean temperature as a function of time
Step4: <a id='section2'></a>
2. Analytical toy model of the seasonal cycle
What factors determine the above pattern of seasonal temperatures? How large is the winter-to-summer variation in temperature? What is its phasing relative to the seasonal variations in insolation?
We will start to examine this in a very simple zero-dimensional EBM.
Suppose the seasonal cycle of insolation at a point is
$$ Q = Q^* \sin\omega t + Q_0$$
where $\omega = 2\pi ~ \text{year}^{-1}$, $Q_0$ is the annual mean insolation, and $Q^*$ is the amplitude of the seasonal variations.
Here $\omega ~ t=0$ is spring equinox, $\omega~t = \pi/2$ is summer solstice, $\omega~t = \pi$ is fall equinox, and $ \omega ~t = 3 \pi/2$ is winter solstice.
Now suppose the temperature is governed by
$$ C \frac{d T}{d t} = Q - (A + B~T) $$
so that we have a simple model
$$ C \frac{d T}{d t} = Q^* \sin\omega t + Q_0 - (A + B~T) $$
We want to ask two questions
Step5: The blue line shows the amplitude of the seasonal cycle of temperature, expressed as a fraction of its maximum value $\frac{Q^*}{B}$ (the value that would occur if the system had zero heat capacity so that temperatures were always in radiative equilibrium with the instantaneous insolation).
The red line shows the phase lag (in degrees) of the temperature cycle relative to the insolation cycle.
The vertical black line indicates 2.5 meters of water, which is the heat capacity of the atmosphere and thus our effective lower bound on total column heat capacity.
The seasonal phase shift
Even for the driest surfaces the phase shift is about 45º and the amplitude is half of its theoretical maximum. For most wet surfaces the cycle is damped out and delayed further.
Of course we are already familiar with this phase shift from our day-to-day experience. Our calendar says that summer "begins" at the solstice and lasts until the equinox.
Step6: The blue curve in this figure is in phase with the insolation.
<a id='section3'></a>
3. Exploring the amplitude of the seasonal cycle with an EBM
Something important is missing from this toy model
Step7: Notice that this model has an insolation subprocess called DailyInsolation, rather than AnnualMeanInsolation. These should be fairly self-explanatory.
Step8: All models should have the same annual mean temperature
Step9: There is no automatic function in the climlab code to keep track of minimum and maximum temperatures (though we might add that in the future!)
Instead we'll step through one year "by hand" and save all the temperatures.
Step10: Make a figure to compare the observed zonal mean seasonal temperature cycle to what we get from the EBM with different heat capacities
Step11: Which one looks more realistic? Depends a bit on where you look. But overall, the observed seasonal cycle matches the 10 meter case best. The effective heat capacity governing the seasonal cycle of the zonal mean temperature is closer to 10 meters of water than to either 2 or 50 meters.
Making an animation of the EBM solutions
Let's animate the seasonal cycle of insolation and temperature in our models with the three different water depths
Step12: <a id='section4'></a>
4. The seasonal cycle for a planet with 90º obliquity
The EBM code uses our familiar insolation.py code to calculate insolation, and therefore it's easy to set up a model with different orbital parameters. Here is an example with very different orbital parameters
Step13: Repeat the same procedure to calculate and store temperature throughout one year, after letting the models run out to equilibrium.
Step14: And plot the seasonal temperature cycle same as we did above
Step15: Note that the temperature range is much larger than for the Earth-like case above (but same contour interval, 10 degC).
Why is the temperature so uniform in the north-south direction with 50 meters of water?
To see the reason, let's plot the annual mean insolation at 90º obliquity, alongside the present-day annual mean insolation
Step16: Though this is a bit misleading, because our model prescribes an increase in albedo from the equator to the pole. So the absorbed shortwave gradients look even more different.
If you are interested in how ice-albedo feedback might work on a high-obliquity planet with a cold equator, then I suggest you take a look at this paper | Python Code:
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 19: Modeling the seasonal cycle of surface temperature
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import climlab
from climlab import constants as const
import cartopy.crs as ccrs # use cartopy to make some maps
## The NOAA ESRL server is shutdown! January 2019
ncep_url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/"
ncep_Ts = xr.open_dataset(ncep_url + "surface_gauss/skt.sfc.mon.1981-2010.ltm.nc", decode_times=False)
#url = "http://apdrc.soest.hawaii.edu:80/dods/public_data/Reanalysis_Data/NCEP/NCEP/clima/"
#ncep_Ts = xr.open_dataset(url + 'surface_gauss/skt')
lat_ncep = ncep_Ts.lat; lon_ncep = ncep_Ts.lon
Ts_ncep = ncep_Ts.skt
print( Ts_ncep.shape)
Explanation: Contents
The observed seasonal cycle from NCEP Reanalysis data
Analytical toy model of the seasonal cycle
Exploring the amplitude of the seasonal cycle with an EBM
The seasonal cycle for a planet with 90º obliquity
<a id='section1'></a>
1. The observed seasonal cycle from NCEP Reanalysis data
Look at the observed seasonal cycle in the NCEP reanalysis data.
Read in the necessary data from the online server.
The catalog is here: http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/catalog.html
End of explanation
maxTs = Ts_ncep.max(dim='time')
minTs = Ts_ncep.min(dim='time')
meanTs = Ts_ncep.mean(dim='time')
fig = plt.figure( figsize=(16,6) )
ax1 = fig.add_subplot(1,2,1, projection=ccrs.Robinson())
cax1 = ax1.pcolormesh(lon_ncep, lat_ncep, meanTs, cmap=plt.cm.seismic , transform=ccrs.PlateCarree())
cbar1 = plt.colorbar(cax1)
ax1.set_title('Annual mean surface temperature ($^\circ$C)', fontsize=14 )
ax2 = fig.add_subplot(1,2,2, projection=ccrs.Robinson())
cax2 = ax2.pcolormesh(lon_ncep, lat_ncep, maxTs - minTs, transform=ccrs.PlateCarree() )
cbar2 = plt.colorbar(cax2)
ax2.set_title('Seasonal temperature range ($^\circ$C)', fontsize=14)
for ax in [ax1,ax2]:
#ax.contour( lon_cesm, lat_cesm, topo.variables['LANDFRAC'][:], [0.5], colors='k');
#ax.set_xlabel('Longitude', fontsize=14 ); ax.set_ylabel('Latitude', fontsize=14 )
ax.coastlines()
Explanation: Make two maps: one of annual mean surface temperature, another of the seasonal range (max minus min).
End of explanation
Tmax = 65; Tmin = -Tmax; delT = 10
clevels = np.arange(Tmin,Tmax+delT,delT)
fig_zonobs, ax = plt.subplots( figsize=(10,6) )
cax = ax.contourf(np.arange(12)+0.5, lat_ncep,
Ts_ncep.mean(dim='lon').transpose(), levels=clevels,
cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax)
ax.set_xlabel('Month', fontsize=16)
ax.set_ylabel('Latitude', fontsize=16 )
cbar = plt.colorbar(cax)
ax.set_title('Zonal mean surface temperature (degC)', fontsize=20)
Explanation: Make a contour plot of the zonal mean temperature as a function of time
End of explanation
omega = 2*np.pi / const.seconds_per_year
omega
B = 2.
Hw = np.linspace(0., 100.)
Ctilde = const.cw * const.rho_w * Hw * omega / B
amp = 1./((Ctilde**2+1)*np.cos(np.arctan(Ctilde)))
Phi = np.arctan(Ctilde)
color1 = 'b'
color2 = 'r'
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
ax1.plot(Hw, amp, color=color1)
ax1.set_xlabel('water depth (m)', fontsize=14)
ax1.set_ylabel('Seasonal amplitude ($Q^* / B$)', fontsize=14, color=color1)
for tl in ax1.get_yticklabels():
tl.set_color(color1)
ax2 = ax1.twinx()
ax2.plot(Hw, np.rad2deg(Phi), color=color2)
ax2.set_ylabel('Seasonal phase shift (degrees)', fontsize=14, color=color2)
for tl in ax2.get_yticklabels():
tl.set_color(color2)
ax1.set_title('Dependence of seasonal cycle phase and amplitude on water depth', fontsize=16)
ax1.grid()
ax1.plot([2.5, 2.5], [0, 1], 'k-');
Explanation: <a id='section2'></a>
2. Analytical toy model of the seasonal cycle
What factors determine the above pattern of seasonal temperatures? How large is the winter-to-summer variation in temperature? What is its phasing relative to the seasonal variations in insolation?
We will start to examine this in a very simple zero-dimensional EBM.
Suppose the seasonal cycle of insolation at a point is
$$ Q = Q^* \sin\omega t + Q_0$$
where $\omega = 2\pi ~ \text{year}^{-1}$, $Q_0$ is the annual mean insolation, and $Q^*$ is the amplitude of the seasonal variations.
Here $\omega ~ t=0$ is spring equinox, $\omega~t = \pi/2$ is summer solstice, $\omega~t = \pi$ is fall equinox, and $ \omega ~t = 3 \pi/2$ is winter solstice.
Now suppose the temperature is governed by
$$ C \frac{d T}{d t} = Q - (A + B~T) $$
so that we have a simple model
$$ C \frac{d T}{d t} = Q^* \sin\omega t + Q_0 - (A + B~T) $$
We want to ask two questions:
What is the amplitude of the seasonal temperature variation?
When does the temperature maximum occur?
We will look for an oscillating solution
$$ T(t) = T_0 + T^* \sin(\omega t - \Phi) $$
where $\Phi$ is an unknown phase shift and $T^*$ is the unknown amplitude of seasonal temperature variations.
The annual mean:
Integrate over one year to find
$$ \overline{T} = T_0 $$
$$ Q_0 = A + B ~ \overline{T} $$
so that
$$T_0 = \frac{Q_0 - A}{B} $$
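Plugging in illustrative numbers makes the formula concrete. Here $A$ and $B$ match the parameter choices used later in this notebook, while the value of $Q_0$ is an assumption purely for the sake of the example:

```python
Q0 = 300.0         # assumed annual-mean insolation, W m^-2 (illustrative only)
A, B = 210.0, 2.0  # OLR parameters: W m^-2 and W m^-2 K^-1
T0 = (Q0 - A) / B  # annual-mean temperature, deg C
print(T0)  # 45.0
```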
The seasonal problem
Now we need to solve for $T^*$ and $\Phi$.
Take the derivative
$$ \frac{d T}{dt} = T^* \omega \cos(\omega t - \Phi) $$
and plug into the model equation to get
\begin{align}
C~ T^* \omega \cos(\omega t - \Phi) &= Q^* \sin\omega t + Q_0 \\
& - \left( A + B~(T_0 + T^* \sin(\omega t - \Phi) )\right)
\end{align}
Subtracting out the annual mean leaves us with
$$ C~ T^* \omega \cos(\omega t - \Phi) = Q^* \sin\omega t - B ~ T^* \sin(\omega t - \Phi) $$
Zero heat capacity: the radiative equilibrium solution
It's instructive to first look at the case with $C=0$, which means that the system is not capable of storing heat, and the temperature must always be in radiative equilibrium with the insolation.
In this case we would have
$$ Q^* \sin\omega t = B ~ T^* \sin(\omega t - \Phi) $$
which requires that the phase shift is
$$ \Phi = 0 $$
and the amplitude is
$$ T^* = \frac{Q^*}{B} $$
With no heat capacity, there can be no phase shift! The temperature goes up and down in lockstep with the insolation.
As we will see, the amplitude of the temperature variations is maximum in this limit.
As a practical example: at 45ºN the amplitude of the seasonal insolation cycle is about 180 W m$^{-2}$ (see the Insolation notes -- the difference between insolation at summer and winter solstice is about 360 W m$^{-2}$ which we divide by two to get the amplitude of seasonal variations).
We will follow our previous EBM work and take $B = 2$ W m$^{-2}$ K$^{-1}$. This would give a seasonal temperature amplitude of 90ºC!
This highlights the important role heat capacity plays in buffering the seasonal variations in sunlight.
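The 90ºC estimate is just the ratio of the two numbers quoted above, which we can confirm in a couple of lines:

```python
Q_star = 360.0 / 2.0  # seasonal insolation amplitude at 45N, W m^-2
B = 2.0               # longwave feedback, W m^-2 K^-1
# zero-heat-capacity (radiative equilibrium) temperature amplitude
T_star_max = Q_star / B
print(T_star_max)  # 90.0 -- far larger than observed seasonal swings
```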
Non-dimensional heat capacity parameter
We can rearrange the seasonal equation to give
$$ \frac{C~\omega}{B} \cos(\omega t - \Phi) + \sin(\omega t - \Phi) = \frac{Q^*}{B~T^*} \sin\omega t $$
The heat capacity appears in our equation through the non-dimensional ratio
$$ \tilde{C} = \frac{C~\omega}{B} $$
This parameter measures the efficiency of heat storage versus damping of energy anomalies through longwave radiation to space in our system.
We will now use trigonometric identities
\begin{align}
\cos(\omega t - \Phi) &= \cos\omega t \cos\Phi + \sin\omega t \sin\Phi \\
\sin(\omega t - \Phi) &= \sin\omega t \cos\Phi - \cos\omega t \sin\Phi
\end{align}
to express our equation as
\begin{align}
\frac{Q^*}{B~T^*} \sin\omega t = &\tilde{C} \cos\omega t \cos\Phi \\
+ &\tilde{C} \sin\omega t \sin\Phi \\
+ &\sin\omega t \cos\Phi \\
- &\cos\omega t \sin\Phi
\end{align}
Now gathering together all terms in $\cos\omega t$ and $\sin\omega t$:
$$ \cos\omega t \left( \tilde{C} \cos\Phi - \sin\Phi \right) = \sin\omega t \left( \frac{Q^*}{B~T^*} - \tilde{C} \sin\Phi - \cos\Phi \right) $$
Solving for the phase shift
The equation above must be true for all $t$, which means that the sum of terms in each set of parentheses must be zero.
We therefore have an equation for the phase shift
$$ \tilde{C} \cos\Phi - \sin\Phi = 0 $$
which means that the phase shift is
$$ \Phi = \arctan \tilde{C} $$
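A small numerical check of the phase-shift formula (the sample values of $\tilde{C}$ are just illustrative):

```python
import math

for Ctilde in (0.0, 1.0, 40.0):
    phi_deg = math.degrees(math.atan(Ctilde))
    print(Ctilde, round(phi_deg, 1))
# Ctilde = 0 gives no lag, Ctilde = 1 gives 45 degrees,
# and Ctilde = 40 is already close to the 90-degree limit
```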
Solving for the amplitude
The other equation is
$$ \frac{Q^*}{B~T^*} - \tilde{C} \sin\Phi - \cos\Phi = 0 $$
or
$$ \frac{Q^*}{B~T^*} - \cos\Phi \left( 1+ \tilde{C}^2 \right) = 0 $$
which we solve for $T^*$ to get
$$ T^* = \frac{Q^*}{B} \frac{1}{\left( 1+ \tilde{C}^2 \right) \cos\left(\arctan \tilde{C} \right) } $$
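Since $\cos(\arctan \tilde{C}) = 1/\sqrt{1+\tilde{C}^2}$, the whole prefactor collapses to $1/\sqrt{1+\tilde{C}^2}$. A quick numerical sanity check of that simplification:

```python
import math

def amp_fraction(Ctilde):
    # T* / (Q*/B) exactly as written in the formula above
    return 1.0 / ((1.0 + Ctilde**2) * math.cos(math.atan(Ctilde)))

# confirm the closed form 1/sqrt(1 + Ctilde^2) at a few sample values
for C in (0.5, 1.0, 40.0):
    assert abs(amp_fraction(C) - 1.0 / math.sqrt(1.0 + C * C)) < 1e-12
print(round(amp_fraction(1.0), 3))  # 0.707
```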
Shallow water limit:
In the low heat capacity limit,
$$ \tilde{C} \ll 1 $$
the phase shift is
$$ \Phi \approx \tilde{C} $$
and the amplitude is
$$ T^* \approx \frac{Q^*}{B} \left( 1 - \frac{\tilde{C}^2}{2} \right) $$
Notice that for a system with very little heat capacity, the phase shift approaches zero and the amplitude approaches its maximum value $T^* = \frac{Q^*}{B}$.
In the shallow water limit the temperature maximum will occur just slightly after the insolation maximum, and the seasonal temperature variations will be large.
Deep water limit:
Suppose instead we have an infinitely large heat reservoir (e.g. very deep ocean mixed layer).
In the limit $\tilde{C} \rightarrow \infty$, the phase shift tends toward
$$ \Phi \rightarrow \frac{\pi}{2} $$
so the warming is nearly perfectly out of phase with the insolation -- peak temperature would occur at fall equinox.
But the amplitude in this limit is very small!
$$ T^* \rightarrow 0 $$
What values of $\tilde{C}$ are realistic?
We need to evaluate
$$ \tilde{C} = \frac{C~\omega}{B} $$
for reasonable values of $C$ and $B$.
$B$ is the longwave radiative feedback in our system: a measure of how efficiently a warm anomaly is radiated away to space. We have previously chosen $B = 2$ W m$^{-2}$ K$^{-1}$.
$C$ is the heat capacity of the whole column, a number in J m$^{-2}$ K$^{-1}$.
Heat capacity of the atmosphere
Integrating from the surface to the top of the atmosphere, we can write
$$ C_a = \int_0^{p_s} c_p \frac{dp}{g} $$
where $c_p = 10^3$ J kg$^{-1}$ K$^{-1}$ is the specific heat at constant pressure for a unit mass of air, and $dp/g$ is a mass element.
This gives $C_a \approx 10^7$ J m$^{-2}$ K$^{-1}$.
Heat capacity of a water surface
As we wrote back in Lecture 2, the heat capacity for a well-mixed column of water is
$$C_w = c_w \rho_w H_w $$
where
$c_w = 4 \times 10^3$ J kg$^{-1}$ $^\circ$C$^{-1}$ is the specific heat of water,
$\rho_w = 10^3$ kg m$^{-3}$ is the density of water, and
$H_w $ is the depth of the water column
The heat capacity of the entire atmosphere is thus equivalent to 2.5 meters of water.
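The 2.5 m equivalence follows directly from the two heat capacities quoted above:

```python
c_w = 4e3    # J kg^-1 K^-1, specific heat of water
rho_w = 1e3  # kg m^-3, density of water
C_a = 1e7    # J m^-2 K^-1, heat capacity of the whole atmosphere
# depth of water with the same column heat capacity as the atmosphere
H_equiv = C_a / (c_w * rho_w)
print(H_equiv)  # 2.5 (meters)
```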
$\tilde{C}$ for a dry land surface
A dry land surface has very little heat capacity and $C$ is actually dominated by the atmosphere. So we can take $C = C_a = 10^7$ J m$^{-2}$ K$^{-1}$ as a reasonable lower bound.
So our lower bound on $\tilde{C}$ is thus, taking $B = 2$ W m$^{-2}$ K$^{-1}$ and $\omega = 2\pi ~ \text{year}^{-1} = 2 \times 10^{-7} \text{ s}^{-1}$:
$$ \tilde{C} = 1 $$
$\tilde{C}$ for a 100 meter ocean mixed layer
Setting $H_w = 100$ m gives $C_w = 4 \times 10^8$ J m$^{-2}$ K$^{-1}$. Then our non-dimensional parameter is
$$ \tilde{C} = 40 $$
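Both estimates can be verified in a couple of lines, using $\omega = 2\pi~\text{year}^{-1} \approx 2\times10^{-7}~\text{s}^{-1}$:

```python
import math

omega = 2.0 * math.pi / (365.25 * 24 * 3600.0)  # s^-1, about 2e-7
B = 2.0                                         # W m^-2 K^-1
Ctilde_land = 1e7 * omega / B    # dry land: atmosphere-only column
Ctilde_ocean = 4e8 * omega / B   # 100 m ocean mixed layer
print(round(Ctilde_land, 2), round(Ctilde_ocean, 1))  # roughly 1 and 40
```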
The upshot: $\tilde{C}$ is closer to the deep water limit
Even for a dry land surface, $\tilde{C}$ is not small. This means that there is always going to be a substantial phase shift in the timing of the peak temperatures, and a reduction in the seasonal amplitude.
Plot the full solution for a range of water depths
End of explanation
fig, ax = plt.subplots()
years = np.linspace(0,2)
Harray = np.array([0., 2.5, 10., 50.])
for Hw in Harray:
    Ctilde = const.cw * const.rho_w * Hw * omega / B
    Phi = np.arctan(Ctilde)
    ax.plot(years, np.sin(2*np.pi*years - Phi)/np.cos(Phi)/(1+Ctilde**2), label=Hw)
ax.set_xlabel('Years', fontsize=14)
ax.set_ylabel('Seasonal amplitude ($Q^* / B$)', fontsize=14)
ax.set_title('Solution of toy seasonal model for several different water depths', fontsize=14)
ax.legend(); ax.grid()
Explanation: The blue line shows the amplitude of the seasonal cycle of temperature, expressed as a fraction of its maximum value $\frac{Q^*}{B}$ (the value that would occur if the system had zero heat capacity so that temperatures were always in radiative equilibrium with the instantaneous insolation).
The red line shows the phase lag (in degrees) of the temperature cycle relative to the insolation cycle.
The vertical black line indicates 2.5 meters of water, which is the heat capacity of the atmosphere and thus our effective lower bound on total column heat capacity.
The seasonal phase shift
Even for the driest surfaces the phase shift is about 45º and the amplitude is about 70% of its theoretical maximum. For most wet surfaces the cycle is damped out and delayed further.
Of course we are already familiar with this phase shift from our day-to-day experience. Our calendar says that summer "begins" at the solstice and lasts until the equinox.
End of explanation
# for convenience, set up a dictionary with our reference parameters
param = {'A':210, 'B':2, 'a0':0.354, 'a2':0.25, 'D':0.6}
param
# We can pass the entire dictionary as keyword arguments using the ** notation
model1 = climlab.EBM_seasonal(**param, name='Seasonal EBM')
print( model1)
Explanation: The blue curve in this figure is in phase with the insolation.
<a id='section3'></a>
3. Exploring the amplitude of the seasonal cycle with an EBM
Something important is missing from this toy model: heat transport!
The amplitude of the seasonal cycle of insolation increases toward the poles, but the seasonal temperature variations are partly mitigated by heat transport from lower, warmer latitudes.
Our 1D diffusive EBM is the appropriate tool for exploring this further.
We are looking at the 1D (zonally averaged) energy balance model with diffusive heat transport. The equation is
$$ C \frac{\partial T_s}{\partial t} = (1-\alpha) ~ Q - \left( A + B~T_s \right) + \frac{D}{\cos\phi } \frac{\partial }{\partial \phi} \left( \cos\phi ~ \frac{\partial T_s}{\partial \phi} \right) $$
with the albedo given by
$$ \alpha(\phi) = \alpha_0 + \alpha_2 P_2(\sin\phi) $$
and we will use
climlab.EBM_seasonal
to solve this model numerically.
One handy feature of climlab process code: the function integrate_years() automatically calculates the time averaged temperature. So if we run it for exactly one year, we get the annual mean temperature (and many other diagnostics) saved in the dictionary timeave.
We will look at the seasonal cycle of temperature in three different models with different heat capacities (which we express through an equivalent depth of water in meters).
All other parameters will be as chosen in Lecture 16 (which focussed on tuning the EBM to the annual mean energy budget).
End of explanation
# We will try three different water depths
water_depths = np.array([2., 10., 50.])
num_depths = water_depths.size
Tann = np.empty( [model1.lat.size, num_depths] )
models = []
for n in range(num_depths):
    ebm = climlab.EBM_seasonal(water_depth=water_depths[n], **param)
    models.append(ebm)
    models[n].integrate_years(20., verbose=False)
    models[n].integrate_years(1., verbose=False)
    Tann[:,n] = np.squeeze(models[n].timeave['Ts'])
Explanation: Notice that this model has an insolation subprocess called DailyInsolation, rather than AnnualMeanInsolation. These should be fairly self-explanatory.
End of explanation
lat = model1.lat
fig, ax = plt.subplots()
ax.plot(lat, Tann)
ax.set_xlim(-90,90)
ax.set_xlabel('Latitude')
ax.set_ylabel('Temperature (degC)')
ax.set_title('Annual mean temperature in the EBM')
ax.legend( water_depths )
Explanation: All models should have the same annual mean temperature:
End of explanation
num_steps_per_year = int(model1.time['num_steps_per_year'])
Tyear = np.empty((lat.size, num_steps_per_year, num_depths))
for n in range(num_depths):
    for m in range(num_steps_per_year):
        models[n].step_forward()
        Tyear[:,m,n] = np.squeeze(models[n].Ts)
Explanation: There is no automatic function in the climlab code to keep track of minimum and maximum temperatures (though we might add that in the future!)
Instead we'll step through one year "by hand" and save all the temperatures.
End of explanation
fig = plt.figure( figsize=(16,10) )
ax = fig.add_subplot(2,num_depths,2)
cax = ax.contourf(np.arange(12)+0.5, lat_ncep,
Ts_ncep.mean(dim='lon').transpose(),
levels=clevels, cmap=plt.cm.seismic,
vmin=Tmin, vmax=Tmax)
ax.set_xlabel('Month')
ax.set_ylabel('Latitude')
cbar = plt.colorbar(cax)
ax.set_title('Zonal mean surface temperature - observed (degC)', fontsize=20)
for n in range(num_depths):
    ax = fig.add_subplot(2, num_depths, num_depths+n+1)
    cax = ax.contourf(4*np.arange(num_steps_per_year),
                      lat, Tyear[:,:,n], levels=clevels,
                      cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax)
    cbar1 = plt.colorbar(cax)
    ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20)
    ax.set_xlabel('Days of year', fontsize=14)
    ax.set_ylabel('Latitude', fontsize=14)
Explanation: Make a figure to compare the observed zonal mean seasonal temperature cycle to what we get from the EBM with different heat capacities:
End of explanation
def initial_figure(models):
    fig, axes = plt.subplots(1, len(models), figsize=(15,4))
    lines = []
    for n in range(len(models)):
        ax = axes[n]
        c1 = 'b'
        Tsline = ax.plot(lat, models[n].Ts, c1)[0]
        ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20)
        ax.set_xlabel('Latitude', fontsize=14)
        if n == 0:
            ax.set_ylabel('Temperature', fontsize=14, color=c1)
        ax.set_xlim([-90,90])
        ax.set_ylim([-60,60])
        for tl in ax.get_yticklabels():
            tl.set_color(c1)
        ax.grid()
        c2 = 'r'
        ax2 = ax.twinx()
        Qline = ax2.plot(lat, models[n].insolation, c2)[0]
        if n == 2:
            ax2.set_ylabel('Insolation (W m$^{-2}$)', color=c2, fontsize=14)
        for tl in ax2.get_yticklabels():
            tl.set_color(c2)
        ax2.set_xlim([-90,90])
        ax2.set_ylim([0,600])
        lines.append([Tsline, Qline])
    return fig, axes, lines
def animate(step, models, lines):
    for n, ebm in enumerate(models):
        ebm.step_forward()
        # The rest of this is just updating the plot
        lines[n][0].set_ydata(ebm.Ts)
        lines[n][1].set_ydata(ebm.insolation)
    return lines
# Plot initial data
fig, axes, lines = initial_figure(models)
# Some imports needed to make and display animations
from IPython.display import HTML
from matplotlib import animation
num_steps = int(models[0].time['num_steps_per_year'])
ani = animation.FuncAnimation(fig, animate,
frames=num_steps,
interval=80,
fargs=(models, lines),
)
HTML(ani.to_html5_video())
Explanation: Which one looks more realistic? Depends a bit on where you look. But overall, the observed seasonal cycle matches the 10 meter case best. The effective heat capacity governing the seasonal cycle of the zonal mean temperature is closer to 10 meters of water than to either 2 or 50 meters.
Making an animation of the EBM solutions
Let's animate the seasonal cycle of insolation and temperature in our models with the three different water depths
End of explanation
orb_highobl = {'ecc':0.,
'obliquity':90.,
'long_peri':0.}
print( orb_highobl)
model_highobl = climlab.EBM_seasonal(orb=orb_highobl, **param)
print( model_highobl.param['orb'])
Explanation: <a id='section4'></a>
4. The seasonal cycle for a planet with 90º obliquity
The EBM code uses our familiar insolation.py code to calculate insolation, and therefore it's easy to set up a model with different orbital parameters. Here is an example with very different orbital parameters: 90º obliquity. We looked at the distribution of insolation by latitude and season for this type of planet in the last homework.
End of explanation
Tann_highobl = np.empty( [lat.size, num_depths] )
models_highobl = []
for n in range(num_depths):
    model = climlab.EBM_seasonal(water_depth=water_depths[n],
                                 orb=orb_highobl,
                                 **param)
    models_highobl.append(model)
    models_highobl[n].integrate_years(40., verbose=False)
    models_highobl[n].integrate_years(1., verbose=False)
    Tann_highobl[:,n] = np.squeeze(models_highobl[n].timeave['Ts'])
Tyear_highobl = np.empty([lat.size, num_steps_per_year, num_depths])
for n in range(num_depths):
    for m in range(num_steps_per_year):
        models_highobl[n].step_forward()
        Tyear_highobl[:,m,n] = np.squeeze(models_highobl[n].Ts)
Explanation: Repeat the same procedure to calculate and store temperature throughout one year, after letting the models run out to equilibrium.
End of explanation
fig = plt.figure( figsize=(16,5) )
Tmax_highobl = 125; Tmin_highobl = -Tmax_highobl; delT_highobl = 10
clevels_highobl = np.arange(Tmin_highobl, Tmax_highobl+delT_highobl, delT_highobl)
for n in range(num_depths):
    ax = fig.add_subplot(1, num_depths, n+1)
    cax = ax.contourf(4*np.arange(num_steps_per_year), lat, Tyear_highobl[:,:,n],
                      levels=clevels_highobl, cmap=plt.cm.seismic,
                      vmin=Tmin_highobl, vmax=Tmax_highobl)
    cbar1 = plt.colorbar(cax)
    ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20)
    ax.set_xlabel('Days of year', fontsize=14)
    ax.set_ylabel('Latitude', fontsize=14)
Explanation: And plot the seasonal temperature cycle same as we did above:
End of explanation
lat2 = np.linspace(-90, 90, 181)
days = np.linspace(1.,50.)/50 * const.days_per_year
Q_present = climlab.solar.insolation.daily_insolation( lat2, days )
Q_highobl = climlab.solar.insolation.daily_insolation( lat2, days, orb_highobl )
Q_present_ann = np.mean( Q_present, axis=1 )
Q_highobl_ann = np.mean( Q_highobl, axis=1 )
fig, ax = plt.subplots()
ax.plot( lat2, Q_present_ann, label='Earth' )
ax.plot( lat2, Q_highobl_ann, label='90deg obliquity' )
ax.grid()
ax.legend(loc='lower center')
ax.set_xlabel('Latitude', fontsize=14 )
ax.set_ylabel('W m$^{-2}$', fontsize=14 )
ax.set_title('Annual mean insolation for two different obliquities', fontsize=16)
Explanation: Note that the temperature range is much larger than for the Earth-like case above (but same contour interval, 10 degC).
Why is the temperature so uniform in the north-south direction with 50 meters of water?
To see the reason, let's plot the annual mean insolation at 90º obliquity, alongside the present-day annual mean insolation:
End of explanation
%load_ext version_information
%version_information numpy, xarray, climlab
Explanation: Though this is a bit misleading, because our model prescribes an increase in albedo from the equator to the pole. So the absorbed shortwave gradients look even more different.
If you are interested in how ice-albedo feedback might work on a high-obliquity planet with a cold equator, then I suggest you take a look at this paper:
Rose, Cronin and Bitz (2017): Ice Caps and Ice Belts: The Effects of Obliquity on Ice−Albedo Feedback, The Astrophysical Journal 846, doi:10.3847/1538-4357/aa8306
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
Version information
End of explanation |
6,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #1
This notebook contains the first homework for this class, and is due on Friday, October 23rd, 2016 at 11:59 p.m.
Step2: Section 3 | Python Code:
# write any code you need here!
# Create additional cells if you need them by using the
# 'Insert' menu at the top of the browser window.
Explanation: Homework #1
This notebook contains the first homework for this class, and is due on Friday, October 23rd, 2016 at 11:59 p.m.. Please make sure to get started early, and come by the instructors' office hours if you have any questions. Office hours and locations can be found in the course syllabus. IMPORTANT: While it's fine if you talk to other people in class about this homework - and in fact we encourage it! - you are responsible for creating the solutions for this homework on your own, and each student must submit their own homework assignment.
Some links that you may find helpful:
Markdown tutorial
The matplotlib website
The matplotlib figure gallery (this is particularly helpful for getting ideas!)
The Pyplot tutorial
The Pandas tutorial
All CMSE 201 YouTube videos
Your name
Put your name here!
Section 1: Find me a model, any model
Look around online and find a model that you think is interesting. This model can be of anything that you are intrigued by, and as simple or complicated as you want. Write a paragraph or two describing the model, and identifying the components of the model that we talked about in class - the model's inputs, inner workings, outputs, and how one might decide if this is a good model or not. Make sure to cite your sources by providing links to the web pages that you looked at to create this description. You can either just paste the URL into the cell below, or do something prettier, like this: google. The syntax for that second one is [google](http://google.com).
put your answer here!
Section 2: Car conundrum
Part 1. Consider this: What volume of gasoline does a typical automobile (car, SUV, or pickup) use during its entire lifetime? How does the weight of the total fuel consumed compare to the weight of the car? How about the price of the fuel compared to the price of the car?
Come up with a simple order-of-magnitude approximation for each of those three questions, and in the cell below this one write a paragraph or two addressing each of the questions above. What are the factors you need to consider? What range of values might they have? In what way is your estimate limited? (Also, to add a twist: does it matter what type of car you choose?)
Note: if you use a Google search or two to figure out what range of values you might want to use, include links to the relevant web page(s). As described above, you can either just paste the URL, or do something prettier, like this: google. The syntax for that second one is [google](http://google.com).
put your answer here!
Part 2. In the space below, write a Python program to model the answer to all three of those questions, and keep track of the answers in a numpy array. Plot your answers to both questions in some convenient way (probably not a scatter plot - look at the matplotlib gallery for inspiration!). Do the answers you get make sense to you?
End of explanation
from IPython.display import HTML
HTML("""
<iframe
    src="https://docs.google.com/forms/d/e/1FAIpQLSd0yvuDR2XP5QhWHJZTZHgsSi84QAZU7x-C9NEA40y6NnArAA/viewform?embedded=true"
    width="80%"
    height="1200px"
    frameborder="0"
    marginheight="0"
    marginwidth="0">
    Loading...
</iframe>
""")
Explanation: Section 3: Get the Lead Out (continued from your in-class assignment)
You're going to make a Jupyter Notebook. We'll feature our class's work on the CMSE Homepage
This is real data. And, you're some of the first people with the chance to analyze it and make your results public.
We want you to create a new Jupyter notebook to answer this question (which will be uploaded separately, as a second document to this notebook) that we can post publicly, to the world, on the CMSE Department Homepage.
Your Notebook Presentation Should Answer These Questions:
Your presentation should try to answer the following questions:
How bad was the lead level situation in August, 2015 when the first lead levels were taken?
How has the lead situation changed since August, 2015?
Is it getting better? If so, show your readers and convince them
Is it getting worse? If so, show your readers and convince them
How you answer the questions is up to you. But, remember to:
State your positions clearly.
Justify your positions with graphics, calculations, and written analysis to explain why you think what you think.
Consider counterarguments. Could someone try to use the same data to arrive at a different conclusion than yours? If they could, explain that conclusion and (if appropriate) why you think that position is flawed.
Do your best. Write as clearly as you can, use what you know, and don't confuse sizzle with quality. You don't need fancy pants visual and graphical animations to be persuasive. The humble scatterplot and its cousins the bar chart and histogram are extraordinarily powerful when you craft them carefully. And all of them are built-in to pandas.
Lastly, This is real data and you really do have a chance to be featured on the CMSE webpage. So:
The conclusions you draw matter. These are Flint resident's actual living conditions.
Any numerical conclusions you draw should be backed up by your code. If you say the average lead level was below EPA limits, you'll need to be able to back up that claim in your notebook either with graphical evidence or numerical evidence (calculations). So, make sure to show your (computational) work!
Your analysis is a check on the scientific community. The more eyes we have looking at this data and offering reproducible analyses (which Jupyter Notebooks are), the more confidence we can have in the data.
You may find other results online, but you still have to do your own analysis to decide whether you agree with their results.
Section 4: Feedback (required!)
End of explanation |
6,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statements Assessment Test
Let's test your knowledge!
Use for, split(), and if to create a Statement that will print out words that start with 's'
Step1: Use range() to print all the even numbers from 0 to 10.
Step2: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
Step3: Go through the string below and if the length of a word is even print "even!"
Step4: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
Step5: Use List Comprehension to create a list of the first letters of every word in the string below | Python Code:
st = 'Print only the words that start with s in this sentence'
#Code here
st = 'Print only the words that start with s in this sentence'
for word in st.split():
    if word[0] == 's':
        print(word)
Explanation: Statements Assessment Test
Let's test your knowledge!
Use for, split(), and if to create a Statement that will print out words that start with 's':
End of explanation
#Code Here
for number in range(0, 11):
    if number % 2 == 0:
        print(number)
Explanation: Use range() to print all the even numbers from 0 to 10.
End of explanation
#Code in this cell
l = [number for number in range(1,51) if number % 3 == 0]
print(l)
Explanation: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
End of explanation
st = 'Print every word in this sentence that has an even number of letters'
#Code in this cell
st = 'Print every word in this sentence that has an even number of letters'
for word in st.split():
    if len(word) % 2 == 0:
        print(word)
Explanation: Go through the string below and if the length of a word is even print "even!"
End of explanation
#Code in this cell
for val in range(1, 101):
    if val % 3 == 0 and val % 5 == 0:
        print('FizzBuzz')
    elif val % 3 == 0:
        print('Fizz')
    elif val % 5 == 0:
        print('Buzz')
    else:
        print(val)
Explanation: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
End of explanation
st = 'Create a list of the first letters of every word in this string'
#Code in this cell
st = 'Create a list of the first letters of every word in this string'
l = []
for word in st.split():
    l.append(word[0])
print(l)
Explanation: Use List Comprehension to create a list of the first letters of every word in the string below:
End of explanation |
6,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploration
Exploration of the preprocessed DF
Step1: Input
Privacy restriction
Step2: Exploration
Step3: Preparation for Modeling
Missing Values
Step4: DFs for Modeling
Step5: Dummie Encoding
Step7: Output for Modeling | Python Code:
import numpy as np
import pandas as pd
import math
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Exploration
Exploration of the preprocessed DF
End of explanation
file_path = "../data/events_df.pkl"
df = pd.read_pickle(file_path)
print(df.shape)
print(df.dtypes)
df.head()
Explanation: Input
Privacy restriction:
Original (personal) cleaned DF not in Repo. Go through nb "0_Cleaning" with self provided data to reproduce pickled DF of attended events ("events_df.pkl").
For further steps: Repo contains pickled DF for modeling (nb "3_Modeling"), in which private information is eliminated.
End of explanation
print("Stats (continuous Vars):")
print(df.describe())
print("")
print("NaN values count:")
print(df.isnull().sum())
for col in df:
    print(df[col].value_counts())
    print("")
df.groupby(df.main_topic).mean()[["distance", "rating"]]
df.groupby(df.city).mean()[["distance", "rating"]]
Explanation: Exploration
End of explanation
df_cleaned = df.fillna("missing") # Nan in String val Cols
print(df_cleaned.isnull().sum())
Explanation: Preparation for Modeling
Missing Values
End of explanation
# Minimal Features Model
model01_cols = [u"main_topic", u"buzzwordy_title", u"buzzwordy_organizer", u"days", u"weekday", u"city",
u"country", u"distance", u"ticket_prize", u"rating"]
df_model01 = df_cleaned[model01_cols]
df_model01.head()
Explanation: DFs for Modeling
End of explanation
df_model01 = pd.get_dummies(df_model01, prefix=["main_topic", "weekday", "city", "country"])
Explanation: Dummie Encoding
End of explanation
def pickle_model(df_model, file_path):
    """Pickles provided model DF for modeling step"""
    df_model.to_pickle(file_path)
pickle_model(df_model01, "../data/df_model01.pkl") # Model01
Explanation: Output for Modeling
End of explanation |
6,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Markov chains for finding CpG islands
Step5: As training data, we use some already-called CpG islands. These were called in a prior study that used a kind of Hidden Markov Model. Relevant studies
Step7: Islands are described simply as a pair of numbers giving the 0-based right open interval for each island.
Step8: Our first idea for a model is to count how many times our $k$-mer of interest occurs inside and outside CpG islands. This gets problematic as $k$ grows, since it requires exponentially more training data.
Step10: Now we adopt the Markov assumption and estimate all the conditional probabilities, e.g. $P(A|C)$. | Python Code:
from __future__ import print_function
import random
import re
import gzip
from itertools import islice
from operator import itemgetter
import numpy as np
from future.standard_library import install_aliases
install_aliases()
from urllib.request import urlopen, urlcleanup, urlretrieve
Explanation: Markov chains for finding CpG islands
End of explanation
islands_url = 'http://www.haowulab.org/software/makeCGI/model-based-cpg-islands-hg19.txt'
# URL for chromosome of the hg19 human genome assembly
def hg19_chr_url(chrom):
    return 'ftp://hgdownload.cse.ucsc.edu/goldenPath/hg19/chromosomes/%s.fa.gz' % chrom
def sample(iterable, n):
    """Samples n items from a stream"""
    samp = []
    for t, item in enumerate(iterable):
        if t < n:
            samp.append(item)
        else:
            m = random.randint(0, t)
            if m < n:
                samp[m] = item
    return samp
def kmers_from_fasta(fh, k):
    """Yield k-mer, offset pairs from FASTA filehandle.
    Ignore k-mers with chars besides A, C, G or T."""
    non_acgt = re.compile('[^ACGTacgt]')  # regex for detecting non-A/C/G/Ts
    kmer, off = [], 0
    for ln in fh:
        if ln.startswith(b'>'):  # bytes-safe header test (ln[0] is an int in Python 3)
            kmer, off = [], 0  # new sequence
            continue
        for c in filter(lambda x: x.isalpha(), ln.decode()):
            if len(kmer) == k:
                kmer.pop(0)  # k-mer buffer full, so bump one element
            kmer.append(c.upper())
            off += 1
            if len(kmer) == k:
                kmerstr = ''.join(kmer)
                if not non_acgt.search(kmerstr):
                    yield kmerstr, off - k
def kmers_islands_from_fasta(fh, k, isles, want_inside):
    """Yield k-mers from a FASTA filehandle that lie entirely within
    an island (want_inside=True) or not entirely within (want_inside=False)"""
    cur = 0
    for kmer, off in kmers_from_fasta(fh, k):
        while cur < len(isles) and off >= isles[cur][1]:
            cur += 1
        was_inside = False
        if cur < len(isles) and off >= isles[cur][0]:
            if off + k <= isles[cur][1]:
                was_inside = True
                if want_inside:
                    yield kmer
        if not was_inside and not want_inside:
            yield kmer
def parse_islands(fh, chromosome):
    """Parse a file with island annotations. Only take
    records from given chromosome name."""
    islands = []
    for ln in fh:
        ch, st, en, _ = ln.split(b'\t', 3)
        if ch == chromosome.encode('utf8'):
            # convert 1-based closed interval to 0-based right-open
            islands.append((int(st)-1, int(en)))
    return islands
def get_islands(chromosome):
    with urlopen(islands_url) as fh:
        return parse_islands(fh, chromosome)  # takes a few seconds
Explanation: As training data, we use some already-called CpG islands. These were called in a prior study that used a kind of Hidden Markov Model. Relevant studies:
A species-generalized probabilistic model-based definition of CpG islands. Irizarry RA, Wu H, Feinberg AP. doi:10.1007/s00335-009-9222-5
Redefining CpG islands using hidden Markov models. Wu H, Caffo B, Jaffee HA, Irizarry RA, Feinberg AP. doi:10.1093/biostatistics/kxq005
End of explanation
get_islands('chr22')[1:10]
def kmers_islands_from_hg19(k, chromosome, islands, inside):
fa_fn, _ = urlretrieve(hg19_chr_url(chromosome))
with gzip.open(fa_fn, 'rb') as fa_fh:
# Yield all the k-mer tuples
for r in kmers_islands_from_fasta(fa_fh, k, islands, inside):
yield r
def samples_from_hg19(k, chromosome, n, upto):
    ''' Given k and n, sample n k-mers from both inside
        and outside CpG islands, then return histograms of number
        of times each k-mer occurs inside and outside. '''
islands = get_islands(chromosome)
ins = sample(islice(kmers_islands_from_hg19(
k, chromosome, islands, True), upto), n)
out = sample(islice(kmers_islands_from_hg19(
k, chromosome, islands, False), upto), n)
return ins, out
Explanation: Islands are described simply as a pair of numbers giving the 0-based right open interval for each island.
End of explanation
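As a sketch of the interval convention the parser applies (the coordinates below are made-up, not real chr22 annotations): a 1-based closed interval [st, en] becomes the 0-based right-open interval [st-1, en), which covers the same number of bases.

```python
# Hypothetical island record in 1-based closed coordinates (assumed values)
st, en = 16050001, 16050500
# Convert to the 0-based right-open convention used by parse_islands
start, end = st - 1, en
print(end - start, en - st + 1)  # both count the same number of bases
```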
from collections import Counter
random.seed(723444)
q = 'CGCGC'
n = 500000
upto = 5000000
ins, out = samples_from_hg19(len(q), 'chr22', n, upto)
assert len(ins) == n, (len(ins), len(out), n)
assert len(out) == n, (len(ins), len(out), n)
hist_in, hist_out = Counter(ins), Counter(out)
# print info about inside/outside counts and probabilities
print("inside: %d out of %d" % (hist_in[q], n))
print("outside: %d out of %d" % (hist_out[q], n))
print("p(inside): %0.5f" % (float(hist_in[q]) / (hist_in[q] + hist_out[q])))
print("p(outside): %0.5f" % (float(hist_out[q]) / (hist_in[q] + hist_out[q])))
Explanation: Our first idea for a model is to count how many times our $k$-mer of interest occurs inside and outside CpG islands. This gets problematic as $k$ grows, since it requires exponentially more training data.
End of explanation
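A quick illustration of why pure counting breaks down: with a 4-letter alphabet there are 4**k distinct k-mers, so the table of counts (and the data needed to fill it) grows exponentially with k.

```python
# Number of distinct DNA k-mers for a few k values
for k in (5, 10, 15):
    print(k, 4 ** k)  # 5-mers: 1024; 15-mers: over a billion
```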
# Now to build inside and outside Markov chains
# compile dinucleotide tables
samp_in, samp_out = samples_from_hg19(2, 'chr22', n=100000, upto=1000000)
def markov_chain_from_dinucs(dinucs):
''' Given dinucleotide frequencies, make a transition table. '''
conds = np.zeros((4, 4), dtype=np.float64)
margs = np.zeros(4, dtype=np.float64)
for i, ci in enumerate('ACGT'):
tot = 0
for j, cj in enumerate('ACGT'):
count = dinucs.get(ci + cj, 0)
tot += count
margs[i] += count
if tot > 0:
for j, cj in enumerate('ACGT'):
conds[i, j] = dinucs.get(ci + cj, 0) / float(tot)
return conds, margs
ins_conds, ins_margs = markov_chain_from_dinucs(Counter(samp_in))
out_conds, out_margs = markov_chain_from_dinucs(Counter(samp_out))
# transition probabilities inside CpG island
ins_conds
# confirm that rows add to 1
np.sum(ins_conds, 1), np.sum(out_conds, 1)
# elementwise log2 of above table
np.log2(ins_conds)
# log ratio table
np.log2(ins_conds) - np.log2(out_conds)
def classify(seq, lrTab):
    ''' Classify seq using given log-ratio table. We're ignoring the
        initial probability for simplicity. '''
bits = 0
nucmap = { 'A':0, 'C':1, 'G':2, 'T':3 }
for dinuc in [ seq[i:i+2] for i in range(len(seq)-1) ]:
i, j = nucmap[dinuc[0]], nucmap[dinuc[1]]
bits += lrTab[i, j]
return bits
log_ratios = np.log2(ins_conds) - np.log2(out_conds)
classify('CGCGCGCGCGCGCGCGCGCGCGCGCG', log_ratios)
classify('ATTCTACTATCATCTATCTATCTTCT', log_ratios)
itest, otest = samples_from_hg19(100, 'chr18', 1000, 100000)
itestClass = [ classify(x, log_ratios) for x in itest ]
otestClass = [ classify(x, log_ratios) for x in otest ]
%pylab inline --no-import-all
from matplotlib import pyplot
bins = numpy.linspace(-60, 60, 100)
pyplot.hist(itestClass, bins, alpha=0.5)
pyplot.hist(otestClass, bins, alpha=0.5)
pyplot.show()
Explanation: Now we adopt the Markov assumption and estimate all the conditional probabilities, e.g. $P(A|C)$.
End of explanation |
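A toy version of the estimate the code above performs (the counts here are invented, not real chr22 frequencies): P(A|C) is the count of the dinucleotide 'CA' divided by the count of all dinucleotides starting with 'C', i.e. one row of the transition table.

```python
# Assumed toy dinucleotide counts for the 'C' row of a transition table
dinuc_counts = {'CA': 30, 'CC': 40, 'CG': 10, 'CT': 20}
p_A_given_C = dinuc_counts['CA'] / sum(dinuc_counts.values())
print(p_A_given_C)  # 0.3
```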
6,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hour of Code 2015
For Mr. Clifford's Class (5C)
Perry Grossman
December 2015
Introduction
From the Hour of Code to the Power of Co
How to use programming skills for data analysis, or "data science," the new, hot term
<img src="http
Step1: Some Basic Things
Leveraging a tutorial by David Beazley, Ian Stokes-Rees and Continuum Analytics
http
Step2: Floor numbering is the numbering scheme used for a building's floors. There are two major schemes in use across the world. In one system, used in the majority of Europe, the ground floor is the floor on the ground and often has no number or is assigned the number zero. Therefore, the next floor up is assigned the number 1 and is the first floor.
The other system, used primarily in the United States and Canada, counts the bottom floor as number 1 or first floor.
https | Python Code:
# you can also access this directly:
from PIL import Image
im = Image.open("DataScienceProcess.jpg")
im
# path = 'DataScienceProcess.jpg'
#image=Image.open(path)
Explanation: Hour of Code 2015
For Mr. Clifford's Class (5C)
Perry Grossman
December 2015
Introduction
From the Hour of Code to the Power of Co
How to use programming skills for data analysis, or "data science," the new, hot term
<img src="http://www.niemanlab.org/images/drew-conway-data-science-venn-diagram.jpg">
<img src="http://qph.is.quoracdn.net/main-qimg-3504cc03d0a1581096eba9ef97cfd7eb?convert_to_webp=true">
End of explanation
# Comments
# ls list of the files in this folder. See below.
This line will make an error because this line is not python code and this is a code cell.
# Leveraging
#http://localhost:8888/notebooks/Dropbox/Python/Harvard%20SEAS%20Tutorial/python-mastery-isr19-master/1-PythonReview.ipynb
ls # NOT PYTHON! command line
pwd # ALSO NOT PYTHON! Shows what folder you are in.
# math
1+2
4000*3
import math
math.sqrt(2)
2 ** (0.5)
637*532.6
from __future__ import division
1/2
(8+5)*4
# Create a variable
name = 'Perry Grossman'
# Print the variable
name
name[6]
Explanation: Some Basic Things
Leveraging a tutorial by David Beazley, Ian Stokes-Rees and Continuum Analytics
http://localhost:8888/notebooks/Dropbox/Python/Harvard%20SEAS%20Tutorial/python-mastery-isr19-master/1-PythonReview.ipynb
and other resources
End of explanation
from functools import partial
# https://docs.python.org/2/library/functools.html
from random import choice, randint
choice('yes no maybe'.split()) # split is a method
for i in range(10):
print("Call me " + choice('yes no maybe'.split()))
randint(1, 6)
# If you need dice, try this:
roll = partial(randint, 1, 20)
roll()
# how would you make 20 sided dice?
# Create a list of numbers
vals = [3, -8, 2, 7, 6, 2, 5, 12, 4, 9]
#Find the even numbers
evens = []
for v in vals:
if v%2 == 0:
evens.append(v)
#How is this working?
evens
squares = []
for v in vals:
squares.append(v*v)
squares
bigsquares = []
for v in vals:
s = v*v
if s > 10:
bigsquares.append(s)
bigsquares
Explanation: Floor numbering is the numbering scheme used for a building's floors. There are two major schemes in use across the world. In one system, used in the majority of Europe, the ground floor is the floor on the ground and often has no number or is assigned the number zero. Therefore, the next floor up is assigned the number 1 and is the first floor.
The other system, used primarily in the United States and Canada, counts the bottom floor as number 1 or first floor.
https://en.wikipedia.org/wiki/Storey
End of explanation |
6,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Convolutional Neural Networks
Team StoDIG - Statoil Deep-learning Interest Group
David Wade, John Thurmond & Eskil Kulseth Dahl
In this python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model proposed in the predicting facies from well logs challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations
Step1: Data ingest
We load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with zero and proceeding to normalize the data that will be fed into our model.
Step2: Split data into training data and blind data, and output as Numpy arrays
Step3: Data Augmentation
It is physically reasonable to expect 1st and 2nd order derivatives of logs to play an important role in determining facies. To save the CNN the effort of learning convolution kernels to represent these features to the rest of the Neural Network we compute them here (for training and validation data). Further, we expand the input data to be acted on by the convolutional layer.
Step4: Convolutional Neural Network
We build a CNN with the following layers
Step5: We train the CNN and evaluate it on precision/recall.
Step6: We display the learned 1D convolution kernels
Step7: In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.
Step8: Prediction
To predict the STUART and CRAWFORD blind wells we do the following
Step9: Run the model on the blind data
Output a CSV
Plot the wells in the notebook | Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install keras
pip install sklearn
from __future__ import print_function
import time
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.constraints import maxnorm
from keras.optimizers import SGD
from keras.optimizers import Adam
from keras.optimizers import Adamax
from keras.optimizers import Nadam
from keras.layers import Dense, Dropout, Activation, Convolution1D, Flatten, Reshape, MaxPooling1D, GaussianNoise
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV
Explanation: Facies classification using Convolutional Neural Networks
Team StoDIG - Statoil Deep-learning Interest Group
David Wade, John Thurmond & Eskil Kulseth Dahl
In this python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model proposed in the predicting facies from well logs challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations:
- Augmenting the input data with 1st and 2nd order derivatives
- Inserting a convolutional layer as the first layer in the Neural Network
- Adding Dropout regularization to prevent overfitting
Problem Modeling
The dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Setup
Check we have all the libraries we need, and import the modules we require. Note that we have used the Theano backend for Keras, and to achieve a reasonable training time we have used an NVidia K20 GPU.
End of explanation
filename = 'train_test_data.csv'
data = pd.read_csv(filename)
data.head(12)
# Set 'Well Name' and 'Formation' fields as categories
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Fill missing values and normalize for 'PE' field
data['PE'] = data['PE'].fillna(value=0)
mean_pe = data['PE'].mean()
std_pe = data['PE'].std()
data['PE'] = (data['PE']-mean_pe)/std_pe
# Normalize the rest of fields (GR, ILD_log10, DelthaPHI, PHIND,NM_M,RELPOS)
correct_facies_labels = data['Facies'].values
feature_vectors = data.drop(['Formation'], axis=1)
well_labels = data[['Well Name', 'Facies']].values
data_vectors = feature_vectors.drop(['Well Name', 'Facies'], axis=1).values
scaler = preprocessing.StandardScaler().fit(data_vectors)
scaled_features = scaler.transform(data_vectors)
data_out = np.hstack([well_labels, scaled_features])
Explanation: Data ingest
We load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with zero and proceeding to normalize the data that will be fed into our model.
End of explanation
def preprocess(data_out):
data = data_out
well_data = {}
well_names = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE',
'NOLAN', 'Recruit F9', 'NEWBY', 'CHURCHMAN BIBLE', 'STUART', 'CRAWFORD']
for name in well_names:
well_data[name] = [[], []]
for row in data:
well_data[row[0]][1].append(row[1])
well_data[row[0]][0].append(list(row[2::]))
chunks = []
chunks_test = []
chunk_length = 1
chunks_facies = []
wellID=0.0
for name in well_names:
if name not in ['STUART', 'CRAWFORD']:
test_well_data = well_data[name]
log_values = np.array(test_well_data[0])
facies_values = np.array(test_well_data[1])
for i in range(log_values.shape[0]):
toAppend = np.concatenate((log_values[i:i+1, :], np.asarray(wellID).reshape(1,1)), axis=1)
chunks.append(toAppend)
chunks_facies.append(facies_values[i])
else:
test_well_data = well_data[name]
log_values = np.array(test_well_data[0])
for i in range(log_values.shape[0]):
toAppend = np.concatenate((log_values[i:i+1, :], np.asarray(wellID).reshape(1,1)), axis=1)
chunks_test.append(toAppend)
wellID = wellID + 1.0
chunks_facies = np.array(chunks_facies, dtype=np.int32)-1
X_ = np.array(chunks)
X = np.zeros((len(X_),len(X_[0][0]) * len(X_[0])))
for i in range(len(X_)):
X[i,:] = X_[i].flatten()
X_test = np.array(chunks_test)
X_test_out = np.zeros((len(X_test),len(X_test[0][0]) * len(X_test[0])))
for i in range(len(X_test)):
X_test_out[i,:] = X_test[i].flatten()
y = np_utils.to_categorical(chunks_facies)
return X, y, X_test_out
X_train_in, y_train, X_test_in = preprocess(data_out)
Explanation: Split data into training data and blind data, and output as Numpy arrays
End of explanation
conv_length = 7
# Reproducibility
np.random.seed(7)
# Load data
def addGradients(input):
output = input
for i in range(8):
grad = np.gradient(output[:,i])
gradT = np.reshape(grad,(grad.size,1))
output = np.concatenate((output, gradT), axis=1)
grad2 = np.gradient(grad)
grad2T = np.reshape(grad2,(grad2.size,1))
output = np.concatenate((output, grad2T), axis=1)
return output
def expand_dims(input):
r = int((conv_length-1)/2)
l = input.shape[0]
n_input_vars = input.shape[1]
output = np.zeros((l, conv_length, n_input_vars))
for i in range(l):
for j in range(conv_length):
for k in range(n_input_vars):
output[i,j,k] = input[min(i+j-r,l-1),k]
return output
X_train = np.empty((0,conv_length,24), dtype=float)
X_test = np.empty((0,conv_length,24), dtype=float)
wellId = 0.0
for i in range(10):
X_train_subset = X_train_in[X_train_in[:, 8] == wellId][:,0:8]
X_train_subset = addGradients(X_train_subset)
X_train_subset = expand_dims(X_train_subset)
X_train = np.concatenate((X_train,X_train_subset),axis=0)
wellId = wellId + 1.0
for i in range(2):
X_test_subset = X_test_in[X_test_in[:, 8] == wellId][:,0:8]
X_test_subset = addGradients(X_test_subset)
X_test_subset = expand_dims(X_test_subset)
X_test = np.concatenate((X_test,X_test_subset),axis=0)
wellId = wellId + 1.0
print(X_train.shape)
print(X_test.shape)
# Obtain labels
y_labels = np.zeros((len(y_train),1))
for i in range(len(y_train)):
y_labels[i] = np.argmax(y_train[i])
y_labels = y_labels.astype(int)
Explanation: Data Augmentation
It is physically reasonable to expect 1st and 2nd order derivatives of logs to play an important role in determining facies. To save the CNN the effort of learning convolution kernels to represent these features to the rest of the Neural Network we compute them here (for training and validation data). Further, we expand the input data to be acted on by the convolutional layer.
End of explanation
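A minimal sketch of the derivative computation that addGradients relies on: np.gradient uses central differences on interior points and one-sided differences at the edges, and applying it twice approximates the 2nd derivative.

```python
import numpy as np

x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # toy "log" samples (assumed values)
d1 = np.gradient(x)   # 1st-order derivative estimate
d2 = np.gradient(d1)  # 2nd-order derivative estimate
print(d1)  # [1. 2. 4. 6. 7.]
```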
# Set parameters
input_dim = 24
output_dim = 9
n_per_batch = 128
epochs = 200
def dnn_model(init_dropout_rate=0.5, main_dropout_rate=0.45, hidden_dim_1=192, hidden_dim_2=96, max_norm=10, n_dense=3, sigma=0.0, nb_conv=32):
# Define the model
model = Sequential()
model.add(Dropout(init_dropout_rate, input_shape=(conv_length,input_dim,)))
model.add(Convolution1D(nb_conv, conv_length, border_mode='same', activation='relu', input_shape=(conv_length,input_dim), input_length=conv_length))
model.add(MaxPooling1D(pool_length=2, stride=None, border_mode='same'))
model.add(Flatten())
model.add(Dropout(main_dropout_rate, input_shape=(nb_conv*conv_length,)))
model.add(Dense(hidden_dim_1, input_dim=nb_conv*conv_length, init='uniform', activation='relu', W_constraint=maxnorm(max_norm)))
for i in range(n_dense):
if (i==1):
model.add(Dropout(main_dropout_rate, input_shape=(hidden_dim_1,)))
model.add(Dense(hidden_dim_2, input_dim=hidden_dim_1, init='uniform', activation='relu', W_constraint=maxnorm(max_norm)))
else:
model.add(Dropout(main_dropout_rate, input_shape=(hidden_dim_2,)))
model.add(Dense(hidden_dim_2, input_dim=hidden_dim_2, init='uniform', activation='relu', W_constraint=maxnorm(max_norm)))
model.add(Dense(output_dim, init='normal', activation='softmax'))
optimizerNadam = Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.004)
model.compile(loss='categorical_crossentropy', optimizer=optimizerNadam, metrics=['accuracy'])
return model
Explanation: Convolutional Neural Network
We build a CNN with the following layers:
Dropout layer on input
One 1D convolutional layer, with MaxPooling
Series of Dropout & Fully-Connected layers, of parameterizable length
End of explanation
# Load the model
t0 = time.time()
model_dnn = dnn_model()
model_dnn.summary()
t1 = time.time()
print("Load time = %d" % (t1-t0) )
#Train model
t0 = time.time()
model_dnn.fit(X_train, y_train, batch_size=n_per_batch, nb_epoch=epochs, verbose=0)
t1 = time.time()
print("Train time = %d seconds" % (t1-t0) )
# Predict Values on Training set
t0 = time.time()
y_predicted = model_dnn.predict( X_train , batch_size=n_per_batch, verbose=2)
t1 = time.time()
print("Test time = %d seconds" % (t1-t0) )
# Print Report
# Format output [0 - 8 ]
y_ = np.zeros((len(y_train),1))
for i in range(len(y_train)):
y_[i] = np.argmax(y_train[i])
y_predicted_ = np.zeros((len(y_predicted), 1))
for i in range(len(y_predicted)):
y_predicted_[i] = np.argmax( y_predicted[i] )
# Confusion Matrix
conf = confusion_matrix(y_, y_predicted_)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
# Print Results
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("\nConfusion Matrix")
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
Explanation: We train the CNN and evaluate it on precision/recall.
End of explanation
print(model_dnn.layers[1].get_weights()[0].shape)
fig, ax = plt.subplots(figsize=(12,6))
plt.subplot(421)
plt.imshow(model_dnn.layers[1].get_weights()[0][:,0,0,:], interpolation='none')
plt.subplot(422)
plt.imshow(model_dnn.layers[1].get_weights()[0][:,0,1,:], interpolation='none')
plt.subplot(423)
plt.imshow(model_dnn.layers[1].get_weights()[0][:,0,2,:], interpolation='none')
plt.subplot(424)
plt.imshow(model_dnn.layers[1].get_weights()[0][:,0,3,:], interpolation='none')
plt.subplot(425)
plt.imshow(model_dnn.layers[1].get_weights()[0][:,0,4,:], interpolation='none')
plt.subplot(426)
plt.imshow(model_dnn.layers[1].get_weights()[0][:,0,5,:], interpolation='none')
plt.subplot(427)
plt.imshow(model_dnn.layers[1].get_weights()[0][:,0,6,:], interpolation='none')
plt.subplot(428)
plt.imshow(model_dnn.layers[1].get_weights()[0][:,0,7,:], interpolation='none')
plt.show()
Explanation: We display the learned 1D convolution kernels
End of explanation
# Cross Validation
def cross_validate():
t0 = time.time()
estimator = KerasClassifier(build_fn=dnn_model, nb_epoch=epochs, batch_size=n_per_batch, verbose=0)
skf = StratifiedKFold(n_splits=5, shuffle=True)
results_dnn = cross_val_score(estimator, X_train, y_train, cv= skf.get_n_splits(X_train, y_train))
t1 = time.time()
print("Cross Validation time = %d" % (t1-t0) )
print(' Cross Validation Results')
print( results_dnn )
print(np.mean(results_dnn))
cross_validate()
Explanation: In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.
End of explanation
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
Explanation: Prediction
To predict the STUART and CRAWFORD blind wells we do the following:
Set up a plotting function to display the logs & facies.
End of explanation
# DNN model Prediction
y_test = model_dnn.predict( X_test , batch_size=n_per_batch, verbose=0)
predictions_dnn = np.zeros((len(y_test),1))
for i in range(len(y_test)):
predictions_dnn[i] = np.argmax(y_test[i]) + 1
predictions_dnn = predictions_dnn.astype(int)
# Store results
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data['Facies'] = predictions_dnn
test_data.to_csv('Prediction_StoDIG.csv')
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
test_data[test_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
Explanation: Run the model on the blind data
Output a CSV
Plot the wells in the notebook
End of explanation |
6,019 | Given the following text description, write Python code to implement the functionality described.
Description:
Count number of ways to reach a given score in a game
Returns number of ways to reach score n. ; table[i] will store count of solutions for value i. Initialize all table values as 0. ; Base case (If given value is 0) ; One by one consider given 3 moves and update the table[] values after the index greater than or equal to the value of the picked move. ; Driver Program
| Python Code:
def count(n):
    table = [0 for i in range(n + 1)]
    table[0] = 1
    for i in range(3, n + 1):
        table[i] += table[i - 3]
    for i in range(5, n + 1):
        table[i] += table[i - 5]
    for i in range(10, n + 1):
        table[i] += table[i - 10]
    return table[n]
n = 20
print('Count for', n, 'is', count(n))
n = 13
print('Count for', n, 'is', count(n))
|
6,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Description
This is a test of SimulatedAnnealing.
Basics
Step4: Set Parameters
State in MetropolisSampler is np.array([float]).
The smaller re_scaling is, the faster it anneals.
Step5: Create SimulatedAnnealing Object
Step8: Target Function on Arbitrary Dimension
Step9: Test on 1-Dim
Step10: Get argmin
Step11: Plot the MCMC
Step12: Conclusion
Splendid.
Test on 2-Dim
Step13: Get argmin
Step14: Plot the MCMC
Step15: Conclusion
Splendid.
Test on 4-Dim
Step16: Get argmin
Step17: Plot the MCMC | Python Code:
import sys
sys.path.append('../sample/')
from simulated_annealing import Temperature, SimulatedAnnealing
from random import uniform, gauss
import numpy as np
import matplotlib.pyplot as plt
Explanation: Description
This is a test of SimulatedAnnealing.
Basics
End of explanation
def temperature_of_time(t, re_scaling, max_temperature):
    ''' int * int -> float '''
return max_temperature / (1 + np.exp(t / re_scaling))
def initialize_state():
    ''' None -> [float] '''
return np.array([uniform(-10, 10) for i in range(dim)])
def markov_process(x, step_length):
    ''' [float] -> [float] '''
result = x.copy()
for i, item in enumerate(result):
result[i] = item + gauss(0, 1) * step_length
return result
Explanation: Set Parameters
State in MetropolisSampler is np.array([float]).
The smaller re_scaling is, the faster it anneals.
End of explanation
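The schedule defined above is a logistic decay, T(t) = max_temperature / (1 + exp(t / re_scaling)): it starts at half the maximum temperature and falls toward zero, and a smaller re_scaling makes the fall steeper. A quick check with assumed parameter values:

```python
import numpy as np

T_max, s = 1000.0, 100.0  # assumed max_temperature and re_scaling
T = lambda t: T_max / (1 + np.exp(t / s))
print(T(0), T(500))  # starts at T_max / 2, then decays toward 0
```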
def get_sa(dim, iterations, re_scaling, max_temperature, step_length):
sa = SimulatedAnnealing(
lambda i: temperature_of_time(i, re_scaling, max_temperature),
iterations, initialize_state,
lambda x: markov_process(x, step_length)
)
return sa
Explanation: Create SimulatedAnnealing Object
End of explanation
def N(mu, sigma):
    ''' float * float -> ([float] -> float) '''
return lambda x: np.exp(- np.sum(np.square((x - mu) / sigma)))
## Recall SimulatedAnnealing is searching the argmin, instead of argmax.
def target_function(x):
    ''' [float] -> float '''
return -1 * (N(0, 5)(x) + 100 * N(10, 3)(x))
Explanation: Target Function on Arbitrary Dimension
End of explanation
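A quick sanity check on this target (re-declared here so the snippet is self-contained): the second Gaussian carries weight 100, so the global minimum sits near x = 10 with a value just below -100, which is what the tests later expect.

```python
import numpy as np

def N(mu, sigma):
    return lambda x: np.exp(-np.sum(np.square((x - mu) / sigma)))

target = lambda x: -1 * (N(0, 5)(x) + 100 * N(10, 3)(x))
print(target(np.array([10.0])))  # about -100.02
```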
dim = 1
## Needs tuning
iterations = int(10 ** 3)
re_scaling = int(iterations / 10)
max_temperature = 1000
step_length = 1
sa = get_sa(dim, iterations, re_scaling, max_temperature, step_length)
Explanation: Test on 1-Dim
End of explanation
argmin = sa(target_function)
print('argmin = {0}'.format(argmin))
print('target(argmin) = {0}, which shall be about -100'.format(target_function(argmin)))
Explanation: Get argmin
End of explanation
def t(x):
return np.log(-1 * target_function(x))
step_list = np.arange(len(sa.chain))
t_lst = [t(_) for _ in sa.chain]
plt.plot(step_list, t_lst)
plt.xlabel('step')
plt.ylabel('log(-1 * value of target function)')
plt.show()
for i in range(dim):
x_lst = [_[i] for _ in sa.chain]
plt.plot(step_list, x_lst)
plt.xlabel('step')
plt.ylabel('x[{0}]'.format(i))
plt.show()
Explanation: Plot the MCMC
End of explanation
dim = 2
## Needs tuning
iterations = int(10 ** 3)
re_scaling = int(iterations / 10)
max_temperature = 1000
step_length = 1
sa = get_sa(dim, iterations, re_scaling, max_temperature, step_length)
Explanation: Conclusion
Splendid.
Test on 2-Dim
End of explanation
argmin = sa(target_function)
print('argmin = {0}'.format(argmin))
print('target(argmin) = {0}, which shall be about -100'.format(target_function(argmin)))
Explanation: Get argmin
End of explanation
def t(x):
return np.log(-1 * target_function(x))
step_list = np.arange(len(sa.chain))
t_lst = [t(_) for _ in sa.chain]
plt.plot(step_list, t_lst)
plt.xlabel('step')
plt.ylabel('log(-1 * value of target function)')
plt.show()
for i in range(dim):
x_lst = [_[i] for _ in sa.chain]
plt.plot(step_list, x_lst)
plt.xlabel('step')
plt.ylabel('x[{0}]'.format(i))
plt.show()
Explanation: Plot the MCMC
End of explanation
dim = 4
## Needs tuning
iterations = int(10 ** 6)
re_scaling = int(iterations / 100)
max_temperature = 1000
step_length = 3
sa = get_sa(dim, iterations, re_scaling, max_temperature, step_length)
Explanation: Conclusion
Splendid.
Test on 4-Dim
End of explanation
argmin = sa(target_function)
print('argmin = {0}'.format(argmin))
print('target(argmin) = {0}, which shall be about -100'.format(target_function(argmin)))
p = np.argmin([target_function(_) for _ in sa.chain])
argmin = sa.chain[p]
print(argmin)
Explanation: Get argmin
End of explanation
def t(x):
return np.log(-1 * target_function(x))
step_list = np.arange(len(sa.chain))
t_lst = [t(_) for _ in sa.chain]
plt.plot(step_list, t_lst)
plt.xlabel('step')
plt.ylabel('log(-1 * value of target function)')
plt.show()
for i in range(dim):
x_lst = [_[i] for _ in sa.chain]
plt.plot(step_list, x_lst)
plt.xlabel('step')
plt.ylabel('x[{0}]'.format(i))
plt.show()
for axis_to_plot in range(dim):
x_lst = [_[axis_to_plot] for _ in sa.chain]
plt.plot(x_lst, t_lst)
plt.xlabel('x[{0}]'.format(axis_to_plot))
plt.ylabel('log(-1 * value of target function)')
plt.show()
Explanation: Plot the MCMC
End of explanation |
6,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nucleosynthetic yields
These are key to every chemical evolution model. Chempy supports three nucleosynthetic channels at the moment
Step1: Hyper Nova (HN) is only provided for Nomoto 2013 CC-SN yields and it is mixed 50/50 with it for stars with mass >= 25 Msun
Step2: Elements availability
Usually, not all elements are provided by a yield table. We have a handy plotting routine to show which elements are given. We check for the default and the alternative yield table.
Step3: CC-SN yields
Here we visualise the yield in [X/Fe] for the whole grid in masses and metallicities for two different yields sets
- Interestingly CC-SN ejecta can be Solar in their alpha-enhancement for low-mass progenitors (=13Msun)
- This effect is even stronger for the Chieffi04 yields
Step4: Yield comparison
We can plot the differences of the two yield tables for different elements (They are copied into the output/ folder). Here only the result for Ti is displayed.
Step5: AGB yield comparison
We have a look at the Carbon and Nitrogen yields.
We see that high-mass AGB stars produce a smaller fraction of C than low-mass AGB stars, and that it is the other way around for N. The C/N ratio should be IMF sensitive.
Step6: Yield table query and remnant fraction
Here you see how the yield tables are queried (the metallicity accesses the yield table)
For net yields, the remnant fraction plus the 'unprocessed mass in winds' fraction sums to unity.
The changes come from destroyed Hydrogen that is fused into other elements.
Step7: SN Ia yields
Here we see that the SNIa ejecta differ quite strongly for our two yield tables
%pylab inline
from Chempy.parameter import ModelParameters
from Chempy.yields import SN2_feedback, AGB_feedback, SN1a_feedback, Hypernova_feedback
from Chempy.infall import PRIMORDIAL_INFALL, INFALL
# This loads the default parameters, you can check and change them in paramter.py
a = ModelParameters()
# Implemented SN Ia yield tables
a.yield_table_name_1a_list
# AGB yields implemented
a.yield_table_name_agb_list
# CC-SN yields implemented
a.yield_table_name_sn2_list
# Hypernova yields (is mixed with Nomoto2013 CC-SN yields for stars more massive than 25Msun)
a.yield_table_name_hn_list
# Here we show the available mass and metallicity range for each yield set
# First for CC-SNe
print('Available CC-SN yield parameter range')
for item in a.yield_table_name_sn2_list:
basic_sn2 = SN2_feedback()
getattr(basic_sn2, item)()
print('----------------------------------')
print('yield table name: ',item)
print('provided masses: ', basic_sn2.masses)
print('provided metallicities',basic_sn2.metallicities)
Explanation: Nucleosynthetic yields
These are key to every chemical evolution model. Chempy supports three nucleosynthetic channels at the moment:
- Core-Collapse Supernova (CC-SN)
- Supernova of type Ia (SN Ia)
- Winds from Asymptotic Giant Branch phase of stars (AGB)
End of explanation
# Then for Hypernovae
print('Available HN yield parameter range')
for item in a.yield_table_name_hn_list:
basic_hn = Hypernova_feedback()
getattr(basic_hn, item)()
print('----------------------------------')
print('yield table name: ',item)
print('provided masses: ', basic_hn.masses)
print('provided metallicities',basic_hn.metallicities)
# Here for AGB stars
print('Available AGB yield parameter range')
for item in a.yield_table_name_agb_list:
basic_agb = AGB_feedback()
getattr(basic_agb, item)()
print('----------------------------------')
print('yield table name: ',item)
print('provided masses: ', basic_agb.masses)
print('provided metallicities',basic_agb.metallicities)
# And for SN Ia
print('Available SN Ia yield parameter range')
for item in a.yield_table_name_1a_list:
basic_1a = SN1a_feedback()
getattr(basic_1a, item)()
print('----------------------------------')
print('yield table name: ',item)
print('provided masses: ', basic_1a.masses)
print('provided metallicities',basic_1a.metallicities)
from Chempy.data_to_test import elements_plot
from Chempy.solar_abundance import solar_abundances
Explanation: Hyper Nova (HN) is only provided for Nomoto 2013 CC-SN yields and it is mixed 50/50 with it for stars with mass >= 25 Msun
End of explanation
# To get the element list we initialise the solar abundance class
basic_solar = solar_abundances()
# we load the default yield set:
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "Nomoto2013")()
basic_1a = SN1a_feedback()
getattr(basic_1a, "Seitenzahl")()
basic_agb = AGB_feedback()
getattr(basic_agb, "Karakas_net_yield")()
#Now we plot the elements available for the default yield set and which elements are available for specific surveys and come from which nucleosynthetic channel
elements_plot('default', basic_agb.elements,basic_sn2.elements,basic_1a.elements,['C','N','O'], basic_solar.table,40)
# Then we load the alternative yield set:
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "chieffi04")()
basic_1a = SN1a_feedback()
getattr(basic_1a, "Thielemann")()
basic_agb = AGB_feedback()
getattr(basic_agb, "Ventura_net")()
#And again plot the elements available
elements_plot('alternative', basic_agb.elements,basic_sn2.elements,basic_1a.elements,['C','N','O'], basic_solar.table,40)
Explanation: Elements availability
Usually, not all elements are provided by a yield table. We have a handy plotting routine to show which elements are given. We check for the default and the alternative yield table.
End of explanation
# We need solar abundances for normalisation of the feedback
basic_solar.Asplund09()
# Then we plot the [Mg/Fe] of Nomoto+ 2013 for all masses and metallicities
from Chempy.data_to_test import yield_plot
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "Nomoto2013")()
yield_plot('Nomoto+2013', basic_sn2, basic_solar, 'Mg')
# And we plot the same for Chieffi+ 2004 CC-yields
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "chieffi04")()
yield_plot('Chieffi+04', basic_sn2, basic_solar, 'Mg')
Explanation: CC-SN yields
Here we visualise the yield in [X/Fe] for the whole grid in masses and metallicities for two different yields sets
- Interestingly CC-SN ejecta can be Solar in their alpha-enhancement for low-mass progenitors (=13Msun)
- This effect is even stronger for the Chieffi04 yields
End of explanation
# Now we plot a comparison for different elements between Nomoto+ 2013 and Chieffi+ 2004 CC-yields:
# You can look into the output/ folder and see the comparison for all those elements
from Chempy.data_to_test import yield_comparison_plot
basic_sn2 = SN2_feedback()
getattr(basic_sn2, "Nomoto2013")()
basic_sn2_chieffi = SN2_feedback()
getattr(basic_sn2_chieffi, "chieffi04")()
for element in ['C', 'N', 'O', 'Mg', 'Ca', 'Na', 'Al', 'Mn','Ti']:
yield_comparison_plot('Nomoto13', 'Chieffi04', basic_sn2, basic_sn2_chieffi, basic_solar, element)
Explanation: Yield comparison
We can plot the differences of the two yield tables for different elements (They are copied into the output/ folder). Here only the result for Ti is displayed.
End of explanation
# We can also plot a comparison between Karakas+ 2010 and Ventura+ 2013 AGB-yields
# Here we plot the fractional N yield
from Chempy.data_to_test import fractional_yield_comparison_plot
basic_agb = AGB_feedback()
getattr(basic_agb, "Karakas_net_yield")()
basic_agb_ventura = AGB_feedback()
getattr(basic_agb_ventura, "Ventura_net")()
fractional_yield_comparison_plot('Karakas10', 'Ventura13', basic_agb, basic_agb_ventura, basic_solar, 'N')
#The next line produces an error in the 0.2 version. Needs checking
#fractional_yield_comparison_plot('Karakas10', 'Ventura13', basic_agb, basic_agb_ventura, basic_solar, 'C')
Explanation: AGB yield comparison
We have a look at the Carbon and Nitrogen yields.
We see that high-mass AGB stars produce a smaller fraction of C than low-mass AGB stars, and that it is the other way around for N. The C/N ratio should be IMF sensitive.
End of explanation
# Different entries of the yield table are queried
print('Mass, Remnant mass fraction, Unprocessed mass in winds fraction, destroyed Hydrogen of total mass')
for i in range(len(basic_agb.masses)):
print(basic_agb.table[0.02]['Mass'][i],basic_agb.table[0.02]['mass_in_remnants'][i],basic_agb.table[0.02]['unprocessed_mass_in_winds'][i],basic_agb.table[0.02]['H'][i])
Explanation: Yield table query and remnant fraction
Here you see how the yield tables are queried (the metallicity accesses the yield table)
For net yields, the remnant fraction plus the 'unprocessed mass in winds' fraction sums to unity.
The changes come from destroyed Hydrogen that is fused into other elements.
End of explanation
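The bookkeeping in that statement can be checked with a toy row. The numbers below are made up for illustration only, not taken from the Karakas/Ventura tables:

```python
# Hypothetical net-yield row: fractions of the star's initial mass.
row = {'mass_in_remnants': 0.35,
       'unprocessed_mass_in_winds': 0.65,
       'H': -0.05, 'He': 0.03, 'C': 0.01, 'N': 0.01}

# Remnant plus unprocessed wind mass account for the full initial mass.
assert abs(row['mass_in_remnants'] + row['unprocessed_mass_in_winds'] - 1.0) < 1e-12

# Net element changes sum to zero: the destroyed H reappears as He, C and N.
net_change = sum(v for k, v in row.items()
                 if k not in ('mass_in_remnants', 'unprocessed_mass_in_winds'))
assert abs(net_change) < 1e-12
```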
# Here we compare the yields for different iron-peak elements for Seitenzahl+ 2013 and Thielemann+ 2003 SNIa tables
basic_1a = SN1a_feedback()
getattr(basic_1a, 'Seitenzahl')()
basic_1a_alternative = SN1a_feedback()
getattr(basic_1a_alternative, 'Thielemann')()
print('Mass fraction of SN1a ejecta: Cr, Mn, Fe and Ni')
print('Seitenzahl2013')
print(basic_1a.table[0.02]['Cr'],basic_1a.table[0.02]['Mn'],basic_1a.table[0.02]['Fe'],basic_1a.table[0.02]['Ni'])
print('Thielemann2003')
print(basic_1a_alternative.table[0.02]['Cr'],basic_1a_alternative.table[0.02]['Mn'],basic_1a_alternative.table[0.02]['Fe'],basic_1a_alternative.table[0.02]['Ni'])
Explanation: SN Ia yields
Here we see that the SNIa ejecta differ quite strongly for our two yield tables
End of explanation |
6,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EAS Testing - Antutu benchmark on Android
The goal of this experiment is to run benchmarks on a Hikey running Android with an EAS kernel and collect results. The analysis phase will consist in comparing EAS with other schedulers, that is, comparing the sched governor with
Step1: Test Environment set up
If more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
Step2: Support Functions
This set of support functions will help us run the benchmark using different CPUFreq governors.
Step3: Run Antutu and collect scores
Step4: After running the benchmark for the specified governors we can show the scores | Python Code:
import logging
reload(logging)
log_fmt = '%(asctime)-9s %(levelname)-8s: %(message)s'
logging.basicConfig(format=log_fmt)
# Change to info once the notebook runs ok
logging.getLogger().setLevel(logging.INFO)
%pylab inline
import copy
import os
from time import sleep
from subprocess import Popen
import pandas as pd
# Support to access the remote target
import devlib
from env import TestEnv
# Support for trace events analysis
from trace import Trace
# Support for FTrace events parsing and visualization
import trappy
Explanation: EAS Testing - Antutu benchmark on Android
The goal of this experiment is to run benchmarks on a Hikey running Android with an EAS kernel and collect results. The analysis phase will consist in comparing EAS with other schedulers, that is, comparing the sched governor with:
- interactive
- performance
- powersave
- ondemand
The benchmark we will be using is Antutu. You will need to manually install the app on the Android device in order to run this Notebook.
When opening Antutu for the first time you will need to install the work benchmark from inside the app.
End of explanation
# Setup a target configuration
my_target_conf = {
# Target platform and board
"platform" : 'android',
# Add target support
"board" : 'hikey',
# Device ID
#"device" : "00b1346f0878ccb1",
# Define devlib modules to load
"modules" : [
'cpufreq' # enable CPUFreq support
],
}
my_tests_conf = {
# Folder where all the results will be collected
"results_dir" : "Android_Antutu",
# Platform configurations to test
"confs" : [
{
"tag" : "antutu",
"flags" : "ftrace", # Enable FTrace events
"sched_features" : "ENERGY_AWARE", # enable EAS
},
],
}
# Initialize a test environment using:
# the provided target configuration (my_target_conf)
# the provided test configuration (my_test_conf)
te = TestEnv(target_conf=my_target_conf, test_conf=my_tests_conf)
target = te.target
Explanation: Test Environment set up
If more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
End of explanation
def set_performance():
target.cpufreq.set_all_governors('performance')
def set_powersave():
target.cpufreq.set_all_governors('powersave')
def set_interactive():
target.cpufreq.set_all_governors('interactive')
def set_sched():
target.cpufreq.set_all_governors('sched')
def set_ondemand():
target.cpufreq.set_all_governors('ondemand')
for cpu in target.list_online_cpus():
tunables = target.cpufreq.get_governor_tunables(cpu)
target.cpufreq.set_governor_tunables(
cpu,
'ondemand',
**{'sampling_rate' : tunables['sampling_rate_min']}
)
# CPUFreq configurations to test
confs = {
'performance' : {
'label' : 'prf',
'set' : set_performance,
},
# 'powersave' : {
# 'label' : 'pws',
# 'set' : set_powersave,
# },
'interactive' : {
'label' : 'int',
'set' : set_interactive,
},
'sched' : {
'label' : 'sch',
'set' : set_sched,
},
# 'ondemand' : {
# 'label' : 'odm',
# 'set' : set_ondemand,
# }
}
# The set of results for each comparison test
results = {}
def check_packages(pkgname):
try:
output = target.execute('pm list packages -f | grep -i {}'.format(pkgname))
except Exception:
        raise RuntimeError('Package: [{}] not available on target'.format(pkgname))
# Check for specified PKG name being available on target
#adb -s 0123456789 shell "am kill-all"
#adb -s 0123456789 shell "am start -W -n com.antutu.ABenchMark/.ABenchMarkStart"
#adb shell "am force-stop com.antutu.ABenchMark"
#check_packages('com.futuremark.pcmark.android.benchmark')
check_packages('com.antutu.ABenchMark')
def pcmark_run(exp_dir):
# Unlock device screen (assume no password required)
target.execute('input keyevent 82')
# Start PCMark on the target device
# target.execute('monkey -p com.futuremark.pcmark.android.benchmark -c android.intent.category.LAUNCHER 1')
target.execute('am start -W -n com.antutu.ABenchMark/.ABenchMarkStart')
# Wait few seconds to make sure the app is loaded
sleep(5)
# Flush entire log
target.clear_logcat()
# Run performance workload (assume screen is vertical)
target.execute('input tap 512 200')
# Wait for completion (7 minutes in total) and collect log
log_file = os.path.join(exp_dir, 'log.txt')
# Wait 5 minutes
sleep(300)
# Start collecting the log
with open(log_file, 'w') as log:
        logcat = Popen('adb logcat com.antutu.ABenchMark/.ABenchMarkStart *:S',
                       stdout=log,
                       shell=True)
    # Wait another 100 seconds for the benchmark to complete
sleep(100)
# Terminate logcat
logcat.kill()
# Get scores from logcat
score_file = os.path.join(exp_dir, 'score.txt')
os.popen('grep -o "PCMA_.*_SCORE .*" {} | sed "s/ = / /g" | sort -u > {}'.format(log_file, score_file))
# Close application
target.execute('am force-stop com.antutu.ABenchMark')
return score_file
def antutu_run(exp_dir):
!wa run antutu.yaml -f -d $exp_dir
score_file = exp_dir+"/results.csv"
print score_file
import csv
from collections import defaultdict
def experiment(governor, exp_dir):
os.system('mkdir -p {}'.format(exp_dir));
logging.info('------------------------')
logging.info('Run workload using %s governor', governor)
confs[governor]['set']()
### Run the benchmark ###
#score_file = pcmark_run(exp_dir)
score_file = antutu_run(exp_dir)
# Save the score as a dictionary
scores = dict()
#with open(score_file, 'r') as f:
# lines = f.readlines()
# for l in lines:
# info = l.split()
# scores.update({info[0] : float(info[1])})
inFile = open('/home/lubaoquan/tools/lisa/lisa/results/Android_PCMark/'+governor+'/results.csv', 'r')
inLine = csv.reader(inFile)
next(inLine, None)
collectValue = defaultdict(list)
for row in inLine:
item = row[3]
value = row[4]
# collectValue[item].append(float(value))
# for item, value in collectValue.iteritems():
if item == 'execution_time':
continue
print item, value
scores.update({item : float(value)})
# return all the experiment data
return {
'dir' : exp_dir,
'scores' : scores,
}
Explanation: Support Functions
This set of support functions will help us run the benchmark using different CPUFreq governors.
End of explanation
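Under the hood, devlib's cpufreq module works by writing the governor name into each CPU's sysfs node. A rough, illustrative sketch of that mechanism (paths assume a typical Linux cpufreq layout, and the writes need root on a real target, so nothing here is executed):

```python
import glob

def governor_path(cpu):
    # sysfs node controlling the scaling governor for one CPU
    return '/sys/devices/system/cpu/cpu{}/cpufreq/scaling_governor'.format(cpu)

def set_all_governors(governor):
    # roughly what target.cpufreq.set_all_governors does for us
    for path in glob.glob(governor_path('[0-9]*')):
        with open(path, 'w') as f:
            f.write(governor)
```

devlib wraps this (and per-governor tunables) behind the much friendlier API used in the cells above.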
# Run the benchmark in all the configured governors
for governor in confs:
test_dir = os.path.join(te.res_dir, governor)
res = experiment(governor, test_dir)
results[governor] = copy.deepcopy(res)
Explanation: Run Antutu and collect scores
End of explanation
# Create results DataFrame
data = {}
for governor in confs:
data[governor] = {}
for score_name, score in results[governor]['scores'].iteritems():
data[governor][score_name] = score
#df = pd.DataFrame.from_dict(data)
#df
#data['performance']['CPU']=12405
#data['interactive']['CPU']=11000
#data['performance']['GPU']=2434
#data['interactive']['GPU']=2100
#data['performance']['UX']=12939
#data['interactive']['UX']=11100
#data['performance']['RAM']=4358
#data['interactive']['RAM']=4100
df = pd.DataFrame.from_dict(data)
df
df.plot(kind='bar', rot=45, figsize=(16,8),
title='Antutu CPU scores vs SchedFreq governors');
Explanation: After running the benchmark for the specified governors we can show the scores
End of explanation |
6,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick Setup
Step1: Download Data - MNIST
The MNIST dataset contains labeled images of handwritten digits, where each example is a 28x28 pixel image of grayscale values in the range [0,255] stretched out as 784 pixels, and each label is one of 10 possible digits in [0,9]. Here, we download 60,000 training examples, and 10,000 test examples, where the format is "label, pixel_1, pixel_2, ..., pixel_n".
Step3: SystemML Softmax Model
1. Train
Step5: 2. Compute Test Accuracy
Step6: 3. Extract Model Into Spark DataFrames For Future Use | Python Code:
# Create a SystemML MLContext object
from systemml import MLContext, dml
ml = MLContext(sc)
Explanation: Quick Setup
End of explanation
%%sh
mkdir -p data/mnist/
cd data/mnist/
curl -O https://pjreddie.com/media/files/mnist_train.csv
curl -O https://pjreddie.com/media/files/mnist_test.csv
Explanation: Download Data - MNIST
The MNIST dataset contains labeled images of handwritten digits, where each example is a 28x28 pixel image of grayscale values in the range [0,255] stretched out as 784 pixels, and each label is one of 10 possible digits in [0,9]. Here, we download 60,000 training examples, and 10,000 test examples, where the format is "label, pixel_1, pixel_2, ..., pixel_n".
End of explanation
training =
source("nn/examples/mnist_softmax.dml") as mnist_softmax
# Read training data
data = read($data, format="csv")
n = nrow(data)
# Extract images and labels
images = data[,2:ncol(data)]
labels = data[,1]
# Scale images to [0,1], and one-hot encode the labels
images = images / 255.0
labels = table(seq(1, n), labels+1, n, 10)
# Split into training (55,000 examples) and validation (5,000 examples)
X = images[5001:nrow(images),]
X_val = images[1:5000,]
y = labels[5001:nrow(images),]
y_val = labels[1:5000,]
# Train
epochs = 1
[W, b] = mnist_softmax::train(X, y, X_val, y_val, epochs)
script = dml(training).input("$data", "data/mnist/mnist_train.csv").output("W", "b")
W, b = ml.execute(script).get("W", "b")
Explanation: SystemML Softmax Model
1. Train
End of explanation
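The DML line `labels = table(seq(1, n), labels+1, n, 10)` in the training script one-hot encodes the digit labels. The same operation in NumPy, for intuition (NumPy is 0-indexed, hence no `+1`):

```python
import numpy as np

labels = np.array([3, 0, 9])          # three example digits
n = labels.shape[0]
one_hot = np.zeros((n, 10))
one_hot[np.arange(n), labels] = 1     # row i gets a 1 in column labels[i]

print(one_hot)
```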
testing =
source("nn/examples/mnist_softmax.dml") as mnist_softmax
# Read test data
data = read($data, format="csv")
n = nrow(data)
# Extract images and labels
X_test = data[,2:ncol(data)]
y_test = data[,1]
# Scale images to [0,1], and one-hot encode the labels
X_test = X_test / 255.0
y_test = table(seq(1, n), y_test+1, n, 10)
# Eval on test set
probs = mnist_softmax::predict(X_test, W, b)
[loss, accuracy] = mnist_softmax::eval(probs, y_test)
print("Test Accuracy: " + accuracy)
script = dml(testing).input("$data", "data/mnist/mnist_test.csv", W=W, b=b)
ml.execute(script)
Explanation: 2. Compute Test Accuracy
End of explanation
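The `mnist_softmax::eval` call above essentially compares the row-wise argmax of the predicted probabilities against the argmax of the one-hot labels. The NumPy equivalent, on a tiny made-up batch:

```python
import numpy as np

probs = np.array([[0.1, 0.9],
                  [0.8, 0.2],
                  [0.3, 0.7]])   # predicted class probabilities
y     = np.array([[0, 1],
                  [1, 0],
                  [1, 0]])       # one-hot labels

accuracy = np.mean(probs.argmax(axis=1) == y.argmax(axis=1))
print(accuracy)   # 2 of 3 predictions match
```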
W_df = W.toDF()
b_df = b.toDF()
W_df, b_df
Explanation: 3. Extract Model Into Spark DataFrames For Future Use
End of explanation |
6,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 5 - Crafting the public interface.
Learning Objectives
Explain what a public interface is
Discuss the advantages of defining a public interface
Compare different public interfaces
Design a simple public interface
Inheritance
Last week we looked at inheritance, building a general class that we could then extend with additional functionality for special situations.
Each of the classes we create inheriting from our general class can be thought of as having an 'is-a' relationship with the general class. For example, looking at our Item example from last week, Equipment is an Item and Consumable is an Item.
Step5: Composition
In week 3 we took example projects and broke them down into a collection of different classes. Many of you chose the cookbook example for the assignment and questioned whether things like ingredients should be attributes on the recipe class or classes in their own right. Often the answer is both. These are the interactions that change a collection of different classes into a functioning program. This is called composition. The Recipe object is a composite object: it has ingredients, it has instructions, etc.
This week we will look at how we can design our classes to be easy to use, for both programmer-class and class-class interactions.
Step6: This has the basic functionality implemented but there are some improvements we can make.
Before we look at making changes we can seek inspiration. Requests and Pandas are two packages well regarded for having well implemented interfaces.
Requests
Step7: The API documentation for requests
The Response class
Some useful features
Step12: The API documentation for the DataFrame object.
The actual code.
Some useful features
Step17: Viewing the ingredients now looks much better. Let's now look at the get_nutrition method.
There are still a number of areas that could be improved
When we call get_nutrition it is not clear what the different values returned actually are
We don't use the get_nutrition method when calculating the nutrition values in the Recipe class
There is no way to add additional types of nutrient
Ingredient and Recipe return different types from get_nutrition, tuple and list respectively
Recipe could not be used as an ingredient for another Recipe
Step18: WSGI
The value of building and documenting a interface to our code is not unique to object oriented programming.
Next week we will look at creating websites as an alternative to command line programs and GUIs. Python has a rich ecosystem of web servers and frameworks for creating web applications. Importantly, the vast majority use a common interface called WSGI.
WSGI is based on a simple exchange. The example below use the wsgiref package for the web server with the application implemented without using external packages. Next week, we will look at some of the more commonly used web servers and use a web framework to develop a more substantial web project.
Step24: Assignments
Modify the Ingredient and Recipe classes so that the following code works. | Python Code:
class Item(object):
def __init__(self, name, description, location):
self.name = name
self.description = description
self.location = location
def update_location(self, new_location):
pass
class Equipment(Item):
pass
class Consumable(Item):
def __init__(self, name, description, location, initial_quantity, current_quantity, storage_temp, flammability):
self.name = name
self.description = description
self.location = location
self.initial_quantity = initial_quantity
self.current_quantity = current_quantity
self.flammability = flammability
def update_quantity_remaining(self, amount):
pass
Explanation: Week 5 - Crafting the public interface.
Learning Objectives
Explain what a public interface is
Discuss the advantages of defining a public interface
Compare different public interfaces
Design a simple public interface
Inheritance
Last week we looked at inheritance, building a general class that we could then extend with additional functionality for special situations.
Each of the classes we create inheriting from our general class can be thought of as having an 'is-a' relationship with the general class. For example, looking at our Item example from last week, Equipment is an Item and Consumable is an Item.
End of explanation
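The 'is-a' relationship is something Python can check directly with issubclass and isinstance. A minimal sketch using simplified re-definitions of the classes above (the full versions take more constructor arguments):

```python
class Item:
    def __init__(self, name):
        self.name = name

class Equipment(Item):
    pass

class Consumable(Item):
    pass

assert issubclass(Equipment, Item)            # Equipment is an Item
assert isinstance(Consumable('Agar'), Item)   # a Consumable instance is an Item
```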
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: Composition
In week 3 we took example projects and broke them down into a collection of different classes. Many of you chose the cookbook example for the assignment and questioned whether things like ingredients should be attributes on the recipe class or classes in their own right. Often the answer is both. These are the interactions that change a collection of different classes into a functioning program. This is called composition. The Recipe object is a composite object: it has ingredients, it has instructions, etc.
This week we will look at how we can design our classes to be easy to use, for both programmer-class and class-class interactions.
End of explanation
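To isolate the idea of a 'has-a' relationship from the cookbook details, here is a deliberately tiny sketch (the names are made up for illustration): the composite object holds a component and delegates to it, with no inheritance involved:

```python
class Engine:
    def start(self):
        return 'started'

class Car:
    def __init__(self):
        self.engine = Engine()        # Car *has an* Engine (composition)

    def start(self):
        return self.engine.start()    # delegate the work to the component

assert not issubclass(Car, Engine)    # no is-a relationship here...
assert Car().start() == 'started'     # ...yet behaviour is composed from parts
```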
import requests
r = requests.get('https://api.github.com/repos/streety/biof509/events')
print(r.status_code)
print(r.headers['content-type'])
print(r.text[:1000])
print(r.json()[0]['payload']['commits'][0]['message'])
type(r)
Explanation: This has the basic functionality implemented but there are some improvements we can make.
Before we look at making changes we can seek inspiration. Requests and Pandas are two packages well regarded for having well implemented interfaces.
Requests: HTTP for Humans
Requests is a package used for making HTTP requests. There are options in the python standard library for making http requests but they can seem difficult to use.
End of explanation
import pandas as pd
data = pd.DataFrame([[0,1,2,3], [4,5,6,7], [8,9,10,11]], index=['a', 'b', 'c'], columns=['col1', 'col2', 'col3', 'col4'])
data
print(data.shape)
print(data['col1'])
print(data.col1)
import matplotlib.pyplot as plt
%matplotlib inline
data.plot()
data.to_csv('Wk05-temp.csv')
data2 = pd.read_csv('Wk05-temp.csv', index_col=0)
print(data2)
Explanation: The API documentation for requests
The Response class
Some useful features:
* property
Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and
data analysis tools for the Python programming language.
End of explanation
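The `property` feature mentioned above is how a class can expose a computed value as a plain attribute, the way requests exposes `r.text`. A toy stand-in, not the real requests implementation:

```python
class Response:
    def __init__(self, content, encoding='utf-8'):
        self._content = content       # private-by-convention raw bytes
        self.encoding = encoding

    @property
    def text(self):
        # computed on access, but read like a plain attribute: r.text
        return self._content.decode(self.encoding)

r = Response(b'{"ok": true}')
assert r.text == '{"ok": true}'
```

Callers never see the decoding step; they just read an attribute, which keeps the public interface small and the implementation free to change.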
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def __repr__(self):
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name, self.carbs, self.protein, self.fat)
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: The API documentation for the DataFrame object.
The actual code.
Some useful features:
* classmethod
* property
* __getitem__
* Public and private attributes/methods
* __getattr__
Cookbook
We can now return to our cookbook example.
Displaying the ingredients needs to be improved.
End of explanation
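`__getitem__` and `__getattr__` from the list above are what let pandas support both `data['col1']` and `data.col1`. A toy stand-in showing the same pattern (not real pandas):

```python
class Table:
    def __init__(self, columns):
        self._columns = columns

    def __getitem__(self, key):        # enables data['col1']
        return self._columns[key]

    def __getattr__(self, name):       # only called when normal lookup fails
        try:
            return self._columns[name]
        except KeyError:
            raise AttributeError(name)

data = Table({'col1': [0, 4, 8]})
assert data['col1'] == data.col1 == [0, 4, 8]
```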
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def __repr__(self):
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name, self.carbs, self.protein, self.fat)
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: Viewing the ingredients now looks much better. Let's now look at the get_nutrition method.
There are still a number of areas that could be improved
When we call get_nutrition it is not clear what the different values returned actually are
We don't use the get_nutrition method when calculating the nutrition values in the Recipe class
There is no way to add additional types of nutrient
Ingredient and Recipe return different types from get_nutrition, tuple and list respectively
Recipe could not be used as an ingredient for another Recipe
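One possible direction for some of these issues — a sketch, not the required solution, and the `Nutrition` helper is a made-up name — is to return a `namedtuple`, which makes the meaning of each value explicit while still behaving like a tuple:

```python
from collections import namedtuple

# Hypothetical helper: naming the fields answers "what do these values mean?"
Nutrition = namedtuple('Nutrition', ['carbs', 'protein', 'fat'])

flour = Nutrition(carbs=0.77, protein=0.10, fat=0.01)
amount = 820

# Scaling still works positionally, but each field is now self-describing
scaled = Nutrition(*(amount * v for v in flour))
print(scaled.carbs)
```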
End of explanation
!cat Wk05-wsgi.py
Explanation: WSGI
The value of building and documenting an interface to our code is not unique to object-oriented programming.
Next week we will look at creating websites as an alternative to command line programs and GUIs. Python has a rich ecosystem of web servers and frameworks for creating web applications. Importantly, the vast majority use a common interface called WSGI.
WSGI is based on a simple exchange. The example below uses the wsgiref package for the web server, with the application implemented without external packages. Next week, we will look at some of the more commonly used web servers and use a web framework to develop a more substantial web project.
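The course file `Wk05-wsgi.py` is not reproduced here; as a minimal sketch of the WSGI exchange (assuming nothing about that file's contents), an application is just a callable taking `environ` and `start_response`:

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # A WSGI app receives the request environ, calls start_response with a
    # status line and headers, and returns an iterable of bytes.
    status = '200 OK'
    headers = [('Content-type', 'text/plain; charset=utf-8')]
    start_response(status, headers)
    body = 'Hello from {0}'.format(environ.get('PATH_INFO', '/'))
    return [body.encode('utf-8')]

# To actually serve it:
# wsgiref.simple_server.make_server('', 8000, application).serve_forever()

# Exercise the app without a server, using a fake start_response:
environ = {}
setup_testing_defaults(environ)   # fills in PATH_INFO='/', REQUEST_METHOD='GET', ...
captured = {}
def start_response(status, headers):
    captured['status'] = status
result = application(environ, start_response)
print(captured['status'])   # 200 OK
print(b''.join(result))     # b'Hello from /'
```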
End of explanation
class Ingredient(object):
    """The ingredient object that contains nutritional information"""
def __init__(self, name, *args, **kwargs):
self.name = name
self.nums = []
        for a in args:
if isinstance(a, dict):
for key in a.keys():
setattr(self, key, a[key])
            elif isinstance(a, (int, float)):  # ints and floats are handled identically
                self.nums.append(a)
                if len(self.nums) in [3, 4]:
                    for n, val in zip(['carbs', 'protein', 'fat', 'cholesterol'], self.nums):
                        setattr(self, n, val)
else:
print('Need correct nutritional information format')
def __repr__(self):
if getattr(self, 'cholesterol', False):
return 'Ingredient({0}, {1}, {2}, {3}, {4})'.format(self.name,
self.carbs,
self.protein,
self.fat,
self.cholesterol)
else:
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name,
self.carbs,
self.protein,
self.fat)
    def get_nutrition(self):
        """Returns the nutritional information for the ingredient"""
        return (self.carbs, self.protein, self.fat,
                getattr(self, 'cholesterol', 0))
def get_name(self):
        """Returns the ingredient name"""
return self.name
class Recipe(object):
    """The Recipe object containing the ingredients"""
def __init__(self, name, *ingredients):
self.name = name
        self.ingredients = ingredients[0]
        self.number = len(self.ingredients)
self.nutrition_ = {'carbs': 0, 'protein': 0, 'fat':0, 'cholesterol':0}
def __repr__(self):
return 'Recipe({0}, {1})'.format(self.name, self.ingredients)
def get_nutrition(self):
        """Returns the nutritional information for the recipe"""
        nutrition = [0, 0, 0, 0]  # one slot per nutrient in self.nutrition_
for amount, ingredient in self.ingredients:
# print(type(ingredient), ingredient) # test
try:
if getattr(ingredient, 'cholesterol', False):
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
nutrition[3] += amount * ingredient.cholesterol
else:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
except AttributeError: # in case another recipe is in the ingredients (nested)
nu = ingredient.get_nutrition()
nu = [amount * x for x in nu]
nutrition[0] += nu[0]
nutrition[1] += nu[1]
nutrition[2] += nu[2]
nutrition[3] += nu[3]
return nutrition
@property
def nutrition(self):
facts = self.get_nutrition()
self.nutrition_['carbs'] = facts[0]
self.nutrition_['protein'] = facts[1]
self.nutrition_['fat'] = facts[2]
self.nutrition_['cholesterol'] = facts[3]
return self.nutrition_
def get_name(self):
return self.name
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
# Should be roughly [(820, Ingredient(Flour, 0.77, 0.1, 0.01)), (30, Ingredient(Oil, 0, 0, 1)),
# (36, Ingredient(Sugar, 1, 0, 0)), (7, Ingredient(Yeast, 0.3125, 0.5, 0.0625)), (560, Ingredient(Water, 0, 0, 0))]
print(bread.nutrition)
#Should be roughly {'carbs': 669.5875, 'protein': 85.5, 'fat': 38.6375} the order is not important
eggs = Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258, 'fat': 0.0994, 'cholesterol': 0.00423, 'awesome':100})
#eggs = Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258, 'fat': 0.0994})
#eggs = Ingredient('Egg', 0.0077, 0.1258, 0.0994, 0.00423)
print(eggs)
#Points to note:
# - The different call to Ingredient, you can use isinstance or type to change the
# behaviour depending on the arguments supplied
# - Cholesterol as an extra nutrient, your implementation should accept any nutrient
# - Use of Recipe (bread) as an ingredient
basic_french_toast = Recipe('Basic French Toast', [(300, Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258,
'fat': 0.0994, 'cholesterol': 0.00423})),
(0.25, bread)])
print(basic_french_toast.ingredients)
# Should be roughly:
# [(300, Ingredient(Egg, 0.0077, 0.1258, 0.0994)), (0.25, Recipe(Bread, [(820, Ingredient(Flour, 0.77, 0.1, 0.01)),
# (30, Ingredient(Oil, 0, 0, 1)), (36, Ingredient(Sugar, 1, 0, 0)), (7, Ingredient(Yeast, 0.3125, 0.5, 0.0625)),
# (560, Ingredient(Water, 0, 0, 0))]))]
# Note the formatting for the Recipe object, a __repr__ method will be needed
print(basic_french_toast.nutrition)
# Should be roughly {'protein': 59.115, 'carbs': 169.706875, 'cholesterol': 1.2690000000000001, 'fat': 39.479375000000005}
# The order is not important
Explanation: Assignments
Modify the Ingredient and Recipe classes so that the following code works.
End of explanation |
6,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now that we know how to read the data, we will add some processing to generate a gridded field.<br />
In the first part, we will use the loop on the files to store all the data, then in the second part we will compute a mean value for each grid point of the grid.
First we select a year and a month
Step1: The directory where we store the data files
Step2: Reading all the data
The loop is the same as in the two previous examples, except that now we keep the data in arrays that start out empty
Step3: Now we have the coordinates and the temperature for all the files, we can save them in a file that we can re-use later.<br/>
To do so, we will use the numpy savetxt function.
The function requires
Step4: Creation of a gridded field
There are many ways of getting a gridded field from sparsely distributed observations. We will show two simple applications.
Linear interpolation
With scipy, the module interpolate provide many functions to perform interpolations.<br />
In particular, griddata aims to interpolate unstructured D-dimensional data.
Step5: We need to specify the grid on which the observations have to be interpolated.<br/>
We construct a 5º by 5º grid between 70ºS and 70ºN latitude.
Step6: The interpolated field is obtained as
Step7: After the interpolation and prior to the plot, it is necessary to mask the NaN values that could have been generated.
Step8: Plotting
Same code as in the previous examples. pcolormesh (pseudocolor plot of a 2-D array) is used for the representation of the gridded field.
Step9: The chosen method (linear) is far from the best, but the goal is simply to illustrate the tool with the drifter data.<br />
We check that the min/max values are consistent with the data
Step10: Interpolation in a specific region
If we want to focus on a given area and increase the resolution, we just have to change the interpolation grid
Step11: On the map we will also add the data locations. | Python Code:
year = 2015
month = 7
%matplotlib inline
import glob
import os
import netCDF4
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from matplotlib import colors
Explanation: Now that we know how to read the data, we will add some processing to generate a gridded field.<br />
In the first part, we will use the loop on the files to store all the data, then in the second part we will compute a mean value for each grid point of the grid.
First we select a year and a month:
End of explanation
basedir = "~/DataOceano/MyOcean/INSITU_GLO_NRT_OBSERVATIONS_013_030/monthly/" + str(year) + str(month).zfill(2) + '/'
basedir = os.path.expanduser(basedir)
Explanation: The directory where we store the data files:
End of explanation
lon_checked, lat_checked, temperature_checked = np.array([]), np.array([]), np.array([])
filelist = sorted(glob.glob(basedir+'*.nc'))
k = 0
for datafiles in filelist:
# print datafiles
with netCDF4.Dataset(datafiles) as nc:
lon = nc.variables['LONGITUDE'][:]
lat = nc.variables['LATITUDE'][:]
depth = nc.variables['DEPH'][:]
POSITION_QC = nc.variables['POSITION_QC'][:]
if depth.shape[1] == 1:
try:
TEMP_QC = nc.variables['TEMP_QC'][:, 0]
temperature = nc.variables['TEMP'][:]
gooddata = np.where(np.logical_and((TEMP_QC == 1), (POSITION_QC == 1)))
temperature = temperature[gooddata]
temperature_checked = np.append(temperature_checked, temperature)
lon_checked = np.append(lon_checked, lon[gooddata])
lat_checked = np.append(lat_checked, lat[gooddata])
except KeyError:
k += 1
# print 'No variable temperature in this file'
Explanation: Reading all the data
The loop is the same as in the two previous examples, except that now we keep the data in arrays that start out empty:
End of explanation
datafile = './lon_lat_temperature_' + str(year) + '_' + str(month).zfill(2) + '.txt'
np.savetxt(datafile, np.c_[lon_checked, lat_checked, temperature_checked], fmt='%3.5f %3.5f %2.2f')
print('Data saved in file ' + datafile)
Explanation: Now we have the coordinates and the temperature for all the files, we can save them in a file that we can re-use later.<br/>
To do so, we will use the numpy savetxt function.
The function requires:
* the name of the file where the data will be written,
* the data to be saved, in the form of an array.
To create a unique array from the different arrays (coordinates and temperature), we use the function c_.
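To re-use the file later, `np.loadtxt` mirrors `savetxt`; a small self-contained round-trip sketch (with made-up coordinates rather than the drifter data):

```python
import os
import tempfile
import numpy as np

# Made-up sample coordinates and temperatures for the round trip
lon = np.array([10.5, -3.25])
lat = np.array([45.0, 60.5])
temp = np.array([18.2, 7.9])

fname = os.path.join(tempfile.mkdtemp(), 'lon_lat_temp.txt')
np.savetxt(fname, np.c_[lon, lat, temp], fmt='%3.5f %3.5f %2.2f')

# unpack=True returns one array per column, mirroring the saved layout
lon2, lat2, temp2 = np.loadtxt(fname, unpack=True)
print(temp2)
```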
End of explanation
from scipy.interpolate import griddata
Explanation: Creation of a gridded field
There are many ways of getting a gridded field from sparsely distributed observations. We will show two simple applications.
Linear interpolation
With scipy, the module interpolate provide many functions to perform interpolations.<br />
In particular, griddata aims to interpolate unstructured D-dimensional data.
End of explanation
lon_interp, lat_interp = np.meshgrid(np.arange(-180, 180.5, 5.), np.arange(-70, 70., 5.))
Explanation: We need to specify the grid on which the observations have to be interpolated.<br/>
We construct a 5º by 5º grid between 70ºS and 70ºN latitude.
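As a quick sketch of what `np.meshgrid` produces for these two coordinate vectors (shapes only, no interpolation):

```python
import numpy as np

lons = np.arange(-180, 180.5, 5.)   # 73 longitudes, -180 to 180 inclusive
lats = np.arange(-70, 70., 5.)      # 28 latitudes, -70 to 65 inclusive
lon2d, lat2d = np.meshgrid(lons, lats)

# Each output is a full 2-D grid: one row per latitude, one column per longitude
print(lon2d.shape)   # (28, 73)
```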
End of explanation
temperature_interp = griddata((lon_checked, lat_checked), temperature_checked, (lon_interp, lat_interp), method='linear')
Explanation: The interpolated field is obtained as:
End of explanation
temperature_interp = np.ma.masked_where(np.isnan(temperature_interp), temperature_interp)
Explanation: After the interpolation and prior to the plot, it is necessary to mask the NaN values that could have been generated.
End of explanation
fig = plt.figure(figsize=(12, 8))
tempmin, tempmax = 0., 30.
cmaptemp = plt.cm.RdYlBu_r
normtemp = colors.Normalize(vmin=tempmin, vmax=tempmax)
tempticks = np.arange(tempmin, tempmax+0.1,2.5)
m = Basemap(projection='moll', lon_0=0, resolution='c')
lon_interp_map, lat_interp_map = m(lon_interp, lat_interp)
pcm = m.pcolormesh(lon_interp_map, lat_interp_map, temperature_interp, cmap=cmaptemp, norm=normtemp)
cbar = plt.colorbar(pcm, extend='both', shrink=0.7)
cbar.set_label('$^{\circ}$C', rotation=0, ha='left')
m.drawcoastlines(linewidth=0.2)
m.fillcontinents(color = 'gray')
plt.title('Gridded temperature from surface drifters\n' + str(year) + '-' + str(month).zfill(2))
plt.show()
Explanation: Plotting
Same code as in the previous examples. pcolormesh (pseudocolor plot of a 2-D array) is used for the representation of the gridded field.
End of explanation
print(temperature_interp.min())
print(temperature_interp.max())
print(temperature_checked.min())
print(temperature_checked.max())
Explanation: The chosen method (linear) is far from the best, but the goal is simply to illustrate the tool with the drifter data.<br />
We check that the min/max values are consistent with the data:
End of explanation
lonmin, lonmax, latmin, latmax = -25., 35., 20.0, 60.
deltalon, deltalat = 2., 2.
lon_interp, lat_interp = np.meshgrid(np.arange(lonmin, lonmax, deltalon), np.arange(latmin, latmax, deltalat))
temperature_interp = griddata((lon_checked, lat_checked), temperature_checked, (lon_interp, lat_interp), method='linear')
temperature_interp = np.ma.masked_where(np.isnan(temperature_interp), temperature_interp)
Explanation: Interpolation in a specific region
If we want to focus on a given area and increase the resolution, we just have to change the interpolation grid:
End of explanation
fig = plt.figure(figsize=(12, 8))
m = Basemap(llcrnrlon=lonmin, llcrnrlat=latmin,
urcrnrlon=lonmax, urcrnrlat=latmax, resolution='i')
lon_interp_map, lat_interp_map = m(lon_interp, lat_interp)
lon_checked_map, lat_checked_map = m(lon_checked, lat_checked)
pcm = m.pcolormesh(lon_interp_map, lat_interp_map, temperature_interp, cmap=cmaptemp, norm=normtemp)
plt.plot(lon_checked_map, lat_checked_map, 'ko', ms=0.1)
cbar = plt.colorbar(pcm, extend='both', shrink=0.9)
cbar.set_label('$^{\circ}$C', rotation=0, ha='left')
m.drawcoastlines(linewidth=0.2)
m.fillcontinents(color = 'gray')
plt.title('Gridded temperature from surface drifters\n' + str(year) + '-' + str(month).zfill(2), fontsize=20)
plt.show()
Explanation: On the map we will also add the data locations.
End of explanation |
6,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
파이썬 기본 자료형 1부
파이썬 언어에서 사용되는 값들의 기본 자료형을 살펴본다.
변수에 할당될 수 있는 가장 단순한 자료형에는 네 종류가 있다
Step1: 변수를 선언하고 값을 바로 확인할 수 있다.
Step2: 파이썬을 계산기처럼 활용할 수도 있다.
Step3: 주의
Step4: 나머지를 계산하는 연산자는 % 이다.
Step5: 지수 계산
Step6: 변수 선언 및 활용
컴퓨터 프로그램을 데이터를 이용하여 다음과 같은 일들을 처리하기 위한 명령문들의 나열로 생각할 수 있다.
* 데이터 읽기
* 데이터 생성하기
* 데이터 계산하기
* 데이터 변환하기
* 데이터 정리하기
* 데이터 저장하기
특정 데이터를 조작하기 위해서는 해당 데이터를 저장하거나 불러올 수 있어야 한다. 그러기 위해서 변수를 활용한다.
변수를 일종의 그릇에 비유할 수 있으며, 변수에 할당된 데이터는 그릇에 담겨진 내용물에 해당한다.
파이썬에서 변수의 이름을 지으려면 기본적으로 세 가지 규칙을 따라야 한다.
반드시 영어 알파벳 문자(a-z,A-Z) 또는 밑줄기호(_)로 시작해야 하며, 이후에는 알파벳, 숫자(0-9), 밑줄기호가 임의로 사용될 수 있다.
파이썬 예약어(def, from, import 등)를 변수 이름으로 사용하면 안된다.
대소문자를 구분해야 한다
Step7: 예를 들어, C 언어의 경우 아래와 같이 선언해야 한다.
int a_number = 2
char a_word[] = 'dog'
변수에 할당된 값을 확인하기 위해 print() 함수를 이용한다.
Step8: 변수에 할당된 값의 자료형을 확인하려면 type() 함수를 호출한다.
Step9: 선언된 변수를 이용하여 연산을 할 수도 있다.
Step10: 연산의 결과를 변수에 할당할 수 있다. 해당 변수에는 연산의 결과만을 기억한다.
Step11: 계산된 결과의 자료형도 type() 함수를 이용하여 확인할 수 있다.
Step12: 문자열의 경우 덧셈과 곱셈 연산자를 사용할 수 있다.
Step13: 하지만 변수에 할당된 값의 자료형에 따라 연산의 가능여부가 결정된다.
예를 들어, 숫자의 문자열의 합은 정의되어 있지 않으며, 실행할 경우 오류가 발생한다.
Step14: 주의
Step15: 기본 자료형
파이썬에는 8개의 자료형이 미리 선언되어 있다. 그중 네 개는 단순자료형이며, 나머지 네 개는 컬렉션 자료형(모음 자료형)이다.
단순 자료형
하나의 값만을 대상으로 한다는 의미에서 단순 자료형이다. 즉, 정수 하나, 부동소수점 하나, 불리언 값 하나, 문자열 하나 등등.
정수(int)
부동소수점(float)
불리언 값(bool)
문자열(str)
컬렉션 자료형
여러 개의 값들을 하나로 묶어서 다룬다는 의미에서 컬렉션 자료형이다.
리스트(list)
튜플(tuple)
집합(set)
사전(dictionary)
여기서는 단순 자료형을 소개하고, 컬렉션 자료형은 이후에 다룬다.
정수(int)
일반적으로 알고 있는 정수(자연수, 0, 음의 정수)들의 자료형을 나타내면 덧셈, 뺄셈, 곱셈, 나눗셈 등의 일반 연산이 가능하다.
주의
Step16: 정수와 실수 사이에 강제로 형변환 가능하다. 실수를 정수로 변환하고자 할 경우 int() 함수를 사용한다. 그러면 소수점 이하는 버려진다.
Step17: 정수를 실수로 형변환하려면 float() 함수를 사용한다.
Step18: 주의
Step19: 키워드 관련 주의사항
지금까지 살펴보았듯이 float, int, print, type와 같은 단어는 녹색으로 표시되는데 이는 그 단어들이 파이썬에서 특별한 역할을 수행하는 키워드이기 때문이다.
그런 키워드를 재정의할 수는 있지만 하지 않는 것이 좋다.
혹여 실수로 아래와 같은 일을 할 수도 있는데 매우 조심해야 한다.
Step20: 즉, int() 함수의 본래의 정의가 사라졌다. 이럴 때는 아래와 같이 원래의 함수로 되돌릴 수 있다.
Step21: 연산자 우선순위
일반적으로 알려진 연산자들 사이의 우선순위를 알아야 한다.
줄여서 PEMDAS(펨다스)로 기억하면 좋다.
PEMDAS
Step22: 불리언 값(bool)
if 또는 while 문에서 사용되는 불리언 자료형에는 두 개의 값만 사용된다.
* True
* False
이 두 개의 값만을 이용하여 복잡한 프로그램을 구현할 수 있다.
예제
Step23: 두 개의 변수 선언을 아래와 같이 동시에 할 수 있다. 등호기호 왼편과 오른편에 사용되는 변수와 값의 개수가 동일해야 함에 주의한다.
Step24: 주의
Step25: 불리언 자료형의 변수를 이용하여 연산을 수행할 수도 있다.
Step26: 불리언 연산자 우선순위
not 연산자의 우선순위가 가장 높다.
Step27: 숫자 비교
일반적으로 사용하는 숫자들의 비교를 나타내는 연산자들은 다음과 같다. 리턴값은 모두 불리언 자료형이다.
!=
Step29: 연습문제
연습
두 숫자의 평균값을 구하는 함수를 아래와 같이 작성할 수 있다.
주의
Step30: 주의
Step32: 함수에 대한 정보를 얻고자 할 경우 help() 함수를 활용할 수 있다.
그러면 앞서 average 함수를 정의할 때 함께 적어 넣은 독스트링이 보여진다.
Step33: 연습
두 숫자 a와 b의 사이의 거리를 리턴하는 함수 distance(a, b)를 정의하라.
활용 예
Step35: abs 함수는 인자로 입력된 숫자의 절대값을 리턴하는 함수이다.
Step36: 연습
두 숫자의 기하평균(geometric mean)을 리턴하는 함수 geometric_mean(a, b) 함수를 정의하라.
두 숫자 a와 b의 기하평균을 c라 하면, 두 변의 길이가 각각 a와 b인 직사각형의 넓이와 변의 길이가 c인 정사각형의 넓이가 동일함을 의미한다.
활용 예
Step37: sqrt에 대해 알고 싶으면 help 함수를 활용한다.
help(math.sqrt)
Step39: 연습
바닥면적이 A이고 높이가 h인 피라미드의 부피를 리턴하는 함수 pyramid_volume(A, h)를 정의하라.
활용 예
Step40: 주의
Step42: 연습
초(second) 단위의 숫자를 받아서 일(day) 단위의 값으로 되돌려주는 seconds2days(n) 함수를 정의하라. 입력값은 int 또는 float 일 수 있으며 리턴값은 float 자료형이어야 한다.
활용 예
Step44: 파이썬3의 경우에는 아래와 같이 정의해도 된다.
def seconds2days(sec)
Step45: 연습
변의 길이가 각각 a, b, c인 삼각형의 면적 A를 계산하는 함수 triangle_area(a, b, c)를 정의하라.
다음 등식을 이용할 수 있다.
A = (s * (s - a) * (s - b) * (s - c)) ** 0.5
s = (a + b + c) / 2
아래 사이트 참조 | Python Code:
print("Hello World")
Explanation: 파이썬 기본 자료형 1부
파이썬 언어에서 사용되는 값들의 기본 자료형을 살펴본다.
변수에 할당될 수 있는 가장 단순한 자료형에는 네 종류가 있다:
정수 자료형(int):
..., -3, -2, -1, 0, 1, 2, 3, 등등
1 + 2, -2 * 3, 등등
부동소수점 자료형(float):
1.2, 0.333333, -1.2, -3.7680, 등등
2.0 \ 3.5, 3.555 + 3.4 * 7.9, 등등
불리언 자료형(bool): True, False를 포함하여 두 값으로 계산될 수 있는 값
예: 1 == 1, 2 < 3, 1 + 1 > 3 and 2 < 3, 등등
문자열 자료형(str):
'a', 'abc', 'enginneering', ... 등등
'abc' * 2, 'engineering' + 'math', 등등
이번 장 주요 내용:
정수, 부동소수점, 불리언 자료형을 소개. 문자열 자료형은 다음 장에서 다룸.
변수에 할당된 값과 그 값의 자료형을 알아내는 데에 사용하는 두 개의 함수의 기본적인 활용법
print() 함수: 변수에 할당된 값을 확인할 때 사용
type() 함수: 값의 자료형을 확인할 때 사용
특정 자료형과 관련하여 많이 사용되는 함수와 메소드 살펴보기
파이썬 명령어 기초 사용법
Spyder, IDLE 등을 사용하여 파이썬 명령어를 실행할 수 있다.
명령 프롬프트(prompt)는 보통 아래의 모양을 갖는다.
>>>
또는
In [1]:
파이썬은 "스크립트 언어"에 속한다. 즉, 코드를 작성한 후에 바로 실행시킬 수 있다.
C와 Java 등의 언어는 코드를 작성한 후에 코드가 들어 있는 파일을 컴파일하여 생성된 목적코드(object code)를 실행하기 때문에 컴파일 언어라고 불린다.
예를 들어, print() 함수를 이용하여 터미널 화면에 문자열 값을 보여주고 싶다면 단순히 아래와 같이 코드를 작성하고 실행하면 된다.
주의: print는 "출력하다", "화면에 보여주다", "인쇄하다" 등으로 번역한다. 반면에 함수를 정의할 때 사용하는 return은 "값을 돌려준다" "리턴한다" 등으로 번역하여 사용한다. print와 return은 사용 용도다 서로 완전히 다르다. 나중에 차이점을 배우게 된다.
End of explanation
a = 1 + 1
a
Explanation: 변수를 선언하고 값을 바로 확인할 수 있다.
End of explanation
2 + 3
a = 2 + 3
a + 1
42 - 15.3
100 * 11
7 / 2
Explanation: 파이썬을 계산기처럼 활용할 수도 있다.
End of explanation
7.0 / 2
Explanation: 주의:
파이썬2에서는 나눗셈 연산자(/)는 정수 자료형인 경우 몫을 계산한다. 반면에 부동소수점이 사용되면 부동소수점을 리턴한다.
파이썬3에서는 나눗셈 연산자(/)는 무조건 부동소수점을 리턴한다.
In [22]: 7 / 2
Out[22]: 3.5
End of explanation
7%5
Explanation: 나머지를 계산하는 연산자는 % 이다.
End of explanation
2 ** 3
9 ** 0.5
Explanation: 지수 계산: 예를 들어, 2의 3승을 계산하고자 할 때 사용한다.
End of explanation
# int a_number = 2
a_number = 2
a_word = 'dog'
Explanation: 변수 선언 및 활용
컴퓨터 프로그램을 데이터를 이용하여 다음과 같은 일들을 처리하기 위한 명령문들의 나열로 생각할 수 있다.
* 데이터 읽기
* 데이터 생성하기
* 데이터 계산하기
* 데이터 변환하기
* 데이터 정리하기
* 데이터 저장하기
특정 데이터를 조작하기 위해서는 해당 데이터를 저장하거나 불러올 수 있어야 한다. 그러기 위해서 변수를 활용한다.
변수를 일종의 그릇에 비유할 수 있으며, 변수에 할당된 데이터는 그릇에 담겨진 내용물에 해당한다.
파이썬에서 변수의 이름을 지으려면 기본적으로 세 가지 규칙을 따라야 한다.
반드시 영어 알파벳 문자(a-z,A-Z) 또는 밑줄기호(_)로 시작해야 하며, 이후에는 알파벳, 숫자(0-9), 밑줄기호가 임의로 사용될 수 있다.
파이썬 예약어(def, from, import 등)를 변수 이름으로 사용하면 안된다.
대소문자를 구분해야 한다: 'YOU', 'you', 'You', 'yOu'는 모두 다른 이름으로 처리된다.
'-', '+', '*','/' 등의 연산자 기호는 이름에 사용될 수 없다.
'@', '$', '?' 등의 기호도 사용되지 않는다.
변수 선언
변수에 특정 값을 할당하는 것을 변수 선언이라 부른다.
변수 선언은 아래 모양을 갖춘다.
변수이름 = 할당할 값
예를 들어 아래에서 a_number라는 변수이름에 정수 2가 할당되었고, a_word 변수에는 dog라는 문자열이 할당되었다.
주의:
변수를 생성하고자 할 때 값을 초기화하면 된다. 즉, 변수를 미리 선언할 필요가 없다. C와 Java와의 주요 차이점 중의 하나이다.
자료형을 선언할 필요가 없다. 변수의 자료형을 파이썬이 알아서 판단한다. 이를 동적 타이핑(dynamic typing)이라 한다.
End of explanation
print(a_number)
print(a_word)
Explanation: 예를 들어, C 언어의 경우 아래와 같이 선언해야 한다.
int a_number = 2
char a_word[] = 'dog'
변수에 할당된 값을 확인하기 위해 print() 함수를 이용한다.
End of explanation
type(a_number)
type(a_word)
Explanation: 변수에 할당된 값의 자료형을 확인하려면 type() 함수를 호출한다.
End of explanation
a_number + 7
(a_number * 6.0) / 5
Explanation: 선언된 변수를 이용하여 연산을 할 수도 있다.
End of explanation
first_result = 8 / 3.5
first_result
Explanation: 연산의 결과를 변수에 할당할 수 있다. 해당 변수에는 연산의 결과만을 기억한다.
End of explanation
type(first_result)
Explanation: 계산된 결과의 자료형도 type() 함수를 이용하여 확인할 수 있다.
End of explanation
"Bull " + a_word
a_word * 2
Explanation: 문자열의 경우 덧셈과 곱셈 연산자를 사용할 수 있다.
End of explanation
a_number + a_word
Explanation: 하지만 변수에 할당된 값의 자료형에 따라 연산의 가능여부가 결정된다.
예를 들어, 숫자의 문자열의 합은 정의되어 있지 않으며, 실행할 경우 오류가 발생한다.
End of explanation
print(a_number)
a_number = 5
print(a_number)
Explanation: 주의: 오류 내용을 초보자가 이해하기는 어렵다. 여기서는 자료형이 맞지 않아 오류가 발생할 경우에 TypeError가 발생한다는 사실만을 기억해 두면 좋다.
변수에 할당된 값은 변경이 가능하다. 원래 할당된 값을 변경할 수 있다는 의미에서 변수라 부른다. 변수가 아닌 숫자를 상수라 부른다.
End of explanation
new_float = 4.0
print(new_float)
Explanation: 기본 자료형
파이썬에는 8개의 자료형이 미리 선언되어 있다. 그중 네 개는 단순자료형이며, 나머지 네 개는 컬렉션 자료형(모음 자료형)이다.
단순 자료형
하나의 값만을 대상으로 한다는 의미에서 단순 자료형이다. 즉, 정수 하나, 부동소수점 하나, 불리언 값 하나, 문자열 하나 등등.
정수(int)
부동소수점(float)
불리언 값(bool)
문자열(str)
컬렉션 자료형
여러 개의 값들을 하나로 묶어서 다룬다는 의미에서 컬렉션 자료형이다.
리스트(list)
튜플(tuple)
집합(set)
사전(dictionary)
여기서는 단순 자료형을 소개하고, 컬렉션 자료형은 이후에 다룬다.
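A one-line preview of each of the four collection types, covered in detail later:

```python
a_list = [1, 2, 2, 3]           # list: ordered, mutable, duplicates allowed
a_tuple = (1, 2, 2, 3)          # tuple: ordered, immutable
a_set = {1, 2, 2, 3}            # set: unordered, duplicates removed
a_dict = {'one': 1, 'two': 2}   # dictionary: key -> value mapping

print(len(a_set))      # 3
print(a_dict['two'])   # 2
```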
정수(int)
일반적으로 알고 있는 정수(자연수, 0, 음의 정수)들의 자료형을 나타내면 덧셈, 뺄셈, 곱셈, 나눗셈 등의 일반 연산이 가능하다.
주의: 정수들의 나눗셈의 결과는 부동소수점이다.
파이썬3에서 처럼 정수들의 나눗셈이 부동소수점이 되도록 하려면 아래 명령어를 먼저 실행하면 된다.
최신 버젼인 파이썬3과의 호환성을 위해 필요할 수 있다.
from __future__ import division
In [4]: 8 / 5
Out[4]: 1.6
위 명령어를 실행한 후에 기존의 정수들의 나눗셈 연산을 위해서는 몫을 계산하는 연산자인 //을 사용하면 된다.
In [5]: 8 // 5
Out[5]: 1
In [5]: 8 % 5
Out[5]: 3
부동소수점(float)
부동소수점은 원래 실수를 컴퓨터에서 다루기 위해 개발되었으나 실제로는 유리수 일부만을 다룬다.
무리수인 원주율 pi의 경우에도 컴퓨터의 한계로 인해 소수점 이하 적당한 자리에서 끊어서 사용한다.
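A classic illustration of this limited precision: binary floats cannot represent 0.1 exactly, so equality tests on float arithmetic can surprise you:

```python
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Compare floats with a tolerance instead of ==
print(abs((0.1 + 0.2) - 0.3) < 1e-9)   # True
```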
End of explanation
int(4.8)
Explanation: 정수와 실수 사이에 강제로 형변환 가능하다. 실수를 정수로 변환하고자 할 경우 int() 함수를 사용한다. 그러면 소수점 이하는 버려진다.
End of explanation
float(2)
Explanation: 정수를 실수로 형변환하려면 float() 함수를 사용한다.
End of explanation
basic_int = 2
print(float(basic_int))
print(type(basic_int))
float_basic_int = float(basic_int)
print(type(float_basic_int))
Explanation: 주의: 변수를 형변환한다고 해서 변수에 할당된 값이 변하는 것은 아니다. 다만, 형변환한 값을 다른 변수에 저장해서 활용할 수는 있다.
End of explanation
int = 4
print("What have we done to int?", int)
int(5.0)
Explanation: 키워드 관련 주의사항
지금까지 살펴보았듯이 float, int, print, type와 같은 단어는 녹색으로 표시되는데 이는 그 단어들이 파이썬에서 특별한 역할을 수행하는 키워드이기 때문이다.
그런 키워드를 재정의할 수는 있지만 하지 않는 것이 좋다.
혹여 실수로 아래와 같은 일을 할 수도 있는데 매우 조심해야 한다.
End of explanation
del int
int(5.0)
Explanation: 즉, int() 함수의 본래의 정의가 사라졌다. 이럴 때는 아래와 같이 원래의 함수로 되돌릴 수 있다.
End of explanation
eqn1 = 2 * 3 - 2
print(eqn1)
eqn2 = -2 + 2 * 3
print( eqn2 )
eqn3 = -2 + (2 % 3)
print( eqn3 )
eqn4 = (.3 + 5) // 2
print(eqn4)
eqn5 = 2 ** 4 // 2
print(eqn5)
Explanation: 연산자 우선순위
일반적으로 알려진 연산자들 사이의 우선순위를 알아야 한다.
줄여서 PEMDAS(펨다스)로 기억하면 좋다.
PEMDAS:
* 괄호(Parentheses)
* 지수승(Exponents)
* 곱셈(Multiplication)
* 나눗셈(Division)
* 덧셈(Addition)
* 뺄셈(Subtraction).
왼쪽에 오는 연산자의 우선순위가 높다.
지수승을 나타내는 기호는 **이다.
End of explanation
puppy = True
print(puppy)
type(puppy)
puppies = False
Explanation: 불리언 값(bool)
if 또는 while 문에서 사용되는 불리언 자료형에는 두 개의 값만 사용된다.
* True
* False
이 두 개의 값만을 이용하여 복잡한 프로그램을 구현할 수 있다.
예제: 강아지를 한 마리만 갖고 있다고 가정하자.
이것을 표현하기 위해 puppy(강아지 한마리)라는 변수에 True를 할당하고, 여러 마리의 강아지를 뜻하는 puppies 변수에는 False를 할당한다.
End of explanation
puppy, puppies = True, False
print("Do I have a puppy?", puppy)
print("Do I have puppies?", puppies)
Explanation: 두 개의 변수 선언을 아래와 같이 동시에 할 수 있다. 등호기호 왼편과 오른편에 사용되는 변수와 값의 개수가 동일해야 함에 주의한다.
End of explanation
True and True
True and False
Explanation: 주의: 위에서 사용된 print함수의 사용법을 기억해둔다. print 함수는 인자를 여러 개 받을 수 있으며 그 값들을 차례대로 동시에 한 줄에 출력한다. 각각의 값들은 스페이스(space)로 구분되어진다.
불리언 연산자
and, not, or 세 개의 연산자를 이용하여 불리언 연산을 할 수 있다. 각 연산자의 의미는 일반적으로 알려진 것과 동일하다.
End of explanation
puppy and puppies
not puppies
not puppy
Explanation: 불리언 자료형의 변수를 이용하여 연산을 수행할 수도 있다.
End of explanation
puppy and not puppies
puppy or puppies
False or False
Explanation: 불리언 연산자 우선순위
not 연산자의 우선순위가 가장 높다.
End of explanation
4 == 4
4 == 5
4 != 2
4 != 4
4 > 2
4 > 4
4 >= 4
False or False
Explanation: 숫자 비교
일반적으로 사용하는 숫자들의 비교를 나타내는 연산자들은 다음과 같다. 리턴값은 모두 불리언 자료형이다.
!=: 다른지 여부를 판단
==: 같은지 여부를 판단
<=: 작거나 같은지 여부를 판단
>=: 크거나 같은지 여부를 판단
<: 작은지 여부를 판단
>: 큰지 여부를 판단
End of explanation
def average(a, b):
두 개의 숫자 a와 b가 주어졌을 때,
두 숫자의 평균을 리턴하는 함수
return (a + b) * 0.5
Explanation: 연습문제
연습
두 숫자의 평균값을 구하는 함수를 아래와 같이 작성할 수 있다.
주의: 함수에 대해서는 이후에 좀 더 자세히 다룬다. 여기서는 함수를 작성하는 방식에 주의한다.
함수 작성요령:
def 함수이름(인자1, 인자2, ..., 인자k):
함수본체
return 리턴값
End of explanation
average(10, 20)
average(10, 4)
Explanation: 주의:
큰 따옴표 세 개("""...""")로 둘러싸인 부분은 문서화를 위해 사용되며 주석으로 처리된다.
즉, 정의되는 함수의 의미와 역할에 대한 설명을 담는다. 영어로 독스트링(docstring)이라 부른다.
주석 등에 한글을 사용하고자 할 경우 아래 문장이 문서 맨 첫줄에 있어야 한다.
# coding: utf-8
End of explanation
help(average)
Explanation: 함수에 대한 정보를 얻고자 할 경우 help() 함수를 활용할 수 있다.
그러면 앞서 average 함수를 정의할 때 함께 적어 넣은 독스트링이 보여진다.
End of explanation
def distance(a, b):
return abs(a-b)
Explanation: 연습
두 숫자 a와 b의 사이의 거리를 리턴하는 함수 distance(a, b)를 정의하라.
활용 예:
```
In [11]: distance(3, 4)
Out[11]: 1
In [12]: distance(3, 1)
Out[12]: 2
```
아래 코드에서 pass 부분을 수정해서 채워야 한다.
def distance(a, b):
if-else문을 사용하지 않고도 가능하다.
pass
견본답안:
End of explanation
distance(3, 4)
distance(3, 1)
Explanation: abs 함수는 인자로 입력된 숫자의 절대값을 리턴하는 함수이다.
End of explanation
import math
def geometric_mean(a, b):
    c = math.sqrt(a * b)
    return c
Explanation: 연습
두 숫자의 기하평균(geometric mean)을 리턴하는 함수 geometric_mean(a, b) 함수를 정의하라.
두 숫자 a와 b의 기하평균을 c라 하면, 두 변의 길이가 각각 a와 b인 직사각형의 넓이와 변의 길이가 c인 정사각형의 넓이가 동일함을 의미한다.
활용 예:
In [ ]: geometric_mean(2, 2)
Out[ ]: 2.0
In [ ]: geometric_mean(2, 8)
Out[ ]: 4.0
In [ ]: geometric_mean(2, 1)
Out[ ]: 1.4142135623730951
힌트: 제곱근을 계산해주는 sqrt()를 이용한다. 단, sqrt() 함수를 이용하려면 먼저 math 라는 모듈을 아래와 같이 임포트 해야 한다.
import math
이후에 math.sqrt(3)와 같은 형식으로 제곱근 함수를 호출할 수 있다.
견본답안:
End of explanation
geometric_mean(2, 2)
geometric_mean(2, 8)
geometric_mean(2, 1)
Explanation: sqrt에 대해 알고 싶으면 help 함수를 활용한다.
help(math.sqrt)
End of explanation
def pyramid_volume(A, h):
    """4각뿔의 부피는 밑면적 * 높이 * 1/3
    리턴값이 항상 float 자료형이 되도록 한다."""
V = A * h / 3.0
return V
Explanation: 연습
바닥면적이 A이고 높이가 h인 피라미드의 부피를 리턴하는 함수 pyramid_volume(A, h)를 정의하라.
활용 예:
In [ ]: pyramid_volume(1, 2)
Out[ ]: 0.6666666666666666
견본답안:
End of explanation
pyramid_volume(1, 2)
Explanation: 주의: 3이 아니라 3.0으로 나누는 것에 주의하라. 파이썬3에서는 상관이 없다.
End of explanation
# 하루는 아래 숫자만큼의 초로 이루어진다.
# 하루 = 24시간 * 60분 * 60초.
daysec = 60 * 60 * 24
# 이제 초를 일 단위로 변경할 수 있다.
def seconds2days(sec):
    """sec을 일 단위로 변경하는 함수.
    강제형변환에 주의할 것"""
return (float(sec) / daysec)
seconds2days(43200)
Explanation: 연습
초(second) 단위의 숫자를 받아서 일(day) 단위의 값으로 되돌려주는 seconds2days(n) 함수를 정의하라. 입력값은 int 또는 float 일 수 있으며 리턴값은 float 자료형이어야 한다.
활용 예:
In [ ]: seconds2days(43200)
Out[ ]: 0.5
견본답안:
End of explanation
def box_surface(a, b, c):
    """각 변의 길이가 각각 a, b, c인 박스의 표면적을 리턴하는 함수.
    힌트: 6개의 면의 합을 구하면 된다"""
s1, s2, s3 = a * b, b * c, c * a
return 2 * (s1 + s2 + s3)
box_surface(1, 1, 1)
box_surface(2, 2, 3)
Explanation: 파이썬3의 경우에는 아래와 같이 정의해도 된다.
def seconds2days(sec):
return (sec / daysec)
연습
변의 길이가 각각 a, b, c인 직각육면체의 표면적을 계산해주는 함수 box_surface(a, b, c)를 정의하라.
예를 들어, 박스를 페인트칠하고자 할 때 필요한 페인트의 양을 계산하는 문제이다.
활용 예:
In [ ]: box_surface(1, 1, 1)
Out[ ]: 6
In [ ]: box_surface(2, 2, 3)
Out[ ]: 32
견본답안:
End of explanation
def triangle_area(a, b, c):
s = (a + b + c) / 2.0
A = (s * (s - a) * (s - b) * (s - c))
return math.sqrt(A)
triangle_area(2, 2, 3)
Explanation: 연습
변의 길이가 각각 a, b, c인 삼각형의 면적 A를 계산하는 함수 triangle_area(a, b, c)를 정의하라.
다음 등식을 이용할 수 있다.
A = (s * (s - a) * (s - b) * (s - c)) ** 0.5
s = (a + b + c) / 2
아래 사이트 참조:
https://ko.wikipedia.org/wiki/%EC%82%BC%EA%B0%81%ED%98%95
견본답안:
End of explanation |
6,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Importing Necessary Modules
To discover something new is to explore where it has never been explored.
Added Conv Visuals Also (Working)
Step1: Loading The Dataset
Step2: Normalising The Data
Step3: Printing the shape of the Datasets
Step4: ## Reshape To Match The Keras's Expectations
Step5: Linear Model
Step8: Basic Simple Plot And Evaluation
Step9: Activations Look Like What?
It looks like diversity of the similar patterns present on multiple classes effect the performance of the classifier although CNN is a robust architechture.
Step10: Let's see the activation of the 2nd channel of the first layer
Step11: Let's plot the activations of the other conv layers as well.
Step12: Classifcation Report | Python Code:
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt #for plotting
from collections import Counter
from sklearn.metrics import confusion_matrix
import itertools
import seaborn as sns
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
%matplotlib inline
Explanation: Importing Necessary Modules
To discover something new is to explore where it has never been explored.
Added Conv Visuals Also (Working)
End of explanation
#loading the dataset.......(Train)
train = pd.read_csv("../input/train.csv")
print(train.shape)
train.head()
z_train = Counter(train['label'])
z_train
sns.countplot(train['label'])
#loading the dataset.......(Test)
test = pd.read_csv("../input/test.csv")
print(test.shape)
test.head()
x_train = (train.iloc[:, 1:].values).astype('float32')  # all pixel values
y_train = train.iloc[:, 0].values.astype('int32')  # only labels, i.e. target digits
x_test = test.values.astype('float32')
%matplotlib inline
# preview the images first
plt.figure(figsize=(12,10))
x, y = 10, 4
for i in range(40):
plt.subplot(y, x, i+1)
plt.imshow(x_train[i].reshape((28,28)),interpolation='nearest')
plt.show()
Explanation: Loading The Dataset
End of explanation
x_train = x_train/255.0
x_test = x_test/255.0
y_train
Explanation: Normalising The Data
End of explanation
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
Explanation: Printing the shape of the Datasets
End of explanation
X_train = x_train.reshape(x_train.shape[0], 28, 28,1)
X_test = x_test.reshape(x_test.shape[0], 28, 28,1)
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from keras.layers.normalization import BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau
from sklearn.model_selection import train_test_split
batch_size = 64
num_classes = 10
epochs = 20
input_shape = (28, 28, 1)
# convert class vectors to binary class matrices One Hot Encoding
y_train = keras.utils.to_categorical(y_train, num_classes)
X_train, X_val, Y_train, Y_val = train_test_split(X_train, y_train, test_size = 0.1, random_state=42)
Explanation: ## Reshape To Match The Keras's Expectations
End of explanation
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal',input_shape=input_shape))
model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal'))
model.add(MaxPool2D((2, 2)))
model.add(Dropout(0.20))
model.add(Conv2D(64, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal'))
model.add(Conv2D(64, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.0001)
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=15, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=False, # randomly flip images
vertical_flip=False) # randomly flip images
model.summary()
datagen.fit(X_train)
h = model.fit_generator(datagen.flow(X_train,Y_train, batch_size=batch_size),
epochs = epochs, validation_data = (X_val,Y_val),
verbose = 1, steps_per_epoch=X_train.shape[0] // batch_size
, callbacks=[learning_rate_reduction],)
Explanation: CNN Model
End of explanation
final_loss, final_acc = model.evaluate(X_val, Y_val, verbose=0)
print("Final loss: {0:.6f}, final accuracy: {1:.6f}".format(final_loss, final_acc))
# Look at the confusion matrix
# Note: this code is taken straight from the scikit-learn website, a nice way of viewing a confusion matrix.
import itertools
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        # normalize before plotting so the image and the cell labels agree
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
# Predict the values from the validation dataset
Y_pred = model.predict(X_val)
# Convert predictions classes to one hot vectors
Y_pred_classes = np.argmax(Y_pred, axis = 1)
# Convert validation observations to one hot vectors
Y_true = np.argmax(Y_val, axis = 1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx, classes = range(10))
print(h.history.keys())
accuracy = h.history['acc']
val_accuracy = h.history['val_acc']
loss = h.history['loss']
val_loss = h.history['val_loss']
epochs = range(len(accuracy))
plt.plot(epochs, accuracy, 'bo', label='Training accuracy')
plt.plot(epochs, val_accuracy, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# Errors are difference between predicted labels and true labels
errors = (Y_pred_classes - Y_true != 0)
Y_pred_classes_errors = Y_pred_classes[errors]
Y_pred_errors = Y_pred[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]
def display_errors(errors_index, img_errors, pred_errors, obs_errors):
    """This function shows 6 images with their predicted and real labels."""
    n = 0
    nrows = 2
    ncols = 3
    fig, ax = plt.subplots(nrows, ncols, sharex=True, sharey=True)
    for row in range(nrows):
        for col in range(ncols):
            error = errors_index[n]
            ax[row, col].imshow((img_errors[error]).reshape((28, 28)))
            ax[row, col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error], obs_errors[error]))
            n += 1
# Probabilities of the wrong predicted numbers
Y_pred_errors_prob = np.max(Y_pred_errors,axis = 1)
# Predicted probabilities of the true values in the error set
true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1))
# Difference between the probability of the predicted label and the true label
delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors
# Sorted list of the delta prob errors
sorted_delta_errors = np.argsort(delta_pred_true_errors)
# Top 6 errors
most_important_errors = sorted_delta_errors[-6:]
# Show the top 6 errors
display_errors(most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors)
Explanation: Basic Simple Plot And Evaluation
End of explanation
test_im = X_train[154]
plt.imshow(test_im.reshape(28,28), cmap='viridis', interpolation='none')
Explanation: What Do the Activations Look Like?
It looks like the diversity of similar patterns present across multiple classes affects the performance of the classifier, although a CNN is a robust architecture.
End of explanation
from keras import models
layer_outputs = [layer.output for layer in model.layers[:8]]
activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model.predict(test_im.reshape(1,28,28,1))
first_layer_activation = activations[0]
plt.matshow(first_layer_activation[0, :, :, 4], cmap='viridis')
Explanation: Let's see the activation of the 2nd channel of the first layer.
This draws on the Keras docs and an answer on Stack Overflow.
End of explanation
model.layers[:-1]  # Dropping the last Dense layer
layer_names = []
for layer in model.layers[:-1]:
    layer_names.append(layer.name)
images_per_row = 16
for layer_name, layer_activation in zip(layer_names, activations):
    if layer_name.startswith('conv'):
        n_features = layer_activation.shape[-1]
        size = layer_activation.shape[1]
        n_cols = n_features // images_per_row
        display_grid = np.zeros((size * n_cols, images_per_row * size))
        for col in range(n_cols):
            for row in range(images_per_row):
                channel_image = layer_activation[0, :, :, col * images_per_row + row]
                channel_image -= channel_image.mean()
                channel_image /= channel_image.std()
                channel_image *= 64
                channel_image += 128
                channel_image = np.clip(channel_image, 0, 255).astype('uint8')
                display_grid[col * size: (col + 1) * size,
                             row * size: (row + 1) * size] = channel_image
        scale = 1. / size
        plt.figure(figsize=(scale * display_grid.shape[1],
                            scale * display_grid.shape[0]))
        plt.title(layer_name)
        plt.grid(False)
        plt.imshow(display_grid, aspect='auto', cmap='viridis')
layer_names = []
for layer in model.layers[:-1]:
    layer_names.append(layer.name)
images_per_row = 16
for layer_name, layer_activation in zip(layer_names, activations):
    if layer_name.startswith('max'):
        n_features = layer_activation.shape[-1]
        size = layer_activation.shape[1]
        n_cols = n_features // images_per_row
        display_grid = np.zeros((size * n_cols, images_per_row * size))
        for col in range(n_cols):
            for row in range(images_per_row):
                channel_image = layer_activation[0, :, :, col * images_per_row + row]
                channel_image -= channel_image.mean()
                channel_image /= channel_image.std()
                channel_image *= 64
                channel_image += 128
                channel_image = np.clip(channel_image, 0, 255).astype('uint8')
                display_grid[col * size: (col + 1) * size,
                             row * size: (row + 1) * size] = channel_image
        scale = 1. / size
        plt.figure(figsize=(scale * display_grid.shape[1],
                            scale * display_grid.shape[0]))
        plt.title(layer_name)
        plt.grid(False)
        plt.imshow(display_grid, aspect='auto', cmap='viridis')
layer_names = []
for layer in model.layers[:-1]:
    layer_names.append(layer.name)
images_per_row = 16
for layer_name, layer_activation in zip(layer_names, activations):
    if layer_name.startswith('drop'):
        n_features = layer_activation.shape[-1]
        size = layer_activation.shape[1]
        n_cols = n_features // images_per_row
        display_grid = np.zeros((size * n_cols, images_per_row * size))
        for col in range(n_cols):
            for row in range(images_per_row):
                channel_image = layer_activation[0, :, :, col * images_per_row + row]
                channel_image -= channel_image.mean()
                channel_image /= channel_image.std()
                channel_image *= 64
                channel_image += 128
                channel_image = np.clip(channel_image, 0, 255).astype('uint8')
                display_grid[col * size: (col + 1) * size,
                             row * size: (row + 1) * size] = channel_image
        scale = 1. / size
        plt.figure(figsize=(scale * display_grid.shape[1],
                            scale * display_grid.shape[0]))
        plt.title(layer_name)
        plt.grid(False)
        plt.imshow(display_grid, aspect='auto', cmap='viridis')
Explanation: Let's plot the activations of the other conv layers as well.
End of explanation
#get the predictions for the test data
predicted_classes = model.predict_classes(X_test)
#get the indices to be plotted
y_true = test.iloc[:, 0]
correct = np.nonzero(predicted_classes==y_true)[0]
incorrect = np.nonzero(predicted_classes!=y_true)[0]
from sklearn.metrics import classification_report
target_names = ["Class {}".format(i) for i in range(num_classes)]
print(classification_report(y_true, predicted_classes, target_names=target_names))
submissions=pd.DataFrame({"ImageId": list(range(1,len(predicted_classes)+1)),
"Label": predicted_classes})
submissions.to_csv("asd.csv", index=False, header=True)
model.save('my_model_1.h5')
json_string = model.to_json()
Explanation: Classification Report
End of explanation |
6,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TabNet
Step1: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Step2: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
Step3: Only if your bucket doesn't already exist
Step4: Import libraries
Import the libraries we'll be using in this tutorial.
Step5: Reading a sample TabNet prediction on syn2 data
After training and serving your model, you upload the output to Google Cloud Storage. There is a sample TabNet prediction using synthetic data
Step6: Download and preprocess the predictions.
Step7: Parse the mask values in prediction. Then, concatenate the mask values.
The output is a matrix having Nxk (N is the number of outputs, k is the size of each mask).
Concatenating mask values are used to visualize the feature importance.
Step8: Visualize the mask value matrix.
The lighter color indicates more important feature.
For example, only features 3-6 are meaningful in prediction output in Syn2 data. In the plot, the column 3-6 have light color. | Python Code:
PROJECT_ID = "[<your-project-id>]"
Explanation: TabNet: Attentive Interpretable Tabular Learning
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ml-on-gcp/blob/master/tutorials/explanations/ai-explanations-tabnet-algorithm.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ml-on-gcp/tree/main/tutorials/explanations/ai-explanations-tabnet-algorithm.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
AI Platform provides a built-in algorithm based on TabNet.
Learn how to train and deploy a model with the TabNet built-in algorithm.
This tutorial provides the sample code to visualize the explanation of TabNet algorithm with Synthetic_2 (Syn2) data.
Syn2 data is described in Section 4.1 of the Learning to Explain paper. The input feature X is generated from a 10-dimensional standard Gaussian. The response variable Y is generated from features X[3:6] only.
Objective
The goal is to provide a sample plotting tool to visualize the output of TabNet, which is helpful in explaining the algorithm.
Before you begin
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type
This tutorial assumes you are running the notebook either in Colab or Cloud AI Platform Notebooks.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.
Make sure that billing is enabled for your project.
Enable the AI Platform Training & Prediction and Compute Engine APIs.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
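The Syn2 setup described above can be sketched in plain NumPy. This is a hypothetical generator, assuming only that X is a 10-dimensional standard Gaussian and that Y depends solely on X[:, 3:6]; the exact functional form used in the Learning to Explain paper may differ (here a simple logistic link on the sum of those features):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 1000, 10
X = rng.standard_normal((n_samples, n_features))
logits = X[:, 3:6].sum(axis=1)          # only features 3-5 influence Y
p = 1.0 / (1.0 + np.exp(-logits))       # hypothetical logistic link
Y = (rng.random(n_samples) < p).astype(int)
```

A model that explains its predictions well should assign importance only to features 3-5 of such data.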
import os
import sys
import warnings
warnings.filterwarnings('ignore')
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# If you are running this notebook in Colab, follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
def install_dlvm_packages():
    !pip install tabulate

if 'google.colab' in sys.modules:
    from google.colab import auth as google_auth
    google_auth.authenticate_user()
    !pip install witwidget --quiet
    !pip install tensorflow==1.15.0 --quiet
    !gcloud config set project $PROJECT_ID
elif "DL_PATH" in os.environ:
    install_dlvm_packages()
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
End of explanation
BUCKET_NAME = "[<your-bucket-name>]"
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
!gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
import numpy as np
import json
from google.cloud import storage
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
Explanation: Import libraries
Import the libraries we'll be using in this tutorial.
End of explanation
!gsutil cp gs://cloud-samples-data/ai-platform/synthetic/tab_net_output/syn2 gs://$BUCKET_NAME
# Replace your the BUCKET_NAME and PREDICTION_FILE
# BUCKET_NAME = "[<your-bucket-name>]"
# PREDICTION_FILE = "[<your-prediction-file>]"
PREDICTION_FILE = "syn2"
MASK_KEY = "aggregated_mask_values"
HEADER = [("feat_" + str(i)) for i in range(1, 12)]
HEADER
Explanation: Reading a sample TabNet prediction on syn2 data
After training and serving your model, you upload the output to Google Cloud Storage. There is a sample TabNet prediction using synthetic data: gs://cloud-samples-data/ai-platform/synthetic/tab_net_output/syn2
You can copy this output to your bucket for testing. Running prediction on your own data will generate output in the same format.
Each prediction in TabNet contains aggregated_mask_values field. The masks are used to explain the predictions.
End of explanation
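The idea behind using the masks for explanation can be sketched with a toy example. The mask values below are hypothetical placeholders; in the real output each row comes from a prediction's `aggregated_mask_values` field. Averaging the masks over predictions gives a per-feature importance score:

```python
import numpy as np

# hypothetical aggregated mask values for four predictions over six features
masks = np.array([
    [0.01, 0.02, 0.01, 0.40, 0.35, 0.21],
    [0.00, 0.03, 0.02, 0.45, 0.30, 0.20],
    [0.02, 0.01, 0.00, 0.38, 0.37, 0.22],
    [0.01, 0.02, 0.03, 0.42, 0.33, 0.19],
])
importance = masks.mean(axis=0)         # average attention per feature
ranking = np.argsort(importance)[::-1]  # most important feature first
```

With these placeholder values, features 3-5 dominate the ranking, which is the pattern the Syn2 visualization below is meant to reveal.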
storage_client = storage.Client()
bucket = storage_client.get_bucket(BUCKET_NAME)
blob = bucket.blob(PREDICTION_FILE)
f = blob.download_as_string(client=None).decode("utf-8").strip()
predictions = f.split("\n")
predictions[:1]
Explanation: Download and preprocess the predictions.
End of explanation
masks = []
for prediction in predictions:
prediction = json.loads(prediction)
masks.append(prediction[MASK_KEY])
masks = np.matrix(masks)
masks.shape
Explanation: Parse the mask values in prediction. Then, concatenate the mask values.
The output is a matrix having Nxk (N is the number of outputs, k is the size of each mask).
Concatenating mask values are used to visualize the feature importance.
End of explanation
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(121)
ax.imshow(masks[:50, :], interpolation='bilinear', cmap=cm.Greys_r)
ax.set_xlabel('Features')
ax.set_ylabel('Sample index')
ax.xaxis.set_ticks(np.arange(len(HEADER)))
ax.set_xticklabels(HEADER, rotation='vertical')
plt.show()
Explanation: Visualize the mask value matrix.
A lighter color indicates a more important feature.
For example, only features 3-6 are meaningful in the prediction output for Syn2 data. In the plot, columns 3-6 have a lighter color.
End of explanation |
6,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artifact correction with Maxwell filter
This tutorial shows how to clean MEG data with Maxwell filtering.
Maxwell filtering in MNE can be used to suppress sources of external
intereference and compensate for subject head movements.
See maxwell for more details.
Step1: Set parameters
Step2: Preprocess with Maxwell filtering
Step3: Select events to extract epochs from, pick M/EEG channels, and plot evoked | Python Code:
import mne
from mne.preprocessing import maxwell_filter
data_path = mne.datasets.sample.data_path()
Explanation: Artifact correction with Maxwell filter
This tutorial shows how to clean MEG data with Maxwell filtering.
Maxwell filtering in MNE can be used to suppress sources of external
interference and compensate for subject head movements.
See maxwell for more details.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif'
fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat'
Explanation: Set parameters
End of explanation
raw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=False)
raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313'] # set bads
# Here we don't use tSSS (set st_duration) because MGH data is very clean
raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname)
Explanation: Preprocess with Maxwell filtering
End of explanation
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1}
events = mne.find_events(raw, 'STI 014')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=[], exclude='bads')
for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')):
epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(eog=150e-6),
preload=False)
evoked = epochs.average()
evoked.plot(window_title=kind, ylim=dict(grad=(-200, 250),
mag=(-600, 700)))
Explanation: Select events to extract epochs from, pick M/EEG channels, and plot evoked
End of explanation |
6,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = y0  # initial condition
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y

assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition:
$$ y(x_0)=y_0 $$
Euler's method performs updates using the equations:
$$ y_{n+1} = y_n + h f(y_n,x_n) $$
$$ h = x_{n+1} - x_n $$
Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
End of explanation
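A quick numerical check, assuming NumPy, that Euler's method is first order: for dy/dx = y with y(0) = 1 the exact solution is e^x, so halving the step size h should roughly halve the global error at x = 1. The helper `euler_sketch` below is just an inline restatement of the update equations above:

```python
import numpy as np

def euler_sketch(derivs, y0, x):
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y

err_coarse = abs(euler_sketch(lambda y, x: y, 1.0, np.linspace(0, 1, 11))[-1] - np.e)
err_fine = abs(euler_sketch(lambda y, x: y, 1.0, np.linspace(0, 1, 21))[-1] - np.e)
ratio = err_coarse / err_fine  # close to 2 for a first-order method
```

The error ratio close to 2 is the signature of a first-order method.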
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where y
        and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
    # YOUR CODE HERE
    raise NotImplementedError()

assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
$$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
End of explanation
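A sketch, assuming NumPy, showing why the midpoint method is more accurate: on dy/dx = y with y(0) = 1, halving h should cut the global error roughly fourfold, the signature of a second-order method. The helper `midpoint_sketch` restates the update equation above inline:

```python
import numpy as np

def midpoint_sketch(derivs, y0, x):
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y_half = y[n] + 0.5 * h * derivs(y[n], x[n])
        y[n + 1] = y[n] + h * derivs(y_half, x[n] + 0.5 * h)
    return y

err_coarse = abs(midpoint_sketch(lambda y, x: y, 1.0, np.linspace(0, 1, 11))[-1] - np.e)
err_fine = abs(midpoint_sketch(lambda y, x: y, 1.0, np.linspace(0, 1, 21))[-1] - np.e)
ratio = err_coarse / err_fine  # close to 4 for a second-order method
```

Compare with the Euler method, whose error only halves when h is halved.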
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y.

    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i]).
    """
    # YOUR CODE HERE
    raise NotImplementedError()

assert np.allclose(solve_exact(np.array([0,1,2])), np.array([0., 1.09726402, 12.39953751]))
Explanation: You are now going to solve the following differential equation:
$$
\frac{dy}{dx} = x + 2y
$$
which has the analytical solution:
$$
y(x) = 0.25 e^{2x} - 0.5 x - 0.25
$$
First, write a solve_exact function that compute the exact solution and follows the specification described in the docstring:
End of explanation
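A numerical sanity check, assuming NumPy, that y(x) = 0.25 e^{2x} - 0.5 x - 0.25 really does satisfy dy/dx = x + 2y with y(0) = 0: approximate the derivative with central differences and verify the residual is small on the interior of the grid.

```python
import numpy as np

def exact_sketch(x):
    return 0.25 * np.exp(2 * x) - 0.5 * x - 0.25

xs = np.linspace(0.0, 1.0, 2001)
dydx = np.gradient(exact_sketch(xs), xs)          # central differences in the interior
residual = np.abs(dydx - (xs + 2 * exact_sketch(xs)))[1:-1].max()
```

The residual is dominated by the O(h^2) error of the finite-difference approximation, not by the solution itself.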
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this for grading the plots
Explanation: In the following cell you are going to solve the above ODE using four different algorithms:
Euler's method
Midpoint method
odeint
Exact
Here are the details:
Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
Define the derivs function for the above differential equation.
Using the solve_euler, solve_midpoint, odeint and solve_exact functions to compute
the solutions using the 4 approaches.
Visualize the solutions on a sigle figure with two subplots:
Plot the $y(x)$ versus $x$ for each of the 4 approaches.
Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
End of explanation |
6,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Over-dispersed Age-Period-Cohort Models
We replicate the data example in Harnau and Nielsen (2017) in Section 6.
The work on this vignette was supported by the European Research Council, grant AdG 694262.
First, we import the package
Step1: Next, we create a model and attach the Taylor and Ashe (1983) data to it.
Step2: Deviance Analysis (Table 2)
We first consider a deviance analysis. We start with an over-dispersed Poisson model with an age-period-cohort predictor and look for reductions.
Step3: First, we see that the age-period-cohort deviance is an extremely unlikely draw from a $\chi^2_{28}$ so a Poisson model is clearly rejected. Thus, we look at the $F$-tests in the column F_vs_APC and the corresponding p-values. We limit ourselves to nested models that cannot be rejected at the 5% level.
Remark
Step4: The models not rejected at the 5% level include the age-period (AP), age-cohort (AC), age-drift (Ad) and age (A) model. Only the AP and AC model are immediately nested in the APC model with the Ad and A model nested in both of them.
When it comes to forecasting, the age-cohort model has several advantages over the age-period model. Since is does not include a period effect, it does not require parameter extrapolation. Further, in a run-off triangle, the situation we have here, the age-cohort model replicates the chain-ladder point forecasts. Thus, we now take the age-cohort model as the primary model. We can then see what models we can reduce the age-cohort model to.
Step5: Age-drift and age model are (still) the only feasible reductions.
Remark (not in paper)
Step6: Next, we take the age-drift model as the primary model.
Step7: We can still just about not reject the age model.
Taken together, these results replicate Table 2 in the paper.
Parameter Estimation and Uncertainty (Table 3, Figure 1)
We move on look at the parameter uncertainty of both Poisson and over-dispersed Poisson models.
First, we fit an over-dispersed Poisson age-period-cohort model
Step8: As part of the estimation, the package attaches a parameter table to the model. This includes parameter estimates, standard errors, $t$ statistics and p-values compared to a $t$ distribution. We take a look at the first couple rows before recreating Table 3 from the paper.
Step9: To recreate Table 3, we further need to estimate an over-dispersed Poisson age-cohort model, and a Poisson age-period-cohort and age-cohort model.
Step10: For a Poisson model, the parameter table includes $z$ scores and p-values compared to a normal rather than a $t$ distribution. We look at the first couple rows of the Poisson age-period-cohort model.
Step11: Then we can combine the resulting parameter tables. We recall that the parameter estimates are identical for over-dispersed Poisson and Poisson model.
Remark
Step12: We can also plot the parameter estimates, replicating Figure 1.
Step13: Besides plots for the double differences and the detrended version, the plots also include the level, for which there is no confidence band given the sampling scheme, and the trends. We point out that these trends related to the detrended parameterization. Thus, they cannot be interpreted separately, in contrast to the detrended parameters.
Remark (not in paper)
Step14: Forecast by cell, cohort, age, period, and total are automatically generated. First, we look at the forecasts by period (calendar year).
Step15: The point-forecast corresponds to the cash-flow by calendar year. Besides, the output includes quantile forecasts, and the standard error and its components
Step16: and for the total
Step17: Next, we compute distribution forecasts based on the bootstrap by England and Verrall (1999) and England (2002). Since bootstrapping requires random sampling, the results differ somewhat from those in the paper. We note that the bootstrap does not have a solid theoretical foundation.
Step18: Just as for the $t$ forecast, this automatically computes forecasts by cell, age, period, cohort and for the total. The output for the bootstrap forecasts contains descriptive statistics over bootstrap draws
Step19: In contrast to the $t$ forecast, the bootstrap comes with a mean forecast that differs from the chain-ladder point forecast. Also, the reported bootstrap standard deviation differs from the bootstrapped chain-ladder standard error since it is computed around the bootstrap mean, not the chain-ladder point forecast.
Just as before, we can look at forecasts aggregated by cohort and for the total.
Step20: Taken together, this replicates Table 4.
Forecasting with smaller models (not in paper)
In the deviance analysis we found that we cannot reject a reduction to an age-drift or even an age model. Since the age-cohort model replicates the chain-ladder point forecasts we have so far not considered forecasts resulting from the smaller models. However, this is easily done.
Step21: We can now compare the forecasts of the three models. We look at the forecasts for the total, but we could just as easily look at other aggregates or forecasts by cells. | Python Code:
import apc
# Turn off a FutureWarnings
import warnings
warnings.simplefilter('ignore', FutureWarning)
Explanation: Over-dispersed Age-Period-Cohort Models
We replicate the data example in Harnau and Nielsen (2017) in Section 6.
The work on this vignette was supported by the European Research Council, grant AdG 694262.
First, we import the package
End of explanation
model = apc.Model()
model.data_from_df(apc.loss_TA(), data_format='CL')
Explanation: Next, we create a model and attach the Taylor and Ashe (1983) data to it.
End of explanation
model.fit_table('od_poisson_response')
model.deviance_table
Explanation: Deviance Analysis (Table 2)
We first consider a deviance analysis. We start with an over-dispersed Poisson model with an age-period-cohort predictor and look for reductions.
End of explanation
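The chi-squared comparison behind the deviance table can be sketched with SciPy. The deviance and degrees of freedom below are hypothetical placeholders; the vignette's actual values come from `model.deviance_table`:

```python
from scipy import stats

deviance, df = 50.0, 28                 # hypothetical values
p_value = stats.chi2.sf(deviance, df)   # survival function: P(chi2_df > deviance)
```

A deviance far above its degrees of freedom yields a tiny p-value, which is exactly how the Poisson model is rejected below.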
model.deviance_table[model.deviance_table['P>F'] > 0.05]
Explanation: First, we see that the age-period-cohort deviance is an extremely unlikely draw from a $\chi^2_{28}$ so a Poisson model is clearly rejected. Thus, we look at the $F$-tests in the column F_vs_APC and the corresponding p-values. We limit ourselves to nested models that cannot be rejected at the 5% level.
Remark:
The nesting is nicely illustrated in the following figure, taken from Nielsen (2014, Figure 5):
<img src="https://user-images.githubusercontent.com/25103918/42902938-3fc5c6bc-8a9e-11e8-94b6-7406f9a42c29.png" alt="Nested Sub-Models" width="400"/>
Nielsen (2014) also discusses the individual sub-models and provides their specific parameterizations.
End of explanation
model.fit_table('od_poisson_response', reference_predictor='AC')
model.deviance_table
model.deviance_table[model.deviance_table['P>F'] > 0.05]
Explanation: The models not rejected at the 5% level include the age-period (AP), age-cohort (AC), age-drift (Ad) and age (A) model. Only the AP and AC model are immediately nested in the APC model with the Ad and A model nested in both of them.
When it comes to forecasting, the age-cohort model has several advantages over the age-period model. Since it does not include a period effect, it does not require parameter extrapolation. Further, in a run-off triangle, the situation we have here, the age-cohort model replicates the chain-ladder point forecasts. Thus, we now take the age-cohort model as the primary model. We can then see what models we can reduce the age-cohort model to.
End of explanation
model.fit_table('od_poisson_response', reference_predictor='AP')
model.deviance_table
model.deviance_table[model.deviance_table['P>F'] > 0.05]
Explanation: The age-drift and age models are (still) the only feasible reductions.
Remark (not in paper): we can also consider the age-period model as the new primary model and see what reductions are feasible. This yields the same reductions:
End of explanation
model.fit_table('od_poisson_response', reference_predictor='Ad')
model.deviance_table
model.deviance_table[model.deviance_table['P>F'] > 0.05]
Explanation: Next, we take the age-drift model as the primary model.
End of explanation
model.fit('od_poisson_response', 'APC')
Explanation: We can still just about not reject the age model.
Taken together, these results replicate Table 2 in the paper.
Parameter Estimation and Uncertainty (Table 3, Figure 1)
We move on look at the parameter uncertainty of both Poisson and over-dispersed Poisson models.
First, we fit an over-dispersed Poisson age-period-cohort model
End of explanation
model.parameters.head()
Explanation: As part of the estimation, the package attaches a parameter table to the model. This includes parameter estimates, standard errors, $t$ statistics and p-values compared to a $t$ distribution. We take a look at the first couple of rows before recreating Table 3 from the paper.
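The $t$ statistics and p-values in the table follow the usual construction (a sketch with hypothetical numbers; dof denotes the residual degrees of freedom):

```python
from scipy import stats

coef, se, dof = 0.52, 0.11, 28               # hypothetical estimate, standard error, residual df
t_stat = coef / se                            # t statistic
p_value = 2 * stats.t.sf(abs(t_stat), dof)    # two-sided p-value against a t distribution
```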
End of explanation
model_ac = model.clone() # this creates a model object with the data already attached
model_ac.fit('od_poisson_response', 'AC')
model_apc_pois = model.clone()
model_apc_pois.fit('poisson_response', 'APC')
model_ac_pois = model.clone()
model_ac_pois.fit('poisson_response', 'AC')
Explanation: To recreate Table 3, we further need to estimate an over-dispersed Poisson age-cohort model, and a Poisson age-period-cohort and age-cohort model.
End of explanation
model_apc_pois.parameters.head()
Explanation: For a Poisson model, the parameter table includes $z$ scores and p-values compared to a normal rather than a $t$ distribution. We look at the first couple of rows of the Poisson age-period-cohort model.
End of explanation
import pandas as pd
pd.concat([
    pd.concat([
        model.parameters['coef'],
        model_apc_pois.parameters['std_err'].rename('se N'),
        model.parameters['std_err'].rename('se t')
    ], axis=1),
    pd.concat([
        model_ac.parameters['coef'],
        model_ac_pois.parameters['std_err'].rename('se N'),
        model_ac.parameters['std_err'].rename('se t')
    ], axis=1)
], axis=1, keys=['apc model', 'ac model'], sort=False)
Explanation: Then we can combine the resulting parameter tables. We recall that the parameter estimates are identical for over-dispersed Poisson and Poisson model.
Remark: The standard errors do not exactly match those in the paper but give the same impression. This is due to a former bug in the software.
End of explanation
model.plot_parameters(around_coef=False)
Explanation: We can also plot the parameter estimates, replicating Figure 1.
End of explanation
model_ac.forecast()
Explanation: Besides plots for the double differences and the detrended version, the plots also include the level, for which there is no confidence band given the sampling scheme, and the trends. We point out that these trends relate to the detrended parameterization. Thus, they cannot be interpreted separately, in contrast to the detrended parameters.
Remark (not in paper): instead, we can also plot the double sums of double differences as shown in equation (3) in the paper. To do this, we merely need to add the argument plot_style='sum_sum' to plot_parameters. In this case, the trends are de-coupled and can be interpreted separately. However, the interpretation of the double sums is difficult.
Forecasting (Table 4)
Finally, we replicate the forecasting results. The package has both the $t$ and the bootstrap forecasts included.
Remark: The quantiles of the $t$ forecast do not exactly match those in the paper but give the same impression. This is due to a former bug in the software.
First, we look at the $t$ forecast. If we do not supply the argument method to get_distribution_fc, $t$ forecasts will be generated.
End of explanation
model_ac.forecasts['Period'].round()
Explanation: Forecasts by cell, cohort, age, period, and for the total are automatically generated. First, we look at the forecasts by period (calendar year).
End of explanation
model_ac.forecasts['Cohort'].round()
Explanation: The point forecast corresponds to the cash flow by calendar year. In addition, the output includes quantile forecasts as well as the standard error and its components:
* se_total: $[\hat{\tau} \{D_1/(n-q)\}\{\hat{\pi}_\mathcal{A} + \hat{s}^2_\mathcal{A} + (\hat{\pi}_\mathcal{A})^2\}]^{1/2}$
* se_process: $[\hat{\tau} \{D_1/(n-q)\}\hat{\pi}_\mathcal{A}]^{1/2}$
* se_estimation_xi: $[\hat{\tau} \{D_1/(n-q)\} \hat{s}^2_\mathcal{A}]^{1/2}$
* se_estimation_tau: $[\hat{\tau} \{D_1/(n-q)\} (\hat{\pi}_\mathcal{A})^2]^{1/2}$
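Per the formulas above, the three components combine in quadrature to the total standard error, which is easy to check numerically (all values below are hypothetical placeholders for $\hat\tau$, $D_1/(n-q)$, $\hat\pi_\mathcal{A}$ and $\hat s^2_\mathcal{A}$):

```python
import numpy as np

tau_hat, D1_over_df = 250.0, 1.8   # hypothetical tau-hat and D_1/(n-q)
pi_A, s2_A = 1200.0, 0.004         # hypothetical pi_A and s^2_A
scale = tau_hat * D1_over_df

se_process = np.sqrt(scale * pi_A)
se_est_xi = np.sqrt(scale * s2_A)
se_est_tau = np.sqrt(scale * pi_A**2)
se_total = np.sqrt(scale * (pi_A + s2_A + pi_A**2))

# the components add in quadrature
assert np.isclose(se_total**2, se_process**2 + se_est_xi**2 + se_est_tau**2)
```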
Similarly, we can look at forecasts by cohort
End of explanation
model_ac.forecasts['Total'].round()
Explanation: and for the total
End of explanation
fc_bootstrap = apc.bootstrap_forecast(apc.loss_TA(), seed=1)
Explanation: Next, we compute distribution forecasts based on the bootstrap by England and Verrall (1999) and England (2002). Since bootstrapping requires random sampling, the results differ somewhat from those in the paper. We note that the bootstrap does not have a solid theoretical foundation.
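The core resampling idea behind such bootstraps can be sketched in a few lines (a toy sketch, not the full England-Verrall procedure: resample Pearson residuals, build pseudo-data, and collect a forecast statistic per draw; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
fitted = np.array([100., 150., 200., 250.])    # hypothetical fitted values
observed = np.array([95., 160., 190., 265.])   # hypothetical observations
resid = (observed - fitted) / np.sqrt(fitted)  # Pearson residuals (Poisson variance)

draws = []
for _ in range(1000):
    r_star = rng.choice(resid, size=resid.size, replace=True)  # resample residuals
    pseudo = fitted + r_star * np.sqrt(fitted)                 # build pseudo-data
    draws.append(pseudo.sum())  # the full procedure would refit and forecast here
draws = np.array(draws)
```

The spread of `draws` then plays the role of the bootstrap standard error.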
End of explanation
fc_bootstrap['Period'].round()
Explanation: Just as for the $t$ forecast, this automatically computes forecasts by cell, age, period, cohort and for the total. The output for the bootstrap forecasts contains descriptive statistics over bootstrap draws:
End of explanation
fc_bootstrap['Cohort'].round()
fc_bootstrap['Total'].round()
Explanation: In contrast to the $t$ forecast, the bootstrap comes with a mean forecast that differs from the chain-ladder point forecast. Also, the reported bootstrap standard deviation differs from the bootstrapped chain-ladder standard error since it is computed around the bootstrap mean, not the chain-ladder point forecast.
Just as before, we can look at forecasts aggregated by cohort and for the total.
End of explanation
model_ad = model.clone()
model_ad.fit('od_poisson_response', 'Ad')
model_ad.forecast()
model_a = model.clone()
model_a.fit('od_poisson_response', 'A')
model_a.forecast()
Explanation: Taken together, this replicates Table 4.
Forecasting with smaller models (not in paper)
In the deviance analysis we found that we cannot reject a reduction to an age-drift or even an age model. Since the age-cohort model replicates the chain-ladder point forecasts we have so far not considered forecasts resulting from the smaller models. However, this is easily done.
End of explanation
print('Age-Cohort Model')
model_ac.forecasts['Total'].round()
print('Age-Drift Model')
model_ad.forecasts['Total'].round()
print('Age Model')
model_a.forecasts['Total'].round()
Explanation: We can now compare the forecasts of the three models. We look at the forecasts for the total, but we could just as easily look at other aggregates or forecasts by cells.
End of explanation |
6,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Derivation of the inversion stencil using a non-symmetric forward-backward scheme
Derivation of a non-symmetric stencil of
$$b = \nabla\cdot(A\nabla_\perp f)+Bf$$
using a forward stencil on $\nabla\cdot(A\nabla_\perp f)$, and a backward stencil on $\nabla_\perp f$.
The stencil will not be symmetric as $f(x-h_x)$, $f(x)$ and $f(x+h_x)$ would all be multiplied by $J(x,z)$. Symmetry requires that $f(x-h_x)$ is multiplied by $J(x-h_x)$, $f(x)$ by $J(x)$ and $f(x+h_x)$ by $J(x+h_x)$.
For symmetric version, see ForwardsBackwards.ipynb and BackwardsForwards.ipynb
Step3: We are here discretizing the equation
$$ b =
\nabla\cdot(A\nabla_\perp f)+Bf
\simeq
\frac{1}{J}\partial_x \left(JAg^{xx}\partial_x f\right)
+ \frac{1}{J}\partial_z \left(JAg^{zz}\partial_z f\right) + Bf$$
where the derivatives in $y$ have been assumed small in non-orthogonal grids.
We will let $T$ denote "term", the superscript $^F$ denote a forward stencil, and the superscript $^B$ denote a backward stencil.
NOTE
Step4: Calculate the finite difference approximation of $\frac{1}{J}\partial_x \left(JAg^{xx}\partial_x f\right)$
We start by making the substitution $\partial_x f \to g$ and calulate the first term of the equation under consideration
Step5: We now back substitute $g\to \partial_x f$
Step6: Calculating the second term
Calculate the finite difference approximation of $\partial_z f$
Step7: Calculate the finite difference approximation of $\frac{1}{J}\partial_z \left(JAg^{zz}\partial_z f\right)$
We start by making the substitution $\partial_z f \to g$ and calulate the second term of the equation under consideration
Step8: Calculating the third term
Step9: Collecting terms | Python Code:
from IPython.display import display
from sympy import init_printing
from sympy import symbols, expand, together, as_finite_diff, collect
from sympy import Function, Eq, Subs
from collections import deque
init_printing()
def finiteDifferenceOfOneTerm(factors, wrt, stencil):
    """
    Finds the finite difference approximation of a term consisting of several factors
    Input:
    factors - An iterable containing the factors of the term
    wrt - Take the derivative of the term with respect to this variable
    stencil - An iterable containing the points to be used in the stencil
    Output:
    term - The finite difference approximation of the term
    """
    # Take the derivative
    factorsDiff = []
    for factor in factors:
        factorsDiff.append(as_finite_diff(factor.diff(wrt), stencil))
    # Putting together terms
    term = 0
    # Make object for cyclic permutation
    cyclPerm = deque(range(len(factors)))
    for perm in range(len(cyclPerm)):
        # Initialize a dummy term to store temporary variables in
        curTerm = factorsDiff[cyclPerm[0]]
        for permNr in range(1,len(factors)):
            curTerm *= factors[cyclPerm[permNr]]
        # Make a cyclic permutation
        cyclPerm.rotate(1)
        term += curTerm
    return term
def fromFunctionToGrid(expr, syms):
    """
    Change from @(x,z) to @_xz, where @ represents a function
    Input:
    expr - The expression to change
    syms - symbols('@_xz, @_xp1z, @_xm1z, @_xzp1, @_xzm1')
    xp1 = x+hx
    zm1 = z-hz
    etc.
    """
    curFun = str(syms[0]).split('_')[0]
    for sym in syms:
        curSuffix = str(sym).split('_')[1]
        if curSuffix == 'xz':
            expr = expr.subs(Function(curFun)(x,z), sym)
        elif curSuffix == 'xp1z':
            expr = expr.subs(Subs(Function(curFun)(x,z), x, x+hx).doit(), sym)
        elif curSuffix == 'xm1z':
            expr = expr.subs(Subs(Function(curFun)(x,z), x, x-hx).doit(), sym)
        elif curSuffix == 'xzp1':
            expr = expr.subs(Subs(Function(curFun)(x,z), z, z+hz).doit(), sym)
        elif curSuffix == 'xzm1':
            expr = expr.subs(Subs(Function(curFun)(x,z), z, z-hz).doit(), sym)
    return expr
x, z = symbols('x, z')
hx, hz = symbols('h_x, h_z', positive=True)
f = Function('f')(x, z)
A = Function('A')(x, z)
B = Function('B')(x, z)
gxx = Function('g^x^x')(x, z)
gzz = Function('g^z^z')(x, z)
J = Function('J')(x, z)
# Dummy function
g = Function('g')(x,z)
# Stencils
backwardX = [x-hx, x]
forwardX = [x, x+hx]
backwardZ = [z-hz, z]
forwardZ = [z, z+hz]
Explanation: Derivation of the inversion stencil using a non-symmetric forward-backward scheme
Derivation of a non-symmetric stencil of
$$b = \nabla\cdot(A\nabla_\perp f)+Bf$$
using a forward stencil on $\nabla\cdot(A\nabla_\perp f)$, and a backward stencil on $\nabla_\perp f$.
The stencil will not be symmetric as $f(x-h_x)$, $f(x)$ and $f(x+h_x)$ would all be multiplied by $J(x,z)$. Symmetry requires that $f(x-h_x)$ is multiplied by $J(x-h_x)$, $f(x)$ by $J(x)$ and $f(x+h_x)$ by $J(x+h_x)$.
For symmetric version, see ForwardsBackwards.ipynb and BackwardsForwards.ipynb
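As a quick sanity check (a sketch; note that later SymPy releases replace the as_finite_diff helper used in this notebook with Derivative.as_finite_difference), the two-point backward stencil for $\partial_x f$ is simply the backward difference:

```python
from sympy import symbols, Function, simplify

x, h = symbols('x h', positive=True)
f = Function('f')

# two-point backward stencil for df/dx
stencil = f(x).diff(x).as_finite_difference([x - h, x])
assert simplify(stencil - (f(x) - f(x - h)) / h) == 0
```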
End of explanation
fx = f.diff(x)
fxB = as_finite_diff(fx, backwardX)
display(Eq(symbols('f_x'), fx))
display(Eq(symbols('f_x^B'), together(fxB)))
Explanation: We are here discretizing the equation
$$ b =
\nabla\cdot(A\nabla_\perp f)+Bf
\simeq
\frac{1}{J}\partial_x \left(JAg^{xx}\partial_x f\right)
+ \frac{1}{J}\partial_z \left(JAg^{zz}\partial_z f\right) + Bf$$
where the derivatives in $y$ have been assumed small in non-orthogonal grids.
We will let $T$ denote "term", the superscript $^F$ denote a forward stencil, and the superscript $^B$ denote a backward stencil.
NOTE:
sympy has a built-in function as_finite_diff, which could make this derivation easy for us. However, it fails if
Non-derivative terms or factors are present in the expression
If the expression is a Subs object (for example unevaluated derivatives calculated at a point)
We therefore do this in a slightly tedious way.
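The cyclic-permutation loop in finiteDifferenceOfOneTerm is just the product rule for several factors, which we can verify symbolically before discretizing (a sketch with three generic factors):

```python
from sympy import symbols, Function, simplify

x = symbols('x')
u, v, w = Function('u')(x), Function('v')(x), Function('w')(x)

# product rule built by cyclic permutation: (uvw)' = u'vw + v'wu + w'uv
expanded = u.diff(x)*v*w + v.diff(x)*w*u + w.diff(x)*u*v
assert simplify((u*v*w).diff(x) - expanded) == 0
```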
Calculating the first term
Calculate the finite difference approximation of $\partial_x f$
End of explanation
# Define the factors
factors = [J, A, gxx, g]
term1 = finiteDifferenceOfOneTerm(factors, x, forwardX)
term1 /= J
display(Eq(symbols('T_1^F'), term1))
Explanation: Calculate the finite difference approximation of $\frac{1}{J}\partial_x \left(JAg^{xx}\partial_x f\right)$
We start by making the substitution $\partial_x f \to g$ and calculate the first term of the equation under consideration
End of explanation
term1 = term1.subs(Subs(g,x,x+hx).doit(), Subs(fxB,x,x+hx).doit())
term1 = term1.subs(g, fxB)
display(Eq(symbols('T_1^F'), term1))
Explanation: We now back substitute $g\to \partial_x f$
End of explanation
fz = f.diff(z)
fzB = as_finite_diff(fz, backwardZ)
display(Eq(symbols('f_z'), fz))
display(Eq(symbols('f_z^B'), together(fzB)))
Explanation: Calculating the second term
Calculate the finite difference approximation of $\partial_z f$
End of explanation
# Define the factors
factors = [J, A, gzz, g]
term2 = finiteDifferenceOfOneTerm(factors, z, forwardZ)
term2 /= J
display(Eq(symbols('T_2^F'), term2))
term2 = term2.subs(Subs(g,z,z+hz).doit(), Subs(fzB,z,z+hz).doit())
term2 = term2.subs(g, fzB)
display(Eq(symbols('T_2'), term2))
Explanation: Calculate the finite difference approximation of $\frac{1}{J}\partial_z \left(JAg^{zz}\partial_z f\right)$
We start by making the substitution $\partial_z f \to g$ and calculate the second term of the equation under consideration
End of explanation
term3 = B*f
display(Eq(symbols('T_3^F'), term3))
Explanation: Calculating the third term
End of explanation
b = term1 + term2 + term3
display(Eq(symbols('b'), b))
# Converting to grid syntax
functions = ['f', 'A', 'J', 'g^x^x', 'g^z^z', 'B']
for func in functions:
    curStr = '{0}_xz, {0}_xp1z, {0}_xm1z, {0}_xzp1, {0}_xzm1'.format(func)
    syms = symbols(curStr)
    b = fromFunctionToGrid(b, syms)
# We must expand before we collect
b = collect(expand(b), symbols('f_xz, f_xp1z, f_xm1z, f_xzp1, f_xzm1'), exact=True)
display(Eq(symbols('b'),b))
Explanation: Collecting terms
End of explanation |
6,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mining performance hotspots with JProfiler, jQAssistant, Neo4j and Pandas
TL;DR I show how I determine the parts of an application that trigger unnecessary SQL statements by using graph analysis of a call tree.
Introduction
General claim
We don't need more tools that show us more problems in our software. We need ways to determine the right problem to solve. But often we just fix the symptoms on the surface rather than the underlying problems. I find that approach unprofessional and want to do my part to improve this situation by delivering root cause analysis of symptoms to get to the real problems in our software.
What you will see
In this notebook, I'll show you one of my approaches for mining performance problems based on application runtime performance analysis. In general, I use this approach to make the point that there are severe design errors that have a negative influence on the application's overall performance. In this example, I show you how I determine the reasons behind a massive amount of executed SQL statements in an application step by step.
The key idea is to use graph analysis to analyze call stacks that were created by a profiling tool. With this approach, I'm not only able to show the hotspots that are involved in the performance issue (because the hotspots show that some SQL statements take a long time to execute or are executed too often), but also the reasons behind these hotspots. I achieve this by extracting various additional information like the web requests, the application's entry points and the triggers within the application causing the hotspots.
This is very helpful to determine the most critical parts in the application and gives you a hint where you could start improving immediately. I use this analysis at work to determine the biggest performance bottleneck in a medium-sized application (~700 kLOC). Based on the results, we work out possible improvements for that specific hotspot, create a prototypical fix for it, measure the fix's impact and, if the results are convincing, roll out the fix for that problem application-wide (and work on the next performance bottleneck and so on).
I hope you'll see that this is a very reasonable approach despite the simplified use case that I show in this blog post/notebook.
Used software
Before we start, I want to briefly introduce you to the setup that I use for this analysis
Step1: For a full version of this file, see GitHub
Step3: We execute a simple query for one XML element and it's relationships to its attributes.
For example, if we want to display the data of this <tt><hotspot</tt> element
xml
<hotspot
leaf="false"
value="SELECT id, name FROM types ORDER BY name"
time="78386"
count="107">
as graph, we get that information from all the attributes of an element (don't worry about the syntax of the following two Cypher statements. They are just there to show you the underlying data as an example).
Step5: As seen in the picture with the huge graph above, each <tt><hotspot></tt> node refers to the further <tt><node></tt>s, that call the hotspots. In our case, these nodes are the methods in our application that are responsible for the executions of the SQL statements.
If we list all attributes of such a node, we've got plenty of information of the callees of the hotspots. For example these nodes contain information about the method name (<tt>method</tt>) or the number of executed SQL statements (<tt>count</tt>)
Step7: Prepare performance analysis
Because it's a bit cumbersome to work at the abstraction level of the XML file, let's enrich this graph with a few better concepts for mining performance problems.
Clean up (optional)
Before executing the first statements, we clean up any preexisting data from previous queries. This is only necessary when you execute this notebook several times on the same data store (like me). It makes the results repeatable and thus more reproducible (a property we should generally strive for!).
Step9: Consolidate the data
We create some new nodes that contain all the information from the XML part of the graph that we need. We simply copy the values of some attributes to new <tt>Call</tt> nodes.
In our Cypher query, we first retrieve all <tt><node></tt> elements (identified by their "name" property) and some attributes that we need for our analysis. For each relevant information item, we create a variable to retrieve the information later on
Step11: We do the same for the <tt><hotspot></tt> elements. Here, the attributes are a little bit different, because we are gathering data from the hotspots that contain information about the executed SQL statements
Step13: Now, we have many new nodes in our database that aren't directly connected. E. g. a <tt>Call</tt> node looks like this
Step15: And there we have it! Just click in the Neo4j browser on the relationship CALLS and you'll see our call tree from JProfiler as a call graph in Neo4j, ready for root cause analysis!
Root Cause Analysis
Conceptual model
All the work before was just there to get a nice graph model that feels more natural. Now comes the analysis part
Step17: Identifying all entry points into the application
Next, we identify the entry points, aka the first nodes of our application. We can achieve this by first searching all the shortest paths between the already existing <tt>Request</tt> nodes and all the nodes that have our package name. From all these subgraphs, we take only the single subgraph that has only a single node with the package name of our application. This is the first node that occurs in the call graph when starting from the request (I somehow can feel that there is a more elegant way to do this. If so, please let me know!). We mark those nodes as <tt>Entry</tt> nodes.
Step19: With the same approach, we can mark all the calls that trigger the execution of the SQL statements with the label <tt>Trigger</tt>.
Step21: After marking all the relevant nodes, we connect them via a new relationship <tt>LEADS_TO</tt> to enable more elegant queries later on.
Step23: Getting results
All the previous steps were needed to enable this simple query, which gives us the spots in the application that eventually lead to the hotspots!
Step24: The returned data consists of
* <tt>request</tt>
Step25: We see immediately that we have an issue with the loading of the pets' owners via the <tt>OwnerController</tt>. Let's look at the problem from another perspective
Step26: And group the hotspots accordingly.
Step27: Now we have made the problem more obvious
Step28: You could also have a look at the most problematic spot in the application by grouping the data by the class and the method that triggers the execution of the most SQL statements. | Python Code:
with open(r'input/spring-petclinic/JDBC_Probe_Hot_Spots_jmeter_test.xml') as log:
    [print(line[:97] + "...") for line in log.readlines()[:10]]
Explanation: Mining performance hotspots with JProfiler, jQAssistant, Neo4j and Pandas
TL;DR I show how I determine the parts of an application that trigger unnecessary SQL statements by using graph analysis of a call tree.
Introduction
General claim
We don't need more tools that show us more problems in our software. We need ways to determine the right problem to solve. But often we just fix the symptoms on the surface rather than the underlying problems. I find that approach unprofessional and want to do my part to improve this situation by delivering root cause analysis of symptoms to get to the real problems in our software.
What you will see
In this notebook, I'll show you one of my approaches for mining performance problems based on application runtime performance analysis. In general, I use this approach to make the point that there are severe design errors that have a negative influence on the application's overall performance. In this example, I show you how I determine the reasons behind a massive amount of executed SQL statements in an application step by step.
The key idea is to use graph analysis to analyze call stacks that were created by a profiling tool. With this approach, I'm not only able to show the hotspots that are involved in the performance issue (because the hotspots show that some SQL statements take a long time to execute or are executed too often), but also the reasons behind these hotspots. I achieve this by extracting various additional information like the web requests, the application's entry points and the triggers within the application causing the hotspots.
This is very helpful to determine the most critical parts in the application and gives you a hint where you could start improving immediately. I use this analysis at work to determine the biggest performance bottleneck in a medium-sized application (~700 kLOC). Based on the results, we work out possible improvements for that specific hotspot, create a prototypical fix for it, measure the fix's impact and, if the results are convincing, roll out the fix for that problem application-wide (and work on the next performance bottleneck and so on).
I hope you'll see that this is a very reasonable approach despite the simplified use case that I show in this blog post/notebook.
Used software
Before we start, I want to briefly introduce you to the setup that I use for this analysis:
Fork of the Spring sample project PetClinic as application to torture
Tomcat 8 installation as servlet container for the application (standalone, for easier integration of the profiling tool)
JMeter load testing tool for executing some requests
JProfiler profiler for recording performance measures
jQAssistant static analysis tool for reading in call trees
Neo4j graph database and Cypher graph query language for executing the graph analysis
Pandas, py2neo and Bokeh on Jupyter* as documentation, execution and analysis environment
The first ones are dependent on the environment and programming language you use. jQAssistant, Neo4j and Pandas are my default environment for software analytics so far. I'll show you how all those tools fit together.
So let's get started!
*Actually, what you see here is the result of an executed Jupyter notebook, too. You can find that notebook on GitHub.
Performance Profiling
As a prerequisite for this analysis, we need performance profiling data gathered by a profiler. A profiler will be integrated into the runtime environment (e. g. Java Virtual Machine) of your application and measures diverse properties like method execution time, number of web service calls, executed SQL statements etc. Additionally, we need something that uses or clicks through our application to get some numbers. In my case, I run the Spring PetClinic performance test using JMeter.
As profiling tool, I use JProfiler to record some performance measures while the test was running.
<p><tt><advertisment></tt><br />
At this point, I want to thank ej-technologies for providing me with a [free open-source license](https://www.ej-technologies.com/buy/jprofiler/openSource) for JProfiler that enables this blog post in exchange for mentioning their product:
<a href="http://www.ej-technologies.com/products/jprofiler/overview.html">
</a>
JProfiler is a great commercial tool for profiling Java applications and costs around 400 €. It's really worth the money because it gives you deep insights into how your application performs under the hood.
<tt></advertisment></tt>
</p>
Also outside the advertisement block, I personally like JProfiler a lot because it does what it does very, very well. Back to the article.
The recording of the measures starts before the execution of the performance test and stops after the test has finished successfully. The result is stored in a file as so-called "snapshot". The use of a snapshot enables you to repeat your analysis over and over again with exactly the same performance measures.
What we usually need for performance analysis is a recorded runtime stack of all method calls as a call tree. A call tree shows you a tree of the called methods. Below, you can see the call tree for the called methods with their measured CPU wall clock time (aka the real time that is spent in that method) and the number of invocations for a complete test run:
With such a view, you see which parts of your application call which classes and methods by drilling down the hierarchy by hand:
But there is more: You can also "turn around" the call tree and list all the so-called "HotSpots". Technically, e. g. for CPU HotSpots, JProfiler sums up all the measurements for the method call leaves that take longer than 0.1% of all method calls. With this view, you see the application's hotspots immediately:
These views are also available for other measures like web service calls, file accesses or DB calls, as shown below:
This is the data that we need for our SQL statement analysis. The big problem is that we can't easily see where all those SQL statements come from because we just see the isolated SQL statements.
And this is where our journey begins...
Reading XML into Neo4j
The input data
For further processing, we export such a call tree into an XML file (via the JProfiler GUI or the command line tool jpexport). If we export the data of the SQL hotspots (incl. the complete call tree) with JProfiler, we'll get an XML file like the following:
End of explanation
import pandas as pd
from py2neo import Graph
graph = Graph()
Explanation: For a full version of this file, see GitHub: https://github.com/feststelltaste/software-analytics/blob/master/notebooks/input/spring-petclinic/JDBC_Probe_Hot_Spots_jmeter_test.xml
This file consists of all the information that we've seen in the JProfiler GUI, but as XML elements and attributes. And here comes the great part: The content itself is graph-like because the XML elements are nested! So the <tt><tree></tt> element contains the <tt><hotspot></tt> elements that contain the <tt><node></tt> elements and so on. A case for a graph database like Neo4j!
But how do we get that XML file into our Neo4j database? jQAssistant to the rescue!
Scanning with jQAssistant
jQAssistant is a great and versatile tool for scanning various graph-like structured data into Neo4j (see my experiences with jQAssistant so far for more information). I just downloaded version 1.1.3, added the binary to my <tt>PATH</tt> system variable and executed the following command (works for jQAssistant versions prior to 1.2.0; I haven't figured out how to do it with newer versions yet):
<pre>
jqassistant scan -f xml:document::JDBC_Probe_Hot_Spots_jmeter_test.xml
</pre>
This will import the XML structure as a graph into the Neo4j graph database that is used by jQAssistant under the hood.
Exploring the data
So, if we want to have a quick look at the stored data, we can start jQAssistant's Neo4j embedded instance via
<pre>
jqassistant server
</pre>
open <tt>http://localhost:7474</tt>, click in the overview menu at the label <tt>File</tt>, click on some nodes and you will see something like this:
It shows the content of the XML file from above as a graph in Neo4j:
* The pink node is the entry point – the XML file.
* To the right, there is the first XML element <tt><tree></tt> in that file, connected by the <tt>HAS_ROOT_ELEMENT</tt> relationship.
* The <tt><tree></tt> element has some attributes, connected by <tt>HAS_ATTRIBUTE</tt>.
* From the <tt><tree></tt> element, there are multiple outgoing relationships with various <tt><hotspot></tt> nodes, containing some information about the executed SQLs in the referenced attributes.
* The attributes that are connected to these elements contain the values that we need for our purpose later on.
So, for example the attribute with the name <tt>value</tt> contains the executed SQL statement:
The attribute with the name <tt>count</tt> contains the number of executions of a SQL statement:
Each element or attribute is also labeled correspondingly with <tt>Element</tt> or <tt>Attribute</tt>.
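As a quick plausibility check, the exported XML can also be inspected directly in Python without Neo4j (a sketch with xml.etree.ElementTree on a shortened, made-up snippet that mimics the export format shown above):

```python
import xml.etree.ElementTree as ET

# a trimmed-down, made-up snippet in the shape of the JProfiler export
xml_snippet = '''
<tree>
  <hotspot leaf="false" value="SELECT id, name FROM types ORDER BY name"
           time="78386" count="107"/>
  <hotspot leaf="false" value="SELECT id FROM owners" time="1200" count="3"/>
</tree>
'''

root = ET.fromstring(xml_snippet)
hotspots = [(h.get('value'), int(h.get('count'))) for h in root.findall('hotspot')]
hotspots.sort(key=lambda t: -t[1])  # most frequently executed statements first
```

For the call-tree analysis itself we stick with Neo4j, since the nested calls are much easier to query as a graph.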
Looking at real data
I want to show you the data from the database in a nicer way. So, we load our main libraries and initialize the connection to the Neo4j database by creating a <tt>Graph</tt> object (for more details on this, have a look at this blog post)
End of explanation
query="""
MATCH (e:Element)-[:HAS_ATTRIBUTE]->(a:Attribute)
WHERE a.value = "SELECT id, name FROM types ORDER BY name"
WITH e as node
MATCH (node)-[:HAS_ATTRIBUTE]->(all:Attribute)
RETURN all.name, all.value
"""
pd.DataFrame(graph.run(query).data())
Explanation: We execute a simple query for one XML element and its relationships to its attributes.
For example, if we want to display the data of this <tt><hotspot</tt> element
xml
<hotspot
leaf="false"
value="SELECT id, name FROM types ORDER BY name"
time="78386"
count="107">
as a graph, we get that information from all the attributes of an element (don't worry about the syntax of the following two Cypher statements. They are just there to show you the underlying data as an example).
End of explanation
query="""
MATCH (e:Element)-[:HAS_ATTRIBUTE]->(a:Attribute)
WHERE id(e) = 12 //just select an arbitrary node
RETURN a.name, a.value
"""
pd.DataFrame(graph.run(query).data())
Explanation: As seen in the picture with the huge graph above, each <tt><hotspot></tt> node refers to further <tt><node></tt> elements that call the hotspots. In our case, these nodes are the methods in our application that are responsible for the execution of the SQL statements.
If we list all attributes of such a node, we've got plenty of information about the callees of the hotspots. For example, these nodes contain information about the method name (<tt>method</tt>) or the number of executed SQL statements (<tt>count</tt>):
End of explanation
query = """
MATCH (n:Node)-[r:CALLS|CREATED_FROM|LEADS_TO]->()
DELETE r, n
RETURN COUNT(r), COUNT(n)
"""
graph.run(query).data()
Explanation: Prepare performance analysis
Because it's a bit cumbersome to work at the abstraction level of the XML file, let's enrich this graph with a few better concepts for mining performance problems.
Clean up (optional)
Before executing the first statements, we clean up any preexisting data from previous queries. This is only necessary when you execute this notebook several times on the same data store (as I do). It makes the results repeatable and thus more reproducible (a property we should generally strive for!).
End of explanation
query = """
MATCH (n:Element {name: "node"}),
(n)-[:HAS_ATTRIBUTE]->(classAttribut:Attribute {name : "class"}),
(n)-[:HAS_ATTRIBUTE]->(methodAttribut:Attribute {name : "methodName"}),
(n)-[:HAS_ATTRIBUTE]->(countAttribut:Attribute {name : "count"}),
(n)-[:HAS_ATTRIBUTE]->(timeAttribut:Attribute {name : "time"})
CREATE
(x:Node:Call {
fqn: classAttribut.value,
class: SPLIT(classAttribut.value,".")[-1],
method: methodAttribut.value,
count: toFloat(countAttribut.value),
time: toFloat(timeAttribut.value)
})-[r:CREATED_FROM]->(n)
RETURN COUNT(x), COUNT(r)
"""
graph.run(query).data()
Explanation: Consolidate the data
We create some new nodes that contain all the information from the XML part of the graph that we need. We simply copy the values of some attributes to new <tt>Call</tt> nodes.
In our Cypher query, we first retrieve all <tt><node></tt> elements (identified by their "name" property) and some attributes that we need for our analysis. For each relevant information item, we create a variable to retrieve the information later on:
cypher
MATCH (n:Element {name: "node"}),
(n)-[:HAS_ATTRIBUTE]->(classAttribut:Attribute {name : "class"}),
(n)-[:HAS_ATTRIBUTE]->(methodAttribut:Attribute {name : "methodName"}),
(n)-[:HAS_ATTRIBUTE]->(countAttribut:Attribute {name : "count"}),
(n)-[:HAS_ATTRIBUTE]->(timeAttribut:Attribute {name : "time"})
For each <tt><node></tt> element we've found, we tag the nodes with the label <tt>Node</tt> to have a general marker for the JProfiler measurements (which is "node" by coincidence) and mark all nodes that contain information about the calling classes and methods with the label <tt>Call</tt>:
cypher
CREATE
(x:Node:Call {
We also copy the relevant information from the <tt><node></tt> element's attributes into the new nodes. We put the value of the class attribute (which consists of the Java package name and the class name) into the <tt>fqn</tt> (fully qualified name) property and add just the name of the class in the <tt>class</tt> property (for display reasons in the end). The rest is copied as well, including some type conversions for <tt>count</tt> and <tt>time</tt>:
cypher
fqn: classAttribut.value,
class: SPLIT(classAttribut.value,".")[-1],
method: methodAttribut.value,
count: toFloat(countAttribut.value),
time: toFloat(timeAttribut.value)
})
Additionally, we track the origin of the information by a <tt>CREATED_FROM</tt> relationship to connect the new nodes later on:
cypher
-[r:CREATED_FROM]->(n)
So, the complete query looks like the following and will be executed against the Neo4j data store:
End of explanation
query = """
MATCH (n:Element { name: "hotspot"}),
(n)-[:HAS_ATTRIBUTE]->(valueAttribut:Attribute {name : "value"}),
(n)-[:HAS_ATTRIBUTE]->(countAttribut:Attribute {name : "count"}),
(n)-[:HAS_ATTRIBUTE]->(timeAttribut:Attribute {name : "time"})
WHERE n.name = "hotspot"
CREATE
(x:Node:HotSpot {
value: valueAttribut.value,
count: toFloat(countAttribut.value),
time: toFloat(timeAttribut.value)
})-[r:CREATED_FROM]->(n)
RETURN COUNT(x), COUNT(r)
"""
graph.run(query).data()
Explanation: We do the same for the <tt><hotspot></tt> elements. Here, the attributes are a little bit different, because we are gathering data from the hotspots that contain information about the executed SQL statements:
End of explanation
query = """
MATCH (outerNode:Node)-[:CREATED_FROM]->
(outerElement:Element)-[:HAS_ELEMENT]->
(innerElement:Element)<-[:CREATED_FROM]-(innerNode:Node)
CREATE (outerNode)<-[r:CALLS]-(innerNode)
RETURN COUNT(r)
"""
graph.run(query).data()
Explanation: Now, we have many new nodes in our database that aren't directly connected. E.g., a <tt>Call</tt> node looks like this:
So, let's connect them. How? We've saved that information with our <tt>CREATED_FROM</tt> relationship:
This information can be used to connect the <tt>Call</tt> nodes as well as the <tt>HotSpot</tt> nodes.
End of explanation
query = """
MATCH (x)
WHERE x.fqn = "_jprofiler_annotation_class" AND x.method STARTS WITH "HTTP"
SET x:Request
RETURN COUNT(x)
"""
graph.run(query).data()
Explanation: And there we have it! Just click in the Neo4j browser on the relationship CALLS and you'll see our call tree from JProfiler as a call graph in Neo4j, ready for root cause analysis!
Root Cause Analysis
Conceptual model
All the work before was just there to get a nice graph model that feels more natural. Now comes the analysis part: As mentioned in the introduction, we don't only want the hotspots that signal that something awkward happened, but also
* the trigger in our application of the hotspot combined with
* the information about the entry point (e. g. where in our application does the problem happen) and
* (optionally) the request that causes the problem (to be able to localize the problem)
Speaking in graph terms, we need some specific nodes of our call tree graph with the following information:
* <tt>HotSpot</tt>: The executed SQL statement aka the <tt>HotSpot</tt> node
* <tt>Trigger</tt>: The executor of the SQL statement in our application aka the <tt>Call</tt> node with the last class/method call that starts with our application's package name
* <tt>Entry</tt>: The first call of our own application code aka the <tt>Call</tt> node that also starts with our application's package name
* <tt>Request</tt>: The <tt>Call</tt> node with the information about the HTTP request (optional, but because JProfiler delivers this information as well, we use it here in this example)
These points in the call tree should give us enough information that we can determine where to look for improvements in our application.
Challenges
There is one thing that is a little bit tricky to implement: modeling what we mean by "last" and "first" in Neo4j / Cypher. Because we are using the package name of a class to identify our own application, there are many <tt>Call</tt> nodes in one call graph part that carry that package name. Neo4j would (rightly) return too many results for us, because it delivers one result for each match, and a match also occurs when a <tt>Call</tt> node within our application matches the package name. So, how do we mark the first and last nodes of our application code?
Well, one step at a time. Before doing anything awkward, we store all the information we know in the graph prior to running our analysis. I always favor this approach over trying to find a solution with complicated Cypher queries, where you'll easily mix things up.
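Before expressing this in Cypher, the underlying idea can be sketched in plain Python (a hypothetical illustration; the class names are invented, and only the package prefix matches the PetClinic sample analyzed here): the entry point is simply the first node on a call path whose fully qualified name starts with our application's package.

```python
# Hypothetical sketch of the "first node of our application" idea.
APP_PACKAGE = "org.springframework.samples.petclinic"

def first_app_node(path):
    # Return the first fully qualified name on the path inside our application.
    for fqn in path:
        if fqn.startswith(APP_PACKAGE):
            return fqn
    return None

# An invented call path from the servlet container down to the repository layer.
call_path = [
    "org.apache.catalina.core.StandardWrapperValve.invoke",
    "org.springframework.web.servlet.DispatcherServlet.doDispatch",
    "org.springframework.samples.petclinic.web.OwnerController.showOwner",
    "org.springframework.samples.petclinic.repository.OwnerRepository.findById",
]

print(first_app_node(call_path))
# -> org.springframework.samples.petclinic.web.OwnerController.showOwner
```

The Cypher queries below express the same constraint directly on the graph: a path predicate keeps only those paths on which exactly one node carries the application's package prefix.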
Preparing the query
First, we can identify the requests that trigger the SQL statements, because we configured JProfiler to include that information in our call tree. We simply mark them with the label <tt>Request</tt>.
Identifying all requests
End of explanation
query = """
MATCH
request_to_entry=shortestPath((request:Request)-[:CALLS*]->(entry:Call))
WHERE
entry.fqn STARTS WITH "org.springframework.samples.petclinic"
AND
SINGLE(n IN NODES(request_to_entry)
WHERE exists(n.fqn) AND n.fqn STARTS WITH "org.springframework.samples.petclinic")
SET
entry:Entry
RETURN COUNT(entry)
"""
graph.run(query).data()
Explanation: Identifying all entry points into the application
Next, we identify the entry points aka the first nodes of our application. We achieve this by first searching all the shortest paths between the already existing <tt>Request</tt> nodes and all the nodes that carry our package name. From all these subgraphs, we keep only those that contain exactly one node with the package name of our application; that node is the first one that occurs in the call graph when starting from the request (I can't help feeling that there is a more elegant way to do this. If so, please let me know!). We mark those nodes as <tt>Entry</tt> nodes.
End of explanation
query = """
MATCH
trigger_to_hotspot=shortestPath((trigger:Call)-[:CALLS*]->(hotspot:HotSpot))
WHERE
trigger.fqn STARTS WITH "org.springframework.samples.petclinic"
AND
SINGLE(n IN NODES(trigger_to_hotspot)
WHERE exists(n.fqn) AND n.fqn STARTS WITH "org.springframework.samples.petclinic")
SET
trigger:Trigger
RETURN count(trigger)
"""
graph.run(query).data()
Explanation: With the same approach, we can mark all the calls that trigger the execution of the SQL statements with the label <tt>Trigger</tt>.
End of explanation
query = """
MATCH
(request:Request)-[:CALLS*]->
(entry:Entry)-[:CALLS*]->
(trigger:Trigger)-[:CALLS*]->
(hotspot:HotSpot)
CREATE UNIQUE
(request)-[leads1:LEADS_TO]->
(entry)-[leads2:LEADS_TO]->
(trigger)-[leads3:LEADS_TO]->(hotspot)
RETURN count(leads1), count(leads2), count(leads3)
"""
graph.run(query).data()
Explanation: After marking all the relevant nodes, we connect them via a new relationship <tt>LEADS_TO</tt> to enable more elegant queries later on.
End of explanation
query = """
MATCH
(request:Request)-[:LEADS_TO]->
(entry:Entry)-[:LEADS_TO]->
(trigger:Trigger)-[:LEADS_TO]->
(hotspot:HotSpot)
RETURN
request.method as request,
request.count as sql_count,
entry.class as entry_class,
entry.method as entry_method,
trigger.class as trigger_class,
trigger.method as trigger_method,
hotspot.value as sql,
hotspot.count as sql_count_sum
"""
hotspots = pd.DataFrame(graph.run(query).data())
hotspots.head()
Explanation: Getting results
All the previous steps were needed to enable this simple query, which gives us the spots in the application that eventually lead to the hotspots!
End of explanation
sqls_per_method = hotspots.groupby([
'request',
'entry_class',
'entry_method',
'trigger_class',
'trigger_method']).agg(
{'sql_count' : 'sum',
'request' : 'count'})
sqls_per_method
Explanation: The returned data consists of
* <tt>request</tt>: the name of the HTTP request
* <tt>sql_count</tt>: the number of SQL statements caused by this HTTP request
* <tt>entry_class</tt>: the class name of the entry point into our application
* <tt>entry_method</tt>: the method name of the entry point into our application
* <tt>trigger_class</tt>: the class name of the exit point out of our application
* <tt>trigger_method</tt>: the method name of the exit point out of our application
* <tt>sql</tt>: the executed SQL statement
* <tt>sql_count_sum</tt>: the amount of all executed SQL statements of the same kind
A look at a subgraph
If we take a look at the Neo4j browser and execute the statement from above but returning the nodes, aka
cypher
MATCH
(request:Request)-[:LEADS_TO]->
(entry:Entry)-[:LEADS_TO]->
(trigger:Trigger)-[:LEADS_TO]->
(hotspot:HotSpot)
RETURN
request, entry, trigger, hotspot
we get a nice overview of all our performance hotspots, e. g.
With this graphical view, it's easy to see the connection between the requests, our application code and the hotspots.
Although this view is nice for exploration, it's not actionable. So let's use Pandas to shape the data into knowledge!
In-depth analysis
First, we have a look at which parts of the application trigger all the SQL statements. We simply group some columns to get a denser overview:
End of explanation
hotspots['table'] = hotspots['sql'].\
str.upper().str.extract(
r".*(FROM|INTO|UPDATE) ([\w\.]*)",
expand=True)[1]
hotspots['table'].value_counts()
Explanation: We see immediately that we have an issue with the loading of the pet's owners via the <tt>OwnerController</tt>. Let's look at the problem from another perspective: what kind of data is loaded by whom from which tables? We simply chop up the SQL and extract just the name of the database table (in fact, the regex is so simple that some of the tables weren't identified, but because these are special cases, we can ignore them):
End of explanation
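As a side note, the extraction pattern can be checked in isolation (the SQL strings below are invented for illustration; the regex is the one used in the cell above). The second capture group picks up the identifier following FROM, INTO or UPDATE, which is why index 1 is selected from the <tt>str.extract</tt> result.

```python
import re

# Same pattern as in the pandas str.extract call above; group 2 is the table name.
pattern = r".*(FROM|INTO|UPDATE) ([\w\.]*)"

samples = [
    "SELECT id, name FROM types ORDER BY name",
    "INSERT INTO visits (pet_id) VALUES (?)",
    "UPDATE owners SET city = ? WHERE id = ?",
]

tables = [re.match(pattern, s.upper()).group(2) for s in samples]
print(tables)  # -> ['TYPES', 'VISITS', 'OWNERS']
```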
grouped_by_entry_class_and_table = hotspots.groupby(['entry_class', 'table'])[['sql_count']].sum()
grouped_by_entry_class_and_table
Explanation: And group the hotspots accordingly.
End of explanation
from bokeh.charts import Donut, show, output_notebook
plot_data = grouped_by_entry_class_and_table.reset_index()
d = Donut(plot_data, label=['entry_class', 'table'],
values='sql_count',
text_font_size='8pt',
hover_text='sql_count'
)
output_notebook()
show(d)
Explanation: Now we made the problem more obvious: The class <tt>OwnerController</tt> works heavily with the <tt>PETS</tt> table and the pet's <tt>TYPES</tt> table. Surely an error in our program.
Let's visualize the problem with a nice donut chart in Bokeh:
End of explanation
hotspots.groupby(['trigger_class', 'trigger_method'])[['sql_count']].sum().sort_values(
by='sql_count', ascending=False).head(5)
Explanation: You could also have a look at the most problematic spot in the application by grouping the data by the class and the method that trigger the execution of the most SQL statements.
End of explanation |
6,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
constitutive
Step1: L_iso
Provides the elastic stiffness tensor for an isotropic material.
The two first arguments are a couple of elastic properties. The third argument specifies which couple has been provided and the nature and order of coefficients.
Exhaustive list of possible third argument
Step2: M_iso
Provides the elastic compliance tensor for an isotropic material.
The two first arguments are a couple of elastic properties. The third argument specifies which couple has been provided and the nature and order of coefficients.
Exhaustive list of possible third argument
Step3: L_cubic
Provides the elastic stiffness tensor for a cubic material. Arguments are the stiffness coefficients C11, C12 and C44, or the elastic constants E, nu, G
Exhaustive list of possible third argument
Step4: M_cubic
Provides the elastic compliance tensor for a cubic material. Arguments are the stiffness coefficients C11, C12 and C44, or the elastic constants E, nu, G
Exhaustive list of possible third argument
Step5: L_isotrans
Provides the elastic stiffness tensor for an isotropic transverse material.
Arguments are the longitudinal Young modulus EL, the transverse Young modulus ET, Poisson’s ratio for loading along the longitudinal axis nuTL, Poisson’s ratio for loading along the transverse axis nuTT, the shear modulus GLT and the axis of symmetry.
Step6: bp
Step7: L_ortho
Provides the elastic stiffness tensor for an orthotropic material.
Arguments are either (convention 'EnuG')
Step8: M_ortho
Provides the elastic compliance tensor for an orthotropic material.
Arguments are either (convention 'EnuG') | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from simmit import smartplus as sim
import os
Explanation: constitutive : The Constitutive Library
End of explanation
E = 70000.0
nu = 0.3
L = sim.L_iso(E,nu,"Enu")
print np.array_str(L, precision=4, suppress_small=True)
d = sim.check_symetries(L)
print(d['umat_type'])
print(d['props'])
x = sim.L_iso_props(L)
print(x)
Explanation: L_iso
Provides the elastic stiffness tensor for an isotropic material.
The two first arguments are a couple of elastic properties. The third argument specifies which couple has been provided and the nature and order of coefficients.
Exhaustive list of possible third argument :
‘Enu’, ‘nuE’, ‘Kmu’, ‘muK’, ‘KG’, ‘GK’, ‘lambdamu’, ‘mulambda’, ‘lambdaG’, ‘Glambda’.
Return a numpy ndarray.
Example :
End of explanation
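As a quick cross-check of these values (a sketch assuming standard Voigt notation, not part of the simmit API), the same isotropic stiffness matrix can be assembled directly from the Lamé constants derived from E and nu:

```python
import numpy as np

E, nu = 70000.0, 0.3
lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # first Lamé constant
mu = E / (2 * (1 + nu))                   # shear modulus

L_check = np.zeros((6, 6))
L_check[:3, :3] = lam          # lambda couples all normal components
for i in range(3):
    L_check[i, i] += 2 * mu    # lambda + 2*mu on the normal diagonal
for i in range(3, 6):
    L_check[i, i] = mu         # shear diagonal (convention-dependent: some
                               # Voigt conventions store 2*mu here instead)

print(np.array_str(L_check, precision=4, suppress_small=True))
```

Whether the shear diagonal holds mu or 2*mu depends on the shear-strain convention the library uses, so compare against the printed L above before relying on this sketch.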
E = 70000.0
nu = 0.3
M = sim.M_iso(E,nu,"Enu")
print np.array_str(M, precision=2)
L_inv = np.linalg.inv(M)
d = sim.check_symetries(L_inv)
print(d['umat_type'])
print(d['props'])
x = sim.M_iso_props(M)
print(x)
Explanation: M_iso
Provides the elastic compliance tensor for an isotropic material.
The two first arguments are a couple of elastic properties. The third argument specifies which couple has been provided and the nature and order of coefficients.
Exhaustive list of possible third argument :
‘Enu’, ‘nuE’, ‘Kmu’, ‘muK’, ‘KG’, ‘GK’, ‘lambdamu’, ‘mulambda’, ‘lambdaG’, ‘Glambda’.
End of explanation
E = 70000.0
nu = 0.3
G = 23000.0
L = sim.L_cubic(E,nu,G,"EnuG")
print np.array_str(L, precision=2)
d = sim.check_symetries(L)
print(d['umat_type'])
print(d['props'])
x = sim.L_cubic_props(L)
print(x)
Explanation: L_cubic
Provides the elastic stiffness tensor for a cubic material. Arguments are the stiffness coefficients C11, C12 and C44, or the elastic constants E, nu, G
Exhaustive list of possible third argument: ‘Cii’, ‘EnuG’; the default argument is ‘Cii’.
End of explanation
E = 70000.0
nu = 0.3
G = 23000.0
M = sim.M_cubic(E,nu,G,"EnuG")
print np.array_str(M, precision=2)
L = np.linalg.inv(M)
d = sim.check_symetries(L)
print(d['umat_type'])
print(d['props'])
x = sim.L_cubic_props(L)
print(x)
Explanation: M_cubic
Provides the elastic compliance tensor for a cubic material. Arguments are the stiffness coefficients C11, C12 and C44, or the elastic constants E, nu, G
Exhaustive list of possible third argument: ‘Cii’, ‘EnuG’; the default argument is ‘Cii’.
End of explanation
EL = 70000.0
ET = 20000.0
nuTL = 0.08
nuTT = 0.3
GLT = 12000.0
axis = 3
L = sim.L_isotrans(EL,ET,nuTL,nuTT,GLT,axis)
print np.array_str(L, precision=2)
d = sim.check_symetries(L)
print(d['umat_type'])
print(d['axis'])
print np.array_str(d['props'], precision=2)
x = sim.L_isotrans_props(L,axis)
print np.array_str(x, precision=2)
Explanation: L_isotrans
Provides the elastic stiffness tensor for an isotropic transverse material.
Arguments are the longitudinal Young modulus EL, the transverse Young modulus ET, Poisson’s ratio for loading along the longitudinal axis nuTL, Poisson’s ratio for loading along the transverse axis nuTT, the shear modulus GLT and the axis of symmetry.
End of explanation
EL = 70000.0
ET = 20000.0
nuTL = 0.08
nuTT = 0.3
GLT = 12000.0
axis = 3
M = sim.M_isotrans(EL,ET,nuTL,nuTT,GLT,axis)
print np.array_str(M, precision=2)
x = sim.M_isotrans_props(M,axis)
print np.array_str(x, precision=2)
Explanation: bp::def("L_iso", L_iso);
bp::def("M_iso", M_iso);
bp::def("L_cubic", L_cubic);
bp::def("M_cubic", M_cubic);
bp::def("L_ortho", L_ortho);
bp::def("M_ortho", M_ortho);
bp::def("L_isotrans", L_isotrans);
bp::def("M_isotrans", M_isotrans);
bp::def("check_symetries", check_symetries);
bp::def("L_iso_props", L_iso_props);
bp::def("M_iso_props", M_iso_props);
bp::def("L_isotrans_props", L_isotrans_props);
bp::def("M_isotrans_props", M_isotrans_props);
bp::def("L_cubic_props", L_cubic_props);
bp::def("M_cubic_props", M_cubic_props);
bp::def("L_ortho_props", L_ortho_props);
bp::def("M_ortho_props", M_ortho_props);
bp::def("M_aniso_props", M_aniso_props);
M_isotrans
Provides the elastic compliance tensor for an isotropic transverse material.
Arguments are the longitudinal Young modulus EL, the transverse Young modulus ET, Poisson’s ratio for loading along the longitudinal axis nuTL, Poisson’s ratio for loading along the transverse axis nuTT, the shear modulus GLT and the axis of symmetry.
End of explanation
E_1 = 4500.0
E_2 = 2300.0
E_3 = 2700.0
nu_12 = 0.06
nu_13 = 0.08
nu_23 = 0.3
G_12 = 2200.0
G_13 = 2100.0
G_23 = 2400.0
L = sim.L_ortho(E_1,E_2,E_3,nu_12,nu_13,nu_23,G_12,G_13,G_23,'EnuG')
print np.array_str(L, precision=2)
d = sim.check_symetries(L)
print(d['umat_type'])
print(d['axis'])
print np.array_str(d['props'], precision=2)
x = sim.L_ortho_props(L)
print np.array_str(x, precision=2)
Explanation: L_ortho
Provides the elastic stiffness tensor for an orthotropic material.
Arguments are either (convention 'EnuG'):
The Young modulus of axis 1 $E_1$,
The Young modulus of axis 2 $E_2$,
The Young modulus of axis 3 $E_3$,
The Poisson ratio of direction 1 with respect to 2 $\nu_{12}$,
The Poisson ratio of direction 1 with respect to 3 $\nu_{13}$,
The Poisson ratio of direction 2 with respect to 3 $\nu_{23}$,
The shear modulus of direction 1 with respect to 2 $G_{12}$,
The shear modulus of direction 1 with respect to 3 $G_{13}$,
The shear modulus of direction 2 with respect to 3 $G_{23}$,
The axial coefficient of thermal expansion in direction 1 $\alpha_1$,
The axial coefficient of thermal expansion in direction 2 $\alpha_2$,
The axial coefficient of thermal expansion in direction 3 $\alpha_3$,
or the list of Cii (C11, C12, C13, C22, C23, C33, C44, C55, C66) (convention 'Cii')
End of explanation
E_1 = 4500.0
E_2 = 2300.0
E_3 = 2700.0
nu_12 = 0.06
nu_13 = 0.08
nu_23 = 0.3
G_12 = 2200.0
G_13 = 2100.0
G_23 = 2400.0
M = sim.M_ortho(E_1,E_2,E_3,nu_12,nu_13,nu_23,G_12,G_13,G_23,'EnuG')
print np.array_str(M, precision=2)
x = sim.M_ortho_props(M)
print np.array_str(x, precision=2)
Explanation: M_ortho
Provides the elastic compliance tensor for an orthotropic material.
Arguments are either (convention 'EnuG'):
The Young modulus of axis 1 $E_1$,
The Young modulus of axis 2 $E_2$,
The Young modulus of axis 3 $E_3$,
The Poisson ratio of direction 1 with respect to 2 $\nu_{12}$,
The Poisson ratio of direction 1 with respect to 3 $\nu_{13}$,
The Poisson ratio of direction 2 with respect to 3 $\nu_{13}$,
The shear modulus of direction 1 with respect to 2 $G_{12}$,
The shear modulus of direction 1 with respect to 3 $G_{13}$,
The shear modulus of direction 2 with respect to 3 $G_{23}$,
The axial coefficient of thermal expansion in direction 1 $\alpha_1$,
The axial coefficient of thermal expansion in direction 2 $\alpha_2$,
The axial coefficient of thermal expansion in direction 3 $\alpha_3$,
or the list of Cii (C11, C12, C13, C22, C23, C33, C44, C55, C66) (convention 'Cii')
End of explanation |
6,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Breast Cancer Proliferation Scores with Apache Spark and Apache SystemML
Preprocessing
Setup
Step1: Execute Preprocessing & Save
Step2: Sample Data
TODO | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
from breastcancer.preprocessing import preprocess, save, train_val_split
# Ship a fresh copy of the `breastcancer` package to the Spark workers.
# Note: The zip must include the `breastcancer` directory itself,
# as well as all files within it for `addPyFile` to work correctly.
# This is equivalent to `zip -r breastcancer.zip breastcancer`.
dirname = "breastcancer"
zipname = dirname + ".zip"
shutil.make_archive(dirname, 'zip', dirname + "/..", dirname)
spark.sparkContext.addPyFile(zipname)
plt.rcParams['figure.figsize'] = (10, 6)
Explanation: Predicting Breast Cancer Proliferation Scores with Apache Spark and Apache SystemML
Preprocessing
Setup
End of explanation
# TODO: Filtering tiles and then cutting into samples could result
# in samples with less tissue than desired, despite that being the
# procedure of the paper. Look into simply selecting tiles of the
# desired size to begin with.
# Get list of image numbers, minus the broken ones.
broken = {2, 45, 91, 112, 242, 256, 280, 313, 329, 467}
slide_nums = sorted(set(range(1,501)) - broken)
# Settings
training = True
tile_size = 256
sample_size = 256
grayscale = False
num_partitions = 20000
add_row_indices = True
train_frac = 0.8
split_seed = 24
folder = "data" # Linux-filesystem directory to read raw data
save_folder = "data" # Hadoop-supported directory in which to save DataFrames
df_path = os.path.join(save_folder, "samples_{}_{}{}.parquet".format(
"labels" if training else "testing", sample_size, "_grayscale" if grayscale else ""))
train_df_path = os.path.join(save_folder, "train_{}{}.parquet".format(sample_size,
"_grayscale" if grayscale else ""))
val_df_path = os.path.join(save_folder, "val_{}{}.parquet".format(sample_size,
"_grayscale" if grayscale else ""))
df_path, train_df_path, val_df_path
# Process all slides.
df = preprocess(spark, slide_nums, tile_size=tile_size, sample_size=sample_size,
grayscale=grayscale, training=training, num_partitions=num_partitions,
folder=folder)
# Save DataFrame of samples.
save(df, df_path, sample_size, grayscale)
# Load full DataFrame from disk.
df = spark.read.load(df_path)
# Split into train and validation DataFrames based On slide number
train, val = train_val_split(spark, df, slide_nums, folder, train_frac, add_row_indices,
seed=split_seed)
# Save train and validation DataFrames.
save(train, train_df_path, sample_size, grayscale)
save(val, val_df_path, sample_size, grayscale)
Explanation: Execute Preprocessing & Save
End of explanation
# Load train and validation DataFrames from disk.
train = spark.read.load(train_df_path)
val = spark.read.load(val_df_path)
# Take a stratified sample.
p=0.01
train_sample = train.drop("__INDEX").sampleBy("tumor_score", fractions={1: p, 2: p, 3: p}, seed=42)
val_sample = val.drop("__INDEX").sampleBy("tumor_score", fractions={1: p, 2: p, 3: p}, seed=42)
train_sample, val_sample
# Reassign row indices.
# TODO: Wrap this in a function with appropriate default arguments.
train_sample = (
train_sample.rdd
.zipWithIndex()
.map(lambda r: (r[1] + 1, *r[0]))
.toDF(['__INDEX', 'slide_num', 'tumor_score', 'molecular_score', 'sample']))
train_sample = train_sample.select(train_sample["__INDEX"].astype("int"),
train_sample.slide_num.astype("int"),
train_sample.tumor_score.astype("int"),
train_sample.molecular_score,
train_sample["sample"])
val_sample = (
val_sample.rdd
.zipWithIndex()
.map(lambda r: (r[1] + 1, *r[0]))
.toDF(['__INDEX', 'slide_num', 'tumor_score', 'molecular_score', 'sample']))
val_sample = val_sample.select(val_sample["__INDEX"].astype("int"),
val_sample.slide_num.astype("int"),
val_sample.tumor_score.astype("int"),
val_sample.molecular_score,
val_sample["sample"])
train_sample, val_sample
# Save train and validation DataFrames.
tr_sample_filename = "train_{}_sample_{}{}.parquet".format(p, sample_size,
"_grayscale" if grayscale else "")
val_sample_filename = "val_{}_sample_{}{}.parquet".format(p, sample_size,
"_grayscale" if grayscale else "")
train_sample_path = os.path.join(save_folder, tr_sample_filename)
val_sample_path = os.path.join(save_folder, val_sample_filename)
save(train_sample, train_sample_path, sample_size, grayscale)
save(val_sample, val_sample_path, sample_size, grayscale)
Explanation: Sample Data
TODO: Wrap this in a function with appropriate default arguments
End of explanation |
6,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting sensor layouts of EEG systems
This example illustrates how to load all the EEG system montages
shipped in MNE-python, and display it on the fsaverage template subject.
Step1: Check all montages against a sphere
Step2: Check all montages against fsaverage | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Joan Massich <mailsik@gmail.com>
#
# License: BSD Style.
import os.path as op
import mne
from mne.channels.montage import get_builtin_montages
from mne.datasets import fetch_fsaverage
from mne.viz import set_3d_title, set_3d_view
Explanation: Plotting sensor layouts of EEG systems
This example illustrates how to load all the EEG system montages
shipped in MNE-python, and display it on the fsaverage template subject.
End of explanation
for current_montage in get_builtin_montages():
montage = mne.channels.make_standard_montage(current_montage)
info = mne.create_info(
ch_names=montage.ch_names, sfreq=100., ch_types='eeg')
info.set_montage(montage)
sphere = mne.make_sphere_model(r0='auto', head_radius='auto', info=info)
fig = mne.viz.plot_alignment(
# Plot options
show_axes=True, dig='fiducials', surfaces='head',
bem=sphere, info=info)
set_3d_view(figure=fig, azimuth=135, elevation=80)
set_3d_title(figure=fig, title=current_montage)
Explanation: Check all montages against a sphere
End of explanation
subjects_dir = op.dirname(fetch_fsaverage())
for current_montage in get_builtin_montages():
montage = mne.channels.make_standard_montage(current_montage)
# Create dummy info
info = mne.create_info(
ch_names=montage.ch_names, sfreq=100., ch_types='eeg')
info.set_montage(montage)
fig = mne.viz.plot_alignment(
# Plot options
show_axes=True, dig='fiducials', surfaces='head', mri_fiducials=True,
subject='fsaverage', subjects_dir=subjects_dir, info=info,
coord_frame='mri',
trans='fsaverage', # transform from head coords to fsaverage's MRI
)
set_3d_view(figure=fig, azimuth=135, elevation=80)
set_3d_title(figure=fig, title=current_montage)
Explanation: Check all montages against fsaverage
End of explanation |
6,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Core Test
Step2: Mesh Test
Step3: FunctionSpace Test
Step4: Exporter Test | Python Code:
!echo "deb https://dl.bintray.com/feelpp/ubuntu bionic latest" | tee -a /etc/apt/sources.list
!wget -qO - https://bintray.com/user/downloadSubjectPublicKey?username=bintray | apt-key add -
!apt update
!apt install feelpp-quickstart feelpp-data
!apt install python3-mpi4py python3-feelpp ssh
Explanation: <a href="https://colab.research.google.com/github/feelpp/book.feelpp.org/blob/master/pyfeelpp_tests.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Feel++ Notebook
In this notebook, we
install Feel++
test it some core functionalities
retrieve results to visualize them locally in Paraview
Installation of Feel++
We start by installing Feel++ in Colab:
set the proper sources
add the bintray key
update the repo
install python3-feelpp
End of explanation
import sys
import feelpp
e=feelpp.Environment(sys.argv)
e.setConfigFile("/usr/share/feelpp/data/testcases/quickstart/cases/triangle/triangle.cfg")
print(e.numberOfProcessors())
print("isMasterRank:",e.isMasterRank() )
Explanation: Core Test
End of explanation
geo={
'2':feelpp.download( "github:{repo:feelpp,path:feelpp/quickstart/laplacian/cases/feelpp2d/feelpp2d.geo}", worldComm=feelpp.Environment.worldCommPtr() )[0],
'3':feelpp.download( "github:{repo:feelpp,path:feelpp/quickstart/laplacian/cases/feelpp3d/feelpp3d.geo}", worldComm=feelpp.Environment.worldCommPtr() )[0]
}
def run( m, geofile ):
if e.isMasterRank():
print("mesh dim:", m.dimension())
m=feelpp.load(m,geofile,0.1)
if e.isMasterRank():
print("mesh ",m.dimension(),"D nelts:", m.numGlobalElements() )
print("mesh ",m.dimension(),"D nfaces:", m.numGlobalFaces() )
print("mesh ",m.dimension(),"D hmin:", m.hMin())
print("mesh ",m.dimension(),"D havg:", m.hAverage())
print("mesh ",m.dimension(),"D hmax:", m.hMax())
print("mesh ",m.dimension(),"D measure:", m.measure())
r = feelpp.elements(m)
print("mesh elts:", feelpp.nelements(r,True))
r = feelpp.boundaryfaces(m)
print("mesh boundary faces:", feelpp.nfaces(r,True))
run( feelpp.mesh(dim=2), geo['2'] )
run( feelpp.mesh(dim=3,realdim=3), geo['3'] )
Explanation: Mesh Test
End of explanation
geo={
'2':feelpp.download( "github:{repo:feelpp,path:feelpp/quickstart/laplacian/cases/feelpp2d/feelpp2d.geo}", worldComm=feelpp.Environment.worldCommPtr() )[0],
'3':feelpp.download( "github:{repo:feelpp,path:feelpp/quickstart/laplacian/cases/feelpp3d/feelpp3d.geo}", worldComm=feelpp.Environment.worldCommPtr() )[0]
}
def run( m, geo ):
m2d = feelpp.load(m,geo,0.1)
Xh=feelpp.functionSpace(mesh=m2d)
if e.isMasterRank():
print("Xh basisname: ", Xh.basisName())
print("Xh nDof: ", Xh.nDof())
print("Xh nLocalDof: ", Xh.nLocalDof())
print("Xh nLocalDofWithGhost: ", Xh.nLocalDofWithGhost())
print("Xh nLocalDofWithoutGhost: ", Xh.nLocalDofWithoutGhost())
m3=Xh.mesh()
assert m3==m2d
u=Xh.element()
u.on(range=feelpp.elements(m2d),expr=feelpp.expr("x:x"))
assert u.functionSpace() == Xh
assert u.size() == Xh.nDof()
run( feelpp.mesh(dim=2), geo['2'] )
run( feelpp.mesh(dim=3,realdim=3), geo['3'] )
Explanation: FunctionSpace Test
End of explanation
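One way to build intuition for the nDof values above: in a continuous order-1 Lagrange space ("Pch") each mesh vertex carries one degree of freedom, while in a discontinuous order-0 space ("Pdh") each element carries one. A toy sketch over a hand-written triangle list (hypothetical connectivity, not a Feel++ mesh):

```python
# Toy connectivity: each triangle is a tuple of 3 vertex ids.
triangles = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]

# P1 (continuous, order 1): one DoF per unique vertex.
n_dof_p1 = len({v for tri in triangles for v in tri})

# P0 (discontinuous, order 0): one DoF per element.
n_dof_p0 = len(triangles)

print(n_dof_p1, n_dof_p0)  # 5 3
```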
geo={
'2':feelpp.download( "github:{repo:feelpp,path:feelpp/quickstart/laplacian/cases/feelpp2d/feelpp2d.geo}", worldComm=feelpp.Environment.worldCommPtr() )[0],
'3':feelpp.download( "github:{repo:feelpp,path:feelpp/quickstart/laplacian/cases/feelpp3d/feelpp3d.geo}", worldComm=feelpp.Environment.worldCommPtr() )[0]
}
def run( m, geo ):
mesh = feelpp.load(m,geo,0.1)
Xh=feelpp.functionSpace( space="Pch", mesh=mesh, order=1 )
P0h = feelpp.functionSpace( space="Pdh", mesh=mesh, order=0 )
#u=Xh.elementFromExpr("{sin(2*pi*x)*cos(pi*y)}:x:y")
u=Xh.element()
u.on(range=feelpp.elements(mesh),expr=feelpp.expr("x*x:x"))
e = feelpp.exporter(mesh=mesh,name="feelpp"+str(m.dimension())+"d")
e.addScalar("un", 1.)
e.addP1c("u",u)
e.addP0d("pid",feelpp.pid( P0h ))
e.save()
run( feelpp.mesh( dim=2 ), geo['2'] )
run( feelpp.mesh( dim=3, realdim=3 ), geo['3'] )
!ls -lrt exports/ensightgold/feelpp2d
!ls -lrt exports/ensightgold/feelpp3d
# Save to your local computer
from google.colab import files
!zip -r feelpp.zip exports/ensightgold/
files.download('feelpp.zip')
Explanation: Exporter Test
End of explanation |
6,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Peer in THRG Generated Graphs
Dev Space
System
Step1: Network Dynamics
Let's examine the network dynamics
Step2: THRG
Using time1.py we learn or derive the production rules from the given
graph. Then, using time2.py we generate graphs given the THRG graph
model.
Toy Example
The board example is a toy graph. The following is inspired in part by this webpost.
Step3: Average Node Degree for the group of K=20 generated graphs.
Step4: Looking at the Avg Nodes and Edges in a group of generated graphs | Python Code:
# imports
import networkx as nx
import pickle
%matplotlib inline
import matplotlib.pyplot as plt
params = {'legend.fontsize':'small',
'figure.figsize': (7,7),
'axes.labelsize': 'small',
'axes.titlesize': 'small',
'xtick.labelsize':'small',
'ytick.labelsize':'small'}
plt.rcParams.update(params)
import matplotlib.gridspec as gridspec
import numpy as np
import pandas as pd
def unique_node_aggregation_perts(gb,pddf):
# print(r'* Unique edges added at each time stamp')
acnd= []
uniq_v= {}
for k,v in gb.groups.items():
nodes = pddf[['u','v']].loc[v].values[0]
newvcnt=0
for x in nodes:
if x not in acnd:
acnd.append(x)
newvcnt += 1
uniq_v[k] = newvcnt
df = pd.DataFrame(list(uniq_v.items()))
df.sort_values(by=[0],inplace=True)
return df
G = nx.Graph()
G.add_edge(0, 1,attr_dict={'ts':1})
G.add_edge(0, 4,attr_dict={'ts':38})
G.add_edge(0, 6,attr_dict={'ts':32})
G.add_edge(1, 2,attr_dict={'ts':2})
G.add_edge(1, 3,attr_dict={'ts':12})
G.add_edge(2, 3,attr_dict={'ts':14})
G.add_edge(2, 5,attr_dict={'ts':27})
G.add_edge(3, 4,attr_dict={'ts':11})
G.add_edge(3, 5,attr_dict={'ts':24})
G.add_edge(4, 6,attr_dict={'ts':40})
Explanation: Peer in THRG Generated Graphs
Dev Space
System: sailntrpy
Dir: ~/Research/Phoenix/PhoenixPython
Toy Graph - The Example Graph
This graph is shown as the example graph in the PAMI paper.
End of explanation
t = [d.values()[0] for u,v,d in G.edges(data=True)]
g_edges = [[d.values()[0],u,v] for u,v,d in G.edges(data=True)]
df = pd.DataFrame(g_edges, columns=['ts','u','v']) # u= source, v= target
gb = df.groupby(['ts'])
print(r'* Unique edges added at each time stamp')
acnd= []
uniq_v= {}
for k,v in gb.groups.items():
nodes = df[['u','v']].loc[v].values[0]
newvcnt=0
for x in nodes:
if x not in acnd:
acnd.append(x)
newvcnt += 1
uniq_v[k] = newvcnt
df = pd.DataFrame(list(uniq_v.items()))
df.sort_values(by=[0],inplace=True)
f, axs = plt.subplots(1, 3, figsize=(15,5))
ax0=axs[0]
ax1=axs[1]
ax2=axs[2]
pstn = nx.spring_layout(G)
nx.draw_networkx(G,pos=pstn, alpha=0.5,ax=ax0)
nx.draw_networkx_edge_labels(G,pos=pstn,alpha=0.5,ax=ax0)
nf = gb['v'].count()
df['ecnt'] = gb['v'].count().values
df['cs']= df[1].cumsum()
df['ce']= df['ecnt'].cumsum()
ax1.plot(df[0].values,df['ecnt'].values,'ro',linestyle=':')
ax1.bar(df[0].values,df[1].values, width = 0.8, alpha=0.5)
ax1.set_ylim([0,1.5]);
ax1.set_xlabel('Time Stamps')
ax1.set_ylabel('Unique Vertices Joining the Graph')
ax1.set_yticks([1,1.5]);
# Cummulative nodes
ax2.plot(df[0].values,df['cs'].values, alpha=0.5, label='nodes')
ax2.plot(df[0].values,df['ce'].values,label='edges')
ax2.legend()
ax2.set_xlabel('Time Stamps');
ax2.set_ylabel('Cumulative V and E over time');
# nx.nx.write_edgelist(G, '/tmp/out.weighted_example_graph',data=True)
Explanation: Network Dynamics
Let's examine the network dynamics
End of explanation
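The bookkeeping in the cell above — counting how many previously unseen vertices each timestamp contributes — can be sketched with plain Python sets (toy edge list; one edge per timestamp, as in the example graph):

```python
# Count, for each timestamp, how many vertices appear for the first time.
edges = [(1, 0, 1), (2, 1, 2), (11, 3, 4), (12, 1, 3)]  # (ts, u, v)

seen = set()
new_vertices_per_ts = {}
for ts, u, v in sorted(edges):
    fresh = {u, v} - seen          # vertices not seen at any earlier edge
    new_vertices_per_ts[ts] = len(fresh)
    seen |= fresh

print(new_vertices_per_ts)  # {1: 2, 2: 1, 11: 2, 12: 0}
```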
%run time/time1.py -d /tmp/toygraph -b 4 -i 0 -g 'Toygraph'
%run time/time2.py -din /tmp/toygraph -dout /tmp/toygraph_o -m 2 -gen 20
from glob import glob
dir_path = "/tmp/toygraph_o"
c_files = glob(dir_path+ "/*cliques.p")
e_files = glob(dir_path+ "/*edges.p")
if 0:
for j,f in enumerate(c_files):
clq_lst = pickle.load(open(f, "rb"))
print j, len(clq_lst), clq_lst
for c in clq_lst:
print c.history, c.nids
break
gdf = pd.DataFrame()
for f in e_files:
edg_lst = pickle.load(open(f, "rb"))
df = pd.DataFrame(edg_lst)
gdf = gdf.append(df)
# print gdf.shape
gb = gdf.groupby([2]).count()/20.0
# print gb.head()
f, axs = plt.subplots(1, 1, figsize=(1.6 * 6., 1 * 4.))
axs.scatter(x=gb.index.values,y=gb[1])
axs.set_ylabel('Avg # of Edges per Timestamp');
# plt.boxplot(gb[1].values, labels=)
# print gb.index.values
# print gb[1].values
Explanation: THRG
Using time1.py we learn or derive the production rules from the given
graph. Then, using time2.py we generate graphs given the THRG graph
model.
Toy Example
The board example is a toy graph. The following is inspired in part by this webpost.
End of explanation
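The averaging done in the next cell (edge counts per timestamp, divided by the number of generated graphs) can be sketched without pandas; the two tiny edge lists below are made-up stand-ins for the pickled outputs of time2.py:

```python
# Average number of edges per timestamp across K generated edge lists.
# Each generated graph is a list of (u, v, ts) tuples (toy data below).
graphs = [
    [(0, 1, 0), (1, 2, 0), (2, 3, 1)],
    [(0, 1, 0), (1, 3, 1), (3, 4, 1)],
]

counts = {}
for g in graphs:
    for _, _, ts in g:
        counts[ts] = counts.get(ts, 0) + 1

k = len(graphs)
avg_edges = {ts: c / k for ts, c in sorted(counts.items())}
print(avg_edges)  # {0: 1.5, 1: 1.5}
```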
# # Average Node Degree for the group of K=20 generated graphs
# gb = gdf.groupby([2]).groups
# avgk =[]
# for k,v in gb.items():
# # print gdf.loc[gb.groups[k]]
# df= gdf.loc[v]
# df.columns = ['s','t','ts']
# # print df.head()
# g = nx.from_pandas_dataframe(df, 's','t', ['ts'])
# # nodes.append(g.number_of_nodes())
# avgk.append(g.degree().values())
# # print k, np.mean(g.degree().keys()), g.degree().keys()
# f, axs = plt.subplots(1, 2, figsize=(1.6 * 6., 1 * 4.))
# axs[0].boxplot(avgk);
# axs[0].set_ylim([0,5])
# axs[0].set_ylabel('Degree per Timestamp');
# in blocks
gdf.columns = ['u','v','ts']
# print gdf.head()
span = gdf.ts.max() - gdf.ts.min()
slic = span/4.
for blk in range(int(gdf.ts.min()),int(gdf.ts.max()),max(1,int(slic))):
mask = (gdf['ts'] >= blk) & (gdf['ts'] <= blk+slic)
df = gdf.loc[mask]
g = nx.from_pandas_dataframe(df, 'u','v',['ts'])
print g.degree()
print nx.average_degree_connectivity(g)
break
# Average Degree At each time stamp
# in one generated graph determine the average degree
import pprint as pp
print ()
for f in e_files:
edg_lst = pickle.load(open(f, "rb"))
df = pd.DataFrame(edg_lst, columns=['u','v','ts'])
# Within this set of edges, gropu by time-stamp (lowest level)
gb = df.groupby(['ts']).groups
kd_lst = [nx.from_pandas_dataframe(df.loc[v],'u','v',['ts']).degree() for k,v in gb.items()]
ts_graphs = [nx.from_pandas_dataframe(df.loc[v],'u','v',['ts']) for k,v in gb.items()]
grps_k = [d.keys() for d in kd_lst]
# print [np.mean(kg) for kg in grps_k]
g = nx.from_pandas_dataframe(df.loc[v],'u','v',['ts'])
f, axmult = plt.subplots(1, len(ts_graphs), figsize=(1.6 * 6., 1 * 4.))
for j,axs in enumerate(axmult):
nx.draw_networkx(ts_graphs[j],pos=nx.spring_layout(ts_graphs[j]),ax=axs)
axs.set_xlabel('ts:'+str(j))
axs.spines['top'].set_visible(False)
axs.spines['right'].set_visible(False)
axs.spines['left'].set_visible(False)
# axs.axis('off')
axs.get_yaxis().set_visible(False)
axs.get_xaxis().set_ticks([])
if 0: print ts_graphs[j].degree().values()
break
plt.suptitle('Generated Graph Fragments Per Timestamp');
Explanation: Average Node Degree for the group of K=20 generated graphs.
End of explanation
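The per-timestamp degree computation sketched in the commented-out cell can also be done directly from the edge tuples, without building a networkx graph first (toy data for illustration):

```python
# Degree sequence of the edge set restricted to one timestamp,
# computed directly from (u, v, ts) tuples.
from collections import Counter

def degrees_at_ts(edges, ts):
    deg = Counter()
    for u, v, t in edges:
        if t == ts:
            deg[u] += 1
            deg[v] += 1
    return dict(deg)

edges = [(0, 1, 0), (1, 2, 0), (0, 2, 0), (2, 3, 1)]
deg = degrees_at_ts(edges, 0)
avg = sum(deg.values()) / len(deg)
print(deg, avg)  # {0: 2, 1: 2, 2: 2} 2.0
```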
#
# From the group of generated graphs, these are some stats
#
mdf = pd.DataFrame()
for f in e_files:
edg_lst = pickle.load(open(f, "rb"))
df = pd.DataFrame(edg_lst)
df.columns = ['u','v','ts']
gb = df.groupby(['ts'])
# print gb.keys()
# nodes = []
# for k,v in gb.items():
# g = nx.from_pandas_dataframe(df.loc[v],'u','v',['ts'])
# nodes.append([k,g.number_of_nodes()])
# print g.number_of_nodes()
# print nodes
nf = unique_node_aggregation_perts(gb, df)
nf.columns = ['ts','v']
if f == e_files[0]:
mdf = nf
continue
# nf['cs'] = nf[1].cumsum()
mdf = pd.merge(left=nf,right=mdf,on='ts',how='outer')
# df = pd.DataFrame(nodes)
mdf['avgVcnt'] = mdf.mean(axis=1)
mdf['cs'] = mdf['avgVcnt'].cumsum()
# print mdf.head()
# df['cs'] = df[0].cumsum()
# nf[[1,'cs']].plot()
f, axs = plt.subplots(1, 1, figsize=(1.6 * 6., 1 * 4.))
# axs.plot(nf[0].values, nf[1].values)
mdf['cs'].plot(x='ts',ax=axs,marker='o',linestyle=":");
axs.set_ylabel('Average Unique Nodes Accumulating')
axs.set_ylim(0,13)
axs.set_xlim(-1,10)
Explanation: Looking at the Avg Nodes and Edges in a group of generated graphs
End of explanation |
6,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
- 1 Honors Physics PHYS1010
- 2 Fractals in Python
  - 2.1 Setup the Environment
  - 2.2 Generating the [Mandelbrot Set](https://en.wikipedia.org/wiki/Mandelbrot_set)
  - 2.3 Generate a Fractal Image
  - 2.4 The [Julia Set](https://en.wikipedia.org/wiki/Julia_set)
    - 2.4.1 Useful numpy Functions
      - 2.4.1.1 OPTIONAL Choosing Colors with Math
Step1: This sets up the colors we want in our fractal image.
How this works
Step2: 2. Generating the Mandelbrot Set
As we covered in the slides, The Mandelbrot Set is the set of complex numbers, C, such that the following equation does not diverge when iterated from $z = 0$
Step3: Let's test our function
Step4: 3. Generate a Fractal Image
Now that we can determine if a value is in the Mandelbrot Set, let's build the structure of our image.
Step5: We are going to loop over all the pixels in our image and check if that pixel is in the Mandelbrot Set. We are using the $x$ and $y$ coordinates to represent the Real and Imaginary parts of the Complex number $z$.
Step6: Now we save our image and display it.
Step7: 4. The Julia Set
It turns out that there are more ways to make a fractal. We are going to open up some of the parameters and see what happens.
First we open up our value of $z_n$ and redefine our iteration function.
We have also pulled out the functional form that defines our set, this will make it easier to modify this without breaking anything in our iterate function.
Step8: Now we open up the value of c to be defined by us and let the pixel location relate to the value of $z_{n}$
Step9: By changing the name here, you can save multiple files without having to modify too much code
Step10: Useful numpy Functions | Python Code:
from PIL import Image, ImageDraw
import math, colorsys, numpy
from matplotlib import colors
from IPython.display import Image as ipythonImage
Explanation: Table of Contents
- 1 Honors Physics PHYS1010
- 2 Fractals in Python
  - 2.1 Setup the Environment
  - 2.2 Generating the [Mandelbrot Set](https://en.wikipedia.org/wiki/Mandelbrot_set)
  - 2.3 Generate a Fractal Image
  - 2.4 The [Julia Set](https://en.wikipedia.org/wiki/Julia_set)
    - 2.4.1 Useful numpy Functions
      - 2.4.1.1 OPTIONAL Choosing Colors with Math
# Honors Physics PHYS1010
# Fractals in Python
Here we will demonstrate how to generate fractal images using the Python programming language.
## 1. Setup the Environment
Here we import the packages we need from existing Python libraries. Python has extensive libraries of functions that save us from having to write them ourselves.
End of explanation
ipythonImage(filename = "named_colors.png")
color_list=('black',
'darkslategray',
'darkgreen',
'green',
'forestgreen',
'darkseagreen',
'limegreen',
'lime',
'palegreen',
'white')
palette = [colors.hex2color(colors.cnames[name]) for name in color_list]
Explanation: This sets up the colors we want in our fractal image.
How this works:
We are building an array of values that correspond to our colors. Colors are defined in Python as a list of three values corresponding to the percentage of Red, Green, and Blue in that color.
Black is (0.0, 0.0, 0.0) and White is (1.0, 1.0, 1.0)
Feel free to change the colors as you wish. The list of prenamed colors is provided below. Later we will see another way to generate the colors using math.
End of explanation
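For reference, colors.hex2color is essentially this conversion — each pair of hex digits becomes one channel scaled to the [0.0, 1.0] range:

```python
# What colors.hex2color does under the hood: split a '#rrggbb' string
# into three bytes and scale them to the [0.0, 1.0] range.
def hex_to_rgb(hex_str):
    h = hex_str.lstrip('#')
    return tuple(int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

print(hex_to_rgb('#000000'))  # (0.0, 0.0, 0.0) -> black
print(hex_to_rgb('#ffffff'))  # (1.0, 1.0, 1.0) -> white
```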
cutoff = 2.0
def iterate_series(c):
z_n = complex(0,0)
for n in range(0,100):
z_n = z_n*z_n + c
if abs(z_n) > cutoff:
return n
return -1
Explanation: 2. Generating the Mandelbrot Set
As we covered in the slides, The Mandelbrot Set is the set of complex numbers, C, such that the following equation does not diverge when iterated from $z = 0$:
\begin{split}
z_{n+1}= z_{n}^{2} + c
\end{split}
To determine if the equation is diverging, we need to set up a test.
To do so, we will use a loop and check if the absolute value of $z_{n}$ is larger than a cutoff.
We define a function to do this that accepts an input value for $c$ and returns $-1$ if $c$ is in the Mandelbrot Set and the iteration that diverged if not.
End of explanation
iterate_series(1)
iterate_series(0)
iterate_series(-1)
Explanation: Let's test our function
End of explanation
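Those three spot checks can be captured as assertions; iterate_series is restated here so the snippet is self-contained:

```python
# Self-contained restatement of iterate_series with assertions on the
# three sample points tested above.
cutoff = 2.0

def iterate_series(c):
    z_n = complex(0, 0)
    for n in range(0, 100):
        z_n = z_n * z_n + c
        if abs(z_n) > cutoff:
            return n
    return -1

assert iterate_series(1) == 2    # 0 -> 1 -> 2 -> 5: escapes at n = 2
assert iterate_series(0) == -1   # stays at 0 forever: in the set
assert iterate_series(-1) == -1  # cycles 0, -1, 0, -1: in the set
```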
x_max = 800
y_max = 800
img = Image.new("RGB",(x_max,y_max))
d = ImageDraw.Draw(img)
Explanation: 3. Generate a Fractal Image
Now that we can determine if a value is in the Mandelbrot Set, let's build the structure of our image.
End of explanation
for x in range(x_max):
for y in range(y_max):
#This determines the centering of our image
offset=(2.2,1.5)
#The value of c is determined by scaling the pixel location and offsetting it.
c = complex(x*3.0/x_max-offset[0], y*3.0/y_max-offset[1])
#Now we call our function from before
n = iterate_series(c)
#Checks if c is in the Mandelbrot Set
if n == -1:
v=1
#If not, it checks when it diverged
else:
v=n/100.0
#Determines the colors in our image based on our the previous check
color_index = int(v * (len(palette)-1))
rgb = palette[color_index]
red = int(rgb[0]*255)
green = int(rgb[1]*255)
blue = int(rgb[2]*255)
d.point((x,y),fill = (red,green,blue))
Explanation: We are going to loop over all the pixels in our image and check if that pixel is in the Mandelbrot Set. We are using the $x$ and $y$ coordinates to represent the Real and Imaginary parts of the Complex number $z$.
End of explanation
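The pixel-to-complex mapping buried in the loop can be pulled out into a function, which makes the corner values easy to verify:

```python
# The pixel -> complex-plane mapping used in the loop above, pulled out
# into a function so the corner values can be checked.
x_max, y_max = 800, 800
offset = (2.2, 1.5)

def pixel_to_c(x, y):
    return complex(x * 3.0 / x_max - offset[0], y * 3.0 / y_max - offset[1])

print(pixel_to_c(0, 0))          # (-2.2-1.5j): top-left corner
print(pixel_to_c(x_max, y_max))  # roughly (0.8+1.5j): bottom-right corner
```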
img.save("fractal.png")
ipythonImage(filename='fractal.png')
Explanation: Now we save our image and display it.
End of explanation
def func_z_n(c, z_n):
#return z_n*z_n +c
return numpy.power(z_n,2) + c
cutoff = 2
def iterate_series2(c, z_n = -2.0**.5):
for n in range(0,100):
z_n = func_z_n(c, z_n)
if abs(z_n) > cutoff:
return n
return -1
Explanation: 4. The Julia Set
It turns out that there are more ways to make a fractal. We are going to open up some of the parameters and see what happens.
First we open up our value of $z_n$ and redefine our iteration function.
We have also pulled out the functional form that defines our set, this will make it easier to modify this without breaking anything in our iterate function.
End of explanation
c_julia = complex(-0.4, 0.6)
for x in range(x_max):
for y in range(y_max):
offset=(1.5, 1.5)
z = complex(x*3.0/x_max-offset[0], y*3.0/y_max-offset[1])
n = iterate_series2(c_julia, z)
if n == -1:
v=1
else:
v=n/100.0
color_index = int(v * (len(palette)-1))
rgb = palette[color_index]
red = int(rgb[0]*255)
green = int(rgb[1]*255)
blue = int(rgb[2]*255)
d.point((x,y),fill = (red,green,blue))
#If you want to play with the colors another way, uncomment this and run the color pallet cell bellow.
#Don't forget to comment out the line above first
#d.point((x, y), fill = palette[int(v * (colors_max-1))])
Explanation: Now we open up the value of c to be defined by us and let the pixel location relate to the value of $z_{n}$
End of explanation
name = "julia"
img.save(name+".png")
ipythonImage(filename = name+".png")
Explanation: By changing the name here, you can save multiple files without having to modify too much code
End of explanation
#The max number of colors it can handle
#colors_max = 50
colors_max = 500
#Coefficients for tweaking the percentage of each color we use
r1, g1, b1 = 0.66, 1.0, 0.0 # Between 0.0 and 1.0
r2, g2, b2 = 1.0, -2.0, 2.0 # Must be greater than 1.0 or less than -1.0
r3, g3, b3 = 0.6, 0.8, 0.1 # Between 0.0 and 1.0
# Calculate a tolerable palette
palette = [0] * colors_max
for i in range(colors_max):
f = 1-abs((float(i)/colors_max-1)**15)
#r, g, b = colorsys.hsv_to_rgb(.66+f/3, 1-f/2, f)
r, g, b = colorsys.hsv_to_rgb(r1+f/r2, g1+f/g2, b1+f/b2)
#palette[i] = (int(r*255), int(g*255), int(b*255))
palette[i] = (int((r-r3)*255), int((g-g3)*255), int((b-b3)*255))
Explanation: Useful numpy Functions: Call by using numpy.function
Try some of these in the definition of our set and see what happens.
|Trig Functions|Hyperbolic Functions|Exponentials and Logs|
|:---:|:---:|:---:|
|sin(x)|sinh(x)|exp(x)|
|cos(x)|cosh(x)|log(x)|
|tan(x)|tanh(x)|log10(x)|
|arcsin(x)|arcsinh(x)|power(x,y)|
|arccos(x)|arccosh(x)|sqrt(x)|
|arctan(x)|arctanh(x)||
OPTIONAL Choosing Colors with Math
The cell below chooses colors for us based on an algorithm rather than specifying the colors in advance. It takes some tweaking to get it to look right but some of the results are spectacular.
End of explanation |
6,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Uncertainty in Deep Learning
A common criticism of deep learning models is that they tend to act as black boxes. A model produces outputs, but doesn't given enough context to interpret them properly. How reliable are the model's predictions? Are some predictions more reliable than others? If a model predicts a value of 5.372 for some quantity, should you assume the true value is between 5.371 and 5.373? Or that it's between 2 and 8? In some fields this situation might be good enough, but not in science. For every value predicted by a model, we also want an estimate of the uncertainty in that value so we can know what conclusions to draw based on it.
DeepChem makes it very easy to estimate the uncertainty of predicted outputs (at least for the models that support it—not all of them do). Let's start by seeing an example of how to generate uncertainty estimates. We load a dataset, create a model, train it on the training set, predict the output on the test set, and then derive some uncertainty estimates.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Step1: We'll use the Delaney dataset from the MoleculeNet suite to run our experiments in this tutorial. Let's load up our dataset for our experiments, and then make some uncertainty predictions.
Step2: All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right?
Of course, it isn't really that simple at all. DeepChem is doing a lot of work to come up with those uncertainties. So now let's pull back the curtain and see what is really happening. (For the full mathematical details of calculating uncertainty, see https
Step3: The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty.
How can we estimate the size of this uncertainty? By training a model to do it, of course! At the same time it is learning to predict the outputs, it is also learning to predict how accurately each output matches the training data. For every output of the model, we add a second output that produces the corresponding uncertainty. Then we modify the loss function to make it learn both outputs at the same time.
Epistemic Uncertainty
Now consider these three curves. They are fit to the same data points as before, but this time we are using 10th degree polynomials.
Step4: Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points.) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions.
The ideal way to measure epistemic uncertainty is to train many different models, each time using a different random seed and possibly varying hyperparameters. Then use all of them for each input and see how much the predictions vary. This is very expensive to do, since it involves repeating the whole training process many times. Fortunately, we can approximate the same effect in a less expensive way
Step5: The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There also is clearly a correlation between the axes. Values with larger uncertainties tend on average to have larger errors. (Strictly speaking, we expect the absolute error to be less than the predicted uncertainty. Even a very uncertain number could still happen to be close to the correct value by chance. If the model is working well, there should be more points below the diagonal than above it.)
Now let's see how well the values satisfy the expected distribution. If the standard deviations are correct, and if the errors are normally distributed (which is certainly not guaranteed to be true!), we expect 95% of the values to be within two standard deviations, and 99% to be within three standard deviations. Here is a histogram of errors as measured in standard deviations. | Python Code:
!pip install --pre deepchem
import deepchem
deepchem.__version__
Explanation: Uncertainty in Deep Learning
A common criticism of deep learning models is that they tend to act as black boxes. A model produces outputs, but doesn't given enough context to interpret them properly. How reliable are the model's predictions? Are some predictions more reliable than others? If a model predicts a value of 5.372 for some quantity, should you assume the true value is between 5.371 and 5.373? Or that it's between 2 and 8? In some fields this situation might be good enough, but not in science. For every value predicted by a model, we also want an estimate of the uncertainty in that value so we can know what conclusions to draw based on it.
DeepChem makes it very easy to estimate the uncertainty of predicted outputs (at least for the models that support it—not all of them do). Let's start by seeing an example of how to generate uncertainty estimates. We load a dataset, create a model, train it on the training set, predict the output on the test set, and then derive some uncertainty estimates.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
End of explanation
import deepchem as dc
import numpy as np
import matplotlib.pyplot as plot
tasks, datasets, transformers = dc.molnet.load_delaney()
train_dataset, valid_dataset, test_dataset = datasets
model = dc.models.MultitaskRegressor(len(tasks), 1024, uncertainty=True)
model.fit(train_dataset, nb_epoch=20)
y_pred, y_std = model.predict_uncertainty(test_dataset)
Explanation: We'll use the Delaney dataset from the MoleculeNet suite to run our experiments in this tutorial. Let's load up our dataset for our experiments, and then make some uncertainty predictions.
End of explanation
# Generate some fake data and plot a regression line.
x = np.linspace(0, 5, 10)
y = 0.15*x + np.random.random(10)
plot.scatter(x, y)
fit = np.polyfit(x, y, 1)
line_x = np.linspace(-1, 6, 2)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
Explanation: All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right?
Of course, it isn't really that simple at all. DeepChem is doing a lot of work to come up with those uncertainties. So now let's pull back the curtain and see what is really happening. (For the full mathematical details of calculating uncertainty, see https://arxiv.org/abs/1703.04977)
To begin with, what does "uncertainty" mean? Intuitively, it is a measure of how much we can trust the predictions. More formally, we expect that the true value of whatever we are trying to predict should usually be within a few standard deviations of the predicted value. But uncertainty comes from many sources, ranging from noisy training data to bad modelling choices, and different sources behave in different ways. It turns out there are two fundamental types of uncertainty we need to take into account.
Aleatoric Uncertainty
Consider the following graph. It shows the best fit linear regression to a set of ten data points.
End of explanation
plot.figure(figsize=(12, 3))
line_x = np.linspace(0, 5, 50)
for i in range(3):
plot.subplot(1, 3, i+1)
plot.scatter(x, y)
fit = np.polyfit(np.concatenate([x, [3]]), np.concatenate([y, [i]]), 10)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
Explanation: The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty.
How can we estimate the size of this uncertainty? By training a model to do it, of course! At the same time it is learning to predict the outputs, it is also learning to predict how accurately each output matches the training data. For every output of the model, we add a second output that produces the corresponding uncertainty. Then we modify the loss function to make it learn both outputs at the same time.
Epistemic Uncertainty
Now consider these three curves. They are fit to the same data points as before, but this time we are using 10th degree polynomials.
End of explanation
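The "modify the loss function" step described above for aleatoric uncertainty is usually a heteroscedastic Gaussian negative log-likelihood, as in the paper linked earlier. A minimal numpy sketch (an illustration of the idea, not DeepChem's internal implementation):

```python
import numpy as np

# Heteroscedastic Gaussian NLL: for each sample the model emits a mean
# mu and a log-variance s = log(sigma^2); the loss trades accuracy in mu
# against the claimed noise level: 0.5*exp(-s)*(y-mu)^2 + 0.5*s.
def hetero_nll(y, mu, log_var):
    return np.mean(0.5 * np.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var)

y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 3.2])

# Claiming near-zero variance while making these errors costs far more
# than claiming a variance that roughly matches the actual squared errors.
overconfident = hetero_nll(y, mu, np.full(3, -8.0))
calibrated = hetero_nll(y, mu, np.full(3, np.log(0.02)))
assert overconfident > calibrated
```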
abs_error = np.abs(y_pred.flatten()-test_dataset.y.flatten())
plot.scatter(y_std.flatten(), abs_error)
plot.xlabel('Standard Deviation')
plot.ylabel('Absolute Error')
plot.show()
Explanation: Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points.) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions.
The ideal way to measure epistemic uncertainty is to train many different models, each time using a different random seed and possibly varying hyperparameters. Then use all of them for each input and see how much the predictions vary. This is very expensive to do, since it involves repeating the whole training process many times. Fortunately, we can approximate the same effect in a less expensive way: by using dropout.
Recall that when you train a model with dropout, you are effectively training a huge ensemble of different models all at once. Each training sample is evaluated with a different dropout mask, corresponding to a different random subset of the connections in the full model. Usually we only perform dropout during training and use a single averaged mask for prediction. But instead, let's use dropout for prediction too. We can compute the output for lots of different dropout masks, then see how much the predictions vary. This turns out to give a reasonable estimate of the epistemic uncertainty in the outputs.
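As a minimal sketch of Monte Carlo dropout at prediction time (plain NumPy with made-up weights, not DeepChem's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_with_dropout(x, w, drop_prob=0.5):
    """One stochastic forward pass of a single linear layer with dropout.

    A fresh random mask zeroes a subset of the weights; dividing by the
    keep probability keeps the expected output unchanged.
    """
    mask = rng.random(w.shape) >= drop_prob
    return x @ (w * mask) / (1.0 - drop_prob)

# Repeat the prediction under many random masks; the spread of the
# resulting predictions estimates the epistemic uncertainty.
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -0.2, 0.1])
samples = np.array([predict_with_dropout(x, w) for _ in range(1000)])
epistemic_std = samples.std()
```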
Uncertain Uncertainty?
Now we can combine the two types of uncertainty to compute an overall estimate of the error in each output:
$$\sigma_\text{total} = \sqrt{\sigma_\text{aleatoric}^2 + \sigma_\text{epistemic}^2}$$
This is the value DeepChem reports. But how much can you trust it? Remember how I started this tutorial: deep learning models should not be used as black boxes. We want to know how reliable the outputs are. Adding uncertainty estimates does not completely eliminate the problem; it just adds a layer of indirection. Now we have estimates of how reliable the outputs are, but no guarantees that those estimates are themselves reliable.
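In code, the combination above is a single line (a sketch with made-up numbers):

```python
import numpy as np

# Illustrative per-output uncertainty estimates (made-up numbers).
aleatoric_std = np.array([0.3, 0.4])
epistemic_std = np.array([0.4, 0.3])
total_std = np.sqrt(aleatoric_std**2 + epistemic_std**2)
print(total_std)  # [0.5 0.5]
```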
Let's go back to the example we started with. We trained a model on the SAMPL training set, then generated predictions and uncertainties for the test set. Since we know the correct outputs for all the test samples, we can evaluate how well we did. Here is a plot of the absolute error in the predicted output versus the predicted uncertainty.
End of explanation
plot.hist(abs_error/y_std.flatten(), 20)
plot.show()
Explanation: The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There also is clearly a correlation between the axes. Values with larger uncertainties tend on average to have larger errors. (Strictly speaking, since the predicted uncertainty is a standard deviation, we only expect the absolute error to be smaller than it most of the time, not always; and even a very uncertain prediction can still happen to land close to the correct value by chance. If the model is working well, there should be more points below the diagonal than above it.)
Now let's see how well the values satisfy the expected distribution. If the standard deviations are correct, and if the errors are normally distributed (which is certainly not guaranteed to be true!), we expect about 95% of the values to fall within two standard deviations, and about 99.7% to fall within three standard deviations. Here is a histogram of errors as measured in standard deviations.
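As a quick sanity check of what those coverage fractions look like when the uncertainties are calibrated (synthetic numbers here; in the tutorial you would use abs_error and y_std from above):

```python
import numpy as np

rng = np.random.default_rng(1)
std = np.full(10000, 0.5)            # pretend these are predicted uncertainties
err = np.abs(rng.normal(0.0, std))   # errors actually drawn at that scale
within_2sd = float(np.mean(err < 2 * std))
within_3sd = float(np.mean(err < 3 * std))
print(within_2sd, within_3sd)        # close to 0.954 and 0.997
```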
End of explanation |
6,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading data into SciDB
1) Verify prerequisites
Python
SciDB-Py requires Python 2.6-2.7 or 3.3
Step1: NumPy
tested with version 1.9 (1.13.1)
Step2: Requests
tested with version 2.7 (2.18.1) Required for using the Shim interface to SciDB.
Step3: Pandas (optional)
tested with version 0.15. (0.20.3) Required only for importing/exporting SciDB arrays as Pandas Dataframe objects.
Step4: SciPy (optional)
tested with versions 0.10-0.12. (0.19.0) Required only for importing/exporting SciDB arrays as SciPy sparse matrices.
Step5: 2) Import scidbpy
pip install git+http
Step6: Connect to the database server
Step7: 3) Read the file listing each of the waveform records
Step8: Strip special characters
Step9: 4) Import WFDB to connect to PhysioNet
Step10: Find the position of the type II signal
Step11: Normalize the signal and remove its null values
Step12: Replace the hyphens "-" with underscores "_" because, for some reason, SciDB has trouble with those characters
If the array is not empty after removing null values, upload it to SciDB
import sys
sys.version_info
Explanation: Loading data into SciDB
1) Verify prerequisites
Python
SciDB-Py requires Python 2.6-2.7 or 3.3
End of explanation
import numpy as np
np.__version__
Explanation: NumPy
tested with version 1.9 (1.13.1)
End of explanation
import requests
requests.__version__
Explanation: Requests
tested with version 2.7 (2.18.1) Required for using the Shim interface to SciDB.
End of explanation
import pandas as pd
pd.__version__
Explanation: Pandas (optional)
tested with version 0.15. (0.20.3) Required only for importing/exporting SciDB arrays as Pandas Dataframe objects.
End of explanation
import scipy
scipy.__version__
Explanation: SciPy (optional)
tested with versions 0.10-0.12. (0.19.0) Required only for importing/exporting SciDB arrays as SciPy sparse matrices.
End of explanation
import scidbpy
scidbpy.__version__
from scidbpy import connect
Explanation: 2) Import scidbpy
pip install git+http://github.com/paradigm4/scidb-py.git@devel
End of explanation
sdb = connect('http://localhost:8080')
Explanation: Connect to the database server
End of explanation
import urllib.request # urllib2 in python2 the lib that handles the url stuff
target_url = "https://www.physionet.org/physiobank/database/mimic2wdb/matched/RECORDS-waveforms"
data = urllib.request.urlopen(target_url) # it's a file like object and works just like a file
lines = data.readlines();
line = str(lines[100])
Explanation: 3) Read the file listing each of the waveform records
End of explanation
carpeta,onda = line.replace('b\'','').replace('\'','').replace('\\n','').split("/")
onda
Explanation: Strip special characters
End of explanation
import wfdb
sig, fields = wfdb.srdsamp(onda,pbdir='mimic2wdb/matched/'+carpeta) #, sampfrom=11000
print(sig)
print("signame: " + str(fields['signame']))
print("units: " + str(fields['units']))
print("fs: " + str(fields['fs']))
print("comments: " + str(fields['comments']))
print("fields: " + str(fields))
Explanation: 4) Import WFDB to connect to PhysioNet
End of explanation
signalII = None
try:
signalII = fields['signame'].index("II")
except ValueError:
print("List does not contain value")
if signalII is not None:
    print("List contains value")
Explanation: Find the position of the type II signal
End of explanation
array = wfdb.processing.normalize(x=sig[:, signalII], lb=-2, ub=2)
arrayNun = array[~np.isnan(array)]
arrayNun = np.trim_zeros(arrayNun)
arrayNun
Explanation: Normaliza la señal y le quita los valores en null
End of explanation
ondaName = onda.replace("-", "_")
if arrayNun.size>0 :
sdb.input(upload_data=array).store(ondaName,gc=False)
# sdb.iquery("store(input(<x:int64>[i], '{fn}', 0, '{fmt}'), "+ondaName+")", upload_data=array)
Explanation: Replace the hyphens "-" with underscores "_" because, for some reason, SciDB has trouble with those characters
If the array is not empty after removing null values, upload it to SciDB
End of explanation |
6,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A1
Step1: Step 1
Step2: Run the above function to call the API and assign the responses to variables
Step3: Export the API raw data files. This section has been commented out in order to not continuously overwrite the raw data files. The raw data files have already been created and will be imported in the next step.
Step4: Step 2
Step5: Functions for processing
get_views and get_counts take the raw .json files as inputs, strip the timestamps and views/counts, and return arrays with two columns (timestamp, views/counts) and a row with each month's worth of data.
lookup_val takes the arrays created from the prior functions as one input and a date as a second input. It uses the date to find the index within the array from column 1 (timestamp) and returns the value from that same index in column 2 (counts/views). If the date is not within the array, then a value of 0 is assigned.
Step6: Run the above functions to get all of the views/counts for both the legacy and current API
Step7: Processing
First, all of the formatted arrays from the API responses are concatenated and the first column (timestamp) is taken as a set() to remove any duplicate timestamps. From here we can easily parse the timestamps into a list of just the years and a list of just the months. This gives us our first two columns of our cleaned data, 'year' and 'month'.
Step8: Second, we initialize five (one for each API response) lists where we will obtain just the counts/views from the two column arrays. We will then loop through all of the dates (no duplicates) that we found from the previous step and use the lookup_val function to find the corresponding counts/views for each API response and append these to lists we initialized.
Step9: Third, we need to aggregate the two mobile sets of data from pageviews to get the total mobile data. For both pagecounts and pageviews we aggregate the desktop counts/views and mobile counts/views to get the total views for each.
Step10: Convert to pandas DataFrame for easy export.
Step11: Export data in single csv. This section has been commented out in order to not continuously overwrite the cleaned data file. The cleaned data file has already been created and will be imported in the next step.
Step12: Step 3
Step13: Plot the data
The dates from the csv are converted to a datetime format in order to be plotted neatly. The points from the data are plotted, filtering out non-zero values in y-axis data.
The figure is then saved as a .png file. | Python Code:
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
from datetime import datetime
%matplotlib inline
Explanation: A1: Data curation
Dane Jordan
Import necessary libraries that will be used
End of explanation
# since we will be performing api calls at least five times, we will functionalize it
def data_acquisition(api_endpoint, access):
'''
call the wikimedia api and return a json format data set
:param api_endpoint: legacy (pagecounts)
current (pageviews)
:param access: legacy (all-sites, desktop-site, mobile-site)
current (desktop, mobile-app, mobile-web)
'''
# define the parameters for the api call (params listed below are universal across all api calls)
params = {'project' : 'en.wikipedia.org',
'agent' : 'user',
'granularity' : 'monthly',
'start' : '2008010100',
'end' : '2017100100'
}
# check api_endpoint variable entered is valid for pagecounts
if api_endpoint == 'pagecounts':
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
params['access-site'] = access
# check access variable entered is valid for pagecounts, print error otherwise
if access != 'all-sites' and access != 'desktop-site' and access != 'mobile-site':
print('error: invalid access (all-sites, desktop-site, mobile-site)')
else:
pass
# check api_endpoint variable entered is valid for pageviews
elif api_endpoint == 'pageviews':
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
params['access'] = access
# check access variable entered is valid for pageviews, print error otherwise
if access != 'desktop' and access != 'mobile-app' and access != 'mobile-web':
print('error: invalid access (desktop, mobile-app, mobile-web)')
else:
pass
# print error message if invalid api_endpoint variable
else:
print('error: invalid api_endpoint (pagecounts, pageviews)')
# headers={'User-Agent' : 'https://github.com/drjordy66', 'From' : 'drjordan@uw.edu'}
api_call = requests.get(endpoint.format(**params))
return api_call.json()
Explanation: Step 1: Data acquisition
Below is a function that takes two inputs, the API endpoint (either 'pagecounts' or 'pageviews') and the access parameter. For pagecounts the access parameter can be 'all-sites', 'desktop-site', or 'mobile-site'. For pageviews the access parameter can be 'desktop', 'mobile-app', or 'mobile-web'. The function fills in all other parameters for an API call (thanks Jonathan and Oliver!), and returns the API response.
End of explanation
response_pageview_desktop = data_acquisition('pageviews', 'desktop')
response_pageview_mobileweb = data_acquisition('pageviews', 'mobile-web')
response_pageview_mobileapp = data_acquisition('pageviews', 'mobile-app')
response_pagecount_desktop = data_acquisition('pagecounts', 'desktop-site')
response_pagecount_mobile = data_acquisition('pagecounts', 'mobile-site')
Explanation: Run the above function to call the API and assign the responses to variables
End of explanation
# json.dump(response_pageview_desktop, open('../data_raw/pageviews_desktop_' + response_pageview_desktop['items'][0]['timestamp'][:-4] + '-' + response_pageview_desktop['items'][-1]['timestamp'][:-4] + '.json', 'w'), indent=4)
# json.dump(response_pageview_mobileweb, open('../data_raw/pageviews_mobile-web_' + response_pageview_mobileweb['items'][0]['timestamp'][:-4] + '-' + response_pageview_mobileweb['items'][-1]['timestamp'][:-4] + '.json', 'w'), indent=4)
# json.dump(response_pageview_mobileapp, open('../data_raw/pageviews_mobile-app_' + response_pageview_mobileapp['items'][0]['timestamp'][:-4] + '-' + response_pageview_mobileapp['items'][-1]['timestamp'][:-4] + '.json', 'w'), indent=4)
# json.dump(response_pagecount_desktop, open('../data_raw/pagecounts_desktop-site_' + response_pagecount_desktop['items'][0]['timestamp'][:-4] + '-' + response_pagecount_desktop['items'][-1]['timestamp'][:-4] + '.json', 'w'), indent=4)
# json.dump(response_pagecount_mobile, open('../data_raw/pagecounts_mobile-site_' + response_pagecount_mobile['items'][0]['timestamp'][:-4] + '-' + response_pagecount_mobile['items'][-1]['timestamp'][:-4] + '.json', 'w'), indent=4)
Explanation: Export the API raw data files. This section has been commented out in order to not continuously overwrite the raw data files. The raw data files have already been created and will be imported in the next step.
End of explanation
response_pagecount_desktop = json.load(open('../data_raw/pagecounts_desktop-site_200801-201608.json'))
response_pagecount_mobile = json.load(open('../data_raw/pagecounts_mobile-site_201410-201608.json'))
response_pageview_desktop = json.load(open('../data_raw/pageviews_desktop_201507-201709.json'))
response_pageview_mobileapp = json.load(open('../data_raw/pageviews_mobile-app_201507-201709.json'))
response_pageview_mobileweb = json.load(open('../data_raw/pageviews_mobile-web_201507-201709.json'))
Explanation: Step 2: Data processing
Import the raw .json files to process and create a new file for analysis.
End of explanation
def get_views(api_response):
'''
strip all views from an api response
'''
temp_list = []
for i in api_response['items']:
temp_list.append([i['timestamp'], i['views']])
return np.array(temp_list)
def get_count(api_response):
'''
    strip all counts from an api response
'''
temp_list = []
for i in api_response['items']:
temp_list.append([i['timestamp'], i['count']])
return np.array(temp_list)
def lookup_val(api_views, date):
'''
    look up the views/count value for a given date (0 if the date is absent)
    '''
    if date in list(api_views[:, 0]):
row_idx = list(api_views[:,0]).index(date)
row_val = int(api_views[:,1][row_idx])
else:
row_val = 0
return row_val
Explanation: Functions for processing
get_views and get_counts take the raw .json files as inputs, strip the timestamps and views/counts, and return arrays with two columns (timestamp, views/counts) and a row with each month's worth of data.
lookup_val takes the arrays created from the prior functions as one input and a date as a second input. It uses the date to find the index within the array from column 1 (timestamp) and returns the value from that same index in column 2 (counts/views). If the date is not within the array, then a value of 0 is assigned.
End of explanation
# strip all dates and views/count from api responses
pageview_desktop_views = get_views(response_pageview_desktop)
pageview_mobileweb_views = get_views(response_pageview_mobileweb)
pageview_mobileapp_views = get_views(response_pageview_mobileapp)
pagecount_desktop_views = get_count(response_pagecount_desktop)
pagecount_mobile_views = get_count(response_pagecount_mobile)
Explanation: Run the above functions to get all of the views/counts for both the legacy and current API
End of explanation
# combine all data into one array
all_dates_views = np.concatenate((pageview_desktop_views,
pageview_mobileweb_views,
pageview_mobileapp_views,
pagecount_desktop_views,
pagecount_mobile_views))
# strip only dates, remove duplicates, sort
all_dates = sorted(list(set(all_dates_views[:, 0])))
# parse into years and months
year_col = [all_dates[i][:-6] for i in range(len(all_dates))]
month_col = [all_dates[i][4:6] for i in range(len(all_dates))]
Explanation: Processing
First, all of the formatted arrays from the API responses are concatenated and the first column (timestamp) is taken as a set() to remove any duplicate timestamps. From here we can easily parse the timestamps into a list of just the years and a list of just the months. This gives us our first two columns of our cleaned data, 'year' and 'month'.
End of explanation
# initialize lists for columns of csv file
pageview_desktop_views_col = []
pageview_mobileweb_views_col = []
pageview_mobileapp_views_col = []
pagecount_desktop_views_col = []
pagecount_mobile_views_col = []
# loop through all of the dates and lookup respective values from each api response
for i in all_dates:
pageview_desktop_views_col.append(lookup_val(pageview_desktop_views, i))
pageview_mobileweb_views_col.append(lookup_val(pageview_mobileweb_views, i))
pageview_mobileapp_views_col.append(lookup_val(pageview_mobileapp_views, i))
pagecount_desktop_views_col.append(lookup_val(pagecount_desktop_views, i))
pagecount_mobile_views_col.append(lookup_val(pagecount_mobile_views, i))
Explanation: Second, we initialize five (one for each API response) lists where we will obtain just the counts/views from the two column arrays. We will then loop through all of the dates (no duplicates) that we found from the previous step and use the lookup_val function to find the corresponding counts/views for each API response and append these to lists we initialized.
End of explanation
# aggregate the mobile views from pageviews and the "all views" from pageviews and pagecounts
pageview_mobile_views_col = [sum(i) for i in zip(pageview_mobileweb_views_col, pageview_mobileapp_views_col)]
pageview_all_views_col = [sum(i) for i in zip(pageview_desktop_views_col, pageview_mobile_views_col)]
pagecount_all_views_col = [sum(i) for i in zip(pagecount_desktop_views_col, pagecount_mobile_views_col)]
Explanation: Third, we need to aggregate the two mobile sets of data from pageviews to get the total mobile data. For both pagecounts and pageviews we aggregate the desktop counts/views and mobile counts/views to get the total views for each.
End of explanation
# assign column data to a pandas dataframe
df = pd.DataFrame({'year': year_col,
'month': month_col,
'pagecount_all_views': pagecount_all_views_col,
'pagecount_desktop_views': pagecount_desktop_views_col,
'pagecount_mobile_views': pagecount_mobile_views_col,
'pageview_all_views': pageview_all_views_col,
'pageview_desktop_views': pageview_desktop_views_col,
'pageview_mobile_views': pageview_mobile_views_col})
# organize in correct column order
df = df[['year',
'month',
'pagecount_all_views',
'pagecount_desktop_views',
'pagecount_mobile_views',
'pageview_all_views',
'pageview_desktop_views',
'pageview_mobile_views']]
Explanation: Convert to pandas DataFrame for easy export.
End of explanation
# write the column data to csv
# df.to_csv('../data_clean/en-wikipedia_traffic_200801-201709.csv', index=False)
Explanation: Export data in single csv. This section has been commented out in order to not continuously overwrite the cleaned data file. The cleaned data file has already been created and will be imported in the next step.
End of explanation
df = pd.read_csv('../data_clean/en-wikipedia_traffic_200801-201709.csv',
dtype={'year': str, 'month': str})
Explanation: Step 3: Analysis
Import the cleaned data file to use for analysis.
End of explanation
# convert dates to a datetime format for plotting
dates = np.array([datetime.strptime(list(df['year'])[i] + list(df['month'])[i], '%Y%m') for i in range(len(df))])
# set plot size
plt.figure(figsize=(16, 8))
# plot the points, filtering on non-zero values in the column data
plt.plot(dates[np.array(df['pageview_desktop_views']) > 0],
np.array(df['pageview_desktop_views'])[np.array(df['pageview_desktop_views']) > 0], 'g')
plt.plot(dates[np.array(df['pageview_mobile_views']) > 0],
np.array(df['pageview_mobile_views'])[np.array(df['pageview_mobile_views']) > 0], 'b')
plt.plot(dates[np.array(df['pageview_all_views']) > 0],
np.array(df['pageview_all_views'])[np.array(df['pageview_all_views']) > 0], 'k')
plt.plot(dates[np.array(df['pagecount_desktop_views']) > 0][:-1],
np.array(df['pagecount_desktop_views'])[np.array(df['pagecount_desktop_views']) > 0][:-1], 'g--')
plt.plot(dates[np.array(df['pagecount_mobile_views']) > 0][:-1],
np.array(df['pagecount_mobile_views'])[np.array(df['pagecount_mobile_views']) > 0][:-1], 'b--')
plt.plot(dates[np.array(df['pagecount_all_views']) > 0][:-1],
np.array(df['pagecount_all_views'])[np.array(df['pagecount_all_views']) > 0][:-1], 'k--')
# add gridlines, x-axis label, scale (to match assignment), title, legend
plt.grid(True)
plt.xlabel('May 2015: a new pageview definition took effect, which eliminated all crawler traffic. Dashed lines mark old definition.', color='r')
plt.ylim(0, 12e9)
plt.yticks([0, 2e9, 4e9, 6e9, 8e9, 10e9, 12e9], [0, 2000, 4000, 6000, 8000, 10000, 12000])
plt.title('Page Views on English Wikipedia (x 1,000,000)')
plt.legend(['main site', 'mobile site', 'total'])
# save an image of the analysis
plt.savefig('../analysis/analysis.png')
Explanation: Plot the data
The dates from the csv are converted to a datetime format in order to be plotted neatly. The points from the data are plotted, filtering out non-zero values in y-axis data.
The figure is then saved as a .png file.
End of explanation |
6,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
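Many properties in this notebook are ENUM types with a fixed list of valid choices, where "Other: [Please specify]" accepts a free-text entry. As a minimal, self-contained sketch (independent of pyesdoc — the `validate_enum` helper is hypothetical, not part of the DOC API), a choice can be checked against the listed options before it is passed to `DOC.set_value`:

```python
# Hypothetical helper (NOT part of pyesdoc): check a value against an
# ENUM property's valid choices before handing it to DOC.set_value.
# An "Other: [Please specify]" choice is taken to accept any free-text
# value written as "Other: <text>".
def validate_enum(value, valid_choices):
    if value in valid_choices:
        return value
    if "Other: [Please specify]" in valid_choices and value.startswith("Other: "):
        return value
    raise ValueError(f"{value!r} is not a valid choice: {valid_choices}")

# The choices listed above for property 1.3 (Model Family):
model_family_choices = ["AGCM", "ARCM", "Other: [Please specify]"]
validate_enum("AGCM", model_family_choices)                   # accepted
validate_enum("Other: regional AGCM", model_family_choices)   # accepted
```

The same check applies unchanged to any of the ENUM properties below; only the choice list differs per property.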
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
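Property 1.4 has Cardinality 1.N, so several basic approximations can be recorded for a single property. How pyesdoc itself stores repeated `set_value` calls is not shown in this notebook, so the sketch below uses a hypothetical stand-in class (`MockDoc`, not pyesdoc) purely to illustrate the `set_id` / `set_value` call pattern the cells follow:

```python
# Hypothetical stand-in for the DOC object (NOT pyesdoc): records one
# list of values per property id, mirroring the set_id / set_value
# call pattern used throughout this notebook for 1.N properties.
class MockDoc:
    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, property_id):
        self._current = property_id

    def set_value(self, value):
        self.values.setdefault(self._current, []).append(value)

doc = MockDoc()
doc.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
doc.set_value("primitive equations")
doc.set_value("hydrostatic")
# doc.values now maps the property id to both recorded approximations
```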
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
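The `Cardinality: M.N` annotations used throughout (e.g. `1.1` above, `1.N` and `0.N` elsewhere) encode the minimum and maximum number of values a property accepts. A small sketch of parsing them (hypothetical helper, not part of pyesdoc):

```python
def parse_cardinality(card):
    """Parse an ES-DOC cardinality string such as '1.1', '0.N' or '1.N'
    into a (minimum, maximum) pair; maximum is None when unbounded ('N')."""
    lo, hi = card.split(".")
    return int(lo), (None if hi == "N" else int(hi))
```

So `1.1` means exactly one value is required, while `0.N` means the property is optional and may take any number of values.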
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
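INTEGER properties such as the number of spectral intervals take a bare, unquoted value in `DOC.set_value(value)`. A sketch of a simple sanity check (hypothetical helper, not part of pyesdoc) that could be run before setting the value:

```python
def check_spectral_intervals(n):
    """Return n if it is a positive integer; a radiation scheme must
    have at least one spectral interval, and strings or booleans are
    rejected rather than silently accepted."""
    if not isinstance(n, int) or isinstance(n, bool) or n < 1:
        raise ValueError(
            "number of spectral intervals must be a positive integer, got %r" % (n,))
    return n
```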
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
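BOOLEAN properties such as counter gradient expect a bare Python `True` or `False`, not a string. A hypothetical coercion sketch (not part of the pyesdoc API) that normalises common spellings and refuses to guess at anything ambiguous:

```python
def as_boolean(value):
    """Coerce common spellings to a strict bool for BOOLEAN properties;
    anything ambiguous raises rather than guessing."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        s = value.strip().lower()
        if s in ("true", "yes", "1"):
            return True
        if s in ("false", "no", "0"):
            return False
    raise ValueError("cannot interpret %r as a boolean" % (value,))
```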
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
6,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use tensorboard to visualize the tensorflow graph, plot quantitative metrics about the execution of your graph, and show additional data like images that pass through it.
Step1: Tensorboard operates by reading Tensorflow event files, which contain summary data that you generate when running Tensorflow.<br>
First create the Tensorflow graph and decide which nodes you would like to annotate<br>
To annotate a node use 'tf.summary.scalar'. <br>
To visualize the distributions of activations coming from a particular layer use 'tf.summary.histogram'.<br>
Now merge all your summary nodes with 'tf.summary.merge_all'; this step combines them so they can all be run at once.<br>
Now just run the 'tf.summary.FileWriter' to write this summary operation to disk<br>
Step2: Graph Visualization
A Tensorflow program can contain millions of nodes, which would be difficult to visualize all at once. So we use scoping, in which variable names are scoped and the visualization uses this information to define a hierarchy on the nodes in the graph.<br>
Step3: Tensorflow graphs have two types of connections, namely data dependencies and control dependencies.<br>
Data Dependencies show the flow of tensors between two ops and are shown as solid arrows.<br>
Control Dependencies use dotted lines. | Python Code:
import tensorboard as tb
import tensorflow as tf
Explanation: Use tensorboard to visualize the tensorflow graph, plot quantitative metrics about the execution of your graph, and show additional data like images that pass through it.
End of explanation
def variable_summaries(var):
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        tf.summary.scalar('mean', mean)
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
            tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.histogram('histogram', var)
# Assumed helpers (not defined in the original cell): simple initializers
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))
def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))
def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
# Adding a name scope ensures logical grouping of layers in the graph
with tf.name_scope(layer_name):
with tf.name_scope('weights'):
weights = weight_variable([input_dim, output_dim])
variable_summaries(weights)
with tf.name_scope('biases'):
biases = bias_variable([output_dim])
variable_summaries(biases)
with tf.name_scope('Wx_plus_b'):
preactivate = tf.matmul(input_tensor, weights) + biases
tf.summary.histogram('pre_activations', preactivate)
activations = act(preactivate, name='activation')
tf.summary.histogram('activations', activations)
return activations
# Assumed inputs (not defined in the original cell): MNIST-style placeholders
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
hidden1 = nn_layer(x, 784, 500, 'layer1')
with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    tf.summary.scalar('dropout_keep_probability', keep_prob)
    dropped = tf.nn.dropout(hidden1, keep_prob)
y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)
with tf.name_scope('cross_entropy'):
    diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
# To launch tensorboard, run the shell command:
#   tensorboard --logdir=path/to/log-directory
# logdir points to the directory in which the FileWriter output is stored
Explanation: Tensorboard operates by reading Tensorflow event files, which contain summary data that you generate when running Tensorflow.<br>
First create the Tensorflow graph and decide which nodes you would like to annotate<br>
To annotate a node use 'tf.summary.scalar'. <br>
To visualize the distributions of activations coming from a particular layer use 'tf.summary.histogram'.<br>
Now merge all your summary nodes with 'tf.summary.merge_all'; this step combines them so they can all be run at once.<br>
Now just run the 'tf.summary.FileWriter' to write this summary operation to disk<br>
End of explanation
with tf.name_scope('hidden') as scope:
a = tf.constant(5, name='alpha')
W = tf.Variable(tf.random_uniform([1,2], -1.0, 1.0), name = 'weights')
b = tf.Variable(tf.zeros([1]), name='biases')
# Remember: the better the name scopes, the better the visualization
Explanation: Graph Visualization
A Tensorflow program can contain millions of nodes, which would be difficult to visualize all at once. So we use scoping, in which variable names are scoped and the visualization uses this information to define a hierarchy on the nodes in the graph.<br>
End of explanation
-
Explanation: Tensorflow graphs have two types of connections, namely data dependencies and control dependencies.<br>
Data Dependencies show the flow of tensors between two ops and are shown as solid arrows.<br>
Control Dependencies use dotted lines.
End of explanation |
6,045 | Given the following text description, write Python code to implement the functionality described.
Description:
Swap the elements between any two given quadrants of a Matrix
Python3 program for the above approach ; Function to iterate over the X quadrant and swap its element with Y quadrant ; Iterate over X quadrant ; Swap operations ; Function to swap the elements of the two given quadrants ; For Swapping 1 st and 2 nd Quadrant ; For Swapping 1 st and 3 rd Quadrant ; For Swapping 1 st and 4 th Quadrant ; For Swapping 2 nd and 3 rd Quadrant ; For Swapping 2 nd and 4 th Quadrant ; For Swapping 3 rd and 4 th Quadrant ; Print the resultant matrix ; Function to print the matrix ; Iterate over the rows ; Iterate over the cols ; Given matrix ; Given quadrants ; Function Call
| Python Code:
N , M = 6 , 6
def swap(mat , startx_X , starty_X , startx_Y , starty_Y ) :
row , col = 0 , 0
i = startx_X
while(bool(True ) ) :
col = 0
j = starty_X
while(bool(True ) ) :
temp = mat[i ][j ]
mat[i ][j ] = mat[startx_Y + row ][starty_Y + col ]
mat[startx_Y + row ][starty_Y + col ] = temp
col += 1
if col >= M // 2 :
break
j += 1
row += 1
if row >= N // 2 :
break
i += 1
def swapQuadOfMatrix(mat , X , Y ) :
if(X == 1 and Y == 2 ) :
swap(mat , 0 , 0 , 0 , M // 2 )
elif(X == 1 and Y == 3 ) :
swap(mat , 0 , 0 , N // 2 , 0 )
elif(X == 1 and Y == 4 ) :
swap(mat , 0 , 0 , N // 2 , M // 2 )
elif(X == 2 and Y == 3 ) :
swap(mat , 0 , M // 2 , N // 2 , 0 )
elif(X == 2 and Y == 4 ) :
swap(mat , 0 , M // 2 , N // 2 , M // 2 )
elif(X == 3 and Y == 4 ) :
swap(mat , N // 2 , 0 , N // 2 , M // 2 )
printMat(mat )
def printMat(mat ) :
for i in range(N ) :
for j in range(M ) :
print(mat[i ][j ] , end = " ")
print()
mat =[[ 1 , 2 , 3 , 4 , 5 , 6 ] ,[7 , 8 , 9 , 10 , 11 , 12 ] ,[13 , 14 , 15 , 16 , 17 , 18 ] ,[19 , 20 , 21 , 22 , 23 , 24 ] ,[25 , 26 , 27 , 28 , 29 , 30 ] ,[31 , 32 , 33 , 34 , 35 , 36 ] ]
X , Y = 1 , 4
swapQuadOfMatrix(mat , X , Y )
|
6,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
19 - Introduction to Deep Learning - MLP
by Alejandro Correa Bahnsen and Jesus Solano
version 1.4, May 2019
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Valerio Maggio, Fondazione Bruno Kessler
Introduction to Deep Learning
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.
These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics.
Google DeepMind
Step2: Let's implement our first MLP
You are going to train a Neural Network with a single hidden layer.
Here is our model
Step4: Initialize the model's parameters
Exercise
Step7: The Loop
Question
Step9: Now that you have computed $A^{[2]}$ (in the Python variable "A2"), which contains $a^{2}$ for every example, you can compute the cost function as follows
Step11: Using the cache computed during forward propagation, you can now implement backward propagation.
Backward Propagation
Step13: Update Parameters
Step15: Integrate parts [Network structure ,Model Parameters, the loop] in nn_model()
Build your neural network model in nn_model().
Instructions
Step17: Predictions
Use your model to predict by building predict().
Use forward propagation to predict results.
Reminder
Step18: Test our MLP building blocks in real data
Note
Step19: Expected Output | Python Code:
# Import the required packages
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import scipy
import sklearn
import sklearn.datasets
import sklearn.linear_model
Explanation: 19 - Introduction to Deep Learning - MLP
by Alejandro Correa Bahnsen and Jesus Solano
version 1.4, May 2019
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Valerio Maggio, Fondazione Bruno Kessler
Introduction to Deep Learning
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.
These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics.
Google DeepMind: Ground-breaking AlphaGo masters the game of Go
Google DeepMind Go Video
Google Assistant: An artificial intelligence-powered virtual assistant
Google Assistant Video
Research at NVIDIA: The First Interactive AI Rendered Virtual World
Nvidia New World Video
This Section will cover:
Getting a conceptual understanding of multi-layer neural networks
Training neural networks for image classification
Implementing the powerful backpropagation algorithm
Debugging neural network implementations
Building Blocks: Artificial Neural Networks (ANN)
In machine learning and cognitive science, an artificial neural network (ANN) is a network inspired by biological neural networks which are used to estimate or approximate functions that can depend on a large number of inputs that are generally unknown
An ANN is built from nodes (neurons) stacked in layers between the feature vector and the target vector.
A node in a neural network is built from Weights and Activation function
An early version of ANN built from one node was called the Perceptron
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/Perceptron.png" width="65%">
The Perceptron is an algorithm for supervised learning of binary classifiers: functions that can decide whether an input (represented by a vector of numbers) belongs to one class or another.
Much like in logistic regression, the weights in a neural net are multiplied by the input vector, summed up, and fed into the activation function.
A Perceptron Network can be designed to have multiple layers, leading to the Multi-Layer Perceptron (aka MLP)
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/MLP.png" width="65%">
Single Layer Neural Network
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/single_layer.png" width="65%" />
(Source: Python Machine Learning, S. Raschka)
Weights Update Rule
We use a gradient descent optimization algorithm to learn the Weights Coefficients of the model.
<br><br>
In every epoch (pass over the training set), we update the weight vector $w$ using the following update rule:
$$
w = w + \Delta w, \text{where } \Delta w = - \eta \nabla J(w)
$$
<br><br>
In other words, we computed the gradient based on the whole training set and updated the weights of the model by taking a step into the opposite direction of the gradient $ \nabla J(w)$.
In order to find the optimal weights of the model, we optimize an objective function, e.g. the Sum of Squared Errors (SSE) cost function $J(w)$.
Furthermore, we multiply the gradient by a factor, the learning rate $\eta$ , which we choose carefully to balance the speed of learning against the risk of overshooting the global minimum of the cost function.
Gradient Descent
In gradient descent optimization, we update all the weights simultaneously after each epoch, and we define the partial derivative for each weight $w_j$ in the weight vector $w$ as follows:
$$
\frac{\partial}{\partial w_j} J(w) = - \sum_{i} ( y^{(i)} - a^{(i)} ) x^{(i)}_j
$$
Note: The superscript $(i)$ refers to the i-th sample. The subscript $j$ refers to the j-th dimension/feature
Here $y^{(i)}$ is the target class label of a particular sample $x^{(i)}$ , and $a^{(i)}$ is the activation of the neuron
(which is a linear function in the special case of Perceptron).
We define the activation function $\phi(\cdot)$ as follows:
$$
\phi(z) = z = a = \sum_{j} w_j x_j = \mathbf{w}^T \mathbf{x}
$$
Binary Classification
While we used the activation $\phi(z)$ to compute the gradient update, we may use a threshold function (Heaviside function) to squash the continuous-valued output into binary class labels for prediction:
$$
\hat{y} =
\begin{cases}
1 & \text{if } \phi(z) \geq 0 \
0 & \text{otherwise}
\end{cases}
$$
Building Neural Nets from scratch
Idea:
We will build the neural networks from first principles.
We will create a very simple model and understand how it works. We will also be implementing backpropagation algorithm.
Please note that this code is not optimized and not to be used in production.
This is for instructive purpose - for us to understand how ANN works.
Libraries like theano have highly optimized code.
Perceptron
If you want a sneak peek of alternate (production ready) implementation of Perceptron for instance try:
python
from sklearn.linear_model import Perceptron
Introducing the multi-layer neural network architecture
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/multi-layers-1.png" width="50%" />
(Source: Python Machine Learning, S. Raschka)
Now we will see how to connect multiple single neurons to a multi-layer feedforward neural network; this special type of network is also called a multi-layer perceptron (MLP).
The figure shows the concept of an MLP consisting of three layers: one input layer, one hidden layer, and one output layer.
The units in the hidden layer are fully connected to the input layer, and the output layer is fully connected to the hidden layer, respectively.
If such a network has more than one hidden layer, we also call it a deep artificial neural network.
Notation
we denote the ith activation unit in the lth layer as $a_i^{(l)}$ , and the activation units $a_0^{(1)}$ and
$a_0^{(2)}$ are the bias units, respectively, which we set equal to $1$.
<br><br>
The activation of the units in the input layer is just its input plus the bias unit:
$$
\mathbf{a}^{(1)} = [a_0^{(1)}, a_1^{(1)}, \ldots, a_m^{(1)}]^T = [1, x_1^{(i)}, \ldots, x_m^{(i)}]^T
$$
<br><br>
Note: $x_j^{(i)}$ refers to the jth feature/dimension of the ith sample
Notes on Notation (usually) Adopted
The terminology around the indices (subscripts and superscripts) may look a little bit confusing at first.
<br><br>
You may wonder why we wrote $w_{j,k}^{(l)}$ and not $w_{k,j}^{(l)}$ to refer to
the weight coefficient that connects the kth unit in layer $l$ to the jth unit in layer $l+1$.
<br><br>
What may seem a little bit quirky at first will make much more sense later when we vectorize the neural network representation.
<br><br>
For example, we will summarize the weights that connect the input and hidden layer by a matrix
$$ W^{(1)} \in \mathbb{R}^{h \times (m+1)} $$
where $h$ is the number of hidden units and $m + 1$ is the number of input units plus the bias unit.
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/multi-layers-2.png" width="75%" />
(Source: Python Machine Learning, S. Raschka)
Forward Propagation
Starting at the input layer, we forward propagate the patterns of the training data through the network to generate an output.
Based on the network's output, we calculate the error that we want to minimize using a cost function that we will describe later.
We backpropagate the error, find its derivative with respect to each weight in the network, and update the model.
Sigmoid Activation
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/logistic_function.png" width="65%" />
(Source: Python Machine Learning, S. Raschka)
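The logistic sigmoid pictured above is easy to write down directly; a quick sanity check in plain Python (no TensorFlow needed):

```python
import math

def sigmoid(z):
    # logistic function: maps any real activation into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))  # 0.5: the decision boundary of the logistic unit
```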
<img src="./images/fwd_step.png" width="50%" />
(Source: Python Machine Learning, S. Raschka)
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/fwd_step_net.png" width="75%" />
(Source: Python Machine Learning, S. Raschka)
Backward Propagation
The weights of each neuron are learned by gradient descent, where each neuron's error is differentiated with respect to its weights.
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/bkwd_step_net.png" width="75%" />
(Source: Python Machine Learning, S. Raschka)
Optimization is done for each layer with respect to the previous layer in a technique known as BackPropagation.
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/backprop.png" width="75%">
Update Parameters
The weights of each neuron are learned by gradient descent, where each neuron's error is differentiated with respect to its weights. The rule to update parameters is $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$, where $\alpha$ is the learning rate and $\theta$ represents a parameter.
<img src='https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/updateParameters.png'>
The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
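The update rule can be sketched on a one-dimensional toy problem. The cost $J(\theta) = \theta^2$ and the learning rate below are illustrative assumptions, not part of the network:

```python
def grad(theta):
    # dJ/dtheta for the toy cost J(theta) = theta**2
    return 2.0 * theta

theta = 5.0        # arbitrary starting point
alpha = 0.1        # assumed learning rate (small enough to converge)

for _ in range(100):
    theta = theta - alpha * grad(theta)  # theta := theta - alpha * dJ/dtheta

print(round(theta, 6))  # -> 0.0, the minimizer of J
```

With a learning rate above 1.0 the same loop would diverge, which is what the "bad learning rate" animation illustrates.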
(The following code is inspired from these terrific notebooks)
End of explanation
def layer_sizes(X, Y, hidden_neurons):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
n_x = X.shape[0] # size of input layer
n_h = hidden_neurons
n_y = Y.shape[0] # size of output layer
return (n_x, n_h, n_y)
Explanation: Let's implement our first MLP
You are going to train a Neural Network with a single hidden layer.
Here is our model:
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/classification_kiank.png" style="width:600px;height:300px;">
Mathematically:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise} \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
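A minimal numeric sketch of equations (1)-(4) for a single example. The layer sizes and random weights here are arbitrary assumptions, not learned values:

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.standard_normal((2, 1))                     # one example, 2 features
W1, b1 = rng.standard_normal((4, 2)), np.zeros((4, 1))
W2, b2 = rng.standard_normal((1, 4)), np.zeros((1, 1))

z1 = W1 @ x + b1                                    # eq (1)
a1 = np.tanh(z1)                                    # eq (2)
z2 = W2 @ a1 + b2                                   # eq (3)
a2 = 1.0 / (1.0 + np.exp(-z2))                      # eq (4), sigmoid
y_hat = (a2 > 0.5).astype(int)                      # eq (5), threshold at 0.5

print(a2.shape, y_hat.shape)                        # (1, 1) (1, 1)
```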
Reminder: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). Once you've built nn_model() and learnt the right parameters, you can make predictions on new data.
Defining the neural network structure
Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer
- n_y: the size of the output layer
End of explanation
# Solved Exercise: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Arguments:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
W1 = np.random.randn(n_h,n_x) * 0.01
b1 = np.zeros(shape=(n_h,1))
W2 = np.random.randn(n_y,n_h) * 0.01
b2 = np.zeros(shape=(n_y,1))
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
Explanation: Initialize the model's parameters
Exercise: Implement the function initialize_parameters().
Instructions:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros.
End of explanation
def forward_propagation(X, parameters):
"""
Arguments:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Implement Forward Propagation to calculate A2 (probabilities)
Z1 = np.dot(W1,X)+b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2,A1)+b2
A2 = sigmoid(Z2)
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size.
Return:
s -- sigmoid(x)
"""
s = 1/(1+np.exp(-x))
return s
Explanation: The Loop
Question: Implement forward_propagation().
Instructions:
- Look above at the mathematical representation of your classifier.
- You can use the function sigmoid(). It is built-in (imported) in the notebook.
- You can use the function np.tanh(). It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of initialize_parameters()) by using parameters[".."].
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "cache". The cache will be given as an input to the backpropagation function.
End of explanation
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1]  # number of examples
# Compute the cross-entropy cost
logprobs = np.multiply(Y,np.log(A2)) + np.multiply(1-Y,np.log(1-A2))
cost = -1/m * np.sum(logprobs)
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
Explanation: Now that you have computed $A^{[2]}$ (in the Python variable "A2"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
Cost function: Implement compute_cost() to compute the value of the cost $J$.
Instructions:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented
$- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$:
python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
(you can use either np.multiply() and then np.sum() or directly np.dot()).
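A quick check, with made-up probabilities and labels, that the element-wise route (np.multiply + np.sum) and the np.dot route give the same number:

```python
import numpy as np

A2 = np.array([[0.8, 0.2, 0.6]])   # made-up predicted probabilities
Y = np.array([[1, 0, 1]])          # made-up labels

# element-wise product, then sum
via_multiply = -np.sum(np.multiply(np.log(A2), Y))

# equivalent inner product: (1, 3) dot (3, 1) -> scalar
via_dot = -np.dot(np.log(A2), Y.T).item()

print(np.isclose(via_multiply, via_dot))  # True
```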
End of explanation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
W1 = parameters["W1"]
W2 = parameters["W2"]
# Retrieve also A1 and A2 from dictionary "cache".
A1 = cache["A1"]
A2 = cache["A2"]
# Backward propagation: calculate dW1, db1, dW2, db2.
dZ2 = A2 - Y
dW2 = 1/m * np.dot(dZ2,A1.T)
db2 = 1/m*np.sum(dZ2,axis=1,keepdims=True)
dZ1 = np.dot(W2.T,dZ2) * (1 - np.power(A1,2))
dW1 = 1/m* np.dot(dZ1,X.T)
db1 = 1/m*np.sum(dZ1,axis=1,keepdims=True)
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
Explanation: Using the cache computed during forward propagation, you can now implement backward propagation.
Backward Propagation: Implement the function backward_propagation().
Instructions:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
Tips:
To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)).
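A small numeric sanity check of the identity $\tanh'(z) = 1 - \tanh(z)^2$ used above, comparing the closed form against a central finite difference (illustrative only):

```python
import numpy as np

z = np.linspace(-2.0, 2.0, 9)
a = np.tanh(z)

analytic = 1.0 - np.power(a, 2)                    # 1 - a^2, as used for dZ1
eps = 1e-6
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-8))   # True
```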
End of explanation
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Retrieve each gradient from the dictionary "grads"
dW1 = grads["dW1"]
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
# Update rule for each parameter
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
Explanation: Update Parameters:
Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
General gradient descent rule: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
Illustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/sgd.gif" style="width:400;height:400;"> <img src="https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/notebooks/images/sgd_bad.gif" style="width:400;height:400;">
End of explanation
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y,n_h)[0]
n_y = layer_sizes(X, Y,n_h)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
parameters = initialize_parameters(n_x,n_h,n_y)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2,Y,parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters,cache,X,Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters,grads)
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
Explanation: Integrate parts [Network structure ,Model Parameters, the loop] in nn_model()
Build your neural network model in nn_model().
Instructions: The neural network model has to use the previous functions in the right order.
End of explanation
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns:
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
A2, cache = forward_propagation(X,parameters)
predictions = A2 > 0.5
return predictions
Explanation: Predictions
Use your model to predict by building predict().
Use forward propagation to predict results.
Reminder: predictions = $y_{prediction} = \mathbb{1}\{\text{activation} > 0.5\} = \begin{cases}
1 & \text{if activation} > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X > threshold)
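For instance (the values below are made up), comparing against the threshold yields a boolean matrix, which can be cast to 0/1 integers:

```python
import numpy as np

X = np.array([[0.1, 0.7],
              [0.5, 0.9]])
threshold = 0.5

X_new = (X > threshold)          # boolean matrix of the comparisons
labels = X_new.astype(int)       # same matrix as 0/1 integers

print(labels)  # [[0 1]
               #  [0 1]]
```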
End of explanation
from sklearn.datasets import make_circles  # sklearn.datasets.samples_generator was removed in scikit-learn 0.24
x_train, y_train = make_circles(n_samples=1000, noise= 0.05, random_state=3)
plt.figure(figsize=(15, 10))
plt.scatter(x_train[:, 0], x_train[:,1], c=y_train, s=40, cmap=plt.cm.Spectral);
def plot_decision_boundary(model, X, y):
plt.figure(figsize=(15,10))
# Set min and max values and give it some padding
x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
Z = model(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.ylabel('x2')
plt.xlabel('x1')
plt.scatter(X[0, :], X[1, :], c=y.ravel(), s=80, cmap=plt.cm.Spectral)
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(x_train.T, np.array([y_train.T]), n_h = 5, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), x_train.T, np.array([y_train.T]))
plt.title("Decision Boundary for hidden layer size " + str(5))
Explanation: Test our MLP building blocks in real data
Note: This process will eventually result in our own Neural Networks class
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
End of explanation
# Print accuracy
predictions = predict(parameters, x_train.T)
print ('Accuracy: %d' % float((np.dot( np.array([y_train.T]),predictions.T) + np.dot(1- np.array([y_train.T]),1-predictions.T))/float( np.array([y_train.T]).size)*100) + '%')
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 5
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Exponential distribution
Here's what the exponential CDF looks like with a range of parameters.
Step2: Here's the distribution of interarrival times from a dataset of birth times.
Step3: Here's what the CCDF looks like on a log-y scale. A straight line is consistent with an exponential distribution.
Step4: Normal distribution
Here's what the normal CDF looks like with a range of parameters.
Step5: I'll use a normal model to fit the distribution of birth weights from the NSFG.
Step6: Here's the observed CDF and the model. The model fits the data well except in the left tail.
Step7: A normal probability plot is a visual test for normality. The following example shows that if the data are actually from a normal distribution, the plot is approximately straight.
Step8: Here's the normal probability plot for birth weights, showing that the lightest babies are lighter than we expect from the normal mode, and the heaviest babies are heavier.
Step9: If we suspect that the deviation in the left tail is due to preterm babies, we can check by selecting only full term births.
Step10: Now the deviation in the left tail is almost gone, but the heaviest babies are still heavy.
Step11: Lognormal model
As an example of a lognormal distribution, we'll look at adult weights from the BRFSS.
Step13: The following function estimates the parameters of a normal distribution and plots the data and a normal model.
Step14: Here's the distribution of adult weights and a normal model, which is not a very good fit.
Step15: Here's the distribution of adult weight and a lognormal model, plotted on a log-x scale. The model is a better fit for the data, although the heaviest people are heavier than the model expects.
Step17: The following function generates a normal probability plot.
Step18: When we generate a normal probability plot with adult weights, we can see clearly that the data deviate from the model systematically.
Step19: If we make a normal probability plot with log weights, the model fit the data well except in the tails, where the heaviest people exceed expectations.
Step20: Pareto distribution
Here's what the Pareto CDF looks like with a range of parameters.
Step21: The distribution of populations for cities and towns is sometimes said to be Pareto-like.
Step22: Here's the distribution of population for cities and towns in the U.S., along with a Pareto model. The model fits the data well in the tail.
Step23: The lognormal model might be a better fit for this data (as is often the case for things that are supposed to be Pareto).
Step24: Here's a normal probability plot for the log-populations. The model fits the data well except in the right tail, where the biggest cities are bigger than expected.
Step25: Random variates
When we have an analytic CDF, we can sometimes invert it to generate random values. The following function generates values from an exponential distribution.
Step26: We can test it by generating a sample.
Step27: And plotting the CCDF on a log-y scale.
Step28: A straight line is consistent with an exponential distribution.
Exercises
Exercise
Step29: Exercise
Step30: For example <tt>scipy.stats.norm</tt> represents a normal distribution.
Step31: A "frozen random variable" can compute its mean and standard deviation.
Step32: It can also evaluate its CDF. How many people are below the mean by more than one standard deviation? About 16%
Step33: How many people are between 5'10" and 6'1"?
Step34: Exercise
Step35: What is the mean height in Pareto world?
Step36: What fraction of people are shorter than the mean?
Step37: Out of 7 billion people, how many do we expect to be taller than 1 km? You could use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
Step38: How tall do we expect the tallest person to be?
Step39: Exercise
Step40: Exercise
Step41: Bonus Example
Step42: Here's what the CDF looks like on a linear scale.
Step43: To check whether a Pareto model describes the data well, I plot the CCDF on a log-log scale.
I found parameters for the Pareto model that match the tail of the distribution.
Step44: For the lognormal model I estimate mu and sigma using percentile-based statistics (median and IQR).
Step45: Here's what the distribution, and fitted model, look like on a log-x scale. | Python Code:
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
import numpy as np
import thinkstats2
import thinkplot
Explanation: Chapter 5
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
thinkplot.PrePlot(3)
for lam in [2.0, 1, 0.5]:
xs, ps = thinkstats2.RenderExpoCdf(lam, 0, 3.0, 50)
label = r"$\lambda=%g$" % lam
thinkplot.Plot(xs, ps, label=label)
thinkplot.Config(title="Exponential CDF", xlabel="x", ylabel="CDF", loc="lower right")
Explanation: Exponential distribution
Here's what the exponential CDF looks like with a range of parameters.
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/analytic.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/babyboom.dat")
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label="actual")
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel="Time between births (minutes)", ylabel="CDF")
Explanation: Here's the distribution of interarrival times from a dataset of birth times.
End of explanation
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(
xlabel="Time between births (minutes)",
ylabel="CCDF",
yscale="log",
loc="upper right",
)
Explanation: Here's what the CCDF looks like on a log-y scale. A straight line is consistent with an exponential distribution.
End of explanation
thinkplot.PrePlot(3)
mus = [1.0, 2.0, 3.0]
sigmas = [0.5, 0.4, 0.3]
for mu, sigma in zip(mus, sigmas):
xs, ps = thinkstats2.RenderNormalCdf(mu=mu, sigma=sigma, low=-1.0, high=4.0)
label = r"$\mu=%g$, $\sigma=%g$" % (mu, sigma)
thinkplot.Plot(xs, ps, label=label)
thinkplot.Config(title="Normal CDF", xlabel="x", ylabel="CDF", loc="upper left")
Explanation: Normal distribution
Here's what the normal CDF looks like with a range of parameters.
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import nsfg
import first
preg = nsfg.ReadFemPreg()
weights = preg.totalwgt_lb.dropna()
Explanation: I'll use a normal model to fit the distribution of birth weights from the NSFG.
End of explanation
# estimate parameters: trimming outliers yields a better fit
mu, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
print("Mean, Var", mu, var)
# plot the model
sigma = np.sqrt(var)
print("Sigma", sigma)
xs, ps = thinkstats2.RenderNormalCdf(mu, sigma, low=0, high=12.5)
thinkplot.Plot(xs, ps, label="model", color="0.6")
# plot the data
cdf = thinkstats2.Cdf(weights, label="data")
thinkplot.PrePlot(1)
thinkplot.Cdf(cdf)
thinkplot.Config(title="Birth weights", xlabel="Birth weight (pounds)", ylabel="CDF")
Explanation: Here's the observed CDF and the model. The model fits the data well except in the left tail.
End of explanation
n = 1000
thinkplot.PrePlot(3)
mus = [0, 1, 5]
sigmas = [1, 1, 2]
for mu, sigma in zip(mus, sigmas):
sample = np.random.normal(mu, sigma, n)
xs, ys = thinkstats2.NormalProbability(sample)
label = r"$\mu=%d$, $\sigma=%d$" % (mu, sigma)
thinkplot.Plot(xs, ys, label=label)
thinkplot.Config(
title="Normal probability plot",
xlabel="standard normal sample",
ylabel="sample values",
)
Explanation: A normal probability plot is a visual test for normality. The following example shows that if the data are actually from a normal distribution, the plot is approximately straight.
End of explanation
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-4, 4]
fxs, fys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(fxs, fys, linewidth=4, color="0.8")
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label="all live")
thinkplot.Config(
title="Normal probability plot",
xlabel="Standard deviations from mean",
ylabel="Birth weight (lbs)",
)
Explanation: Here's the normal probability plot for birth weights, showing that the lightest babies are lighter than we expect from the normal model, and the heaviest babies are heavier.
End of explanation
full_term = preg[preg.prglngth >= 37]
term_weights = full_term.totalwgt_lb.dropna()
Explanation: If we suspect that the deviation in the left tail is due to preterm babies, we can check by selecting only full term births.
End of explanation
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-4, 4]
fxs, fys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(fxs, fys, linewidth=4, color="0.8")
thinkplot.PrePlot(2)
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label="all live")
xs, ys = thinkstats2.NormalProbability(term_weights)
thinkplot.Plot(xs, ys, label="full term")
thinkplot.Config(
title="Normal probability plot",
xlabel="Standard deviations from mean",
ylabel="Birth weight (lbs)",
)
Explanation: Now the deviation in the left tail is almost gone, but the heaviest babies are still heavy.
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/brfss.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/CDBRFS08.ASC.gz")
import brfss
df = brfss.ReadBrfss()
weights = df.wtkg2.dropna()
Explanation: Lognormal model
As an example of a lognormal distribution, we'll look at adult weights from the BRFSS.
End of explanation
def MakeNormalModel(weights):
"""
Plots a CDF with a Normal model.
weights: sequence
"""
cdf = thinkstats2.Cdf(weights, label="weights")
mean, var = thinkstats2.TrimmedMeanVar(weights)
std = np.sqrt(var)
print("n, mean, std", len(weights), mean, std)
xmin = mean - 4 * std
xmax = mean + 4 * std
xs, ps = thinkstats2.RenderNormalCdf(mean, std, xmin, xmax)
thinkplot.Plot(xs, ps, label="model", linewidth=4, color="0.8")
thinkplot.Cdf(cdf)
Explanation: The following function estimates the parameters of a normal distribution and plots the data and a normal model.
End of explanation
MakeNormalModel(weights)
thinkplot.Config(
title="Adult weight, linear scale",
xlabel="Weight (kg)",
ylabel="CDF",
loc="upper right",
)
Explanation: Here's the distribution of adult weights and a normal model, which is not a very good fit.
End of explanation
log_weights = np.log10(weights)
MakeNormalModel(log_weights)
thinkplot.Config(
title="Adult weight, log scale",
xlabel="Weight (log10 kg)",
ylabel="CDF",
loc="upper right",
)
Explanation: Here's the distribution of adult weight and a lognormal model, plotted on a log-x scale. The model is a better fit for the data, although the heaviest people are heavier than the model expects.
End of explanation
def MakeNormalPlot(weights):
"""
Generates a normal probability plot of birth weights.
weights: sequence
"""
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-5, 5]
xs, ys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(xs, ys, color="0.8", label="model")
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label="weights")
Explanation: The following function generates a normal probability plot.
End of explanation
MakeNormalPlot(weights)
thinkplot.Config(
title="Adult weight, normal plot",
xlabel="Weight (kg)",
ylabel="CDF",
loc="upper left",
)
Explanation: When we generate a normal probability plot with adult weights, we can see clearly that the data deviate from the model systematically.
End of explanation
MakeNormalPlot(log_weights)
thinkplot.Config(
title="Adult weight, lognormal plot",
xlabel="Weight (log10 kg)",
ylabel="CDF",
loc="upper left",
)
Explanation: If we make a normal probability plot with log weights, the model fit the data well except in the tails, where the heaviest people exceed expectations.
End of explanation
xmin = 0.5
thinkplot.PrePlot(3)
for alpha in [2.0, 1.0, 0.5]:
xs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100)
thinkplot.Plot(xs, ps, label=r"$\alpha=%g$" % alpha)
thinkplot.Config(title="Pareto CDF", xlabel="x", ylabel="CDF", loc="lower right")
Explanation: Pareto distribution
Here's what the Pareto CDF looks like with a range of parameters.
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/populations.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/PEP_2012_PEPANNRES_with_ann.csv")
import populations
pops = populations.ReadData()
print("Number of cities/towns", len(pops))
Explanation: The distribution of populations for cities and towns is sometimes said to be Pareto-like.
End of explanation
log_pops = np.log10(pops)
cdf = thinkstats2.Cdf(pops, label="data")
cdf_log = thinkstats2.Cdf(log_pops, label="data")
# pareto plot
xs, ys = thinkstats2.RenderParetoCdf(xmin=5000, alpha=1.4, low=0, high=1e7)
thinkplot.Plot(np.log10(xs), 1 - ys, label="model", color="0.8")
thinkplot.Cdf(cdf_log, complement=True)
thinkplot.Config(
xlabel="log10 population", ylabel="CCDF", yscale="log", loc="lower left"
)
Explanation: Here's the distribution of population for cities and towns in the U.S., along with a Pareto model. The model fits the data well in the tail.
End of explanation
thinkplot.PrePlot(cols=2)
mu, sigma = log_pops.mean(), log_pops.std()
xs, ps = thinkstats2.RenderNormalCdf(mu, sigma, low=0, high=8)
thinkplot.Plot(xs, ps, label="model", color="0.8")
thinkplot.Cdf(cdf_log)
thinkplot.Config(xlabel="log10 population", ylabel="CDF", loc="lower right")
Explanation: The lognormal model might be a better fit for this data (as is often the case for things that are supposed to be Pareto).
End of explanation
thinkstats2.NormalProbabilityPlot(log_pops, label="data")
thinkplot.Config(xlabel="Random variate", ylabel="log10 population", xlim=[-5, 5])
Explanation: Here's a normal probability plot for the log-populations. The model fits the data well except in the right tail, where the biggest cities are bigger than expected.
End of explanation
import random
def expovariate(lam):
p = random.random()
x = -np.log(1 - p) / lam
return x
Explanation: Random variates
When we have an analytic CDF, we can sometimes invert it to generate random values. The following function generates values from an exponential distribution.
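A quick sanity check of the inverse-CDF idea (the seed and sample size below are arbitrary assumptions): the sample mean of exponential variates should be close to $1/\lambda$.

```python
import random

import numpy as np

random.seed(17)           # assumed seed, for reproducibility
lam = 2.0

# inverse-CDF sampling: x = -ln(1 - p) / lam, with p ~ Uniform(0, 1)
sample = [-np.log(1 - random.random()) / lam for _ in range(100_000)]

# the exponential distribution has mean 1 / lam = 0.5
print(abs(np.mean(sample) - 1 / lam) < 0.01)  # True
```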
End of explanation
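The same inversion trick works for any distribution whose CDF can be solved for x in closed form. Here is a stdlib-only sketch of the idea (`expovariate_inverse` is an illustrative name, not part of this notebook's code):

```python
# Inverse-transform sampling: draw p ~ Uniform[0, 1) and push it through the
# inverse CDF. For the exponential, CDF(x) = 1 - exp(-lam * x),
# so x = -log(1 - p) / lam.
import math
import random

def expovariate_inverse(lam, rng):
    p = rng.random()
    return -math.log(1 - p) / lam

rng = random.Random(0)
sample = [expovariate_inverse(2.0, rng) for _ in range(20000)]
mean = sum(sample) / len(sample)  # expected value is 1/lam = 0.5
```

With a seeded generator the sample mean lands very close to 1/lam, which is a quick sanity check that the inversion is right.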
t = [expovariate(lam=2) for _ in range(1000)]
Explanation: We can test it by generating a sample.
End of explanation
cdf = thinkstats2.Cdf(t)
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel="Exponential variate", ylabel="CCDF", yscale="log")
Explanation: And plotting the CCDF on a log-y scale.
End of explanation
# Solution
def sample_pareto(alpha, x_m, size):
u = np.random.random(size)
return x_m * pow(1 - u, -1 / alpha)
# Solution
sample = sample_pareto(1, 2, 1000)
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel="Random values", ylabel="CCDF", xscale="log", yscale="log")
Explanation: A straight line is consistent with an exponential distribution.
Exercises
Exercise: Write a function that generates a Pareto variate. Generate a sample and plot its complementary CDF on a log-log scale. Does it look like a straight line?
End of explanation
import scipy.stats
Explanation: Exercise: In the BRFSS (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.
In order to join Blue Man Group, you have to be male between 5’10” and 6’1” (see http://bluemancasting.com). What percentage of the U.S. male population is in this range? Hint: use scipy.stats.norm.cdf.
scipy.stats contains objects that represent analytic distributions
End of explanation
mu = 178
sigma = 7.7
dist = scipy.stats.norm(loc=mu, scale=sigma)
type(dist)
Explanation: For example <tt>scipy.stats.norm</tt> represents a normal distribution.
End of explanation
dist.mean(), dist.std()
Explanation: A "frozen random variable" can compute its mean and standard deviation.
End of explanation
dist.cdf(mu - sigma)
Explanation: It can also evaluate its CDF. How many people are below the mean by more than one standard deviation? About 16%
End of explanation
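For a quick cross-check that doesn't need scipy at all, the normal CDF can be written directly with math.erf (`normal_cdf` here is a hypothetical helper, not a scipy function):

```python
# Standard normal CDF via the error function:
# CDF(x) = (1 + erf((x - mu) / (sigma * sqrt(2)))) / 2
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

below = normal_cdf(-1)  # fraction more than one sd below the mean, ~0.159
```

This reproduces the "about 16%" figure without constructing a frozen distribution.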
# Solution
low = dist.cdf(177.8) # 5'10"
high = dist.cdf(185.4) # 6'1"
low, high, high - low
Explanation: How many people are between 5'10" and 6'1"?
End of explanation
alpha = 1.7
xmin = 1 # meter
dist = scipy.stats.pareto(b=alpha, scale=xmin)
dist.median()
Explanation: Exercise: To get a feel for the Pareto distribution, let’s see how different the world would be if the distribution of human height were Pareto. With the parameters xm = 1 m and α = 1.7, we get a distribution with a reasonable minimum, 1 m, and median, 1.5 m.
Plot this distribution. What is the mean human height in Pareto world? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto world, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?
scipy.stats.pareto represents a pareto distribution. In Pareto world, the distribution of human heights has parameters alpha=1.7 and xmin=1 meter. So the shortest person is 100 cm and the median is 150.
End of explanation
# Solution
dist.mean()
Explanation: What is the mean height in Pareto world?
End of explanation
# Solution
dist.cdf(dist.mean())
Explanation: What fraction of people are shorter than the mean?
End of explanation
# Solution
(1 - dist.cdf(1000)) * 7e9, dist.sf(1000) * 7e9
Explanation: Out of 7 billion people, how many do we expect to be taller than 1 km? You could use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
End of explanation
# Solution
# One way to solve this is to search for a height that we
# expect one person out of 7 billion to exceed.
# It comes in at roughly 600 kilometers.
dist.sf(600000) * 7e9
# Solution
# Another way is to use `ppf`, which evaluates the "percent point function", which
# is the inverse CDF. So we can compute the height in meters that corresponds to
# the probability (1 - 1/7e9).
dist.ppf(1 - 1 / 7e9)
Explanation: How tall do we expect the tallest person to be?
End of explanation
sample = [random.weibullvariate(2, 1) for _ in range(1000)]
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, transform="weibull")
thinkplot.Config(xlabel="Weibull variate", ylabel="CCDF")
Explanation: Exercise: The Weibull distribution is a generalization of the exponential distribution that comes up in failure analysis (see http://wikipedia.org/wiki/Weibull_distribution). Its CDF is
$\mathrm{CDF}(x) = 1 - \exp[-(x / \lambda)^k]$
Can you find a transformation that makes a Weibull distribution look like a straight line? What do the slope and intercept of the line indicate?
Use random.weibullvariate to generate a sample from a Weibull distribution and use it to test your transformation.
Generate a sample from a Weibull distribution and plot it using a transform that makes a Weibull distribution look like a straight line.
If you are stuck, you can get a hint from thinkplot.Cdf, which provides a transform that makes the CDF of a Weibull distribution look like a straight line. Here's an example that shows how it's used.
End of explanation
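One transformation that linearizes the Weibull CDF (a sketch with hypothetical parameters, independent of the thinkplot hint): plotting log(-log(CCDF)) against log(x) gives a straight line whose slope is k and whose intercept is -k·log(λ).

```python
# From CDF(x) = 1 - exp[-(x/lam)^k]:
#   log(-log(CCDF(x))) = k * log(x) - k * log(lam)
# so this transform is exactly linear in log(x).
import numpy as np

lam, k = 2.0, 1.5                    # hypothetical Weibull parameters
xs = np.linspace(0.1, 10, 200)
ccdf = np.exp(-((xs / lam) ** k))    # exact complementary CDF
ys = np.log(-np.log(ccdf))

slope, intercept = np.polyfit(np.log(xs), ys, 1)
```

Fitting a line to the transformed exact CDF recovers k from the slope and λ from the intercept, which is what the slope and intercept of the plotted line indicate.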
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label="actual")
n = len(diffs)
lam = 44.0 / 24 / 60
sample = [random.expovariate(lam) for _ in range(n)]
1 / lam, np.mean(sample)
# Solution
model = thinkstats2.Cdf(sample, label='model')
thinkplot.PrePlot(2)
thinkplot.Cdfs([cdf, model], complement=True)
thinkplot.Config(xlabel='Time between births (minutes)',
ylabel='CCDF',
yscale='log')
# Solution
# If you plot distributions for a large number of samples, you get a sense
# of how much random variation to expect. In this case, the data fall within
# the range we expect, so there is no compelling reason to think it is
# not exponential.
for i in range(100):
sample = [random.expovariate(lam) for _ in range(n)]
thinkplot.Cdf(thinkstats2.Cdf(sample), complement=True, color="0.9")
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel="Time between births (minutes)", ylabel="CCDF", yscale="log")
Explanation: Exercise: For small values of n, we don’t expect an empirical distribution to fit an analytic distribution exactly. One way to evaluate the quality of fit is to generate a sample from an analytic distribution and see how well it matches the data.
For example, in Section 5.1 we plotted the distribution of time between births and saw that it is approximately exponential. But the distribution is based on only 44 data points. To see whether the data might have come from an exponential distribution, generate 44 values from an exponential distribution with the same mean as the data, about 33 minutes between births.
Plot the distribution of the random values and compare it to the actual distribution. You can use random.expovariate to generate the values.
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/hinc.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/hinc06.csv")
import hinc
df = hinc.ReadData()
df
Explanation: Bonus Example: The distributions of wealth and income are sometimes modeled using lognormal and Pareto distributions. To see which is better, let’s look at some data.
The Current Population Survey (CPS) is a joint effort of the Bureau of Labor Statistics and the Census Bureau to study income and related variables. Data collected in 2013 is available from http://www.census.gov/hhes/www/cpstables/032013/hhinc/toc.htm. I downloaded hinc06.xls, which is an Excel spreadsheet with information about household income, and converted it to hinc06.csv, a CSV file you will find in the repository for this book. You will also find hinc.py, which reads this file.
Extract the distribution of incomes from this dataset. Are any of the analytic distributions in this chapter a good model of the data?
End of explanation
xs, ps = df.income.values, df.ps.values
cdf = thinkstats2.Cdf(xs, ps, label="data")
cdf_log = thinkstats2.Cdf(np.log10(xs), ps, label="data")
# linear plot
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel="household income", ylabel="CDF")
Explanation: Here's what the CDF looks like on a linear scale.
End of explanation
xs, ys = thinkstats2.RenderParetoCdf(xmin=55000, alpha=2.5, low=0, high=250000)
thinkplot.Plot(xs, 1 - ys, label="model", color="0.8")
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(
xlabel="log10 household income",
ylabel="CCDF",
xscale="log",
yscale="log",
loc="lower left",
)
Explanation: To check whether a Pareto model describes the data well, I plot the CCDF on a log-log scale.
I found parameters for the Pareto model that match the tail of the distribution.
End of explanation
median = cdf_log.Percentile(50)
iqr = cdf_log.Percentile(75) - cdf_log.Percentile(25)
std = iqr / 1.349
# choose std to match the upper tail
std = 0.35
print(median, std)
Explanation: For the lognormal model I estimate mu and sigma using percentile-based statistics (median and IQR).
End of explanation
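The 1.349 divisor comes from the normal distribution itself: its interquartile range is about 1.349σ, so IQR/1.349 is a robust estimate of σ. A quick self-check on synthetic data (hypothetical numbers, not the income data):

```python
# For a normal, the 25th/75th percentiles sit at -/+ 0.6745 sigma,
# so IQR ~= 1.349 * sigma; dividing the sample IQR by 1.349 estimates sigma.
import numpy as np

rng = np.random.RandomState(0)
sample = rng.normal(loc=5.0, scale=2.0, size=100000)

q25, q75 = np.percentile(sample, [25, 75])
sigma_hat = (q75 - q25) / 1.349
mu_hat = np.median(sample)
```

Unlike the sample standard deviation, this estimate is not thrown off by a few extreme values in the tail.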
xs, ps = thinkstats2.RenderNormalCdf(median, std, low=3.5, high=5.5)
thinkplot.Plot(xs, ps, label="model", color="0.8")
thinkplot.Cdf(cdf_log)
thinkplot.Config(xlabel="log10 household income", ylabel="CDF")
Explanation: Here's what the distribution, and fitted model, look like on a log-x scale.
End of explanation |
6,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: General intro
Step4: Defining the structure of the network
For the neural network to train on your data, you need the following <a href="https
Step5: Training
Step6: Test | Python Code:
%matplotlib inline
import os
from urllib.request import urlretrieve  # urllib.request in Python 3 (plain urllib was Python 2)
import math
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from zipfile import ZipFile
# download some training and test data
def download(url, file):
"""Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
def uncompress_features_labels(file):
"""Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Get features and labels from all files
for filename in zipf.namelist():
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# some preprocessing
# normalization of mean and variance
def normalize_grayscale(image_data):
"""Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
a = 0.1
b = 0.9
grayscale_min = 0
grayscale_max = 255
return a + (((image_data - grayscale_min) * (b - a)) / (grayscale_max - grayscale_min))
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
# more preprocessing, one hot encoding of labels
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
# randomize data and split in train and test sets
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=42)
Explanation: General intro
End of explanation
# Set the features and labels tensors
# features =
# labels =
# Set the weights and biases tensors
# weights =
# biases =
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), axis=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
Explanation: Defining the structure of the network
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/programmers_guide/dims_types#data_types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/tf/truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/tf/zeros">tf.zeros() documentation</a> for help.
<img src="images/weight_biases.png" style="height: 60%;width: 60%; position: relative; right: 10%">
End of explanation
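The TODOs above ask for TF placeholders and variables; as a hedged reference, here is the same linear model in plain NumPy, just to pin down the shapes (784 = 28×28 pixels and 10 classes A–J are assumptions about notMNIST, and every name below is illustrative, not part of the notebook):

```python
# Shape check for the graph above, without TensorFlow:
# features (batch, 784) @ weights (784, 10) + biases (10,) -> logits (batch, 10)
import numpy as np

n_features, n_labels = 784, 10
rng = np.random.RandomState(0)

X = rng.rand(5, n_features).astype(np.float32)         # stand-in "features" batch
W = rng.normal(0.0, 0.1, size=(n_features, n_labels))  # stand-in "weights"
b = np.zeros(n_labels)                                 # stand-in "biases"

logits = X @ W + b
shifted = logits - logits.max(axis=1, keepdims=True)   # for numerical stability
prediction = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
```

If the shapes line up here, the same shapes will line up in the TF graph; each softmax row sums to 1, matching tf.nn.softmax.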
# TODO: Find the best parameters for each configuration
# epochs =
# batch_size =
# learning_rate =
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements used for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# The training cycle
for batch_i in range(batch_count):
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: Training
End of explanation
# TODO: Set the epochs, batch_size, and learning_rate with the best parameters from problem 3
# epochs =
# batch_size =
# learning_rate =
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# The training cycle
for batch_i in range(batch_count):
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
print('Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
End of explanation |
6,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the Python Plotting Tutorial for Com597I!
This is intended as a self-guided exercise to help you explore python plotting. Please follow the document, follow the links and read the documentation. I'll be circulating to help you if you have questions.
Step1: A super simple chart
The simplest possible plot is a line chart. See the example below. Answer the following questions
Step2: Try This
Using the documentation for matplotlib.pyplot found here make a plot for the following data set. In this case, you want to plot both x and y values.
Step3: Improving the plot
When I made the plot above, it was a very jumbled line chart, which didn't look very good. Using the tutorial figure out how to plot the points as dots rather than as a line chart.
Step4: If you look carefully at the data, you'll find cases where the same exact point shows up twice or more. One trick is to use alpha or the transparency of a dot to handle overlapping data. Using the argument alpha=0.3 in the plot above to make each particular dot slightly transparent.
Step5: Challenge
Last week, we processed the congressional district data to produce a data set with the percent of the working population in each industry compared to the mean and median salary in that district. You opened that data in Excel to produce a plot.
Recreate a plot of percent of the district in engineering and the mean salary for that district. Make sure to label your axes.
(Hint) remember
Step6: Bar charts
The line charts above are great for cardinal data, but python can do many other kinds of plots. Here, we'll make a bar chart. There are a few differences
Step7: Try it
Add another bar to the same chart for the monthly low temperature. Make sure you label your axes!
Styles
Now that we've created some simple charts in python, let's see how to make them beautiful!
Matplotlib comes with many pre-built style packages that can be used to make your plots automatically look professional. Try the samples below for a taste.
The full set can be found by running the command below
Step8: Use a style like this
Step9: Try it!
Set a context like above and rerun any of the plots in your document.
Histograms
Pyplot includes excellent support for histograms. A histogram uses the range of your data on the x-axis. It divides that range in to buckets (usually with equal widths) and counts the number of values that fall into each bucket. Look at the example below | Python Code:
# This section imports some important libraries that we'll need.
# the most important is the second line. After you run this, "plt" will be
# the plotting library in python.
import numpy
import matplotlib.pyplot as plt
%pylab inline
Explanation: Welcome to the Python Plotting Tutorial for Com597I!
This is intended as a self-guided exercise to help you explore python plotting. Please follow the document, follow the links and read the documentation. I'll be circulating to help you if you have questions.
End of explanation
my_data = [1,2.4,-1.3,4]
plt.plot(my_data)
plt.ylabel("yi")
plt.xlabel("hi")
plt.title("A plot!")
plt.show()
Explanation: A super simple chart
The simplest possible plot is a line chart. See the example below. Answer the following questions:
* What variable holds the data?
* We only defined one data series. Is it on the y or x axis?
* What do the functions ylabel, xlabel, and title do? Remove them to test your idea.
End of explanation
height = [68, 66, 75, 71, 67, 65, 67, 75, 72, 74, 72, 75, 69, 70, 65, 64, 68, 73, 70, 76, 64, 64, 66, 63, 68, 62, 72, 74, 76, 69, 65, 65, 64, 66, 64, 72, 74]
shoe_size = [12 ,9 ,12 ,11 ,12 ,8.5 ,9 ,13 ,11 ,12 ,12 ,12 ,10 ,11 ,10 ,8 ,9 ,11 ,8 ,12 ,8 ,9 ,11 ,10 ,9 ,9 ,11 ,11 ,12 ,10 ,8 ,7 ,9 ,10 ,13 ,12 ,11]
# put your code here
plt.scatter(shoe_size, height, s=range(len(height)), color="red", marker="o", linestyle="None", alpha=.7)
plt.show()
# want:
hist = {}
for i in range(len(height)):
combo = str(height[i]) + "," + str(shoe_size[i])
if combo not in hist:
hist[combo] = 0
hist[combo] = hist[combo] + 1
new_height = []
new_shoe_size = []
counts = []
for key in hist:
key_split = key.split(",")
new_height.append(int(key_split[0]))
new_shoe_size.append(float(key_split[1]))
counts.append(hist[key] * 50)
plt.scatter(new_height, new_shoe_size, counts)
plt.show()
Explanation: Try This
Using the documentation for matplotlib.pyplot found here make a plot for the following data set. In this case, you want to plot both x and y values.
End of explanation
# put your new plot here
Explanation: Improving the plot
When I made the plot above, it was a very jumbled line chart, which didn't look very good. Using the tutorial figure out how to plot the points as dots rather than as a line chart.
End of explanation
# put your code here. Make sure you include axis labels and a title!
Explanation: If you look carefully at the data, you'll find cases where the same exact point shows up twice or more. One trick is to use alpha, or the transparency of a dot, to handle overlapping data. Use the argument alpha=0.3 in the plot above to make each particular dot slightly transparent.
End of explanation
# put your code here.
input_file = open("cd1141dp3_clean.csv")
header = input_file.readline()
print(header)
finance_ratios = []
mean_incomes = []
for line in input_file:
split_line = line.split(',')
finance_ratio = float(split_line[6])
mean_income = float(split_line[7])
finance_ratios.append(finance_ratio)
mean_incomes.append(mean_income)
plt.plot(finance_ratios, mean_incomes, 'ro')
plt.xlabel("Finance Ratio")
plt.ylabel("Mean Income")
Explanation: Challenge
Last week, we processed the congressional district data to produce a data set with the percent of the working population in each industry compared to the mean and median salary in that district. You opened that data in Excel to produce a plot.
Recreate a plot of percent of the district in engineering and the mean salary for that district. Make sure to label your axes.
(Hint) remember: all of the python you have learned will work in the notebook. You can still use open for instance.
End of explanation
month = ["jan", "feb", "mar", 'apr', 'may', 'june', 'july', 'aug', 'sept', 'oct', 'nov', 'dec']
high = [45, 48, 52, 58, 64, 69, 72, 73, 67, 59, 51, 47]
low = [36, 37, 39, 43, 47, 52, 54, 55, 52, 47, 41, 38]
fake_x_data = range(len(month))
width = .8
plt.bar(fake_x_data, high, width)
plt.xticks(fake_x_data, month)
plt.xlabel("Month")
plt.show()
Explanation: Bar charts
The line charts above are great for cardinal data, but python can do many other kinds of plots. Here, we'll make a bar chart. There are a few differences:
First of all, we still need an x and a y variable. I made a fake set of data for the x axis using range and len. What are the values in fake_x_data?
Play with the width variable to see what it does to the plot.
Note that we use xticks to label our groupings.
End of explanation
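For the "Try it" that follows, one common pattern is to shift the second series sideways by the bar width so the two sets of bars sit next to each other. A sketch (using the Agg backend so it renders off-screen; drop that line in a notebook):

```python
# Two bar series side by side: offset the second series by `width`.
import matplotlib
matplotlib.use("Agg")  # draw off-screen, no display needed
import matplotlib.pyplot as plt

month = ["jan", "feb", "mar"]
high = [45, 48, 52]
low = [36, 37, 39]

xs = range(len(month))
width = 0.4

fig, ax = plt.subplots()
ax.bar(xs, high, width, label="high")
ax.bar([x + width for x in xs], low, width, label="low")
ax.set_xticks([x + width / 2 for x in xs])  # center the tick between the pair
ax.set_xticklabels(month)
ax.set_xlabel("Month")
ax.set_ylabel("Temperature (F)")
ax.legend()

heights = [p.get_height() for p in ax.patches]  # one Rectangle per bar
```

Centering the xticks between each pair of bars keeps the month labels lined up with their group.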
list(plt.style.available)
Explanation: Try it
Add another bar to the same chart for the monthly low temperature. Make sure you label your axes!
Styles
Now that we've created some simple charts in python, let's see how to make them beautiful!
Matplotlib comes with many pre-built style packages that can be used to make your plots automatically look professional. Try the samples below for a taste.
The full set can be found by running the command below:
End of explanation
plt.style.use("seaborn-poster")
Explanation: Use a style like this:
End of explanation
fake_data = [1, 1, 1, 1, 2, 2, 3, 4, 5, 1, 2, 3, 4, 6, 7]
plt.hist(finance_ratios, bins=21)
plt.show()
Explanation: Try it!
Set a context like above and rerun any of the plots in your document.
Histograms
Pyplot includes excellent support for histograms. A histogram uses the range of your data on the x-axis. It divides that range into buckets (usually with equal widths) and counts the number of values that fall into each bucket. Look at the example below:
End of explanation |
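Under the hood the bucketing is simple; here is a from-scratch sketch of what plt.hist computes (`bucket_counts` is a made-up helper, not a pyplot function):

```python
# What plt.hist does with its `bins` argument, in miniature:
# split [min, max] into equal-width buckets and count values per bucket.
def bucket_counts(data, n_bins):
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in data:
        i = min(int((x - lo) / width), n_bins - 1)  # put x == max in the last bucket
        counts[i] += 1
    return counts

counts = bucket_counts([1, 1, 1, 1, 2, 2, 3, 4, 5, 6, 7], 3)  # buckets [1,3) [3,5) [5,7]
```

The counts always sum to the number of data points, which is a handy sanity check when choosing the number of bins.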
6,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load some data
Step1: Load pairs of covers and non-covers
```Python
def get_pairs(clique_dict)
Step2: Cut chroma features to fixed-length arrays
```Python
def patchwork(chroma, n_patches=7, patch_len=64)
Step3: Align chroma pitch dimension
```Python
def align_pitch(chroma_1, chroma_2)
Step4: Construct a dataset of cover and non-cover 'patchworks'
Python
def dataset_of_pairs(clique_dict, chroma_dict) | Python Code:
# ratio = (5, 15, 80)
ratio = (1, 9, 90)
clique_dict, cliques_by_uri = SHS_data.read_cliques()
train_cliques, test_cliques, val_cliques = util.split_train_test_validation(clique_dict, ratio=ratio)
Explanation: Load some data
End of explanation
pairs, non_pairs = paired_data.get_pairs(train_cliques)
assert len(pairs) == len(non_pairs)
assert np.all([len(pair) == 2 for pair in pairs])
assert np.all([len(non_pair) == 2 for non_pair in non_pairs])
assert np.all([cliques_by_uri[pair[0]] == cliques_by_uri[pair[1]] for pair in pairs])
assert not np.any([cliques_by_uri[non_pair[0]] == cliques_by_uri[non_pair[1]] for non_pair in non_pairs])
Explanation: Load pairs of covers and non-covers
```Python
def get_pairs(clique_dict):
...
```
End of explanation
reload(paired_data)
# simple array
len_x = 10
n_patch, patch_len = 3, 14
x_test = np.arange(len_x).reshape((-1,1))
x_patches = paired_data.patchwork(x_test, n_patches=n_patch, patch_len=patch_len)
assert x_patches[0] == x_test[0]
assert x_patches[-1] == x_test[-1]
assert len(x_patches) == n_patch * patch_len
# real data
test_pair = pairs[0]
chroma_1 = SHS_data.read_chroma(test_pair[0])
chroma_2 = SHS_data.read_chroma(test_pair[1])
patches_1 = paired_data.patchwork(chroma_1)
patches_2 = paired_data.patchwork(chroma_2)
assert patches_1.shape == patches_2.shape
# short chroma
n_patches = 3
patch_len = min(len(chroma_1), len(chroma_2)) + 10
patches_1 = paired_data.patchwork(chroma_1, n_patches=n_patches, patch_len=patch_len)
patches_2 = paired_data.patchwork(chroma_2, n_patches=n_patches, patch_len=patch_len)
assert np.all(patches_1.shape == patches_2.shape)
assert patches_1.shape[0] == n_patches * patch_len
Explanation: Cut chroma features to fixed-length arrays
```Python
def patchwork(chroma, n_patches=7, patch_len=64):
...
```
Strategy: cutting out n_patches equally-spaced (possibly overlapping) patches of length patch_len and stitching them back together.
Note that this requires some extra attention as there are unusually short chroma files in the dataset:
Around 30 files are less than 64 beats long.
Hence an extra test in which patch_len > len(chroma).
End of explanation
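A minimal sketch of that strategy (assuming patch_len <= len(chroma); the real paired_data.patchwork also has to handle the short-chroma case mentioned above):

```python
# n_patches equally spaced windows of patch_len rows, concatenated end to end.
import numpy as np

def patchwork_sketch(x, n_patches=3, patch_len=4):
    starts = np.linspace(0, len(x) - patch_len, n_patches).astype(int)
    return np.concatenate([x[s:s + patch_len] for s in starts])

x = np.arange(10).reshape(-1, 1)
patches = patchwork_sketch(x)
```

Because the first window starts at row 0 and the last one ends at the final row, the result keeps both endpoints of the original sequence, matching the asserts in the test cell above.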
a = np.array([[2,0,1,0,0,0],
[2,0,1,0,0,0]])
b = np.array([[0,0,1,0,3,0],
[0,0,1,0,3,0]])
a_, b_ = paired_data.align_pitch(a, b)
print(a)
print(b)
print('\n', b_)
Explanation: Align chroma pitch dimension
```Python
def align_pitch(chroma_1, chroma_2):
...
```
End of explanation
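One plausible way to implement the alignment (a sketch only; the actual paired_data.align_pitch may differ): try every pitch rotation of the second chroma and keep the one whose average pitch profile best correlates with the first.

```python
# Roll chroma_2 along the pitch axis to the rotation that maximizes the
# dot product of the two average pitch profiles.
import numpy as np

def align_pitch_sketch(c1, c2):
    p1, p2 = c1.mean(axis=0), c2.mean(axis=0)
    scores = [p1 @ np.roll(p2, s) for s in range(len(p1))]
    best = int(np.argmax(scores))
    return c1, np.roll(c2, best, axis=1)

a = np.array([[2, 0, 1, 0, 0, 0],
              [2, 0, 1, 0, 0, 0]])
b = np.array([[0, 0, 1, 0, 3, 0],
              [0, 0, 1, 0, 3, 0]])
a_, b_ = align_pitch_sketch(a, b)
```

On the toy arrays above, the big peak of b rotates around to sit under the big peak of a, which is exactly the transposition-invariance the pairing step needs.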
train_uris = util.uris_from_clique_dict(train_cliques)
chroma_dict = SHS_data.preload_chroma(train_uris)
X_1, X_2, is_cover, _ = paired_data.dataset_of_pairs(train_cliques, chroma_dict)
print(X_1.shape, X_2.shape, is_cover.shape)
Explanation: Construct a dataset of cover and non-cover 'patchworks'
```Python
def dataset_of_pairs(clique_dict, chroma_dict):
    ...
```
End of explanation |
6,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This is a basic tutorial on using Jupyter to use the scipy modules.
Example of plotting sine and cosine functions in the same plot
Install matplotlib through conda via
Step1: The x values limit the range of the plot.
Let's get help on the plt.plot function, so as to understand how to use it, in addition to the tutorial at http
Step2: Let's add in 'bo' string to the mix to get dots on the trace
Step3: Let's try to add two traces, the second one is a cosine function
Step4: Example of using optimize.fmin on the sine function
Step5: Pretty much what we expected. There is a minimum of -1 for this sine wave function (amplitude of 1 here ... would have been different if we multiplied the sine wave by some other factor). We can call the f function to see the value at that point which is pretty darn close to -1
Step6: Example of using optimize.root on the sine function
Step7: Let's evaludate the f function (which we know is a sine function) at not quite at the point where it is zero (at pi) | Python Code:
import math
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 3 * math.pi, 50)
y = np.sin(x)
plt.plot(x, y)
plt.show()
Explanation: Introduction
This is a basic tutorial on using Jupyter to use the scipy modules.
Example of plotting sine and cosine functions in the same plot
Install matplotlib through conda via:
conda install -y matplotlib
Below we plot a sine function from 0 to 3 pi. Pretty much what you would expect:
End of explanation
help(plt.plot)
Explanation: The x values limit the range of the plot.
Let's get help on the plt.plot function, so as to understand how to use it, in addition to the tutorial at http://matplotlib.org/users/pyplot_tutorial.html
End of explanation
import math
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 2 * math.pi, 50)
y = np.sin(x)
plt.plot(x, y, 'bo')
plt.show()
Explanation: Let's add in 'bo' string to the mix to get dots on the trace:
End of explanation
import math
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,2 * math.pi, 50)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y1, 'bo', x, y2, 'r+')
plt.show()
Explanation: Let's try to add two traces, the second one is a cosine function:
End of explanation
import math
import numpy as np
from scipy import linalg, optimize
# Here, we called this function "func2" which is pretty arbitrary. You will need to use a better name in practice, of course:
def func2(x):
return np.sin(x)
optimize.fmin(func2, math.pi - 0.01)
Explanation: Example of using optimize.fmin on the sine function
End of explanation
func2(4.71237414)
math.pi * 2 * 0.75
Explanation: Pretty much what we expected. There is a minimum of -1 for this sine wave function (the amplitude is 1 here; it would have been different if we had multiplied the sine wave by some other factor). We can call the func2 function to see that the value at that point is pretty darn close to -1:
End of explanation
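optimize.fmin is the general-purpose tool; for intuition, the same 1-D minimum can be found with a stdlib-only ternary search. This is a sketch that assumes the function is unimodal on the bracket (sin is, on [pi, 2*pi]):

```python
# Ternary search: repeatedly discard the third of the bracket that cannot
# contain the minimum of a unimodal function.
import math

def ternary_min(f, lo, hi, iters=200):
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

x_min = ternary_min(math.sin, math.pi, 2 * math.pi)  # expect 3*pi/2 ~ 4.712
```

The result matches the 4.71237414 that fmin reported above, i.e. 3π/2, where sin reaches its minimum of -1.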
help(optimize.root)
Explanation: Example of using optimize.root on the sine function
End of explanation
func2(math.pi * 0.75)
import math
import numpy as np
from scipy import linalg, optimize
# Here, we called this function "func2", which is pretty arbitrary. You will want to use a better name in practice, of course:
def func2(x):
return np.sin(x)
optimize.root(func2, math.pi * 0.75)
Explanation: Let's evaluate func2 (which we know is a sine function) not quite at the point where it is zero (at pi):
End of explanation |
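optimize.root then iterates from that starting point to the nearby zero at pi. To illustrate the underlying idea with nothing but the standard library, here is a bare-bones bisection root finder (a sketch of the concept only — scipy's default hybrid solver works differently):

```python
import math

def bisect_root(f, lo, hi, tol=1e-12):
    """Shrink a sign-changing bracket [lo, hi] until it pins down a root of f."""
    assert f(lo) * f(hi) < 0, "bracket must straddle a root"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid        # root is in the left half
        else:
            lo = mid        # root is in the right half
    return (lo + hi) / 2

root = bisect_root(math.sin, 2.0, 4.0)   # bracket around pi
```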
6,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:29
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
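For illustration, here is how one of these TODO cells looks once filled in — with a stand-in `DOC` object, since the real helper comes from setup cells that are not part of this excerpt, and the chosen values ("Modal", "Bin") are hypothetical:

```python
class _Doc:
    """Stand-in for the notebook's DOC helper, only to show the call pattern."""
    def __init__(self):
        self.props = {}
        self._current = None

    def set_id(self, prop_id):
        self._current = prop_id

    def set_value(self, value):
        # Cardinality 1.N properties take repeated set_value calls.
        self.props.setdefault(self._current, []).append(value)

DOC = _Doc()
DOC.set_id('cmip6.aerosol.model.scheme_type')
DOC.set_value("Modal")
DOC.set_value("Bin")
print(DOC.props)  # {'cmip6.aerosol.model.scheme_type': ['Modal', 'Bin']}
```

In the real notebook only the `DOC.set_value(...)` lines are edited; the `set_id` call stays as generated.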
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
6,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example is kindly contributed by FreddyBaudine for reproducing pygae/galgebra#26 and pygae/galgebra#30 with modifications by utensil.
Please note before Python code, there's an invisible markdown cell with the following code to enable color and define some colors from http
Step1: Base manifold (three dimensional)
Metric tensor (cartesian coordinates - norm = False)
Step2: Two dimensional submanifold - Unit sphere
Basis not normalised
Step3: Christoffel symbols of the first kind
Step4: One dimensional submanifold
Basis not normalised | Python Code:
from __future__ import print_function
import sys
from galgebra.printer import Format, xpdf
Format()
from sympy import symbols, sin, pi, latex, Array, permutedims
from galgebra.ga import Ga
from IPython.display import Math
Explanation: This example is kindly contributed by FreddyBaudine for reproducing pygae/galgebra#26 and pygae/galgebra#30 with modifications by utensil.
Please note before Python code, there's an invisible markdown cell with the following code to enable color and define some colors from http://latexcolor.com/:
markdown
$$
\require{color}
\definecolor{airforceblue}{rgb}{0.36, 0.54, 0.66}
\definecolor{applegreen}{rgb}{0.55, 0.71, 0.0}
\definecolor{atomictangerine}{rgb}{1.0, 0.6, 0.4}
$$
$$
\require{color}
\definecolor{airforceblue}{rgb}{0.36, 0.54, 0.66}
\definecolor{applegreen}{rgb}{0.55, 0.71, 0.0}
\definecolor{atomictangerine}{rgb}{1.0, 0.6, 0.4}
$$
End of explanation
from sympy import cos, sin, symbols
g3coords = (x,y,z) = symbols('x y z')
g3 = Ga('ex ey ez', g = [1,1,1], coords = g3coords,norm=False) # Create g3
(e_x,e_y,e_z) = g3.mv()
Math(r'g =%s' % latex(g3.g))
Explanation: Base manifold (three dimensional)
Metric tensor (cartesian coordinates - norm = False)
End of explanation
sp2coords = (theta, phi) = symbols(r'{\color{airforceblue}\theta} {\color{applegreen}\phi}', real = True)
sp2param = [sin(theta)*cos(phi), sin(theta)*sin(phi), cos(theta)]
sp2 = g3.sm(sp2param, sp2coords, norm = False) # submanifold
(etheta, ephi) = sp2.mv() # sp2 basis vectors
(rtheta, rphi) = sp2.mvr() # sp2 reciprocal basis vectors
sp2grad = sp2.grad
sph_map = [1, theta, phi] # Coordinate map for sphere of r = 1
Math(r'(\theta,\phi)\rightarrow (r,\theta,\phi) = %s' % latex(sph_map))
Math(r'e_\theta \cdot e_\theta = %s' % (etheta|etheta))
Math(r'e_\phi \cdot e_\phi = %s' % (ephi|ephi))
Math('g = %s' % latex(sp2.g))
Math(r'g^{-1} = %s' % latex(sp2.g_inv))
Explanation: Two dimensional submanifold - Unit sphere
Basis not normalised
End of explanation
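Independently of sympy/galgebra, a quick numeric check in plain Python confirms that the parametrization (sin θ cos φ, sin θ sin φ, cos θ) used for `sp2param` really lies on the unit sphere:

```python
import math

def sphere_point(theta, phi):
    """The same parametrization as sp2param, evaluated numerically."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

for th, ph in [(0.3, 1.1), (1.0, 2.5), (2.7, 0.4)]:
    x, y, z = sphere_point(th, ph)
    assert abs(x * x + y * y + z * z - 1.0) < 1e-12
print("all sample points lie on the unit sphere")
```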
Cf1 = sp2.Christoffel_symbols(mode=1)
Cf1 = permutedims(Array(Cf1), (2, 0, 1))
Math(r'\Gamma_{1, \alpha, \beta} = %s \quad \Gamma_{2, \alpha, \beta} = %s ' % (latex(Cf1[0, :, :]), latex(Cf1[1, :, :])))
Cf2 = sp2.Christoffel_symbols(mode=2)
Cf2 = permutedims(Array(Cf2), (2, 0, 1))
Math(r'\Gamma^{1}_{\phantom{1,}\alpha, \beta} = %s \quad \Gamma^{2}_{\phantom{2,}\alpha, \beta} = %s ' % (latex(Cf2[0, :, :]), latex(Cf2[1, :, :])))
F = sp2.mv('F','vector',f=True) #scalar function
f = sp2.mv('f','scalar',f=True) #vector function
Math(r'\nabla = %s' % sp2grad)
Math(r'\nabla f = %s' % (sp2.grad * f))
Math(r'F = %s' % F)
Math(r'\nabla F = %s' % (sp2.grad * F))
Explanation: Christoffel symbols of the first kind:
End of explanation
cir_th = phi = symbols(r'{\color{atomictangerine}\phi}',real = True)
cir_map = [pi/8, phi]
Math(r'(\phi)\rightarrow (\theta,\phi) = %s' % latex(cir_map))
cir1d = sp2.sm( cir_map , (cir_th,), norm = False) # submanifold
cir1dgrad = cir1d.grad
(ephi) = cir1d.mv()
Math(r'e_\phi \cdot e_\phi = %s' % latex(ephi[0] | ephi[0]))
Math('g = %s' % latex(cir1d.g))
h = cir1d.mv('h','scalar',f= True)
H = cir1d.mv('H','vector',f= True)
Math(r'\nabla = %s' % cir1dgrad)
Math(r'\nabla h = %s' %(cir1d.grad * h).simplify())
Math('H = %s' % H)
Math(r'\nabla H = %s' % (cir1d.grad * H).simplify())
Explanation: One dimensional submanifold
Basis not normalised
End of explanation |
6,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Using XLA with tf.function
<table class="tfo-notebook-buttons" align="left">
<td>
<img src="https
Step2: Next, define some necessary constants and prepare the MNIST dataset.
Step3: Finally, define the model and the optimizer. The model uses a single dense layer.
Step4: Defining the training function
In the training function, the layer defined above is used to obtain the predicted labels, and then the optimizer is used to minimize the gradient of the loss. To compile the computation with XLA, place it inside a tf.function with experimental_compile=True.
Step5: Training and testing the model
Once the training function has been defined, define the model.
Step6: Finally, check the accuracy. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
Explanation: Using XLA with tf.function
<table class="tfo-notebook-buttons" align="left">
<td>
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/xla/tutorials/compile.ipynb">View on TensorFlow.org</a>
</td>
<td>
<img src="https://www.tensorflow.org/images/colab_logo_32px.png"><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/xla/tutorials/compile.ipynb">Run in Google Colab</a>
</td>
<td>
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"><a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/compile.ipynb">View source on GitHub</a>
</td>
</table>
In this tutorial, you will train a TensorFlow model to classify the MNIST dataset, with the training function compiled using XLA.
First, load TensorFlow and enable eager execution.
End of explanation
# Size of each input image, 28 x 28 pixels
IMAGE_SIZE = 28 * 28
# Number of distinct number labels, [0..9]
NUM_CLASSES = 10
# Number of examples in each training batch (step)
TRAIN_BATCH_SIZE = 100
# Number of training steps to run
TRAIN_STEPS = 1000
# Loads MNIST dataset.
train, test = tf.keras.datasets.mnist.load_data()
train_ds = tf.data.Dataset.from_tensor_slices(train).batch(TRAIN_BATCH_SIZE).repeat()
# Casting from raw data to the required datatypes.
def cast(images, labels):
images = tf.cast(
tf.reshape(images, [-1, IMAGE_SIZE]), tf.float32)
labels = tf.cast(labels, tf.int64)
return (images, labels)
Explanation: Next, define some necessary constants and prepare the MNIST dataset.
End of explanation
layer = tf.keras.layers.Dense(NUM_CLASSES)
optimizer = tf.keras.optimizers.Adam()
Explanation: Finally, define the model and the optimizer. The model uses a single dense layer.
End of explanation
@tf.function(experimental_compile=True)
def train_mnist(images, labels):
images, labels = cast(images, labels)
with tf.GradientTape() as tape:
predicted_labels = layer(images)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=predicted_labels, labels=labels
))
layer_variables = layer.trainable_variables
grads = tape.gradient(loss, layer_variables)
optimizer.apply_gradients(zip(grads, layer_variables))
Explanation: Defining the training function
In the training function, the layer defined above is used to obtain the predicted labels, and then the optimizer is used to minimize the gradient of the loss. To compile the computation with XLA, place it inside a tf.function with experimental_compile=True.
End of explanation
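The loss used in train_mnist is sparse softmax cross-entropy. Numerically this is just the negative log of the softmax probability assigned to the true class, which can be checked in plain Python (a sketch using the usual max-subtraction trick for stability):

```python
import math

def sparse_softmax_xent(logits, label):
    """-log(softmax(logits)[label]) via the log-sum-exp trick."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]

# Uniform logits over 2 classes give a loss of ln(2).
print(round(sparse_softmax_xent([0.0, 0.0], 0), 4))  # 0.6931
```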
for images, labels in train_ds:
if optimizer.iterations > TRAIN_STEPS:
break
train_mnist(images, labels)
Explanation: Training and testing the model
Once the training function has been defined, define the model.
End of explanation
images, labels = cast(test[0], test[1])
predicted_labels = layer(images)
correct_prediction = tf.equal(tf.argmax(predicted_labels, 1), labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Prediction accuracy after training: %s" % accuracy)
Explanation: Finally, check the accuracy.
End of explanation |
6,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Is the Grass Wet?
This is an example used by Pearl in his book 'Causality'. I've used the conditional probability tables from here
Step1: We begin with some variables, each with 2 states ...
Step2: We now need some equations to link these (these are our conditional probability distributions).
Step3: We now construct a causal graph from these...
Step4: The only root variable in this equation is rain. So let's supply a distribution over this.
Step5: With the graph and the rain probabilities, we can generate the full joint probability distribution.
Step6: We can query this distribution for specific probabilities...
Step7: We can also generate a joint distribution under intervention. Let's generate a distribution over the sprinkler, telling it that it is ON
Step8: This isn't the same as above!
Let's just ask about the grass in this case... | Python Code:
from causalinfo import *
# You only need this if you want to draw pretty pictures of the Networksa
from nxpd import draw, nxpdParams
nxpdParams['show'] = 'ipynb'
Explanation: Is the Grass Wet?
This is an example used by Pearl in his book 'Causality'. I've used the conditional probability tables from here:
https://en.wikipedia.org/wiki/Bayesian_network
End of explanation
rain = Variable('Rain', 2)
sprinkler = Variable('Sprinkler', 2)
grass = Variable('Grass', 2)
Explanation: We begin with some variables, each with 2 states ...
End of explanation
def rain_to_sprinkler(r, dist):
table = [.6, .4], [.99, .01]
dist[:] = table[r]
eq1 = Equation('RtoS', [rain], [sprinkler], rain_to_sprinkler)
eq1
def rain_and_sprinkler_to_grass(s, r, dist):
actual = [[1.0, 0.0], [0.2, 0.8]], [[0.1, 0.9], [0.01, 0.99]]
dist[:] = actual[s][r]
eq2 = Equation('SRtoG', [sprinkler, rain], [grass], rain_and_sprinkler_to_grass)
eq2
Explanation: We now need some equations to link these (these are our conditional probability distributions).
End of explanation
gr = CausalGraph([eq1, eq2])
draw(gr.full_network)
draw(gr.causal_network)
Explanation: We now construct a causal graph from these...
End of explanation
rain_dist = JointDist({rain: [.8, .2]})
rain_dist
Explanation: The only root variable in this equation is rain. So let's supply a distribution over this.
End of explanation
joint = gr.generate_joint(rain_dist)
joint
Explanation: With the graph and the rain probabilities, we can generate the full joint probability distribution.
End of explanation
joint.query_probability('Grass==1 and Sprinkler==1')
Explanation: We can query this distribution for specific probabilities...
End of explanation
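As a cross-check on such queries, Pearl's classic posterior P(Rain | Grass wet) can be computed by hand from the same conditional probability tables using nothing but plain Python (the numbers below are the ones defined in the cells above, with P(rain = 1) = 0.2):

```python
p_r = {0: 0.8, 1: 0.2}                           # rain prior
p_s_given_r = {0: {0: 0.6, 1: 0.4},              # p_s_given_r[r][s]
               1: {0: 0.99, 1: 0.01}}
p_wet_given_sr = {(0, 0): 0.0, (0, 1): 0.8,      # P(grass=1 | s, r)
                  (1, 0): 0.9, (1, 1): 0.99}

def p_wet_and_rain(r):
    return p_r[r] * sum(p_s_given_r[r][s] * p_wet_given_sr[(s, r)]
                        for s in (0, 1))

posterior = p_wet_and_rain(1) / (p_wet_and_rain(0) + p_wet_and_rain(1))
print(round(posterior, 4))  # 0.3577 -- wet grass alone leaves rain unlikely
```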
sprinkler_on_dist = JointDistByState({sprinkler:1})
sprinkler_on_dist
joint_sprinkler_on = gr.generate_joint(rain_dist, sprinkler_on_dist)
joint_sprinkler_on
Explanation: We can also generate a joint distribution under intervention. Let's generate a distribution over the sprinkler, telling it that it is ON
End of explanation
joint_sprinkler_on.joint(grass)
rain_dist = JointDist({rain: [.99, .01]})
d = gr.generate_joint(rain_dist)
d
I = d.mutual_info(rain, grass)
H = d.entropy(rain, grass)
I, H, I/H
Explanation: This isn't the same as above!
Let's just ask about the grass in this case...
End of explanation |
6,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Word Cookie Solver
This program will find all of the words that can be made from a specified set of letters.
Setup
First, let's write some code to check if a single word can be made from the letters
Step3: Then we need a way to check lots of words. We'll need a dictionary
Step5: And a way to check all the words in the dictionary
Step6: Run it
Load the dictionary
Step7: Specify the letters we can use
Step8: Find the matches and print them
Step9: We can also sort them with the longest words first | Python Code:
def word_works(letters, word, allow_repeats=False):
Return True if word can be spelled using only letters. letters is a single
string. allow_repeats allows each letter to be used many times.
letters_remaining = letters.lower() # because dictionary words will be lowercase
for letter in word:
if letter in letters_remaining:
if not allow_repeats:
letters_remaining = letters_remaining.replace(letter, '', 1)
else:
return False
return True
Explanation: Word Cookie Solver
This program will find all of the words that can be made from a specified set of letters.
Setup
First, let's write some code to check if a single word can be made from the letters:
End of explanation
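The letter-by-letter check above is equivalent to a multiset-containment test, which can be written more compactly with collections.Counter (a sketch — the notebook itself keeps the string-based version):

```python
from collections import Counter

def word_works_counter(letters, word, allow_repeats=False):
    """True if `word` can be spelled from `letters` (case-insensitive)."""
    available = Counter(letters.lower())
    needed = Counter(word.lower())
    if allow_repeats:
        # With repeats allowed, only the *set* of letters matters.
        return set(needed) <= set(available)
    # Counter subtraction drops non-positive counts, so an empty result
    # means every needed letter is available often enough.
    return not (needed - available)

print(word_works_counter('wdro', 'rod'))                       # True
print(word_works_counter('wdro', 'door'))                      # False
print(word_works_counter('wdro', 'door', allow_repeats=True))  # True
```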
def load_dict(dict_filename):
Load a dictionary file into a list of (lowercase) words.
with open(dict_filename) as f:
content = f.readlines()
content = [x.strip().lower() for x in content]
return content
Explanation: Then we need a way to check lots of words. We'll need a dictionary:
End of explanation
def match_words(letters, dictionary, allow_repeats=False):
Return all words in dictionary that can be spelled using only letters.
dictionary should be a python list from load_dict().
results = []
for word in dictionary:
if word_works(letters, word, allow_repeats):
results.append(word)
return results
Explanation: And a way to check all the words in the dictionary:
End of explanation
dictionary = load_dict('dictionary.txt')
Explanation: Run it
Load the dictionary:
End of explanation
letters = 'wdro'
Explanation: Specify the letters we can use:
End of explanation
matching_words = match_words(letters, dictionary)
print(matching_words)
Explanation: Find the matches and print them:
End of explanation
print( sorted(matching_words, key=len, reverse=True) )
Explanation: We can also sort them with the longest words first:
End of explanation |
6,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Feature Engineering </h1>
In this notebook, you will learn how to incorporate feature engineering into your pipeline.
<ul>
<li> Working with feature columns </li>
<li> Adding feature crosses in TensorFlow </li>
<li> Reading data from BigQuery </li>
<li> Creating datasets using Dataflow </li>
<li> Using a wide-and-deep model </li>
</ul>
Note
Step1: After doing a pip install, restart your kernel by selecting kernel from the menu and clicking Restart Kernel before proceeding further
Step2: <h2> 1. Environment variables for project and bucket </h2>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads
Step4: <h2> 2. Specifying query to pull the data </h2>
Let's pull out a few extra columns from the timestamp.
Step5: Try the query above in https
Step6: First, let's define a function for preprocessing the data
Step7: Now, let's run pipeline locally. This takes upto <b>5 minutes</b>. You will see a message "Done" when it is done.
Step8: 4. Run Beam pipeline on Cloud Dataflow
Run pipeline on cloud on a larger sample size.
Step9: The following step will take <b>15-20 minutes.</b> Monitor job progress on the Cloud Console, in the Dataflow section
Step10: Once the job completes, observe the files created in Google Cloud Storage
Step11: 5. Develop model with new inputs
Download the first shard of the preprocessed data to enable local development.
Step12: We have two new inputs in the INPUT_COLUMNS, three engineered features, and the estimator involves bucketization and feature crosses.
Step13: Try out the new model on the local sample (this takes <b>5 minutes</b>) to make sure it works fine.
Step14: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
Step15: 6. Train on cloud
This will take <b> 10-15 minutes </b> even though the prompt immediately returns after the job is submitted. Monitor job progress on the Cloud Console, in the AI Platform section and wait for the training job to complete.
Step16: The RMSE is now 8.33249, an improvement over the 9.3 that we were getting ... of course, we won't know until we train/validate on a larger dataset. Still, this is promising. But before we do that, let's do hyper-parameter tuning.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
Step17: Optional | Python Code:
!pip install --user apache-beam[gcp]==2.16.0
!pip install --user httplib2==0.12.0
Explanation: <h1> Feature Engineering </h1>
In this notebook, you will learn how to incorporate feature engineering into your pipeline.
<ul>
<li> Working with feature columns </li>
<li> Adding feature crosses in TensorFlow </li>
<li> Reading data from BigQuery </li>
<li> Creating datasets using Dataflow </li>
<li> Using a wide-and-deep model </li>
</ul>
Note: You may ignore specific errors related to "papermill", "google-cloud-storage", and "datalab". You may also ignore warnings related to '/home/jupyter/.local/bin'. These components and issues do not impact your ability to complete the lab.
End of explanation
import tensorflow as tf
import apache_beam as beam
import shutil
print(tf.__version__)
Explanation: After doing a pip install, restart your kernel by selecting kernel from the menu and clicking Restart Kernel before proceeding further
End of explanation
import os
PROJECT = 'cloud-training-demos' # CHANGE THIS
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
REGION = 'us-central1' # Choose an available region for Cloud AI Platform
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.15'
## ensure we're using python3 env
os.environ['CLOUDSDK_PYTHON'] = 'python3'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
Explanation: <h2> 1. Environment variables for project and bucket </h2>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>
<li> Cloud training often involves saving and restoring model files. Therefore, we should <b>create a single-region bucket</b>. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available) </li>
</ol>
<b>Change the cell below</b> to reflect your Project ID and bucket name.
End of explanation
def create_query(phase, EVERY_N):
if EVERY_N == None:
EVERY_N = 4 #use full dataset
#select and pre-process fields
base_query =
SELECT
(tolls_amount + fare_amount) AS fare_amount,
DAYOFWEEK(pickup_datetime) AS dayofweek,
HOUR(pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
#add subsampling criteria by modding with hashkey
if phase == 'train':
query = "{} AND ABS(HASH(pickup_datetime)) % {} < 2".format(base_query,EVERY_N)
elif phase == 'valid':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 2".format(base_query,EVERY_N)
elif phase == 'test':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 3".format(base_query,EVERY_N)
return query
print(create_query('valid', 100)) #example query using 1% of data
Explanation: <h2> 2. Specifying query to pull the data </h2>
Let's pull out a few extra columns from the timestamp.
End of explanation
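The ABS(HASH(pickup_datetime)) % EVERY_N trick in the query gives a repeatable, non-overlapping train/valid/test split. The same idea can be mimicked locally with a stable hash (a sketch — BigQuery's legacy HASH is its own function, so the bucket assignments differ, but the mechanics are identical):

```python
import hashlib

def split_for(key, every_n=100):
    """Deterministically assign a row key to a split bucket."""
    digest = hashlib.md5(str(key).encode('utf-8')).hexdigest()
    bucket = int(digest, 16) % every_n
    if bucket < 2:
        return 'train'
    elif bucket == 2:
        return 'valid'
    elif bucket == 3:
        return 'test'
    return 'unused'  # rows sampled away when every_n > 4

# The same key always lands in the same bucket...
assert split_for('2015-01-01 12:00:00') == split_for('2015-01-01 12:00:00')
# ...and with every_n=4 every row is used, as in the "full dataset" case.
print(split_for('2015-01-01 12:00:00', every_n=4))
```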
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
Explanation: Try the query above in https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips if you want to see what it does (ADD LIMIT 10 to the query!)
<h2> 3. Preprocessing Dataflow job from BigQuery </h2>
This code reads from BigQuery and saves the data as-is on Google Cloud Storage. We can do additional preprocessing and cleanup inside Dataflow, but then we'll have to remember to repeat that preprocessing during inference. It is better to use tf.transform which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at this in future notebooks. For now, we are simply moving data from BigQuery to CSV using Dataflow.
While we could read from BQ directly from TensorFlow (See: https://www.tensorflow.org/api_docs/python/tf/contrib/cloud/BigQueryReader), it is quite convenient to export to CSV and do the training off CSV. Let's use Dataflow to do this at scale.
Because we are running this on the Cloud, you should go to the GCP Console (https://console.cloud.google.com/dataflow) to look at the status of the job. It will take several minutes for the preprocessing job to launch.
End of explanation
import datetime
####
# Arguments:
# -rowdict: Dictionary. The beam bigquery reader returns a PCollection in
# which each row is represented as a python dictionary
# Returns:
# -rowstring: a comma separated string representation of the record with dayofweek
# converted from int to string (e.g. 3 --> Tue)
####
def to_csv(rowdict):
days = ['null', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')
rowdict['dayofweek'] = days[rowdict['dayofweek']]
rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
return rowstring
####
# Arguments:
# -EVERY_N: Integer. Sample one out of every N rows from the full dataset.
# Larger values will yield smaller sample
# -RUNNER: 'DirectRunner' or 'DataflowRunner'. Specify to run the pipeline
# locally or on Google Cloud respectively.
# Side-effects:
# -Creates and executes dataflow pipeline.
# See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
####
def preprocess(EVERY_N, RUNNER):
job_name = 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/taxifare/ch4/taxi_preproc/'.format(BUCKET)
#dictionary of pipeline options
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S'),
'project': PROJECT,
'runner': RUNNER,
'num_workers' : 4,
'max_num_workers' : 5
}
#instantiate PipelineOptions object using options dictionary
opts = beam.pipeline.PipelineOptions(flags=[], **options)
#instantiate Pipeline object using PipelineOptions
with beam.Pipeline(options=opts) as p:
for phase in ['train', 'valid']:
query = create_query(phase, EVERY_N)
outfile = os.path.join(OUTPUT_DIR, '{}.csv'.format(phase))
(
p | 'read_{}'.format(phase) >> beam.io.Read(beam.io.BigQuerySource(query=query))
| 'tocsv_{}'.format(phase) >> beam.Map(to_csv)
| 'write_{}'.format(phase) >> beam.io.Write(beam.io.WriteToText(outfile))
)
print("Done")
Explanation: First, let's define a function for preprocessing the data
End of explanation
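Because to_csv above is plain Python, it can be smoke-tested without Beam on a hand-made row dictionary (a standalone copy of the function, modified to avoid mutating the caller's dict):

```python
days = ['null', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']

def to_csv(rowdict):
    CSV_COLUMNS = ('fare_amount,dayofweek,hourofday,pickuplon,pickuplat,'
                   'dropofflon,dropofflat,passengers,key').split(',')
    rowdict = dict(rowdict)                 # work on a copy
    rowdict['dayofweek'] = days[rowdict['dayofweek']]
    return ','.join(str(rowdict[k]) for k in CSV_COLUMNS)

row = {'fare_amount': 12.5, 'dayofweek': 3, 'hourofday': 17,
       'pickuplon': -73.99, 'pickuplat': 40.75,
       'dropofflon': -73.98, 'dropofflat': 40.76,
       'passengers': 2.0, 'key': 'k1'}
print(to_csv(row))  # 12.5,Tue,17,-73.99,40.75,-73.98,40.76,2.0,k1
```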
preprocess(50*10000, 'DirectRunner')
%%bash
gsutil ls gs://$BUCKET/taxifare/ch4/taxi_preproc/
Explanation: Now, let's run the pipeline locally. This takes up to <b>5 minutes</b>. You will see a message "Done" when it is done.
End of explanation
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
Explanation: 4. Run Beam pipeline on Cloud Dataflow
Run pipeline on cloud on a larger sample size.
End of explanation
preprocess(50*100, 'DataflowRunner')
Explanation: The following step will take <b>15-20 minutes.</b> Monitor job progress on the Cloud Console, in the Dataflow section
End of explanation
%%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
%%bash
#print first 10 lines of first shard of train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" | head
Explanation: Once the job completes, observe the files created in Google Cloud Storage
End of explanation
%%bash
if [ -d sample ]; then
rm -rf sample
fi
mkdir sample
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" > sample/train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/valid.csv-00000-of-*" > sample/valid.csv
Explanation: 5. Develop model with new inputs
Download the first shard of the preprocessed data to enable local development.
End of explanation
%%bash
grep -A 20 "INPUT_COLUMNS =" taxifare/trainer/model.py
%%bash
grep -A 50 "build_estimator" taxifare/trainer/model.py
%%bash
grep -A 15 "add_engineered(" taxifare/trainer/model.py
Explanation: We have two new inputs in the INPUT_COLUMNS, three engineered features, and the estimator involves bucketization and feature crosses.
End of explanation
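The cell above only greps model.py, so the feature computation itself is not shown in this excerpt. The three engineered features named in the explanation are typically derived along these lines (a sketch under that assumption, not the actual file contents):

```python
import math

def add_engineered_features(row):
    """Sketch: coordinate deltas plus their euclidean distance."""
    latdiff = row['pickuplat'] - row['dropofflat']
    londiff = row['pickuplon'] - row['dropofflon']
    row['latdiff'] = latdiff
    row['londiff'] = londiff
    row['euclidean'] = math.sqrt(latdiff ** 2 + londiff ** 2)
    return row

trip = {'pickuplat': 40.7, 'pickuplon': -74.0,
        'dropofflat': 40.4, 'dropofflon': -74.4}
print(round(add_engineered_features(trip)['euclidean'], 6))  # 0.5
```

Computing these inside the model (rather than in the dataset) is what lets the exported serving signature accept only the raw inputs.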
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m trainer.task \
--train_data_paths=${PWD}/sample/train.csv \
--eval_data_paths=${PWD}/sample/valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=10 \
--job-dir=/tmp
%%bash
ls taxi_trained/export/exporter/
Explanation: Try out the new model on the local sample (this takes <b>5 minutes</b>) to make sure it works fine.
End of explanation
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all
%%writefile /tmp/test.json
{"dayofweek": "Sun", "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2}
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ai-platform local predict \
--model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
Explanation: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*" \
--train_steps=5000 \
--output_dir=$OUTDIR
Explanation: 6. Train on cloud
This will take <b> 10-15 minutes </b> even though the prompt immediately returns after the job is submitted. Monitor job progress on the Cloud Console, in the AI Platform section and wait for the training job to complete.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${model_dir} --all
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
gcloud ai-platform local predict \
--model-dir=${model_dir} \
--json-instances=/tmp/test.json
Explanation: The RMSE is now 8.33249, an improvement over the 9.3 that we were getting ... of course, we won't know until we train/validate on a larger dataset. Still, this is promising. But before we do that, let's do hyper-parameter tuning.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
End of explanation
%%bash
MODEL_NAME="feateng"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
echo "Run these commands one-by-one (the very first time, you'll create a model and then create a version)"
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION
%%bash
gcloud ai-platform predict --model=feateng --version=v1 --json-instances=/tmp/test.json
Explanation: Optional: deploy model to cloud
End of explanation |
6,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
End of explanation
from collections import Counter
total_counts = Counter()
for idx, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation).
End of explanation
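As a minimal illustration of the bag-of-words counting, here is the same Counter pattern on a couple of made-up toy reviews (the example strings are mine, not from the dataset):

```python
from collections import Counter

# Hypothetical stand-in for the reviews DataFrame
toy_reviews = ["the movie was great",
               "the plot was thin but the acting was great"]

total = Counter()
for review in toy_reviews:
    total.update(review.split(' '))  # same pattern as building total_counts above

print(total['the'], total['great'], len(total))
```

Running this on the toy reviews gives the per-word counts and the vocabulary size for this tiny corpus.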
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' ').
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
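to_categorical is essentially a one-hot encoding of the integer class labels; here is a minimal numpy sketch of the same idea (a sketch of the concept, not TFLearn's actual implementation):

```python
import numpy as np

def to_one_hot(labels, n_classes):
    # Row i gets a 1 in the column given by labels[i], 0 elsewhere
    return np.eye(n_classes)[labels]

y = np.array([0, 1, 1, 0])
one_hot = to_one_hot(y, 2)
print(one_hot)
```

Each row sums to 1 and marks exactly one class, which is the shape the softmax output layer is trained against.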
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Input
net = tflearn.input_data([None, 10000])
#Hidden
net = tflearn.fully_connected(net, 400, activation='ReLU')
net = tflearn.fully_connected(net, 50, activation='ReLU')
net = tflearn.fully_connected(net, 12, activation='ReLU')
#Output
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
#Make it go
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. There for, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
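The softmax activation chosen for the output layer turns the two output units' scores into class probabilities; a quick numpy sketch of that computation (with the standard max-subtraction for numerical stability — this is the math, not TFLearn's code):

```python
import numpy as np

def softmax(scores):
    # Subtract each row's max before exponentiating, then normalize the rows
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(np.array([[2.0, 1.0], [0.0, 3.0]]))
print(probs)
```

Each row of the result is a probability distribution over the two classes, which is why softmax pairs naturally with the categorical cross-entropy loss used below.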
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
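The accuracy computation in the test cell doesn't depend on TFLearn itself — it is just thresholding probabilities and comparing with the labels. A sketch with made-up numbers (the probabilities here are hypothetical, not model outputs):

```python
import numpy as np

# Hypothetical column-0 outputs of model.predict for 4 test reviews
probs = np.array([0.9, 0.2, 0.6, 0.1])
true_labels = np.array([1, 0, 1, 1])  # plays the role of testY[:, 0]

predictions = (probs >= 0.5).astype(np.int_)  # threshold at 0.5
accuracy = np.mean(predictions == true_labels)
print(predictions, accuracy)
```

Three of the four thresholded predictions match the labels, so the accuracy on this toy batch is 0.75.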
text = "This movie is so good. It was super and the worst"
positive_prob = model.predict([text_to_vector(text.lower())])[0][1]
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
Explanation: Try out your own text!
End of explanation |
6,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Modeling in Practice
Assignment
Step1: Let's check that the quantitative variable are effectively centered.
Step2: The means are both very close to 0; confirming the centering.
Distributions visualization
The following plots shows the distribution of all 3 explanatory variables with the response variable.
Step3: The four plots above show the following trends | Python Code:
# Magic command to insert the graph directly in the notebook
%matplotlib inline
# Load a useful Python libraries for handling data
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import Markdown, display
nesarc = pd.read_csv('nesarc_pds.csv')
canabis_usage = {1 : 1, 2 : 0, 9 : 9}
sex_shift = {1 : 1, 2 : 0}
white_race = {1 : 1, 2 : 0}
subnesarc = (nesarc[['AGE', 'SEX', 'S1Q1D5', 'S1Q7D', 'S3BQ1A5', 'S1Q11A']]
.assign(sex=lambda x: pd.to_numeric(x['SEX'].map(sex_shift)),
white_ethnicity=lambda x: pd.to_numeric(x['S1Q1D5'].map(white_race)),
used_canabis=lambda x: (pd.to_numeric(x['S3BQ1A5'], errors='coerce')
.map(canabis_usage)
.replace(9, np.nan)),
family_income=lambda x: (pd.to_numeric(x['S1Q11A'], errors='coerce')))
.dropna())
centered_nesarc = subnesarc.assign(age_c=subnesarc['AGE']-subnesarc['AGE'].mean(),
family_income_c=subnesarc['family_income']-subnesarc['family_income'].mean())
display(Markdown("Mean age : {:.0f}".format(centered_nesarc['AGE'].mean())))
display(Markdown("Mean family income last year: {:.0f}$".format(centered_nesarc['family_income'].mean())))
Explanation: Regression Modeling in Practice
Assignment: Test a Logistic Regression Model
Following is the Python program I wrote to fulfill the fourth assignment of the Regression Modeling in Practice online course.
I decided to use Jupyter Notebook as it is a pretty way to write code and present results.
Research question for this assignment
For this assignment, I decided to use the NESARC database with the following question: Are people of white ethnicity more likely to have ever used cannabis?
The potential other explanatory variables will be:
Age
Sex
Family income
Data management
The data will be managed so that cannabis usage is recoded as 0 (never used cannabis) or 1 (used cannabis). The non-answer recordings (reported as 9) will be discarded.
Since the response variable has only 2 categories, no category grouping is needed.
The other categorical variable (sex) will be recoded such that 0 means female and 1 means male. The two quantitative explanatory variables (age and family income) will be centered.
End of explanation
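Centering simply subtracts the sample mean, so the centered variable averages to (approximately) zero — a quick numpy sketch with made-up ages rather than the NESARC data:

```python
import numpy as np

age = np.array([25.0, 40.0, 55.0, 60.0])  # hypothetical values
age_c = age - age.mean()                  # same idea as the age_c column above
print(age_c, age_c.mean())
```

This is exactly the check performed on the real data in the next cell: the centered mean should come out at essentially 0.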
print("Centered age")
print(centered_nesarc['age_c'].describe())
print("\nCentered family income")
print(centered_nesarc['family_income_c'].describe())
Explanation: Let's check that the quantitative variables are effectively centered.
End of explanation
g = sns.factorplot(x='white_ethnicity', y='used_canabis', data=centered_nesarc,
kind="bar", ci=None)
g.set_xticklabels(['Non White', 'White'])
plt.xlabel('White ethnicity')
plt.ylabel('Ever used cannabis')
plt.title('Ever used cannabis dependance on the white ethnicity');
g = sns.factorplot(x='sex', y='used_canabis', data=centered_nesarc,
kind="bar", ci=None)
g.set_xticklabels(['Female', 'Male'])
plt.ylabel('Ever used cannabis')
plt.title('Ever used cannabis dependance on the sex');
g = sns.boxplot(x='used_canabis', y='family_income', data=centered_nesarc)
g.set_yscale('log')
g.set_xticklabels(('No', 'Yes'))
plt.xlabel('Ever used cannabis')
plt.ylabel('Family income ($)');
g = sns.boxplot(x='used_canabis', y='AGE', data=centered_nesarc)
g.set_xticklabels(('No', 'Yes'))
plt.xlabel('Ever used cannabis')
plt.ylabel('Age');
Explanation: The means are both very close to 0, confirming the centering.
Distributions visualization
The following plots show the distribution of all 3 explanatory variables with the response variable.
End of explanation
model = smf.logit(formula='used_canabis ~ family_income_c + age_c + sex + white_ethnicity', data=centered_nesarc).fit()
model.summary()
params = model.params
conf = model.conf_int()
conf['Odds Ratios'] = params
conf.columns = ['Lower Conf. Int.', 'Upper Conf. Int.', 'Odds Ratios']
np.exp(conf)
Explanation: The four plots above show the following trends:
- White people try cannabis more than non-white people
- Men try cannabis more than women
- Younger people try cannabis more than older people
- People from richer families try cannabis more than those from poorer families
Logistic regression model
The plots showed the direction of a potential relationship. But a rigorous statistical test has to be carried out to confirm the four previous hypotheses.
The following code will test a logistic regression model on our hypotheses.
End of explanation |
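To read the odds-ratio table above: exponentiating a logit coefficient gives an odds ratio, and the logistic function maps log-odds back to a probability. A small sketch with a made-up coefficient (not a value estimated by this model):

```python
import math

beta = 0.7                    # hypothetical logit coefficient
odds_ratio = math.exp(beta)   # multiplicative change in the odds per unit increase

def logistic(z):
    # Inverse of the logit: converts log-odds into a probability
    return 1.0 / (1.0 + math.exp(-z))

print(round(odds_ratio, 3), round(logistic(beta), 3))
```

So a coefficient of 0.7 would roughly double the odds, which is the same transformation np.exp(conf) applies to the fitted parameters above.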
6,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$\newcommand{\xv}{\mathbf{x}}
\newcommand{\Xv}{\mathbf{X}}
\newcommand{\piv}{\mathbf{\pi}}
\newcommand{\yv}{\mathbf{y}}
\newcommand{\Yv}{\mathbf{Y}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\av}{\mathbf{a}}
\newcommand{\Wv}{\mathbf{W}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\gv}{\mathbf{g}}
\newcommand{\Hv}{\mathbf{H}}
\newcommand{\dv}{\mathbf{d}}
\newcommand{\Vv}{\mathbf{V}}
\newcommand{\vv}{\mathbf{v}}
\newcommand{\tv}{\mathbf{t}}
\newcommand{\Tv}{\mathbf{T}}
\newcommand{\Sv}{\mathbf{S}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\Zv}{\mathbf{Z}}
\newcommand{\Norm}{\mathcal{N}}
\newcommand{\muv}{\boldsymbol{\mu}}
\newcommand{\sigmav}{\boldsymbol{\sigma}}
\newcommand{\phiv}{\boldsymbol{\phi}}
\newcommand{\Phiv}{\boldsymbol{\Phi}}
\newcommand{\Sigmav}{\boldsymbol{\Sigma}}
\newcommand{\Lambdav}{\boldsymbol{\Lambda}}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\argmax}[1]{\underset{#1}{\operatorname{argmax}}}
\newcommand{\argmin}[1]{\underset{#1}{\operatorname{argmin}}}
\newcommand{\dimensionbar}[1]{\underset{#1}{\operatorname{|}}}
$
Hierarchical Clustering
Hierarchical clustering is often used to construct dendrograms. We will see an example below.
The methods are straightforward. The similarity between pairs of samples is usually related to the Euclidean distance between them. In agglomerative clustering, initially each sample is in a unique cluster. Then, the most similar two clusters are merged. This continues until a single cluster results that contains all samples. The distance between two clusters, $C_i$ and $C_j$, can be determined by the single-link method
$$
d(C_i,C_j) = \min_{\xv\in C_i, \yv\in C_j} d(\xv,\yv)
$$
or complete-link method
$$
d(C_i,C_j) = \max_{\xv\in C_i, \yv\in C_j} d(\xv,\yv)
$$
where $d(\xv,\yv)$ is the Euclidean distance between $\xv$ and $\yv$.
In divisive clustering, all samples are initially in one cluster, which is successively split until all samples are in unique clusters. We will use agglomerative clustering, as it often results in more compact dendrograms.
Step1: Let's represent clusters as a list of sample matrices, each matrix containing samples from one cluster. Initially, all samples are in their own clusters. Let's use the Old Faithful data to develop our implementation.
Step2: Now we need the complete-linkage cluster distance function.
Step3: So, the maximum square distance between $C_1$ and $C_2$ is 50.
Step4: All that is left is a way to identify to two clusters with the minimum distance.
Step5: or
Step6: So, clusters at indices 0 and 1 are closest. We can merge these two using np.vstack. Now we are ready to write the function.
Step7: Now for a simple, but very inefficient, implementation of agglomerative clustering.
Step8: How might we make this more efficient?
Maybe if we compute the pairwise squared distance between data points once! Then clusters are defined by indices into this distance matrix.
Step9: What else could you do to speed this up?
Let's try another data set, this time from Finland. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: $\newcommand{\xv}{\mathbf{x}}
\newcommand{\Xv}{\mathbf{X}}
\newcommand{\piv}{\mathbf{\pi}}
\newcommand{\yv}{\mathbf{y}}
\newcommand{\Yv}{\mathbf{Y}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\av}{\mathbf{a}}
\newcommand{\Wv}{\mathbf{W}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\gv}{\mathbf{g}}
\newcommand{\Hv}{\mathbf{H}}
\newcommand{\dv}{\mathbf{d}}
\newcommand{\Vv}{\mathbf{V}}
\newcommand{\vv}{\mathbf{v}}
\newcommand{\tv}{\mathbf{t}}
\newcommand{\Tv}{\mathbf{T}}
\newcommand{\Sv}{\mathbf{S}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\Zv}{\mathbf{Z}}
\newcommand{\Norm}{\mathcal{N}}
\newcommand{\muv}{\boldsymbol{\mu}}
\newcommand{\sigmav}{\boldsymbol{\sigma}}
\newcommand{\phiv}{\boldsymbol{\phi}}
\newcommand{\Phiv}{\boldsymbol{\Phi}}
\newcommand{\Sigmav}{\boldsymbol{\Sigma}}
\newcommand{\Lambdav}{\boldsymbol{\Lambda}}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\argmax}[1]{\underset{#1}{\operatorname{argmax}}}
\newcommand{\argmin}[1]{\underset{#1}{\operatorname{argmin}}}
\newcommand{\dimensionbar}[1]{\underset{#1}{\operatorname{|}}}
$
Hierarchical Clustering
Hierarchical clustering is often used to construct dendrograms. We will see an example below.
The methods are straightforward. The similarity between pairs of samples is usually related to the Euclidean distance between them. In agglomerative clustering, initially each sample is in a unique cluster. Then, the most similar two clusters are merged. This continues until a single cluster results that contains all samples. The distance between two clusters, $C_i$ and $C_j$, can be determined by the single-link method
$$
d(C_i,C_j) = \min_{\xv\in C_i, \yv\in C_j} d(\xv,\yv)
$$
or complete-link method
$$
d(C_i,C_j) = \max_{\xv\in C_i, \yv\in C_j} d(\xv,\yv)
$$
where $d(\xv,\yv)$ is the Euclidean distance between $\xv$ and $\yv$.
In divisive clustering, all samples are initially in one cluster, which is successively split until all samples are in unique clusters. We will use agglomerative clustering, as it often results in more compact dendrograms.
End of explanation
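Before implementing it on real data, the two linkage rules above can be illustrated on two tiny example clusters, using pairwise squared Euclidean distances computed via broadcasting:

```python
import numpy as np

Ci = np.array([[1.0, 2.0], [2.0, 3.0]])
Cj = np.array([[2.0, 2.0], [6.0, 7.0]])

# All pairwise squared distances between points of the two clusters
d2 = np.sum((Ci[:, np.newaxis, :] - Cj) ** 2, axis=2)

single_link = d2.min()     # distance of the closest pair of points
complete_link = d2.max()   # distance of the farthest pair of points
print(single_link, complete_link)
```

The single-link distance takes the minimum over the pairwise matrix and the complete-link distance takes the maximum — the same matrix the notebook builds below with np.vstack and broadcasting.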
!wget http://www.cs.colostate.edu/~anderson/cs480/notebooks/oldfaithful.csv
data = np.loadtxt('oldfaithful.csv')
data.shape
plt.scatter(data[:,0],data[:,1]);
plt.xlabel('Duration');
plt.ylabel('Interval');
clusters = [d for d in data]
clusters[:5]
Explanation: Let's represent clusters as a list of sample matrices, each matrix containing samples from one cluster. Initially, all samples are in their own clusters. Let's use the Old Faithful data to develop our implementation.
End of explanation
C1 = [np.array([[1,2]]), np.array([[2,3]])]
C2 = [np.array([[2,2]]), np.array([[6,7]])]
C1,C2
allC1 = np.vstack((C1))
allC2 = np.vstack((C2))
allC1,allC2
allC1[:,np.newaxis,:] - allC2
np.sum((allC1[:,np.newaxis,:] - allC2)**2,axis=2)
np.max(np.sum((allC1[:,np.newaxis,:] - allC2)**2,axis=2))
Explanation: Now we need the complete-linkage cluster distance function.
End of explanation
def clusterDistance(Ci,Cj):
allCi = np.vstack((Ci))
allCj = np.vstack((Cj))
return np.max(np.sum((allCi[:,np.newaxis,:] - allCj)**2, axis=2))
clusterDistance(C1,C2)
Explanation: So, the maximum squared distance between $C_1$ and $C_2$ is 50.
End of explanation
C3 = [np.array([[6,4]]), np.array([[8,9]])]
clusters = [C1, C2, C3]
clusters
for i in range(len(clusters)-1):
for j in range(i+1,len(clusters)):
print(i,j)
dists = []
for i in range(len(clusters)-1):
for j in range(i+1,len(clusters)):
dists.append([i,j,clusterDistance(clusters[i],clusters[j])])
dists
Explanation: All that is left is a way to identify the two clusters with the minimum distance.
End of explanation
[[i,j,clusterDistance(clusters[i],clusters[j])] for i in range(len(clusters)-1) for j in range(i+1,len(clusters))]
def clusterDistance(Ci,Cj):
'''Ci and Cj are two clusters, each being a dict with 'X' and 'label' keys'''
return np.mean(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
# return np.min(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
# return np.max(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
Explanation: or
End of explanation
def mergeClusters(Ci,Cj, k):
return {'X': np.vstack((Ci['X'], Cj['X'])),
'label': k}
Explanation: So, clusters at indices 0 and 1 are closest. We can merge these two using np.vstack. Now we are ready to write the function.
End of explanation
def agglomerative(X,clusterDistanceF, nClusters):
labels = np.zeros((X.shape[0]))
# clusters is a list of dicts, each with an 'X' sample matrix and a 'label'
clusters = [ {'X':X[i:i+1,:], 'label':i} for i in range(X.shape[0]) ]
k = X.shape[0] - 1
while len(clusters) > nClusters:
dists = np.array( [[i,j,clusterDistanceF(clusters[i],clusters[j])] for i in range(len(clusters)-1) for j in range(i+1,len(clusters))] )
whichClosest = np.argmin(dists[:,-1])
closest = dists[whichClosest,:2]
i,j = closest.astype(int)
# Merge them
k += 1
clusters[i] = {'X': np.vstack((clusters[i]['X'],clusters[j]['X'])),
'label': k}
clusters.pop(j)
print(len(clusters), end=' ')
return clusters
data.shape
clusters = agglomerative(data,clusterDistance, 2)
clusters
for i in range(len(clusters)):
cluster = clusters[i]['X']
plt.scatter(cluster[:,0], cluster[:,1])
plt.xlabel('Duration');
plt.ylabel('Interval');
Explanation: Now for a simple, but very inefficient, implementation of agglomerative clustering.
End of explanation
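As a side note, the nested index loops used to enumerate cluster pairs are equivalent to itertools.combinations over the upper triangle:

```python
from itertools import combinations

n = 4
# Upper-triangle pairs via the two styles.
pairs_loops = [(i, j) for i in range(n - 1) for j in range(i + 1, n)]
pairs_comb = list(combinations(range(n), 2))

print(pairs_comb)
```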
dataDists = np.sum((data[:,np.newaxis,:] - data)**2, axis=2)
dataDists.shape
def clusterDistance(Ci, Cj, dataDists):
'''Ci and Cj are two clusters, each a dict whose 'X' holds a list of sample indices into dataDists'''
return np.mean( np.array([dataDists[i,j] for i in Ci['X'] for j in Cj['X']]) )
# return np.min(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
# return np.max(np.sum((Ci['X'][:,np.newaxis,:] - Cj['X'])**2, axis=2))
def agglomerative(X,clusterDistanceF, nClusters):
dataDists = np.sum((X[:,np.newaxis,:] - X)**2, axis=2)
labels = np.zeros((X.shape[0]))
# clusters is a list of dicts, each holding a list of sample indices ('X') and a label
clusters = [ {'X':[i], 'label':i} for i in range(X.shape[0]) ]
k = X.shape[0] - 1
while len(clusters) > nClusters:
dists = np.array( [[i,j,clusterDistanceF(clusters[i],clusters[j], dataDists)] for i in range(len(clusters)-1) for j in range(i+1,len(clusters))] )
whichClosest = np.argmin(dists[:,-1])
closest = dists[whichClosest,:2]
i,j = closest.astype(int)
# Merge them
k += 1
clusters[i] = {'X': clusters[i]['X'] + clusters[j]['X'],
'label': k}
clusters.pop(j)
print(len(clusters), end=' ')
return clusters
clusters = agglomerative(data,clusterDistance, 2)
for i in range(len(clusters)):
cluster = clusters[i]['X']
coords = np.array([data[c] for c in cluster])
plt.scatter(coords[:,0], coords[:,1])
plt.xlabel('Duration');
plt.ylabel('Interval');
Explanation: How might we make this more efficient?
Maybe if we compute the pairwise squared distance between data points once! Then clusters are defined by indices into this distance matrix.
End of explanation
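One further speed-up along the same lines (a hedged sketch, not part of the notebook): once the distances are precomputed, the double list comprehension inside clusterDistance can be replaced by np.ix_ submatrix indexing:

```python
import numpy as np

# Hypothetical precomputed pairwise squared distances for 5 samples.
rng = np.random.default_rng(0)
pts = rng.random((5, 2))
dataDists = np.sum((pts[:, np.newaxis, :] - pts) ** 2, axis=2)

ci, cj = [0, 2], [1, 3, 4]   # index lists, as stored in each cluster's 'X'
# Mean distance the slow way and via a vectorized submatrix selection.
loop_mean = np.mean([dataDists[i, j] for i in ci for j in cj])
vec_mean = dataDists[np.ix_(ci, cj)].mean()
print(loop_mean, vec_mean)
```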
data2 = np.loadtxt('userslocations.csv')
data2.shape
clusters = agglomerative(data2,clusterDistance, 4)
plt.figure(figsize=(20,8))
for i in range(len(clusters)):
cluster = clusters[i]['X']
coords = np.array([data2[c] for c in cluster])
plt.scatter(coords[:,0], coords[:,1])
plt.xlabel('Interval (minutes)')
plt.ylabel('Duration (minutes)')
Explanation: What else could you do to speed this up?
Let's try another data set, this time from Finland.
End of explanation |
6,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
全体の流れ
分析の前準備
BigQueryから収集したデータの抽出
データを扱いやすく整形
統計情報を確認
データの可視化
データの特徴
外気温は比較的一定
CPU温度はよく変化する
分析の実施
CPU温度が外気温を越えて熱くならないようにコントロールしたい。そのためにどうデータを扱うかのサンプルを確認する。ここで作ったモデルを実際にプロダクション環境に組み込めることを想定して作りましょう。
分析の前準備
BigQueryからのデータ抽出とデータ整形
Step1: 統計情報の確認
Step2: 時系列データとして可視化する
Step3: 散布図として可視化する
Step4: 相関係数を確認する
Step5: 分析の実施
Step1:最新のCPU温度と外気温を比較する
時間順にソートされた最新のデータを使って、CPU温度と外気温を比較する方法
Step6: この方法の問題点
温度の大小関係が少しでも変わるとすぐに判定が変わってしまう。もう少しなだらかに判定するロジックとしたい
Step2:最新からN個のデータを使ってCPU温度と外気温を比較する
時間順にソートされたN個のデータを使って、CPU温度が外気温より大きくなっている場合の数をカウントする方法
Step7: この方法の問題点
Step1よりは確実に良いロジックとなっているが、引き続き、閾値の設定がややシビアに思える。もう少しなだからな判定ロジックとしたい。
Step3:移動平均使って比較する | Python Code:
%%bq query -n requests
SELECT datetime, cpu_temperature, temperature
FROM `soracom_handson.raspi_env`
order by datetime asc
import google.datalab.bigquery as bq
import pandas as pd
df_from_bq = requests.execute(output_options=bq.QueryOutput.dataframe()).result()
# Inspect the data
df_from_bq
# The data is returned as strings, so convert the types
df_from_bq['datetime'] = pd.to_datetime(df_from_bq['datetime'])
df_from_bq['cpu_temperature'] = df_from_bq['cpu_temperature'].astype('float')
df_from_bq['temperature'] = df_from_bq['temperature'].astype('float')
# Turn it into time-series data
df = df_from_bq[['cpu_temperature', 'temperature']]
df.index = df_from_bq['datetime'].values
# Inspect the data
df
Explanation: Overall flow
Preparation for the analysis
Extract the collected data from BigQuery
Reshape the data into an easy-to-handle form
Check the summary statistics
Visualize the data
Characteristics of the data
The outside temperature is relatively constant
The CPU temperature changes frequently
Performing the analysis
We want to control the CPU temperature so that it does not get hotter than the outside temperature. Here we look at samples of how to handle the data for that purpose. Let's build the model assuming that it can be embedded into an actual production environment.
Preparation for the analysis
Data extraction from BigQuery and data reshaping
End of explanation
df.describe()
Explanation: Checking the summary statistics
End of explanation
df.plot(y=['cpu_temperature', 'temperature'], figsize=(16,4), alpha=0.5)
Explanation: Visualize as time-series data
End of explanation
df.plot(kind='scatter', x='cpu_temperature', y='temperature', c='cpu_temperature', cmap='winter')
Explanation: Visualize as a scatter plot
End of explanation
# Quantitatively confirm the correlation that was qualitatively visible in the scatter plot
import numpy as np
np.corrcoef(df['cpu_temperature'], df['temperature'])
Explanation: Check the correlation coefficient
End of explanation
df.tail(1)['cpu_temperature'] > df.tail(1)['temperature']
Explanation: Performing the analysis
Step1:Compare the latest CPU temperature with the outside temperature
A method that compares the latest CPU temperature with the outside temperature, using data sorted by time
End of explanation
N = 10
threshold = 8
df_N = df.tail(N)
sum(df_N['cpu_temperature'] > df_N['temperature']) > threshold
Explanation: Problems with this method
If the order of the two temperatures changes even slightly, the decision flips immediately. We would like a logic that makes a somewhat smoother decision.
Step2:Compare the CPU temperature with the outside temperature using the latest N data points
A method that counts, over the N most recent time-sorted data points, the number of cases where the CPU temperature is higher than the outside temperature
End of explanation
# Moving average over a 20-sample window
df.rolling(window=20, center=False).mean()
# Use the latest value of the 10-term moving average as the comparison input
moving_average = df.rolling(window=10, center=False).mean().tail(1)
moving_average['cpu_temperature'] > moving_average['temperature']
Explanation: Problems with this method
This is certainly better logic than Step 1, but the threshold setting still seems somewhat strict. We would like an even smoother decision logic.
Step3:Compare using a moving average
End of explanation |
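One further smoothing option beyond the simple rolling mean — just a sketch of a possible next step, not part of the notebook above — is the exponentially weighted moving average that pandas exposes as ewm; with adjust=False each value is alpha * x_t + (1 - alpha) * m_{t-1}:

```python
import pandas as pd

# Tiny toy series: the EWMA reacts gradually to the jump from 0 to 10.
s = pd.Series([0.0, 10.0, 10.0])
m = s.ewm(alpha=0.5, adjust=False).mean()
print(m.tolist())  # [0.0, 5.0, 7.5]
```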
6,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assigning products to categories
Initialization
Step1: Loading the product data
Field descriptions
Step2: Loading all possible product names
The same product may be named differently by different suppliers.
Field descriptions
Step3: Loading all possible product category names
Different suppliers name the categories differently.
Field descriptions
Step4: Moving the product and category names into the products DataFrame
Step5: Feature extraction
Step6: Clustering the products
The products need to be clustered so that the clusters can then be distributed across categories.
Translate and work through the description of the sklearn.cluster methods
import os
import sys
from django.utils import timezone
sys.path.append('/home/ubuntu/anodos.ru/anodos/')
os.environ['DJANGO_SETTINGS_MODULE'] = 'anodos.settings'
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import catalog.runner
from catalog.models import *
Explanation: Assigning products to categories
Initialization
End of explanation
# Load the products of a single vendor
vendor_alias = 'lenovo'
products = Product.objects.get_df(vendor__alias = vendor_alias, double = None)
print(products.info())
products.head()
Explanation: Loading the product data
Field descriptions:
id - identifier in the database;
vendor - vendor identifier;
category - product category (identifier);
unit - unit of measurement;
double - if the product is a duplicate of an existing one, a reference to the original;
price_type - price type (usually retail);
currency - currency identifier of the price (currently only the rouble);
name - product name;
article - product article number (unique per vendor);
alias - the vendor, article number and product name normalized into a form optimized for search indexing;
description - product description (to be implemented later);
edited - whether the product has been edited manually;
tested - the object has been checked manually and all of its data is correct;
for_export - whether to export the information to external systems, for example Yandex.Market (to be implemented later);
on_stock - quantity available in warehouses;
on_transit - quantity available in transit;
on_factory - quantity available to order;
price - the price;
fixed - whether the price is fixed (True for some software products or for a promotional product);
state - object status (if False, the object is not used in any processing);
created - date and time the object was created in the database;
modified - date of the object's last modification.
End of explanation
input_names = ProductInputName.objects.get_df(product__vendor__alias = vendor_alias,
product__double = None)
print(input_names.info())
input_names.head()
Explanation: Loading all possible product names
The same product may be named differently by different suppliers.
Field descriptions:
id - identifier in the database;
product - the product this name corresponds to;
name - the name;
state - status;
created - creation date and time;
modified - date and time of the last edit.
End of explanation
input_categories = ProductInputCategory.objects.get_df(product__vendor__alias = vendor_alias,
product__double = None)
print(input_categories.info())
input_categories.head()
Explanation: Loading all possible product category names
Different suppliers name the categories differently.
Field descriptions:
id - identifier in the database;
product - the product this category name corresponds to;
category - the category name;
state - status;
created - creation date and time;
modified - date and time of the last edit.
End of explanation
def get_input_names(product):
filtered_names = input_names[input_names['product'] == product]
if len(filtered_names):
names = []
for i, row in filtered_names.iterrows():
names.append(row['name'])
names = ' '.join(names)
else:
names = ''
return names
products['input_names'] = products['id'].map(get_input_names)
print(products.info())
def get_input_categories(product):
filtered_categories = input_categories[input_categories['product'] == product]
if len(filtered_categories):
categories = []
for i, row in filtered_categories.iterrows():
if row['category']:
categories.append(row['category'])
categories = ' '.join(categories)
else:
categories = ''
return categories
products['input_categories'] = products['id'].map(get_input_categories)
print(products.info())
products.head()
Explanation: Moving the product and category names into the products DataFrame
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
# Compute features from the current names
names_td_idf = TfidfVectorizer()
X_names = names_td_idf.fit_transform(products['name'])
print('Size of the source matrix:', products.shape)
print('Size of the feature matrix:', X_names.shape)
# Compute features from the input names
input_names_td_idf = TfidfVectorizer()
X_input_names = input_names_td_idf.fit_transform(products['input_names'])
print('Size of the source matrix:', products.shape)
print('Size of the feature matrix:', X_input_names.shape)
# Compute features from the input categories
input_categories_td_idf = TfidfVectorizer()
X_input_categories = input_categories_td_idf.fit_transform(products['input_categories'])
print('Size of the source matrix:', products.shape)
print('Size of the feature matrix:', X_input_categories.shape)
from scipy.sparse import coo_matrix, hstack
# Concatenate the feature matrices
X = hstack([X_names, X_input_names, X_input_categories]).toarray()
print(X.shape)
Explanation: Feature extraction
End of explanation
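TfidfVectorizer in miniature — a hedged sketch on made-up product strings, showing the shape of the sparse feature matrix it produces:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy product names, one "document" per product.
docs = ['lenovo thinkpad x1', 'lenovo ideapad', 'thinkpad dock']
vec = TfidfVectorizer()
X = vec.fit_transform(docs)
print(X.shape)  # (3 documents, one column per distinct token)
```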
from sklearn.cluster import DBSCAN
from sklearn import metrics
# Choose the optimal eps and min_samples parameters
epses = [.1, .2, .3, .4, .5, .6, .7, .8, .9]
labels = []
counts_of_clasters = []
counts_of_not_clasters = []
for eps in epses:
db = DBSCAN(eps=eps, min_samples=2, metric='cosine')
db.fit(X)
labels.append(db.labels_)
uniques_labels = set()
count = 0
for label in db.labels_:
uniques_labels.add(label)
if label == -1:
count += 1
counts_of_clasters.append(len(uniques_labels))
counts_of_not_clasters.append(count)
print(counts_of_clasters)
plt.plot(epses, counts_of_clasters)
print(counts_of_not_clasters)
plt.plot(epses, counts_of_not_clasters)
from sklearn.cluster import DBSCAN
from sklearn import metrics
db = DBSCAN(eps=0.35, min_samples=2, metric='cosine')
db.fit(X)
labels.append(db.labels_)
uniques_labels = set()
count = 0
for label in db.labels_:
uniques_labels.add(label)
if label == -1:
count += 1
print('Number of clusters:', len(uniques_labels))
print('Points outside any cluster:', count)
products['cluster'] = db.labels_
products.head()
for cluster in uniques_labels:
print(cluster, products[products.cluster == cluster].shape[0])
for i, row in products[products.cluster == cluster].iterrows():
print(row['name'], row['description'])
print('')
Explanation: Clustering the products
The products need to be clustered so that the clusters can then be distributed across categories.
Translate and work through the description of the sklearn.cluster methods
End of explanation |
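A self-contained DBSCAN miniature on toy 2-D points (not the product features above), showing how the returned labels mark clusters and, with -1, the points outside any cluster:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two tight groups plus one lone outlier.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0],
              [9.0, 0.0]])
labels = DBSCAN(eps=0.5, min_samples=2).fit(X).labels_
print(labels)  # two clusters plus one -1 noise point
```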
6,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Winpython Default checker
Step1: Compilers
Step2: Cython (a compiler for writing C extensions for the Python language)
WinPython 3.5 and 3.6 users may not have mingwpy available, and so need "VisualStudio C++ Community Edition 2015" https
Step5: Graphics
Step6: Ipython Notebook
Step7: Mathematical
Step8: DataFrames
Step9: Split / Apply / Combine
Split your data into multiple independent groups.
Apply some function to each group.
Combine your groups back into a single data object.
Step10: Web Scraping
Step11: Operations Research
Step12: Deep Learning
Step13: SQL tools
Step14: Qt libraries Demo
See Dedicated Qt Libraries Demo
Wrap-up | Python Code:
import warnings
#warnings.filterwarnings("ignore", category=DeprecationWarning)
#warnings.filterwarnings("ignore", category=UserWarning)
#warnings.filterwarnings("ignore", category=FutureWarning)
# warnings.filterwarnings("ignore") # would silence all warnings
%matplotlib inline
# use %matplotlib widget for the adventurous
Explanation: Winpython Default checker
End of explanation
# checking Numba JIT toolchain
import numpy as np
image = np.zeros((1024, 1536), dtype = np.uint8)
#from pylab import imshow, show
import matplotlib.pyplot as plt
from timeit import default_timer as timer
from numba import jit
@jit
def create_fractal(min_x, max_x, min_y, max_y, image, iters , mandelx):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandelx(real, imag, iters)
image[y, x] = color
@jit
def mandel(x, y, max_iters):
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z*z + c
if (z.real*z.real + z.imag*z.imag) >= 4:
return i
return max_iters
# Numba speed
start = timer()
create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20 , mandel)
dt = timer() - start
fig = plt.figure()
print ("Mandelbrot created by numba in %f s" % dt)
plt.imshow(image)
plt.show()
Explanation: Compilers: Numba and Cython
Requirement
To get Cython working, WinPython 3.7+ users should install "Microsoft Visual C++ Build Tools 2017" (visualcppbuildtools_full.exe, a 4 GB installation) at https://beta.visualstudio.com/download-visual-studio-vs/
To get Numba working, non-Windows-10 users may have to install the "Microsoft Visual C++ Redistributable for Visual Studio 2017" (vc_redist) at https://beta.visualstudio.com/download-visual-studio-vs/
Thanks to recent progress, Visual Studio 2017/2018/2019 are cross-compatible now
Compiler toolchains
Numba (a JIT Compiler)
End of explanation
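The escape-time rule that mandel implements can be checked in plain Python, since @jit accelerates the function without changing its semantics:

```python
def mandel_py(x, y, max_iters):
    # Same escape-time iteration as the jitted mandel above, minus the JIT.
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) >= 4:
            return i
    return max_iters

# c = 0 never escapes; c = 2 + 2j escapes on the first iteration.
print(mandel_py(0.0, 0.0, 20), mandel_py(2.0, 2.0, 20))
```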
# Cython + Mingwpy compiler toolchain test
%load_ext Cython
%%cython -a
# with %%cython -a , full C-speed lines are shown in white, slowest python-speed lines are shown in dark yellow lines
# ==> put your cython rewrite effort on dark yellow lines
def create_fractal_cython(min_x, max_x, min_y, max_y, image, iters , mandelx):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandelx(real, imag, iters)
image[y, x] = color
def mandel_cython(x, y, max_iters):
cdef int i
cdef double cx, cy , zx, zy
cx , cy = x, y
zx , zy =0 ,0
for i in range(max_iters):
zx , zy = zx*zx - zy*zy + cx , zx*zy*2 + cy
if (zx*zx + zy*zy) >= 4:
return i
return max_iters
#Cython speed
start = timer()
create_fractal_cython(-2.0, 1.0, -1.0, 1.0, image, 20 , mandel_cython)
dt = timer() - start
fig = plt.figure()
print ("Mandelbrot created by cython in %f s" % dt)
plt.imshow(image)
Explanation: Cython (a compiler for writing C extensions for the Python language)
WinPython 3.5 and 3.6 users may not have mingwpy available, and so need "VisualStudio C++ Community Edition 2015" https://www.visualstudio.com/downloads/download-visual-studio-vs#d-visual-c
End of explanation
# Matplotlib 3.4.1
# for more examples, see: http://matplotlib.org/gallery.html
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
ax = plt.figure().add_subplot(projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
# Plot the 3D surface
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
# Plot projections of the contours for each dimension. By choosing offsets
# that match the appropriate axes limits, the projected contours will sit on
# the 'walls' of the graph
cset = ax.contourf(X, Y, Z, zdir='z', offset=-100, cmap=cm.coolwarm)
cset = ax.contourf(X, Y, Z, zdir='x', offset=-40, cmap=cm.coolwarm)
cset = ax.contourf(X, Y, Z, zdir='y', offset=40, cmap=cm.coolwarm)
ax.set_xlim(-40, 40)
ax.set_ylim(-40, 40)
ax.set_zlim(-100, 100)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
# Seaborn
# for more examples, see http://stanford.edu/~mwaskom/software/seaborn/examples/index.html
import seaborn as sns
sns.set()
df = sns.load_dataset("iris")
sns.pairplot(df, hue="species", height=1.5)
# altair-example
import altair as alt
alt.Chart(df).mark_bar().encode(
x=alt.X('sepal_length', bin=alt.Bin(maxbins=50)),
y='count(*):Q',
color='species:N',
#column='species',
).interactive()
# temporary warning removal
import warnings
import matplotlib as mpl
warnings.filterwarnings("ignore", category=mpl.cbook.MatplotlibDeprecationWarning)
# Holoviews
# for more example, see http://holoviews.org/Tutorials/index.html
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
dots = np.linspace(-0.45, 0.45, 11)
fractal = hv.Image(image)
layouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) +
fractal.sample(y=y) )
for y in np.linspace(0, 0.45,11)}
hv.HoloMap(layouts, kdims=['Y']).collate().cols(2)
# Bokeh 0.12.5
import numpy as np
from six.moves import zip
from bokeh.plotting import figure, show, output_notebook
N = 4000
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
colors = ["#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)]
output_notebook()
TOOLS="hover,crosshair,pan,wheel_zoom,box_zoom,reset,tap,save,box_select,poly_select,lasso_select"
p = figure(tools=TOOLS)
p.scatter(x,y, radius=radii, fill_color=colors, fill_alpha=0.6, line_color=None)
show(p)
# Datashader (holoviews+Bokeh)
import datashader as ds
import numpy as np
import holoviews as hv
from holoviews import opts
from holoviews.operation.datashader import datashade, shade, dynspread, spread, rasterize
from holoviews.operation import decimate
hv.extension('bokeh')
decimate.max_samples=1000
dynspread.max_px=20
dynspread.threshold=0.5
def random_walk(n, f=5000):
'''Random walk in a 2D space, smoothed with a filter of length f'''
xs = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum()
ys = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum()
xs += 0.1*np.sin(0.1*np.array(range(n-1+f))) # add wobble on x axis
xs += np.random.normal(0, 0.005, size=n-1+f) # add measurement noise
ys += np.random.normal(0, 0.005, size=n-1+f)
return np.column_stack([xs, ys])
def random_cov():
'''Random covariance for use in generating 2D Gaussian distributions'''
A = np.random.randn(2,2)
return np.dot(A, A.T)
np.random.seed(1)
points = hv.Points(np.random.multivariate_normal((0,0), [[0.1, 0.1], [0.1, 1.0]], (50000,)),label="Points")
paths = hv.Path([0.15*random_walk(10000) for i in range(10)], kdims=["u","v"], label="Paths")
decimate(points) + rasterize(points) + rasterize(paths)
ropts = dict(colorbar=True, tools=["hover"], width=350)
rasterize( points).opts(cmap="kbc_r", cnorm="linear").relabel('rasterize()').opts(**ropts).hist() + \
dynspread(datashade( points, cmap="kbc_r", cnorm="linear").relabel("datashade()"))
#bqplot
from IPython.display import display
from bqplot import (Figure, Map, Mercator, Orthographic, ColorScale, ColorAxis,
AlbersUSA, topo_load, Tooltip)
def_tt = Tooltip(fields=['id', 'name'])
map_mark = Map(scales={'projection': Mercator()}, tooltip=def_tt)
map_mark.interactions = {'click': 'select', 'hover': 'tooltip'}
fig = Figure(marks=[map_mark], title='Interactions Example')
display(fig)
# ipyleaflet (javascript library usage)
from ipyleaflet import (
Map, Marker, TileLayer, ImageOverlay, Polyline, Polygon,
Rectangle, Circle, CircleMarker, GeoJSON, DrawControl
)
from traitlets import link
center = [34.6252978589571, -77.34580993652344]
m = Map(center=[34.6252978589571, -77.34580993652344], zoom=10)
dc = DrawControl()
def handle_draw(self, action, geo_json):
print(action)
print(geo_json)
m
m
dc.on_draw(handle_draw)
m.add_control(dc)
%matplotlib widget
# Testing matplotlib interactions with a simple plot
import matplotlib.pyplot as plt
import numpy as np
# warning: you need to run %matplotlib widget a second time if it comes after a %matplotlib inline
%matplotlib widget
fig = plt.figure() #plt.figure(1)
plt.plot(np.sin(np.linspace(0, 20, 100)))
plt.show()
# plotnine: giving a taste of ggplot of R langage (formerly we were using ggpy)
from plotnine import ggplot, aes, geom_blank, geom_point, stat_smooth, facet_wrap, theme_bw
from plotnine.data import mtcars
ggplot(mtcars, aes(x='hp', y='wt', color='mpg')) + geom_point() +\
facet_wrap("~cyl") + theme_bw()
Explanation: Graphics: Matplotlib, Pandas, Seaborn, Holoviews, Bokeh, bqplot, ipyleaflet, plotnine
End of explanation
import IPython;IPython.__version__
# Audio Example : https://github.com/ipython/ipywidgets/blob/master/examples/Beat%20Frequencies.ipynb
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interactive
from IPython.display import Audio, display
def beat_freq(f1=220.0, f2=224.0):
max_time = 3
rate = 8000
times = np.linspace(0,max_time,rate*max_time)
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
print(f1, f2, abs(f1-f2))
display(Audio(data=signal, rate=rate))
try:
plt.plot(signal); #plt.plot(v.result);
except:
pass
return signal
v = interactive(beat_freq, f1=(200.0,300.0), f2=(200.0,300.0))
display(v)
# Networks graph Example : https://github.com/ipython/ipywidgets/blob/master/examples/Exploring%20Graphs.ipynb
%matplotlib inline
from ipywidgets import interact
import matplotlib.pyplot as plt
import networkx as nx
# wrap a few graph generation functions so they have the same signature
def random_lobster(n, m, k, p):
return nx.random_lobster(n, p, p / m)
def powerlaw_cluster(n, m, k, p):
return nx.powerlaw_cluster_graph(n, m, p)
def erdos_renyi(n, m, k, p):
return nx.erdos_renyi_graph(n, p)
def newman_watts_strogatz(n, m, k, p):
return nx.newman_watts_strogatz_graph(n, k, p)
@interact(n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001),
generator={'lobster': random_lobster,
'power law': powerlaw_cluster,
'Newman-Watts-Strogatz': newman_watts_strogatz,
u'Erdős-Rényi': erdos_renyi,
})
def plot_random_graph(n, m, k, p, generator):
g = generator(n, m, k, p)
nx.draw(g)
plt.title(generator.__name__)
plt.show()
Explanation: Ipython Notebook: Interactivity & other
End of explanation
# checking statsmodels
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import statsmodels.api as sm
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible
plt.rcParams['figure.figsize'] = (6.0, 4.0) # make plot larger in notebook
age = [data.exog['age'][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent")
ax.set_ylabel("Age")
plt.show()
# lmfit test (from http://nbviewer.ipython.org/github/lmfit/lmfit-py/blob/master/examples/lmfit-model.ipynb)
import numpy as np
import matplotlib.pyplot as plt
def decay(t, N, tau):
return N*np.exp(-t/tau)
t = np.linspace(0, 5, num=1000)
data = decay(t, 7, 3) + np.random.randn(*t.shape)
from lmfit import Model
model = Model(decay, independent_vars=['t'])
result = model.fit(data, t=t, N=10, tau=1)
fig = plt.figure() # necessary to separate from previous ploot with %matplotlib widget
plt.plot(t, data) # data
plt.plot(t, decay(t=t, **result.values), color='orange', linewidth=5) # best-fit model
Explanation: Mathematical: statsmodels, lmfit,
End of explanation
#Pandas
import pandas as pd
import numpy as np
idx = pd.date_range('2000', '2005', freq='d', closed='left')
datas = pd.DataFrame({'Color': [ 'green' if x> 1 else 'red' for x in np.random.randn(len(idx))],
'Measure': np.random.randn(len(idx)), 'Year': idx.year},
index=idx.date)
datas.head()
Explanation: DataFrames: Pandas, Dask
End of explanation
datas.query('Measure > 0').groupby(['Color','Year']).size().unstack()
Explanation: Split / Apply / Combine
Split your data into multiple independent groups.
Apply some function to each group.
Combine your groups back into a single data object.
End of explanation
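The three steps in miniature, on a tiny hand-written frame:

```python
import pandas as pd

df = pd.DataFrame({'Color': ['red', 'green', 'red', 'green'],
                   'Year': [2000, 2000, 2001, 2001],
                   'Measure': [1.0, 2.0, 3.0, 4.0]})
# split on Color, apply a sum to each group, combine into one Series
out = df.groupby('Color')['Measure'].sum()
print(out.to_dict())
```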
# checking Web Scraping: beautifulsoup and requests
import requests
from bs4 import BeautifulSoup
URL = 'http://en.wikipedia.org/wiki/Franklin,_Tennessee'
req = requests.get(URL, headers={'User-Agent' : "Mining the Social Web"})
soup = BeautifulSoup(req.text, "lxml")
geoTag = soup.find(True, 'geo')
if geoTag and len(geoTag) > 1:
lat = geoTag.find(True, 'latitude').string
lon = geoTag.find(True, 'longitude').string
print ('Location is at', lat, lon)
elif geoTag and len(geoTag) == 1:
(lat, lon) = geoTag.string.split(';')
(lat, lon) = (lat.strip(), lon.strip())
print ('Location is at', lat, lon)
else:
print ('No location found')
Explanation: Web Scraping: Beautifulsoup
End of explanation
# Pulp example : minimizing the weight to carry 99 pennies
# (from Philip I Thomas)
# see https://www.youtube.com/watch?v=UmMn-N5w-lI#t=995
# Import PuLP modeler functions
from pulp import *
# The prob variable is created to contain the problem data
prob = LpProblem("99_pennies_Problem",LpMinimize)
# Variables represent how many of each coin we want to carry
pennies = LpVariable("Number_of_pennies",0,None,LpInteger)
nickels = LpVariable("Number_of_nickels",0,None,LpInteger)
dimes = LpVariable("Number_of_dimes",0,None,LpInteger)
quarters = LpVariable("Number_of_quarters",0,None,LpInteger)
# The objective function is added to 'prob' first
# we want to minimize (LpMinimize) this
prob += 2.5 * pennies + 5 * nickels + 2.268 * dimes + 5.670 * quarters, "Total_coins_Weight"
# We want exactly 99 cents
prob += 1 * pennies + 5 * nickels + 10 * dimes + 25 * quarters == 99, ""
# The problem data is written to an .lp file
prob.writeLP("99cents.lp")
prob.solve()
# print ("status",LpStatus[prob.status] )
print ("Minimal Weight to carry exactly 99 pennies is %s grams" % value(prob.objective))
# Each of the variables is printed with it's resolved optimum value
for v in prob.variables():
print (v.name, "=", v.varValue)
Explanation: Operations Research: Pulp
End of explanation
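As a solver-free cross-check of the LP above, a brute-force search over all coin combinations summing to 99 cents (using the same per-coin weights as the objective) finds the lightest one:

```python
# Coin weights in grams, matching the objective coefficients of the LP above.
W = {'penny': 2.5, 'nickel': 5.0, 'dime': 2.268, 'quarter': 5.670}

best = None
for q in range(4):                      # at most 3 quarters fit in 99 cents
    for d in range(10):
        for n in range(20):
            p = 99 - 25 * q - 10 * d - 5 * n
            if p < 0:
                continue
            w = p * W['penny'] + n * W['nickel'] + d * W['dime'] + q * W['quarter']
            if best is None or w < best[0]:
                best = (w, p, n, d, q)

print('lightest 99 cents: %.3f g with %d pennies, %d nickels, %d dimes, %d quarters' % best)
```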
# checking sympy
import sympy
a, b =sympy.symbols('a b')
e=(a+b)**5
e.expand()
Explanation: Deep Learning: see tutorial-first-neural-network-python-keras
Symbolic Calculation: sympy
End of explanation
# checking Ipython-sql, sqlparse, SQLalchemy
%load_ext sql
%%sql sqlite:///.baresql.db
DROP TABLE IF EXISTS writer;
CREATE TABLE writer (first_name, last_name, year_of_death);
INSERT INTO writer VALUES ('William', 'Shakespeare', 1616);
INSERT INTO writer VALUES ('Bertold', 'Brecht', 1956);
SELECT * , sqlite_version() as sqlite_version from Writer order by Year_of_death
# checking baresql
from __future__ import print_function, unicode_literals, division # line needed only if Python2.7
from baresql import baresql
bsql = baresql.baresql(connection="sqlite:///.baresql.db")
bsqldf = lambda q: bsql.df(q, dict(globals(),**locals()))
users = ['Alexander', 'Billy', 'Charles', 'Danielle', 'Esmeralda', 'Franz', 'Greg']
# We use the python 'users' list like a SQL table
sql = "select 'Welcome ' || c0 || ' !' as say_hello, length(c0) as name_length from users$$ where c0 like '%a%' "
bsqldf(sql)
# Transfering Datas to sqlite, doing transformation in sql, going back to Pandas and Matplotlib
bsqldf('''
select Color, Year, count(*) as size
from datas$$
where Measure > 0
group by Color, Year'''
).set_index(['Year', 'Color']).unstack().plot(kind='bar')
# checking db.py
from db import DB
db=DB(dbtype="sqlite", filename=".baresql.db")
db.query("select sqlite_version() as sqlite_version ;")
db.tables
# checking sqlite_bro: this should launch a separate non-browser window with sqlite_bro's welcome
!cmd start cmd /C sqlite_bro
# pyodbc or pypyodbc or ceODBC
try:
import pyodbc
except ImportError:
import pypyodbc as pyodbc # on PyPy, there is no pyodbc currently
# look for pyodbc providers
sources = pyodbc.dataSources()
dsns = list(sources.keys())
sl = [' %s [%s]' % (dsn, sources[dsn]) for dsn in dsns]
print("pyodbc Providers: (beware 32/64 bit driver and python version must match)\n", '\n'.join(sl))
# pythonnet
import clr
clr.AddReference("System.Data")
clr.AddReference('System.Data.Common')
import System.Data.OleDb as ADONET
import System.Data.Odbc as ODBCNET
import System.Data.Common as DATACOM
table = DATACOM.DbProviderFactories.GetFactoryClasses()
print("\n .NET Providers: (beware 32/64 bit driver and python version must match)")
for row in table.Rows:
print(" %s" % row[table.Columns[0]])
print(" ",[row[column] for column in table.Columns if column != table.Columns[0]])
Explanation: SQL tools: sqlite, Ipython-sql, sqlite_bro, baresql, db.py
End of explanation
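Underneath all of these tools sits the standard-library sqlite3 module; the same writer table can be built and queried in a few lines:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE writer (first_name, last_name, year_of_death)")
con.executemany("INSERT INTO writer VALUES (?, ?, ?)",
                [('William', 'Shakespeare', 1616), ('Bertold', 'Brecht', 1956)])
rows = con.execute("SELECT last_name FROM writer ORDER BY year_of_death").fetchall()
print(rows)
con.close()
```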
# optional scipy full test (takes up to 10 minutes)
#!cmd /C start cmd /k python.exe -c "import scipy;scipy.test()"
%pip list
!jupyter labextension list
!pip check
!pipdeptree
!pipdeptree -p pip
Explanation: Qt libraries Demo
See Dedicated Qt Libraries Demo
Wrap-up
End of explanation |
6,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test a Perceptual Phenomenon - Stroop data
Katie Truong
Step1: Introduction
In a Stroop experiment, participants are shown two lists of color-name words, printed in different ink colors, and asked to read aloud the ink color in which each word is printed. The two lists are the congruent one, where the ink colors agree with the words, and the incongruent one, where the ink colors do not match the words.
Hypothesis
Step2: Firstly, assume the distribution of stimulus time for both congruent and incongruent lists are approximately normal, so we would be able to use Z-test or t-test. Secondly, we can see that we have a limited number of samples (24 and under 30), so we should use t-test instead of Z-test. Last but not least, the stimulus times are in pairs for both congruent and incongruent, and not independent events, so a pairwise t-test is our best bet.
Our goal is to determine whether the condition of the list would affect the stimulus time, or the mean stimulus time of each list would be significant different from each other. In other word, let's $\mu_D$, $\mu_I$ and $\mu_C$ be the mean difference between the two lists, the mean of the incongruent and the mean of the congruent, we have
Step3: We can see that the incongruent group has higher mean, max and standard deviation comparing to the congruent list.
Now we can start plotting the data. The scatterplot of the congruent stimulus times against the incongruent ones shows a clear trend: in each pair, the incongruent stimulus time is higher than the congruent one.
Step4: Both the scatterplot and the histogram suggest that the incongruent list has longer stimulus times compared to the congruent list.
t-test
Step5: With the p-value = 4.1003e-08 < $\alpha = 0.05$, we can reject the null hypothesis.
Now consider the confidence interval | Python Code:
import pandas as pd
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import seaborn as sns
import math
Explanation: Test a Perceptual Phenomenon - Stroop data
Katie Truong
End of explanation
stroopdata = pd.read_csv("stroopdata.csv")
stroopdata
Explanation: Introduction
In a Stroop experiment, participants are shown two lists of color-name words, printed in different ink colors, and asked to read out loud the ink color in which each word was printed. The two lists are the congruent one, where the ink colors of the words agree with the words, and the incongruent one, where the ink colors don't match the words.
Hypothesis:
The independent variable in this case would be whether the list is a congruent or an incogruent one, while the dependent variable would be the time it takes for participants to name the ink colors.
Now looking at the dataset provided for the experiment:
End of explanation
stroopdata.describe()
Explanation: Firstly, assume the distributions of stimulus time for both the congruent and incongruent lists are approximately normal, so we would be able to use a Z-test or a t-test. Secondly, we can see that we have a limited number of samples (24, under 30), so we should use a t-test instead of a Z-test. Last but not least, the stimulus times come in pairs for the congruent and incongruent conditions and are not independent, so a paired t-test is our best bet.
Our goal is to determine whether the condition of the list affects the stimulus time, i.e. whether the mean stimulus times of the two lists are significantly different from each other. In other words, letting $\mu_D$, $\mu_I$ and $\mu_C$ be the mean difference between the two lists, the mean of the incongruent list and the mean of the congruent list, we have:
$$
H_0:\mu_D = \mu_I - \mu_C = 0
$$
$$
H_a:\mu_D = \mu_I - \mu_C \neq 0
$$
The null hypothesis is that there is no significant difference in the population average amount of time it takes to state the colors of the words in a congruent or incongruent condition. If $\mu_D$ is significantly different from 0, we can reject the null hypothesis and conclude that the stimulus time is affected by the condition of the list. Otherwise, if $\mu_D$ is not different from zero, then the condition of the list has no significant effect on the stimulus time. We use $\alpha = 0.05$.
Before conducting the t-test, it would be beneficial for us to explore the dataset and its descriptive statistics.
Data exploration:
Descriptive statistics of the stimulus times for both lists:
End of explanation
sns.lmplot(x = 'Congruent', y = 'Incongruent', data = stroopdata)
plt.title("Stimulus times of congruent and incongruent conditions")
plt.xlabel("Congruent list (sec)")
plt.ylabel("Incongruent list (sec)")
plt.show()
fig, ax = plt.subplots()
ax.hist([stroopdata['Congruent'], stroopdata['Incongruent']], label = ['Congruent', 'Incongruent'])
ax.legend()
plt.title('Histogram of stimulus time per condition')
plt.xlabel ("Stimulus time (sec)")
plt.ylabel("Frequency")
plt.show()
Explanation: We can see that the incongruent group has a higher mean, max and standard deviation compared to the congruent list.
Now we can start plotting the data. The scatterplot of stimulus time of the congruent list in respective with the incongruent list shows a clear trend of the dataset that, in a pair, the incongurent stimulus time would always be higher than the congruent stimulus time.
End of explanation
ttest = st.ttest_rel(stroopdata['Incongruent'], stroopdata['Congruent'])
print(ttest)
Explanation: Both the scatterplot and the histogram suggest that the incongruent list has longer stimulus times compared to the congruent list.
t-test:
We can perform the t-test easily using the scipy package:
End of explanation
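As a cross-check of `st.ttest_rel`, the paired t statistic can also be computed by hand from the per-pair differences. A minimal sketch on made-up numbers (not the real Stroop data):

```python
import math

# hypothetical paired samples, NOT the real Stroop measurements
congruent = [12.0, 14.5, 16.0, 11.2]
incongruent = [19.0, 18.5, 22.0, 17.2]

diffs = [i - c for i, c in zip(incongruent, congruent)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)   # sample variance (ddof = 1)
t_stat = mean_d / math.sqrt(var_d / n)
print(round(t_stat, 3))
```

A larger |t| means stronger evidence against the null hypothesis of zero mean difference.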
# Differences:
stroopdata['Difference'] = stroopdata['Incongruent'] - stroopdata['Congruent']
# Sum of all differences:
sumdiff = sum(stroopdata['Difference'])
# Sample std of the differences (ddof=1 for the unbiased estimate):
stddiff = np.std(stroopdata['Difference'], ddof=1)
# Average of all sum differences:
avgdiff = sumdiff/24
# 95% CI = avgdiff +- t_crit * std(diff)/sqrt(n)
lower_bound = avgdiff - 2.064*stddiff/np.sqrt(24)
upper_bound = avgdiff + 2.064*stddiff/np.sqrt(24)
print(lower_bound)
print(upper_bound)
Explanation: With the p-value = 4.1003e-08 < $\alpha = 0.05$, we can reject the null hypothesis.
Now consider the confidence interval:
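A self-contained sketch of the same confidence-interval arithmetic on hypothetical differences, using the sample standard deviation (ddof = 1) and the two-tailed 95% t critical value for df = 5:

```python
import math
import statistics

diffs = [7.0, 4.0, 6.0, 6.0, 5.0, 8.0]    # hypothetical per-pair differences
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)            # sample std (ddof = 1)
t_crit = 2.571                            # two-tailed 95%, df = n - 1 = 5
half_width = t_crit * sd_d / math.sqrt(n)
lower, upper = mean_d - half_width, mean_d + half_width
print(round(lower, 3), round(upper, 3))
```

The interval covers the plausible values of the population mean difference at 95% confidence.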
End of explanation |
6,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning to Play Pong
{tip}
For a production-grade implementation of distributed
reinforcement learning, use [Ray RLlib](https
Step1: Hyperparameters
Here we'll define a couple of the hyperparameters that are used.
Step4: Helper Functions
We first define a few helper functions
Step9: Neural Network
Here, a neural network is used to define a "policy"
for playing Pong (that is, a function that chooses an action given a state).
To implement a neural network in NumPy, we need to provide helper functions
for calculating updates and computing the output of the neural network
given an input, which in our case is an observation.
Step10: Parallelizing Gradients
We define an actor, which is responsible for taking a model and an env
and performing a rollout + computing a gradient update.
Step11: Running
This example is easy to parallelize because the network can play ten games
in parallel and no information needs to be shared between the games.
In the loop, the network repeatedly plays games of Pong and
records a gradient from each game. Every ten games, the gradients are
combined together and used to update the network. | Python Code:
import numpy as np
import os
import ray
import time
import gym
Explanation: Learning to Play Pong
{tip}
For a production-grade implementation of distributed
reinforcement learning, use [Ray RLlib](https://docs.ray.io/en/master/rllib/index.html).
In this example, we'll train a very simple neural network to play Pong using
the OpenAI Gym.
At a high level, we will use multiple Ray actors to obtain simulation rollouts
and calculate gradient simultaneously. We will then centralize these
gradients and update the neural network. The updated neural network will
then be passed back to each Ray actor for more gradient calculation.
This application is adapted, with minimal modifications, from
Andrej Karpathy's source code
(see the accompanying blog post).
{image} /ray-core/images/pong-arch.svg
:align: center
To run the application, first install some dependencies.
bash
pip install gym[atari]
At the moment, on a large machine with 64 physical cores, computing an update
with a batch of size 1 takes about 1 second, a batch of size 10 takes about 2.5
seconds. A batch of size 60 takes about 3 seconds. On a cluster with 11 nodes,
each with 18 physical cores, a batch of size 300 takes about 10 seconds. If the
numbers you see differ from these by much, take a look at the
Troubleshooting section at the bottom of this page and consider
submitting an issue.
Note that these times depend on how long the rollouts take, which in turn
depends on how well the policy is doing. For example, a really bad policy will
lose very quickly. As the policy learns, we should expect these numbers to
increase.
End of explanation
H = 200 # The number of hidden layer neurons.
gamma = 0.99 # The discount factor for reward.
decay_rate = 0.99 # The decay factor for RMSProp leaky sum of grad^2.
D = 80 * 80 # The input dimensionality: 80x80 grid.
learning_rate = 1e-4 # Magnitude of the update.
Explanation: Hyperparameters
Here we'll define a couple of the hyperparameters that are used.
End of explanation
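The input dimensionality `D = 80 * 80` corresponds to the cropped, downsampled frame produced later by `preprocess` (rows 35:195 of a 210x160 frame, then every 2nd pixel); a quick check of that arithmetic:

```python
rows_after_crop = 195 - 35     # preprocess keeps frame rows 35:195
rows = rows_after_crop // 2    # then every 2nd row
cols = 160 // 2                # and every 2nd column of the 160-wide frame
D = rows * cols
print(rows, cols, D)           # 80 80 6400
```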
def preprocess(img):
# Crop the image.
img = img[35:195]
# Downsample by factor of 2.
img = img[::2, ::2, 0]
# Erase background (background type 1).
img[img == 144] = 0
# Erase background (background type 2).
img[img == 109] = 0
# Set everything else (paddles, ball) to 1.
img[img != 0] = 1
return img.astype(float).ravel()  # np.float was removed in newer NumPy; the builtin float is equivalent
def process_rewards(r):
"""Compute discounted reward from a vector of rewards."""
discounted_r = np.zeros_like(r)
running_add = 0
for t in reversed(range(0, r.size)):
# Reset the sum, since this was a game boundary (pong specific!).
if r[t] != 0:
running_add = 0
running_add = running_add * gamma + r[t]
discounted_r[t] = running_add
return discounted_r
def rollout(model, env):
"""Evaluates env and model until the env returns "Done".
Returns:
xs: A list of observations
hs: A list of model hidden states per observation
dlogps: A list of gradients
drs: A list of rewards.
"""
# Reset the game.
observation = env.reset()
# Note that prev_x is used in computing the difference frame.
prev_x = None
xs, hs, dlogps, drs = [], [], [], []
done = False
while not done:
cur_x = preprocess(observation)
x = cur_x - prev_x if prev_x is not None else np.zeros(D)
prev_x = cur_x
aprob, h = model.policy_forward(x)
# Sample an action.
action = 2 if np.random.uniform() < aprob else 3
# The observation.
xs.append(x)
# The hidden state.
hs.append(h)
y = 1 if action == 2 else 0 # A "fake label".
# The gradient that encourages the action that was taken to be
# taken (see http://cs231n.github.io/neural-networks-2/#losses if
# confused).
dlogps.append(y - aprob)
observation, reward, done, info = env.step(action)
# Record reward (has to be done after we call step() to get reward
# for previous action).
drs.append(reward)
return xs, hs, dlogps, drs
Explanation: Helper Functions
We first define a few helper functions:
Preprocessing: The preprocess function will
preprocess the original 210x160x3 uint8 frame into a one-dimensional 6400
float vector.
Reward Processing: The process_rewards function will calculate
a discounted reward. This formula states that the "value" of a
sampled action is the weighted sum of all rewards afterwards,
but later rewards are exponentially less important.
Rollout: The rollout function plays an entire game of Pong (until
either the computer or the RL agent loses).
End of explanation
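To make the discounting concrete, here is a toy run of the same backward recursion as `process_rewards`, with gamma = 0.99 and a single reward at the final step:

```python
gamma = 0.99
rewards = [0.0, 0.0, 1.0]          # reward only arrives at the last step
discounted = [0.0] * len(rewards)
running = 0.0
for t in reversed(range(len(rewards))):
    if rewards[t] != 0:
        running = 0.0              # game-boundary reset, as in process_rewards
    running = running * gamma + rewards[t]
    discounted[t] = running
print([round(d, 4) for d in discounted])   # [0.9801, 0.99, 1.0]
```

Earlier steps receive exponentially less credit for the eventual reward.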
class Model(object):
"""This class holds the neural network weights."""
def __init__(self):
self.weights = {}
self.weights["W1"] = np.random.randn(H, D) / np.sqrt(D)
self.weights["W2"] = np.random.randn(H) / np.sqrt(H)
def policy_forward(self, x):
h = np.dot(self.weights["W1"], x)
h[h < 0] = 0 # ReLU nonlinearity.
logp = np.dot(self.weights["W2"], h)
# Logistic sigmoid
p = 1.0 / (1.0 + np.exp(-logp))
# Return probability of taking action 2, and hidden state.
return p, h
def policy_backward(self, eph, epx, epdlogp):
"""Backward pass to calculate gradients.
Arguments:
eph: Array of intermediate hidden states.
epx: Array of experiences (observations).
epdlogp: Array of logps (output of last layer before softmax).
"""
dW2 = np.dot(eph.T, epdlogp).ravel()
dh = np.outer(epdlogp, self.weights["W2"])
# Backprop relu.
dh[eph <= 0] = 0
dW1 = np.dot(dh.T, epx)
return {"W1": dW1, "W2": dW2}
def update(self, grad_buffer, rmsprop_cache, lr, decay):
"""Applies the gradients to the model parameters with RMSProp."""
for k, v in self.weights.items():
g = grad_buffer[k]
rmsprop_cache[k] = decay * rmsprop_cache[k] + (1 - decay) * g ** 2
self.weights[k] += lr * g / (np.sqrt(rmsprop_cache[k]) + 1e-5)
def zero_grads(grad_buffer):
"""Reset the batch gradient buffer."""
for k, v in grad_buffer.items():
grad_buffer[k] = np.zeros_like(v)
Explanation: Neural Network
Here, a neural network is used to define a "policy"
for playing Pong (that is, a function that chooses an action given a state).
To implement a neural network in NumPy, we need to provide helper functions
for calculating updates and computing the output of the neural network
given an input, which in our case is an observation.
End of explanation
ray.init()
@ray.remote
class RolloutWorker(object):
def __init__(self):
# Tell numpy to only use one core. If we don't do this, each actor may
# try to use all of the cores and the resulting contention may result
# in no speedup over the serial version. Note that if numpy is using
# OpenBLAS, then you need to set OPENBLAS_NUM_THREADS=1, and you
# probably need to do it from the command line (so it happens before
# numpy is imported).
os.environ["MKL_NUM_THREADS"] = "1"
self.env = gym.make("Pong-v0")
def compute_gradient(self, model):
# Compute a simulation episode.
xs, hs, dlogps, drs = rollout(model, self.env)
reward_sum = sum(drs)
# Vectorize the arrays.
epx = np.vstack(xs)
eph = np.vstack(hs)
epdlogp = np.vstack(dlogps)
epr = np.vstack(drs)
# Compute the discounted reward backward through time.
discounted_epr = process_rewards(epr)
# Standardize the rewards to be unit normal (helps control the gradient
# estimator variance).
discounted_epr -= np.mean(discounted_epr)
discounted_epr /= np.std(discounted_epr)
# Modulate the gradient with advantage (the policy gradient magic
# happens right here).
epdlogp *= discounted_epr
return model.policy_backward(eph, epx, epdlogp), reward_sum
Explanation: Parallelizing Gradients
We define an actor, which is responsible for taking a model and an env
and performing a rollout + computing a gradient update.
End of explanation
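The reward standardization inside `compute_gradient` shifts the discounted returns to zero mean and unit variance before they modulate the gradient. The same arithmetic as the NumPy calls, written out in plain Python on toy numbers:

```python
import math

epr = [1.0, 2.0, 3.0, 4.0]     # toy discounted rewards
mean = sum(epr) / len(epr)
std = math.sqrt(sum((v - mean) ** 2 for v in epr) / len(epr))  # np.std default: population std
standardized = [(v - mean) / std for v in epr]
print([round(v, 4) for v in standardized])
```

This keeps the scale of the gradient estimator stable across episodes with very different reward sums.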
iterations = 20
batch_size = 4
model = Model()
actors = [RolloutWorker.remote() for _ in range(batch_size)]
running_reward = None
# "Xavier" initialization.
# Update buffers that add up gradients over a batch.
grad_buffer = {k: np.zeros_like(v) for k, v in model.weights.items()}
# Update the rmsprop memory.
rmsprop_cache = {k: np.zeros_like(v) for k, v in model.weights.items()}
for i in range(1, 1 + iterations):
model_id = ray.put(model)
gradient_ids = []
# Launch tasks to compute gradients from multiple rollouts in parallel.
start_time = time.time()
gradient_ids = [actor.compute_gradient.remote(model_id) for actor in actors]
for batch in range(batch_size):
[grad_id], gradient_ids = ray.wait(gradient_ids)
grad, reward_sum = ray.get(grad_id)
# Accumulate the gradient over batch.
for k in model.weights:
grad_buffer[k] += grad[k]
running_reward = (
reward_sum
if running_reward is None
else running_reward * 0.99 + reward_sum * 0.01
)
end_time = time.time()
print(
"Batch {} computed {} rollouts in {} seconds, "
"running mean is {}".format(
i, batch_size, end_time - start_time, running_reward
)
)
model.update(grad_buffer, rmsprop_cache, learning_rate, decay_rate)
zero_grads(grad_buffer)
Explanation: Running
This example is easy to parallelize because the network can play ten games
in parallel and no information needs to be shared between the games.
In the loop, the network repeatedly plays games of Pong and
records a gradient from each game. Every ten games, the gradients are
combined together and used to update the network.
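A single-entry numeric walk-through of the RMSProp step that `model.update` applies, using the decay and learning rate defined above:

```python
decay, lr, eps = 0.99, 1e-4, 1e-5
g = 2.0                                        # one gradient entry
cache = 0.0
cache = decay * cache + (1 - decay) * g ** 2   # leaky sum of squared grads
step = lr * g / (cache ** 0.5 + eps)           # effective parameter update
print(round(step, 6))
```

Dividing by the root of the running squared-gradient cache rescales each weight's step by its recent gradient magnitude.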
End of explanation |
6,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check
Step6: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
Step8: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
Step9: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
Step10: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
Step11: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment
Step12: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""Returns relative error."""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
End of explanation
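The forward pass being checked here is an affine layer, a ReLU, and a second affine layer. A tiny hand-checkable version on made-up matrices (shapes chosen for illustration only, not the notebook's weights):

```python
import numpy as np

X = np.array([[1.0, 2.0]])                # N=1, D=2
W1 = np.array([[0.5, -1.0, 0.0],
               [0.25, 0.5, 1.0]])         # D=2, H=3
b1 = np.zeros(3)
W2 = np.array([[1.0, -1.0],
               [0.0, 2.0],
               [0.5, 0.5]])               # H=3, C=2
b2 = np.zeros(2)

hidden = np.maximum(0, X.dot(W1) + b1)    # affine + ReLU
scores = hidden.dot(W2) + b2              # second affine layer
print(hidden.tolist(), scores.tolist())
```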
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularizaion loss.
End of explanation
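The data part of the softmax loss on toy scores (regularization omitted), using the standard shift-for-stability formulation:

```python
import numpy as np

scores = np.array([[2.0, 0.0]])   # toy class scores for one example
y = np.array([0])                 # index of the correct class
shifted = scores - scores.max(axis=1, keepdims=True)   # for numeric stability
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
data_loss = -np.log(probs[np.arange(len(y)), y]).mean()
print(round(float(data_loss), 4))
```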
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
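The numeric gradient check above relies on the centered-difference approximation; on a 1-D toy function it looks like this:

```python
def numerical_grad(f, x, h=1e-5):
    # centered difference, the same idea as eval_numerical_gradient
    return (f(x + h) - f(x - h)) / (2 * h)

g = numerical_grad(lambda x: x ** 2, 3.0)  # analytic gradient is 2x = 6
print(round(g, 6))
```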
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
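For reference, one vanilla SGD step of the kind `TwoLayerNet.train` performs each iteration, on a toy scalar parameter with loss w**2:

```python
w, lr = 1.0, 0.1
grad = 2 * w          # gradient of the toy loss w**2 at w
w = w - lr * grad     # one vanilla SGD step
print(w)              # one step moves w from 1.0 toward the minimum at 0
```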
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
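The mean-image subtraction performed in `get_CIFAR10_data`, shown on a toy 2x2 "dataset" (not CIFAR-10):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])                # toy dataset: 2 examples, 2 features
mean_image = np.mean(X, axis=0)           # per-feature mean over training rows
X_centered = X - mean_image               # broadcast subtraction, as in the loader
print(mean_image.tolist(), X_centered.tolist())
```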
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
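The exponential learning-rate schedule described above just multiplies the rate by the decay after each epoch; a short sketch:

```python
lr, decay = 1e-4, 0.95
schedule = []
for epoch in range(3):
    schedule.append(lr)
    lr *= decay      # applied once per epoch, as in TwoLayerNet.train
print(schedule)
```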
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
input_size = 32 * 32 * 3
hidden_size = 200
num_classes = 10
best_val = -1
# Train the network
num_iters = 1800
batch_size=200
# hyperparameters
learning_rates = [8e-4]
regs = [5e-2]
learning_rate_decays = [0.95]
for lr in learning_rates:
for reg in regs:
for decay in learning_rate_decays:
print("learning rate: {}, regulation: {}, decay: {}".format(lr, reg, decay))
net = TwoLayerNet(input_size, hidden_size, num_classes)
net.train(X_train, y_train, X_val, y_val,
num_iters=num_iters,
batch_size=batch_size,
learning_rate=lr,
learning_rate_decay= decay,
reg=reg,
verbose=False)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
train_acc = (net.predict(X_train) == y_train).mean()
if val_acc > best_val:
best_net = net
best_val = val_acc
print 'Train accuracy:{}, Validation accuracy:{}'.format(train_acc, val_acc)
print 'Best accuracy:{}'.format(best_val)
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
End of explanation
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you an extra bonus point for every 1% of accuracy above 52%.
End of explanation |
6,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
load image as ndarray
http
Step1: let's do k-mean
my version will take more than 10 mins... ok. I know why I shouldn't implement my own ML library.
In the future I will only implement an ML algorithm for the sake of learning it XD
Step2: sklearn KMeans | Python Code:
from skimage import io
# cast to float; you need to do this, otherwise the colors would be weird after clustering
pic = io.imread('data/bird_small.png') / 255.
io.imshow(pic)
pic.shape
# serialize data
data = pic.reshape(128*128, 3)
Explanation: load image as ndarray
http://scikit-image.org/
End of explanation
# C, centroids, cost = km.k_means(pd.DataFrame(data), 16, epoch = 10, n_init=3)
Explanation: let's do k-mean
my version will take more than 10 mins... ok. I know why I shouldn't implement my own ML library.
In the future I will only implement an ML algorithm for the sake of learning it XD
End of explanation
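One reason a hand-written version is slow is Python-level looping; the k-means assignment step can be fully vectorized with NumPy broadcasting (a sketch with synthetic data, not the notebook's variables):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1000, 3))      # 1000 RGB-like points
centroids = rng.random((16, 3))     # 16 cluster centers

# (1000, 1, 3) - (1, 16, 3) broadcasts to (1000, 16, 3): distances from
# every point to every centroid without a Python loop.
dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
assign = dists.argmin(axis=1)       # nearest-centroid index per point
print(assign.shape)
```

The same one-liner replaces a doubly nested loop over points and centroids, which is where most of the hand-rolled runtime goes.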
from sklearn.cluster import KMeans
model = KMeans(n_clusters=16, n_init=100, n_jobs=-1)
model.fit(data)
centroids = model.cluster_centers_
print(centroids.shape)
C = model.predict(data)
print(C.shape)
centroids[C].shape
compressed_pic = centroids[C].reshape((128,128,3))
fig, ax = plt.subplots(1, 2)
ax[0].imshow(pic)
ax[1].imshow(compressed_pic)
Explanation: sklearn KMeans
End of explanation |
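A rough sense of what this compression buys (assuming 24-bit RGB before, and 4-bit palette indices plus a 16-entry palette after clustering with k=16):

```python
n_pixels = 128 * 128
original_bits = n_pixels * 24                  # 24-bit RGB per pixel
compressed_bits = n_pixels * 4 + 16 * 24       # 4-bit indices + 16-color palette
ratio = original_bits / compressed_bits
print(round(ratio, 2))  # roughly a 6x reduction
```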
6,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading BCH/BATCH Files
BCH/BATCH files are created by FEMAG during a calculation. They hold most of the results.
Their values are grouped into different sections such as
Step1: Show the calculation type that created this file
Step2: Print the number of torque sections
Step3: The PM/Rel machine simulation is executed with a given current and beta angle.
If the beta angle were 0 then there would be only 2 torque sections.
Lets look at the content of the section machine
Step4: In the above output we find the 2 beta angles 0 and -25°. The current i1 is 500 A (RMS) and the resulting torque at beta=-25 is 405.7 Nm.
The flux section contains the fluxes and voltages for all windings and beta angles
Step5: Show all keys of winding '1' flux
Step6: Now lets look at some plots
Step7: A phasor plot
Step8: Or if we want to have a complete report
Step9: Read another BCH file and print the torque and beta values
Step10: Create a surface plot of the torque | Python Code:
import femagtools.bch
bch = femagtools.bch.read('TEST_002.BCH')
Explanation: Reading BCH/BATCH Files
BCH/BATCH files are created by FEMAG during a calculation. They hold most of the results.
Their values are grouped into different sections such as: Flux, Torque, Machine, dqPar etc.
The actual number of sections as well as their content varies with the type of calculation.
Currently the following types are supported by femagtools:
MULTIPLE CALCULATION OF FORCES AND FLUX
Fast cogging calculation OF FORCES AND FLUX
Fast LD-LQ-Identification
Fast Psid-Psiq-Identification
Fast PM-Synchronous-Motor Simulation
Characteristics of Permanent-Magnet-Motors
End of explanation
bch.type
Explanation: Show the calculation type that created this file
End of explanation
len(bch.torque)
Explanation: Print the number of torque sections:
End of explanation
bch.machine
Explanation: The PM/Rel machine simulation is executed with a given current and beta angle.
If the beta angle were 0 then there would be only 2 torque sections.
Lets look at the content of the section machine:
End of explanation
bch.flux.keys()
len(bch.flux['1'])
Explanation: In the above output we find the 2 beta angles 0 and -25°. The current i1 is 500 A (RMS) and the resulting torque at beta=-25 is 405.7 Nm.
The flux section contains the fluxes and voltages for all windings and beta angles:
End of explanation
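As a small downstream example, the torque arrays shown above lend themselves to a ripple estimate; this sketch uses synthetic values in place of a real `bch.torque[-1]['torque']` list:

```python
# Stand-in for bch.torque[-1]['torque'] (Nm over one angle sweep)
torque = [400.0, 410.0, 405.0, 395.0, 402.0]

mean_torque = sum(torque) / len(torque)
# Peak-to-peak ripple relative to the mean torque, in percent
ripple_pct = (max(torque) - min(torque)) / mean_torque * 100
print(round(mean_torque, 1), round(ripple_pct, 1))
```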
bch.flux['1'][0].keys()
Explanation: Show all keys of winding '1' flux:
End of explanation
import matplotlib.pyplot as plt
import femagtools.plot
fig = plt.figure(figsize=(10,4))
fig.add_subplot(1,2,1)
femagtools.plot.torque(bch.torque[-1]['angle'], bch.torque[-1]['torque'])
fig.add_subplot(1,2,2)
femagtools.plot.voltage('No load voltage', bch.flux['1'][0]['displ'],
bch.flux['1'][0]['voltage_dpsi'])
plt.show()
Explanation: Now lets look at some plots
End of explanation
femagtools.plot.phasor(bch)
Explanation: A phasor plot
End of explanation
femagtools.plot.pmrelsim(bch, bch.filename)
Explanation: Or if we want to have a complete report:
End of explanation
bch = femagtools.bch.read('LDQ-0-90.BCH')
bch.ldq['torque']
bch.ldq['beta']
Explanation: Read another BCH file and print the torque and beta values:
End of explanation
import femagtools.plot
femagtools.plot.i1beta_torque(bch.ldq['i1'], bch.ldq['beta'], bch.ldq['torque'])
Explanation: Create a surface plot of the torque:
End of explanation |
6,069 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
two different ways to implement categorical cross entropy in TensorFlow
| Python Code::
import tensorflow as tf
from tensorflow.keras.losses import CategoricalCrossentropy
y_true = [[0, 1, 0], [1, 0, 0]]
y_pred = [[0.15, 0.75, 0.1], [0.75, 0.15, 0.1]]
cross_entropy_loss = CategoricalCrossentropy()
print(cross_entropy_loss(y_true, y_pred).numpy())
import tensorflow as tf
from tensorflow.keras.losses import SparseCategoricalCrossentropy
y_true = [1, 0]
y_pred = [[0.15, 0.75, 0.1], [0.75, 0.15, 0.1]]
cross_entropy_loss = SparseCategoricalCrossentropy()
loss = cross_entropy_loss(y_true, y_pred).numpy()
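A quick pure-Python cross-check (an illustration, not part of the TensorFlow API): categorical cross entropy is the mean over samples of -sum(y_true * log(y_pred)), which should reproduce the Keras values above.

```python
import math

yt = [[0, 1, 0], [1, 0, 0]]
yp = [[0.15, 0.75, 0.1], [0.75, 0.15, 0.1]]

# Per-sample loss: -sum over classes of y_true * log(y_pred)
per_sample = [-sum(t * math.log(p) for t, p in zip(ts, ps))
              for ts, ps in zip(yt, yp)]
manual = sum(per_sample) / len(per_sample)
print(round(manual, 4))  # matches CategoricalCrossentropy above (~0.2877)
```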
|
6,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Google form analysis tests
Purpose
Step1: Sorted total answers to questions
<a id=sortedtotalanswers />
Step2: Cross-samples t-tests
<a id=crossttests />
Purpose
Step3: Conclusion | Python Code:
%run "../Functions/2. Google form analysis.ipynb"
Explanation: Google form analysis tests
Purpose: determine to what extent the current data can accurately describe correlations and underlying factors affecting the score.
Especially concerning the answerTemporalities[0] groups: are there underlying groups explaining the discrepancies in score? Are those groups tied to certain questions?
Table of Contents
Sorted total answers to questions
Cross-samples t-tests
biologists vs non-biologists
biologists vs non-biologists before
<br>
<br>
<br>
<br>
End of explanation
form = getPerfectPretestPostestPairs(gform)
binarized = getAllBinarized()
answersCount = len(binarized.index)
totalScorePerQuestion = pd.DataFrame(data=np.dot(np.ones(answersCount),binarized),index=binarized.columns,columns=['score'])
totalScorePerQuestion['perc'] = totalScorePerQuestion['score'] * 100 / answersCount
display(totalScorePerQuestion.sort_values(by = 'score'))
Explanation: Sorted total answers to questions
<a id=sortedtotalanswers />
End of explanation
biologists = getSurveysOfBiologists(form)
nonBiologists = form.drop(biologists.index)
biologistsScores = biologists.apply(getGFormRowScore, axis=1)
nonBiologistsScores = nonBiologists.apply(getGFormRowScore, axis=1)
#print(len(gform), len(biologists), len(nonBiologists))
#print(len(gform), len(biologistsScores), len(nonBiologistsScores))
#print(type(biologistsScores), len(biologistsScores),\
#type(nonBiologistsScores), len(nonBiologistsScores))
ttest = ttest_ind(biologistsScores, nonBiologistsScores)
ttest
biologistsScores.values
np.std(biologistsScores)
np.std(nonBiologistsScores)
Explanation: Cross-samples t-tests
<a id=crossttests />
Purpose: find out whether a question can be used to discriminate different groups.
biologists vs non-biologists
<a id=biologistsvsnonbiologists />
End of explanation
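For reference, the statistic behind `ttest_ind` can be written out by hand; this sketch computes the Welch (unequal-variance) variant on synthetic score lists — note that scipy's default pools the variances instead, so its value would differ slightly.

```python
import math

def welch_t(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / math.sqrt(vx / len(x) + vy / len(y))

scores_a = [12.0, 15.0, 14.0, 10.0, 13.0]   # synthetic stand-ins for group scores
scores_b = [9.0, 11.0, 10.0, 8.0, 12.0]
t = welch_t(scores_a, scores_b)
print(round(t, 3))
```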
gfBefores = getGFormBefores(form)
biologistsBefores = getSurveysOfBiologists(gfBefores, hardPolicy = False)
nonBiologistsBefores = gfBefores.drop(biologistsBefores.index)
biologistsBeforesScores = biologistsBefores.apply(getGFormRowScore, axis=1)
nonBiologistsBeforesScores = nonBiologistsBefores.apply(getGFormRowScore, axis=1)
#print(len(gfBefores), len(biologistsBefores), len(nonBiologistsBefores))
#print(len(gfBefores), len(biologistsBeforesScores), len(nonBiologistsBeforesScores))
#print(type(biologistsScores), len(biologistsScores),\
#type(nonBiologistsScores), len(nonBiologistsScores))
ttest = ttest_ind(biologistsBeforesScores, nonBiologistsBeforesScores)
ttest
np.std(biologistsBeforesScores)
nonBiologistsBeforesScores
np.std(nonBiologistsBeforesScores)
Explanation: Conclusion: the two groups have distinct scores.
biologists vs non-biologists before
<a id=biologistsvsnonbiologistsbefore />
End of explanation |
6,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2l', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC-ES2L
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
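For a structured horizontal grid, the total point count is simply the product of the two grid dimensions. The ORCA025 dimensions used below (1442 × 1021) are an illustrative assumption — check your model's actual grid definition:

```python
# Total horizontal degrees of freedom for a structured grid is nx * ny.
# The ORCA025 dimensions here (1442 x 1021) are an illustrative
# assumption - verify against your model's grid definition.
nx, ny = 1442, 1021
number_of_horizontal_gridpoints = nx * ny
```
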
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
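A quick sanity check on a tracer time step quoted in seconds is that it should divide the model day evenly; the 1800 s value below is hypothetical:

```python
# Sanity check for a tracer time step quoted in seconds: it should
# divide a model day evenly.  The 1800 s value is hypothetical.
SECONDS_PER_DAY = 86400
tracer_time_step = 1800  # hypothetical, in seconds

assert SECONDS_PER_DAY % tracer_time_step == 0, "time step should divide a day"
steps_per_day = SECONDS_PER_DAY // tracer_time_step  # 48 steps per model day
```
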
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
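With a split explicit scheme, the number of fast barotropic sub-cycles per slow baroclinic step follows directly from the ratio of the two time steps; both values below are hypothetical:

```python
# With split-explicit barotropic stepping, the fast (barotropic) mode is
# sub-cycled within one slow (baroclinic) step.  Values are hypothetical.
baroclinic_time_step = 3600   # seconds
barotropic_time_step = 60     # seconds

n_subcycles = baroclinic_time_step // barotropic_time_step
```
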
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
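The "Time + space varying (Smagorinsky)" choice refers to a viscosity scaled by the grid spacing and the local strain rate, nu = (C * dx)^2 * |S|; a minimal sketch with hypothetical parameter values:

```python
# Sketch of the Smagorinsky-type eddy viscosity referenced by the
# "Time + space varying (Smagorinsky)" choice: nu = (C * dx)^2 * |S|,
# where |S| is the local deformation (strain) rate.  Values hypothetical.
def smagorinsky_viscosity(c_smag, dx, strain_rate):
    """Eddy viscosity in m2/s for grid spacing dx (m) and strain rate (1/s)."""
    return (c_smag * dx) ** 2 * strain_rate

nu = smagorinsky_viscosity(0.1, 25_000.0, 1e-5)  # ~62.5 m2/s
```
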
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
6,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extracting the IRIS XGB Models and Analysis from Redis Labs Cloud
This notebook demonstrates how to extract the machine learning Models + Analysis from the Redis Labs Cloud (https
Step1: 2) Setup the Request
Extract the Models from the Cache with this request and upload the object files to the configured S3 Bucket.
Please make sure the environment variables are set correctly and the S3 Bucket exists
Step2: Where is the downloaded file getting stored?
Step3: What's the S3 Location (Unique Bucket Name + Key)?
Step4: 3) Build and Run the Extract + Upload Request | Python Code:
# Setup the Sci-pype environment
import sys, os
# Only Redis Labs is needed for this notebook:
os.environ["ENV_DEPLOYMENT_TYPE"] = "RedisLabs"
# Load the Sci-pype PyCore as a named-object called "core" and environment variables
from src.common.load_ipython_env import *
Explanation: Extracting the IRIS XGB Models and Analysis from Redis Labs Cloud
This notebook demonstrates how to extract the machine learning Models + Analysis from the Redis Labs Cloud (https://redislabs.com/redis-cloud) cache endpoint named "CACHE" and saved locally as a compressed, string artifact file (Pickle + zlib compression). Once the file is saved, it is uploaded to the configured S3 Bucket for archiving and sharing.
Overview
Extract the IRIS XGB regressor models from the Redis Labs Cloud CACHE endpoint. After extraction, compile a manifest for defining a cache mapping for all the Models + their respective Analysis. Once cached, the Models can be extracted and shared + deployed on other Sci-pype instances by using something like this notebook or the command-line versions.
Command-line Versions
This notebook was built from the extractor command line examples:
https://github.com/jay-johnson/sci-pype/tree/master/bins/ml/extractors/rl_extract_and_upload_iris_regressor.py
https://github.com/jay-johnson/sci-pype/tree/master/bins/ml/extractors
1) Extract the IRIS XGB Regressor Models + Analysis from the Cache
End of explanation
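The "Pickle + zlib" artifact format described above can be illustrated with a small stand-alone sketch. The dict below is a hypothetical stand-in for a real model + analysis payload, not Sci-pype's actual structure:

```python
import pickle
import zlib

# Hypothetical stand-in for a cached model + its analysis results.
artifact = {"DSName": "iris_regressor", "Accuracy": 0.97}

# Serialize with pickle, then compress with zlib - the same layering
# behind the ".cache.pickle.zlib" artifact files.
blob = zlib.compress(pickle.dumps(artifact))

# Reversing the steps (decompress, then unpickle) recovers the object.
restored = pickle.loads(zlib.decompress(blob))
assert restored == artifact
```

Because the compressed blob is plain bytes, it can be stored in a Redis string key or uploaded to S3 unchanged.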
ds_name = "iris_regressor"
Explanation: 2) Setup the Request
Extract the Models from the Cache with this request and upload the object files to the configured S3 Bucket.
Please make sure the environment variables are set correctly and the S3 Bucket exists:
ENV_AWS_KEY=<AWS API Key>
ENV_AWS_SECRET=<AWS API Secret>
For docker containers make sure to set these keys in the correct Jupyter env file and restart the container:
<repo base dir>/justredis/redis-labs.env
<repo base dir>/local/jupyter.env
<repo base dir>/test/jupyter.env
What's the dataset name?
End of explanation
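A small stdlib-only helper can make the environment-variable requirement above fail fast before any work is done. This helper is illustrative and not part of Sci-pype's API:

```python
import os

def require_env(names):
    # Illustrative helper: raise early if any required variable is unset or empty.
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise EnvironmentError("Missing environment variables: " + ", ".join(missing))
    return {n: os.environ[n] for n in names}

# Example with the keys this notebook expects:
# creds = require_env(["ENV_AWS_KEY", "ENV_AWS_SECRET"])
```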
data_dir = str(os.getenv("ENV_DATA_DST_DIR", "/opt/work/data/dst"))
if not os.path.exists(data_dir):
    os.mkdir(data_dir, 0o777)  # 0o777 octal literal - the old 0777 form is a syntax error on Python 3
Explanation: Where is the downloaded file getting stored?
End of explanation
s3_bucket = "unique-bucket-name-for-datasets" # name this something under your AWS Account (This might be open to the public in the future...stay tuned)
s3_key = "dataset_" + core.to_upper(ds_name) + ".cache.pickle.zlib"
s3_loc = str(s3_bucket) + ":" + str(s3_key)
Explanation: What's the S3 Location (Unique Bucket Name + Key)?
End of explanation
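Since S3Loc packs the bucket and key into one "bucket:key" string, a small helper can split it back apart. This is a hypothetical convenience function, not part of the Sci-pype API:

```python
def split_s3_loc(s3_loc):
    # Split on the first ":" only, since S3 keys may themselves contain ":".
    bucket, _, key = s3_loc.partition(":")
    return bucket, key

# e.g. split_s3_loc("unique-bucket-name-for-datasets:dataset_IRIS_REGRESSOR.cache.pickle.zlib")
```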
cache_req = {
"RAName" : "CACHE", # Redis endpoint name holding the models
"DSName" : str(ds_name), # Dataset name for pulling out of the cache
"S3Loc" : str(s3_loc), # S3 location to store the model file
"DeleteAfter" : False, # Optional delete after upload
"SaveDir" : data_dir, # Optional dir to save the model file - default is ENV_DATA_DST_DIR
"TrackingID" : "" # Future support for using the tracking id
}
upload_results = core.ml_upload_cached_dataset_to_s3(cache_req, core.get_rds(), core.get_dbs(), debug)
if upload_results["Status"] == "SUCCESS":
    lg("Done Uploading Model and Analysis DSName(" + str(ds_name) + ") S3Loc(" + str(cache_req["S3Loc"]) + ")", 6)
else:
    lg("", 6)
    lg("ERROR: Failed Upload Model and Analysis Caches as file for DSName(" + str(ds_name) + ")", 6)
    lg(upload_results["Error"], 6)
    lg("", 6)
    sys.exit(1)
# end of if extract + upload worked
lg("", 6)
lg("Extract and Upload Completed", 5)
lg("", 6)
Explanation: 3) Build and Run the Extract + Upload Request
End of explanation |
6,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: CNRM-ESM2-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
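The Valid Choices lists above act as a controlled vocabulary: a value is accepted only if it matches one of the listed strings exactly (including the vocabulary's own spellings). A minimal sketch of that validation pattern - illustrative only, not pyesdoc's actual implementation:

```python
# Illustrative controlled-vocabulary check, not pyesdoc's real implementation.
SCHEME_SCOPE_CHOICES = {
    "troposhere",            # spellings kept verbatim from the CMIP6 vocabulary
    "stratosphere",
    "mesosphere",
    "whole atmosphere",
    "Other: [Please specify]",
}

def validate_choice(value, choices=SCHEME_SCOPE_CHOICES):
    # Exact string match is required - e.g. "Stratosphere" would be rejected.
    if value not in choices:
        raise ValueError("%r is not a valid choice" % (value,))
    return value
```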
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
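These cells all follow the same `DOC.set_id` / `DOC.set_value` pattern from the ES-DOC notebook tooling, whose implementation is not shown here. As a loose illustration only — a hypothetical stand-in, not the real pyesdoc API — the sketch below shows how an enumeration's valid choices and a cardinality string such as "1.N" (one or more values) could constrain what `set_value` records:

```python
# Hypothetical stand-in for the DOC recorder used in these cells -- the real
# pyesdoc/ES-DOC API differs.  It illustrates how "Valid Choices" and a
# cardinality string ("1.1" = exactly one value, "1.N" = one or more)
# constrain DOC.set_value calls.
class PropertyRecorder:
    def __init__(self):
        self._specs = {}    # property id -> (valid choices or None, cardinality)
        self._values = {}   # property id -> list of recorded values
        self._current = None

    def set_id(self, prop_id, choices=None, cardinality='0.1'):
        self._specs[prop_id] = (choices, cardinality)
        self._values.setdefault(prop_id, [])
        self._current = prop_id

    def set_value(self, value):
        choices, cardinality = self._specs[self._current]
        if choices is not None and value not in choices:
            raise ValueError('%r is not a valid choice' % (value,))
        max_card = cardinality.split('.')[1]  # '1' or 'N'
        if max_card != 'N' and len(self._values[self._current]) >= int(max_card):
            raise ValueError('cardinality %s allows at most %s value(s)'
                             % (cardinality, max_card))
        self._values[self._current].append(value)

DOC = PropertyRecorder()
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme',
           choices=['Uses Atmospheric chemistry transport scheme',
                    'Mass adjustment', 'Concentrations positivity',
                    'Gradients monotonicity', 'Other: [Please specify]'],
           cardinality='1.N')
DOC.set_value('Mass adjustment')            # 1.N: several values are allowed
DOC.set_value('Concentrations positivity')
```

With cardinality "1.1" a second `set_value` call would raise, while "1.N" (as above) accepts repeated calls.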
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
6,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filter out video games which have no title
Filter out video games which have less than 5 ratings
Filter out users which have less than 5 ratings
Merge ratings and videogames dataframe on productID and set to df dataframe
Create a pivot matrix with index userID, columns productID and values ratings
Set columns to productID variable
Set index to userID variable
Step1: Recommend video games with pearsonR
Recommend video games with cosine similarity | Python Code:
correlated_items = ratings_pivot.corr()["B002I0JZOC"].sort_values(ascending=False).head(5)
correlated_items.index
Explanation: Filter out video games which have no title
Filter out video games which have less than 5 ratings
Filter out users which have less than 5 ratings
Merge ratings and videogames dataframe on productID and set to df dataframe
Create a pivot matrix with index userID, columns productID and values ratings
Set columns to productID variable
Set index to userID variable
End of explanation
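`recommend_with_cosine_similarity`, used in the next cell, is a helper whose definition is not part of this excerpt. A minimal, dependency-free sketch of the cosine similarity it presumably computes between pairs of item rating columns (the item vectors below are made up):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

# Toy item columns: ratings by the same three users, 0 = unrated.
item_a = [5, 3, 0]
item_b = [5, 3, 0]
item_c = [0, 0, 4]

print(cosine_similarity(item_a, item_b))  # identical columns -> ~1.0
print(cosine_similarity(item_a, item_c))  # no rating overlap -> 0.0
```

In the pivot matrix above, each product's column plays the role of one of these vectors.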
cosine_similarity_matrix = recommend_with_cosine_similarity(ratings_pivot)["B002I0JZOC"].sort_values(ascending=False).head(5)
cosine_similarity_matrix.index
Explanation: Recommend video games with pearsonR
Recommend video games with cosine similarity
End of explanation |
6,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NERC region maps
NERC shapefiles are from Tamayao, M.-A. M., Michalek, J. J., Hendrickson, C. & Azevedo, I. M. L. Regional Variability and Uncertainty of Electric Vehicle Life Cycle CO2 Emissions across the United States. Environ. Sci. Technol. 49, 8844–8855 (2015).
This notebook should be run after the Paper figures notebook. The index summary table used here is calculated at the end of that notebook.
Step1: Read NERC regions shapefile
Step2: Read states shapefile
Step3: Transform projection to Albers equal area
Step4: Maps of 2001 and 2017 annual values | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import geopandas as gpd
import os
from os.path import join
import pandas as pd
import sys
sns.set(style='white')
cwd = os.getcwd()
data_path = join(cwd, '..', 'Data storage')
figure_path = join(cwd,'..', 'Figures')
file_date = '2018-03-06'
%load_ext watermark
%watermark -v -p pandas,geopandas,shapely
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
# add the 'src' directory as one where we can import modules
src_dir = join(cwd, os.pardir, 'src')
sys.path.append(src_dir)
%aimport Plots.plot
from Plots.plot import plot_nerc_annual
Explanation: NERC region maps
NERC shapefiles are from Tamayao, M.-A. M., Michalek, J. J., Hendrickson, C. & Azevedo, I. M. L. Regional Variability and Uncertainty of Electric Vehicle Life Cycle CO2 Emissions across the United States. Environ. Sci. Technol. 49, 8844–8855 (2015).
This notebook should be run after the Paper figures notebook. The index summary table used here is calculated at the end of that notebook.
End of explanation
path = os.path.join(data_path, 'nercregions', 'NERCregions.shp')
regions_nerc = gpd.read_file(path)
regions_nerc['nerc'] = regions_nerc['NERCregion']
regions_nerc
regions_nerc.to_crs(epsg=2163).boundary.plot()
Explanation: Read NERC regions shapefile
End of explanation
path = os.path.join(data_path, 'cb_2016_us_state_20m', 'cb_2016_us_state_20m.shp')
states = gpd.read_file(path)
states.crs
drop_states = ['Alaska', 'Hawaii', 'Puerto Rico']
states = states.loc[~states['NAME'].isin(drop_states)]
states.to_crs(epsg=2163).plot()
path = join(data_path, 'final NERC data',
'Summary table {}.csv'.format(file_date))
index = pd.read_csv(path, index_col='nerc')
index
index.loc['USA', '2001']
# Add 2001, 2017, and % reduction values from summary table to geodataframe
for nerc in regions_nerc['nerc'].unique():
try:
val_2017 = index.loc[nerc, '2017']
val_2001 = index.loc[nerc, '2001']
reduce = index.loc[nerc, 'Percent Reduction']
regions_nerc.loc[regions_nerc['nerc']==nerc, 2017] = val_2017
regions_nerc.loc[regions_nerc['nerc']==nerc, 2001] = val_2001
regions_nerc.loc[regions_nerc['nerc']==nerc, 'reduction'] = '{:.0%}'.format(reduce)
regions_nerc.loc[regions_nerc['nerc']==nerc, 'reduction value'] = reduce
except:
pass
# Define national parameters for use in plot titles
usa_2001 = index.loc['USA', '2001']
usa_2017 = index.loc['USA', '2017']
regions_nerc
Explanation: Read states shapefile
End of explanation
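The loop above stores `'{:.0%}'`-formatted reduction labels on the geodataframe. A dependency-free sketch of that bookkeeping, assuming "Percent Reduction" is the fractional decrease from 2001 to 2017 (the intensity values below are made up, not from the summary table):

```python
# Hypothetical 2001/2017 emission intensities (g CO2/kWh) per NERC region.
index_2001 = {'MRO': 720.0, 'USA': 610.0}
index_2017 = {'MRO': 480.0, 'USA': 430.0}

labels = {}
for nerc in index_2001:
    # Fractional decrease from 2001 to 2017, formatted as a whole percent.
    reduce_frac = (index_2001[nerc] - index_2017[nerc]) / index_2001[nerc]
    labels[nerc] = '{:.0%}'.format(reduce_frac)

print(labels['MRO'])  # 240/720 -> '33%'
```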
regions_albers = regions_nerc.to_crs(epsg=2163)
states_albers = states.to_crs(epsg=2163)
Explanation: Transform projection to Albers equal area
End of explanation
title = '2001 US Average\n{:.0f} g $\mathregular{{CO_2 \ kWh^{{-1}}}}$'.format(usa_2001)
kwargs = dict(
regions_lw = 1,
regions_ec = '0.1',
SERC_y = -1.5,
SPP_y = 2.25
)
vmin = regions_albers.loc[:, [2001, 2017]].min().min()
vmax = regions_albers.loc[:, [2001, 2017]].max().max()
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col=2001,
text_col='nerc', vmin=vmin, vmax=vmax, title=title,
cbar_title='g $\mathregular{CO_2 \ kWh^{-1}}$', **kwargs)
path = join(figure_path, 'NERC map_cividis_2001.pdf')
plt.savefig(path, bbox_inches='tight')#, dpi=350)
title = '2017 US Average\n{:.0f} g $\mathregular{{CO_2 \ kWh^{{-1}}}}$ (↓ 30%)'.format(usa_2017)
kwargs = dict(
regions_lw = 1,
regions_ec = '0.1',
SERC_y = -1.5,
SPP_y = 2.25,
FRCC_x = 4.3
)
vmin = regions_albers.loc[:, [2001, 2017]].min().min()
vmax = regions_albers.loc[:, [2001, 2017]].max().max()
regions_albers['arrow reduction'] = '↓ ' + regions_albers['reduction']
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col=2017,
text_col='arrow reduction', vmin=vmin, vmax=vmax, title=title,
cbar_title='g $\mathregular{CO_2 \ kWh^{-1}}$', **kwargs)
path = join(figure_path, 'NERC map_cividis_2017_change.pdf')
plt.savefig(path, bbox_inches='tight')#, dpi=350)
Explanation: Maps of 2001 and 2017 annual values
End of explanation |
6,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Step1: Exercise 5.1
In the BRFSS (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.
In order to join Blue Man Group, you have to be male between 5’10” and 6’1” (see http://bluemancasting.com). What percentage of the U.S. male population is in this range? Hint: use scipy.stats.norm.cdf.
Step2: For example <tt>scipy.stats.norm</tt> represents a normal distribution.
Step3: A "frozen random variable" can compute its mean and standard deviation.
Step4: It can also evaluate its CDF. How many people are more than one standard deviation below the mean? About 16%
Step5: How many people are between 5'10" and 6'1"?
Step6: Exercise 5.2
To get a feel for the Pareto distribution, let’s see how different the world would be if the distribution of human height were Pareto. With the parameters $x_m = 1$ m and $α = 1.7$, we get a distribution with a reasonable minimum, 1 m, and median, 1.5 m.
Plot this distribution. What is the mean human height in Pareto world? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto world, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?
<tt>scipy.stats.pareto</tt> represents a pareto distribution. In Pareto world, the distribution of human heights has parameters alpha=1.7 and xmin=1 meter. So the shortest person is 100 cm and the median is 150.
Step7: What is the mean height in Pareto world?
Step8: What fraction of people are shorter than the mean?
Step9: Out of 7 billion people, how many do we expect to be taller than 1 km? You could use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
Step10: How tall do we expect the tallest person to be? Hint: find the height that yields about 1 person.
Step11: Exercise 5.3
The Weibull distribution is a generalization of the exponential distribution that comes up in failure analysis (see http
Step12: Exercise 5.4
For small values of n, we don’t expect an empirical distribution to fit an analytic distribution exactly. One way to evaluate the quality of fit is to generate a sample from an analytic distribution and see how well it matches the data.
For example, in Section 5.1 we plotted the distribution of time between births and saw that it is approximately exponential. But the distribution is based on only 44 data points. To see whether the data might have come from an exponential distribution, generate 44 values from an exponential distribution with the same mean as the data, about 33 minutes between births.
Plot the distribution of the random values and compare it to the actual distribution. You can use random.expovariate to generate the values. | Python Code:
from __future__ import print_function, division
import thinkstats2
import thinkplot
from brfss import *
import populations as p
import random
import pandas as pd
import test_models
%matplotlib inline
Explanation: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
End of explanation
import scipy.stats
Explanation: Exercise 5.1
In the BRFSS (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.
In order to join Blue Man Group, you have to be male between 5’10” and 6’1” (see http://bluemancasting.com). What percentage of the U.S. male population is in this range? Hint: use scipy.stats.norm.cdf.
<tt>scipy.stats</tt> contains objects that represent analytic distributions
End of explanation
mu = 178
sigma = 7.7
dist = scipy.stats.norm(loc=mu, scale=sigma)
type(dist)
Explanation: For example <tt>scipy.stats.norm</tt> represents a normal distribution.
End of explanation
dist.mean(), dist.std()
Explanation: A "frozen random variable" can compute its mean and standard deviation.
End of explanation
dist.cdf(mu-sigma)
Explanation: It can also evaluate its CDF. How many people are more than one standard deviation below the mean? About 16%
End of explanation
dist.cdf(185.42) - dist.cdf(177.8)
thinkstats2.RandomSeed(17)
nrows = 10000
df = ReadBrfss(nrows=nrows)  # ReadBrfss was pulled in by the wildcard brfss import above
MakeNormalPlot(df.age)
p.MakeFigures()
Explanation: How many people are between 5'10" and 6'1"?
End of explanation
alpha = 1.7
xmin = 1
dist = scipy.stats.pareto(b=alpha, scale=xmin)
dist.median()
xs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100)
thinkplot.Plot(xs, ps, label=r'$\alpha=%g$' % alpha)
thinkplot.Config(xlabel='height (m)', ylabel='CDF')
Explanation: Exercise 5.2
To get a feel for the Pareto distribution, let’s see how different the world would be if the distribution of human height were Pareto. With the parameters $x_m = 1$ m and $α = 1.7$, we get a distribution with a reasonable minimum, 1 m, and median, 1.5 m.
Plot this distribution. What is the mean human height in Pareto world? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto world, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?
<tt>scipy.stats.pareto</tt> represents a pareto distribution. In Pareto world, the distribution of human heights has parameters alpha=1.7 and xmin=1 meter. So the shortest person is 100 cm and the median is 150.
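All of these quantities also follow from the Pareto closed forms, CDF(x) = 1 − (x_m/x)^α and mean = α·x_m/(α − 1) for α > 1, so they can be sanity-checked without scipy. A standard-library sketch:

```python
alpha, xmin = 1.7, 1.0

mean = alpha * xmin / (alpha - 1)                  # ~2.43 m
frac_below_mean = 1 - (xmin / mean) ** alpha       # CDF(mean), ~0.78
n_taller_than_km = 7e9 * (1000 / xmin) ** -alpha   # expected count above 1 km
tallest = xmin * 7e9 ** (1 / alpha)                # height where the expected count is ~1 person
print(mean, frac_below_mean, n_taller_than_km, tallest)
```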
End of explanation
dist.mean()
Explanation: What is the mean height in Pareto world?
End of explanation
dist.cdf(dist.mean())
Explanation: What fraction of people are shorter than the mean?
End of explanation
(1 - dist.cdf(1000))*7000000000
Explanation: Out of 7 billion people, how many do we expect to be taller than 1 km? You could use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
End of explanation
dist.isf(1/7000000000)
Explanation: How tall do we expect the tallest person to be? Hint: find the height that yields about 1 person.
End of explanation
alpha = 100
lam = 1
sample = [random.weibullvariate(alpha, lam) for i in range(1000)]
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, transform='weibull')  # plots log(-log(1 - CDF)) against log(x)
thinkplot.Show()
Explanation: Exercise 5.3
The Weibull distribution is a generalization of the exponential distribution that comes up in failure analysis (see http://wikipedia.org/wiki/Weibull_distribution). Its CDF is
$CDF(x) = 1 − \exp(−(x / λ)^k)$
Can you find a transformation that makes a Weibull distribution look like a straight line? What do the slope and intercept of the line indicate?
Use random.weibullvariate to generate a sample from a Weibull distribution and use it to test your transformation.
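Working out the algebra first (a sketch of the transformation, not the book's solution): rearranging the CDF gives log(−log(1 − CDF(x))) = k·log x − k·log λ, so plotting that quantity against log x yields a straight line whose slope is the shape parameter k and whose intercept is −k·log λ:

```python
import math

lam, k = 2.0, 1.5  # arbitrary scale and shape for illustration

def weibull_cdf(x):
    return 1 - math.exp(-((x / lam) ** k))

def transform(x):
    # log(-log of the complementary CDF) linearizes the Weibull CDF
    return math.log(-math.log(1 - weibull_cdf(x)))

# Exactly linear in log(x): slope k, intercept -k*log(lam)
slope = (transform(4.0) - transform(1.0)) / (math.log(4.0) - math.log(1.0))
intercept = transform(1.0)  # since log(1.0) == 0
print(slope, intercept)
```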
End of explanation
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(yscale='log')
sample = [random.expovariate(1/33) for i in range(44)]
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(yscale='log')
test_models.main("test_models.py", "mystery2.dat")
Explanation: Exercise 5.4
For small values of n, we don’t expect an empirical distribution to fit an analytic distribution exactly. One way to evaluate the quality of fit is to generate a sample from an analytic distribution and see how well it matches the data.
For example, in Section 5.1 we plotted the distribution of time between births and saw that it is approximately exponential. But the distribution is based on only 44 data points. To see whether the data might have come from an exponential distribution, generate 44 values from an exponential distribution with the same mean as the data, about 33 minutes between births.
Plot the distribution of the random values and compare it to the actual distribution. You can use random.expovariate to generate the values.
End of explanation |
6,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Stationary Measures
I'm going to try different metrics to measure whether a time series is stationary. Because there are different types of stationarity, and different metrics test for different types, using multiple stationarity measures on the same series can give opposite results.
3 types of stationarity
Step1: ❣️Notes
Step2: ❣️Notes
Step3: ❣️Notes | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pandas.plotting import autocorrelation_plot
from statsmodels.tsa.stattools import kpss
from statsmodels.tsa.stattools import adfuller
# This is the original time series
from datetime import datetime

def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

series = pd.read_csv('shampoo_sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
autocorrelation_plot(series)
plt.show()
def test_stationarity(timeseries):
    # Determine rolling statistics
    rolmean = timeseries.rolling(window=12, center=False).mean()
    rolstd = timeseries.rolling(window=12, center=False).std()
    rolcov = timeseries.rolling(window=12, center=False).cov()

    # Plot rolling statistics:
    plt.figure(figsize=(9, 7))
    orig = plt.plot(timeseries, color='blue', label='Original')
    mean = plt.plot(rolmean, color='green', label='Rolling Mean')
    std = plt.plot(rolstd, color='red', label='Rolling Std')
    cov = plt.plot(rolcov, color='purple', label='Rolling Cov')
    plt.legend(loc='best')
    plt.title('Rolling Mean & Standard Deviation')
    plt.show(block=False)

    # Perform Augmented Dickey-Fuller test:
    print('Results of Augmented Dickey-Fuller (ADF) Test:')
    dftest = adfuller(timeseries, autolag='AIC')
    dfoutput = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])
    for key, value in dftest[4].items():
        dfoutput['Critical Value (%s)' % key] = value
    print(dfoutput)
    print()

    # Perform KPSS test:
    print('Results of KPSS Test:')
    kpsstest = kpss(timeseries, regression='c')
    kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic', 'p-value', 'Lags Used'])
    for key, value in kpsstest[3].items():
        kpss_output['Critical Value (%s)' % key] = value
    print(kpss_output)
ts_log = np.log(series)
plt.figure(figsize=(9,7))
plt.plot(ts_log)
plt.show()
test_stationarity(ts_log)
Explanation: Time Series Stationary Measures
I'm going to try different metrics to measure whether a time series is stationary. Because there are different types of stationarity, and different metrics test for different types, using multiple stationarity measures on the same series can give opposite results.
3 types of stationarity:
<b>Strict Stationary</b>: A strict stationary series satisfies the mathematical definition of a stationary process: the mean, variance and covariance are not functions of time.
<b>Trend Stationary</b>: A series that has no unit root but exhibits a trend is referred to as a trend stationary series. Once the trend is removed, the resulting series will be strict stationary. The KPSS test classifies a series as stationary based on the absence of a unit root, which means the series can be either strict stationary or trend stationary.
<b>Difference Stationary</b>: A time series that can be made strict stationary through differencing is difference stationary. The ADF test is also known as a difference stationarity test.
Similar work I did before: https://github.com/hanhanwu/Hanhan_Data_Science_Practice/blob/master/sequencial_analysis/python_ARIMA.ipynb
In that code, I tried different series preprocessing methods, and used Rolling Mean & Standard Deviation and the Dickey-Fuller Test as the metrics throughout.
The code below adds more stationarity test options: not just Rolling Mean & Standard Deviation and the Augmented Dickey-Fuller Test (ADF), but also KPSS.
Reference: https://www.analyticsvidhya.com/blog/2018/09/non-stationary-time-series-python/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+AnalyticsVidhya+%28Analytics+Vidhya%29
Download the data here: https://datamarket.com/data/set/22r0/sales-of-shampoo-over-a-three-year-period#!ds=22r0&display=line
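To make "difference stationary" concrete before running the tests, here is a minimal synthetic illustration (not part of the shampoo analysis): a random walk has a unit root, but first-differencing it recovers the i.i.d. shocks, which are strict stationary:

```python
import numpy as np

rng = np.random.default_rng(0)
steps = rng.normal(size=500)   # i.i.d. shocks: strict stationary
walk = np.cumsum(steps)        # random walk: unit root, difference stationary

recovered = np.diff(walk)      # differencing undoes the cumulative sum

# The walk's level wanders with t, while the differenced series
# fluctuates around a fixed mean near zero with constant variance.
print(walk[:250].mean(), walk[250:].mean())
print(recovered.mean(), recovered.std())
```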
End of explanation
# Change 1 - Differencing
## I still want to try differencing.
ts_log_diff = ts_log - ts_log.shift(3) # I tried 1, 7 steps too
plt.figure(figsize=(9,7))
plt.plot(ts_log_diff)
plt.show()
ts_log_diff.dropna(inplace=True)
test_stationarity(ts_log_diff)
Explanation: ❣️Notes:
* Note the opposite conventions: ADF takes a unit root as its null hypothesis, so a test statistic more negative than the critical value indicates stationarity; KPSS takes stationarity as its null, so a test statistic above the critical value indicates non-stationarity.
* Also, the covariance and mean curves always overlap here.
* Let's analyze the results above:
* The visualization shows that the standard deviation stays roughly constant but the mean still changes with time, so the series is not strict stationary.
* The absolute value of the ADF test statistic is lower than all the absolute critical values, so the series is not difference stationary, and not strict stationary either.
* The KPSS result is consistent with trend stationarity at the 10% level (KPSS's null is stationarity, so failing to reject it supports trend stationarity here).
* Theoretically, if we remove the trend, the series should become closer to strict stationary, because it is trend stationary.
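The two decision rules above can be combined mechanically. A hypothetical helper (names and thresholds are mine, not from any library) that maps the two p-values onto the four usual verdicts:

```python
def classify_stationarity(adf_pvalue, kpss_pvalue, alpha=0.05):
    """Combine ADF (null: unit root) and KPSS (null: stationarity) verdicts."""
    adf_stationary = adf_pvalue < alpha     # ADF rejects the unit root
    kpss_stationary = kpss_pvalue >= alpha  # KPSS fails to reject stationarity
    if adf_stationary and kpss_stationary:
        return 'stationary'
    if adf_stationary and not kpss_stationary:
        return 'difference stationary'  # difference the series, then re-check
    if not adf_stationary and kpss_stationary:
        return 'trend stationary'       # detrend to make it strict stationary
    return 'non-stationary'
```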
End of explanation
# Change 2 - Remove trend with moving average
## As we found above, log series seems to be a trend stationary
moving_avg = ts_log.rolling(window=12,center=False).mean() # rolling average over the last 12 months (1 year)
plt.figure(figsize=(9,7))
plt.plot(ts_log)
plt.plot(moving_avg, color='orange')
plt.show()
ts_log_moving_avg_diff = ts_log - moving_avg
ts_log_moving_avg_diff.head(12)
ts_log_moving_avg_diff.dropna(inplace=True)
test_stationarity(ts_log_moving_avg_diff)
Explanation: ❣️Notes:
* I tried step=1, 7, and 3 here; steps 1 and 7 both failed the ADF test.
* Let's analyze the results above with step=3:
* The visualization shows that the mean and standard deviation are now less correlated with time.
* The ADF test statistic indicates difference stationarity with 95% confidence, since the absolute value of the test statistic is higher than the absolute 5% critical value but lower than the absolute 1% critical value.
* The KPSS result is consistent with (trend) stationarity at the 10% level.
* This may indicate that when a time series is not difference stationary but trend stationary, it is still possible to make it stationary using differencing, but the shift step makes a difference.
* This may indicate that, when a time series is not differencing stationary but trend stationary, it is still possible to make it stationary using differencing method, but shifting step can make a difference.
End of explanation
# Change 3 - Remove trend with exponentially weighted moving average
expweighted_avg = ts_log.ewm(alpha=0.9, ignore_na=False, min_periods=0, adjust=True).mean()
plt.figure(figsize=(9,7))
plt.plot(ts_log)
plt.plot(expweighted_avg, color='red')
plt.show()
ts_log_ewma_diff = ts_log - expweighted_avg
test_stationarity(ts_log_ewma_diff)
Explanation: ❣️Notes:
As noted above, ts_log looks trend stationary, so removing the trend should make the series strict stationary.
In Change 2, the moving average removed the trend, and now both ADF and KPSS point to stationarity with at least 90% confidence, so ts_log_moving_avg_diff should be strict stationary.
End of explanation |
6,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Web service
Since Parselmouth is a normal Python library, it can also easily be used within the context of a web server. There are several Python frameworks that allow one to quickly set up a web server or web service. In this example, we will use Flask to show how easily one can set up a web service that uses Parselmouth to access Praat functionality such as the pitch track estimation algorithms. This functionality can then be accessed by clients without requiring either Praat, Parselmouth, or even Python to be installed, for example within the context of an online experiment.
All that is needed to set up the most basic web server in Flask is a single file. We adapt the standard Flask example to accept a sound file, access Parselmouth's Sound.to_pitch, and then send back the list of pitch track frequencies. Note that apart from saving the file that was sent in the HTTP request and encoding the resulting list of frequencies in JSON, the Python code of the pitch_track function is the same as one would write in a normal Python script using Parselmouth.
Step1: Normally, we can then run the server typing FLASK_APP=server.py flask run on the command line, as explained in the Flask documentation. Please do note that to run this server publicly, in a secure way and as part of a bigger setup, other options are available to deploy! Refer to the Flask deployment documentation.
However, to run the server from this Jupyter notebook and still be able to run the other cells that access the functionality on the client side, the following code will start the server in a separate thread and print the output of the running server.
Step2: Now that the server is up and running, we can make a standard HTTP request to this web service. For example, we can send a Wave file with an audio recording of someone saying "The north wind and the sun [...]"
Step3: To do so, we use the requests library in this example, but we could use any library to send a standard HTTP request.
Step4: Since we used the standard json library from Python to decode the reply from server, pitch_track is now a normal list of floats and we can for example plot the estimated pitch track
Step5: Refer to the examples on plotting for more details on using Parselmouth for plotting.
Importantly, Parselmouth is thus only needed by the server; the client only needs to be able to send a request and read the reply. Consequently, we could even use a different programming language on the client's side. For example, one could make build a HTML page with JavaScript to make the request and do something with the reply | Python Code:
%%writefile server.py
from flask import Flask, request, jsonify
import tempfile
app = Flask(__name__)
@app.route('/pitch_track', methods=['POST'])
def pitch_track():
    import parselmouth

    # Save the file that was sent, and read it into a parselmouth.Sound
    with tempfile.NamedTemporaryFile() as tmp:
        tmp.write(request.files['audio'].read())
        tmp.flush()  # make sure the bytes are on disk before Praat reads the file
        sound = parselmouth.Sound(tmp.name)

    # Calculate the pitch track with Parselmouth
    pitch_track = sound.to_pitch().selected_array['frequency']

    # Convert the NumPy array into a list, then encode as JSON to send back
    return jsonify(list(pitch_track))
Explanation: Web service
Since Parselmouth is a normal Python library, it can also easily be used within the context of a web server. There are several Python frameworks that allow one to quickly set up a web server or web service. In this example, we will use Flask to show how easily one can set up a web service that uses Parselmouth to access Praat functionality such as the pitch track estimation algorithms. This functionality can then be accessed by clients without requiring either Praat, Parselmouth, or even Python to be installed, for example within the context of an online experiment.
All that is needed to set up the most basic web server in Flask is a single file. We adapt the standard Flask example to accept a sound file, access Parselmouth's Sound.to_pitch, and then send back the list of pitch track frequencies. Note that apart from saving the file that was sent in the HTTP request and encoding the resulting list of frequencies in JSON, the Python code of the pitch_track function is the same as one would write in a normal Python script using Parselmouth.
End of explanation
import os
import subprocess
import sys
import time
# Start a subprocess that runs the Flask server
p = subprocess.Popen([sys.executable, "-m", "flask", "run"], env=dict(**os.environ, FLASK_APP="server.py"), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Start two subthreads that forward the output from the Flask server to the output of the Jupyter notebook
def forward(i, o):
    while p.poll() is None:
        l = i.readline().decode('utf-8')
        if l:
            o.write("[SERVER] " + l)
import threading
threading.Thread(target=forward, args=(p.stdout, sys.stdout)).start()
threading.Thread(target=forward, args=(p.stderr, sys.stderr)).start()
# Let's give the server a bit of time to make sure it has started
time.sleep(2)
Explanation: Normally, we can then run the server by typing FLASK_APP=server.py flask run on the command line, as explained in the Flask documentation. Note that to run this server publicly, securely, and as part of a bigger setup, other deployment options are available; refer to the Flask deployment documentation.
However, to run the server from this Jupyter notebook and still be able to run the other cells that access the functionality on the client side, the following code will start the server in a separate thread and print the output of the running server.
End of explanation
from IPython.display import Audio
Audio(filename="audio/the_north_wind_and_the_sun.wav")
Explanation: Now that the server is up and running, we can make a standard HTTP request to this web service. For example, we can send a Wave file with an audio recording of someone saying "The north wind and the sun [...]": the_north_wind_and_the_sun.wav, extracted from a Wikipedia Commons audio file.
End of explanation
import requests
import json
# Load the file to send
files = {'audio': open("audio/the_north_wind_and_the_sun.wav", 'rb')}
# Send the HTTP request and get the reply
reply = requests.post("http://127.0.0.1:5000/pitch_track", files=files)
# Extract the text from the reply and decode the JSON into a list
pitch_track = json.loads(reply.text)
print(pitch_track)
Explanation: To do so, we use the requests library in this example, but we could use any library to send a standard HTTP request.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set() # Use seaborn's default style to make attractive graphs
plt.rcParams['figure.dpi'] = 100 # Show nicely large images in this notebook
plt.figure()
plt.plot([float('nan') if x == 0.0 else x for x in pitch_track], '.')
plt.show()
Explanation: Since we used the standard json library from Python to decode the reply from server, pitch_track is now a normal list of floats and we can for example plot the estimated pitch track:
End of explanation
# Let's shut down the server
p.kill()
# Cleaning up the file that was written to disk
!rm server.py
Explanation: Refer to the examples on plotting for more details on using Parselmouth for plotting.
Importantly, Parselmouth is thus only needed by the server; the client only needs to be able to send a request and read the reply. Consequently, we could even use a different programming language on the client's side. For example, one could make build a HTML page with JavaScript to make the request and do something with the reply:
```html
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="plotly.min.js"></script>
<script type="text/javascript">
var update_plot = function() {
var audio = document.getElementById("audio").files[0];
var formData = new FormData();
formData.append("audio", audio);
$.getJSON({url: "http://127.0.0.1:5000/pitch_track", method: "POST",
data: formData, processData: false, contentType: false,
success: function(data){
Plotly.newPlot("plot", [{ x: [...Array(data.length).keys()],
y: data.map(function(x) { return x == 0.0 ? undefined : x; }),
type: "lines" }]);}});
};
</script>
</head>
<body>
<form onsubmit="update_plot(); return false;">
<input type="file" name="audio" id="audio" />
<input type="submit" value="Get pitch track" />
<div id="plot" style="width:1000px;height:600px;"></div>
</form>
</body>
```
Again, one thing to take into account is the security of running such a web server. However, apart from deploying the flask server in a secure and performant way, we also need one extra thing to circumvent a standard security feature of the browser. Without handling Cross Origin Resource Sharing (CORS) on the server, the JavaScript code on the client side will not be able to access the web service's reply. A Flask extension exists however, Flask-CORS, and we refer to its documentation for further details.
End of explanation |
6,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian optimization with context variables
In this notebook we are going to see how to use Emukit to solve optimization problems in which certain variables are fixed during the optimization phase. These are called context variables [1]. This is useful when some of the variables in the optimization are controllable/known factors. An example is optimizing the movement of a robot under environmental conditions that change, but in a known way.
Step1: Loading the problem and the loop
Step2: Now we define the domain of the function to optimize.
We build the model
Step3: And prepare the optimization object to run the loop.
Step4: Now, we set the number of iterations to run to 10.
Step5: Running the optimization by setting a context variable
To set a context, we just need to create a dictionary with the variables to fix and pass it to the Bayesian optimization object when running the optimization. Note that, every time we run new iterations we can set other variables to be the context. We run 3 sequences of 10 iterations each with different values of the context.
Step6: We can now inspect the collected points. | Python Code:
from emukit.test_functions import branin_function
from emukit.core import ParameterSpace, ContinuousParameter, DiscreteParameter
from emukit.core.initial_designs import RandomDesign
from GPy.models import GPRegression
from emukit.model_wrappers import GPyModelWrapper
from emukit.bayesian_optimization.acquisitions import ExpectedImprovement
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.core.loop import FixedIterationsStoppingCondition
Explanation: Bayesian optimization with context variables
In this notebook we are going to see how to use Emukit to solve optimization problems in which certain variables are fixed during the optimization phase. These are called context variables [1]. This is useful when some of the variables in the optimization are controllable/known factors. An example is optimizing the movement of a robot under environmental conditions that change, but in a known way.
End of explanation
f, parameter_space = branin_function()
Explanation: Loading the problem and the loop
End of explanation
design = RandomDesign(parameter_space) # Collect random points
X = design.get_samples(10)
Y = f(X)
model_gpy = GPRegression(X,Y) # Train and wrap the model in Emukit
model_emukit = GPyModelWrapper(model_gpy)
Explanation: Now we define the domain of the function to optimize.
We build the model:
End of explanation
expected_improvement = ExpectedImprovement(model = model_emukit)
bayesopt_loop = BayesianOptimizationLoop(model = model_emukit,
space = parameter_space,
acquisition = expected_improvement,
batch_size = 1)
Explanation: And prepare the optimization object to run the loop.
End of explanation
max_iter = 10
Explanation: Now, we set the number of iterations to run to 10.
End of explanation
bayesopt_loop.run_loop(f, max_iter, context={'x1':0.3}) # we set x1 as the context variable
bayesopt_loop.run_loop(f, max_iter, context={'x2':0.1}) # we set x2 as the context variable
bayesopt_loop.run_loop(f, max_iter) # no context
Explanation: Running the optimization by setting a context variable
To set a context, we just need to create a dictionary with the variables to fix and pass it to the Bayesian optimization object when running the optimization. Note that, every time we run new iterations we can set other variables to be the context. We run 3 sequences of 10 iterations each with different values of the context.
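Conceptually, fixing a context variable is just partial application of the objective: the optimizer only searches over the remaining free variables. A pure-Python sketch of the idea (this is not Emukit's implementation):

```python
from functools import partial

def objective(x1, x2):
    # Stand-in for an expensive black-box function of two inputs
    return (x1 - 0.3) ** 2 + (x2 - 2.0) ** 2

# Fixing the context x1 = 0.3 leaves a one-dimensional problem over x2:
fixed = partial(objective, 0.3)

print(fixed(2.0))   # same as objective(0.3, 2.0)
```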
End of explanation
bayesopt_loop.loop_state.X
Explanation: We can now inspect the collected points.
End of explanation |
6,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Accessing Settings
Settings are found with their own context in the Bundle and can be accessed through the get_setting method
Step3: or via filtering/twig access
Step4: and can be set as any other Parameter in the Bundle
Available Settings
Now let's look at each of the available settings and what they do
log_history
log_history is a BooleanParameter (defaults to False) that controls whether undo/redo ability is enabled.
Step5: This parameter can also be set by calling b.enable_history() or b.disable_history() and can be accessed with b.history_enabled.
Step6: dict_set_all
dict_set_all is a BooleanParameter (defaults to False) that controls whether attempting to set a value to a ParameterSet via dictionary access will set all the values in that ParameterSet (if True) or raise an error (if False)
Step7: In our default binary there are temperatures ('teff') parameters for each of the components ('primary' and 'secondary'). If we were to do
Step8: If you want dictionary access to use set_value_all instead of set_value, you can enable this parameter
Step9: Now let's disable this so it doesn't confuse us while looking at the other options
Step10: dict_filter
dict_filter is a Parameter that accepts a dictionary. This dictionary will then always be sent to the filter call which is done under-the-hood during dictionary access.
Step11: In our default binary, there are several inclination parameters - one for each component ('primary', 'secondary', 'binary') and one with the constraint context (to keep the inclinations aligned).
This can be inconvenient... if you want to set the value of the binary's inclination, you must always provide extra information (like '@component').
Instead, we can always have the dictionary access search in the component context by doing the following
Step12: Now we no longer see the constraint parameters.
All parameters are always accessible with method access
Step13: Now let's reset this option... keeping in mind that we no longer have access to the 'setting' context through twig access, we'll have to use methods to clear the dict_filter
Step14: run_checks_compute
The run_checks_compute option allows setting the default compute option(s) sent to b.run_checks, including warnings in the logger raised by interactive checks (see phoebe.interactive_checks_on). | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
Explanation: Advanced: Settings
The Bundle also contains a few Parameters that provide settings for that Bundle. Note that these are not system-wide and only apply to the current Bundle. They are however maintained when saving and loading a Bundle.
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.get_setting()
Explanation: Accessing Settings
Settings are found with their own context in the Bundle and can be accessed through the get_setting method
End of explanation
b['setting']
Explanation: or via filtering/twig access
End of explanation
b['log_history@setting'].description
Explanation: and can be set as any other Parameter in the Bundle
Available Settings
Now let's look at each of the available settings and what they do
log_history
log_history is a BooleanParameter (defaults to False) that controls whether undo/redo ability is enabled.
End of explanation
b['log_history@setting']
b.history_enabled
b.enable_history()
b['log_history@setting']
b.history_enabled
Explanation: This parameter can also be set by calling b.enable_history() or b.disable_history() and can be accessed with b.history_enabled.
End of explanation
b['dict_set_all@setting']
b['teff@component']
Explanation: dict_set_all
dict_set_all is a BooleanParameter (defaults to False) that controls whether attempting to set a value to a ParameterSet via dictionary access will set all the values in that ParameterSet (if True) or raise an error (if False)
End of explanation
b.set_value_all('teff@component', 4000)
print(b['value@teff@primary@component'], b['value@teff@secondary@component'])
Explanation: In our default binary there are temperature ('teff') parameters for each of the components ('primary' and 'secondary'). If we were to do:
b['teff@component'] = 6000
this would raise an error. Under-the-hood, this is simply calling:
b.set_value('teff@component', 6000)
which of course would also raise an error.
In order to set both temperatures to 6000, you would either have to loop over the components or call the set_value_all method:
End of explanation
b['dict_set_all@setting'] = True
b['teff@component'] = 8000
print(b['value@teff@primary@component'], b['value@teff@secondary@component'])
Explanation: If you want dictionary access to use set_value_all instead of set_value, you can enable this parameter
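Behind the scenes this is just a policy about what assignment via [] does when a twig matches several parameters. A toy model of the semantics (hypothetical — not PHOEBE's actual implementation):

```python
class ToyBundle:
    def __init__(self):
        self.params = {'teff@primary': 6000.0, 'teff@secondary': 6000.0}
        self.dict_set_all = False

    def __setitem__(self, twig, value):
        matches = [k for k in self.params if twig in k]
        if len(matches) > 1 and not self.dict_set_all:
            raise ValueError('%d results found: enable dict_set_all '
                             'or use a more specific twig' % len(matches))
        for k in matches:
            self.params[k] = value

b = ToyBundle()
b.dict_set_all = True
b['teff'] = 8000.0   # sets both matching parameters
print(b.params)
```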
End of explanation
b.set_value_all('teff@component', 6000)
b['dict_set_all@setting'] = False
Explanation: Now let's disable this so it doesn't confuse us while looking at the other options
End of explanation
b['incl']
Explanation: dict_filter
dict_filter is a Parameter that accepts a dictionary. This dictionary will then always be sent to the filter call which is done under-the-hood during dictionary access.
End of explanation
b['dict_filter@setting'] = {'context': 'component'}
b['incl']
Explanation: In our default binary, there are several inclination parameters - one for each component ('primary', 'secondary', 'binary') and one with the constraint context (to keep the inclinations aligned).
This can be inconvenient... if you want to set the value of the binary's inclination, you must always provide extra information (like '@component').
Instead, we can always have the dictionary access search in the component context by doing the following
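The effect of dict_filter is that every dictionary lookup gets the stored filter merged in. A toy sketch of that behavior (hypothetical, not PHOEBE internals), with parameters keyed by (qualifier, component, context):

```python
params = {
    ('incl', 'primary',   'component'):  90.0,
    ('incl', 'secondary', 'component'):  90.0,
    ('incl', 'binary',    'component'):  90.0,
    ('incl', 'binary',    'constraint'): 'keeps inclinations aligned',
}

dict_filter = {'context': 'component'}

def lookup(qualifier):
    # Every lookup implicitly AND-s in the stored dict_filter
    wanted = dict_filter.get('context')
    return {k: v for k, v in params.items()
            if k[0] == qualifier and (wanted is None or k[2] == wanted)}

print(sorted(lookup('incl')))
```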
End of explanation
b.filter(qualifier='incl')
Explanation: Now we no longer see the constraint parameters.
All parameters are always accessible with method access:
End of explanation
b.set_value('dict_filter@setting', {})
Explanation: Now let's reset this option... keeping in mind that we no longer have access to the 'setting' context through twig access, we'll have to use methods to clear the dict_filter
End of explanation
b['run_checks_compute@setting']
b.add_dataset('lc')
b.add_compute('legacy')
print(b.run_checks())
b['run_checks_compute@setting'] = ['phoebe01']
print(b.run_checks())
Explanation: run_checks_compute
The run_checks_compute option allows setting the default compute option(s) sent to b.run_checks, including warnings in the logger raised by interactive checks (see phoebe.interactive_checks_on).
End of explanation |
6,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IST256 Lesson 13
Visualizations
Zybook Ch10
Links
Participation
Step1: A. y[ ['b'] == 'x' ]
B. y[ y['b'] == 'x' ]
C. y['b'] == 'x'
D. y[ y['b'] == y['x'] ]
Vote Now
Step2: A. y[ 'a','c' ]
B. y[ 'ac' ]
C. y[ ['a'],['c'] ]
D. y[ ['a','c'] ]
Vote Now
Step3: A. 10
B. 3
C. 6
D. 4
Vote Now | Python Code:
import pandas as pd
x = [ { 'a' :2, 'b' : 'x', 'c' : 10},
{ 'a' :4, 'b' : 'y', 'c' : 3},
{ 'a' :1, 'b' : 'x', 'c' : 6} ]
y = pd.DataFrame(x)
Explanation: IST256 Lesson 13
Visualizations
Zybook Ch10
Links
Participation: https://poll.ist256.com
Zoom Chat!
Agenda
Last Lecture... but we ain't gone!
Go over the homework
Project
Introduction to Data Visualization
Matplotlib Basics
Plot.ly Basics
Folium Basics
Project P2 Deliverable
Your rough draft is due Tuesday 5/11.
You must make a 10 minute appointment with your SG prof between 5/12 and 5/14 to go over your project and get verbal feedback.
Take notes at the meeting; we will expect you to take our feedback into consideration for your final submission.
Exam 4
Covers Lessons 11,12,13.
Issued on Monday 5/10 during our LARGE GROUP.

You are expected to sign into Zoom at 3:45pm.
Exam starts at 4pm EDT. There will be an exam password posted.
Students in Alex Smith's online section who cannot take the exam at 4PM EDT will need to arrange another time within 24 hours from the 5/10 4PM EDT.
FEQT (Future Exam Questions Training) 1
Only show part of the data frame where column b is an 'x' ?
End of explanation
import pandas as pd
x = [ { 'a' :2, 'b' : 'x', 'c' : 10},
{ 'a' :4, 'b' : 'y', 'c' : 3},
{ 'a' :1, 'b' : 'x', 'c' : 6} ]
y = pd.DataFrame(x)
y[ ['a','b'] ]
Explanation: A. y[ ['b'] == 'x' ]
B. y[ y['b'] == 'x' ]
C. y['b'] == 'x'
D. y[ y['b'] == y['x'] ]
Vote Now: https://poll.ist256.com
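To check an option like these empirically (re-creating the frame from the question), boolean-mask indexing is the standard pandas idiom:

```python
import pandas as pd

x = [{'a': 2, 'b': 'x', 'c': 10},
     {'a': 4, 'b': 'y', 'c': 3},
     {'a': 1, 'b': 'x', 'c': 6}]
y = pd.DataFrame(x)

# y['b'] == 'x' is a boolean Series; indexing the frame with it
# keeps only the rows where the mask is True (option B above).
subset = y[y['b'] == 'x']
print(subset)
```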
FEQT (Future Exam Questions Training) 2
Only show columns a and c ?
End of explanation
import pandas as pd
x = [ { 'a' :2, 'b' : 'x', 'c' : 10},
{ 'a' :4, 'b' : 'y', 'c' : 3},
{ 'a' :1, 'b' : 'x', 'c' : 6} ]
y = pd.DataFrame(x)
for z in y.to_records():
if z['a']>2:
print(z['c'])
Explanation: A. y[ 'a','c' ]
B. y[ 'ac' ]
C. y[ ['a'],['c'] ]
D. y[ ['a','c'] ]
Vote Now: https://poll.ist256.com
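Checked the same way, selecting multiple columns requires passing a list of column names (option D above):

```python
import pandas as pd

x = [{'a': 2, 'b': 'x', 'c': 10},
     {'a': 4, 'b': 'y', 'c': 3},
     {'a': 1, 'b': 'x', 'c': 6}]
y = pd.DataFrame(x)

# A list of column labels selects those columns as a new DataFrame;
# y['a', 'c'] would instead look for a single column named ('a', 'c').
cols = y[['a', 'c']]
print(cols)
```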
FEQT (Future Exam Questions Training) 3
What is the output of the following code:
End of explanation
import pandas as pd
x = [ { 'a' : {'b' : 'x', 'name' : 'mike'} , 'c' : 10},
{ 'a' : {'b' : 'y'}, 'c' : 3},
{ 'a' : {'b' : 'x'}, 'c' : 6} ]
y = pd.json_normalize(x)
y
Explanation: A. 10
B. 3
C. 6
D. 4
Vote Now: https://poll.ist256.com
FEQT (Future Exam Questions Training) 4
Which code will output the 2nd row in this data frame?
End of explanation |
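No answer options survive in this extract, so the snippet below only demonstrates one way to get the 2nd row; positional .iloc indexing is assumed here, not taken from the original quiz:

```python
import pandas as pd

x = [{'a': {'b': 'x', 'name': 'mike'}, 'c': 10},
     {'a': {'b': 'y'}, 'c': 3},
     {'a': {'b': 'x'}, 'c': 6}]
y = pd.json_normalize(x)   # nested dicts become dotted columns: 'a.b', 'a.name'

row2 = y.iloc[1]           # positional index 1 -> the 2nd row
print(row2)
```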
6,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Models Exercise 2
Imports
Step1: Fitting a decaying oscillation
For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays
Step2: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Fitting Models Exercise 2
Imports
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
with np.load('decay_osc.npz') as data:
tdata = data['tdata']
ydata = data['ydata']
dy = data['dy']
plt.scatter(tdata,ydata, color = 'k')
plt.errorbar(tdata, ydata, dy,
fmt='.k', ecolor='gray')
plt.box(False)
plt.title("Raw Data")
plt.xlabel('Time')
plt.ylabel('$y(t)$');
assert True # leave this to grade the data import and raw data plot
Explanation: Fitting a decaying oscillation
For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays:
tdata: an array of time values
ydata: an array of y values
dy: the absolute uncertainties (standard deviations) in y
Your job is to fit the following model to this data:
$$ y(t) = A e^{-\lambda t} \cos(\omega t + \delta) $$
First, import the data using NumPy and make an appropriately styled error bar plot of the raw data.
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
def funct(t, a, lam, omega, delta):
return a * np.exp(-lam * t) * np.cos(omega * t + delta)
theta, theta_dev = opt.curve_fit(funct, tdata, ydata, sigma=dy, absolute_sigma=True)
print('A = {0:.3f} +/- {1:.3f}'.format(theta[0], np.sqrt(theta_dev[0,0])))
print('λ = {0:.3f} +/- {1:.3f}'.format(theta[1], np.sqrt(theta_dev[1,1])))
print('ω = {0:.3f} +/- {1:.3f}'.format(theta[2], np.sqrt(theta_dev[2,2])))
print('δ = {0:.3f} +/- {1:.3f}'.format(theta[3], np.sqrt(theta_dev[3,3])))
plt.scatter(tdata,ydata, color = 'k')
plt.errorbar(tdata, ydata, dy,
fmt='.k', ecolor='gray')
yfit = funct(tdata, theta[0],theta[1],theta[2],theta[3])
plt.plot(tdata, yfit, color = 'b')
plt.box(False)
plt.title("Raw Data with a Curve Fit")
plt.xlabel('Time')
plt.ylabel('$y(t)$');
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
Explanation: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters:
Print the parameter estimates and uncertainties.
Plot the raw and best fit model.
You will likely have to pass an initial guess to curve_fit to get a good fit.
Treat the uncertainties in $y$ as absolute errors by passing absolute_sigma=True.
End of explanation |
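Since decay_osc.npz is not available here, the same workflow can be sanity-checked end-to-end on synthetic data. All parameter values below are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_osc(t, A, lam, omega, delta):
    return A * np.exp(-lam * t) * np.cos(omega * t + delta)

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 300)
true_params = (5.0, 0.4, 2.0, 0.3)                       # made-up ground truth
ydata = decay_osc(t, *true_params) + 0.05 * rng.normal(size=t.size)

# An initial guess near the truth keeps the oscillatory fit from
# locking onto a wrong frequency.
popt, pcov = curve_fit(decay_osc, t, ydata, p0=(4.0, 0.5, 2.2, 0.0))
perr = np.sqrt(np.diag(pcov))
print(popt)
print(perr)
```

The recovered parameters should land close to the generating values, with uncertainties of the order of the added noise.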
6,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='top'></a>
Kolmogorov-Smirnov test
In this notebook we will illustrate the use of the Kolmogorov-Smirnov test (K-S test) using functions from the SciPy stats module. In particular, we will look at the influence of the sample size.
Table of contents
Preamble
Data setup
K-S test
Other tests
Odds and ends
Preamble
The first step is to set up the computational environment. For that we will start by importing the major packages needed for this notebook
Step1: Let us check the versions being used
Step2: Now, we import the stats module from the scipy package plus additional modules/packages required by this notebook
Step3: To make the plots visible inside the notebook, we need the following Jupyter "magic" command
Step4: Now we are ready to start exploring.
Back to top
Data setup
First of all, we will define a normal random variable (RV) with mean 1.0 and standard deviation 0.1
Step5: This is said to be a 'frozen' RV object since the location and scale parameters are given. Let us plot the probability density function (pdf) as well as the cumulative density function (cdf), between percentiles 0.001 and 0.999, to check the random variable
Step6: Now we will generate nine random samples, ranging from 10 to 5000 samples, and look at them in a histogram plot together with the normal pdf to see how they match
Step7: We used a different number of bins in the histogram plots in order to increase the discretization. As can be seen, the larger the size of the random sample, the better it fits the normal pdf. Similarly, we can plot the random samples, after ordering them in ascending order (empirical cumulative distribution function), together with the normal cdf
Step8: Interestingly, the larger the sample size the better it fits the normal cdf. But then again, this is what can be expected after seeing the histogram plots.
K-S test
Finally, let us compute the K-S test for goodness of fit
Step9: Let us plot these results to see that they are pretty much scattered
Step10: It is curious to see that the highest p-value does not occur for the lowest K-S statistic (D) value. In fact, it seems that, as the size of the sample increases, the p-value does not show a tendency to increase, despite the fact that both the histogram plot and the ordered sample plot seem to fit better the normal pdf and cdf, respectively.
Other tests
For comparison, we will perform two other tests. First, the Anderson-Darling test (A-D test), for data coming from a particular distribution
Step11: Second, since we are using a normal random variable, the Shapiro-Wilk test (S-W test) for normality
Step12: It is not easy to plot the A-D test results, but the S-W test results are | Python Code:
import sys
import math
import numpy as np
import scipy as sp
import matplotlib as mpl
import pandas as pd
Explanation: <a id='top'></a>
Kolmogorov-Smirnov test
In this notebook we will illustrate the use of the Kolmogorov-Smirnov test (K-S test) using functions from the SciPy stats module. In particular, we will look at the influence of the sample size.
Table of contents
Preamble
Data setup
K-S test
Other tests
Odds and ends
Preamble
The first step is to set up the computational environment. For that we will start by importing the major packages needed for this notebook:
End of explanation
print('System: {}'.format(sys.version))
for package in (np, sp, mpl, pd):
print('Package: {} {}'.format(package.__name__, package.__version__))
Explanation: Let us check the versions being used:
End of explanation
from scipy import stats
import matplotlib.pyplot as plt
Explanation: Now, we import the stats module from the scipy package plus additional modules/packages required by this notebook:
End of explanation
%matplotlib inline
Explanation: To make the plots visible inside the notebook, we need the following Jupyter "magic" command:
End of explanation
mean = 1.0
std = 0.1
rv = stats.norm(loc=mean, scale=std)
Explanation: Now we are ready to start exploring.
Back to top
Data setup
First of all, we will define a normal random variable (RV) with mean 1.0 and standard deviation 0.1:
End of explanation
x = np.linspace(rv.ppf(0.001), rv.ppf(0.999), 1000)
fig, ax = plt.subplots(1, 1)
ax.plot(x, rv.pdf(x), label='frozen pdf')
ax.plot(x, rv.cdf(x), lw=2, label='frozen cdf')
ax.axhline(0.5, ls=':')
ax.axhline(1.0, ls=':')
ax.axvline(mean, ls='-.', label='mean')
ax.legend(loc='best', frameon=False)
plt.show()
Explanation: This is said to be a 'frozen' RV object since the location and scale parameters are given. Let us plot the probability density function (pdf) as well as the cumulative density function (cdf), between percentiles 0.001 and 0.999, to check the random variable:
End of explanation
Nsamples = [u*d for d in (10,100,1000) for u in (1,2,5)]
ysamples = []
fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(8,6))
for row in range(3):
for col in range(3):
N = Nsamples[3*row+col]
ax[row,col].plot(x, rv.pdf(x), 'k-', lw=2)
y = rv.rvs(size=N)
b = int(math.sqrt(N))
ysamples.append(y)
ax[row,col].hist(y, bins=b, density=True, histtype='stepfilled', alpha=0.2)
ax[row,col].set_title('{} samples, {} bins'.format(N, b))
plt.show()
Explanation: Now we will generate nine random samples, ranging from 10 to 5000 samples, and look at them in a histogram plot together with the normal pdf to see how they match:
End of explanation
fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(8,6))
for row in range(3):
for col in range(3):
N = Nsamples[3*row+col]
y = ysamples[3*row+col]
ax[row,col].plot(x, rv.cdf(x), 'k-', lw=2)
yn = np.sort(y)
xn = np.linspace(0., 1., num=N)
ax[row,col].plot(yn, xn, 'o', alpha=0.2)
ax[row,col].axhline(0.5, ls=':')
ax[row,col].axvline(mean, ls=':')
ax[row,col].set_title('{} samples'.format(N))
plt.show()
Explanation: We used a different number of bins in the histogram plots in order to increase the discretization. As can be seen, the larger the size of the random sample, the better it fits the normal pdf. Similarly, we can plot the random samples, after ordering them in ascending order (empirical cumulative distribution function), together with the normal cdf:
End of explanation
Dsamples = []
psamples = []
for N,y in zip(Nsamples, ysamples):
D, pvalue = stats.kstest(y, 'norm', args=(mean, std))
Dsamples.append(D)
psamples.append(pvalue)
print('{:4d} samples: D={}, pvalue={}'.format(N, D, pvalue))
Explanation: Interestingly, the larger the sample size the better it fits the normal cdf. But then again, this is what can be expected after seeing the histogram plots.
K-S test
Finally, let us compute the K-S test for goodness of fit:
End of explanation
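For calibration of what D and the p-value look like, a seeded check may help: a sample actually drawn from N(1, 0.1) gives a small D, while a clearly different distribution is rejected outright. The uniform alternative below is made up purely for contrast:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Sample from the *same* normal the test assumes -> D should be small.
good = rng.normal(loc=1.0, scale=0.1, size=500)
D_good, p_good = stats.kstest(good, 'norm', args=(1.0, 0.1))
print(D_good, p_good)

# Sample from a much wider uniform -> large D, tiny p-value.
bad = rng.uniform(0.5, 1.5, size=500)
D_bad, p_bad = stats.kstest(bad, 'norm', args=(1.0, 0.1))
print(D_bad, p_bad)
```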
fig, ax = plt.subplots(1, 1)
ax.scatter(psamples, Dsamples, s=Nsamples, alpha=0.2)
for p,D,N in zip(psamples, Dsamples, Nsamples):
ax.text(p, D, str(N), ha='center', va='center')
ax.set_xlabel('p-value')
ax.set_ylabel('D')
ax.set_title('K-S test results')
plt.show()
Explanation: Let us plot these results to see that they are pretty much scattered:
End of explanation
for N,y in zip(Nsamples, ysamples):
A2, critical_values, significance_level = stats.anderson(y)
print('{:4d} samples: A2={}'.format(N, A2), critical_values, significance_level)
Explanation: It is curious to see that the highest p-value does not occur for the lowest K-S statistic (D) value. In fact, it seems that, as the size of the sample increases, the p-value does not show a tendency to increase, despite the fact that both the histogram plot and the ordered sample plot seem to fit better the normal pdf and cdf, respectively.
Other tests
For comparison, we will perform two other tests. First, the Anderson-Darling test (A-D test), for data coming from a particular distribution:
End of explanation
Wsamples = []
psamples = []
for N,y in zip(Nsamples, ysamples):
W, pvalue = stats.shapiro(y)
Wsamples.append(W)
psamples.append(pvalue)
print('{:4d} samples: W={}, pvalue={}'.format(N, W, pvalue))
Explanation: Second, since we are using a normal random variable, the Shapiro-Wilk test (S-W test) for normality:
End of explanation
fig, ax = plt.subplots(1, 1)
ax.scatter(psamples, Wsamples, s=Nsamples, alpha=0.2)
for p,W,N in zip(psamples, Wsamples, Nsamples):
ax.text(p, W, str(N), ha='center', va='center')
ax.set_xlabel('p-value')
ax.set_ylabel('W')
ax.set_title('S-W test results')
plt.show()
Explanation: It is not easy to plot the A-D test results, but the S-W test results are:
End of explanation |
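One more seeded sanity check: for a sample genuinely drawn from a normal distribution, W stays very close to 1. Note that shapiro, unlike kstest above, takes no location/scale arguments; it tests normality with the parameters estimated from the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=1.0, scale=0.1, size=200)
W, pvalue = stats.shapiro(normal_sample)
print(W, pvalue)
```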
6,084 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have set up a GridSearchCV and have a set of parameters, with which I will find the best combination of parameters. My GridSearch consists of 12 candidate models total.
import numpy as np
import pandas as pd
from sklearn.model_selection import GridSearchCV
GridSearch_fitted = load_data()
assert isinstance(GridSearch_fitted, GridSearchCV)
full_results = pd.DataFrame(GridSearch_fitted.cv_results_) |
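A self-contained sketch of the same pattern on toy data (the estimator, grid, and data below are made up; any fitted GridSearchCV exposes cv_results_ the same way):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Tiny synthetic classification problem.
X = np.arange(30, dtype=float).reshape(-1, 1)
y = (X[:, 0] > 14.5).astype(int)

gs = GridSearchCV(LogisticRegression(max_iter=500),
                  param_grid={'C': [0.1, 1.0, 10.0]}, cv=3)
gs.fit(X, y)

# cv_results_ is a dict of equal-length arrays: one entry per candidate.
full_results = pd.DataFrame(gs.cv_results_)
print(full_results[['params', 'mean_test_score', 'rank_test_score']])
```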
6,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LeNet Lab Solution
Source
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
Step2: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
Step3: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
Step4: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
Step5: SOLUTION
Step6: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
Step7: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
Step8: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
Step9: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
Step10: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section. | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
Explanation: LeNet Lab Solution
Source: Yan LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
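The pad-width tuples map one-to-one onto the four axes of the (batch, height, width, channels) array: nothing on batch and channels, two zeros on each side of height and width. A quick shape check on a dummy batch:

```python
import numpy as np

batch = np.ones((3, 28, 28, 1))                       # dummy batch of 28x28 images
padded = np.pad(batch, ((0, 0), (2, 2), (2, 2), (0, 0)), 'constant')

print(padded.shape)        # height/width grow by 2+2 on each axis: (3, 32, 32, 1)
```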
import random
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 64
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: SOLUTION: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the 2nd fully connected layer.
End of explanation
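The layer shapes quoted above all follow from the usual VALID convolution/pooling size formula (W - F + 2P)/S + 1; a tiny TensorFlow-independent helper makes the chain explicit:

```python
def conv_out(size, filt, stride=1, pad=0):
    # Output width/height of a convolution: (W - F + 2P) // S + 1
    return (size - filt + 2 * pad) // stride + 1

def pool_out(size, k, stride):
    return (size - k) // stride + 1

s = conv_out(32, 5)        # layer 1 conv: 32 -> 28
s = pool_out(s, 2, 2)      # max pool:     28 -> 14
s = conv_out(s, 5)         # layer 2 conv: 14 -> 10
s = pool_out(s, 2, 2)      # max pool:     10 -> 5
flat = s * s * 16          # flatten 5x5x16 -> 400
print(s, flat)
```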
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
with tf.device('/cpu:0'):
one_hot_y = tf.one_hot(y, 10)
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
print("Complete")
Explanation: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation |
6,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Showing Csound k-Values in Matplotlib Animation
The goal of this notebook is to show how Csound control signals can be seen in real-time in the Python Matplotlib using the Animation module. This can be quite instructive for teaching Csound. Written by Joachim Heintz, August 2019.
Choosing the matplotlib backend
Not every matplotlib backend is capable to show animations. At the time of writing this notebook, the option %matplolib inline can only diplay static images. Use %matplotlib instead
Step1: The backend Qt5Agg is — as well as some others — capable to show animations. (If necessary, you should be able to choose an appropriate backend by editing your matplotlib.rc file.)
Basic animation in matplotlib
For a basic animation using the FuncAnimation we only need two elements
Step2: You should see a line which starts at (0,0) and moves in five seconds to (5,1).
Displaying a Csound control signal
If we want to reproduce this very basic example by using a Csound control signal rather than the y-signal generated in the animate function, we have to do this
Step3: You should see more or less the same here
Step4: Approaching the comfort zone
The next version applies some more consistency to the variable settings. You can set any frame rate in milliseconds in the tmint variable. And the x-axis will shift if the time has reached 4/5 of its size. So you can watch how the line moves as long as your instrument duration allows ...
Step5: Latency and further optimizations
The goal of the approach here is not to have live video for a musical performance, but to use the nice features of matplotlib for showing how a control signal is moving. But it seems that even for simple sounding examples it works, as the example below suggests.
There are a number of optimizations which I have not used. If necessary, they should improve the performance | Python Code:
%matplotlib qt5
Explanation: Showing Csound k-Values in Matplotlib Animation
The goal of this notebook is to show how Csound control signals can be seen in real-time in the Python Matplotlib using the Animation module. This can be quite instructive for teaching Csound. Written by Joachim Heintz, August 2019.
Choosing the matplotlib backend
Not every matplotlib backend is capable of showing animations. At the time of writing this notebook, the option %matplotlib inline can only display static images. Use %matplotlib instead:
End of explanation
from matplotlib import pyplot as plt
from matplotlib import animation
fig, ax = plt.subplots()
ax.set(xlim=(0,5), ylim=(0,1))
line, = ax.plot([], [], lw=2)
def animate(i, x=[], y=[]):
x.append(i/10)
y.append(i/50)
line.set_data(x, y)
anim = animation.FuncAnimation(fig, animate, interval=100)
Explanation: The backend Qt5Agg is — as well as some others — capable of showing animations. (If necessary, you should be able to choose an appropriate backend by editing your matplotlib.rc file.)
Basic animation in matplotlib
For a basic animation using the FuncAnimation we only need two elements:
* a matplotlib figure
* an animation function
The figure can be created in many ways in matplotlib. I choose here the subplots() function in pyplot. It returns a figure and an axes object. The figure object is needed as input for the FuncAnimation. The axes object is modified by some settings, and the method plot returns a Line2D object which will then be modified during the animation.
The animation function is updated in the call to FuncAnimation every interval (default=200) milliseconds. The variable i in this function is a frame counter, starting from zero.
End of explanation
import ctcsound as csound
from matplotlib import pyplot as plt
from matplotlib import animation
orc = '''
instr 1
kVal linseg 0, p3, 1
chnset kVal, "val"
endin
'''
sco = "i1 0 5\n" #try 0.2 as start instead
cs = csound.Csound()
cs.setOption('-odac')
cs.compileOrc(orc)
cs.readScore(sco)
cs.start()
pt = csound.CsoundPerformanceThread(cs.csound())
pt.play()
fig, ax = plt.subplots()
ax.set(xlim=(0,5), ylim=(0,1))
line, = ax.plot([], [], lw=2)
def animate(i, x=[], y=[]):
x.append(i/10)
y.append(cs.controlChannel('val')[0])
line.set_data(x, y)
anim = animation.FuncAnimation(fig, animate, interval=100)
Explanation: You should see a line which starts at (0,0) and moves in five seconds to (5,1).
Displaying a Csound control signal
If we want to reproduce this very basic example by using a Csound control signal rather than the y-signal generated in the animate function, we have to do this:
create the signal in Csound and send it via chnset
receive the signal in the animation function
The crucial point here is to run the csound instance in a way that it does not block the execution of the animation. This can be easily done in the way which is shown by François Pinot in the threading notebook.
Note: close the precedent graphics canvas window before lauching the next example.
End of explanation
pt.stop()
pt.join()
cs.reset()
Explanation: You should see more or less the same here: a line starting from (0,0) to (5,1).
Well, more or less... Depending on the time the backend needs to create the canvas, your line will be shifted a bit. A simple way to deal with this is to start the first instrument a bit later. In my case, 0.2 instead of 0 is a good option.
Remember to execute these commands before you run the example again:
End of explanation
import ctcsound as csound
from matplotlib import pyplot as plt
from matplotlib import animation
orc = '''
ksmps = 128
seed 0
instr 1
kVal randomi 0, 1, 1, 3
chnset kVal, "val"
endin
'''
sco = "i1 0.2 99999\n"
#plot and animation settings
xlim=(0,5)
ylim=(0,1)
tmint = 100 #time interval in ms
cschn = 'val' #csound channel name
cs = csound.Csound()
cs.setOption('-odac')
cs.compileOrc(orc)
cs.readScore(sco)
cs.start()
pt = csound.CsoundPerformanceThread(cs.csound())
pt.play()
fig, ax = plt.subplots()
ax.set(xlim=xlim, ylim=ylim)
line, = ax.plot([], [], lw=2)
fps = 1000/tmint
xrange = xlim[1] - xlim[0]
xshow = 4/5
xclear = 1-xshow
def animate(i, x=[], y=[]):
x.append(i/fps)
y.append(cs.controlChannel(cschn)[0])
line.set_data(x, y)
if i > fps*xrange*xshow:
ax.set_xlim(i/fps-xrange*xshow,i/fps+xrange*xclear)
anim = animation.FuncAnimation(fig, animate, interval=tmint)
pt.stop()
pt.join()
cs.reset()
Explanation: Approaching the comfort zone
The next version applies some more consistency to the variable settings. You can set any frame rate in milliseconds in the tmint variable. And the x-axis will shift if the time has reached 4/5 of its size. So you can watch how the line moves as long as your instrument duration allows ...
End of explanation
import ctcsound as csound
from matplotlib import pyplot as plt
from matplotlib import animation
orc = '''
ksmps = 128
nchnls = 2
0dbfs = 1
seed 0
instr 1
kMidiPitch randomi 57, 62, 1, 3
kVibr = poscil:k(randomi:k(0,1,.2,3),randomi:k(3,8,1))
kDb randomi -20, 0, 1/3, 3
kPan randomi 0, 1, 1, 3
chnset kMidiPitch, "pitch"
chnset kDb, "vol"
chnset kPan, "pan"
aSnd vco2 ampdb(kDb), mtof(kMidiPitch+kVibr)
aL, aR pan2 aSnd, kPan
out aL, aR
endin
'''
sco = "i1 0.2 99999\n"
xlim_pv=(0,5)
xlim_pan=(0,1)
ylim_pch=(57,62)
ylim_vol=(-20,0)
ylim_pan=(0,0.2)
tmint = 100
chn_pch = 'pitch'
chn_vol = 'vol'
chn_pan = 'pan'
cs = csound.Csound()
cs.setOption('-odac')
cs.compileOrc(orc)
cs.readScore(sco)
cs.start()
pt = csound.CsoundPerformanceThread(cs.csound())
pt.play()
fig, ax = plt.subplots(3, tight_layout=True, gridspec_kw={'height_ratios': [3, 3, 1]})
ax[0].set(xlim=xlim_pv, ylim=ylim_pch, title='Pitch', xticks=())
ax[1].set(xlim=xlim_pv, ylim=ylim_vol, title='Volume (dB)', xticks=())
ax[2].set(xlim=xlim_pan, ylim=ylim_pan, title='Pan', xticks=[0,0.5,1], xticklabels=['L','M','R'], yticks=())
ax[0].spines['top'].set_visible(False)
ax[1].spines['top'].set_visible(False)
ax[2].spines['top'].set_visible(False)
ax[0].spines['right'].set_visible(False)
ax[1].spines['right'].set_visible(False)
ax[2].spines['right'].set_visible(False)
ax[2].spines['left'].set_visible(False)
pchline, = ax[0].plot([], [], lw=2, c='r')
volline, = ax[1].plot([], [], lw=2, c='b')
panpnt, = ax[2].plot(0.5, 0.1, 'go', lw=4)
fps = 1000/tmint
xrange = xlim_pv[1] - xlim_pv[0]
xshow = 4/5
xclear = 1-xshow
def animate(i, x_pv=[], y_pch=[], y_vol=[]):
x_pv.append(i/fps)
y_pch.append(cs.controlChannel(chn_pch)[0])
pchline.set_data(x_pv, y_pch)
y_vol.append(cs.controlChannel(chn_vol)[0])
volline.set_data(x_pv, y_vol)
if i > fps*xrange*xshow:
ax[0].set_xlim(i/fps-xrange*xshow,i/fps+xrange*xclear)
ax[1].set_xlim(i/fps-xrange*xshow,i/fps+xrange*xclear)
x_pan = cs.controlChannel(chn_pan)[0]
panpnt.set_data(x_pan,0.1)
anim = animation.FuncAnimation(fig, animate, interval=tmint)
pt.stop()
pt.join()
cs.reset()
Explanation: Latency and further optimizations
The goal of the approach here is not to produce live video for a musical performance, but to use the nice features of matplotlib for showing how a control signal is moving. But it seems to work even for simple-sounding examples, as the example below suggests.
There are a number of optimizations which I have not used. If necessary, they should improve the performance:
* On the matplotlib side, an init function can be used. Depending on the kind of animation, the option blit=True can save some speed (in this case, the init and animate functions must then return the line variable).
* On the ctcsound side, using the method channelPtr rather than the raw controlChannel should be more efficient.
End of explanation |
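A minimal sketch of the init/blit variant mentioned above (the csound channel read is replaced by a placeholder value here, so it runs without an audio backend):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
from matplotlib import pyplot as plt
from matplotlib import animation

fig, ax = plt.subplots()
ax.set(xlim=(0, 5), ylim=(0, 1))
line, = ax.plot([], [], lw=2)

def init():
    line.set_data([], [])
    return line,                     # blitting needs the changed artists back

def animate(i, x=[], y=[]):
    x.append(i / 10)
    y.append(0.5)                    # placeholder for cs.controlChannel(...)[0]
    line.set_data(x, y)
    return line,

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               interval=100, blit=True)
```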
6,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trend analysis with bootstrap resampling
Python / Numpy implementation of the trend analysis presented in Gardiner et al., 2008
The following model is used to fit the annual trend (drift) + intra-annual variability (i.e., a Fourier series truncated at degree $n$)
Step4: Implementation
Step5: Test data
A test with a dataset generated from given trend and intra-annual variability, with random noise.
Step6: Statistical significance using bootstrap resampling | Python Code:
import pprint
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: Trend analysis with bootstrap resampling
Python / Numpy implementation of the trend analysis presented in Gardiner et al., 2008
The following model is used to fit the annual trend (drift) + intra-annual variability (i.e., a Fourier series truncated at degree $n$):
$$ F(t, p, \alpha, \mathbf{\beta}) = p + \alpha t + \left[\beta_1 \cos(\frac{\pi t}{L}) + \beta_2 \sin(\frac{\pi t}{L}) + \beta_3 \cos(\frac{2 \pi t}{L}) + \beta_4 \sin(\frac{2 \pi t}{L}) + \cdots + \beta_{2n-1} \cos(\frac{n \pi t}{L}) + \beta_{2n} \sin(\frac{n \pi t}{L}) \right]$$
where $p$ is the intercept, $\alpha$ is the trend (slope), $\mathbf{\beta}$ are the truncated Fourier series parameters, and $L$ is the half-period (= 0.5 if $t$ is in years).
Model parameters are estimated using the weighted linear least squares method. Bootstrap resampling can be used to evaluate the confidence intervals on the parameter estimates (more robust in case of non-normally distributed random effects in the data).
Author: B. Bovy | GIRPAS, ULg
End of explanation
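For reference, the model itself can be transcribed directly (a sketch with arbitrary parameters, independent of the fitting code below):

```python
import numpy as np

def F(t, p, alpha, beta, L=0.5):
    """Drift + truncated Fourier series, as in the formula above."""
    k = np.arange(1, len(beta) // 2 + 1)          # harmonic numbers 1..n
    arg = np.outer(t, k) * np.pi / L
    terms = beta[0::2] * np.cos(arg) + beta[1::2] * np.sin(arg)
    return p + alpha * t + terms.sum(axis=1)

t = np.linspace(0, 2, 5)
F(t, p=2.0, alpha=0.1, beta=np.array([0.4, 0.0]))  # degree n = 1
```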
def fourier_basis(x, degree, half_period):
Returns a 2-d array of fourier basis.
A = np.ones((x.size, 2 * degree + 1))
for d in range(1, degree + 1):
A[:, 2*d-1] = np.cos(d * np.pi * x / half_period)
A[:, 2*d] = np.sin(d * np.pi * x / half_period)
return A
def fit_driftfourier(x, data, weights, degree, half_period=0.5):
Fit y = f(x - x.min()) to data where f is given by
fourier series + drift.
Parameters
----------
x : 1-d array
x-coordinates
data : 1-d array
data values
weights : 1-d array
weights (>=0)
degree : int
degree of fourier series
half_period : float
half period
Returns
-------
intercept : float
intercept at x.min()
slope : float
slope (drift) for the normalized data
(x - x.min())
pfourier : 1-d array
Fourier series parameters for the
normalized data
f_drift : callable
Can be used to calculate the drift
given any (non-normalized) x
f_fourier : callable
Can be used to calculate fourier series
f_driftfourier : callable
Can be used to calculate drift + fourier
residual_std : float
estimated standard deviation of residuals
A : 2-d array
matrix of "coefficients"
xmin = x.min()
xnorm = x - xmin
# coefficient matrix
A = np.ones((x.size, 2 * degree + 2))
A[:, 1] = xnorm
A[:, 2:] = fourier_basis(xnorm, degree, half_period)[:, 1:]
# linear weighted least squares
results = np.linalg.lstsq(A * weights[:, np.newaxis],
data * weights)
params = results[0]
intercept = params[0]
slope = params[1]
pfourier = params[2:]
f_drift = lambda t: slope * (t - xmin) + intercept
f_fourier = lambda t: np.sum(fourier_basis(t - xmin, degree,
half_period)[:, 1:]
* pfourier[np.newaxis, :],
axis=1) + intercept
f_driftfourier = lambda t: f_drift(t) + f_fourier(t) - intercept
residual_std = np.sqrt(results[1][0] / (x.size - (2 * degree + 2)))
return (intercept, slope, pfourier,
f_drift, f_fourier, f_driftfourier,
residual_std, A)
def cf_driftfourier(x, data, weights, degree,
half_period=0.5, nboot=5000,
percentiles=(2.5, 50., 97.5)):
Calculate confidence intervals for the fitted
parameters from fourier series + drift modelling,
using bootstrap resampling.
Parameters
----------
nboot : int
number of bootstrap replicates
percentiles : sequence of floats
percentiles of parameter estimate
distributions to return
Returns
-------
perc : dict
percentiles for of each parameter
distribution
intercept : 1-d array
intercept estimates from bootstraped
datasets.
slope : 1-d array
slope estimates
pfourier : 2-d array
fourier parameters estimates
See Also
--------
:func:`fit_driftfourier`
# 1st fit without bootstraping
results = fit_driftfourier(x, data, weights,
degree, half_period)
f_driftfourier = results[5]
A = results[7]
model = f_driftfourier(x)
residuals = data - model
# generate bootstrap resamples of residuals
# and new datasets from these resamples
boot_dataset = np.empty((x.size, nboot))
for i in range(nboot):
resample_i = np.floor(np.random.rand(x.size) * x.size).astype(int)
resample_residuals = residuals[resample_i]
boot_dataset[:, i] = model + resample_residuals
# fit all bootstrap datasets
results_boot = np.linalg.lstsq(A * weights[:, np.newaxis],
boot_dataset * weights[:, np.newaxis])
params_boot = results_boot[0]
# compute percentiles
perc_boot = np.column_stack(np.percentile(params_boot,
percentiles, axis=1))
perc = {'intercept' : perc_boot[0],
'slope' : perc_boot[1],
'pfourier' : perc_boot[2:]}
intercept = params_boot[0]
slope = params_boot[1]
pfourier = params_boot[2:]
return perc, intercept, slope, pfourier
Explanation: Implementation
End of explanation
sns.set_context("notebook")
np.random.seed(seed=0)
intercept = 2.
trend = 0.1
variability = 0.4
noise = 0.05
x = np.arange(2009, 2012, 0.01)
x += np.random.random(size=x.size) * 0.1
y = variability * np.cos(x * np.pi / 0.5) + trend * (x - x.min()) + intercept
y += np.random.normal(scale=noise, size=y.size)
w = np.ones_like(x)
res = fit_driftfourier(x, y, w, 2)
print("fitted trend (slope): {}".format(res[1]))
print("fitted intercept at xmin: {}".format(res[0]))
print("fitted fourier parameters: {}".format(res[2]))
print("std of residuals: {}".format(res[6]))
f_drift, f_fourier, f_driftfourier = res[3:6]
plt.scatter(x, y, label='data')
plt.plot(x, f_drift(x), label='fitted annual trend')
plt.plot(x, f_driftfourier(x),
label='fitted annual trend + intra-annual variability')
plt.legend()
Explanation: Test data
A test with a dataset generated from given trend and intra-annual variability, with random noise.
End of explanation
cf, intercept, slope, pfourier = cf_driftfourier(x, y, w, 2,
nboot=5000,
percentiles=(2.5, 50., 97.5))
pprint.pprint(cf)
f, axes = plt.subplots(1, 2, figsize=(10, 3))
sns.distributions.distplot(slope, kde=False, ax=axes[0])
sns.distributions.distplot(intercept, kde=False, ax=axes[1])
Explanation: Statistical significance using bootstrap resampling: 2.5%, 50% and 97.5% percentiles for each fitted parameter and histogram of the distribution obtained for the annual trend (slope) and intercept estimates.
End of explanation |
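The percentile choice above corresponds to a 95% percentile-bootstrap confidence interval; as a generic sketch:

```python
import numpy as np

def bootstrap_ci(samples, level=95.0):
    """Percentile bootstrap confidence interval from replicate estimates."""
    lo = (100.0 - level) / 2.0
    return np.percentile(samples, [lo, 100.0 - lo])

rng = np.random.RandomState(0)
bootstrap_ci(rng.normal(0.0, 1.0, 10000))  # roughly [-1.96, 1.96]
```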
6,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lightning data analysis (from WWLN or Blitzortung)
Development notebook
This iPython notebook extracts lightning data from raw WWLN data files or the Blitzortung network.
Code by
Step1: Notes
It looks like you need to be an authenticated user to make a request to call back past data. BUT, after the data is called, it is added to a temporary location, at http
Step2: Define parameters
Step3: Load WWLN data and analyze it
Step4: Save data
Step5: Load data (Blitzortung)
Step6: Plot lightning results
Step7: Map data ## | Python Code:
# Load required packages
import numpy as np
import datetime as dt
from datetime import timedelta
import pandas as pd
from tqdm import tqdm
import os
import pkg_resources as pkg
import geopandas as gpd
from shapely.geometry import Point
from bokeh.plotting import Figure, show, output_notebook, vplot
from bokeh.charts import Bar
from bokeh.io import gridplot
from bokeh.models import HoverTool, Span, Range1d, LinearAxis
from bokeh.models.sources import ColumnDataSource
import json
import gzip
from bs4 import BeautifulSoup as bs
import requests
import stormstats
import codecs
import folium
output_notebook()
"http://en.blitzortung.org/archive_data.php?session=1&stations_users=0&selected_numbers=*&end_date=1478131200&end_time=43200&start_date=1478131200&start_time=0&rawdata_image=0&north=90&west=-180&east=180&south=-90&map=0&width_orig=640&width_result=640&agespan=60&frames=12&delay=100&last_delay=1000&show_result=1"
Explanation: Lightning data analysis (from WWLN or Blitzortung)
Development notebook
This iPython notebook extracts lightning data from raw WWLN data files or the Blitzortung network.
Code by: Jasa Calogovic (Faculty of Geodesy, University of Zagreb) and Benjamin Laken (UCL)
Email: jcalogovic@geof.hr
End of explanation
url = "http://en.blitzortung.org/archive_data.php?session=1&stations_users=0"\
"&selected_numbers=*&end_date=1478131200&end_time=76200&start_date=1478131200&"\
"start_time=69000&rawdata_image=0&north=90&west=-180&east=180&south=-90&map=0&"\
"width_orig=640&width_result=640&agespan=60&frames=12&delay=100&last_delay=1000&show_result=1"
#csv_target ="/Tmp/archive_1_raw.txt.gz"
url
import wget
url = "http://en.blitzortung.org/Tmp/archive_2_raw.txt.gz"
fname = wget.download(url)
requests.get(url, stream=True)
#r = requests.get(url, stream=True, auth=HTTPBasicAuth('user', 'pass'))
r = requests.get(url)
soup = bs(r.text, "lxml")
r
line
for line in soup:
print(line)
tst = r.content
bs(r.text)
stormstats.downloader.get_data(start="2015-02-01T06:30", end="2015-02-01T10:05")
stormstats.storm.gen_listfiles(data_path='./tmp_data/', ext='.gz')
import pandas as pd
import json
# json.JSONDecoder doesn't take a filename; load the file instead
tmp = json.load(open('tmp_data/bz-2015-02-01-06-40.json'))
pd.read_json('tmp_data/bz-2015-02-01-06-40.json')
Explanation: Notes
It looks like you need to be an authenticated user to make a request to call back past data. BUT, after the data is called, it is added to a temporary location, at http://en.blitzortung.org/Tmp/, which anyone can access. It seems that all files are kept there temporarily only, and the filenames they are given (for CSV-type files) are archive_x_raw.txt.gz (where x is a positive integer probably corresponding to the request). So it seems that I can construct an HTTP GET request via an authenticated session, and then try to download the archive file after.
I will probably have to try an algorithm to see what was the last file added to the /Tmp folder, and assume that it is mine...
End of explanation
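As a sketch of that "last file added" idea (assuming the /Tmp index page is a plain HTML directory listing — untested against the real server):

```python
import re

def latest_archive(listing_html):
    """Pick the archive_<n>_raw.txt.gz link with the highest request number."""
    numbers = re.findall(r"archive_(\d+)_raw\.txt\.gz", listing_html)
    if not numbers:
        return None
    return "archive_%d_raw.txt.gz" % max(int(n) for n in numbers)

# e.g. listing_html = requests.get("http://en.blitzortung.org/Tmp/").text
sample = '<a href="archive_1_raw.txt.gz">a</a> <a href="archive_3_raw.txt.gz">b</a>'
latest_archive(sample)  # 'archive_3_raw.txt.gz'
```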
data_path='data/WWLN/Jan2016/'
#data_path = "WWLN"
# time step (in minutes) used to count the number of lightning strikes. Max time step is 1440 (1 day)
# and it should divide evenly into 1440 minutes.
time_step=60
# if select_area=False: all lightning are counted
# if select_area=True: only lightning in selected area are counted (lat and lon limits are needed)
select_area=True
# define area to count lightning strikes (lat, lon) if select_area=True
north_lat_limit=70
south_lat_limit=30
west_lon_limit=-15
east_lon_limit=45
# exclude all lightning data with error larger than max_error
max_error=30
# exclude all lightning strikes detected by fewer than min_stations stations
min_stations=5
# note: in raw data lightning strikes detected with less than 5 stations and error larger than 30
# are already excluded
Explanation: Define parameters
End of explanation
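The quality and area criteria above can be wrapped in a standalone helper (a sketch; the column names 'err', '#sta', 'lat' and 'lon' follow the WWLN frame used below):

```python
import pandas as pd

def filter_strikes(df, max_error=30, min_stations=5, area=None):
    """Drop strikes with large error or few stations; optionally clip to an area."""
    out = df[(df['err'] <= max_error) & (df['#sta'] >= min_stations)]
    if area is not None:  # area = (south_lat, north_lat, west_lon, east_lon)
        s, n, w, e = area
        out = out[out['lat'].between(s, n) & out['lon'].between(w, e)]
    return out

demo = pd.DataFrame({'lat': [45, 80], 'lon': [10, 10],
                     'err': [5, 5], '#sta': [7, 7]})
filter_strikes(demo, area=(30, 70, -15, 45))  # keeps only the first row
```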
# generate list of files in the data directory; if start_date and end_date are defined, only data in this
# period will be used
listfiles=gen_listfiles(data_path=data_path, ext='.loc', start_date='01-01-2016', end_date='01-02-2016')
# make loop for all files
i=0
for file in tqdm(listfiles):
# read lightning data
LN_data = read_WWLN(file=data_path+file)
# --- make quality check and select lightning in given area
# exclude lightning data that have larger error than max_error
LN_data=LN_data.loc[LN_data['err']<=max_error]
# exclude lightning data that have less than min_stations
LN_data=LN_data.loc[LN_data['#sta']>=min_stations]
# select only lightning strikes in given area
if(select_area):
LN_data=LN_data.loc[(LN_data['lat']<=north_lat_limit) & (LN_data['lat']>=south_lat_limit) &
(LN_data['lon']>=west_lon_limit) & (LN_data['lon']<=east_lon_limit)]
# --- Count lightning strikes according to time_step defined
LN_tmp=count_lightning(LN_data, time_step)
# add data to existing df
if(i>=1):
LN_count=LN_count.append(LN_tmp)
else:
LN_count=LN_tmp
i=i+1
Explanation: Load WWLN data and analyze it
End of explanation
#LN_count
LN_count.to_csv('WWLN_data_europe.csv')
Explanation: Save data
End of explanation
# get data from Blitzortung server
get_data(start="2015-02-01T06:30", end="2015-02-01T10:05",
dl_link="http://data.blitzortung.org/Data_1/Protected/Strokes/")
#
Explanation: Load data (Blitzortung)
End of explanation
# define Bokeh tools
TOOLS = "pan, wheel_zoom, box_zoom, hover, reset, save"
# calculate error bars
#err_y1=LN_count['#sta_mean']-(LN_count['#sta_std']/(np.sqrt(LN_count['count'])))
#err_y2=LN_count['#sta_mean']+(LN_count['#sta_std']/(np.sqrt(LN_count['count'])))
fig_LN_count_a = Figure(plot_width=800, plot_height=400, title="Lightning count", tools=TOOLS,
x_axis_label="Date", y_axis_label="Nr. of lightning strikes", x_axis_type = "datetime",
title_text_font_size='22pt')
fig_LN_count_a.line(LN_count['count'].index, LN_count['count'].values, color='red')
fig_LN_count_b = Figure(plot_width=800, plot_height=300, tools=TOOLS,
y_axis_label="Error", x_axis_type = "datetime", x_range=fig_LN_count_a.x_range)
fig_LN_count_b.line(LN_count['err_mean'].index, LN_count['err_mean'].values, color='blue')
fig_LN_count_c = Figure(plot_width=800, plot_height=300, tools=TOOLS,
x_axis_label="Date", y_axis_label="Mean nr. of stations", x_axis_type = "datetime",
x_range=fig_LN_count_a.x_range)
fig_LN_count_c.line(LN_count['#sta_mean'].index, LN_count['#sta_mean'].values, color='black')
fig_LN_count = gridplot([[fig_LN_count_a],[fig_LN_count_b],[fig_LN_count_c]])
show(fig_LN_count)
Explanation: Plot lightning results
End of explanation
# mapping function doesn't seem to handle the large sizes well, so I am limiting the size for now
mx = get_map(strike_data = LN_data)
mx
Explanation: Map data
End of explanation |
6,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: This is a direct copy of the Earth to Venus mission plan. I'm doing this to make sure I get the functions correct, before proceeding further on the Earth to Mars.
Below, the data for each planet is captured; I'm using a custom PlanetaryObject class to store the information
Step9: These are my formulas in python form. They're based off of Table 3.3 found in the book
Step10: Designing the Transfer Ellipse
Time of Flight
Step11: Eccentricity
Step12: Periapsis Radius
Step13: Semi-Major Axis
Step14: Time of Flight
Step15: Velocities
Step16: Flight Path Angles
Step17: Now that I've verified the fundamental functions above, let's wrap this all up into a nice function that'll optimize this for us
Step18: Interestingly enough, we're getting $\theta_{Earth} = 194$, but the book claims that $\theta_{Earth} = 199$. I believe the discrepancy stems from the fact that the book apparently rounds its values, while the values used by the functions above are more accurate.
Designing the Departure Trajectory
Plane Change
Step19: Calculating $V_{HE}$ and C3
Step20: Similar to what we did for the Transfer Ellipse, let's combine all these steps into a single function to calculate these for us
Step21: Designing the Arrival Trajectory
Plane Change
Step22: Calculating $V_\infty$
Step23: We're getting different answers here because our angles are a little different. alpha_inf as calculated by the book is 5.5039 while I'm getting 5.5036. This is due to the rounding of $i_{tp}$ in the book: I'm getting 3.9745967799374893 while the book rounds this to 3.975. See calculation below
Step24: Combining the Trajectories into a single function | Python Code:
class PlanetaryObject():
A simple class used to store pertinent information about the planetary object
def __init__(self, date, L, e, SMA, i, peri, asc, r, v, anom, fp, mu):
self.date = date # Event Date
self.L = L # Longitude
self.e = e # Eccentricity
self.SMA = SMA # SMA
self.i = i # Inclination
self.peri = peri # Longitude of Perihelion
self.asc = asc # Longitude of Ascending Node
self.r = r # Radius
self.v = v # Velocity
self.anom = anom # True Anomaly
self.fp = fp # Flight Path Angle
self.mu = mu # Gravitation parameter
earth = PlanetaryObject(
datetime.date(1988, 4, 8),
197.53, # Longitude
0.01672, # Eccentricity
None, # SMA
None, # Inclination
102.29, # Longitude of Perihelion
0, # Longitude of Ascending Node
149.7848e6, # Radius
29.75, # Velocity
95.24, # True Anomaly
0.9554, # Flight Path Angle
398600.4 # Gravitation parameter (km^3/s^2)
)
venus = PlanetaryObject(
datetime.date(1988, 7, 26),
330.52, # Longitude
0.006778, # Eccentricity
None, # SMA
3.394, # Inclination
131.41, # Longitude of Perihelion
76.58, # Longitude of Ascending Node
108.9014e6, # Radius
34.8, # Velocity
199.11, # True Anomaly
-0.128, # Flight Path Angle
324858.8 # Gravitation parameter (km^3/s^2)
)
Explanation: This is a direct copy of the Earth to Venus mission plan. I'm doing this to make sure I get the functions correct, before proceeding further on the Earth to Mars.
Below, the data for each planet is captured; I'm using a custom PlanetaryObject class to store the information
End of explanation
mu_sun = 132712439935.5
def eccentricity(r_1, r_2, theta_1, theta_2):
Calculates the eccentricity of the transfer ellipse. This is calculated through
the following equation:
.. math::
\frac {r_2 - r_1} {r_1 * \cos{\theta_1} - r_2 * \cos{\theta_2}}
:param r_1: radius of the departing planetary object
:param r_2: radius of the arriving planetary object
:param theta_1: True anomaly of the departing planetary object in degrees
:param theta_2: True anomaly of the arriving planetary object in degrees
return (r_2 - r_1) / ((r_1 * math.cos(math.radians(theta_1))) - (r_2 * math.cos(math.radians(theta_2))))
def periapsis_radius(r, e, theta):
Calculates the periapsis radius of the transfer ellipse. This is calculated
using the following equation:
.. math::
\frac {r_1 [1 + e \cos{\theta}]} {1 + e}
:param r: radius of the departing planetary object
:param e: eccentricity of the transfer ellipse
return (r * (1 + e * math.cos(math.radians(theta)))) / (1 + e)
def semimajor_axis(r=None, r_a=None, r_p=None, mu=None, V=None, e=None):
Calculates the semi-major axis of the transfer ellipse. This is calculated
using one of the following equations:
.. math::
\frac {r_a + r_p} {2}
\frac {\mu r} {2 \mu - V^2 r}
\frac {r_p} {1 - e}
\frac {r_a} {1 + e}
:param r: general radius of the elliptical orbit
:param r_a: Radius of apoapsis
:param r_p: Radius of periapsis
:param mu: gravitation parameter
:param V: Velocity of the orbiting object
:param e: Eccentricity of the elliptical orbit
if r_a != None and r_p != None:
return (r_a + r_p) / 2
if mu != None and r !=None and V != None:
return (mu * r) / (2 * mu - V ** 2 * r)
if r_p != None and e != None:
return r_p / (1 - e)
if r_a != None and e != None:
return r_a / (1 + e)
# If we reach this point, then the passed in arguments doesn't match
# any equations we have defined. Raise an Error
raise TypeError("Invalid arguments!")
def time_since_periapsis(e, n, theta=None, E=None):
Calculates the time since the periapsis. This is calculated using the
following equation:
.. math::
\frac {E - e \sin{E}} {n}
If E isn't defined, it will be calculated using the param theta and
the following equation:
.. math::
\cos {E} = \frac {e + \cos{\theta}} {1 + e \cos{\theta}}
:param e: eccentricity of the transfer ellipse
:param n: mean motion
:param theta: degrees to periapsis
:param E: eccentric anomaly in radians
if theta == None and E == None:
raise TypeError("theta or E MUST be defined")
if theta != None and E != None:
raise TypeError("theta OR E must be defined. Not both")
if E == None:
cos_E = (e + math.cos(math.radians(theta))) / (1 + e * math.cos(math.radians(theta)))
E = math.acos(cos_E)
return (E - e * math.sin(E)) / n
def mean_motion(mu, a):
Calculates the mean motion of an elliptical orbit. This is calculated
using the following equation:
.. math::
\sqrt{\frac{\mu} {a^3}}
:param mu: gravitation parameter (Mass * Gravitation constant)
:param a: semimajor axis
return math.sqrt(mu / a ** 3)
def velocity(mu, r, a):
Calculates the Velocity (V) of an object based on the elliptical orbit.
This is calculated using the following equation:
.. math::
\sqrt{\frac{2 * \mu} {r} - \frac{\mu} {a}}
:param mu: gravitation parameter (Mass * Gravitation constant)
:param a: semimajor axis
return math.sqrt(2 * mu / r - mu / a)
def flight_path_angle(e, theta):
Calculates the Flight Path Angle (γ). This is calculated using
the following equation:
.. math::
\tan{\gamma} = \frac{e \sin{\theta}}{1 + e \cos{\theta}}
:param e: eccentricity of the elliptical orbit
:param theta:
tan_y = (e * math.sin(math.radians(theta))) / (1 + e * math.cos(math.radians(theta)))
return math.atan(tan_y)
def inclination(Omega, L_s, L_t, i):
a = math.radians(Omega + 180 - L_s)
b = math.radians(L_t - (180 + Omega))
alpha = math.radians(180 - i)
cos_c = math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b) * math.cos(alpha)
c = math.acos(cos_c)
sin_i_t = (math.sin(alpha) * math.sin(b)) / math.sin(c)
return math.asin(sin_i_t)
Explanation: These are my formulas in python form. They're based off of Table 3.3 found in the book
End of explanation
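A quick self-consistency check of the vis-viva helper (restated inline so it runs standalone): for a circular orbit ($r = a$) it reduces to $V = \sqrt{\mu/r}$.

```python
import math

mu_sun = 132712439935.5          # km^3/s^2, as above

def velocity(mu, r, a):
    # vis-viva, same expression as the helper above
    return math.sqrt(2 * mu / r - mu / a)

r = 149.7848e6                   # Earth's orbital radius from the table, km
velocity(mu_sun, r, r)           # close to Earth's tabulated 29.75 km/s
```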
venus.date - earth.date
time_of_flight = venus.date - earth.date
time_of_flight = time_of_flight.days
time_of_flight
Explanation: Designing the Transfer Ellipse
Time of Flight
End of explanation
line_of_apisides = 180
true_anom = line_of_apisides + (venus.L - earth.L)
true_anom
eccentricity(earth.r, venus.r, line_of_apisides, true_anom)
e = eccentricity(earth.r, venus.r, line_of_apisides, true_anom)
Explanation: Eccentricity
End of explanation
periapsis_radius(earth.r, e, line_of_apisides)
r_p = periapsis_radius(earth.r, e, line_of_apisides)
Explanation: Periapsis Radius
End of explanation
# Book apparently rounds the actual values here
semimajor_axis(r_p=103.555e6, e=0.1825)
a = 126.673e6
Explanation: Semi-Major Axis
End of explanation
n = mean_motion(mu_sun, a)
n
peri_to_earth = time_since_periapsis(e, n, theta=line_of_apisides)
peri_to_earth / 3600 / 24 # conversion from seconds to days
venus_to_peri = time_since_periapsis(e, n, theta=true_anom)
venus_to_peri / 3600 / 24
(peri_to_earth - venus_to_peri) / 3600 / 24
Explanation: Time of Flight
End of explanation
velocity(mu_sun, earth.r, 129.336e6) # using the value from the book, which appears to be rounded
velocity(mu_sun, venus.r, 129.336e6) # again using the values from the book which appear to be rounded
Explanation: Velocities
End of explanation
math.degrees(flight_path_angle(0.17194, 199.53)) # same as above, using the book values
math.degrees(flight_path_angle(0.17194, 332.52))
Explanation: Flight Path Angles
End of explanation
def transfer_ellipse(start_planet, end_planet, tof_accuracy=2, max_iters=1000, return_trials=False):
time_of_flight = end_planet.date - start_planet.date
time_of_flight = time_of_flight.days
longs = []
tofs = []
line_of_apisides = 180 # trial start
tof = 9999999999 # large number to get us started
bottom_angle = 90
top_angle = 270
i = 0
while not(time_of_flight - 10**-tof_accuracy < tof / 3600 / 24 < time_of_flight + 10**-tof_accuracy) and i < max_iters:
line_of_apisides = (bottom_angle + top_angle) / 2
true_anom = line_of_apisides + (end_planet.L - start_planet.L)
longs.append((line_of_apisides, true_anom))
e = eccentricity(start_planet.r, end_planet.r, line_of_apisides, true_anom)
r_p = periapsis_radius(start_planet.r, e, line_of_apisides)
a = semimajor_axis(r_p=r_p, e=e)
n = mean_motion(mu_sun, a)
peri_to_start = time_since_periapsis(e, n, theta=line_of_apisides)
end_to_peri = time_since_periapsis(e, n, theta=true_anom)
tof = peri_to_start - end_to_peri
tofs.append(tof / 3600 / 24)
# bisect: a larger departure angle shortens the transfer (as in the linear search below)
if tof / 3600 / 24 > time_of_flight:
bottom_angle = line_of_apisides
else:
top_angle = line_of_apisides
i += 1
# while tof / 3600 / 24 > time_of_flight:
# true_anom = line_of_apisides + (end_planet.L - start_planet.L)
# longs.append((line_of_apisides, true_anom))
# e = eccentricity(start_planet.r, end_planet.r, line_of_apisides, true_anom)
# r_p = periapsis_radius(start_planet.r, e, line_of_apisides)
# a = semimajor_axis(r_p=r_p, e=e)
# n = mean_motion(mu_sun, a)
# peri_to_start = time_since_periapsis(e, n, theta=line_of_apisides)
# end_to_peri = time_since_periapsis(e, n, theta=true_anom)
# tof = peri_to_start - end_to_peri
# tofs.append(tof / 3600 / 24)
# line_of_apisides += 1
# Calculate the Relative Velocities
V_start = velocity(mu_sun, start_planet.r, a)
V_end = velocity(mu_sun, end_planet.r, a)
y_start = flight_path_angle(e, line_of_apisides)
y_end = flight_path_angle(e, true_anom)
r_dict = {
'line_of_apisides': line_of_apisides,
'true_anom': true_anom,
'eccentricity': e,
'SMA': a,
'time_of_flight': tof,
'V_start': V_start,
'V_end': V_end,
'y_start': math.degrees(y_start),
'y_end': math.degrees(y_end)
}
if return_trials:
r_dict.update({'runs':{'longs': longs, 'tofs':tofs}})
return r_dict
tf = transfer_ellipse(earth, venus, return_trials=True)
tf
Explanation: Now that I've verified the fundamental functions above, let's wrap this all up into a nice function that'll optimize this for us
End of explanation
alpha = 180 - venus.i
alpha
a = venus.asc + 180 - earth.L
a
b_prime = venus.L - (venus.asc + 180)
b_prime
# b = b_prime # this can be used when the transfer angles are small.
b = 73.967 # taken from the book b/c after much research, I still don't know how to solve a spherical right triangle
csc_c = math.cos(math.radians(a)) * math.cos(math.radians(b)) + math.sin(math.radians(a)) * math.sin(math.radians(b)) * math.cos(math.radians(alpha))
csc_c
c = math.degrees(math.acos(csc_c))
c
sin_i = (math.sin(math.radians(alpha)) * math.sin(math.radians(b))) / math.sin(math.radians(c))
sin_i
i_t = math.degrees(math.asin(sin_i))
i_t
Explanation: Interestingly enough, we're getting $\theta_{Earth} = 194$, but the book claims that $\theta_{Earth} = 199$. I believe the discrepancy stems from the fact that the book apparently rounds its values, while the values used by the functions above are more accurate.
Designing the Departure Trajectory
Plane Change
End of explanation
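The `inclination` helper defined with the other formulas packages these same spherical-triangle steps. Restated standalone below (it uses the b ≈ b' approximation, so the result differs slightly from the version with the book's b = 73.967):

```python
import math

def transfer_plane_inclination(Omega, L_s, L_t, i):
    # same steps as the inclination() helper above, returning degrees
    a = math.radians(Omega + 180 - L_s)
    b = math.radians(L_t - (180 + Omega))        # b' approximation
    alpha = math.radians(180 - i)
    c = math.acos(math.cos(a) * math.cos(b) +
                  math.sin(a) * math.sin(b) * math.cos(alpha))
    return math.degrees(math.asin(math.sin(alpha) * math.sin(b) / math.sin(c)))

transfer_plane_inclination(76.58, 197.53, 330.52, 3.394)  # ~4.45 deg
```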
# cos_alpha = math.cos(math.radians(i_t)) * math.cos(math.radians(earth.fp + tf['y_start']))
cos_alpha = math.cos(math.radians(4.455)) * math.cos(math.radians(earth.fp + 3.924)) # using the value from the book, since mine is different (and more accurate, I believe)
cos_alpha
alpha = math.degrees(math.acos(cos_alpha))
alpha
#C3 = earth.v ** 2 + tf['V_start'] ** 2 - 2 * earth.v * tf['V_start'] * math.cos(math.radians(alpha))
C3 = earth.v ** 2 + 27.312 ** 2 - 2 * earth.v * 27.312 * math.cos(math.radians(alpha))
C3
V_he = math.sqrt(C3)
V_he
Explanation: Calculating $V_{HE}$ and C3
End of explanation
def depart_trajectory(start_planet, end_planet, y, V):
alpha = 180 - end_planet.i
a = end_planet.asc + 180 - start_planet.L
b = end_planet.L - (end_planet.asc + 180)
csc_c = math.cos(math.radians(a)) * math.cos(math.radians(b)) + math.sin(math.radians(a)) * math.sin(math.radians(b)) * math.cos(math.radians(alpha))
c = math.degrees(math.acos(csc_c))
sin_i = (math.sin(math.radians(alpha)) * math.sin(math.radians(b))) / math.sin(math.radians(c))
i_t = math.degrees(math.asin(sin_i))
# if they have the same sign, subtract them, else add them
if start_planet.fp * y > 0:
y_s = abs(start_planet.fp) - abs(y)
else:
y_s = abs(start_planet.fp) + abs(y)
cos_alpha = math.cos(math.radians(i_t)) * math.cos(math.radians(y_s))
alpha = math.degrees(math.acos(cos_alpha))
C3 = start_planet.v ** 2 + V ** 2 - 2 * start_planet.v * V * math.cos(math.radians(alpha))
V_he = math.sqrt(C3)
r_dict = {
'i_t': i_t,
'C3': C3,
'V_he': V_he
}
return r_dict
depart_trajectory(earth, venus, -3.924, 27.312)
Explanation: Similar to what we did for the Transfer Ellipse, let's combine all these steps into a single function to calculate these for us:
End of explanation
alpha = 180 - venus.i
alpha
a = venus.asc + 180 - earth.L
a
b_prime = venus.L - (venus.asc + 180)
b_prime
# b = b_prime # this can be used when the transfer angles are small.
b = 73.967 # taken from the book b/c after much research, I still don't know how to solve a spherical right triangle
b
csc_c = math.cos(math.radians(a)) * math.cos(math.radians(b)) + math.sin(math.radians(a)) * math.sin(math.radians(b)) * math.cos(math.radians(alpha))
csc_c
c = math.degrees(math.acos(csc_c))
c
sin_it = math.sin(math.radians(alpha)) * math.sin(math.radians(a)) / math.sin(math.radians(c))
sin_it
it = math.degrees(math.asin(sin_it))
it
Explanation: Designing the Arrival Trajectory
Plane Change
End of explanation
#cos_alpha_inf = math.cos(math.radians(it)) * math.cos(math.radians(tf['y_end'] + venus.fp))
cos_alpha_inf = math.cos(math.radians(it)) * math.cos(math.radians(3.938 + venus.fp))
alpha_inf = math.acos(cos_alpha_inf)
math.degrees(alpha_inf)
#C3 = venus.v ** 2 + tf['V_end'] ** 2 + 2 * venus.v * tf['V_end'] * math.cos(alpha_inf)
C3 = venus.v ** 2 + 37.57 ** 2 - 2 * venus.v * 37.57 * math.cos(math.radians(5.5039))
V_inf = math.sqrt(C3)
V_inf# should be 4.442 km/s
def arrival_trajectory(start_planet, end_planet, y, V):
alpha = 180 - end_planet.i
a = end_planet.asc + 180 - start_planet.L
b = end_planet.L - (end_planet.asc + 180)
csc_c = math.cos(math.radians(a)) * math.cos(math.radians(b)) + math.sin(math.radians(a)) * math.sin(math.radians(b)) * math.cos(math.radians(alpha))
c = math.degrees(math.acos(csc_c))
sin_it = math.sin(math.radians(alpha)) * math.sin(math.radians(a)) / math.sin(math.radians(c))
it = math.degrees(math.asin(sin_it))
# if they have the same sign, subtract them, else add them
if end_planet.fp * y > 0:
y_s = abs(abs(end_planet.fp) - abs(y))
else:
y_s = abs(abs(end_planet.fp) + abs(y))
cos_alpha_inf = math.cos(math.radians(it)) * math.cos(math.radians(y_s))
alpha_inf = math.acos(cos_alpha_inf)
C3 = end_planet.v ** 2 + V ** 2 - 2 * end_planet.v * V * math.cos(alpha_inf)
V_inf = math.sqrt(C3)
r_dict = {
'i_t': it,
'V_inf': V_inf
}
return r_dict
arrival_trajectory(earth, venus, -3.938, 37.57)
Explanation: Calculating $V_\infty$
End of explanation
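The sign rule buried in `arrival_trajectory` — subtract the magnitudes when the flight-path angle and the transfer angle share a sign, add them otherwise — is easy to get wrong, so here it is isolated as a sketch, using the Venus arrival numbers implied by the cells above (venus.fp ≈ -0.128 deg is an assumption read off the earlier calculation):

```python
def combine_flight_path_angles(fp, y):
    # same sign -> difference of magnitudes; opposite signs -> sum of magnitudes
    if fp * y > 0:
        return abs(abs(fp) - abs(y))
    return abs(abs(fp) + abs(y))

# Venus arrival: fp = -0.128 deg, y_end = -3.938 deg (same sign -> difference)
same_sign = combine_flight_path_angles(-0.128, -3.938)
opposite_sign = combine_flight_path_angles(0.128, -3.938)
```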
math.degrees(math.acos(math.cos(math.radians(3.975)) * math.cos(math.radians(3.938-0.128))))
Explanation: We're getting different answers here, because our angles are a little different. alpha_inf as calculated by the book is 5.5039 while I'm getting 5.5036. This is due to the rounding of $i_{tp}$ in the book: I'm getting 3.9745967799374893 while the book rounds this to 3.975. See the calculation below:
End of explanation
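The size of that rounding effect is easy to quantify directly. Below, `gamma` is the combined angle 3.938 - 0.128 used in the check above; only the last digit of the result moves:

```python
import math

def alpha_inf_deg(it_deg, gamma_deg=3.938 - 0.128):
    # alpha_inf from the relative inclination and the combined flight-path angle
    cos_a = math.cos(math.radians(it_deg)) * math.cos(math.radians(gamma_deg))
    return math.degrees(math.acos(cos_a))

a_full = alpha_inf_deg(3.9745967799374893)  # unrounded i_tp
a_rounded = alpha_inf_deg(3.975)            # book's rounded i_tp
```

The two answers differ by roughly 0.0003 deg — exactly the 5.5036-vs-5.5039 discrepancy.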
def trajectories(start_planet, end_planet, y_start, y_end, V_start, V_end):
alpha = 180 - end_planet.i
a = end_planet.asc + 180 - start_planet.L
b = end_planet.L - (end_planet.asc + 180)
cos_c = math.cos(math.radians(a)) * math.cos(math.radians(b)) + math.sin(math.radians(a)) * math.sin(math.radians(b)) * math.cos(math.radians(alpha))  # spherical law of cosines
c = math.degrees(math.acos(cos_c))
sin_i_start = (math.sin(math.radians(alpha)) * math.sin(math.radians(b))) / math.sin(math.radians(c))
i_start = math.degrees(math.asin(sin_i_start))
sin_i_end = math.sin(math.radians(alpha)) * math.sin(math.radians(a)) / math.sin(math.radians(c))
i_end = math.degrees(math.asin(sin_i_end))
# if they have the same sign, subtract them, else add them
if start_planet.fp * y_start > 0:
y_s = abs(abs(start_planet.fp) - abs(y_start))
else:
y_s = abs(abs(start_planet.fp) + abs(y_start))
cos_alpha = math.cos(math.radians(i_start)) * math.cos(math.radians(y_s))
alpha = math.degrees(math.acos(cos_alpha))
C3 = start_planet.v ** 2 + V_start ** 2 - 2 * start_planet.v * V_start * math.cos(math.radians(alpha))
V_he = math.sqrt(C3)
if end_planet.fp * y_end > 0:
y_e = abs(abs(end_planet.fp) - abs(y_end))
else:
y_e = abs(abs(end_planet.fp) + abs(y_end))
cos_alpha_inf = math.cos(math.radians(i_end)) * math.cos(math.radians(y_e))  # arrival side uses y_e, mirroring y_s on the departure side
alpha_inf = math.degrees(math.acos(cos_alpha_inf))  # degrees, matching the math.radians() on the next line
C3_inf = end_planet.v ** 2 + V_end ** 2 - 2 * end_planet.v * V_end * math.cos(math.radians(alpha_inf))
V_inf = math.sqrt(C3_inf)
r_dict = {
'i_start': i_start,
'C3': C3,
'V_he': V_he,
'i_end': i_end,
'V_inf': V_inf
}
return r_dict
trajectories(earth, venus, -3.924, -3.938, 27.312, 37.57)
Explanation: Combining the Trajectories into a single function:
End of explanation |
6,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames 013
Step1: Table 1 - Spitzer IRAC/MIPS IC348 catalog
Step2: Table 2 - SED Derived $\alpha_{IRAC}$ and $A_V$
But really... spectral types
Step3: Table 3 - Convenient passbands table | Python Code:
%pylab inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
from astropy.io import ascii
from astropy.table import Table, join
Explanation: ApJdataFrames 013: Lada2006
Title: Spitzer Observations of IC 348: The Disk Population at 2-3 Million Years
Authors: Charles J Lada, August A Muench, Kevin L Luhman, Lori E Allen, Lee Hartmann, Tom Megeath, Philip Myers, Giovanni Fazio, Kenneth Wood, James Muzerolle, George Rieke, Nick Siegler, and Erick Young
Data is from this paper:
http://iopscience.iop.org/1538-3881/131/3/1574/fulltext/204953.html
End of explanation
tbl1 = ascii.read("http://iopscience.iop.org/1538-3881/131/3/1574/fulltext/datafile1.txt")
tbl1[0:4]
Explanation: Table 1 - Spitzer IRAC/MIPS IC348 catalog
End of explanation
tbl2 = ascii.read("http://iopscience.iop.org/1538-3881/131/3/1574/fulltext/datafile2.txt")
tbl2[0:4]
join_tbls = join(tbl1, tbl2, keys="Seq")
print "There are {} rows in tbl1, {} in tbl2, and {} in the joined table.".format(len(tbl1), len(tbl2), len(join_tbls))
join_tbls[0:4]
Explanation: Table 2 - SED Derived $\alpha_{IRAC}$ and $A_V$
But really... spectral types
End of explanation
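The row-count behavior of `join` above — the joined table can be shorter than either input — is ordinary inner-join semantics on the shared key. A toy sketch of the same idea with pandas, using made-up `Seq` values and hypothetical columns:

```python
import pandas as pd

tbl_a = pd.DataFrame({"Seq": [1, 2, 3], "flux": [0.1, 0.2, 0.3]})
tbl_b = pd.DataFrame({"Seq": [2, 3, 4], "SpT": ["M1", "M2", "M3"]})

# inner join on the shared key keeps only Seq values present in BOTH tables
joined = tbl_a.merge(tbl_b, on="Seq")
```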
names = ["PASSBAND","DATA SYSTEM","REFERENCES","center_wavelength","F_{nu} (Jy)","References"]
tbl3 = pd.read_csv("http://iopscience.iop.org/1538-3881/131/3/1574/fulltext/204953.tb3.txt",
na_values="\ldots", names = names, sep='\t')
tbl3.head()
Explanation: Table 3 - Convenient passbands table
End of explanation |
6,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
"The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include
Step1: Pandas is the software package that you will use to generate "data frames" which are just Python representations of data that you have collected.
Just as in processing, you can use any of the pandas functions by using pd.functionname
Numpy is the software package that you will use for computations and analysis. We will just be scratching its surface in terms of capabilities.
Pygal is the software package that you will use to generate plots and graphs. Although it has limitations on data formatting, the graphs are high quality and easy to format. Also, Pygal is well documented
Step2: Add the rest of the antibiotics to the graph | Python Code:
# this would be a comment
# cells like this are like an advanced calculator
# for example:
2+2
Explanation: "The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more." (jupyter.org)
We will be using Jupyter Notebooks to analyze and plot data that we gather in class, using Python as the underlying language
Python is a programming language that is flexible, well supported and open source (python.org). It may be of use to you unexpectedly in the future (especially you historians and written-art majors!), so keep these notes for future reference.
Instructions
'Shift+Enter' within each of the cells below to run them.
While the computations included in the cell are running, you should see "In [*]" on the left
When the computations have finished running, a number will appear within the brackets (e.g. "In [2]")
'#' denotes a comment in the code
Jupyter uses tab for code completion
Indexing in python starts at 0
End of explanation
# Load the packages into memory by running this cell
import pandas as pd
import numpy as np
import pygal
# Example of how to use pandas to read and load a "comma-separated-value" or csv file.
# You can create csv files in any text editor (like notepad)
# or in programs that use spreadsheets (Excel/Numbers/Google Sheets)
ecoli = pd.read_csv("kb_ecoli.csv")
# You can display the data you just loaded in a table
ecoli
# Start by replacing "ab#" in the csv file by the real antibiotic name
# that we used in the microbiology laboratory and then reload the data
# if you did this correctly, the table should have the correct names
ecoli = pd.read_csv("kb_ecoli.csv")
ecoli
# We can extract the data from a single column using its name
antibiotic1=ecoli.ab1
# or by its location in the data frame
antibiotic12=ecoli.iloc[0:,11]
antibiotic12
# you can also check the name of the column (remember python indexing starts at 0!)
ecoli.columns[0]
# Or we can directly calculate average values using numpy
antibiotic1=np.mean(ecoli.ab1)
antibiotic1
antibiotic12=np.mean(ecoli.ab12)
antibiotic12
# and we can already create a bar graph that displays the data with pygal
bar_chart = pygal.Bar()
bar_chart.title = "Kirby Bauer results for E.coli"
bar_chart.x_labels = 'ab1','ab12';
bar_chart.add('name of ab1', antibiotic1)
bar_chart.add(ecoli.columns[11], antibiotic12)
bar_chart.render_to_file('kirbybauer_ecoli.svg')
# the graph was saved as an svg file in your working directory
# you can open that svg file in a new browser tab
# we can use some optional arguments to put labels
bar_chart = pygal.Bar()
bar_chart.title = "Kirby Bauer results for E.coli"
bar_chart.x_title = 'Antibiotics';
bar_chart.y_title = 'Zone of inhibition (mm)';
bar_chart.add('name of ab1', antibiotic1)
bar_chart.add(ecoli.columns[11], antibiotic12)
# bar_chart.x_labels = [{'label': 'AB1','value': 1},{'label': 'AB12','value': 12}]
bar_chart.render_to_file('kirbybauer_ecoli.svg')
# reload the tab that contains the graph
Explanation: Pandas is the software package that you will use to generate "data frames" which are just Python representations of data that you have collected.
Just as in processing, you can use any of the pandas functions by using pd.functionname
Numpy is the software package that you will use for computations and analysis. We will just be scratching its surface in terms of capabilities.
Pygal is the software package that you will use to generate plots and graphs. Although it has limitations on data formatting, the graphs are high quality and easy to format. Also, Pygal is well documented
End of explanation
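As a self-contained illustration of the pandas + numpy workflow described above (made-up zone-of-inhibition numbers, not real class data):

```python
import pandas as pd
import numpy as np

# hypothetical Kirby-Bauer measurements (mm) for two antibiotics
kb = pd.DataFrame({"ab1": [10.0, 12.0, 14.0],
                   "ab2": [0.0, 2.0, 4.0]})

mean_ab1 = np.mean(kb.ab1)         # column by attribute access
mean_ab2 = np.mean(kb.iloc[:, 1])  # or by position (second column)
```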
# you could even use advanced options to put error bars
# and using numpy's standard deviation function: np.std()
bar_chart = pygal.Bar()
bar_chart.title = "Kirby Bauer results for E.coli"
bar_chart.x_title = 'Antibiotics';
bar_chart.y_title = 'Zone of inhibition (mm)';
bar_chart.add('name of ab1', antibiotic1)
bar_chart.add(ecoli.columns[11], [{'value': antibiotic12, 'ci':
{'low': np.mean(ecoli.ab12)-np.std(ecoli.ab12), 'high': np.mean(ecoli.ab12)+np.std(ecoli.ab12)}}])
# bar_chart.add('Second', [{'value': np.mean(ecoli.ab2), 'ci': {'high': 5}}])
bar_chart.render_to_file('kirbybauer_ecoli.svg')
# reload the tab that contains the graph
Explanation: Add the rest of the antibiotics to the graph
End of explanation |
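A sketch of how that might look: loop over every antibiotic column, collect its mean, and feed each one into `bar_chart.add`. The pygal call is left as a comment so the sketch runs without a data file; the numbers below are made up:

```python
import pandas as pd
import numpy as np

# stand-in for pd.read_csv("kb_ecoli.csv")
ecoli = pd.DataFrame({"ab1": [10.0, 12.0], "ab2": [5.0, 7.0], "ab3": [0.0, 0.0]})

means = {}
for ab in ecoli.columns:
    means[ab] = np.mean(ecoli[ab])
    # bar_chart.add(ab, means[ab])   # one bar per antibiotic
```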
6,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
gensim doc2vec & IMDB sentiment dataset
TODO
Step1: The data is small enough to be read into memory.
Step2: Set-up Doc2Vec Training & Evaluation Models
Approximating experiment of Le & Mikolov "Distributed Representations of Sentences and Documents", also with guidance from Mikolov's example go.sh
Step3: Following the paper, we also evaluate models in pairs. These wrappers return the concatenation of the vectors from each model. (Only the singular models are trained.)
Step5: Predictive Evaluation Methods
Helper methods for evaluating error rate.
Step6: Bulk Training
Using explicit multiple-pass, alpha-reduction approach as sketched in gensim doc2vec blog post – with added shuffling of corpus on each pass.
Note that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.
Evaluation of each model's sentiment-predictive power is repeated after each pass, as an error rate (lower is better), to see the rates-of-relative-improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors.
(On a 4-core 2.6Ghz Intel Core i7, these 20 passes training and evaluating 3 main models takes about an hour.)
Step7: Achieved Sentiment-Prediction Accuracy
Step8: In my testing, unlike the paper's report, DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement. The best results I've seen are still just under 10% error rate, still a ways from the paper's 7.42%.
Examining Results
Are inferred vectors close to the precalculated ones?
Step9: (Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.)
Do close documents seem more related than distant ones?
Step10: (Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)
Do the word vectors show useful similarities?
Step11: Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their random initialized values – unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task.
Words from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)
Are the word vectors from this dataset any good at analogies?
Step12: Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)
Slop
Step13: To mix the Google dataset (if locally available) into the word tests...
Step14: To get copious logging output from above steps...
Step15: To auto-reload python code while developing... | Python Code:
import locale
import glob
import os.path
import requests
import tarfile
import sys
import codecs
dirname = 'aclImdb'
filename = 'aclImdb_v1.tar.gz'
locale.setlocale(locale.LC_ALL, 'C')
if sys.version > '3':
control_chars = [chr(0x85)]
else:
control_chars = [unichr(0x85)]
# Convert text to lower-case and strip punctuation/symbols from words
def normalize_text(text):
norm_text = text.lower()
# Replace breaks with spaces
norm_text = norm_text.replace('<br />', ' ')
# Pad punctuation with spaces on both sides
for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
norm_text = norm_text.replace(char, ' ' + char + ' ')
return norm_text
import time
start = time.clock()
if not os.path.isfile('aclImdb/alldata-id.txt'):
if not os.path.isdir(dirname):
if not os.path.isfile(filename):
# Download IMDB archive
url = u'http://ai.stanford.edu/~amaas/data/sentiment/' + filename
r = requests.get(url)
with open(filename, 'wb') as f:
f.write(r.content)
tar = tarfile.open(filename, mode='r')
tar.extractall()
tar.close()
# Concat and normalize test/train data
folders = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']
alldata = u''
for fol in folders:
temp = u''
output = fol.replace('/', '-') + '.txt'
# Is there a better pattern to use?
txt_files = glob.glob('/'.join([dirname, fol, '*.txt']))
for txt in txt_files:
with codecs.open(txt, 'r', encoding='utf-8') as t:
t_clean = t.read()
for c in control_chars:
t_clean = t_clean.replace(c, ' ')
temp += t_clean
temp += "\n"
temp_norm = normalize_text(temp)
with codecs.open('/'.join([dirname, output]), 'w', encoding='utf-8') as n:
n.write(temp_norm)
alldata += temp_norm
with codecs.open('/'.join([dirname, 'alldata-id.txt']), 'w', encoding='utf-8') as f:
for idx, line in enumerate(alldata.splitlines()):
num_line = u"_*{0} {1}\n".format(idx, line)
f.write(num_line)
end = time.clock()
print ("total running time: ", end-start)
import os.path
assert os.path.isfile("aclImdb/alldata-id.txt"), "alldata-id.txt unavailable"
Explanation: gensim doc2vec & IMDB sentiment dataset
TODO: section on introduction & motivation
TODO: prerequisites + dependencies (statsmodels, patsy, ?)
Requirements
Following are the dependencies for this tutorial:
- testfixtures
- statsmodels
Load corpus
Fetch and prep exactly as in Mikolov's go.sh shell script. (Note this cell tests for existence of required files, so steps won't repeat once the final summary file (aclImdb/alldata-id.txt) is available alongside this notebook.)
End of explanation
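The effect of `normalize_text` above is easiest to see on a tiny string — lower-casing, `<br />` removal, and punctuation padded with spaces so each mark tokenizes as its own word. A condensed restatement of the same function:

```python
def normalize_text(text):
    norm_text = text.lower().replace('<br />', ' ')
    for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
        norm_text = norm_text.replace(char, ' ' + char + ' ')
    return norm_text

tokens = normalize_text('Great movie!<br />Loved it.').split()
```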
import gensim
from gensim.models.doc2vec import TaggedDocument
from collections import namedtuple
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
alldocs = [] # will hold all docs in original order
with open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
for line_no, line in enumerate(alldata):
tokens = gensim.utils.to_unicode(line).split()
words = tokens[1:]
tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost
split = ['train','test','extra','extra'][line_no//25000] # 25k train, 25k test, 25k extra
sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no//12500] # [12.5K pos, 12.5K neg]*2 then unknown
alldocs.append(SentimentDocument(words, tags, split, sentiment))
train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:] # for reshuffling per pass
print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
Explanation: The data is small enough to be read into memory.
End of explanation
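The integer-division indexing above packs the whole 100k-document layout into two lookup tables; pulled out on its own, the trick reads:

```python
def split_of(line_no):
    # docs 0-24999 train, 25000-49999 test, 50000-99999 extra (unsupervised)
    return ['train', 'test', 'extra', 'extra'][line_no // 25000]

def sentiment_of(line_no):
    # each 25k block is 12.5k positive then 12.5k negative; extras are unlabeled
    return [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no // 12500]
```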
from gensim.models import Doc2Vec
import gensim.models.doc2vec
from collections import OrderedDict
import multiprocessing
cores = multiprocessing.cpu_count()
assert gensim.models.doc2vec.FAST_VERSION > -1, "this will be painfully slow otherwise"
simple_models = [
# PV-DM w/concatenation - window=5 (both sides) approximates paper's 10-word total window size
Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores),
# PV-DBOW
Doc2Vec(dm=0, size=100, negative=5, hs=0, min_count=2, workers=cores),
# PV-DM w/average
Doc2Vec(dm=1, dm_mean=1, size=100, window=10, negative=5, hs=0, min_count=2, workers=cores),
]
# speed setup by sharing results of 1st model's vocabulary scan
simple_models[0].build_vocab(alldocs) # PV-DM/concat requires one special NULL word so it serves as template
print(simple_models[0])
for model in simple_models[1:]:
model.reset_from(simple_models[0])
print(model)
models_by_name = OrderedDict((str(model), model) for model in simple_models)
Explanation: Set-up Doc2Vec Training & Evaluation Models
Approximating experiment of Le & Mikolov "Distributed Representations of Sentences and Documents", also with guidance from Mikolov's example go.sh:
./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1
Parameter choices below vary:
100-dimensional vectors, as the 400d vectors of the paper don't seem to offer much benefit on this task
similarly, frequent word subsampling seems to decrease sentiment-prediction accuracy, so it's left out
cbow=0 means skip-gram which is equivalent to the paper's 'PV-DBOW' mode, matched in gensim with dm=0
added to that DBOW model are two DM models, one which averages context vectors (dm_mean) and one which concatenates them (dm_concat, resulting in a much larger, slower, more data-hungry model)
a min_count=2 saves quite a bit of model memory, discarding only words that appear in a single doc (and are thus no more expressive than the unique-to-each doc vectors themselves)
End of explanation
from gensim.test.test_doc2vec import ConcatenatedDoc2Vec
models_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[2]])
models_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[0]])
Explanation: Following the paper, we also evaluate models in pairs. These wrappers return the concatenation of the vectors from each model. (Only the singular models are trained.)
End of explanation
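What the wrapper does is nothing more than vector concatenation; a minimal numpy sketch, with plain dicts standing in for each model's doc-vector lookup:

```python
import numpy as np

class ConcatenatedVecs:
    def __init__(self, models):
        self.models = models
    def __getitem__(self, tag):
        # fetch the tag's vector from each model and join them end to end
        return np.concatenate([np.asarray(m[tag]) for m in self.models])

model_a = {0: [1.0, 2.0]}   # stand-in for one model's docvecs
model_b = {0: [3.0]}
pair = ConcatenatedVecs([model_a, model_b])
```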
import numpy as np
import statsmodels.api as sm
from random import sample
# for timing
from contextlib import contextmanager
from timeit import default_timer
import time
@contextmanager
def elapsed_timer():
start = default_timer()
elapser = lambda: default_timer() - start
yield lambda: elapser()
end = default_timer()
elapser = lambda: end-start
def logistic_predictor_from_data(train_targets, train_regressors):
logit = sm.Logit(train_targets, train_regressors)
predictor = logit.fit(disp=0)
#print(predictor.summary())
return predictor
def error_rate_for_model(test_model, train_set, test_set, infer=False, infer_steps=3, infer_alpha=0.1, infer_subsample=0.1):
"""Report error rate on test_doc sentiments, using supplied model and train_docs"""
train_targets, train_regressors = zip(*[(doc.sentiment, test_model.docvecs[doc.tags[0]]) for doc in train_set])
train_regressors = sm.add_constant(train_regressors)
predictor = logistic_predictor_from_data(train_targets, train_regressors)
test_data = test_set
if infer:
if infer_subsample < 1.0:
test_data = sample(test_data, int(infer_subsample * len(test_data)))
test_regressors = [test_model.infer_vector(doc.words, steps=infer_steps, alpha=infer_alpha) for doc in test_data]
else:
test_regressors = [test_model.docvecs[doc.tags[0]] for doc in test_docs]
test_regressors = sm.add_constant(test_regressors)
# predict & evaluate
test_predictions = predictor.predict(test_regressors)
corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_data])
errors = len(test_predictions) - corrects
error_rate = float(errors) / len(test_predictions)
return (error_rate, errors, len(test_predictions), predictor)
Explanation: Predictive Evaluation Methods
Helper methods for evaluating error rate.
End of explanation
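The error-rate arithmetic inside `error_rate_for_model` boils down to rounding predicted probabilities to hard 0/1 labels and counting mismatches; sketched standalone:

```python
import numpy as np

def error_rate(predicted_probs, targets):
    preds = np.rint(predicted_probs)           # probability -> hard 0/1 label
    errors = int(np.sum(preds != np.asarray(targets)))
    return errors / float(len(targets))        # float() keeps this Python-2 safe

rate = error_rate([0.9, 0.2, 0.6, 0.1], [1.0, 0.0, 0.0, 0.0])  # one mistake in four
```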
from collections import defaultdict
best_error = defaultdict(lambda :1.0) # to selectively-print only best errors achieved
from random import shuffle
import datetime
alpha, min_alpha, passes = (0.025, 0.001, 20)
alpha_delta = (alpha - min_alpha) / passes
print("START %s" % datetime.datetime.now())
for epoch in range(passes):
shuffle(doc_list) # shuffling gets best results
for name, train_model in models_by_name.items():
# train
duration = 'na'
train_model.alpha, train_model.min_alpha = alpha, alpha
with elapsed_timer() as elapsed:
train_model.train(doc_list)
duration = '%.1f' % elapsed()
# evaluate
eval_duration = ''
with elapsed_timer() as eval_elapsed:
err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if err <= best_error[name]:
best_error[name] = err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, err, epoch + 1, name, duration, eval_duration))
if ((epoch + 1) % 5) == 0 or epoch == 0:
eval_duration = ''
with elapsed_timer() as eval_elapsed:
infer_err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs, infer=True)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if infer_err < best_error[name + '_inferred']:
best_error[name + '_inferred'] = infer_err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, infer_err, epoch + 1, name + '_inferred', duration, eval_duration))
print('completed pass %i at alpha %f' % (epoch + 1, alpha))
alpha -= alpha_delta
print("END %s" % str(datetime.datetime.now()))
Explanation: Bulk Training
Using explicit multiple-pass, alpha-reduction approach as sketched in gensim doc2vec blog post – with added shuffling of corpus on each pass.
Note that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.
Evaluation of each model's sentiment-predictive power is repeated after each pass, as an error rate (lower is better), to see the rates-of-relative-improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors.
(On a 4-core 2.6 GHz Intel Core i7, these 20 passes of training and evaluating 3 main models take about an hour.)
End of explanation
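The alpha schedule driving that loop is just a linear ramp; isolating it makes the per-pass learning rates explicit:

```python
def alpha_schedule(alpha=0.025, min_alpha=0.001, passes=20):
    delta = (alpha - min_alpha) / passes
    # the loop uses alpha for a pass, THEN decrements it
    return [alpha - i * delta for i in range(passes)]

sched = alpha_schedule()
```

Note that because the decrement happens after each pass, the final pass still runs at 0.0022, above `min_alpha`.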
# print best error rates achieved
for rate, name in sorted((rate, name) for name, rate in best_error.items()):
print("%f %s" % (rate, name))
Explanation: Achieved Sentiment-Prediction Accuracy
End of explanation
doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc; re-run cell for more examples
print('for doc %d...' % doc_id)
for model in simple_models:
inferred_docvec = model.infer_vector(alldocs[doc_id].words)
print('%s:\n %s' % (model, model.docvecs.most_similar([inferred_docvec], topn=3)))
Explanation: In my testing, unlike the paper's report, DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement. The best results I've seen are still just under 10% error rate, still a ways from the paper's 7.42%.
Examining Results
Are inferred vectors close to the precalculated ones?
End of explanation
import random
doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc, re-run cell for more examples
model = random.choice(simple_models) # and a random model
sims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count) # get *all* similar documents
print(u'TARGET (%d): «%s»\n' % (doc_id, ' '.join(alldocs[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(alldocs[sims[index][0]].words)))
Explanation: (Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.)
Do close documents seem more related than distant ones?
End of explanation
word_models = simple_models[:]
import random
from IPython.display import HTML
# pick a random word with a suitable number of occurences
while True:
word = random.choice(word_models[0].wv.index2word)
if word_models[0].wv.vocab[word].count > 10:
break
# or uncomment below line, to just pick a word from the relevant domain:
#word = 'comedy/drama'
similars_per_model = [str(model.most_similar(word, topn=20)).replace('), ','),<br>\n') for model in word_models]
similar_table = ("<table><tr><th>" +
"</th><th>".join([str(model) for model in word_models]) +
"</th></tr><tr><td>" +
"</td><td>".join(similars_per_model) +
"</td></tr></table>")
print("most similar words for '%s' (%d occurences)" % (word, simple_models[0].wv.vocab[word].count))
HTML(similar_table)
Explanation: (Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)
Do the word vectors show useful similarities?
End of explanation
# assuming something like
# https://word2vec.googlecode.com/svn/trunk/questions-words.txt
# is in local directory
# note: this takes many minutes
for model in word_models:
sections = model.accuracy('questions-words.txt')
correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])
print('%s: %0.2f%% correct (%d of %d)' % (model, float(correct*100)/(correct+incorrect), correct, correct+incorrect))
Explanation: Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their random initialized values – unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task.
Words from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)
Are the word vectors from this dataset any good at analogies?
End of explanation
This cell left intentionally erroneous.
Explanation: Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)
Slop
End of explanation
from gensim.models import KeyedVectors
w2v_g100b = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
w2v_g100b.compact_name = 'w2v_g100b'
word_models.append(w2v_g100b)
Explanation: To mix the Google dataset (if locally available) into the word tests...
End of explanation
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
Explanation: To get copious logging output from above steps...
End of explanation
%load_ext autoreload
%autoreload 2
Explanation: To auto-reload python code while developing...
End of explanation |
6,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Document retrieval from wikipedia data
Fire up GraphLab Create
Step1: Load some text data - from wikipedia, pages on people
Step2: Data contains
Step3: Explore the dataset and check out the text it contains
Exploring the entry for president Obama
Step4: Exploring the entry for actor George Clooney
Step5: Get the word counts for Obama article
Step6: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
Step7: Sorting the word counts to show most common words at the top
Step8: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
Step9: Examine the TF-IDF for the Obama article
Step10: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
Step11: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
Step12: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
Step13: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
Step14: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval | Python Code:
import graphlab
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
people = graphlab.SFrame('people_wiki.gl/')
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
people.head()
len(people)
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
Explanation: Explore the dataset and check out the text it contains
Exploring the entry for president Obama
End of explanation
clooney = people[people['name'] == 'George Clooney']
clooney['text']
Explanation: Exploring the entry for actor George Clooney
End of explanation
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
Explanation: Get the word counts for Obama article
End of explanation
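graphlab's `count_words` is essentially a bag-of-words count over whitespace tokens; a plain-Python stand-in (an approximation — the real function also exposes delimiter and case options):

```python
from collections import Counter

def count_words(text):
    # word -> number of occurrences, split on whitespace
    return dict(Counter(text.split()))

wc = count_words('the quick the lazy the')
```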
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
Explanation: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
End of explanation
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
Explanation: Sorting the word counts to show most common words at the top
End of explanation
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf[0]
people['tfidf'] = tfidf['docs']
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
End of explanation
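One common TF-IDF variant — an illustration of the idea, not necessarily graphlab's exact formula — is term frequency times log(N / document frequency), which zeroes out words appearing in every document:

```python
import math

def tf_idf(corpus):
    n = len(corpus)
    # document frequency: in how many docs does each word appear?
    df = {}
    for doc in corpus:
        for word in set(doc):
            df[word] = df.get(word, 0) + 1
    scores = []
    for doc in corpus:
        tf = {}
        for word in doc:
            tf[word] = tf.get(word, 0) + 1
        scores.append({w: tf[w] * math.log(float(n) / df[w]) for w in tf})
    return scores

docs = [['the', 'dog', 'barks'], ['the', 'cat', 'sleeps']]
scores = tf_idf(docs)
```

Here 'the' appears in both documents, so its score collapses to zero, while the informative words keep a positive weight.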
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
Explanation: Examine the TF-IDF for the Obama article
End of explanation
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
End of explanation
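The cosine distance described above — 1 - cosine_similarity — can be written directly with the standard library. A small sketch on dense toy vectors (graphlab.distances.cosine computes the same quantity on the sparse TF-IDF dictionaries):

```python
import math

def cosine_distance(a, b):
    # 1 minus the cosine of the angle between the two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

u = [1.0, 0.0, 2.0]
w = [0.0, 3.0, 0.0]
print(cosine_distance(u, u))  # ~0.0: identical direction
print(cosine_distance(u, w))  # 1.0: orthogonal vectors
```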
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
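Under the hood, a nearest-neighbors query simply ranks every document by its distance to the query vector. A brute-force sketch (the real model uses smarter data structures, and the dense toy vectors here stand in for the TF-IDF features):

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Hypothetical "tf-idf" vectors: politicians point one way, the footballer another.
corpus = {
    'Barack Obama':  [3.0, 1.0, 0.1],
    'Joe Biden':     [2.5, 1.2, 0.2],
    'David Beckham': [0.1, 0.2, 4.0],
}

def query(name, k=2):
    q = corpus[name]
    # Rank all documents by distance to the query; the query document
    # itself comes back first at distance ~0.
    ranked = sorted(corpus, key=lambda other: cosine_distance(q, corpus[other]))
    return ranked[:k]

print(query('Barack Obama'))  # -> ['Barack Obama', 'Joe Biden']
```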
knn_model.query(obama)
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
elton_john = people[people['name'] == 'Elton John']
elton_john
elton_john_wc_table = elton_john[['word_count']].stack('word_count',new_column_name = ['word','count'])
elton_john_wc_table
elton_john_wc_table.sort('count',ascending=False)
elton_john_tfidf_table = elton_john[['tfidf']].stack('tfidf', new_column_name=['word', 'tfidf'])
elton_john_tfidf_table.sort('tfidf', ascending=False)
vic_beckham = people[people['name'] == 'Victoria Beckham']
graphlab.distances.cosine(elton_john['tfidf'][0],vic_beckham['tfidf'][0])
paul_m = people[people['name']== 'Paul McCartney']
graphlab.distances.cosine(elton_john['tfidf'][0],paul_m['tfidf'][0])
knn_model_tfidf = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name',distance='cosine')
knn_model_wc= graphlab.nearest_neighbors.create(people,features=['word_count'],label = 'name',distance='cosine')
knn_model_wc.query(elton_john)
knn_model_tfidf.query(elton_john)
knn_model_wc.query(vic_beckham)
knn_model_tfidf.query(vic_beckham)
Explanation: As we can see, President Obama's article is closest to the one about his vice-president Biden, and to those of other politicians.
Other examples of document retrieval
End of explanation |
6,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Colaboratory
Before you start
When you open a new Colab from GitHub (like this one), you cannot save changes. So it's usually best to store the Colab in your personal drive via "File > Save a copy in drive..." before you do anything else.
Introduction
Some important links to keep open during the workshop – open these tabs now!
Step1: You can also execute just a single statement within a cell.
Step2: What to do if you get stuck
If you get stuck and the documentation doesn't help, consider asking for additional help.
Step3: Importing TensorFlow
We'll be using TensorFlow 2.1.0 in this workshop. This will soon be the default, but for the time being we still need to activate it with the Colab-specific %tensorflow_version magic.
Step4: Running shell commands
You can run shell commands directly in Colab
Step5: Autocompletion and docstrings
Jupyter shows possible completions of partially typed
commands.
Try it for yourself by displaying all available tf. methods that start with one.
Step6: In addition, you can also display docstrings to see the function signature and possible parameters.
Step7: Alternatively, you might also inspect function details with docstrings if available by appending a "?".
Step8: Note
Step9: Runtimes
As noted in the introduction above, Colab provides multiple runtimes with different hardware accelerators
Step10: As can be seen, the machine has been allocated just very recently for our purposes.
VM specifications
Step11: Plotting
The notebook environment also provides options to visualize and interact with data.
We'll take a short look at the plotting/visualization libraries Matplotlib and Altair.
Matplotlib
Matplotlib is one of the most famous Python plotting libraries and can be used to plot results within a cell's output (see Matplotlib Introduction).
Let's try to plot something with it.
Step12: Altair
Another declarative visualization library for Python is Altair (see Altair
Step13: Notebook Magics
The IPython and Colab environment support built-in magic commands called magics (see
Step14: Line magics
You can also make use of line magics which can be inserted anywhere at the beginning of a line inside a cell and need to be prefixed with %.
Examples include
Step15: Note
Step16: Data handling
There are multiple ways to provide data to a Colabs's VM environment.
Note
Step17: List a subset of the contained files using the gsutil tool.
Step18: Conveniently, TensorFlow natively supports multiple file systems such as
Step19: Snippets
Finally, we can take a look at the snippets support in Colab.
If you're using Jupyter please see Jupyter contrib nbextensions - Snippets menu as this is not natively supported.
Snippets are a way to quickly "bookmark" pieces of code or text that you might want to insert into specific cells.
Step20: We have created some default snippets for this workshop in
Step21: Pro tip
Step22: Forms
You can simplify cells by hiding their code and displaying a form instead.
Note
Step23: Interactive debugging
An example of an IPython tool that you can utilize is the interactive debugger
provided inside an IPython environment like Colab.
For instance, by using %pdb on, you can automatically trigger the debugger on exceptions to further analyze the state.
Some useful debugger commands are | Python Code:
# YOUR ACTION REQUIRED:
# Execute this cell first using <CTRL-ENTER> and then using <SHIFT-ENTER>.
# Note the difference in which cell is selected after execution.
print('Hello world!')
Explanation: Colaboratory
Before you start
When you open a new Colab from GitHub (like this one), you cannot save changes. So it's usually best to store the Colab in your personal drive via "File > Save a copy in drive..." before you do anything else.
Introduction
Some important links to keep open during the workshop – open these tabs now!:
TF documentation : Use the search box (top right) to get documentation on Tensorflow's rich API.
solutions/ : Every notebook in the exercises/ directory has a corresponding notebook in the solutions/ directory.
Colaboratory (Colab) is a Jupyter notebook environment which allows you to work with data and code in an interactive manner. You can decide where you want to run your code:
Using a hosted runtime provided by Google (default)
Locally using your own machine and resources
It supports Python 3 and comes with a set of pre-installed libraries like Tensorflow and Matplotlib but also gives you the option to install more libraries on demand. The resulting notebooks can be shared in a straightforward way.
Caveats:
The virtual machines used for the runtimes are ephemeral, so make sure to save your data in a persistent location, for example locally (by downloading it), in Google Cloud Storage, or in Google Drive.
The service is free to use, but the performance of the default runtimes can be insufficient for your purposes.
You have the option to select a runtime with GPU or TPU support.
"Colaboratory is intended for interactive use. Long-running background computations, particularly on GPUs, may be stopped. [...] We encourage users who wish to run continuous or long-running computations through Colaboratory’s UI to use a local runtime." - See Colaboratory FAQ
Getting started
Connect to a runtime now by clicking connect in the top right corner if you don't already see a green checkmark there.
To get a better overview you might want to activate the Table of contents by clicking on the arrow on the left.
Important shortcuts
Action | Colab Shortcut | Jupyter Shortcut
---|---|---
Executes current cell | <CTRL-ENTER> | <CTRL-ENTER>
Executes current cell and moves to next cell | <SHIFT-ENTER> | <SHIFT-ENTER>
Executes current selection | <CTRL-SHIFT-ENTER> | N/A
Insert cell above | <CTRL-M> <A> | <A>
Append cell below | <CTRL-M> <B> | <B>
Shows searchable command palette | <CTRL-SHIFT-P> | <CTRL-SHIFT-P>
Convert cell to code | <CTRL-M> <Y> | <Y>
Convert cell to Markdown | <CTRL-M> <M> | <M>
Autocomplete (on by default) | <CTRL-SPACE> | <TAB>
Goes from edit to "command" mode | <ESC> | <ESC>
Goes from "command" to edit mode | <ENTER> | <ENTER>
Show keyboard shortcuts | <CTRL-M> <H> | <H>
<p align="center"><b>Note:</b> On OS X you can use `<COMMAND>` instead of `<CTRL>`</p>
Give it a try!
End of explanation
# YOUR ACTION REQUIRED:
# Execute only the first print statement by selecting the first line and pressing
# <CTRL-SHIFT-ENTER>.
print('Only print this line.')
print('Avoid printing this line.')
Explanation: You can also execute just a single statement within a cell.
End of explanation
def xor_str(a, b):
return ''.join([chr(ord(a[i % len(a)]) ^ ord(b[i % len(b)]))
for i in range(max(len(a), len(b)))])
# YOUR ACTION REQUIRED:
# Try to find the correct value for the variable below.
workshop_secret = '(replace me!)'
xor_str(workshop_secret,
'\x03\x00\x02\x10\x00\x1f\x03L\x1b\x18\x00\x06\x07\x06K2\x19)*S;\x17\x08\x1f\x00\x05F\x1e\x00\x14K\x115\x16\x07\x10\x1cR1\x03\x1d\x1cS\x1a\x00\x13J')
# Hint: You might want to checkout the ../solutions directory
# (you should already have opened this directory in a browser tab :-)
Explanation: What to do if you get stuck
If you get stuck and the documentation doesn't help, consider asking for additional help.
End of explanation
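The xor_str helper above is an involution — applying it twice with the same key recovers the original string, which is the property the exercise relies on. A quick self-contained check (repeating the definition so the snippet runs on its own, without revealing the workshop secret):

```python
def xor_str(a, b):
    # XOR the characters of b against the cyclically repeated characters of a.
    return ''.join([chr(ord(a[i % len(a)]) ^ ord(b[i % len(b)]))
                    for i in range(max(len(a), len(b)))])

cipher = xor_str('key', 'hello world')
print(xor_str('key', cipher))  # -> 'hello world'
```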
# We must call this "magic" before importing TensorFlow. We will explain
# further down what "magics" (starting with %) are.
%tensorflow_version 2.x
# Include basic dependencies and display the tensorflow version.
import tensorflow as tf
tf.__version__
Explanation: Importing TensorFlow
We'll be using TensorFlow 2.1.0 in this workshop. This will soon be the default, but for the time being we still need to activate it with the Colab-specific %tensorflow_version magic.
End of explanation
# Print the current working directory and list all files in it.
!pwd
!ls
# Especially useful: Installs new packages.
!pip install qrcode
import qrcode
qrcode.make('Colab rocks!')
Explanation: Running shell commands
You can run shell commands directly in Colab: simply prepend the command with a !.
End of explanation
# YOUR ACTION REQUIRED:
# Set the cursor to after tf.one and press <CTRL-SPACE>.
# On Mac, only <OPTION-ESCAPE> might work.
tf.one
Explanation: Autocompletion and docstrings
Jupyter shows possible completions of partially typed
commands.
Try it for yourself by displaying all available tf. methods that start with one.
End of explanation
# YOUR ACTION REQUIRED:
# Complete the command to `tf.maximum` and then add the opening bracket "(" to
# see the function documentation.
tf.maximu
Explanation: In addition, you can also display docstrings to see the function signature and possible parameters.
End of explanation
tf.maximum?
Explanation: Alternatively, you might also inspect function details with docstrings if available by appending a "?".
End of explanation
test_dict = {'key0': 'Tensor', 'key1': 'Flow'}
test_dict?
Explanation: Note: This also works for any other type of object as can be seen below.
End of explanation
# Display how long the system has been running.
# Note : this shows "0 users" because no user is logged in via SSH.
!uptime
Explanation: Runtimes
As noted in the introduction above, Colab provides multiple runtimes with different hardware accelerators:
CPU (default)
GPU
TPU
which can be selected by choosing "Runtime > Change runtime type" in the menu.
Please be aware that selecting a new runtime will assign a new virtual machine (VM).
In general, assume that any changes you make to the VM environment, including stored data, are ephemeral. In particular, this might require executing previous cells again, as their results are otherwise unknown to a new runtime.
Let's take a closer look at one of such provided VMs.
Once we have been assigned a runtime we can inspect it further.
End of explanation
# Display available and used memory.
!free -h
print("-"*70)
# Display the CPU specification.
!lscpu
print("-"*70)
# Display the GPU specification (if available).
!(nvidia-smi | grep -q "has failed") && echo "No GPU found!" || nvidia-smi
Explanation: As can be seen, the machine has been allocated just very recently for our purposes.
VM specifications
End of explanation
# Display the Matplotlib outputs within a cell's output.
%matplotlib inline
import numpy as np
from matplotlib import pyplot
# Create a randomized scatterplot using matplotlib.
x = np.random.rand(100).astype(np.float32)
noise = np.random.normal(scale=0.3, size=len(x))
y = np.sin(x * 7) + noise
pyplot.scatter(x, y)
Explanation: Plotting
The notebook environment also provides options to visualize and interact with data.
We'll take a short look at the plotting/visualization libraries Matplotlib and Altair.
Matplotlib
Matplotlib is one of the most famous Python plotting libraries and can be used to plot results within a cell's output (see Matplotlib Introduction).
Let's try to plot something with it.
End of explanation
# Load an example dataset.
from vega_datasets import data
cars = data.cars()
# Plot the dataset, referencing dataframe column names.
import altair as alt
alt.Chart(cars).mark_point().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
tooltip=['Name', 'Origin', 'Horsepower', 'Miles_per_Gallon']
).interactive()
Explanation: Altair
Another declarative visualization library for Python is Altair (see Altair: Declarative Visualization in Python).
Try to zoom in/out and to hover over individual data points in the resulting plot below.
End of explanation
%%sh
echo "This is a shell script!"
# List all running VM processes.
ps -ef
echo "Done"
# Embed custom HTML directly into a cell's output.
%%html
<marquee>HTML rocks</marquee>
Explanation: Notebook Magics
The IPython and Colab environment support built-in magic commands called magics (see: IPython - Magics).
In addition to default Python, these commands might be handy for example when it comes to interacting directly with the VM or the Notebook itself.
Cell magics
Cell magics define a mode for a complete cell and are prefixed with %%.
Examples include:
%%bash or %%sh
%%html
%%javascript
End of explanation
n = 1000000
%time list1 = [i for i in range(n)]
print("")
%time list2 = [i for i in range(int(n/2))]
Explanation: Line magics
You can also make use of line magics which can be inserted anywhere at the beginning of a line inside a cell and need to be prefixed with %.
Examples include:
%time - display the required time to execute the current line
%cd - change the current working directory
%pdb - invoke an interactive Python debugger
%lsmagic - list all available line magic and cell magic functions
For example, if you want to find out how long one specific line requires to be executed you can just prepend %time.
End of explanation
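Outside a notebook, the %time / %%time magics have a standard-library counterpart in the timeit module — a small sketch:

```python
import timeit

# Time a list comprehension over many runs, similar in spirit to
# `%time list1 = [i for i in range(n)]` but with averaging built in.
elapsed = timeit.timeit('[i for i in range(10000)]', number=100)
print('total seconds for 100 runs:', elapsed)
```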
%%time
n = 1000000
list1 = [i for i in range(n)]
list2 = [i for i in range(int(n/2))]
Explanation: Note: Some line magics like %time can also be used for complete cells by writing %%time.
End of explanation
from google.colab import auth
auth.authenticate_user()
Explanation: Data handling
There are multiple ways to provide data to a Colabs's VM environment.
Note: This section only applies to Colab.
Jupyter has a file explorer and other options for data handling.
The options include:
* Uploading files from the local file system.
* Connecting to Google Cloud Storage (explained below).
* Connecting to Google Drive (see: Snippets: Drive; will be used in the next Colabs).
Uploading files from the local file system
If you need to manually upload files to the VM, you can use the files tab on the left. The files tab also allows you to browse the contents of the VM and when you double click on a file you'll see a small text editor on the right.
Connecting to Google Cloud Storage
Google Cloud Storage (GCS) is a cloud file storage service with a RESTful API.
We can utilize it to store our own data or to access data provided by the following identifier:
gs://[BUCKET_NAME]/[OBJECT_NAME]
We'll use the data provided in gs://amld-datasets/zoo_img as can be seen below.
Before we can interact with the cloud environment, we need to grant permissions accordingly (also see External data: Cloud Storage).
End of explanation
!gsutil ls gs://amld-datasets/zoo_img | head
Explanation: List a subset of the contained files using the gsutil tool.
End of explanation
# Note: This cell hangs if you forget to call auth.authenticate_user() above.
tf.io.gfile.glob('gs://amld-datasets/zoo_img/*')[:10]
Explanation: Conveniently, TensorFlow natively supports multiple file systems such as:
GCS - Google Cloud Storage
HDFS - Hadoop
S3 - Amazon Simple Storage
An example for the GCS filesystem can be seen below.
End of explanation
# YOUR ACTION REQUIRED:
# Explore existing snippets by going to the `Code snippets` section.
# Click on the <> button on the left sidebar to open the snippets.
# Alternatively, you can press `<CTRL><ALT><P>` (or `<COMMAND><OPTION><P>` for
# OS X).
Explanation: Snippets
Finally, we can take a look at the snippets support in Colab.
If you're using Jupyter please see Jupyter contrib nbextensions - Snippets menu as this is not natively supported.
Snippets are a way to quickly "bookmark" pieces of code or text that you might want to insert into specific cells.
End of explanation
from google.colab import snippets
# snippets.register('https://colab.research.google.com/drive/1OFSjEmqC-UC66xs-LR7-xmgkvxYTrAcN')
Explanation: We have created some default snippets for this workshop in:
https://colab.research.google.com/drive/1OFSjEmqC-UC66xs-LR7-xmgkvxYTrAcN
In order to use these snippets, you can:
Click on "Tools > Settings".
Copy the above url into "Custom snippet notebook URL" and press enter.
As soon as you update the settings, the snippets will then become available in every Colab. Search for "amld" to quickly find them.
Alternatively, you can also add snippets via the API (but this needs to be done for every Colab/kernel):
End of explanation
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def mymagic(line_content, cell_content=None):
print('line_content="%s" cell_content="%s"' % (line_content, cell_content))
%mymagic Howdy Alice!
%%mymagic simple question
Howdy Alice!
how are you?
Explanation: Pro tip : Maybe this is a good moment to create your own snippets and register them in settings. You can then start collecting often-used code and have it ready when you need it... In this Colab you'll need to have text cells with titles (like ### snippet name) preceeding the code cells.
----- Optional part -----
Custom line magic
You can also define your own line/cell magic in the following way.
End of explanation
#@title Execute me
# Hidden cell content.
print("Double click the cell to see its content.")
# Form example mostly taken from "Adding form fields" Snippet.
#@title Example form
#@markdown Specify some test data and execute this cell.
string_type = 'test_string' #@param {type: "string"}
slider_value = 145 #@param {type: "slider", min: 100, max: 200}
number = 1339 #@param {type: "number"}
date = '2019-01-26' #@param {type: "date"}
pick_me = "a" #@param ['a', 'b', 'c']
#@markdown ---
print("Submitted data:")
print(string_type, slider_value, number, date, pick_me)
Explanation: Forms
You can simplify cells by hiding their code and displaying a form instead.
Note: You can display or hide the code by double clicking the form which might be on the right side.
End of explanation
# YOUR ACTION REQUIRED:
# Execute this cell, print the variable contents of a, b and exit the debugger.
%pdb on
a = 67069 / 47 - 0x5a
b = a - 0x539
#c = a / b # Will throw an exception.
Explanation: Interactive debugging
An example of an IPython tool that you can utilize is the interactive debugger
provided inside an IPython environment like Colab.
For instance, by using %pdb on, you can automatically trigger the debugger on exceptions to further analyze the state.
Some useful debugger commands are:
Description | Command
---|---
h(elp) | Display available commands
p(rint) x | Show content of object x
w(here) | Show current instruction pointer position
q(uit) | Leave the debugger
End of explanation |
6,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Better ML Engineering with ML Metadata
Learning Objectives
Download the dataset
Create an InteractiveContext
Construct the TFX Pipeline
Query the MLMD Database
Introduction
Assume a scenario where you set up a production ML pipeline to classify penguins. The pipeline ingests your training data, trains and evaluates a model, and pushes it to production.
However, when you later try using this model with a larger dataset that contains different kinds of penguins, you observe that your model does not behave as expected and starts classifying the species incorrectly.
At this point, you are interested in knowing
Step1: Install and import TFX
Step2: Please ignore the incompatibility error and warnings. Make sure to re-run the cell.
You must restart the kernel after installing TFX. Select Kernel > Restart kernel > Restart from the menu.
Do not proceed with the rest of this notebook without restarting the kernel.
Import other libraries
Step3: Import the MLMD library.
Step4: Download the dataset
We use the Palmer Penguins dataset which can be found on Github. We processed the dataset by leaving out any incomplete records, and drops island and sex columns, and converted labels to int32. The dataset contains 334 records of the body mass and the length and depth of penguins' culmens, and the length of their flippers. You use this data to classify penguins into one of three species.
Step5: Create an InteractiveContext
To run TFX components interactively in this notebook, create an InteractiveContext. The InteractiveContext uses a temporary directory with an ephemeral MLMD database instance.
In general, it is a good practice to group similar pipeline runs under a Context.
Step6: Construct the TFX Pipeline
A TFX pipeline consists of several components that perform different aspects of the ML workflow. In this notebook, you create and run the ExampleGen, StatisticsGen, SchemaGen, and Trainer components and use the Evaluator and Pusher component to evaluate and push the trained model.
Refer to the components tutorial for more information on TFX pipeline components.
Note
Step7: Instantiate and run the StatisticsGen Component
Step8: Instantiate and run the SchemaGen Component
Step9: Instantiate and run the Trainer Component
Step10: Run the Trainer component.
Step11: Evaluate and push the model
Use the Evaluator component to evaluate and 'bless' the model before using the Pusher component to push the model to a serving directory.
Step12: Running the TFX pipeline populates the MLMD Database. In the next section, you use the MLMD API to query this database for metadata information.
Query the MLMD Database
The MLMD database stores three types of metadata
Step13: Create some helper functions to view the data from the MD store.
Step14: First, query the MD store for a list of all its stored ArtifactTypes.
Step15: Next, query all PushedModel artifacts.
Step16: Query the MD store for the latest pushed model. This notebook has only one pushed model.
Step17: One of the first steps in debugging a pushed model is to look at which trained model is pushed and to see which training data is used to train that model.
MLMD provides traversal APIs to walk through the provenance graph, which you can use to analyze the model provenance.
Step18: Query the parent artifacts for the pushed model.
Step19: Query the properties for the model.
Step20: Query the upstream artifacts for the model.
Step21: Get the training data the model trained with.
Step22: Now that you have the training data that the model trained with, query the database again to find the training step (execution). Query the MD store for a list of the registered execution types.
Step23: The training step is the ExecutionType named tfx.components.trainer.component.Trainer. Traverse the MD store to get the trainer run that corresponds to the pushed model. | Python Code:
!pip install --upgrade pip
Explanation: Better ML Engineering with ML Metadata
Learning Objectives
Download the dataset
Create an InteractiveContext
Construct the TFX Pipeline
Query the MLMD Database
Introduction
Assume a scenario where you set up a production ML pipeline to classify penguins. The pipeline ingests your training data, trains and evaluates a model, and pushes it to production.
However, when you later try using this model with a larger dataset that contains different kinds of penguins, you observe that your model does not behave as expected and starts classifying the species incorrectly.
At this point, you are interested in knowing:
What is the most efficient way to debug the model when the only available artifact is the model in production?
Which training dataset was used to train the model?
Which training run led to this erroneous model?
Where are the model evaluation results?
Where to begin debugging?
ML Metadata (MLMD) is a library that leverages the metadata associated with ML models to help you answer these questions and more. A helpful analogy is to think of this metadata as the equivalent of logging in software development. MLMD enables you to reliably track the artifacts and lineage associated with the various components of your ML pipeline.
In this notebook, you set up a TFX Pipeline to create a model that classifies penguins into three species based on the body mass and the length and depth of their culmens, and the length of their flippers. You then use MLMD to track the lineage of pipeline components.
Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.
Setup
First, we install and import the necessary packages, set up paths, and download data.
Upgrade Pip
End of explanation
!pip install -q -U tfx
Explanation: Install and import TFX
End of explanation
import os
import tempfile
import urllib
import pandas as pd
import tensorflow_model_analysis as tfma
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
Explanation: Please ignore the incompatibility error and warnings. Make sure to re-run the cell.
You must restart the kernel after installing TFX. Select Kernel > Restart kernel > Restart from the menu.
Do not proceed with the rest of this notebook without restarting the kernel.
Import other libraries
End of explanation
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import ml_metadata as mlmd
print('MLMD version: {}'.format(mlmd.__version__))
from ml_metadata.proto import metadata_store_pb2
Explanation: Import the MLMD library.
End of explanation
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
# Join various path components
_data_filepath = # TODO: Your code goes here
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
Explanation: Download the dataset
We use the Palmer Penguins dataset which can be found on Github. We processed the dataset by leaving out any incomplete records, and drops island and sex columns, and converted labels to int32. The dataset contains 334 records of the body mass and the length and depth of penguins' culmens, and the length of their flippers. You use this data to classify penguins into one of three species.
End of explanation
interactive_context = # TODO: Your code goes here
Explanation: Create an InteractiveContext
To run TFX components interactively in this notebook, create an InteractiveContext. The InteractiveContext uses a temporary directory with an ephemeral MLMD database instance.
In general, it is a good practice to group similar pipeline runs under a Context.
End of explanation
example_gen = # TODO: Your code goes here
interactive_context.run(example_gen)
Explanation: Construct the TFX Pipeline
A TFX pipeline consists of several components that perform different aspects of the ML workflow. In this notebook, you create and run the ExampleGen, StatisticsGen, SchemaGen, and Trainer components and use the Evaluator and Pusher component to evaluate and push the trained model.
Refer to the components tutorial for more information on TFX pipeline components.
Note: Constructing a TFX Pipeline by setting up the individual components involves a lot of boilerplate code. For the purpose of this notebook, it is alright if you do not fully understand every line of code in the pipeline setup.
Instantiate and run the ExampleGen Component
End of explanation
statistics_gen = # TODO: Your code goes here
interactive_context.run(statistics_gen)
Explanation: Instantiate and run the StatisticsGen Component
End of explanation
infer_schema = # TODO: Your code goes here
interactive_context.run(infer_schema)
Explanation: Instantiate and run the SchemaGen Component
End of explanation
# Define the module file for the Trainer component
trainer_module_file = 'penguin_trainer.py'
%%writefile {trainer_module_file}
# Define the training algorithm for the Trainer module file
import os
from typing import List, Text
import tensorflow as tf
from tensorflow import keras
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# Features used for classification - culmen length and depth, flipper length,
# body mass, and species.
_LABEL_KEY = 'species'
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema, batch_size: int) -> tf.data.Dataset:
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY), schema).repeat()
def _build_keras_model():
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
d = keras.layers.Dense(8, activation='relu')(d)
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
return model
def run_fn(fn_args: tfx.components.FnArgs):
schema = schema_pb2.Schema()
tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema)
train_dataset = _input_fn(
fn_args.train_files, fn_args.data_accessor, schema, batch_size=10)
eval_dataset = _input_fn(
fn_args.eval_files, fn_args.data_accessor, schema, batch_size=10)
model = _build_keras_model()
model.fit(
train_dataset,
epochs=int(fn_args.train_steps / 20),
steps_per_epoch=20,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
model.save(fn_args.serving_model_dir, save_format='tf')
Explanation: Instantiate and run the Trainer Component
End of explanation
trainer = tfx.components.Trainer(
module_file=os.path.abspath(trainer_module_file),
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=50))
interactive_context.run(trainer)
Explanation: Run the Trainer component.
End of explanation
_serving_model_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/penguins_classification')
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(label_key='species', signature_name='serving_default')
],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(
class_name='SparseCategoricalAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.6})))
])
],
slicing_specs=[tfma.SlicingSpec()])
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
schema=infer_schema.outputs['schema'],
eval_config=eval_config)
interactive_context.run(evaluator)
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
interactive_context.run(pusher)
Explanation: Evaluate and push the model
Use the Evaluator component to evaluate and 'bless' the model before using the Pusher component to push the model to a serving directory.
End of explanation
connection_config = interactive_context.metadata_connection_config
store = mlmd.MetadataStore(connection_config)
# All TFX artifacts are stored in the base directory
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]
Explanation: Running the TFX pipeline populates the MLMD Database. In the next section, you use the MLMD API to query this database for metadata information.
Query the MLMD Database
The MLMD database stores three types of metadata:
Metadata about the pipeline and lineage information associated with the pipeline components
Metadata about artifacts that were generated during the pipeline run
Metadata about the executions of the pipeline
A typical production environment pipeline serves multiple models as new data arrives. When you encounter erroneous results in served models, you can query the MLMD database to isolate the erroneous models. You can then trace the lineage of the pipeline components that correspond to these models to debug your models.
Set up the metadata (MD) store with the InteractiveContext defined previously to query the MLMD database.
End of explanation
def display_types(types):
# Helper function to render dataframes for the artifact and execution types
table = {'id': [], 'name': []}
for a_type in types:
table['id'].append(a_type.id)
table['name'].append(a_type.name)
return pd.DataFrame(data=table)
def display_artifacts(store, artifacts):
# Helper function to render dataframes for the input artifacts
table = {'artifact id': [], 'type': [], 'uri': []}
for a in artifacts:
table['artifact id'].append(a.id)
artifact_type = store.get_artifact_types_by_id([a.type_id])[0]
table['type'].append(artifact_type.name)
table['uri'].append(a.uri.replace(base_dir, './'))
return pd.DataFrame(data=table)
def display_properties(store, node):
# Helper function to render dataframes for artifact and execution properties
table = {'property': [], 'value': []}
for k, v in node.properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
for k, v in node.custom_properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
return pd.DataFrame(data=table)
Explanation: Create some helper functions to view the data from the MD store.
End of explanation
display_types(store.get_artifact_types())
Explanation: First, query the MD store for a list of all its stored ArtifactTypes.
End of explanation
pushed_models = store.get_artifacts_by_type("PushedModel")
display_artifacts(store, pushed_models)
Explanation: Next, query all PushedModel artifacts.
End of explanation
pushed_model = pushed_models[-1]
display_properties(store, pushed_model)
Explanation: Query the MD store for the latest pushed model. This notebook has only one pushed model.
End of explanation
def get_one_hop_parent_artifacts(store, artifacts):
# Get a list of artifacts within a 1-hop of the artifacts of interest
artifact_ids = [artifact.id for artifact in artifacts]
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids(artifact_ids)
if event.type == mlmd.proto.Event.OUTPUT)
artifacts_ids = set(
event.artifact_id
for event in store.get_events_by_execution_ids(executions_ids)
if event.type == mlmd.proto.Event.INPUT)
return [artifact for artifact in store.get_artifacts_by_id(artifacts_ids)]
Explanation: One of the first steps in debugging a pushed model is to look at which trained model is pushed and to see which training data is used to train that model.
MLMD provides traversal APIs to walk through the provenance graph, which you can use to analyze the model provenance.
End of explanation
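The one-hop lookup above reduces to a two-hop walk over events (artifact, to the execution that produced it, to that execution's inputs). A dependency-free sketch of the same idea, with artifact and execution ids invented for illustration:

```python
# Event tuples: (artifact_id, execution_id, kind), kind in {'INPUT', 'OUTPUT'}.
# Ids are made up here; in MLMD they come from Event protos.
EVENTS = [
    (1, 100, 'OUTPUT'),  # ExampleGen (100) produced examples (1)
    (1, 200, 'INPUT'),   # Trainer (200) consumed examples (1)
    (2, 200, 'OUTPUT'),  # Trainer (200) produced a model (2)
    (2, 300, 'INPUT'),   # Pusher (300) consumed the model (2)
    (3, 300, 'OUTPUT'),  # Pusher (300) produced the pushed model (3)
]

def one_hop_parents(artifact_ids):
    # Hop 1: executions whose OUTPUT events point at the artifacts of interest.
    producers = {e for a, e, k in EVENTS if a in artifact_ids and k == 'OUTPUT'}
    # Hop 2: artifacts those executions read via INPUT events.
    return {a for a, e, k in EVENTS if e in producers and k == 'INPUT'}

print(one_hop_parents({3}))  # {2}: the pushed model's parent is the trained model
print(one_hop_parents({2}))  # {1}: the model's parent is the training data
```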
parent_artifacts = get_one_hop_parent_artifacts(store, [pushed_model])
display_artifacts(store, parent_artifacts)
Explanation: Query the parent artifacts for the pushed model.
End of explanation
exported_model = parent_artifacts[0]
display_properties(store, exported_model)
Explanation: Query the properties for the model.
End of explanation
model_parents = get_one_hop_parent_artifacts(store, [exported_model])
display_artifacts(store, model_parents)
Explanation: Query the upstream artifacts for the model.
End of explanation
used_data = model_parents[0]
display_properties(store, used_data)
Explanation: Get the training data the model trained with.
End of explanation
display_types(store.get_execution_types())
Explanation: Now that you have the training data that the model trained with, query the database again to find the training step (execution). Query the MD store for a list of the registered execution types.
End of explanation
def find_producer_execution(store, artifact):
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids([artifact.id])
if event.type == mlmd.proto.Event.OUTPUT)
return store.get_executions_by_id(executions_ids)[0]
trainer = find_producer_execution(store, exported_model)
display_properties(store, trainer)
Explanation: The training step is the ExecutionType named tfx.components.trainer.component.Trainer. Traverse the MD store to get the trainer run that corresponds to the pushed model.
End of explanation |
6,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hh', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NERC
Source ID: HADGEM3-GC31-HH
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:26
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
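For the "Linear" choice above, the equation of state is a first-order expansion of density around a reference temperature and salinity. A sketch with illustrative textbook coefficients (rho0, alpha, beta, T0, S0 are assumed values, not those of any particular model):

```python
def linear_eos_density(T, S, rho0=1026.0, alpha=2.0e-4, beta=7.6e-4,
                       T0=10.0, S0=35.0):
    """Linear EOS: density falls with temperature, rises with salinity.

    rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))   [kg/m3]
    """
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))

print(linear_eos_density(10.0, 35.0))  # 1026.0 at the reference state
```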
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
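As a rough illustration of what such an equation looks like, a commonly quoted linear approximation depresses the freezing point by about 0.054 deg C per PSU of salinity plus a small pressure term (coefficients here are illustrative, not a specific model's TEOS 2010 implementation):

```python
def freezing_point_celsius(salinity_psu, pressure_dbar=0.0):
    # Linear approximation; TEOS 2010 itself uses a higher-order polynomial.
    return -0.054 * salinity_psu - 7.53e-4 * pressure_dbar

print(round(freezing_point_celsius(35.0), 3))  # about -1.89 at the surface
```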
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
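Since this property is an ENUM with cardinality 1.N, at least one value is required and each value must match one of the valid choices listed above (or take the free-text "Other: [Please specify]" form). A small illustrative checker — not part of the ES-DOC tooling — makes that rule concrete:

```python
# Illustrative validator for an ENUM property with cardinality 1.N.
# The choice set is copied from the Valid Choices comment above.
VALID_CHOICES = {
    "Energy", "Enstrophy", "Salt", "Volume of ocean", "Momentum",
}

def check_enum_1_n(values):
    """Return True if at least one value is given and every value is either
    a listed choice or an 'Other: ...' free-text entry."""
    if not values:
        return False  # cardinality 1.N requires at least one entry
    return all(v in VALID_CHOICES or v.startswith("Other: ") for v in values)
```

For example, `["Salt", "Momentum"]` passes, while an empty list or an unlisted value like `["Heat"]` does not.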
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active tracers ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
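As background for the flux-limiter questions in 19.2 and 20.2: a classic monotonic limiter used in some tracer advection schemes is the minmod function (shown here purely as an illustration of the concept, not as the scheme of any particular model):

```python
# Background sketch: the minmod flux limiter, a common monotonicity-preserving
# choice in tracer advection schemes. Returns 0 when the two slope estimates
# disagree in sign, otherwise the one of smaller magnitude.
def minmod(a, b):
    """Monotone limiter: 0 if signs differ, else the smaller-magnitude argument."""
    if a * b <= 0:
        return 0.0
    return min(abs(a), abs(b)) * (1 if a > 0 else -1)
```

Near an extremum the two slope estimates change sign, so minmod returns 0 and the scheme locally reverts to first order, which is what prevents spurious oscillations.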
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embeded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
6,097 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I’m trying to solve a simple ODE to visualise the temporal response, which works well for constant input conditions using the new solve_ivp integration API in SciPy. For example: | Problem:
import scipy.integrate
import numpy as np
N0 = 1
time_span = [0, 10]
def dN1_dt(t, N1):
input = 1-np.cos(t) if 0<t<2*np.pi else 0
return -100*N1 + input
sol = scipy.integrate.solve_ivp(fun=dN1_dt, t_span=time_span, y0=[N0,]) |
6,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of DOV search methods for interpretations (formele stratigrafie)
Use cases explained below
Get 'formele stratigrafie' in a bounding box
Get 'formele stratigrafie' with specific properties
Get 'formele stratigrafie' in a bounding box based on specific properties
Select 'formele stratigrafie' in a municipality and return date
Get 'formele stratigrafie' based on fields not available in the standard output dataframe
Get 'formele stratigrafie' data, returning fields not available in the standard output dataframe
Step1: Get information about the datatype 'Formele stratigrafie'
Step2: A description is provided for the 'Formele stratigrafie' datatype
Step3: The different fields that are available for objects of the 'Formele stratigrafie' datatype can be requested with the get_fields() method
Step4: You can get more information of a field by requesting it from the fields dictionary
Step5: Some fields have an additional item values that lists the possible values the field can have, and a definition describing what they mean
Step6: Example use cases
Get 'Formele stratigrafie' in a bounding box
Get data for all the 'Formele stratigrafie' interpretations that are geographically located within the bounds of the specified box.
The coordinates are in the Belgian Lambert72 (EPSG
Step7: The dataframe contains one 'Formele stratigrafie' interpretation where two layers ('laag') were identified. The available data are flattened to represent unique attributes per row of the dataframe.
Using the pkey_interpretatie field one can request the details of this interpretation in a webbrowser
Step8: Using the field's values information, we can translate the stratigraphical codes to human readable layers
Step9: Get 'Formele stratigrafie' with specific properties
Next to querying interpretations based on their geographic location within a bounding box, we can also search for interpretations matching a specific set of properties. For this we can build a query using a combination of the 'FormeleStratigrafie' fields and operators provided by the WFS protocol.
A list of possible operators can be found below
Step10: In this example we build a query using the PropertyIsGreaterThan and PropertyIsEqualTo operators to find all interpretations that are at least 20 m deep, that are deemed appropriate for a range of 1 km from a defined point
Step11: Once again we can use the pkey_interpretatie as a permanent link to the information of these interpretations
Step12: Get 'Formele stratigrafie' in a bounding box based on specific properties
We can combine a query on attributes with a query on geographic location to get the interpretations within a bounding box that have specific properties.
The following example requests the interpretations of boreholes only, within the given bounding box.
(Note that the datatype of the literal parameter should be a string, regardless of the datatype of this field in the output dataframe.)
Step13: We can look at one of the interpretations in a webbrowser using its pkey_interpretatie
Step14: Select 'Formele stratigrafie' in a municipality and return date
We can limit the columns in the output dataframe by specifying the return_fields parameter in our search.
In this example we query all the 'Formele stratigrafie' interpretations in a distance of 1 km and return their date
Step15: Get 'Formele stratigrafie' based on fields not available in the standard output dataframe
To keep the output dataframe size acceptable, not all available WFS fields are included in the standard output. However, one can use this information to select interpretations as illustrated below.
For example, make a selection of the interpretations in municipality the of Antwerp, before 1/1/1990
Step16: Get 'Formele stratigrafie' data, returning fields not available in the standard output dataframe
As denoted in the previous example, not all available fields are available in the default output frame to keep its size limited. However, you can request any available field by including it in the return_fields parameter of the search
Step17: Visualize results
Using Folium, we can display the results of our search on a map. | Python Code:
%matplotlib inline
import inspect, sys
# check pydov path
import pydov
Explanation: Example of DOV search methods for interpretations (formele stratigrafie)
Use cases explained below
Get 'formele stratigrafie' in a bounding box
Get 'formele stratigrafie' with specific properties
Get 'formele stratigrafie' in a bounding box based on specific properties
Select 'formele stratigrafie' in a municipality and return date
Get 'formele stratigrafie' based on fields not available in the standard output dataframe
Get 'formele stratigrafie' data, returning fields not available in the standard output dataframe
End of explanation
from pydov.search.interpretaties import FormeleStratigrafieSearch
itp = FormeleStratigrafieSearch()
Explanation: Get information about the datatype 'Formele stratigrafie'
End of explanation
itp.get_description()
Explanation: A description is provided for the 'Formele stratigrafie' datatype:
End of explanation
fields = itp.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
Explanation: The different fields that are available for objects of the 'Formele stratigrafie' datatype can be requested with the get_fields() method:
End of explanation
fields['Datum']
Explanation: You can get more information of a field by requesting it from the fields dictionary:
* name: name of the field
* definition: definition of this field
* cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.
* notnull: whether the field is mandatory or not
* type: datatype of the values of this field
End of explanation
fields['lid1']
Explanation: Some fields have an additional item values that lists the possible values the field can have, and a definition describing what they mean:
End of explanation
from pydov.util.location import Within, Box
df = itp.search(location=Within(Box(153145, 206930, 153150, 206935)))
df.head()
Explanation: Example use cases
Get 'Formele stratigrafie' in a bounding box
Get data for all the 'Formele stratigrafie' interpretations that are geographically located within the bounds of the specified box.
The coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.
End of explanation
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
Explanation: The dataframe contains one 'Formele stratigrafie' interpretation where two layers ('laag') were identified. The available data are flattened to represent unique attributes per row of the dataframe.
Using the pkey_interpretatie field one can request the details of this interpretation in a webbrowser:
End of explanation
df['lid1_lb'] = df['lid1'].map(fields['lid1']['values'])
df['lid2_lb'] = df['lid2'].map(fields['lid2']['values'])
df[['pkey_interpretatie', 'diepte_laag_van', 'diepte_laag_tot', 'lid1', 'lid1_lb', 'relatie_lid1_lid2', 'lid2', 'lid2_lb']]
Explanation: Using the field's values information, we can translate the stratigraphical codes to human readable layers:
End of explanation
[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]
Explanation: Get 'Formele stratigrafie' with specific properties
Next to querying interpretations based on their geographic location within a bounding box, we can also search for interpretations matching a specific set of properties. For this we can build a query using a combination of the 'FormeleStratigrafie' fields and operators provided by the WFS protocol.
A list of possible operators can be found below:
End of explanation
from owslib.fes import And, PropertyIsGreaterThan, PropertyIsEqualTo
from pydov.util.location import WithinDistance, Point
query = And([PropertyIsEqualTo(propertyname='Betrouwbaarheid',
literal='goed'),
PropertyIsGreaterThan(propertyname='diepte_tot_m',
literal='20'),
])
df = itp.search(query=query,
location=WithinDistance(Point(153145, 206930), 1000))
df.head()
Explanation: In this example we build a query using the PropertyIsGreaterThan and PropertyIsEqualTo operators to find all interpretations that are at least 20 m deep, that are deemed appropriate for a range of 1 km from a defined point:
End of explanation
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
Explanation: Once again we can use the pkey_interpretatie as a permanent link to the information of these interpretations:
End of explanation
from owslib.fes import PropertyIsEqualTo
query = PropertyIsEqualTo(
propertyname='Type_proef',
literal='Boring')
df = itp.search(
location=Within(Box(153145, 206930, 154145, 207930)),
query=query
)
df.head()
Explanation: Get 'Formele stratigrafie' in a bounding box based on specific properties
We can combine a query on attributes with a query on geographic location to get the interpretations within a bounding box that have specific properties.
The following example requests the interpretations of boreholes only, within the given bounding box.
(Note that the datatype of the literal parameter should be a string, regardless of the datatype of this field in the output dataframe.)
End of explanation
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
Explanation: We can look at one of the interpretations in a webbrowser using its pkey_interpretatie:
End of explanation
df = itp.search(location=WithinDistance(Point(153145, 206930), 1000),
return_fields=('Datum',))
df.head()
df.describe()
Explanation: Select 'Formele stratigrafie' in a municipality and return date
We can limit the columns in the output dataframe by specifying the return_fields parameter in our search.
In this example we query all the 'Formele stratigrafie' interpretations in a distance of 1 km and return their date:
End of explanation
from owslib.fes import And, PropertyIsEqualTo, PropertyIsLessThan
query = And([PropertyIsEqualTo(propertyname='gemeente',
literal='Antwerpen'),
PropertyIsLessThan(propertyname='Datum',
literal='1990-01-01')]
)
df = itp.search(query=query,
return_fields=('pkey_interpretatie', 'Datum'))
df.head()
Explanation: Get 'Formele stratigrafie' based on fields not available in the standard output dataframe
To keep the output dataframe size acceptable, not all available WFS fields are included in the standard output. However, one can use this information to select interpretations as illustrated below.
For example, make a selection of the interpretations in municipality the of Antwerp, before 1/1/1990:
!remark: mind that the municipality attribute is merely an attribute that is defined by the person entering the data. It can be ok, empty, outdated or wrong!
End of explanation
query = PropertyIsEqualTo(
propertyname='gemeente',
literal='Leuven')
df = itp.search(query=query,
return_fields=('pkey_interpretatie', 'pkey_boring', 'pkey_sondering',
'x', 'y', 'Z_mTAW', 'gemeente', 'Auteurs', 'Proefnummer'))
df.head()
Explanation: Get 'Formele stratigrafie' data, returning fields not available in the standard output dataframe
As denoted in the previous example, not all available fields are available in the default output frame to keep its size limited. However, you can request any available field by including it in the return_fields parameter of the search:
End of explanation
# import the necessary modules (not included in the requirements of pydov!)
import folium
from folium.plugins import MarkerCluster
from pyproj import Transformer
# convert the coordinates to lat/lon for folium
def convert_latlon(x1, y1):
transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True)
x2,y2 = transformer.transform(x1, y1)
return x2, y2
df['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y']))
# convert to list
loclist = df[['lat', 'lon']].values.tolist()
# initialize the Folium map on the centre of the selected locations, play with the zoom until ok
fmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=12)
marker_cluster = MarkerCluster().add_to(fmap)
for loc in range(0, len(loclist)):
folium.Marker(loclist[loc], popup=df['Proefnummer'][loc]).add_to(marker_cluster)
fmap
Explanation: Visualize results
Using Folium, we can display the results of our search on a map.
End of explanation |
6,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display Exercise 1
Imports
Put any needed imports needed to display rich output the following cell
Step1: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure the set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate. | Python Code:
from IPython.display import Image
from IPython.display import HTML
from IPython.display import IFrame
assert True # leave this to grade the import statements
Explanation: Display Exercise 1
Imports
Put any needed imports needed to display rich output the following cell:
End of explanation
Image(url = 'http://newsroom.unl.edu/releases/downloadables/photo/20090923solenoid.jpg', width = 600, height = 600)
assert True # leave this to grade the image display
Explanation: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure the set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
End of explanation
%%html
<table>
<th>Name </th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge (e)</th>
<th>Mass(MeV/$c^2$)</th>
</tr>
<tr>
<td> up </td>
<td> u </td>
<td> $\bar{u}$ </td>
<td> +$\frac{2}{3}$ </td>
<td> 0.511 </td>
</tr>
<tr>
<td> down </td>
<td> d </td>
<td> $\bar{d}$ </td>
<td> -$\frac{1}{3}$ </td>
<td> 3.5-6.0 </td>
</tr>
<tr>
<td> charm </td>
<td> c </td>
<td> $\bar{c}$ </td>
<td> +$\frac{2}{3}$ </td>
<td> 1,160-1,340 </td>
</tr>
<tr>
<td> strange </td>
<td> s </td>
<td> $\bar{s}$ </td>
<td> -$\frac{1}{3}$ </td>
<td> 70-130 </td>
</tr>
<tr>
<td> top </td>
<td> t </td>
<td> $\bar{t}$ </td>
<td> +$\frac{2}{3}$ </td>
<td> 169,100-173,300 </td>
</tr>
<tr>
<td> bottom </td>
<td> b </td>
<td> $\bar{b}$ </td>
<td> -$\frac{1}{3}$ </td>
<td> 4,130-4,370 </td>
assert True # leave this here to grade the quark table
Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
End of explanation |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.