| markdown (string, 0–37k chars) | code (string, 1–33.3k chars) | path (string, 8–215 chars) | repo_name (string, 6–77 chars) | license (string, 15 classes) |
|---|---|---|---|---|
See the contents of the bucket in the web interface (the URL will be printed below)
|
print("https://console.developers.google.com/project/" + project_id + "/storage/browser/" + BUCKET_NAME + "/?authuser=0")
|
credentials/Test.ipynb
|
louisdorard/bml-base
|
mit
|
Google Prediction
Initialize API wrapper
|
import googleapiclient.gpred as gpred
oauth_file = %env GPRED_OAUTH_FILE
api = gpred.api(oauth_file)
|
credentials/Test.ipynb
|
louisdorard/bml-base
|
mit
|
Making predictions against a hosted model
Let's use the sample.sentiment hosted model (made publicly available by Google)
|
# projectname has to be 414649711441
prediction_request = api.hostedmodels().predict(project='414649711441',
hostedModelName='sample.sentiment',
body={'input': {'csvInstance': ['I hate that stuff is so stupid']}})
result = prediction_request.execute()
# We can print the raw result
print(result)
|
credentials/Test.ipynb
|
louisdorard/bml-base
|
mit
|
Lists
These exercises are a bit awkward: Python "lists" are not singly linked lists!
Exercise 1: taille
|
from typing import TypeVar, List
_a = TypeVar('alpha')
def taille(liste : List[_a]) -> int:
longueur = 0
for _ in liste:
longueur += 1
return longueur
taille([])
taille([1, 2, 3])
len([])
len([1, 2, 3])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 2: concat
|
from typing import TypeVar, List
_a = TypeVar('alpha')
def concatene(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:
# return liste1 + liste2 # easy solution
liste = []
for i in liste1:
liste.append(i)
for i in liste2:
liste.append(i)
return liste
concatene([1, 2], [3, 4])
[1, 2] + [3, 4]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
But beware: type annotations are always optional in Python:
|
concatene([1, 2], ["pas", "entier", "?"])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 3: appartient
|
from typing import TypeVar, List
_a = TypeVar('alpha')
def appartient(x : _a, liste : List[_a]) -> bool:
for y in liste:
if x == y:
return True # stop before reaching the end
return False
appartient(1, [])
appartient(1, [1])
appartient(1, [1, 2, 3])
appartient(4, [1, 2, 3])
1 in []
1 in [1]
1 in [1, 2, 3]
4 in [1, 2, 3]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Our implementation is obviously slower than the standard library's x in liste test...
But not by that much:
|
%timeit appartient(1000, list(range(10000)))
%timeit 1000 in list(range(10000))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 4: miroir
|
from typing import TypeVar, List
_a = TypeVar('alpha')
def miroir(liste : List[_a]) -> List[_a]:
# return liste[::-1] # easy version
liste2 = []
for x in liste:
liste2.insert(0, x)
return liste2
miroir([2, 3, 5, 7, 11])
[2, 3, 5, 7, 11][::-1]
%timeit miroir([2, 3, 5, 7, 11])
%timeit [2, 3, 5, 7, 11][::-1]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 5: alterne
The semantics weren't very clear, but one can imagine something like this:
|
from typing import TypeVar, List
_a = TypeVar('alpha')
def alterne(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:
liste3 = []
i, j = 0, 0
n, m = len(liste1), len(liste2)
while i < n and j < m: # while both lists still have elements
liste3.append(liste1[i])
i += 1
liste3.append(liste2[j])
j += 1
while i < n: # if n > m, finish liste1
liste3.append(liste1[i])
i += 1
while j < m: # or if n < m, finish liste2
liste3.append(liste2[j])
j += 1
return liste3
alterne([3, 5], [2, 4, 6])
alterne([1, 3, 5], [2, 4, 6])
alterne([1, 3, 5], [4, 6])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
The complexity is linear: $\mathcal{O}(\max(|\text{liste 1}|, |\text{liste 2}|))$.
Exercise 6: nb_occurrences
|
from typing import TypeVar, List
_a = TypeVar('alpha')
def nb_occurrences(x : _a, liste : List[_a]) -> int:
nb = 0
for y in liste:
if x == y:
nb += 1
return nb
nb_occurrences(0, [1, 2, 3, 4])
nb_occurrences(2, [1, 2, 3, 4])
nb_occurrences(2, [1, 2, 2, 3, 2, 4])
nb_occurrences(5, [1, 2, 3, 4])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 7: pairs
It's a filtering operation:
|
filter?
from typing import List
def pairs(liste : List[int]) -> List[int]:
# return list(filter(lambda x : x % 2 == 0, liste))
return [x for x in liste if x % 2 == 0]
pairs([1, 2, 3, 4, 5, 6])
pairs([1, 2, 3, 4, 5, 6, 7, 100000])
pairs([1, 2, 3, 4, 5, 6, 7, 100000000000])
pairs([1, 2, 3, 4, 5, 6, 7, 1000000000000000000])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 8: range
|
from typing import List
def myrange(n : int) -> List[int]:
liste = []
i = 1
while i <= n:
liste.append(i)
i += 1
return liste
myrange(4)
from typing import List, Optional
def intervale(a : int, b : Optional[int]=None) -> List[int]:
if b is None:
a, b = 1, a
liste = []
i = a
while i <= b:
liste.append(i)
i += 1
return liste
intervale(10)
intervale(1, 4)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 9: premiers
Several possibilities. A sieve of Eratosthenes works well, and so does plain filtering.
I won't use arrays, so we're more or less forced to use filtering.
|
def racine(n : int) -> int:
i = 1
for i in range(n + 1):
if i*i > n:
return i - 1
return i
racine(1)
racine(5)
racine(102)
racine(120031)
from typing import List
def intervale2(a : int, b : int, pas : int=1) -> List[int]:
assert pas > 0
liste = []
i = a
while i <= b:
liste.append(i)
i += pas
return liste
intervale2(2, 12, 1)
intervale2(2, 12, 3)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
A purely functional version is less easy than an imperative version with a boolean reference.
|
def estDivisible(n : int, k : int) -> bool:
return (n % k) == 0
estDivisible(10, 2)
estDivisible(10, 3)
estDivisible(10, 4)
estDivisible(10, 5)
def estPremier(n : int) -> bool:
return (n == 2) or (n == 3) or not any(map(lambda k: estDivisible(n, k), intervale2(2, racine(n), 1)))
for n in range(2, 20):
print(n, list(map(lambda k: estDivisible(n, k), intervale2(2, racine(n), 1))))
from typing import List
def premiers(n : int) -> List[int]:
return [p for p in intervale2(2, n, 1) if estPremier(p)]
premiers(10)
premiers(100)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Singly linked lists (defined by hand)
Since these exercises were a bit awkward to write with Python "lists", which are not singly linked lists, I propose another solution: we define a small class representing a singly linked list, and we write the requested functions against that class.
The ListeChainee class
We'll assume the lists we represent never contain the value None, which is used to mark the absence of a head and/or tail.
|
class ListeChainee():
def __init__(self, hd=None, tl=None):
self.hd = hd
self.tl = tl
def __repr__(self) -> str:
if self.tl is None:
if self.hd is None:
return "[]"
else:
return f"{self.hd} :: []"
else:
return f"{self.hd} :: {self.tl}"
def jolie(self) -> str:
if self.tl is None:
if self.hd is None:
return "[]"
else:
return f"[{self.hd}]"
else:
j = self.tl.jolie()
j = j.replace("[", "").replace("]", "")
if j == "":
return f"[{self.hd}]"
else:
return f"[{self.hd}, {j}]"
# equivalent to :: in OCaml
def insert(hd, tl: ListeChainee) -> ListeChainee:
""" Inserts hd at the head of the linked list tl."""
return ListeChainee(hd=hd, tl=tl)
# empty list, then some larger lists
vide = ListeChainee() # []
l_1 = insert(1, vide) # 1 :: [] ~= [1]
l_12 = insert(2, l_1) # 2 :: 1 :: [] ~= [2, 1]
l_123 = insert(3, l_12) # 3 :: 2 :: 1 :: []
print(vide) # []
print(l_1) # 1 :: []
print(l_12) # 2 :: 1 :: []
print(l_123) # 3 :: 2 :: 1 :: []
print(vide.jolie()) # []
print(l_1.jolie()) # [1]
print(l_12.jolie()) # [2, 1]
print(l_123.jolie()) # [3, 2, 1]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 1: taille
For instance, the length is indeed computed in O(n), with n = taille(liste), with this recursive approach:
|
from typing import Optional
def taille(liste: Optional[ListeChainee]) -> int:
if liste is None:
return 0
elif liste.tl is None:
return 0 if liste.hd is None else 1
return 1 + taille(liste.tl)
print(taille(vide)) # 0
print(taille(l_1)) # 1
print(taille(l_12)) # 2
print(taille(l_123)) # 3
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 2: concat
I'll start by writing a copy function that recursively copies a singly linked list, to be sure we never modify one of the argument lists in place.
|
def copy(liste: ListeChainee) -> ListeChainee:
if liste is None: # copying an empty tail gives an empty tail
return None
if liste.tl is None:
return ListeChainee(hd=liste.hd, tl=None)
else:
return ListeChainee(hd=liste.hd, tl=copy(liste.tl))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
We can check that this works by looking, for example, at the id of two objects where the second is a copy of the first: the ids are indeed different.
|
print(id(vide))
print(id(copy(vide)))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
And so concatenating two lists is easy:
|
def concat(liste1: ListeChainee, liste2: ListeChainee) -> ListeChainee:
if taille(liste1) == 0:
return liste2
elif taille(liste2) == 0:
return liste1
# new list: this way, changing queue.tl does NOT modify liste1
resultat = copy(liste1)
queue = resultat
while taille(queue.tl) > 0:
queue = queue.tl
assert taille(queue.tl) == 0
queue.tl = ListeChainee(hd=liste2.hd, tl=liste2.tl)
return resultat
print(concat(vide, l_1))
print(vide) # not modified: []
print(l_1) # not modified: 1 :: []
concat(l_1, l_12) # 1 :: 2 :: 1 :: []
concat(l_1, l_123) # 1 :: 3 :: 2 :: 1 :: []
concat(l_1, vide) # 1 :: []
concat(l_12, vide) # 2 :: 1 :: []
concat(l_12, l_1) # 2 :: 1 :: 1 :: []
concat(l_123, l_123) # 3 :: 2 :: 1 :: 3 :: 2 :: 1 :: []
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 3: appartient
Linear complexity in the worst case.
|
def appartient(x, liste: ListeChainee) -> bool:
if liste is None or liste.hd is None: # reached the end (or the list is empty)
return False
else:
if liste.hd == x:
return True
else:
return appartient(x, liste.tl)
assert appartient(0, vide) == False
assert appartient(0, l_1) == False
assert appartient(0, l_12) == False
assert appartient(0, l_123) == False
assert appartient(1, l_1) == True
assert appartient(1, l_12) == True
assert appartient(1, l_123) == True
assert appartient(2, l_1) == False
assert appartient(2, l_12) == True
assert appartient(2, l_123) == True
assert appartient(3, l_1) == False
assert appartient(3, l_12) == False
assert appartient(3, l_123) == True
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 4: miroir
This one runs in quadratic time, because of all the copies:
|
def miroir(liste: ListeChainee) -> ListeChainee:
if taille(liste) <= 1:
return copy(liste)
else:
hd, tl = liste.hd, copy(liste.tl) # O(n)
juste_hd = ListeChainee(hd=hd, tl=None) # O(1)
return concat(miroir(tl), juste_hd) # O(n^2) + O(n) because of concat
print(miroir(vide)) # [] => []
print(miroir(l_1)) # [1] => [1]
print(miroir(l_12)) # [2, 1] => [1, 2]
print(miroir(l_123)) # [3, 2, 1] => [1, 2, 3]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 5: alterne
The semantics weren't very clear, but one can imagine something like this:
if one of the two lists is empty, take the other one,
if both are non-empty, take the head of l1, then of l2, then alterne(tail of l1, tail of l2)
|
def alterne(liste1: ListeChainee, liste2: ListeChainee) -> ListeChainee:
if taille(liste1) == 0:
return copy(liste2) # copy so that nothing is modified in place
if taille(liste2) == 0:
return copy(liste1) # copy so that nothing is modified in place
h1, t1 = liste1.hd, liste1.tl
h2, t2 = liste2.hd, liste2.tl
return insert(h1, insert(h2, alterne(t1, t2)))
print(alterne(l_1, l_12)) # [1, 2, 1]
print(alterne(l_12, l_1)) # [2, 1, 1]
print(alterne(l_123, l_1)) # [3, 1, 2, 1]
print(alterne(l_123, l_12)) # [3, 2, 2, 1, 1]
print(alterne(l_123, l_123)) # [3, 3, 2, 2, 1, 1]
print(alterne(l_12, l_123)) # [2, 3, 1, 2, 1]
print(alterne(l_1, l_123)) # [1, 3, 2, 1]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
The complexity is quadratic, $\mathcal{O}(\max(|\text{liste 1}|, |\text{liste 2}|)^2)$, because of the copies.
Exercise 6: nb_occurrences
This runs in linear time, in every case.
|
def nb_occurrences(x, liste: ListeChainee) -> int:
if liste is None or liste.hd is None:
return 0
else:
count = 1 if x == liste.hd else 0
if liste.tl is None:
return count
else:
return count + nb_occurrences(x, liste.tl)
assert nb_occurrences(1, vide) == 0
assert nb_occurrences(1, l_1) == 1
assert nb_occurrences(1, l_12) == 1
assert nb_occurrences(2, l_12) == 1
assert nb_occurrences(1, l_123) == 1
assert nb_occurrences(2, l_123) == 1
assert nb_occurrences(3, l_123) == 1
assert nb_occurrences(1, concat(l_1, l_1)) == 2
assert nb_occurrences(2, concat(l_1, l_12)) == 1
assert nb_occurrences(3, concat(l_12, l_1)) == 0
assert nb_occurrences(1, concat(l_12, l_12)) == 2
assert nb_occurrences(2, concat(l_12, l_12)) == 2
assert nb_occurrences(1, concat(l_123, concat(l_1, l_1))) == 3
assert nb_occurrences(2, concat(l_123, concat(l_1, l_12))) == 2
assert nb_occurrences(3, concat(l_123, concat(l_12, l_1))) == 1
assert nb_occurrences(3, concat(l_123, concat(l_12, l_12))) == 1
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
A tail-recursive variant is easy to write:
|
def nb_occurrences(x, liste: ListeChainee, count=0) -> int:
if liste is None or liste.hd is None:
return count
else:
count += 1 if x == liste.hd else 0
if liste.tl is None:
return count
else:
return nb_occurrences(x, liste.tl, count=count)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 7: pairs
It's a filter by the predicate x % 2 == 0.
We might as well write the generic filtering function:
|
def filtrer(liste: ListeChainee, predicate) -> ListeChainee:
if liste is None or liste.hd is None: # list of size 0
return ListeChainee(hd=None, tl=None)
elif liste.tl is None: # list of size 1
if predicate(liste.hd): # return [hd]
return ListeChainee(hd=liste.hd, tl=None)
else: # return []
return ListeChainee(hd=None, tl=None)
else: # list of size >= 2
if predicate(liste.hd):
return insert(liste.hd, filtrer(liste.tl, predicate))
else:
return filtrer(liste.tl, predicate)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
And with it, pairs is quick to write:
|
def pairs(liste: ListeChainee) -> ListeChainee:
def predicate(x):
return (x % 2) == 0
# also: predicate = lambda x: (x % 2) == 0
return filtrer(liste, predicate)
def impairs(liste: ListeChainee) -> ListeChainee:
def predicate(x):
return (x % 2) == 1
return filtrer(liste, predicate)
print(pairs(vide)) # []
print(pairs(l_1)) # []
print(pairs(l_12)) # [2]
print(pairs(l_123)) # [2]
print(pairs(insert(4, insert(6, insert(8, l_123))))) # [4, 6, 8, 2]
print(pairs(insert(5, insert(6, insert(8, l_123))))) # [6, 8, 2]
print(impairs(vide)) # []
print(impairs(l_1)) # [1]
print(impairs(l_12)) # [1]
print(impairs(l_123)) # [3, 1]
print(impairs(insert(4, insert(6, insert(8, l_123))))) # [3, 1]
print(impairs(insert(5, insert(6, insert(8, l_123))))) # [5, 3, 1]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 8: range
This has linear time complexity:
|
def myrange(n: int) -> ListeChainee:
if n <= 0:
return ListeChainee(hd=None, tl=None)
elif n == 1:
return ListeChainee(hd=1, tl=None)
# return insert(1, vide)
else:
return ListeChainee(hd=n, tl=myrange(n-1))
print(myrange(1)) # [1]
print(myrange(2)) # [1, 2]
print(myrange(3)) # [1, 2, 3]
print(myrange(4)) # [1, 2, 3, 4]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
If we want the values in increasing order, we would need miroir, which is quadratic.
We might as well directly write a function intervale(a, b) that returns the singly linked list a :: (a+1) :: ... :: b:
|
def intervale(a: int, b: Optional[int]=None) -> ListeChainee:
if b is None:
a, b = 1, a
n = b - a
if n < 0: # [a..b] = []
return ListeChainee(hd=None, tl=None)
elif n == 0: # [a..b] = [a]
return ListeChainee(hd=a, tl=None)
else: # [a..b] = a :: [a+1..b]
return ListeChainee(hd=a, tl=intervale(a+1, b))
print(intervale(10)) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(intervale(1, 4)) # [1, 2, 3, 4]
print(intervale(13, 13)) # [13]
print(intervale(13, 10)) # []
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Another approach is to write the function mymap and observe that
`intervale_bis(a, b) = miroir(mymap(lambda x: x + (a - 1), myrange(b - a + 1)))`:
|
from typing import Callable
def mymap(fonction: Callable, liste: ListeChainee) -> ListeChainee:
if liste is None or liste.hd is None: # list of size 0
return ListeChainee(hd=None, tl=None)
elif liste.tl is None: # list of size 1
return ListeChainee(hd=fonction(liste.hd), tl=None)
else: # list of size >= 2
return ListeChainee(hd=fonction(liste.hd), tl=mymap(fonction, liste.tl))
print(myrange(10))
print(mymap(lambda x: x, myrange(10)))
def intervale_bis(a: int, b: int) -> ListeChainee:
return miroir(mymap(lambda x: x + (a - 1), myrange(b - a + 1)))
print(intervale_bis(1, 10)) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(intervale_bis(1, 4)) # [1, 2, 3, 4]
print(intervale_bis(13, 13)) # [13]
print(intervale_bis(13, 10)) # []
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 9: premiers
Several possibilities. A sieve of Eratosthenes works well, and so does plain filtering.
I won't use arrays, so we're more or less forced to use filtering.
We need the following functions:
compute the integer square root of $n$, very easy with a loop,
compute the candidate divisors between 2 and $\lfloor \sqrt{n} \rfloor$,
filter this list to keep only those that divide $n$,
and declare that $n$ is prime if it has no nontrivial divisor.
|
def racine(n: int) -> int:
i = 1
for i in range(n + 1):
if i*i > n:
return i - 1
return i
print(racine(1)) # 1
print(racine(5)) # 2
print(racine(102)) # 10
print(racine(120031)) # 346
def intervale2(a: int, b: Optional[int]=None, pas: int=1) -> ListeChainee:
if b is None:
a, b = 1, a
n = b - a
if n < 0: # [a..b::p] = []
return ListeChainee(hd=None, tl=None)
elif n == 0: # [a..b::p] = [a]
return ListeChainee(hd=a, tl=None)
else: # [a..b::p] = a :: [a+p..b::p]
return ListeChainee(hd=a, tl=intervale2(a + pas, b=b, pas=pas))
print(intervale2(1, 10, 2)) # [1, 3, 5, 7, 9]
print(intervale2(1, 4, 2)) # [1, 3]
print(intervale2(13, 13, 2)) # [13]
print(intervale2(13, 10, 2)) # []
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
A purely functional version is less easy than an imperative version with a boolean reference.
|
def estDivisible(n: int, k: int) -> bool:
return (n % k) == 0
estDivisible(10, 2)
estDivisible(10, 3)
estDivisible(10, 4)
estDivisible(10, 5)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
We're ready to write estPremier:
|
def estPremier(n : int) -> bool:
return taille(filtrer(intervale2(2, racine(n), 1), lambda k: estDivisible(n, k))) == 0
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Indeed, it suffices to first build the list of integers from 2 to $\lfloor \sqrt{n} \rfloor$, filter it down to those that divide $n$, and check whether there is no divisor (taille(..) == 0), in which case $n$ is prime, or at least one divisor, in which case $n$ is not prime.
|
for n in range(2, 20):
print("Small divisors of", n, " -> ", filtrer(intervale2(2, racine(n), 1), lambda k: estDivisible(n, k)))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
In the example above, the prime numbers are those with no small divisors, and the non-primes those with at least one divisor.
|
def premiers(n : int) -> ListeChainee:
return filtrer(intervale2(2, n, 1), estPremier)
premiers(10) # [2, 3, 5, 7]
premiers(100) # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Some comparison-based sorts
We'll sort in increasing order.
|
test = [3, 1, 8, 4, 5, 6, 1, 2]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 10: insertion sort
|
from typing import TypeVar, List
_a = TypeVar('alpha')
def insere(x : _a, liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return [x]
else:
t, q = liste[0], liste[1:]
if x <= t:
return [x] + liste
else:
return [t] + insere(x, q)
def tri_insertion(liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return []
else:
t, q = liste[0], liste[1:]
return insere(t, tri_insertion(q))
tri_insertion(test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Time complexity: $\mathcal{O}(n^2)$.
Exercise 11: generic insertion sort
|
from typing import TypeVar, List, Callable
_a = TypeVar('alpha')
def insere2(ordre : Callable[[_a, _a], bool], x : _a, liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return [x]
else:
t, q = liste[0], liste[1:]
if ordre(x, t):
return [x] + liste
else:
return [t] + insere2(ordre, x, q)
def tri_insertion2(ordre : Callable[[_a, _a], bool], liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return []
else:
t, q = liste[0], liste[1:]
return insere2(ordre, t, tri_insertion2(ordre, q))
ordre_croissant = lambda x, y: x <= y
tri_insertion2(ordre_croissant, test)
ordre_decroissant = lambda x, y: x >= y
tri_insertion2(ordre_decroissant, test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 12: selection sort
|
from typing import TypeVar, List, Tuple
_a = TypeVar('alpha')
def selectionne_min(liste : List[_a]) -> Tuple[_a, List[_a]]:
if len(liste) == 0:
raise ValueError("selectionne_min on an empty list")
else:
def cherche_min(mini : _a, autres : List[_a], reste : List[_a]) -> Tuple[_a, List[_a]]:
if len(reste) == 0:
return (mini, autres)
else:
t, q = reste[0], reste[1:]
if t < mini:
return cherche_min(t, [mini] + autres, q)
else:
return cherche_min(mini, [t] + autres, q)
t, q = liste[0], liste[1:]
return cherche_min(t, [], q)
test
selectionne_min(test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
(Note that the autres list comes back reversed.)
|
def tri_selection(liste : List[_a]) -> List[_a]:
if len(liste) == 0:
return []
else:
mini, autres = selectionne_min(liste)
return [mini] + tri_selection(autres)
tri_selection(test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Time complexity: $\mathcal{O}(n^2)$.
Exercises 13, 14, 15: merge sort
|
from typing import TypeVar, List, Tuple
_a = TypeVar('alpha')
def separe(liste : List[_a]) -> Tuple[List[_a], List[_a]]:
if len(liste) == 0:
return ([], [])
elif len(liste) == 1:
return ([liste[0]], [])
else:
x, y, q = liste[0], liste[1], liste[2:]
a, b = separe(q)
return ([x] + a, [y] + b)
test
separe(test)
def fusion(liste1 : List[_a], liste2 : List[_a]) -> List[_a]:
if (len(liste1), len(liste2)) == (0, 0):
return []
elif len(liste1) == 0:
return liste2
elif len(liste2) == 0:
return liste1
else: # both lists are non-empty
x, a = liste1[0], liste1[1:]
y, b = liste2[0], liste2[1:]
if x <= y:
return [x] + fusion(a, [y] + b)
else:
return [y] + fusion([x] + a, b)
fusion([1, 3, 7], [2, 3, 8])
def tri_fusion(liste : List[_a]) -> List[_a]:
if len(liste) <= 1:
return liste
else:
a, b = separe(liste)
return fusion(tri_fusion(a), tri_fusion(b))
tri_fusion(test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Time complexity: $\mathcal{O}(n \log n)$.
Comparisons
|
%timeit tri_insertion(test)
%timeit tri_selection(test)
%timeit tri_fusion(test)
from sys import setrecursionlimit
setrecursionlimit(100000)
# needed to test the various recursive functions on large lists
import random
def test_random(n : int) -> List[int]:
return [random.randint(-1000, 1000) for _ in range(n)]
for n in [10, 100, 1000]:
print("\nFor n =", n)
for tri in [tri_insertion, tri_selection, tri_fusion]:
print(" and tri = {}".format(tri.__name__))
%timeit tri(test_random(n))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
This is enough to check that merge sort is far more efficient than the others.
We also see that insertion sort and selection sort are worse than linear,
but that merge sort is almost linear (for small $n$, $n \log n$ is almost linear).
Lists: higher-order functions
I won't correct the questions that were already covered in TP1.
Exercise 16: applique
|
from typing import TypeVar, List, Callable
_a, _b = TypeVar('_a'), TypeVar('_b')
def applique(f : Callable[[_a], _b], liste : List[_a]) -> List[_b]:
# Cheating (everything after this first return is unreachable, kept to show alternatives):
return list(map(f, liste))
# 1st approach:
return [f(x) for x in liste]
# 2nd approach:
fliste = []
for x in liste:
fliste.append(f(x))
return fliste
# 3rd approach
n = len(liste)
if n == 0: return []
fliste = [liste[0] for _ in range(n)]
for i in range(n):
fliste[i] = f(liste[i])
return fliste
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 17
|
def premiers_carres_parfaits(n : int) -> List[int]:
return applique(lambda x : x * x, list(range(1, n + 1)))
premiers_carres_parfaits(12)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 18: itere
|
from typing import TypeVar, List, Callable
_a = TypeVar('_a')
def itere(f : Callable[[_a], None], liste : List[_a]) -> None:
for x in liste:
f(x)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 19
|
print_int = lambda i: print("{}".format(i))
def affiche_liste_entiers(liste : List[int]) -> None:
print("Debut")
itere(print_int, liste)
print("Fin")
affiche_liste_entiers([1, 2, 4, 5, 12011993])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 20: qqsoit and ilexiste
|
from typing import TypeVar, List, Callable
_a = TypeVar('_a')
# Like all(map(f, liste))
def qqsoit(f : Callable[[_a], bool], liste : List[_a]) -> bool:
for x in liste:
if not f(x): return False # early exit
return True
# Like any(map(f, liste))
def ilexiste(f : Callable[[_a], bool], liste : List[_a]) -> bool:
for x in liste:
if f(x): return True # early exit
return False
qqsoit(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
ilexiste(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
%timeit qqsoit(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
%timeit all(map(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5]))
%timeit ilexiste(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
%timeit any(map(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5]))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 21: appartient, version 2
|
def appartient_curry(x : _a) -> Callable[[List[_a]], bool]:
return lambda liste: ilexiste(lambda y: x == y, liste)
def appartient(x : _a, liste : List[_a]) -> bool:
return ilexiste(lambda y: x == y, liste)
def toutes_egales(x : _a, liste : List[_a]) -> bool:
return qqsoit(lambda y: x == y, liste)
appartient_curry(1)([1, 2, 3])
appartient(1, [1, 2, 3])
appartient(5, [1, 2, 3])
toutes_egales(1, [1, 2, 3])
toutes_egales(5, [1, 2, 3])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Can our implementation be faster than the x in liste test?
No, but it's just as fast. That's already not bad!
|
%timeit appartient(random.randint(-10, 10), [random.randint(-1000, 1000) for _ in range(1000)])
%timeit random.randint(-10, 10) in [random.randint(-1000, 1000) for _ in range(1000)]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 22: filtre
|
from typing import TypeVar, List, Callable
_a = TypeVar('_a')
# Like list(filter(f, liste))
def filtre(f : Callable[[_a], bool], liste : List[_a]) -> List[_a]:
# return [x for x in liste if f(x)]
liste2 = []
for x in liste:
if f(x):
liste2.append(x)
return liste2
filtre(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
filtre(lambda x: (x % 2) != 0, [1, 2, 3, 4, 5])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 23
I'll let you work out premiers yourself.
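One possible sketch for premiers, combining the filtre function from Exercise 22 with a trial-division predicate (the est_premier helper below is my own addition, not from the original TP):

```python
from typing import List


def filtre(f, liste: List[int]) -> List[int]:
    """Keep the elements of liste satisfying the predicate f (as in Exercise 22)."""
    return [x for x in liste if f(x)]


def est_premier(n: int) -> bool:
    # trial division up to the integer square root of n
    if n < 2:
        return False
    return all(n % k != 0 for k in range(2, int(n ** 0.5) + 1))


premiers = lambda n: filtre(est_premier, list(range(n + 1)))
print(premiers(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```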
|
pairs = lambda liste: filtre(lambda x: (x % 2) == 0, liste)
impairs = lambda liste: filtre(lambda x: (x % 2) != 0, liste)
pairs(list(range(10)))
impairs(list(range(10)))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 24: reduit
|
from typing import TypeVar, List, Callable
_a, _b = TypeVar('_a'), TypeVar('_b')
# Like functools.reduce(f, liste, acc): a left fold
def reduit_rec(f : Callable[[_a, _b], _a], acc : _a, liste : List[_b]) -> _a:
if len(liste) == 0:
return acc
else:
h, q = liste[0], liste[1:]
return reduit_rec(f, f(acc, h), q)
# Non-recursive version, much more efficient
def reduit(f : Callable[[_a, _b], _a], acc : _a, liste : List[_b]) -> _a:
acc_value = acc
for x in liste:
acc_value = f(acc_value, x)
return acc_value
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Very handy for computing sums, in particular.
Exercise 25: somme, produit
|
from operator import add
somme_rec = lambda liste: reduit_rec(add, 0, liste)
somme = lambda liste: reduit(add, 0, liste)
somme_rec(list(range(10)))
somme(list(range(10)))
sum(list(range(10)))
%timeit somme_rec(list(range(10)))
%timeit somme(list(range(10)))
%timeit sum(list(range(10)))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
For small lists, the recursive version is as efficient as the imperative one. Nice!
|
%timeit somme_rec(list(range(1000)))
%timeit somme(list(range(1000)))
%timeit sum(list(range(1000)))
from operator import mul
produit = lambda liste: reduit(mul, 1, liste)
produit(list(range(1, 6))) # 5! = 120
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Bonus:
|
def factorielle(n : int) -> int:
return produit(range(1, n + 1))
for n in range(1, 15):
print("{:>7}! = {:>13}".format(n, factorielle(n)))
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 26: miroir, version 2
|
miroir = lambda liste: reduit(lambda l, x : [x] + l, [], liste)
miroir([2, 3, 5, 7, 11])
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Careful: in Python, lists are NOT singly linked, so lambda l, x : [x] + l runs in time linear in $|l| = n$, not in $\mathcal{O}(1)$ as fun l x -> x :: l does in Caml/OCaml.
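A minimal sketch of that difference, using the standard library's collections.deque, whose appendleft is O(1) (this comparison is my own aside, not part of the original TP):

```python
from collections import deque
from functools import reduce

# list version: each [x] + l copies all of l, so the whole fold is O(n^2)
miroir_liste = lambda liste: reduce(lambda l, x: [x] + l, liste, [])


def miroir_deque(liste):
    # deque version: appendleft is O(1), so the whole mirror is O(n)
    d = deque()
    for x in liste:
        d.appendleft(x)
    return list(d)


print(miroir_liste([2, 3, 5, 7, 11]))  # [11, 7, 5, 3, 2]
print(miroir_deque([2, 3, 5, 7, 11]))  # [11, 7, 5, 3, 2]
```

Both give the same result; only the asymptotic cost of building it differs.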
Trees
/!\ The last two parts are much harder in Python than in Caml.
Exercise 27
|
from typing import Dict, Optional, Tuple
# Impossible to define a type recursively, unlike in Caml
arbre_bin = Dict[str, Optional[Tuple[Dict, Dict]]]
from pprint import pprint
arbre_test = {'Noeud': (
{'Noeud': (
{'Noeud': (
{'Feuille': None},
{'Feuille': None}
)},
{'Feuille': None}
)},
{'Feuille': None}
)}
pprint(arbre_test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
With a nicer syntax, we get very close to Caml/OCaml's syntax:
|
Feuille = {'Feuille': None}
Noeud = lambda x, y : {'Noeud': (x, y)}
arbre_test = Noeud(Noeud(Noeud(Feuille, Feuille), Feuille), Feuille)
pprint(arbre_test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 28
Count the number of leaves and nodes.
|
def taille(a : arbre_bin) -> int:
    # Pattern matching ~= if/elif... on the depth-1 keys
    # of the dictionary (a single key)
if 'Feuille' in a:
return 1
elif 'Noeud' in a:
x, y = a['Noeud']
return 1 + taille(x) + taille(y)
taille(arbre_test) # 7
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 29
|
def hauteur(a : arbre_bin) -> int:
if 'Feuille' in a:
return 0
elif 'Noeud' in a:
x, y = a['Noeud']
return 1 + max(hauteur(x), hauteur(y))
hauteur(arbre_test) # 3
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 30
Bonus. (Write a function testing whether a tree labeled with integers is a tournament tree.)
Binary tree traversals
After a few exercises manipulating this dictionary structure, writing what follows is not too hard.
Exercise 31
|
from typing import TypeVar, Union, List
F = TypeVar('F')
N = TypeVar('N')
element_parcours = Union[F, N]
parcours = List[element_parcours]
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 32: naive traversals (quadratic complexity)
|
def parcours_prefixe(a : arbre_bin) -> parcours:
if 'Feuille' in a:
return [F]
elif 'Noeud' in a:
g, d = a['Noeud']
return [N] + parcours_prefixe(g) + parcours_prefixe(d)
parcours_prefixe(arbre_test)
def parcours_postfixe(a : arbre_bin) -> parcours:
if 'Feuille' in a:
return [F]
elif 'Noeud' in a:
g, d = a['Noeud']
return parcours_postfixe(g) + parcours_postfixe(d) + [N]
parcours_postfixe(arbre_test)
def parcours_infixe(a : arbre_bin) -> parcours:
if 'Feuille' in a:
return [F]
elif 'Noeud' in a:
g, d = a['Noeud']
return parcours_infixe(g) + [N] + parcours_infixe(d)
parcours_infixe(arbre_test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Why are they quadratic? Concatenation (@ in OCaml, + in Python) does not run in constant time but in time linear in the length of the lists being joined.
Exercise 33: linear-time traversals
We add an auxiliary function with an extra argument vus, a list that stores the elements seen, in traversal order.
|
def parcours_prefixe2(a : arbre_bin) -> parcours:
def parcours(vus, b):
if 'Feuille' in b:
vus.insert(0, F)
return vus
elif 'Noeud' in b:
vus.insert(0, N)
g, d = b['Noeud']
return parcours(parcours(vus, g), d)
p = parcours([], a)
return p[::-1]
parcours_prefixe2(arbre_test)
def parcours_postfixe2(a : arbre_bin) -> parcours:
def parcours(vus, b):
if 'Feuille' in b:
vus.insert(0, F)
return vus
elif 'Noeud' in b:
g, d = b['Noeud']
p = parcours(parcours(vus, g), d)
p.insert(0, N)
return p
p = parcours([], a)
return p[::-1]
parcours_postfixe2(arbre_test)
def parcours_infixe2(a : arbre_bin) -> parcours:
def parcours(vus, b):
if 'Feuille' in b:
vus.insert(0, F)
return vus
elif 'Noeud' in b:
g, d = b['Noeud']
p = parcours(vus, g)
p.insert(0, N)
return parcours(p, d)
p = parcours([], a)
return p[::-1]
parcours_infixe2(arbre_test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 34: breadth-first and depth-first traversals
For the queue (FIFO), we use the module collections.deque.
|
from collections import deque
def parcours_largeur(a : arbre_bin) -> parcours:
file = deque()
    # helper with a side effect on the queue
def vasy() -> parcours:
if len(file) == 0:
return []
else:
b = file.pop()
if 'Feuille' in b:
# return [F] + vasy()
v = vasy()
v.insert(0, F)
return v
elif 'Noeud' in b:
g, d = b['Noeud']
file.insert(0, g)
file.insert(0, d)
# return [N] + vasy()
v = vasy()
v.insert(0, N)
return v
file.insert(0, a)
return vasy()
parcours_largeur(arbre_test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Replacing the queue with a stack (a plain list), we obtain the depth-first traversal, with the same complexity.
|
def parcours_profondeur(a : arbre_bin) -> parcours:
pile = []
    # helper with a side effect on the stack
def vasy() -> parcours:
if len(pile) == 0:
return []
else:
b = pile.pop()
if 'Feuille' in b:
# return [F] + vasy()
v = vasy()
v.append(F)
return v
elif 'Noeud' in b:
g, d = b['Noeud']
pile.append(g)
pile.append(d)
# return [N] + vasy()
v = vasy()
v.insert(0, N)
return v
pile.append(a)
return vasy()
parcours_profondeur(arbre_test)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Exercise 35, the last one
Rebuilding the tree from its prefix traversal
|
test_prefixe = parcours_prefixe2(arbre_test)
test_prefixe
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
The idea of this solution is the following:
we would like a recursive function that does the job;
the problem is that a prefix traversal either starts
with F, in which case the tree must be a leaf, or has the form N::q
where q is no longer a prefix traversal but the concatenation of TWO prefix
traversals, so we can no longer call the function on q.
We therefore write a function that takes a list containing several
concatenated traversals and returns the tree corresponding to the first one
together with whatever has not been consumed:
|
from typing import Tuple
def reconstruit_prefixe(par : parcours) -> arbre_bin:
def reconstruit(p : parcours) -> Tuple[arbre_bin, parcours]:
if len(p) == 0:
raise ValueError("parcours invalide pour reconstruit_prefixe")
elif p[0] == F:
return (Feuille, p[1:])
elif p[0] == N:
g, q = reconstruit(p[1:])
d, r = reconstruit(q)
return (Noeud(g, d), r)
# call it
a, p = reconstruit(par)
if len(p) == 0:
return a
else:
raise ValueError("parcours invalide pour reconstruit_prefixe")
reconstruit_prefixe([F])
reconstruit_prefixe(test_prefixe)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
And this example will fail:
|
reconstruit_prefixe([N, F, F] + test_prefixe)  # fails (raises ValueError)
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Rebuilding the tree from its breadth-first traversal
This one is not obvious if you have never seen it. The idea is to use a queue
to store the subtrees rebuilt little by little from the leaves. The queue
lets us retrieve the right subtrees when we meet a node.
|
largeur_test = parcours_largeur(arbre_test)
largeur_test
from collections import deque
def reconstruit_largeur(par : parcours) -> arbre_bin:
file = deque()
    # helper with side effects on the queue
    def lire_element(e : element_parcours) -> None:
        if e == F:
            file.append(Feuille)
        elif e == N:
            d = file.popleft()
            g = file.popleft()  # mind the order!
            file.append(Noeud(g, d))
    # apply this function to each element of the traversal
for e in reversed(par):
lire_element(e)
if len(file) == 1:
return file.popleft()
else:
raise ValueError("parcours invalide pour reconstruit_largeur")
largeur_test
reconstruit_largeur(largeur_test)
arbre_test
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
The same algorithm (well, almost: g and d are swapped)
with a stack gives another version of the prefix-traversal reconstruction.
|
from collections import deque
def reconstruit_prefixe2(par : parcours) -> arbre_bin:
pile = deque()
    # helper with side effects on the stack
    def lire_element(e : element_parcours) -> None:
        if e == F:
            pile.append(Feuille)
        elif e == N:
            g = pile.pop()
            d = pile.pop()  # mind the order!
            pile.append(Noeud(g, d))
    # apply this function to each element of the traversal
for e in reversed(par):
lire_element(e)
if len(pile) == 1:
return pile.pop()
else:
raise ValueError("parcours invalide pour reconstruit_prefixe2")
prefixe_test = parcours_prefixe2(arbre_test)
prefixe_test
reconstruit_prefixe2(prefixe_test)
arbre_test
|
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
|
Naereen/notebooks
|
mit
|
Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews.
Prerequisites
Install the required packages
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
|
!pip install -q tflite-model-maker
|
tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
|
Intel-Corporation/tensorflow
|
apache-2.0
|
Get the MNIST Dataset
|
def get_mnist(sc, mnist_path):
    (train_images, train_labels) = mnist.read_data_sets(mnist_path, "train")
train_images = np.reshape(train_images, (60000, 1, 28, 28))
rdd_train_images = sc.parallelize(train_images)
rdd_train_sample = rdd_train_images.map(lambda img:
Sample.from_ndarray(
(img > 128) * 1.0,
[(img > 128) * 1.0, (img > 128) * 1.0]))
return rdd_train_sample
mnist_path = "/tmp/mnist" # please replace this
train_data = get_mnist(sc, mnist_path)
# (train_images, train_labels) = mnist.read_data_sets(mnist_path, "train")
|
apps/variational-autoencoder/using_variational_autoencoder_to_generate_digital_numbers.ipynb
|
intel-analytics/BigDL
|
apache-2.0
|
<br>
<br>
Let's do it with Gradient Descent now
|
theta0 = 0
theta1 = 2
alpha = 0.1
iterations = 100
cost_log = []
theta_log = []
inc = 0.1
for j in range(iterations):
cost = 0
grad = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
grad += ((hx - y[i,0]))*X[i,0]
cost = cost/(2*m)
grad = grad/(2*m)
theta1 = theta1 - alpha*grad
cost_log.append(cost)
theta_log.append(theta1)
theta_log
|
lectures/lec04-linear-regression-example.ipynb
|
w4zir/ml17s
|
mit
|
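The same per-sample update can be written without the inner Python loop using NumPy. A sketch on synthetic data (the names `X`, `y`, `theta1`, `alpha` mirror the cell above but the data is generated here, and the standard 1/m gradient scaling of the mean-squared-error cost is used):

```python
import numpy as np

# synthetic data: y = 2.5 * x exactly (no intercept, matching the cell above)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 1))
y = 2.5 * X

theta1 = 0.0
alpha = 0.5
for _ in range(2000):
    hx = theta1 * X[:, 0]                      # all hypotheses at once
    grad = np.mean((hx - y[:, 0]) * X[:, 0])   # (1/m) * sum((h - y) * x)
    theta1 -= alpha * grad

print(theta1)  # converges toward the true slope 2.5
```

Since the data is noiseless, gradient descent recovers the slope to machine precision.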
We will produce some plots based on a frequency range to illustrate the concepts:
|
import matplotlib.pyplot as plt
%matplotlib inline
ff = np.linspace(0.01, 6., num=600)
wn = 2.*np.pi*ff
|
Rayleigh_damping.ipynb
|
pxcandeias/py-notebooks
|
mit
|
Back to top
Mass proportional damping
Mass proportional damping means that the damping matrix is somehow a multiple of the mass matrix:
\begin{equation}
\mathbf{C} = \alpha \cdot \mathbf{M}
\end{equation}
where $\alpha$ is the constant of mass proportionality. In these circunstances, the dynamic equilibrium equation can be written as:
\begin{equation}
\mathbf{M} \times \mathbf{a(t)} + \alpha \cdot \mathbf{M} \times \mathbf{v(t)} + \mathbf{K} \times \mathbf{d(t)} = \mathbf{F(t)}
\end{equation}
Proceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations:
\begin{equation}
\mathbf{M_n} \times \mathbf{\ddot q_n(t)} + \alpha \cdot \mathbf{M_n} \times \mathbf{\dot q_n(t)} + \mathbf{K_n} \times \mathbf{q_n(t)} = \mathbf{F_n(t)}
\end{equation}
or, equivalently:
\begin{equation}
\mathbf{\ddot q_n(t)} + \alpha \cdot \mathbf{\dot q_n(t)} + \mathbf{\omega_n^2} \cdot \mathbf{q_n(t)} = \mathbf{a_n(t)}
\end{equation}
Comparing expressions, one obtains
\begin{equation}
\alpha = 2 \cdot \zeta_n \cdot \omega_n \Leftrightarrow \zeta_n = \frac{\alpha}{2 \cdot \omega_n}
\end{equation}
from where it can be seen that the mass proportional damping is a hyperbolic function of the vibration frequency $\omega_n$.
|
alpha = 0.1
zn_a = alpha/(2.*wn)
plt.plot(wn, zn_a, label='mass proportional')
plt.xlabel('Vibration frequency $\omega_n$ [rad/s]')
plt.ylabel('Damping coefficient $\zeta_n$ [-]')
plt.legend(loc='best')
plt.grid(True)
plt.xlim([0, 35.])
plt.ylim([0, 0.2])
plt.show()
|
Rayleigh_damping.ipynb
|
pxcandeias/py-notebooks
|
mit
|
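A quick numerical check of the relation $\zeta_n = \alpha / (2 \cdot \omega_n)$ plotted above (the frequency values are chosen arbitrarily for illustration):

```python
import numpy as np

alpha = 0.1
wn = np.array([2.5, 5.0, 10.0, 25.0])   # vibration frequencies [rad/s]
zn = alpha / (2.0 * wn)                  # hyperbolic in the frequency
print(zn)  # e.g. at wn = 5 rad/s, zn = 0.1 / 10 = 0.01 (1% damping)
```

The damping coefficient decreases as the frequency grows, which is why mass proportional damping penalizes low-frequency modes the most.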
Back to top
Stiffness proportional damping
Stiffness proportional damping means that damping matrix is somehow a multiple of the stiffness matrix:
\begin{equation}
\mathbf{C} = \beta \cdot \mathbf{K}
\end{equation}
where $\beta$ is the constant of stiffness proportionality. In these circunstances, the dynamic equilibrium equation can be written as:
\begin{equation}
\mathbf{M} \times \mathbf{a(t)} + \beta \cdot \mathbf{K} \times \mathbf{v(t)} + \mathbf{K} \times \mathbf{d(t)} = \mathbf{F(t)}
\end{equation}
Proceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations:
\begin{equation}
\mathbf{M_n} \times \mathbf{\ddot q_n(t)} + \beta \cdot \mathbf{K_n} \times \mathbf{\dot q_n(t)} + \mathbf{K_n} \times \mathbf{q_n(t)} = \mathbf{F_n(t)}
\end{equation}
or, equivalently:
\begin{equation}
\mathbf{\ddot q_n(t)} + \beta \cdot \mathbf{\omega_n^2} \cdot \mathbf{\dot q_n(t)} + \mathbf{\omega_n^2} \cdot \mathbf{q_n(t)} = \mathbf{a_n(t)}
\end{equation}
Comparing expressions, one obtains
\begin{equation}
\beta \cdot \omega_n^2 = 2 \cdot \zeta \cdot \omega_n \Leftrightarrow \zeta_n = \frac{\omega_n \cdot \beta}{2}
\end{equation}
from where it can be seen that the stiffness proportional damping is a linear function of the vibration frequency $\omega_n$.
|
beta = 0.005
zn_b = (beta*wn)/2.
plt.plot(wn, zn_b, label='stiffness proportional')
plt.xlabel('Vibration frequency $\omega_n$ [rad/s]')
plt.ylabel('Damping coefficient $\zeta_n$ [-]')
plt.legend(loc='best')
plt.grid(True)
plt.xlim([0, 35.])
plt.ylim([0, 0.2])
plt.show()
|
Rayleigh_damping.ipynb
|
pxcandeias/py-notebooks
|
mit
|
Back to top
Rayleigh damping
When Rayleigh damping is considered it means that the damping coefficient is a combination of the two previous ones, that is, it is a multiple of mass and stifnness:
\begin{equation}
\mathbf{C} = \alpha \cdot \mathbf{M} + \beta \cdot \mathbf{K}
\end{equation}
where $\alpha$ and $\beta$ have the previous meanings. In these circunstances, the dynamic equilibrium equation can be written as:
\begin{equation}
\mathbf{M} \times \mathbf{a(t)} + \left[ \alpha \cdot \mathbf{M} + \beta \cdot \mathbf{K} \right] \times \mathbf{v(t)} + \mathbf{K} \times \mathbf{d(t)} = \mathbf{F(t)}
\end{equation}
Proceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations:
\begin{equation}
\mathbf{M_n} \times \mathbf{\ddot q_n(t)} + \left[ \alpha \cdot \mathbf{M_n} + \beta \cdot \mathbf{K_n} \right] \times \mathbf{\dot q_n(t)} + \mathbf{K_n} \times \mathbf{q_n(t)} = \mathbf{F_n(t)}
\end{equation}
or, equivalently:
\begin{equation}
\mathbf{\ddot q_n(t)} + \left[ \alpha + \beta \cdot \mathbf{\omega_n^2} \right] \cdot \mathbf{\dot q_n(t)} + \mathbf{\omega_n^2} \cdot \mathbf{q_n(t)} = \mathbf{a_n(t)}
\end{equation}
Comparing expressions, one obtains
\begin{equation}
\alpha + \beta \cdot \omega_n^2 = 2 \cdot \zeta \cdot \omega_n \Leftrightarrow \zeta_n = \frac{\alpha}{2 \cdot \omega_n} + \frac{\omega_n \cdot \beta}{2}
\end{equation}
from where it can be seen that the Rayleigh damping is the sum of the previous linear and hyperbolic functions of the vibration frequency $\omega_n$.
|
plt.plot(wn, zn_a+zn_b, label='Rayleigh damping')
plt.plot(wn, zn_a, label='mass proportional')
plt.plot(wn, zn_b, label='stiffness proportional')
plt.xlabel('Vibration frequency $\omega_n$ [rad/s]')
plt.ylabel('Damping coefficient $\zeta_n$ [-]')
plt.legend(loc='best')
plt.grid(True)
plt.xlim([0, 35.])
plt.ylim([0, 0.2])
plt.show()
|
Rayleigh_damping.ipynb
|
pxcandeias/py-notebooks
|
mit
|
When the Rayleigh damping is used in MDOF systems, the coefficients $\alpha$ and $\beta$ can be computed in order to give an appropriate damping coefficient value for a given frequency range, related to the vibration modes of interest for the dynamic analysis. This is achieved by setting a simple two equation system whose solution yields the values of $\alpha$ and $\beta$:
$$
\left[\begin{array}{cc}
\zeta_1 \ \zeta_2
\end{array}\right]
=
\left[\begin{array}{cc}
\frac{1}{2 \cdot \omega_1} && \frac{\omega_1}{2} \ \frac{1}{2 \cdot \omega_2} && \frac{\omega_2}{2}
\end{array}\right]
\times
\left[\begin{array}{cc}
\alpha \ \beta
\end{array}\right]
$$
Back to top
Example
Let us consider an MDOF system with several vibration modes of interest, ranging from 1 to 4 Hz, and suppose we want to compute the dynamic response considering a damping coefficient of 2% for the first mode and 5% for the last mode.
|
f1, f2 = 1., 4.
z1, z2 = 0.02, 0.05
w1 = 2.*np.pi*f1
w2 = 2.*np.pi*f2
alpha, beta = np.linalg.solve([[1./(2.*w1), w1/2.], [1./(2.*w2), w2/2.]], [z1, z2])
print('Alpha={:.6f}\nBeta={:.6f}'.format(alpha, beta))
|
Rayleigh_damping.ipynb
|
pxcandeias/py-notebooks
|
mit
|
We can check that the Rayleigh damping assumes the required values at the desired frequencies, although may vary considerably for other frequencies:
|
zn_a = alpha/(2.*wn)
zn_b = (beta*wn)/2.
plt.plot(wn, zn_a+zn_b, label='Rayleigh damping')
plt.plot(wn, zn_a, label='mass proportional')
plt.plot(wn, zn_b, label='stiffness proportional')
plt.plot(w1, z1, 'o')
plt.plot(w2, z2, 'o')
plt.axvline(w1, ls=':')
plt.axhline(z1, ls=':')
plt.axvline(w2, ls=':')
plt.axhline(z2, ls=':')
plt.xlabel('Vibration frequency $\omega_n$ [rad/s]')
plt.ylabel('Damping coefficient $\zeta_n$ [-]')
plt.legend(loc='best')
plt.xlim([0, 35.])
plt.ylim([0, 0.2])
plt.show()
|
Rayleigh_damping.ipynb
|
pxcandeias/py-notebooks
|
mit
|
Next, create a Word2Vec object with this corpus as the input argument. Training takes place at this point.
|
from gensim.models.word2vec import Word2Vec
%%time
model = Word2Vec(sentences)
|
30. ๋ฅ๋ฌ๋/06. ๋จ์ด ์๋ฒ ๋ฉ์ ์๋ฆฌ์ gensim.word2vec ์ฌ์ฉ๋ฒ.ipynb
|
zzsza/Datascience_School
|
mit
|
Once training is complete, the init_sims command unloads memory that is no longer needed.
|
model.init_sims(replace=True)
|
30. ๋ฅ๋ฌ๋/06. ๋จ์ด ์๋ฒ ๋ฉ์ ์๋ฆฌ์ gensim.word2vec ์ฌ์ฉ๋ฒ.ipynb
|
zzsza/Datascience_School
|
mit
|
We can now use the following methods on this model. For details, see https://radimrehurek.com/gensim/models/word2vec.html.
similarity: compute the similarity between two words
most_similar: print the most similar words
|
model.similarity('actor', 'actress')
model.similarity('he', 'she')
model.similarity('actor', 'she')
model.most_similar("villain")
|
30. ๋ฅ๋ฌ๋/06. ๋จ์ด ์๋ฒ ๋ฉ์ ์๋ฆฌ์ gensim.word2vec ์ฌ์ฉ๋ฒ.ipynb
|
zzsza/Datascience_School
|
mit
|
The most_similar method can also take positive and negative arguments to find word relations such as:
actor + he - actress = she
|
model.most_similar(positive=['actor', 'he'], negative='actress', topn=1)
|
30. ๋ฅ๋ฌ๋/06. ๋จ์ด ์๋ฒ ๋ฉ์ ์๋ฆฌ์ gensim.word2vec ์ฌ์ฉ๋ฒ.ipynb
|
zzsza/Datascience_School
|
mit
|
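The vector arithmetic behind most_similar can be sketched with plain NumPy on hypothetical 2-D toy vectors (these are illustrative values, not real word2vec embeddings; here the analogy is run in the direction actor - he + she ≈ actress):

```python
import numpy as np

# hypothetical toy embeddings: first axis ~ gender, second axis ~ "is an actor"
vec = {'actor': np.array([1.0, 1.0]), 'actress': np.array([-1.0, 1.0]),
       'he': np.array([1.0, 0.0]), 'she': np.array([-1.0, 0.0]),
       'villain': np.array([0.0, -1.0])}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# positive words are added, negative words subtracted, then rank by cosine
query = vec['actor'] - vec['he'] + vec['she']
best = max((w for w in vec if w not in ('actor', 'he', 'she')),
           key=lambda w: cosine(vec[w], query))
print(best)  # 'actress'
```

gensim does the same thing over the trained embedding matrix, excluding the query words from the candidates.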
This time, let's train Korean word embeddings using the Naver movie review corpus.
|
import codecs
def read_data(filename):
with codecs.open(filename, encoding='utf-8', mode='r') as f:
data = [line.split('\t') for line in f.read().splitlines()]
        data = data[1:]  # skip the header
return data
train_data = read_data('/home/dockeruser/data/nsmc/ratings_train.txt')
from konlpy.tag import Twitter
tagger = Twitter()
def tokenize(doc):
return ['/'.join(t) for t in tagger.pos(doc, norm=True, stem=True)]
train_docs = [row[1] for row in train_data]
sentences = [tokenize(d) for d in train_docs]
from gensim.models import word2vec
model = word2vec.Word2Vec(sentences)
model.init_sims(replace=True)
model.similarity(*tokenize(u'악당 영웅'))
model.similarity(*tokenize(u'악당 감동'))
from konlpy.utils import pprint
pprint(model.most_similar(positive=tokenize(u'배우 남자'), negative=tokenize(u'여배우'), topn=1))
|
30. ๋ฅ๋ฌ๋/06. ๋จ์ด ์๋ฒ ๋ฉ์ ์๋ฆฌ์ gensim.word2vec ์ฌ์ฉ๋ฒ.ipynb
|
zzsza/Datascience_School
|
mit
|
Now that we have an idea of how to use h5py to read in an h5 file, let's try it out. Note that if the h5 file is stored in a different directory than where you are running your notebook, you need to include the path (either relative or absolute) to the directory where that data file is stored. Use os.path.join to create the full path of the file.
|
# Note that you will need to update this filepath for your local machine
f = h5py.File('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5','r')
|
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
|
NEONScience/NEON-Data-Skills
|
agpl-3.0
|
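As noted above, `os.path.join` builds the full path from a directory and a filename; a short sketch (the directory is the one hard-coded in the cell above and should be adjusted for your machine):

```python
import os.path

data_dir = '/Users/olearyd/Git/data'  # adjust for your local machine
fname = 'NEON_D02_SERC_DP3_368000_4306000_reflectance.h5'
h5_path = os.path.join(data_dir, fname)  # joins with the OS path separator
print(h5_path)
# f = h5py.File(h5_path, 'r')  # open the file once the path is correct
```

Using `os.path.join` avoids manually concatenating separators and keeps the code portable.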
This 3-D shape (1000,1000,426) corresponds to (y,x,bands), where (x,y) are the dimensions of the reflectance array in pixels. Hyperspectral data sets are often called "cubes" to reflect this 3-dimensional shape.
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/DataCube.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/DataCube.png"></a>
<figcaption> A "cube" showing a hyperspectral data set. Source: National Ecological Observatory Network
(NEON)
</figcaption>
</figure>
NEON hyperspectral data contain around 426 spectral bands, and when working with tiled data, the spatial dimensions are 1000 x 1000, where each pixel represents 1 meter. Now let's take a look at the wavelength values. First, we will extract wavelength information from the serc_refl variable that we created:
|
#define the wavelengths variable
wavelengths = serc_refl['Metadata']['Spectral_Data']['Wavelength']
#View wavelength information and values
print('wavelengths:',wavelengths)
|
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
|
NEONScience/NEON-Data-Skills
|
agpl-3.0
|
Here we can see that we extracted a 2-D array (1000 x 1000) of the scaled reflectance data corresponding to the wavelength band 56. Before we can use the data, we need to clean it up a little. We'll show how to do this below.
Scale factor and No Data Value
This array represents the scaled reflectance for band 56. Recall from exploring the HDF5 data in HDFViewer that NEON AOP reflectance data uses a Data_Ignore_Value of -9999 to represent missing data (often displayed as NaN), and a reflectance Scale_Factor of 10000.0 so that values can be stored as integers, which saves disk space.
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png"></a>
<figcaption> Screenshot of the NEON HDF5 file format.
Source: National Ecological Observatory Network
</figcaption>
</figure>
We can extract and apply the Data_Ignore_Value and Scale_Factor as follows:
|
#View and apply scale factor and data ignore value
scaleFactor = serc_reflArray.attrs['Scale_Factor']
noDataValue = serc_reflArray.attrs['Data_Ignore_Value']
print('Scale Factor:',scaleFactor)
print('Data Ignore Value:',noDataValue)
b56[b56==int(noDataValue)]=np.nan
b56 = b56/scaleFactor
print('Cleaned Band 56 Reflectance:\n',b56)
|
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
|
NEONScience/NEON-Data-Skills
|
agpl-3.0
|
Here you can see that adjusting the colorlimit displays features (eg. roads, buildings) much better than when we set the colormap limits to the entire range of reflectance values.
Extension: Basic Image Processing -- Contrast Stretch & Histogram Equalization
We can also try out some basic image processing to better visualize the
reflectance data using the scikit-image package.
Histogram equalization is a method in image processing of contrast adjustment
using the image's histogram. Stretching the histogram can improve the contrast
of a displayed image, as we will show how to do below.
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png"></a>
<figcaption> Histogram equalization is a method in image processing of contrast adjustment
using the image's histogram. Stretching the histogram can improve the contrast
of a displayed image, as we will show how to do below.
Source: <a href="https://en.wikipedia.org/wiki/Talk%3AHistogram_equalization#/media/File:Histogrammspreizung.png"> Wikipedia - Public Domain </a>
</figcaption>
</figure>
The following tutorial section is adapted from scikit-image's tutorial
<a href="http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py" target="_blank"> Histogram Equalization</a>.
Below we demonstrate a widget to interactively display different linear contrast stretches:
Explore the contrast stretch feature interactively using IPython widgets:
|
from skimage import exposure
from ipywidgets import interact  # IPython.html.widgets is deprecated
def linearStretch(percent):
pLow, pHigh = np.percentile(b56[~np.isnan(b56)], (percent,100-percent))
img_rescale = exposure.rescale_intensity(b56, in_range=(pLow,pHigh))
plt.imshow(img_rescale,extent=serc_ext,cmap='gist_earth')
#cbar = plt.colorbar(); cbar.set_label('Reflectance')
plt.title('SERC Band 56 \n Linear ' + str(percent) + '% Contrast Stretch');
ax = plt.gca()
ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
interact(linearStretch,percent=(0,50,1))
|
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
|
NEONScience/NEON-Data-Skills
|
agpl-3.0
|
Converting TensorFlow 1 code to TensorFlow 2
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/guide/migrate">
    <img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
    View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate.ipynb">
    <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
    Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate.ipynb">
    <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
    View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/migrate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: this document was translated by the TensorFlow community. Community translations are best-effort and may not exactly reflect the latest official English documentation. To improve this translation, please send a pull request to the tensorflow/docs-l10n GitHub repository. To volunteer as a translator or reviewer, write to docs-ko@tensorflow.org.
์ด ๋ฌธ์๋ ์ ์์ค ํ
์ํ๋ก API๋ฅผ ์ฌ์ฉ์๋ฅผ ์ํ ๊ฐ์ด๋์
๋๋ค.
๋ง์ฝ ๊ณ ์์ค API(tf.keras)๋ฅผ ์ฌ์ฉํ๊ณ ์๋ค๋ฉด ํ
์ํ๋ก 2.0์ผ๋ก ๋ฐ๊พธ๊ธฐ ์ํด ํ ์ผ์ด ๊ฑฐ์ ์์ต๋๋ค:
์ตํฐ๋ง์ด์ ํ์ต๋ฅ ๊ธฐ๋ณธ๊ฐ์ ํ์ธํด ๋ณด์ธ์.
์ธก์ ์งํ์ "์ด๋ฆ"์ด ๋ฐ๋์์ ์ ์์ต๋๋ค.
์ฌ์ ํ ํ
์ํ๋ก 1.X ๋ฒ์ ์ ์ฝ๋๋ฅผ ์์ ํ์ง ์๊ณ ํ
์ํ๋ก 2.0์์ ์คํํ ์ ์์ต๋๋ค(contrib ๋ชจ๋์ ์ ์ธ):
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
ํ์ง๋ง ์ด๋ ๊ฒ ํ๋ฉด ํ
์ํ๋ก 2.0์์ ์ ๊ณตํ๋ ๋ง์ ์ฅ์ ์ ํ์ฉํ ์ ์์ต๋๋ค. ์ด ๋ฌธ์๋ ์ฑ๋ฅ์ ๋์ด๋ฉด์ ์ฝ๋๋ ๋ ๊ฐ๋จํ๊ณ ์ ์ง๋ณด์ํ๊ธฐ ์ฝ๋๋ก ์
๊ทธ๋ ์ด๋ํ๋ ๋ฐฉ๋ฒ์ ์๋ดํฉ๋๋ค.
์๋ ๋ณํ ์คํฌ๋ฆฝํธ
์ฒซ ๋ฒ์งธ ๋จ๊ณ๋ ์
๊ทธ๋ ์ด๋ ์คํฌ๋ฆฝํธ๋ฅผ ์ฌ์ฉํด ๋ณด๋ ๊ฒ์
๋๋ค.
์ด๋ ํ
์ํ๋ก 2.0์ผ๋ก ์
๊ทธ๋ ์ด๋ํ๊ธฐ ์ํด ์ฒ์ ์๋ํ ์ผ์
๋๋ค. ํ์ง๋ง ์ด ์์
์ด ๊ธฐ์กด ์ฝ๋๋ฅผ ํ
์ํ๋ก 2.0 ์คํ์ผ๋ก ๋ฐ๊พธ์ด ์ฃผ์ง๋ ๋ชปํฉ๋๋ค. ์ฌ์ ํ ํ๋ ์ด์คํ๋(placeholder)๋ ์ธ์
(session), ์ปฌ๋ ์
(collection), ๊ทธ์ธ 1.x ์คํ์ผ์ ๊ธฐ๋ฅ์ ์ฌ์ฉํ๊ธฐ ์ํด tf.compat.v1 ์๋์ ๋ชจ๋์ ์ฐธ์กฐํ๊ณ ์์ ๊ฒ์
๋๋ค.
๊ณ ์์ค ๋์ ๋ณ๊ฒฝ
tf.compat.v1.disable_v2_behavior()๋ฅผ ์ฌ์ฉํด ํ
์ํ๋ก 2.0์์ ์ฝ๋๋ฅผ ์คํํ๋ค๋ฉด ์ ์ญ ๋ฒ์์ ๋ณ๊ฒฝ ์ฌํญ์ ๋ํด ์๊ณ ์์ด์ผ ํฉ๋๋ค. ์ฃผ์ ๋ณ๊ฒฝ ์ฌํญ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
์ฆ์ ์คํ, v1.enable_eager_execution() : ์๋ฌต์ ์ผ๋ก tf.Graph๋ฅผ ์ฌ์ฉํ๋ ๋ชจ๋ ์ฝ๋๋ ์คํจํ ๊ฒ์
๋๋ค. ์ฝ๋๋ฅผ with tf.Graph().as_default() ์ปจํ์คํธ(context)๋ก ๊ฐ์ธ์ผ ํฉ๋๋ค.
๋ฆฌ์์ค(resource) ๋ณ์, v1.enable_resource_variables(): ์ผ๋ถ ์ฝ๋๋ TF ๋ ํผ๋ฐ์ค ๋ณ์์ ๊ฒฐ์ ์ ์ด์ง ์์ ํ๋์ ์ํฅ์ ๋ฐ์ ์ ์์ต๋๋ค.
๋ฆฌ์์ค ๋ณ์๋ ์ ์ฅ๋๋ ๋์ ์ ๊น๋๋ค. ๋ฐ๋ผ์ ์ดํดํ๊ธฐ ์ฌ์ด ์ผ๊ด์ฑ์ ๋ณด์ฅํฉ๋๋ค.
๊ทน๋จ์ ์ธ ๊ฒฝ์ฐ ๋์์ ๋ฐ๊ฟ ์ ์์ต๋๋ค.
์ถ๊ฐ๋ก ๋ณต์ฌ๋ณธ์ ๋ง๋ค๊ณ ๋ฉ๋ชจ๋ฆฌ ์ฌ์ฉ๋์ ๋์ผ ์ ์์ต๋๋ค.
tf.Variable ์์ฑ์์ use_resource=False๋ฅผ ์ ๋ฌํ์ฌ ๋นํ์ฑํํ ์ ์์ต๋๋ค.
ํ
์ ํฌ๊ธฐ, v1.enable_v2_tensorshape(): TF 2.0์์ ํ
์ ํฌ๊ธฐ๋ ๊ฐ๋จํฉ๋๋ค. t.shape[0].value ๋์ ์ t.shape[0]์ ์ฌ์ฉํ ์ ์์ต๋๋ค. ๋ณ๊ฒฝ ์ฌํญ์ด ์๊ธฐ ๋๋ฌธ์ ๋น์ฅ ๊ณ ์น๋ ๊ฒ์ด ์ข์ต๋๋ค. TensorShape ์๋ฅผ ์ฐธ๊ณ ํ์ธ์.
์ ์ด ํ๋ฆ, v1.enable_control_flow_v2(): TF 2.0 ์ ์ด ํ๋ฆ ๊ตฌํ์ด ๊ฐ๋จํ๊ฒ ๋ฐ๋์๊ธฐ ๋๋ฌธ์ ๋ค๋ฅธ ๊ทธ๋ํ ํํ์ ๋ง๋ญ๋๋ค. ์ด์๊ฐ ์๋ค๋ฉด ๋ฒ๊ทธ๋ฅผ ์ ๊ณ ํด ์ฃผ์ธ์.
2.0์ ๋ง๋๋ก ์ฝ๋ ์์ ํ๊ธฐ
ํ
์ํ๋ก 1.x ์ฝ๋๋ฅผ ํ
์ํ๋ก 2.0์ผ๋ก ๋ณํํ๋ ๋ช ๊ฐ์ง ์๋ฅผ ์๊ฐํ๊ฒ ์ต๋๋ค. ์ด ์์
์ ํตํด ์ฑ๋ฅ์ ์ต์ ํํ๊ณ ๊ฐ์ํ๋ API์ ์ด์ ์ ์ฌ์ฉํ ์ ์์ต๋๋ค.
๊ฐ๊ฐ์ ๊ฒฝ์ฐ์ ์์ ํ๋ ํจํด์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
1. tf.Session.run ํธ์ถ์ ๋ฐ๊พธ์ธ์.
๋ชจ๋ tf.Session.run ํธ์ถ์ ํ์ด์ฌ ํจ์๋ก ๋ฐ๊พธ์ด์ผ ํฉ๋๋ค.
feed_dict์ tf.placeholder๋ ํจ์์ ๋งค๊ฐ๋ณ์๊ฐ ๋ฉ๋๋ค.
fetches๋ ํจ์์ ๋ฐํ๊ฐ์ด ๋ฉ๋๋ค.
๋ณํ ๊ณผ์ ์์ ์ฆ์ ์คํ ๋ชจ๋ ๋๋ถ์ ํ์ค ํ์ด์ฌ ๋๋ฒ๊ฑฐ pdb๋ฅผ ์ฌ์ฉํ์ฌ ์ฝ๊ฒ ๋๋ฒ๊น
ํ ์ ์์ต๋๋ค.
๊ทธ๋ค์ ๊ทธ๋ํ ๋ชจ๋์์ ํจ์จ์ ์ผ๋ก ์คํํ ์ ์๋๋ก tf.function ๋ฐ์ฝ๋ ์ดํฐ๋ฅผ ์ถ๊ฐํฉ๋๋ค. ๋ ์์ธํ ๋ด์ฉ์ ์คํ ๊ทธ๋ํ ๊ฐ์ด๋๋ฅผ ์ฐธ๊ณ ํ์ธ์.
๋
ธํธ:
v1.Session.run๊ณผ ๋ฌ๋ฆฌ tf.function์ ๋ฐํ ์๊ทธ๋์ฒ(signature)๊ฐ ๊ณ ์ ๋์ด ์๊ณ ํญ์ ๋ชจ๋ ์ถ๋ ฅ์ ๋ฐํํฉ๋๋ค. ์ฑ๋ฅ์ ๋ฌธ์ ๊ฐ ๋๋ค๋ฉด ๋ ๊ฐ์ ํจ์๋ก ๋๋์ธ์.
tf.control_dependencies๋ ๋น์ทํ ์ฐ์ฐ์ด ํ์์์ต๋๋ค: tf.function์ ์ฐ์ฌ์ง ์์๋๋ก ์คํ๋ฉ๋๋ค. ์๋ฅผ ๋ค์ด tf.Variable ํ ๋น์ด๋ tf.assert๋ ์๋์ผ๋ก ์คํ๋ฉ๋๋ค.
2. ํ์ด์ฌ ๊ฐ์ฒด๋ฅผ ์ฌ์ฉํ์ฌ ๋ณ์์ ์์ค์ ๊ด๋ฆฌํ์ธ์.
TF 2.0์์ ์ด๋ฆ ๊ธฐ๋ฐ ๋ณ์ ์ถ์ ์ ๋งค์ฐ ๊ถ์ฅ๋์ง ์์ต๋๋ค. ํ์ด์ฌ ๊ฐ์ฒด๋ก ๋ณ์๋ฅผ ์ถ์ ํ์ธ์.
v1.get_variable ๋์ ์ tf.Variable์ ์ฌ์ฉํ์ธ์.
๋ชจ๋ v1.variable_scope๋ ํ์ด์ฌ ๊ฐ์ฒด๋ก ๋ฐ๊พธ์ด์ผ ํฉ๋๋ค. ์ผ๋ฐ์ ์ผ๋ก ๋ค์ ์ค ํ๋๊ฐ ๋ ๊ฒ์
๋๋ค:
tf.keras.layers.Layer
tf.keras.Model
tf.Module
๋ง์ฝ (tf.Graph.get_collection(tf.GraphKeys.VARIABLES)์ฒ๋ผ) ๋ณ์์ ๋ฆฌ์คํธ๊ฐ ํ์ํ๋ค๋ฉด Layer์ Model ๊ฐ์ฒด์ .variables์ด๋ .trainable_variables ์์ฑ์ ์ฌ์ฉํ์ธ์.
Layer์ Model ํด๋์ค๋ ์ ์ญ ์ปฌ๋ ์
์ด ํ์ํ์ง ์๋๋ก ๋ช ๊ฐ์ง ๋ค๋ฅธ ์์ฑ๋ค๋ ์ ๊ณตํฉ๋๋ค. .losses ์์ฑ์ tf.GraphKeys.LOSSES ์ปฌ๋ ์
๋์ ์ฌ์ฉํ ์ ์์ต๋๋ค.
์์ธํ ๋ด์ฉ์ ์ผ๋ผ์ค ๊ฐ์ด๋๋ฅผ ์ฐธ๊ณ ํ์ธ์.
๊ฒฝ๊ณ : tf.compat.v1์ ์๋น์ ๊ธฐ๋ฅ์ ์๋ฌต์ ์ผ๋ก ์ ์ญ ์ปฌ๋ ์
์ ์ฌ์ฉํฉ๋๋ค.
3. Upgrade your training loops.
Use the highest-level API that works for your problem. Prefer tf.keras.Model.fit over building your own training loops.
These high-level functions manage a lot of the low-level details that are easy to miss if you write your own training loop. For example, they automatically collect the regularization losses, and set the training=True argument when calling the model.
4. Upgrade your data input pipelines.
Use tf.data datasets for data input. These objects are efficient, expressive, and integrate well with TensorFlow.
They can be passed directly to the tf.keras.Model.fit method:
```python
model.fit(dataset, epochs=5)
```
They can be iterated over directly in standard Python:
```python
for example_batch, label_batch in dataset:
  break
```
5. Migrate off compat.v1 symbols.
The tf.compat.v1 module contains the complete TensorFlow 1.x API.
The TF2 upgrade script will convert symbols to their 2.0 equivalents if such a conversion is safe, i.e. if it can determine that the behavior of the 2.0 version is exactly equivalent (for instance, it will rename v1.arg_max to tf.argmax, since those are the same function).
After the upgrade script is done with your code, many mentions of compat.v1 will likely remain. It is worth going through the code and converting these manually to the 2.0 equivalent (the log will mention an equivalent if there is one).
Converting models
Setup
```python
import tensorflow as tf
import tensorflow_datasets as tfds
```
Source notebook: site/ko/guide/migrate.ipynb (tensorflow/docs-l10n, apache-2.0)
Low-level variables & operator execution
Examples of code using low-level APIs include:
using variable scopes to control reuse
creating variables with v1.get_variable
accessing collections explicitly
accessing collections implicitly with methods like:
v1.global_variables
v1.losses.get_regularization_loss
using v1.placeholder to set up graph inputs
executing graphs with session.run
initializing variables manually
Before converting
Here is what these patterns may look like in code using TensorFlow 1.x:
```python
in_a = tf.placeholder(dtype=tf.float32, shape=(2))
in_b = tf.placeholder(dtype=tf.float32, shape=(2))
def forward(x):
with tf.variable_scope("matmul", reuse=tf.AUTO_REUSE):
W = tf.get_variable("W", initializer=tf.ones(shape=(2,2)),
regularizer=tf.contrib.layers.l2_regularizer(0.04))
b = tf.get_variable("b", initializer=tf.zeros(shape=(2)))
return W * x + b
out_a = forward(in_a)
out_b = forward(in_b)
reg_loss = tf.losses.get_regularization_loss(scope="matmul")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
outs = sess.run([out_a, out_b, reg_loss],
feed_dict={in_a: [1, 0], in_b: [0, 1]})
```
After converting
In the converted code:
The variables are local Python objects.
The forward function still defines the calculation.
The Session.run call is replaced with a call to forward.
The optional tf.function decorator can be added for performance.
The regularizations are calculated manually, without referring to any global collection.
There are no sessions or placeholders.
```python
W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")

@tf.function
def forward(x):
  return W * x + b

out_a = forward([1,0])
print(out_a)

out_b = forward([0,1])

regularizer = tf.keras.regularizers.l2(0.04)
reg_loss = regularizer(W)
```
Models based on tf.layers
The v1.layers module contains layer functions that rely on v1.variable_scope to define and reuse variables.
Before converting
```python
def model(x, training, scope='model'):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu,
kernel_regularizer=tf.contrib.layers.l2_regularizer(0.04))
x = tf.layers.max_pooling2d(x, (2, 2), 1)
x = tf.layers.flatten(x)
x = tf.layers.dropout(x, 0.1, training=training)
x = tf.layers.dense(x, 64, activation=tf.nn.relu)
x = tf.layers.batch_normalization(x, training=training)
x = tf.layers.dense(x, 10, activation=tf.nn.softmax)
return x
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
```
After converting
The tf.keras.Sequential model works well for a simple stack of layers. (For complicated models, see custom layers and models and the functional API.)
The model tracks the variables and regularization losses.
The conversion is one-to-one because there is a direct mapping from v1.layers to tf.keras.layers.
Most arguments stay the same, but note the differences:
The training argument is passed to each layer by the model when it runs.
The first argument to the original model function (the input x) is gone. This is because object layers separate building the model from calling the model.
Also note that:
If you were using regularizers or initializers from tf.contrib, these have more argument changes than others.
The code no longer writes to collections, so functions like v1.losses.get_regularization_loss will no longer return these values, possibly breaking your training loops.
```python
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.04),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))

train_out = model(train_data, training=True)
print(train_out)

test_out = model(test_data, training=False)
print(test_out)

# All the trainable variables
len(model.trainable_variables)

# The regularization losses
model.losses
```
Mixing variables and v1.layers
Existing code often mixes lower-level TF 1.x variables and operations with higher-level v1.layers.
Before converting
```python
def model(x, training, scope='model'):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
W = tf.get_variable(
"W", dtype=tf.float32,
initializer=tf.ones(shape=x.shape),
regularizer=tf.contrib.layers.l2_regularizer(0.04),
trainable=True)
if training:
x = x + W
else:
x = x + W * 0.5
x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu)
x = tf.layers.max_pooling2d(x, (2, 2), 1)
x = tf.layers.flatten(x)
return x
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
```
After converting
To convert this code, follow the pattern of mapping layers to layers as in the previous example.
A v1.variable_scope is effectively a layer of its own, so rewrite it as a tf.keras.layers.Layer. See this guide for details.
The general pattern is:
Collect layer parameters in __init__.
Build the variables in the build method.
Execute the calculations in the call method, and return the result.
```python
# Create a custom layer for adding to the model.
class CustomLayer(tf.keras.layers.Layer):
  def __init__(self, *args, **kwargs):
    super(CustomLayer, self).__init__(*args, **kwargs)

  def build(self, input_shape):
    self.w = self.add_weight(
        shape=input_shape[1:],
        dtype=tf.float32,
        initializer=tf.keras.initializers.ones(),
        regularizer=tf.keras.regularizers.l2(0.04),
        trainable=True)

  # If the call method is used in graph mode,
  # the training variable will be a tensor.
  @tf.function
  def call(self, inputs, training=None):
    if training:
      return inputs + self.w
    else:
      return inputs + self.w * 0.5

custom_layer = CustomLayer()
print(custom_layer([1]).numpy())
print(custom_layer([1], training=True).numpy())

train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))

# Build a model including the custom layer.
model = tf.keras.Sequential([
    CustomLayer(input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
])

train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
```
Note:
Subclassed Keras models and layers need to run in both v1 graphs (where automatic control dependencies are absent) and in eager mode.
Wrap the call() method in a tf.function() to get autograph and automatic control dependencies.
Don't forget to accept a training argument to call:
Sometimes it is a tf.Tensor
Sometimes it is a Python boolean
Create model variables in the constructor or in the build() method using self.add_weight().
In build you have access to the input shape, so you can create weights with a matching shape.
Using tf.keras.layers.Layer.add_weight allows Keras to track variables and regularization losses.
Don't keep tf.Tensors in your custom layers:
They might get created either in a tf.function or in the eager context, and these tensors behave differently.
Use tf.Variables for state; they are always usable from both contexts.
tf.Tensors are only for intermediate values.
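A minimal sketch of this state rule (the layer and its names are hypothetical): persistent state lives in a tf.Variable created with add_weight, while tf.Tensors appear only as intermediate values inside call:

```python
import tensorflow as tf

class CountingScale(tf.keras.layers.Layer):
    """Doubles its input and counts how many times it has been called."""
    def build(self, input_shape):
        # Persistent state: a non-trainable tf.Variable.
        self.calls = self.add_weight(
            name='calls', shape=(), dtype=tf.float32,
            initializer='zeros', trainable=False)

    def call(self, inputs):
        self.calls.assign_add(1.0)   # state update works eagerly and in tf.function
        doubled = inputs * 2.0       # intermediate tf.Tensor: fine to create here
        return doubled

layer = CountingScale()
layer(tf.ones((1, 2)))
layer(tf.ones((1, 2)))
print(layer.calls.numpy())  # 2.0
```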
A note on Slim and contrib.layers
A large amount of older TensorFlow 1.x code uses the Slim library, which was packaged with TensorFlow 1.x as tf.contrib.layers. As a contrib module, this is no longer supported in TensorFlow 2.0 and is not included in tf.compat.v1. Converting code using Slim to TF 2.0 is more involved than converting code that uses v1.layers. In fact, it may make sense to convert your Slim code to v1.layers first, then convert to Keras.
Remove arg_scopes; all arguments need to be explicit.
If you use normalizer_fn and activation_fn, split them into their own layers.
Separable conv layers map to one or more different Keras layers (depthwise, pointwise, and separable Keras layers).
Slim and v1.layers have different argument names and default values.
Some arguments have different scales.
If you use Slim pre-trained models, try out tf.keras.applications or TFHub.
Some tf.contrib layers might not have been moved to core TensorFlow but have instead been moved to the TF add-ons package.
Training
There are many ways to feed data to a tf.keras model. They will accept Python generators and NumPy arrays as input.
The recommended way to feed data to a model is to use the tf.data package, which contains a collection of high-performance classes for manipulating data.
tf.queue is now only supported as a data structure, not as an input pipeline.
Using Datasets
The TensorFlow Datasets package (tfds) contains utilities for loading predefined datasets as tf.data.Dataset objects.
For example, here is how to load the MNIST dataset using tfds:
```python
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
```
Then prepare the data for training:
Re-scale each image.
Shuffle the order of the examples.
Collect batches of images and labels.
```python
BUFFER_SIZE = 10  # Use a much larger value for real code.
BATCH_SIZE = 64
NUM_EPOCHS = 5


def scale(image, label):
  image = tf.cast(image, tf.float32)
  image /= 255

  return image, label
```
To keep the example short, trim the dataset to only return 5 batches:
```python
train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_data = mnist_test.map(scale).batch(BATCH_SIZE)

STEPS_PER_EPOCH = 5

train_data = train_data.take(STEPS_PER_EPOCH)
test_data = test_data.take(STEPS_PER_EPOCH)

image_batch, label_batch = next(iter(train_data))
```
Use Keras training loops
If you don't need low-level control of your training process, using Keras's built-in fit, evaluate, and predict methods is recommended. These methods provide a uniform interface to train the model regardless of the implementation (Sequential, functional API, or subclassed).
The advantages of these methods include:
They accept NumPy arrays, Python generators, and tf.data.Datasets.
They apply regularization and activation losses automatically.
They support tf.distribute for multi-device training.
They support arbitrary callables as losses and metrics.
They support callbacks like tf.keras.callbacks.TensorBoard, and custom callbacks.
They are performant, automatically using TensorFlow graphs.
Here is an example of training a model using a Dataset. (For details on how this works, see the tutorials.)
```python
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

# The model is the same one, without the custom layer.
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(train_data, epochs=NUM_EPOCHS)
loss, acc = model.evaluate(test_data)

print("Loss {}, Accuracy {}".format(loss, acc))
```
Write your own loop
If the Keras model's training step works for you but you need more control outside that step, consider using the tf.keras.Model.train_on_batch method in your own data-iteration loop.
Remember: many things can be implemented as a tf.keras.callbacks.Callback.
This method has many of the advantages of the methods mentioned in the previous section, but gives the user control of the outer loop.
You can also use tf.keras.Model.test_on_batch or tf.keras.Model.evaluate to check performance during training.
Note: train_on_batch and test_on_batch return the loss and metrics for the single batch by default. If you pass reset_metrics=False they return accumulated metrics, and you must remember to appropriately reset the metric accumulators. Also remember that some metrics like AUC require reset_metrics=False to be calculated correctly.
To continue training the above model:
```python
# The model is the same one, without the custom layer.
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

for epoch in range(NUM_EPOCHS):
  # Reset the metric accumulators.
  model.reset_metrics()

  for image_batch, label_batch in train_data:
    result = model.train_on_batch(image_batch, label_batch)
    metrics_names = model.metrics_names
    print("train: ",
          "{}: {:.3f}".format(metrics_names[0], result[0]),
          "{}: {:.3f}".format(metrics_names[1], result[1]))
  for image_batch, label_batch in test_data:
    result = model.test_on_batch(image_batch, label_batch,
                                 # return accumulated metrics
                                 reset_metrics=False)
  metrics_names = model.metrics_names
  print("\neval: ",
        "{}: {:.3f}".format(metrics_names[0], result[0]),
        "{}: {:.3f}".format(metrics_names[1], result[1]))
```
<a id="custom_loops"/>
Customize the training step
If you need more flexibility and control, you can have it by implementing your own training loop. There are three steps:
Iterate over a Python generator or tf.data.Dataset to get batches of examples.
Use tf.GradientTape to collect gradients.
Use one of the tf.keras.optimizers to apply weight updates to the model's variables.
Remember:
Always include a training argument on the call method of subclassed layers and models.
Make sure to call the model with the training argument set correctly.
Depending on usage, model variables may not exist until the model is run on a batch of data.
You need to manually handle things like regularization losses for the model.
Note the simplifications relative to v1:
There is no need to run variable initializers. Variables are initialized on creation.
There is no need to add manual control dependencies. Even in tf.function, operations act as in eager mode.
```python
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

optimizer = tf.keras.optimizers.Adam(0.001)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss = tf.math.add_n(model.losses)
    pred_loss = loss_fn(labels, predictions)
    total_loss = pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

for epoch in range(NUM_EPOCHS):
  for inputs, labels in train_data:
    train_step(inputs, labels)
  print("Finished epoch", epoch)
```
New-style metrics
In TensorFlow 2.0, metrics and losses are objects. These work both eagerly and in tf.functions.
A loss object is callable, and expects (y_true, y_pred) as arguments:
```python
cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
cce([[1, 0]], [[-1.0,3.0]]).numpy()
```
A metric object has the following methods:
update_state() — add new observations.
result() — get the current result of the metric, given the observed values.
reset_states() — clear all observations.
The object itself is callable. Calling it updates the state with new observations, as with update_state, and returns the new result of the metric.
You don't have to manually initialize a metric's variables, and because TensorFlow 2.0 has automatic control dependencies, you don't need to worry about those either.
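A minimal standalone demonstration of these methods using tf.keras.metrics.Mean:

```python
import tensorflow as tf

mean = tf.keras.metrics.Mean()
mean.update_state(1.0)
mean.update_state(3.0)
print(mean.result().numpy())  # 2.0

mean.reset_states()           # all observations cleared
print(mean.result().numpy())  # 0.0

# The object is also callable: it updates state and returns the new result.
print(mean(5.0).numpy())      # 5.0
```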
The code below uses a metric object to track the mean loss within a custom training loop.
```python
# Create the metrics
loss_metric = tf.keras.metrics.Mean(name='train_loss')
accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss = tf.math.add_n(model.losses)
    pred_loss = loss_fn(labels, predictions)
    total_loss = pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
  # Update the metrics
  loss_metric.update_state(total_loss)
  accuracy_metric.update_state(labels, predictions)


for epoch in range(NUM_EPOCHS):
  # Reset the metrics
  loss_metric.reset_states()
  accuracy_metric.reset_states()

  for inputs, labels in train_data:
    train_step(inputs, labels)
  # Get the metric results
  mean_loss = loss_metric.result()
  mean_accuracy = accuracy_metric.result()

  print('Epoch: ', epoch)
  print('  loss:     {:.3f}'.format(mean_loss))
  print('  accuracy: {:.3f}'.format(mean_accuracy))
```