See the contents of the bucket on the web interface (the URL is printed below):

print("https://console.developers.google.com/project/" + project_id + "/storage/browser/" + BUCKET_NAME + "/?authuser=0")
credentials/Test.ipynb
louisdorard/bml-base
mit
Google Prediction. Initialize the API wrapper:

import googleapiclient.gpred as gpred

oauth_file = %env GPRED_OAUTH_FILE
api = gpred.api(oauth_file)
Making predictions against a hosted model. Let's use the sample.sentiment hosted model (made publicly available by Google):

# projectname has to be 414649711441
prediction_request = api.hostedmodels().predict(
    project='414649711441',
    hostedModelName='sample.sentiment',
    body={'input': {'csvInstance': ['I hate that stuff is so stupid']}})
result = prediction_request.execute()
# We can print the raw result
print(result)
Lists. These exercises are a bit awkward: "lists" in Python are not singly linked lists! Exercise 1: taille

from typing import TypeVar, List

_a = TypeVar('alpha')

def taille(liste: List[_a]) -> int:
    longueur = 0
    for _ in liste:
        longueur += 1
    return longueur

taille([])         # 0
taille([1, 2, 3])  # 3
len([])            # 0
len([1, 2, 3])     # 3
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
Exercise 2: concatene

from typing import TypeVar, List

_a = TypeVar('alpha')

def concatene(liste1: List[_a], liste2: List[_a]) -> List[_a]:
    # return liste1 + liste2  # easy solution
    liste = []
    for i in liste1:
        liste.append(i)
    for i in liste2:
        liste.append(i)
    return liste

concatene([1, 2], [3, 4])  # [1, 2, 3, 4]
[1, 2] + [3, 4]            # [1, 2, 3, 4]
But beware: typing is always optional in Python:

concatene([1, 2], ["pas", "entier", "?"])
Exercise 3: appartient

from typing import TypeVar, List

_a = TypeVar('alpha')

def appartient(x: _a, liste: List[_a]) -> bool:
    for y in liste:
        if x == y:
            return True  # stop before the end
    return False

appartient(1, [])         # False
appartient(1, [1])        # True
appartient(1, [1, 2, 3])  # True
appartient(4, [1, 2, 3])  # False
1 in []
1 in [1]
1 in [1, 2, 3]
4 in [1, 2, 3]
Our implementation is obviously slower than the standard library's x in liste test... but not by that much:

%timeit appartient(1000, list(range(10000)))
%timeit 1000 in list(range(10000))
Exercise 4: miroir

from typing import TypeVar, List

_a = TypeVar('alpha')

def miroir(liste: List[_a]) -> List[_a]:
    # return liste[::-1]  # easy version
    liste2 = []
    for x in liste:
        liste2.insert(0, x)
    return liste2

miroir([2, 3, 5, 7, 11])  # [11, 7, 5, 3, 2]
[2, 3, 5, 7, 11][::-1]    # [11, 7, 5, 3, 2]
%timeit miroir([2, 3, 5, 7, 11])
%timeit [2, 3, 5, 7, 11][::-1]
Exercise 5: alterne. The expected semantics were not very clear, but we can imagine something like this:

from typing import TypeVar, List

_a = TypeVar('alpha')

def alterne(liste1: List[_a], liste2: List[_a]) -> List[_a]:
    liste3 = []
    i, j = 0, 0
    n, m = len(liste1), len(liste2)
    while i < n and j < m:  # elements left in both
        liste3.append(liste1[i])
        i += 1
        liste3.append(liste2[j])
        j += 1
    while i < n:  # if n > m
        liste3.append(liste1[i])
        i += 1
    while j < m:  # or if n < m
        liste3.append(liste2[j])
        j += 1
    return liste3

alterne([3, 5], [2, 4, 6])     # [3, 2, 5, 4, 6]
alterne([1, 3, 5], [2, 4, 6])  # [1, 2, 3, 4, 5, 6]
alterne([1, 3, 5], [4, 6])     # [1, 4, 3, 6, 5]
The complexity is linear, in $\mathcal{O}(\max(|\text{liste 1}|, |\text{liste 2}|))$. Exercise 6: nb_occurrences

from typing import TypeVar, List

_a = TypeVar('alpha')

def nb_occurrences(x: _a, liste: List[_a]) -> int:
    nb = 0
    for y in liste:
        if x == y:
            nb += 1
    return nb

nb_occurrences(0, [1, 2, 3, 4])        # 0
nb_occurrences(2, [1, 2, 3, 4])        # 1
nb_occurrences(2, [1, 2, 2, 3, 2, 4])  # 3
nb_occurrences(5, [1, 2, 3, 4])        # 0
Exercise 7: pairs. It is a filtering:

filter?

from typing import List

def pairs(liste: List[int]) -> List[int]:
    # return list(filter(lambda x: x % 2 == 0, liste))
    return [x for x in liste if x % 2 == 0]

pairs([1, 2, 3, 4, 5, 6])
pairs([1, 2, 3, 4, 5, 6, 7, 100000])
pairs([1, 2, 3, 4, 5, 6, 7, 100000000000])
pairs([1, 2, 3, 4, 5, 6, 7, 1000000000000000000])
Exercise 8: range

from typing import List, Optional

def myrange(n: int) -> List[int]:
    liste = []
    i = 1
    while i <= n:
        liste.append(i)
        i += 1
    return liste

myrange(4)  # [1, 2, 3, 4]

def intervale(a: int, b: Optional[int] = None) -> List[int]:
    if b is None:
        a, b = 1, a
    liste = []
    i = a
    while i <= b:
        liste.append(i)
        i += 1
    return liste

intervale(10)   # [1, 2, ..., 10]
intervale(1, 4) # [1, 2, 3, 4]
Exercise 9: premiers. Several possibilities. A sieve of Eratosthenes works well, and so does a filtering approach. I will not use arrays, so we are more or less reduced to filtering.

def racine(n: int) -> int:
    """Integer square root, by a simple loop."""
    i = 1
    for i in range(n + 1):
        if i * i > n:
            return i - 1
    return i

racine(1)       # 1
racine(5)       # 2
racine(102)     # 10
racine(120031)  # 346

from typing import List

def intervale2(a: int, b: int, pas: int = 1) -> List[int]:
    assert pas > 0
    liste = []
    i = a
    while i <= b:
        liste.append(i)
        i += pas
    return liste

intervale2(2, 12, 1)  # [2, 3, ..., 12]
intervale2(2, 12, 3)  # [2, 5, 8, 11]
A purely functional version is less easy than an imperative version with a boolean reference.

def estDivisible(n: int, k: int) -> bool:
    return (n % k) == 0

estDivisible(10, 2)  # True
estDivisible(10, 3)  # False
estDivisible(10, 4)  # False
estDivisible(10, 5)  # True

def estPremier(n: int) -> bool:
    return (n == 2) or (n == 3) or not any(map(lambda k: estDivisible(n, k), intervale2(2, racine(n), 1)))

for n in range(2, 20):
    print(n, list(map(lambda k: estDivisible(n, k), intervale2(2, racine(n), 1))))

from typing import List

def premiers(n: int) -> List[int]:
    return [p for p in intervale2(2, n, 1) if estPremier(p)]

premiers(10)   # [2, 3, 5, 7]
premiers(100)
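The imperative version with a boolean flag, mentioned above but not written out in the notebook, could look like this sketch (the function name est_premier_imperatif is ours):

```python
def est_premier_imperatif(n: int) -> bool:
    """Trial division with a boolean 'reference' updated in a loop."""
    if n < 2:
        return False
    pas_de_diviseur = True  # the boolean reference
    k = 2
    while k * k <= n:
        if n % k == 0:
            pas_de_diviseur = False
            break
        k += 1
    return pas_de_diviseur

print([n for n in range(2, 20) if est_premier_imperatif(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```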
Singly linked lists (defined by hand). Since these exercises were a bit awkward to write with Python "lists", which are not singly linked lists, I propose another solution: we define a small class representing a singly linked list, and we write the requested functions with that class. The ListeChainee class. We assume the lists we represent will not contain the value None, which is used to represent the absence of a head and/or a tail.

class ListeChainee():
    def __init__(self, hd=None, tl=None):
        self.hd = hd
        self.tl = tl

    def __repr__(self) -> str:
        if self.tl is None:
            if self.hd is None:
                return "[]"
            else:
                return f"{self.hd} :: []"
        else:
            return f"{self.hd} :: {self.tl}"

    def jolie(self) -> str:
        if self.tl is None:
            if self.hd is None:
                return "[]"
            else:
                return f"[{self.hd}]"
        else:
            j = self.tl.jolie()
            j = j.replace("[", "").replace("]", "")
            if j == "":
                return f"[{self.hd}]"
            else:
                return f"[{self.hd}, {j}]"

# equivalent to :: in OCaml
def insert(hd, tl: ListeChainee) -> ListeChainee:
    """Insert hd at the head of the linked list tl."""
    return ListeChainee(hd=hd, tl=tl)

# empty list, then some larger lists
vide = ListeChainee()    # []
l_1 = insert(1, vide)    # 1 :: [] ~= [1]
l_12 = insert(2, l_1)    # 2 :: 1 :: [] ~= [2, 1]
l_123 = insert(3, l_12)  # 3 :: 2 :: 1 :: []

print(vide)   # []
print(l_1)    # 1 :: []
print(l_12)   # 2 :: 1 :: []
print(l_123)  # 3 :: 2 :: 1 :: []

print(vide.jolie())   # []
print(l_1.jolie())    # [1]
print(l_12.jolie())   # [2, 1]
print(l_123.jolie())  # [3, 2, 1]
Exercise 1: taille. For example, the length is indeed in O(n), with n = taille(liste), with this recursive approach:

from typing import Optional

def taille(liste: Optional[ListeChainee]) -> int:
    if liste is None:
        return 0
    elif liste.tl is None:
        return 0 if liste.hd is None else 1
    return 1 + taille(liste.tl)

print(taille(vide))   # 0
print(taille(l_1))    # 1
print(taille(l_12))   # 2
print(taille(l_123))  # 3
Exercise 2: concat. I will start by writing a copy function that recursively copies a singly linked list, to make sure we never modify one of the argument lists in place.

def copy(liste: ListeChainee) -> ListeChainee:
    if liste.tl is None:
        return ListeChainee(hd=liste.hd, tl=None)
    else:
        return ListeChainee(hd=liste.hd, tl=copy(liste.tl))
We can check that this works by looking, for example, at the id of two objects where the second is a copy of the first: the ids are indeed different.

print(id(vide))
print(id(copy(vide)))
And so concatenating two lists is easy:

def concat(liste1: ListeChainee, liste2: ListeChainee) -> ListeChainee:
    if taille(liste1) == 0:
        return liste2
    elif taille(liste2) == 0:
        return liste1
    # new list: this way, changing queue.tl does NOT modify liste1
    resultat = copy(liste1)
    queue = resultat
    while taille(queue.tl) > 0:
        queue = queue.tl
    assert taille(queue.tl) == 0
    queue.tl = ListeChainee(hd=liste2.hd, tl=liste2.tl)
    return resultat

print(concat(vide, l_1))
print(vide)  # not modified: []
print(l_1)   # not modified: 1 :: []
concat(l_1, l_12)     # 1 :: 2 :: 1 :: []
concat(l_1, l_123)    # 1 :: 3 :: 2 :: 1 :: []
concat(l_1, vide)     # 1 :: []
concat(l_12, vide)    # 2 :: 1 :: []
concat(l_12, l_1)     # 2 :: 1 :: 1 :: []
concat(l_123, l_123)  # 3 :: 2 :: 1 :: 3 :: 2 :: 1 :: []
Exercise 3: appartient. This is linear time in the worst case.

from typing import Optional

def appartient(x, liste: Optional[ListeChainee]) -> bool:
    if liste is None or liste.hd is None:
        return False
    if liste.hd == x:
        return True
    return appartient(x, liste.tl)

assert appartient(0, vide) == False
assert appartient(0, l_1) == False
assert appartient(0, l_12) == False
assert appartient(0, l_123) == False
assert appartient(1, l_1) == True
assert appartient(1, l_12) == True
assert appartient(1, l_123) == True
assert appartient(2, l_1) == False
assert appartient(2, l_12) == True
assert appartient(2, l_123) == True
assert appartient(3, l_1) == False
assert appartient(3, l_12) == False
assert appartient(3, l_123) == True
Exercise 4: miroir. This is quadratic time, because of all the copies:

def miroir(liste: ListeChainee) -> ListeChainee:
    if taille(liste) <= 1:
        return copy(liste)
    else:
        hd, tl = liste.hd, copy(liste.tl)        # O(n)
        juste_hd = ListeChainee(hd=hd, tl=None)  # O(1)
        return concat(miroir(tl), juste_hd)      # O(n^2) + O(n) because of concat

print(miroir(vide))   # []        => []
print(miroir(l_1))    # [1]       => [1]
print(miroir(l_12))   # [2, 1]    => [1, 2]
print(miroir(l_123))  # [3, 2, 1] => [1, 2, 3]
Exercise 5: alterne. The expected semantics were not very clear, but we can imagine something like this: if one of the two lists is empty, take the other; if both are non-empty, take the head of l1, then the head of l2, then alterne(tail of l1, tail of l2).

def alterne(liste1: ListeChainee, liste2: ListeChainee) -> ListeChainee:
    if taille(liste1) == 0:
        return copy(liste2)  # copy, so that nothing is modified
    if taille(liste2) == 0:
        return copy(liste1)  # copy, so that nothing is modified
    h1, t1 = liste1.hd, liste1.tl
    h2, t2 = liste2.hd, liste2.tl
    return insert(h1, insert(h2, alterne(t1, t2)))

print(alterne(l_1, l_12))     # [1, 2, 1]
print(alterne(l_12, l_1))     # [2, 1, 1]
print(alterne(l_123, l_1))    # [3, 1, 2, 1]
print(alterne(l_123, l_12))   # [3, 2, 2, 1, 1]
print(alterne(l_123, l_123))  # [3, 3, 2, 2, 1, 1]
print(alterne(l_12, l_123))   # [2, 3, 1, 2, 1]
print(alterne(l_1, l_123))    # [1, 3, 2, 1]
The complexity is quadratic, in $\mathcal{O}(\max(|\text{liste 1}|, |\text{liste 2}|)^2)$, because of the copies. Exercise 6: nb_occurrences. This is linear time in every case.

def nb_occurrences(x, liste: ListeChainee) -> int:
    if liste is None or liste.hd is None:
        return 0
    count = 1 if x == liste.hd else 0
    if liste.tl is None:
        return count
    return count + nb_occurrences(x, liste.tl)

assert nb_occurrences(1, vide) == 0
assert nb_occurrences(1, l_1) == 1
assert nb_occurrences(1, l_12) == 1
assert nb_occurrences(2, l_12) == 1
assert nb_occurrences(1, l_123) == 1
assert nb_occurrences(2, l_123) == 1
assert nb_occurrences(3, l_123) == 1
assert nb_occurrences(1, concat(l_1, l_1)) == 2
assert nb_occurrences(2, concat(l_1, l_12)) == 1
assert nb_occurrences(3, concat(l_12, l_1)) == 0
assert nb_occurrences(1, concat(l_12, l_12)) == 2
assert nb_occurrences(2, concat(l_12, l_12)) == 2
assert nb_occurrences(1, concat(l_123, concat(l_1, l_1))) == 3
assert nb_occurrences(2, concat(l_123, concat(l_1, l_12))) == 2
assert nb_occurrences(3, concat(l_123, concat(l_12, l_1))) == 1
assert nb_occurrences(3, concat(l_123, concat(l_12, l_12))) == 1
We can easily write a tail-recursive variant:

def nb_occurrences(x, liste: ListeChainee, count=0) -> int:
    if liste is None or liste.hd is None:
        return count
    count += 1 if x == liste.hd else 0
    if liste.tl is None:
        return count
    return nb_occurrences(x, liste.tl, count=count)
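Note that CPython does not perform tail-call elimination, so even a tail-recursive variant still consumes one stack frame per call and is bounded by the recursion limit. A quick standalone check (the function compte is ours, for illustration only):

```python
def compte(n: int, acc: int = 0) -> int:
    # Tail-recursive in style, but CPython still pushes a frame per call
    if n == 0:
        return acc
    return compte(n - 1, acc + 1)

print(compte(500))  # 500: fine, well under the default recursion limit
try:
    compte(10**6)
except RecursionError:
    print("RecursionError: CPython does not eliminate tail calls")
```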
Exercise 7: pairs. It is a filtering by the predicate x % 2 == 0. We might as well write the generic filtering function:

def filtrer(liste: ListeChainee, predicate) -> ListeChainee:
    if liste is None or liste.hd is None:  # list of size 0
        return ListeChainee(hd=None, tl=None)
    elif liste.tl is None:  # list of size 1
        if predicate(liste.hd):  # return [hd]
            return ListeChainee(hd=liste.hd, tl=None)
        else:  # return []
            return ListeChainee(hd=None, tl=None)
    else:  # list of size >= 2
        if predicate(liste.hd):
            return insert(liste.hd, filtrer(liste.tl, predicate))
        else:
            return filtrer(liste.tl, predicate)
And then it is quick:

def pairs(liste: ListeChainee) -> ListeChainee:
    def predicate(x):
        return (x % 2) == 0
    # also: predicate = lambda x: (x % 2) == 0
    return filtrer(liste, predicate)

def impairs(liste: ListeChainee) -> ListeChainee:
    def predicate(x):
        return (x % 2) == 1
    return filtrer(liste, predicate)

print(pairs(vide))   # []
print(pairs(l_1))    # []
print(pairs(l_12))   # [2]
print(pairs(l_123))  # [2]
print(pairs(insert(4, insert(6, insert(8, l_123)))))    # [4, 6, 8, 2]
print(pairs(insert(5, insert(6, insert(8, l_123)))))    # [6, 8, 2]
print(impairs(vide))   # []
print(impairs(l_1))    # [1]
print(impairs(l_12))   # [1]
print(impairs(l_123))  # [3, 1]
print(impairs(insert(4, insert(6, insert(8, l_123)))))  # [3, 1]
print(impairs(insert(5, insert(6, insert(8, l_123)))))  # [5, 3, 1]
Exercise 8: range. This is linear time, but note that the elements come out in decreasing order:

def myrange(n: int) -> ListeChainee:
    if n <= 0:
        return ListeChainee(hd=None, tl=None)
    elif n == 1:
        return ListeChainee(hd=1, tl=None)  # = insert(1, vide)
    else:
        return ListeChainee(hd=n, tl=myrange(n - 1))

print(myrange(1))  # 1 :: []
print(myrange(2))  # 2 :: 1 :: []
print(myrange(3))  # 3 :: 2 :: 1 :: []
print(myrange(4))  # 4 :: 3 :: 2 :: 1 :: []
If we want them in increasing order, we would need miroir, which is quadratic. We might as well directly write a function intervale(a, b) that returns the singly linked list a :: (a+1) :: ... :: b:

from typing import Optional

def intervale(a: int, b: Optional[int] = None) -> ListeChainee:
    if b is None:
        a, b = 1, a
    n = b - a
    if n < 0:     # [a..b] = []
        return ListeChainee(hd=None, tl=None)
    elif n == 0:  # [a..b] = [a]
        return ListeChainee(hd=a, tl=None)
    else:         # [a..b] = a :: [a+1..b]
        return ListeChainee(hd=a, tl=intervale(a + 1, b))

print(intervale(10))      # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(intervale(1, 4))    # [1, 2, 3, 4]
print(intervale(13, 13))  # [13]
print(intervale(13, 10))  # []
Another approach is to write the function mymap and observe that intervale_bis(a, b) = miroir(mymap(lambda x: x + (a - 1), myrange(b - a + 1))).

from typing import Callable

def mymap(fonction: Callable, liste: ListeChainee) -> ListeChainee:
    if liste is None or liste.hd is None:  # list of size 0
        return ListeChainee(hd=None, tl=None)
    elif liste.tl is None:  # list of size 1
        return ListeChainee(hd=fonction(liste.hd), tl=None)
    else:  # list of size >= 2
        return ListeChainee(hd=fonction(liste.hd), tl=mymap(fonction, liste.tl))

print(myrange(10))
print(mymap(lambda x: x, myrange(10)))

def intervale_bis(a: int, b: int) -> ListeChainee:
    return miroir(mymap(lambda x: x + (a - 1), myrange(b - a + 1)))

print(intervale_bis(1, 10))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(intervale_bis(1, 4))    # [1, 2, 3, 4]
print(intervale_bis(13, 13))  # [13]
print(intervale_bis(13, 10))  # []
Exercise 9: premiers. Several possibilities. A sieve of Eratosthenes works well, and so does a filtering approach. I will not use arrays, so we are more or less reduced to filtering. We need the following functions: compute the integer square root of $n$ (very easy with a loop); compute the numbers between 2 and $\lfloor \sqrt{n} \rfloor$; filter that list to keep those that divide $n$; and say that $n$ is prime if it has no nontrivial divisor.

def racine(n: int) -> int:
    i = 1
    for i in range(n + 1):
        if i * i > n:
            return i - 1
    return i

print(racine(1))       # 1
print(racine(5))       # 2
print(racine(102))     # 10
print(racine(120031))  # 346

from typing import Optional

def intervale2(a: int, b: Optional[int] = None, pas: int = 1) -> ListeChainee:
    if b is None:
        a, b = 1, a
    n = b - a
    if n < 0:     # [a..b::p] = []
        return ListeChainee(hd=None, tl=None)
    elif n == 0:  # [a..b::p] = [a]
        return ListeChainee(hd=a, tl=None)
    else:         # [a..b::p] = a :: [a+p..b::p]
        return ListeChainee(hd=a, tl=intervale2(a + pas, b=b, pas=pas))

print(intervale2(1, 10, 2))   # [1, 3, 5, 7, 9]
print(intervale2(1, 4, 2))    # [1, 3]
print(intervale2(13, 13, 2))  # [13]
print(intervale2(13, 10, 2))  # []
A purely functional version is less easy than an imperative version with a boolean reference.

def estDivisible(n: int, k: int) -> bool:
    return (n % k) == 0

estDivisible(10, 2)  # True
estDivisible(10, 3)  # False
estDivisible(10, 4)  # False
estDivisible(10, 5)  # True
We are ready to write estPremier:

def estPremier(n: int) -> bool:
    return taille(filtrer(intervale2(2, racine(n), 1), lambda k: estDivisible(n, k))) == 0
Indeed, it suffices to first build the list of integers from 2 to $\lfloor \sqrt{n} \rfloor$, filter it to keep those that divide $n$, and then check: if there is no divisor (taille(..) == 0), then $n$ is prime; if $n$ has at least one divisor, then $n$ is not prime.

for n in range(2, 20):
    print("Small divisors of", n, " -> ", filtrer(intervale2(2, racine(n), 1), lambda k: estDivisible(n, k)))
In the example above, we see the prime numbers as those with no divisor, and the non-prime numbers as those with at least one divisor.

def premiers(n: int) -> ListeChainee:
    return filtrer(intervale2(2, n, 1), estPremier)

premiers(10)   # [2, 3, 5, 7]
premiers(100)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
Some comparison-based sorts. We sort in increasing order.

test = [3, 1, 8, 4, 5, 6, 1, 2]
Exercise 10: insertion sort

from typing import TypeVar, List

_a = TypeVar('alpha')

def insere(x: _a, liste: List[_a]) -> List[_a]:
    if len(liste) == 0:
        return [x]
    t, q = liste[0], liste[1:]
    if x <= t:
        return [x] + liste
    return [t] + insere(x, q)

def tri_insertion(liste: List[_a]) -> List[_a]:
    if len(liste) == 0:
        return []
    t, q = liste[0], liste[1:]
    return insere(t, tri_insertion(q))

tri_insertion(test)  # [1, 1, 2, 3, 4, 5, 6, 8]
Time complexity: $\mathcal{O}(n^2)$. Exercise 11: generic insertion sort

from typing import TypeVar, List, Callable

_a = TypeVar('alpha')

def insere2(ordre: Callable[[_a, _a], bool], x: _a, liste: List[_a]) -> List[_a]:
    if len(liste) == 0:
        return [x]
    t, q = liste[0], liste[1:]
    if ordre(x, t):
        return [x] + liste
    return [t] + insere2(ordre, x, q)

def tri_insertion2(ordre: Callable[[_a, _a], bool], liste: List[_a]) -> List[_a]:
    if len(liste) == 0:
        return []
    t, q = liste[0], liste[1:]
    return insere2(ordre, t, tri_insertion2(ordre, q))

ordre_croissant = lambda x, y: x <= y
tri_insertion2(ordre_croissant, test)    # [1, 1, 2, 3, 4, 5, 6, 8]
ordre_decroissant = lambda x, y: x >= y
tri_insertion2(ordre_decroissant, test)  # [8, 6, 5, 4, 3, 2, 1, 1]
Exercise 12: selection sort

from typing import TypeVar, List, Tuple

_a = TypeVar('alpha')

def selectionne_min(liste: List[_a]) -> Tuple[_a, List[_a]]:
    if len(liste) == 0:
        raise ValueError("selectionne_min on an empty list")
    def cherche_min(mini: _a, autres: List[_a], reste: List[_a]) -> Tuple[_a, List[_a]]:
        if len(reste) == 0:
            return (mini, autres)
        t, q = reste[0], reste[1:]
        if t < mini:
            return cherche_min(t, [mini] + autres, q)
        return cherche_min(mini, [t] + autres, q)
    t, q = liste[0], liste[1:]
    return cherche_min(t, [], q)

test
selectionne_min(test)
(Note that the autres list has been reversed.)

def tri_selection(liste: List[_a]) -> List[_a]:
    if len(liste) == 0:
        return []
    mini, autres = selectionne_min(liste)
    return [mini] + tri_selection(autres)

tri_selection(test)  # [1, 1, 2, 3, 4, 5, 6, 8]
Time complexity: $\mathcal{O}(n^2)$. Exercises 13, 14, 15: merge sort

from typing import TypeVar, List, Tuple

_a = TypeVar('alpha')

def separe(liste: List[_a]) -> Tuple[List[_a], List[_a]]:
    if len(liste) == 0:
        return ([], [])
    elif len(liste) == 1:
        return ([liste[0]], [])
    x, y, q = liste[0], liste[1], liste[2:]
    a, b = separe(q)
    return ([x] + a, [y] + b)

test
separe(test)

def fusion(liste1: List[_a], liste2: List[_a]) -> List[_a]:
    if (len(liste1), len(liste2)) == (0, 0):
        return []
    elif len(liste1) == 0:
        return liste2
    elif len(liste2) == 0:
        return liste1
    # both are non-empty
    x, a = liste1[0], liste1[1:]
    y, b = liste2[0], liste2[1:]
    if x <= y:
        return [x] + fusion(a, [y] + b)
    return [y] + fusion([x] + a, b)

fusion([1, 3, 7], [2, 3, 8])  # [1, 2, 3, 3, 7, 8]

def tri_fusion(liste: List[_a]) -> List[_a]:
    if len(liste) <= 1:
        return liste
    a, b = separe(liste)
    return fusion(tri_fusion(a), tri_fusion(b))

tri_fusion(test)  # [1, 1, 2, 3, 4, 5, 6, 8]
Time complexity: $\mathcal{O}(n \log n)$. Comparisons

%timeit tri_insertion(test)
%timeit tri_selection(test)
%timeit tri_fusion(test)

from sys import setrecursionlimit
setrecursionlimit(100000)  # needed to test the recursive functions on large lists

import random

def test_random(n: int) -> List[int]:
    return [random.randint(-1000, 1000) for _ in range(n)]

for n in [10, 100, 1000]:
    print("\nFor n =", n)
    for tri in [tri_insertion, tri_selection, tri_fusion]:
        print(" and tri = {}".format(tri.__name__))
        %timeit tri(test_random(n))
This is enough to check that merge sort is far more efficient than the others. We also see that insertion and selection sort scale worse than linearly, while merge sort is almost linear (for small $n$, $n \log n$ is almost linear). Lists: higher-order functions. I do not correct the questions already treated in TP1. Exercise 16: applique

from typing import TypeVar, List, Callable

_a, _b = TypeVar('_a'), TypeVar('_b')

def applique(f: Callable[[_a], _b], liste: List[_a]) -> List[_b]:
    # Cheating: return list(map(f, liste))
    # 1st approach: return [f(x) for x in liste]
    # 2nd approach:
    fliste = []
    for x in liste:
        fliste.append(f(x))
    return fliste
    # 3rd approach (never reached, shown for reference):
    # n = len(liste)
    # if n == 0:
    #     return []
    # fliste = [liste[0] for _ in range(n)]
    # for i in range(n):
    #     fliste[i] = f(liste[i])
    # return fliste
Exercise 17

def premiers_carres_parfaits(n: int) -> List[int]:
    return applique(lambda x: x * x, list(range(1, n + 1)))

premiers_carres_parfaits(12)  # [1, 4, 9, ..., 144]
Exercise 18: itere

from typing import TypeVar, List, Callable

_a = TypeVar('_a')

def itere(f: Callable[[_a], None], liste: List[_a]) -> None:
    for x in liste:
        f(x)
Exercise 19

print_int = lambda i: print("{}".format(i))

def affiche_liste_entiers(liste: List[int]) -> None:
    print("Debut")
    itere(print_int, liste)
    print("Fin")

affiche_liste_entiers([1, 2, 4, 5, 12011993])
Exercise 20: qqsoit and ilexiste

from typing import TypeVar, List, Callable

_a = TypeVar('_a')

# Like all(map(f, liste))
def qqsoit(f: Callable[[_a], bool], liste: List[_a]) -> bool:
    for x in liste:
        if not f(x):
            return False  # early exit
    return True

# Like any(map(f, liste))
def ilexiste(f: Callable[[_a], bool], liste: List[_a]) -> bool:
    for x in liste:
        if f(x):
            return True  # early exit
    return False

qqsoit(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])    # False
ilexiste(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])  # True

%timeit qqsoit(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
%timeit all(map(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5]))
%timeit ilexiste(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])
%timeit any(map(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5]))
Exercise 21: appartient, version 2

def appartient_curry(x: _a) -> Callable[[List[_a]], bool]:
    return lambda liste: ilexiste(lambda y: x == y, liste)

def appartient(x: _a, liste: List[_a]) -> bool:
    return ilexiste(lambda y: x == y, liste)

def toutes_egales(x: _a, liste: List[_a]) -> bool:
    return qqsoit(lambda y: x == y, liste)

appartient_curry(1)([1, 2, 3])  # True
appartient(1, [1, 2, 3])        # True
appartient(5, [1, 2, 3])        # False
toutes_egales(1, [1, 2, 3])     # False
toutes_egales(5, [1, 2, 3])     # False
Can our implementation be faster than the x in liste test? No, but it is just as fast. That's already not bad!

%timeit appartient(random.randint(-10, 10), [random.randint(-1000, 1000) for _ in range(1000)])
%timeit random.randint(-10, 10) in [random.randint(-1000, 1000) for _ in range(1000)]
Exercise 22: filtre

from typing import TypeVar, List, Callable

_a = TypeVar('_a')

# Like list(filter(f, liste))
def filtre(f: Callable[[_a], bool], liste: List[_a]) -> List[_a]:
    # return [x for x in liste if f(x)]
    liste2 = []
    for x in liste:
        if f(x):
            liste2.append(x)
    return liste2

filtre(lambda x: (x % 2) == 0, [1, 2, 3, 4, 5])  # [2, 4]
filtre(lambda x: (x % 2) != 0, [1, 2, 3, 4, 5])  # [1, 3, 5]
Exercise 23. I leave premiers for you to figure out.

pairs = lambda liste: filtre(lambda x: (x % 2) == 0, liste)
impairs = lambda liste: filtre(lambda x: (x % 2) != 0, liste)

pairs(list(range(10)))    # [0, 2, 4, 6, 8]
impairs(list(range(10)))  # [1, 3, 5, 7, 9]
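For the premiers case left as an exercise, one possible sketch in the same style, standalone (filtre is repeated so the block runs on its own, and the helper est_premier is ours, not from the notebook):

```python
from typing import Callable, List

def filtre(f: Callable[[int], bool], liste: List[int]) -> List[int]:
    # same generic filter as above
    liste2 = []
    for x in liste:
        if f(x):
            liste2.append(x)
    return liste2

def est_premier(n: int) -> bool:
    # trial division up to sqrt(n); helper name is ours
    if n < 2:
        return False
    k = 2
    while k * k <= n:
        if n % k == 0:
            return False
        k += 1
    return True

premiers = lambda liste: filtre(est_premier, liste)

print(premiers(list(range(20))))  # [2, 3, 5, 7, 11, 13, 17, 19]
```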
Exercise 24: reduit

from typing import TypeVar, List, Callable

_a, _b = TypeVar('_a'), TypeVar('_b')

# Like functools.reduce(f, liste, acc)
def reduit_rec(f: Callable[[_a, _b], _a], acc: _a, liste: List[_b]) -> _a:
    if len(liste) == 0:
        return acc
    h, q = liste[0], liste[1:]
    return reduit_rec(f, f(acc, h), q)

# Non-recursive version, much more efficient
def reduit(f: Callable[[_a, _b], _a], acc: _a, liste: List[_b]) -> _a:
    acc_value = acc
    for x in liste:
        acc_value = f(acc_value, x)
    return acc_value
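As a cross-check, reduit behaves exactly like the standard library's functools.reduce with an initializer (reduit is repeated so the block runs on its own):

```python
from functools import reduce

def reduit(f, acc, liste):
    # iterative left fold, as above
    acc_value = acc
    for x in liste:
        acc_value = f(acc_value, x)
    return acc_value

print(reduit(lambda a, x: a + x, 0, [1, 2, 3, 4]))  # 10
print(reduce(lambda a, x: a + x, [1, 2, 3, 4], 0))  # 10
```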
Very handy for computing sums, in particular. Exercise 25: somme, produit

from operator import add

somme_rec = lambda liste: reduit_rec(add, 0, liste)
somme = lambda liste: reduit(add, 0, liste)

somme_rec(list(range(10)))  # 45
somme(list(range(10)))      # 45
sum(list(range(10)))        # 45

%timeit somme_rec(list(range(10)))
%timeit somme(list(range(10)))
%timeit sum(list(range(10)))
For small lists, the recursive version is as efficient as the imperative one. Neat!

%timeit somme_rec(list(range(1000)))
%timeit somme(list(range(1000)))
%timeit sum(list(range(1000)))

from operator import mul

produit = lambda liste: reduit(mul, 1, liste)
produit(list(range(1, 6)))  # 5! = 120
Bonus:

def factorielle(n: int) -> int:
    return produit(range(1, n + 1))

for n in range(1, 15):
    print("{:>7}! = {:>13}".format(n, factorielle(n)))
Exercise 26: miroir, version 2

miroir = lambda liste: reduit(lambda l, x: [x] + l, [], liste)
miroir([2, 3, 5, 7, 11])  # [11, 7, 5, 3, 2]
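The same fold works with the standard library's functools.reduce, which gives an equivalent one-liner (standalone sketch):

```python
from functools import reduce

# Left fold that prepends each element: reverses the list
miroir = lambda liste: reduce(lambda l, x: [x] + l, liste, [])

print(miroir([2, 3, 5, 7, 11]))  # [11, 7, 5, 3, 2]
print(miroir([]))                # []
```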
Beware: in Python, lists are NOT singly linked, so lambda l, x: [x] + l takes time linear in $|l| = n$, not $\mathcal{O}(1)$ as fun l x -> x :: l does in Caml/OCaml. Trees. Warning: the last two parts are much harder in Python than in Caml. Exercise 27

from typing import Dict, Optional, Tuple

# Impossible to define a type recursively, unlike in Caml
arbre_bin = Dict[str, Optional[Tuple[Dict, Dict]]]

from pprint import pprint

arbre_test = {'Noeud': (
    {'Noeud': (
        {'Noeud': (
            {'Feuille': None},
            {'Feuille': None}
        )},
        {'Feuille': None}
    )},
    {'Feuille': None}
)}

pprint(arbre_test)
With an improved syntax, we get very close to the Caml/OCaml syntax:

Feuille = {'Feuille': None}
Noeud = lambda x, y: {'Noeud': (x, y)}

arbre_test = Noeud(Noeud(Noeud(Feuille, Feuille), Feuille), Feuille)
pprint(arbre_test)
Exercise 28. Count the number of leaves and internal nodes.

def taille(a: arbre_bin) -> int:
    # Pattern matching ~= if, elif, ... on the depth-1 keys
    # of the dictionary (a single key)
    if 'Feuille' in a:
        return 1
    elif 'Noeud' in a:
        x, y = a['Noeud']
        return 1 + taille(x) + taille(y)

taille(arbre_test)  # 7
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
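To complement `taille`, one can count leaves and internal nodes separately; a self-contained sketch (`compte` is our own helper name, with `Feuille`/`Noeud` redefined locally), illustrating that a full binary tree always has one more leaf than internal nodes:

```python
# Count leaves and internal nodes separately (for these trees built only
# from leaves and binary nodes: #leaves = #internal nodes + 1).
Feuille = {'Feuille': None}
Noeud = lambda g, d: {'Noeud': (g, d)}

def compte(a):
    if 'Feuille' in a:
        return (1, 0)                      # (leaves, internal nodes)
    g, d = a['Noeud']
    fg, ng = compte(g)
    fd, nd = compte(d)
    return (fg + fd, 1 + ng + nd)

arbre = Noeud(Noeud(Noeud(Feuille, Feuille), Feuille), Feuille)
print(compte(arbre))  # (4, 3)
```

Note that `taille(arbre)` above returns 4 + 3 = 7, consistent with this split count.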
Exercise 29
def hauteur(a : arbre_bin) -> int: if 'Feuille' in a: return 0 elif 'Noeud' in a: x, y = a['Noeud'] return 1 + max(hauteur(x), hauteur(y)) hauteur(arbre_test) # 3
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
Exercise 30 Bonus. (Write a function testing whether a tree labeled with integers is a tournament.) Binary tree traversals After a few exercises manipulating this dictionary structure, writing what follows is not too hard. Exercise 31
from typing import TypeVar, Union, List F = TypeVar('F') N = TypeVar('N') element_parcours = Union[F, N] parcours = List[element_parcours]
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
Exercise 32: naive traversals (quadratic complexity)
def parcours_prefixe(a : arbre_bin) -> parcours: if 'Feuille' in a: return [F] elif 'Noeud' in a: g, d = a['Noeud'] return [N] + parcours_prefixe(g) + parcours_prefixe(d) parcours_prefixe(arbre_test) def parcours_postfixe(a : arbre_bin) -> parcours: if 'Feuille' in a: return [F] elif 'Noeud' in a: g, d = a['Noeud'] return parcours_postfixe(g) + parcours_postfixe(d) + [N] parcours_postfixe(arbre_test) def parcours_infixe(a : arbre_bin) -> parcours: if 'Feuille' in a: return [F] elif 'Noeud' in a: g, d = a['Noeud'] return parcours_infixe(g) + [N] + parcours_infixe(d) parcours_infixe(arbre_test)
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
Why do they have quadratic complexity? Concatenation (@ in OCaml, + in Python) does not run in constant time but in time linear in the length of the lists being copied (for OCaml's @, linear in the left operand). Exercise 33: linear traversals We add an auxiliary function and an argument vus, a list that stores the elements seen in traversal order
def parcours_prefixe2(a : arbre_bin) -> parcours: def parcours(vus, b): if 'Feuille' in b: vus.insert(0, F) return vus elif 'Noeud' in b: vus.insert(0, N) g, d = b['Noeud'] return parcours(parcours(vus, g), d) p = parcours([], a) return p[::-1] parcours_prefixe2(arbre_test) def parcours_postfixe2(a : arbre_bin) -> parcours: def parcours(vus, b): if 'Feuille' in b: vus.insert(0, F) return vus elif 'Noeud' in b: g, d = b['Noeud'] p = parcours(parcours(vus, g), d) p.insert(0, N) return p p = parcours([], a) return p[::-1] parcours_postfixe2(arbre_test) def parcours_infixe2(a : arbre_bin) -> parcours: def parcours(vus, b): if 'Feuille' in b: vus.insert(0, F) return vus elif 'Noeud' in b: g, d = b['Noeud'] p = parcours(vus, g) p.insert(0, N) return parcours(p, d) p = parcours([], a) return p[::-1] parcours_infixe2(arbre_test)
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
Exercise 34: breadth-first and depth-first traversals For a FIFO queue, we use the collections.deque module (a double-ended queue — note that, despite the original wording, this is not a priority queue).
from collections import deque def parcours_largeur(a : arbre_bin) -> parcours: file = deque() # fonction avec effet de bord sur la file def vasy() -> parcours: if len(file) == 0: return [] else: b = file.pop() if 'Feuille' in b: # return [F] + vasy() v = vasy() v.insert(0, F) return v elif 'Noeud' in b: g, d = b['Noeud'] file.insert(0, g) file.insert(0, d) # return [N] + vasy() v = vasy() v.insert(0, N) return v file.insert(0, a) return vasy() parcours_largeur(arbre_test)
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
Replacing the queue by a stack (a plain list), we obtain the depth-first traversal, with the same complexity.
def parcours_profondeur(a : arbre_bin) -> parcours: pile = [] # function with a side effect on the stack def vasy() -> parcours: if len(pile) == 0: return [] else: b = pile.pop() if 'Feuille' in b: # return [F] + vasy() v = vasy() v.insert(0, F) return v elif 'Noeud' in b: g, d = b['Noeud'] pile.append(g) pile.append(d) # return [N] + vasy() v = vasy() v.insert(0, N) return v pile.append(a) return vasy() parcours_profondeur(arbre_test)
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
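The same depth-first traversal can also be written fully iteratively, with no recursion at all; a self-contained sketch (right child popped first, as in the stack version above; names and string markers are ours):

```python
Feuille = {'Feuille': None}
Noeud = lambda g, d: {'Noeud': (g, d)}
F, N = 'F', 'N'

def parcours_profondeur_iteratif(a):
    """Depth-first traversal with an explicit stack and no recursion."""
    pile, vus = [a], []
    while pile:
        b = pile.pop()
        if 'Feuille' in b:
            vus.append(F)
        else:
            g, d = b['Noeud']
            vus.append(N)
            pile.append(g)   # pushed first, popped last
            pile.append(d)   # pushed last, popped first (right child first)
    return vus

arbre = Noeud(Noeud(Noeud(Feuille, Feuille), Feuille), Feuille)
print(parcours_profondeur_iteratif(arbre))  # ['N', 'F', 'N', 'F', 'N', 'F', 'F']
```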
Exercise 35, and last Reconstruction from the prefix traversal
test_prefixe = parcours_prefixe2(arbre_test) test_prefixe
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
The idea of this solution is as follows: we would like a recursive function doing the work; the problem is that a prefix traversal either starts with F, in which case the tree must be a leaf, or it has the form N::q where q is no longer a prefix traversal but the concatenation of TWO prefix traversals, so we can no longer call the function on q. We therefore write a function that takes a list containing several concatenated traversals and returns the tree corresponding to the first traversal together with what was not consumed:
from typing import Tuple def reconstruit_prefixe(par : parcours) -> arbre_bin: def reconstruit(p : parcours) -> Tuple[arbre_bin, parcours]: if len(p) == 0: raise ValueError("parcours invalide pour reconstruit_prefixe") elif p[0] == F: return (Feuille, p[1:]) elif p[0] == N: g, q = reconstruit(p[1:]) d, r = reconstruit(q) return (Noeud(g, d), r) # call it a, p = reconstruit(par) if len(p) == 0: return a else: raise ValueError("parcours invalide pour reconstruit_prefixe") reconstruit_prefixe([F]) reconstruit_prefixe(test_prefixe)
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
And this example will fail:
reconstruit_prefixe([N, F, F] + test_prefixe) # fails
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
Reconstruction from the breadth-first traversal This is not obvious if you have never seen it. The idea is to use a queue to store the trees rebuilt little by little from the leaves. The queue lets us fetch the right subtrees when a node is encountered
largeur_test = parcours_largeur(arbre_test) largeur_test from collections import deque def reconstruit_largeur(par : parcours) -> arbre_bin: file = deque() # Function with side effects def lire_element(e : element_parcours) -> None: if e == F: file.append(Feuille) elif e == N: d = file.popleft() g = file.popleft() # mind the order! file.append(Noeud(g, d)) # Apply this function to each element of the traversal for e in reversed(par): lire_element(e) if len(file) == 1: return file.popleft() else: raise ValueError("parcours invalide pour reconstruit_largeur") largeur_test reconstruit_largeur(largeur_test) arbre_test
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
The same algorithm (well, almost: up to swapping g and d) with a stack gives another version of the prefix-traversal reconstruction.
from collections import deque def reconstruit_prefixe2(par : parcours) -> arbre_bin: pile = deque() # Function with side effects def lire_element(e : element_parcours) -> None: if e == F: pile.append(Feuille) elif e == N: g = pile.pop() d = pile.pop() # mind the order! pile.append(Noeud(g, d)) # Apply this function to each element of the traversal for e in reversed(par): lire_element(e) if len(pile) == 1: return pile.pop() else: raise ValueError("parcours invalide pour reconstruit_prefixe2") prefixe_test = parcours_prefixe2(arbre_test) prefixe_test reconstruit_prefixe2(prefixe_test) arbre_test
agreg/TP_Programmation_2017-18/TP2__Python.ipynb
Naereen/notebooks
mit
Text classification with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews. Prerequisites Install the required packages To run this example, install the required packages, including the Model Maker package from the GitHub repo.
!pip install -q tflite-model-maker
tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
Intel-Corporation/tensorflow
apache-2.0
Get the MNIST Dataset
def get_mnist(sc, mnist_path): (train_images, train_labels) = mnist.read_data_sets(mnist_path, "train") train_images = np.reshape(train_images, (60000, 1, 28, 28)) rdd_train_images = sc.parallelize(train_images) rdd_train_sample = rdd_train_images.map(lambda img: Sample.from_ndarray( (img > 128) * 1.0, [(img > 128) * 1.0, (img > 128) * 1.0])) return rdd_train_sample mnist_path = "/tmp/mnist" # please replace this train_data = get_mnist(sc, mnist_path) # (train_images, train_labels) = mnist.read_data_sets(mnist_path, "train")
apps/variational-autoencoder/using_variational_autoencoder_to_generate_digital_numbers.ipynb
intel-analytics/BigDL
apache-2.0
<br> <br> Let's do it with gradient descent now
theta0 = 0 theta1 = 2 alpha = 0.1 iterations = 100 cost_log = [] theta_log = [] for j in range(iterations): cost = 0 grad = 0 for i in range(m): hx = theta1*X[i,0] + theta0 cost += pow((hx - y[i,0]),2) grad += ((hx - y[i,0]))*X[i,0] cost = cost/(2*m) grad = grad/m # gradient of (1/2m)*sum((hx-y)^2) is (1/m)*sum((hx-y)*x) theta1 = theta1 - alpha*grad cost_log.append(cost) theta_log.append(theta1) theta_log
lectures/lec04-linear-regression-example.ipynb
w4zir/ml17s
mit
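The double loop above can be vectorized with NumPy; a sketch on synthetic data (the notebook's `X` and `y` are defined elsewhere, so we generate our own here with a known slope of 1.5) for the no-intercept model $h(x) = \theta_1 x$:

```python
import numpy as np

# Vectorized gradient descent for h(x) = theta1 * x on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(0., 2., size=50)
y = 1.5 * X + rng.normal(0., 0.05, size=50)   # true slope ~ 1.5

theta1, alpha, m = 2.0, 0.1, len(X)
for _ in range(500):
    grad = ((theta1 * X - y) * X).sum() / m   # d/dtheta1 of (1/2m)*sum((h-y)^2)
    theta1 -= alpha * grad

print(theta1)  # converges near 1.5
```

The vectorized gradient replaces the inner `for i in range(m)` loop with a single array expression.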
We will produce some plots based on a frequency range to illustrate the concepts:
import matplotlib.pyplot as plt %matplotlib inline ff = np.linspace(0.01, 6., num=600) wn = 2.*np.pi*ff
Rayleigh_damping.ipynb
pxcandeias/py-notebooks
mit
Back to top Mass proportional damping Mass proportional damping means that the damping matrix is somehow a multiple of the mass matrix: \begin{equation} \mathbf{C} = \alpha \cdot \mathbf{M} \end{equation} where $\alpha$ is the constant of mass proportionality. In these circunstances, the dynamic equilibrium equation can be written as: \begin{equation} \mathbf{M} \times \mathbf{a(t)} + \alpha \cdot \mathbf{M} \times \mathbf{v(t)} + \mathbf{K} \times \mathbf{d(t)} = \mathbf{F(t)} \end{equation} Proceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations: \begin{equation} \mathbf{M_n} \times \mathbf{\ddot q_n(t)} + \alpha \cdot \mathbf{M_n} \times \mathbf{\dot q_n(t)} + \mathbf{K_n} \times \mathbf{q_n(t)} = \mathbf{F_n(t)} \end{equation} or, equivalently: \begin{equation} \mathbf{\ddot q_n(t)} + \alpha \cdot \mathbf{\dot q_n(t)} + \mathbf{\omega_n^2} \cdot \mathbf{q_n(t)} = \mathbf{a_n(t)} \end{equation} Comparing expressions, one obtains \begin{equation} \alpha = 2 \cdot \zeta_n \cdot \omega_n \Leftrightarrow \zeta_n = \frac{\alpha}{2 \cdot \omega_n} \end{equation} from where it can be seen that the mass proportional damping is a hyperbolic function of the vibration frequency $\omega_n$.
alpha = 0.1 zn_a = alpha/(2.*wn) plt.plot(wn, zn_a, label='mass proportional') plt.xlabel('Vibration frequency $\omega_n$ [rad/s]') plt.ylabel('Damping coefficient $\zeta_n$ [-]') plt.legend(loc='best') plt.grid(True) plt.xlim([0, 35.]) plt.ylim([0, 0.2]) plt.show()
Rayleigh_damping.ipynb
pxcandeias/py-notebooks
mit
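A quick numeric check of the hyperbolic relation $\zeta_n = \frac{\alpha}{2 \cdot \omega_n}$: doubling the frequency halves the damping coefficient. The frequency values below are arbitrary illustrations:

```python
import numpy as np

# Spot-check zeta_n = alpha / (2 * omega_n) for mass proportional damping.
alpha = 0.1
wn = np.array([1., 2., 10.])   # arbitrary vibration frequencies [rad/s]
zn = alpha / (2. * wn)
print(zn)  # [0.05  0.025 0.005]
```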
Back to top Stiffness proportional damping Stiffness proportional damping means that damping matrix is somehow a multiple of the stiffness matrix: \begin{equation} \mathbf{C} = \beta \cdot \mathbf{K} \end{equation} where $\beta$ is the constant of stiffness proportionality. In these circunstances, the dynamic equilibrium equation can be written as: \begin{equation} \mathbf{M} \times \mathbf{a(t)} + \beta \cdot \mathbf{K} \times \mathbf{v(t)} + \mathbf{K} \times \mathbf{d(t)} = \mathbf{F(t)} \end{equation} Proceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations: \begin{equation} \mathbf{M_n} \times \mathbf{\ddot q_n(t)} + \beta \cdot \mathbf{K_n} \times \mathbf{\dot q_n(t)} + \mathbf{K_n} \times \mathbf{q_n(t)} = \mathbf{F_n(t)} \end{equation} or, equivalently: \begin{equation} \mathbf{\ddot q_n(t)} + \beta \cdot \mathbf{\omega_n^2} \cdot \mathbf{\dot q_n(t)} + \mathbf{\omega_n^2} \cdot \mathbf{q_n(t)} = \mathbf{a_n(t)} \end{equation} Comparing expressions, one obtains \begin{equation} \beta \cdot \omega_n^2 = 2 \cdot \zeta \cdot \omega_n \Leftrightarrow \zeta_n = \frac{\omega_n \cdot \beta}{2} \end{equation} from where it can be seen that the stiffness proportional damping is a linear function of the vibration frequency $\omega_n$.
beta = 0.005 zn_b = (beta*wn)/2. plt.plot(wn, zn_b, label='stiffness proportional') plt.xlabel('Vibration frequency $\omega_n$ [rad/s]') plt.ylabel('Damping coefficient $\zeta_n$ [-]') plt.legend(loc='best') plt.grid(True) plt.xlim([0, 35.]) plt.ylim([0, 0.2]) plt.show()
Rayleigh_damping.ipynb
pxcandeias/py-notebooks
mit
Back to top Rayleigh damping When Rayleigh damping is considered it means that the damping coefficient is a combination of the two previous ones, that is, it is a multiple of mass and stifnness: \begin{equation} \mathbf{C} = \alpha \cdot \mathbf{M} + \beta \cdot \mathbf{K} \end{equation} where $\alpha$ and $\beta$ have the previous meanings. In these circunstances, the dynamic equilibrium equation can be written as: \begin{equation} \mathbf{M} \times \mathbf{a(t)} + \left[ \alpha \cdot \mathbf{M} + \beta \cdot \mathbf{K} \right] \times \mathbf{v(t)} + \mathbf{K} \times \mathbf{d(t)} = \mathbf{F(t)} \end{equation} Proceeding the same as above, one obtains the $NDOF$ independent modal equilibrium equations: \begin{equation} \mathbf{M_n} \times \mathbf{\ddot q_n(t)} + \left[ \alpha \cdot \mathbf{M_n} + \beta \cdot \mathbf{K_n} \right] \times \mathbf{\dot q_n(t)} + \mathbf{K_n} \times \mathbf{q_n(t)} = \mathbf{F_n(t)} \end{equation} or, equivalently: \begin{equation} \mathbf{\ddot q_n(t)} + \left[ \alpha + \beta \cdot \mathbf{\omega_n^2} \right] \cdot \mathbf{\dot q_n(t)} + \mathbf{\omega_n^2} \cdot \mathbf{q_n(t)} = \mathbf{a_n(t)} \end{equation} Comparing expressions, one obtains \begin{equation} \alpha + \beta \cdot \omega_n^2 = 2 \cdot \zeta \cdot \omega_n \Leftrightarrow \zeta_n = \frac{\alpha}{2 \cdot \omega_n} + \frac{\omega_n \cdot \beta}{2} \end{equation} from where it can be seen that the Rayleigh damping is the sum of the previous linear and hyperbolic functions of the vibration frequency $\omega_n$.
# plt.hold was removed in matplotlib 3.0; axes now accumulate plots by default plt.plot(wn, zn_a+zn_b, label='Rayleigh damping') plt.plot(wn, zn_a, label='mass proportional') plt.plot(wn, zn_b, label='stiffness proportional') plt.xlabel('Vibration frequency $\omega_n$ [rad/s]') plt.ylabel('Damping coefficient $\zeta_n$ [-]') plt.legend(loc='best') plt.grid(True) plt.xlim([0, 35.]) plt.ylim([0, 0.2]) plt.show()
Rayleigh_damping.ipynb
pxcandeias/py-notebooks
mit
When the Rayleigh damping is used in MDOF systems, the coefficients $\alpha$ and $\beta$ can be computed in order to give an appropriate damping coefficient value for a given frequency range, related to the vibration modes of interest for the dynamic analysis. This is achieved by setting a simple two equation system whose solution yields the values of $\alpha$ and $\beta$: $$ \left[\begin{array}{c} \zeta_1 \\ \zeta_2 \end{array}\right] = \left[\begin{array}{cc} \frac{1}{2 \cdot \omega_1} & \frac{\omega_1}{2} \\ \frac{1}{2 \cdot \omega_2} & \frac{\omega_2}{2} \end{array}\right] \times \left[\begin{array}{c} \alpha \\ \beta \end{array}\right] $$ Back to top Example Let us consider a MDOF system where there are several vibration modes of interest, ranging from 1 to 4 Hz, and that we want to compute the dynamic response considering a damping coefficient of 2% for the first mode and 5% for the last mode.
f1, f2 = 1., 4. z1, z2 = 0.02, 0.05 w1 = 2.*np.pi*f1 w2 = 2.*np.pi*f2 alpha, beta = np.linalg.solve([[1./(2.*w1), w1/2.], [1./(2.*w2), w2/2.]], [z1, z2]) print('Alpha={:.6f}\nBeta={:.6f}'.format(alpha, beta))
Rayleigh_damping.ipynb
pxcandeias/py-notebooks
mit
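We can verify numerically that the solved $\alpha$ and $\beta$ reproduce the target damping exactly at the two anchor frequencies; this re-derives the same 2×2 system as above:

```python
import numpy as np

# Re-derive alpha and beta for zeta = 2% at 1 Hz and 5% at 4 Hz, then verify
# that zeta(omega) = alpha/(2*omega) + beta*omega/2 hits both targets.
f1, f2, z1, z2 = 1., 4., 0.02, 0.05
w1, w2 = 2. * np.pi * f1, 2. * np.pi * f2
A = np.array([[1. / (2. * w1), w1 / 2.],
              [1. / (2. * w2), w2 / 2.]])
alpha, beta = np.linalg.solve(A, [z1, z2])

zeta = lambda w: alpha / (2. * w) + beta * w / 2.
print(zeta(w1), zeta(w2))  # 0.02 0.05 (up to floating point)
```

Between the two anchor frequencies the damping dips below both targets, and outside them it grows, as the next plot shows.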
We can check that the Rayleigh damping assumes the required values at the desired frequencies, although may vary considerably for other frequencies:
zn_a = alpha/(2.*wn) zn_b = (beta*wn)/2. # plt.hold was removed in matplotlib 3.0; axes now accumulate plots by default plt.plot(wn, zn_a+zn_b, label='Rayleigh damping') plt.plot(wn, zn_a, label='mass proportional') plt.plot(wn, zn_b, label='stiffness proportional') plt.plot(w1, z1, 'o') plt.plot(w2, z2, 'o') plt.axvline(w1, ls=':') plt.axhline(z1, ls=':') plt.axvline(w2, ls=':') plt.axhline(z2, ls=':') plt.xlabel('Vibration frequency $\omega_n$ [rad/s]') plt.ylabel('Damping coefficient $\zeta_n$ [-]') plt.legend(loc='best') plt.xlim([0, 35.]) plt.ylim([0, 0.2]) plt.show()
Rayleigh_damping.ipynb
pxcandeias/py-notebooks
mit
๋‹ค์Œ์œผ๋กœ ์ด ์ฝ”ํผ์Šค๋ฅผ ์ž…๋ ฅ ์ธ์ˆ˜๋กœ ํ•˜์—ฌ Word2Vec ํด๋ž˜์Šค ๊ฐ์ฒด๋ฅผ ์ƒ์„ฑํ•œ๋‹ค. ์ด ์‹œ์ ์— ํŠธ๋ ˆ์ด๋‹์ด ์ด๋ฃจ์–ด์ง„๋‹ค.
from gensim.models.word2vec import Word2Vec %%time model = Word2Vec(sentences)
30. ๋”ฅ๋Ÿฌ๋‹/06. ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ์˜ ์›๋ฆฌ์™€ gensim.word2vec ์‚ฌ์šฉ๋ฒ•.ipynb
zzsza/Datascience_School
mit
Once training is complete, the init_sims command unloads the memory that is no longer needed.
model.init_sims(replace=True)
30. ๋”ฅ๋Ÿฌ๋‹/06. ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ์˜ ์›๋ฆฌ์™€ gensim.word2vec ์‚ฌ์šฉ๋ฒ•.ipynb
zzsza/Datascience_School
mit
The following methods can now be used on this model. For more details, see https://radimrehurek.com/gensim/models/word2vec.html. similarity: computes the similarity between two words most_similar: prints the most similar words
model.similarity('actor', 'actress') model.similarity('he', 'she') model.similarity('actor', 'she') model.most_similar("villain")
30. ๋”ฅ๋Ÿฌ๋‹/06. ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ์˜ ์›๋ฆฌ์™€ gensim.word2vec ์‚ฌ์šฉ๋ฒ•.ipynb
zzsza/Datascience_School
mit
The most_similar method can also find relations between words using the positive and negative arguments, such as: actor + he - actress = she
model.most_similar(positive=['actor', 'he'], negative='actress', topn=1)
30. ๋”ฅ๋Ÿฌ๋‹/06. ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ์˜ ์›๋ฆฌ์™€ gensim.word2vec ์‚ฌ์šฉ๋ฒ•.ipynb
zzsza/Datascience_School
mit
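The vector arithmetic behind `most_similar(positive=..., negative=...)` can be illustrated without a trained model: with made-up 2-D embeddings (one gender-like axis, one royalty-like axis), the query vector `king - man + woman` is closest to `queen` by cosine similarity. All vectors below are invented for the demo, not taken from gensim:

```python
import numpy as np

# Toy sketch of word-analogy arithmetic with hypothetical 2-D embeddings.
vocab = {
    'king':  np.array([ 1.,  1.]),
    'queen': np.array([-1.,  1.]),
    'man':   np.array([ 1.,  0.]),
    'woman': np.array([-1.,  0.]),
    'apple': np.array([ 0., -1.]),   # unrelated distractor
}
query = vocab['king'] - vocab['man'] + vocab['woman']

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Like most_similar, exclude the query words themselves from the candidates.
best = max((w for w in vocab if w not in ('king', 'man', 'woman')),
           key=lambda w: cosine(query, vocab[w]))
print(best)  # queen
```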
This time, let us build Korean word embeddings using the Naver movie review corpus.
import codecs def read_data(filename): with codecs.open(filename, encoding='utf-8', mode='r') as f: data = [line.split('\t') for line in f.read().splitlines()] data = data[1:] # skip the header return data train_data = read_data('/home/dockeruser/data/nsmc/ratings_train.txt') from konlpy.tag import Twitter tagger = Twitter() def tokenize(doc): return ['/'.join(t) for t in tagger.pos(doc, norm=True, stem=True)] train_docs = [row[1] for row in train_data] sentences = [tokenize(d) for d in train_docs] from gensim.models import word2vec model = word2vec.Word2Vec(sentences) model.init_sims(replace=True) model.similarity(*tokenize(u'악당 영웅')) model.similarity(*tokenize(u'악당 감동')) from konlpy.utils import pprint pprint(model.most_similar(positive=tokenize(u'배우 남자'), negative=tokenize(u'여배우'), topn=1))
30. ๋”ฅ๋Ÿฌ๋‹/06. ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ์˜ ์›๋ฆฌ์™€ gensim.word2vec ์‚ฌ์šฉ๋ฒ•.ipynb
zzsza/Datascience_School
mit
Now that we have an idea of how to use h5py to read in an h5 file, let's try it out. Note that if the h5 file is stored in a different directory than where you are running your notebook, you need to include the path (either relative or absolute) to the directory where that data file is stored. Use os.path.join to create the full path of the file.
# Note that you will need to update this filepath for your local machine f = h5py.File('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5','r')
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
This 3-D shape (1000,1000,426) corresponds to (y,x,bands), where (x,y) are the dimensions of the reflectance array in pixels. Hyperspectral data sets are often called "cubes" to reflect this 3-dimensional shape. <figure> <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/DataCube.png"> <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/DataCube.png"></a> <figcaption> A "cube" showing a hyperspectral data set. Source: National Ecological Observatory Network (NEON) </figcaption> </figure> NEON hyperspectral data contain around 426 spectral bands, and when working with tiled data, the spatial dimensions are 1000 x 1000, where each pixel represents 1 meter. Now let's take a look at the wavelength values. First, we will extract wavelength information from the serc_refl variable that we created:
#define the wavelengths variable wavelengths = serc_refl['Metadata']['Spectral_Data']['Wavelength'] #View wavelength information and values print('wavelengths:',wavelengths)
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
Here we can see that we extracted a 2-D array (1000 x 1000) of the scaled reflectance data corresponding to the wavelength band 56. Before we can use the data, we need to clean it up a little. We'll show how to do this below. Scale factor and No Data Value This array represents the scaled reflectance for band 56. Recall from exploring the HDF5 data in HDFViewer that NEON AOP reflectance data uses a Data_Ignore_Value of -9999 to represent missing data (often called NaN), and a reflectance Scale_Factor of 10000.0 in order to save disk space (can use lower precision this way). <figure> <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png"> <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png"></a> <figcaption> Screenshot of the NEON HDF5 file format. Source: National Ecological Observatory Network </figcaption> </figure> We can extract and apply the Data_Ignore_Value and Scale_Factor as follows:
#View and apply scale factor and data ignore value scaleFactor = serc_reflArray.attrs['Scale_Factor'] noDataValue = serc_reflArray.attrs['Data_Ignore_Value'] print('Scale Factor:',scaleFactor) print('Data Ignore Value:',noDataValue) b56[b56==int(noDataValue)]=np.nan b56 = b56/scaleFactor print('Cleaned Band 56 Reflectance:\n',b56)
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
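The same cleanup can be sketched on a synthetic array with plain NumPy, independent of the HDF5 file: replace the no-data value with NaN, then divide by the scale factor to recover reflectance in [0, 1]. The raw values here are made up:

```python
import numpy as np

# Apply a no-data mask and a scale factor to a fake reflectance band.
scaleFactor, noDataValue = 10000.0, -9999
raw = np.array([[ 5000.,   200., -9999.],
                [10000., -9999.,     0.]])
clean = raw.copy()
clean[clean == noDataValue] = np.nan
clean = clean / scaleFactor
print(clean)  # NaN where no-data; 0.5, 0.02, 1.0, 0.0 elsewhere
```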
Here you can see that adjusting the colorlimit displays features (e.g. roads, buildings) much better than when we set the colormap limits to the entire range of reflectance values. Extension: Basic Image Processing -- Contrast Stretch & Histogram Equalization We can also try out some basic image processing to better visualize the reflectance data using the scikit-image package. Histogram equalization is a method in image processing of contrast adjustment using the image's histogram. Stretching the histogram can improve the contrast of a displayed image, as we will show how to do below. <figure> <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png"> <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png"></a> <figcaption> Histogram equalization is a method in image processing of contrast adjustment using the image's histogram. Stretching the histogram can improve the contrast of a displayed image, as we will show how to do below. Source: <a href="https://en.wikipedia.org/wiki/Talk%3AHistogram_equalization#/media/File:Histogrammspreizung.png"> Wikipedia - Public Domain </a> </figcaption> </figure> The following tutorial section is adapted from scikit-image's tutorial <a href="http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py" target="_blank"> Histogram Equalization</a>. Below we demonstrate a widget to interactively display different linear contrast stretches: Explore the contrast stretch feature interactively using IPython widgets:
from skimage import exposure from ipywidgets import interact # IPython.html.widgets was replaced by ipywidgets def linearStretch(percent): pLow, pHigh = np.percentile(b56[~np.isnan(b56)], (percent,100-percent)) img_rescale = exposure.rescale_intensity(b56, in_range=(pLow,pHigh)) plt.imshow(img_rescale,extent=serc_ext,cmap='gist_earth') #cbar = plt.colorbar(); cbar.set_label('Reflectance') plt.title('SERC Band 56 \n Linear ' + str(percent) + '% Contrast Stretch'); ax = plt.gca() ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation # rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree interact(linearStretch,percent=(0,50,1))
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py/Intro_NEON_AOP_HDF5_Reflectance_Tiles_py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
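The percentile-based linear stretch can also be written with NumPy alone (no scikit-image or widgets); a sketch on a synthetic band, where `linear_stretch` and the fake data are our own stand-ins for the widget code above:

```python
import numpy as np

# Clip to the [p, 100-p] percentile range and rescale to [0, 1].
def linear_stretch(img, percent):
    p_low, p_high = np.nanpercentile(img, (percent, 100 - percent))
    return np.clip((img - p_low) / (p_high - p_low), 0., 1.)

band = np.linspace(0., 1., 101)        # fake band: values 0.00 .. 1.00
stretched = linear_stretch(band, 10)   # clips below 0.1 and above 0.9
print(stretched[0], stretched[50], stretched[100])  # 0.0 0.5 1.0
```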
Convert your TensorFlow 1 code to TensorFlow 2 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/migrate"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/migrate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: This document was translated by the TensorFlow community. Since community translations are best-effort, there is no guarantee that they exactly match the official English documentation or reflect its latest state. To improve this translation, please send a pull request to the tensorflow/docs-l10n GitHub repository. To take part in translating or reviewing documents, please write to docs-ko@tensorflow.org. This guide is aimed at users of the low-level TensorFlow API. If you are using the high-level API (tf.keras), there is little or nothing you need to do to make your code TensorFlow 2.0 compatible: Check your optimizer's default learning rate. The "name" of the metrics you use may have changed.
You can still run unmodified TensorFlow 1.X code in TensorFlow 2.0 (except for contrib): import tensorflow.compat.v1 as tf tf.disable_v2_behavior() However, this does not let you take advantage of many of the improvements made in TensorFlow 2.0. This guide explains how to upgrade your code so that it is more performant while also being simpler and easier to maintain. Automatic conversion script The first step is to try the upgrade script. This is the first thing to attempt when upgrading to TensorFlow 2.0, but it does not turn your existing code into TensorFlow 2.0 style. Your code may still reference tf.compat.v1 modules to access placeholders, sessions, collections, and other 1.x-style functionality. Top-level behavioral changes If you run your code in TensorFlow 2.0 with tf.compat.v1.disable_v2_behavior(), you still need to be aware of global behavioral changes. The major changes are: Eager execution, v1.enable_eager_execution(): Any code that implicitly uses a tf.Graph will fail. Such code must be wrapped in a with tf.Graph().as_default() context. Resource variables, v1.enable_resource_variables(): Some code may be affected by the non-deterministic behavior of TF reference variables. Resource variables are locked while being written, which provides more intuitive consistency guarantees. This may change behavior in edge cases, and it may create extra copies and increase memory usage. It can be disabled by passing use_resource=False to the tf.Variable constructor.
Tensor shapes, v1.enable_v2_tensorshape(): In TF 2.0 tensor shapes are simpler. Instead of t.shape[0].value you can use t.shape[0]. Since the change is small, it is best to fix it right away. See TensorShape for examples. Control flow, v1.enable_control_flow_v2(): The TF 2.0 control-flow implementation has been simplified and produces a different graph representation. Please file a bug if you run into issues. Make the code 2.0-native Below are a few examples of converting TensorFlow 1.x code to TensorFlow 2.0. This work lets you optimize performance and benefit from the simplified API. In each case, the pattern is: 1. Replace tf.Session.run calls. Every tf.Session.run call should be replaced by a Python function. The feed_dict and tf.placeholders become the function's arguments. The fetches become the function's return value. During conversion, eager execution allows easy debugging with the standard Python debugger pdb. Then add a tf.function decorator so it runs efficiently in graph mode. See the AutoGraph guide for more details. Note: Unlike v1.Session.run, a tf.function has a fixed return signature and always returns all outputs. If this causes performance problems, split it into two functions. There is no need for tf.control_dependencies or similar operations: a tf.function executes in the order written. For example, tf.Variable assignments and tf.asserts run automatically. 2. Use Python objects to manage variables and losses. In TF 2.0, name-based variable tracking is strongly discouraged. Track variables with Python objects.
Use tf.Variable instead of v1.get_variable. Every v1.variable_scope should be converted to a Python object, typically one of: tf.keras.layers.Layer tf.keras.Model tf.Module If you need lists of variables (like tf.Graph.get_collection(tf.GraphKeys.VARIABLES)), use the .variables and .trainable_variables attributes of the Layer and Model objects. The Layer and Model classes also provide several other properties that remove the need for global collections. Their .losses property can replace the tf.GraphKeys.LOSSES collection. See the Keras guide for details. Warning: Many tf.compat.v1 functions implicitly use global collections. 3. Upgrade your training loops. Use the highest-level API that works for your problem. It is better to use the tf.keras.Model.fit method than to build your own training loop. These high-level functions manage many low-level details that are easy to miss when writing your own training loop. For example, they automatically collect regularization losses and set the training=True argument when calling the model. 4. Upgrade your data input pipelines. Use tf.data datasets for data input. These objects are efficient, expressive, and integrate well with TensorFlow. They can be passed directly to the tf.keras.Model.fit method: model.fit(dataset, epochs=5) Or iterated over directly in Python: for example_batch, label_batch in dataset: break 5. Migrate off compat.v1. The tf.compat.v1 module contains the complete TensorFlow 1.x API. The TF2 upgrade script converts symbols to their 2.0 equivalents when it is safe to do so,
์ฆ‰ 2.0 ๋ฒ„์ „์˜ ๋™์ž‘์ด ์™„์ „ํžˆ ๋™์ผํ•œ ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค(์˜ˆ๋ฅผ ๋“ค๋ฉด, v1.arg_max๊ฐ€ tf.argmax๋กœ ์ด๋ฆ„์ด ๋ฐ”๋€Œ์—ˆ๊ธฐ ๋•Œ๋ฌธ์— ๋™์ผํ•œ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค). ์—…๊ทธ๋ ˆ์ด๋“œ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•˜๊ณ  ๋‚˜๋ฉด ์ฝ”๋“œ์— compat.v1์ด ๋งŽ์ด ๋“ฑ์žฅํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ฝ”๋“œ๋ฅผ ์‚ดํŽด ๋ณด๋ฉด์„œ ์ˆ˜๋™์œผ๋กœ 2.0 ๋ฒ„์ „์œผ๋กœ ๋ฐ”๊ฟ‰๋‹ˆ๋‹ค(2.0 ๋ฒ„์ „์ด ์žˆ๋‹ค๋ฉด ๋กœ๊ทธ์— ์–ธ๊ธ‰๋˜์–ด ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค). ๋ชจ๋ธ ๋ณ€ํ™˜ํ•˜๊ธฐ ์ค€๋น„
import tensorflow as tf import tensorflow_datasets as tfds
site/ko/guide/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
์ €์ˆ˜์ค€ ๋ณ€์ˆ˜์™€ ์—ฐ์‚ฐ ์‹คํ–‰ ์ €์ˆ˜์ค€ API๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์˜ˆ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ์žฌ์‚ฌ์šฉ์„ ์œ„ํ•ด ๋ณ€์ˆ˜ ๋ฒ”์œ„(variable scopes)๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ v1.get_variable๋กœ ๋ณ€์ˆ˜๋ฅผ ๋งŒ๋“ค๊ธฐ ๋ช…์‹œ์ ์œผ๋กœ ์ปฌ๋ ‰์…˜์„ ์ฐธ์กฐํ•˜๊ธฐ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์•”๋ฌต์ ์œผ๋กœ ์ปฌ๋ ‰์…˜์„ ์ฐธ์กฐํ•˜๊ธฐ: v1.global_variables v1.losses.get_regularization_loss ๊ทธ๋ž˜ํ”„ ์ž…๋ ฅ์„ ์œ„ํ•ด v1.placeholder๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ session.run์œผ๋กœ ๊ทธ๋ž˜ํ”„๋ฅผ ์‹คํ–‰ํ•˜๊ธฐ ๋ณ€์ˆ˜๋ฅผ ์ˆ˜๋™์œผ๋กœ ์ดˆ๊ธฐํ™”ํ•˜๊ธฐ ๋ณ€ํ™˜ ์ „ ๋‹ค์Œ ์ฝ”๋“œ๋Š” ํ…์„œํ”Œ๋กœ 1.x๋ฅผ ์‚ฌ์šฉํ•œ ์ฝ”๋“œ์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋Š” ํŒจํ„ด์ž…๋‹ˆ๋‹ค. ```python in_a = tf.placeholder(dtype=tf.float32, shape=(2)) in_b = tf.placeholder(dtype=tf.float32, shape=(2)) def forward(x): with tf.variable_scope("matmul", reuse=tf.AUTO_REUSE): W = tf.get_variable("W", initializer=tf.ones(shape=(2,2)), regularizer=tf.contrib.layers.l2_regularizer(0.04)) b = tf.get_variable("b", initializer=tf.zeros(shape=(2))) return W * x + b out_a = forward(in_a) out_b = forward(in_b) reg_loss = tf.losses.get_regularization_loss(scope="matmul") with tf.Session() as sess: sess.run(tf.global_variables_initializer()) outs = sess.run([out_a, out_b, reg_loss], feed_dict={in_a: [1, 0], in_b: [0, 1]}) ``` ๋ณ€ํ™˜ ํ›„ ๋ณ€ํ™˜๋œ ์ฝ”๋“œ์˜ ํŒจํ„ด์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ๋ณ€์ˆ˜๋Š” ํŒŒ์ด์ฌ ์ง€์—ญ ๊ฐ์ฒด์ž…๋‹ˆ๋‹ค. forward ํ•จ์ˆ˜๋Š” ์—ฌ์ „ํžˆ ํ•„์š”ํ•œ ๊ณ„์‚ฐ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. Session.run ํ˜ธ์ถœ์€ forward ํ•จ์ˆ˜๋ฅผ ํ˜ธ์ถœํ•˜๋Š” ๊ฒƒ์œผ๋กœ ๋ฐ”๋€๋‹ˆ๋‹ค. tf.function ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ์„ ํƒ ์‚ฌํ•ญ์œผ๋กœ ์„ฑ๋Šฅ์„ ์œ„ํ•ด ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์–ด๋–ค ์ „์—ญ ์ปฌ๋ ‰์…˜๋„ ์ฐธ์กฐํ•˜์ง€ ์•Š๊ณ  ๊ทœ์ œ๋ฅผ ์ง์ ‘ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ์„ธ์…˜์ด๋‚˜ ํ”Œ๋ ˆ์ด์Šคํ™€๋”๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.
W = tf.Variable(tf.ones(shape=(2,2)), name="W") b = tf.Variable(tf.zeros(shape=(2)), name="b") @tf.function def forward(x): return W * x + b out_a = forward([1,0]) print(out_a) out_b = forward([0,1]) regularizer = tf.keras.regularizers.l2(0.04) reg_loss = regularizer(W)
tf.layers ๊ธฐ๋ฐ˜์˜ ๋ชจ๋ธ v1.layers ๋ชจ๋“ˆ์€ ๋ณ€์ˆ˜๋ฅผ ์ •์˜ํ•˜๊ณ  ์žฌ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด v1.variable_scope์— ์˜์กดํ•˜๋Š” ์ธต ํ•จ์ˆ˜๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. ๋ณ€ํ™˜ ์ „ ```python def model(x, training, scope='model'): with tf.variable_scope(scope, reuse=tf.AUTO_REUSE): x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.04)) x = tf.layers.max_pooling2d(x, (2, 2), 1) x = tf.layers.flatten(x) x = tf.layers.dropout(x, 0.1, training=training) x = tf.layers.dense(x, 64, activation=tf.nn.relu) x = tf.layers.batch_normalization(x, training=training) x = tf.layers.dense(x, 10, activation=tf.nn.softmax) return x train_out = model(train_data, training=True) test_out = model(test_data, training=False) ``` ๋ณ€ํ™˜ ํ›„ ์ธต์„ ๋‹จ์ˆœํ•˜๊ฒŒ ์Œ“์„ ๊ฒฝ์šฐ์—” tf.keras.Sequential์ด ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. (๋ณต์žกํ•œ ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ์ธต๊ณผ ๋ชจ๋ธ์ด๋‚˜ ํ•จ์ˆ˜ํ˜• API๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”.) ๋ชจ๋ธ์ด ๋ณ€์ˆ˜์™€ ๊ทœ์ œ ์†์‹ค์„ ๊ด€๋ฆฌํ•ฉ๋‹ˆ๋‹ค. v1.layers์—์„œ tf.keras.layers๋กœ ๋ฐ”๋กœ ๋งคํ•‘๋˜๊ธฐ ๋•Œ๋ฌธ์— ์ผ๋Œ€์ผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ถ€๋ถ„์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ๋ชจ๋ธ์ด ์‹คํ–‰๋  ๋•Œ ๊ฐ ์ธต์— training ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์›๋ž˜ model ํ•จ์ˆ˜์˜ ์ฒซ ๋ฒˆ์งธ ๋งค๊ฐœ๋ณ€์ˆ˜(์ž…๋ ฅ x)๋Š” ์‚ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์ธต ๊ฐ์ฒด๊ฐ€ ๋ชจ๋ธ ๊ตฌ์ถ•๊ณผ ๋ชจ๋ธ ํ˜ธ์ถœ์„ ๊ตฌ๋ถ„ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ถ”๊ฐ€ ๋…ธํŠธ: tf.contrib์—์„œ ๊ทœ์ œ๋ฅผ ์ดˆ๊ธฐํ™”ํ–ˆ๋‹ค๋ฉด ๋‹ค๋ฅธ ๊ฒƒ๋ณด๋‹ค ๋งค๊ฐœ๋ณ€์ˆ˜ ๋ณ€ํ™”๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๋” ์ด์ƒ ์ปฌ๋ ‰์…˜์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— v1.losses.get_regularization_loss์™€ ๊ฐ™์€ ํ•จ์ˆ˜๋Š” ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Š” ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ๋ง๊ฐ€๋œจ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.04), input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(10) ]) train_data = tf.ones(shape=(1, 28, 28, 1)) test_data = tf.ones(shape=(1, 28, 28, 1)) train_out = model(train_data, training=True) print(train_out) test_out = model(test_data, training=False) print(test_out) # ํ›ˆ๋ จ๋˜๋Š” ์ „์ฒด ๋ณ€์ˆ˜ len(model.trainable_variables) # ๊ทœ์ œ ์†์‹ค model.losses
๋ณ€์ˆ˜์™€ v1.layers์˜ ํ˜ผ์šฉ ๊ธฐ์กด ์ฝ”๋“œ๋Š” ์ข…์ข… ์ €์ˆ˜์ค€ TF 1.x ๋ณ€์ˆ˜์™€ ๊ณ ์ˆ˜์ค€ v1.layers ์—ฐ์‚ฐ์„ ํ˜ผ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ณ€๊ฒฝ ์ „ ```python def model(x, training, scope='model'): with tf.variable_scope(scope, reuse=tf.AUTO_REUSE): W = tf.get_variable( "W", dtype=tf.float32, initializer=tf.ones(shape=x.shape), regularizer=tf.contrib.layers.l2_regularizer(0.04), trainable=True) if training: x = x + W else: x = x + W * 0.5 x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu) x = tf.layers.max_pooling2d(x, (2, 2), 1) x = tf.layers.flatten(x) return x train_out = model(train_data, training=True) test_out = model(test_data, training=False) ``` ๋ณ€๊ฒฝ ํ›„ ์ด๋Ÿฐ ์ฝ”๋“œ๋ฅผ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด ์ด์ „ ์˜ˆ์ œ์ฒ˜๋Ÿผ ์ธต๋ณ„๋กœ ๋งคํ•‘ํ•˜๋Š” ํŒจํ„ด์„ ์‚ฌ์šฉํ•˜์„ธ์š”. v1.variable_scope๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ํ•˜๋‚˜์˜ ์ธต์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ tf.keras.layers.Layer๋กœ ์žฌ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ ์ด ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ์ผ๋ฐ˜์ ์ธ ํŒจํ„ด์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: __init__์—์„œ ์ธต์— ํ•„์š”ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ž…๋ ฅ ๋ฐ›์Šต๋‹ˆ๋‹ค. build ๋ฉ”์„œ๋“œ์—์„œ ๋ณ€์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. call ๋ฉ”์„œ๋“œ์—์„œ ์—ฐ์‚ฐ์„ ์‹คํ–‰ํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.
# ๋ชจ๋ธ์— ์ถ”๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ์ž ์ •์˜ ์ธต์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. class CustomLayer(tf.keras.layers.Layer): def __init__(self, *args, **kwargs): super(CustomLayer, self).__init__(*args, **kwargs) def build(self, input_shape): self.w = self.add_weight( shape=input_shape[1:], dtype=tf.float32, initializer=tf.keras.initializers.ones(), regularizer=tf.keras.regularizers.l2(0.04), trainable=True) # call ๋ฉ”์„œ๋“œ๊ฐ€ ๊ทธ๋ž˜ํ”„ ๋ชจ๋“œ์—์„œ ์‚ฌ์šฉ๋˜๋ฉด # training ๋ณ€์ˆ˜๋Š” ํ…์„œ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. @tf.function def call(self, inputs, training=None): if training: return inputs + self.w else: return inputs + self.w * 0.5 custom_layer = CustomLayer() print(custom_layer([1]).numpy()) print(custom_layer([1], training=True).numpy()) train_data = tf.ones(shape=(1, 28, 28, 1)) test_data = tf.ones(shape=(1, 28, 28, 1)) # ์‚ฌ์šฉ์ž ์ •์˜ ์ธต์„ ํฌํ•จํ•œ ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. model = tf.keras.Sequential([ CustomLayer(input_shape=(28, 28, 1)), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), ]) train_out = model(train_data, training=True) test_out = model(test_data, training=False)
๋…ธํŠธ: ํด๋ž˜์Šค ์ƒ์†์œผ๋กœ ๋งŒ๋“  ์ผ€๋ผ์Šค ๋ชจ๋ธ๊ณผ ์ธต์€ v1 ๊ทธ๋ž˜ํ”„(์—ฐ์‚ฐ๊ฐ„์˜ ์˜์กด์„ฑ์ด ์ž๋™์œผ๋กœ ์ œ์–ด๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค)์™€ ์ฆ‰์‹œ ์‹คํ–‰ ๋ชจ๋“œ ์–‘์ชฝ์—์„œ ์‹คํ–‰๋  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜คํ† ๊ทธ๋ž˜ํ”„(autograph)์™€ ์˜์กด์„ฑ ์ž๋™ ์ œ์–ด(automatic control dependency)๋ฅผ ์œ„ํ•ด tf.function()์œผ๋กœ call() ๋ฉ”์„œ๋“œ๋ฅผ ๊ฐ์Œ‰๋‹ˆ๋‹ค. call ๋ฉ”์„œ๋“œ์— training ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”. ๊ฒฝ์šฐ์— ๋”ฐ๋ผ ์ด ๊ฐ’์€ tf.Tensor๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ๊ฒฝ์šฐ์— ๋”ฐ๋ผ ์ด ๊ฐ’์€ ํŒŒ์ด์ฌ ๋ถˆ๋ฆฌ์–ธ(boolean)์ด ๋ฉ๋‹ˆ๋‹ค. self.add_weight()๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ์„ฑ์ž ๋ฉ”์„œ๋“œ๋‚˜ def build() ๋ฉ”์„œ๋“œ์—์„œ ๋ชจ๋ธ ๋ณ€์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. build ๋ฉ”์„œ๋“œ์—์„œ ์ž…๋ ฅ ํฌ๊ธฐ๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ ์ ˆํ•œ ํฌ๊ธฐ์˜ ๊ฐ€์ค‘์น˜๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. tf.keras.layers.Layer.add_weight๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ผ€๋ผ์Šค๊ฐ€ ๋ณ€์ˆ˜์™€ ๊ทœ์ œ ์†์‹ค์„ ๊ด€๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ •์˜ ์ธต ์•ˆ์— tf.Tensors ๊ฐ์ฒด๋ฅผ ํฌํ•จํ•˜์ง€ ๋งˆ์„ธ์š”. tf.function์ด๋‚˜ ์ฆ‰์‹œ ์‹คํ–‰ ๋ชจ๋“œ์—์„œ ๋ชจ๋‘ ํ…์„œ๊ฐ€ ๋งŒ๋“ค์–ด์ง€์ง€๋งŒ ์ด ํ…์„œ๋“ค์˜ ๋™์ž‘ ๋ฐฉ์‹์€ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ƒํƒœ๋ฅผ ์ €์žฅํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” tf.Variable์„ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋ณ€์ˆ˜๋Š” ์–‘์ชฝ ๋ฐฉ์‹์— ๋ชจ๋‘ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. tf.Tensors๋Š” ์ค‘๊ฐ„ ๊ฐ’์„ ์ €์žฅํ•˜๊ธฐ ์œ„ํ•œ ์šฉ๋„๋กœ๋งŒ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. Slim & contrib.layers๋ฅผ ์œ„ํ•œ ๋…ธํŠธ ์˜ˆ์ „ ํ…์„œํ”Œ๋กœ 1.x ์ฝ”๋“œ๋Š” Slim ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋งŽ์ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ํ…์„œํ”Œ๋กœ 1.x์˜ tf.contrib.layers๋กœ ํŒจํ‚ค์ง€๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. contrib ๋ชจ๋“ˆ์€ ๋” ์ด์ƒ ํ…์„œํ”Œ๋กœ 2.0์—์„œ ์ง€์›ํ•˜์ง€ ์•Š๊ณ  tf.compat.v1์—๋„ ํฌํ•จ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. Slim์„ ์‚ฌ์šฉํ•œ ์ฝ”๋“œ๋ฅผ TF 2.0์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์€ v1.layers๋ฅผ ์‚ฌ์šฉํ•œ ์ฝ”๋“œ๋ฅผ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋” ์–ด๋ ต์Šต๋‹ˆ๋‹ค. ์‚ฌ์‹ค Slim ์ฝ”๋“œ๋Š” v1.layers๋กœ ๋จผ์ € ๋ณ€ํ™˜ํ•˜๊ณ  ๊ทธ ๋‹ค์Œ ์ผ€๋ผ์Šค๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. arg_scopes๋ฅผ ์‚ญ์ œํ•˜์„ธ์š”. 
๋ชจ๋“  ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ช…์‹œ์ ์œผ๋กœ ์„ค์ •๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. normalizer_fn๊ณผ activation_fn๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•œ๋‹ค๋ฉด ๋ถ„๋ฆฌํ•˜์—ฌ ๊ฐ๊ฐ ํ•˜๋‚˜์˜ ์ธต์œผ๋กœ ๋งŒ๋“œ์„ธ์š”. ๋ถ„๋ฆฌ ํ•ฉ์„ฑ๊ณฑ(separable conv) ์ธต์€ ํ•œ ๊ฐœ ์ด์ƒ์˜ ๋‹ค๋ฅธ ์ผ€๋ผ์Šค ์ธต์œผ๋กœ ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค(๊นŠ์ด๋ณ„(depthwise), ์ ๋ณ„(pointwise), ๋ถ„๋ฆฌ(separable) ์ผ€๋ผ์Šค ์ธต). Slim๊ณผ v1.layers๋Š” ๋งค๊ฐœ๋ณ€์ˆ˜ ์ด๋ฆ„๊ณผ ๊ธฐ๋ณธ๊ฐ’์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋‹ค๋ฅธ ์Šค์ผ€์ผ(scale)์„ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ Slim ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•œ๋‹ค๋ฉด tf.keras.applications๋‚˜ TFHub๋ฅผ ํ™•์ธํ•ด ๋ณด์„ธ์š”. ์ผ๋ถ€ tf.contrib ์ธต์€ ํ…์„œํ”Œ๋กœ ๋‚ด๋ถ€์— ํฌํ•จ๋˜์ง€ ๋ชปํ–ˆ์ง€๋งŒ TF ์• ๋“œ์˜จ(add-on) ํŒจํ‚ค์ง€๋กœ ์˜ฎ๊ฒจ์กŒ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์œผ๋กœ tf.keras ๋ชจ๋ธ์— ๋ฐ์ดํ„ฐ๋ฅผ ์ฃผ์ž…ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŒŒ์ด์ฌ ์ œ๋„ˆ๋ ˆ์ดํ„ฐ(generator)์™€ ๋„˜ํŒŒ์ด ๋ฐฐ์—ด์„ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. tf.data ํŒจํ‚ค์ง€๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ๋ฐ์ดํ„ฐ๋ฅผ ์ฃผ์ž…ํ•˜๋Š” ๊ฒƒ์ด ๊ถŒ์žฅ๋˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ํŒจํ‚ค์ง€๋Š” ๋ฐ์ดํ„ฐ ์กฐ์ž‘์„ ์œ„ํ•œ ๊ณ ์„ฑ๋Šฅ ํด๋ž˜์Šค๋“ค์„ ํฌํ•จํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. tf.queue๋Š” ๋ฐ์ดํ„ฐ ๊ตฌ์กฐ๋กœ๋งŒ ์ง€์›๋˜๊ณ  ์ž…๋ ฅ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ๋Š” ์ง€์›๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹ ์‚ฌ์šฉํ•˜๊ธฐ ํ…์„œํ”Œ๋กœ ๋ฐ์ดํ„ฐ์…‹(Datasets) ํŒจํ‚ค์ง€(tfds)๋Š” tf.data.Dataset ๊ฐ์ฒด๋กœ ์ •์˜๋œ ๋ฐ์ดํ„ฐ์…‹์„ ์ ์žฌํ•˜๊ธฐ ์œ„ํ•œ ์œ ํ‹ธ๋ฆฌํ‹ฐ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด tfds๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ MNIST ๋ฐ์ดํ„ฐ์…‹์„ ์ ์žฌํ•˜๋Š” ์ฝ”๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_train, mnist_test = datasets['train'], datasets['test']
๊ทธ ๋‹ค์Œ ํ›ˆ๋ จ์šฉ ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค: ๊ฐ ์ด๋ฏธ์ง€์˜ ์Šค์ผ€์ผ์„ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ์ƒ˜ํ”Œ์˜ ์ˆœ์„œ๋ฅผ ์„ž์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€์™€ ๋ ˆ์ด๋ธ”(label)์˜ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค.
BUFFER_SIZE = 10 # ์‹ค์ „ ์ฝ”๋“œ์—์„œ๋Š” ๋” ํฐ ๊ฐ’์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. BATCH_SIZE = 64 NUM_EPOCHS = 5 def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label
๊ฐ„๋‹จํ•œ ์˜ˆ์ œ๋ฅผ ์œ„ํ•ด 5๊ฐœ์˜ ๋ฐฐ์น˜๋งŒ ๋ฐ˜ํ™˜ํ•˜๋„๋ก ๋ฐ์ดํ„ฐ์…‹์„ ์ž๋ฆ…๋‹ˆ๋‹ค:
train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) test_data = mnist_test.map(scale).batch(BATCH_SIZE) STEPS_PER_EPOCH = 5 train_data = train_data.take(STEPS_PER_EPOCH) test_data = test_data.take(STEPS_PER_EPOCH) image_batch, label_batch = next(iter(train_data))
์ผ€๋ผ์Šค ํ›ˆ๋ จ ๋ฃจํ”„ ์‚ฌ์šฉํ•˜๊ธฐ ํ›ˆ๋ จ ๊ณผ์ •์„ ์„ธ๋ถ€์ ์œผ๋กœ ์ œ์–ดํ•  ํ•„์š”๊ฐ€ ์—†๋‹ค๋ฉด ์ผ€๋ผ์Šค์˜ ๋‚ด์žฅ ๋ฉ”์„œ๋“œ์ธ fit, evaluate, predict๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋ฉ”์„œ๋“œ๋“ค์€ ๋ชจ๋ธ ๊ตฌํ˜„(Sequential, ํ•จ์ˆ˜ํ˜• API, ํด๋ž˜์Šค ์ƒ์†)์— ์ƒ๊ด€์—†์ด ์ผ๊ด€๋œ ํ›ˆ๋ จ ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฉ”์„œ๋“œ๋“ค์˜ ์žฅ์ ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ๋„˜ํŒŒ์ด ๋ฐฐ์—ด, ํŒŒ์ด์ฌ ์ œ๋„ˆ๋ ˆ์ดํ„ฐ, tf.data.Datasets์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž๋™์œผ๋กœ ๊ทœ์ œ์™€ ํ™œ์„ฑํ™” ์†์‹ค์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์ค‘ ์žฅ์น˜ ํ›ˆ๋ จ์„ ์œ„ํ•ด tf.distribute์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ž„์˜์˜ ํ˜ธ์ถœ ๊ฐ€๋Šฅํ•œ ๊ฐ์ฒด๋ฅผ ์†์‹ค๊ณผ ์ธก์ • ์ง€ํ‘œ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. tf.keras.callbacks.TensorBoard์™€ ๊ฐ™์€ ์ฝœ๋ฐฑ(callback)์ด๋‚˜ ์‚ฌ์šฉ์ž ์ •์˜ ์ฝœ๋ฐฑ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ž๋™์œผ๋กœ ํ…์„œํ”Œ๋กœ ๊ทธ๋ž˜ํ”„๋ฅผ ์‚ฌ์šฉํ•˜๋ฏ€๋กœ ์„ฑ๋Šฅ์ด ๋›ฐ์–ด๋‚ฉ๋‹ˆ๋‹ค. Dataset์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ์˜ˆ์ œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. (์ž์„ธํ•œ ์ž‘๋™ ๋ฐฉ์‹์€ ํŠœํ† ๋ฆฌ์–ผ์„ ์ฐธ๊ณ ํ•˜์„ธ์š”.)
model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.02), input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(10) ]) # ์‚ฌ์šฉ์ž ์ •์˜ ์ธต์ด ์—†๋Š” ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit(train_data, epochs=NUM_EPOCHS) loss, acc = model.evaluate(test_data) print("์†์‹ค {}, ์ •ํ™•๋„ {}".format(loss, acc))
์‚ฌ์šฉ์ž ์ •์˜ ํ›ˆ๋ จ ๋ฃจํ”„ ๋งŒ๋“ค๊ธฐ ์ผ€๋ผ์Šค ๋ชจ๋ธ์˜ ํ›ˆ๋ จ ์Šคํ…(step)์ด ์ข‹์ง€๋งŒ ๊ทธ ์™ธ ๋‹ค๋ฅธ ๊ฒƒ์„ ๋” ์ œ์–ดํ•˜๋ ค๋ฉด ์ž์‹ ๋งŒ์˜ ๋ฐ์ดํ„ฐ ๋ฐ˜๋ณต ๋ฃจํ”„๋ฅผ ๋งŒ๋“ค๊ณ  tf.keras.model.train_on_batch ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด์„ธ์š”. ๊ธฐ์–ตํ•  ์ : ๋งŽ์€ ๊ฒƒ์„ tf.keras.Callback์œผ๋กœ ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฉ”์„œ๋“œ๋Š” ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ ๋ฉ”์„œ๋“œ์˜ ์žฅ์ ์„ ๋งŽ์ด ๊ฐ€์ง€๊ณ  ์žˆ๊ณ  ์‚ฌ์šฉ์ž๊ฐ€ ๋ฐ”๊นฅ์ชฝ ๋ฃจํ”„๋ฅผ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จํ•˜๋Š” ๋™์•ˆ ์„ฑ๋Šฅ์„ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด tf.keras.model.test_on_batch๋‚˜ tf.keras.Model.evaluate ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋…ธํŠธ: train_on_batch์™€ test_on_batch๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ํ•˜๋‚˜์˜ ๋ฐฐ์น˜์— ๋Œ€ํ•œ ์†์‹ค๊ณผ ์ธก์ •๊ฐ’์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. reset_metrics=False๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๋ˆ„์ ๋œ ์ธก์ •๊ฐ’์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋•Œ๋Š” ๋ˆ„์ ๋œ ์ธก์ •๊ฐ’์„ ์ ์ ˆํ•˜๊ฒŒ ์ดˆ๊ธฐํ™”ํ•ด ์ฃผ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. AUC์™€ ๊ฐ™์€ ์ผ๋ถ€ ์ง€ํ‘œ๋Š” reset_metrics=False๋ฅผ ์„ค์ •ํ•ด์•ผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์•ž์˜ ๋ชจ๋ธ์„ ๊ณ„์† ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค:
# ์‚ฌ์šฉ์ž ์ •์˜ ์ธต์ด ์—†๋Š” ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) for epoch in range(NUM_EPOCHS): # ๋ˆ„์ ๋œ ์ธก์ •๊ฐ’์„ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค. model.reset_metrics() for image_batch, label_batch in train_data: result = model.train_on_batch(image_batch, label_batch) metrics_names = model.metrics_names print("ํ›ˆ๋ จ: ", "{}: {:.3f}".format(metrics_names[0], result[0]), "{}: {:.3f}".format(metrics_names[1], result[1])) for image_batch, label_batch in test_data: result = model.test_on_batch(image_batch, label_batch, # return accumulated metrics reset_metrics=False) metrics_names = model.metrics_names print("\nํ‰๊ฐ€: ", "{}: {:.3f}".format(metrics_names[0], result[0]), "{}: {:.3f}".format(metrics_names[1], result[1]))
<a id="custom_loops"/> ํ›ˆ๋ จ ๋‹จ๊ณ„ ์ปค์Šคํ„ฐ๋งˆ์ด์ง• ์ž์œ ๋„๋ฅผ ๋†’์ด๊ณ  ์ œ์–ด๋ฅผ ๋” ํ•˜๋ ค๋ฉด ๋‹ค์Œ ์„ธ ๋‹จ๊ณ„๋ฅผ ์‚ฌ์šฉํ•ด ์ž์‹ ๋งŒ์˜ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ์ƒ˜ํ”Œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“œ๋Š” ํŒŒ์ด์ฌ ์ œ๋„ˆ๋ ˆ์ดํ„ฐ๋‚˜ tf.data.Dataset์„ ๋ฐ˜๋ณตํ•ฉ๋‹ˆ๋‹ค. tf.GradientTape๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. tf.keras.optimizers์˜ ์˜ตํ‹ฐ๋งˆ์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜ ๋ณ€์ˆ˜๋ฅผ ์—…๋ฐ์ดํŠธํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์–ตํ•  ์ : ํด๋ž˜์Šค ์ƒ์†์œผ๋กœ ๋งŒ๋“  ์ธต๊ณผ ๋ชจ๋ธ์˜ call ๋ฉ”์„œ๋“œ์—๋Š” ํ•ญ์ƒ training ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ํฌํ•จํ•˜์„ธ์š”. ๋ชจ๋ธ์„ ํ˜ธ์ถœํ•  ๋•Œ training ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ง€์ •ํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์‚ฌ์šฉ ๋ฐฉ์‹์— ๋”ฐ๋ผ ๋ฐฐ์น˜ ๋ฐ์ดํ„ฐ์—์„œ ๋ชจ๋ธ์ด ์‹คํ–‰๋  ๋•Œ๊นŒ์ง€ ๋ชจ๋ธ ๋ณ€์ˆ˜๊ฐ€ ์ƒ์„ฑ๋˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๊ทœ์ œ ์†์‹ค ๊ฐ™์€ ๊ฒƒ๋“ค์„ ์ง์ ‘ ๊ด€๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. v1์— ๋น„ํ•ด ๋‹จ์ˆœํ•ด์ง„ ๊ฒƒ: ๋”ฐ๋กœ ๋ณ€์ˆ˜๋ฅผ ์ดˆ๊ธฐํ™”ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋ณ€์ˆ˜๋Š” ์ƒ์„ฑ๋  ๋•Œ ์ดˆ๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ์˜์กด์„ฑ์„ ์ˆ˜๋™์œผ๋กœ ์ œ์–ดํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. tf.function ์•ˆ์—์„œ๋„ ์—ฐ์‚ฐ์€ ์ฆ‰์‹œ ์‹คํ–‰ ๋ชจ๋“œ์ฒ˜๋Ÿผ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค.
model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.02), input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(10) ]) optimizer = tf.keras.optimizers.Adam(0.001) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) @tf.function def train_step(inputs, labels): with tf.GradientTape() as tape: predictions = model(inputs, training=True) regularization_loss = tf.math.add_n(model.losses) pred_loss = loss_fn(labels, predictions) total_loss = pred_loss + regularization_loss gradients = tape.gradient(total_loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) for epoch in range(NUM_EPOCHS): for inputs, labels in train_data: train_step(inputs, labels) print("๋งˆ์ง€๋ง‰ ์—ํฌํฌ", epoch)
์ƒˆ๋กœ์šด ์Šคํƒ€์ผ์˜ ์ธก์ • ์ง€ํ‘œ ํ…์„œํ”Œ๋กœ 2.0์—์„œ ์ธก์ • ์ง€ํ‘œ์™€ ์†์‹ค์€ ๊ฐ์ฒด์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ์ฒด๋Š” ์ฆ‰์‹œ ์‹คํ–‰ ๋ชจ๋“œ์™€ tf.function์—์„œ ๋ชจ๋‘ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์†์‹ค์€ ํ˜ธ์ถœ ๊ฐ€๋Šฅํ•œ ๊ฐ์ฒด์ž…๋‹ˆ๋‹ค. ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ (y_true, y_pred)๋ฅผ ๊ธฐ๋Œ€ํ•ฉ๋‹ˆ๋‹ค:
cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
cce([[1, 0]], [[-1.0, 3.0]]).numpy()
์ธก์ • ๊ฐ์ฒด๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฉ”์„œ๋“œ๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค: update_state() โ€” ์ƒˆ๋กœ์šด ์ธก์ •๊ฐ’์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. result() โ€” ๋ˆ„์ ๋œ ์ธก์ • ๊ฒฐ๊ณผ๋ฅผ ์–ป์Šต๋‹ˆ๋‹ค. reset_states() โ€” ๋ชจ๋“  ์ธก์ • ๋‚ด์šฉ์„ ์ง€์›๋‹ˆ๋‹ค. ์ด ๊ฐ์ฒด๋Š” ํ˜ธ์ถœ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. update_state ๋ฉ”์„œ๋“œ์ฒ˜๋Ÿผ ์ƒˆ๋กœ์šด ์ธก์ •๊ฐ’๊ณผ ํ•จ๊ป˜ ํ˜ธ์ถœํ•˜๋ฉด ์ƒํƒœ๋ฅผ ์—…๋ฐ์ดํŠธํ•˜๊ณ  ์ƒˆ๋กœ์šด ์ธก์ • ๊ฒฐ๊ณผ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ธก์ • ๋ณ€์ˆ˜๋ฅผ ์ˆ˜๋™์œผ๋กœ ์ดˆ๊ธฐํ™”ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ํ…์„œํ”Œ๋กœ 2.0์€ ์ž๋™์œผ๋กœ ์˜์กด์„ฑ์„ ๊ด€๋ฆฌํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์–ด๋–ค ๊ฒฝ์šฐ์—๋„ ์‹ ๊ฒฝ ์“ธ ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์ธก์ • ๊ฐ์ฒด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ •์˜ ํ›ˆ๋ จ ๋ฃจํ”„ ์•ˆ์—์„œ ํ‰๊ท  ์†์‹ค์„ ๊ด€๋ฆฌํ•˜๋Š” ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค.
# ์ธก์ • ๊ฐ์ฒด๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. loss_metric = tf.keras.metrics.Mean(name='train_loss') accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy') @tf.function def train_step(inputs, labels): with tf.GradientTape() as tape: predictions = model(inputs, training=True) regularization_loss = tf.math.add_n(model.losses) pred_loss = loss_fn(labels, predictions) total_loss = pred_loss + regularization_loss gradients = tape.gradient(total_loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) # ์ธก์ •๊ฐ’์„ ์—…๋ฐ์ดํŠธํ•ฉ๋‹ˆ๋‹ค. loss_metric.update_state(total_loss) accuracy_metric.update_state(labels, predictions) for epoch in range(NUM_EPOCHS): # ์ธก์ •๊ฐ’์„ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค. loss_metric.reset_states() accuracy_metric.reset_states() for inputs, labels in train_data: train_step(inputs, labels) # ์ธก์ • ๊ฒฐ๊ณผ๋ฅผ ์–ป์Šต๋‹ˆ๋‹ค. mean_loss = loss_metric.result() mean_accuracy = accuracy_metric.result() print('์—ํฌํฌ: ', epoch) print(' ์†์‹ค: {:.3f}'.format(mean_loss)) print(' ์ •ํ™•๋„: {:.3f}'.format(mean_accuracy))