centre This method receives a list of points and returns the point at the centre of the polygon those points form. The coordinates of the points must be rational. If it does not receive a list of rational points, it returns None. Examples:
points = [(rational(4,5), rational(1,2)), (rational(4,2), rational(3,1)), (rational(8,3), rational(3,5)),
          (rational(7,2), rational(4,5)), (rational(7,9), rational(4,9)), (rational(9,8), rational(10,7))]
point = Simplex.centre(points)
print("(" + str(point[0]) + "," + str(point[1]) + ")")

# If it receives something that is not a list of rational points, it returns None
points = [(4.0,5.0), (4.0,3.0), (8.0,5.0), (7.0,4.0), (7.0,9.0), (10.0,4.0)]
print(Simplex.centre(points))
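PySimplex's `rational` type is not available outside the library, but the behaviour described above can be sketched with the standard `fractions.Fraction`. Note this is an illustrative guess at what `centre` computes (the arithmetic mean of the vertices), not PySimplex's documented algorithm:

```python
from fractions import Fraction

def centre(points):
    """Mean of the vertices; returns None unless every coordinate is a Fraction."""
    if not all(isinstance(c, Fraction) for p in points for c in p):
        return None
    n = len(points)
    x = sum(p[0] for p in points) / n
    y = sum(p[1] for p in points) / n
    return (x, y)

points = [(Fraction(4, 5), Fraction(1, 2)), (Fraction(4, 2), Fraction(3, 1))]
print(centre(points))  # (Fraction(7, 5), Fraction(7, 4))
print(centre([(4.0, 5.0), (4.0, 3.0)]))  # None -- floats are rejected
```

Because `Fraction` arithmetic is exact, the centre is returned with no rounding error.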
Documentation/Tutorial librería Simplex.py.ipynb
carlosclavero/PySimplex
gpl-3.0
isThePoint This method receives a list of points whose coordinates are rational, a value (the computed distance to the centre), and the centre of the points in the list. It returns the point in the list whose distance to the centre equals the given value; if no point matches that distance, it returns None. If the parameters are not valid (see examples), it also returns None. Examples:
listPoints = [(rational(4,5), rational(1,2)), (rational(4,2), rational(3,1)), (rational(8,3), rational(3,5)),
              (rational(7,2), rational(4,5)), (rational(7,9), rational(4,9)), (rational(9,8), rational(10,7))]
M = (1.811574074074074, 1.1288359788359787)
value = 2.7299657524245156
point = Simplex.isThePoint(listPoints, value, M)
print("(" + str(point[0]) + "," + str(point[1]) + ")")

# If the first parameter is not a list of rational points, the second is not a number, or the third is not a tuple,
# it returns None (check whether the centre accepts floats)
print(Simplex.isThePoint(listPoints, value, 4))
Documentation/Tutorial librería Simplex.py.ipynb
carlosclavero/PySimplex
gpl-3.0
calculateOrder This method receives a list of points whose coordinates are rational and returns the same list of points sorted clockwise. If it is not given a list of rationals, it returns None. Examples:
listPoints = [(rational(4,5), rational(1,2)), (rational(4,2), rational(3,1)), (rational(8,3), rational(3,5)),
              (rational(7,2), rational(4,5)), (rational(7,9), rational(4,9)), (rational(9,8), rational(10,7))]
Simplex.calculateOrder(listPoints)

# If it receives something that is not a list of points with rational coordinates, it returns None
listPoints = [(4.0,5.0), (4.0,3.0), (8.0,5.0), (7.0,4.0), (7.0,9.0), (10.0,4.0)]
print(Simplex.calculateOrder(listPoints))
Documentation/Tutorial librería Simplex.py.ipynb
carlosclavero/PySimplex
gpl-3.0
pointIsInALine This method receives a point as a tuple, a constraint (without signs or resources) as a numpy array, and the resource as a number. It returns True if the point lies on the line that the constraint represents in the plane, and False otherwise. If the parameters are not valid (see examples), it returns None. Examples:
# If the point is on the line, it returns True
point = (3,4)
line = np.array([3,2])
resource = 17
Simplex.pointIsInALine(point, line, resource)

# The method works with rational
point = (rational(3,1), rational(4,2))
line = np.array([rational(3,3), rational(2,1)])
resource = rational(7,1)
Simplex.pointIsInALine(point, line, resource)

# If the point is not on the line, it returns False
point = (3,4)
line = np.array([3,2])
resource = 10
Simplex.pointIsInALine(point, line, resource)

# The method does not work exactly with floats
point = (3.0,4.0)
line = np.array([3.0,2.0])
resource = 17.00001
Simplex.pointIsInALine(point, line, resource)

# If the first parameter is not a tuple, the second is not a numpy array, or the third is not a number,
# it returns None
print(Simplex.pointIsInALine(point, 3, resource))
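As an illustration of why exact rational arithmetic succeeds where floats fail, here is a minimal stand-alone sketch of the same membership test (a hypothetical `point_is_in_a_line`, with `fractions.Fraction` standing in for PySimplex's `rational`):

```python
from fractions import Fraction
import numpy as np

def point_is_in_a_line(point, line, resource):
    """True when line . point == resource exactly; None on invalid input."""
    if not (isinstance(point, tuple) and isinstance(line, np.ndarray)):
        return None
    return bool(sum(a * x for a, x in zip(line, point)) == resource)

print(point_is_in_a_line((3, 4), np.array([3, 2]), 17))  # True: 3*3 + 2*4 == 17
print(point_is_in_a_line((Fraction(3, 1), Fraction(2, 1)),
                         np.array([Fraction(1, 1), Fraction(2, 1)]),
                         Fraction(7, 1)))  # True: 1*3 + 2*2 == 7, exactly
```

With floats, `3.0*3.0 + 2.0*4.0` would have to match `17.00001` bit-for-bit, so the comparison fails; with integers or `Fraction`s the equality is exact.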
Documentation/Tutorial librería Simplex.py.ipynb
carlosclavero/PySimplex
gpl-3.0
deleteLinePointsOfList This method receives a set of points as a list, a numpy array with a set of constraints (without signs or resources), and a numpy array with the constraints' resources. It returns the list of points with those points removed that lie on the line represented by any of the given constraints. If the parameters are not valid (see examples), it returns None. Examples:
# Removes the last point, which lies on one of the lines
listPoints = [(rational(3,1), rational(5,7)), (rational(5,8), rational(6,2)), (rational(4,6), rational(8,9)),
              (rational(8,1), rational(2,1))]
matrix = np.array([[rational(2,1), rational(1,1)], [rational(1,1), rational(-1,1)], [rational(5,1), rational(2,1)]])
resources = np.array([rational(18,1), rational(8,1), rational(0,1)])
Simplex.deleteLinePointsOfList(listPoints, matrix, resources)

# If it receives something that is not a list of points with rational coordinates, or something that is not a
# numpy array with rational elements in the second or third parameter, it returns None
print(Simplex.deleteLinePointsOfList(listPoints, 4, resources))
Documentation/Tutorial librería Simplex.py.ipynb
carlosclavero/PySimplex
gpl-3.0
showProblemSolution This method solves the linear programming problem passed to it, graphically. It receives a numpy matrix with the constraints (without signs or resources), a numpy array with the resources, a list of strings with the constraints' signs, a string with the objective function in the format "max/min 2 -3", and either False or a name, which determines whether to save the image to a file with the given name. The method shows the graphical solution as long as the problem has only 2 variables; otherwise it returns None. The problem does not need to be given in standard form. If the parameters are not valid (see examples), it returns None. Examples:
%matplotlib inline
matrix = np.matrix([[rational(2,1), rational(1,1)], [rational(1,1), rational(-1,1)], [rational(5,1), rational(2,1)]])
resources = np.array([rational(18,1), rational(8,1), rational(0,1)])
signs = ["<=", "<=", ">="]
function = "max 2 1"
save = False
Simplex.showProblemSolution(matrix, resources, signs, function, save)

# If the number of signs differs from the length of the resources vector or from the number of rows in the matrix,
# it returns None
matrix = np.matrix([[2,1], [1,-1], [5,2]])
resources = np.array([[18], [8]])
signs = ["<=", "<=", ">="]
function = "max 2 1"
save = False
print(Simplex.showProblemSolution(matrix, resources, signs, function, save))

# If the first parameter is not a numpy matrix with rational elements, the second is not a numpy array with
# rational elements, the third is not a list of strings, the fourth is not a string, or the fifth is neither
# False nor a string, it returns None
matrix = np.matrix([[2,1], [1,-1], [5,2]])
resources = np.array([[18], [8], [4]])
signs = ["<=", "<=", ">="]
function = "max 2 1"
print(Simplex.showProblemSolution(matrix, resources, signs, function, False))
Documentation/Tutorial librería Simplex.py.ipynb
carlosclavero/PySimplex
gpl-3.0
Fitting a decaying oscillation For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays: tdata: an array of time values ydata: an array of y values dy: the absolute uncertainties (standard deviations) in y Your job is to fit the following model to this data: $$ y(t) = A e^{-\lambda t} \cos(\omega t + \delta) $$ First, import the data using NumPy and make an appropriately styled error bar plot of the raw data.
import numpy as np
import matplotlib.pyplot as plt

data = np.load('decay_osc.npz')
tdata = data['tdata']
ydata = data['ydata']
dy = data['dy']

plt.errorbar(tdata, ydata, dy, fmt='k.')
assert True # leave this to grade the data import and raw data plot
assignments/assignment12/FittingModelsEx02.ipynb
rvperry/phys202-2015-work
mit
Now, use curve_fit to fit this model and determine the estimates and uncertainties for the parameters: Print the parameter estimates and uncertainties. Plot the raw data and the best fit model. You will likely have to pass an initial guess to curve_fit to get a good fit. Treat the uncertainties in $y$ as absolute errors by passing absolute_sigma=True.
def model(t, A, o, l, d):
    # delta enters as a phase, matching the model y(t) = A exp(-lambda t) cos(omega t + delta)
    return A * np.exp(-l * t) * np.cos(o * t + d)

theta_best, theta_cov = opt.curve_fit(model, tdata, ydata, p0=np.array((6, 1, 1, 0)), sigma=dy,
                                      absolute_sigma=True)
print('A = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0, 0])))
print('omega = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1, 1])))
print('lambda = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2, 2])))
print('delta = {0:.3f} +/- {1:.3f}'.format(theta_best[3], np.sqrt(theta_cov[3, 3])))

tfit = np.linspace(0, 20, 100)
yfit = model(tfit, *theta_best)
plt.plot(tfit, yfit)
plt.plot(tdata, ydata, 'k.')
plt.xlabel('time')
plt.ylabel('y')
plt.title('Decaying Oscillator')
plt.axhline(0, color='lightgray')
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
assignments/assignment12/FittingModelsEx02.ipynb
rvperry/phys202-2015-work
mit
2. Compute the area of a circle with radius 5
import math

r = 5
a = math.pi * r**2
print(a)
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
3. Write code that prints all the colors that are in color_list_1 and are not present in color_list_2 Expected result: {'Black', 'White'}
color_list_1 = set(["White", "Black", "Red"])
color_list_2 = set(["Red", "Green"])
print(color_list_1)
print(color_list_1 - color_list_2)  # set difference: {'White', 'Black'}
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
4 Print one line for each folder that makes up the path where Python is running, e.g. C:/User/sergio/code/programación Expected output: + User + sergio + code + programacion
import os

wkd = os.getcwd()
for folder in wkd.split(os.sep):
    if folder:
        print('+ ' + folder)
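A sketch with the standard `pathlib` module that produces the `+ folder` lines from the example path in the prompt (the path here is the hypothetical one from the statement; in practice `Path.cwd()` would supply it):

```python
from pathlib import PurePosixPath

# Example path from the exercise statement
example = PurePosixPath('C:/User/sergio/code/programacion')

# parts[0] is the drive/anchor; the rest are the folders
lines = ['+ ' + part for part in example.parts[1:]]
print('\n'.join(lines))
```

`PurePosixPath` is used so the split behaves the same on every platform.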
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Working with Lists 5. Print the sum of the numbers in my_list
my_list = [5, 7, 8, 9, 17]
print(my_list)
suma = 0
for i in my_list:
    suma += i
print(suma)  # 46
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
6. Insert an elemento_a_insertar before each element of my_list
elemento_a_insertar = 'E' my_list = [1, 2, 3, 4]
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
The expected output is a list like this: [E, 1, E, 2, E, 3, E, 4]
print(my_list)
print(elemento_a_insertar)
my_list.insert(0, elemento_a_insertar)
my_list.insert(2, elemento_a_insertar)
my_list.insert(4, elemento_a_insertar)
my_list.insert(6, elemento_a_insertar)
print(my_list)  # ['E', 1, 'E', 2, 'E', 3, 'E', 4]
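The manual `insert` calls only work for a list of exactly four elements; a flat comprehension handles any length:

```python
elemento_a_insertar = 'E'
my_list = [1, 2, 3, 4]

# Pair the marker with each element, then flatten the pairs into one list
interleaved = [x for item in my_list for x in (elemento_a_insertar, item)]
print(interleaved)  # ['E', 1, 'E', 2, 'E', 3, 'E', 4]
```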
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
7. Split my_list into a list of lists every N elements
N = 3 my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Expected output: [['a', 'd', 'g', 'j', 'm'], ['b', 'e', 'h', 'k', 'n'], ['c', 'f', 'i', 'l']]
N = 3
# Distribute the elements round-robin: element i goes to sublist i % N
new_list = [[] for _ in range(N)]
for i, item in enumerate(my_list):
    new_list[i % N].append(item)
print(new_list)
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
8. Find the list inside list_of_lists whose elements have the largest sum
list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Expected output: [10, 11, 12]
# max alone compares the lists element by element; use key=sum to compare by sum
print(max(list_of_lists, key=sum))
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Working with Dictionaries 9. Create a dictionary that, for each number from 1 to N as key, has that number squared as value
N = 5
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Expected output: {1:1, 2:4, 3:9, 4:16, 5:25}
N = 5
D = {}
for i in range(1, N + 1):  # range(1, N + 1) covers 1..N inclusive
    D[i] = i**2
print(D)  # {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
10. Concatenate the dictionaries in dictionary_list to create a new one
dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Expected output: {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60}
new_dic = {}
for d in dictionary_list:
    new_dic.update(d)
print(new_dic)  # {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60}
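For reference, the same merge can be written as a single dict comprehension (later dictionaries win on duplicate keys):

```python
dictionary_list = [{1: 10, 2: 20}, {3: 30, 4: 40}, {5: 50, 6: 60}]

# Flatten every key/value pair of every dictionary into one new dict
merged = {k: v for d in dictionary_list for k, v in d.items()}
print(merged)  # {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60}
```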
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
11. Add a new key "cuadrado" whose value is the "numero" of each dictionary squared
dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Expected output: [{'numero': 10, 'cantidad': 5, 'cuadrado': 100}, {'numero': 12, 'cantidad': 3, 'cuadrado': 144}, {'numero': 5, 'cantidad': 45, 'cuadrado': 25}]
for d in dictionary_list:
    d['cuadrado'] = d['numero']**2
print(dictionary_list)
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Working with Functions 12. Define and call a function that receives 2 parameters and solves problem 3
def loca(list1, list2):
    print(list1 - list2)

loca(color_list_1, color_list_2)
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
13. Define and call a function that receives a list of lists as a parameter and solves problem 8
def marx(lista):
    return max(lista, key=sum)  # compare the sublists by their sums

print(marx(list_of_lists))
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
14. Define and call a function that receives a parameter N and solves problem 9
def dic(N):
    Dict = {}
    for i in range(1, N + 1):  # 1..N inclusive
        Dict[i] = i**2
    return Dict

print(dic(4))  # {1: 1, 2: 4, 3: 9, 4: 16}
Camilo/Taller 1.ipynb
spulido99/Programacion
mit
Processing Test Consolidating the returned CSVs into one is relatively painless. The main issue is that, for some reason, the time is still in GMT and needs 5 hours (in milliseconds) subtracted from the epoch. Validating against a Weather Underground read from O'Hare
import os
import boto3
import numpy as np
import pandas as pd
from botocore.handlers import disable_signing

s3_client = boto3.client('s3')
resource = boto3.resource('s3')
# Disable signing for anonymous requests to the public bucket
resource.meta.client.meta.events.register('choose-signer.s3.*', disable_signing)

def file_list(client, bucket, prefix=''):
    for result in client.list_objects(Bucket=bucket, Prefix=prefix, Delimiter='/')['Contents']:
        yield result.get('Key')

gen_s3_files = list(file_list(s3_client, 'nexrad-etl', prefix='test-aug3/'))
for i, f in enumerate(gen_s3_files):
    s3_client.download_file('nexrad-etl', f, 'test-aug3/nexrad{}.csv'.format(i))

folder_files = os.listdir(os.path.join(os.getcwd(), 'test-aug3'))
nexrad_df_list = list()
for f in folder_files:
    if f.endswith('.csv'):
        try:
            nexrad_df_list.append(pd.read_csv('test-aug3/{}'.format(f)))
        except Exception:
            pass
print(len(nexrad_df_list))

merged_nexrad = pd.concat(nexrad_df_list)
# Shift the epoch values back 5 hours from GMT before parsing
merged_nexrad['timestamp'] = pd.to_datetime(((merged_nexrad['timestamp'] / 1000) - (5*3600*1000)), unit='ms')
merged_nexrad = merged_nexrad.set_index(pd.DatetimeIndex(merged_nexrad['timestamp']))
merged_nexrad = merged_nexrad.sort_values('timestamp')
merged_nexrad = merged_nexrad.fillna(0.0)

# Get diff between consecutive reads
merged_nexrad['diff'] = merged_nexrad['timestamp'].diff()
merged_nexrad = merged_nexrad[1:]
print(merged_nexrad.shape)
merged_nexrad.index.min()

# Convert the diff to hours
merged_nexrad['diff'] = (merged_nexrad['diff'] / np.timedelta64(1, 'm')).astype(float) / 60
merged_nexrad.head()

aug_day_ohare = merged_nexrad['2016-08-12'][['timestamp', '60666', 'diff']]
aug_day_ohare.head()
# Rate times elapsed hours, converted from mm to inches
aug_day_ohare['60666'] = (aug_day_ohare['60666'] * aug_day_ohare['diff']) / 25.4
aug_day_ohare.head()
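Subtracting a fixed 5 hours hard-codes the CDT offset and would be off by an hour outside daylight saving time. A safer sketch (assuming the `timestamp` column holds epoch milliseconds in UTC) lets pandas apply the DST-aware offset:

```python
import pandas as pd

# One epoch-millisecond reading: 2016-08-10 11:00:00 UTC
ts_ms = pd.Series([1470826800000])

# Parse as UTC, then convert to Chicago local time (DST handled automatically)
local = pd.to_datetime(ts_ms, unit='ms').dt.tz_localize('UTC').dt.tz_convert('America/Chicago')
print(local.iloc[0])  # 2016-08-10 06:00:00-05:00
```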
nexrad-etl/Validate NEXRAD with Weather Underground.ipynb
NORCatUofC/rain
mit
NEXRAD at O'Hare Zip 60666
# Checking against the Weather Underground read for O'Hare on this day
print(aug_day_ohare['60666'].sum())
aug_day_ohare['60666'].plot()
nexrad-etl/Validate NEXRAD with Weather Underground.ipynb
NORCatUofC/rain
mit
Wunderground
wunderground = pd.read_csv('test-aug3/aug-12.csv')
wunderground['PrecipitationIn'] = wunderground['PrecipitationIn'].fillna(0.0)
wunderground['TimeCDT'] = pd.to_datetime(wunderground['TimeCDT'])
wunderground = wunderground.set_index(pd.DatetimeIndex(wunderground['TimeCDT']))
wund_hour = wunderground['PrecipitationIn'].resample('1H').max()
print(wund_hour.sum())
wund_hour.plot()
nexrad-etl/Validate NEXRAD with Weather Underground.ipynb
NORCatUofC/rain
mit
Part 1: Data Wrangle Load and transform the data for analysis
# load federal document data from pickle file
fed_reg_data = r'data/fed_reg_data.pickle'
fed_data = pd.read_pickle(fed_reg_data)

# load twitter data from pickle file
twitter_file_path = r'data/twitter_01_20_17_to_3-2-18.pickle'
twitter_data = pd.read_pickle(twitter_file_path)

# Change the index (date) to a column
fed_data['date'] = fed_data.index
twitter_data['date'] = twitter_data.index
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Combine data for analysis <p> Create a dataframe that contains: <ul> <li> Each document, from both data sets, as a string </li> <li> The date the text was published </li> <li> A label for the type of document (0= twitter doc, 1= federal doc) </li> </ul> </p>
# keep text strings and rename columns
fed = fed_data[['str_text', 'date']].rename({'str_text': 'texts'}, axis='columns')
tweet = twitter_data[['text', 'date']].rename({'text': 'texts'}, axis='columns')

# Add a label for the type of document (Tweet = 0, Fed = 1)
tweet['label'] = 0
fed['label'] = 1

# concatenate the dataframes
comb_text = pd.concat([fed, tweet])

# Re-index so that each doc has a unique id_number
comb_text = comb_text.reset_index()
comb_text['ID'] = range(0, len(comb_text))

# Look at the dataframe to make sure it worked
comb_text = comb_text[['texts', 'date', 'label', 'ID']]
comb_text.head(3)
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Transform text data into a word-frequency array <p> Computers cannot understand a text like humans, so in order to analyze text data, I first need to make every word a feature (column) in an array, where each document (row) is represented by the weighted* frequency of each word (column) it contains. An example text and array are shown below. </p> <p> Using Scikit Learn to create a word-frequency array: <ul> <li> Define a list of stop words (nonsense or non-meaningful words, such as 'the', 'a', 'of', 'q34fqwer3'). </li> <li> Instantiate a tf-idf object (term frequency-inverse document frequency reweighting) that removes the stop words and filters out any word that appears in more than 99% of the documents</li> <li> Create a matrix representation of the documents </li> <li> Create a list of the words each feature (column) represents </li> <li> Print a list of the excluded words </li> </ul> </p> *Weighting the word frequencies ensures that very frequently used domain-specific words are considered less important during the analysis
# nonsense words, and standard words like proclimation and dates more_stop = set(['presidential', 'documents', 'therfore','i','donald', 'j', 'trump', 'president', 'order', 'authority', 'vested', 'articles','january','february','march','april','may','june','july','august','september','october', 'november','december','jan','feb','mar','apr','jun','jul','aug','sep','oct','nov','dec', '2017','2018','act','agencies','agency','wh','rtlwanjjiq','pmgil08opp','blkgzkqemw','qcdljff3wn','erycjgj23r ','fzep1e9mo7','m0hmpbuz6c','rdo6jt2pip','kyv866prde','aql4jlvndh', 'tx5snacaas','t0eigo6lp8','jntoth0mol','8b8aya7v1s', 'x25t9tqani','q7air0bum2','ypfvhtq8te','ejxevz3a1r','1zo6zc2pxt', 'strciewuws','lhos4naagl','djlzvlq6tj', 'theplumlinegs', '3eyf3nir4b','cbewjsq1a3','lvmjz9ax0u', 'dw0zkytyft','sybl47cszn','6sdcyiw4kt','¼ï','yqf6exhm7x','cored8rfl2','6xjxeg1gss','dbvwkddesd', 'ncmsf4fqpr','twunktgbnb','ur0eetseno','ghqbca7yii','cbqrst4ln4','c3zikdtowc','6snvq0dzxn','ekfrktnvuy', 'k2jakipfji','œthe ','p1fh8jmmfa','vhmv7qoutk','mkuhbegzqs','ajic3flnki','mvjbs44atr', 'wakqmkdpxa','e0bup1k83z','ðÿ','ºðÿ','µðÿ','eqmwv1xbim','hlz48rlkif','td0rycwn8c','vs4mnwxtei','75wozgjqop', 'e1q36nkt8g','u8inojtf6d','rmq1a5bdon','5cvnmhnmuh','pdg7vqqv6m','s0s6xqrjsc','5cvnmhnmuh','wlxkoisstg', 'tmndnpbj3m','dnzrzikxhd','4qckkpbtcr','x8psdeb2ur','fejgjt4xp9','evxfqavnfs','aty8r3kns2','pdg7vqqv6m','nqhi7xopmw', 'lhos4naagl','32tfova4ov','zkyoioor62','np7kyhglsv','km0zoaulyh','kwvmqvelri','pirhr7layt', 'v3aoj9ruh4','https','cg4dzhhbrv','qojom54gy8','75wozgjqop','aty8r3kns2','nxrwer1gez','rvxcpafi2a','vb0ao3s18d', 'qggwewuvek','ddi1ywi7yz','r5nxc9ooa4','6lt9mlaj86','1jb53segv4','vhmv7qoutk','i7h4ryin3h', 'aql4jlvndh','yfv0wijgby','nonhjywp4j','zomixteljq','iqum1rfqso','2nl6slwnmh','qejlzzgjdk', 'p3crvve0cy','s0s6xqrjsc','gkockgndtc','2nl6slwnmh','zkyoioor62','clolxte3d4','iqum1rfqso', 'msala9poat','p1f12i9gvt','mit2lj7q90','qejlzzgjdk','pjldxy3hd9','vjzkgtyqb9','b2nqzj53ft', 
'tpz7eqjluh','enyxyeqgcp','avlrroxmm4','2kuqfkqbsx','kwvmqvelri','œi','9lxx1iqo7m','vdtiyl0ua7', 'dmhl7xieqv','3jbddn8ymj','gysxxqazbl','ðÿž','tx5snacaas','4igwdl4kia','kqdbvxpekk','1avysamed4', 'cr4i8dvunc','bsp5f3pgbz','rlwst30gud','rlwst30gud','g4elhh9joh', '2017', 'January', 'kuqizdz4ra', 'nvdvrrwls4','ymuqsvvtsb', 'rgdu9plvfk','bk7sdv9phu','b5qbn6llze','xgoqphywrt ','hscs4y9zjk ', 'soamdxxta8','erycjgj23r','ryyp51mxdq','gttk3vjmku','j882zbyvkj','9pfqnrsh1z','ubbsfohmm7', 'xshsynkvup','xwofp9z9ir','1iw7tvvnch','qeeknfuhue','riqeibnwk2','seavqk5zy5','7ef6ac6kec', 'htjhrznqkj','8vsfl9mzxx','xgoqphywrt','zd0fkfvhvx','apvbu2b0jd','mstwl628xe','4hnxkr3ehw','mjij7hg3eu', '1majwrga3d','x6fuuxxyxe','6eqfmrzrnv','h1zi5xrkeo','kju0moxchk','trux3wzr3u','suanjs6ccz', 'ecf5p4hjfz','m5ur4vv6uh','8j7y900vgk','7ef6ac6kec','d0aowhoh4x','aqqzmt10x7','zauqz4jfwv', 'bmvjz1iv2a','gtowswxinv','1w3lvkpese','8n4abo9ihp','f6jo60i0ul','od7l8vpgjq','odlz2ndrta', '9tszrcc83j','6ocn9jfmag','qyt4bchvur','wkqhymcya3','tp4bkvtobq','baqzda3s2e','March','April', 'op2xdzxvnc','d7es6ie4fy','proclamation','hcq9kmkc4e','rf9aivvb7g','sutyxbzer9','s0t3ctqc40','aw0av82xde']) # defines all stop words my_stop = text.ENGLISH_STOP_WORDS.union(more_stop) # Instantiate TfidfVectorizer to remove common english words, and any word used in 99% of the documents tfidf = TfidfVectorizer(stop_words = my_stop , max_df = 0.99) # create matrix representation of all documents text_mat = tfidf.fit_transform(comb_text.texts) # make a list of feature words words = tfidf.get_feature_names()
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Excluded Words <p> Below is a printed list of all of the excluded words. I include this because I am not a political scientist or a linguist. What I consider to be nonsense maybe important and you may want to modify this list. </p>
# print excluded words from the matrix features print(tfidf.get_stop_words())
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Part 2: Analysis Use unsupervised machine learning to analyze both President Trump's tweets and official presidential actions, and explore any correlation between the two Part 2A: Determine the documents' topics <p> Model the documents with non-negative matrix factorization (NMF): <ul> <li> Instantiate an NMF model with 260 components (1/10th the number of documents), initialized with Nonnegative Double Singular Value Decomposition (NNDSVD, better for sparseness)</li> <li> Fit the model (learn the NMF model for the tf-idf matrix)</li> <li> Transform the model, which applies the fit to the matrix </li> <li> Make a dataframe with the NMF components for each word </li> </ul> </p>
# instantiate the model
NMF_model = NMF(n_components=260, init='nndsvd')
# fit the model
NMF_model.fit(text_mat)
# transform the text frequency matrix using the fitted NMF model
nmf_features = NMF_model.transform(text_mat)
# create a dataframe with words as columns, NMF components as rows
components_df = pd.DataFrame(NMF_model.components_, columns=words)
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Part 2B: Find the top 5 topic words (components) for each document <p> Using the components dataframe create a dictionary with components as keys, and top words as values: <ul> <li> Make an empty dictionary and loop through each row of NMF components</li> <li> Add to the dictionary where the key is the NMF component and the value is the topic words for that component (the column names with the largest component values)</li> </ul> </p>
# create a dictionary with key = component, value = top 5 words
topic_dict = {}
for i in range(0, 260):
    component = components_df.iloc[i, :]
    topic_dict[i] = component.nlargest()

# look at a few of the component topics
print(topic_dict[0].index)
print(topic_dict[7].index)
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Part 2C: Cosine Similarity <p> The informal and non-regular grammar used in tweets makes a direct comparison with documents published by the Executive Office, which uses formal vocabulary and grammar, difficult. Therefore, I will use the metric cosine similarity, which compares the distance between feature vectors, instead of direct word comparison. Higher cosine similarities between two documents indicate greater topic similarity. </p> <p>Calculating cosine similarities of NMF features: <ul> <li> Normalize the NMF features (calculated in part 2A)</li> <li> Create a dataframe where each row contains the normalized NMF features for a document and its ID number</li> <li> Look at each row (decomposed article) and calculate its cosine similarity to all other documents' normalized NMF features </li> <li> Create a dictionary where the key is the document ID, and the value is a pandas series of the 5 most similar documents (including itself)</li>
# normalize the previously found NMF features
norm_features = normalize(nmf_features)
# dataframe of document NMF features: rows are documents, columns are NMF components
df_norms = pd.DataFrame(norm_features)
# initialize an empty dictionary
similarity_dict = {}
# loop through each row of the df_norms dataframe
for i in range(len(norm_features)):
    # isolate one row, by ID number
    row = df_norms.loc[i]
    # calculate the top cosine similarities
    top_sim = (df_norms.dot(row)).nlargest()
    # append results to the dictionary
    similarity_dict[i] = (top_sim.index, top_sim)
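Because the rows are normalized, `df_norms.dot(row)` computes cosine similarities directly. As a minimal illustration of the metric itself (plain numpy, separate from the notebook's pipeline):

```python
import numpy as np

def cosine_similarity(u, v):
    """cos(theta) = u . v / (|u| |v|)"""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 0.0])
print(cosine_similarity(a, b))  # 0.5: dot = 1, each norm = sqrt(2)
```

For unit-length vectors the denominator is 1, which is why normalizing first reduces the whole computation to a dot product.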
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Part 3: Use the cosine similarity results to explore how (or if) President Trump's tweets and official actions correlate Part 3A: Find Twitter documents that have at least one federal document in their top 5 cosine similarity scores (and vice versa) <p> Using the results of part 2C, find which types of documents are the most similar, then sum the labels (0 = twitter, 1 = federal document). If similar documents are a mix of tweets and federal documents, then the sum of their labels will be 1, 2, 3 or 4. <ul> <li> Create a dataframe with the document ID number as the index and the document type label (tweet = 0, fed_doc = 1)</li> <li> Loop through each document in the dataframe and use the similarity dictionary to find the list of most similar document ID numbers and the sum of the similarity scores</li> <li> For each list of similar documents, sum the value of the document type labels. If the sum value is 1, 2, 3, or 4, that means there are both tweets and federal documents in the group</li> </ul> </p>
# dataframe with document ID and labels
doc_label_df = comb_text[['label', 'ID']].copy().set_index('ID')
# initialize lists for the summed labels and similarity scores
label_sums = []
similarity_score_sum = []
# loop through all of the documents
for doc_num in doc_label_df.index:
    # sum the similarity scores
    similarity_sum = similarity_dict[doc_num][1].sum()
    similarity_score_sum.append(similarity_sum)
    # find the list of similar document IDs
    similar_doc_ID_list = list(similarity_dict[doc_num][0])
    # sum the label values for each similar document
    s_label = 0
    for ID_num in similar_doc_ID_list:
        s_label = s_label + doc_label_df.loc[ID_num].label
    # append the sum of the labels for ONE document
    label_sums.append(s_label)

# add the similarity score sum to the dataframe as a separate column
doc_label_df['similarity_score_sum'] = similarity_score_sum
# add the similar documents' summed label value to the dataframe as a separate column
doc_label_df['sum_of_labels'] = label_sums
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Part 3B: Look at the topics of tweets that have similar federal documents (and vice versa) <p> Isolate documents with mixed types of similar documents and high similarity scores <ul> <li> Filter the dataframe to include only rows whose sum_of_labels is 1, 2, 3, or 4</li> <li> Filter again to only include groups with high combined similarity scores</li> <li> Remove any duplicate groups </li> </ul> </p>
# Filter for federal documents with similar tweets, and vice versa
mixed = (doc_label_df['sum_of_labels'] != 0) & (doc_label_df['sum_of_labels'] != 5)
df_filtered = doc_label_df[mixed].copy().reset_index()
# Make sure it worked
print(df_filtered.head())
print(len(df_filtered))

# Keep the groups whose top 5 documents all have a cosine similarity score of 0.9 or above;
# the sum of the scores then needs to be 4.6 or higher
similar_score_min = 4.6
highly_similar = df_filtered[df_filtered.similarity_score_sum >= similar_score_min]
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Remove duplicate highly similar groups
# create a list of all the group lists
doc_groups = []
for doc_id in highly_similar.ID:
    doc_groups.append(sorted(list(similarity_dict[doc_id][0])))
# make the interior lists tuples, then build a set of them to drop duplicates
unique_groups = set([tuple(x) for x in doc_groups])
unique_groups
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
Part 3C: Manually look at the documents. Are they similar? Components = 100, highly similar score = 4.9 <p> Four of the 5 unique groups are basically the same <ul> {(58, 80, 105, 149, 1139), (58, 80, 126, 149, 1139), (58, 80, 126, 185, 1139), (58, 80, 149, 185, 1139), (131, 170, 478, 479, 2044)} </ul> Those components (58, 80, 105, 126, 149, 185, 1139) are all about national emergencies. The fifth group is about national security and national emergencies </p> Components = 260, highly similar cutoff score = 4.6 The 6 unique groups can be further distilled to one set (27, 28, 229, 248, 196, 203, 2576, 2546, 204, 1151, 1892)
print(comb_text.texts.loc[1892]) print(comb_text.texts.loc[27])
similarity_analysis.ipynb
mtchem/Twitter-Politics
mit
As always, let's do imports and initialize a logger and a new Bundle.
import phoebe
from phoebe import u # units

logger = phoebe.logger()
b = phoebe.default_binary()
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Accessing Settings Settings are found with their own context in the Bundle and can be accessed through the get_setting method
b.get_setting()
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
or via filtering/twig access
b['setting']
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
and can be set as any other Parameter in the Bundle Available Settings Now let's look at each of the available settings and what they do phoebe_version phoebe_version is a read-only parameter in the settings to store the version of PHOEBE used. dict_set_all dict_set_all is a BooleanParameter (defaults to False) that controls whether attempting to set a value to a ParameterSet via dictionary access will set all the values in that ParameterSet (if True) or raise an error (if False)
b['dict_set_all@setting']
b['teff@component']
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
In our default binary there are temperatures ('teff') parameters for each of the components ('primary' and 'secondary'). If we were to do: b['teff@component'] = 6000 this would raise an error. Under-the-hood, this is simply calling: b.set_value('teff@component', 6000) which of course would also raise an error. In order to set both temperatures to 6000, you would either have to loop over the components or call the set_value_all method:
b.set_value_all('teff@component', 4000)
print(b['value@teff@primary@component'], b['value@teff@secondary@component'])
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
If you want dictionary access to use set_value_all instead of set_value, you can enable this parameter
b['dict_set_all@setting'] = True
b['teff@component'] = 8000

print(b['value@teff@primary@component'], b['value@teff@secondary@component'])
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's disable this so it doesn't confuse us while looking at the other options
b.set_value_all('teff@component', 6000)
b['dict_set_all@setting'] = False
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
dict_filter dict_filter is a Parameter that accepts a dictionary. This dictionary will then always be sent to the filter call which is done under-the-hood during dictionary access.
b['incl']
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
In our default binary, there are several inclination parameters - one for each component ('primary', 'secondary', 'binary') and one with the constraint context (to keep the inclinations aligned). This can be inconvenient... if you want to set the value of the binary's inclination, you must always provide extra information (like '@component'). Instead, we can always have the dictionary access search in the component context by doing the following
b['dict_filter@setting'] = {'context': 'component'}

b['incl']
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now we no longer see the constraint parameters. All parameters are always accessible with method access:
b.filter(qualifier='incl')
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's reset this option... keeping in mind that we no longer have access to the 'setting' context through twig access, we'll have to use methods to clear the dict_filter
b.set_value('dict_filter@setting', {})
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
run_checks_compute (/figure/solver/solution) The run_checks_compute option allows setting the default compute option(s) sent to b.run_checks, including warnings in the logger raised by interactive checks (see phoebe.interactive_checks_on). Similar options also exist for checks at the figure, solver, and solution level.
b['run_checks_compute@setting']

b.add_dataset('lc')
b.add_compute('legacy')
print(b.run_checks())

b['run_checks_compute@setting'] = ['phoebe01']
print(b.run_checks())
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
auto_add_figure, auto_remove_figure The auto_add_figure and auto_remove_figure determine whether new figures are automatically added to the Bundle when new datasets, distributions, etc are added. This is False by default within Python, but True by default within the UI clients.
b['auto_add_figure']
b['auto_add_figure'].description

b['auto_remove_figure']
b['auto_remove_figure'].description
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
web_client, web_client_url The web_client and web_client_url settings determine whether the client is opened in a web-browser or with the installed desktop client whenever calling b.ui or b.ui_figures. For more information, see the UI from Jupyter tutorial.
b['web_client']
b['web_client'].description

b['web_client_url']
b['web_client_url'].description
development/tutorials/settings.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Estimators

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/guide/estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>

Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details.

This document introduces tf.estimator—a high-level TensorFlow API. Estimators encapsulate the following actions:

Training
Evaluation
Prediction
Export for serving

TensorFlow implements several pre-made Estimators. Custom estimators are still supported, but mainly as a backwards compatibility measure. Custom estimators should not be used for new code. All Estimators—pre-made or custom ones—are classes based on the tf.estimator.Estimator class.

For a quick example, try Estimator tutorials. For an overview of the API design, check the white paper.

Setup
!pip install -U tensorflow_datasets

import tempfile
import os

import tensorflow as tf
import tensorflow_datasets as tfds
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Advantages Similar to a tf.keras.Model, an estimator is a model-level abstraction. The tf.estimator provides some capabilities currently still under development for tf.keras. These are:

Parameter server based training
Full TFX integration

Estimators Capabilities Estimators provide the following benefits:

You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.
Estimators provide a safe distributed training loop that controls how and when to:
Load data
Handle exceptions
Create checkpoint files and recover from failures
Save summaries for TensorBoard

When writing an application with Estimators, you must separate the data input pipeline from the model. This separation simplifies experiments with different datasets.

Using pre-made Estimators Pre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the "plumbing" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. tf.estimator.DNNClassifier, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks.

A TensorFlow program relying on a pre-made Estimator typically consists of the following four steps:

1. Write an input function

For example, you might create one function to import the training set and another function to import the test set. Estimators expect their inputs to be formatted as a pair of objects:

A dictionary in which the keys are feature names and the values are Tensors (or SparseTensors) containing the corresponding feature data
A Tensor containing one or more labels

The input_fn should return a tf.data.Dataset that yields pairs in that format.
For example, the following code builds a tf.data.Dataset from the Titanic dataset's train.csv file:
def train_input_fn():
  titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
  titanic = tf.data.experimental.make_csv_dataset(
      titanic_file, batch_size=32,
      label_name="survived")
  titanic_batches = (
      titanic.cache().repeat().shuffle(500)
      .prefetch(tf.data.AUTOTUNE))
  return titanic_batches
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
The input_fn is executed in a tf.Graph and can also directly return a (features_dict, labels) pair containing graph tensors, but this is error-prone outside of simple cases like returning constants.

2. Define the feature columns. Each tf.feature_column identifies a feature name, its type, and any input pre-processing. For example, the following snippet creates three feature columns.

The first uses the age feature directly as a floating-point input.
The second uses the class feature as a categorical input.
The third uses the embark_town as a categorical input, but uses the hashing trick to avoid the need to enumerate the options, and to set the number of options.

For further information, check the feature columns tutorial.
age = tf.feature_column.numeric_column('age')
cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third'])
embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
3. Instantiate the relevant pre-made Estimator. For example, here's a sample instantiation of a pre-made Estimator named LinearClassifier:
model_dir = tempfile.mkdtemp()
model = tf.estimator.LinearClassifier(
    model_dir=model_dir,
    feature_columns=[embark, cls, age],
    n_classes=2
)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
For more information, you can go to the linear classifier tutorial.

4. Call a training, evaluation, or inference method. All Estimators provide train, evaluate, and predict methods.
model = model.train(input_fn=train_input_fn, steps=100)

result = model.evaluate(train_input_fn, steps=10)

for key, value in result.items():
  print(key, ":", value)

for pred in model.predict(train_input_fn):
  for key, value in pred.items():
    print(key, ":", value)
  break
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Benefits of pre-made Estimators Pre-made Estimators encode best practices, providing the following benefits: Best practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a cluster. Best practices for event (summary) writing and universally useful summaries. If you don't use pre-made Estimators, you must implement the preceding features yourself. Custom Estimators The heart of every Estimator—whether pre-made or custom—is its model function, model_fn, which is a method that builds graphs for training, evaluation, and prediction. When you are using a pre-made Estimator, someone else has already implemented the model function. When relying on a custom Estimator, you must write the model function yourself. Note: A custom model_fn will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies. You should plan to migrate away from tf.estimator with custom model_fn. The alternative APIs are tf.keras and tf.distribute. If you still need an Estimator for some part of your training you can use the tf.keras.estimator.model_to_estimator converter to create an Estimator from a keras.Model. Create an Estimator from a Keras model You can convert existing Keras models to Estimators with tf.keras.estimator.model_to_estimator. This is helpful if you want to modernize your model code, but your training pipeline still requires Estimators. Instantiate a Keras MobileNet V2 model and compile the model with the optimizer, loss, and metrics to train with:
keras_mobilenet_v2 = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False)
keras_mobilenet_v2.trainable = False

estimator_model = tf.keras.Sequential([
    keras_mobilenet_v2,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1)
])

# Compile the model
estimator_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=['accuracy'])
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Create an Estimator from the compiled Keras model. The initial model state of the Keras model is preserved in the created Estimator:
est_mobilenet_v2 = tf.keras.estimator.model_to_estimator(keras_model=estimator_model)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Treat the derived Estimator as you would any other Estimator.
IMG_SIZE = 160  # All images will be resized to 160x160

def preprocess(image, label):
  image = tf.cast(image, tf.float32)
  image = (image/127.5) - 1
  image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
  return image, label

def train_input_fn(batch_size):
  data = tfds.load('cats_vs_dogs', as_supervised=True)
  train_data = data['train']
  train_data = train_data.map(preprocess).shuffle(500).batch(batch_size)
  return train_data
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
To train, call Estimator's train function:
est_mobilenet_v2.train(input_fn=lambda: train_input_fn(32), steps=50)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Similarly, to evaluate, call the Estimator's evaluate function:
est_mobilenet_v2.evaluate(input_fn=lambda: train_input_fn(32), steps=10)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
For more details, please refer to the documentation for tf.keras.estimator.model_to_estimator.

Saving object-based checkpoints with Estimator Estimators by default save checkpoints with variable names rather than the object graph described in the Checkpoint guide. tf.train.Checkpoint will read name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's model_fn. For forward compatibility, saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one.
import tensorflow.compat.v1 as tf_compat

def toy_dataset():
  inputs = tf.range(10.)[:, None]
  labels = inputs * 5. + tf.range(5.)[None, :]
  return tf.data.Dataset.from_tensor_slices(
      dict(x=inputs, y=labels)).repeat().batch(2)

class Net(tf.keras.Model):
  """A simple linear model."""

  def __init__(self):
    super(Net, self).__init__()
    self.l1 = tf.keras.layers.Dense(5)

  def call(self, x):
    return self.l1(x)

def model_fn(features, labels, mode):
  net = Net()
  opt = tf.keras.optimizers.Adam(0.1)
  ckpt = tf.train.Checkpoint(step=tf_compat.train.get_global_step(),
                             optimizer=opt, net=net)
  with tf.GradientTape() as tape:
    output = net(features['x'])
    loss = tf.reduce_mean(tf.abs(output - features['y']))
  variables = net.trainable_variables
  gradients = tape.gradient(loss, variables)
  return tf.estimator.EstimatorSpec(
      mode,
      loss=loss,
      train_op=tf.group(opt.apply_gradients(zip(gradients, variables)),
                        ckpt.step.assign_add(1)),
      # Tell the Estimator to save "ckpt" in an object-based format.
      scaffold=tf_compat.train.Scaffold(saver=ckpt))

tf.keras.backend.clear_session()
est = tf.estimator.Estimator(model_fn, './tf_estimator_example/')
est.train(toy_dataset, steps=10)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
tf.train.Checkpoint can then load the Estimator's checkpoints from its model_dir.
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
ckpt = tf.train.Checkpoint(
    step=tf.Variable(1, dtype=tf.int64), optimizer=opt, net=net)
ckpt.restore(tf.train.latest_checkpoint('./tf_estimator_example/'))
ckpt.step.numpy()  # From est.train(..., steps=10)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
SavedModels from Estimators Estimators export SavedModels through tf.Estimator.export_saved_model.
input_column = tf.feature_column.numeric_column("x")

estimator = tf.estimator.LinearClassifier(feature_columns=[input_column])

def input_fn():
  return tf.data.Dataset.from_tensor_slices(
      ({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)

estimator.train(input_fn)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
To save an Estimator you need to create a serving_input_receiver. This function builds a part of a tf.Graph that parses the raw data received by the SavedModel. The tf.estimator.export module contains functions to help build these receivers. The following code builds a receiver, based on the feature_columns, that accepts serialized tf.Example protocol buffers, which are often used with tf-serving.
tmpdir = tempfile.mkdtemp()

serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    tf.feature_column.make_parse_example_spec([input_column]))

estimator_base_path = os.path.join(tmpdir, 'from_estimator')
estimator_path = estimator.export_saved_model(estimator_base_path, serving_input_fn)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
You can also load and run that model from Python:
imported = tf.saved_model.load(estimator_path)

def predict(x):
  example = tf.train.Example()
  example.features.feature["x"].float_list.value.extend([x])
  return imported.signatures["predict"](
      examples=tf.constant([example.SerializeToString()]))

print(predict(1.5))
print(predict(3.5))
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
tf.estimator.export.build_raw_serving_input_receiver_fn allows you to create input functions which take raw tensors rather than tf.train.Examples. Using tf.distribute.Strategy with Estimator (Limited support) tf.estimator is a distributed training TensorFlow API that originally supported the async parameter server approach. tf.estimator now supports tf.distribute.Strategy. If you're using tf.estimator, you can change to distributed training with very few changes to your code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs. This support in Estimator is, however, limited. Check out the What's supported now section below for more details. Using tf.distribute.Strategy with Estimator is slightly different than in the Keras case. Instead of using strategy.scope, now you pass the strategy object into the RunConfig for the Estimator. You can refer to the distributed training guide for more information. Here is a snippet of code that shows this with a premade Estimator LinearRegressor and MirroredStrategy:
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
    train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
    feature_columns=[tf.feature_column.numeric_column('feats')],
    optimizer='SGD',
    config=config)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Here, you use a premade Estimator, but the same code works with a custom Estimator as well. train_distribute determines how training will be distributed, and eval_distribute determines how evaluation will be distributed. This is another difference from Keras where you use the same strategy for both training and eval. Now you can train and evaluate this Estimator with an input function:
def input_fn():
  dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
  return dataset.repeat(1000).batch(10)

regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
site/en-snapshot/guide/estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Step 1: Build the MNIST LSTM model.
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(28, 28), name='input'),
    tf.keras.layers.LSTM(20, time_major=False, return_sequences=True),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb
sarvex/tensorflow
apache-2.0
Step 2: Train & Evaluate the model. We will train the model using MNIST data.
# Load MNIST dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)

# Change this to True if you want to test the flow rapidly.
# Train with a small dataset and only 1 epoch. The model will work poorly
# but this provides a fast way to test if the conversion works end to end.
_FAST_TRAINING = False
_EPOCHS = 5
if _FAST_TRAINING:
  _EPOCHS = 1
  _TRAINING_DATA_COUNT = 1000
  x_train = x_train[:_TRAINING_DATA_COUNT]
  y_train = y_train[:_TRAINING_DATA_COUNT]

model.fit(x_train, y_train, epochs=_EPOCHS)
model.evaluate(x_test, y_test, verbose=0)
tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb
sarvex/tensorflow
apache-2.0
Step 3: Convert the Keras model to TensorFlow Lite model.
run_model = tf.function(lambda x: model(x))
# This is important, let's fix the input size.
BATCH_SIZE = 1
STEPS = 28
INPUT_SIZE = 28
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))

# model directory.
MODEL_DIR = "keras_lstm"
model.save(MODEL_DIR, save_format="tf", signatures=concrete_func)

converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
tflite_model = converter.convert()
tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb
sarvex/tensorflow
apache-2.0
Step 4: Check the converted TensorFlow Lite model. Now load the TensorFlow Lite model and use the TensorFlow Lite python interpreter to verify the results.
# Run the model with TensorFlow to get expected results.
TEST_CASES = 10

# Run the model with TensorFlow Lite
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

for i in range(TEST_CASES):
  expected = model.predict(x_test[i:i+1])
  interpreter.set_tensor(input_details[0]["index"], x_test[i:i+1, :, :])
  interpreter.invoke()
  result = interpreter.get_tensor(output_details[0]["index"])

  # Assert if the result of TFLite model is consistent with the TF model.
  np.testing.assert_almost_equal(expected, result)
  print("Done. The result of TensorFlow matches the result of TensorFlow Lite.")

  # Please note: TfLite fused Lstm kernel is stateful, so we need to reset
  # the states.
  # Clean up internal states.
  interpreter.reset_all_variables()
tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb
sarvex/tensorflow
apache-2.0
Cross-validated pipelines When scaling is part of the workflow, we need to estimate the mean and standard deviation separately for each fold. To do that, we build a pipeline.
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

standard_scaler = StandardScaler()
standard_scaler.fit(X_train)
X_train_scaled = standard_scaler.transform(X_train)
svm = SVC().fit(X_train_scaled, y_train)

#pipeline = Pipeline([("scaler", StandardScaler()),
#                     ("svm", SVC())])
# short version:
pipeline = make_pipeline(StandardScaler(), SVC())

pipeline.fit(X_train, y_train)
pipeline.score(X_test, y_test)
pipeline.predict(X_test)
Preprocessing and Pipelines.ipynb
amueller/pydata-amsterdam-2016
cc0-1.0
Cross-validation with a pipeline
from sklearn.cross_validation import cross_val_score
cross_val_score(pipeline, X_train, y_train)
Preprocessing and Pipelines.ipynb
amueller/pydata-amsterdam-2016
cc0-1.0
Grid Search with a pipeline
import numpy as np
from sklearn.grid_search import GridSearchCV

param_grid = {'svc__C': 10. ** np.arange(-3, 3),
              'svc__gamma': 10. ** np.arange(-3, 3)}

grid_pipeline = GridSearchCV(pipeline, param_grid=param_grid)
grid_pipeline.fit(X_train, y_train)
grid_pipeline.score(X_test, y_test)
Preprocessing and Pipelines.ipynb
amueller/pydata-amsterdam-2016
cc0-1.0
Exercise Make a pipeline out of the StandardScaler and KNeighborsClassifier and search over the number of neighbors.
# %load solutions/pipeline_knn.py
Preprocessing and Pipelines.ipynb
amueller/pydata-amsterdam-2016
cc0-1.0
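A hedged sketch of one possible solution to the exercise above (this is an illustration, not the notebook's own solutions/pipeline_knn.py; it uses the current sklearn.model_selection import path, whereas the workshop-era code used sklearn.grid_search, and the toy X/y arrays stand in for the notebook's X_train/y_train):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

# Toy data standing in for the notebook's X_train / y_train.
rng = np.random.RandomState(0)
X = rng.randn(100, 4)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# make_pipeline names each step after its lowercased class name,
# so the KNN parameter is addressed as kneighborsclassifier__n_neighbors.
knn_pipe = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {'kneighborsclassifier__n_neighbors': [1, 3, 5, 7, 9]}

grid = GridSearchCV(knn_pipe, param_grid=param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```

Because the scaler sits inside the pipeline, it is re-fit on each training fold, so no test-fold information leaks into the scaling.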
Note that the default settings on the NCBI BLAST website are not quite the same as the defaults on QBLAST. If you get different results, you’ll need to check the parameters (e.g., the expectation value threshold and the gap values). For example, if you have a nucleotide sequence you want to search against the nucleotide database (nt) using BLASTN, and you know the GI number of your query sequence, you can use:
from Bio.Blast import NCBIWWW
result_handle = NCBIWWW.qblast("blastn", "nt", "8332116")
notebooks/07 - Blast.ipynb
tiagoantao/biopython-notebook
mit
Alternatively, if we have our query sequence already in a FASTA formatted file, we just need to open the file and read in this record as a string, and use that as the query argument:
from Bio.Blast import NCBIWWW
fasta_string = open("data/m_cold.fasta").read()
result_handle = NCBIWWW.qblast("blastn", "nt", fasta_string)
notebooks/07 - Blast.ipynb
tiagoantao/biopython-notebook
mit
We could also have read in the FASTA file as a SeqRecord and then supplied just the sequence itself:
from Bio.Blast import NCBIWWW
from Bio import SeqIO
record = SeqIO.read("data/m_cold.fasta", format="fasta")
result_handle = NCBIWWW.qblast("blastn", "nt", record.seq)
notebooks/07 - Blast.ipynb
tiagoantao/biopython-notebook
mit
Supplying just the sequence means that BLAST will assign an identifier for your sequence automatically. You might prefer to use the SeqRecord object’s format method to make a FASTA string (which will include the existing identifier):
from Bio.Blast import NCBIWWW
from Bio import SeqIO
record = SeqIO.read("data/m_cold.fasta", format="fasta")
result_handle = NCBIWWW.qblast("blastn", "nt", record.format("fasta"))
notebooks/07 - Blast.ipynb
tiagoantao/biopython-notebook
mit
This approach makes more sense if you have your sequence(s) in a non-FASTA file format which you can extract using Bio.SeqIO (see Chapter 5 - Sequence Input and Output.) Whatever arguments you give the qblast() function, you should get back your results in a handle object (by default in XML format). The next step would be to parse the XML output into Python objects representing the search results (Section [sec:parsing-blast]), but you might want to save a local copy of the output file first. I find this especially useful when debugging my code that extracts info from the BLAST results (because re-running the online search is slow and wastes the NCBI computer time). Saving blast output We need to be a bit careful since we can use result_handle.read() to read the BLAST output only once – calling result_handle.read() again returns an empty string.
with open("data/my_blast.xml", "w") as out_handle:
    out_handle.write(result_handle.read())

result_handle.close()
notebooks/07 - Blast.ipynb
tiagoantao/biopython-notebook
mit
After doing this, the results are in the file my_blast.xml and the original handle has had all its data extracted (so we closed it). However, the parse function of the BLAST parser (described in [sec:parsing-blast]) takes a file-handle-like object, so we can just open the saved file for input:
result_handle = open("data/my_blast.xml")
notebooks/07 - Blast.ipynb
tiagoantao/biopython-notebook
mit
Now that we’ve got the BLAST results back into a handle again, we are ready to do something with them, so this leads us right into the parsing section (see Section [sec:parsing-blast] below). You may want to jump ahead to that now ….

Running BLAST locally

Introduction Running BLAST locally (as opposed to over the internet, see Section [sec:running-www-blast]) has at least two major advantages:

Local BLAST may be faster than BLAST over the internet;
Local BLAST allows you to make your own database to search for sequences against.

Dealing with proprietary or unpublished sequence data can be another reason to run BLAST locally. You may not be allowed to redistribute the sequences, so submitting them to the NCBI as a BLAST query would not be an option.

Unfortunately, there are some major drawbacks too – installing all the bits and getting it set up right takes some effort:

Local BLAST requires command line tools to be installed.
Local BLAST requires (large) BLAST databases to be set up (and potentially kept up to date).

To further confuse matters there are several different BLAST packages available, and there are also other tools which can produce imitation BLAST output files, such as BLAT.

Standalone NCBI BLAST+ The “new” NCBI BLAST+ suite was released in 2009. This replaces the old NCBI “legacy” BLAST package (see below). This section will show briefly how to use these tools from within Python. If you have already read or tried the alignment tool examples in Section [sec:alignment-tools] this should all seem quite straightforward. First, we construct a command line string (as you would type in at the command line prompt if running standalone BLAST by hand). Then we can execute this command from within Python.

For example, taking a FASTA file of gene nucleotide sequences, you might want to run a BLASTX (translation) search against the non-redundant (NR) protein database.
Assuming you (or your systems administrator) has downloaded and installed the NR database, you might run: ``` blastx -query opuntia.fasta -db nr -out opuntia.xml -evalue 0.001 -outfmt 5 ``` This should run BLASTX against the NR database, using an expectation cut-off value of $0.001$ and produce XML output to the specified file (which we can then parse). On my computer this takes about six minutes - a good reason to save the output to a file so you can repeat any analysis as needed. From within Biopython we can use the NCBI BLASTX wrapper from the Bio.Blast.Applications module to build the command line string, and run it:
from Bio.Blast.Applications import NcbiblastxCommandline
help(NcbiblastxCommandline)

blastx_cline = NcbiblastxCommandline(query="opuntia.fasta", db="nr", evalue=0.001,
                                     outfmt=5, out="opuntia.xml")
blastx_cline
print(blastx_cline)
# stdout, stderr = blastx_cline()
notebooks/07 - Blast.ipynb
tiagoantao/biopython-notebook
mit
In this example there shouldn’t be any output from BLASTX to the terminal, so stdout and stderr should be empty. You may want to check the output file opuntia.xml has been created. As you may recall from earlier examples in the tutorial, the opuntia.fasta contains seven sequences, so the BLAST XML output should contain multiple results. Therefore use Bio.Blast.NCBIXML.parse() to parse it as described below in Section [sec:parsing-blast]. Other versions of BLAST NCBI BLAST+ (written in C++) was first released in 2009 as a replacement for the original NCBI “legacy” BLAST (written in C) which is no longer being updated. There were a lot of changes – the old version had a single core command line tool blastall which covered multiple different BLAST search types (which are now separate commands in BLAST+), and all the command line options were renamed. Biopython’s wrappers for the NCBI “legacy” BLAST tools have been deprecated and will be removed in a future release. To try to avoid confusion, we do not cover calling these old tools from Biopython in this tutorial. You may also come across Washington University BLAST (WU-BLAST), and its successor, Advanced Biocomputing BLAST (AB-BLAST, released in 2009, not free/open source). These packages include the command line tools wu-blastall and ab-blastall, which mimicked blastall from the NCBI “legacy” BLAST suite. Biopython does not currently provide wrappers for calling these tools, but should be able to parse any NCBI compatible output from them. Parsing BLAST output As mentioned above, BLAST can generate output in various formats, such as XML, HTML, and plain text. Originally, Biopython had parsers for BLAST plain text and HTML output, as these were the only output formats offered at the time. Unfortunately, the BLAST output in these formats kept changing, each time breaking the Biopython parsers. 
Our HTML BLAST parser has been removed, but the plain text BLAST parser is still available (see Section [sec:parsing-blast-deprecated]). Use it at your own risk, it may or may not work, depending on which BLAST version you’re using. As keeping up with changes in BLAST became a hopeless endeavor, especially with users running different BLAST versions, we now recommend to parse the output in XML format, which can be generated by recent versions of BLAST. Not only is the XML output more stable than the plain text and HTML output, it is also much easier to parse automatically, making Biopython a whole lot more stable. You can get BLAST output in XML format in various ways. For the parser, it doesn’t matter how the output was generated, as long as it is in the XML format. You can use Biopython to run BLAST over the internet, as described in section [sec:running-www-blast]. You can use Biopython to run BLAST locally, as described in section [sec:running-local-blast]. You can do the BLAST search yourself on the NCBI site through your web browser, and then save the results. You need to choose XML as the format in which to receive the results, and save the final BLAST page you get (you know, the one with all of the interesting results!) to a file. You can also run BLAST locally without using Biopython, and save the output in a file. Again, you need to choose XML as the format in which to receive the results. The important point is that you do not have to use Biopython scripts to fetch the data in order to be able to parse it. Doing things in one of these ways, you then need to get a handle to the results. In Python, a handle is just a nice general way of describing input to any info source so that the info can be retrieved using read() and readline() functions (see Section <span>sec:appendix-handles</span>). If you followed the code above for interacting with BLAST through a script, then you already have result_handle, the handle to the BLAST results. 
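As a quick stdlib illustration of the handle concept (not Biopython-specific), an io.StringIO object behaves like a file handle: it supports read() and readline(), and its data can only be consumed once:

```python
import io

# Any object with read()/readline() can serve as a handle.
handle = io.StringIO("line one\nline two\n")

print(handle.readline())  # 'line one\n'
print(handle.read())      # remaining data: 'line two\n'
print(handle.read())      # '' - the handle is exhausted after one pass
```

This is exactly why the saved BLAST output had to be reopened above: once a handle's data has been read, a fresh handle is needed to read it again.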
For example, using a GI number to do an online search:
from Bio.Blast import NCBIWWW

result_handle = NCBIWWW.qblast("blastn", "nt", "8332116")
notebooks/07 - Blast.ipynb
tiagoantao/biopython-notebook
mit
If instead you ran BLAST some other way, and have the BLAST output (in XML format) in the file my_blast.xml, all you need to do is to open the file for reading:
result_handle = open("data/my_blast.xml")
Now that we’ve got a handle, we are ready to parse the output. The code to parse it is really quite small. If you expect a single BLAST result (i.e., you used a single query):
from Bio.Blast import NCBIXML

blast_record = NCBIXML.read(result_handle)
or, if you have lots of results (i.e., multiple query sequences):
from Bio.Blast import NCBIXML

blast_records = NCBIXML.parse(result_handle)
Just like Bio.SeqIO and Bio.AlignIO (see Chapters [chapter:Bio.SeqIO] and [chapter:Bio.AlignIO]), we have a pair of input functions, read and parse, where read is for when you have exactly one object, and parse is an iterator for when you can have lots of objects – but instead of getting SeqRecord or MultipleSeqAlignment objects, we get BLAST record objects. To be able to handle the situation where the BLAST file may be huge, containing thousands of results, NCBIXML.parse() returns an iterator. In plain English, an iterator allows you to step through the BLAST output, retrieving BLAST records one by one for each BLAST search result:
from Bio.Blast import NCBIXML

blast_records = NCBIXML.parse(result_handle)
blast_record = next(blast_records)
print(blast_record.database_sequences)
# ... do something with blast_record
Or, you can use a for-loop. Note though that you can step through the BLAST records only once. Usually, from each BLAST record you would save the information that you are interested in. If you want to save all returned BLAST records, you can convert the iterator into a list:
for blast_record in blast_records:
    pass  # ... do something with blast_record

# To save all records instead, convert the iterator into a list
# (only possible before a for-loop has consumed it):
blast_records = list(blast_records)
Now you can access each BLAST record in the list with an index as usual. If your BLAST file is huge though, you may run into memory problems trying to save them all in a list. Usually, you’ll be running one BLAST search at a time. Then, all you need to do is to pick up the first (and only) BLAST record in blast_records:
from Bio.Blast import NCBIXML

blast_records = NCBIXML.parse(result_handle)
blast_record = next(blast_records)
I guess by now you’re wondering what is in a BLAST record.

The BLAST record class

A BLAST Record contains everything you might ever want to extract from the BLAST output. Right now we’ll just show an example of how to get some info out of the BLAST report, but if you want something in particular that is not described here, look at the info on the record class in detail, and take a gander into the code or automatically generated documentation – the docstrings have lots of good info about what is stored in each piece of information.

To continue with our example, let’s just print out some summary info about all hits in our BLAST report with an e-value below a particular threshold. The following code does this:
from Bio.Blast import NCBIXML

E_VALUE_THRESH = 0.04

result_handle = open("data/my_blast.xml")
blast_record = NCBIXML.read(result_handle)
for alignment in blast_record.alignments:
    for hsp in alignment.hsps:
        if hsp.expect < E_VALUE_THRESH:
            print("****Alignment****")
            print("sequence:", alignment.title)
            print("length:", alignment.length)
            print("e value:", hsp.expect)
            print(hsp.query[0:75] + "...")
            print(hsp.match[0:75] + "...")
            print(hsp.sbjct[0:75] + "...")
That last example reveals a small flaw: the replacement string does not automatically match the case of the matched text. To fix this, you may need a helper function, like the following:
def matchcase(word):
    def replace(m):
        text = m.group()
        if text.isupper():
            return word.upper()
        elif text.islower():
            return word.lower()
        elif text[0].isupper():
            return word.capitalize()
        else:
            return word
    return replace
02 strings and text/02.06 search replace case insensitive.ipynb
wuafeing/Python3-Tutorial
gpl-3.0
Here is how to use this function:
re.sub("python", matchcase("snake"), text, flags=re.IGNORECASE)
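Putting the pieces together, here is a self-contained run (matchcase repeated from above, with a made-up sample string) showing how the replacement preserves upper, lower, and capitalized case:

```python
import re

def matchcase(word):
    def replace(m):
        text = m.group()
        if text.isupper():
            return word.upper()
        elif text.islower():
            return word.lower()
        elif text[0].isupper():
            return word.capitalize()
        else:
            return word
    return replace

text = 'UPPER PYTHON, lower python, Mixed Python'
print(re.sub('python', matchcase('snake'), text, flags=re.IGNORECASE))
# UPPER SNAKE, lower snake, Mixed Snake
```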
First reload the data we generated in 1_notmnist.ipynb.
import pickle

pickle_file = '../notMNIST.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    train_dataset = save['train_dataset']
    train_labels = save['train_labels']
    valid_dataset = save['valid_dataset']
    valid_labels = save['valid_labels']
    test_dataset = save['test_dataset']
    test_labels = save['test_labels']
    del save  # hint to help gc free up memory
    print('Training set', train_dataset.shape, train_labels.shape)
    print('Validation set', valid_dataset.shape, valid_labels.shape)
    print('Test set', test_dataset.shape, test_labels.shape)
google_dl_udacity/lesson3/3_regularization.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Reformat into a shape that's more adapted to the models we're going to train: - data as a flat matrix, - labels as float 1-hot encodings.
import numpy as np

image_size = 28
num_labels = 10

def reformat(dataset, labels):
    dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
    # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)

def accuracy(predictions, labels):
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
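The broadcasting trick inside reformat is worth a closer look: comparing np.arange(num_labels) against a column of labels yields the one-hot rows directly. A standalone sketch with toy labels and three classes:

```python
import numpy as np

num_labels = 3
labels = np.array([0, 2, 1])

# A (3,) range compared with a (3, 1) column broadcasts to a (3, 3)
# boolean matrix; row i is True exactly at column labels[i].
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```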
Problem 1 Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
graph = tf.Graph()
with graph.as_default():
    ...
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels,
                                                logits=logits)) + \
        tf.scalar_mul(beta, tf.nn.l2_loss(weights1) + tf.nn.l2_loss(weights2))
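The penalty term itself is easy to check by hand: tf.nn.l2_loss(t) computes sum(t**2) / 2, and the regularized loss adds beta times that quantity for each weight matrix. A NumPy sketch of the same arithmetic, with toy weights and a made-up beta and data loss:

```python
import numpy as np

def l2_loss(t):
    # Same definition as tf.nn.l2_loss: half the sum of squared entries.
    return np.sum(t ** 2) / 2.0

beta = 0.01  # regularization strength (a value you would tune)
weights1 = np.array([[1.0, -2.0], [3.0, 0.0]])
weights2 = np.array([[0.5], [0.5]])

data_loss = 1.25  # stand-in for the cross-entropy term
total_loss = data_loss + beta * (l2_loss(weights1) + l2_loss(weights2))
print(total_loss)
# 1.3225
```

Larger beta shrinks the weights harder; too large and the model underfits, which is why the problem asks you to tune it against validation accuracy.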